I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
― Abraham Maslow
One of the most insidious and nefarious properties of scientific models is their tendency to take over, and sometimes supplant, reality.
— Erwin Chargaff
In scientific computing the quality of the simulations is slaved to the quality of the models being solved. The simulations cannot be more useful than the models allow. This absolute fact is too often left out of considerations of the utility of computing for science. Models are immensely important for the conduct of science, and their testing is essential to progress. When a model survives a test it is a confirmation of existing understanding. When a model fails and is overturned, science has the opportunity to leap forward. Both of these events should be cherished as cornerstones of the scientific method. Scientific computing as articulated today does not fully honor this point-of-view.
…all models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind…
— George E.P. Box
The purpose of models is not to fit the data but to sharpen the questions.
— Samuel Karlin
The centrality of models is defined by their role in connecting simulations to reality. When a scientist steps back from the narrow point-of-view associated with computing and looks at science more holistically, the role of models becomes much clearer. Models are approximate, but tractable, visions of reality that have utility in their necessary simplicity. Our models also define in loose terms what we envision about reality. In science our models define how we understand the World. In engineering our models define how and what we build. If we expand our models, we expand our grasp of reality and our capacity for creation. Models connect the World’s reality to our intellectual grasp of that reality.
Science is not about making predictions or performing experiments. Science is about explaining.
― Bill Gaede
Computing has allowed more complex models to be used because it is freed of the confines of analytical techniques. Despite this freedom, the nature of models has been relatively stagnant, with the approach to modeling still tethered to the (monumental) ideas introduced in the 17th, 18th and 19th centuries. Despite the ability to solve much more complicated models of reality that should come closer to “truth,” we are still trapped in this older point-of-view. In total, too little progress is being made in removing these restrictions on how we think about modeling the World. Ultimately these restrictions are holding us back from a more pervasive understanding of and control over the natural World. The costs of this seeming devotion to an antiquated perspective are immense, essentially incalculable. Succinctly put, the potential that computing represents is far, far from being realized today.
It’s not an experiment if you know it’s going to work.
― Jeff Bezos
If science is to be healthy, the models of reality should constantly be challenged by experiment. Experiments should be designed to critically challenge or confirm our models. Too often this essential role is missing from computational experiments, and to some extent it can only come from reality itself, that is, from classical experiments. This hasn’t stopped the hubris of some who define computations as replacements for experiments when they conduct direct numerical simulations and declare them to be ab initio.
The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.
― Albert Einstein
This is very common in turbulence, for example, and this approach should be blamed for helping to stagnate progress in the field. The truly dangerous trend is for real World experiments to be replaced by computations, which is happening with frightening regularity. This creates an intellectual disconnect between science’s lifeblood and its modeling by allowing modeling to replace experiments. With models then taking the role of experiment, a vicious cycle ensues where faulty models are not displaced by experimental challenges. Instead, an incorrect or incomplete model can increase its stranglehold on thought.
The real world is where the monsters are.
― Rick Riordan
Nothing is more damaging to a new truth than an old error.
— Johann Wolfgang von Goethe
Indeed, the lack of progress in understanding turbulence can largely be traced to the slavish devotion to classical ideas, and the belief that the incompressible Navier-Stokes equations somehow contain the truth. I feel that they do not, and it would be foolish to adopt this belief. That has not stopped the community from fully and completely adopting it. Incompressibility is itself an unphysical approximation (albeit a useful one), woefully unphysical in its implied infinite speed of propagation for sound waves. It also strains any connection of the flow to the second law of thermodynamics, which almost certainly plays a key role in turbulence. Incompressibility removes thermodynamics from the equations in the most brutish way possible. Computing has only worked to strengthen these old and stale ideas’ hold on the field, and has perhaps set progress back by decades. This need not be the case, but outright intellectual laziness has set in.
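To make the objection concrete, here is the standard incompressible system in conventional textbook notation (the notation choices are mine, not anything new): mass conservation collapses to a divergence-free constraint, pressure becomes a Lagrange multiplier that responds everywhere instantaneously (the infinite sound speed), and the energy or entropy equation decouples entirely from the dynamics.

```latex
% Incompressible Navier-Stokes (constant density \rho, kinematic viscosity \nu):
% the constraint replaces an evolution equation for density, so pressure
% adjusts instantaneously and no thermodynamic state enters the dynamics.
\begin{align}
  \nabla \cdot \mathbf{u} &= 0, \\
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
    &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}.
\end{align}
```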
It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.
― Richard P. Feynman
Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.
― Sir Ronald Fisher
Classically, experiments are conducted to either confirm our understanding or challenge it. A convincing experiment that challenges our understanding is invaluable to the conduct of science. Experimental work that provides this sort of data is essential to progress. When the data is confirmatory, it provides the basis for validation or calibration of models. Too often the question of whether the models are right or wrong is not considered. As a result, the models tend to drift out of applicability over time. The derivation and definition of different models based on feedback from real data is too infrequent. Explaining data should be a far more important task in the day-to-day conduct of science.
Theories might inspire you, but experiments will advance you.
― Amit Kalantri
Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.
― Henri Poincaré
In computational modeling and simulation this is happening even less. Part of the reason is the lack of active questioning of the models by scientists. Models have been applied for decades without significant challenge to the assertion that all we need is a faster, bigger computer for reality to yield to the model’s predictive power. The incapacity of a model to be predictive is rarely even considered as an outcome.
Another way of expressing this problem is the lingering and persistent weakness of validation (and its brother-in-arms, verification). Too often the validation received by models is actually calibration disguised as validation, without the correctness of the model even being considered. The ultimate correctness of a model should always be front and center in validation, yet this question is rarely asked. Properly done validation would expose models as being wrong, or at least hamstrung in their ability to capture aspects of reality. The consequence is the failure to develop new models and too much faith placed in heavily calibrated old models.
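To illustrate the distinction being drawn here, consider a minimal sketch (the quadratic “model,” the synthetic data, and the 10% tolerance are all invented for illustration): calibration tunes a free parameter against one data set, while validation confronts the tuned model with independent data under a pre-committed accuracy requirement and treats a failure as a finding rather than an invitation to retune.

```python
import numpy as np

# Illustrative stand-in for a far more complex model, with one free parameter k.
def model(x, k):
    return k * x**2

def calibrate(x, y):
    # Calibration: tune the free parameter to best fit THIS data set.
    ks = np.linspace(0.1, 10.0, 1000)
    errors = [np.mean((model(x, k) - y)**2) for k in ks]
    return ks[int(np.argmin(errors))]

def validate(x, y, k, tolerance):
    # Validation: no further tuning. Compare against INDEPENDENT data using an
    # accuracy requirement committed to in advance; report failure as a result.
    error = np.max(np.abs(model(x, k) - y) / np.abs(y))
    return error <= tolerance, error

# Synthetic calibration data (stand-in for an experiment).
x_cal = np.linspace(1.0, 5.0, 20)
y_cal = 2.0 * x_cal**2 * (1 + 0.05 * np.random.randn(20))
k_fit = calibrate(x_cal, y_cal)

# Independent validation data, outside the calibration range;
# "reality" deliberately differs from the assumed model form here.
x_val = np.linspace(6.0, 10.0, 20)
y_val = 2.0 * x_val**2.2
passed, err = validate(x_val, y_val, k_fit, tolerance=0.10)
print(f"k = {k_fit:.3f}, validation {'passed' if passed else 'FAILED'}, "
      f"max relative error = {err:.2%}")
```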
Humans see what they want to see.
― Rick Riordan
Remember, you see in any situation what you expect to see.
The current situation is not healthy. Science is based on failures, and failure is not allowed today. The validation outcome that a model is wrong is viewed as a failure. Instead, it is an outstanding success that provides the engine for the scientific progress so vitally needed. In most computational simulations this outcome is ruled out from the outset. Rather than place the burden of evidence on the model being correct, we tend to do the opposite and place the burden on proving models wrong. This is backwards to the demands of progress. We might consider a different tack. This comes as an affront to the viewpoint that scientific computing is an all-conquering capability that only needs a big enough computer to enslave reality to its power. Nothing could be further from the truth. In the process we are wasting the massive investment in computing rather than harnessing it.
The formulation of the problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill.
― Albert Einstein
To succeed, scientific computing needs to embrace the scientific method again instead of distancing itself from the engine of progress so distinctly. We need leadership in science that demands a different path be taken. This path needs to embrace risks and allow for failure while providing a well-defined structure that puts experiment and modeling in their proper roles and appropriate contexts.
Never in mankind’s history have we so fundamentally changed our means of existence with so little thought.
― James Rozoff
I have been reading your blog regularly now. You make a lot of points that I personally have felt exist in the scientific community but haven’t heard spoken out loud.
I have a CFD background, so I was interested in your thoughts above. While the incompressible NS equations truly are an approximation, most CFD done is actually of the RANS variety, which basically models everything up to the most benign of scales.
I am guessing you are talking about DNS using the incompressible NS equations. I would like to know more about the way you are thinking about putting thermodynamics into the equations, since I haven’t come across literature that talks about the same.
Vikram,
Of course the application of compressible CFD is common in some circumstances. The issue is what sort of physical effects compressibility and thermodynamics have under conditions where incompressibility is assumed to hold. This question is rarely entertained. Key to this is the turbulence literature, which is almost uniformly incompressible. The implicit LES community does usually solve the full equations including thermodynamics. Interestingly, those turbulence simulations are remarkably effective.
I think I am getting confused by your answer. Yes, most turbulence work is done using DNS at low Reynolds numbers, where the incompressible assumption is made and the equations modified accordingly.
But here’s the thing. LES, like RANS, is a further simplification of the NS equations, though not as much. So in effect LES is being done keeping those same incompressible equations as the starting point, then filtering, and then using algebraic or other models for the smaller spatial scales, which in effect are the turbulence scales. Of course, this is for those LES simulations done at low speeds and Reynolds numbers. It’s different when the compressible equations are taken.
If they are remarkably effective, then there’s a contradiction somewhere.
Reblogged this on Mohsen Behnam, PhD and commented:
I love this article!