
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: May 2014

Lessons from the History of CFD (Computational Fluid Dynamics)

30 Friday May 2014

Posted by Bill Rider in Uncategorized

≈ 7 Comments

“I never, never want to be a pioneer… It’s always best to come in second, when you can look at all the mistakes the pioneers made — and then take advantage of them.”— Seymour Cray

About a year ago I attended a wonderful event in San Diego. This event was the JRV symposium (http://dept.ku.edu/~cfdku/JRV.html) organized by Z. J. Wang of the University of Kansas. The symposium was a wonderful celebration of the careers of three giants of CFD, Bram Van Leer, Phil Roe and Tony Jameson, who helped create the current state of the art. All three have had a massive influence on modern CFD, with a concentration in aerospace engineering. Like most things, their contributions were built on a foundation provided by those who preceded them. It is the story of those pioneers that needs telling lest it be forgotten. It turns out that scientists are terrible historians, and the origins of CFD are messy and poorly documented. See, for example, the Wikipedia entry (http://en.wikipedia.org/wiki/Computational_fluid_dynamics), whose history is appallingly incomplete.

I wanted to make a contribution, and with some prodding Z. J. agreed. I ended up giving the last talk of the symposium, titled “CFD Before CFD” (http://dept.ku.edu/~cfdku/JRV/Rider.pdf). As it turned out, the circumstances for my talk became more trying. Right before my talk, Bram Van Leer delivered what he announced would probably be his last scientific talk. The talk turned out to be a fascinating history of the development of Riemann solvers in the early 1980s. In addition, it was a scathing condemnation of the lack of care some researchers take in understanding and properly citing the literature. It was a stunningly hard act to follow.


In talking about the history of CFD, I used the picture of the ascent of man to illustrate the period of time. Part I of the history would correspond to the emergence of man from the forests of Africa onto the savannas, where apes became Australopithecus, but before the emergence of the genus Homo. The history of man before man!

The talk was a look at the origins of CFD at Los Alamos during the Manhattan Project in World War II. Part of the inspiration for the talk was a lecture Bram gave a couple of years prior called the “History of CFD: Part II”. As I discovered during a discussion with Bram, there is no Part I. With the material available on the origins of CFD so sketchy, incomplete and wrong, it is something I need to work to rectify. First of all, it wasn’t called CFD until 1967 (the term was coined by C. K. Chu of Columbia University), although the term rapidly gained acceptance, with Pat Roache’s book of the same title probably putting it “over the top”.


So, I’m probably committed to giving a talk titled the “History of CFD: Part I”. The talk last summer was a down payment. History is important to study because it contains many lessons and objective experience that we might learn from. The invention of CFD is almost certainly properly placed in 1944 at Los Alamos during World War II. It is probably appropriate that the first operational use of electronic computers coincides with the first CFD. It isn’t well known that Hans Bethe and Richard Feynman, two Nobel Prize winners in physics, executed the first calculations! Really, they led a team of people pursuing calculations supporting the atomic bomb work at Los Alamos. Feynman executed the herculean task of producing the first truly machine calculations. Prior to this, calculators were generally people (primarily women) who carried out the operations with the assistance of mechanical calculators. Bethe led the “physics” part of the calculation, which used two methods for numerical integration: one invented by Von Neumann based on shock capturing, and a second developed by Rudolf Peierls based on shock tracking. Von Neumann’s method was ultimately unsuccessful because Richtmyer hadn’t yet invented artificial viscosity. Without dissipation at the shock waves, Von Neumann’s method eventually explodes into oscillations and becomes functionally useless.
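For context, the dissipation Richtmyer eventually supplied takes, in its now-standard quadratic form (my notation, and a simplification of the original Lagrangian presentation),

$$
q \;=\;
\begin{cases}
C_q\,\rho\,(\Delta x)^2\left(\dfrac{\partial u}{\partial x}\right)^{2}, & \dfrac{\partial u}{\partial x} < 0 \ \text{(compression)},\\[6pt]
0, & \text{otherwise},
\end{cases}
$$

where $C_q$ is an order-one coefficient and $q$ is simply added to the pressure in the momentum and energy equations. Without a term of this kind, nothing damps the oscillations a captured shock generates, which is exactly the failure described above.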


CFD continued to be invented at Los Alamos after the war as the Cold War unfolded. The invention of artificial viscosity happened during the postwar work at Los Alamos, where the focus had shifted to the hydrogen bomb. Computation was a key to continued progress; the Monte Carlo method, for example, was invented there in that period. The foundation was laid first with the invention of useful shock capturing schemes by Richtmyer in 1948 (building on Von Neumann’s work from 1944). This was closely followed by seminal work by Peter Lax (started during his brief time on staff at Los Alamos in 1949-1950, plus summers there for more than a decade) and by Frank Harlow starting in 1952. These three bodies of work formed the foundation for CFD that Van Leer, Jameson and Roe, among others, built on.

 

My sense was that once Richtmyer showed how to make shock capturing methods work, Lax and Harlow were able to proceed with great confidence. Knowing something is possible has the incredible effect of allowing efforts to be redoubled with assurance of success. When you haven’t seen a demonstration of success, problems along the way are much more difficult to overcome.

Like so many innovations made there, the chief developments for the long term did not continue to be centered in Los Alamos, but spread outward to the rest of the World. This is common and not unlike other innovations such as the Internet (started by DoD/DARPA but perfected outside the defense industry). While Los Alamos was a hotbed of development for CFD methods, over time it ceased to be the source of innovation. This state of affairs was a constant source of consternation on my part while I worked at the Lab. Ultimately computation had a very utilitarian role there, and once the codes were functional, innovation wasn’t necessary.

Rumor has it that Harlow was nearly fired in his early time at Los Alamos because the value of his work was not appreciated. Fortunately another senior person came to Frank’s defense and his work continued. Indeed, my experience at Los Alamos showed a prevailing culture that didn’t always appreciate computation as a noble or even useful practice. Instead it was viewed with suspicion and distrust, an unfortunate necessity of the work. It is a rather sad commentary on how inventions can fail to be appreciated in the very place where they were made.

Harlow’s efforts formed the foundation of engineering CFD in many ways. The basic methods and philosophy inspired scientists the world over. No single scientific paper quite had the impact of his 1965 article in Scientific American with Jacob Fromm. This article showed the power of computational experiments, inspired visualization, and captured a generation who created CFD as a force. The only downside is the strong tendency to create CFD that is merely “Colorful Fluid Dynamics” and eschew a more measured scientific approach. Nonetheless, Frank planted the seeds that sprouted around the World.

Peter Lax

For that matter, Lax’s work, while started at Los Alamos, had almost no impact there. While Lax’s work formed the basis of the mathematical theory of hyperbolic PDEs and their numerical solution, and is immensely relevant to the Lab’s work, it receives almost no attention at all. Lax’s efforts had the greatest appeal in aeronautics and astrophysics through the work of Jameson and Van Leer/Roe. Interestingly enough, the line of thinking from Lax did compete with the Von Neumann-Richtmyer approach in astrophysics, and the Lax thread won out.

Von Neumann and Richtmyer’s work is the workhorse of shock physics codes to this day, but the attitude toward the method is hardly healthy. The basic methodology is viewed as being a “hack,” and the coefficients of the artificial viscosity are treated as mere knobs to be adjusted. This attitude persists despite copious theory that says the opposite. Overcoming the misperceptions of artificial viscosity within a culture like the one that exists at Los Alamos (and its sister Labs the World over) is daunting, and seemingly impossible. Progress on this front is slowly happening, but the now traditional viewpoint is resilient. Lax’s work is also making inroads at the Labs, primarily due to some stunningly good work by French researchers led by Pierre-Henri Maire and Bruno Després, who have created a cell-centered Lagrangian methodology that works. This was something that seemed “impossible” 10 or 15 years ago because it had been tried by a number of talented scientists and always met with failure.


The origins of weather and climate modeling are closely related to this work. Von Neumann used his experience with shock physics at Los Alamos to confidently start the study of weather and climate in collaboration with Jule Charney. Despite the incredibly primitive state of computing, the work began shortly after World War II. Joseph Smagorinsky, whose 1963 paper is jointly viewed as the beginning of both global climate modeling and large eddy simulation, successfully executed the second generation of weather and climate modeling. The subgrid turbulence model bearing Smagorinsky’s name is nothing more than a three-dimensional extension of the Richtmyer-Von Neumann artificial viscosity. Charney suggested adding this stabilization to the simulations at a 1956 conference on the first generation of such modeling. Success with computing shocks in pursuit of nuclear weapons gave him the confidence it could be done. The connection of shock capturing dissipation to turbulence dissipation is barely acknowledged by anyone despite the very concept being immensely thought provoking.
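To make the kinship explicit (my notation, not Smagorinsky’s or Richtmyer’s): the Smagorinsky eddy viscosity is

$$
\nu_t \;=\; (C_s\,\Delta)^2\,\lvert \bar{S} \rvert,
\qquad
\lvert \bar{S} \rvert \;=\; \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
$$

a grid length squared times the magnitude of the resolved strain rate, while the Von Neumann-Richtmyer viscosity is, in effect, a grid length squared times the magnitude of the compressive velocity gradient, $\mu_q \sim \rho\,(\Delta x)^2\,\lvert \partial u/\partial x \rvert$. One is the three-dimensional, strain-rate generalization of the other.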

The impact of climate science on the public perception of modern science is the topic of next week’s post. Stay tuned.

 

Not Every Experiment is the Same Kind of Experiment

23 Friday May 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.” ― Henri Poincaré

“What we observe is not nature itself, but nature exposed to our method of questioning.” ― Werner Heisenberg


There is a real tension between these two quotes: truth and our perception of truth. Experiments are the key to science, including theory, which they test and from which ideas are prodded out of our heads based on what we observe. At the same time our observations are imperfect and biased. In moving science forward both concepts are key to progress and to keeping things in perspective. For the person engaged in computational science the challenge is uniquely fraught with conflict. This includes the new concept of computational experiments and their rightful role in advancing knowledge. Their perspective is undoubtedly useful, although having an artificial view of reality take the role of “truth” is largely inappropriate. That said, the “truth” of experimental observations is also an illusion to the extent that observations are flawed as well; however, these flaws are of an entirely different sort than a simulation’s flaws.

Observations are flawed by our limited ability to correctly sense reality, by the distortions introduced through our means of detection, or by the outright changes to reality made through our attempts to observe something. Simulations largely do not suffer from these issues in that we can observe them perfectly, but instead the reality we observe through simulation is itself intrinsically flawed. On the one hand we have a flawed view of the truth, and on the other we have a flawed truth with perfect vision. The key is that neither is perfect, and that both are useful.

Science is fundamentally predicated on experiments. Experiments are the engine of discovery and credibility. Not all experiments serve the same intent, nor should they follow the same protocols. There are many different types of experiments, and it is useful to develop a taxonomy of experiments to keep things organized. Ultimately, since we all want to be better scientists, it might just help us do better science.


The classic experiment is the test of a hypothesis, and it still holds the center of any discussion of science. Every other kind of experiment is a subset of this kind, but it is useful to enrich the discussion with other experiment types. The differing types of experiments are constructed with a particular end in mind, and with that end in mind the choice to emphasize different qualities can be made. A key example is the notion of a specific validation experiment, where the goal is primarily to provide data for ascertaining the credibility of computational simulations.

Measurement is the key to experiments. Measurement is by its very nature imprecise; we cannot exactly measure everything. Moreover, we don’t necessarily measure the right things. Often what we choose to measure is guided by theory, and if the theory is too flawed, we may not measure the important things. In other cases we simply cannot measure what is really important. In other words, the core of measurement is error. We need to be very exacting in our analysis of how much error is associated with an experimental measurement. Too often we aren’t very clear about this. For example, some experiments measure a quantity that actually fluctuates. The tendency is to report the mean value measured, and then some statistical measure of variation like the standard deviation. Rarely, if ever, are the statistical choices made in the experimental analysis justified. Does the quantity actually fall into a normal distribution? In spite of the fluctuations, what is the experimental measurement error? Is this error biased?
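As a concrete, purely hypothetical sketch of the minimum reporting these questions demand (the readings below are synthetic, and the names are mine, not from any real experiment):

```python
import numpy as np
from scipy import stats

# Hypothetical repeated readings of one fluctuating quantity; synthetic data
# standing in for a real measurement record.
rng = np.random.default_rng(1)
readings = 3.20 + 0.05 * rng.standard_normal(40)

mean = readings.mean()
std = readings.std(ddof=1)              # sample standard deviation (spread)
sem = std / np.sqrt(readings.size)      # standard error of the mean
_, p_value = stats.shapiro(readings)    # is a normal distribution even plausible?

print(f"mean = {mean:.4f}, std dev = {std:.4f}, std error of mean = {sem:.4f}")
print(f"Shapiro-Wilk p-value = {p_value:.3f} (a small value makes normality suspect)")
# None of the above touches systematic (bias) error in the instrument itself;
# that has to be estimated and reported separately.
```

The point is not the particular test, but that the distributional assumption, the spread, the uncertainty in the mean and the bias are all separate things, and each deserves an explicit statement.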

Replicate experiments are another area where far too few examples exist. Experiments are often complex and expensive. In addition they are not repeatable, nor are they repeated. This results in certain uncertainties being completely unknown. Or, to take the famous Donald Rumsfeld quip, the repeatability becomes a known unknown that is willfully unexplored. Usually the temptation to do a different experiment is too great to overcome. In this case any statistical evidence simply does not exist, even though many of these cases are extremely sensitive to the initial conditions. If one is looking at a system described by a well-posed initial value problem and the initial conditions are impeccably well described, a single experiment might be justified. If all of this does not hold, the single experiment is outright dangerous. For complex systems the situation where the experiment is demonstrably repeatable does not usually present itself. An archetype of the sort of experiment that is not repeatable is the Earth’s climate, and in this case we have no choice.

Discovery experiments are where science most classically lives, or at least that is the ideal. A scientist makes a hypothesis about something, and an experiment is devised to test it. If the experiment and related measurements are good enough, a result is produced. The hypothesis is either confirmed (or, at least, no evidence is found against it), or it is disproven. These experiments are in fact few and far between, but when they can be done (correctly) they are awesome.


Computational experiments are a modern invention, and rightly the source of great controversy. I’d argue strongly they should be even more controversial than they are generally characterized. Generically, a computer code is a model (or a hypothesis), and a problem can be devised based on the model. Calculations can then be done to test the given hypothesis. The problem, most succinctly, is that computational experiments are not proofs in the same sense as a physical experiment. Just as physical experiments have measurement error, computational experiments have computational error, but they also have more problems. The model itself may be incorrect or incomplete. The data used by the code may be incorrect, or the experiment may be set up in a flawed manner. Because of the artificial nature of the computational experiment, the whole enterprise is subject to an extra level of scrutiny. If such scrutiny produces evidence of correctness, the experiment can be taken more seriously, but rarely as seriously as the physical experiment. The benefit of computation is that it is more flexible than nature and most often much cheaper or less dangerous.

Often the statement is made that the computation is a “direct numerical simulation (DNS)” or “first-principles”. Very rarely is this statement actually justified or supported by any evidence. Most often it is false. These labels seem to be an excuse to avoid doing any analysis of the errors associated with the calculation, or worse yet to claim they are small and unimportant without the slightest amount of justification. This is proof by authority, and it ultimately harms the conduct of science. If one is claiming to do DNS, then the burden of proof should be very high. To be blunt, the use of DNS is usually offered with even less proof than admittedly cruder approximations. This isn’t to say that DNS should not be employed as a scientific tool, but rather that its application should be taken with a rather large grain of salt. Scientists should demand more evidence of quality from a proposed DNS, and reject its results if such evidence is not provided. Doing anything less threatens science in general and poses an existential risk to computational science.

The concept of validation experiments is a new “invention,” or more properly a refinement of the basic concepts in experimental science. The primary purpose of these experiments is the validation of computer simulations. A simple-minded view would say that any other experiment would serve this purpose. The simple-minded view is correct, but this purpose is served poorly by classic experiments and the standards of reporting results. More importantly, many essential details for a successful simulation of the experiment are left out of the description. The definition of a validation experiment is more complete in the sense of providing the key details for a high fidelity simulation of the precise experimental setup. Usual experimental practice often leaves out many details, which can cloud the sense of validation received by comparison, or at the very least offer substantial uncertainty as to the source of any discrepancies.

The point of this discussion isn’t to over-complicate things, but rather to clarify the differing intents for experiments. One simply doesn’t “experiment” for a single reason, but rather for many different reasons. The texture of the distinction can help provide a better environment for focusing on why things are done and where the emphasis should be. Exploring a scientific hypothesis in the classical sense is different from validating a computer code. These differing purposes call for a refinement of emphasis in the conduct of the experiment. I will note that validation is a form of hypothesis testing, i.e., “is a computer simulation a representation of reality, and to what degree and purpose can it be trusted?” Computational experiments are another problem altogether, and require even greater attention to detail.

High Energy Density Laboratory Astrophysics (HEDLA) 2014

16 Friday May 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

 

This is the 10th edition of the conference and the first one in Europe. It is a mix of astrophysicists, plasma physicists, particle physicists, experimental physicists and a handful of nuclear engineers (like me).


This week was spent at a conference generally outside my field. Doing this is a mixed bag: great for being exposed to new things and new people, but often leaving me way out of my depth. I had been invited to give a talk about V&V, as calculations are very important in both astrophysics and high energy density experiments. While calculations are important, the physicists’ mode of investigation almost seems to be intrinsically at odds with V&V. Although outside my field, I felt increasingly like I had gone back in time as the week of talks unfolded. The community acted like the classical Los Alamos physics community I had come to understand while working there. I came away thinking that they need more V&V, but not in the same way applied programs need it. Interactions here are likely to be instructive for the subtleties to be found elsewhere.

Physicists are highly motivated to study the impact of the completeness of the model of reality, to such a degree that it inhibits virtually any attention to verification. Validation in a loose sense is the focus, but it tends to take on an ad hoc character, as models are changed whole cloth during a computational investigation while seeking to assess the fidelity of the modeling to observations. These are essentially sensitivity studies, but the accepted practice is very ad hoc and lacks a systematic flavor. More commonly, the whole study embodies a curiosity-driven approach. Perhaps this is generally OK; however, some of the calculations left me feeling very uneasy.

HEDLA is involved with a broad spectrum of experimental work with astrophysical significance. A host of phenomena can be profitably examined using the modern facilities in high energy density physics. These facilities include laser fusion centers (NIF, Rochester, LMJ, …) and electromagnetic centers (MAGPIE and Z). The topics to study are dynamics such as jets, radiating shock waves, material characterization and equation of state, and so on. The goal is to understand the physics in a more controlled environment than the purely observational environment of astronomy. The problem with the approach is the difficulty of measuring quantities in the environment offered by the experiments, which is generally very, very hot and very small. The other opportunity is the more direct validation of the physical models available in the computer codes developed. These codes share the dual role of providing design and analysis of the experiments and exploring astrophysical theories and concepts.

The issue with combining astrophysics with experimental physics isn’t the quality of the science. The science in this community is strong, exploratory and interesting. The problem is that the experiments are hard to do, hard to diagnose and painfully expensive. Under these conditions the curiosity-driven approach to science becomes problematic. Experiments need to be carefully designed, and the quantitative aspects of the work grow in priority. This clashes with the more qualitative mode of investigation that dominates astrophysics, where the key is to understand the basic principles governing observed phenomena. An example is sensitive dependence on initial conditions, where the experiments could provide a measure of repeatability, except that replicate experiments are never done; they are beaten out by more interesting unique experiments. This is in spite of the fact that replication could address the true error bar for every experiment that is done.

Take, for example, core-collapse supernovae, where computation has played a major role in understanding what is probably happening. Early on, pure hydrodynamic simulations could not recover the behavior apparent from observations (the mixing of elements into the envelope of the exploding star). Adding multiple physical effects has provided a better qualitative picture of what is likely to be happening. When the simulations added asymmetry in the initial conditions, neutrino transport coupled to the hydrodynamics, magnetic fields and rotation, everything became better. Suddenly the character of the simulations became much more like the observations. The issue of initial conditions comes up in spades here. Supernovae are difficult to make explode, and the question remains how often there are duds that don’t explode. We only see the supernovae that explode; the duds may happen, but we don’t see them.

The question is whether this successful approach can be used for very expensive experimental design and analysis. I’m not so sure.

Using codes in conjunction with expensive, complex experiments should naturally evoke refined V&V. V&V is natural in the sort of engineering uses of computation that experimental design engenders. Conversely, V&V seems to be almost unnatural for physics investigations. V&V implies a certain stability of modeling and theory that this field does not have. The careful and complete investigation of a stable model is an anathema to open-ended physics investigations. In other words, the places where V&V is well grounded and natural are exactly the areas the physics research community isn’t interested in. So the key is to craft a path forward that at once provides better quality of simulation for high energy density physics without clashing with the sorts of investigations important to the vibrancy of the community.

In a strong sense I think this is a perfect example for the flexible approach to V&V I’ve been advocating. In essence the idea is to apply V&V in a limited and carefully defined manner crafted to the needs of the community. The codes should probably have a greater level of foundational V&V in terms of the implementation of the basic numerical methods and physical models. Beyond the foundational V&V, the application-specific V&V should be far greater when the codes are applied to experimental design and analysis, to assure that the outcomes of the experimental work have sufficient value. On the other hand, hard-nosed V&V concepts are inappropriate for the curiosity-driven astrophysics investigations. This isn’t to say that they couldn’t be applied, but rather that they would be potentially counter-productive. Once a mechanism is well enough established to transition to an experimental study, more V&V should kick in.

We also visited the French version of NIF, the LMJ, which is a CEA-run facility. We had a wonderful tour, and since I saw NIF a year or so ago, it was useful to compare notes. Mostly the facility is similar, but seems more austere and less boastful. It is lower power and probably consciously avoids the word “ignition”. Interestingly, the facility is still being constructed, but overall it looks quite a bit like NIF (minus the landscaping, façade and other window dressing). The French are much more transparent about the connection of LMJ to their defense work. In addition, the tour was dramatically more technical (although they probably have far fewer visitors).

Overall it was a good experience and gave me lots to think about. V&V should connect all the way from engineering, with a heavy hand, to physics, with a much lighter touch. Wherever codes are used seriously in design and analysis, V&V should play some role, even if it is minor. After my talk I met a blogger attending the meeting (Adam Frank, who blogs at http://www.npr.org/blogs/13.7/). He asked me a question about V&V and climate change. It was a good question that led to a much longer discussion. In a nutshell, my opinion is that climate change modeling needs a serious discussion about V&V issues, but the atmosphere is so poisonous toward dialog that it will never happen. One should be able to criticize how climate science is done without being labeled a denier. Right now, that cannot happen, and we are all poorer for it.


Important details about verification that most people miss  

14 Wednesday May 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Verification, as usually defined for computational science, is an important but confusing aspect of simulation. The core concepts of verification are three-fold:

  • Does a simulation converge?
  • What is the rate of this convergence?
  • And what is the magnitude of numerical error?

Answering these questions provides evidence of the quality of the numerical solution. This is an important component of the overall quality of a simulation. Scientists, who tend to focus on the fidelity of the physical modeling, often systematically overlook this aspect of overall simulation quality.

V&V often comes with a confusing “word soup” of important terms, and this “word soup” may be at its worst in verification. Verification and validation mean much the same thing in a non-technical context, but in the framing of simulation quality they have quite specific technical meanings, and the activities surrounding each are distinctly different so that the overall simulation quality can be assessed and understood. The pithy statement of what the two words mean is useful: verification is the determination of whether the model is being solved correctly, and validation is the determination of whether the model is correct. Each involves the accumulation of evidence that this correctness is present.

Scientists tend to focus on the correctness of the model itself. The determination of the correctness of the solution of the model is a mathematical problem involving basic numerical analysis. Validation necessarily involves observational or experimental data and its comparison to the simulation. A necessary observation is that validation involves several error modes that color any comparison: the size of the numerical error in solving the model, and the magnitude of the experimental or observational error. Too often, one or both of these are overlooked. For high quality work, both must be accounted for in the assessment of model correctness.

For estimating the numerical error, verification is again used, but this differs from the verification process used to determine the correctness of the code’s solution of the model. Thus, a distinction is made between the two uses of verification. Code verification is the process of determining the correctness of the model’s solution procedure, and it necessarily involves comparison of numerical solutions with analytical solutions that are unambiguously correct. For the purpose of error estimation, several procedures may be used, but solution (or calculation) verification is perhaps the most convincing methodology.

A big issue in the implementation of verification is the set of confounding definitions and purposes attached to the word. In this vein, the outcomes of the different forms of verification focus on differing metrics. I am going to try to address these confounding definitions and idiosyncrasies clearly.

For code verification and the determination of implementation and solution procedure correctness, the key metric is the rate of convergence. This rate of convergence is then compared with the analysis of the formal order of accuracy for the method being tested. If the solution to the problem is sufficiently smooth, the computed order of accuracy should closely match the order of accuracy from the numerical analysis as the mesh density becomes high.

In addition, the magnitude of the error is available in code verification. The code and computational physics communities systematically overlook the practical utilization of the error magnitude in code verification. This aspect could be used to great effect in determining the efficacy of numerical methods. The determination of order of accuracy and error magnitude is not limited to smooth solutions. If solutions are discontinuous, the convergence rate and error magnitude are usually completely overlooked; comparisons between the analytical solution and the numerical result are limited to the viewgraph or eyeball norm. This is a mistake and a missed opportunity to discuss the impact of numerical methods. Most practical problems have various forms of discontinuous behavior, and the magnitude of error for these problems defines the efficiency of the method.
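To make the mechanics concrete, here is a minimal, hypothetical sketch of code verification against an exact solution; the solver (first-order upwind advection) and the function names are mine, standing in for whatever method is actually under test:

```python
import numpy as np

def solve_advection(n, c=1.0, t_final=0.5):
    """First-order upwind solution of u_t + c u_x = 0 on [0,1], periodic.
    Returns the mesh spacing and the L1 error against the exact solution,
    which is the initial condition translated by c*t_final."""
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    u = np.sin(2.0 * np.pi * x)            # smooth initial condition
    dt = 0.5 * dx / c                      # CFL number of 0.5
    nsteps = int(round(t_final / dt))      # dt divides t_final evenly here
    for _ in range(nsteps):
        u = u - c * dt / dx * (u - np.roll(u, 1))
    exact = np.sin(2.0 * np.pi * (x - c * t_final))
    return dx, np.sum(np.abs(u - exact)) * dx

# Error magnitude and observed order of accuracy on a sequence of meshes.
results = [solve_advection(n) for n in (50, 100, 200, 400)]
for (h_coarse, e_coarse), (h_fine, e_fine) in zip(results[:-1], results[1:]):
    p = np.log(e_coarse / e_fine) / np.log(h_coarse / h_fine)
    print(f"h = {h_fine:.4f}  L1 error = {e_fine:.3e}  observed order = {p:.2f}")
```

For this smooth problem the observed order should settle near one, matching the formal accuracy of upwind differencing; the same harness run on a discontinuous initial condition is exactly where both the degraded rate and the error magnitude become the interesting quantities to report.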

Solution verification is important for estimating the numerical error in applied simulations. No analytical solution exists in these cases, and the goal is two-fold: determine whether the model is converging toward a mesh-independent solution, and estimate the magnitude of the error. Often scientists will show a couple of mesh solutions to assess whether the solution is sensitive to the mesh resolution. This is better than nothing, but only just; it does not provide the key property, the magnitude of the numerical error in the solution. The error magnitude is a function of the mesh resolution; different mesh resolutions have different error magnitudes (for a convergent simulation). An auxiliary quantity of interest is the rate of convergence, but the error magnitude is the primary metric of interest.
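A minimal sketch of how that error magnitude can be estimated via Richardson extrapolation, assuming three systematically refined meshes with a constant refinement ratio and a single scalar quantity of interest (the numbers are placeholders, not results from any real calculation):

```python
import numpy as np

# Quantity of interest from coarse, medium and fine meshes (hypothetical values)
f_coarse, f_medium, f_fine = 0.9642, 0.9721, 0.9748
r = 2.0                                   # constant grid refinement ratio

# Observed convergence rate implied by the three solutions
p = np.log((f_medium - f_coarse) / (f_fine - f_medium)) / np.log(r)

# Richardson-extrapolated estimate of the mesh-converged value
f_converged = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Estimated magnitude of the numerical error on the fine mesh
error_fine = abs(f_converged - f_fine)

print(f"observed rate      = {p:.2f}")
print(f"extrapolated value = {f_converged:.5f}")
print(f"fine-mesh error    = {error_fine:.2e}")
```

This is essentially the estimate underlying Roache’s grid convergence index (GCI); the essential point is that it yields a number for the error, not just a qualitative “the answers look similar.”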

Lastly, the expectations for the rate of convergence are often not clearly enough stated. Concisely, the rate of convergence is a function of the details of the numerical method and the nature of the solution. This is true for both code and solution verification. If the solution does not possess sufficient smoothness (regularity), or has certain degenerate features, the convergence rate will deviate from the design order of accuracy, which a numerical method can achieve only under ideal circumstances. Typically, the observed convergence rate is expected to be the minimum of the design order of accuracy and the order supported by the solution’s regularity.
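A compact way to state the expectation (a rule of thumb, not a theorem):

$$
p_{\text{observed}} \;\approx\; \min\!\left(p_{\text{design}},\; p_{\text{solution}}\right),
$$

where $p_{\text{solution}}$ reflects the regularity of the solution; shock-dominated problems, for example, typically produce observed rates of first order or lower regardless of the nominal accuracy of the scheme.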

If, instead, another error estimation procedure is utilized (such as adjoint methods, PDE-based methods, Z-Z, etc.), there is a secondary burden for the simulation code to address. In these cases the error estimator itself needs to be verified (code verification using comparison with analytical error estimates, and solution verification for applied use). I have rarely observed the successful use of verification for such estimation procedures.

Finally, I’ll mention the concerns I have about commercial CFD codes, or codes downloaded and used without detailed knowledge of the solution procedures therein. In the vast majority of cases these codes do not have a well-evidenced pedigree. The codes often claim a good pedigree, but the evidence of that pedigree is sorely lacking. Those writing, selling and distributing these codes rarely provide the necessary evidence to support good faith in such codes. This lack of an evidenced pedigree and of deep knowledge of the solution procedures greatly limits the effective estimation of numerical error when using such codes.

ASME V&V Symposium, Las Vegas, Nevada May 7-9, 2014.

09 Friday May 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

This week I’m traveling all week (and next week too). Traveling is a mixed bag: I’m seeing new things and being exposed to new ideas, but I’m away from home and family. It’s a yin and yang sort of thing. I love it, and I hate it. Today, I’ll focus on the good part, the part that keeps my brain humming along. Without mixing up whom you interact with, your ideas may become stale, or fail to consider issues that come from a differing perspective. You also limit your ability to contribute to wider debates and to learn about new ideas developed elsewhere. You need travel for intellectual vibrancy.


First, I went to visit the University of Notre Dame, where DOE has funded a center to study a really cool material processing idea. The physical idea itself is fascinating, involving reacting shock waves and the production of exotic materials. At the university they are combining disciplines to design and predict the experimental results. The process has never been executed before, but they believe it can be if they get everything right. They are using computational physics, multiscale modeling and advanced computing ideas along with the laboratory work. The key is engagement across areas that often don’t mix in an academic setting. This mixing is itself a challenge at a university, where the tendency is for work to be done in narrow, deep silos of knowledge. I’m there leading a team of National Lab scientists who are there to help and advise the Center as well as make sure the work stays aligned with things the Lab values. Key among those values is multi-disciplinary work requiring disparate skills for success. This part was “easy” since it was the first two days of the two-week trip, and full of cool mind-expanding ideas. The hard part was having to be “on” for the entirety of the meeting as the chairman of the committee.


The last three days have been spent at the third ASME V&V Symposium. The meeting has a real sense of being important to the V&V community. There seems to be a nexus at hand where the field will either grow and flourish, or decay and die. Opportunity lies to the right, and danger lies to the left; or is it the other way around? Important decisions are being made, such as whether to have a V&V journal and how V&V figures into the regulation of medical devices and other aspects of the biological sciences. The danger with a journal is that it becomes a cul-de-sac where all V&V work is done, offering an excuse for not applying V&V in application areas. The distinction is important, and the spectrum of detailed V&V work needs to run from the development of methodology to its application in purely applied work. For V&V to flourish, it needs both theoretical work and committed application in domain science and engineering. Regulatory work offers similar, but different, challenges. The tension is between strong regulation, where people are made to do things “right,” and so much rigidity that innovations in scientific methods can no longer benefit the quality of work.

For me, there was a big event: I was giving a plenary talk the second morning of the meeting. My talk’s abstract was published on the blog earlier, “What kind of person does V&V?”, and in retrospect it drew attention appropriately to the talk. It also raised expectations; I had to deliver. Part of the talk was a desire to work on my own approach to giving a talk. In other words, I was pushing myself. I integrated elements of “Presentation Zen” and TED talks into the presentation. These elements meant using more images and fewer words while focusing on stories and free-flowing narratives to deliver the message. My intent was to provoke thought and self-reflection in the V&V community. The talk was crafted into five narrative arcs, some of which have been posted here. The first of these was the analogy between V&V and Human Resources; next I spoke about the danger of technology being too easy or simple (V&V is technology); the third was the use of “you idiot” appended to questions to screen bad questions out; followed by imploring V&V practitioners to act more as coaches and less as referees.

The final story revolved around the ongoing revolution in computing and data science coupled with the end of Moore’s law. V&V is needed to help manage the transition in computational science that will occur in the next decade. Without V&V computational science might be lost or go seriously off the rails. Without Moore’s law in effect we no longer have bigger, faster computers to simply crush problems with resolution. Instead, we will have to rely upon being smarter and improving methods, but new methods produce different answers, and V&V is necessary to build trust and confidence in the new methodology. Furthermore, the direction that computing is going offers the possibility to leverage the technology in a myriad of creative ways. V&V is core to success in many of these opportunities.

I felt very good at the end, and the objectives of the talk seemed to be generally achieved. I was imploring V&V to be collaborative, flexible and emotionally intelligent. As I’ve discovered if you give a good talk you’ll get questions and comments. People will want to talk to you. I received both in spades.

I’ll draw to a close with a few observations about the meeting. The topic of V&V is quickly maturing. The practice of V&V is improving across the board. More and more talks are getting at subtleties in the practice rather than the basics. This is clearest in the new biological science uses of V&V, and in particular medical device modeling. The quality of the work is much better than the previous year, and in fact the pace of improvement is astounding, especially compared to the physical sciences that birthed V&V. A deeper concern is the penetration of V&V into the application sciences. Is V&V being done better and more extensively where it is needed in engineering practice? Or in key scientific endeavors such as climate science? Climate science is politically charged and needs V&V, but the sensitivity to criticism acts to effectively poison any V&V despite the magnitude of the need. Pat Roache compared the discussions the climate community has about quality to a couple having a hushed argument behind closed doors, fearing their children might overhear. There is a distinct lack of domain science expertise at the meeting, and that is a major concern.

I will also mention the extensive use of commercial codes as a concern. The verification work on many of these codes is not up to standards. I can’t say it doesn’t exist; I just haven’t seen it. As many have noted, V&V is evidence based, and the evidence isn’t there. The use of commercial codes for CFD is extensive; laboratory, government-built and company-internal software is now dominated by commercially sold CFD codes. The actual quality of these codes is difficult to assess, and the vendors aren’t very open to discussing their “secrets”. In the end we need better transparency so that the solutions they create can be trusted. Like many applied codes, the “robustness” of the code takes precedence over accuracy. As such, low order numerical methods are highly favored. Another concern is the utilization of antiquated and simple methods for multiphysics coupling.

The good thing about the meeting was seeing lots of old friends, having great conversations and gathering a whole lot to think about when I get home. The people who manage science federally and have restricted our attendance at meetings fail to understand what is important about conferences. Yes, we present our work to our peers, and our peers give us feedback, but much more happens. First and foremost, we see the work our peers are doing. We engage people socially, and we laugh, argue and eat together. The social aspects of science are critical to a well-functioning activity. Conferences are essential to the conduct of science because they allow people to interact as people. Presenting a paper at a conference is only one small aspect of a much broader engagement.

Next week I’m in France and seeing what the application folks in high energy density physics are up to. I have my hopes and have my fears.

To Do Better Research, Ask Better Questions

02 Friday May 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“It is not the answer that enlightens, but the question.” – Eugène Ionesco, Découvertes

Research is all about answering questions. The nature and quality of the question determines the power of the answers. I’ll just assert that we haven’t been asking very good questions lately, and the quality of the research is showing the shortcomings. Lack of real risk, coupled with intolerance of failure in research agendas, is a major problem today. Together these tendencies are tantamount to choosing a research agenda that produces little or nothing of value. These twin ills are reflected in the quality of the research questions. Poor questions that fail to probe the boundaries of knowledge lead to poor research that keeps those boundaries fixed. It is easier to continue to ask the same questions as before. There is a distinct predisposition toward asking softball questions because you can be sure of the answer. If I’m sure of the answer, I haven’t asked a good question. The answer will do little to enlighten me beyond what is already self-evident.

For example, I realize now that major opportunities were missed in my previous life at Los Alamos. Up there, the nuclear weapons designers are kings. They also project a certain disdain for computer codes despite using them virtually every day in the conduct of their work. I missed some really good questions that might have opened doors to deeper discussions that are sorely necessary for progress. Instead we just beat around the proverbial bush and avoided the issues that hold back progress. I can imagine a dialog (past the third line it’s not clear where it would actually lead):

Me: “why do you believe your calculation is right?”

Designer: “I don’t, the code always lies to me”

Me: “then why do you use it?”

Designer: “it helps me solve my problems”

Me: “even if it lies?”

Designer: “I know how to separate the truth from the lies”

Me: “So it does contain some useful information?”

Designer: “Yes.”

Me: “How do you know where the utility ends, and the lies begin?”

Designer: “My judgment”

Me: “How do you know your judgment is sound?”

Designer: “I match the calculations against a lot of experimental data”

Me: “Do you know that the path taken to solution is unique, or can it be done multiple ways?”

Designer: “There is probably more than one way, but lots of experiments provide more confidence.”

Me: “What are the implications of this non-uniqueness?”

Designer: “I haven’t thought about that.”

Me: “Why? Isn’t that important or interesting?”

Designer: “It is a little frightening.”

This is the point where the discussion starts to veer into interesting and essential territory. We are confronted with systems dripping with uncertainty of all sorts. Many scientists are inherently biased toward solving well-posed initial value problems. For instance, they will generally interpret an experiment as a unique instantiation of the physical system, and expect the simulation to get that precise answer. This is reasonable for a stable system, but completely unreasonable for unstable systems. Remarkably, almost every technological and natural system of great interest has instabilities in it. Even more remarkably, these systems often have a large enough ensemble of unstable events for them to average out to reliable behavior. Nonetheless, they are not, and should not be, simulated as well-posed problems. Dealing with this situation rationally is a huge challenge that we have not stood up to as a community despite its pervasive nature.

Recently, I picked up a Los Alamos glossy (the LANL publication National Security Science) that discussed the various issues associated with nuclear weapons in today’s world. The issues are complex and tinged with deep geopolitical and technical questions. Take for instance the question of what the role of nuclear weapons is in national security today. Maybe a better question would be, “imagine a world where the USA didn’t have nuclear weapons, but other nations did; what would it be like?” “Would you be comfortable in that World?”

The importance and training of a new generation of weapons designers was also highlighted in the glossy. In the dialog associated with that discussion, the gem that the “codes lie” shows up. This is a slightly more pejorative version of George Box’s quote “all models are wrong,” without the positive retort “but some are useful.” I strongly suspect that the “codes lie” would be followed by “but they were useful” if the article had probed a bit deeper, but glossy publications don’t do that sort of thing. The discussion in the LANL glossy didn’t go there, and lost the opportunity to get to the deeper issues. Instead it was purely superficial spin. My retort is that codes don’t lie, but people sure do. Codes have errors. Some of these errors result from the omission of important, but unknown, physical effects. Other errors are committed out of necessity, such as numerical integration, which is never perfect. Other errors stem merely from the finite nature of knowledge and understanding, such as the use of mathematics for the governing equations, or imperfect knowledge of initial conditions. The taxonomy of error is the business of verification and validation with uncertainty quantification. The entire V&V enterprise is devoted to providing evidence for the quality (or lack thereof) of simulation.

We analyze systems with computer codes because those systems are deeply nonlinear and complex. The complexity and nonlinearity exceed our capacity to fully understand. The computer code allows us to bridge our human capability for comprehension to these cases. Over time, intuition can be developed that, when combined with concrete observation, leads to confidence. This confidence is an illusion. Once the circumstances depart from where the data and simulations have taken us, we encounter a rapid degradation in predictive intuition. That is where danger lies. The fact is that the codes have errors, but people lie. People lie to gain advantage, or more commonly they lie to themselves, because to answer truthfully requires them to stare into the abyss of ignorance. In that abyss we can find the research questions worth answering, the ones that allow mankind’s knowledge to advance.

The key is to get to a better question. It is about pulling a thread, doing an interrogation of the topic that peels away the layers of triviality and gets to something with depth. First, the codes are more powerful than their users will admit, but more deeply, the path to solution is not unique. Both aspects are deeply important to the entire enterprise. I might imagine having the same dialog with regard to climate science, where similar issues naturally arise. Answers to these questions get to the heart of computational science and its ability to contribute to knowledge.

The punch line is to push you to get at better, deeper questions as the route to better research. We need to ask questions that are uncomfortable, even unsettling. Not uncomfortable because of their personal nature (those are the “you idiot” questions, where adding that phrase at the end of the question makes sense), but uncomfortable because they push us up to the chasm of our knowledge and understanding. These are questions that cause one to rethink one’s assumptions and, if answered, expand one’s knowledge.

I had an episode the other day that provided such a thread to pull. The issue revolves around the perniciousness of calibration and the false confidence that it produces. People looking at reactor criticality hold their calculations to a withering standard, demanding five digits of accuracy. When I saw how they did this, my response was “I don’t believe that.” This was a sort of question: “can you justify those five digits?” The truth is that this answer is highly calibrated; the physical data is adjusted (homogenized) to allow this sort of accuracy, but it is not “accuracy” in the sense that numerical modeling is built upon. It is precision. It is a calibrated precision where the impact of data and numerical uncertainty has been compensated for. This procedure and capability lacks virtually any predictive capability at the level of accuracy asserted. The problem is that reactor criticality is a horribly nonlinear problem, and small deviations are punished with an exponential effect. Practically speaking, the precision of getting the criticality correct (it’s an eigenvalue problem) is enormously important, and this importance justifies the calibration.
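The exponential punishment is easy to see in the crudest point-kinetics picture (neglecting delayed neutrons, so this caricature overstates the speed but not the character of the sensitivity):

$$
\frac{dN}{dt} \;\approx\; \frac{k_{\text{eff}}-1}{\Lambda}\,N
\quad\Longrightarrow\quad
N(t) \;\approx\; N_0\, e^{(k_{\text{eff}}-1)\,t/\Lambda},
$$

so the behavior is governed by $k_{\text{eff}}-1$, and a shift in the fourth or fifth digit of $k_{\text{eff}}$ changes the answer qualitatively. That is the pressure that drives the demand for so many digits, and the calibration that delivers them.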

A similar issue arises in climate science, where the global energy balance must be nailed lest the Earth heat or cool unphysically. There, a calibration is conducted that only applies to the specific mesh, numerical integration and subgrid models. If any of these things change, the calibration must change as well to maintain the proper energy balance. The issue is whether the overall approach can be trusted at all as the system being modeled departs from the observed system against which it has been calibrated. For computational science this may be one of the most important questions to answer: “how far can a calibrated model be trusted?” “How can a calibrated model be trusted to assist in decisions?” Without the calibration the model is functionally useless, but with the calibration is it useful?

Questions are a way of encapsulating the core of what is wrong with computational science’s obsession with high performance computing. The question that would be better to ask is “are we focused on leveraging the right technological trends to maximize the impact of computational science on society at large?” I believe that we are not. We are missing the mark by a rather large margin. We are in the process of “doubling down” on the emphases of the past while largely ignoring how the World has changed. The change we see today is merely the beginning of even bigger things to come. The approaches of the past will not suffice moving forward. For instance, the real hard truth is that the secrets of the physical systems we are interested in will not simply submit to brute force computational power. Rather we need to spend some time thinking deeply about the questions we are trying to answer. With a little bit of deep thought we might actually start asking better questions and start down the path of getting more useful answers.

Scientific computing was once a major player in the computing industry. Now it is merely a gnat on a whale’s back. The scientific computing community seems to be trying to swim against the incoming tidal wave instead of trying to ride it. Opportunity lies in front of us; can we muster the bravery to grasp it?

“The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.” – Anthony Jay

 
