
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: July 2015

Do We Even Know What These Terms Mean?

31 Friday Jul 2015

Posted by Bill Rider in Uncategorized


The beginning of wisdom is the definition of terms.

― Socrates

Accuracy, Fidelity, and Resolution; Stability, Robustness, and Reliability

These six words matter a lot in the discussion of numerical methods, yet their meanings are poorly understood and regularly conflated. Worse, they are poorly articulated in the literature despite their central importance to the character of numerical methods and their instantiation in code. Maybe I can make some progress toward rectifying this here. Consider this post a follow-on and complement to the earlier post on improving CFD codes, https://williamjrider.wordpress.com/2015/07/10/cfd-codes-should-improve-but-wont-why/.

One of the biggest issues in providing clarity in these matters is the ultimate inability of classical mathematics to provide clear guidance as we move from the abstract, ideal world to messy reality. When methods are used to solve problems with a lot of reality in them, things that work well or even optimally in an idealized version of the world fall apart. In the ideal world, very high-order approximations are wonderful, powerful, and efficient. Reality tears this apart, and these characteristics become problematic, fragile, and expensive. The failure to bridge methods over to the real world undermines progress and strands innovation.

Accuracy is a simple thing to define: the deviation of an approximation from reality. For an idealized mathematical representation this measurement is “easy.” The problem is that measuring it for any real-world circumstance is difficult to impossible. One way to define accuracy is through the asymptotic rate of convergence as the degree of approximation is refined; this is “order of accuracy.” The connection between order of accuracy and numerical precision is ill defined and fuzzy, and reality makes it even harder.
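
To make “order of accuracy” concrete, here is a minimal sketch (my own toy example, not drawn from any particular code) of how the observed rate is measured in practice: compute the error of a centered difference on successively refined grids and estimate p = log(E_coarse/E_fine)/log 2.

```python
import numpy as np

def derivative_error(n):
    """Max error of a centered difference for d/dx sin(x) on a periodic grid of n points."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    u = np.sin(x)
    # second-order centered difference with periodic wraparound
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)
    return np.max(np.abs(dudx - np.cos(x)))

# observed order of accuracy from successive refinements (refinement ratio r = 2):
#   p = log(E_coarse / E_fine) / log(r)
errors = [derivative_error(n) for n in (32, 64, 128, 256)]
for e_coarse, e_fine in zip(errors, errors[1:]):
    p = np.log(e_coarse / e_fine) / np.log(2.0)
    print(f"observed order ~ {p:.3f}")   # approaches 2 because this solution is smooth
```

On a smooth problem the observed order matches the formal order; the point of the rest of this post is how badly that correspondence degrades once reality intrudes.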

A clear way that the world becomes real is shock waves (and other similar nonlinear structures). A shock wave, or a similarly discontinuous solution, renders most numerical methods first-order accurate at best. Our current knowledge of how formal, ideal high-order accuracy connects to measurable accuracy in the presence of discontinuities is sketchy. This manifests itself in method research that focuses on high-order accuracy without any understanding of how it translates to accuracy for real-world problems. Today the efficacy of high-order methods is largely an article of faith.

Alternative definitions of accuracy focus on the terms fidelity and resolution. Both of these terms are even fuzzier than accuracy. Both get applied to methods that provide their value in (more) real-world circumstances where formal accuracy is diminished. Thus important classes of methods are labeled “high-fidelity” or “high-resolution.” Both labels are used to imply the capability to provide good solutions when the ugliness of reality intrudes on our idealized picture.

Peter Lax provided a definition of resolution in an unfortunately obscure source. There, and in a fascinating interview of Lax by Phil Colella (http://history.siam.org/oralhistories/lax.htm), the concept is discussed along with an astounding proposition: perhaps higher than second-order accuracy is too accurate, producing solutions that do not leave enough “room” to capture true solutions. It is not a final answer, but it does give a direction for thinking about such things.

Here is the key passage on the topic of resolution:

“COLELLA: So you have shocks, why second-order? What were you thinking?

LAX: Well, you want greater accuracy, but even more you want greater resolution. I defined a concept of resolution. If you take a difference method and you consider a set of initial value problems of interest, which in practice could be some ball in L1-space, anything will do, and then you look at the states into which it develops after the unit-time, any given time, that’s another set. The first surprise is that this set is much smaller for nonlinear…for linear equations where time is reversible, the size of this set is roughly the same as the original set. For nonlinear equations, which are not reversible and where the wave information is actually destroyed, it’s a much smaller set. And the measure of the set that is relevant is what’s called entropy or capacity with respect to some given scale delta. So the first thing to look at is what is the capacity or entropy of this set of exact solutions. Then you take a numerical method, you start, you discretize the same set of initial data, then you look at what you get after time t goes to whatever the test time was. A method has a proper resolving power if the size of this set is comparable to the size of the exact solution; if it’s very much smaller it clearly cannot resolve. And first-order methods have resolution that is too low, and many details are just washed out. Second-order methods have better resolution. In fact, I was trying to – well, I want to bring up the question: could it be that methods that are even higher order (third, fourth) have perhaps too much resolution, more resolution than is needed? I just bring this up as a question.”

I might offer a bit of support for that concept in the case of genuinely nonlinear problems below. In a nutshell, second-order methods in conservation form produce a truncation error that matches important aspects of the true physics; higher-order methods do not capture this aspect. I’ll also note that Len Margolin and I have followed a similar, but distinct, line of thinking in looking at implicit large eddy simulation (ILES). ILES is the observation that high-resolution methods appear to provide effective turbulence modeling without the benefit of explicitly added subgrid models.

So let’s talk about the archetype of nonlinear, real-world, messy computation: shock waves. In some ways shocks are really nice; they are inherently dissipative even when the system is free of explicit molecular viscosity. Dissipation in the limit of zero viscosity is one of the most profound aspects of our mathematical description of reality. For physical systems with a quadratic nonlinearity, including shocks and turbulence, this dissipation scales as $C \left(\partial_x u\right)^3$, with $u$ the velocity and $C$ a constant. At its core is the imposition of reality on idealized mathematics, providing a useful, utilitarian description of mathematically singular structures. The same character is present in turbulence; both share essentially the same scaling law and the same deep philosophical implications.
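
You can see dissipation surviving the zero-viscosity limit in a few lines of code. Here is a minimal sketch under my own choice of setup (a first-order Godunov scheme for the inviscid Burgers equation with sine initial data, which steepens into a shock near t = 1): as the mesh is refined, the loss of the “energy” integral of u²/2 does not shrink toward zero but converges to a finite value set by the shock.

```python
import numpy as np

def burgers_energy_decay(n, t_final=1.5):
    """First-order Godunov for u_t + (u^2/2)_x = 0 on a periodic domain.
    Returns the loss of integral(u^2/2 dx) between t = 0 and t_final."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x)                       # steepens into a shock near t = 1
    energy0 = 0.5 * np.sum(u**2) * dx
    f = lambda v: 0.5 * v**2
    t = 0.0
    while t < t_final:
        dt = min(0.5 * dx / max(np.max(np.abs(u)), 1e-12), t_final - t)
        uL, uR = u, np.roll(u, -1)      # states on either side of each right-hand face
        # Godunov flux for a convex flux with its minimum at u = 0
        flux = np.maximum(f(np.maximum(uL, 0.0)), f(np.minimum(uR, 0.0)))
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
        t += dt
    return energy0 - 0.5 * np.sum(u**2) * dx

# no explicit viscosity anywhere, yet the energy loss converges to a finite,
# nonzero value under refinement: the dissipation belongs to the shock, not the mesh
for n in (100, 200, 400, 800):
    print(n, burgers_energy_decay(n))
```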

This form of nonlinear dissipation comes directly from applying conservation form to methods with second-order accuracy. For energy, this term has precisely the form of the asymptotic law, except that it arises from the discrete system. If the method achieves formally higher than second-order accuracy, the term disappears. For very simple second-order schemes there are truncation errors that compete with this fortuitous term, but if the linear accuracy of the method is higher order, this term is the leading and dominant truncation error. This may explain why schemes like PPM and FCT produce high-quality turbulence simulations without explicit modeling, while methods like minmod or WENO do not. The minmod scheme has a nonlinear truncation error that dominates the control-volume term; for WENO, the higher-order accuracy means the dissipation is dominated by a combination of hyperviscous terms.

These deep philosophical implications are largely ignored by the literature, with shocks and turbulence defining a separation of focus. The connections between the two topics are diffuse and unfocused. A direct connection would be a stunning breakthrough, but entrenched interests in both areas conspire against it. The remarkable similarity of the limiting dissipation in the absence of viscosity has been systematically ignored. I see it as utterly compelling, or simply brutally obvious, for a quadratic nonlinearity; either way the similarity is meaningful. One of the key problems is that turbulence research is almost completely grounded in the belief that turbulence can be fully described by incompressible flow. No one seems to question this assumption.

Incompressibility is a physically limited approximation of reality, not reality itself. It renders the equations intractable in some ways (see the Clay prize for proving the existence of solutions!). The unphysical nature of the equations is twofold: sound speeds are infinite, and thermodynamics is removed (the loss of the second law is especially harmful). Perhaps more problematic is the loss of the very nonlinearity known to be the source of dissipation without viscosity for shock waves: the steepening of arbitrary disturbances into discontinuous shocks.

I’ve written before about stability and robustness, with a focus on the commonality of their definitions, https://williamjrider.wordpress.com/2014/12/03/robustness-is-stability-stability-is-robustness-almost/. The default methodology for stability analysis was discussed too, https://williamjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/. If we add the term “reliability” the situation is quite analogous to the issues with accuracy. We ultimately don’t have the right technical definitions for the useful character of practical reliability and robustness of numerical methods and their executable instantiation in code. Stability is necessary for robustness and reliability, but robustness and reliability imply even more. Typically the concept of robustness applies to practical computational methods used for real-world (i.e., messy) problems.
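
For readers who haven’t seen the linked stability post, here is a minimal von Neumann sketch for one concrete case I chose purely as an illustration: first-order upwind applied to linear advection. Substituting a Fourier mode into the scheme gives the amplification factor g(θ), and stability requires |g| ≤ 1 for every wavenumber, which holds only for Courant numbers between 0 and 1.

```python
import numpy as np

def upwind_amplification(nu, n_theta=721):
    """Max von Neumann amplification factor |g(theta)| for first-order upwind
    applied to u_t + a u_x = 0, with Courant number nu = a*dt/dx.
    Substituting u_j^n = g^n * exp(i*j*theta) into the scheme gives
        g(theta) = 1 - nu*(1 - exp(-i*theta))."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    g = 1.0 - nu * (1.0 - np.exp(-1j * theta))
    return np.max(np.abs(g))

# stability requires max |g| <= 1 over all wavenumbers: true only for 0 <= nu <= 1
for nu in (0.5, 1.0, 1.1):
    print(f"nu = {nu}: max |g| = {upwind_amplification(nu):.4f}")
```

This is the linear story; the point of the surrounding discussion is that robustness and reliability in real calculations demand more than a clean linear bound like this.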

The key issue for high-order methods is the inherent non-smoothness and lack of clean structure in the real world. This messiness renders high-order methods of questionable utility. Showing that high-order methods improve real-world, practical, pragmatic calculations is the challenge for the research community working in this area. Generally high-order methods show a benefit, but at a cost that makes their viability in production software questionable. In addition, high-order methods tend to be more fragile than their lower-order cousins. The two questions of robustness in use and efficiency are the keys to progress.

Given all of these considerations, what is a path forward to improving existing production codes with higher order methods?

I will close with a set of proposals on how we might see our way clear to improving methods in codes by balancing requirements for high-order accuracy, high resolution, robustness, and stability. The goal is to improve the solution of real, practical problems, not the idealized problems associated with publishing research papers.

  1. For practical accuracy, high order only matters for the linear modes in the problem. Therefore seek high order only for the leading-order terms in the expansion. Full nonlinear accuracy is a waste of effort; it only matters if the flow is fully resolved and the fields are as smooth as the scheme assumes (they never are!). This would allow the quadratures usually required by formally high-order methods to be reduced, along with the costs.
  2. For nonlinear structures, you only need second-order accuracy, which gives you a truncation error that matches the asymptotic structure of the dissipation analytically. Removing this term may actually harm the solution rather than improve it. The reasoning follows Lax’s comments above.
  3. Nonlinear stability is more important than linear stability; in fact, nonlinear stability will allow you to use methods that are locally linearly unstable. Extending useful nonlinear stability beyond monotonicity is one of the keys to improving codes.
  4. Developing nonlinear stability principles beyond monotonicity preservation is one of the keys to progress. A test of a good principle is its ability to allow the use of linearly unstable methods without catastrophe. The principle should not create too much dissipation outside of shocked regions (this is why ENO and WENO are not good enough principles). In a key way, monotonicity-preserving methods naturally extended the linear monotone and first-order methods. The steps beyond monotonicity preservation have not built on this foundation, but instead introduced entirely different concepts. A more hierarchical approach may be needed to achieve something of more practical utility.
  5. The option of fully degrading the accuracy of the method to first order must always be in play. This is the key to robustness. Methods that do not allow this will never be robust enough for production work, which is another reason why ENO and WENO don’t work for production codes.

Logically, all things are created by a combination of simpler less capable components

– Scott Adams (the “Dogbert Principle,” applied to high-resolution schemes in Laney, Culbert B. Computational Gasdynamics. Cambridge University Press, 1998).

References

Lax, Peter D. “Accuracy and resolution in the computation of solutions of linear and nonlinear equations.” Selected Papers Volume I (2005): 184-194.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39.9 (2002): 821-841.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225.2 (2007): 1827-1848.

Broadwell, James E. “Shocks and energy dissipation in inviscid fluids: a question posed by Lord Rayleigh.” Journal of Fluid Mechanics 347 (1997): 375-380.

Bethe, Hans Albrecht. Report on “The Theory of Shock Waves for an Arbitrary Equation of State.” 1942.

It’s really important to have the fastest computer

24 Friday Jul 2015

Posted by Bill Rider in Uncategorized


It is dangerous to be right in matters on which the established authorities are wrong.

― Voltaire

Last week I received a question via email that prompted this post. It proposed that the title of the post is true, and it talked about the benefits of pushing the envelope with high performance computing. The gist is that by pushing the envelope with computing we can use mainstream high-end computing resources more effectively. This is true without a doubt; it is a benefit of having a bleeding-edge research program in any area. A better question is whether it is the most beneficial way to allocate our current efforts.

Everything in excess is opposed by nature.

― Hippocrates

The ability of such a program to impact the world positively is a much more difficult and nuanced question. For high performance computing we have seen a decades-long focus on the power and speed of the hardware that has fueled growth in peak computing speed consistent with Moore’s law. Unfortunately, a host of capabilities essential for realizing this computing power as a scientific capability have not been similarly supported. Without these other capabilities, such as physical models, solution methods, and algorithms, the computing hardware is nothing more than a very expensive way to use electricity. The very things that make computers useful for modeling and simulation are the things we have not invested in for those same decades.

The distance between insanity and genius is measured only by success

― Ian Fleming

The issue is that this benefit does not exist in a vacuum. There are limits to the financial and human resources that may be devoted to the objective of “predictive” modeling and simulation. My politically incorrect assertion is that the focus on high performance computing hardware is a suboptimal approach to maximizing the capability for modeling and simulation. The devotion to progress in hardware is sapping the resources that might be applied to attacking the problem in a more balanced manner.

If we have no heretics we must invent them, for heresy is essential to health and growth.

― Yevgeny Zamyatin

This imbalance is primarily exemplified by the failure to invest in people, experiments, and models. When I speak of investing in people, it goes far beyond simply paying them. Investing in people means creating systems where people can develop and grow in their capabilities while feeling safe and secure enough to take huge risks. Talented people who take risks are necessary for progress, and without such risk-taking progress stagnates. Without taking risks we cannot develop talent; the two are intertwined.

An expert is someone who knows some of the worst mistakes that can be made in his (her) subject, and how to avoid them.

– Werner Heisenberg

We have destroyed the vitality of our experimental sciences, which further amplifies the erosion of our scientific staff. Experimental science is absolutely necessary to advance science, and its decline has the knock-on effect of undermining the creation of new, better models. Having the world’s fastest computer cannot compensate for any of these shortcomings.

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

A big part of the problem is that the aspects of modeling and simulation closest to the modeling itself are starved. The modeling associated with high performance computing is static and relatively free of progress. We are not moving new models into codes, and ultimately into practical use, in a healthy way. The same can be said for solution methods that provide more powerful or effective ways of solving models, and for the algorithms that systematically provide the means of solution. Our current computing emphasis is geared toward efficiently delivering existing algorithmic solutions for existing methods for existing models.

History warns us … that it is the customary fate of new truths to begin as heresies and to end as superstitions.

― Thomas Henry Huxley

Inadequacies in models are simply allowed to persist, while their promised rescue by more powerful computing platforms remains a false promise. The real scientific answer is that a more powerful computer, better software, better algorithms, or better methods cannot save a model that is incorrect. Despite this maxim we keep attacking modeling and simulation predominantly through computing hardware.

…a more powerful computer, better software, better algorithms or better methods cannot save a model that is incorrect.

The fastest computer has even taken a page from Cold War nationalism, with the “missile gap” replaced by the “supercomputer gap.” Chinese prowess in computing is demonstrated by their supercomputer, and funding is supposed to follow so the American effort can regain the crown of fastest. I, for one, am far more worried about Chinese and Russian investments in human resources in modeling, methods, and algorithms than in computers. By all appearances those investments are significant. The American public should be far more worried about the encroachment of mediocrity into the research staff at our National Labs than about our lack of the fastest computers.

In the republic of mediocrity, genius is dangerous.

― Robert G. Ingersoll

The management of our national research programs and devotion to intellectually questionable priorities is by far a greater threat to our national security than anything our adversaries are doing. A perfect example of the problem is the increasingly legacy nature of our computer codes. We looked at the local history of one type of code and noted that up until 1990 we had a new code every five years. Since 1990 we have simply kept the same old codes. That is 25 years with the same code, the same methods, and the same approach. We have basically lost an entire generation of staff, and with it a generation of progress and research. The models and methods are frozen in time. This is a recipe for mediocrity at best, disaster at worst.

Progress is born of doubt and inquiry. The Church never doubts, never inquires. To doubt is heresy, to inquire is to admit that you do not know—the Church does neither.

― Robert G. Ingersoll

So, yes, having leading-edge computing is a great, wonderful, and important thing for the country, as it is for any country desiring international leadership. Properly defining what leading-edge computing actually comprises is difficult. A completely naive and incorrect way to define it is having the fastest or most powerful computer on Earth. Everyone knows what a fast computer is, but a powerful computer is a more subtle question. I would argue that in many respects my iPhone is more powerful and useful than virtually any supercomputer I’ve used. The problem in defining powerful is the limited utility of supercomputers: they are important for solving scientific problems, which are necessarily limited in context.

The riskiest thing we can do is just maintain the status quo.

― Bob Iger
Moreover, this effort for leading-edge computing lies in a resource-constrained trade space, and the focus on hardware leaves other efforts starved for funding and focus. Even this discussion leaves most of the important nuance untouched: the dependence on people and their talent. The issues around the efficacy of HPC efforts are subtle and far more nuanced than the mere power of the computer. A powerful supercomputer is useless without talented people to use it. The problem in the United States is that people are something we have chronically and systematically under-invested in. Our universities are in decline and part of a vastly corrupt system that underserves the public at a massive cost. The consequences of this decline in education are then amplified by the destruction of the social contracts associated with post-educational work.

Unhappiness lies in that gap between our talents and our expectations.

― Sebastian Horsley

Employees are viewed as commodities, infinitely replaceable, even at National Labs. The lifetime employment necessary for deep, sustainable expertise has been replaced by an attitude more appropriate for Wal-Mart. Over the past couple of decades the sort of strong scientific leadership once provided by the National Labs has given way to Lab employees who are little more than “sheeple,” bending to political will rather than speaking up and offering their expertise instead of politically correct pablum. Today the Labs simply do what they are told; their spirit has been beaten out of them. I might even be so bold as to say that the attempt to lead in scientific computing says more about our lack of scientific leadership than our commitment to it.

The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….

― Noam Chomsky
If the models, methods, and codes used on our fastest computer are lacking, the computer’s value is diminished to the point of being worthless. A computer provides results only as good as the codes running on it. If the people running the problems on the computer are similarly lacking, the computer’s value is diminished as well; the computer only answers questions as well as the people asking them. I believe we have systematically failed to make the investments in models, methods, codes, and people sufficient to make the focus on computing power pay off. We have created a new generation of legacy codes (just because it’s written in C++ does not keep it from being a legacy code!), with legacy models and methods, and a staff that cannot fully understand the codes or calculations they are running. The fiction that all we need to do is refine the mesh and run calculations on bigger computers to predict nature continues to hold sway.

Every truth in this world stretched beyond its limits will become a false doctrine.

― K.P. Yohannan
This is a situation that the mismanagement of the Labs has created. The DOE has done the same thing to its Labs that the DOD did. The DOD foolishly destroyed its research labs 30 years ago, and over the last 20 years the DOE has followed a similar path toward mediocrity. The national resource of these Labs is being allowed to fade and wither; we have allowed them to atrophy. Our approach to high performance computing is but one example. The situation is even worse when you look at what we have done to our experimental sciences. Under these conditions, having the lead in computing hardware will do little to actually support our national security, because we have failed to keep competence in the fundamentals necessary for efficacy.

Change almost never fails because it’s too early. It almost always fails because it’s too late.

― Seth Godin
In other words, for the hardware to really matter in the delivery of predictive modeling and simulation, everything upstream of the machine needs to be right. This includes the people. We have failed to invest in leading-edge technology in the very things that make the supercomputer valuable. We have a skewed and unbalanced view of how computing works, which allows the justification of our current programmatic path but fails to deliver true progress.

Heretics are the new leaders. The ones who challenge the status quo, who get out in front of their tribes, who create movements.

― Seth Godin

Inside Out: Lessons from a kid’s movie

18 Saturday Jul 2015

Posted by Bill Rider in Uncategorized


Happiness is the pleasantest of emotions; because of this, it is the most dangerous. Having once felt happiness, one will do anything to maintain it, and losing it, one will grieve.

― Kij Johnson


Rarely has any movie stuck with me like Inside Out. It is hard to imagine how an animated movie about an 11-year-old girl could leave such a lasting impression on me, but it has. It is advertised as a kid’s movie with hidden themes for the parents to enjoy, but that is backwards: it is a movie for adults that will entertain your kids. The movie is well done, as you’d expect from Pixar, and it packs a massive punch. It’s a really good movie, but it’s a better adult movie than kid’s movie.

What has given this movie such staying power?

It gave me a lot to consider that applies to my own life. It provided an open, playful way to think about extremely deep themes in how you personally manage your response to reality through your emotions. The main characters in Inside Out are the emotions of the 11-year-old girl, Riley. The dominant character is Joy, with the other key emotions Sadness, Anger, Fear, and Disgust all vying to respond to her world. When the story begins, Joy dominates Riley’s life, but a major life event throws everything into turmoil. The central conflict is how much you should allow sadness into your life and what happens when you don’t.

Riley has spent 11 years growing up in Minnesota living a life of fun, family, friends, and especially hockey. Her family picks up and moves to San Francisco, and all hell breaks loose inside Riley. Everything she has known and loved has been taken away. Sadness begins to creep into her reactions, and Joy tries to rein it in. Push comes to shove and disaster ensues. Riley’s emotions fall apart along with her. She begins to make bad decisions based on inappropriate emotional responses, and her life spirals out of control. The foundations of her entire psyche begin to fall apart. Meanwhile Joy and Sadness try to get back into her emotional wheelhouse (along with all the hijinks that the child viewers will enjoy).

Emotions are not irrational; they are how we rationalize the world. The key is to use the right ones for the information we are being handed. We need to use the cognitive tools represented by our emotions to process the world and respond appropriately. Irrational responses come from applying the wrong emotion to the situation, which can lead to bad outcomes. The world throws a lot at us, and our reactions need to be proper: events range from wonderful to horrible, and our emotions need to match them. That is rational. Therefore sadness, fear, anger, and disgust, despite being negative, are important and proper for many things. They are necessary for our survival.

In the end, Joy and Sadness return to save the day with a key realization: Sadness is an appropriate and important response to traumatic events; Joy is not. Trying to make everything joyous and happy is simply wrong and inappropriate. Sadness adds a texture to formerly unambiguously happy memories, which now have a complexity the younger Riley lacked. Riley is growing up and rebuilding her emotional world. Joy is no longer so dominant; the other emotions now have much more sway in her emotional makeup. Letting Sadness in was the key to her response to crisis.

The lesson will always repeat itself, unless you see yourself as the problem–not others.

― Shannon L. Alder

A key lesson for us today is that unhappiness and sadness are the right response to many things in life. Our culture drives home the message that we should always be happy, satisfied, and joyful. Many situations call for sadness, or anger, or fear, or disgust. If we don’t feel the right emotion, our reaction to the situation is inappropriate and harmful to our long-term well-being. The movie carries the deep and impactful message that “negative” emotions are more than simply okay; they are powerful and proper responses to many of life’s events.

Even more powerfully, there are circumstances where happiness or joy is utterly wrong and harmful. The attempt to imprint joy onto these situations hurts us and produces an improper personal response to life’s travails. Life is about balance and the push and pull of events. For us to learn, grow, and develop correctly, we must process and respond to events in a way that fits them. When we do not respond properly we can hurt our future. Moreover, the full repertoire of emotions is needed to live our lives with the tools to deal with what life throws at us. The so-called negative emotions should be embraced when they fit the circumstances. In Riley’s case sadness was proper and healthy, and the attempt to feel nothing but joy left her in a tailspin of almost disastrous magnitude.

Don’t cry because it’s over, smile because it happened.

― Dr. Seuss

 

CFD codes should improve, but won’t. Why?

10 Friday Jul 2015

Posted by Bill Rider in Uncategorized



If failure is not an option, then neither is success.

― Seth Godin

Alternate titles

Progress in CFD has stalled. Why?

Why are methods in CFD codes so static?

Why is the status quo in CFD so persistent?

Status quos are made to be broken.

― Ray Davis

There are a lot of reasons for the lack of progress in CFD codes, and here I will examine one particular issue. The reality is that a myriad of issues plague modern codes. I’ve written about problems with our modeling and its lack of suitability for tackling modern simulation questions. One of the major issues is the declaration that success won’t be reached until computers are far more powerful. This is also testimony to the lack of faith in innovation and creativity in research (risk aversion and fear of failure being key). As a result, funding and focus for improving the fundamentals of CFD codes have dried up. It is as if the community has collectively thrown up its hands and said, “it’s not worth it!”

The riskiest thing we can do is just maintain the status quo.

― Bob Iger

We have a research program overly focused on utilizing the next generation of computing hardware. The major overarching issue is a general lack of risk-taking in our research programs, spanning government-funded pure research, applied research programs, and industrially focused research. Without a tolerance for failure, and hence for risk, the ability to make progress is utterly undermined. This more than anything explains why the codes are generally vehicles of status quo practice rather than dynamos of innovation.

Yesterday’s adaptations are today’s routines.

― Ronald A. Heifetz

If one travels back to the mid-1980s, there was a massive revolution in numerical methods in CFD codes. Methods introduced at that time remain at the core of CFD codes today. The reason was the development of new methods that were so unambiguously better than the previous alternatives that the change was a fait accompli. Codes produced results with the new methods that were impossible to achieve with the old ones. At that time a broad and important class of physical problems in fluid dynamics suddenly became open to successful simulation. Simulation results were more realistic and physically appealing, and the artificial, unphysical results of the past were no longer a limitation.

These methods were high-resolution methods such as flux-corrected transport (FCT), high-order Godunov, total variation diminishing (TVD), and other formulations for solving hyperbolic conservation laws. These terms are, in other words, the convective or inertial terms in the governing equations, transporting quantities through waves, most typically through the bulk motion of the fluid. These new (at the time) methods produced results that, compared with the preceding options, were simply superior by virtually any conceivable standard. In addition, the new methods were neither overly complex nor expensive to use. The principles behind their approach to solving the equations combined the best, most appealing aspects of previous methods in a novel fashion. They became the standard almost overnight.

Novelty does not require intelligence, but ignorance, which is why the young excel in this branch.

― Anthony Marais

This was accomplished because the methods were nonlinear even for linear equations, meaning that the domain of dependence of the approximation is a function of the solution itself. Earlier methods were linear, meaning the approximation was the same without regard for the solution. Before the high-resolution methods you had two choices: a low-order method that would wash out the solution, or a high-order method that would produce unphysical solutions. Theoretically the low-order solution is superior in one sense, because the solution could be guaranteed to be physical. This happened because the solution was found using a great deal of numerical or artificial viscosity. The solutions were effectively laminar (meaning viscously dominated), and thus lacked the energetic structures that make fluid dynamics so exciting, useful, and beautiful.

When your ideas shatter established thought, expect blowback.

― Tim Fargo

The new methods use higher-accuracy approximations as much as possible (or as much as is safe), and only use the lower-accuracy, dissipative method when absolutely necessary. Making these choices on the fly is the core of the magic of these methods. The new methods alleviated the bulk of this viscosity, but did not entirely remove it. This is good and important, because some viscosity in the solution is essential to connect the results to the real world. Real-world flows all have some amount of viscous dissipation. This fact is essential for success in computing shock waves, where having dissipation allows the selection of the correct solution.
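
Here is a toy illustration of that on-the-fly switching, using a generic minmod-limited scheme for linear advection of a square wave (my own minimal example, not any particular production method): the limiter recovers the second-order Lax-Wendroff flux where the solution is smooth and falls back toward first-order upwind near the discontinuity, so the profile stays sharp without the over- and undershoots of the pure second-order scheme.

```python
import numpy as np

def advect_square_wave(scheme, n=200, nu=0.5):
    """Advect a square wave once around a periodic domain for u_t + a u_x = 0 (a > 0)
    with a flux-limited update:
      scheme = 'upwind'       -> phi = 0                   (monotone but smeared)
      scheme = 'lax-wendroff' -> phi = 1                   (second order, oscillatory)
      scheme = 'minmod'       -> phi = max(0, min(1, r))   (high resolution)"""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
    u = u0.copy()
    for _ in range(int(round(n / nu))):                # one full period of travel
        du = np.roll(u, -1) - u                        # u_{i+1} - u_i
        r = (u - np.roll(u, 1)) / np.where(np.abs(du) > 1e-14, du, 1e-14)
        if scheme == 'upwind':
            phi = np.zeros_like(u)
        elif scheme == 'lax-wendroff':
            phi = np.ones_like(u)
        else:                                          # minmod limiter
            phi = np.maximum(0.0, np.minimum(1.0, r))
        flux = u + 0.5 * (1.0 - nu) * phi * du         # numerical flux / a at face i+1/2
        u = u - nu * (flux - np.roll(flux, 1))
    return u0, u

for scheme in ('upwind', 'lax-wendroff', 'minmod'):
    u0, u = advect_square_wave(scheme)
    print(f"{scheme:>12s}: min = {u.min():+.3f}, max = {u.max():+.3f}, "
          f"L1 error = {np.mean(np.abs(u - u0)):.4f}")
```

Upwind stays bounded but badly smeared, Lax-Wendroff overshoots and undershoots, and the limited scheme stays in bounds while keeping the front much sharper; that trade is the essence of the high-resolution revolution described above.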

The status quo is never news, only challenges to it.

― Malorie Blackman

The dissipation is the essence of important phenomena such as turbulence as well. The viscous nature of things can be seen through a technique known as the method of modified equations. This method of numerical analysis derives the equations that the numerical method effectively solves. Because of numerical error, when you solve an equation numerically the solution more closely matches a more complex equation than the one you set out to solve.
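
Here is a small symbolic sketch of the modified-equation procedure, applied (as my own example) to first-order upwind for u_t + a u_x = 0: Taylor expanding the discrete operator shows the scheme actually solves an advection-diffusion equation whose leading “viscous” coefficient is proportional to a times the mesh spacing.

```python
import sympy as sp

x, t, dx, dt, a = sp.symbols('x t dx dt a', positive=True)
u = sp.Function('u')(x, t)

def taylor(shift_x, shift_t, order=3):
    """Taylor expansion of u(x + shift_x, t + shift_t) about (x, t)."""
    series = sp.S(0)
    for i in range(order + 1):
        for j in range(order + 1 - i):
            term = u
            if i:
                term = sp.diff(term, x, i)
            if j:
                term = sp.diff(term, t, j)
            series += term * shift_x**i * shift_t**j / (sp.factorial(i) * sp.factorial(j))
    return series

# residual of first-order upwind for u_t + a*u_x = 0 (a > 0):
#   (u_j^{n+1} - u_j^n)/dt + a*(u_j^n - u_{j-1}^n)/dx
residual = (taylor(0, dt) - u) / dt + a * (u - taylor(-dx, 0)) / dx
print(sp.collect(sp.expand(residual), [dx, dt]))
# leading terms: u_t + a*u_x + (dt/2)*u_tt - (a*dx/2)*u_xx + ...
# using u_tt ~ a**2*u_xx from the leading-order equation, the scheme effectively
# solves u_t + a*u_x = (a*dx/2)*(1 - a*dt/dx)*u_xx plus higher-order terms
```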

In the case of the simple hyperbolic conservation laws that define the inertial part of fluid dynamics, the low-order methods solve an equation with classical viscous terms that match those seen in reality, although the magnitude of the viscosity is generally much larger than in the real world. Thus these methods produce laminar (syrupy) flows as a matter of course. This makes them unsuitable for simulating most conditions of interest to engineering and science. It also makes them very safe to use, virtually guaranteeing a physically reasonable (if inaccurate) solution.

The new methods get rid of these large viscous terms and replace them with a smaller viscosity that depends on the structure of the solution. The results are stunningly different and produce the sort of rich nonlinear structures found in nature (or something closely related). Suddenly codes produced solutions that matched reality far more closely. It was a night-and-day difference in performance; once you tried the new methods there was no going back.

Negative results are just what I want. They’re just as valuable to me as positive results. I can never find the thing that does the job best until I find the ones that don’t.

― Thomas A. Edison

This is the crux of the issue with moving on to even more advanced methods: the quantum leap in performance simply won’t be repeated. The newer methods will not yield a change like the initial move to high-resolution methods. They will be better and more accurate, but not Earth-shatteringly so. In today’s risk-averse world, making a change for the sake of continual improvement is almost impossible to sell. The result is stagnation and lack of progress.

The problems don’t end there by a long shot. Because of the massive improvement in solutions to be had with the first generation of high-resolution methods, cost was, to a very large extent, not an issue. With the next generation of methods the improvements are far more modest, and the cost of using them is an issue. So far, these methods are simply too expensive to displace the older ones.

The issues don’t even stop there. The new methods also tend to have relatively large errors compared to their cost. In addition, the newer methods tend to be fragile and may not handle difficult situations robustly. The demands of maintaining formally high-order accuracy are quite expensive (the time and space integration requirements are costly, whereas the first-generation high-resolution methods are simple and cheap). The result is that the newer approaches are methods that “do not pay their way.”

The balance of accuracy and cost has not been negotiated well. This whole dynamic is worth a good bit of discussion.

The key to this issue is the lack of capacity for high-order accuracy to be achieved on practical problems. To get high-order accuracy the solution needs to be smooth and differentiable. Real problems conspire against this sort of character at virtually every turn, with singular structures in the solution itself, not to mention in the geometry or the physical properties. Real objects are rough and imperfect, which tends to breed more structure in solutions. Shock waves are the archetype of the problem that undermines high-order accuracy, but the problem hardly stops there.

The measure of intelligence is the ability to change.

― Albert Einstein

All of these factors conspire so that on real problems results only improve in accuracy at first order (or worse), meaning that doubling the mesh resolution only halves the error; the accuracy is linearly proportional to the mesh spacing. This is a big deal, since second-order accuracy means that halving the mesh spacing yields a fourfold reduction in error, and third order would yield an eightfold reduction. The reality is that everything gives first-order accuracy or worse. The key to high-order methods working at all is that they can give a lower starting point for the error, which they sometimes do. The problem is that high-order methods are too expensive to justify the improvements they provide. The question is whether the benefits of practical accuracy can be achieved without incurring the costs typical of such methods.

Sometimes a clearly defined error is the only way to discover the truth

― Benjamin Wiker

The higher costs of high-order methods come from a multitude of their characteristics. The basic steps of creating the high-order approximations use more data and involve many more operations than existing methods. If that wasn’t bad enough, these methods often require multiple evaluations to integrate their approximations using quadratures. For time-dependent problems, they often require more stages and smaller time steps than standard methods. To make matters worse, they are often not applicable to the complex geometries associated with real problems. Add on relative fragility and small gains in practical accuracy, and you get the state of affairs we see today.

Restlessness is discontent — and discontent is the first necessity of progress. Show me a thoroughly satisfied man — and I will show you a failure.

― Thomas A. Edison

Meanwhile the theoretical and mathematical communities tie themselves to high formal order of accuracy even when the methods are inefficient. The very communities we should depend on to break this logjam are not motivated to deal with the actual problem. We are left in the lurch, with no progress being made toward improving the workhorse methods in our codes.

To improve is to change; to be perfect is to change often.

― Winston S. Churchill

The cost is an almost uniformly disappointing part of these methods, most of it dedicated to achieving formally high-order results. The irony is that the formal order of accuracy is immaterial to their practical and pragmatic utility. Almost no effort has been devoted to understanding how this cost-accuracy dynamic can be negotiated. Without progress and understanding on these issues, the older methods, which are now the standard, will simply not move forward. Thus we had a great leap forward 25-30 years ago, followed by stasis and stagnation.

Change almost never fails because it’s too early. It almost always fails because it’s too late.

― Seth Godin

Here are some “fun” research papers to read on these topics.

[Harten83] Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 49, no. 3 (1983): 357-393.

[HEOC87] Harten, Ami, Bjorn Engquist, Stanley Osher, and Sukumar R. Chakravarthy. “Uniformly high order accurate essentially non-oscillatory schemes, III.” Journal of Computational Physics 71, no. 2 (1987): 231-303.

[HHL76] Harten, Amiram, James M. Hyman, Peter D. Lax, and Barbara Keyfitz. “On finite-difference approximations and entropy conditions for shocks.” Communications on Pure and Applied Mathematics 29, no. 3 (1976): 297-322.

[Lax73] Lax, Peter D. Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves. Vol. 11. SIAM, 1973.

[LW60] Lax, Peter, and Burton Wendroff. “Systems of conservation laws.” Communications on Pure and Applied Mathematics 13, no. 2 (1960): 217-237.

[Boris71] Boris, Jay P., and David L. Book. “Flux-corrected transport. I. SHASTA, a fluid transport algorithm that works.” Journal of Computational Physics 11, no. 1 (1973): 38-69.

(Boris, Jay P. A Fluid Transport Algorithm that Works. No. NRL-MR-2357. Naval Research Laboratory, Washington, DC, 1971.)

[VanLeer73] van Leer, Bram. “Towards the ultimate conservative difference scheme I. The quest of monotonicity.” In Proceedings of the Third International Conference on Numerical Methods in Fluid Mechanics, pp. 163-168. Springer Berlin Heidelberg, 1973.

[Shu87] Shu, Chi-Wang. “TVB uniformly high-order schemes for conservation laws.” Mathematics of Computation 49, no. 179 (1987): 105-121.

[GLR07] Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

[RM05] Margolin, L. G., and W. J. Rider. “The design and construction of implicit LES models.” International Journal for Numerical Methods in Fluids 47, no. 10-11 (2005): 1173-1179.

[MR02] Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

Modeling Issues for Exascale Computation

03 Friday Jul 2015

Posted by Bill Rider in Uncategorized


I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham Maslow

Here is a collection of my thoughts on issues with modeling and simulation for the future, with an emphasis on modeling. Without modeling improvements, the promise of simulation cannot be achieved. Our current approach to high performance computing, focused on faster computers without balance, is intellectually bankrupt. Without changes in our fundamental philosophy on modeling in computational simulation, the investments in hardware will yield little benefit for society.

A common point of view these days regards the existing code base as a massive investment to be preserved and ported to the new generation of computers. What is often not articulated is the antiquated nature of the modeling available in these codes. The approach used to model materials has been in use for over 50 years and has a lot in common with the “textbook” approach to material description found in undergraduate engineering courses. Those courses define the basic approach to the analysis of systems, which are by their very nature macroscopic. Are these descriptions appropriate for highly detailed computer models? And at what point must the description change to reflect the physics of the scales being explored?

Computers allow a far more detailed description of these systems, discretizing the macroscopic system into pieces that should resolve ever more of the microstructure and its response. The problem is that the description of the materials is still almost isomorphic to the philosophy expressed in the undergraduate textbook. The material description is invariant all the way from the macroscopic view down to scales that are clearly uncovering microscopic details. This highlights several clear scientific problems that extending the current code base to exascale computation will only exacerbate.

A key to moving forward is the recognition that the class of problems that can be simulated has grown. The older homogeneous, average-response modeling is still useful and valid, but only for a restricted class of problems. New capabilities and models will enrich simulation’s impact by providing avenues to new classes of problem solving. The new class of problems is defined by creating simulations that more faithfully play the role of experiments and device testing. The simulations should be able to selectively probe the cases where off-normal response of devices arises. This will allow analysis to assist in determining the limits of operation and safety for engineered systems.

  1. At a macroscopic level, systems are not deterministic, yet the models we rely upon are, and they are exercised in an overly deterministic manner.
  2. The material descriptions are invariant to the scales at which they are applied.
  3. The questions answered by the codes no longer match the questions asked about these systems.
  4. A scientifically vibrant field would not tolerate the level of inflexibility implied by current modeling practice. Vibrant science would demand that the models evolve to better match reality.

Addressing this set of issues is going to require deep scientific investigation, and perhaps an even deeper cultural evolution. We have a wealth of approaches for investigating and solving multiscale, multiphysics problems that bridge detailed microstructural simulation to macroscopic scales. The problem is that these approaches are not being used to solve the applied problems we currently tackle with our code base. None of these methods are being pushed to displace the ancient techniques we rely on today. As a result the state of practice is stuck in quicksand and remains static.

The importance of modeling as a driver of simulation capability should be obvious, as should its role as the essence of utility for the entire enterprise. This importance is not as obvious when looking at the balance of efforts in simulation science. No amount of accuracy, computer power, or software quality can rescue a model that is inadequate or wrong; only a focus on improving the model itself can. Today, improving models is far down the list of priorities for simulation despite its primal role in the quality of the enterprise. The nearby issues of solution methods and algorithms for models are also poorly funded. Most of the emphasis is tilted toward high performance computing and is implicitly predicated on the models themselves being correct.

Even if the models were judged to be correct, advances in experimental science should be providing pressure to improve them. Improvements in detection, information, and analysis all yield ever better experimental measurements and access to uniquely innovative experimental investigations. These should provide a constant impetus to advance models beyond their current state. This tension is essential to the conduct of high-quality science: when science is healthy there is a push and pull between theory and experiment, where a theoretical advance drives experiments, or new experimental observations drive theory to explain them. Without modeling being allowed to advance in response to experimental evidence, the fundamental engine of science is broken.

Furthermore, the culture of analysis in engineering and science reinforces these approaches. First and foremost is the commitment to deterministic outcomes in simulation. Experimental science makes it very clear that our everyday macroscopic world has stochastic elements. There is a deterministic aspect to events, but the non-deterministic aspects are equally essential. By and large, our analysis of experiments and simulations works steadfastly to remove the stochastic, usually through averaging (or regression fits to data). These average properties or events then become the working model of our systems. In the past this approach allowed great progress, but more and more our engineered systems are properly defined by the extremes of behavior they can exhibit.

Our entire modeling approach, especially that used in simulation, is completely ill suited to address these extreme behaviors. A fundamental change in modeling and simulation philosophy is necessary to advance our understanding. Our models do not produce physically realizable simulations, because no system actually behaves like an average system everywhere. Instead, the average behavior results from variations in behavior throughout the system. Sometimes these variations produce exactly the effects associated with the newer questions being asked about extreme behavior.

The new methods do not displace the need for the old methods; indeed, the new methods should appropriately limit to the solutions found by the old methods. The new methods allow the resolution of scale-dependent behavior and off-average behavior of the system, but they need to be self-consistent with traditional methods of simulation. Perhaps just as importantly, those conducting simulations should be deeply aware of when the old methods lose validity, both in terms of scale-dependent behavior and in terms of the questions being addressed through the simulation.

This brings the idea of questions to the forefront of the discussion. What questions are being addressed via simulation? There is a set of questions that older simulation methods are distinctly capable of answering. These are not the same questions driving the need for simulation capability today. In providing new models for simulation, the proper questions are of primal importance.

The current simulation capability is tied to answering old questions, which remain valid today but are becoming less important as new topics crowd them out. Examples of the older questions are “What is the performance of this system under average conditions?” “What is the yield of this production process?” “How large is the average margin of performance beyond the requirements for the system?” The key aspect of these questions is that the modeling attacks the average properties and performance of engineered systems. By the same token, the uncertainty we can assess via simulation today is the lack of knowledge about the average behavior of these systems, which is not the same as the uncertainty in the behavior of the actual system.

This mindset influences the experimental comparisons done for the purposes of validation as well. Experimental data is often processed into an average and then compared to the simulation. No single experiment is actually simulated; rather, the simulation models the average of the experiments. As such, the simulations are not truly modeling reality, because for many physical systems the average response is never produced in any single experiment. As discussed below, this mindset infects the interpretation of experiments in a deeply pernicious manner.

The new questions being asked of simulations are subtly different, and require different models, methods, algorithms and codes to answer. Key among these questions is “how much variation in the behavior of the system can be expected?” “How often will the system manifest certain extreme behavior?” “How will the entire population of the system behave under certain conditions?” “What is the worst behavior to expect from the system and how likely is it to happen?”

Ideally, a calculation should correspond to an observation from a physical experiment (validation), not to the average of all experiments. In this sense our simulations do not model any reality today, because they are almost invariably too homogeneous and deterministic in character. Experiments, on the other hand, are heterogeneous and variable, yielding some degree of stochastic response. Systems truly have both characters: a variable, stochastic component that usually acts around a major homogeneous, deterministic aspect of the system. Today our models are predominantly focused on the homogeneous, deterministic aspect; this is the focus of traditional models and the older questions. The new questions are clearly focused on the secondary stochastic aspects that we average away today. The result is a strong tendency to treat single experiments inappropriately as instances of average response when they are simply single draws from a population of possible experiments. When a deterministic calculation is forced to compare too closely to the non-deterministic aspects of an experiment, problems ensue.
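
A toy numerical illustration of this point, with a completely made-up nonlinear “device response” and input scatter of my own invention: simulating the averaged input predicts no damage at all, while the ensemble of realizable inputs has both a nonzero mean response and a meaningful fraction of extreme outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

def device_response(load):
    """Hypothetical, strongly nonlinear response: benign below a threshold,
    rapidly worsening above it (a stand-in for an engineered system near failure)."""
    return np.where(load > 1.0, (load - 1.0) ** 2, 0.0)

# each 'experiment' sees a different realization of the load
loads = rng.normal(loc=0.9, scale=0.15, size=100_000)

deterministic = device_response(np.mean(loads))       # simulate the averaged input
ensemble_mean = np.mean(device_response(loads))       # average over realizations
tail_fraction = np.mean(device_response(loads) > 0.01)

print(f"response at the mean load:       {deterministic:.5f}")
print(f"mean response over realizations: {ensemble_mean:.5f}")
print(f"fraction of 'extreme' outcomes:  {tail_fraction:.3%}")
```

The deterministic run of the averaged input reports exactly zero, which is precisely the kind of answer that cannot address the newer questions about variability and extremes.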

Of course this decomposition is only approximate. For nonlinear systems, the separation between stochastic and deterministic depends on the circumstances and on the nature of the system itself. Some instances of the system will yield a different decomposition because of the coupling of the system’s response to variability. Examples of the newer questions to be addressed by simulation abound in areas such as device engineering, stockpile stewardship, and weather/climate modeling. For example, a key aspect of an engineered device is the portion of the population of devices that can be expected to fail under the extreme conditions associated with normal use; this may have significant reliability consequences and economic side effects. Similar questions are key in stockpile stewardship, in part to address shortcomings in field testing as populations of devices diminish and reduce the effectiveness of statistical methods. Extreme weather events such as rain, wind, or snowfall have extreme consequences for mortality and for the economy. The degree to which climate change increases such occurrences has significant policy consequences, and simulations are being relied upon to an ever-greater degree to estimate this.

In many cases the modeling in our workhorse engineering analysis codes is quite recognizable from our undergraduate engineering textbooks. Rather than forming a distinct field of study as the simulations unveil more mesoscopic and ultimately microscopic details, the modeling is still couched in terms of the macroscopic methods used in classical desk calculations. The modeling does not account for the distinct aspects of being applied to a discretized system where smaller scales are available. Many of these models are clearly associated with the average, steady-state behavior of the full macroscopic system. Multiscale modeling is simply short-circuited by the traditional view of modeling embedded in many codes. For example, continuum codes for fluids, solids, heat transfer and mechanics all use uniform, homogenized properties for solving problems. The philosophy is virtually identical to the macroscopic material description that would be familiar to undergraduate engineering students.

This is madness! It was reasonable fifty years ago when these methods first came into use and the number of computational elements was small and the elements were large. Today these methods are quite mature, the number of elements is huge, and their size is clearly separated from the large scale. That scale separation dictates that the homogenized models should give way to models that properly describe the material at the scale of the simulation. A homogenized material can only describe the homogenized outcome, the average solution for the material. Furthermore, this homogeneous model will not match any actual circumstance found in reality.
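A deliberately simple illustration with made-up property values: for steady conduction through layers in series, the flux through any particular heterogeneous sample is governed by the harmonic mean of the layer conductivities, so a homogenized model built from the naive arithmetic average both misses the typical sample and hides the sample-to-sample spread entirely.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical layered slab: n layers in series, equal thickness, random conductivities.
n_layers, dT, thickness = 20, 100.0, 1.0              # made-up numbers
k = rng.uniform(0.5, 5.0, size=(1000, n_layers))      # 1000 random samples of the material

# Exact series result per sample: the effective conductivity is the harmonic mean.
k_eff = n_layers / (1.0 / k).sum(axis=1)
flux  = k_eff * dT / thickness

# A homogenized model built from the arithmetic-mean property (the textbook shortcut).
k_homog = k.mean()
flux_homog = k_homog * dT / thickness

print(f"homogenized (arithmetic-mean) flux: {flux_homog:8.1f}")
print(f"mean flux over actual samples:      {flux.mean():8.1f}")
print(f"spread of flux across samples:      {flux.std():8.1f}")

The homogenized number overshoots, and the spread across samples, which the homogenized model cannot even represent, is far from negligible.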

One of the key aspects of real experiments is the ever-present random component of results. The initial and boundary conditions all have a random uncontrolled variability that yields the variation in results. In homogenized simulations, this aspect of reality is washed out and for this reason the simulation is unrealizable in the real World. At times the random component is significant enough that the result of the experiment will radically depart from the average response. In these cases, however small in probability, the simulations fall completely short of serving to replace experiments and testing. This aspect of simulation is woefully lacking from current plans despite in centrality to the role of the simulation in providing a transformative scientific tool.
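A small sketch of that departure, using an assumed bistable toy equation and an arbitrary noise level (nothing here is drawn from any particular application): perturbing the initial condition sends a noticeable fraction of the realizations to a completely different final state, something a single deterministic run at the nominal condition can never reveal.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bistable system dx/dt = x - x**3, integrated with forward Euler.
def integrate(x0, dt=0.01, n_steps=2000):
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + dt * (x - x**3)
    return x

x0_nominal = 0.05                                           # nominal initial condition
x0_samples = x0_nominal + 0.1 * rng.standard_normal(2000)   # uncontrolled variability

final_nominal = integrate(x0_nominal)    # the single deterministic run
final_samples = integrate(x0_samples)    # the ensemble of "experiments"

print(f"deterministic run settles at:        {float(final_nominal):+.2f}")
print(f"fraction of runs at the other state: {(final_samples < 0).mean():.2f}")
print(f"ensemble average (matches neither):  {final_samples.mean():+.2f}")

Here the deterministic run lands on one equilibrium, roughly a third of the perturbed runs land on the other, and the ensemble average matches neither.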

Another place where current simulation approaches fall demonstrably short of modeling reality is the use of ideal models. These models are often mathematically beautiful, evoking Hamiltonian structure and deep provable properties that breed devotion among the mathematically inclined. All of this simply distracts from the lack of physical reality bound up in the idealization. These models lack dissipative forces, which express the second law of thermodynamics, a necessary element of any continuum description of reality. By focusing too greatly on the beauty and majesty of the ideal model, the primary goal of modeling reality is ultimately sacrificed. This is simply too great a price to pay for beauty. More perniciously, the approach produces models with seemingly wonderful properties and rigor that seduce the unwary into modeling the World in utterly unphysical ways. In many cases the modeling is constructed as the solution to the ideal model plus an explicit model for the non-ideal effects. It should be a focus of modeling to assess whether the intrinsically unphysical aspects of the ideal model are polluting the objective of modeling reality.
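To make the dissipation point concrete with a generic toy (not any specific model from this discussion): an ideal, Hamiltonian oscillator holds its energy forever, while adding even a small dissipative term drives the energy down monotonically, which is the behavior the second law demands of real continua.

def oscillator_energy(damping, dt=1e-3, n_steps=20_000):
    # Semi-implicit (symplectic) Euler for x'' + damping*x' + x = 0, released from x = 1 at rest.
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        v += dt * (-x - damping * v)
        x += dt * v
    return 0.5 * (v**2 + x**2)  # kinetic plus potential energy after 20 time units

print(f"ideal (Hamiltonian) model: energy = {oscillator_energy(0.0):.3f}")  # stays near 0.5
print(f"dissipative model:         energy = {oscillator_energy(0.1):.3f}")  # decays toward 0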

In computing there is a chain of activities that provide value to the World. Modeling is the closest link to reality. No amount of computing speed, algorithmic efficiency or methodological accuracy can rescue a model that is inadequate. Once a model is defined, it needs to be solved on the computer via a method. The method can be streamlined and made more efficient via algorithmic advances. Finally, all of these need software for their implementation as well as a mapping to the computing hardware. At the end of the chain the computing hardware depends on everything above it for its capacity to impact our reality. Again, modeling is the absolute key to any value at all in simulation.

Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.

― Isaac Asimov

 
