The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: November 2017

11 Things in Computational Science that Sound Awesome, but are Actually Terrible

24 Friday Nov 2017

Posted by Bill Rider in Uncategorized

≈ 2 Comments

From the bad things and bad people, you learn the right way and right direction towards the successful life.

― Ehsan Sehgal

Computational science is an extremely powerful set of disciplines for conducting scientific investigations. Its end results are usually grounded in the physical sciences and engineering, but they depend on a chain of expertise spanning much of modern science. Doing computational science well depends on all of these disparate disciplines working in concert. A big area of focus these days is the supercomputers being used. The predicate for acquiring these immensely expensive machines is the improvement in scientific and engineering products arising from their use. While this should be true, getting across the finish line requires a huge chain of activities to be done correctly.

Let’s take a look at all the things we need to do right. Computer engineering and computer science are closest to the machines needed for computational science. These disciplines make exotic computers accessible and useful for domain science and engineering. A big piece of this work is computer programming and software engineering. The computer program is a way of expressing mathematics in a form the computer can operate on. Efficient and correct computer programs are a difficult endeavor all by themselves. Mathematics is the language of physics and engineering and essential to the conduct of computing. It is the middle layer of work between the computer and its practical utility. It is a deeply troubling and ironic trend that applied mathematics is disappearing from computational science. As the bridge between the computer and its practical use, it forms the basis for conducting and believing the computed results. Instead of being an area of increased focus, applied math is disappearing into either the maw of computer programming or domain science and engineering. It is being lost as a separate contributor. Finally, we have the end result in science and engineering. Quite often we lose sight of the fact that computers and computing are mere tools that must follow specific rules to yield quality, reliable results. Too often the computer is treated like a magic wand.

Another common thread to this horribleness is the increasing tendency for science and engineering to be marketed. The press release has given way to the tweet, but the sentiment is the same. Science is marketed for the masses, who have no taste for the details necessary for high quality work. A deep problem is that this lack of focus and detail is creeping back into science itself. Aspects of scientific and engineering work that used to be utterly essential are becoming increasingly optional. Much of this essential intellectual labor is associated with the hidden aspects of an investigation: the mathematics, checking for correctness, assessment of error, preceding work, doubts about results, and alternative means of investigation. This sort of deep work has been crowded out by flashy graphics, movies and undisciplined demonstrations of vast computing power.

Some of the terrible things we discuss here are simply bad science and engineering. These terrible things would be awful with or without a computer being involved. Others come from a lack of understanding of how to add computing to an investigation in a quality-focused manner. The failure to recognize the multidisciplinary nature of computational science is often at the root of many of the awful things I will now describe.

Fake is the new real, You gotta keep a lot a shit to yourself.

― Genereux Philip

Without further ado, here are some terrible things to look out for. Every single item on the list will be accompanied by a link to a full blog post expanding on the topic.

  1. If you follow high performance computing online (institutional sites, Facebook, Twitter), you might believe that the biggest calculations on the fastest computers are the very best science. You are sold the idea that these massive calculations have the greatest impact on the bottom line. This is absolutely not the case. These calculations are usually one-off demonstrations with little or no technical value. Almost everything of enduring value happens on the computers used by the rank and file to do the daily work of science and engineering. These press-release calculations are simply marketing. They almost never have the pedigree or the hard-nosed quality work necessary for good science and engineering. – https://williamjrider.wordpress.com/2016/11/17/a-single-massive-calculation-isnt-science-its-a-tech-demo/, https://williamjrider.wordpress.com/2017/02/10/it-is-high-time-to-envision-a-better-hpc-future/
  2. The second thing you come across is the notion that a calculation on a larger, finer mesh is better than one on a coarser mesh. In a naïve, pedestrian analysis this would seem utterly axiomatic. The truth is that computational modeling is an assembly of many things all working in concert, and this is another example of proof by brute force. In the best circumstances the claim would hold, but most modeling hardly takes place under the best conditions. The proposition is that the fine mesh allows one to include all sorts of geometric detail, so the computational world looks more like reality. This is simply assumed a priori. What isn’t usually discussed is where the real challenge in the modeling lies. Is geometric detail actually driving the uncertainty? What is the biggest challenge, and is the modeling focused there? (A minimal sketch of mesh-convergence error estimation follows this list.) – https://williamjrider.wordpress.com/2017/07/21/the-foundations-of-verification-solution-verification/, https://williamjrider.wordpress.com/2017/03/03/you-want-quality-you-cant-handle-the-quality/, https://williamjrider.wordpress.com/2014/04/04/unrecognized-bias-can-govern-modeling-simulation-quality/
  3. In concert with these two horrible trends, you often see results presented as a single massive calculation that magically unveils the mysteries of the universe. This is computing as a magic wand, and it has very little to do with science or engineering. This simply does not happen. Real science and engineering take hundreds or thousands of calculations. There is an immense amount of background work needed to create high quality results. A great deal of modeling is associated with bounding uncertainty or bounding the knowledge we possess. A single calculation is incapable of this sort of rigor and focus. If you see a single massive calculation offered as the sole evidence of work, you should smell and call “bullshit”. – https://williamjrider.wordpress.com/2016/11/17/a-single-massive-calculation-isnt-science-its-a-tech-demo/
  4. One of the key elements in modern computing is the complete avoidance of discussing how the equations in the code are being solved. The notion is that this detail has no importance. On the one hand, this is evidence of progress; our methods for solving equations are pretty damn good. But the methods and the code itself are still immensely important details, and they constitute part of the effective model. There seems to be a mentality that the methods and codes are so good that this sort of thing can be ignored: all one needs is a sufficiently fine mesh, and the results are pristine. This is almost always false. What this almost willful ignorance shows is a lack of sophistication. The methods are immensely important to the results, and we are a very long way from being able to ignore this detail in the way that is now rampant. The powers that be want you to believe that the method stops mattering because the computers are so fast. Don’t fall for it. – https://williamjrider.wordpress.com/2017/05/19/we-need-better-theory-and-understanding-of-numerical-errors/, https://williamjrider.wordpress.com/2017/05/12/numerical-approximation-is-subtle-and-we-dont-do-subtle/
  5. The George Box maxim that models are wrong, but useful, is essential to keep in mind. This maxim is almost uniformly ignored by the high-performance computing bullshit machine. The politically correct view is that the super-fast computers will solve the models so accurately that we can stop doing experiments. The truth is that eventually, if we are doing everything correctly, the models will be solved with great accuracy and their incorrectness will be made evident. I strongly suspect that we are already there in many cases; the models are being solved too accurately, and the real answer to our challenges is building new models. Model building as an enterprise is being systematically disregarded in favor of chasing faster computers. We need far greater balance and a focus on building better models worthy of the computers they are being solved on. We need to build the models needed for better science and engineering befitting the work we need to do. – https://williamjrider.wordpress.com/2017/09/01/if-you-dont-know-uncertainty-bounding-is-the-first-step-to-estimating-it/
  6. Calculational error bars are an endangered species. We never see them in practice even though we know how to compute them. They should simply be a routine element of modern computing. They are almost never demanded by anyone, and their absence never precludes publication. It certainly never precludes a calculation being promoted as marketing for computing. If I were cynically minded, I might even say that error bars, when used, work against marketing the calculation. The implicit message in the computing marketing is that the calculations are so accurate that they are basically exact, with no error at all. If you don’t see error bars or some explicit discussion of uncertainty, you should see the calculation as flawed, and potentially as simply bullshit. (See the sketch following this list for one way such error bars can be computed.) – https://williamjrider.wordpress.com/2017/07/07/good-validation-practices-are-our-greatest-opportunity-to-advance-modeling-and-simulation/, https://williamjrider.wordpress.com/2017/09/22/testing-the-limits-of-our-knowledge/, https://williamjrider.wordpress.com/2017/04/06/validation-is-much-more-than-uncertainty-quantification/
  7. One way to make a calculation seem really super valuable is to declare that it is a direct numerical simulation (DNS). Sometimes this is an utterly valid designator. The other term that follows DNS is “first principles”. Each of these terms seeks to endow the calculation with a legitimacy it may, or may not, deserve. One of the biggest problems with DNS is the general lack of evidence for quality and legitimacy. A broad spectrum of the technical world seems to be OK with treating DNS as equivalent to (or even better than) experiments. This is tremendously dangerous to the scientific process. DNS and first principles calculations are still based on solving a model, and models are always wrong. This doesn’t mean that DNS isn’t useful, but its utility needs to be proven and bounded by uncertainty. – https://williamjrider.wordpress.com/2017/11/02/how-to-properly-use-direct-numerical-simulations-dns/
  8. Most press releases are rather naked in their implicit assertion that the bigger computer gives a better answer. This is treated as completely axiomatic, and as such no evidence is provided to underpin the assertion. Usually some colorful graphics or beautifully rendered movies accompany the calculation. Their coolness is all the proof we need. This is not science or engineering, even though this mode of delivery dominates the narrative today. – https://williamjrider.wordpress.com/2017/01/20/breaking-bad-priorities-intentions-and-responsibility-in-high-performance-computing/, https://williamjrider.wordpress.com/2014/09/19/what-would-we-actually-do-with-an-exascale-computer/, https://williamjrider.wordpress.com/2014/10/03/colorful-fluid-dynamics/
  9. Modeling is the use of mathematics to connect reality to theory and understanding. Mathematics is translated into methods and algorithms implemented in computer code. It is ironic that the mathematics forming the bridge between the physical world and the computer is increasingly ignored by science. Applied mathematics has been a tremendous partner for physics, engineering and computing throughout the history of computational science. This partnership has waned in priority over the last thirty years. Less and less applied math is called upon; it is being replaced by computer programming or domain science and engineering. Our programs seem to think that the applied math part of the problem is basically done. Nothing could be further from the truth. – https://williamjrider.wordpress.com/2014/10/16/what-is-the-point-of-applied-math/, https://williamjrider.wordpress.com/2016/09/27/the-success-of-computing-depends-on-more-than-computers/
  10. A frequent way of describing a computation is to let the mesh define the solution. Little else is given about the calculation, such as the equations being solved or how they are being approximated. Frequently, the fact that the solutions are only approximate is left out, because that fact is damaging to the accuracy narrative of massive computing. The designed message is that the massive computer is so powerful that the solution to the equations is effectively exact, and the equations themselves describe reality without error. All of this is in service of saying that computing can replace experiments or real-world observations. The entire narrative is anathema to science and engineering, doing each a great disservice. – https://williamjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/
  11. Computational science is often described in terms that are not consistent with the rest of science. We act like it is somehow different in a fundamental way. Computers are just tools for doing science, allowing us to solve models of reality far more generally than analytical methods can. With all of this power comes a lot of tedious detail needed to do things with quality. This quality comes from the skillful execution of the entire chain of activities described at the beginning of this post, and these details all need to be done right to get good results. One of the biggest problems in the current computing narrative is ignorance of the huge set of activities bridging a model of reality and the computer itself. The narrative wants to ignore all of this because it diminishes the sense that these computers are magical in their ability. The power isn’t magic, it is hard work; success is not a foregone conclusion, and everyone should ask for evidence, not take their word for it. – https://williamjrider.wordpress.com/2016/12/22/verification-and-validation-with-uncertainty-quantification-is-the-scientific-method/
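
To make items 2 and 6 concrete, here is a minimal sketch of how a calculational error bar can be produced from an ordinary mesh-refinement study, using Richardson extrapolation and a grid-convergence-index style uncertainty. The function names and the numbers are illustrative assumptions of mine, not anything taken from the linked posts; treat it as a sketch of the idea rather than a definitive recipe.

```python
import numpy as np

def observed_order(f_h, f_h2, f_h4, r=2.0):
    """Observed convergence rate p from a quantity of interest computed on
    three meshes with spacings h, h/2, h/4 (constant refinement ratio r)."""
    return np.log(abs(f_h - f_h2) / abs(f_h2 - f_h4)) / np.log(r)

def richardson_extrapolate(f_h2, f_h4, r, p):
    """Estimate of the mesh-converged value from the two finest meshes."""
    return f_h4 + (f_h4 - f_h2) / (r**p - 1.0)

def numerical_error_bar(f_h2, f_h4, r, p, Fs=1.25):
    """Grid-convergence-index style numerical uncertainty (absolute form)
    for the fine-mesh result, with safety factor Fs."""
    return Fs * abs(f_h4 - f_h2) / (r**p - 1.0)

# Illustrative numbers only: some integral quantity from three calculations.
f_h, f_h2, f_h4 = 1.120, 1.046, 1.012
p = observed_order(f_h, f_h2, f_h4)
f_star = richardson_extrapolate(f_h2, f_h4, 2.0, p)
u_num = numerical_error_bar(f_h2, f_h4, 2.0, p)
print(f"observed order ~ {p:.2f}")
print(f"fine-mesh result {f_h4:.3f} +/- {u_num:.3f} (extrapolated {f_star:.3f})")
```

An error bar of this sort, attached to every reported quantity of interest, is exactly the kind of routine evidence the press-release calculations almost never provide.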

Taking the marketing narrative at its word is injurious to high quality science and engineering. The narrative seeks to defend the idea that buying these super expensive computers is worthwhile and magically produces great science and engineering, and that the path to advancing the impact of computational science flows dominantly through computing hardware. This is simply a deeply flawed and utterly naïve perspective. Great science and engineering is hard work and never a foregone conclusion. Getting high quality results depends on spanning the full range of disciplines associated with computational science, adaptively, as evidence and results demand. We should always ask hard questions of scientific work, and demand hard evidence for claims. Press releases and tweets are renowned for being cynical advertisements lacking all rigor and substance.

One reason for elaborating upon things that are superficially great, but really terrible, is cautionary. The current approach allows shitty work to be viewed as successful because it receives lots of attention. The bad habit of selling horrible low-quality work as success destroys progress and undermines accomplishing truly high-quality work. We all need to be able to recognize these horrors and strenuously reject them. If we start to effectively police ourselves, perhaps this plague can be driven back and progress can flourish.

The thing about chameleoning your way through life is that it gets to where nothing is real.

― John Green

 

 

The Piecewise Parabolic Method (PPM)

17 Friday Nov 2017

Posted by Bill Rider in Uncategorized

≈ Leave a comment

A method which can solve this problem well should be able to handle just about anything which can arise in one-dimensional pure hydrodynamic flow. PPM is such a scheme.

– P.R. Woodward

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.

This is one of the most important methods in the early history of the revolutionary developments for solving hyperbolic PDEs in the 1980s. For a long time it was one of the best methods available to solve the Euler equations, and it still outperforms most of the methods in common use today. For astrophysics, it is the method of choice, and it also made major inroads into the weather and climate modeling communities. In spite of having over 4000 citations, I can’t help but think that this paper wasn’t as influential as it could have been. This is saying a lot, but I think it is completely true. This is partly due to its style and its relative difficulty as a read. In other words, the paper is not as pedagogically effective as it could have been. The most complex and difficult to understand version of the method is the one presented in the paper. The paper could have introduced the reader to a simplified version first and delivered the more complex approach as a specific instance, to great effect. Nonetheless, the paper was a massive milestone in the field.

It was certainly clear that high-order schemes were not necessarily bringing greater accuracy so physics would have to step in to shore up the failing numerics.

– Jay Boris

Part of the problem with the paper is the concise and compact introduction of the two methods used in the accompanying review article, PPMLR and PPMDE. The LR stands for Lagrange-Remap, where the solution is advanced on a Lagrangian grid and then remapped back to the original grid, yielding an Eulerian solution. Both the Lagrangian and Eulerian grids are unevenly spaced, and this results in far more elaborate formulas. As a result, it is hard to recognize the simpler core method lurking inside the pages of the paper. The DE stands for direct Eulerian, which can be very simple for the basic discretization. Unfortunately, the complication for the DE flavor of PPM comes with the Riemann solver, which is far more complex in the Eulerian frame. The Lagrangian-frame Riemann solver is very simple and easy to evaluate numerically. Not so for the Eulerian version, which has many special cases and requires some exceedingly complex evaluations of the analytical structure of the Riemann solution. Advances that occurred later greatly simplified and clarified this presentation. This is a specific difficulty of being an early adopter of methods: the clarity of presentation and understanding is dimmed by purely narrative effects. Many of these shortcomings have been addressed in the recent literature discussed below.

The development of the PPM gas dynamics scheme grew out of earlier work in the mid 1970s with Bram van Leer on the MUSCL scheme. The work of Godunov inspired essential aspects of MUSCL.

– Paul R. Woodward

The paper had a host of interesting and important sub-techniques for solving hyperbolic PDEs. Many of these “bells” and “whistles” are not part of the repertoire for most methods today. The field actually suffers to some extent from not adopting most of these strategies for attacking difficult problems. It is useful to list the special approaches along with a description and context that might make them easier to adopt more broadly (https://williamjrider.wordpress.com/2016/06/14/an-essential-foundation-for-progress/, https://williamjrider.wordpress.com/2017/06/30/tricks-of-the-trade-making-a-method-robust/, https://williamjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/). The paper is written in such a way that these algorithms seem specifically tailored to PPM, but they are far broader in utility. Generalizing their use would serve the quality of numerical solutions immensely. To a large extent, Phil Colella extended many of these techniques to the piecewise linear methods that form the standard approach in production codes today.

  • Shock flattening – Shocks are known to be horrifically nonlinear and difficult, both forgiving and brutal. This technique acknowledges this issue by blending a bit of the safe first-order method with the nonlinearly adaptive high-order method when a strong shock is encountered. The principle is to use a bit more first-order dissipation where the shock is strong, because otherwise oscillations can escape; for weak shocks this is unnecessary. Rather than penalize the solution everywhere, the method is made locally more dissipative where the danger is greatest.
  • Contact steepening – Contact discontinuities will smear out without limit if dissipation is applied to them. In other words, errors made in their solution are with you forever. To keep this from happening, the amount of dissipation applied at these waves is minimized. This sort of technique must be applied with great caution because at a shock wave it is exceedingly dangerous. Additionally, the method used to limit the dissipation can produce a very good interface tracking method that is far simpler than elaborate methodologies built on interface geometry. It is a useful, pragmatic way to move interfaces with little dissipation and relative simplicity. This basic approach is the actual interface tracking method in many production codes today, although few use methods as elaborate or as high quality as the one in the original PPM.
  • Extra dissipation – Monotonicity preservation and Riemann solvers are two elaborate ways of producing dissipation while achieving high quality. For very nonlinear problems this is not enough. The paper describes several ways of adding a little bit more; one of these is the shock flattening, and another is an artificial viscosity. Rather than use the classical von Neumann-Richtmyer approach (which really acts more like the Riemann solver), they add a small amount of viscosity using a technique developed by Lapidus that is appropriate for conservation-form solvers. There are other techniques, such as grid jiggling, that only really work with PPMLR and may not have any broader utility. Nonetheless, aspects of the thought process may still be useful.
  • High-order edges – One of PPM’s greatest virtues is the use of formally higher-order principles in the method. Classic PPM uses fourth-order approximations for its edge values. As a result, as the Courant number goes to zero, the method becomes formally fourth-order accurate. This is a really powerful aspect of the method. It is also one of the clear points where the method can be generalized: we can use whatever high-order edge value we like in PPM. One of the maxims to take from this approach is the power of including very high-order discretizations even within otherwise lower-order approximation methods. The impact of the high order is profoundly positive. (A uniform-grid sketch of the classic fourth-order edge formula follows this list.)
  • Steepened edge values – For horribly nonlinear problems, the simple use of high-order differencing is not advisable. The high-order approximation can be decomposed into several pieces, and the approximation can be built more carefully and appropriately for complex problems. In this way, the high-order edge values are somewhat hierarchical. This is partly elaboration, but it also reflects a commitment to quality that is eminently laudable.
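
To ground the high-order edges item above, here is a minimal sketch of the unlimited fourth-order edge values in the uniform-grid, periodic special case (the nonuniform-grid formulas in the 1984 paper reduce to this). The function name and the periodic indexing are my own assumptions for illustration, not code from the paper.

```python
import numpy as np

def ppm_edge_values_4th(a):
    """Unlimited fourth-order edge values on a uniform, periodic 1-D grid.
    edge[j] approximates the point value at the interface between cells j
    and j+1, built from the four surrounding cell averages:
        a_{j+1/2} = 7/12 (a_j + a_{j+1}) - 1/12 (a_{j-1} + a_{j+2})."""
    ajm1 = np.roll(a, 1)    # a_{j-1}
    ajp1 = np.roll(a, -1)   # a_{j+1}
    ajp2 = np.roll(a, -2)   # a_{j+2}
    return (7.0 / 12.0) * (a + ajp1) - (1.0 / 12.0) * (ajm1 + ajp2)

# Illustrative use on a smooth periodic profile treated as cell averages.
x = np.linspace(0.0, 1.0, 64, endpoint=False)
edges = ppm_edge_values_4th(np.sin(2.0 * np.pi * x))
```

These raw edge values are what the limiting described next then modifies near discontinuities.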

Generalized Monotonicity – PPM uses a parabola, and as a result the well-known limiters for linear profiles do not provide monotone results. The limiter for PPM therefore takes two steps instead of the single step needed for a linear profile. I don’t like the original presentation in the paper and recast the limiter into an equivalent algorithm that uses two applications of the median function per edge. The first step makes sure the edge value being used is bounded by the cell averages adjacent to it. The second step asks whether the parabola is monotone in the cell and, if it is not, limits it to one that is monotone by construction (a sketch follows below). (https://williamjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/, https://williamjrider.wordpress.com/2016/06/22/a-path-to-better-limiters/, https://williamjrider.wordpress.com/2015/08/06/a-simple-general-purpose-limiter/, https://williamjrider.wordpress.com/2014/01/11/practical-nonlinear-stability-considerations/, https://williamjrider.wordpress.com/2015/08/07/edge-or-face-values-are-the-path-to-method-variety-and-performance/)
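
Here is a minimal sketch, in that spirit, of the two median applications per edge. The helper names, the periodic indexing, and the exact clipping bound used in the second step are my assumptions for illustration; the 1984 paper writes its monotonization differently, though with the same intent.

```python
import numpy as np

def median3(a, b, c):
    """Componentwise median (middle value) of three arrays."""
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def limit_ppm_edges(abar, a_left, a_right):
    """Two-step, median-based limiting of the PPM edge values of each cell
    on a periodic 1-D grid (a sketch, not the verbatim 1984 algorithm).

    abar    : cell averages
    a_left  : reconstructed point value at the left edge of each cell
    a_right : reconstructed point value at the right edge of each cell
    """
    abar_m = np.roll(abar, 1)    # average in the left neighbor
    abar_p = np.roll(abar, -1)   # average in the right neighbor

    # Step 1: bound each edge value by the two adjacent cell averages.
    a_left = median3(abar_m, a_left, abar)
    a_right = median3(abar, a_right, abar_p)

    # Step 2: require the parabola in the cell to be monotone by clipping
    # each edge into the range where no interior extremum can occur
    # (applied sequentially; a flat parabola results at local extrema).
    a_left = median3(abar, a_left, 3.0 * abar - 2.0 * a_right)
    a_right = median3(abar, a_right, 3.0 * abar - 2.0 * a_left)
    return a_left, a_right
```

Note that after limiting, the left edge of one cell need not equal the right edge of its neighbor; the reconstruction is allowed to be discontinuous at interfaces, which is what a Riemann solver then acts on.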

Before launching into a systematic description of the PPM algorithm, it is worthwhile to first explain the goals and constraints that have influenced its design. These are:

  1. Directional operator splitting.
  2. Robustness for problems involving very strong shocks.
  3. Contact discontinuity steepening.
  4. Fundamental data in the form of cell averages only.
  5. Minimal dissipation.
  6. Numerical errors nevertheless dominated by dissipation, as opposed to dispersion.
  7. Preservation of signals, if possible, even if their shapes are modified, so long as they travel at roughly the right speeds.
  8. Minimal degradation of accuracy as the Courant number decreases toward 0.

– Paul R. Woodward

Over time, PPM has mostly been interpreted monolithically, as opposed to as a set of basic principles. PPM is really a wonderful foundation, with the paper providing only a single instantiation of a panoply of powerful methods. This aspect has come to the fore more recently, but would have served the community better far earlier. Some of these comments are the gift of 20/20 hindsight. A great deal of the pedagogical clarity with regard to Godunov-type methods is the result of its success, and only came into common use in the late 1980s, if not the 1990s. For example, the language to describe Riemann solvers with clarity and refinement hadn’t been developed by 1984. Nevertheless, the monolithic implementation of PPM has been a workhorse method for computational science. Through Paul Woodward’s efforts it is often the first real method to be applied to brand new supercomputers, and it generates the first scientific results of note on them.

The paper served as a companion to the adjacent paper that reviewed the performance of numerical methods for strong shocks. The review was as needed as it was controversial. The field of numerical methods for shock waves was set to explode into importance and creative energy. The authors, Phil Colella and Paul Woodward, would both play key roles in the coming revolution in methods. Woodward had already made a huge difference by spending time in Europe with Bram van Leer. Paul helped Bram implement advanced numerical methods using methodologies Paul had learned at the Livermore Labs. Bram exposed Paul to his revolutionary ideas about numerical methods, chronicled in Bram’s famous series of papers (https://williamjrider.wordpress.com/2014/01/11/designing-new-schemes-based-on-van-leers-ideas/, https://williamjrider.wordpress.com/2014/01/06/van-leers-1977-paper-paper-iv-in-the-quest-for-the-ultimate/, https://williamjrider.wordpress.com/2014/01/05/review-of-the-analysis-of-van-leers-six-schemes/). The ideas therein were immensely influential in changing how hyperbolic equations are solved.

One of the great successes in numerical methods for hyperbolic conservation laws has been the use of nonlinear hybridization techniques, known as limiters, to maintain positivity and monotonicity in the presence of discontinuities and underresolved gradients.

– Michael Sekora and Phil Colella

Bram’s ideas created a genuine successor to Godunov’s method. The methods he created were novel in producing nonlinearly adaptive numerical methods, where the method adapts locally to the nature of the solution. This overcame the limitations of Godunov’s theorem regarding the accuracy of numerical methods for hyperbolic equations. Bram’s ideas were geometric in nature and reflected the approach of a physicist. Paul, being a physicist, gravitated to the same view and added a genuine dose of pragmatism. Bram also wasn’t the first person to overcome Godunov’s theorem; he may have actually been the third (or fourth). The first is most likely Jay Boris, who invented the flux-corrected transport (FCT) method in 1971. In addition, Kolgan in the Soviet Union and Ami Harten might lay claims to overcoming Godunov’s barrier theorem. Some of these different methods played a role in the comparison in the review article by Woodward and Colella. In the light of history, many of the differences in the results were due more to the approaches to systems of equations and related difficulties than to the nonlinearly adaptive principles in the methods.

The strong, fluid-dynamic shock problem had become the number one computational roadblock by the fall of 1970 so I was urged to concentrate on the problem full time, finally developing the FCT convection algorithm in the winter.

– Jay Boris

In totality, the methods developed by these three or four men in the early 1970s set the stage for revolutionary gains in method performance. At the time of the developments, the differences in the methods were fiercely debated and hotly contested. The reviews of the papers were contentious and resulted in bitter feelings. Looking back with the virtues of time and perspective, several things stand out. All the methods represented a quantum leap in performance and behavior over the methods available before. The competition and the ideas so hotly contested probably helped to spur developments, but ultimately became counter-productive as the field matured. It seems clear that the time was ripe for the breakthrough. There was a combination of computers, mathematics and applications that seeded the developments. For the same basic idea to arise independently in a short period of time means the ideas were dangling just out of reach. The foundations for the breakthrough were common and well-known.

Paul Woodward is an astrophysicist, and PPM found its most common and greatest use in his field. For a long time, the nature of PPM’s description meant that the exact version of the method described in the canonical 1984 paper was the version used in other codes. Part of this results from PPM being a highly tuned, high-performance method with a delicate balance between high-resolution methodology and the various safety measures needed for difficult, highly nonlinear problems. In a manner of speaking, it is a recipe that produces really great results. Imagine PPM as something akin to the Toll House chocolate chip cookie recipe. The cookies you get by following the package exactly are really, really good. At the same time, you can modify the recipe to produce something even better while staying true to the basic framework. The basic cookies will get you far, but with some modification you might just win contests or simply impress your friends. PPM is just like that.


At this point I’ve said quite little about the method itself. The core of the method is a parabolic representation of the solution locally within each cell. The method is entirely one-dimensional in nature. The parabola is determined by the integral average in a cell and the point values of the quantity at the edges of the cell. What is not so widely appreciated is the connection of PPM to the fifth scheme in van Leer’s 1977 paper. That method is interesting because it evolves both cell averages, like any finite volume code, and the point values at the cell boundaries. It is compact and supremely accurate compared with other third-order methods. PPM is a way of getting some of the nice properties of this method from a finite volume scheme: rather than evolving the point values on the edges, it recovers them from the finite volumes.
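
For reference, the parabola itself has a compact standard form built from the cell average and the two edge values. Below is a small sketch of that form; the helper names are mine, but the parameterization is the one used in the 1984 paper.

```python
def ppm_parabola_coeffs(abar, a_left, a_right):
    """Coefficients of the PPM parabola in a cell, in the form
        a(xi) = a_left + xi * (da + a6 * (1 - xi)),   xi in [0, 1],
    where da = a_right - a_left and a6 measures the curvature. By
    construction the parabola integrates back to the cell average abar."""
    da = a_right - a_left
    a6 = 6.0 * (abar - 0.5 * (a_left + a_right))
    return da, a6

def ppm_eval(a_left, da, a6, xi):
    """Point value of the parabola at local coordinate xi in [0, 1]."""
    return a_left + xi * (da + a6 * (1.0 - xi))

# Sanity check with illustrative numbers: Simpson's rule is exact for a
# parabola, so the reconstruction must reproduce the cell average.
abar, aL, aR = 1.0, 0.8, 1.1
da, a6 = ppm_parabola_coeffs(abar, aL, aR)
avg = (ppm_eval(aL, da, a6, 0.0) + 4.0 * ppm_eval(aL, da, a6, 0.5)
       + ppm_eval(aL, da, a6, 1.0)) / 6.0
assert abs(avg - abar) < 1e-12
```

Much of the rest of the method, the high-order edge values and the limiting above, is about choosing a_left and a_right wisely before this parabola is built and handed to the Riemann solver.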

Rather than belabor the technical details of PPM, I’ll point to the recent trends that have extended the method beyond its classical form. One of the original authors has used the parabola to represent valid extrema in the solution rather than damping them by forcing monotonicity. I did the same thing in my own work, largely paralleling Phil’s work. In addition, changes to the high-order edge reconstruction have been recognized and implemented to good effect by Phil, Paul, myself and others. The connection to Riemann solvers has also been generalized. All of this reflects the true power of the method when projected onto the vast body of work that arose after the publication of this paper. Even today, PPM remains one of the very best methods in existence, especially with the modifications recently introduced.


I’ve come to know both Phil and Paul personally and professionally. Both men have played significant roles in the numerical solution of hyperbolic PDEs and witnessed history being made; they helped make CFD what it is today. It’s always an interesting experience to read someone’s work and then come to know the person. A big part of a deeper appreciation is finding out the underlying truths of the paper. You start to realize that the written, published record is a poor reflection of the real story. Some of this comes through the hard work of reading and re-reading a paper, then deriving everything in it for yourself. A deeper appreciation came from expressing the same method in my own language and mathematics, and finally from taking each of these expressions into conversations with the authors, who clarified most of the remaining questions. The academic literature is a scrubbed and largely white-washed reflection of reality. What we are allowed to read and see is not the truth, but an agreed upon distortion.

When the numerics fails, substitute the physics.

– Steve Zalesak

the scientists who use such algorithms must have both input to and knowledge of their design. There may come a day when we no longer hold to this view, when the design of such algorithms can be left to expert numerical analysts alone, but that day has not yet arrived.

– Steve Zalesak

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.

Carpenter Jr, Richard L., Kelvin K. Droegemeier, Paul R. Woodward, and Carl E. Hane. “Application of the piecewise parabolic method (PPM) to meteorological modeling.” Monthly Weather Review 118, no. 3 (1990): 586-612.

Woodward, Paul R. “Piecewise-parabolic methods for astrophysical fluid dynamics.” In Astrophysical Radiation Hydrodynamics, pp. 245-326. Springer Netherlands, 1986.

Godunov, S. K. “A finite difference method for the computation of discontinuous solutions of the equations of fluid dynamics.” Sbornik: Mathematics 47, no. 8-9 (1959): 357-393.

Plewa, Tomasz, and Ewald Mueller. “The consistent multi-fluid advection method.” arXiv preprint astro-ph/9807241 (1998).

Van Leer, Bram. “Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method.” Journal of Computational Physics 32, no. 1 (1979): 101-136.

Van Leer, Bram. “Towards the ultimate conservative difference scheme. IV. A new approach to numerical convection.” Journal of Computational Physics 23, no. 3 (1977): 276-299.

Bell, John B., Phillip Colella, and John A. Trangenstein. “Higher order Godunov methods for general systems of hyperbolic conservation laws.” Journal of Computational Physics 82, no. 2 (1989): 362-397.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Rider, William J. “Reconsidering remap methods.” International Journal for Numerical Methods in Fluids 76, no. 9 (2014): 587-610.

Kolgan, V. P. “Application of the principle of minimum values of the derivative to the construction of finite-difference schemes for calculating discontinuous gasdynamics solutions.” TsAGI, Uchenye Zapiski 3, no. 6 (1972): 68-77.

Boris, J. P. “A fluid transport algorithm that works.” In Proceedings of the Seminar Course on Computing as a Language of Physics, 2-20 August 1971, International Centre for Theoretical Physics, Trieste, Italy.

 

 

We are all responsible for this mess; It is everyone’s fault

10 Friday Nov 2017

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Never attribute to malevolence what is merely due to incompetence

― Arthur C. Clarke

A year ago, I sat in one of my managers’ offices seething in anger. After Trump’s election victory, my emotions shifted seamlessly from despair to anger. At that particular moment, it was anger that I felt. How could the United States possibly have elected this awful man President? Was the United States so completely broken that Donald Trump was a remotely plausible candidate, much less victor?

Is ours a government of the people, by the people, for the people, or a kakistocracy rather, for the benefit of knaves at the cost of fools?

― Thomas Love Peacock

Apparently, the answer is yes, the United States is that broken. I said something to the effect that we too are to blame for this horrible moment in history. I knew that both of us had voted for Clinton, but felt that we played our own role in the election of our reigning moron-in-chief. Today, a year into this national nightmare, the nature of our actions leading to this unfolding national and global tragedy is taking shape. We have grown to accept outright incompetence in many things, and now we have a genuinely incompetent manager as President. Lots of incompetence is accepted daily without even blinking; I see it every single day. We have a system that increasingly renders the competent incompetent through brutish compliance with directives born of broad-based societal dysfunction.

In a hierarchy, every employee tends to rise to his level of incompetence.

― Laurence J. Peter

What does the “Peter Principle” say about the United States? The President is incompetent. Not just a little bit; he is utterly and completely unfit for the job he has. He is the living caricature of a leader, not actually one. His whole shtick is loudly and brashly sounding like what a large segment of the population thinks a leader should be. Under his leadership, our government has descended into the theatre of the absurd. He doesn’t remotely understand our system of government, economics, foreign policy, science, or really anything other than marketing himself. He is an utterly self-absorbed anti-intellectual completely lacking empathy and the basic knowledge we should expect him to have. The societal destruction wrought by this buffoon-in-chief is profound. Our most important institutions are being savaged. Divisions in society are being magnified, and we stand on the brink of disaster. The worst thing is that this disaster is virtually everyone’s fault; whether you stand on the right or the left, you are to blame. The United States was in a weakened state, and the Trump virus was poised to infect us. Our immune system was seriously compromised and failed to reject this harmful organism.

I love the poorly educated.

– Donald Trump

Sorry losers and haters, but my I.Q. is one of the highest -and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.

– Donald Trump

Trump is making everything worse. One of the keys to understanding the damage being done to the United States is seeing the poor condition of our democracy prior to the election. A country doesn’t just lurch toward such a catastrophic decision overnight; we were already damaged. In a sense, the body politic was already weakened and ripe for infection. We have gone through a period of more than 20 years of massive dysfunction, led by the dismantling of government as a force for good in society. The Republican party is committed to small government, and part of their approach is to attack it. Government is viewed as an absolute evil. Part of the impact of this is the loss of competence in governing. Any governmental incompetence supports their argument about the need to diminish it. The result has been a steady march toward dysfunction and poor performance, along with deep-seated mistrust, if not outright disdain.

All of this stems from deeper wounds left in our history. The deepest wound is the Civil War and the original national sin of slavery. The perpetuation of institutional racism is one of the clearest forces driving our politics. We failed to heal the wounds of this war, and we continue to wage a war against blacks, first through the scourge of Jim Crow laws, and now with the war on drugs and its mass incarceration. Our massive prison population is driven by our absurd and ineffective efforts to combat drug abuse. We actively avoid taking actions that would be effective in battling drug addiction. While it is a complete failure as a public health effort, it is a massively effective tool of racial oppression. More recent wounds were left by the combination of the Vietnam War and the Civil Rights movement in the 1960s, along with Watergate and Nixon’s corruption. The Reagan revolution and the GOP attacks on the Clintons were their revenge for progress. In a very real way, the country has been simmering in action and reaction for the last 50 years. Trump’s election was the culmination of this legacy and our inability to keep the past as history.

Government exists to protect us from each other. Where government has gone beyond its limits is in deciding to protect us from ourselves.

― Ronald Reagan


Part of the hardest aspect of accepting what is going on comes in understanding how Trump’s opposition led to his victory. The entire body politic is ailing. The Republican party is completely inept at leading, unable to govern. This shouldn’t come as any surprise; the entire philosophy of the right is that government is bad. When your a priori assumption is that government is inherently bad, the nature of your governance is half-hearted. A natural outgrowth of this philosophy is rampant incompetence in governance. Couple this to a natural tendency toward greed as a core value, and you have the seeds of corruption. Corruption and incompetence are an apt description of the Republican party. The second part of this toxic stew is hate and fear. The party has spent decades stoking racial and religious hatred, and using fear of crime and terrorism to build their base. The result is a governing coalition that cannot govern at all. They are utterly incompetent, and no one embodies their incompetence more than the current President.

There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov

The Democrats are no better, other than some basic human capacity for empathy. For example, the Clintons were quite competent, but competence is something we as a nation don’t need any more, or even believe in. Americans chose the incompetent candidate for President over the competent one. At the same time, the Democrats feed into the greedy and corrupt nature of modern governance with a fervor exceeded only by the Republicans. They are what my dad called “limousine liberals” and really cater to the rich and powerful first and foremost while appealing to some elements of compassion (it is still better than the “limousine douchebags” on the right). As a result, the Democratic party ends up being only slightly less corrupt than the Republicans while offering none of the cultural red meat that drives the conservative culture warriors to the polls.

In individuals, insanity is rare; but in groups, parties, nations and epochs, it is the rule.

― Friedrich Nietzsche

The thing that sets the Democratic party back is a complete lack of unity or discipline. They are a fractious union of special interests that can barely tolerate one another. They cannot unify to help each other, and each faction is a single-issue group that can’t be bothered to form an effective coalition. The result is a party that is losing despite holding a majority of the votes. Many Democratic voters can’t be bothered to even vote. This losing coalition has let GOP-driven fear and hate win, along with a systematic attack on our core values as a democratic republic (vast sums of money in politics, voter rights, voter suppression, and gerrymandering). They are countered by a Republican party that is unified and supportive of its factions. The different factions work together to form a winning coalition, in large part by accepting each other’s extreme views as part of their rubric of beliefs.

While both parties cater to the greedy needs of the rich and powerful, the difference in their approaches is most clearly seen in social issues. The Republicans appeal to traditional values along with enough fear and hate to bring the voters out. They stand in the way of scary progress and the future as the guardians of the past. They are the force that defends American values, which means white people and Christian values. With the Republicans, you can be sure that the Nation will treat those we fear and hate with violence and righteous anger, without regard to effectiveness. We will have a criminal justice system that exacts vengeance on the guilty, but does nothing to reform or treat criminals. The same forces provide just enough racially biased policy to make the racists in the Republican ranks happy.

The Democrats stand for a progressive and empathic future represented by many different groups, each with their own specific grievances. One of the biggest problems on both sides is intolerance. This might be expected on the right; after all, white supremacy is hardly a tolerant world view. The left helps the right out by being even less tolerant. The left’s factions cannot tolerate any dissent, on any topic. We hear endless whining about micro-aggressions and cultural appropriation, along with demands for political correctness. They are indeed “snowflakes” who are incapable of debate and of standing up for their beliefs. When they don’t like what someone has to say, they attack them and completely oppose their right to speak. The lack of tolerance on the left is one of the forces that powered Trump to the White House. It did this through the loss of any moral high ground, and the production of a divided and ineffective liberal movement. The left has science, progress, empathy and basic human decency on its side, yet continues to lose. A big part of their losing strategy is the failure to support each other and to engage in an active dialog on the issues they care so much about.

A dying culture invariably exhibits personal rudeness. Bad manners. Lack of consideration for others in minor matters. A loss of politeness, of gentle manners, is more significant than is a riot.

― Robert A. Heinlein

The biggest element in Trump’s ascension to the Presidency is our acceptance of incompetence in our leaders. We accept incompetence too easily; incompetence is promoted across society. We have lost the ability to value and reward expertise and competence. Part of this can be blamed on the current culture where marketing is more important than substance. Trump is pure marketing. His entire brand is himself, sold to people who have lost the ability to smell the con. A big part of the appeal of Trump was the incompetence of governing that permeates the Republican view.

This is where the incompetence and blame come together. Success at work depends little on technical success because technical success can be faked. What has become essential at work is compliance with rules and control of our actions. Work is not managed; our compliance with rules is managed. Increasingly, the incompetence of the government is breeding incompetence at my work. The government agency that primarily runs my Lab is a complete disaster. We have no leadership in either management or science. Both are casualties of the destructive tendency of the Republican party that makes governing impossible. They are a party of destruction, not creation. When Republicans are put in power they can’t do anything; their entire being is devoted to taking things apart. The Democrats are no better because of their devotion to compliance, regulation and compulsive rule-following without thought. This tendency is paired with the liberal’s inability to tolerate any discussion or debate over a litany of politically correct talking points.

The management incompetence has been brewing for years. Our entire management construct is based on a lack of trust. The Lab itself is not to be trusted. The employees are not to be trusted. We are not trusted by the left or the right, albeit for different reasons. The net result of all this lack of trust is competence being subservient to distrust-driven compliance with oversight. We are made to comply and heel to the will of the government. This is the will of a government that is increasingly incompetent and unfit to run anything, much less a nuclear weapons enterprise! The management of the Lab is mostly there to launder money and drive the workforce into a state of compliance with all directives. The actual accomplishment of high quality technical work is the least important thing we do. Compliance is the main thing. We want to be managed to never ever fuck up, ever. If you are doing anything of real substance and performing at a high level, fuck-ups are inevitable. The real key to the operation is that technical competence can be faked. Our false confidence in the competent execution of our work is a localized harbinger of “fake news”.

Fox treats me well, it’s that Fox is the most accurate.

– Donald Trump

We have non-existent peer review, and this leads to slack standards. Our agency tells us that we cannot fail (really, we effectively have to succeed 100% of the time). The way to not fail is to lower our standards, which we have done in response. We aid our lower standards by castrating the peer review we ought to depend on. We now have Labs that cannot stand to have an honest, critical peer review because of the consequences. In addition, we have adopted foolish financial incentives for executive management that compound the problems. Since executive bonuses are predicated on successful reviews, the reviews have become laughable. Reviewers don’t dare raise difficult issues unless they never want to be invited back. We are now graded on a scale where everyone gets an “A” without regard to actual performance. Our excellence has become a local version of “fake news”.

At the very time that we need to raise our standards, we are allowing them to plummet lower and lower. Our reviews have become focused on spin and marketing of the work. Rather than show good work, present challenges, and receive honest feedback, we form a message focused on “everything is great, and there is nothing to worry about”. Let’s be clear: the task of caring for nuclear weapons without testing them is incredibly challenging. To do this task correctly we need to be focused on raising our level of excellence across the board in science and engineering. Our technical standards should be higher than ever because of the difficulty and importance of this enterprise. Requiring 100% success might seem to be a way to do this, but it isn’t.

If you are succeeding 100% of the time, you are not applying yourself. When you are mostly succeeding, but occasionally failing (and learning and growing), the outcomes are optimal. This is true in sports, business, science and engineering. Organizations are no different: to do the best work possible, you need to fail and be working on the edge of failure. Ideally, we should be doing our work in a mode where we succeed 70-80% of the time. Our incompetent governance and leadership do not understand how badly they are undermining the performance of this vital enterprise. So the opposite has happened, and the people leading us in the government are too fucking stupid to realize it. Our national leadership has become more obsessed with appearances than substance. All they see is the 100% scores, and they conclude everything is awesome while our technical superiority is crumbling. Greatness in America today is defined by declaring greatness and refusing to accept evidence to the contrary.

Look at the F-35 as an example of our current ability to execute a big program. This aircraft is a completely corrupt, massive shit storm. It is a giant, hyper-expensive fuckup. Rather than a working aircraft, the F-35 was a delivery vehicle for pork barrel spending. God knows how much bullshitting went into the greenlighting of the program over the years. The bottom line is that the F-35 costs a huge amount of money while being a complete failure as a weapons system. My concern is that the F-35 is an excellent representative of our current technical capability. If it is, we are in deep trouble. We are expensive, corrupt and incompetent (sounds like a description of the President!). I’m very glad that we never ask our weapons labs to fly. Given our actual ability, we can guess the result.

This is where we get to the core of the ascent of Trump. When we lower our standards for leadership, we get someone like Trump. The lowering of standards has taken place across the breadth of society. This is not simply national leadership, but corporate and social leadership. Greedy, corrupt and incompetent leaders are increasingly tolerated at all levels of society. At the Labs where I work, the leadership has to say yes to the government, no matter how moronic the direction is. If you don’t say yes, you are removed and punished. We now have leadership that is incapable of engaging in active discussion about how to succeed in our enterprise. The result is labs that simply take the money and execute whatever work they are given without regard for the wisdom of the direction. We now have the blind leading the spineless, and the blind are walking us right over the cliff. Our dysfunctional political system has finally shit the bed and put a moron in the White House. Everyone knows it, and yet a large portion of the population is completely fooled (or simply too foolish or naïve to understand how bad the situation is).

We are a paper tiger; a real opponent may simply destroy us. Our national superiority, military and technical, may already be gone. We are vastly confident of our outright superiority, but this superiority requires our nation to continually bring its best to the table. We have almost systematically undermined our ability to apply our best to anything. We’ve already been attacked and defeated in the cyber-realm by Russia. Our society and democracy were assaulted by the Russians, and we were routed. Our incompetent governance has done virtually nothing. The seeds of our defeat have been sown for years all across our society. We are too incompetent to even realize how vulnerable we are.

I will admit that this whole line of thought might be wrong. The Labs where I work might be local hotbeds of incompetent management, and what we see locally might not be indicative of broader national trends. This seems very unlikely. What is more terrifying is the prospect that the places where I work are comparatively well managed. If this is true, then it is completely plausible for us to have an incompetent President. So the reality we have is stark incompetence across society that has set the stage for national tragedy. Our institutions and broad societal norms are under siege. Every single day of the Trump administration lessens the United States’ prestige. The world had counted on the United States for decades, but cannot any longer. We have made a decision as a nation that disqualifies us from a position of leadership. The Republican party has the greatest responsibility for this, but the Democrats are not blameless. Our institutional leadership shares the blame too. Places like the Labs where I work are being destroyed one incompetent step at a time. All of us need to fix this.

We have a walking, talking, tweeting example of our incompetence leading us, and it is everyone’s fault. We all let this happen. We are all responsible. We own this.

Ask not what your country can do for you; ask what you can do for your country.

― John F. Kennedy

How to properly use direct numerical simulations (DNS)

02 Thursday Nov 2017

Posted by Bill Rider in Uncategorized

≈ 3 Comments

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

Nothing stokes the imagination for the power of computing to shape scientific discovery like direct numerical simulation (DNS). Imagine using the magic of the computer to unveil the secrets of the universe. We simply solve the mathematical equations that describe nature accurately and at immense precision, and magically truth comes out the other end. DNS also stokes the demand for computing power: the bigger the computer, the better the science and discovery. As an added bonus, the visualizations of the results are stunning, almost Hollywood-quality in their special-effects appeal. It provides the perfect sales pitch for the acquisition of a new supercomputer and everything that goes with it. With a faster computer, we can just turn it loose and let the understanding flow like water bursting through a dam. With the power of DNS, the secrets of the universe will simply submit to our mastery!

If only science were that easy. It is not, and this sort of thing is a marketing illusion for the naïve and foolish.

The saddest thing about DNS is the tendency for scientists’ brains to almost audibly click into the off position when it is invoked. All one has to say is that their calculation is a DNS, and almost any question or doubt leaves the room. No need to look deeper or think about the results; we are solving the fundamental laws of physics with stunning accuracy! It must be right! They will assert that this is a “first principles” calculation, and predictive at that. Simply marvel at the truths waiting to be unveiled in the sea of bits. Add a bit of machine learning or artificial intelligence to navigate the massive dataset produced by DNS (the datasets are so fucking massive, they must have something good! Right?) and you have the recipe for the perfect bullshit sandwich. How dare some infidel cast doubt or uncertainty on the results! Current DNS practice is a religion within the scientific community, and it brings an intellectual rot into the core of computational science. DNS reflects some of the worst wishful thinking in the field, where the desire for truth and understanding overwhelms good sense. A more damning assessment would be a tendency to submit to intellectual laziness when pressed by expediency or difficulty in making progress.

Let’s unpack this issue a bit and get to the core of the problems. First, I will submit that DNS is an unambiguously valuable scientific tool. A large body of work across a broad swath of science can benefit from DNS. We can probe our understanding of the universe in myriad ways and in phenomenal detail. On the other hand, DNS is never a substitute for observations. We do not know the fundamental laws of the universe with such certainty that their solutions provide an absolute truth. The laws we know are models, plain and simple. They will always be models. As models, they are approximate and incomplete by their basic nature. This is how science works: we have a theory that explains the universe, and we test that theory (i.e., model) against what we observe. If the model reproduces the observations with high precision, the model is confirmed. This confirmation is always tentative and subject to being tested with new or more accurate observations. Solving a model does not replace observations, ever, and some uses of DNS are masking laziness or limitations in observational (experimental) science.

To acquire knowledge, one must study;

but to acquire wisdom, one must observe.

― Marilyn Vos Savant

One place where the issue of DNS comes to a head is validation. In validation, a code (i.e., model) is compared with experimental data for the purpose of assessing the model’s ability to describe nature. In DNS, we assume that nature is so well understood that our model can describe it in detail; the leap too far is saying that the model can replace observing nature. This presumes that the model is completely and totally validated. I find this to be an utterly ludicrous prospect. All models are tentative descriptions of reality and intrinsically limited in some regard. The George Box maxim immediately comes to mind: “all models are wrong.” This is axiomatically true, and being wrong, models cannot be used to validate. With DNS, this is suggested as a course of action, violating the core principles of the scientific method for the sake of convenience. We should not allow this practice, for the sake of scientific progress. It is anathema to the scientific method.
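To make the distinction concrete, here is a minimal sketch of what a validation comparison actually involves: simulation predictions set against independent experimental measurements, with both sources of uncertainty carried along. Everything in it (the array names, the numbers, the two-sigma threshold) is a hypothetical illustration, not a recipe taken from any particular study.

```python
import numpy as np

# Hypothetical validation comparison: code predictions of some quantity of
# interest versus independent experimental measurements of the same quantity.
# All names and values below are illustrative placeholders, not real data.
sim_values = np.array([1.02, 0.98, 0.95])   # simulation predictions
sim_uncert = np.array([0.03, 0.03, 0.04])   # numerical + model uncertainty (std. dev.)
exp_values = np.array([1.00, 1.01, 0.93])   # measured values
exp_uncert = np.array([0.02, 0.02, 0.02])   # measurement uncertainty (std. dev.)

# Combine the uncertainties in quadrature and express the disagreement in
# units of the combined uncertainty -- a simple validation metric.
combined = np.sqrt(sim_uncert**2 + exp_uncert**2)
normalized_error = np.abs(sim_values - exp_values) / combined

for i, z in enumerate(normalized_error):
    status = "consistent" if z < 2.0 else "discrepant"
    print(f"point {i}: |sim - exp| = {z:.2f} combined sigma ({status})")
```

The point of writing it out is that the experimental column cannot be replaced by another calculation without the exercise collapsing into a model checking itself.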

None of this says that DNS is not useful. DNS can produce scientific results that may be used in a variety of ways where experimental or observational results are not available. This is a way of overcoming a limitation of what we can tease out of nature. Realizing this limitation should always come with the proviso that it is expedient and used in the absence of observational data. Observational evidence should always be sought, and the models should always be subjected to tests of validity. The results come from assuming the model is very good and provides value, but they cannot be used to validate the model. DNS is always second best to observation. Turbulence is a core example of this principle: we do not understand turbulence; it is an unsolved problem. DNS as a model has not yielded understanding sufficient to unveil the secrets of the universe. They are still shrouded. Part of the issue is the limitations of the model itself. In turbulence, DNS almost always utilizes an unphysical model of fluid dynamics, one lacking thermodynamics and carrying infinitely fast acoustic waves. Being unphysical in its fundamental character, how can we possibly consider it a replacement for reality? Yet, in a violation of common sense driven by frustration at the lack of progress, we do this all the time.
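For concreteness, the workhorse model in most turbulence DNS is the incompressible Navier-Stokes system, which can be written as

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0 .
\]

The divergence-free constraint enforces an effectively infinite sound speed, and there is no energy equation or equation of state in sight, which is exactly the missing thermodynamics noted above. Resolving this system on an enormous grid does not make those idealizations go away.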

One of the worst aspects of the entire DNS enterprise is the tendency to do no assessment of uncertainty with its results. Quite often the results of DNS are delivered without any uncertainty attributed to either the approximation or the model. Most often no uncertainty at all is included, estimated or even alluded to. The results of DNS are still numerical approximations carrying approximation error. The models, while detailed and accurate, are always approximations and idealizations of reality. This aspect of the modeling must be included for the results to be used in high-consequence applications. If one is going to use DNS as a stand-in for experiment, this is the very least that must be done. The uncertainty assessment should also include the warning that the validation is artificial and not based on reality. If there isn’t an actual observation available to augment the DNS in the validation, the reader should be suspicious, and the smell of bullshit should alert one to deception.

Some of our models are extremely reliable and have withstood immense scrutiny. These models are typically the subject of DNS. A couple of equations are worth discussing in depth: Schrödinger’s equation for quantum physics, molecular and atomic dynamics, and the Navier-Stokes equations for turbulence. These models are frequent topics of DNS investigations, and none of them is reality. The equations are mathematics, a logical and constructive language of science, but not actual reality. These equations are unequal in terms of their closeness to fundamentality, but our judgment should be the same. The closeness to “first principles” should be reflected in the assessment of uncertainty, which also reflects the problem being solved by the DNS. None of these equations will yield truths so fundamental as to be beyond question or free of uncertainty.
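Even the most “first principles” of these, the time-dependent Schrödinger equation, written here for a single particle in a potential \(V\),

\[
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V(\mathbf{x})\,\psi ,
\]

is still a mathematical model, and solving it directly for more than a handful of interacting particles is computationally intractable, so practical calculations pile further approximations on top of it. The Navier-Stokes system written out earlier sits even farther from fundamentality, being a continuum idealization to begin with.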

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

― Arthur C. Clarke

The general lack of uncertainty assessment bears repeating: it is extremely uncommon to see any sort of uncertainty estimate accompanying DNS. If we accept the faulty premise that DNS can replace experimental data, then the uncertainty associated with these “measurements” must be included. This almost universally shitty practice further undermines the case for using DNS as a replacement for experiment. Of course, we are accepting far too many experimental results without their own error bars these days. Even if we grant the false premise that the model being solved by DNS is true to the actual fundamental laws, the solution is still approximate. The approximate solution is never free of numerical error. In DNS, an estimate of the magnitude of the approximation error is almost universally absent from the results.
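There are standard, if rarely exercised, ways to put a number on that numerical error. The sketch below shows a generic grid-convergence study with Richardson extrapolation; the grid spacings and computed values are made up purely for illustration and are not taken from any actual DNS.

```python
import numpy as np

# Hypothetical grid-convergence study: one quantity of interest computed on
# three systematically refined grids. All values below are made up.
h = np.array([4.0, 2.0, 1.0])            # grid spacings, constant refinement ratio
f = np.array([0.9722, 0.9856, 0.9891])   # computed quantity on each grid (coarse -> fine)

r = h[0] / h[1]                           # refinement ratio (here r = 2)
# Observed order of convergence inferred from the three solutions.
p = np.log((f[1] - f[0]) / (f[2] - f[1])) / np.log(r)
# Richardson extrapolation toward the grid-converged value.
f_extrap = f[2] + (f[2] - f[1]) / (r**p - 1.0)
# Discretization-error estimate on the finest grid.
error_fine = abs(f_extrap - f[2])

print(f"observed order of convergence: {p:.2f}")
print(f"extrapolated value: {f_extrap:.4f}")
print(f"estimated error on finest grid: {error_fine:.2e}")
```

Even a crude estimate like this is more than most published DNS results offer.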

Let’s be clear: even when used properly, DNS results must come with an uncertainty assessment. Even when DNS is used simply as a high-fidelity solution of a model, the uncertainty of the solution is needed to assess the utility of the results. That utility is ultimately determined by some comparison with phenomena observed in reality. We may use DNS to measure the ability of a simpler model to remain consistent with the more fundamental model embodied in the DNS. This sort of use is widespread in turbulence, material science and constitutive modeling, but the credibility of the use must always be determined with experimental data. The observational data always has primacy, and DNS should always be subservient to reality’s results.
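As a small, entirely hypothetical illustration of that legitimate use, an a priori test might extract a turbulent scalar flux directly from DNS fields and ask how well a simple gradient-diffusion closure tracks it. The arrays and the closure coefficient below are synthetic stand-ins, not results from any real calculation.

```python
import numpy as np

# Hypothetical a priori test of a gradient-diffusion closure against a scalar
# flux "measured" from DNS fields. Synthetic data stands in for DNS output.
rng = np.random.default_rng(0)
scalar_gradient = rng.normal(size=256)                             # resolved mean-scalar gradient
dns_flux = -0.12 * scalar_gradient + 0.03 * rng.normal(size=256)   # flux extracted from DNS fields

eddy_diffusivity = 0.1                                  # closure coefficient being tested
model_flux = -eddy_diffusivity * scalar_gradient        # gradient-diffusion prediction

# Correlation and rms error quantify how well the closure tracks the DNS flux.
corr = np.corrcoef(dns_flux, model_flux)[0, 1]
rms_error = np.sqrt(np.mean((dns_flux - model_flux) ** 2))
print(f"correlation with DNS flux: {corr:.2f}")
print(f"rms error: {rms_error:.3f}")
```

Note what this exercise does and does not establish: agreement here only shows that two models are consistent with each other; whether either one describes nature still has to be settled against experiment.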

Unfortunately, we also need to address an even more deplorable DNS practice. Sometimes people simply declare that their calculation is a DNS without any evidence to support the assertion. Usually this means the calculation is really, really, really, super fucking huge and produces some spectacular graphics with movies and color (rendered in super groovy ways). Sometimes the models being solved are themselves extremely crude or approximate. For example, the Euler equations are solved, with or without turbulence models, instead of Navier-Stokes in cases where turbulence is certainly present. This practice is so abominable as to be almost a cartoon of credibility. It is proof by overwhelming force. Claims of DNS should always be taken with a grain of salt. When the claims take the form of marketing they should be met with extreme doubt, since this is a form of bullshitting that tarnishes those working to practice scientific integrity.
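To see why an Euler calculation cannot honestly be called DNS of a turbulent flow, compare the momentum equations schematically (in the incompressible form written earlier):

\[
\underbrace{\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\tfrac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}}_{\text{Navier-Stokes}}
\qquad\text{versus}\qquad
\underbrace{\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\tfrac{1}{\rho}\nabla p}_{\text{Euler}} .
\]

Dropping the viscous term removes the physical dissipation scale entirely, so the defining claim of DNS, that every dynamically relevant scale down to dissipation is resolved, has no meaning for an Euler calculation; whatever dissipation the computed flow exhibits is numerical, not physical.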

The world is full of magic things, patiently waiting for our senses to grow sharper.

― W.B. Yeats

Part of doing science correctly is honesty about challenges. Progress can be made with careful consideration of the limitations of our current knowledge. Some of these limits are utterly intrinsic. We can observe reality, but various challenges limit the fidelity and certainty of what we can sense. We can model reality, but these models are always approximate. The models encode simplifications and assumptions. Progress is made by putting these two forms of understanding into tension. Do our models predict or reproduce the observations to within their certainty? If so, we need to work on improving the observations until they challenge the models. If not, the models need to be improved so that the observations are reproduced. The current use of DNS short-circuits this tension and acts to undermine progress. It wrongly puts modeling in the place of reality, which only works to derail necessary work on improving the models or the observations. As such, poor DNS practices are actually stalling scientific progress.

I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.

― Isaac Asimov
