
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Category Archives: Uncategorized

Standard Tests without Metrics Stall Progress

04 Tuesday Oct 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

What’s measured improves

― Peter F. Drucker

In every area of endeavor, standards of excellence are important. Numerical methods are no different. Every area of study has a standard set of test problems on which researchers can demonstrate and study their work. These test problems end up being used not just to communicate work, but also to test whether work has been reproduced successfully or to compare methods. Where the standards are sharp and refined, the testing of methods has a degree of precision and results in actionable consequences. Where the standards are weak, expert judgment reigns and progress is stymied. In shock physics, the Sod shock tube (Sod 1978) is such a standard test. The problem is effectively a “hello world” problem for the field, but it suffers from weak standards of acceptance focused on expert opinion of what is good and bad, without any unbiased quantitative standard being applied. Ultimately, this weakness in accepted standards contributes to the stagnant progress we are witnessing in the field. It also allows a rather misguided focus and assessment of capability to persist unperturbed by results (standards and metrics can energize progress, https://williamjrider.wordpress.com/2016/08/22/progress-is-incremental-then-it-isnt/).

Sod’s shock tube is an example of a test problem arriving at the right time in the right place. It was published right at the nexus of progress in hyperbolic PDEs, but before the breakthroughs were well publicized. The article introduced a single problem applied to a large number of methods, all of which performed poorly in one way or another. The methods were an amalgam of old and new, demonstrating the generally poor state of affairs for shock capturing methods in the late 1970’s. Since its publication it has become the opening ante for a method to demonstrate competence in computing shocks. The issues with this problem were highlighted in an earlier post, https://williamjrider.wordpress.com/2016/08/18/getting-real-about-computing-shock-waves-myth-versus-reality/, where a variety of mythological thoughts about computing shocks are examined.

This problem is a very idealized shock problem in one dimension that is amenable to semi-analytical solution. As a result, an effectively exact solution may be obtained by solving a set of nonlinear equations. Evaluating the exact solution appropriately for comparison with numerical solutions is itself slightly nontrivial. The analytical solution needs to be properly integrated over the mesh cells to represent the correct integrated control volume values (moreover, this integration needs to be done for the correct conserved quantities). Comparison is usually done via the primitive variables, which may be derived from the conserved variables using standard techniques (I wrote about this a little while ago, https://williamjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/ ). A shock tube is the flow that results when two semi-infinite slabs of gas at different state conditions are held separately and then allowed to interact, creating a self-similar flow. This flow can contain all the basic compressible flow structures: shocks, rarefactions, and contact discontinuities.
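As an aside, here is a minimal sketch (in Python, my choice purely for illustration) of what that cell integration can look like: cell-averaged conserved quantities formed from a pointwise exact solution with Gauss-Legendre quadrature. The function exact_primitive(x, t) is a hypothetical stand-in for the semi-analytical Riemann solution, not something defined in this post.

```python
import numpy as np

def cell_averaged_conserved(exact_primitive, x_edges, t, gamma=1.4, nq=5):
    """Cell-average the exact solution in the conserved variables.

    exact_primitive(x, t) is a hypothetical callable returning (rho, u, p)
    of the semi-analytical solution at a point. The result is the proper
    control-volume comparison data: cell averages of mass, momentum and
    total energy on the mesh defined by x_edges.
    """
    xi, wi = np.polynomial.legendre.leggauss(nq)      # nodes/weights on [-1, 1]
    n_cells = len(x_edges) - 1
    U = np.zeros((n_cells, 3))
    for i in range(n_cells):
        xl, xr = x_edges[i], x_edges[i + 1]
        xq = 0.5 * (xl + xr) + 0.5 * (xr - xl) * xi   # map quadrature points
        for xg, w in zip(xq, wi):
            rho, u, p = exact_primitive(xg, t)
            e = p / ((gamma - 1.0) * rho)             # specific internal energy
            U[i] += 0.5 * w * np.array([rho, rho * u, rho * (e + 0.5 * u * u)])
    return U
```

For cells that contain a wave the integrand is discontinuous, so a more careful version splits the integral at the exact wave positions rather than trusting the quadrature there.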

Specifically, Sod’s shock tube (https://en.wikipedia.org/wiki/Sod_shock_tube) has the following conditions: in a one-dimensional domain filled with an ideal gamma-law gas, \gamma = 1.4, x\in \left[0,1\right], the domain is divided into two equal regions; on x\in \left[0,0.5\right], \rho = 1, u=0, p=1; on x\in \left[0.5,1\right], \rho = 0.125, u=0, p=0.1. The flow is described by the compressible Euler equations (conservation of mass, \rho_t + \left(\rho u\right)_x = 0, momentum, \left(\rho u \right)_t + \left(\rho u^2 + p\right)_x = 0, and energy, \left[\rho \left(e + \frac{1}{2} u^2 \right) \right]_t + \left[\rho u \left(e + \frac{1}{2}u^2 \right) + p u\right]_x = 0), and an equation of state, p=\left(\gamma-1 \right)\rho e. For t>0 the flow develops a self-similar structure with a right-moving shock followed by a contact discontinuity, and a left-moving rarefaction (expansion fan). This is the classical Riemann problem. The solution may be found through semi-analytical means by solving a nonlinear equation defined by the Rankine-Hugoniot relations (see Gottlieb and Groth for a wonderful exposition on this solution via Newton’s method).
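As a sketch of that Newton iteration (in the spirit of Gottlieb and Groth, or Toro’s textbook treatment, though the function name, initial guess and tolerances here are my own choices), the star-region pressure and velocity for an ideal gas can be found as follows.

```python
import math

def star_state(left, right, gamma=1.4, tol=1e-12, max_iter=50):
    """Newton iteration for the star-region pressure of the Riemann problem.

    left/right are (rho, u, p) tuples. Solve f(p) = f_L(p) + f_R(p) + (u_R - u_L) = 0
    for the pressure p* between the two nonlinear waves, then recover the
    contact speed u*.
    """
    def f_and_df(p, rho_k, u_k, p_k):
        a_k = math.sqrt(gamma * p_k / rho_k)           # sound speed
        if p > p_k:                                    # shock branch
            A = 2.0 / ((gamma + 1.0) * rho_k)
            B = (gamma - 1.0) / (gamma + 1.0) * p_k
            f = (p - p_k) * math.sqrt(A / (p + B))
            df = math.sqrt(A / (p + B)) * (1.0 - 0.5 * (p - p_k) / (p + B))
        else:                                          # rarefaction branch
            f = 2.0 * a_k / (gamma - 1.0) * ((p / p_k) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)
            df = (1.0 / (rho_k * a_k)) * (p / p_k) ** (-(gamma + 1.0) / (2.0 * gamma))
        return f, df

    rho_l, u_l, p_l = left
    rho_r, u_r, p_r = right
    p = 0.5 * (p_l + p_r)                              # simple initial guess
    for _ in range(max_iter):
        f_l, df_l = f_and_df(p, rho_l, u_l, p_l)
        f_r, df_r = f_and_df(p, rho_r, u_r, p_r)
        g = f_l + f_r + (u_r - u_l)
        p_new = max(p - g / (df_l + df_r), 1e-14)      # keep pressure positive
        if abs(p_new - p) < tol * p:
            p = p_new
            break
        p = p_new
    f_l, _ = f_and_df(p, rho_l, u_l, p_l)
    f_r, _ = f_and_df(p, rho_r, u_r, p_r)
    u_star = 0.5 * (u_l + u_r) + 0.5 * (f_r - f_l)
    return p, u_star

# Sod's shock tube states
p_star, u_star = star_state((1.0, 0.0, 1.0), (0.125, 0.0, 0.1))
```

For Sod’s states this converges in a handful of iterations to roughly p* ≈ 0.303 and u* ≈ 0.927, from which the full self-similar solution can be assembled.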

The crux of the big issue with how this problem is utilized is that the analytical solution is not used for more than display in plots comparing with numerical solutions. The quality of numerical solutions is then only assessed qualitatively. This is a huge problem that directly inhibits progress, and it is a direct result of having no standard beyond expert judgment on quality. It leads to the classic “hand waving” argument for the quality of solutions. Actual quantitative differences are not discussed as part of the accepted standard. The expert can deftly focus on the parts of the solution they want to and ignore the parts that might be less beneficial to their argument. Real problems can persist and effectively be ignored (such as the very dissipative nature of some very popular high-order methods). Under this lack of standards, relatively poorly performing methods can retain a high level of esteem while better performing methods are effectively ignored.


With all these problems, why does this state of affairs persist year after year? The first thing to note is that the standard of expert judgment is really good for experts. The expert can rule by asserting their expertise, creating a bit of a flywheel effect. For experts whose favored methods would be exposed by better standards, it allows their continued use with relative impunity. The experts are then gatekeepers for publications and standards, which tends to further the persistence of this sad state of affairs. The lack of any standard simply energizes the status quo and drives progress into hiding.

The key thing that has allowed this absurdity to exist for so long is the loss of accuracy associated with discontinuous solutions. For nonlinear solutions of the compressible Euler equations, high-order accuracy is lost in shock capturing. As a result, the designed order of accuracy for a computational method cannot be measured with a shock tube solution, and one of the primary aims of verification is not achieved using this problem. One must always remember that order of accuracy is the confluence of two aspects, the method and the problem. Those stars need to align for the order of accuracy to be delivered.

Order of accuracy is almost always shown in results for other problems where no discontinuity exists. Typically a mesh refinement study, with error norms and order of accuracy, is provided as a matter of course. The same data is (almost) never shown for Sod’s shock tube. For discontinuous solutions the order of accuracy is low (one or less). Ideally, the nonlinear features of the solution (shocks and expansions) converge at first order, and the linearly degenerate features (shears and contacts) converge at less than first order depending on the details of the method (see the paper by Banks, Aslam, and Rider). The core of the acceptance of the practice of not showing the error or convergence for shocked problems is the lack of differentiation of methods due to similar convergence rates for all methods (if they converge!). The relative security offered by the Lax-Wendroff theorem further emboldens people to ignore things (the weak solution it guarantees has to be entropy satisfying to be the right one!).
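To be concrete, the reporting being asked for is not exotic. A minimal sketch of the bookkeeping, with illustrative function names of my own, is just the L1 norm of the error and the observed rate between successive grids.

```python
import numpy as np

def l1_error(numerical, exact, dx):
    """Discrete L1 norm of the difference between cell-averaged solutions."""
    return np.sum(np.abs(numerical - exact)) * dx

def observed_order(errors, h):
    """Observed convergence rate between consecutive grids in a refinement
    study: alpha = log(E_coarse/E_fine) / log(h_coarse/h_fine)."""
    errors, h = np.asarray(errors, float), np.asarray(h, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(h[:-1] / h[1:])
```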

This is because the primary point of verification cannot be satisfied, but other aspects are still worthwhile (or even essential) to pursue. Verification is also all about error estimation, and when the aims of order verification cannot be achieved, error estimation becomes the primary concern. What people do not report, and the aspect that is missing from the literature, is the relatively large differences in error levels from different methods, and the practical impact of these differences. For most practical problems, the design order of accuracy cannot be achieved. These problems almost invariably converge at the lower order, but the level of error from a numerical method is still important, and may vary greatly based on details. In fact, the details and error levels actually have greater bearing on the utility of the method and its efficacy pragmatically under these conditions.

For all these reasons the current standard and practice with shock capturing methods are doing a great disservice to the community. The current practice inhibits progress by hiding deep issues and failing to expose the true performance of methods. Interestingly, the source of this issue extends back to the inception of the problem by Sod. I want to be clear that Sod wasn’t to blame, because none of the methods available to him were acceptable; within 5 years very good methods arose, but the manner of presentation chosen originally persisted. Sod only showed qualitative pictures of the solution at a single mesh resolution (100 cells), and relative run times for the solutions. This manner of presentation has persisted to the modern day (nearly 40 years almost without deviation). One can travel through the archival literature and see this pattern repeated over and over in an (almost) unthinking manner. The bottom line is that it is well past time to do better and set about using a higher standard.

At a bare minimum we need to start reporting errors for these problems. This ought not to be enough, but it is an absolute minimum requirement. The problem is that the precise measurement of error is prone to vary with details of implementation. This puts the onus on the full expression of the error measurement, itself an uncommon practice. It is not commonly appreciated that the differences between methods are actually substantial. For example, in my own work with Jeff Greenough, the error level for the density in Sod’s problem between fifth-order WENO and a really good second-order MUSCL method is a factor of two in favor of the second-order method! (See Greenough and Rider 2004; the data is given in the tables from the paper, captioned below.) This is exactly the sort of issue the experts are happy to resist exposing. Beyond this small step forward, the application of mesh refinement with convergence testing should be standard practice. In reality we would be greatly served by looking at the rate of convergence feature-by-feature. We could cut up the problem into regions and measure the error and rate of convergence separately for the shock, rarefaction and contact. This would provide a substantial amount of data that could be used to measure the quality of solutions in detail and spur progress.
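A sketch of what that feature-by-feature measurement could look like follows; the wave speeds are assumed to come from the exact solution, and the dictionary keys, window half-width and function name are illustrative choices of mine, not an established convention.

```python
import numpy as np

def featurewise_l1_errors(x, numerical, exact, t, waves, x0=0.5, pad=0.05):
    """Split the L1 error by wave family for a Riemann problem solution.

    waves holds exact wave speeds, e.g.
      {"raref": (s_head, s_tail), "contact": s_c, "shock": s_s},
    taken from the semi-analytical solution. Each feature is assigned a
    window of half-width pad around its location at time t so the shock,
    contact and rarefaction errors (and their convergence rates) can be
    reported separately.
    """
    dx = x[1] - x[0]
    err = np.abs(numerical - exact)

    def window(lo, hi):
        mask = (x >= lo) & (x <= hi)
        return np.sum(err[mask]) * dx

    head, tail = waves["raref"]
    s_c, s_s = waves["contact"], waves["shock"]
    return {
        "rarefaction": window(x0 + head * t - pad, x0 + tail * t + pad),
        "contact": window(x0 + s_c * t - pad, x0 + s_c * t + pad),
        "shock": window(x0 + s_s * t - pad, x0 + s_s * t + pad),
    }
```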

Two tables of data from Greenough and Rider 2004 displaying the density error for Sod’s problem (PLMDE = MUSCL).


We still quite commonly use methods that do not converge to the right solution for discontinuous problems (mostly in “production” codes). Without convergence testing this sort of pathology goes undetected. For a problem like Sod’s shock tube, it can still go undetected because the defect is relatively small. Usually it is only evident when the testing is on a more difficult problem with stronger shocks and rarefactions. Even then it is something that has to be looked for, showing up as reduced convergence rates, or the presence of a constant, non-converging term in the error structure, E = A_0 + A_h h^\alpha instead of the standard E = A_h h^\alpha . This subtlety is usually lost in a field where people don’t convergence test at all unless they expect the full order of accuracy for the problem.
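One rough way to test for that constant term is to fit the error model to refinement data and see whether A_0 is significantly different from zero. Here is a minimal sketch using SciPy’s curve_fit; the function name and starting guesses are mine, and in practice one would also fit the pure power law and compare the two fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_error_model(h, errors):
    """Fit E(h) = A0 + A*h**alpha to mesh refinement data.

    A clearly nonzero A0 signals a non-convergent component in the error:
    the method is approaching something other than the exact solution
    even as the mesh is refined.
    """
    h = np.asarray(h, float)
    errors = np.asarray(errors, float)
    model = lambda hh, A0, A, alpha: A0 + A * hh**alpha
    p0 = [0.0, errors[0] / h[0], 1.0]               # crude starting guess
    (A0, A, alpha), _ = curve_fit(model, h, errors, p0=p0, maxfev=10000)
    return {"A0": A0, "A": A, "alpha": alpha}
```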

Now that I’ve thrown a recipe for improvement out there to consider, I think it’s worthwhile to defend expert judgment just a bit. Expertise has its role to play in progress. There are aspects of science that are not prone to measurement; science is still a human activity with tastes and emotion. This can be a force for good or bad, and the need for dispassionate measurement is there as a counterweight to the worst instincts of mankind. Expertise can be used to express a purely qualitative assessment that can make the difference between something that is merely good and something great. Expert judgment can see through complexity to remediate results into a form with greater meaning. Expertise is more of a tiebreaker than the deciding factor. The problem today is that current practice means all we have is expert judgment, and this is a complete recipe for the status quo and an utter lack of meaningful progress.

The important outcome from this discussion is crafting a path forward that makes the best use of our resources. Apply appropriate and meaningful metrics to the performance of methods and algorithms to make progress, or the lack of it, concrete. Reduce, but retain, the use of expertise and apply it to the qualitative aspects of results. The key to doing better is striking an appropriate balance. We don’t have it now, but getting to an improved practice is actually easy. This path is only obstructed by the tendency of the experts to hold onto their stranglehold.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

Historical montage of Sod shock tube results from Sod 1978, Harten 1983, Suresh and Huynh 1997, and Jiang and Shu 1996. First, Sod’s result for perhaps the best performing method from his paper (just expert judgment on my part LOL).


Harten, Ami. “High resolution schemes for hyperbolic conservation laws.” Journal of Computational Physics 49, no. 3 (1983): 357-393.


Suresh, A., and H. T. Huynh. “Accurate monotonicity-preserving schemes with Runge–Kutta time stepping.” Journal of Computational Physics 136, no. 1 (1997): 83-99.


Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient Implementation of Weighted ENO Schemes.” Journal of Computational Physics 126, no. 1 (1996): 202-228.


Sod, Gary A. “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws.” Journal of Computational Physics 27, no. 1 (1978): 1-31.

Gottlieb, J. J., and C. P. T. Groth. “Assessment of Riemann solvers for unsteady one-dimensional inviscid flows of perfect gases.” Journal of Computational Physics 78, no. 2 (1988): 437-458.

Banks, Jeffrey W., T. Aslam, and W. J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

 

The Success of Computing Depends on Mathematics More Than Computers

27 Tuesday Sep 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

 The best dividends on the labor invested have invariably come from seeking more knowledge rather than more power.

– The Wright brothers

Some messages are so important that they need to be repeated over and over. This is one of those times. Computing is mostly not about computers. A computer is a tool, powerful, important and unrelentingly useful, but a tool. Using computing is a fundamentally human activity that uses a powerful tool to augment the human capacity for calculation and monotony. Today we see attitudes expressing more interest in the computers themselves with little regard for how they are used. The computers are essential tools that enable a certain level of utility, but the holistic human activity is at the core of what they do. This holistic approach is exactly the spirit that has been utterly lost by the current high performance computing push. In a deep way the program lacks the appropriate humanity in its composition, which is absolutely necessary for progress.

Most clearly, the computers are not an end in themselves, but only useful insofar as they can provide benefit to the solution of problems for mankind. Taking human thinking and augmenting it is what derives the benefits for humanity. It is our imagination and inspiration, automated so as to enable solutions through primarily approximate means. The key to all of the true benefits of computing comes from the fields of physics, engineering, medicine, biology, chemistry and mathematics. Subjects closer to the practice of computing do not necessarily push benefits forward to society at large. It is this break in the social contract that current high performance computing has entirely ignored. The societal end product is a mere afterthought and little more than a marketing ploy for a seemingly unremitting focus on computer hardware.

Mathematics is the door and key to the sciences.

— Roger Bacon

This approach is destined to fail, or at best not reap the potential benefits the investment should yield. It is completely and utterly inconsistent with the venerated history of scientific computing. The key to the success and impact of scientific computing has been its ability to augment its foundational fields as a supplement to humanity’s innate intellect in an area where human ability is a bit diminished. While it supplements raw computational power, the impact of the field depends entirely on our natural talent as expressed in the base science and mathematics. One place of natural connection is the mathematical expression of the knowledge in basic science. Among the greatest sins of modern scientific computing is the diminished role of mathematics in the march toward progress.

Computing should never be an excuse not to think; the truth is that computing has become exactly that: an excuse to stop thinking and simply, automatically get “answers”. The importance of this connection cannot be overstated. It is the complete and total foundation of computing. This is where the current programs become completely untethered from logic, common sense and the basic recipe of success. The mathematics program is virtually absent from the drive toward greater scientific computing. For example, I work in an organization that is devoted to applied mathematics, yet virtually no mathematics actually takes place. Our applied mathematics programs have turned into software programs. Somehow the decision was made 20-30 years ago that software “weaponized” mathematics, and in the process the software became the entire enterprise, and the mathematics itself became lost, an afterthought of the process. Without the actual mathematical foundation for computing, important efficiencies, powerful insights and structural understanding are sacrificed.

The software has become the major product and end point of almost all research efforts in mathematics, to the point of displacing actual math. The product of work needs to be expressed in software, and the construction and maintenance of the software packages has become the major enterprise being conducted. In the process the centrality of mathematical exploration and discovery has been submerged. Software is a difficult, valuable and important endeavor in itself, but distinct from mathematics. In many cases the software itself has become the raison d’être for math programs. In the process of emphasizing the software instantiating mathematical ideas, the production of mathematics itself has stalled. It has lost its centrality to the enterprise. This is horrible because there is so much yet to do.

Worse yet, the mathematical software is horribly expensive to maintain and loses its modernity at a frightful pace. We hear calls to preserve the code base because it was so expensive. A preserved code base loses its value more surely than a car depreciates. The software is only as good as the intellect of the people maintaining it. In the process we lose intellectual ownership of the code. This is beyond the horrible accumulation of technical debt in the software, which erodes its value like mold or dry rot. None of these problems is the worst of the myriad issues around this emphasis; the worst is the opportunity cost of turning our mathematicians into software engineers and removing their attention from some of our most pressing issues.

A single discovery of a new concept, principle, algorithm or technique can render one of these software packages completely obsolete. We seem to be in an era where we believe that more computer power is all that is needed to bring reality to heel. Such discoveries can allow results and efficiencies that were completely unthinkable to be achieved. Discoveries make the impossible possible, and we are denying ourselves the possibility of these results through our inept management of mathematics’ proper role in scientific computing. What might be some of the important topics in need of refined and focused mathematical thinking?

The work of Peter Lax and others has brought great mathematical understanding, discipline and order to the world of shock physics. Amazingly this has all happened in one dimension plus time. In two or three dimensions where the real World happens, we know far less. As a result our knowledge and mastery over the equations of (compressible) fluid dynamics is limited and incomplete. Bringing order and understanding to the real World of fluids could have a massive impact on our ability to solve realistic problems. Today we largely exist on the faith that our limited one-dimensional knowledge gives us the key to multi-dimensional real World problems. A program to expand our knowledge and fill these gaps in knowledge would be a boon to analytical and numerical methods seeding a new renaissance for scientific computing, physics and engineering.

One of the key things to understanding the power of computing is the comprehension that the ability to compute reflects a deep understanding that enables analytical, physical and domain-specific knowledge. A problem intimately related to the multi-dimensional issues with compressible fluids is the topic of one of the Clay prizes. This is a million dollar prize for proving the existence of solutions to the Navier-Stokes equations. There is a deep problem with the way this problem is posed that may make its solution both impossible and practically useless. The equations posed in the problem statement are fundamentally wrong. They are physically wrong, not mathematically, although this wrongness has consequences. In a very deep practical way fluids are never truly incompressible; incompressibility is an approximation, not a fact. This gives the equations an intrinsically elliptic character (because incompressibility implies infinite sound speeds, and a lack of thermodynamic character).

Physically the infinite sound speeds remove causality from the equations, and the removal of thermodynamics takes them further outside the realm of reality. This also creates immense mathematical difficulties that make these equations almost intractable. So this problem touted as the route to mathematically contribute to understanding turbulence may be a waste of time for that endeavor as well. Again, we need a concerted effort to put this part of the mathematical physics World into better order. The benefits to computation through some order would be virtually boundless.

This gets at one of the greatest remaining unsolved problems in physics, turbulence. The ability to solve problems depends critically upon models and the mathematics that makes such models tractable or not. The existence theory problems for the incompressible Navier-Stokes equations are essential for turbulence. For a century it has largely been assumed that the Navier-Stokes equations describe turbulent flow, with an acute focus on incompressibility. More modern understanding should have highlighted that the very mechanism we depend upon for creating the sort of singularities turbulence observations imply has been removed by the choice of incompressibility. The irony is absolutely tragic. Turbulence brings an almost endless amount of difficulty to its study, whether experimental, theoretical, or computational. In every case the depth of the necessary contributions by mathematics is vast. It seems somewhat likely that we have compounded the difficulty of turbulence by choosing a model with terrible properties. If so, it is likely that the problem remains unsolved, not due to its difficulty, but rather our blindness to the shortcomings, and the almost religious faith many have followed in attacking turbulence with such a model.

Before I close I’ll touch on a few more areas where some progress could either bring great order to a disordered but important area, or potentially unleash new approaches to problem solving. An area in need of fresh ideas, connections and better understanding is mechanics. This is a classical field with a rich and storied past, but suffering from a dire lack of connection between the classical mathematical rigor and the modern numerical world. Perhaps in no way is this more evident than in the prevalent use of hypo-elastic models where hyper-elasticity would be far better. The hypo-elastic legacy comes from the simplicity of its numerical solution, which became the basis of methods and codes used around the World. It also only applies to very small incremental deformations; for the applications being studied, it is invalid. In spite of this famous shortcoming, hypo-elasticity rules supreme, and hyper-elasticity sits in an almost purely academic role. Progress is needed here and mathematical rigor is part of the solution.

A couple of areas of classical numerical methods are in dire need of breakthroughs, with the current technology simply being accepted as good enough. A key one is the solution of sparse linear systems of equations. The current methods are relatively fragile and it’s been 30-40 years since we had a big improvement. Furthermore, these successes are somewhat hollow given the lack of a robust solution path. Right now the gold standard of scaling comes from multigrid, invented in the mid-1970’s to mid-1980’s. Robust solvers use some sort of banded method with quadratic scaling or preconditioned Krylov methods (which are less reliable). This area needs new ideas and a fresh perspective in the worst way. The second classical area of investigation that has stalled is high-order methods. I’ve written about this a lot. Needless to say, we need a combination of new ideas and a somewhat more honest and pragmatic assessment of what is needed in practical terms. We have to thread the needle of accuracy, efficiency and robustness in both cases. Again, without mathematics holding us to the level of rigor it demands, progress seems unlikely.

Lastly, we have broad swaths of application and innovation waiting to be discovered. We need to work to make optimization something that yields real results on a regular basis. The problem in making this work is similar to the problem with high-order methods; we need to combine the best technology with an unerring focus on the practical and pragmatic. Optimization today only applies to problems that are far too idealized. Other methodologies are lying in wait of great impact, among them the generalization of statistical methods. There is an immense need for better and more robust statistical methods in a variety of fields (turbulence being a prime example). We need to unleash the forces of innovation to reshape how we apply statistics.

When you change the way you look at things, the things you look at change.

― Max Planck

The depth of the problem for mathematics does seem to be slightly self-imposed. In a drive for mathematical rigor and professional virtue in applied mathematics, the field has lost a great deal of its connection to physics and engineering. If one looks to the past for guidance, the obvious truth is that the ties between physics, engineering and mathematics have been quite fruitful. There needs to be a healthy dynamic of push and pull between these areas of emphasis. The worlds of physics and engineering need to seek mathematical rigor as a part of solidifying advances. Mathematics needs to seek inspiration from physics and engineering. Sometimes we need the pragmatic success of the ad hoc “seat of the pants” approach to provide the impetus for mathematical investigation. Finding out that something works tends to be a powerful driver to understanding why it works. For example, the field of compressed sensing arose from a practical and pragmatic regularization method that worked without theoretical support. Far too much emphasis is placed on software and far too little on mathematical discovery and deep understanding. We need a lot more discovery and understanding today, perhaps no place more than scientific computing!

Mathematics is as much an aspect of culture as it is a collection of algorithms.

—  Carl Boyer

Note: Sometimes my post is simply a way of working on narrative elements for a talk. I have a talk on Exascale computing and (applied) mathematics next Monday at the University of New Mexico. This post is serving to help collect my thoughts in advance.

A Requiem for Personal Integrity in Public Life

20 Tuesday Sep 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Above all, don’t lie to yourself. The man who lies to himself and listens to his own lie comes to a point that he cannot distinguish the truth within him, or around him, and so loses all respect for himself and for others. And having no respect he ceases to love.

― Fyodor Dostoyevsky

Over the past handful of years the capacity to maintain professional success and personal integrity has become increasingly strained. Simultaneously, the whole concept of integrity within public life has similarly become strained. I’ve been startled by the striking symmetry between my private, professional and public life and the incredible and terrifying shit show of the 2016 American Presidential election. It all seems to be coming together in a massive orgy of angst and a lack of honesty and fundamental integrity across the full spectrum of life. As an active adult within this society I feel the forces tugging away at me, and I want to recoil from the carnage I see. A lot of days it seems safer to simply stay at home, hunker down and let this storm pass. It seems to be present at every level of life, from what is open and obvious to what is private and hidden.

How do we deal with all of the conflict, tension and danger when the rules of the game seem to have been thrown out? Is this confluence of effects common to everyone helping to explain the World, or is it personal? I think it’s worth pondering the breadth and scope of the challenges as we head forward toward a hopefully more optimistic future and 2017.

I’ve highlighted the concept of integrity as the focal point and the thing at most imminent risk. This is an expansion of my previous discussion of peer review that revolves around the same axis. It gets to the ability to care with some depth about the work I do, and whether that concern is actually appreciated. What does personal integrity mean to me? Like most things it is complex and combines multiple aspects of the content of my daily life. The greatest element of personal angst deals with the truth and any willingness for truths to be articulated openly. Research and progress depend upon good ideas being applied to areas of opportunity. Another, less charitable way of saying the same thing is that progress depends on finding important, valuable problems and driving solutions. A second piece of integrity is hard work, persistence, and focus on the important, valuable problems linked to great National or World concerns. Lastly, a powerful aspect of integrity is commitment to self. This includes a focus on self-improvement, and commitment to a full and well-rounded life. Every single bit of this is in rather precarious and constant tension, which is fine. What isn’t fine is the intrusion of outright bullshit into the mix, undermining integrity at every turn.

When people don’t express themselves, they die one piece at a time.

― Laurie Halse Anderson

The core of the issue attacking integrity is the power and prevalence of bullshit in professional and public life. At a professional level bullshit has become the prevalent means of communicating results. Why create real work when fake work can be spun into results of equal or greater value? In fact bullshit is better because it can be whatever it needs to be for success. People continually produce results of minimal value that get marketed as breakthroughs. The lack of integrity at the level of leadership simply takes this bullshit and passes it along. Eventually the bullshit gets to people who are incapable of recognizing the difference. Ultimately, the acceptance of bullshit produces a lowering of standards and undermines the reality of progress. Bullshit is the death of integrity in the professional world.

There exists an appalling symmetry within the broadest public sphere. We are witnessing a political movement of disturbing power founded on bullshit. We see outright lies produced every day and never actively challenged. This bullshit may actually elect a completely unqualified and dangerous person President of the United States. Why work with facts or truth when bullshit is so incredibly effective? When we look more deeply at this problem we start to see that our political dysfunction is built upon a virtual mountain of bullshit. We see reality television, Facebook, Fox News, CNN, online dating, and a host of other modern things all operating within a vibrant and growing bullshit economy. Taken in this broad context, the dominance of bullshit in my professional life and the potential election of Donald Trump are closely connected.

A big part of the acceptance of bullshit as the medium of universal discourse is related to fear. Increasingly, our professional and public lives are ruled by irrational fears. Fear of failure professionally is rampant. Fear of terrorism is also rampant. Fear of immigrants is yet another common fear, tied to terrorism, racism and economic stress. Fear is a powerful emotion that overrules most rational responses to problems. It leads to people shrinking away from the sort of professional risk that research depends upon. Fear is also one of the most powerful tools of political despots. In each of these cases bullshit can be used to either quell or amplify fears. In the technical World bullshit can produce seeming success without regard to actual technical accomplishment. The acceptance of bullshit in place of actual results quells the risk of failure. In the political World bullshit can produce fear where little or none is warranted. We are seeing trillions of dollars and millions of votes being generated by fear-mongering bullshit. Even worse, it has paved the way for greater purveyors of bullshit like Trump. We may well see bullshit providing the vehicle to elect an unqualified con man as President.

There are two major paths to take in life: follow and be rewarded by the power structure, or confront power with the reality of its failings. So I’m confronted with the guidance “don’t be a troublemaker” versus the call to integrity, “speak truth to power”. Which is it? Which of these paths do our institutions support today? That’s easy. Be quiet, go quietly through life, and don’t make waves. Better yet, join the bullshit economy and contribute to its vibrant growth as our greatest export.

If you can’t speak truths and provide honest assessments in today’s World, or call out lies in public, can you have personal integrity? When actual bullshit has become the path to professional success, how does one come back? How much of yourself gets left behind in the process? And how does one keep from becoming a mere shadow of one’s true self? I’ve been struggling with these questions at work with ever-greater regularity. My personal devotion to progress and research is continually undermined by bullshit replacing progress. Today it is easier to make shit up and pass it off as being completely equivalent to the result of honest, good work. Moreover, bullshit doesn’t have the downside of potentially not working; its “success” is virtually guaranteed. Just tell the people above you in the food chain what they want to hear, and you’ll be rewarded in spades! The beauty of it is that the made up shit can conform to whatever narrative you desire, and completely fit whatever your message is. In such a world real problems can simply be ignored when the path to progress is problematic for those in power.

These same issues energize a political dialog that degenerates into a shouting match with no truth in it. Beyond the shouting match you get the ability to ignore real problems that are inconvenient. A perfect example is climate change. For many traditional businesses climate change is a truly inconvenient truth. When bullshit rules, it can be publicly ignored without real risk to political success. We are seeing this play out right in front of all of us. We are in danger of losing all connection to facts, truth and science as guiding forces for determining optimal solutions to our very real problems. The road to this end is paved by allowing bullshit to be viewed as equivalent to solid facts.

Is there a path forward? Perhaps things simply need to devolve into the natural outcome from the current path. Nothing short of catastrophe will stop this orgy of bullshit choking public life. One might hope that we have the collective wisdom to avoid a calamity, one might hope.

One of the greatest regrets in life is being what others would want you to be, rather than being yourself.

― Shannon L. Alder

Is Coupled or Unsplit Always Better Than Operator Split?

16 Friday Sep 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

No.

The ideal is the enemy of the real.

― Susan Block

In the early days of computational science we were just happy to get things to work for simple physics in one spatial dimension. Over time, our grasp on more difficult, coupled, multi-dimensional physics became ever more bold and expansive. The quickest route to this goal was the use of operator splitting, where simple operators, single-physics and one-dimensional, were composed into complex operators. Most of our complex multiphysics codes operate in this operator split manner. Research into doing better almost always entails doing away with this composition of operators, or operator splitting, and doing everything fully coupled. It is assumed that this is always superior. Reality is more difficult than this proposition, and most of the time the fully coupled or unsplit approach is actually worse, with lower accuracy, greater expense, and little identifiable benefit. So the question is: should we keep trying to do this?

This is another example where the reality of simulating difficult problems gives a huge home field advantage to simple approaches. It is much the same as the issues with high-order methods for discretization. Real problems bring complexities and singularities (shocks, corners, turbulence, etc.), and this relegates results to first-order accuracy or less. Operator splitting is often first-order accurate without extensive and difficult measures. We have the situation where reality collides with the simplest approach. The truth is that the simple operator split approach is really good and powerful in many, if not most, cases. It is important to realize when this is not the case and something better really is needed.
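The first-order nature of simple splitting can be seen in a small illustration with two non-commuting linear operators (the matrices below are arbitrary examples of my own choosing). Each substep is solved exactly, yet Lie splitting converges at first order while Strang splitting recovers second order.

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting operators; splitting error vanishes only if A and B commute.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # rotation-like operator
B = np.array([[-1.0, 0.0], [0.5, -2.0]])   # damping/coupling operator
u0 = np.array([1.0, 0.0])
T = 1.0
u_exact = expm((A + B) * T) @ u0

for n in (16, 32, 64, 128):
    dt = T / n
    lie = np.linalg.matrix_power(expm(B * dt) @ expm(A * dt), n) @ u0
    strang = np.linalg.matrix_power(
        expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2), n) @ u0
    print(n, np.linalg.norm(lie - u_exact), np.linalg.norm(strang - u_exact))
# The Lie errors roughly halve with each refinement (first order); the
# Strang errors drop by roughly a factor of four (second order).
```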

The unsplit, fully coupled approach really yields an unambiguous benefit when the solution involves a precise dynamic balance. This is when you have equal and opposite terms in the equations that produce solutions in near equilibrium. This produces critical points where the solution makes complete turns in outcome based on its very detailed nature. These situations also produce substantial changes in the effective time scales of the solution. When very fast phenomena combine in this balanced form, the result is a slow time scale. It is most acute in the form of the steady-state solution, where such balances are the full essence of the physical solution. This is where operator splitting is problematic and should be avoided.
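A scalar caricature of this point, entirely of my own construction: du/dt = s - k u has the steady balance u = s/k between two large, nearly cancelling terms. Each split substep below is solved exactly, yet the split scheme’s steady state misses the balance by an amount that grows with k dt, while an unsplit backward Euler update recovers it essentially to round-off.

```python
import math

k, s = 100.0, 100.0          # large, nearly cancelling terms; steady state u* = 1
u_star = s / k

for dt in (0.1, 0.05, 0.025, 0.0125):
    u_split, u_coupled = 0.0, 0.0
    for _ in range(2000):    # march both schemes to their fixed points
        # split: exact relaxation substep, then exact source substep
        u_split = u_split * math.exp(-k * dt) + s * dt
        # coupled (unsplit) backward Euler treats both terms at once
        u_coupled = (u_coupled + s * dt) / (1.0 + k * dt)
    print(dt, abs(u_split - u_star), abs(u_coupled - u_star))
# The split steady state is wrong by O(k*dt) even with exact substeps;
# the coupled update satisfies the discrete balance s - k*u = 0 exactly.
```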

Such balances are also rarely the entire problem, and are often only present in a localized region in time and space. As such, the benefit of coupling is not present everywhere and its cost should not be borne by the entire procedure. Unfortunately, this isn’t what people do; once they remove operator splitting and fully couple, they do it everywhere. A way forward is to only apply full coupling where it has a favorable impact on the solution, in the region of critical points, and use more effective, accurate and efficient operator splitting elsewhere.

The other reason for not applying coupled methods is their disadvantage for the fundamental approximations. When operators are discretized separately, quite efficient and optimized approaches can be applied. For example, if solving a hyperbolic equation it can be very effective and efficient to produce an extremely high-order approximation to the equations. For the fully coupled (unsplit) case such approximations are quite expensive, difficult and complex to produce. If the solution you are really interested in is first-order accurate, the benefit of the fully coupled case is mostly lost. This is with the distinct exception of the small part of the solution domain where the dynamic balance is present and the benefits of coupling are undeniable.

This entire argument is even stronger when considering multi-physics, where procedures for solving single physics are highly optimized and powerful. The fully coupled methods tend to be clunky and horribly expensive, often being defined by dropping the entire system into an implicit solve without regard to the applicability and utility of such an approximation for the problem at hand. To make matters worse, the implicitness often undermines accuracy in really pernicious ways in the very regions where the coupling is actually necessary. Moreover, the cost of this less accurate approximation is vastly greater due to the nature of the full system, and the departure from all the tricks of the trade leading to efficiency.

A really great path forward is the encouragement to pursue fully coupled methods only where their benefit is greatest. This is another case where the solution method should be adaptive and locally tailored to the nature of the solution. One size fits all is almost never the right answer (to anything). Unfortunately, this whole line of attack is not favored by anyone these days; we seem to be stuck in the worst of both worlds, where codes used for solving real problems are operator split, and research is focused on coupling without regard for the demands of reality. We need to break out of this stagnation! This is ironic because stagnation is one of the things that coupled methods excel at!

The secrets of evolution are death and time—the deaths of enormous numbers of lifeforms that were imperfectly adapted to the environment; and time for a long succession of small mutations.

― Carl Sagan

 

 

I’m Better When I Don’t Care

12 Monday Sep 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

When we are no longer able to change a situation, we are challenged to change ourselves.

― Viktor E. Frankl

Today’s title is a conclusion that comes from my recent assessments and experiences at work. It has completely thrown me off stride as I struggle to come to terms with the evidence in front of me. The obvious and reasonable conclusions from considering recent experiential evidence directly conflict with most of my most deeply held values. As a result I find myself in a deep quandary about how to proceed with work. Somehow my performance is perceived to be better when I don’t care much about my work. One reasonable conclusion is that when I have little concern about the outcomes of the work, I don’t show my displeasure when those outcomes are poor.

Do I continue to act naturally and care about my work despite the evidence that such concerns are completely unwelcome? Instead do I take my energy and concern elsewhere and turn work into nothing but a paycheck as feedback seems to directly say? Is there a middle path that preserves some personal integrity while avoiding the issues that seem to cause tension? Can I benefit by making work more impersonal and less important to me? Should I lose any sense of deeper meaning and importance to the outcomes at work?

Personal integrity is important to pay attention to. Hard work, personal excellence and a devotion to progress have been the path to my success professionally. The only thing that the current environment seems to favor is hard work (and even that’s questionable). The issues causing tension are related to technical and scientific quality, or work that denotes any commitment to technical excellence. It’s everything I’ve written about recently: success with high performance computing, progress in computational science, and integrity in peer review. Attention to any and all of these topics is a source of tension that seems to be completely unwelcome. We seem to be managed to pay attention to nothing but the very narrow and well-defined boundaries of work. Any thinking or work “outside the box” seems to invite ire, punishment and unhappiness. Basically, the evidence seems to indicate that my performance is perceived to be much better if I “stay in the box”. In other words, I am managed to be predictable and well defined in my actions, and to not provide any surprises.

The only way I can “stay in the box” is to turn my back on the same values that brought me success. Most of my professional success is based on “out of the box” thinking, working to provide real progress on important issues. Recently it’s been pretty clear that this isn’t appreciated any more. To stop thinking out of the box I need to stop giving a shit. Every time I seem to care more deeply about work and do something extra, not only is it not appreciated; it gets me into trouble. Just do the minimum seems to be the real directive, and extra effort is not welcome seems to be the modern mantra. Do exactly what you’re told to do, no more and no less. This is the path to success.

When a person is punished for their honesty they begin to learn to lie.

― Shannon L. Alder

I have evidence that my performance is perceived to be better when I don’t give a shit. I’ve done the experiment and the evidence was absolutely clear. When I don’t care, don’t give a shit and have different priorities than work, I get awesome performance reviews. When I do give a shit, it creates problems. A big part of the problem is the whole “in the box” and “out of the box” issue. We are managed to provide predictable results and avoid surprises. It is all part of the low risk mindset that permeates the current World, and the workplace as well. Honesty and progress are a source of tension (i.e., risk), and as such they make waves, and if you make waves you create problems. Management doesn’t like tension, waves or anything that isn’t completely predictable. Don’t cause trouble or make problems, just do stuff that makes us look good. The best way to provide this sort of outcome is to just come to work and do what you’re expected to do, no more, no less. Don’t be creative, and let someone else tell you what is important. In other words, don’t give a shit, or better yet don’t give a fuck either.

Why should I work so hard or put so much effort into something that isn’t appreciated? I have other things to do in my (finite) life where effort is appreciated. The conclusion is that I should do much more about things away from work, and less at work. In other words, I need to stop giving a shit at work. It’s what my feedback is telling me, and it’s a route to sanity. It is appalling that it’s come to this, but the evidence is crystal clear.

Some men are born mediocre, some men achieve mediocrity, and some men have mediocrity thrust upon them.

― Joseph Heller

 

 

The Real Problem with Classified E-mail

09 Friday Sep 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The news is full of stories and outrage at Hillary Clinton’s e-mail scandal. I don’t feel that anyone has remotely the right perspective on how this happened, and why it makes perfect sense in the current system. It epitomizes a system that is prone to complete breakdown because of the deep neglect of information systems, both unclassified and classified, within the federal system. We just don’t pay IT professionals enough to get good service. The issue also gets to the heart of the overall treatment of classified information by the United States, which is completely out of control. The tendency to classify things is running amok, far beyond anything that is in the actual best interests of society. To compound things further, it highlights the utter and complete disparity in how laws and rules do not apply to the rich and powerful. All of this explains what happened, and why; yet it doesn’t make what she did right or justified. Instead it points out why this sort of thing is both inevitable and much more widespread (i.e., John Deutch, Condoleezza Rice, Colin Powell, and what is surely a much longer list of violations of the same thing Clinton did).

Management cares about only one thing. Paperwork. They will forgive almost anything else – cost overruns, gross incompetence, criminal indictments – as long as the paperwork’s filled out properly. And in on time.

― Connie Willis

Last week I had to take some new training at work. It was utter torture. The DoE managed to find the worst person possible to train me, someone seemingly drained of all personality and then treated with sedatives. I already take a massive amount of training, most of which is utterly and completely useless. The training is largely compliance based, and generically a waste of time. Still, by the already appalling standards, the new training was horrible. It is the Hillary-induced e-mail classification training where I now have the authority to mark my classified e-mails as an “e-mail derivative classifier”. We are constantly taking reactive action via training that only undermines the viability and productivity of my workplace. Like most of my training, this current training is completely useless, and only serves the “cover your ass” purpose that most training serves. Taken as a whole our environment is corrosive and undermines any and all motivation to give a single fuck about work.

Let’s get to the point: why was Hillary compelled to use a private e-mail system in the first place? Why did classified information appear there? Why do people in positions of power feel they don’t have to follow rules?

Most people watching the news have little or no idea about classified computing or e-mail systems. So let’s explain a few things about the classified systems people work on that will get to the point of why all of this is so fucking stupid. For starters, the classified computing systems are absolutely awful to use. Anyone trying to get real work done on these systems is confronted with the utter horror they are to use. No one interested in productively doing work would tolerate them. In many government places the unclassified computing systems are only marginally better. The biggest reasons are a lack of appropriately skilled IT professionals and a lack of investment in infrastructure. Fundamentally, we don’t pay the IT professionals enough to get first-rate service, and anyone who is good enough to get a better private sector job does. Moreover, these professionals work on old hardware with software restrictions that serve outlandish and obscene security regulations, which in many cases are actually counter-productive. So, if Hillary were interested in getting anything done she would be quite compelled to leave the federal network for greener, more productive pastures.

The more you leave out, the more you highlight what you leave in.

― Henry Green

Where one might think that the government would give classified work the highest priority, the environment for working there is the worst. Keep in mind that it is worse than the already shitty and atrocious unclassified environment. The seeming purpose of everything is not my or anyone’s actual productivity, but rather the protection of information, or at least the appearance of protection. Our approach to everything is administrative compliance with directives. Actual performance on anything is completely secondary to the appearance of performance. The result of this pathetic approach to providing the taxpayer with benefit for money expended is a dysfunctional system that provides little in return. It is primed for mistakes and outright systematic failures. Nothing stresses the system more than a high-ranking person hell-bent on doing their job. The sort of people who ascend to high positions like Hillary Clinton find the sort of compliance demanded by the system awful (because it is), and have the power to ignore it.

Of course I’ve seen this abuse of power live and in the flesh. Take the former Los Alamos Lab Director, Admiral Pete Nanos, who famously shut the Lab down and denounced the staff as “Butthead Cowboys!” He blurted out classified information in an unclassified meeting in front of hundreds if not thousands of people. If he had taken his training, and been compliant, he should have known better. Instead of being issued a security infraction, like any of the butthead cowboys in attendance would have gotten, he got a pass. The powers that be simply declassified the material and let him slide by. Why? Power comes with privileges. When you’re in a position of power you find the rules are different. This is a maxim repeated over and over in our World. Some of this looks like white privilege, or rich white privilege, where you can get away with smoking pot, or raping unconscious girls, with no penalty or lightened penalties. If you’re not white or not rich you pay a much stiffer penalty, including prison time.

I learned the lesson again at Los Alamos in another episode that will remain slightly vague in this post. I went to a meeting that honored a Lab scientist’s career. During the course of the meeting another Lab director read an account of this person’s work noting their monumental accomplishments and contributions to the national security. All of the account was good, true and correct except it was classified in its content. I took the written text to the classification office at the Lab and noted its issues. They agreed that it was indeed classified. Because the people who wrote the account (very high ranking DoE person) and the person who read it were so high ranking they would not touch this with the proverbial ten-foot pole. They knew a violation had occurred, but their experience also told them that it was foolish to pursue it. This pursuit would only hurt those who pointed out the problem and those committing the violations were immune.

Let me ask you, dear reader, how do you think someone would treat the Secretary of State of the United States? How much more untouchable would they be? It is certainly wrong in a perfect World, but we live in a very imperfect world.

A secret’s worth depends on the people from whom it must be kept.

― Carlos Ruiz Zafón


The core philosophy in all of this is that we have lots of secrets to protect because we are the biggest and baddest country on Earth. It was certainly true at one time, but every day I wonder less and less if we still are and gain assurance that we are not. So we have created a system that is predicated on our lead in science and technology, but completely and utterly undermines our ability to keep that lead. We have a system that is completely devoted to undermining our productivity at every turn in the service of protecting information that loses its real value every day. To put it differently our current approach and policy is utter and complete fucking madness!

I also want to be clear that classification of a lot of material is absolutely necessary. It is essential to the safety and security of the Nation and the World. The cavalier and abusive way that classification is applied today runs utterly counter to this. By classifying everything in sight, we reduce the value and importance of the things that must be classified. By using classification of documents to cover everything with a blanket, the real need and purpose of classification is obscured and deeply harmed. All of this said, I have not discussed the most widely abused version of classification, “Official Use Only,” which is applied in an almost entirely unregulated manner. It is abused widely and casually. Among the areas regulated by this awful policy is Export Controlled Information, which is easily one of the worst laws I’ve ever come in contact with. It is, simply put, stupid and incompetent. It probably does much more harm than good to the national security of the nation.

Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.

― John Steinbeck


Let’s be clear about the Country and World we live in. The rich and powerful are corrupt. The rich and powerful are governed by entirely different rules than everyone else. Mistakes, violations of the law, and morality itself are fundamentally different for the rich and powerful than for the common man. So, to be clear, Hillary Clinton committed abuses of power. Donald Trump has committed abuses of power too. Barack Obama has as well. Either Hillary or Trump will continue to do so if elected President. Until the basic attitudes toward power and money change we should expect this to continue. The same set of abuses of power happen across the spectrum of society in every organization and business. The larger the organization or business, the worse the abuse of power can be expected to be. As long as it is tolerated it can be expected to continue.

A man who has never gone to school may steal a freight car; but if he has a university education, he may steal the whole railroad.

― Theodore Roosevelt

Our societal approach to classification of documents is simply a tool of this sort of rampant abuse of power. Any sense of a viable “whistleblower” protection is complete and utter bullshit. People who have highlighted huge systematic abuses of power involving murder and vast violations of constitutional law are thrown to the proverbial wolves. There is no protection; it is viewed as treason, and these people are treated as harshly as possible (Snowden, Assange, and Manning come to mind). As I’ve noted above, people in positions of authority can violate the law with utter impunity. At the same time classification is completely out of control. More and more is being classified with less and less control. Such classification often only serves to hide information and serve the needs of the status quo power structure.

In the end, Hillary had really good reasons to do what she did, and to believe that she had the right to do so. Everything in the system is going to provide her with the evidence that the rules for everyone else do not apply to her. Hillary wasn’t correct, but we have created an incompetent, unproductive computing environment that virtually compelled her to choose the path she took. We have created a culture where the most powerful people do not have to follow the rules that bind the regular guy. The system has been structured by fear and lack of trust without any regard for productivity. If we want to remain the most powerful country, we need to change our priorities on productivity, secrecy and the corruption of power.

The whole issue of runaway classification, classified e-mails and our inability to produce a productive work environment in National Security sits at the nexus of incompetence, lack of trust, and corruption, resulting in a systematic devotion to societal mediocrity.

Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.

― Edward Snowden

 

How to strive for excellence in modeling & simulation

02 Friday Sep 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Where the frontier of science once was is now the centre.

― Georg Christoph Lichtenberg

I’ll just say up front that my contention is that there is precious little excellence to be found today in many fields. Modeling & simulation is no different. I will also contend that excellence is relatively easy to obtain, or at the very least a key change in mindset will move us in that direction. This change in mindset is relatively small, but essential. It concerns how satisfied we are with the current state of things, and whether enough restlessness exists for progress to be sought. Too often there seems to be an innate satisfaction with too much of the “ecosystem” for modeling & simulation, and not enough agitation for progress. We should continually seek the opportunity and need for progress in the full spectrum of work. Our obsession with planning and micromanagement of research ends up choking the success from everything it touches by short-circuiting the entire natural process of progress, discovery and serendipity.

In my view the desire for continual progress is the essence of excellence. When I look at the broad field of modeling & simulation the need for progress seems pervasive and deep. When I hear our leaders talk, such needs are muted and progress seems to depend on only a few simple areas of focus. Such a focus is always warranted if there is an opportunity to be taken advantage of. Instead we seem to be in an age where the technological opportunity being pursued, computer hardware, is arrayed against progress. In the process of trying to force progress where it is less available, the true engines of progress are being shut down. This represents mismanagement of epic proportions and needs to be met with calls for sanity and intelligence in our future.

If you find that you’re spending almost all your time on theory, start turning some attention to practical things; it will improve your theories. If you find that you’re spending almost all your time on practice, start turning some attention to theoretical things; it will improve your practice.

― Donald Knuth

So how do we get better at modeling & simulation? The first thing is mindset; going into our work, do we think, “my goal is to make this thing good”? Or “state of the art”? Or simply, “how can I make it better”? The final question is the “right” one: you can always make it better, and in the process the other two questions will get answered. Too often today we never get into the fundamental mode of simply working toward continual improvement as our default mode of operation. The manner of energizing our work to do this is frighteningly simple to pursue, but rarely in evidence today.

The way toward excellence, innovation and improvement is to figure out how to break what you have. Always push your code to its breaking point; always know what reasonable (or even unreasonable) problems you can’t successfully solve. Lack of success can be defined in multiple ways including complete failure of a code, lack of convergence, lack of quality, or lack of accuracy. Generally people test their code where it works, and if they are good code developers they continue to test the code all the time to make sure it still works. If you want to get better you push at the places where the code doesn’t work, or doesn’t work well. You make the problems where it didn’t work part of the ones that do work. This is the simple and straightforward way to progress, and it is stunning how few efforts follow this simple, obvious path. It is the golden path that we deny ourselves today.

The reasons for not engaging in this golden path are simple and completely, utterly pathological. The golden path is not easy to manage. This golden path is epitomized by out-of-the-box thinking. Today we prize in-the-box thinking because it is suitable for management and strict accountability. This strict accountability is the consequence of societal structures that lack trust and implicitly fear independent thought. Out-of-the-box thinking is unstructured and innovative, eschewing management control. As such we introduce systems that push everything inside the proverbial box. Establishing results that are predictable has become tantamount to being trustworthy. Out-of-the-box thinking is dangerous and the subject of fear because it cannot be predicted. This is the core of our current lack of innovation, and the malaise in modeling & simulation.

The element of thinking that undermines how things currently progress is a sense of satisfaction with too much of what has driven the success of modeling & simulation to date. We are too satisfied that the state of the art is fine and good enough. We lack a general sense that improvements and progress are always possible. Instead of a continual striving to improve, the approach of focused and planned breakthroughs has beset the field. We have a distinct management approach that provides narrowly oriented improvements while ignoring important swaths of the technical basis for modeling & simulation excellence. The result of this ignorance is an increasingly stagnant status quo that embraces “good enough” implicitly through a lack of support for “better”.

There seems to be a belief that the current brand of goal-oriented micromanagement is good for technical achievement. Nothing could be further from the truth; the current goal-based management philosophy is completely counter-productive and antithetical to good science and achievement. It leads to systematic goal reduction and a lack of risk-taking on the part of organizations. A big part of this is the impact of the management style on the intrinsic motivations of the scientists. Scientists tend to be easily and intrinsically motivated by curiosity and achievement, while the management system is focused on extrinsic motivation.

The test of a man isn’t what you think he’ll do. It’s what he actually does.

― Frank Herbert

We end up undermining all of the natural and simple aspects that lead to productive, innovative excellence in work, replacing these factors with a system that undermines what comes naturally. All of this has a single root: lack of trust and faith in the people doing the work. Without rebuilding the fundamental trust needed to provide intrinsically motivated and talented people with a productive environment, I fear nothing can be done to improve our outcomes and grasp the excellence that is there for the taking. People would gravitate toward excellence naturally if management would simply trust them and work to resonate with their natural inclinations.

Modeling & simulation rose to utility in support of real things. It owes much of its prominence to the support of national defense during the cold war. Everything from fighter planes to nuclear weapons to bullets and bombs utilized modeling & simulation to strive toward the best possible weapon. Similarly modeling & simulation moved into the world of manufacturing, aiding in the design and analysis of cars, planes and consumer products across the spectrum of the economy. The problem is that we have lost sight of the necessity of these real world products as the engine of improvement in modeling & simulation. Instead we have allowed computer hardware to become an end unto itself rather than simply a tool. Even in computing, hardware has little centrality to the field. In computing today, the “app” is king and holds the keys to the market; hardware is simply a necessary detail.

To address the proverbial “elephant in the room,” the national exascale program is neither a good goal, nor bold in any way. It is the actual antithesis of what we need for excellence. The entire program will only power the continued decline in achievement in the field. It is a big project that is being managed the same way bridges are built. Nothing of any excellence will come of it. It is not inspirational or aspirational either. It is stale. It is following the same path that we have been on for the past 20 years: improvement in modeling & simulation by hardware. We have tremendous places where we might harness modeling & simulation to help produce and even enable great outcomes. None of these greater societal goods is in the frame with exascale. It is a program lacking a soul.

The question always comes down to: what am I suggesting be done instead? We need to couch our overall efforts in modeling & simulation in terms of supporting real world objectives. Something like additive manufacturing comes to mind as a modern example that would serve us far better than faster computers. We need to adopt a default attitude that progress is always possible and always something to be sought. Unfortunately, this more sensible and productive approach is politically untenable today. The real problem isn’t intellectual or bound in thoughtful dialog, but rather bound to a deep lack of faith and trust in science and scientists. As a direct result we have poorly thought-through programs focused on marketing and micromanagement. Progress be damned.

There is a very real danger present when we suppress our feelings to act on inspiration in exchange for the “safety” of the status quo.

We risk sacrificing the opportunity to live a more fulfilling and purpose driven life. We risk sacrificing the opportunity to make a difference in the lives of others. We risk sacrificing the beautiful blessing of finding a greater sense of meaning in our own lives.

In short, we run the very real risk of living a life of regret.

― Richie Norton

 

Progress is incremental; then it isn’t

22 Monday Aug 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Taking a new step, uttering a new word, is what people fear most.

― Fyodor Dostoyevsky

The title is a bit misleading so that it could be concise. A more precise one would be “Progress is mostly incremental; then progress can be (often serendipitously) massive.” Without accepting incremental progress as the usual, typical outcome, the massive leap forward is impossible. If incremental progress is not sought as the natural outcome of working with excellence, progress dies completely. The gist of my argument is that attitude and orientation are the key to making things better. Innovation and improvement are the result of having the right attitude and orientation rather than having a plan for them. You cannot schedule breakthroughs, but you can create an environment and work with an attitude that makes them possible, if not likely. The maddening thing about breakthroughs is their seemingly random nature: you cannot plan for them, they just happen, and most of the time they don’t.

For me, the most important aspect of the work environment is the orientation toward excellence and progress. Is work focused on being “the best” or the best we can be? Are we trying to produce “state of the art” results, or are we trying to push the state of the art further? What is the attitude and approach to critique and peer review? What is the attitude toward learning, and adaptively seeking new connections between ideas? How open is the work to accepting, even embracing, serendipitous results? Is the work oriented toward building deep sustainable careers where “world class” expertise is a goal and resources are extended to achieve this end?

Increasingly when I honestly confront all these questions, the answers are troubling. There seems to be an attitude that all of this can be managed, but control of progress is largely an illusion. Usually the answers are significantly oriented away from those that would signify these values. Too often the answers are close to the complete opposite of the “right” ones. What we see is a broad aegis of accountability used to bludgeon the children of progress to death in their proverbial cribs. If accountability isn’t enough to kill progress, compliance is wheeled out as progress’ murder weapon. Used in combination we see advances slow to a crawl, and expertise fail to form where talent and potential were vast. The tragedy of our current system is lost futures: first among the humans whose potential greatness is squandered, and second in the progress and immense knowledge they would have created. Ultimately all of this damage is heaped upon the future in the name of a safety and security that feeds upon pervasive and malignant fear. We are too afraid as a culture to allow people the freedoms needed to be great and do great things.

So much of modern management seems to think that innovation is something to be managed for and that everything can be planned. Like most things where you just try too damn hard, this management approach has exactly the opposite effect. We are unintentionally, but actively, destroying the environment that allows progress, innovation and breakthroughs to happen. The fastidious planning does the same thing. It is a different thing than having a broad goal and charter that pushes toward a better tomorrow. Today we are expected to plan our research like we are building a goddamn bridge! It is not even remotely the same! The result is the opposite of what is intended, and we are getting less for every research dollar than ever before.

Without deviation from the norm, progress is not possible.

― Frank Zappa

In a lot of respects getting to an improved state is really quite simple. Two simple changes in how we plan and how we view success at work can make an enormous difference. First, we need to always strive to improve and get better, whether we are talking personally or in terms of our work. Second, we need to not simply be “state of the art” or “world class”; we need to advance the state of the art, or define what it means to be world class. The driving aim is to strive to be the best and make things better as our default setting. The power of the default setting is incredible. The default is so often the unconscious choice that setting the default may be the single most important decision commonly made. As soon as we accept that we, or our work, are “good enough” and “fit to purpose” we have lost the battle for the future. The frequency of the default setting of “good enough” is sufficient to ensure that mediocrity creeps inevitably into the frame.

A goal ensures progress. But one gets much further without a goal.

― Marty Rubin

A large part of the problem with our environment is an obsession with measuring performance by the achievement of goals or milestones. Instead of working to create a super productive and empowering workplace where people work exceptionally from intrinsic motivation, we simply set “lofty” goals and measure their achievement. The issue is the mindset implicit in the goal setting and measuring; this is the lack of trust in those doing the work. Instead of creating an environment and work processes that enable the best performance, we define everything in terms of milestones. These milestones and the attitudes that surround them sow the seeds of destruction, not because goals are wrong or bad, but because the behavior driven by achieving management goals is so corrosively destructive.

The result is the loss of an environment that can enable the best results, and goal setting that becomes increasingly risk averse. When goals and milestones are used to judge people, they start to set the bar lower to make sure they meet the standard. The better approach is to create the environment, culture and processes that enable the work to be the best, and reap the rewards that flow naturally. Moreover, in the process of creating the environment, culture and process, the workplace is happier as well as higher performing. Intrinsic motivation is harnessed instead of crushed. Everyone benefits from a better workplace and better performance, but we lack the trust needed to do this. Setting goals and milestones simply overemphasizes their achievement and leaves little or no room for the risk necessary for innovation. We find ourselves in a system where innovation is killed by the lack of risk-taking that milestone-driven management creates.

So how does progress really work? The truth is that there are really very few major breakthroughs, and almost none of them are ever planned. Most of the time people simply make incremental changes and improvements, which have small, but positive, effects on what they work on. These are bricks in the wall and gentle nudges to the status quo. Occasionally these small positive changes cause something greater. Occasionally the little thing becomes something monumental and creates a massive improvement. The trick is that you typically can’t tell in advance what little change will have the big impact. Without looking for the small changes as a way of life, and a constant property, the next big thing never comes.

This is the trap of planning. You can’t plan breakthroughs and can’t schedule a better future. Getting to massive improvements is more about creating an environment of excellence and continuous improvement than any sort of change agenda. The key to getting breakthroughs is to get really good people to work on improving the state of the art or state of the knowledge continuously. We need broad and expansive goals with an aspirational character. Instead we have overly specific goals that simply ooze a deep distrust of those conducting the work. With the lack of trust and faith in how the work is done, people retreat to promising the sure thing, or simply the thing they have already accomplished. The death of progress comes from a culture of simply implementing and staying at the state of the art or being world class.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

Lots of examples exist in the technical world, whether it is new numerical methods or technology (GPS, for example). Almost none of these sought to change the World, but they did, by simply taking a key step over a threshold where the change became great. Social movements are another prime example.


Take the fight for marriage equality as a great example of the small things leading to huge changes. A county clerk in New Mexico (Dona Ana County, where Las Cruces is located) stood up and granted marriage licenses to gay and lesbian citizens. This step, along with other small actions across the country, launched a tidal wave of change that culminated in making marriage equality the law for the entire nation.

So the difference is really simple and clear. You must be expanding the state of the art, or defining what it means to be world class. Simply being at the state of the art or world class is not enough. Progress depends on being committed to and actively working at improving upon and defining state-of-the-art and world-class work. Little improvements can lead to the massive breakthroughs everyone aspires toward, and really are the only way to get them. Generally all these things are serendipitous and depend entirely on a culture that creates positive change and prizes excellence. One never really knows where the tipping point is, and getting to the breakthrough depends mostly on the faith that it is out there waiting to be discovered.

 

Be the change that you wish to see in the world.

― Mahatma Gandhi

Getting Real About Computing Shock Waves: Myth versus Reality

18 Thursday Aug 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Taking a new step, uttering a new word, is what people fear most.

― Fyodor Dostoyevsky

Computing the solution to flows containing shock waves used to be exceedingly difficult, and for a lot of reasons it is now modestly difficult. Solutions for many problems may now be considered routine, but numerous pathologies exist, and the limits of what is possible mean that research progress is still vital. Unfortunately there seems to be little interest in making such progress among those funding research; it goes in the pile of solved problems. Worse yet, there are numerous preconceptions about results, and standard practices for how results are presented, that combine to inhibit progress. Here, I will outline places where progress is needed and how people discuss research results in ways that further these inhibitions.

I’ve written on this general topic before, along with general advice on how to make good decisions in designing methods, https://williamjrider.wordpress.com/2015/08/14/evolution-equations-for-developing-improved-high-resolution-schemes-part-1/. In a nutshell, shocks (discontinuities) bring a number of challenges and some difficult realities to the table. To do the best job means making some hard choices that often fly in the face of ideal circumstances. By making these hard choices you can produce far better methods for practical use. It often means sacrificing things that might be nice in an ideal linear world for the brutal reality of a nonlinear world. I would rather have something powerful and functional in reality than something of purely theoretical interest. The published literature seems to be opposed to this point of view, with a focus on many issues of little practical importance.

It didn’t use to be like this. I’ve highlighted the work of Peter Lax before, https://williamjrider.wordpress.com/2015/06/25/peter-laxs-philosophy-about-mathematics/, and it would be an understatement to say that his work paved the way for progress in compressible fluid mechanics. Other fields such as turbulence, solid mechanics, and electromagnetics have all suffered from the lack of similar levels of applied mathematical rigor and foundation. Despite this shining beacon of progress, other fields have failed to build upon this example. Worse yet, the difficulty of extending Lax’s work is monumental. Moving into higher dimensions invariably leads to instability and flow that begins to become turbulent, and turbulence is poorly understood. Unfortunately we are a long way from recreating Lax’s legacy in other fields (see, e.g., https://williamjrider.wordpress.com/2014/07/11/the-2014-siam-annual-meeting-or-what-is-the-purpose-of-applied-mathematics/).

If one takes a long hard look at the problems that pace our modeling and simulation, turbulence figures prominently. We don’t understand turbulence worth a damn. Our physical understanding is terrible and not sufficient to simply turn the problem over to supercomputers to crush (see https://williamjrider.wordpress.com/2016/07/04/how-to-win-at-supercomputing/). In truth, this is an example where our computing hubris considerably exceeds our intellectual grasp. We need significantly greater modeling understanding to power progress. Such understanding is far too often assumed to exist where it does not. Progress in turbulence is stagnant and clearly lacks the key conceptual advances necessary to chart a more productive path. It is vital to do far more than simply turn codes loose on turbulent problems and expect great solutions to come out, because they won’t. Nonetheless, it is the path we are on. When you add shocks and compressibility to the mix, everything gets so much worse. Even the most benign turbulence is poorly understood, much less anything complicated. It is high time to inject some new ideas into the study rather than continue to hammer away at the failed old ones. In closing this vignette, I’ll offer up a different idea: perhaps the essence of turbulence is compressible and associated with shocks rather than being largely divorced from these physics. Instead of building on the basis of the decisively unphysical aspects of incompressibility, turbulence might be better built upon a physical foundation of compressible (thermodynamic) flows with dissipative discontinuities (shocks) that fundamental observations call for and current theories cannot explain.

Further challenges with shocked systems occur with strong shocks, where nonlinearity is ramped up to a level that exposes any lingering shortcomings. Multiple materials are another key physical difficulty that brings any solution methodology’s weaknesses into acute focus. Again and again, the greatest rigor in simpler settings provides a foundation for good performance when things get more difficult. Methods that ignore a variety of difficult and seemingly unfortunate realities will underperform compared to those that confront these realities directly. Usually the methods that underperform simply add more dissipation to overcome things. The dissipation is usually added in a rather heavy-handed manner because it is unguided by theory and works in opposition to unpleasant realities. Rather than seeing the acknowledgment of these realities as pessimism, it should be seen as pragmatism. The result of being irrationally optimistic is always worse than pragmatic realism.

Let’s get to one of the biggest issues that confounds the computation of shocked flows: accuracy, convergence and order of accuracy. For computing shock waves, the order of accuracy is limited to first order for everything emanating from any discontinuity (Majda & Osher 1977). Furthermore, nonlinear systems of equations will invariably and inevitably create discontinuities spontaneously (Lax 1973). In spite of these realities the accuracy of solutions with shocks still matters, yet no one ever measures it. The reasons why it matters are more subtle and refined, and the impact of accuracy is less decisive. When a flow is smooth enough to allow high-order convergence, the accuracy of the solution with high-order methods is unambiguously superior. With smooth solutions the highest-order method is the most efficient if you are solving for equivalent accuracy. When convergence is limited to first order, high-order methods effectively only lower the constant in front of the error term, which is a far less efficient route to accuracy. One then has the situation where the gains with high order must be balanced against the cost of achieving high order. In very many cases this balance is not achieved.

What we see in the published literature is convergence and accuracy only being assessed for smooth problems where the full order of accuracy may be seen. In the cases that are actually driving the development of methods, where shocks are present, accuracy and convergence are ignored. If you look at the published papers and the examples, the order of accuracy is measured and demonstrated on smooth problems almost as a matter of course. Everyone knows that the order of accuracy cannot be maintained with a shock or discontinuity, so no one measures the solution accuracy or convergence. The problem is that these details still matter! You need convergent methods, and you have an interest in the magnitude of the numerical error. Moreover, there are still significant differences in these results on the basis of methodological differences. To up the ante, the methodological differences carry significant changes in the cost of solution. What one finds typically is a great deal of cost to achieve formal order of accuracy that provides very little benefit with shocked flows (see Greenough & Rider 2004, Rider, Greenough & Kamm 2007). This community, in the open or behind closed doors, rarely confronts the implications of this reality. The result is a damper on all progress.
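
Measuring this is not hard. Here is a minimal sketch (in Python) of the kind of quantitative check that could accompany any shocked-flow result; the solver call and the routine producing cell-averaged exact values in the usage comment are hypothetical placeholders for whatever code and exact solution you have at hand.

    import numpy as np

    def l1_error(numerical, exact, dx):
        # Discrete L1 norm of the cell-averaged error on a uniform mesh.
        return float(np.sum(np.abs(np.asarray(numerical) - np.asarray(exact))) * dx)

    def observed_order(err_coarse, err_fine, refinement=2.0):
        # If err ~ C * dx^p, then p = log(err_coarse/err_fine) / log(refinement).
        return float(np.log(err_coarse / err_fine) / np.log(refinement))

    # Usage sketch with hypothetical run_solver(n) and exact_cell_averages(n):
    #   errs  = [l1_error(run_solver(n), exact_cell_averages(n), 1.0 / n)
    #            for n in (100, 200, 400)]
    #   rates = [observed_order(e0, e1) for e0, e1 in zip(errs, errs[1:])]
    # With a shock present, expect rates near one in the L1 norm no matter the
    # formal order of the scheme; any payoff from high order shows up in the
    # size of the error constant, not in the rate.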

The standard for complex flow is well known and has been documented before (i.e., “swirlier is better,” https://williamjrider.wordpress.com/2014/10/22/821/). When combined with our appallingly poor understanding of turbulence, you have a perfect recipe for computing and selling complete bullshit (https://williamjrider.wordpress.com/2015/12/10/bullshit-is-corrosive/). The side dish for the banquet of bullshit is the even broader use of the viewgraph norm (https://williamjrider.wordpress.com/2014/10/07/the-story-of-the-viewgraph-norm/), where nothing quantitative is used for comparing results. At its worst, the viewgraph norm is used in comparing results where an analytical solution is available. So we have a case where an analytical solution is available to do a complete assessment of error, and we ignore its utility, perhaps only using it for plotting. What a massive waste! More importantly it masks problems that need attention.

Underlying this awful practice is a viewpoint that the details and magnitude of the error do not matter. Nothing could be further from the truth: the details matter a lot, and there are huge differences from method to method. All these differences are systematically swept under the proverbial rug. With shock waves one has a delicate balance between the sharpness of the shock and the creation of post-shock oscillations. Allowing a shock wave to be slightly broader can remove many pathologies and produce a cleaner-looking solution, but it also increases the error. Determining the relative quality of the solutions is left to expert pronouncements, and experts determine what is good and bad instead of the data. I’ve written about how to do this right several times before, and it’s not really difficult, https://williamjrider.wordpress.com/2015/01/29/verification-youre-doing-it-wrong/. What ends up being difficult is honestly confronting reality and all the very real complications it brings to the table. It turns out that most of us simply prefer to be delusional.
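
The sharpness-versus-oscillation trade-off can also be put on a quantitative footing instead of being eyeballed. A minimal sketch, assuming you hand it a window of the numerical solution containing a single shock between two known constant states (the function names are mine, not any standard API):

    import numpy as np

    def shock_width_in_cells(profile, left_state, right_state, band=0.05):
        # Count cells lying strictly between the two plateaus, outside a small
        # tolerance band around each; a proxy for how smeared the shock is.
        lo, hi = sorted((left_state, right_state))
        jump = hi - lo
        interior = (profile > lo + band * jump) & (profile < hi - band * jump)
        return int(np.count_nonzero(interior))

    def max_overshoot(profile, left_state, right_state):
        # Largest excursion of the numerical profile beyond the exact plateaus,
        # normalized by the jump; a proxy for post-shock oscillations.
        lo, hi = sorted((left_state, right_state))
        jump = hi - lo
        over = max(float(np.max(profile)) - hi, 0.0)
        under = max(lo - float(np.min(profile)), 0.0)
        return max(over, under) / jump

Reporting a couple of numbers like these alongside error norms would turn expert pronouncements into comparisons anyone can check.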

In the end, shocks are a well-trod field with a great deal of theoretical support for a host of issues of broader application. If one is solving problems in any sort of real setting, the behavior of solutions is similar. In other words, you cannot expect high-order accuracy; almost every solution is converging at first order (at best). By systematically ignoring this issue, we are hurting progress toward better, more effective solutions. What we see over and over again is utility with high-order methods, but only to a degree. Rarely does the fully rigorous achievement of high-order accuracy pay off with better accuracy per unit computational effort. On the other hand, methods which are formally only first-order accurate are complete disasters and virtually useless practically. Is the sweet spot second-order accuracy (Margolin and Rider 2002)? Or second-order accuracy for the nonlinear parts of the solution with a limited degree of high order applied to the linear aspects of the solution? I think so.

Perfection is not attainable, but if we chase perfection we can catch excellence.

― Vince Lombardi Jr.

Lax, Peter D. Hyperbolic systems of conservation laws and the mathematical theory of shock waves. Vol. 11. SIAM, 1973.

Majda, Andrew, and Stanley Osher. “Propagation of error into regions of smoothness for accurate difference approximations to hyperbolic equations.” Communications on Pure and Applied Mathematics 30, no. 6 (1977): 671-705.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

 

The benefits of using “primitive variables”

08 Monday Aug 2016

Posted by Bill Rider in Uncategorized

≈ 6 Comments

 

Simplicity is the ultimate sophistication.

― Clare Boothe Luce

When one is solving problems involving a flow of some sort, conservation principles are quite attractive since these principles follow nature’s “true” laws (true to the extent we know things are conserved!). With flows involving shocks and discontinuities, conservation brings even greater benefits, as the Lax-Wendroff theorem demonstrates (https://williamjrider.wordpress.com/2013/09/19/classic-papers-lax-wendroff-1960/). In a nutshell, you have guarantees about the solution through the use of conservation form that are far weaker without it. A particular set of variables is the obvious choice because these variables arise naturally in conservation form. For fluid flow these are density, momentum and total energy. The most seemingly straightforward thing to do is use these same variables to discretize the equations. This is generally a bad choice and should be avoided unless one does not care about the quality of results.

While straightforward and obvious, the choice of using conserved variables is almost always a poor one, and far better results can be achieved through the use of primitive variables for most of the discretization and approximation work. This is even true if one is using characteristic variables (which usually imply some sort of entirely one-dimensional character). The primitive variables have simple and intuitive physical meaning, and often equate directly to what can be observed in nature (conserved variables don’t). The beauty of primitive variables is that they trivially generalize to multiple dimensions in ways that characteristic variables do not. The other advantages are equally clear, specifically the ability to extend the physics of the problem in a natural and simple manner. This sort of extension usually causes the characteristic approach to either collapse or at least become increasingly unwieldy. A key aspect to keep in mind at all times is that one returns to the conserved variables for the final approximation and update of the equations. Keeping the conservation form for the accounting of the complete solution is essential.

To keep the bulk of the discussion simple, I will focus on the Euler equations of fluid dynamics. These equations describe the conservation of mass, \rho_t + m_x = 0, momentum, m_t + (m^2/\rho + p)_x = 0, and total energy, E_t + \left[m/\rho(E + p) \right]_x = 0, in one dimension. Even in this very simple setting the primitive variables are immensely useful, as demonstrated by H.T. Huynh in another of his massively under-appreciated papers (Huynh 1995). In this paper he masterfully covers the whole of the techniques and utility of primitive variables. Arguably, the use of primitive variables went mainstream with the papers of Colella and Woodward. In spite of the broad appreciation of those papers, the use of primitive variables is still more a niche than common practice. The benefits become manifestly obvious whether one is analyzing the equations (the analysis is equivalent to that of the more complex variable set!) or discretizing the solutions.

Study the past if you would define the future.

― Confucius

The use of the “primitive variables” came from a number of different directions. Perhaps the earliest use of the term “primitive” came from meteorology, in the work of Bjerknes (1921), whose primitive equations formed the basis of early work in computing weather in an effort led by Jule Charney (1955). Another field to use this concept is the solution of incompressible flows. There the primitive variables are the velocities and pressure, distinguished from the vorticity-streamfunction approach (Roache 1972). In two dimensions the vorticity-streamfunction solution is more efficient, but it lacks a simple connection to measurable quantities. The same sort of notion separates the conserved variables from the primitive variables in compressible flow. The use of primitive variables as an effective computational approach may have begun in the computational physics work at Livermore in the 1970’s (see, e.g., DeBar 1974). The connection of the primitive variables to the classical analysis of compressible flows and their simple physical interpretation also plays a role.

What are the primitive variables? The basic conserved variables for compressible fluid flow are density, \rho, momentum, m=\rho u, and total energy, E = \rho e + \frac{1}{2} \rho u^2. Here the velocity is u and the internal energy is e. One also has the equation of state p=P(\rho,e) as the constitutive relation. Let’s take the Euler equations and rewrite them using the primitive variables: the conservation of mass, \rho_t + (\rho u)_x = 0, momentum, (\rho u)_t + (\rho u^2 + p)_x = 0, and total energy, \left[\rho (e + \frac{1}{2}u^2)\right]_t + \left[u\left(\rho (e + \frac{1}{2}u^2)+ p\right) \right]_x = 0. Except for the energy equation, the expressions are simpler to work with, but this is the veritable tip of the proverbial iceberg.
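
As a concrete aside, the bookkeeping between the two variable sets is only a few lines of code. A minimal sketch in Python, assuming an ideal-gas closure for the equation of state (the general case would call whatever p = P(\rho, e) you have); the functions work elementwise on NumPy arrays as well as on plain floats:

    def conserved_to_primitive(rho, mom, E, gamma=1.4):
        # (rho, m, E) -> (rho, u, p); the ideal gas stands in for a general EOS.
        u = mom / rho
        e = E / rho - 0.5 * u**2            # specific internal energy
        p = (gamma - 1.0) * rho * e         # p = P(rho, e) for an ideal gas
        return rho, u, p

    def primitive_to_conserved(rho, u, p, gamma=1.4):
        # (rho, u, p) -> (rho, m, E); the inverse map used for the final update.
        mom = rho * u
        E = p / (gamma - 1.0) + 0.5 * rho * u**2
        return rho, mom, E

The point of the rest of the post is that the work of reconstruction and analysis is done on (\rho, u, p), while the update and accounting stay in (\rho, m, E); these two small maps are the glue between them.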

What are the equations for the primitive variables? The primitive variables can be evolved using simpler equations. These are evolution equations that depend on differentiability, which must be present for any sort of accuracy to be in play anyway. The mass equation is the same, although one might expand the derivative, \rho_t + u \rho_x + \rho u_x = 0. The momentum equation is replaced by an equation of motion, u_t + u u_x + \frac{1}{\rho} p_x = 0. The energy equation can be replaced with a pressure equation, p_t + u p_x + \gamma p u_x = 0 (here \gamma is the generalized adiabatic index defined by \gamma p = \rho c^2, with the sound speed given by the isentropic derivative c^2 = \partial_\rho p |_S), or an internal energy equation, \rho e_t + \rho u e_x + p u_x = 0. One can use either energy representation to good effect, or better yet, use both and avoid having to evaluate the equation of state. Moreover, if one wants, the difference between the pressure from the evolution equation and the pressure from the state relation can be evaluated as an error measure.

How does one convert to the primitive variables, and convert back to the conserved variables? If one is interested in analysis of the conservative equations, then one linearizes the equations about a point, U_t + \left(F(U)\right)_x = 0 \rightarrow U_t + \partial_U F(U) U_x = 0, where U is the vector of conserved variables and F(U) is the flux function. The matrix A_c = \partial_U F(U) is the flux Jacobian. One does an eigenvalue decomposition, A_c = R_c \Lambda L_c, to analyze the equations. From this decomposition one can get the eigenvalues, \Lambda, and the characteristic variables, L_c \Delta U. The analysis is difficult and non-intuitive with the conserved variables.

Here we get to the cool part of this whole thing: there is a much easier and more intuitive path through the primitive variables. One can collect the primitive variables in a vector, which I’ll call V, and write the quasi-linear form V_t + A_p V_x = 0. One can get the terms in A_p easily from the differential forms, and recognizing that \gamma p = \rho c^2, with c being the speed of sound, the eigen-analysis is so simple that it can be done by hand (and it’s a total piece of cake for Mathematica). Using similar notation as the conserved form, A_p = R_p \Lambda L_p. The first thing to note is that \Lambda is exactly the same, i.e., the eigenvalues are identical. One then gets a result for the characteristics, L_p \Delta V, that matches the textbooks, and L_p \Delta V = L_c \Delta U. All the differences in the transformation are bound up in the right eigenvectors R_c and R_p, and the ease of physical insight provided by the analysis.
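
For readers who prefer to see it worked out, here is a small symbolic sketch of that eigen-analysis (Python with SymPy standing in for the Mathematica computation mentioned above); the matrix A_p is assembled directly from the three primitive evolution equations given earlier:

    import sympy as sp

    rho, u, p, gamma = sp.symbols('rho u p gamma', positive=True)

    # Quasi-linear primitive system V_t + A_p V_x = 0 for V = (rho, u, p):
    #   rho_t + u rho_x + rho u_x     = 0
    #   u_t   + u u_x   + (1/rho) p_x = 0
    #   p_t   + u p_x   + gamma p u_x = 0   (gamma p = rho c^2)
    Ap = sp.Matrix([[u, rho,     0],
                    [0, u,       1/rho],
                    [0, gamma*p, u]])

    # Diagonalize: Ap = Rp * Lam * Rp^{-1}, so the left eigenvectors are Lp = Rp^{-1}.
    Rp, Lam = Ap.diagonalize()
    Lp = Rp.inv()

    # Eigenvalues u - c, u, u + c with c = sqrt(gamma p / rho) (ordering may vary).
    print(Lam)

Running this reproduces the textbook wave speeds, and the rows of Lp give the characteristic combinations L_p \Delta V referred to above.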

Now we can elucidate how to move between these two forms, and even use the primitive variables for the analysis of the conserved form directly. Using Huynh’s paper as a guide and repeating the main results, one defines a matrix of partial derivatives of the conserved variables, U, with respect to the primitive variables, V: M = \partial_V U. This matrix can be inverted to give M^{-1}, and we may then define the identity A_c = M A_p M^{-1}, which allows the conserved eigen-analysis to be executed in terms of the more convenient primitive variables. The eigenvalues of A_c and A_p are the same. We can get the left and right eigenvectors through L_c = L_p M^{-1} and R_c = M R_p. All of this follows from the simple application of the chain rule to the linearized versions of the governing equations.
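
A short symbolic check of that identity is easy to do (again Python/SymPy, and again adopting an ideal gas so that E can be written explicitly in terms of the primitive variables; that closure is an assumption of this sketch, not a restriction of the approach):

    import sympy as sp

    rho, u, p, gamma = sp.symbols('rho u p gamma', positive=True)

    # Primitive quasi-linear matrix A_p (as in the previous sketch).
    Ap = sp.Matrix([[u, rho,     0],
                    [0, u,       1/rho],
                    [0, gamma*p, u]])

    # Conserved variables U = (rho, m, E) written in terms of V = (rho, u, p),
    # using the ideal gas E = p/(gamma-1) + rho*u^2/2 purely for concreteness.
    U_of_V = sp.Matrix([rho, rho*u, p/(gamma - 1) + rho*u**2/2])
    M = U_of_V.jacobian(sp.Matrix([rho, u, p]))      # M = dU/dV

    # Flux Jacobian A_c = dF/dU built in conserved variables, then evaluated
    # at the same state.
    R, Mo, E = sp.symbols('R Mo E', positive=True)
    p_of_U = (gamma - 1)*(E - Mo**2/(2*R))
    F = sp.Matrix([Mo, Mo**2/R + p_of_U, Mo/R*(E + p_of_U)])
    Ac = F.jacobian(sp.Matrix([R, Mo, E])).subs(
        {R: rho, Mo: rho*u, E: p/(gamma - 1) + rho*u**2/2})

    # Verify the identity A_c = M A_p M^{-1} from the text (prints the zero matrix).
    print((Ac - M*Ap*M.inv()).applyfunc(sp.simplify))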

The primitive variable idea can be extended in a variety of nifty and useful ways. One can augment the variable set in ways that yield some extra efficiency by avoiding extra evaluations of the constitutive (or state) relations. This would most classically involve using both a pressure and an energy equation in the system. Miller and Puckett provide a nice example of this technique in practice, building upon the work of Colella, Glaz and Ferguson, where expensive equation of state evaluations are avoided. One must note that the augmented system being used to discretize the equations carries redundant information that may have utility beyond efficiency.

One can go beyond this to add variables to the system of equations that are redundant, but carry information implicit in their approximation that may be useful in solving the equations. One might add an equation for the specific volume of the fluid to compare with density. Similar things could be done with kinetic energy, vorticity, or entropy. In each case the redundancy might be used to discover or estimate error or smoothness of the underlying solution, and perhaps adapt the solution method on the basis of this information.
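
As a tiny illustration of using redundancy this way, suppose both an internal energy and a pressure equation have been evolved. The mismatch between the evolved pressure and the pressure recomputed from the equation of state is then a cheap local indicator; the sketch below assumes an ideal gas and uses names of my own choosing:

    import numpy as np

    def thermodynamic_consistency_indicator(rho, e, p_evolved, gamma=1.4):
        # Relative mismatch between the evolved pressure and the EOS pressure
        # P(rho, e); large values flag cells where the solution is rough or the
        # discretization is struggling, and could drive adaptivity or limiting.
        p_eos = (gamma - 1.0) * rho * e
        return np.abs(p_evolved - p_eos) / np.maximum(np.abs(p_eos), 1.0e-12)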

Using the primitive variables for discretization is almost as good as using characteristic variables in terms of solution fidelity. Generally, if you can get away with 1-D ideas, the characteristic variables are unambiguously the best. The primitive variables are almost as good. The key is to use a local transformation to the primitive variables for the work of discretization even when your bookkeeping is all in conserved variables. Even if you are using characteristic variables, their construction and use is enabled by the primitive variables. The resulting expressions for the characteristics are simpler in primitive variables. Perhaps almost as important, the expressions for the variables themselves are far more intuitive in primitive variables.

A real source of power of the primitive variables comes when you extend past the simpler case of the Euler equations to things like magnetohydrodynamics (MHD, i.e., compressible magnetized fluids). Discretizing MHD with conserved variables is a severe challenge, and analysis of its mathematical characteristic structure can be a descent into utter madness. Doing the work in these more complex systems using the primitive variables is extremely advantageous. It is an approach that is far too often left out, and the quality and fidelity of numerical methods suffer as a result.

Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction.

― Ernst F. Schumacher

Lax, Peter, and Burton Wendroff. “Systems of conservation laws.” Communications on Pure and Applied Mathematics 13, no. 2 (1960): 217-237.

Huynh, Hung T. “Accurate upwind methods for the Euler equations.” SIAM Journal on Numerical Analysis 32, no. 5 (1995): 1565-1619.

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.

Van Leer, Bram. “Upwind and high-resolution methods for compressible flow: From donor cell to residual-distribution schemes.” Communications in Computational Physics 1, no. 2 (2006): 192-206.

Bjerknes, V. “The meteorology of the temperate zone and the general atmospheric circulation. 1.” Monthly Weather Review 49, no. 1 (1921): 1-3.

Charney, J. “The use of the primitive equations of motion in numerical prediction.” Tellus 7, no. 1 (1955): 22-26.

Roache, Patrick J. Computational Fluid Dynamics. Hermosa Publishers, 1972.

DeBar, R. B. Method in two-D Eulerian hydrodynamics. No. UCID-19683. Lawrence Livermore National Lab., CA (USA), 1974.

Miller, Gregory Hale, and Elbridge Gerry Puckett. “A high-order Godunov method for multiple condensed phases.” Journal of Computational Physics 128, no. 1 (1996): 134-164.

Colella, P., H. M. Glaz, and R. E. Ferguson. “Multifluid algorithms for Eulerian finite difference methods.” Preprint (1996).

 
