
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Can we overcome toxic culture before it destroys us?

09 Friday Dec 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Then the shit hit the fan.

― John Kenneth Galbraith

I’m an unrelenting progressive. This holds true for politics, work and science, where I always see a way for things to get better. I’m very uncomfortable with just sitting back and appreciating how things are. Many whom I encounter see this as a degree of pessimism since I see the shortcomings in almost everything. I keenly disagree with this assessment. I see my point of view as optimism. It is optimism because I know things can always get better, always improve and constantly achieve a better end state. The people I rub the wrong way are the proponents of the status quo, who see the current state of affairs as just fine. The difference in worldview is really between my deep-reaching desire for a better world and a belief that the world is good enough already. Often the greatest enemy of getting to a better world is a culture that is a key element of the world as it exists. Change comes whether the culture wants it or not, and problems arise when the prevailing culture is unfit for these changes. Overcoming culture is the hardest part of change; even when the culture is utterly toxic, it opposes changes that would make things better.


I’ve spent a good bit of time recently contemplating the unremitting toxicity of our culture. We have suffered through a monumental presidential election with two abysmal candidates, both despised by a majority of the electorate. The winner is an abomination of a human being, clearly unfit for a public office worthy of respect. He is totally unqualified for the position he will hold, and will likely be the most corrupt person to ever hold the job. The loser was thoroughly qualified, potentially corrupt too, and would have had a failed presidency because of the toxic political culture in general. We have reaped this entire legacy by allowing our public and political institutions to wither for decades. It is arguable that this erosion is the willful effort of those charged by the public with governing us. Among the institutions under siege and damaged in our current era are the research institutions where I work. These institutions have cultures from a bygone era, completely unfit for the modern world yet unmoving and unevolved in the face of new challenges.

This sentiment of dysfunction applies to the obviously toxic public culture, but to workplace culture too. In the workplace the toxicity is often cloaked in a tidy professional wrapper, and seems wondrously nice, decent and completely OK. Often this professional wrapper shows itself as horribly passive-aggressive behavior that the organization basically empowers and endorses. The problem is not the behavior of the people in the culture toward each other, but the nature of the attitude toward work. Quite often we have a layered approach that puts a well-behaved, friendly face on the complete disempowerment of employees. Increasingly the people working in the trenches are merely cannon fodder, and everything important to the work happens with managers. Where I work, the toxicity of the workplace and politics collide to produce a double whammy. We are under siege from a political climate that undermines institutions and a business-management culture that undermines the power of the worker.

Great leaders create great cultures regardless of the dominant culture in the organization.

― Bob Anderson

I’m reminded of the quote “culture eats strategy” (attributed to Peter Drucker) and wonder whether or not anything can be done to cure our problems without first addressing the toxicity of the underlying culture. I’ll hit upon a couple examples of the toxic cultures in the workplace and society in general. Both of these stand in opposition to a life well led. No amount of concrete strategy and clarity of thought can allow progress when the culture opposes it.

I am embedded in a horribly toxic workplace culture, which reflects a deeply toxic broader public culture. Our culture at work is polite and reserved, to be sure, but toxic to all the principles our managers promote. Recently a high-level manager espoused a set of high-level principles to support: diversity & inclusion, excellence, leadership, and partnership & collaboration. None of these principles is actually seen in reality, and everything about how our culture operates opposes them. Truly leading and standing for the values espoused with such eloquence, by identifying and removing the barriers to their actual realization, would be a welcome remedy to the usual cynical response. Instead the reality is completely ignored and the fantasy of living up to such values is promoted. It is not clear whether the manager knows the promoted values are fiction, or is so disconnected from reality that they believe the fiction. Either situation is utterly damning, and the end result is the same: no actions to remove the toxic culture are ever taken, and the culture’s role in undermining the values is never acknowledged.

In a starkly parallel sense we have an immensely toxic culture in our society today. The two toxic cultures certainly have connections, and the societal culture is far more destructive. We have all witnessed the most monumental political event of our lives resulting directly from the toxic culture playing out. The election of a thoroughly toxic human being as President is a great exemplar of the degree of dysfunction today. Our toxic culture is spilling over into societal decisions that may have grave implications for our combined future. One outcome of the toxic societal choice could be a sequence of events that will induce a crisis of monumental proportions. Such crises can be useful in fixing problems and destroying the toxic culture, and allowing its replacement by something better. Unfortunately such crises are painful, destructive and expensive. People are killed. Lives are ruined and pain is inflicted broadly. Perhaps this is the cost we must bear in the wake of allowing a toxic culture to fester and grow in our midst.

Reform is usually possible only once a sense of crisis takes hold…. In fact, crises are such valuable opportunities that a wise leader often prolongs a sense of emergency on purpose.

― Charles Duhigg

Cultures are usually developed, defined and encoded through the resolution of crisis. In these crises old cultures fade, replaced by a new culture that succeeds in assisting the resolution of the crisis. If the resolution of the crisis is viewed as a success, the culture becomes a monument to that success. People wishing to succeed adopt the cultural norms and reinforce the culture’s hold. Over time such cultural touchstones become aged and incapable of dealing with modern reality. We see this problem in spades today, both in the workplace and society-wide. The older culture in place cannot deal effectively with the realities of today. Changes in economics, technology and populations are creating a set of challenges for older cultures, which these older cultures are unfit to manage. Seemingly we are being plunged headlong toward a crisis necessary to resolve the cultural inadequacies. The problem is that the crisis will be an immensely painful and horrible circumstance. We may simply have no choice but to go through it, and hope we have the wisdom and strength to get to the other side of the abyss.

Crisis is Good. Crisis is a Messenger

― Bryant McGill

A crisis is a terrible thing to waste.

― Paul Romer

What can be done about undoing these toxic cultures without crisis? The usual remedy for a toxic culture is a crisis that demands effective action. This is an unpleasant prospect whether you are part of an organization or a country, but it is the course we find ourselves on. One of the biggest problems with the toxic culture issue is its self-defeating nature. The toxic culture defends itself. Our politicians and managers are creatures whose success has been predicated on the toxic culture. These people are almost completely incapable of making the necessary decisions for avoiding the sorts of disasters that characterize a crisis. The toxic culture and those who succeed in it are unfit to resolve crises successfully. Our leaders are the most successful people in the toxic culture and act to defend such cultures in the face of overwhelming evidence that the culture is toxic. As such they do nothing to avoid the crisis even when it is obvious, and make the eventual disaster inevitable.

Can we avoid this? I hope so, but I seriously doubt it. I fear that events will eventually unfold that will have us longing for the crisis to rescue us from the slow-motion zombie existence today’s public and workplace cultures inflict on all of us.

The Chinese use two brush strokes to write the word ‘crisis.’ One brush stroke stands for danger; the other for opportunity. In a crisis, be aware of the danger–but recognize the opportunity.

― John F. Kennedy

We are ignoring the greatest needs & opportunities for improving computational science

02 Friday Dec 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

 

We are lost, but we’re making good time.

― Star Trek V

Lately I’ve been doing a lot of thinking about the focus of research. Time and time again the entirety of our current focus seems to be driven by things that are not optimal. Little active or critical thought has been applied to examining the best path forward. If progress is to be made, a couple of questions should be central to our choices: is there a distinct opportunity for progress? And would that progress produce a large impact? Good choices would combine the opportunity for successful progress with the impact and importance of the work. This simple principle in decision-making would make a huge difference in improving our choices.

The significant problems we face cannot be solved at the same level of thinking we were at when we created them.

– Albert Einstein

Examples of the two properties of opportunity and impact coming together abound in the history of science. The archetype of this would be the atomic bomb coming from the discoveries of basic scientific principles combined with overwhelming need in the socio-political worlds. At the end of the 19th century and beginning of the 20th century a massive revolution occurred in physics fundamentally changing our knowledge of the universe. The needs of global conflict pushed us to harness this knowledge to unleash the power of the atom. Ultimately the technology of atomic energy became a transformative political force probably stabilizing the world against massive conflict. More recently, computer technology has seen a similar set of events play out in a transformative way first scientifically, then in engineering and finally in profound societal impact we are just beginning to see unfold.

If we pull our focus into the ability of computational power to transform science, we can easily see the failure to recognize these elements in current ideas. We remain utterly tied to the pursuit of Moore’s law even as it lies in the morgue. Rather than examine the needs of progress, we remain tied to the route taken in the past. The focus of work has become ever more computer (machine) directed, and other more important and beneficial activities have withered from lack of attention. In the past I’ve pointed out the greater importance of modeling, methods, and algorithms in comparison to machines. Today we can look at another angle on this, the time it takes to produce useful computational results, or workflow.

Simultaneous to this unhealthy obsession, we ignore far greater opportunities for progress sitting right in front of us. The most time-consuming part of computational studies is rarely the execution of the computer code. The time-consuming part of the solution to a problem is defining the model to be solved (often generating meshes), and analyzing the results from any solution. If one actually wishes to engage in rigorous V&V because the quality of the results really matters, the focus would be dramatically different (working off the observation that V&V yields diminishing returns for speeding up computations). If one takes the view that V&V is simply the scientific method, the time demands only increase, dramatically, and the gravity of engaging in time-consuming activities only grows. What we suffer from is magical thinking on the part of those who “lead” computational science, by ignoring what should be done in favor of what can be more easily funded. This is not leadership, but rather the complete abdication of it.

When we look at the issue we are reminded of Amdahl’s law, which basically states that when a program is dominated by a single part, the parts you can’t speed up eventually control the overall speed no matter how much you optimize the rest. Today we focus on speeding up the computation, which isn’t even the dominant cost in computational science. We are putting almost no effort into speeding up the parts of computational science that take all the time. As a result the efforts put into improving computation will yield fleeting benefits to the actual conduct of science. This is a tragedy of lost opportunity. There is a common lack of appreciation for actual utility in research that arises from the naïve and simplistic view of how computational science is done. This view arises from the marketing of high performance computing work as basically requiring only a single magical calculation from which science almost erupts spontaneously. Of course this never happens, and the lack of scientific process in computational science is a pox on the field.
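
To make the point concrete, here is a minimal sketch of Amdahl’s law applied to the whole workflow rather than to a single code. The 20/80 split between computation and setup/analysis below is an illustrative assumption, not a measurement.

```python
# Minimal sketch of Amdahl's law: the overall speedup is limited by the
# fraction of the workflow you cannot accelerate. The fractions here are
# illustrative guesses, not measured data.

def amdahl_speedup(accelerated_fraction, factor):
    """Overall speedup when only `accelerated_fraction` of the work is sped up by `factor`."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

if __name__ == "__main__":
    # Suppose running the code is 20% of an analyst's end-to-end time and the
    # other 80% is problem setup, meshing and analysis of the results.
    compute_fraction = 0.2
    for factor in (10, 100, 1000):
        print(f"compute sped up {factor:5d}x -> "
              f"workflow speedup {amdahl_speedup(compute_fraction, factor):.2f}x")
    # Even an infinite speedup of the compute step caps the workflow gain at
    # 1 / (1 - 0.2) = 1.25x.
```

Even under this generous assumption, an exascale-class speedup of the compute step buys almost nothing for the end-to-end time to a defensible result.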

For engineering calculations with complex geometries, the time to develop a model often runs to months. In many cases this time budget is dominated by mesh generation. There are aspects of trial and error where putative meshes are defined, tested and then refined. On top of this, the specification of the physical modeling of the problem is immensely time-consuming. Testing and running a computational model more quickly can come in handy, as can faster mesh generation, but the human element in these practices is usually the choke point. We see precious little effort to do anything consequential to impact this part of the effort in computational science. For many problems this is the single largest component of the effort.

Once the model has been crafted and solved via computation, the results need to be analyzed and understood. Again, the human element in this practice is key. Effort in computing today for this purpose is concentrated in visualization technology. This may be the simplest and clearest example of the overwhelmingly transparent superficiality of current research. Visualization is useful for marketing science, but produces stunningly little actual science or engineering. We are more interested in funding tools for marketing work than actually doing work. Tools for extracting useful engineering or scientific data from calculations usually languish. They have little “sex appeal” compared to flashy visualization, but carry all the impact on the results that matter. If one is really serious about V&V, all of these issues are compounded dramatically. For doing hard-nosed V&V, visualization has almost no value whatsoever.

If you are inefficient, you have a right to be afraid of the consequences.

― Murad S. Shah

In the end all of this is evidence that current high performance computing programs have little interest in actual science or engineering. They are hardware focused because the people leading them like hardware and don’t care about or understand science and engineering. The people running the show are little more than hardware-obsessed “fan boys” who care little about science. They succeed because of a track record of selling hardware-focused programs, not because it is the right thing to do. The role of computation in science should be central to our endeavor instead of a sideshow that receives little attention and less funding. Real leadership would provide a strong focus on completing important work that could impact the bottom line: doing better science with computational tools.

He who is not satisfied with a little, is satisfied with nothing.

― Epicurus

 

Dissipation isn’t bad or optional

24 Thursday Nov 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Von Neumann told Shannon to call his measure entropy, since “no one knows what entropy is, so in a debate you will always have the advantage.”

― Jeremy Campbell

Too often in seeing discourse about numerical methods, one gets the impression that dissipation is something to be avoided at all costs. Calculations are constantly under attack for being too dissipative. Rarely does one ever hear about calculations that are not dissipative enough. A reason for this is the tendency for too little dissipation to cause outright instability, contrasted with too much dissipation with low-order methods. In between too little dissipation and instability are a wealth of unphysical solutions, oscillations and terrible computational results. These results may be all too common because of people’s standard disposition toward dissipation. The problem is that too few among the computational cognoscenti recognize that too little dissipation is as poisonous to results as too much (maybe more).

Why might I say that it is more problematic than too much dissipation? A big part of the reason is the physical realizability of solutions. A solution with too much dissipation is utterly physical in the sense that it can be found in nature. The solutions with too little dissipation more often than not cannot be found in nature. This is not because those solutions are unstable; they are stable, and have some dissipation, but they simply aren’t dissipative enough to match natural law. What many do not recognize is that natural systems actually produce a large amount of dissipation without regard to the size of the mechanisms for explicit dissipative physics. This is both a profound physical truth, and the result of acute nonlinear focusing. It is important for numerical methods to recognize this necessity. Furthermore, this fact of nature reflects an uncomfortable coming together of modelling and numerical methods that many simply choose to ignore as an unpleasant reality.

In this house, we obey the laws of thermodynamics!

– Homer Simpson

Entropy stability is an increasingly important concept in the design of robust, accurate and convergent methods for solving systems defined by nonlinear conservation laws (see Tadmor 2016). The schemes are designed to automatically satisfy an entropy inequality that comes from the second law of thermodynamics, d S/d t \le 0. Implicit in the thinking about the satisfaction of the entropy inequality is a view that approaching the limit d S/d t = 0 as viscosity becomes negligible (i.e., inviscid) is desirable. This is a grave error in thinking about the physical laws of direct interest, because the solutions of conservation laws do not satisfy this limit when flows are inviscid. Instead the solutions of interest (i.e., weak solutions with discontinuities) in the inviscid limit approach a solution where the entropy production is proportional to the variation in the large-scale solution cubed, d S/d t \le C \left(\Delta u\right)^3. This scaling appears over and over in the solution of conservation laws including Burgers’ equation, the equations of compressible flow, MHD, and incompressible turbulence (Margolin & Rider, 2002). The seeming universality of these relations and their implications for numerical methods are discussed below in more detail, and the profound implications for turbulence modelling are explored in detail in the context of implicit LES (our book edited by Grinstein, Margolin & Rider, 2007). Valid solutions will invariably produce the inequality, but the route to achievement varies greatly.
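
A worked example makes the cubic scaling concrete. For the inviscid Burgers’ equation with the quadratic entropy pair \eta = u^2/2, q = u^3/3, a single shock joining states u_L > u_R moves at the Rankine–Hugoniot speed s = (u_L + u_R)/2, and a direct calculation of the entropy production across it gives (a standard result, sketched here for this one simple case):

```latex
% entropy production rate across a single Burgers shock
% (entropy fluxes through the far boundaries subtracted out)
\frac{dS}{dt} = s\,(\eta_L - \eta_R) - (q_L - q_R)
             = \frac{(u_L + u_R)^2 (u_L - u_R)}{4}
             - \frac{(u_L - u_R)\left(u_L^2 + u_L u_R + u_R^2\right)}{3}
             = -\frac{(u_L - u_R)^3}{12} \le 0 .
```

The dissipation is proportional to the jump cubed and contains no viscosity at all; it is set entirely by the large-scale solution.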

The satisfaction of the entropy inequality can be achieved in a number of ways, and the one most worth avoiding is oscillations in the solution. Oscillatory solutions from nonlinear conservation laws are as common as they are problematic. In a sense, the proper solution is a strong attractor, and solutions will adjust to produce the necessary amount of dissipation. One vehicle for entropy production is oscillations in the solution field. Such oscillations are unphysical and can result in a host of issues undermining other physical aspects of the solution, such as the positivity of quantities like density and pressure. They are to be avoided to whatever degree possible. If explicit action isn’t taken to avoid oscillations, one should expect them to appear.

There ain’t no such thing as a free lunch.

― Pierre Dos Utt

A more proactive approach to dissipation leading to entropy satisfaction is generally desirable. Another path toward entropy satisfaction is offered by numerical methods in control volume form. For second-order numerical methods, analysis of the approximation via the modified equation methodology unveils nonlinear dissipation terms that provide the necessary form for satisfying the entropy inequality via a nonlinearly dissipative term in the truncation error. This truncation error takes the form C u_x u_{xx}, which integrates to replicate inviscid dissipation as a residual term in the “energy” equation, C\left(u_x\right)^3. This term comes directly from being in conservation form and disappears when the approximation is in non-conservative form. In large part the outsized success of these second-order methods is related to this character.
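
A minimal sketch of that integration, assuming periodic boundaries and taking the quoted truncation term at face value:

```latex
% modified equation: u_t + f(u)_x = C\,\Delta x^2\, u_x u_{xx} + O(\Delta x^3)
% multiply by u and integrate over a period; the conservative flux term drops out
\frac{d}{dt}\int \tfrac{1}{2}u^2\,dx
  = C\,\Delta x^2 \int u\,u_x u_{xx}\,dx
  = \frac{C\,\Delta x^2}{2}\int u\,\partial_x\!\left(u_x^2\right)dx
  = -\frac{C\,\Delta x^2}{2}\int \left(u_x\right)^3 dx .
```

The residual is proportional to the cube of the gradient, matching the C\left(u_x\right)^3 term in the “energy” equation noted above.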

Other options to add this character to solutions may be achieved by an explicit nonlinear (artificial) viscosity or through a Riemann solver. The nonlinear hyperviscosities discussed before on this blog work well. One of the pathological misconceptions in the community is the belief that the specific form of the viscosity matters. This thinking infests direct numerical simulation (DNS), where it perhaps should, but the reality is that the form of the dissipation is largely immaterial to establishing physically relevant flows. In other words, inertial-range physics does not depend upon the actual form or value of the viscosity; its impact is limited to the small scales of the flow. Each approach has distinct benefits as well as shortcomings. The key thing to recognize is the necessity of taking some sort of conscious action to achieve this end. The benefits and pitfalls of the different approaches are discussed below, and recommended actions are suggested.

Enforcing the proper sort of entropy production through Riemann solvers is another possibility. A Riemann solver is simply a way of upwinding for a system of equations. For linear interaction modes the upwinding is purely a function of the characteristic motion in the flow, and induces a simple linear dissipative effect. This shows up as a linear even-order truncation error in modified equation analysis where the dissipation coefficient is proportional to the absolute value of the characteristic speed. For nonlinear modes in the flow, the characteristic speed is a function of the solution, which induces a set of entropy considerations. The simplest and most elegant condition is due to Lax, which says that the characteristics dictate that information flows into a shock. In a Lagrangian frame of reference, for a right-running shock this looks like c_{\mbox{left}} > c_{\mbox{shock}} > c_{\mbox{right}}, with c being the sound speed. It has a less clear, but equivalent, form through a nonlinear sound speed, c(\rho) = c(\rho_0) + \frac{\Delta \rho}{\rho} \frac{\partial \rho c}{\partial \rho}. The differential term describes the fundamental derivative, which describes the nonlinear response of the sound speed to the solution itself. The same condition can be seen in a differential form and dictates some essential sign conventions in flows. The key is that these conditions have a degree of equivalence. The differential form lacks the simplicity of Lax’s condition, but establishes a clear connection to artificial viscosity.
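
As a concrete illustration of upwinding supplying the dissipation, here is a minimal sketch of a first-order finite-volume update for Burgers’ equation using the local Lax-Friedrichs (Rusanov) flux, arguably the simplest Riemann-solver-style flux; the grid, initial data and CFL number are illustrative choices, not recommendations.

```python
import numpy as np

# Minimal sketch: first-order finite-volume scheme for Burgers' equation,
# u_t + (u^2/2)_x = 0, with the local Lax-Friedrichs (Rusanov) flux.
# The dissipation is proportional to the local characteristic speed |u|,
# the simplest example of a Riemann-solver-style entropy mechanism.

def flux(u):
    return 0.5 * u * u

def rusanov(uL, uR):
    # Interface flux: central average plus dissipation scaled by the
    # largest local wave speed.
    a = np.maximum(np.abs(uL), np.abs(uR))
    return 0.5 * (flux(uL) + flux(uR)) - 0.5 * a * (uR - uL)

def step(u, dx, dt):
    # Periodic boundaries; face i+1/2 sits between cells i and i+1.
    F = rusanov(u, np.roll(u, -1))
    return u - dt / dx * (F - np.roll(F, 1))

if __name__ == "__main__":
    n = 200
    dx = 1.0 / n
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = 1.5 + np.sin(2.0 * np.pi * x)   # smooth data that steepens into a shock
    t, t_end = 0.0, 0.3
    while t < t_end:
        dt = min(0.4 * dx / np.max(np.abs(u)), t_end - t)  # CFL-limited step
        u = step(u, dx, dt)
        t += dt
    # The discrete "energy" decays even though no explicit viscosity was added.
    print("0.5 * sum(u^2) * dx =", 0.5 * np.sum(u * u) * dx)
```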

The key to this entire discussion is realizing that dissipation is a fact of reality. Avoiding it is simply a demonstration of an inability to confront the non-ideal nature of the universe. This is simply contrary to progress and a sign of immaturity. Let’s just deal with reality.

The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

– Sir Arthur Stanley Eddington

References

Tadmor, E. (2016). Entropy stable schemes. Handbook of Numerical Analysis.

Margolin, L. G., & Rider, W. J. (2002). A rationale for implicit turbulence modelling. International Journal for Numerical Methods in Fluids, 39(9), 821-841.

Grinstein, F. F., Margolin, L. G., & Rider, W. J. (Eds.). (2007). Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press.

Lax, P. D. (1973). Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves (Vol. 11). SIAM.

Harten, A., Hyman, J. M., Lax, P. D., & Keyfitz, B. (1976). On finite-difference approximations and entropy conditions for shocks. Communications on Pure and Applied Mathematics, 29(3), 297–322.

Dukowicz, J. K. (1985). A general, non-iterative Riemann solver for Godunov’s method. Journal of Computational Physics, 61(1), 119–137.

A Single Massive Calculation Isn’t Science; it is a tech demo

17 Thursday Nov 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

People almost invariably arrive at their beliefs not on the basis of proof but on the basis of what they find attractive.

― Blaise Pascal

When we hear about supercomputing, the media focus and press releases are always about massive calculations. Bigger is always better, with as many zeros as possible and some sort of exotic prefix for the rate of computation: mega, tera, peta, exa, zetta,… Up and to the right! The implicit proposition is that the bigger the calculation, the better the science. This is quite simply complete and utter bullshit. These big calculations providing the media footprint for supercomputing and winning prizes are simply stunts, or more generously technology demonstrations, and not actual science. Scientific computation is a much more involved and thoughtful activity involving lots of different calculations, many at a vastly smaller scale. Rarely, if ever, do the massive calculations come as a package including the sorts of evidence science is based upon. Real science has error analysis and uncertainty estimates, and in this sense the massive calculations do a disservice to computational science by skewing the picture of what science using computers should look like.

This post aims to correct this rather improper vision, and replace it with a discussion of what computational science should be.

With a substantial amount of focus on the drive toward the first exascale supercomputer, it is high time to remind everyone that a single massive calculation is a stunt meant to sell the purchase of said computers, and not science. This week the supercomputing community is meeting in Salt Lake City for a trade show masquerading as a scientific conference. It is simply another in a phalanx of echo chambers we seem to form with increasing regularity across every sector of society. I’m sure the cheerleaders for supercomputing will be crowing about the transformative power of these computers and the boon for science they represent. There will be celebrations of enormous calculations and pronouncements about their scientific value. There is a certain lack of political correctness to the truth about all this; it is mostly pure bullshit.

The entire enterprise pushing toward exascale is primarily a technology push program. It is a furious and futile attempt to stave off the death of Moore’s law. Moore’s law has provided an enormous gain in the power of computers for 50 years and enabled much of the transformative power of computing technology. The key point is that computers and software are just tools; they are incredibly useful tools, but tools nonetheless. Tools allow a human being to extend their own biological capabilities in a myriad of ways. Computers are marvelous at replicating and automating calculations and thought operations at speeds utterly impossible for humans. Everything useful done with these tools is utterly dependent on human beings to devise. My key critique about this approach to computing is the hollowing out of the investigation into devising better ways to use computers and focusing myopically on enhancing the speed of computation.

Truth is only relative to those that ignore hard evidence.

― A.E. Samaan

The core of my assertion that it’s mostly bullshit comes from looking at the scientific method and its application to these enormous calculations. The scientific method is fundamentally about understanding the World (and using this understanding via engineering). The World is observed either in its natural form, or through experiments devised to unveil difficult-to-see phenomena. We then produce explanations or theories to describe what we see, and allow us to predict what we haven’t seen yet. The degree of comparison between the theory and the observations confirms our degree of understanding. There is always a gap between our theory and our observations, and each is imperfect in its own way. Observations are intrinsically prone to a variety of errors, and theory is always imperfect. The solutions to theoretical models are also imperfect, especially when solved via computation. Understanding these imperfections and the nature of the comparisons between theory and observation is essential to a comprehension of the state of our science.

As I’ve stated before, the scientific method applied to scientific computing is embedded in the practice of verification and validation. Simply stated, a single massive calculation cannot be verified or validated (it could be, but not with current computational techniques, and the development of such capability is a worthy research endeavor). The uncertainties in the solution and the model cannot be unveiled in a single calculation, and the comparison with observations cannot be put into a quantitative context. The proponents of our current approach to computing want you to believe that massive calculations have intrinsic scientific value. Why? Because they are so big, they have to be the truth. The problem with this thinking is that any single calculation does not contain the steps necessary for determining the quality of the calculation, or putting any model comparison in context.
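
To see why a single calculation cannot verify itself, consider the simplest discretization-error estimate, Richardson extrapolation, which needs solutions on at least three systematically refined meshes. The numbers below are invented purely for illustration.

```python
import math

# Minimal sketch of a grid-convergence (Richardson extrapolation) estimate.
# It needs the same quantity computed on three systematically refined meshes,
# which is exactly what a single hero calculation cannot provide.

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence rate from three solutions with a constant refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_error(f_medium, f_fine, r, p):
    """Estimated discretization error remaining in the fine-grid value."""
    return (f_fine - f_medium) / (r**p - 1.0)

if __name__ == "__main__":
    r = 2.0                                              # each mesh is twice as fine
    f_coarse, f_medium, f_fine = 0.9500, 0.9800, 0.9890  # invented sample data
    p = observed_order(f_coarse, f_medium, f_fine, r)
    err = richardson_error(f_medium, f_fine, r, p)
    print(f"observed order of accuracy ~ {p:.2f}")
    print(f"estimated error in the fine-grid value ~ {err:.4f}")
```

Even this bare-bones error estimate already requires three runs of the same problem.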

The context of any given calculation is determined by the structure of the errors associated with the computational modeling. For example, it is important to understand the nature of any numerical errors, and to produce an estimate of these errors. In some (many, most) cases a very good comparison between reality and a model is the result of calibration of uncertain model parameters. In many cases the choices for the modeling parameters are mesh dependent, which produces the uncomfortable outcome where a finer mesh produces a systematically worse comparison. This state of affairs is incredibly common, and generally an unadvertised feature.

An important meta-feature of the computing dialog is the skewing of computer size, design and abilities. For example, the term capability computer comes up, where these computers can produce the largest calculations we see, the ones in press releases. These computers are generally the focus of all the attention and cost the most money. The dirty secret is that they are almost completely useless for science and engineering. They are technology demonstrations and little else. They do almost nothing of value to the myriad of programs reporting to use computations to produce results. All of the utility to actual science and engineering comes from the homely cousins of these supercomputers, the capacity computers. These computers are the workhorses of science and engineering because they are set up to do something useful. The capability computers are just show ponies, and perfect exemplars of the modern bullshit-based science economy. I’m not OK with this; I’m here to do science and engineering. Are our so-called leaders OK with the focus of attention (and bulk of funding) being non-scientific, media-based, press release generators?

How would we do a better job with science and high performance computing?

The starting point is the full embrace of the scientific method. Taken at face value, the observational or experimental community is expected to provide observational uncertainties with their data. These uncertainties should be de-convolved between errors/uncertainties in raw measurement and any variability in the phenomena. Those of us using such measurements for validating codes should demand that observations always come with these uncertainties. By the same token, computational simulations have uncertainties from a variety of numerical errors and modeling choices and assumptions, and estimates of these should be demanded as well. Each of these error sources needs to be characterized to put any comparison with observations/experimental data into context. Without knowledge of these uncertainties on both sides of the scientific process, any comparison is completely untethered.

If nothing else, the uncertainty in any aspect of this process provides a degree of confidence and impact of comparative differences. If a comparison between a model and data is poor, but the data has large uncertainties, the comparison suddenly becomes more palatable. On the other hand, small uncertainties in the data would imply that the model is likely incorrect. This conclusion would be made once the modeling uncertainty has been explored. One reasonable case would be the identification of large numerical errors in the model’s solution. This is the case where a refined calculation might be genuinely justified. If the bias with a coarse grid is large enough, a finer-grid calculation could be a reasonable way of getting better agreement. There are certainly cases where exascale computing is enabling for model solutions with small enough error to make models useful. This case is rarely made or justified for any massive calculation; rather, it is asserted by authority.

On the other hand, numerical error could be a small contributor to the disagreement. In this case, which is incredibly common, a finer mesh does little to rectify model error or uncertainty. The lack of quality comparison is dominated by modeling error, or uncertainty about the parameterization of the models. Worse yet, the models are poor representations of the physics of interest. If the model is a poor representation, solving it very accurately is a genuinely wasteful exercise, at least if your goal is scientific in nature. If you’re interested in colorful graphics and a marketing exercise, computer power is your friend, but don’t confuse this with science (or at least good science). The worst case of this issue is a dominant model form error. This is the case where the model is simply wrong, and incapable of reproducing the data. Today many examples exist where models we know are wrong are beaten to death with a supercomputer. This does little to advance science, which needs to work at producing a new model that ameliorates the deficiencies in the old model. Unfortunately our supercomputing programs are sapping the vitality from our modeling programs. Even worse, many people seem to mistake computing power for a remedy to model form error.

Equidistributed error, a balance of numerical and modeling error/uncertainty, is probably the best goal for modeling and simulation. This would be the case where the combination of modeling error and uncertainty with the numerical solution error has the smallest value. The standard exascale-computing-driven model would have the numerical error driven to nearly zero without regard for the modeling error. This ends up being a small numerical error by fiat, proof by authority, proof by overwhelming power. Practically, this is foolhardy and technically indefensible. The issue is the inability to effectively hunt down modeling uncertainties under these conditions, which is hamstrung by the massive calculations. The most common practice is to assess the modeling uncertainty via some sort of sampling approach. This requires many calculations because of the high-dimensional nature of the problem. Sampling converges very slowly: the uncertainty in any mean value of the model response is proportional to the spread (standard deviation) of the solution and inversely proportional to the square root of the number of samples.
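
A minimal sketch of why the sampling is so expensive follows; the toy “model” below is a stand-in for an expensive simulation, not any real code.

```python
import numpy as np

# Minimal sketch of the slow convergence of sampling-based uncertainty
# estimates: the standard error of the sampled mean scales as sigma / sqrt(N),
# so halving the uncertainty costs four times as many model runs.

rng = np.random.default_rng(1)

def model(parameters):
    # Toy surrogate: a nonlinear response to two uncertain parameters.
    return np.sin(parameters[0]) + 0.1 * parameters[1] ** 2

for n_samples in (10, 100, 1000, 10000):
    samples = np.array([model(rng.normal(size=2)) for _ in range(n_samples)])
    std_err = samples.std(ddof=1) / np.sqrt(n_samples)
    print(f"N = {n_samples:6d}  mean = {samples.mean():+.4f}  standard error = {std_err:.4f}")
```

Each sample here would be a full calculation in practice, which is why many modest calculations on capacity machines beat one enormous calculation for pinning down modeling uncertainty.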

Thus a single calculation will have an undefined variance. With a single massive calculation you have no knowledge of the uncertainty, either modeling or numerical (at least without having some sort of embedded uncertainty methodology). Without assessing the uncertainty of the calculation you don’t have a scientific or engineering activity. For driving down the inherent uncertainties, especially where the modeling uncertainty dominates, you are aided by smaller calculations that can be executed over and over so as to drive down the uncertainty. These calculations are always done on capacity computers and never on capability computers. In fact, if you try to use a capability computer to do one of these studies, you will be punished and kicked off. In other words, the rules of use enforced via the queuing policies are anti-scientific.

The uncertainty structure can be approached at a high level, but to truly get to the bottom of the issue requires some technical depth. For example numerical error has many potential sources: discretization error (space, time, energy, … whatever we approximate in), linear algebra error, nonlinear solver error, round-off error, solution regularity and smoothness. Many classes of problems are not well posed and admit multiple physically valid solutions. In this case the whole concept of convergence under mesh refinement needs overhauling. Recently the concept of measure-valued (statistical) solutions has entered the fray. These are taxing on computer resources in the same manner as sampling approaches to uncertainty. Each of these sources requires specific and focused approaches to their estimation along with requisite fidelity.

Modeling uncertainty is similarly complex and elaborate. The hardest aspect to evaluate is the form of the physical model. In cases where multiple reasonable models exist, the issue is evaluating the model’s (or sub-model’s) influence on solutions. Models often have adjustable parameters that are unknown or subject to calibration. Most commonly the impact of these parameters and their values are investigated via sampling solutions, an expensive prospect. Similarly there are modeling issues that are purely random, or statistical in nature. The solution to the problem is simply not determinate. Again sampling the solution of a range of parameters that define such randomness is a common approach. All this sampling is very expensive and very difficult to accurately compute. All of our focus on exascale does little to enable good outcomes.

The last area of error is the experimental or observational error and uncertainty. This is important in defining the relative quality of modeling, and the sense and sensibility of using massive computing resources to solve models. We have several standard components in the structure of the error in experiments: the error in measuring a quantity, and then the variation in the actual measured quantity. In one case there is some intrinsic uncertainty in being able to measure something with complete precision. The second part of this is the variation of the actual value in the experiment. Turbulence is the archetype of this sort of phenomena. This uncertainty is intrinsically statistical, and the decomposition is essential to truly understand the nature of the world, and put modeling in proper and useful context.
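
Stated compactly, and assuming the measurement noise and the physical variability are independent, the observed spread decomposes as:

```latex
% observed variance = measurement uncertainty + intrinsic physical variability
\sigma^2_{\text{observed}} = \sigma^2_{\text{measurement}} + \sigma^2_{\text{phenomenon}} ,
```

and only the second piece is something a model of the physics should be asked to reproduce.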

The bottom line is that science and engineering run on evidence. To do things correctly you need to operate on an evidentiary basis. More often than not, high performance computing avoids this key scientific approach. Instead we see the basic decision-making operating via assumption. The assumption is that a bigger, more expensive calculation is always better and always serves the scientific interest. This view is as common as it is naïve. There are many, and perhaps most, cases where the greatest service to science is many smaller calculations. This hinges upon the overall structure of uncertainty in the simulations and whether it is dominated by approximation error, modeling form or lack of knowledge, and even the observational quality available. These matters are subtle and complex, and we all know that today neither subtle nor complex sells.

What can be asserted without evidence can also be dismissed without evidence.

― Christopher Hitchens

 

Facts and Reality are Optional

09 Wednesday Nov 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

There is nothing more frightful than ignorance in action.

― Johann Wolfgang von Goethe

Our political climate and capability as a nation to engage each other in meaningful, respectful conversations has plummeted to dismal lows. The best description of our 2016 political campaign is a “rolling dumpster fire.” At the core of all of our dysfunction is a critical break from fact-based discussion and from confronting objective reality, and the ascendancy of emotion and spin into the resulting fact-vacuum and alternative reality. One might think that working at a scientific-engineering Laboratory would free me from this appalling trend, but the same dynamic is acutely felt there too. The elements undermining facts and reality in our public life are infesting my work. Many institutions are failing society and contributing to the slow-motion disaster we have seen unfolding. We need to face this issue head-on, rebuild our important institutions and restore our functioning society, democracy and governance.

A big part of the public divorce from facts is the lack of respect and admiration for expertise. Just as the experts and the elite have become suspect in the broader public sphere, the same thing has happened in the conduct of science. In many ways the undermining of expertise in science is even worse and more corrosive. Increasingly, there is no tolerance or space for the intrusion of expertise into the conduct of scientific or engineering work. The way this tolerance manifests itself is subtle and poisonous. Expertise is tolerated and welcomed as long as it is confirmatory and positive. Expertise is not allowed to offer strong criticism or the slightest rebuke, no matter how shoddy the work. If experts do offer anything that seems critical or negative they can expect to be dismissed and never invited back to provide feedback again. Rather than welcome their service and attention, they are derided as troublemakers and malcontents. We see in every corner of the scientific and technical World a steady intrusion of mediocrity and outright bullshit into our discourse as a result.

Let’s give an example of how this plays out. I’ve seen this happen personally and witnessed it play out in external reviews I observed. I’ve been brought in to review technical work for a large, important project. The expected outcome was a “rubber stamp” that said the work was excellent and offered no serious objections. Basically the management wanted me to sign off on the work as being awesome. Instead, I found a number of profound weaknesses in the work, and pointed these out along with some suggested corrective actions. These observations were dismissed and never addressed by the team conducting the work. It became perfectly clear that no such critical feedback was welcome and I wouldn’t be invited back. Worse yet, I was punished for my trouble. I was sent a very clear and unequivocal message: “don’t ever be critical of our work.”

This personal example of dysfunction is simply the tip of the iceberg for an adversarial attitude toward critical feedback. We have external review committees visit and they are treated the same way. Most seasoned reviewers know that this is not to be a critical review. It is a light touch and everyone expects to get a glowing report. Any real issues are addressed on the down low, and even that is treated with kid gloves. If any reviewer has the audacity to raise an important issue they can expect never to be invited back. The end result is the increasingly meaningless nature of any review, and the hollowing out of expertise’s seal of approval. In the process experts and expertise become covered in the bullshit they peddle and become diminished in the end.

This dynamic in review is widespread and fuels the rise of bullshit in public life as well as in science and engineering. This propensity to bullshit is driven by a system that cannot deal with conflict or critical feedback. Moreover the system is tilted toward a preconceived result: all is well and no changes are necessary. When this is not the case one is confronted with engaging in conflict against these expectations, or simply getting in line with the bullshit. More and more the bullshit is winning the day. I’ve been personally punished for not toeing the line and making a stink. I’ve seen others punished too. It is very clear that failing to provide the desired result, bullshit, will be punished. The punishment of honesty means that bullshit is on the rise, as nothing exists to produce a drive toward quality and results. In the end bullshit is a lot less effort and rewarded a lot more highly.

At the end of the day we can see that the system starts to seriously erode integrity at every level. This is exactly what we are witnessing society-wide. Institutions across the spectrum of public and private life are losing their integrity. Such erosion of integrity in an environment that cannot deal with critical feedback produces a negative loop that feeds upon itself. Bullshit begets more bullshit until the whole thing collapses. We may have just witnessed what the collapse of our political system looks like. We had an election that was almost completely bullshit start to finish. We have elected a completely and utterly incompetent bullshit artist president. Donald Trump was completely unfit to hold office, but he is a consummate con man and bullshit artist. In a sense he is the emblem of the age and the perfect exemplar of our addiction to bullshit over substance.

I personally see myself as a person of substance and integrity. It is increasingly difficult to square who I am with the system I am embedded in. I am not a bullshitter; when I produce bullshit, people notice, and I am embarrassed. I am a straight shooter who is committed to progress and excellence. I have a broad set of expertise in science and engineering with a deep desire to contribute to meaningful things. This fundamental nature is increasingly at odds with how the World operates today. I feel a deep drive on the part of the workplace to squash everything positive I stand for. In the place of standing up for my basic nature as a scientific expert, a member of the elite, if you will, I am expected to toe the line and produce bullshit. This bullshit is there to avoid dealing with real issues head on and to avoid conflict. The very nature of things stands in opposition to progress and quality, which are threatened by the current milieu.

This gets to the heart of the discussion about what we are losing in this dynamic. We are losing progress society wide. When we allow bullshit to creep into every judgment we make, progress is sacrificed. We bury immediate conflict for long-term decline and plant the seeds for far more deep, widespread and damaging conflict. Such horrible conflict may be unfolding right in front of us in the nature of the political process. By finding our problems and being critical we identify where progress can be made, where work can be done to make the World better. By bullshitting our way through things, the problems persist and fester and progress is sacrificed.

In the current environment where expertise is suspect we see wrong beliefs persist without any real resistance. Falsehoods and myths stand shoulder to shoulder with truth and get treated with equivalence. In this atmosphere the sort of political movements founded completely on absolute bullshit can thrive. Make no mistake, Donald Trump is a master bullshitter, and completely lacks all substance, yet in today’s World he has complete viability. All of us are responsible because we have allowed bullshit to stand on even footing with fact. We have allowed the mechanisms and institutions standing in the way of such bullshit to be weakened and infested with bullshit too. It is time to stand up for truth, integrity and expertise as a shield against this assault on society.

Everything present in the political rise of Donald Trump is playing out in the dynamic at my workplace. It is not as extreme and its presence is subtle, but it is there. We have allowed bullshit to become ubiquitous and accepted. We turn away from calling bullshit out and demanding that real integrity be applied to our work. In the process we implicitly aid and abet the forces in society undermining progress toward a better future. The result of this acceptance of bullshit can be seen in the reduced production of innovation and breakthrough work, but most acutely in the decay of these institutions.

We have lost the ability to demand difficult decisions to solve seemingly intractable problems. When we do not operate on facts, we can turn away from difficulties and soothe ourselves with falsehoods. Instead of identifying problems and working toward progressive solutions, the problems are minimized and allowed to fester. This is true in the broader public sphere as well as in our scientific environment. I have been actively discouraged from pointing out problems or being critical. The result is stagnation and the steady persistence of problematic states. Instead of working to solve weaknesses, we are urged to accept them or explain them away. This will ultimately yield a catastrophic outcome. At the National level we may have just witnessed such a catastrophe play out in plain view.

In the workplace I feel the key question to ask is “If we don’t look for problems, how can we do important work?” Progress depends on finding weakness and attacking it. This is the principle that I focus on. Confidence comes from being sure you know where to look for problems and being up to the challenge of solving them. Empty positivity is a sign of weakness. Yet this is exactly what I am being asked to produce at work. The resulting bullshit is a sign of weakness and a lack of confidence in being able to constructively solve problems. The need to be positive all the time and avoid criticism is weakness, lack of drive, and lack of conviction in the possibility of progress. We need to refresh our commitment to be constructively critical in the knowledge and belief that we are equal to the task of making the World better. This means stamping out bullshit wherever we see it. There is a lot to do because today we are drowning in it.

With the benefit of time I have a couple of projections for the future:

  1. The GOP and President Trump will do little or nothing to help the people who voted for them. The key to our democracy is whether they will take any responsibility. If history is our guide they will deflect the blame onto minorities, LGBT people, women and everyone but themselves. Will the people fall for the same con as they did when they elected these charlatans?

  2. Things will be very dark and dismal for an extended time, and we will spiral toward violence. This may be violence directed by the new ruling class against “enemies of the state”. It also may be violence directed toward the ruling class. Mark my words: blood will be shed by Americans at the hands of other Americans.
  3. The only way out of this darkness is to work steadfastly to repair our institutions and figure out how to solve our problems in a collective manner for the benefit of all. I work for one of these institutions and we should be taking a long hard look at our role in the great unraveling we are in the midst of.

Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passion, they cannot alter the state of facts and evidence.

― John Adams

Footnote: I started writing this on Monday, and like almost everyone I thought the election would turn out differently. It was a genuinely shocking result that makes this topic all the more timely, and it amplified the importance of this entire discussion immensely. The prospect of a President Trump fills me with dread because of the very issues discussed here. Trump exists in an alternative reality, and his lack of presence in an objective reality will have real consequences. He is a reality TV star and professional buffoon. He is the most stunningly unqualified person to ever hold that office. I fear what is coming. I also feel the need to be resolved to pick up the pieces from the disaster that will likely unfold. We need to rebuild our institutions and reinstitute a knowledge/facts/reality-based governance to guide society forward.

Can Software Really Be Preserved?

04 Friday Nov 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

In the past quarter century the role of software in science has grown enormously in importance. I work in a computer research organization that employs many applied mathematicians. One would think that we would have a little maelstrom of mathematical thought. Instead, very little actual mathematics takes place, with most of them writing software as their prime activity. A great deal of emphasis is placed on software as something to be preserved or invested in. This dynamic puts a great deal of other forms of work, like mathematics (or modeling or algorithmic-methods investigation), on the back burner. The proper question to think about is whether the emphasis on software, along with the collateral decrease in focus on mathematics and physical modeling, is a benefit to the conduct of science.

Doing mathematics should always mean finding patterns and crafting beautiful and meaningful explanations.

― Paul Lockhart

I’ll focus on my wife’s favorite question, “what is code?” (I put this up on a slide when she was in the audience, and she rolled her eyes at me and walked out.) If we understand what exactly code is, we can answer the question of whether it can be preserved and whether it is worthwhile to do so.

The simplest answer to the question at hand is that code is a set of instructions that a computer can understand, a recipe provided by humans for conducting some calculation. These instructions could integrate a function or a differential equation, sort some data, filter an image, or do millions of other things. In every case the instructions are devised by humans to do something, and carried out by a computer with greater automation and speed than humans can possibly manage. Without the guidance of humans, the computer is utterly useless, but with human guidance it is a transformative tool. We see modern society completely reshaped by the computer. Too often the focus of humans is on the tool and not the things that give it power, skillful human instructions devised by creative intellects. Dangerously, science is falling into this trap, and the misunderstanding of the true dynamic may have disastrous consequences for the state of progress. We must keep in mind the nature of computing and man’s key role in its utility.

Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper.

― George Pólya

The manner of treating applied mathematics today serves as an instructive lesson in how out of balance the dynamic has become. Among the sciences mathematics may be the most purely thoughtful endeavor. Some have quipped that mathematics is the most cost-efficient discipline, requiring nothing more than time, pen and paper. Often massive progress happens without even pen and paper, while the mathematical mind ponders and schemes about theorems, proofs and conceptual breakthroughs. Increasingly this idealized model is foreign to mathematicians, and the desire for a more concrete product has taken hold. This is most keenly seen in the drive for software as a tangible end product.

Nothing is remotely wrong with creating working software to demonstrate a mathematical concept. Often mathematics is empowered by the tangible demonstration of the utility of the ideas expressed in code. The problem occurs when the code becomes the central activity and mathematics is subdued in priority. Increasingly, the essential aspects of mathematics are absent from the demands of the research, having been replaced by software. This software is viewed as an investment that must be carried along to new generations of computers. The issue is that the porting of libraries of mathematical code has become the raison d’être of research. This porting has swallowed innovation in mathematical ideas whole, and the balance in research is desperately lacking.

Instead of focusing on being mathematicians, we increasingly see software engineering and programming as the focal point for people’s work. Software engineering and maintenance of complex software is a worthy endeavor (more later), but our talented mathematicians should be discovering math, not porting code and finding bugs as their principal professional focus. The discovery of deep, innovative and exciting mathematics promises to provide far more benefit to the future of computing than any software instantiation. New mathematical ideas, if focused upon and delivered, will ultimately unleash far greater benefits in the long run. This is an obvious thing, yet the focus today is entirely away from this model. We are steadfastly turning our mathematicians into software engineers.

Let’s get to the crux of the problem with current thinking about software. Mathematical software is treated like the basic plumbing of the many codes used for scientific activities, but this model is deeply flawed. It is not like infrastructure at all, where the code would simply be repaired and serviced after it is built. That view leads the current maintainers of the code not to innovate or extend the intellectual ideas in the software, which I would contend is necessary to intellectually own it. Instead, a mathematical body of code is more like an automobile. The auto must be fueled and serviced, but over time it becomes old and outdated and needs to be replaced. The classic car has a certain luster and beauty, but its efficiency and utility are far less than a new car’s. Any automobile can take you places, but eventually the old car cannot compete with the new one. This is how we should think about our mathematical software. It should be serviced and maintained by software professionals, but mathematicians should be working on a new model all the time.

For so much of what we do with computers, mathematics forms the core and foundation of the capability. The lack of focus on the actual execution of mathematical research will have long-lasting effects on our future. In essence we are living on the mathematical (and physics, engineering, …) research of the past without reinvesting in the next generation of breakthroughs. We are emptying the pipeline of discovery and impoverishing our future. In addition we are failing to take advantage of the skills, talents and imagination of the current generation of scientists. We are creating a deficit of possibility that will harm our future in ways we can scarcely imagine. The guilt lies in the failure of our leaders to have sufficient faith in the power of human thought and innovation to continue the march into the future in the manner we have in the past. People, if turned loose on challenging problems, will solve them; we always have, and past is prologue.

Progress is possible only if we train ourselves to think about programs without thinking of them as pieces of executable code.

― Edsger W. Dijkstra

The key to this notion is putting software in its proper place. Just like the computer itself, software is a tool. Software is an expression of intellect, plain and simple. If the intellectual capital isn’t present, the value of the software is diminished. Intellectual ownership is a big deal and the key to real value. Increasingly we are creating software where no one working on it really owns the knowledge encoded in it. This is a massively dangerous trend. Unfortunately we are not funding the basic process through which that ownership is obtained. Full ownership is established through the creative process; the ability to innovate and create new knowledge is what grants ownership. Without the creation of new knowledge the intellectual ownership is incomplete. An additional benefit of the ownership is new capability for mankind. The foundation of all of this is mathematical research.

Our foundation is crumbling beneath our feet from abject neglect. Again, like everything else today, the reason for this is a focus on money as the arbiter of all that is good or bad. We simply do what we are paid to do, no more and no less. No one is paying for math; they are paying for software. It’s as simple as that.

Programs must be written for people to read, and only incidentally for machines to execute.

― Harold Abelson

 

Compliance kills…

28 Friday Oct 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Progress, productivity, quality, independence, value, … Compliance kills almost everything I prize about work as a scientist. Compliance is basically enslavement to mediocrity and subservience to authority unworthy of being followed.

In the republic of mediocrity, genius is dangerous.

― Robert G. Ingersoll

[Screenshots of the Twitter exchange: the original tweet, my response, and Si’s reply]

Earlier this week I had an interesting exchange on Twitter with my friend Karen and a current co-worker, Si. It centered around the fond memories that Karen and I have about working at Los Alamos. The gist of the conversation was that the Los Alamos we worked at was wonderful, even awesome. To me the experience at Los Alamos from 1989-1999 was priceless and the result of an impressively generous and technically masterful organization. I noted that it isn’t the way it was, and that fact is absolutely tragic. Si countered that it’s still full of good people who are great to interact with. All of this can be true and not the slightest bit contradictory. Si tends to be positive all the time, which can be a wonderful characteristic, but I know what Los Alamos used to mean, and it causes me a great deal of personal pain to see the magnitude of the decline and the damage we have done to it. The changes at Los Alamos have been made in the name of compliance, to bring an unruly institution to heel and conform to imposed mediocrity.

How the hell did we come to this point?

In relative terms, Los Alamos is still a good place, largely because of the echoes of the same culture that Karen and I so greatly benefited from. Organizational culture is a deep well to draw from. It shapes so much of what we see from different institutions. At Los Alamos it has formed the underlying resistance to the imposition of the modern compliance culture. On the other hand, my current institution is tailor-made for complete compliance, even subservience to the demands of our masters. When those masters have no interest in progress, quality, or productivity, the result is unremitting mediocrity. This is the core of the discussion: our masters’ prime directive is compliance, which bluntly and specifically means “don’t ever fuck up!” In this context Los Alamos is the king of the fuck-ups, and others simply keep their noses clean, thus succeeding in the eyes of the masters.

The second half of the argument comes down to recognizing that accomplishment and productivity are never a priority in the modern world. This is especially true once the institutions realized that they could bullshit their way through accomplishment without risking the core value of compliance. Thus doing anything real and difficult is detrimental, because you can more easily BS your way to excellence and not run the risk of violating the demands of compliance. In large part compliance assures the most precious commodity in the modern research institution: funding. Lack of compliance is punished by lack of funding. Our chains are created out of money.

In the end they will lay their freedom at our feet and say to us, Make us your slaves, but feed us.

― Fyodor Dostoyevsky

A large part of compliance is the lack of resistance to intellectually poor programs. There was once a time when the Labs helped craft the programs that fund them. With each passing year this dynamic breaks down, and the intellectual core of crafting well-defined programs to accomplish important national goals wanes. Why engage in the hard work of providing feedback when it threatens the flow of money? Increasingly the only sign of success is the aggregate dollar figure flowing into a given institution or organization. Any actual quality or accomplishment is merely coincidental. Why focus on excellence or quality when it is so much easier to simply generate a press release that looks good?

We make our discoveries through our mistakes: we watch one another’s success: and where there is freedom to experiment there is hope to improve.

― Arthur Quiller-Couch

This entire compliance dynamic is at the core of so many aspects dragging us into the mire of mediocrity. Instead of working to produce a dynamic focused on excellence, progress and impact, we simply focus on following rules and bullshitting something that resembles an expected product. Managing a top-rate scientific or engineering institution is difficult and requires tremendous focus on the things that matter. Every bit of our current focus is driving us away from the elements of success. Our masters are incapable of supporting hard-nosed, critical peer reviews, allowing failure to arise honestly from earnest efforts, empowering people to think independently, and rewarding efforts essential for progress. At the heart of everything is an environment that revolves around fear and control. We have this faulty belief that we can manage everything so that no bad thing ever happens. In the end the only way to do that is to stop all progress and make sure no one ever accomplishes anything substantial.

So in the end make sure you get those TPS reports in on time. That’s all that really matters.

Disobedience is the true foundation of liberty. The obedient must be slaves.

― Henry David Thoreau

Science is still the same; computation is just a tool to do it

25 Tuesday Oct 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory.

― Stephen Hawking

Over the past few decades there has been a lot of Sturm und Drang around the prospect that computation changed science in some fundamental way. The proposition was that computation formed a new way of conducting scientific work to complement theory and experiment/observation. In essence computation had become the third way for science. I don’t think this proposition stands the test of time, and it should be rejected. A more proper way to view computation is as a new tool that aids scientists. Traditional computational science is primarily a means of investigating theoretical models of the universe in ways that classical mathematics could not. Today this role is expanding to include augmentation of data acquisition, analysis, and exploration well beyond the capabilities of unaided humans. Computers make for better science, but recognizing that computation does not change science itself is important for making good decisions.

The key to my rejection of the premise is a close examination of what science is. Science is a systematic endeavor to understand and organize knowledge of the universe in a testable framework. Standard computation is conducted in a systematic manner to study the solutions of theoretical equations, but the solutions always depend entirely on the theory. Computation also provides more general ways of testing theory and making predictions well beyond the approaches available before computation. Computation frees us from limitations in solving the equations comprising the theory, but changes nothing about the fundamental dynamic in play. The key point is that computation is an enhanced tool set for conducting science in an otherwise standard way.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

Why is this discussion worth having now?

Some of the best arguments for the current obsession with exascale computing are couched in advertising computing as a new way of doing science that is somehow game changing. It just isn’t a game changer; computation is an incredible tool that opens new options for progress. Looking at computing as simply a really powerful tool that enhances standard science just doesn’t sound as good or as compelling for generating money. The problem is that computing is just that, a really useful and powerful tool, and little more. The proper context for computing carries with it important conclusions about how it should be used, and how it should not be used; neither is evident in today’s common rhetoric. As with any tool, computation must be used correctly to yield its full benefits.

[Images: an IBM 704 mainframe, a mainframe computer, and a Cray Y-MP supercomputer]

This correct use and full benefit is the rub with current computing programs. The current programs focus almost no energy on doing computing correctly. None. They treat computing as a good unto itself rather than as a deep, skillful endeavor that must be completely entrained within the broader scientific themes. Ultimately science is about knowledge and understanding of the World. This can only come from two places: the observation of reality, and theories to explain those observations. We judge theory by how well it predicts what we observe. Computation only serves as a vehicle for applying theoretical models more effectively and/or wrangling our observations practically. Models are still the wellspring of human thought. Computation does little to free us from the necessity for progress to be based on human creativity and inspiration.

Observations still require human ingenuity and innovation to be achieved. This can take the form of the mere inspiration to measure or observe a certain factor in the World. Another form is the development of devices that make new measurements possible. Here is a place where computation is playing a greater and greater role. In many cases computation allows the management of mountains of data that are unthinkably large by former standards. A complementary, and sometimes completely different, way computation changes data is analysis. New methods are available to enhance diagnostics or see effects that were previously hidden or invisible; in essence, the ability to drag signal from noise and make the unseeable clear and crisp. All of these uses are profoundly important to science, but it is science that still operates as it did before. We just have better tools to apply to its conduct.

One of the big ways for computation to reflect the proper structure of science is verification and validation (V&V). In a nutshell, V&V is the classical scientific method applied to computational modeling and simulation in a structured, disciplined manner. The high performance computing programs being rolled out today ignore verification and validation almost entirely. Science is supposed to arrive via computation as if by magic; if V&V is present at all, it is an afterthought. The deeper and more pernicious danger is the belief by many that modeling and simulation can produce data of equal (or even greater) validity than nature itself. This is not a recipe for progress, but rather a recipe for disaster. We are priming ourselves to believe some rather dangerous fictions.

Below is a healthy attitude expressed by Einstein. Replace “theory” with “computation” and ask the same question; then inquire whether our attitudes toward models and simulation are equally healthy.

You make experiments and I make theories. Do you know the difference? A theory is something nobody believes, except the person who made it. An experiment is something everybody believes, except the person who made it.

― Albert Einstein

The archetype of this thought process is direct numerical simulation (DNS). DNS is most prominently associated with turbulence, but the mindset presents itself in many fields. The logic behind DNS is the following: if we solve the governing equations without any modeling in a very accurate manner, the solutions are essentially exact. These very accurate and detailed solutions are taken to be just as good as measurements of nature. Some would contend that the data from DNS are better because they carry no measurement error. Many modelers are eager to use DNS data to validate their models, and eagerly await more powerful computers to expand the grasp of DNS to more complex situations. This entire mindset is unscientific and prone to the creation of bullshit. A big part of the problem is a lack of V&V with DNS, but the core issue is deeper: the belief that the equations are exact, not simply models accepted from currently accepted theory.

Let me explain why I would condemn such a potentially useful and powerful activity so strongly. The problem with DNS used in this manner is the presumption that no model of reality is involved. This of course ignores the fact that the equations themselves are a model of reality. The argument behind DNS is that the equations being solved are unquestioned. This lack of questioning is itself unscientific on its face, but let me go on. Others will argue that the equations being solved have been formally validated, thus establishing their validity for modeling reality. Again, this has some truth to it, but the validation is invariably for quantities that can be observed directly, and generally statistically. In this sense the data being used from DNS are validated by inference, but not directly. Using such unvalidated data for modeling is dangerous (it may be useful too, but needs to be taken with a big grain of salt). DNS data need to be used with caution and applied in a circumspect manner, and that caution is not in evidence today.

Perhaps one of the greatest issues with the application of DNS is its failure to utilize V&V systematically. The first leap of faith with DNS is the belief that no modeling is happening; the equations being solved are not exact, but rather models of reality. Next, the error associated with the numerical integration of the equations is rarely (if ever) quantified; it is simply assumed to be negligibly small. Even if we were to accept DNS as equivalent to experimental data, the error needs to be defined as part of the data set (in essence, the error bar). Other uncertainties almost always required for any experimental data set are also lacking with DNS. The standard applied to DNS data should be higher than for any experimental data, reflecting the caution such artificial information demands. Instead, the DNS computations are treated with less caution. In this way standard practice today veers all the way into the cavalier.
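As a minimal sketch of what such an error bar could look like, here is a Richardson-extrapolation estimate built from three systematically refined grids. The quantity, the grid values and the refinement ratio below are purely illustrative, and the estimate is only meaningful if the solutions sit in the asymptotic range of convergence.

import math

def richardson_error(f_coarse, f_medium, f_fine, r):
    # Observed order of convergence and a discretization-error estimate for the
    # fine-grid value, assuming the three solutions are in the asymptotic range.
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, abs(f_fine - f_extrapolated)

# Made-up values of some statistic on grids with spacing h, h/2 and h/4 (r = 2):
p, err = richardson_error(0.1480, 0.1520, 0.1530, 2.0)
print("observed order ~ %.2f, fine-grid error estimate ~ %.1e" % (p, err))

Attaching the resulting estimate to the computed quantity, the way an error bar accompanies a measurement, is the kind of discipline argued for here.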

The deepest issue with current programs pushing forward on computing hardware is their balance. The practice of scientific computing requires the interaction and application of great swathes of scientific disciplines. Computing hardware is a small component of the overall scientific enterprise and among the aspects least responsible for its success. The single greatest element in the success of scientific computing is the nature of the models being solved. Nothing else we can focus on has anywhere close to this impact. To put this differently, if a model is incorrect, no amount of computer speed, mesh resolution or numerical accuracy can rescue the solution. This is the statement of how scientific theory applies to computation. Even when the model is correct, the method and approach to solving it is the next largest aspect in terms of impact. The damning thing about exascale computing is the utter lack of emphasis on either of these activities. Moreover, without the application of V&V in a structured, rigorous and systematic manner, these shortcomings will remain unexposed.

In summary, we are left to draw a couple of big conclusions: computation is not a new way to do science, but rather an enabling tool for doing standard science better. Getting the most out of computing requires a deep and balanced portfolio of scientific activities. The current drive for performance in computing hardware ignores the most important aspects of the portfolio, if science is indeed the objective. If we want to get the most science out of computation, a vigorous V&V program is one way to inject the scientific method into the work. V&V is the scientific method, and gaps in V&V reflect gaps in scientific credibility. Simply recognizing how scientific progress occurs and following that recipe can achieve a similar effect. The lack of scientific vitality in current computing programs is utterly damning.

A computer lets you make more mistakes faster than any other invention with the possible exceptions of handguns and Tequila.

― Mitch Ratcliffe

 

 

Why China Is Kicking Our Ass in HPC

19 Wednesday Oct 2016

Posted by Bill Rider in Uncategorized

≈ 3 Comments

The problem with incompetence is its inability to recognize itself.

― Orrin Woodward

My wife has very distinct preferences in late night TV shows. First, the show cannot actually be watched late at night; she is fast asleep by 9:30 most nights. Second, she is quite loyal. More than twenty years ago she was essentially forced to watch late night TV while breastfeeding our newborn daughter. Conan O’Brien kept her laughing and smiling through many late night feedings. He isn’t the best late night host, but he is almost certainly the silliest. His shtick is simply stupid with a certain sophisticated spin. One of the dumb bits on his current show is “Why China is kicking our ass.” It features Americans doing all sorts of thoughtless and idiotic things on video, with the premise being that our stupidity is the root of any loss of American hegemony. As sad as this might inherently be, the principle is rather broadly applicable and generally right on the money. The loss of national preeminence is due more to sheer hubris, manifest overconfidence and sprawling incompetence on the part of Americans than to anything being done by our competitors.

The conventional view serves to protect us from the painful job of thinking.

― John Kenneth Galbraith

High performance computing is no different. By our chosen set of metrics, we are losing to the Chinese rather badly, through a series of self-inflicted wounds rather than superior Chinese execution. Nonetheless, we are basically handing the crown of international achievement to them because we have become so incredibly incompetent at intellectual endeavors. Today, I’m going to unveil how we have thoughtlessly and idiotically run our high performance computing programs in a manner that undermines our success. My key point is that stopping the self-inflicted damage is the first step toward success. One must take careful note that the measure of superiority is based on a benchmark that has no practical value. Having a metric of success with no practical value is a large part of the underlying problem.

Never attribute to malevolence what is merely due to incompetence

― Arthur C. Clarke

As a starting point I’ll state that the program currently kicking off, the Exascale Computing Project, is a prime example of how we are completely screwing things up. It is basically a lexicon of ignorance and anti-intellectual thought paving the way to international mediocrity. The biggest issue is the lack of intellectual depth in the whole basis of the program: “the USA must have the fastest computer.” The fastest computer does not mean anything unless we know how to use it. The fastest computer does not matter if it is fastest at doing meaningless things, or if it isn’t fast at doing things that are important. The fastest computer is simply a tool in a much larger “ecosystem” of computing. This fastest computer is the modern-day equivalent of the “missile gap” from the Cold War, which ended up being nothing but a political vehicle.

If part of this ecosystem is unhealthy, the power of the tool is undermined. The extent to which it is undermined should be a matter of vigorous debate. The current program is inadvertently designed to further unbalance an ecosystem that has been under duress for decades. We have been focused on computer hardware for the past quarter of a century while failing to invest in the physics, engineering, modeling and mathematics essential to the utility of the tool of computing. We have starved innovation in the use of computing and in the most impactful aspects of the computing ecosystem. The result is an intellectually hollow and superficial program that will be a relatively poor investment in terms of benefit to society per dollar spent. In essence the soul of computing is being lost. Our quest for exascale computing belies a program that is utterly and unremittingly hardware focused. This hardware focus is myopic in the extreme and starves the ecosystem of major elements of its health: the ties to experiments, modeling, numerical methods and solution algorithms. The key to Chinese superiority, or the lack of it, is whether they are making the same mistakes we are. If they are, their “victory” is hollow; if they aren’t, their victory will be complete.

If you conform, you miss all of the adventures and stand against the progress of society.

― Debasish Mridha

Scientific computing has been a thing for about 70 years, having been born during World War II. Throughout that history there has been a constant push and pull among the capabilities of computers, software, models, mathematics, engineering, methods and physics. Experimental work has been essential to keep computations tethered to reality. An advance in one area would spur advances in another in a flywheel of progress. A faster computer would make problems that previously seemed impossible suddenly tractable. Mathematical rigor might suddenly give people faith in a method that previously seemed ad hoc and unreliable. Physics might ask new questions counter to previous knowledge, or experiments would confirm or invalidate a model’s applicability. The ability to express ideas in software allows algorithms and models to be used that may have been too complex for older software systems. Innovative engineering provides new applications for computing that extend its scope and reach into new areas of societal impact. Every single one of these elements is subdued in the present approach to HPC, and that robs the ecosystem of vitality and power. We learned these lessons in the recent past, yet swiftly forgot them when composing this new program.

Control leads to compliance; autonomy leads to engagement.

― Daniel H. Pink

This alone could be a recipe for disaster, but it’s the tip of the iceberg. We have been mismanaging and undermining our scientific research in the USA for a generation, both at research institutions like the Labs and at our universities. Our National Laboratories are mere shadows of their former selves. When I look at how I am managed, the conclusion is obvious: I am well managed to be compliant with a set of conditions that have nothing to do with succeeding technically. Good management is applied to following rules and basically avoiding any obvious “fuck ups.” Good management is not applied to successfully executing a scientific program. With this as the prime directive today, the entire scientific enterprise is under siege. The assault on scientific competence is broad-based and pervasive, as expertise is viewed with suspicion rather than respect. Part of the problem is the lack of intellectual stewardship reflected in numerous empty, thoughtless programs. The second piece is the way we manage science. A couple of practices ingrained in the way we do things lead to systematic underachievement: inappropriately applied project planning and intrusive micromanagement of the scientific process. The issue isn’t management per se, but its utterly inappropriate application and priorities that are orthogonal to technical achievement.

One of the key elements in the downfall of American supremacy in HPC is the inability to tolerate failure as a natural outgrowth of any high-end endeavor. Our efforts are simply not allowed to fail at anything lest it be seen as a scandal or a waste of money. In the process we deny ourselves the high-risk but high-payoff activities that yield great leaps forward. Of course a deep-seated fear is at the root of the problem. As a direct result of this attitude, we end up not trying very hard. Failure is the best way to learn anything, and if you aren’t failing, you aren’t learning. Science is nothing more than a giant learning exercise. The lack of failure means that science simply doesn’t get done. All of this is obvious, yet our management of science has driven failure out. It is evident across a huge expanse of scientific endeavors, and HPC is no different. The death of failure is also the death of accomplishment. Correcting this problem alone would allow for significantly greater achievement, yet our current governance attitude seems utterly incapable of making progress here.

Tied like a noose around the neck is the problem of short-term focus. The short-term focus is the twin of the “don’t fail” attitude. We have to produce results and breakthroughs on a quarterly basis. We have virtually no idea where we are going beyond an annual basis, and the long-term plans continually shift with political whims. This short-term, myopic view is being driven harder with each passing year. We effectively have no big long-term goals as a nation beyond simple survival. It’s like we have forgotten how to dream big and produce any sort of inspirational societal goals. Instead we create big soulless programs in the place of big goals. Exascale computing is a perfect example. It is a goal without a real connection to anything societally important, crafted solely for the purpose of getting money. It is absolutely vacuous and anti-intellectual at its core, viewing supercomputing as a hardware-centered enterprise. Then it is being managed like everything else, with relentless short-term focus and failure avoidance. Unfortunately, even if it succeeds, we will continue our tumble into mediocrity.

This tumble into mediocrity is fueled by an increasingly compliance-oriented attitude toward all work. Instead of working to conduct a balanced and impactful program to drive the capacity of computing to impact the real World, our programs simply comply with the intellectually empty directives from above. There is no debate about how the programs are executed because PIs and Labs are just interested in getting money. The program is designed to be funded instead of to succeed, and the Labs no longer act as honest brokers, being primarily interested in filling their own coffers. In other words, the program is designed as a marketing exercise, not a science program. Instead of a flywheel of innovative excellence and progress, we produce a downward spiral of compliance-driven mediocrity serving intellectually empty and unbalanced goals. If everyone gets their money, successfully fills out their time sheets and gets a paycheck, it is a success.

At the end of the Cold War in the early 1990s, the USA’s nuclear weapons Labs were in danger of a funding free fall. Nuclear weapons testing ended in 1992, and the prospect of maintaining the nuclear weapons stockpile without testing loomed large. A science-based stockpile stewardship (SBSS) program was devised to serve as a replacement, and HPC was one of the cornerstones of the program. SBSS provided a backstop against financial catastrophe at the Labs and provided long-term funding stability. The HPC element in SBSS was the ASCI program (which became the ASC program as it matured). The original ASCI program was relentlessly hardware focused with lots of computer science, along with activities to port older modeling and simulation codes to the new computers. This should seem very familiar to anyone looking at the new ECP program. The ASCI program is the model for the current exascale program. Within a few years it became clear that ASCI’s emphasis on hardware and computer science was inadequate to provide modeling and simulation support for SBSS with sufficient confidence. Important scientific elements were added to ASCI, including algorithm and method development, verification and validation, and physics model development, as well as stronger ties to experimental programs. These additions were absolutely essential for the success of the program. That being said, these elements are all subcritical in terms of support, but they are much better than nothing.

If one looks at the ECP program, its composition and emphasis look just like the original ASCI program without the changes made shortly into its life. It is clear that the lessons learned by ASCI were ignored or forgotten by the new ECP program. It’s a reasonable conclusion that the main lesson taken from the ASC program was how to get money by focusing on hardware. Two issues dominate the analysis of this connection:

  1. None of the lessons learned by ASC that are necessary to conduct science have been learned by the exascale program. The exascale program is designed like the original ASCI program and fails to implement any of the programmatic modifications necessary for applied success. It is reasonable to conclude that the program has no serious expectation of applied scientific impact. Of course they won’t say this, but actions do speak louder than words!
  2. The premise that exascale computing is necessary for science is an a priori assumption that has been challenged repeatedly (see the JASON reviews, for example). The unfunded and neglected aspects of modeling, methods and algorithms all provide historically validated ways to answer these challenges. Rather than being addressed, the challenges were rejected out of hand and never technically answered. We simply see an attitude that bigger is better by definition, and it has been sold more as a patriotic call to arms than a balanced scientific endeavor. It remains true that faster computers are better if you do everything right; we are not supporting the activities needed to do everything right (V&V, experimental connection and model development being primary in this regard).

Beyond the troubling lack of learning from past mistakes, other issues remain. Perhaps the most obviously damning aspect of our current programs is their lack of connection to massive national goals. We simply don’t have any large national goals beyond being “great” or being “#1”. The HPC program is a perfect example. The whole program is tied to simply making sure that the USA is #1. In the past, when computing came of age, the supercomputer was merely a tool that demonstrated its utility by accomplishing something important to the nation or the world. It was not an end unto itself. This assured a definite balance in how HPC was executed, because success was measured by HPC’s impact on a goal beyond itself. Today there is no goal beyond the HPC itself, and supercomputing as an activity suffers greatly. It has no measure of success outside itself. Any science done by supercomputer is largely for marketing and press releases. Quite often the results have little or no importance aside from the capacity to generate a flashy picture to impress people who know little or nothing about science.

Taken in sufficient isolation, the objectives of the exascale program are laudable. An exascale computer is useful if it can be reasonably used. The issue is that such a computer does not live in isolation; it exists in a complex trade space where other options exist. My premise has never been that better or faster computer hardware is inherently bad. My premise is that the opportunity cost associated with such hardware is too high. The focus on the hardware is starving other activities essential for modeling and simulation success. The goal of producing an exascale computer is not an objective of opportunity, but rather a goal we should actively divest ourselves of. Gains in supercomputing are overly expensive and hamper progress in related areas simply through the implicit tax produced by how difficult the new computers are to use. Improvements in real modeling and simulation capability would be far greater if we invested our efforts in different aspects of the ecosystem.

The key to holding a logical argument or debate is to allow oneself to understand the other person’s argument no matter how divergent their views may seem.

― Auliq Ice

 

 

 

The ideal is the enemy of the real

12 Wednesday Oct 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

God save me from idealists.

― Jim Butcher

Coming up with detailed mathematical analysis, much less the solution of (partial) differential equations, is extremely difficult. In the effort to make progress on this important but critically difficult task, various simplifications and idealizations can make all the difference between success and failure. This difficulty highlights the power and promise of numerical methods for solving such equations, because simplifications and idealizations are not absolutely necessary for a solution. Nonetheless, much of the faith in a numerical method is derived from the congruence of the numerical solution with analytical solutions. This process is known as verification, and it plays an essential role in providing evidence for the credibility of numerical simulations. Our faith in the ability of numerical simulations to solve difficult problems is thus grounded to some degree in the scope and span of our analytical knowledge. This tie is important to both recognize and carefully control, because analytical knowledge is necessarily limited in ways that numerical methods should not be.
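A minimal sketch of the verification exercise described here, using a deliberately simple stand-in problem: forward Euler applied to u' = -u, compared against the analytical solution e^{-t}. The point is the pattern, comparing to an exact solution on a sequence of refined discretizations and checking the observed order of accuracy, not this particular toy problem.

import math

def forward_euler(dt, t_end=1.0):
    # Integrate u' = -u with u(0) = 1 out to t_end using forward Euler.
    u = 1.0
    for _ in range(round(t_end / dt)):
        u += dt * (-u)
    return u

# Errors against the analytical solution exp(-1) on successively halved time steps.
errors = [abs(forward_euler(1.0 / n) - math.exp(-1.0)) for n in (10, 20, 40, 80)]
for coarse, fine in zip(errors, errors[1:]):
    print("observed order of accuracy:", math.log(coarse / fine, 2))  # approaches 1.0

When the observed order matches the theoretical order of the method, the comparison with the analytical solution is doing exactly the credibility-building work described above.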

In developing and testing computational methods, we spend a lot of time working on solving the ideal equations for a phenomenon. This is true in fluids, plasmas, and many other fields. These ideal equations usually come from the age of classical physics and mathematics. Most commonly they are associated with the names of the greats of science: Newton, Euler, Poincaré. This near obsession is one of the greatest dangers to progress I can think of. The focus on the ideal is the consequence of an almost religious devotion to classical ideas, and it is deeply flawed. By focusing on the classical ideal equations, many of the important, critical and interesting aspects of reality escape attention. We remain anchored to the past in a way that undermines our ability to master reality with modernity.

Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.

― Karl R. Popper

These ideal equations are starting points for investigations of the physical world, and they arose in an environment where analytical work was the only avenue for understanding. Simplicity and stripping away the complexities of reality were the order of the day. Today we are freed to a very large extent from the confines of analytical study by the capacity to approximate solutions to equations. We are free to study the universe as it actually is, and to produce a deep study of reality. The analytical methods and ideas still have utility for gaining confidence in these numerical methods, but their lack of grasp on describing reality should be recognized. Our ability to study reality should be celebrated and be the center of our focus. Our seeming devotion to the ideal simply distracts us and draws attention away from understanding the real World.

The more pernicious and harmful aspect of ideality was a reverence for divinity in solutions. The ideal equations are supposed to represent the perfect, and in a sense the “hand of God” working in the cosmos. As such they represent the antithesis of modernity, the inappropriate injection of religiosity into the study of reality. For this reason alone the ideal equations should be deeply suspect at a philosophical level. These sorts of religious ideas should not be polluting the unfettered investigation of reality. More than this, we can see that the true engine of beauty in the cosmos is removed from these equations. So much of what is extraordinary about the universe is the messiness driven by the second law of thermodynamics. This law takes many forms, and it always removes the ideal from the equations and injects the hard yet beautiful face of reality.

A thing can be fine on paper but utterly crummy in the field.

― Robert A. Heinlein

Not only are these equations suspect for philosophical reasons, they are suspect for the imposed simplicity of the time they are taken from. In many respects the ideal equations miss most of the fruits of the last century of scientific progress. We have faithfully extended our grasp of reality to include more and more “dirty” features of the actual physical World. To a very great extent the continued ties to the ideal contribute to the lack of progress in some very important endeavors. Perhaps no case demonstrates this handicapping of progress as amply as turbulence. Our continued insistence that turbulence is tied to the ideal notion of incompressibility is becoming patently ridiculous. It highlights that important aspects of the ideal are synonymous with the unphysical.

I have spoken out about the issues with incompressibility several times in the past (https://williamjrider.wordpress.com/2014/03/07/the-clay-prize-and-the-reality-of-the-navier-stokes-equations/, https://williamjrider.wordpress.com/2015/03/06/science-requires-that-modeling-be-challenged/, https://williamjrider.wordpress.com/2016/04/08/the-singularity-abides/, https://williamjrider.wordpress.com/2016/04/15/the-essential-asymmetry-in-fluid-mechanics/, https://williamjrider.wordpress.com/2016/09/27/the-success-of-computing-depends-on-more-than-computers/). Here I will simply reiterate these points from the perspective of the concept of ideal equations. Incompressibility is simple and utterly ideal in the sense that no nontrivial flow is exactly incompressible (\nabla \cdot {\bf u} = 0). Real and nontrivial flow fields are only approximately incompressible. It is important to recognize that approximately and exactly incompressible are very different at their core. Exactly incompressible flows are fundamentally unphysical and unrealizable in the real world. Put differently, they are absolutely pathological.

An important thing to recognize in this discussion is the number of important aspects of reality that are sacrificed with incompressibility. The list is stunning and gives a hint of the depth of the loss. Gone is the second law of thermodynamics unless viscous effects are present. Gone is causality. Gone are important nonlinearities. This approximation is taken to the extreme of being an unphysical constraint that produces a deeply degenerate system of equations. Of greater consequence is the demolition of physics that may be at the heart of explaining turbulence itself. The essence of turbulence needs a singularity formation to make sense of observations. This is at the core of the Clay Prize, yet in the derivation of the incompressible equations, the natural nonlinear process for singularity formation is removed by fiat. Incompressibility creates a system of equations that is simple and yet only a shadow of the more general equations it claims to represent. I fear it is an albatross about the neck of fluid mechanics.
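For reference, the contrast being drawn can be stated in textbook form (nothing here is specific to any particular code or derivation in this post; mass and momentum are shown, with the energy equation and an equation of state closing the compressible system):

\partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0, \qquad \partial_t(\rho\mathbf{u}) + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u}) + \nabla p = \nabla\cdot\boldsymbol{\tau},

versus the incompressible idealization

\nabla\cdot\mathbf{u} = 0, \qquad \partial_t\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \frac{1}{\rho}\nabla p = \nu\,\nabla^2\mathbf{u}.

In the compressible system, density, pressure and energy are coupled through the equation of state, and acoustic waves carry information at a finite speed. In the incompressible idealization the divergence constraint turns the pressure into a Lagrange multiplier determined by an elliptic equation, so pressure signals act instantaneously across the domain; this is the loss of causality noted above.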


There are other idealities that need to be overturned. In many corners of fluid mechanics, symmetries are assumed. Many scientists desire that they be maintained under all sorts of circumstances. They rarely ask whether the symmetry would be maintained in the face of the perturbations that would reasonably be expected to exist in reality (in fact it is absolutely unreasonable to assume perfect symmetry). Some assumptions are reasonable in situations where the flows are stable, but in other cases any realistic flow would destroy these symmetries. Pushing a numerical method to maintain symmetry under circumstances where the instability would grow should be seen as abhorrent and avoided. In the actual physical universe the destruction of symmetry is the normal evolution of a system, and preservation is rarely observed. As such, the expectation of symmetry preservation in all cases defines an unhealthy community norm.

A great example of this sort of dynamic occurs in modeling stars that end their lives in an explosion, like Type II supernovae. The classic picture was a static, spherical star that burned elements in a series of concentric shells, with heavier elements deeper in the star. Eventually the whole process becomes unstable as the nuclear reactions shift from exothermic to endothermic when iron is created. We observe explosions in such stars, but the idealized stars would not explode. Even when the explosion was forced, the post-explosion evolution could not match important observational evidence that implied deep mixing of heavy elements into the expanding envelope of the star.

This is a place where the idealized view stood in the way of progress for decades, and releasing the ideality allowed progress and understanding. Once these extreme symmetries were relaxed and the star was allowed to rotate, have magnetic fields, and mix elements across the concentric shells, models and simulations started to match observations. We got exploding stars; we got the deep mixing necessary for both the explosion itself and the post-explosion evolution. The simulations began to explain what we saw in nature. The process behind these exploding stars is essential to understanding the universe, because such stars are the birthplace of the matter our World is built from. When things were more ideal, the simulations failed to a very large extent.

This sort of issue appears over and over in science. Time and time again, the desire to study things in an ideal manner acts to impede the unveiling of reality. By now we should know better, but it is clear that we don’t. The idea of sustaining the ideal equations and their evolution as the gold standard is quite strong. Another great example of this is the concept of kinetic energy conservation. Many flows and numerical methods are designed to exactly conserve kinetic energy. This only occurs in the most ideal of circumstances, when flows have no natural dissipation (itself deeply unphysical) while retaining well-resolved, smooth structure. So the property is only seen in flows that are unphysical. Many believe that such flows should be exactly preserved as the foundation for numerical methods. This belief is somehow impervious to the observation that such flows are utterly unphysical and could never be observed in reality. It is difficult to square this belief system with the desire to model anything practical.
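The property at issue can be written in one line, the standard kinetic energy balance for incompressible flow (with periodic or no-flux boundaries assumed):

\frac{d}{dt}\int_\Omega \tfrac{1}{2}\,|\mathbf{u}|^2 \, d\mathbf{x} \;=\; -\,\nu \int_\Omega |\nabla \mathbf{u}|^2 \, d\mathbf{x} \;\le\; 0,

so exact conservation of kinetic energy holds only in the inviscid limit \nu = 0 and only while the solution remains smooth, precisely the dissipation-free, well-resolved circumstances described above as unphysical.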

We need to recognize the essential tension between testing methods against solutions of idealized equations and the practical simulation of reality. We need to free ourselves of the limiting aspects of the mindset around the ideal equations. The important role of matching solutions to ideal equations must be acknowledged without imposing unphysical limits on the simulation. The imperative for numerical methods is modeling reality. To match aspects of the ideal equations’ solutions, many sacrifice physical aspects of their numerical methods. Modeling reality should always be the preeminent concern for the equations and the methods of solution. Numerical methods release us from many of the constraints that analytical approaches abide by, and this freedom should be taken advantage of to the maximal degree.

Quite frequently, the way numerical methods developers square their choices is through an unfortunate separation of modeling from the numerical solution. In some cases the philosophy that is followed solves the ideal equations along with explicit models of any non-ideal physics. As such, the numerical method is expected to be unwaveringly true to the ideal equations. Quite often the problem with this approach is that the non-ideal effects are necessary for the stability and quality of the solution. Moreover, the coupling between the numerical solution and the modeling is not clean, and the modeling can’t be ignored in the assessment of the numerical solution.

A great example of this dichotomy is turbulent fluid mechanics and its modeling. It is instructive to explore the issues surrounding the origin of the models with connections to purely numerical approaches. The classical thinking about modeling turbulence basically comes down to solving the ideal equations as perfectly as possible and modeling the entirety of turbulence with additional models added to the ideal equations. It is the standard approach and, by comparison to many other areas of numerical simulation, a relative failure. Nonetheless this approach is followed with almost religious fervor. I might surmise that the lack of progress in understanding turbulence is somewhat related to the combination of adherence to a faulty basic model (incompressibility) and a solution approach that supposes all the non-ideal physics can be modeled explicitly.

It is instructive in closing to peer more keenly at the whole turbulence modeling problem. A simple but very successful model for turbulence is the Smagorinsky model, originally devised for climate and weather modeling but now forming the foundation for the practice of large eddy simulation (LES). What is underappreciated about the Smagorinsky model is its origins. This model was originally created by Robert Richtmyer as a way of stabilizing shock calculations and applied to an ideal differencing method devised by John von Neumann. The ideal equation solution without Richtmyer’s viscosity was unstable and effectively useless. With the numerically stabilizing term added to the solution, the method was incredibly powerful and forms the basis of shock capturing. The same term was then added to weather modeling to stabilize those equations. It did just that, and remarkably it suddenly transformed into a “model” for turbulence. In the process we lost sight of the role it played for numerical stability, but also of the strong and undeniable connection between the entropy generated by a shock and observed turbulence behavior. This connection was then systematically ignored because the unphysical incompressible equations we assume turbulence is governed by do not admit shocks. In this lack of perspective we find the recipe for the lack of progress. The connection is too powerful not to be present. Such connections create issues that undermine core convictions in the basic understanding of turbulence, convictions that seem too tightly held to be questioned despite the lack of progress.
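The kinship claimed here is visible directly in the standard textbook forms of the two terms (the constants are conventional tuning choices, not values tied to any particular code). The von Neumann–Richtmyer artificial viscosity in one dimension and the Smagorinsky eddy viscosity read, respectively,

q = C_q\, \rho\, (\Delta x)^2 \left( \frac{\partial u}{\partial x} \right)^2 \;\; \text{(applied in compression, } \partial u / \partial x < 0\text{)}, \qquad \nu_t = (C_s \Delta)^2\, |\bar{S}|, \quad |\bar{S}| = \sqrt{2\, \bar{S}_{ij} \bar{S}_{ij}}.

Both are built from a squared mesh length scale and the local velocity gradient; dividing q by \rho\,|\partial u/\partial x| gives an effective kinematic viscosity proportional to (\Delta x)^2 |\partial u / \partial x|, the same scaling as \nu_t. One term dissipates kinetic energy to stabilize shocks, the other to represent unresolved turbulence, which is precisely the connection argued above to have been lost.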

We cannot become what we need by remaining what we are.

― John C. Maxwell

 

 
