
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent



Codes of Myth and Legend

18 Friday Apr 2014

Posted by Bill Rider in Uncategorized


If you work actively in modeling and simulation you will encounter the codes of bygone days. If I were more of a historian they could come in really handy, although the comments in these codes often leave much to be desired. These codes are the stuff of myth and legend, or at least it seems that way. The authors of the codes are mythically legendary. They did things we can no longer do; they created useful tools for conducting simulation. This is a big problem, because we should be getting steadily better at conducting simulations. It becomes an even bigger problem when these people no longer work, and their codes live on.

“What I cannot create, I do not understand.” – Richard Feynman

Too large a portion of the simulation work done today is not understood in any deep way by those doing the simulation. In other words, the people using the code and conducting the simulation don't really understand much about how the answer they are using was arrived at. People run the code, get an answer, and do analysis without any real idea of how the code actually got the answer. This is dangerous. It actually works to undermine the scientific enterprise. Moreover, this trend is completely unnecessary; it is driven by deep cultural undercurrents that extend well beyond the universities, labs and offices where simulation should be having a massively positive impact on society.

The people who created these codes are surely proud of their work. I've been privileged to work with some, and all of them are generally horrified by how long their codes continue to be used to the exclusion of newer replacements. It was their commitment to applying technical solutions to real problems that made a difference. Those who created those earlier technical solutions were committed to applying technology to solving problems, and they were good at it. The spirit of discovery that allowed them to create codes and then see them used for meaningful work has dissipated in broad swaths of science and engineering. The disturbing point is that we don't seem to be very good at it any more. At least we aren't very good at getting our tools to solve problems. The developers of the mythic codes generally feel quite distressed by the continued reliance on their aging methods, and by the lack of viable replacements.

Why?

I don't believe it is the quality of the people, nor is it the raw resources available. Instead, we lack the collective will to get these things done. Our systems are letting us down. Society is not allowing progress. Our collective judgment is that the risk of change actually outweighs the need for or benefit of progress. Progress still lives on in other areas, such as "big data" and business associated with the Internet, but even there you can see the forces of stagnation looming on the horizon. The areas where society should place its greatest hope for the future are under threat by the same forces that are choking the things I work on. This entire narrative needs to change, for the good of society and the beneficial aspects of progress.

Remarkably, the systems devised and implemented to achieve greater accountability are themselves at the heart of achieving less. The accountability is a ruse, a façade put into place to comfort the small-minded. The wonder of solving interesting problems on a computer seems to have worn off, replaced by a cautious pessimism about the entire enterprise. None of these factors is necessary, and all of them are absolutely self-imposed limitations. Let's look at each of these issues and suggest something better.

All the codes were created in the days when computing was in its infancy and supported ambitious technological objectives. Usually a code would cut its teeth on the most difficult problems available, and if it proved useful, the legend would be born. The mythic quality is related to the code's ability to usefully address the problems of importance. The success of the technology supported by the code would lend itself to the code's success and mythic status. The success of the code's users would transfer to the code; the code was part of the path to a successful career. Ambition would be satisfied through the code's good reputation. As such, the code was part of a flywheel, with ambitious projects and ambitious people providing the energy. The legacy of the code creates something that is quite difficult to overcome. It may require more willpower to move on than the code originally harnessed in taking on its mantle of legend.

We seem to have created a federal system that is maximizing the creation of entropy. It is almost as if the government were expressing a deep commitment to the second law of thermodynamics. Despite being showered with resources, the ability to get anything of substance done is elusive. Beyond this, the elusive nature of progress is growing in prominence. Creating a code that has real utility for real applied problems takes focus, ingenuity, luck and commitment. Each of these is in limited supply. The research system of today seems to sap each of these in a myriad of ways. It seems almost impossible to focus on anything today. If I told you how many projects I work on, you'd immediately see part of the problem (7 or 8 a year). This level of accounting comes at me from a myriad of sources, some entirely local and some national in character. All of it is tinged with the sense that I can't be trusted.

It takes a great deal of energy to drive these projects toward anything that looks coherent; none of this equals the creation and stewardship of a genuine capability. Ingenuity is being crushed by the increasingly risk-averse and politically motivated research management system. Lack of commitment is easy to see in the flighty support for most projects. Even when projects are supported well, the management system slices and dices the effort into tiny bite-sized pieces, and demands success in each. Failure is not tolerated. Wisdom dictates that the lack of tolerance for failure is tantamount to destroying the opportunity for success. In other words, our risk aversion is causing the very thing it was designed to avoid. Between half-hearted support and risk aversion, the chance for real innovation is being choked to death.

The management of the Labs where I work is becoming ever more intrusive. Take, for example, the financial system. Every year my work is parceled into ever-smaller chunks. This is done in the name of accountability. Instead, the freedom to execute anything big is being choked by all this accountability. The irony is that the detailed accounting is actually assuring that less is accomplished, and the people driving the micromanagement aren't accountable in the slightest for the damage they have caused. The micro accounting of my time is also driving a level of incrementalism into the work that destroys the ability to do anything game-changing. This incrementalism goes hand-in-hand with the lack of any risk-taking. We are dictated to succeed by fiat, and by the same logic success on any large scale is inhibited.

When it comes to code development the incremental attitude results in work being accreted onto the same ever-older code base. The low-risk path is to add a little bit more onto the already useful (legacy) code. This is done despite the lack of real in-depth knowledge of how the code actually works to solve problems. The part of the code that leads to its success is almost magical, and, being magic, can't be tampered with. The foundation for all the new work is corrupted by the lack of understanding, which then poisons the quality of the work built on top of the flawed base. As such, the work done on top of the magical foundation is intrinsically superficial. Given the way we manage science today, superficiality should actually be expected. Our science and engineering management is focused, almost to the exclusion of everything else, on the most superficial aspects of the work.

The fundamental fact is that a new code is a risk. It may not replace or improve upon the existing capability. Success can never be guaranteed, nor should it be. Yet we have created a system of managing science that cannot tolerate any failure. Existing codes already solve the problem well enough for somebody to get answers, and the low risk path is to build upon this. Instead of building upon the foundation of knowledge and applying this to better solutions, it is cheaper and lower risk to simply refurbish the old code. Like much of our culture today the payoff is immediate rather than delayed. You get new capability right away rather than a much better code later. Right away and crummy beats longer term and great every time. Why? Short attention spans? No real accountability? Deep institutional cynicism?

A good analogy is the state of our crumbling physical infrastructure. The United States' 20th Century infrastructure is rotting before our eyes. When we should be thinking of a 21st Century infrastructure, we are trying to make the last Century's limp along. Think of an old bridge that desperately needs replacement. It is in danger of collapse and represents a real risk rather than a benefit to its users. More often than not in today's America, the old bridge is simply repaired or retrofitted, regardless of its state of disrepair. You can bumble along this path until the bridge collapses. Most bridges don't, but some do, with tragic consequences. Usually there is a tremendous amount of warning that is simply ignored. Reality can't be fooled. If the bridge needs replacing and you fail to do so, a collapse is a reasonable outcome. Most of the time we just do it on the cheap.

We are doing science exactly the same way. In cases where no one can see the bridge collapse, the guilty are absolved of the damage they do. Just like with physical infrastructure, we are systematically discounting the future value for the present cost. The management (I can't really call them leaders) in charge are simply not stewards of our destiny; they are just trying to hold the crumbling edifice together until they can move on with a hollow declaration of success. Sooner or later, this lack of care will yield negative consequences.

All this caution is creating an environment that fails to utilize existing talent, and embodies a pessimistic view of man's capacity to create. This might be the saddest aspect of the overall malaise: the waste of potential. Our "customers" actually ask very little of us, and the efforts don't really push our abilities; except, perhaps, our ability to withstand work that is utter dreck. The objectives of the work are small-minded, with a focus on producing sure results and minimizing risks. The system does little to encourage big thoughts, big dreams, or the risks behind creating big results. Politicians heighten this sense by constantly discussing how deeply in debt our country is as an excuse for not spending money on things of value. Not every dollar spent is the same; a dollar invested in something of value is not the same as a dollar spent on something with no return. All of this is predicated on a mentality of scarcity, and a failure to see our fellow man, or ourselves, as engines of innovation and unseen opportunity. History will not be kind to our current leadership when it is realized how much was squandered. The evidence that we should have faith in man's innate creative ability is great, and the ignorance of the possibility of a better world is hard to stomach.

The first step toward a better future is a change in the assumptions regarding what an investment in the future looks like. One needs to overthrow the scarcity mentality and realize that money invested wisely will yield greater future value. Education and lifelong learning are one such investment. Faith in creativity and innovation is another. Big, audacious goals and lofty objectives are another. The big goal is more than just an achievement; it is an aspiration that lifts the lives of all who contribute. It also lifts the lives of all who are inspired by it. Do we have any large-scale societal goals today? If we don't, how can we tolerate such a lack of leadership? We should be demanding something big and meaningful in our lives, something that is worth doing and something that would make the current small-minded micromanagement and lack of risk taking utterly unacceptable. We should be outraged by the state of things, and the degree to which we are being led astray.

All of us should be actively engaged in creating a better world, and solving the problems that face us. Instead we seem to be just hanging on to the imperfect world handed to us. We need to have more faith in our creative problem solving abilities, and less reverence for what was achieved in the past. The future is waiting to be created.

“If you want something new, you have to stop doing something old” ― Peter F. Drucker

 

What constraints are keeping us from progressing?

11 Friday Apr 2014

Posted by Bill Rider in Uncategorized


“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” – Clarke’s first law

It would be easy to point fingers at the crushing bureaucratic load we face at many of our premier research institutes. I think this only compounds the real forces holding us back, acting as a sort of mindless ally in the quest for mediocrity. I for one can feel my ability to think and create being siphoned away by meaningless paperwork, approvals, training and mindless formality. The personal toll is heartbreaking, and the taxpayers should be up in arms. Of course most of this is driven by our scandal-mongering political system and the increasingly tabloidesque media. These items are merely various forms of societal dissipation driving entropy toward its all-consuming conclusion.

When I came across the article in the Daily Beast (Our Mindless Government Is Heading for a Spending Disaster) yesterday on the book "The Rule of Nobody" by Philip K. Howard, it became clear that I'm not alone in feeling this way. Our Labs are actually not run by anyone, and certainly not by the management of the Lab. The problem with this approach is not partisan, but rather associated with a tendency to be lazy in our rule. The core of what drives this trend is the inability to reinvent our governance. This failure to reinvent is then at the core of the deeper issue, the fear of risk or failure. We have a society-wide inability to see failure for what it is: a necessary vehicle for success. Risk is the thing that allows us to step forward toward both accomplishment and failure. You cannot have one without the other. Somehow as a culture we have forgotten how to strive, to accept failure as a necessary element of a healthy country. Somehow this aversion has crept into our collective consciousness. It is sapping our ability to accomplish anything of substance.

In scientific research the inability to accept risk and the requisite failure is incredibly destructive. Research at its essence is doing something that has never been done before. It should be risky and thus highly susceptible to failure. Our ability to learn the limits of knowledge is intimately tied to failure. Yet failure is the very thing that we are not encouraging as a society. In fact, failure is punished without mercy. The aggregate impact of this is the failure to accept the sort of risk that leads to large-scale success. To get a “Google” or a “moon landing” we have to fund, accept and learn from innumerable failures. Without the failure the large success will elude us as well.

Another is the artificial limitation we place on our thinking under the guise of "it's impossible". Impossible also implies risk and a large chance of outright failure. We quit pushing the limits of what might be possible and escape into the comfortable confines of the safely possible. A third piece is the inability to marshal our collective efforts in the pursuit of massive societal goals. These goals capture the imagination and drive the orientation toward success beyond us to greater achievements. Again, it is the inability to accept risk. The last I'll touch upon is the lack of faith in the creative abilities of mankind. Man's creative energies have continually overcome limitations for millennia, and there is no reason to think this won't continue. Algorithmic improvement's impact on computing is but one version of the larger theme of man's ability to create a better world.

It seems that my job is all about NOT taking risks. The opposite should be true. Instead we spend all our time figuring out how not to screw up, how to avoid any failure. This, of course, is antithetical to success. All success, all expertise, is built upon the firm foundation of glorious failure and risk. Failure is how we learn, and risk helps to stoke the flames of failure. Instead we have grown to accept creeping mediocrity as the goal of our entire society. When the biggest goal at work is "don't screw up," it is hard to think of a good reason to do anything. We have projects with scheduled breakthroughs and goals that are easy to meet. Very few projects are funded that actually attack big goals. Instead instrumentalism abounds, and the best way to get funded is to solve the problem first, then use the result to justify more funding. It's a vicious cycle, and it is swallowing too much of our efforts.

Strangely enough, the whole vicious cycle also keeps us from doing the mundane. Since our efforts are so horrifically over-managed, there is no energy to actually execute what should be the trivial aspects of the job. Part of this is related to the slicing and dicing of our work into such small pieces that any coherence is lost. The second part is the lack of any overarching vision of where we are going. The lack of big projects with scope kills the ability to do consequential tasks that should be easy. Instead we do all sorts of things that seem hard, but really amount to nothing. We are a lot of motion without any real progress. Some of us noted a few weeks ago that new computer codes used to be started every five to seven years. Then, about 25 years ago, that stopped. Now everything has to be built upon existing codes because it lowers the risk. We have literally missed four or five generations of new codes. This is failure on an epic scale because no one will risk something new.

"Can we travel faster than the speed of light?" my son once asked me. A reading of the standard, known theories of physics would give a clear, unequivocal "No, it would be impossible." I don't buy this as the ultimate response. A better and more measured response would be, "Not with what we know today, but there are always new things to be learned about the universe. Maybe we can, using physical principles that haven't been discovered yet." Some day we might travel faster than light, or effectively so, but it won't look like Star Trek's warp drive (or maybe it will, who knows). The key is to understand that what is possible or impossible is only a function of what we know today, and our state of knowledge is always growing.

In mathematics these limits on possibility often take the form of barrier theorems. These state what cannot be done. Such barriers can be overcome if they are read liberally with an eye toward loopholes. A common loophole is linearity. Linearity infuses many mathematical proofs and theorems, and the means of overcoming the limitations is to appeal to nonlinearity. One important example is Godunov's theorem, where formal accuracy and monotonicity are linked. The barrier only applies to linear numerical methods; a nonlinear numerical method can be both better than first-order accurate and monotone. The impossible was possible! It was simply a matter of thinking about the problem outside the box of the theorem.
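To make that loophole concrete, here is a minimal sketch (my own illustration in Python, not anything drawn from the codes discussed on this blog) of linear advection of a square wave. The linear, second-order Lax-Wendroff scheme overshoots at the discontinuity (it cannot be both monotone and second-order, per the theorem), while a nonlinear minmod-limited scheme of the same nominal accuracy stays monotone. The grid, CFL number and step count are arbitrary choices for the sketch.

```python
# Linear advection u_t + a u_x = 0 (a > 0, periodic) on a square wave:
# a linear second-order scheme oscillates; a nonlinear limited one does not.
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u0, c, nsteps, limited):
    """March nsteps at CFL number c; 'limited' selects the nonlinear scheme."""
    u = u0.copy()
    for _ in range(nsteps):
        um, up = np.roll(u, 1), np.roll(u, -1)        # u_{i-1}, u_{i+1}
        if limited:
            s = minmod(u - um, up - u)                # solution-dependent (nonlinear) slope
            face = u + 0.5 * (1.0 - c) * s            # left state at interface i+1/2
            u = u - c * (face - np.roll(face, 1))
        else:                                         # Lax-Wendroff (linear, second order)
            u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)        # square wave between 0 and 1
for limited in (False, True):
    u = advect(u0, c=0.5, nsteps=100, limited=limited)
    name = "minmod-limited" if limited else "Lax-Wendroff "
    print(name, "min = %+.3f  max = %.3f" % (u.min(), u.max()))
```

The only difference between the two branches is that the limited slope depends on the solution itself; that solution dependence is exactly the nonlinearity that steps around the barrier.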

Most of the areas that have traditionally supported scientific computing are languishing today. Almost nothing in the way of big, goal-oriented projects exists to spur progress. The last such program was the ASCI program from the mid-1990's, which unfortunately focused too much on pure computing as the route to progress. ASCI bridged the gap between the CPU-dominated early era and the growth of massively parallel computation. In fact, parallel computing has masked the degree to which we are collectively failing to use our computers effectively. This era is drawing to a close, and in fact Moore's law is rapidly dying.

While some might see the death of Moore's law as a problem, it may be an opportunity to reframe the quest for progress. In the absence of computational improvements driven by the technology, the ability to progress could again be handed to the scientific community. Without hardware growing in capability, the source of progress resides in the ability of algorithms, methods and models to improve. Even under the spell of Moore's law, these three factors have accounted for more improvement in computational capability than hardware. What will our response be to losing Moore's law? Will we invest appropriately in progress? Will we refocus our efforts on improving algorithmic efficiency, better numerical methods and improved modeling? Hope springs eternal!
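As a hedged illustration of that claim, the sketch below counts the iterations needed to solve the same one-dimensional Poisson problem with a plain Jacobi sweep and with unpreconditioned conjugate gradients. The problem, tolerance and grid sizes are arbitrary choices for the sketch; the point is only that the gap between the two algorithms widens as the mesh is refined, which is the kind of gain no constant-factor hardware speedup delivers.

```python
# Iterations to reduce the residual of -u'' = 1 (Dirichlet ends) by 1e-6,
# comparing a naive iteration with a better algorithm on the same machine.
import numpy as np

def lap(v):
    """Matvec with the 1-D Laplacian stencil tridiag(-1, 2, -1)."""
    av = 2.0 * v
    av[1:] -= v[:-1]
    av[:-1] -= v[1:]
    return av

def jacobi_iters(n, tol=1e-6, max_iter=200000):
    b = np.full(n, (1.0 / (n + 1)) ** 2)              # rhs scaled by h^2
    u = np.zeros(n)
    bnorm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        u = u + 0.5 * (b - lap(u))                    # Jacobi: u += D^{-1}(b - A u), D = 2I
        if np.linalg.norm(b - lap(u)) < tol * bnorm:
            return k
    return max_iter                                   # hit the cap

def cg_iters(n, tol=1e-6):
    b = np.full(n, (1.0 / (n + 1)) ** 2)
    x, r = np.zeros(n), b.copy()
    p, rs = r.copy(), b @ b
    bnorm = np.linalg.norm(b)
    for k in range(1, 10 * n):
        Ap = lap(p)
        alpha = rs / (p @ Ap)
        x, r = x + alpha * p, r - alpha * Ap
        if np.linalg.norm(r) < tol * bnorm:
            return k
        rs_new = r @ r
        p, rs = r + (rs_new / rs) * p, rs_new
    return 10 * n

for n in (31, 63, 127):
    print("n = %4d   Jacobi: %7d   CG: %4d" % (n, jacobi_iters(n), cg_iters(n)))
```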

In the final analysis, such an investment requires a great deal of faith in man's eternal ability to create, to discover and to be inspired. History provides an immense amount of evidence that this faith would be well placed. As noted above, we have created as much if not more computational capability through ingenious algorithms, methods, heuristics, and models as through our massive strides in computational hardware.

It is noteworthy that the phone in my pocket today has the raw computational power of a Cray 2. It sits idle most of the time and gets used for email, phone calls, texts and light web browsing. If you had told me 25 years ago that I'd have this power available to me like this, I would have been dumbstruck. Moreover, I don't really use it for anything like I'd have used a Cray 2. The difference is that the same will almost certainly not happen in the next 25 years. The "easy" progress of simply riding the coattails of Moore's law is over. We will have to think hard to progress and take a different path. I believe the path is clear. We have all the evidence needed to continue our progress.

Unrecognized Bias can govern modeling & simulation quality

04 Friday Apr 2014

Posted by Bill Rider in Uncategorized


We are deeply biased by our perceptions and preconceptions all the time. We constantly make decisions without knowing we are making them. Any recognition of this would probably terrify most rational people. We often frame our investigations to prove the conclusion we have already made. Computer modeling and simulation has advanced to the point where it is forming biases. If one's most vivid view of an unseeable event is a simulation, a deep bias can be shaped in favor of the simulation that unveiled the unseeable. We are now at the point where we need to consider whether improvement in modeling and simulation can be blocked by such biases.

For example, one modeling effort for high explosives favored a computer code that is Lagrangian (meaning the mesh moves with the material). The energy release from explosives causes fluid to rotate vigorously, and this rotation can render the mesh into a tangled mess. Besides becoming inaccurate, the tangled mesh will invariably endanger the entire simulation. To get rid of the problem, this code converts tangled mesh elements into particles. This is a significant upgrade over the practice of "element death," where the tangled grid is completely removed when it becomes a problem, along with its mass, momentum and energy… Conservation laws are laws, not suggestions! The conversion to particles instead allows the simulation to continue, but brings along all the problems with accuracy and ultimately conservation that particles carry (I'm not a fan of particles).
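As a purely illustrative sketch (the field names and flat data layout are invented, not any production code's data model), the minimal discipline the paragraph above argues for is simple bookkeeping: sum mass, momentum and energy over cells and particles before and after whatever is done to the tangled elements, and refuse remedies that change the totals.

```python
# Conservation bookkeeping around a hypothetical "convert tangled cells to particles" step.
import numpy as np

def totals(cells, particles):
    """Global (mass, momentum, energy) summed over cells and particles."""
    m = cells["mass"].sum() + particles["mass"].sum()
    mom = (cells["mass"][:, None] * cells["vel"]).sum(axis=0) \
        + (particles["mass"][:, None] * particles["vel"]).sum(axis=0)
    e = (cells["mass"] * cells["etot"]).sum() + (particles["mass"] * particles["etot"]).sum()
    return m, mom, e

def convert_to_particles(cells, particles, tangled):
    """Move flagged cells into the particle set, carrying mass, velocity and specific energy."""
    for field in ("mass", "vel", "etot"):
        particles[field] = np.concatenate([particles[field], cells[field][tangled]])
        cells[field] = cells[field][~tangled]
    return cells, particles

cells = {"mass": np.array([1.0, 2.0, 1.5, 0.5]),
         "vel":  np.array([[0.1, 0.0], [0.2, 0.1], [-0.3, 0.2], [0.0, 0.0]]),
         "etot": np.array([2.0, 2.5, 3.0, 1.0])}
particles = {"mass": np.array([0.2]), "vel": np.array([[0.0, 0.5]]), "etot": np.array([1.5])}

before = totals(cells, particles)
cells, particles = convert_to_particles(cells, particles,
                                        tangled=np.array([False, False, True, False]))
after = totals(cells, particles)
print("before:", before)
print("after: ", after)    # identical totals; "element death" would have changed them
```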

More tellingly, competitor codes and alternative simulation approaches will add particles to their simulation. The only reason the particles are added is to give the users something that looks more like what they are used to. In other words the users expect particles in interesting parts of the flow, and the competitors are eager to give it to them whether it is a good idea or not (it really isn’t!). Rather than develop an honest and earnestly better capability, the developers focus on providing the familiar particles.

Why? The analysts running the simulations have come to expect particles, and the particles are common where the simulations are the most energetic, and interesting. To help make the analysts solving the problems believe in the new codes, particles come along. I, for one, think particles are terrible. Particles are incredibly seductive and appealing for simulation, but ultimately terrible because of their inability to satisfy even more important physical principles, or provide sufficient smoothness for stable approximations. Their discrete nature forces an unfortunate trade space to be navigated without sufficiently good alternatives. In some cases you have to choose between smoothness for accuracy and conservation. Integrating particles is often chosen because it can be done without dissipation, but dissipation is fundamental to physical, causal events. Causality, dissipation and conservation all trump a calculation with particles that lacks these characteristics. In the end the only reason for the particles is the underlying bias of the analysts who have grown to look for them. Nothing else, no reason based on science; it is based on providing the "customer" what they want.

“If I had asked people what they wanted, they would have said faster horses.”– Henry Ford.

There you have it: give people what they don't even know they need. This is a core principle in innovation. If we just keep giving people what they think they want, improvements will be killed. This is exactly what code-related biases do: they bias users strongly toward what they already have instead of what is possible.

Modeling and simulation has been outrageously successful over the decades. This success has spawned the ability to trick the human brain into believing that what it sees is real. The fact that simulations look so convincing is a mark of the massive progress that has been made. This is a rather deep achievement, but it is fraught with the danger of coloring perceptions in ways that cannot be controlled. The anchoring bias I spoke of above is part of that danger. The success now provides a barrier to future advances. In other words, enough success has been achieved that the human element in determining quality may be a barrier to future improvements.

It might not come as a surprise for you to think that I’ll say V&V is part of the answer.

V&V has a deep role to play in improving upon this state of affairs. In a nutshell, the standard for accepting and using modeling and simulation must improve in order to allow the codes to improve. A colleague of mine has the philosophy, “you can always do better.” I think this is the core of innovation, success and advances. There is always a way to improve. This needs to be a steadfast belief that guides our choices, and provides the continual reach toward bettering our capabilities.

What can overcome this very human reaction to the visual aspects of simulation?

First, the value of simulation needs to be based upon the comparisons with experimental measurements, not human perceptions. This is easier said than done. Simulations are prone to being calibrated to remove differences from experimental measurements. Most simulations cannot match experimental observables without calibration, and/or the quality standards cannot be achieved without calibration. The end result is the inability to assess the proper value of a simulation without the bias that calibration brings. An unambiguously better simulation will require a different calibration, and potentially a different calibration methodology.
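A minimal sketch of what "comparisons with experimental measurements" can mean in practice, with entirely made-up numbers: judge the simulation by whether its disagreement with the measurement is consistent with the stated simulation and experimental uncertainties, not by whether the picture looks right.

```python
# Compare one simulated observable against one measurement, with uncertainties.
import math

def validation_gap(sim_value, sim_unc, exp_value, exp_unc):
    """Return the difference and a root-sum-square combined uncertainty."""
    diff = sim_value - exp_value
    u_val = math.sqrt(sim_unc**2 + exp_unc**2)
    return diff, u_val

# hypothetical peak pressure: simulation 10.4 +/- 0.3, experiment 9.8 +/- 0.4 (arbitrary units)
diff, u_val = validation_gap(10.4, 0.3, 9.8, 0.4)
print("difference = %+.2f, combined uncertainty = %.2f" % (diff, u_val))
print("consistent within ~2x the combined uncertainty" if abs(diff) <= 2.0 * u_val
      else "discrepancy exceeds the stated uncertainties; the model (or the data) needs work")
```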

 

In complex simulations, the full breadth of calibration is quite difficult to fully grapple with. There are often multiple sources of calibration in a simulation, including any subgrid physics, or closure relations associated with physical properties. Perhaps the most common place to see calibration is the turbulence model. Being an inherently poorly understood area of physics, turbulence modeling is prone to being a dumping ground for uncertainty. For example, ocean modeling often uses a value for the viscous dissipation that far exceeds reality. As a friend of mine likes to say, "if the ocean were as viscous as we model it, you could drive to England (from the USA)." Without strong bounds being put on the form and value of parameters in the turbulence model, the values can be modified to give better matches to more important data. This is the essence of the heavy-handed calibration that is common. Another example is the detailed equation of state for a material. Often a simulation code has been used in determining various aspects of the material properties or in analyzing the experimental data used.

 

I have witnessed several difficult areas of applied modeling and simulation overwhelmed by calibration. The use of calibration is so commonly accepted that the communities engage in it without thinking. If one isn't careful, truly validating the state of "true" modeling knowledge becomes nearly impossible. The calibration becomes intimately intertwined with what seems to be fundamental knowledge. For example, a simulation code might be used to help make sense of experimental data. If one isn't careful, errors in the simulation used in reducing the experimental data can be transferred over to the data itself. Worse yet, the code used in interpreting the data might utilize a calibration (it almost certainly does). At that point you are deep down the proverbial rabbit hole. Deep. How the hell do you unwind this horrible knot? You have calibrated the calibrator. An even more pernicious error might be the failure to characterize the uncertainties in the modeling and simulation that is used to help look at the experiment. In other cases calibrations are used so frequently that they simply get transferred over into what should be fundamental physical properties. If these sorts of steps are allowed to proceed, the original intent can be lost.

These steps are in addition to much of my professional V&V focus: code verification and numerical error estimation. These practices can provide unambiguous evidence that a new code is a better solution on analytical problems and real applications. Too often code verification simply focuses upon the correctness of implementations as revealed by the order of convergence. The magnitude of the numerical error can be revealed as well. It is important to provide this evidence along with the proof of correctness usually associated with verification. What was solution verification should be called numerical error estimation, and it provides important evidence on how well real problems are solved numerically. Moreover, if part of a calibration is accounting for numerical error, the error estimation will unveil this issue clearly.
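For concreteness, here is a minimal sketch of both practices, under the assumption of a constant grid-refinement ratio: the observed order of convergence from errors against an exact or manufactured solution (code verification), and a Richardson-style estimate of the order and the remaining error when no exact answer exists (numerical error estimation). The numbers are invented.

```python
# Observed order of convergence and a Richardson-style error estimate.
import math

def observed_order(err_coarse, err_fine, r=2.0):
    """Convergence order implied by errors on two grids refined by the ratio r."""
    return math.log(err_coarse / err_fine) / math.log(r)

def richardson_estimate(f_coarse, f_medium, f_fine, r=2.0):
    """Order and leading-order fine-grid error from three successively refined solutions."""
    p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
    err_fine = abs(f_fine - f_medium) / (r**p - 1.0)
    return p, err_fine

# code verification: errors against an exact solution on grids h and h/2
print("observed order: %.2f" % observed_order(4.0e-3, 1.0e-3))   # ~2 for a second-order method

# error estimation: an output quantity computed on grids h, h/2 and h/4
p, err = richardson_estimate(0.9500, 0.9800, 0.9875)
print("estimated order %.2f, estimated fine-grid error %.4f" % (p, err))
```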

The bottom line is to ask questions. Ask lots of questions, especially ones that might seem to be stupid. You’ll be surprised how many stupid questions actually have even stupider answers!

#predictive or 11 things to make your simulation model the real world

28 Friday Mar 2014

Posted by Bill Rider in Uncategorized


It feels almost dirty to put a "#" hashtag in my title, but what the hell! The production of predictive models is the holy grail of modeling and simulation. On the other hand, we have a lot of scientists and engineers who think they have predictivity when in fact they have cheated. By "cheating" I usually mean one form or another of calibration, either mindfully or ignorantly applied to the model. The model itself ends up being an expensive interpolation, and any predictivity is illusory.

When I say that you are modeling the real world, I really mean that you actually understand how well you compare. A model that seems worse, but is honest about your simulation mastery, is better than a model that seems to compare better but is highly calibrated. This seems counterintuitive, as I'm saying that a greater disparity is better. In the case where you've calibrated your agreement, you have lost the knowledge of how well you model anything. Having a good idea of what you don't know is essential for progress.

A computational model sounds a lot better than an interpolation. In such circumstances simulation ends up being a way of appearing to add rigor to the prediction when any real rigor was lost in making the simulation agree so well with the data. As long as one is simply interpolating, the cost is the major victim of this approach, but in the case where one extrapolates there is real danger in the process. In complex problems simulation is almost always extrapolating in some sense. A real driver for this phenomenon is mistakenly high standards for matching experimental data, which drive substantial overfitting of the data (in other words, forcing a better agreement than the model should allow). In many cases well-intentioned standards of accuracy in simulation drive pervasive calibration that undermines the ability to predict, or to assess the quality of any prediction. I'll explain what I mean by this and lay out what can be proactively done to conduct bona fide modeling.

I suppose the ignorant can be absolved of the sin they don't realize they are committing. Their main sin is ignorance, which is bad enough. In many cases the ignorance is utterly willful. For example, physicists tend to show a lot of willful ignorance of numerical side effects. They know these effects exist yet continue to systematically ignore them, or calibrate over them. The delusional calibrators are cheating purposefully and then claiming victory despite having gotten the answer by less than noble means. I've seen example after example of this in a wide spectrum of technical fields. Quite often nothing bad happens until a surprise leaps up from the data. The extrapolation finally becomes poor and the response of the simulated system surprises.

The more truly ignorant will find that they get the best answer by using a certain numerical method, or grid resolution, and with no further justification declare this to be the best solution. This is the case for many, many engineering applications of modeling and simulation. For some people this would mean using a first-order method because it gives a better result than the second-order method. They could find that using a more refined mesh gives a worse answer and then use the coarser grid. This is easier than trying to track down why either of these dubious steps would give better answers, because they shouldn't. In other cases, they will find that a dubious material or phenomenological model gives better results, or a certain special combination does. Even more troubling is the tendency to choose expedient techniques whereby mass, momentum or energy is simply thrown away, or added, in response to a bad result. Generally speaking, the ignorant who apply these techniques have no general idea how accurate their model actually is, its uncertainties, or the uncertainties in the quantities they are comparing to.

While dummies abound in science, charlatans are a bigger problem. While calibration, when mindfully done and acknowledged, is legitimate, the misapplication of calibration as mastery in modeling is rampant. Again, like the ignorant, the calibrators often have no working knowledge of many of the innate uncertainties in the model. They will joyfully go about calibrating over numerical error, model form, data uncertainty, and natural variability without a thought. Of course the worst form of this involves ignorant calibrators who believe they have mastery over things they understand poorly. This ultimately is a recipe for disaster, but the near-term benefits of these practices are profound. Moreover, the powers that be are woefully unprepared to unmask these pretenders.

At its worst, calibration will utilize unphysical, unrealizable models to navigate the solution into complete agreement with data. I've seen examples where fundamental physical properties (like the equation of state or cross sections) are made functions of space, when they should be invariant of position. Even worse, the agreement will be better than it has any right to be, without even considering the possibility that the data being calibrated to is flawed. Other calibrations will fail to account for experimental measurement error, or natural variability, and never even raise the question of what these might be. In the final analysis the worst aspect of this entire approach is the lost opportunity to examine the state of our knowledge and seek to improve it.

How to do things right:

1. Recognize that the data you are comparing to isn't perfectly accurate, and is variable. Try to separate these uncertainties into their sources: measurement error, intrinsic variability, or unknown factors.

2. Your simulation results are similarly uncertain for a variety of reasons. More importantly, you should be able to more completely and mindfully examine their sources and estimate their magnitude. Numerical errors arise from finite resolution, unconverged nonlinearities (the effects of linearization), unconverged linear solvers, and outright bugs. The models can often have their parameters changed, or even be swapped for other models. The same can be said of the geometric modeling.

3. Much of the uncertainty in modeling can be explored in a concrete way by modifying the details of the models in a manner that is physically defensible. The values in or from the model can be changed in ways that can be defended in a strict physical sense.

4. In addition, different models are often available for important phenomena, and these different approaches can reveal a degree of uncertainty. To some degree different computer codes themselves constitute different models and can be used to explore differences in what would be considered reasonable, defensible models of reality.

5. A key concept in validation is a hierarchy of experimental investigations that cover different levels of system complexity and modeling difficulty. These sources of experimental (validation) data provide the ability to deconstruct the phenomena of interest into their constituent pieces and validate them independently. When everything is put together for the full model, a fuller appreciation of the validity of the parts can be achieved, allowing greater focus on the sources of discrepancy.

6. Be ruthless in uncovering what you don't understand, because this will define your theoretical and/or experimental program. If nothing else it will help you mindfully and reasonably calibrate while placing limits on extrapolation.

7. If possible work on experiments to help you understand basic things you know poorly and use the results to reduce or remove the scope of calibration.

8. Realize that the numerical solution to your system itself constitutes a model of one sort or another.  This model is a function of the grid you use, and the details of the numerical solution.

9. Separate your uncertainties between the things you don't know and the things that just vary. This is the separation of epistemic and aleatory uncertainty. The key to this separation is that epistemic errors can be removed by learning more, while aleatory uncertainty is part of the system and is harder to control. (A minimal sketch of this separation follows the list.)

10. Realize that most physical systems are not completely well-determined problems. In other words, if you repeat an experiment that should be the same over and over, some of the variation in results is due to imperfect knowledge of the experiment. One should not try to exactly match the results of every experiment individually; some of the variation in results is real physical noise.

11. Put everything else into the calibration, but realize that it is just papering over what you don’t understand.  This should provide you with the appropriate level of humility.
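Here is the minimal sketch promised in item 9, using an invented toy model: the epistemic unknown is treated as an interval that is merely scanned (outer loop), while the aleatory variability is sampled as genuine randomness (inner loop). The result is a range of statistics rather than a single number, which is the whole point of keeping the two separate.

```python
# Double-loop treatment of epistemic (outer) and aleatory (inner) uncertainty.
import numpy as np

rng = np.random.default_rng(1)

def response(a, x):
    """Hypothetical model output for an uncertain parameter a and a random input x."""
    return a * x + 0.1 * x**2

a_interval = (0.8, 1.2)                              # epistemic: we simply don't know a better value
means = []
for a in np.linspace(*a_interval, 11):               # outer loop: scan the epistemic interval
    x = rng.normal(loc=1.0, scale=0.2, size=5000)    # inner loop: aleatory variability
    means.append(response(a, x).mean())
print("mean response lies in [%.3f, %.3f] over the epistemic interval"
      % (min(means), max(means)))
```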

 

If I had a time machine…

21 Friday Mar 2014

Posted by Bill Rider in Uncategorized


First, an admission: I thought it was Friday yesterday and that it was time to post. It pretty much sums up my week, so as penance I'm giving you a bonus post. I also had a conversation yesterday that really struck a chord with me, and it relates to yesterday's topic, legacy codes. In a major project area I would like to step into a time machine and see where current decision-making will take us. The conversation offered me that very opportunity.

A brief aside to introduce the topic: I'm a nuclear engineer by education, and worked as a reactor safety engineer for three years at Los Alamos. Lately I've returned to nuclear engineering as part of a large DOE project. After having gone into great depth with modern computational science, the return to nuclear engineering has been bracing. It is like stepping into the past. I am trying to bring modern concepts in computational science quality to bear on the analysis of nuclear reactors. To say it is an ill fit is a dramatic understatement; it is a major culture clash. The nuclear engineering community's idea of quality is so antiquated that almost none of my previous 20 years of experience is helpful; it is the source of immense frustration. I have to hold back my disgust constantly at what I see.

A big part of the problem is the code base that nuclear engineering uses.  It is legacy code.  The standard methodology is almost always based on the way things were done in the 1970’s, when the codes were written.  You get to see lots of Fortran, lots of really crude approximations, and lots of code coupling via passing information through the file system.  Nuclear reactor analysis is almost always done with a code and model that is highly calibrated.  It is so calibrated that there isn’t any data left over to validate the model.  We have no idea whether the codes are predictive (it is almost assured that they are not).

It is a giant steaming pile of crap. The best part is that this steaming pile of crap is the mandated way of doing the analysis. The absurd calibrated standards are written into regulations that the industry must follow. It creates a system where nothing will ever get any better, and rather than follow the best scientific approach to doing this analysis, we do things in the slipshod way they were done in the 1970's. I am mindful that we didn't know any better back then and we had limitations in the methodology we could apply. After all, a wristwatch today can beat the biggest supercomputer in the world from the early to mid-1970's. This is a weak excuse for continuing to do things today like we did them then, but we do. We have to.

We still use codes today that ended their active development in that era.  Some of the codes have been revamped with modern languages and interfaces, but the legacy intellectual core remains stuck in the mid-1970’s (40 years ago!).  The government simply stopped funding the development of new methods, and began to mandate the perpetuation of the legacy methodology.  The money dried up for new development and has been replaced by maintenance of the legacy capability and legacy analysis methodology that is unworthy of the task it is set to in the modern world.

Here is the punch line in my mind. We are setting ourselves on a course to do the same with the analysis of nuclear weapons. I think looking at the computational analysis of nuclear reactors gives us a "time machine" that shows the path nuclear weapons analysis is on. We have stopped developing anything new, and started to define a set of legacy capabilities that must be perpetuated. Some want to simply work on porting the existing codes to the next generation of computers without adding anything to the intellectual basis. Will this create an environment just like reactor safety analysis in 15 years? Will we be perpetuating the way we do things now in perpetuity? I worry that this is where our leaders are taking us.

I believe that three major factors are at play. One is a deep cultural milieu that is strangling scientific innovation, reducing both aggregate funding and the emphasis on and capacity for innovation. The United States simply lacks faith that science can improve our lives and acts accordingly. The other two factors are more psychological. The first is a belief that we have a massive sunk cost in software and it must be preserved. This is the same fallacy that makes people lose all their money in Las Vegas. It is stupid, but people buy it. Software can't be preserved; it begins to decay the moment it is written. More tellingly, the intellectual basis of software must either grow or it begins to die. We are creating experts in preserving past knowledge, which is very different from creating new knowledge.

Lastly, when the codes began to become useful for analysis, an anchoring bias was formed. A lot of what nuclear engineers analyze can't be seen. As such, a computer code becomes the picture many of us have of the phenomena. Think about radiation transport and what it "looks" like. We can't see it visually. Our path to seeing it is computational simulation. When we do "see" it, it forms a powerful mental image. I can attest to learning about radiation transport for years and to the power of simulation to put this concept into vivid images. This image becomes an anchoring bias that is difficult to escape. This image includes both the simulation's picture of reality as well as the simulation's errors and model deficiencies. The bias means that an unambiguously better simulation will be rejected because it doesn't "look" right. It is why legacy codes are so hard to displace. For reactor safety the anchoring bias has been written into regulatory law.

This resonates with my assessment of how the United States is managing to destroy its National Laboratory system through systematic mismanagement of the scientific enterprise. In fact it is self-consistent. The deeper question is why our leaders make decisions like this. These are two cases where a collective decision has been made to mothball a technology, an important and controversial technology, as everything nuclear is. Instead of applying the best of modern science to the mothballed technology, we mothball everything about it.

It would seem that the United States will only invest in the analysis of significant technological programs while the technology is being actively developed. In other words, the computational tools are built only when the thing they analyze is being built too. We do an awful job of stewardship using computation. This is true in spades with nuclear reactors, and I fear it may be true of the awkwardly named "stockpile stewardship program". It turns out that the entirety of the stewardship is grounded on ever-faster computers rather than a holistic, balanced approach. We aren't making new nuclear weapons, and increasingly we aren't applying new science to their stewardship. We aren't actually doing our best to do this important job. Instead we are holding fast to a poorly constructed, politically expedient plan laid out 20 years ago.

On the other hand maybe it’s just the United States ceding scientific leadership in yet another field.  We’ll just let the Europeans and Chinese have computational science too.

Legacy Code is Terrible in More Ways than Advertised

20 Thursday Mar 2014

Posted by Bill Rider in Uncategorized


From the moment I started graduate school I dealt with legacy code. I started off by extending the development of a modeling code by my advisor's previous student. My hatred of legacy code had begun. The existing code was poorly written, poorly commented and obtuse. I probably responded by adding more of the same. The code was crap, so why should I write nice code on top of that basis? Bad code can help encourage more bad code. The only positive was contributing to the ultimate death of that code so that no other poor soul would be tortured by developing on top of my work. Moving to a real professional job only hardened my views; a National Lab is teeming with legacy code. I soon encountered code that made the legacy code of grad school look polished. These codes were better documented, but written even more poorly. Lots of dusty-deck Fortran IV was encountered, with memory management techniques associated with computing on CDC supercomputers. Programming devices created at the dawn of computer programming languages were common. I encountered spaghetti code that would make your head spin. If I tried to flowchart the method it would look like a Mobius strip. What dreck!

On the positive side, all this legacy code powered my desire to write better code. I started to learn about software development and good practices. I didn't want to leave people cleaning up my messes and cursing the code I wrote. They probably did anyway. The code I wrote was good enough to be reused for purposes I never intended, and as far as I know is still in use today. Nonetheless, legacy code is terrible, and expensive, but necessary. Replacing legacy code is terrible, expensive and necessary too. Software is just this way. There is a deeper problem with legacy code: legacy ideas. Software is a way of actualizing ideas and algorithms into action. It is the way that computers can do useful things. The problem is that the deeper ideas behind the algorithms often get lost in the process.

Writing code is a manner of concrete problem solving.  Writing a code for general production use is a particularly difficult brand of problem solving because of the human element involved.  The code isn’t just for you to use, but for others to use.   Code should be written for humans, not the computer.  You have to provide them with a tool they can wield.  If the users of a code wield the code successfully, the code begins to take on a character of its own.  If the problem is hard enough and the code is useful enough, the code begins to become legendary. 

A legendary code then becomes a legacy code that must be maintained. Often the magic that makes it useful is shrouded in the mystery of the techniques for problem solving used. It becomes a legacy code when the architect who made it useful moves on. At this point the quality of the code and the clarity of the key ideas become paramount. If the ideas are not clear, they become fixed, because subsequent stewards of the capability cannot change them without breaking them. Too often the "wizards" who developed the code were too busy solving their users' problems to document what they were doing.

These codes are a real problem for scientific computing. They also form the basis of collective achievement and knowledge in many cases, the storehouse of powerful results. They often become entrenched because they solve important problems for important users, and their legendary capability takes on the air of magic. I've seen this over and over, and it is a pox on computational science. It is one of the major reasons that ancient methods for computing solutions continue to be used long after they should have been retired. For a lot of physics, particularly that involving transport (first-order hyperbolic partial differential equations), the numerical method has a large impact on the physical model.

More properly, the numerical method is part of the model itself, thus the numerical solution and physical modeling are not separable.  Part of the reason is the need to add some sort of stabilization mechanism to the solution (some form of numerical or artificial viscosity).  If the numerical model changes, the related models need to change too.  Any calibrations need to be redone (and there are always calibrations!).  If the existing code is useful there is huge resistance to change because any new method is likely to be worse on the problems that count. Again, I’ve seen this repeatedly over the past 25 years.   The end result is that old legacy codes simply keep going long after their appropriate shelf life. 

Worse yet, a new code can be developed to put the old method into a new code base. The good thing is that the legacy "code" goes away, but the legacy method remains. It is sort of like getting a new body and simply moving the soul out of the old body into the new one. If the method being transferred is well understood and documented, this process has some positives (i.e., fresh code). It also represents the loss of the opportunity to refresh the method along with the code. Since the legacy code was started, it is likely that the numerical solver technology has improved. Not improving the solver is a lost opportunity to improve the code.

I am defining the "soul" of the code as the approximations made to the laws of physics. These are differential equation solvers, and the quality of the approximation is one of the most important characteristics of the code. The nature of the approximations, and the errors made therein, often defines the code's success. It really is the soul or personality of the code. Changes to this part of a successful legacy code are almost impossible. The more useful or successful the code is, the harder such changes are to execute. I might argue these are precisely the conditions where such changes are most important to achieve.

Some algorithms are more of a utility. An example is numerical linear algebra. Many improvements have taken place in the efficiency with which we can solve linear algebra problems on a computer. These are important utilities that massively impact the efficiency, but not the solution itself. We can make the solution on the computer much faster without any effect on the nature of the approximations we make to the laws of physics. Good software abstracts the interface to these methods so that improvements can be had independent of the core code. There are fewer impediments to this sort of development because the answer doesn't change. If the solution has been highly calibrated and/or highly trusted, getting it faster is naturally accepted. Too often changes (i.e., improvements) in the solution are not accepted so naturally.
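A hedged sketch of that abstraction argument (the class and function names are invented for illustration): if the physics update only talks to the solver through a narrow interface, the solver can be swapped or upgraded without touching the approximations made to the physics.

```python
# Swappable linear solvers behind one interface; the "physics" never changes.
import numpy as np

class LinearSolver:
    """The interface the physics code is written against."""
    def solve(self, A, b):
        raise NotImplementedError

class DirectSolver(LinearSolver):
    def solve(self, A, b):
        return np.linalg.solve(A, b)                 # dense direct solve

class JacobiSolver(LinearSolver):
    def __init__(self, tol=1e-10, max_iter=10000):
        self.tol, self.max_iter = tol, max_iter
    def solve(self, A, b):
        D = np.diag(A)
        x = np.zeros_like(b)
        for _ in range(self.max_iter):               # simple stand-in iterative method
            x_new = (b - (A @ x - D * x)) / D
            if np.linalg.norm(x_new - x) < self.tol:
                return x_new
            x = x_new
        return x

def implicit_update(state, A, rhs, solver):
    """The physics sees only the interface; swapping solvers changes speed, not the model."""
    return state + solver.solve(A, rhs)

A = np.array([[4.0, -1.0], [-1.0, 3.0]])
rhs = np.array([1.0, 2.0])
state = np.zeros(2)
for solver in (DirectSolver(), JacobiSolver()):
    print(type(solver).__name__, implicit_update(state, A, rhs, solver))
```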

In the minds of many of the users of the code, the legacy code often provides the archetype of what a solution should look like. This is especially true if the code is used to do useful programmatic work and to analyze or engineer important systems. This mental picture provides an anchor to their computational picture of reality. Should that picture become too entrenched, the users of the code begin to lose objectivity and the anchor becomes a bias. This bias can be exceedingly dangerous in that the legacy code's solutions (errors, imperfections and all) become their view of reality. This view becomes an outright impediment to improving on the legacy code's results. It should be a maxim that results can always be improved; the model and method in the code are imperfect reflections of nature and should always be subject to improvement. These improvements can happen via direct, focused research, or the serendipitous application of research from other sources. Far too often the legacy code acts to suffocate research and stifle creativity because of assumptions made in its creation, both implicit and explicit.

One key concept with legacy codes is technical debt. Technical debt is an accumulation of issues that have been solved in a quick and dirty manner rather than systematically. If the legacy codes are full of methods that are not well understood, technical debt will accumulate and begin to dominate the development. A related concept is technical inflation, where the basic technology passes what is implemented in a code. Most often this term is applied to aspects of computer science. In reality, technical inflation may also apply to the basic numerical methods in the legacy code. If the code has insufficient flexibility, the numerical methods become fixed, and rapidly lose any state-of-the-art character (if they even had it to begin with!). Time only increases the distance between the code and the best available methods. The lack of connectivity ultimately short-circuits the ability of the methods in the legacy code to influence the development of better methods. All of these factors conspire to accelerate the rate of technical inflation.

In circumstances where the legacy “code” is replaced but the legacy methodology is retained (i.e., a fresh code base built on old methods), the presence of the intellectual legacy can strangle innovation.  If the fresh code is a starting point for real extensions from the foundational methods, and not overly constrained to the past, progress can be had.  This sort of endeavor must be entered into carefully, with a well thought-through plan.  Too often this is not the approach, and legacy methods are promulgated forward without genuine change.  With each passing year the intellectual basis the methodology was grounded upon ages, and understanding is lost.  Technical inflation sets in and the ability to close the gap recedes.  In many cases the code developers lose sight of what is going on in the research community as it becomes increasingly irrelevant to them.  Eventually, the technical inflation becomes a cultural barrier that threatens the code.  The results obtained with the code cease to be scientific, and the code developers become curators or priests, paying homage to the achievements of the past and sacrificing their careers at the altar of expediency.  The original developers of the methodology move from legendary to mythic status and all perspective is lost.  The users of the code become a cult.

Believe me, I’ve seen this in action.  It isn’t pretty.  Solving the inherent problems at this stage requires the sorts of interventions that technical people suck at.

Depending on the underlying culture of the organization using and/or developing the code, the cult can revolve around different things.  At Los Alamos, it is a cult of physicists, with numerical methods, software and engineering slighted in importance.  At Sandia, it is engineering that defines the cult.  Engineers are better at software engineering too, so that gets more priority; the numerical methods and the underlying models are slighted.  In the nuclear industry, legacy codes and methods are rampant, with all things bowing to the cult of nuclear regulation.  This regulation is supposed to provide safety, but I fear its actual impact is to squash debate and attention to any details other than the regulatory demands.  This might be the most troubling cult I’ve seen.  It avoids any real deep thought and enshrines legacy code at the core of a legally mandated cult of calibration.  This calibration papers over a deep lack of understanding and leads to over-confidence or over-spending, probably both.  The calibration is so deeply entrenched in their problem-solving approach that they have no real idea how well the actual systems are being modeled.  Understanding is not even on the radar.  I’ve seen talented and thoughtful engineers self-limit their approach to problem solving because of the sort of fear the regulatory environment brings.  Instead of bringing their “A” game, the regulation induces a thought-paralyzing fear.

The way to avoid these issues is to avoid using legacy codes and/or methods that are poorly understood.  Important application results should not depend on things you do not understand.  Codes are holistic things.  The quality of results depends on many things, and people tend to focus on single aspects of the code, usually in a completely self-absorbed manner.  Code users think that their efforts are the core of quality, which lends itself to justifying crude calibrations.  People developing closure models tend to focus on their efforts and believe that their impact is paramount.  Method developers focus on the impact of the methods.  The code developer thinks about issues related to the quality of the code and its impact.  When regulatory factors enter, all independent thought is destroyed.  The fact is that all of these things are intertwined.  It is the nature of a problem that is not separable and must be solved in a unified fashion.  Every single aspect of the code, from its core methods, to the models it contains, to the manner of its use, must be considered in providing quality results.

What sort of person does V&V?

14 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The proper way to say the title of this talk is with more than a bit of disdain.  Too often, I have encountered a disturbingly negative attitude toward V&V and those who practice it.  I think it is time for us to shoulder some of the blame and rethink our approach to engaging other scientists and engineers on the topic of modeling and simulation (M&S) quality.

V&V should be an easy sell to the scientific and engineering establishment.  It hasn’t been; it has been resisted at every step.  V&V is basically a rearticulation of the scientific method we all learn, use and ultimately love and cherish.  Instead, we find a great deal of animosity toward V&V, and outright resistance to including it as part of the M&S product.  To some extent V&V has been successful in growing as a discipline and focus, but too many barriers still exist.  Through hard-learned lessons I have come to the conclusion that a large part of the reason is the V&V community’s approach.  For example, one of the worst ideas the V&V community has ever had is “independent V&V.”  In this model, V&V comes in independently and renders a judgment on the quality of the M&S.  It ends up being completely adversarial with the M&S community, and a recipe for disaster.  We end up less engaged and hated by those we judge.  No lasting V&V legacy is created through the effort.  The M&S professionals treat V&V like a disease and spend a lot of time trying to simply ignore or defeat it.  This time could be better spent improving the true quality, which ought to be everyone’s actual objective.  Archetypical examples of this approach in action are federal regulators (the NRC, the Defense Board…).  This idea needs to be modified into something collaborative, where the M&S professionals end up owning the quality of their work and V&V engages as a resource to improve quality.

The fact is that everyone doing M&S wants to do the best job they can, but to some degree doesn’t know how to do everything.  In a lot of cases they haven’t even considered some of the issues we can help with.  V&V experts can provide knowledge and capability to improve quality if they are welcome and trusted.  One of the main jobs of V&V should be to build trust so that this knowledge can be brought to important work.  In a sense, the V&V community should be quality “coaches” for M&S.  Another way the V&V community can help is to provide appropriately leveled tools for managing quality.  PCMM (the Predictive Capability Maturity Model) can be such a tool if its flexibility is increased; most acutely, PCMM needs a simpler version.  Most modeling and simulation professionals will do a very good job with some aspects of quality.  Other areas of quality fall outside their expertise or interest.  In a very real sense, PCMM is a catalog of quality measures that could be taken.  Following the framework helps M&S professionals keep all the aspects of quality in mind and within reach.  The V&V community can then provide the necessary expertise to carry out a deeper quality approach.

If V&V allows itself to get into the role of judge and jury on quality, progress will be poor.  V&V’s job is to ask appropriate questions about quality as a partner with M&S professionals interested in improving the quality of their work.  By taking this approach we can produce an M&S future where quality continuously improves.

What is the role of Passion in Science and Engineering?

14 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

In most people’s minds, science and engineering don’t evoke images of passionate devotion.  Instead they think of equations, computers, metal being cut and exotic instruments.  Nonetheless, like all manner of human endeavor, passion plays a key role in producing the most stunning progress and achievement.  Passion is one of those soft things we are so intensely uncomfortable with, and like most soft things, our success is keenly determined by how well we engage the issues around it.

So what am I passionate about?  What got me to do what I do today?  I started thinking about what got me started in computational simulation, led to a PhD, and ended up in computational science today.  At the heart of the journey is a passionate, idealistic sentiment: “if I can simulate something on the computer, it implies that I understand it.”  To me, a focus on V&V is a natural outgrowth of this idealism.  Too often, I lose sight of what got me started.  Too often, I end up doing work that has too little meaning, too little heart.  I need to get in better touch with that original passion that propelled me through graduate school and those first few years as a professional.  The humdrum reality of work so often squeezes my passion for modeling and simulation out.  When you feel passionately about what you are doing, it stops being work.

Employers and employees are most comfortable with hard skills and tangible things that can be measured.  Often interviews and hiring revolve around the technical skills associated with the job; soft skills are either ignored or an afterthought.  Things like knowledge, the ability to solve problems, money, and time are concrete and measurable.  They lend themselves to metrics.  Soft things are feelings (like passion), innovation, inclusion, emotion, and connectedness.  Most of these things are close to the core of what defines success, yet evade measurement.  Hard skills are necessary, but woefully insufficient.

Scientists, and especially engineers, are very uncomfortable with this.  Take a wonderfully written and insightful essay as an example.  Its quality is a matter of opinion, and can’t be quantified in a manner that makes the scientific world happy.  Yet the quality exists, and the capacity of such an essay to move one’s emotions, shape one’s opinions, and enrich the lives of those who read it is clear.  If we don’t value the soft stuff, success will elude us.

A well-written, persuasive argument can shape action and ultimately lead to greater material gains in what can be measured.  The inability to measure this quality should in no way undermine its value.  Yet, so often, it does.  We end up valuing what we can measure and fail to address what cannot be measured.  We support the development of hard skills and fail to develop the soft skills.  Passion is one of those soft things that does not receive the care and feeding it needs.  It is overlooked as a path to productivity and innovation.  Fun is another, and its link to passion is strong.  People have fun doing the things they are passionate about.  With passion and fun come effortless work and greater achievement than skillful execution alone can deliver.

 Passion needs to be fed.  Passion can ignite innovation and productivity.  If you work at what you have passion for, you’ll likely be happier, and more productive.  Your life and the lives of those you touch will be better.  Too often people fail to find passion in work and end up channeling themselves into something outside of work where they can find passion.  At the foundation of many great achievements is passionate work.   As passion is lost all that is left is work, and with the loss of passion, the loss of possibility.

 Maybe you should find your own passion again.  Something propelled you to where you are today.  It must be powerful to have done that.

The Clay Prize and The Reality of the Navier-Stokes Equations

07 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 22 Comments

The existence (and smoothness) of solutions to the incompressible Navier-Stokes equations is one of the Clay Institute’s Millennium Prize problems.  Each of these problems is a wickedly difficult mathematical problem, and the Navier-Stokes existence proof is no exception.  Interest in the problem has been enlivened by a claim that it has been solved.  Terry Tao has made a series of stunning posts to his blog outlining majestically the mathematical beauty and difficulty associated with the problem.  In my mind, the issue is whether it really matters to the real world, or whether it is formulated in a manner that leads to utility.

I fear the answer is no.  We might have enormous mathematical skill applied to a problem that has no meaning for reality.

The key word I left out so far is “incompressible”, and incompressible is not entirely physical.  An incompressible fluid is impossible.  Really.  It is impossible, but before you stop reading out of disgust with me let me explain why.

One characteristic of incompressibility is the endowment of infinitely fast sound waves into the Navier-Stokes equations.  By infinite, I mean infinite: sound is propagated everywhere instantaneously.  This is clearly unphysical (a colleague of mine from Los Alamos deemed the sound waves to be “superluminal”).  It violates a key principle known as causality.

More properly, incompressibility is an approximation to reality, and there is a distinct possibility that this approximation causes the resulting equations to depart from reality in essential ways for a given application.  For very many applications incompressibility is enormously useful as an approximation, but like all approximations it has limits on its utility.  Some of these limitations are well known.  Mathematically, the approximation makes the equation set elliptic.  This ellipticity is at the heart of why the problem remains unsolved to this day.  Real fluids are not elliptic; real fluids have a finite speed of sound.  In fact, compressibility may hold the key to solving the most important real physical problem in fluid mechanics, turbulence.
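For reference, a standard statement of the incompressible equations (constant density ρ, kinematic viscosity ν) makes the ellipticity explicit:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0.
\]

Taking the divergence of the momentum equation and using the constraint gives a Poisson equation for the pressure,

\[
\nabla^{2} p = -\rho\,\nabla\cdot\big[(\mathbf{u}\cdot\nabla)\mathbf{u}\big],
\]

so the pressure at every point responds instantaneously to the velocity field everywhere in the domain; this is the infinite sound speed written in mathematical form.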

Turbulence is the chaotic motion of fluids that arises when a fluid is moving fast enough that the inertial force of the flow exceeds the viscous force to a sufficient degree.  Turbulence is characterized by the loss of perceptible dependence of the solution on the initial data, and it carries with it a massive loss of predictability.  Turbulence is enormously important to engineering, and to science in general.  The universe is full of turbulent flows, as are the Earth’s atmosphere and oceans.  Turbulence causes a loss of efficiency in almost any machine built by engineers.  It also drives mixing, from stars to car engines to the cream in the coffee cup sitting next to my right hand.

There does exist a set of equations that has the physical properties the incompressible equations lack: the compressible Navier-Stokes equations.  The problem is that the Millennium Prize doesn’t focus on this equation set; it focuses on the incompressible version.  The question is whether going from compressible to incompressible has changed something essential about the equations, and whether that something is essential to understanding turbulence.  There is a broadly stated assumption about turbulence: that it is contained in the solutions of the incompressible Navier-Stokes equations (see, for example, the very first page of Frisch’s book “Turbulence: The Legacy of A. N. Kolmogorov”).  In other words, it is assumed to be contained inside an equation set that has undeniably unphysical aspects.  Implicitly, the belief is that these details do not matter to turbulence.  I counter that we really don’t understand turbulence well enough to make that leap.

Incompressible flow is a meaningful, oft-used approximation for very many engineering applications where the large-scale speed of the fluid flow is small.  The relevant parameter is the Mach number, and if the Mach number is low (less than 0.1 to 0.3) it is assumed that the flow can be taken to be incompressible.  Incompressibility is a useful approximation that allows the practical solution of many problems.  Turbulence is ubiquitous and particularly relevant in this low-speed limit.  For this reason scientists have believed that ignoring sound waves is reasonable and turbulence can be tackled with the incompressible approximation.
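For completeness, the Mach number is simply

\[
M = \frac{U}{c},
\]

where \(U\) is a characteristic flow speed and \(c\) is the speed of sound; the incompressible approximation is conventionally invoked when \(M \lesssim 0.1\)–\(0.3\), as noted above.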

It is worth pointing out that turbulence is perhaps one of the most elusive topics known.  It has defied progress for a century, with only a trickle of advances.  No single person has brought more understanding to bear on turbulence than the Russian scientist Kolmogorov.  His work established some very fundamental scaling laws that get right to the heart of the problem with incompressibility.

He established an analytical result that flows obey at very high Reynolds numbers (the Reynolds number is the ratio of inertial to viscous forces): the 4/5 law.  Basically, this law implies that turbulent flows are dissipative and the rate of dissipation is not determined by the value of the viscosity.  In other words, as the Reynolds number becomes large its precise value does not matter to the rate of dissipation.  The implications of this are massive and get to the heart of the issue with incompressibility.  The law implies that the flow has discontinuous dissipative solutions (gradients that become infinitely steep).  These structures would be strongly analogous to shock waves: they would appear as step functions at large scales, with any transition taking place over infinitesimally small distances.  These structures have eluded scientists both experimentally and mathematically.  I believe part of the reason for this evasion has been the insistence on incompressibility.
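For reference, the 4/5 law can be written

\[
\big\langle \big(\delta u_{\parallel}(r)\big)^{3} \big\rangle = -\tfrac{4}{5}\,\varepsilon\, r,
\]

where \(\delta u_{\parallel}(r)\) is the longitudinal velocity difference across a separation \(r\) in the inertial range and \(\varepsilon\) is the mean rate of energy dissipation.  The viscosity does not appear on the right-hand side: \(\varepsilon\) is set by the large-scale flow, which is precisely the statement that the dissipation rate does not depend on the value of the viscosity.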

Compressible flows have no such problems.  These flows, both physically and mathematically, readily admit discontinuous solutions; it is not really a challenge to see this, and shock waves form naturally.  Shock waves form at all Mach numbers, including the limit of zero Mach number.  These structures actually dissipate energy at the same rate that Kolmogorov’s law would indicate (this was first observed by Hans Bethe in 1942; Kolmogorov’s proof appeared in 1941; there is no indication that they knew of each other’s work).  It is worth looking at Bethe’s derivation closely.  In the limit of zero Mach number, the compressible flow equations are dissipation-free if the expansion is taken to second order.  It is only when the third-order term in the asymptotic expansion is considered that dissipation arises and the shock looks different from an adiabatic, smooth solution.  The question is whether taking the limit of incompressibility has removed this desired behavior from the Navier-Stokes equations.
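The classical weak-shock result (the textbook form of what Bethe worked out) is that the entropy jump across a weak shock is third order in the shock strength:

\[
\Delta s = \frac{1}{12\,T}\left(\frac{\partial^{2} V}{\partial p^{2}}\right)_{\!s} (\Delta p)^{3} + O\big((\Delta p)^{4}\big),
\]

where \(V\) is the specific volume, \(T\) the temperature, and \(\Delta p\) the pressure jump across the shock.  The first- and second-order terms vanish identically, which is the dissipation-free-to-second-order statement above written as a formula.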

I believe the answer is yes.  More explicitly, shock phenomena and turbulence are assumed to be largely independent, except in strongly compressible flows where classical shocks are found.  The question is whether the fundamental nature of shocks changes continuously as the Mach number goes to zero.  We know that shocks continue to form all the way to a Mach number of zero.  In that limit, the dissipation of energy is proportional to the third power of the jump in variables (velocity, density, pressure).  This dependence matches the scaling associated with turbulence in the above-mentioned 4/5 law.  For shocks, we know that the passage of any shock wave creates entropy in the flow.  A question worth asking is: “How does this bath of nonlinear sound waves behave, and do these nonlinear features work together to produce the effects of turbulence?”  This is a very difficult problem, and we may have made it more difficult by focusing on the wrong set of equations.

There have been several derivations of the incompressible Navier-Stokes equations from the compressible equations.  These are called the zero-Mach-number equations and arise as an asymptotic limit.  Embedded in their derivation is the assumption that the equation of state for the fluid is adiabatic: no entropy is created.  This is the key to the problem.  Bethe’s result is that shocks are non-adiabatic.  In the process of deriving the incompressible equations we have removed the most important behavior vis-à-vis turbulence, the dissipation of energy by purely inertial forces.
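A sketch of the standard structure of that limit (assuming the conventional single-parameter expansion; details differ between derivations): write the pressure as a power series in the Mach number \(M\),

\[
p(\mathbf{x},t) = p^{(0)}(\mathbf{x},t) + M\,p^{(1)}(\mathbf{x},t) + M^{2}\,p^{(2)}(\mathbf{x},t) + \cdots
\]

The non-dimensional momentum equation carries a factor of \(1/M^{2}\) in front of the pressure gradient, so the leading orders force \(\nabla p^{(0)} = \nabla p^{(1)} = 0\): the thermodynamic pressure is spatially uniform, only the \(O(M^{2})\) pressure drives the flow, and with the adiabatic equation of state assumed in these derivations the entropy-producing acoustic dynamics is filtered out before the limit is ever taken.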

The problem with all the existing work I’ve looked at is that entropy-creating compressible flow is not considered in the passage to the zero-Mach limit.  In the process, one of the most interesting and significant aspects of a compressible fluid is removed because the approximation doesn’t go far enough.  It is possible, or maybe even likely, that these two phenomena are firmly connected.  A rate of dissipation independent of viscosity is a profound aspect of fluid flows.  We understand intrinsically how it arises from shock waves, yet its presence in turbulence remains mysterious.  It implies a singularity, a discontinuous derivative, which is exactly what a shock wave is.  We have chosen to systematically remove this aspect of the equations from incompressible flows.  Is it any wonder that the problem of turbulence is so hard to solve?  It is worth thinking about.

Mukhtarbay Otelbayev of Kazakhstan claims that he has proved the Navier-Stokes existence and smoothness problem.  I don’t know whether he has or not; I don’t have the mathematical chops to deliver that conclusion.  What I’m asking is: if he has, does it really matter to our understanding of real fluid dynamics?  I think it might not matter either way.

We only fund low risk research today

03 Monday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

While musing about Moore’s law and algorithms last week, something important occurred to me.  The systematic decision to emphasize hardware and performance increases via Moore’s law over the larger gains that algorithms produce may have a distinct psychological basis.  For decades, Moore’s law has provided slow, steady improvement, kind of like investing in bonds instead of risky stocks.  Algorithmic improvements tend to be episodic and “quantum” in nature.  They can’t be relied upon to deliver a steady diet of progress on the time scale of most program managers’ reigns.
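As a back-of-the-envelope illustration (the numbers here are purely hypothetical, chosen only to show the shape of the two curves): hardware gains compound slowly and steadily, while a single algorithmic change, say moving a solver from O(N^2) to O(N log N) work, arrives all at once, if it arrives at all.

```python
import math

# Hypothetical illustration: steady hardware gains vs. a one-time algorithmic jump.
N = 1_000_000             # assumed problem size
years = 10
doubling_time = 2.0       # assumed Moore's-law doubling time, in years

# Compounded hardware speedup over a decade: smooth and schedulable.
hardware_speedup = 2 ** (years / doubling_time)

# One-time algorithmic speedup from O(N^2) to O(N log N) work
# (constants ignored, which a real comparison would have to include).
algorithm_speedup = N**2 / (N * math.log2(N))

print(f"hardware over {years} years: ~{hardware_speedup:.0f}x")
print(f"one O(N^2) -> O(N log N) change at N={N}: ~{algorithm_speedup:.0f}x")
# Roughly 32x versus roughly 50,000x -- but the second number cannot be
# scheduled, which is exactly what makes it look risky to a program manager.
```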

 I think the problem is that banking on Moore’s law is safe while banking on algorithms is risky.  The program manager looking to succeed and move up may not want to risk the uncertainty of having little progress made on their watch.  The risk of the appearance of scandal looms.  Money can be spent with no obvious progress.  Algorithmic work depends upon breakthroughs, which we all should know can’t be scheduled. 

This has profound implications for what we try to do as a society and a Nation.  I’ve bemoaned the lack of “Moonshots” today.  These would be the big, risky projects that have huge payoffs, but large chances of failure.  Instead, it seems much safer to rely upon the unsexy, low-risk, low-payoff work that is incremental.  Pressure to publish works much the same way.  Incremental work provides easier publishing and less chance of crashing and burning.  Risky work is harder to publish, more prone to clashes with reviewers, and can be immensely time consuming.

 The deeper question is “what are we losing?”  How can we remove the element of risk and failure as an impediment to deeper investments in the future?  How can we create efforts that capture the imagination and provide greater accomplishments that benefit us all?  Right now, the barriers to risky, but rewarding, research must be lowered or society as a whole will suffer.  This suffering isn’t deeply felt because it reflects gains not seen and loss of possibility.

 
