The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: March 2014

#predictive or 11 things to make your simulation model the real world

28 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

It feels almost dirty to put a "#" hashtag in my title, but what the hell! The production of predictive models is the holy grail of modeling and simulation. On the other hand, a lot of scientists and engineers think they have predictivity when in fact they have cheated. By "cheating" I usually mean one form or another of calibration, either mindfully or ignorantly applied to the model. The model ends up being an expensive interpolation and any predictivity is illusory.

When I say that you are modeling the real world, I really mean that you actually understand how well you compare. A model that looks worse but is honest about your simulation mastery is better than a model that appears to compare well but is highly calibrated. This seems counterintuitive, since I'm saying that a greater disparity is better. But once you have calibrated your agreement, you have lost the knowledge of how well you model anything. Having a good idea of what you don't know is essential for progress.

A computational model sounds a lot better than an interpolation. In these circumstances simulation becomes a way of appearing to add rigor to a prediction when the real rigor was lost in making the simulation agree so well with the data. As long as one is simply interpolating, cost is the main victim of this approach, but where one extrapolates the process becomes dangerous. In complex problems simulation is almost always extrapolating in some sense. A real driver for this phenomenon is mistakenly high standards for matching experimental data, which encourage substantial overfitting (in other words, forcing a better agreement than the model should allow). In many cases well-intentioned standards of accuracy drive pervasive calibration that undermines the ability to predict, or to assess the quality of any prediction. I'll explain what I mean by this and lay out what can proactively be done to conduct bona fide modeling.
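To make the overfitting danger concrete, here is a minimal sketch in Python; the "experiment," its noise level, and the two polynomial "models" are invented for illustration and stand in for much more elaborate calibrations. The aggressively calibrated fit hits the data almost exactly, and the comparison against the withheld truth outside the data shows what that exactness is worth.

```python
import numpy as np

def truth(x):
    """The real response of the hypothetical experiment (unknown to the modeler)."""
    return np.exp(-x)

rng = np.random.default_rng(42)
x_data = np.linspace(0.0, 1.0, 8)
y_data = truth(x_data) + rng.normal(0.0, 0.02, x_data.size)   # noisy measurements

# "Honest" model: a low-order fit that admits a visible residual error.
honest = np.polynomial.Polynomial.fit(x_data, y_data, deg=2)
# "Calibrated" model: enough free parameters to chase every data point.
calibrated = np.polynomial.Polynomial.fit(x_data, y_data, deg=7)

for label, model in [("honest (deg 2)", honest), ("calibrated (deg 7)", calibrated)]:
    fit_residual = np.max(np.abs(model(x_data) - y_data))   # how well we "match the data"
    extrap_error = abs(model(2.0) - truth(2.0))              # quality outside the data
    print(f"{label:18s} max fit residual = {fit_residual:.2e}, error at x = 2: {extrap_error:.2e}")
```

The calibrated curve reports a near-zero residual, which looks like mastery; the honest one reports its real limitations, which is worth far more once you leave the data.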

I suppose the ignorant can be absolved of the sin they don't realize they are committing. Their main sin is ignorance, which is bad enough. In many cases the ignorance is utterly willful. For example, physicists tend to show a lot of willful ignorance of numerical side effects. They know these effects exist yet continue to systematically ignore them, or calibrate over them. The delusional calibrators are cheating purposefully and then claiming victory despite having gotten the answer by less than noble means. I've seen example after example of this across a wide spectrum of technical fields. Quite often nothing bad happens until a surprise leaps out of the data: the extrapolation finally breaks down and the response of the simulated system surprises.

The more truly ignorant will find that they get the best answer by using a certain numerical method or grid resolution and, with no further justification, declare this to be the best solution. This is the case for many, many engineering applications of modeling and simulation. For some people this means using a first-order method because it gives a better result than the second-order method. They might find that a more refined mesh gives a worse answer and then use the coarser grid. This is easier than trying to track down why either of these dubious steps gives better answers, because they shouldn't. In other cases they will find that a dubious material or phenomenological model, or some special combination, gives better results. Even more troubling is the tendency to choose expedient techniques whereby mass, momentum, or energy is simply thrown away, or added, in response to a bad result. Generally speaking, the ignorant who apply these techniques have no real idea how accurate their model actually is, what its uncertainties are, or what the uncertainties are in the quantities they are comparing to.

While dummies abound in science, charlatans are a bigger problem. Calibration, when mindfully done and acknowledged, is legitimate, but passing off calibration as mastery in modeling is rampant. Again, like the ignorant, the calibrators often have no working knowledge of many of the innate uncertainties in the model. They will joyfully calibrate over numerical error, model form, data uncertainty, and natural variability without a thought. Of course the worst form of this involves ignorant calibrators who believe they have mastery over things they understand poorly. This is ultimately a recipe for disaster, but the near-term benefits of these practices are profound. Moreover, the powers that be are woefully unprepared to unmask these pretenders.

At its worst, calibration will utilize unphysical, unrealizable models to steer the solution into complete agreement with data. I've seen examples where fundamental physical properties (like the equation of state or cross sections) are made functions of space when they should be invariant with position. Even worse, the agreement will be better than it has any right to be, without even admitting the possibility that the data being calibrated to is flawed. Other calibrations will fail to account for experimental measurement error or natural variability and never even raise the question of what these might be. In the final analysis the worst aspect of this entire approach is the lost opportunity to examine the state of our knowledge and seek to improve it.

How to do things right:

1. Recognize that the data you are comparing to is neither perfectly accurate nor free of variability. Try to separate these uncertainties into their sources: measurement error, intrinsic variability, or unknown factors.

2. Your simulation results are similarly uncertain for a variety of reasons. More importantly, you should be able to examine their sources more completely and mindfully and to estimate their magnitudes. Numerical errors arise from finite resolution, unconverged nonlinearities (the effects of linearization), unconverged linear solvers, and outright bugs; a concrete way to estimate the resolution component is sketched in the code after this list. The models' parameters can often be varied, or the models themselves swapped for alternatives. The same can be said of the geometric modeling.

3. Much of the uncertainty in modeling can be explored concretely by modifying the details of the models in physically defensible ways; the values in or produced by the model can be varied over ranges that can be defended in a strict physical sense.

4. In addition, different models are often available for important phenomena, and comparing these approaches yields a measure of uncertainty. To some degree different computer codes themselves constitute different models and can be used to explore the spread among reasonable, defensible models of reality.

5. A key concept in validation is a hierarchy of experimental investigations covering different levels of system complexity and modeling difficulty. These sources of experimental (validation) data make it possible to deconstruct the phenomena of interest into their constituent pieces and validate them independently. When everything is put together for the full model, a fuller appreciation of the validity of the parts can be achieved, allowing greater focus on the sources of discrepancy.

6. Be ruthless in uncovering what you don't understand, because this will define your theoretical and/or experimental program. If nothing else it will help you calibrate mindfully and reasonably while placing limits on extrapolation.

7. If possible, work on experiments that help you understand the basic things you know poorly, and use the results to reduce or remove the scope of calibration.

8. Realize that the numerical solution to your system itself constitutes a model of one sort or another.  This model is a function of the grid you use, and the details of the numerical solution.

9. Separate your uncertainties into the things you don't know and the things that just vary. This is the separation of epistemic and aleatory uncertainty. The key to this separation is that epistemic errors can be reduced by learning more, while aleatory uncertainty is part of the system and is much harder to control.

10. Realize that most physical systems are not completely well-determined problems. In other words, if you repeat an experiment that should be identical over and over, some of the variation in the results is due to imperfect knowledge of the experiment. One should not try to exactly match the results of every experiment individually; some of the variation is real physical noise.

11. Put everything else into the calibration, but realize that it is just papering over what you don’t understand.  This should provide you with the appropriate level of humility.
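As a concrete illustration of item 8 (and the resolution part of item 2), here is a minimal sketch of a standard grid-convergence estimate via Richardson extrapolation, assuming a smooth scalar quantity of interest computed on three systematically refined grids; the sample values and refinement ratio below are hypothetical.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from three grids with constant refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_estimate(f_medium, f_fine, r, p):
    """Richardson-extrapolated estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical quantity of interest on coarse, medium, and fine grids (refinement ratio 2).
f_c, f_m, f_f, r = 0.9700, 0.9925, 0.9981, 2.0

p = observed_order(f_c, f_m, f_f, r)
f_star = richardson_estimate(f_m, f_f, r, p)
print(f"observed order of convergence ~ {p:.2f}")
print(f"extrapolated value ~ {f_star:.4f}; estimated discretization error on the fine grid ~ {abs(f_f - f_star):.1e}")
```

The point is not the arithmetic but the habit: the numerical error gets an explicit estimate instead of being silently absorbed into a calibration.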

 

If I had a time machine…

21 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

First, an admission: I thought it was Friday yesterday and that it was time to post. That pretty much sums up my week, so as penance I'm giving you a bonus post. I also had a conversation yesterday that really struck a chord with me, and it relates to yesterday's topic, legacy codes. In a major project area I would like to step into a time machine and see where current decision-making will take us. The conversation offered me that very opportunity.

A brief aside to introduce the topic: I'm a nuclear engineer by education, and I worked as a reactor safety engineer for three years at Los Alamos. Lately I've returned to nuclear engineering as part of a large DOE project. After having gone into great depth with modern computational science, the return to nuclear engineering has been bracing. It is like stepping into the past. I am trying to bring modern concepts of computational science quality to bear on the analysis of nuclear reactors. To say it is an ill fit is a dramatic understatement; it is a major culture clash. The nuclear engineering community's idea of quality is so antiquated that almost none of my previous 20 years' experience is helpful; it is a source of immense frustration. I have to hold back my disgust constantly at what I see.

A big part of the problem is the code base that nuclear engineering uses.  It is legacy code.  The standard methodology is almost always based on the way things were done in the 1970’s, when the codes were written.  You get to see lots of Fortran, lots of really crude approximations, and lots of code coupling via passing information through the file system.  Nuclear reactor analysis is almost always done with a code and model that is highly calibrated.  It is so calibrated that there isn’t any data left over to validate the model.  We have no idea whether the codes are predictive (it is almost assured that they are not).

It is a giant steaming pile of crap. The best part is that this steaming pile of crap is the mandated way of doing the analysis. The absurd calibrated standards are written into regulations that the industry must follow. This creates a system where nothing will ever get any better, and rather than follow the best scientific approach to the analysis, we do things in the slipshod way they were done in the 1970's. I am mindful that we didn't know any better back then and had limitations in the methodology we could apply. After all, a wristwatch today can beat the biggest supercomputer in the world of the early to mid-1970's. That is a weak excuse for continuing to do things now the way we did them then, but we do. We have to.

We still use codes today that ended their active development in that era. Some of the codes have been revamped with modern languages and interfaces, but the legacy intellectual core remains stuck in the mid-1970's (40 years ago!). The government simply stopped funding the development of new methods and began to mandate the perpetuation of the legacy methodology. The money for new development dried up and has been replaced by maintenance of a legacy capability and a legacy analysis methodology that is unworthy of the task it is set to in the modern world.

Here is the punch line in my mind. We are setting ourselves on a course to do the same with the analysis of nuclear weapons. I think looking at the computational analysis of nuclear reactors gives us a "time machine" that shows the path nuclear weapons analysis is on. We have stopped developing anything new and started to define a set of legacy capabilities that must be perpetuated. Some want to simply work on porting the existing codes to the next generation of computers without adding anything to the intellectual basis. Will this create an environment just like reactor safety analysis in 15 years? Will we be perpetuating the way we do things now in perpetuity? I worry that this is where our leaders are taking us.

I believe that three major factors are at play. One is a deep cultural milieu that is strangling scientific innovation, reducing both aggregate funding and the emphasis and capacity for innovation. The United States simply lacks faith that science can improve our lives and acts accordingly. The other two factors are more psychological. The first is the belief that we have a massive sunk cost in software and that it must be preserved. This is the same fallacy that makes people lose all their money in Las Vegas. It is stupid, but people buy it. Software can't be preserved; it begins to decay the moment it is written. More tellingly, the intellectual basis of software must either grow or it begins to die. We are creating experts in preserving past knowledge, which is very different from creating new knowledge.

Lastly, when the codes became useful for analysis, an anchoring bias formed. A lot of what nuclear engineers analyze can't be seen. As such, a computer code becomes the picture many of us have of the phenomena. Think about radiation transport and what it "looks" like. We can't see it visually. Our path to seeing it is computational simulation. When we do "see" it, it forms a powerful mental image. I can attest to learning about radiation transport for years and to the power of simulation to put this concept into vivid images. This image becomes an anchoring bias that is difficult to escape. It includes both the simulation's picture of reality and the simulation's errors and model deficiencies. The bias means that an unambiguously better simulation will be rejected because it doesn't "look" right. It is why legacy codes are so hard to displace. For reactor safety the anchoring bias has been written into regulatory law.

This resonates with my assessment of how the United States is managing to destroy its National Laboratory system through systematic mismanagement of the scientific enterprise. In fact it is self-consistent. The deeper question is why our leaders make decisions like this. These are two cases where a collective decision has been made to mothball a technology, an important and controversial technology, as everything nuclear is. Instead of applying the best of modern science to the mothballed technology, we mothball everything about it.

It would seem that the United States will only invest in the analysis of significant technological programs while the technology is being actively developed. In other words, the computational tools are built only when the thing they analyze is being built too. We do an awful job of stewardship using computation. This is true in spades with nuclear reactors, and I fear it may be true of the awkwardly named "stockpile stewardship program". It turns out that the entirety of the stewardship is grounded on ever-faster computers rather than a holistic, balanced approach. We aren't making new nuclear weapons, and increasingly we aren't applying new science to their stewardship. We aren't actually doing our best at this important job. Instead we are holding fast to a poorly constructed, politically expedient plan laid out 20 years ago.

On the other hand maybe it’s just the United States ceding scientific leadership in yet another field.  We’ll just let the Europeans and Chinese have computational science too.

Legacy Code is Terrible in More Ways than Advertised

20 Thursday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

From the moment I started graduate school I dealt with legacy code. I started off by extending the development of a modeling code written by my advisor's previous student. My hatred of legacy code had begun. The existing code was poorly written, poorly commented, and obtuse. I probably responded by adding more of the same. The code was crap, so why should I write nice code on top of that basis? Bad code can help encourage more bad code. The only positive was contributing to the ultimate death of that code so that no other poor soul would be tortured by developing on top of my work. Moving to a real professional job only hardened my views; a National Lab is teeming with legacy code. I soon encountered code that made the legacy code of grad school look polished. These codes were better documented, but written even more poorly. I encountered lots of dusty-deck Fortran IV, with memory management techniques tied to computing on CDC supercomputers. Programming devices created at the dawn of computer programming languages were common. I encountered spaghetti code that would make your head spin. If I tried to flowchart the method it would look like a Möbius strip. What dreck!

On the positive side, all this legacy code powered my desire to write better code. I started to learn about software development and good practices. I didn't want to leave people cleaning up my messes and cursing the code I wrote. They probably did anyway. The code I wrote was good enough to be reused for purposes I never intended, and as far as I know it is still in use today. Nonetheless, legacy code is terrible, and expensive, but necessary. Replacing legacy code is terrible, expensive, and necessary too. Software is just this way. There is a deeper problem with legacy code: legacy ideas. Software is a way of actualizing ideas and algorithms into action. It is the way that computers can do useful things. The problem is that the deeper ideas behind the algorithms often get lost in the process.

Writing code is a form of concrete problem solving. Writing a code for general production use is a particularly difficult brand of problem solving because of the human element involved. The code isn't just for you to use, but for others to use. Code should be written for humans, not the computer. You have to provide them with a tool they can wield. If the users of a code wield it successfully, the code begins to take on a character of its own. If the problem is hard enough and the code is useful enough, the code becomes legendary.

A legendary code then becomes a legacy code that must be maintained. Often the magic that makes it useful is shrouded in the mystery of the problem-solving techniques used. It becomes a legacy code when the architect who made it useful moves on. At this point the quality of the code and the clarity of the key ideas become paramount. If the ideas are not clear, they become fixed, because subsequent stewards of the capability cannot change them without breaking them. Too often the "wizards" who developed the code were too busy solving their users' problems to document what they were doing.

These codes are a real problem for scientific computing. They also form the basis of collective achievement and knowledge in many cases, the storehouse of powerful results. They often become entrenched because they solve important problems for important users, and their legendary capability takes on the air of magic. I've seen this over and over and it is a pox on computational science. It is one of the major reasons that ancient methods for computing solutions continue to be used long after they should have been retired. For a lot of physics, particularly problems involving transport (first-order hyperbolic partial differential equations), the numerical method has a large impact on the physical model.

More properly, the numerical method is part of the model itself; thus the numerical solution and the physical modeling are not separable. Part of the reason is the need to add some sort of stabilization mechanism to the solution (some form of numerical or artificial viscosity). If the numerical model changes, the related models need to change too. Any calibrations need to be redone (and there are always calibrations!). If the existing code is useful there is huge resistance to change, because any new method is likely to be worse on the problems that count. Again, I've seen this repeatedly over the past 25 years. The end result is that old legacy codes simply keep going long past their appropriate shelf life.
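One classic example of such a stabilization mechanism, offered purely as an illustration (it is not necessarily what any particular legacy code uses), is the von Neumann-Richtmyer artificial viscosity, an extra pressure-like term applied only in compression:

```latex
q =
\begin{cases}
c_{q}\,\rho\,(\Delta x)^{2}\left(\dfrac{\partial u}{\partial x}\right)^{2}, & \dfrac{\partial u}{\partial x} < 0,\\[1.5ex]
0, & \text{otherwise.}
\end{cases}
```

The tunable coefficient c_q and the explicit dependence on the mesh scale Δx are exactly the kind of knobs that get tangled into calibrations, which is why changing the numerical method drags the related models and their calibrations along with it.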

Worse yet, a new code can be developed to put the old method into a new code base. The good thing is that the legacy "code" goes away, but the legacy method remains. It is sort of like getting a new body and simply moving the soul out of the old body into the new one. If the method being transferred is well understood and documented, this process has some positives (i.e., fresh code). It also represents the loss of the opportunity to refresh the method along with the code. Since the legacy code was started, it is likely that numerical solver technology has improved. Not improving the solver is a lost opportunity to improve the code.

I am defining the "soul" of the code as the approximations made to the laws of physics. These codes are differential equation solvers, and the quality of the approximation is one of the most important characteristics of the code. The nature of the approximations and the errors made therein often define the code's success. It really is the soul or personality of the code. Changes to this part of a successful legacy code are almost impossible. The more useful or successful the code is, the harder such changes are to execute. I might argue these are exactly the conditions where such changes are most important to achieve.

Some algorithms are more of a utility; numerical linear algebra is an example. Many improvements have been made in the efficiency with which we can solve linear algebra on a computer. These are important utilities that massively impact efficiency, but not the solution itself. We can make the solution on the computer much faster without any effect on the nature of the approximations we make to the laws of physics. Good software abstracts the interface to these methods so that improvements can be had independent of the core code. There are fewer impediments to this sort of development because the answer doesn't change. If the solution has been highly calibrated and/or is highly trusted, getting it faster is naturally accepted. Too often changes (i.e., improvements) in the solution are not accepted so naturally.
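Here is a minimal sketch, in Python rather than the Fortran of the codes being described, of the abstraction being advocated: the core code asks for "a solution of A x = b" through one narrow interface, so a better solver can be dropped in without touching the approximations to the physics. The names and the toy solvers are invented for illustration.

```python
from typing import Callable

import numpy as np

# The core code depends only on this signature: given A and b, return x.
LinearSolver = Callable[[np.ndarray, np.ndarray], np.ndarray]

def direct_solver(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Legacy-style dense direct solve."""
    return np.linalg.solve(A, b)

def jacobi_solver(A: np.ndarray, b: np.ndarray, iters: int = 200) -> np.ndarray:
    """A stand-in 'improved' solver; any method honoring the interface would do."""
    D = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - (A @ x - D * x)) / D   # x_new = D^{-1} (b - (A - D) x)
    return x

def advance_one_step(solve: LinearSolver, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """The physics side never knows or cares which solver it was handed."""
    return solve(A, b)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
# Swapping solvers changes speed, not the discrete model being solved.
print(advance_one_step(direct_solver, A, b))
print(advance_one_step(jacobi_solver, A, b))
```

Because the answer is (up to iteration tolerance) unchanged, this is exactly the kind of improvement that meets little resistance; the same cannot be said for changes to the discretization itself.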

In the minds of many of the users of the code, the legacy code often provides the archetype of what a solution should look like. This is especially true if the code is used to do useful programmatic work and to analyze or engineer important systems. This mental picture provides an anchor for their computational picture of reality. Should that picture become too entrenched, the users of the code begin to lose objectivity and the anchor becomes a bias. This bias can be exceedingly dangerous in that the legacy code's solutions, errors and imperfections included, become their view of reality. This view becomes an outright impediment to improving on the legacy code's results. It should be a maxim that results can always be improved; the model and method in the code are imperfect reflections of nature and should always be subject to improvement. These improvements can happen via direct, focused research, or through the serendipitous application of research from other sources. Far too often the legacy code acts to suffocate research and stifle creativity because of the assumptions, both implicit and explicit, made in its creation.

One key concept with legacy codes is technical debt. Technical debt is an accumulation of issues that have been solved in a quick and dirty manner rather than systematically. If the legacy codes are full of methods that are not well understood, technical debt will accumulate and begin to dominate the development. A related concept is technical inflation, where the state of basic technology passes what is implemented in a code. Most often this term is applied to aspects of computer science. In reality, technical inflation may also apply to the basic numerical methods in the legacy code. If the code has insufficient flexibility, the numerical methods become fixed and rapidly lose any state-of-the-art character (if they even had it to begin with!). Time only increases the distance between the code and the best available methods. The lack of connectivity ultimately short-circuits the ability of the methods in the legacy code to influence the development of better methods. All of these factors conspire to accelerate the rate of technical inflation.

In circumstances where the legacy "code" is replaced but the legacy methodology is retained (i.e., a fresh code base), the presence of the intellectual legacy can strangle innovation. If the fresh code is a starting point for real extensions beyond the foundational methods and is not overly constrained by the past, progress can be had. This sort of endeavor must be entered into carefully with a well-thought-through plan. Too often this is not the approach, and legacy methods are promulgated forward without genuine change. With each passing year the intellectual basis the methodology was grounded upon ages, and understanding is lost. Technical inflation sets in and the ability to close the gap recedes. In many cases the code developers lose sight of what is going on in the research community as it becomes increasingly irrelevant to them. Eventually, the technical inflation becomes a cultural barrier that threatens the code. The results obtained with the code cease to be scientific, and the code developers become curators or priests, paying homage to the achievements of the past and sacrificing their careers at the altar of expediency. The original developers of the methodology move from legendary to mythic status and all perspective is lost. The users of the code become a cult.

Believe me, I've seen this in action. It isn't pretty. Solving the inherent problems at this stage requires the sorts of interventions that technical people suck at.

Depending on the underlying culture of the organization using and/or developing the code, the cult can revolve around different things. At Los Alamos, it is a cult of physicists, with numerical methods, software, and engineering slighted in importance. At Sandia, it is engineering that defines the cult. Engineers are better at software engineering too, so that gets more priority; the numerical methods and the underlying models are slighted. In the nuclear industry, legacy code and methods are rampant, with all things bowing to the cult of nuclear regulation. This regulation is supposed to provide safety, but I fear its actual impact is to squash debate and attention to any details other than the regulatory demands. This might be the most troubling cult I've seen. It avoids any real deep thought and enshrines legacy code as the core of a legally mandated cult of calibration. This calibration papers over a deep lack of understanding and leads to over-confidence or over-spending, probably both. The calibration is so deeply entrenched in their problem-solving approach that they have no real idea how well the actual systems are being modeled. Understanding is not even on the radar. I've seen talented and thoughtful engineers self-limit their approach to problem solving because of the sort of fear the regulatory environment brings. Instead of bringing out their "A" game, the regulation induces a thought-paralyzing fear.

The way to avoid these issues is to avoid using legacy code and/or methods that are poorly understood. Important application results should not depend on things you do not understand. Codes are holistic things. The quality of results depends on many things, and people tend to focus on single aspects of the code, usually in a completely self-absorbed manner. Code users think that their efforts are the core of quality, which lends itself to justifying crude calibrations. People developing closure models tend to focus on their efforts and believe that their impact is paramount. Method developers focus on the impact of the methods. The code developer thinks about the issues related to the quality of the code and its impact. With regulatory factors, all independent thought is destroyed. The fact is that all of these things are intertwined. It is the nature of a problem that is not separable and must be solved in a unified fashion. Every single aspect of the code, from its core methods to the models it contains to the manner of its use, must be considered in providing quality results.

What sort of person does V&V?

14 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The proper way to say the title of this talk is with more than a bit of disdain. Too often, I have encountered a disturbingly negative attitude toward V&V and those who practice it. I think it is time for us to shoulder some of the blame and rethink our approach to engaging other scientists and engineers on the topic of modeling and simulation (M&S) quality.

V&V should be an easy sell to the scientific and engineering establishment. It hasn't been; it has been resisted at every step. V&V is basically a rearticulation of the scientific method we all learn, use, and ultimately love and cherish. Instead, we find a great deal of animosity toward V&V, and outright resistance to including it as part of the M&S product. To some extent V&V has been successful in growing as a discipline and focus, but too many barriers still exist. Through hard-learned lessons I have come to the conclusion that a large part of the reason is the V&V community's approach. For example, one of the worst ideas the V&V community has ever had is "independent V&V". In this model V&V comes in independently and renders a judgment on the quality of the M&S. It ends up being completely adversarial with the M&S community, and a recipe for disaster. We end up less engaged and hated by those we judge. No lasting V&V legacy is created through the effort. The M&S professionals treat V&V like a disease and spend a lot of time trying to simply ignore or defeat it. This time could be better spent improving the true quality, which ought to be everyone's actual objective. Archetypal examples of this approach in action are federal regulators (NRC, the Defense Board…). This idea needs to be modified into something collaborative, where the M&S professionals end up owning the quality of their work and V&V engages as a resource to improve quality.

The fact is that everyone doing M&S wants to do the best job they can, but to some degree they don't know how to do everything. In a lot of cases they haven't even considered some of the issues we can help with. V&V experts can provide knowledge and capability to improve quality if they are welcomed and trusted. One of the main jobs of V&V should be to build trust so that this knowledge can be brought to important work. In a sense, the V&V community should be quality "coaches" for M&S. Another way the V&V community can help is to provide appropriately leveled tools for managing quality. PCMM (the Predictive Capability Maturity Model) can be such a tool if its flexibility is increased. Most acutely, PCMM needs a simpler version. Most modeling and simulation professionals will do a very good job with some aspects of quality. Other areas of quality fall outside their expertise or interest. In a very real sense, PCMM is a catalog of quality measures that could be taken. Following the framework helps M&S professionals keep all the aspects of quality in mind and within reach. The V&V community can then provide the necessary expertise to carry out a deeper quality approach.

If V&V allows itself to get into the role of judge and jury on quality, progress will be poor. V&V's job is to ask appropriate questions about quality as a partner with M&S professionals interested in improving the quality of their work. By taking this approach we can produce an M&S future where quality continuously improves.

What is the role of Passion in Science and Engineering?

14 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

In most people's minds, science and engineering don't evoke images of passionate devotion. Instead people think of equations, computers, metal being cut, and exotic instruments. Nonetheless, like all manner of human endeavor, passion plays a key role in producing the most stunning progress and achievement. Passion is one of those soft things we are so intensely uncomfortable with, and like most soft things, our success is keenly determined by how well we engage the issues around it.

So what am I passionate about? What got me to do what I do today? I started thinking about what got me started in computational simulation, led me to get a PhD, and landed me in computational science today. At the heart of the journey is a passionate, idealistic sentiment: "if I can simulate something on the computer, it implies that I understand it." To me, a focus on V&V is a natural outgrowth of this idealism. Too often, I lose sight of what got me started. Too often, I end up doing work that has too little meaning, too little heart. I need to get in better touch with that original passion that propelled me through graduate school and those first few years as a professional. The humdrum reality of work so often squeezes out my passion for modeling and simulation. When you feel passionately about what you are doing, it stops being work.

Employers and employees are most comfortable with hard skills and tangible things that can be measured. Often interviews and hiring revolve around the technical skills associated with the job; soft skills are either ignored or an afterthought. Things like knowledge, the ability to solve problems, money, and time are concrete and measurable. They lend themselves to metrics. Soft things are feelings (like passion), innovation, inclusion, emotion, and connectedness. Most of these things are close to the core of what defines success and yet evade measurement. Hard skills are necessary, but woefully insufficient.

Scientists and especially engineers are very uncomfortable with this. Take a wonderfully written and insightful essay as an example. Its quality is a matter of opinion and can't be quantified in a manner that makes the scientific world happy. Yet the quality exists, and the capacity of such an essay to move one's emotions, shape one's opinions, and enrich the lives of those who read it is clear. If we don't value the soft stuff, success will elude us.

A well-written, persuasive argument can shape action and ultimately lead to greater material gains in what can be measured. The inability to measure this quality should in no way undermine its value. Yet, so often, it does. We end up valuing what we can measure and fail to address what cannot be measured. We support the development of hard skills and fail to develop the soft skills. Passion is one of these soft things that does not receive the care and feeding it needs. It is overlooked as a way forward to productivity and innovation. Fun is another, and its link to passion is strong. People have fun doing things they are passionate about. With passion and fun come effortless work and greater achievement than skillful execution alone can deliver.

 Passion needs to be fed.  Passion can ignite innovation and productivity.  If you work at what you have passion for, you’ll likely be happier, and more productive.  Your life and the lives of those you touch will be better.  Too often people fail to find passion in work and end up channeling themselves into something outside of work where they can find passion.  At the foundation of many great achievements is passionate work.   As passion is lost all that is left is work, and with the loss of passion, the loss of possibility.

 Maybe you should find your own passion again.  Something propelled you to where you are today.  It must be powerful to have done that.

The Clay Prize and The Reality of the Navier-Stokes Equations

07 Friday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 22 Comments

The existence and smoothness of solutions to the (incompressible) Navier-Stokes equations is one of the Clay Institute's Millennium Prize problems. Each of these problems is wickedly difficult mathematically, and the Navier-Stokes existence problem is no exception. Interest in the problem has been enlivened by a claim that it has been solved. Terry Tao has made a series of stunning posts to his blog majestically outlining the mathematical beauty and difficulty associated with the problem. In my mind, the issue is whether it really matters to the real world, or whether it is formulated in a manner that leads to utility.

I fear the answer is no. We might have enormous mathematical skill applied to a problem that has no meaning for reality.

The key word I left out so far is “incompressible”, and incompressible is not entirely physical.  An incompressible fluid is impossible.  Really.  It is impossible, but before you stop reading out of disgust with me let me explain why.

One characteristic of incompressibility is the endowment of infinitely fast sound waves into the Navier-Stokes equations. By infinite I mean infinite: sound propagates everywhere instantaneously. This is clearly unphysical (a colleague of mine from Los Alamos deemed the sound waves "superluminal"). It violates a key principle known as causality.

More properly, incompressibility is an approximation to reality, and there is a distinct possibility that this approximation causes the resulting equations to depart from reality in essential ways for a given application. For very many applications incompressibility is enormously useful as an approximation, but like all approximations it has limits on its utility. Some of these limitations are well known. Mathematically, incompressibility makes the equation set elliptic. This ellipticity is at the heart of why the problem remains unsolved to this day. Real fluids are not elliptic; real fluids have a finite speed of sound. In fact, compressibility may hold the key to solving the most important real physical problem in fluid mechanics, turbulence.
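For reference, the incompressible equations under discussion are, in standard notation with constant density ρ and kinematic viscosity ν,

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}.
```

Taking the divergence of the momentum equation and using the constraint gives a pressure Poisson equation,

```latex
\nabla^{2} p = -\rho\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right],
```

which contains no time derivative: the pressure, and with it any acoustic signal, adjusts everywhere instantaneously. That is the elliptic character, and the infinite sound speed, in one line.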

Turbulence is the chaotic motion of fluids that arises when a fluid is moving fast enough that the inertial forces of the flow sufficiently exceed the viscous forces. Turbulence is characterized by the loss of perceptible dependence of the solution on the initial data, and it carries with it a massive loss of predictability. Turbulence is enormously important to engineering, and to science in general. The universe is full of turbulent flows, as are the Earth's atmosphere and oceans. Turbulence also causes a loss of efficiency in almost any machine built by engineers. It also drives mixing, from stars to car engines to the cream in the coffee cup sitting next to my right hand.

There does exist a set of equations with the physical properties the incompressible equations lack: the compressible Navier-Stokes equations. The problem is that the Millennium Prize doesn't focus on this equation set; it focuses on the incompressible version. The question is whether going from compressible to incompressible has changed something essential about the equations, and whether that something is essential to understanding turbulence. There is a broadly stated assumption that turbulence is contained in the solutions of the incompressible Navier-Stokes equations (see, for example, the very first page of Frisch's book "Turbulence: The Legacy of A. N. Kolmogorov"). In other words, it is assumed to be contained inside an equation set that has undeniably unphysical aspects. Implicitly, the belief is that these details do not matter to turbulence. I counter that we really don't understand turbulence well enough to make that leap.

Incompressible flow is a meaningful, oft-used approximation for very many engineering applications where the large-scale speed of the fluid flow is small. The relevant parameter is the Mach number, the ratio of the flow speed to the sound speed; if the Mach number is low (less than roughly 0.1 to 0.3) the flow is assumed to be effectively incompressible. Incompressibility is a useful approximation that allows the practical solution of many problems. Turbulence is ubiquitous and particularly relevant in this low-speed limit. For this reason scientists have believed that ignoring sound waves is reasonable and that turbulence can be tackled with the incompressible approximation.

It is worth pointing out that turbulence is perhaps one of the most elusive topics known. It has defied progress for a century, with only a trickle of advances. No single person has brought more understanding to bear on turbulence than the Russian scientist Kolmogorov. His work established some very fundamental scaling laws that get right to the heart of the problem with incompressibility.

He established an analytical result that flows obey at very high Reynolds numbers (the Reynolds number is the ratio of inertial forces to viscous, dissipative forces): the 4/5 law. Basically, this law implies that turbulent flows are dissipative and that the rate of dissipation is not determined by the value of the viscosity. In other words, as the Reynolds number becomes large, its precise value does not matter to the rate of dissipation. The implications of this are massive and get to the heart of the issue with incompressibility. The law implies that the flow has discontinuous dissipative solutions (gradients that become infinitely steep). These structures would be strongly analogous to shock waves: they would appear as step functions at large scales, with any transition taking place over infinitesimally small distances. These structures have eluded scientists both experimentally and mathematically. I believe part of the reason they have been so elusive is the insistence on incompressibility.
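For the record, the 4/5 law states that in the inertial range the third-order longitudinal structure function obeys

```latex
\left\langle \left(\delta u_{\parallel}(r)\right)^{3} \right\rangle = -\tfrac{4}{5}\,\varepsilon\, r,
```

where δu_∥(r) is the velocity increment along the separation r and ε is the mean dissipation rate. The viscosity does not appear, which is precisely the statement that the dissipation rate is not set by its value.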

Compressible flows have no such problems. The flows physically and mathematically readily admit discontinuous solutions. It is not really a challenge to see this, and shock waves form naturally. Shock waves form at all Mach numbers, including the limit of zero Mach number. These structures actually dissipate energy at the same rate Kolmogorov's law indicates (this was first observed by Hans Bethe in 1942; Kolmogorov's result appeared in 1941, and there is no indication that they knew of each other's work). It is worth looking at Bethe's derivation closely. In the limit of zero Mach number, the compressible flow equations are dissipation-free if the expansion is taken to second order. It is only when the third-order term in the asymptotic expansion is considered that dissipation arises and a shock looks different from an adiabatic, smooth solution. The question is whether taking the incompressible limit has removed the desired behavior from the Navier-Stokes equations.
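The third-order character Bethe identified can be stated compactly: for a weak shock the entropy jump scales with the cube of the shock strength, which in the form given in standard gas-dynamics texts reads

```latex
\Delta s \;=\; \frac{1}{12\,T}\left(\frac{\partial^{2} V}{\partial p^{2}}\right)_{\!s}\,(\Delta p)^{3} \;+\; O\!\left((\Delta p)^{4}\right),
```

so to second order in the shock strength the flow is adiabatic, and dissipation only appears at third order, exactly as described above.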

I believe the answer is yes. More explicitly, shock phenomena and turbulence are assumed to be largely independent except in strongly compressible flows where classical shocks are found. The question is whether the fundamental nature of shocks changes continuously as the Mach number goes to zero. We know that shocks continue to form all the way down to a Mach number of zero. In that limit, the dissipation of energy is proportional to the third power of the jump in variables (velocity, density, pressure). This dependence matches the scaling associated with turbulence in the above-mentioned 4/5 law. For shocks, we know that the passage of any shock wave creates entropy in the flow. A question worth asking is, "How does this bath of nonlinear sound (pressure) waves act, and do these nonlinear features work together to produce the effects of turbulence?" This is a very difficult problem, and we may have made it more difficult by focusing on the wrong set of equations.

There have been several derivations of the incompressible Navier-Stokes equations from the compressible equations. These are called the zero-Mach-number equations and are an asymptotic limit. Embedded in their derivation is the assumption that the equation of state for the fluid is adiabatic, i.e., that no entropy is created. This is the key to the problem. Bethe's result is that shocks are non-adiabatic. In the process of deriving the incompressible equations we have removed the most important behavior vis-à-vis turbulence: the dissipation of energy by purely inertial forces.

The problem with all the existing work I've looked at is that entropy-creating compressible flow is not considered in the passage to the zero-Mach limit. In the process, one of the most interesting and significant aspects of a compressible fluid is removed because the approximation doesn't go far enough. It is possible, or maybe even likely, that these two phenomena are firmly connected. A rate of dissipation independent of viscosity is a profound aspect of fluid flows. We understand intrinsically how it arises from shock waves, yet its presence in turbulence remains mysterious. It implies a singularity, a discontinuous derivative, which is exactly what a shock wave is. We have chosen to systematically remove this aspect of the equations from incompressible flows. Is it any wonder that the problem of turbulence is so hard to solve? It is worth thinking about.

Mukhtarbay Otelbayev of Kazakhstan claims that he has solved the Navier-Stokes existence and smoothness problem. I don't know whether he has or not; I don't have the mathematical chops to render that judgment. What I'm asking is: if he has, does it really matter to our understanding of real fluid dynamics? I think it might not matter either way.

We only fund low risk research today

03 Monday Mar 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

While musing about Moore's law and algorithms last week, something important occurred to me. The systematic decision to emphasize hardware and performance increases via Moore's law over the larger gains that algorithms produce may have a distinctly psychological basis. For decades, Moore's law has provided slow, steady improvement, kind of like investing in bonds instead of risky stocks. Algorithmic improvements tend to be episodic and "quantum" in nature. They can't be relied upon to deliver a steady diet of progress on the time scale of most program managers' reigns.

 I think the problem is that banking on Moore’s law is safe while banking on algorithms is risky.  The program manager looking to succeed and move up may not want to risk the uncertainty of having little progress made on their watch.  The risk of the appearance of scandal looms.  Money can be spent with no obvious progress.  Algorithmic work depends upon breakthroughs, which we all should know can’t be scheduled. 

This has profound implications for what we try to do as a society and nation. I've bemoaned the lack of "Moonshots" today. These would be the big, risky projects that have huge payoffs but large chances of failure. It is much safer to rely upon the unsexy, low-risk, low-payoff work that is incremental. Pressure to publish is much the same. Incremental work provides easier publishing and less chance of crashing and burning. Risky work is harder to publish, more prone to clashes with reviewers, and can be immensely time consuming.

 The deeper question is “what are we losing?”  How can we remove the element of risk and failure as an impediment to deeper investments in the future?  How can we create efforts that capture the imagination and provide greater accomplishments that benefit us all?  Right now, the barriers to risky, but rewarding, research must be lowered or society as a whole will suffer.  This suffering isn’t deeply felt because it reflects gains not seen and loss of possibility.

 
