When the number of factors coming into play in a phenomenological complex is too large scientific method in most cases fails. One need only think of the weather, in which case the prediction even for a few days ahead is impossible.
― Albert Einstein
One of the dirty little secrets of computing in the scientific and engineering worlds is that the vast majority of serious calculations are highly calibrated (and that’s the nice way to say it). In many important cases, the quality of the “prediction” is highly dependent upon models being calibrated against data. In some cases calling these calibrated calculations “models” does modeling a great disservice; the calibration instruments are simply knobs used to tune the calculation. The tuning accounts for serious modeling shortcomings and often allows the simulation to approximate the fundamental balances of the physical system. Often, without the calibrated or knobbed “modeling” the entire simulation is of little use and bears no resemblance to reality. In all cases this essential simulation practice creates a huge issue for proper and accurate uncertainty estimation.
Confidence is ignorance. If you’re feeling cocky, it’s because there’s something you don’t know.
― Eoin Colfer
At some deep level the practice of calibrating simulations against data is entirely unavoidable. Behind this unavoidable reality is a more troubling conclusion: our knowledge of the World is substantially less than we might like to admit to ourselves. By the same token, the actual uncertainty in our knowledge is far larger than we are willing to admit. This sort of uncertainty cannot be meaningfully addressed by a focus on more computing hardware (more hardware could help assess it, but not resolve it). It can only be addressed through a systematic effort to improve models and to engage in broad experimental and observational science and engineering. If we work hard to actively understand reality better, the knobs can be reduced or even removed as knowledge grows. This is exactly the sort of risky work our current research culture eschews as a matter of course.
Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric.
― Bertrand Russell
This practice is essential, to varying degrees, in many areas of modeling and simulation. If we are to advance the use and utility of modeling and simulation with confidence, it must be dealt with in a better and more honest way. It is useful to point to a number of important applications where calibration or knobs are essential to success. For air flow over airplanes or automobiles, turbulence modeling is essential, and turbulence is one of the key areas for calibrated results. Climate and weather modeling is another area where knobs are utterly essential. Plasma physics is yet another area where the modeling is so poor that calibration is absolutely necessary: inertial and magnetically confined fusion both require knobs for simulations to be useful. Beyond turbulence and mixing, various magnetic or laser physics add to the problems with simulation quality, which can only be dealt with effectively through calibration and knobs.
You couldn’t predict what was going to happen for one simple reason: people.
― Sara Sheridan
The conclusion I’ve come to is that the uncertainty in calibrated or knobbed calculations has two distinct faces, each of which should be fully articulated by those conducting simulations. One is the best-case scenario, the calibrated uncertainty, which depends on the modeling and its calibration being rather complete and accurate in capturing reality. The second is the pessimistic case, where the uncertainty comes from the lack of knowledge that created the need for calibration in the first place. If the simulation is calibrated, the calibration depends strongly on the data used, and any guarantee of validity depends on staying close to the conditions under which that data was collected. Outside the range where the data was collected, the calibration should carry greater uncertainty, and the further we move outside that range, the greater the uncertainty.
This is most commonly seen in curve fitting using regression. Within the range of the data, the curve and the data are closely correlated and the standard uncertainties are relatively small. Outside that range, the uncertainty grows much larger. In the assessment of uncertainty in calculations this is rarely taken into account. Generally, those using calculations tend to remain blithely unaware of whether the calibrations they rely on are well within their range of validity. Calibration is also imperfect and carries with it an error intrinsic to the determination of the settings. The uncertainty associated with the data itself is always an issue, whether one takes the optimistic or the more pessimistic face of uncertainty.
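The growth of uncertainty outside the data range can be seen directly in the standard prediction-interval formula for simple linear regression. The sketch below is illustrative only, using synthetic data and names of my own choosing: it fits a line to noisy observations on [0, 10], then evaluates the prediction standard error inside and far outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration data": a linear trend with noise, observed on [0, 10].
n = 30
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, n)

# Ordinary least-squares fit y ~ a + b*x (polyfit returns highest degree first).
b, a = np.polyfit(x, y, 1)

# Residual standard error and the textbook prediction-interval ingredients:
#   se(x0) = s * sqrt(1 + 1/n + (x0 - xbar)^2 / Sxx)
resid = y - (a + b * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)

def pred_se(x0):
    """Standard error of a new prediction at x0."""
    return s * np.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / Sxx)

se_inside = pred_se(5.0)    # interpolation, well inside the data range
se_outside = pred_se(30.0)  # extrapolation, far outside the data range
```

The standard error grows with the squared distance from the mean of the calibration data, so a prediction at x = 30 carries a substantially wider interval than one at x = 5, and one at x = 50 wider still; calibrated calculations behave the same way, whether or not anyone reports it.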
A potentially more problematic aspect of calibration is using the knobs to account for multiple effects (turbulence, mixing, plasma physics, radiation and numerical resolution are common). In these cases the knobs may account for a multitude of poorly understood physical phenomena, mystery physics and a lack of numerical resolution. This creates a massive opportunity for severe cognitive dissonance, which is reflected in over-confidence in simulation quality. Scientists using simulations like to give those funding their work greater confidence than the work should carry, because the actual uncertainty would trouble those paying for it. Moreover, the range of validity for such calculations is not well understood or explicitly stated. A key consequence of the calibration being necessary is that the calculation cannot reflect a real World situation without it. The model simply misses key aspects of reality without the knobs (climate modeling is an essential example).
In the case of knobs accounting for numerical resolution, the effect is usually crystal clear: the knob settings must be recalibrated whenever the numerical resolution changes, typically because a new, faster computer becomes available. The problem is that those conducting the calculations rarely make a careful accounting of this effect. They simply recalibrate and move on without ever making much of it. This reflects a cavalier attitude toward computational simulation that rarely intersects with high quality, and the lack of transparency can border on the delusional. At best this is simply intellectually sloppy; at worst it reflects a core of intellectual dishonesty. In either case a better path is available to us.
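A toy sketch of this resolution dependence, under assumptions of my own choosing (a simple exponential-decay “truth” and a forward-Euler solver): a knob k is calibrated so the coarse simulation reproduces the data, and the tuned value shifts as soon as the step size changes.

```python
from math import exp

K_TRUE = 1.0  # rate of the "physics" generating the data: u(t) = exp(-K_TRUE * t)

def simulate(k, dt, t_end=1.0):
    """Forward-Euler integration of du/dt = -k*u from u(0) = 1."""
    u = 1.0
    for _ in range(round(t_end / dt)):
        u += dt * (-k * u)
    return u

def calibrate(dt, t_end=1.0):
    """Tune the knob k so the coarse simulation matches the 'data' u(t_end).

    Bisection on [0, 1/dt], where simulate() is monotone decreasing in k;
    in practice this would be a least-squares fit against experimental data.
    """
    target = exp(-K_TRUE * t_end)
    lo, hi = 0.0, 1.0 / dt
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if simulate(mid, dt, t_end) > target:
            lo = mid  # decaying too slowly: increase the knob
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_coarse = calibrate(dt=0.5)   # knob tuned at coarse resolution
k_fine = calibrate(dt=0.05)    # must be retuned when the resolution changes
```

Here the knob is silently absorbing truncation error: the coarse calibration (dt = 0.5) lands well away from the true physical rate, and refining the resolution forces a recalibration toward it. Nothing in the calibrated answer advertises this.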
Science is not about making predictions or performing experiments. Science is about explaining.
― Bill Gaede
In essence there are two uncertainties that matter: the calibrated uncertainty, where data keeps the model reasonable, and the actual predictive uncertainty, which is much larger and reflects the lack of knowledge that makes the calibration necessary in the first place. Another aspect of modeling in the calibrated setting is the proper use of the model for computing quantities. If the quantity coming from the simulation can be tied to the data used for calibration, the calibrated uncertainty is a reasonable thing to use. If the quantity is inferred rather than directly calibrated, the larger uncertainty is appropriate. Thus the calibrated model has intrinsic limitations, and cannot be used for predictions that go beyond the physical implications of the data. For example, climate modeling is certainly reasonable for examining the mean temperature of the Earth. On the other hand, the models are not calibrated against data for extreme weather events like flooding rains, and the uncertainty in predicting them under climate change is far more problematic.
In modeling and simulation nothing comes for free. If the model needs to be calibrated to accurately simulate a system, the modeling is limited in an essential way. The limitations in the model are uncertainties about aspects of the system tied to the modeling inadequacies. Any prediction of the details associated with these aspects of the model is intrinsically uncertain. The key is acknowledging the limitations associated with calibration. Calibration is needed to deal with uncertainty about the modeling, and that lack of knowledge limits the applicability of the simulation. A rational practitioner applies such modeling cautiously. Unfortunately, people are not rational and tend to put far too much faith in these calibrated models, engaging in wishful thinking and failing to account for the uncertainty in applying the simulations for prediction.
It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.
― Arthur Stanley Eddington
If we are to improve the science associated with modeling and simulation, the key is uncertainty. We should charter work that addresses the most important uncertainties through well-designed scientific investigations. Many of these mysteries cannot be addressed without adventurous experimentation. Current modeling approaches need to be overthrown and replaced with approaches free of their present limitations (e.g., the pervasive mean-field models of today). No amount of raw computing power can solve these problems. Our current research programs in high performance computing are operating in complete ignorance of the approach necessary for progress.
All you need in this life is ignorance and confidence, and then success is sure.
– Mark Twain


national security issue to sell it without a scintilla of comprehension for what makes these computers useful in the first place. The speed of the computer is one of the least important aspects of the real transformative power of supercomputing, and the most distant from its capacity to influence the real world.
immense and truly transformative. We have deep scientific, engineering and societal questions that will go unanswered, or be answered poorly, due to our risk aversion. For example, how does climate change impact the prevalence of extreme weather events? Our existing models can only infer this rather than simulate it directly. Other questions related to material failure, extremes of response for engineered systems, and numerous scientific challenges will remain beyond our collective grasp. All of this


Avoiding accountability is never a good thing. On the other hand, too much overbearing accountability starts to look like pervasive distrust. The concomitant effects of working in a low-trust environment are corrosive to everything held accountable. As with most things, the key is a balance between accountability and freedom; too much of either lowers performance. Today we have too little freedom and far too much accountability in a destructive form. For the sake of progress and quality a better balance must be restored. Today’s research environment is held accountable in ways that reflect a lack of trust and a complete lack of faith in the people doing the work, and, perhaps most importantly, that produce a dramatic lack of quality in the work.
It shows in everything we do.
Our accounting systems are out of control. They spawn an ever-growing set of rules and accounts to manage the work. All of this work is nothing more than a feel-good exercise for managers who mostly want to show “due diligence” and that they “manage risk”. No money is ever wasted doing anything (except that, increasingly, all the money is wasted). Instead we are squeezing the life out of our science, which manifests itself as low-quality work. In a very real way low-quality science is easier to manage: far more predictable and easy to hold accountable. One can easily argue that really great science, with discovery and surprise, completely undermines accountability, so we implicitly try to remove it from the realm of possibility. Without discovery, serendipity and surprise, the whole enterprise is much better suited to tried-and-true business principles. In light of where the accountability has come from, it might be good to take a hard look at these business principles and the consequences they have wrought.
Quality suffers because of the loss of breadth of perspective and of the injection of ideas from divergent points of view. Creativity and innovation (i.e., discovery) are driven by broad and divergent perspectives. Most discoveries in science are simply repurposed ideas from another field. Discovery is the thing we need to progress in science and society, and it is the very thing that our current accountability climate is destroying. Accountability helps to drive away any thoughts from outside the prescribed boundaries of the work. Another maxim of today is that the customer is always right. For us, the customers are working under similar accountability standards. Since they are “right” and just as starved for perspective, the customer works to narrow the focus. We get a negative flywheel effect in which narrowing focus and narrowing perspective enhance each other.
The end result of our current form of accountability is small-minded success. In other words, we succeed at many small unimportant things, but fail at large important things. Management can claim that everything is being done properly, yet never produce anything that really succeeds in a big way. From the viewpoint of accountability we see nothing wrong: all deliverables are met, and on time. True success would arise from attempting to succeed at bigger things, and sometimes failing. The big successes are the root of progress and the principal benefit of dreaming big and attempting to achieve big. In order to succeed big, one must be willing to fail big too. Today big failure surely brings congressional hearings and the all-too-familiar witch-hunt. Without the risk of failure we are left with small-minded success as the best we can do.
I was stunned by how empowering his description of work could be, and how far from this vision I work under today. I might simply suggest that my management read that book and implement everything in it. The scary thing is that they did read it, and nothing came of it. The current system seems to be completely impervious to good ideas (or perhaps following the book would have been too empowering to the employees!). Of course the book suggests a large number of practices that are completely impossible under current rules and opposed by the whole concept of accountability we are under today.
ical reason not to test them; the idea of not testing is purely political. It is a good political stance from a moral and ethical point-of-view and I have no issue with taking that stand on those grounds. From a scientific and engineering point-of-view it is an awful approach, and clearly far from optimal and prone to difficulties. These difficulties can be a very good thing if harnessed appropriately, but today such utility is not present in the execution of our Lab’s mission. As one should always remember, nuclear weapons are political things, not scientific, and politics is always in charge.
anniversary. Our political leaders are declaring it to be a massive success. They have been busy taking a victory lap and crowing about its achievements, the greatest of which is said to be high performance computing. These proclamations are at odds with reality. The truth is that the past 20 years have marked the decline of the quality and scientific supremacy of our Labs. The program should have been a powerful hedge against decline, and perhaps it has been. Perhaps without stockpile stewardship the Labs would be in even worse shape than they are today. That is a truly terrifying thought. We see a broad-based decline in the quality of the scientific output of the United States, and our nuclear weapons Labs are no different. It appears that the best days are behind us. It need not be this way with proper leadership and direction.
Nonetheless, given the stance of not testing, we should be in the business of doing the very best job possible within these self-imposed rules (i.e., no full-up testing). We are not, and we miss that mark by a massive degree. This is not on purpose, but rather the result of a stunning lack of clarity in objectives and priorities. We have allowed a host of other priorities to undermine success in this essential endeavor. Taking the fully integrated testing of the weapons off the table requires that we bring our very best to everything else we do.
I’ve written a great deal about how bad our approach to modeling and simulation is, but it’s the tip of the proverbial iceberg of incompetence and of steps that systematically undermine the work necessary to succeed. While modeling and simulation gets a lot of misdirected resources, the experimental and theoretical efforts at the Labs have been eviscerated. The impact of this evisceration on modeling and simulation is evident in problems with the actual credibility of simulation. This destruction has come at the time when these efforts are needed most. Instead, support for these essential scientific engines of progress has been “knee-capped”. Just as importantly, a positive work environment has been absolutely annihilated by how the Labs are managed.
Science becomes so incremental that progress is glacial. You almost completely guarantee safety and, in the process, a complete lack of discovery. Experiments lose all their essence and their utility as a hedge against over-confidence by surprising us. Add the risk aversion discussed below, and you have experimental science that does almost nothing. As a result we get very little for our experimental dollar, and allow ourselves to do almost nothing innovative or exciting. So yes, safety is really important, and we need to produce a safe working environment. But this same environment must also be a productive place. The productivity gains we have seen in the private world have been systematically undermined at the Labs, not just by safety, but by two other drivers: risk aversion and security.
Finally, we have a focus on accountability in which we want a guarantee that no money is ever wasted. Part of this is risk aversion: research that might not pan out doesn’t get funded, because not panning out is viewed as failure. In truth these failures are at the core of learning and growing; failure is essential to learning and acquiring knowledge. Our accountability system is working to destroy the scientific method, the development of staff, and our ability to be the best. To some extent we account not because we need to, but because we can. Computers allow us to sub-divide our sources of money, along with everyone’s time and effort, into ever-smaller bins. In the process we lose the crosscutting nature of the Labs’ science and destroy the multi-disciplinary science that is absolutely essential to doing the work of stewardship. Without multi-disciplinary science we will surely fail at this essential mission, and we are managing the Labs in a way that assures this outcome.
All of this is systematically crushing our workforce and its morale. In addition, we are failing to build the next generation of scientists and engineers with a level of quality necessary for the job. We are allowing the quality of the staff to degrade through the mismanagement of the entire enterprise at a National level. Without a commitment to real unequivocal success in the stewardship mission, the entire activity is simply an exercise in futility.
When we look at the overall picture we see a system that is not working. We spend more than enough money on stockpile stewardship, but we spend it foolishly. The money is being wasted on a whole host of things that have nothing to do with stewardship. Most of the resources go into guaranteeing complete safety, complete absence of risk, complete security and complete accountability. It is a recipe for abject failure at the integrated job of safeguarding the Nation. We are failing in a monumental way while giving our country the picture of success. Of course the average American is easily fooled; if they weren’t, would our politics be so dysfunctional and dominated by fear-based appeals?
Another way of making progress is to renew our intent to build truly World-class scientists at the Labs. We can do this by harnessing the Labs’ missions to do work that challenges the boundaries of science. Today we are World class by definition, not through our actions. We can change this by addressing our challenges with a bold and aggressive research program. This will drive professional development to heights that today’s approach cannot match. Part of the key to developing people is to allow their work to be the engine of learning. For learning, failure and risk are key. Without failure we learn nothing; we merely recreate the successes we already know about. World-class science is about learning new things and cannot happen without failure, and failure is not tolerated today. Without failure science does not work.
The stupid, naïve and unremittingly lazy thinking that permeates high performance computing isn’t found only there. It dominates the approach to stockpile stewardship. We are stewarding our nuclear weapons with a bunch of wishful thinking instead of a well-conceived and well-executed plan. We are in the process of systematically destroying the research excellence that has been the foundation of our National security. It is not malice, but rather societal incompetence, that is leading us down this path. Increasingly, faith in our current approach depends on the unreality of the whole nuclear weapons enterprise. These weapons haven’t been used for 70 years, and hopefully that lack of use will continue. If they are used, we will be in a much different World, and one we are no longer ready for. I seriously worry that our lack of seriousness and pervasive naivety about the stewardship mission will haunt us. If we have screwed this up, history will not be kind to us.