Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.
― Henri Poincaré
In looking at the dynamics surrounding verification and validation recently, I've noticed a rather grim evolution of the narrative. Two things have happened to undermine the maturity of V&V. One I've spoken about in the past: the tendency to drop verification and focus solely on validation. In the absence of verification, validation becomes rather strained and drifts toward calibration; assurances that one is properly solving the model one claims to be solving are unsupported by evidence. This is bad enough all by itself. The use of V&V as a vehicle for improving modeling and simulation credibility is threatened by this alone, but something worse looms even larger.
A more common and pervasive trend is the conflation of validation with uncertainty quantification. It has become very common for uncertainty quantification (UQ) to be defined as the whole of validation. To some extent this is fueled by a focus on high performance computing, where UQ provides a huge appetite for computational cycles, thus eliciting lots of love and support in HPC circles. Validation must be about experiments and a broad cross section of uncertainties that may only be examined through a devotion to multi-disciplinary work and collaboration. One must always remember that validation can never be separated from measurements in the real world, whether experimental or observational. The experiment-simulation connection in validation is primal and non-negotiable.
There are three types of lies — lies, damn lies, and statistics.
― Benjamin Disraeli
A second part of the issue is the hot-topic nature of UQ. UQ has become a buzzword and seems to be a hot issue in publishing and research. Saying you're doing UQ seems to be a means of squeezing money out of funding agencies. In addition, UQ can be done relatively automatically and mechanically. Tools and techniques exist to enable UQ to be done without much deep thought, even though it touches upon many deep technical topics. Actual validation is far harder and more holistic. The core of any work in validation is serious experimental expertise and hard-nosed comparison with simulations. The detailed nature of the experiment and its intrinsic errors and uncertainties is the key to any comparison. Without knowing the experimental uncertainty, any computational uncertainty is context-free. My grumpy intellectual side would quip that validation requires thinking, and that leads people to avoid it because thinking is so hard. The deeper issue is that validation is complex and multi-disciplinary in nature, making it collaborative and difficult. Experts in a single discipline can do UQ, so it is an easy out.
Five percent of the people think;
ten percent of the people think they think;
and the other eighty-five percent would rather die than think.
― Thomas A. Edison

One of the biggest issues is the stunning incompleteness of UQ in general. Most commonly UQ is done via an exploration of the variation of parameters in models. Complex models of reality have a lot of constants that are not known with great precision. Various techniques may be utilized to efficiently examine the variation in computational solutions due to changes in these parameters. Among the methods used are Markov chain Monte Carlo (MCMC), polynomial chaos, and other sampling methods. The results from this work are useful and sound, but form a rather incomplete view of uncertainty. Even in these cases the sampling is often subject to its own lack of certainty, with the assessment driven by the difficulty of determining uncertainty in high-dimensional spaces. Modeling and simulation suffers from a host of other uncertainties not covered by these methodologies. For example, most simulations have some degree of numerical error that may be quite large; numerous techniques exist for exploring its magnitude and nature. Many systems being modeled have some stochastic character or variability associated with them. Modeling assumptions are often made in simulating a system or experiment, and the solution may change greatly on the basis of these assumptions or approximations. A different computational modeler may make quite different assumptions and produce a different solution.
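The parametric flavor of UQ described above can be sketched in a few lines: draw samples of an uncertain model constant, push each through the model, and summarize the spread of the outputs. This is a minimal illustration only; the exponential-decay "model" and the assumed 10% Gaussian uncertainty on its rate constant are hypothetical stand-ins, not taken from any particular code.

```python
import math
import random
import statistics

random.seed(42)

def model(k, t):
    """A stand-in 'simulation': exponential decay with uncertain rate k."""
    return math.exp(-k * t)

# Uncertain parameter: rate constant known only to ~10% (assumed distribution).
k_samples = [random.gauss(1.0, 0.1) for _ in range(10_000)]

t = 2.0
outputs = sorted(model(k, t) for k in k_samples)

# Summarize the parametric uncertainty in the prediction.
mean = statistics.fmean(outputs)
lo = outputs[int(0.025 * len(outputs))]   # ~2.5th percentile
hi = outputs[int(0.975 * len(outputs))]   # ~97.5th percentile
print(f"prediction: {mean:.3f}, 95% interval: [{lo:.3f}, {hi:.3f}]")
```

The point of the sketch is also its limitation: the interval reflects only the assumed parameter distribution, and says nothing about numerical error, modeling assumptions, or system variability.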
Judge a man by his questions rather than by his answers.
― Voltaire
If validation is to be done properly, a fairly complete accounting of modeling and simulation uncertainty is needed. One also needs to understand the experimental error and uncertainty with equal completeness, and to be acutely aware of the intrinsic lack of certainty in the estimation of uncertainty itself. The combination of the solutions and the sizes of each uncertainty puts a modeling and simulation solution into proper context. Without knowledge of the uncertainties in each data source, the distance between solutions cannot be judged. For example, if the experimental precision is very good and the uncertainty is quite small, the simulation needs to be equally precise to be judged well. Conversely, a large experimental uncertainty would allow the model to be much looser and still be judged well. More critically, the experiment wouldn't provide actionable evidence on research needs, and expert judgment would reign.
The whole of the uncertainty provides an important source of scientific tension. If experimental uncertainty is small, modeling and simulation must be equally precise to imply good results; this pushes the modeling to improve to meet the high standard of the experiment. If the modeling and simulation is very good, but the experiments have large uncertainty, this should push the experiments to improve because they fail to constrain and distinguish between models. By having a deep and complete understanding of uncertainty, we can define where we need to put resources to improve. We know which aspects of our current knowledge are most in need of attention and most limiting to progress.
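The tension between experimental and computational uncertainty can be made concrete with a toy comparison: judge the simulation-experiment discrepancy against the combined uncertainty of both sources. This is a simplified sketch in the spirit of standard validation metrics (the independence assumption, the 2-sigma band, and all the numbers are my own illustrative choices, not anyone's prescribed procedure).

```python
import math

def validation_comparison(sim, u_sim, exp, u_exp):
    """Compare a simulated value to an experimental one in the context
    of both uncertainties (assumed independent, combined in quadrature)."""
    error = sim - exp
    u_combined = math.sqrt(u_sim**2 + u_exp**2)
    # If |error| sits inside the combined uncertainty band, agreement
    # cannot be distinguished from noise; outside it, there is evidence
    # of genuine model discrepancy.
    consistent = abs(error) <= 2.0 * u_combined  # ~95% band
    return error, u_combined, consistent

# Hypothetical numbers: a tight experiment exposes a model discrepancy...
e1, u1, ok1 = validation_comparison(sim=10.4, u_sim=0.05, exp=10.0, u_exp=0.05)
# ...while a loose experiment lets the very same model pass unchallenged.
e2, u2, ok2 = validation_comparison(sim=10.4, u_sim=0.05, exp=10.0, u_exp=0.5)
print(f"tight experiment: error={e1:.2f}, u={u1:.3f}, consistent={ok1}")
print(f"loose experiment: error={e2:.2f}, u={u2:.3f}, consistent={ok2}")
```

The same model fails against the precise experiment and passes against the sloppy one, which is exactly the tension described above: only the tight measurement produces actionable evidence.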
One must always be aware of the significant attraction of short-changing uncertainty estimation. Doing a complete job of estimating uncertainty almost always increases the magnitude of the uncertainty. This is where science as a fundamentally human enterprise comes into play. People would rather believe uncertainties are small than large. Uncertainty is uncomfortable, and people shy away from discomfort. By under-estimating uncertainty, people unconsciously put themselves at ease through incomplete work. A more rigorous and complete approach almost always produces a discomforting result. When one combines discomfort with difficulty of accomplishment, the necessary factors for lack of effort and completeness become clear. With this temptation in mind, the tendency to take the easy route must be acknowledged.
The bottom line is that understanding uncertainty in a holistic manner can produce useful and defensible context for science. It can allow us to understand where we need to improve our knowledge or practice. Without this accounting, the whole issue falls back on expert judgment or politics to make the decisions. We fail to understand where our knowledge is weak and potentially overlook experiments necessary for understanding. We may have the right experiments, but cannot make measurements of sufficient accuracy. We might have models of insufficient complexity, or numerical solutions with too much numerical error. All of these spell out different demands for resource allocation.
Much of the tension is captured in these two quotes, although I hope Eddington was trying to be ironic!
Never trust an experimental result until it has been confirmed by theory
― Arthur Stanley Eddington
It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.
― Richard Feynman
provide real value to our applied programs, and with this value comes generous financial support. This support is necessary for the codes to do their job and creates financial stability. With this support comes acute responsibility that needs to be acknowledged, and serious effort needs to be applied to meeting it.
capability for real problems. While this character is primary in defining production codes, everything else important in high performance computing is eclipsed by the modeling imperative. These codes are essential for the utility of high performance computing resources and often become the first codes to make their way onto high-end resources. They quite often are the explicit justification for the purchase of such computing hardware. This character usually dominates and demands a certain maturity of software professionalism.
On the flip side, there are significant detrimental aspects of such codes. For example, the methods and algorithms in production codes are often crude and antiquated in comparison to the state of the art. The same can be said for the models, the algorithms, and often the computer code itself. The whole of a production code's credibility is deeply affected by these pedigrees and their impact on real-world programs and things. This issue comes from several directions: the codes are often old and used for long periods of time, and the experts who traditionally define the credibility drive this to some extent. It often takes a long time to develop a code to the level needed to solve hard real-world problems, as well as the expertise to turn the code's capability into results that have real-world meaning. Older methods are robust, proven, and trusted (low-order and dissipative is usually how robust happens). Newer methods are more fragile, or simply can't deal with all the special cases and issues that threaten the solution of real problems. Again, the same issues are present with models, algorithms, and the nature or quality of the computer code itself.
In far too many instances, the systematic pedigree-defining steps are being skipped in favor of the old system of person-centered credibility. The old person-centered system is simple and straightforward: you trust somebody and develop a relationship that supports credibility. This person's skills include detailed technical analysis, but also inter-personal relationship building. If such a system is in place, there is no problem as long as the deeper modern credibility is also present. Too often the modern credibility is absent or short-changed, effectively replaced by a cult of personality. If we put our trust in people who rely on force of personality or personal relationships rather than valuing the best technical work available, we probably deserve the substandard work that results.
will touch upon at the very end of this post, Riemann solvers (numerical flux functions) can also benefit from this, but some technicalities must be proactively dealt with.
approximations. This issue can easily be explained by looking at the smooth sign function compared with the standard form. Since the dissipation in the Riemann solver is proportional to the characteristic velocity, we can see that the smoothed sign function is everywhere smaller in magnitude than the standard function, resulting in less dissipation. This is a stability issue analogous to concerns around limiters, where these smoothed functions are slightly more permissive. Using the exponential version of “softabs”, where the value is always greater than the standard absolute value, can modulate this permissive nature.
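The two behaviors described above can be checked numerically. The specific forms below are my guesses at common choices, not necessarily the ones the post has in mind: a tanh-based smooth sign, whose product with x always sits at or below |x| (less dissipation), and an exponential "softabs", eps*log(e^(x/eps) + e^(-x/eps)), which bounds |x| from above everywhere.

```python
import math

EPS = 0.1  # smoothing width (assumed; controls how "soft" the functions are)

def smooth_sign(x, eps=EPS):
    """Smoothed sign function; |smooth_sign(x)| < 1 everywhere, so
    x * smooth_sign(x) <= |x| and dissipation built on it is reduced."""
    return math.tanh(x / eps)

def softabs_exp(x, eps=EPS):
    """Exponential 'softabs': eps * log(e^(x/eps) + e^(-x/eps)).
    Always >= |x|, so the dissipation is modulated upward, not reduced."""
    a = abs(x) / eps
    # Numerically stable rewrite: |x| + eps*log(1 + e^(-2|x|/eps)).
    return eps * (a + math.log1p(math.exp(-2.0 * a)))

for x in [-0.5, -0.05, 0.0, 0.05, 0.5]:
    print(f"x={x:+.2f}  x*smooth_sign={x * smooth_sign(x):.4f}  "
          f"|x|={abs(x):.4f}  softabs={softabs_exp(x):.4f}")
```

The table the loop prints shows the two-sided squeeze: x*smooth_sign(x) never exceeds |x| (the permissive direction), while softabs_exp(x) never drops below it, which is the property that restores the dissipation.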
computing. Worse yet, computing cannot be a replacement for reality, but rather is simply a tool for dealing with it better. In the final analysis, the real world still needs to be at the center of the frame. Computing needs to be viewed in the proper context, and this perspective should guide our actions in its proper use.
In many ways computing is driving enormous societal change and creating very real stress in the real world. These stresses are stoking fears and lots of irrational desire to control dangers and risks. All of this control is expensive and drives an economy of fear. Fear is very expensive; trust, confidence, and surety are cheap and fast. One totally irrational way to control fear is to ignore it, allowing reality to be replaced. For people who don't deal with reality well, the online world can be a boon. Still, the relief from a painful reality ultimately needs to translate to something physically tangible. We see this in an over-reliance on modeling and simulation in technical fields. We falsely believe that experiments and observations can be replaced, and that the human endeavor of communication can be done away with through electronic means. In the end reality must be respected, and people must be engaged in conversation. Computing only augments, but never replaces, the real world, real people, or real experience. This perspective is a key realization in making the best use of technology.
on computer power coupled to an unchanging model as the recipe for progress. Focus and attention on improving modeling is almost completely absent in the modeling and simulation world. This ignores one of the greatest truths in computing: no amount of computer power can rescue an incorrect model. These truths do little to alter the approach, although we can be sure that we will ultimately pay for the lack of attention to these basics. Reality cannot be ignored forever; it will make itself felt in the end. We could make it more important now, to our great benefit, but eventually our lack of consideration will demand more attention.
independently. The proper decomposition of error allows the improvement of modeling in a principled manner.
ng the model we believe we are using, the validation is powerful evidence. One must recognize that the degree of understanding is always relative to the precision of the questions being asked: the more precise the question, the more precise the model needs to be. This useful tension can help drive science forward. Specifically, the improving precision of observations can spur model improvement, and the improving precision of modeling can drive observation improvements, or at least the necessity of improvement. In this creative tension, the accuracy of solution of models and computer power play but a small role.
Sometimes you read something that hits you hard. Yesterday was one of those moments while reading Seth Godin's daily blog post.
I rewrote Godin's quote to reflect how work is changing me (at the bottom of the post). It really says something needs to give. I worry about how many of us feel the same thing. Right now the workplace is making me a shittier version of myself. I feel that self-improvement is a constant struggle against my baser instincts. I'm thankful for a writer like Seth Godin who can push me into a vital and much needed self-reflective "what the fuck"!
We are seeing cultural, economic, and political changes of epic proportions across the human world. With the Internet forming a backbone of immense interconnection, and globalization, the transformations to our society are stressing people, resulting in fearful reactions. These combine with genuine threats to humanity in the form of weapons of mass destruction, environmental damage, mass extinctions, and climate change to form the basis of existential danger. We are not living on the cusp of history; we are living through a tidal wave of change. There are massive opportunities available, but the path is never clear or safe. As the news every day testifies, the present mostly kind of sucks. While I'd like to focus on the possibilities of making things better, the scales are tipped toward the negative backlash to all this change. The forces trying to stop the change in its tracks are strong and appear to be growing stronger.
Many of our institutions are under continual assault by the realities of today. The changes we are experiencing are incompatible with many of our institutional structures, such as the places I work. Increasingly this assault is met with fear. The evidence of the overwhelming fear is all around us. It finds its clearest articulation in the political world, where fear-based policies abound with the rise of nationalist, anti-globalization candidates everywhere. We see the rise of racism, religious tensions, and protectionist attitudes all over the world. The religious tensions arise from an increased tendency to embrace traditional values as a hedge against the avalanche of social change accompanying technology, globalization, and openness. Many embrace restrictions and prejudice as a solution to changes that make them fundamentally uncomfortable. This produces a backlash of racist, sexist, homophobic hatred that counters everything about modernity. In the workplace this mostly translates to a genuinely awful situation of virtual paralysis and creeping bureaucratic over-reach, resulting in a workplace that is basically going nowhere fast. For someone like me who prizes true progress above all else, the workplace has become a continually disappointing experience.
ance. As we embrace online life and social media, we have gotten supremely fixated on superficial appearances and lost the ability to focus on substance. The way things look has become far more important than the actuality of anything. Having a reality show celebrity as the President seems like a rather emphatic exemplar of this trend. Someone who looks like a leader, but lacks most of the basic qualifications is acceptable to many people. People with actual qualifications are viewed as suspicious. The elite are rejected because they don’t relate to the common man. While this is obvious on a global scale through political upheaval, the same trends are impacting work. The superficial has become a dominant element in managing because the system demands lots of superficial input while losing any taste for anything of enduring depth. Basically, the system as a whole is mirroring society at large.
The prime institutional directive is survival, and survival means no fuck ups, ever. We don't have to do anything as long as no fuck ups happen. We are ruled completely by fear. There is no balance at all between fear-based motivations and the needs of innovation and progress. As a result, our core operational principle is compliance above all else. Productivity, innovation, progress, and quality all fall by the wayside to empower compliance. Time and time again, decisions are made to prize compliance over productivity, innovation, progress, quality, or efficiency. Basically, the fear of fuck ups will engender a management action to remove that possibility. No risk is ever allowed. Without risk there can be no reward, and today no reward is sufficient to blunt the destructive power of fear.
rection. It is the twin force of professional drift and institutional destruction. Working at an under-led institution is like sleepwalking. Every day you go to work, basically making great progress at accomplishing absolutely nothing of substance. Everything is make-work and nothing is really substantive; you have lots to do because of management oversight and the no-fuck-up rules. You make up results and produce lots of spin to market the illusion of success, but there is damn little actual success or progress. The utter and complete lack of leadership and vision is understandable if you recognize the prime motivation of fear. To show leadership and vision requires risk, risk cannot take place without failure, and failure courts scandal. Risk requires trust, and trust is one of the things in shortest supply today. Without the trust that allows a fuck up without dire consequences, risks are not taken. Management is now set up to completely control and remove the possibility of failure from the system.
rewards and achievement without risk is incompatible with experience. Every day I go to work with the very explicit mandate to do what I'm told. The clear message every day is never, ever fuck up. Any fuck ups are punished. The real key is: don't fuck up, don't point out fuck ups, and help produce lots of "alternative results" or "fake breakthroughs" to help sell our success. We all have lots of training to do so that we make sure everyone thinks we are serious about all this shit. The one thing that is absolutely crystal clear is that getting our management stuff correct is far more important than ever doing any real work. As long as this climate of fear and oversight is in place, the achievements and breakthroughs that made our institutions famous (or great) will be a thing of the past. Our institutions are all about survival and not about achievement. This trend is replicated across society as a whole; progress is something to be feared because it unleashes unknown forces, potentially scaring everyone. The resulting fear undermines trust, and without trust the whole cycle reinforces itself.
A big piece of the puzzle is the role of money in perceived success. Instead of other measures of success, quality, and achievement, money has become the one-size-fits-all measure of the goodness of everything. Money provides the driving tool for management to execute its control and achieve broad-based compliance. You only work on exactly what you are supposed to be working on. There is no time to think or act on ideas, learn, or produce anything outside the contract you've made with your customers. Money acts like a straitjacket for everyone and serves to constrict any freedom of action. A core truth of the modern environment is that all other principles are ruled by money. Duty to money subjugates all other responsibilities. No amount of commitment to professional duties, excellence, learning, or your fellow man can withstand the pull of money. If push comes to shove, money wins. The peer review issues I've written about are testimony to this problem; excellence is always trumped by money.
called leaders who utilize fear as a prime motivation. Every time a leader uses fear to further their agenda, we take a step backward. One of the biggest elements in this backwards march is thinking that fear and danger can be managed. Danger can only be pushed back, never defeated. By controlling it in the explicit manner we attempt today, we only create a darker, more fearsome danger in the future that will eventually overwhelm us. Instead we should face our normal fears as a requirement of the risk progress brings. If we want the benefits of modern life, we must accept risk and reject fear. We need actual leaders who encourage us to be bold and brave instead of using fear to control the masses. We need to quit falling for fear-based pitches and hold to our principles. Ultimately our principles need to act as a barrier to fear becoming the prevalent force in our decision-making.
Everyone wants his or her work, or work they pay for, to be high quality. The rub comes when you start to pay for the quality you want. Everyone seems to want high quality for free, and too often believes that low-cost quality is a real thing. Time and time again it becomes crystal clear that high quality is extremely expensive to obtain. Quality is full of tedious, detail-oriented work that is very expensive to conduct. More importantly, when quality is aggressively pursued, it will expose problems that must be addressed and rectified for quality to be achieved. This ends up being the rub, as people often need to stop adding capability or producing results and focus on fixing the problems. People, customers, and those paying for things tend not to want to pay for fixing problems, which is necessary for quality. As a result, it's quite tempting to not look so hard at quality and simply do more superficial work where quality is largely asserted by fiat or authority.
The entirety of this issue is manifested in the conduct of verification and validation in modeling and simulation. Doing verification and validation is a means of producing high quality work in modeling and simulation. Like other forms of quality work, it can be done well by engaging with the details and running problems to ground. Thus V&V is expensive and time-consuming. These quality measures take time and effort away from results, and worse yet produce doubt in the results. As a consequence, the quality mindset and efforts need significant focus and commitment, or they will fall by the wayside. For many customers the results are all that matters; they aren't willing to pay for more. This becomes particularly true if those doing the work are willing to assert quality without doing the work to actually assure it. In other words, the customer will take work that is asserted to be high quality on the word of those doing it. If those doing the work are trying to do this on the cheap, we produce low or indeterminate quality work, sold as high quality, masking the actual costs.
The largest part of the issue is the confluence of two terrible trends: increasingly naïve customers for modeling and simulation, and decreasing commitment to paying for modeling and simulation quality. Part of this comes from customers who believe in modeling and simulation, which is a good thing. But the "quality on the cheap" simulations create a false sense of security. Basically, we have customers who increasingly have no ability to tell the difference between low and high quality work; the work's quality is completely dependent upon those doing it. This is dangerous in the extreme, especially when the modeling and simulation work is not emphasizing quality or paying for its expensive acquisition. We have become too comfortable with tempting quick-and-dirty quality. The (color) viewgraph norm that used to be the quality standard for computational work, and had faded in use, is making a comeback. A viewgraph-norm version of quality is orders of magnitude cheaper than the detailed quantitative work needed to accumulate evidence. Many customers are perfectly happy with the viewgraph norm and naïvely accept results that simply look good and are asserted to be high quality.
Perhaps an even bigger issue is the misguided notion that the pursuit of high quality won't derail plans. We have gotten into the habit of accepting highly delusional plans for developing capability that do not factor in the cost of quality. We have allowed ourselves to bullshit the customer into believing that quality is simple to achieve. Instead, the pursuit of quality will uncover issues that must be dealt with and will ultimately change schedules. We can take the practice of verification as an object lesson in how this works out. If done properly, verification will uncover numerous and subtle errors in codes: bugs, incorrect implementations, faulty boundary conditions, or error accumulation mechanisms. Sometimes the issues uncovered are deeply mysterious and solving them requires great effort. Sometimes the problems exposed require research with uncertain or indeterminate outcomes. Other times the issues overthrow basic presumptions about your capability and require significant corrections to large-scale objectives. We increasingly live in a world that cannot tolerate these realities. The current belief is that we can apply project management to the work and produce high quality results that ignore all of this.
The trip down to "quality hell" starts with the impact of digging into quality. Most customers are paying for capability rather than quality. When we allow quick-and-dirty means of assuring quality, the door is open for the illusion of quality. For the most part, the verification and validation done by most scientists and engineers is of the quick, dirty, and incomplete variety. We see the eyeball or viewgraph norm used pervasively in comparing results in both verification and validation. We see no real attempt to grapple with the uncertainties in calculations or measurements to put comparisons in quantitative context. Usually people create graphics that give the illusion of good results, and use authority to dictate that these results indicate mastery and quality. For the most part, the scientific and engineering community simply gives in to the authoritative claims despite the lack of evidence. The deeper issue with quick-and-dirty verification is the mindset of those conducting it: they work from the presumption that the code is correct, instead of assuming there are problems and collecting evidence to test that presumption.
quantitative work is the remedy for the qualitative eyeball, viewgraph, and color-video metrics so often used today. Deep quantitative studies produce the sort of evidence that cannot be ignored. If the results are good, the evidence of quality is strong. If a problem is found, the need for a remedy is equally strong. In validation or verification, the creation of an error bar goes a long way toward putting any quality discussion in context; the lack of an error bar casts any result adrift and leaves it lacking context. A secondary issue is incomplete work, where full error bars are not pursued, or unfavorable results are not pursued or, worse yet, suppressed.
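One standard piece of the quantitative verification work described here is computing an observed order of accuracy from solutions on three systematically refined grids, and turning the grid-to-grid differences into a numerical error bar. The sketch below uses a manufactured second-order "solution" of my own invention; the Grid Convergence Index line follows the commonly cited Roache form with a 1.25 safety factor, stated here in absolute rather than relative terms.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy from three grids with constant
    refinement ratio r (standard Richardson-style estimate)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def gci_fine(f_medium, f_fine, p, r=2.0, Fs=1.25):
    """Grid Convergence Index: a conservative numerical 'error bar' on the
    fine-grid value (absolute-error variant, safety factor Fs = 1.25)."""
    return Fs * abs(f_medium - f_fine) / (r**p - 1.0)

# Manufactured example: a second-order method, f(h) = 1.0 + 0.5*h^2,
# so the exact answer is 1.0 and the leading error is quadratic in h.
f = lambda h: 1.0 + 0.5 * h**2
fc, fm, ff = f(0.4), f(0.2), f(0.1)

p = observed_order(fc, fm, ff)   # should come out near 2 for this data
err_bar = gci_fine(fm, ff, p)    # numerical error bar on the fine grid
print(f"observed order ~ {p:.2f}, fine-grid value {ff:.4f} +/- {err_bar:.4f}")
```

Even this tiny calculation does what the eyeball norm cannot: it states an order of accuracy that can be checked against the method's formal order, and attaches an error bar (here conservatively covering the true fine-grid error of 0.005) that puts any comparison in context.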
alternative facts" are driven by this lack of willingness to deal with reality. Why deal with truth and the reality of real problems when we can just define them away with more convenient facts? In today's world we are seeing a rise of lies, bullshit, and delusion all around us. As a result, we are systematically over-promising and under-delivering on our work. We over-promise to get the money, and then under-deliver because of the realities of doing work: one cannot get maximum capability with maximum quality for discount prices. Increasingly, bullshit (propaganda) fills in the space between what we promise and what we deliver. Pairing with this deep dysfunction is a systematic failure of peer review within programs. Peer review has been installed as a backstop against the tendencies outlined above. The problem is that too often peer review does not have free rein. Too often we have conflicts of interest, or control that provides an explicit message that the peer review had better be positive, or else.
We bring in external peer reviews filled with experts who have the mantle of legitimacy. The problem is that these experts are hired or drafted by the organizations being reviewed. Being too honest or frank in a peer review is the quickest route to losing that gig and the professional kudos that go along with it. One bad or negative review will assure that the reviewer is never invited back. I've seen it over and over again: anyone who provides an honest critique is never seen again. A big part of the issue is that reviews are viewed as pass-fail tests, and problems uncovered are dealt with punitively. Internal peer reviews are even worse. Again, any negative review is met with disdain. The person having the audacity and stupidity to be critical is punished, and this punishment is meted out with the clear message, "only positive reviews are tolerated." Positive reviews are thus mandated by threat and retribution. We have created the recipe for systemic failure.
Putting the blame on systematic wishful thinking is far too kind. High quality for a discount price is wishful thinking at best. If the drivers for this weren't naïve customers and dishonest programs, it might be forgivable. The problem is that everyone who is competent knows better. The real key to seeing where we are going is the peer review issue. By squashing negative peer review, the truth is exposed: those doing all this substandard work know the work is poor, and simply want a system that does not expose the truth. We have created a system of rewards and punishments that allows this. Reward is all monetary, and very little positive happens based on quality. We can assert excellence without doing the hard things necessary to achieve it. As long as we allow people to simply declare their excellence without producing evidence of it, quality will languish.
In the modern dogmatic view of high performance computing, the dominant theme of utility revolves around being predictive. This narrative theme is both appropriate and important, but it often fails to recognize the essential prerequisites for predictive science: the need to understand and explain. In scientific computing, the ability to predict with confidence is always preceded by the use of simulations to aid and enable understanding and to assist in explanation. A powerful use of models is the explanation of the mechanisms leading to what is observed. In some cases simulations allow exquisite testing of models of reality, and when a model matches reality we infer that we understand the mechanisms at work in the world. In other cases we have observations of reality that cannot be explained. With simulations we can test our models or experiment with mechanisms that can explain what we see. In both cases the confidence of the traditional science and engineering community is gained through the process of simulation-based understanding.
Understanding as the object of modeling and simulation also works powerfully to build the culture of technical depth necessary for prediction. I see simulation leaping into the predictive fray without the understanding stage as arrogant and naïve. This is ultimately highly counter-productive. Rather than building on the deep trust that the explanatory process provides, any failure on the part of simulation becomes proof of the negative. In the artificially competitive environment we too often produce, the result is destructive rather than constructive. Prediction without first establishing understanding is an act of hubris, and it plants the seeds of distrust. In essence, sidestepping the understanding phase of simulation use makes failures absolutely fatal to success instead of stepping-stones to excellence. This is because the understanding phase is far more forgiving. Understanding is learning, and it can be engaged in with a playful abandon that yields real progress and breakthroughs. It works through a joint investigation of things no one knows, and any missteps are easily and quickly forgiven. This allows competence and knowledge to be built through the acceptance of failure. Without allowing these failures, success in the long run cannot happen.
A specific area where this dynamic is playing out with maximal dysfunctionality is climate science. Climate modeling codes are not predictive and tend to be highly calibrated to the mesh used. The overall modeling paradigm involves a vast number of submodels to include a plethora of physical processes important within the Earth’s climate. In a very real sense, the numerical solution of the equations describing the climate will forever be under-resolved, with significant numerical error. The system of Earth’s climate also involves very intricate and detailed balances between physical processes. The numerical error is generally quite a bit larger than the balance effects determining the climate, so the overall model must be calibrated to be useful.
In the modern modeling and simulation world this calibration then becomes the basis for very large uncertainties. The combination of numerical error and modeling error means that the true simulation uncertainty is relatively massive. The calibration assures that the actual simulation stays quite close to the behavior of the true climate. The models can then be used to study the impact of various factors on climate and to deepen the understanding of climate science. This entire enterprise is highly model-driven, and the level of uncertainty is quite large. When we transition to predictive climate science, the issues become profound. We live in a world where people believe that computing should provide quantitative assistance for vexing problems. The magnitude of uncertainty from all sources should give people significant pause and push back against putting simulations in the wrong role. It should not, however, prevent simulation from serving as a key tool for understanding this incredibly complex problem.
The current program generally does not support the understanding role of simulation in science.
In summary, we have yet another case of the marketing of science overwhelming the true narrative. In the search for funding to support computing, the sales pitch has been arranged around prediction as a product. Increasingly, we are told that a faster computer is all we really need. The implied message in this sales pitch is that there is no necessity to support and pursue other aspects of modeling and simulation for predictive success. These issues are plaguing our scientific computing programs. The long-term success of high performance computing is going to be sacrificed on the basis of this funding-motivated approach. To this we can add the failure to recognize understanding, explaining and learning as key products of computation for science and engineering.