The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent
Monthly Archives: October 2015

“Preserve the Code Base” is an Awful Reason for Anything

30 Friday Oct 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

The greater danger for most of us lies not in setting our aim too high and falling short; but in setting our aim too low, and achieving our mark.

― Michelangelo Buonarroti

The DOE ASC program has turned into “Let’s create a program that will replace the old generation of legacy codes with a new generation of legacy codes.” In this way the program, which just celebrated its 20th anniversary, has been a massive success. Unfortunately, this end product is not in service to our National security; it is a threat.

One of the reasons I have been given for some of the work we are doing is the need to “preserve our code base”. This code base is called a multi-billion-dollar investment that DOE has made and needs to maintain for the future. Nothing could be further from the truth. It is one of the most defeatist and insulting things I can imagine. It is naïve and simplistic at its core. This makes me want to puke.

Why should I have such a strong and visceral reaction to a statement of “support” for the importance of modeling and simulation work? After all, preserving the code base comes with funding for lots of work and the purchase of super-exotic super-computers that seem really cool (they’re really big and complicated with lots of flashing lights, plus they cost a shit-ton of money). My problem comes from the lack of faith this approach denotes in the ability of our current scientists to produce anything of intellectual value. Instead of valuing the creativity and creation of knowledge by our current generation of scientists, we are implicitly valuing the contributions from the past. We should value the work of the past, but as a foundation to build upon, not an idol to worship. The approach diminishes the value of work today, and the careers of current scientists are reduced to those of caretakers. It makes today’s scientists mindless high priests of the past. We end up asking very little of them in terms of challenge and accomplishment, and end up harming the Nation’s future in the process. Hence the “makes me want to puke” comment.

So why the hell does this messaging exist?

It is a rather feeble attempt to justify the work that already exists. It is feeble because it completely misrepresents the work and creates a harmful narrative. This messaging exists because people simply don’t understand what “code” is. They think of code like a bridge that, once built, simply does the job for a very long time. Code is nothing at all like a bridge or a building, and trying to manage it in the manner being promoted is dangerous, destructive and borders on incompetence. It is certainly an attitude born of complete ignorance.

Cooking requires confident guesswork and improvisation– experimentation and substitution, dealing with failure and uncertainty in a creative way

― Paul Theroux

A much better analogy is cooking. Code is simply the ingredients used to cook a dish. Good ingredients are essential, but insufficient to assure you get a great meal. Moreover, food spoils and needs to be thrown out, replaced, or better ingredients chosen. Likewise, parts of the code are in constant need of replacement or repair, or of simply being thrown out. The computing hardware is much like the cooking hardware, the stove top, oven, food processors, etc., which are important to the process but never determine the quality of the meal. They may determine the ease of preparation of the meal, but almost never the actual taste and flavor. In the kitchen nothing is more important than the chef. Nothing. A talented chef can turn ordinary ingredients into an extraordinary culinary experience. Give that same talented chef great ingredients, and the resulting dining experience could be absolutely transcendent.

Our scientists are like the chefs, and their talents determine the value of the code and its use. Without their talents the same code can be rendered utterly ordinary. The code is merely a tool that translates simple instructions into something the computer can understand. In skilled hands it can render the unsolvable solvable and unveil an understanding of reality invisible to experiment. In unskilled hands, it can use a lot of electricity and fool the masses. With our current attitude toward computers we are turning Labs once stocked with outstanding ingredients and masterful chefs into fast-food fry cooks. The “preserve the code base” narrative isn’t just wrong; it is downright dangerous and destructive.

…no one is born a great cook, one learns by doing.

― Julia Child

This attitude now governs modeling and simulation in support of our Nation’s nuclear weapons, and that should worry everyone a lot. Rather than talk about investing in knowledge, talent and people, we are investing our energy in keeping old, stale code alive and well as our computers change. Of course we are evolving our computers in utterly idiotic ways that do little or nothing to help us solve problems that we really care about. Instead we are designing and evolving our computers to solve problems that only matter for press releases. More and more, the computers that make for good press releases are the opposite for real problems; the new computers just suck so much harder at solving real problems.

Computing at the high end of modeling and simulation is undergoing great change in a largely futile endeavor to squeeze out what little life Moore’s law has left in it. The truth is that Moore’s law for all intents and purposes died a while ago, at least for real codes solving real problems. Moore’s law only lives on in its zombie guise of a benchmark involving dense linear algebra that has no relevance to the codes we actually buy computers for. So I am right in the middle of a giant bait-and-switch scheme that depends on those cutting the checks for the computers being even more naive and ignorant than those defining the plan for the future of computing.

At the middle of this great swindle is code. What is code, or more properly, code for solving models used to simulate the real world? The simplest way to think about code is to view it as a recipe that a “master” chef created to produce a model of reality. A more subtle way to think about a code is as a record of intellectual labor made toward defining and solving models proposed to simulate reality. If we dig deeper, we see that code is a way of taking a model of reality and solving it generally, without making gross assumptions to render it analytically tractable. The model is only as good as the intellect and knowledge base used to construct it, in conjunction with the intellect and knowledge used to solve the problem. At the deepest level the code is only as good as the people using it. By not investing in the quality of our scientists we are systematically undermining the value of the code. For the scientists to be good, their talent must be developed through engaging in the solution of difficult problems.

If we stay superficial and dispense with any and all sophistication, then we get rid of the talented people and we can get by with trained monkeys. If you don’t understand what is happening in the code, it just seems like magic. With increasing regularity the people running these codes treat the codes like magical recipes for simulating “reality”. As long as the reality being simulated isn’t actually being examined experimentally, the magic works. If you have magic recipes, you don’t change them because you don’t understand them. This is what we are creating at the labs today, trained monkeys using magical recipes to simulate reality.

In a lot of ways the current situation is quintessentially modern and exceptionally American in tenor. We have massive computers purchased at great cost running magical codes written by long-dead (or just retired) wizards and maintained by a well-paid, well-educated cadre of peasants. Behind these two god-awful reasons to spend money is a devaluing of the people working at the Labs. Development of talent and the creation of intellectual capital by that talent are completely absent from the plan. It creates a working environment that is completely backward looking and devoid of intellectual ownership. It is draining the Labs of quality and undermining one of the great engines of innovation and ingenuity for the Nation and World.

The computers aren’t even built to run the magical code, but rather to run a benchmark that only produces results for press releases. Running the magical code is the biggest challenge for the serfs because the computers are so ill-suited to their “true” purpose. The serfs are never given the license or ability to learn enough to create their own magic; all their efforts go into simply maintaining the magic of a bygone era.

What could we accomplish if we knew we could not fail?

― Eleanor Roosevelt

What would be better?

One option would be to stop buying these computers whose sole purpose is to create a splashy press release and then struggle forever to run magical codes. Instead we should build computers that are optimized, within constraints, to solve the problems the purchasing agencies actually need solved. We could work to push back against the ever-steeper decline in realized performance. Maybe we should actually design, build and buy computers we actually want to use. What a novel concept: buy a computer you actually want to use instead of one you are forced to use!

That, my friends, is the simplest thing to achieve. The much more difficult thing is overcoming the magical code problem. The first step in overcoming magical code is to show the magic for what it is: the product of superior intellect and clever problem solving, nothing more. We have to allow ourselves to create new solutions to new problems, grounded by the past but never chained to it. The codes we are working with are solving the problems posed in the past, and the problems of today are different.

One of the biggest issues with the magical codes is their masterful solution of the problems they were created to solve. The problems they are solving are not the problems we need to solve today. The questions driving technological decision making today are different than yesterday. Even if there is a good reason to preserve the code base (there isn’t), the code base is solving the wrong problems; it is solving yesterday’s problems (really yesteryear’s or yester-decade’s problems).

All of this is still avoiding the impact of solution algorithms on the matter of efficiency. As others and I have written, algorithms can do far more than computers to improve the efficiency of solution. Current algorithms are an important part of the magical recipes in current codes. We generally are not doing anything to improve the algorithmic performance in our codes. We simply push the existing algorithms along into the future.

This is another form of the intellectual product (or lack thereof) that the current preserve-the-code-base attitude favors. We completely avoid the possibility of doing anything better than we did in the past algorithmically. Historically, improvements in algorithms provided vastly greater advances in capability than Moore’s law did. I say historically because these advances largely occurred prior to the turn of the century (i.e., 2000). In the 15 years since, progress due to algorithmic improvements has ground to a virtual halt.
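To put rough numbers on that historical claim, here is an illustrative back-of-the-envelope sketch (not a measurement) using the commonly quoted asymptotic operation counts for solving a model 2-D Poisson problem with successive generations of algorithms; the grid size is arbitrary and the constants are ignored, so treat the ratios as order-of-magnitude statements.

```python
# Illustrative only: commonly quoted asymptotic operation counts for solving a
# model Poisson problem on an n x n grid (N = n^2 unknowns). Constants are
# ignored, so the ratios below are order-of-magnitude statements, not timings.
import math

n = 1024
N = float(n * n)  # about a million unknowns

solvers = {
    "banded LU":                N**2,
    "Jacobi / Gauss-Seidel":    N**2,
    "SOR (optimal relaxation)": N**1.5,
    "conjugate gradient":       N**1.5,
    "FFT-based fast solver":    N * math.log2(N),
    "multigrid":                N,
}

worst = max(solvers.values())
print(f"N = {int(N):,} unknowns")
for name, ops in sorted(solvers.items(), key=lambda kv: -kv[1]):
    print(f"  {name:26s} ~{ops:9.2e} ops  ({worst / ops:12,.0f}x fewer operations than the slowest)")

# For comparison, Moore's-law doubling every ~18 months over ~30 years is about
# 2^20, i.e. roughly a factor of a million -- the same order as the algorithmic
# gain from banded LU to multigrid at this problem size.
print(f"hardware factor from ~30 years of 18-month doublings: {2**20:,}")
```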

Why has algorithmic progress stopped?

All the energy in scientific computing has gone into implementing existing algorithms on the new generation of genuinely awful computers. Instead of investing in a proven intellectual path for progress that has historically kept pace with computer improvements, we have shifted virtually all effort into computers and their direct consequences. Algorithmic research is risky and produces many failures. It takes a great deal of tolerance for failure to invest sufficiently to get the big payoff.

Our funding agencies have almost no tolerance for failure, and without the tolerance for failure the huge successes are impossible. The result is a systematic lack of progress, and complete reliance on computer hardware for improvement. This is a path that will ultimately and undeniably lead to a complete dead end. In the process of reaching this dead end we will sacrifice an entire generation of scientists to this obviously sub-optimal and stupid approach.

Ultimately, the irritation over the current path is primarily directed at the horrible waste of opportunity it represents. There is so much important work that needs to be done to improve modeling and simulation’s quality and impact. At this point in time computing hardware might be the least important aspect to work on; instead it is the focal point.

Much greater benefits could be realized through developing better models, extending physical theories, and making fundamental improvements in algorithms. Each of these areas is risky and difficult research, but offers massive payoffs with each successful breakthrough. The rub is that breakthroughs are not guaranteed, but rather require an element of faith in the ability of human intellect to succeed. Instead we are placing our resources behind an increasingly pathetic status quo approach. Part of the reason for the continuation of the approach is merely the desire of current leadership to take a virtual victory lap by falsely claiming the success of the approach they are continuing.

In a variety of fields the key aspects of modeling and simulation that evade our grasp today are complex, chaotic phenomena that are not well understood. In fluid dynamics turbulence continues to be vexing. In solid mechanics fracture and failure play a similar role in limiting progress. Both areas are in dire need of fresh ideas that might break the collective failures down and allow progress. In neither area will massive computing provide the hammer blow that allows progress. Only by harnessing human creativity and ingenuity will these areas progress.

In many ways I believe that one of the key aspects limiting progress is our intrinsic devotion to deterministic models. Most of our limiting problems lend themselves more naturally to non-deterministic models. These all require new mathematics, new methods, and new algorithms to unleash their power. Faster computers are always useful, but without the new ideas these new, faster computers will simply waste resources. The issue isn’t that our emphasis is necessarily unambiguously bad, but rather grossly imbalanced and out of step with where our priorities should be.

Mediocrity knows nothing higher than itself; but talent instantly recognizes genius.

― Arthur Conan Doyle

At the core of our current problems is human talent. Once we found our greatest strength in the talent of the scientists and engineers working at the Labs. Today we treat things (code, computers, and machines) as the strength while starving the pipeline of human talent. When the topic of talent comes up, they speak about hiring the best and brightest while paying them at the “market” rate. More damning is how the talent is treated when it hires on. The preserve-the-code mantra speaks to a systematic failure to develop, nurture and utilize talent. We hire people with potential and then systematically squander it through inept management and vacant leadership. Our efforts in theory and experiment are equally devoid of excellence, utility and vision.

Once we developed talent by providing tremendously important problems to solve and turning excellent people loose to solve them in an environment that encouraged risky, innovative solutions. In this way potentially talented people became truly talented and accomplished, ready to slay the next dragon using the experience of previously slain beasts. Today we don’t even let them see an actual dragon. Our staff never realize any of their potential because they are simply curating the accomplishments of the past. The code we are preserving is one of these artifacts we are guarding. This approach is steadily strangling the future.

Human resources are like natural resources; they’re often buried deep. You have to go looking for them, they’re not just lying around on the surface. You have to create the circumstances where they show themselves.

― Ken Robinson

We want no risk and complete safety; we get mediocrity and decline

23 Friday Oct 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

 

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

With each passing day I am more dismayed by people’s tendency to excuse mediocrity in the work they do. Instead of taking pride in our work, we are increasingly acting like mindless serfs toiling on land owned by aristocratic overlords. The refrain so often cedes the authority to define the quality of your work to others, with statements like the following:

“It was what the customer wanted.”

“It met all requirements.”

“We gave them what they paid for.”

All of these statements end up paving the road for second-rate work and remove all responsibility from ourselves for the state of affairs. We are responsible because we don’t allow ourselves to take any real risks.

They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.

― Benjamin Franklin

The whole risk-benefit equation is totally out of whack for society as a whole. The issue is massive across the whole of the Western world, but nowhere is it more in play than the United States. Acronyms like TSA and NSA come immediately to mind. We have traded a massive amount of time, effort and freedom for a modest to fleeting amount of added security. It is unequivocal that Americans have never been safer and more secure than now. Americans have also never been more fearful. Our fears have been amplified for political gain and focused on things that barely qualify as threats. Meanwhile we ignore real dangers and threats because they are relatively obscure and largely hidden from plain view (think income inequality, climate change, sugar, sedentary lifestyle, guns, …). Among the costs of this focus on removing the risk of bad things happening is the chance to do anything unique and wonderful in our work.

The core issue that defines the collapse of quality is the desire for absolute guarantees that nothing bad will ever happen. There are no guarantees in life. This hasn’t stopped us from trying to remove failure from the lexicon. What this desire creates is the destruction of progress and hope. We end up being so cautious and risk averse that we have no adventure, and never produce anything unintended. Unintended can be bad, so we avoid it; but unintended can be wonderful and be a discovery. We avoid that too.

I am reminded of chocolate chip cookies as an apt analogy for what we have created. If I go to the store and buy a package of Nestle Toll House morsels and follow the instructions on the back, I will produce a perfectly edible, delicious cookie. These cookies are quite serviceable, utterly mediocre and totally uninspired. Our National Labs are well on their way to being the Toll House cookies of science.

I can make some really awesome chocolate chip cookies using a recipe that has taken 25 years to perfect. Along the way I have made some batches of truly horrific cookies while conducting “experiments” with new wrinkles on the recipe. If I had never made these horrible batches of cookies, the recipe I use today would be no better than the Toll House one I started with. The failures are completely essential for the success in the long run. Sometimes I make a change that is incredible and a keeper, and sometimes it destroys or undermines the taste. The point is that I have to accept the chance that any given batch of cookies will be awful if I expect to create a recipe that is in any way remarkable or unique.

The process that I’ve used to make really wonderful cookies is the same one science needs to make progress. It is a process that cannot be tolerated today. Today the failures are simply unacceptable. Saying that you cannot fail is equivalent to saying that you cannot discover anything and cannot learn. This is exactly what we are getting. We have destroyed discovery, and we have destroyed the creation of deep knowledge and deep learning in the process.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

What’s the Point of All This Stuff?

16 Friday Oct 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

I find my life is a lot easier the lower I keep my expectations.

― Bill Watterson

I’ve been troubled a lot recently by the thought that the things I’m working on are not terribly important, or worse yet not the right things to be focusing on. It’s not a very quieting thought to think that your life’s work is nigh on useless. So what the hell do I do? Something else?

My job is also bound to numerous personal responsibilities and the fates of my loved ones. I am left with a set of really horrible quandaries about my professional life. I can’t just pick up and leave, or at least do that and respect myself. My job is really well paying, but every day it becomes more of a job. The worst thing is the overwhelming lack of intellectual honesty associated with the importance of the work I do. I’m working on projects with goals that are at odds with progress. We are spending careers and lives working on things that will not improve science and engineering, much less positively affect society as a whole.

I really believe that computational modeling is a boon to society and should be transformative if used properly. It needs capable computing to work well. All of this sounds great, but the efforts in this direction are slowly diverging from a path of success. In no aspect of the overall effort is this truer than high performance or scientific computing. We are on a path to squandering a massive amount of effort to achieve almost nothing of utility for solving actual problems in the so-called exascale initiative. The only exascale that will actually be achieved is on a meaningless benchmark, and the actual gains in computational modeling performance are fleeting and modest to the point of embarrassment. It is a marketing ploy masquerading as strategy, and it amounts to professional malpractice, or simply mass incompetence.

Strong language, you might say, but it’s at the core of much of my personal disquiet. I’m honest almost to a fault, and the level of intellectual dishonesty in my work is implicitly at odds with my most deeply held values. For the most part the impact is some degree of decline in personal morale and a definite lack of inspiration and passion for work. I’ve been fortunate to live large portions of my life feeling deeply passionate and inspired by my work, but those positive feelings have waned considerably.

I’ve written about the shortcomings of our path in high performance computing a good bit. I discussed a fair bit the nature of modeling and simulation activities and their relationship to reality. For modeling and simulation to benefit society things in the real world need to be impacted by it. Ultimately, the current focus is on the parts of computing furthest from reality. Every bit of evidence is that the effort should be focused completely differently.

The only real mistake is the one from which we learn nothing.

― Henry Ford

The current program is recreating the mistakes made twenty years ago in high performance computing in the original ASCI program. Perhaps the greatest sin is not learning anything from the mistakes made then. We have had twenty years of history and lessons learned that have been conspicuously ignored in the present. This isn’t simply bad or stupid; it reflects the sins of being both unthinking and incompetent. We are going to waste entire careers and huge amounts of money in service of intellectually shallow efforts that could have been avoided by the slightest attention to history. To call what we do science when we haven’t bothered to learn the obvious lessons right in front of us is simply malpractice of the worst possible sort.

All of this is really servicing our aversion to risk. Real discovery and advances in science require risk, require failure and cannot be managed like building a bridge. It is an inherently error-prone and failure-driven exercise that requires leaps of faith in the ability of humanity to overcome the unknown. We must take risks and the bigger the risk, the bigger the potential reward. If we are unwilling to take risks and perhaps fail, we will achieve nothing. The current efforts are constructed to avoid failure at all cost. We will spend a lot to achieve very little.

In a deep way it makes a lot of sense to clearly state what the point of all this work is. Is it “pure” research where we simply look to expand knowledge? Are we trying to better deliver a certain product? How grounded in reality are the discoveries needed to advance? Is the whole thing focused on advancing the economics of computing? Are we powering other scientific efforts and viewing computing as an engine of discovery?

None of these questions really matters, though; the current direction fails in every respect. I would answer that the current high-performance-computing trajectory is not focused on success in answering any of these questions except for a near-term giveaway to the computing industry (like so much of government spending, it is simply largess to the rich). Given the size of the computing industry, this emphasis is somewhere between silly and moronic. If we are interested in modeling and simulation for any purpose of scientific and engineering performance, the current trajectory is woefully sub-optimal. We are not working on the aspects of computing that impact reality in any focused manner. The only outcome of the current trajectory is computers that are enormously wasteful with electricity and stupendously hard to use.

By seeking and blundering we learn.

― Johann Wolfgang von Goethe

We could do so much better with a little bit of thought, and probably spend far less money, or spend the same amount of money more intelligently. We need substantial work on the models we solve. The models we are working on today are largely identical to those we solved twenty years ago, but the questions being asked in engineering and science are far different. We need new models to answer these questions. We need to focus on algorithms for solving existing and new models. These algorithms are as effective as, or more effective than, computing power in improving the efficiency of solution. Despite this, the amount of effort going into improving algorithms is trivial and fleeting. Instead we are focused on a bunch of activities that have almost no impact on the efficiency or quality of modeling and simulation.

The efforts today simply focus on computing power during a time when the increases in computing power are becoming increasingly challenged by the laws of physics. In a very real sense the community is addicted to Moore’s law, and the demise of Moore’s law threatens the risk-averse manner in which progress has been easily achieved for twenty years. We need to return to the high-risk, high-payoff research that once powered modeling and simulation but that the community eschews today. We are managed as though we are building bridges, and science is not like building a bridge at all. The management style for science today is so completely risk averse that it systematically undermines the very engine that powers discovery (risk and failure).

Models will not generally get better with greater computing resources. If the model is wrong, there is no amount of computing resources that can fix it. It will simply converge to a better wrong answer. If the model is answering the wrong question, the faster computer cannot force it to answer the right one. The only thing that can improve matters is a better or more appropriate model. Today, working on better or more appropriate models receives little attention or resources; instead we pour our efforts into faster computers. These faster computers are solving yesterday’s models faster and to higher-fidelity wrong answers than ever before.
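As a toy illustration of this point (everything here is hypothetical), consider a sketch in which the “truth” contains a quadratic loss term that the model omits. Refining the time step drives the numerical error toward zero, but the error against the truth stalls at the model error, no matter how much resolution is purchased.

```python
# A toy sketch of "converging to a better wrong answer". The hypothetical truth
# is y' = -k1*y - k2*y**2; the (wrong) model drops the quadratic term. As the
# time step shrinks, the numerical error vanishes but the error against the
# truth plateaus at the model error -- more computing cannot fix the model.
import numpy as np

def truth(t, y0=1.0, k1=1.0, k2=0.5):
    # Exact solution of y' = -k1*y - k2*y^2 (logistic-type decay).
    return k1 * y0 / ((k1 + k2 * y0) * np.exp(k1 * t) - k2 * y0)

def wrong_model_forward_euler(dt, t_end=1.0, y0=1.0, k1=1.0):
    # The wrong model y' = -k1*y, integrated with forward Euler.
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += dt * (-k1 * y)
        t += dt
    return y

t_end = 1.0
exact_wrong_model = np.exp(-1.0 * t_end)       # exact answer of the wrong model
for dt in [0.1, 0.05, 0.025, 0.0125, 0.00625]:
    y = wrong_model_forward_euler(dt, t_end)
    num_err = abs(y - exact_wrong_model)       # discretization error: goes to 0
    true_err = abs(y - truth(t_end))           # total error: stalls near 0.09
    print(f"dt = {dt:7.5f}   numerical error = {num_err:.2e}   error vs truth = {true_err:.2e}")
```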

Most of our codes and efforts on the next generation of computers are simply rehashed versions of the codes of yesterday, including the models solved. Despite overwhelming evidence of these models’ intrinsic shortcomings, we continue to pour effort into solving these wrong models faster. In addition, the models being solved are simply ill-suited to the societal questions being addressed with modeling and simulation. A case in point is the simulation of extreme events (think 100-year floods, failures of engineered products, or economic catastrophes). If the model is geared to solving the average behavior of a system, these extreme events cannot be directly simulated, only inferred empirically. New models are necessary, and they need new math and new algorithms to solve them. We are failing to do this in a wholesale way.
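A hypothetical sketch of the extreme-event point: draw annual “flood” peaks from a heavy-tailed process standing in for reality, fit a Gaussian surrogate to the same mean and variance (the average-behavior model), and compare the 1-in-100 event each implies. The distributions and parameters are invented purely for illustration.

```python
# Hypothetical illustration: a model tuned to average behavior (mean/variance)
# badly underestimates the tail of a heavy-tailed process. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
peaks = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)  # "true" annual peaks

mu, sd = peaks.mean(), peaks.std()
empirical_p99 = np.quantile(peaks, 0.99)   # the 1-in-100 event seen in the data
gaussian_p99 = mu + 2.326 * sd             # same event from a mean/variance fit
                                           # (2.326 is the normal 99th-percentile z)

print(f"mean = {mu:.2f}, std = {sd:.2f}")
print(f"1-in-100 event, empirical tail : {empirical_p99:.2f}")
print(f"1-in-100 event, Gaussian model : {gaussian_p99:.2f}  (a serious underestimate)")
```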

Next in the hierarchy of activities in modeling and simulation are algorithms (and methods). These algorithms allow the solution of the aforementioned models. The algorithm defines the efficiency and character of the solution to models. These algorithms are the recipes for solution that computer science supports and the computers are built to work on. Despite their centrality and importance to the entire enterprise, the development of better algorithms receives almost no support.

A better algorithm will positively influence the solution of models on every single computer that employs it. Any algorithmic support today is oriented toward the very largest and most exotic computers, with a focus on parallelism and efficiency of implementation. Issues such as accuracy and operational efficiency are simply not a focus. A large part of the reason for the current focus is the emphasis on, and difficulty of, simply moving current algorithms to the exotic and difficult-to-use computing platforms being devised today. This emphasis is squeezing everything else out of existence, and reflects a misguided and intellectually empty view of what would make a difference.

There. I feel a little better having vented, but only a little.

Don’t mistake activity with achievement.

― John Wooden

The role of Frameworks like PCMM and PIRT in Credibility

09 Friday Oct 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson

As tools, the frameworks of PIRT and PCMM are only as good as how they are used to support high quality work. Using these tools will not necessarily improve your credibility, but rather help you assess it holistically. Ultimately the credibility of your modeling & simulation capability is driven by the honesty and integrity of your assessment. It is easy to discuss what you do well and where you have mastery over a topic. The real key to assessment is the willingness to articulate where an effort is weak and where the foundational knowledge in a field is the limiting factor in your capacity for solving problems. The goal in credibility assessment is not a demonstration of mastery over a topic, but rather a demonstration of the actual state of affairs, so that decisions can be made with full knowledge of the weight to place on modeling & simulation and the risks inherent in it.

Quality is highly subjective and relative; the problem being solved and the modeling & simulation capabilities of the relevant field or fields all come into consideration. As such, the frameworks serve to provide a basic rubric and a commonly consistent foundation for the consideration of modeling & simulation quality. In this sense PIRT and PCMM are blueprints for numerical modeling. The craft of executing the modeling & simulation within the confines of the resources available is the work of the scientists and engineers. These resources include the ability to muster effort toward completing work, but also the knowledge and capability base that you can draw upon. The goal is to provide an honest and holistic approach to guiding the assessment of modeling & simulation quality.

In the assessment of quality the most important aspect to get right is honesty about the limitations of a modeling & simulation capability. This may be the single most difficult thing to accomplish. There are significant psychological and social factors that lead to a lack of honesty in evaluation and assessment. No framework or system can completely overcome such tendencies, but one might act as a hedge against the tendency to overlook critical details that do not reflect well on the effort. The framework assures that each important category is addressed. The ultimate test of the overall integrity and honesty of an assessment of modeling & simulation credibility depends upon deeper technical knowledge than any framework can capture.

Quite often an assessment will avoid dealing with systematic problems for a given capability that have not been solved sufficiently. Several examples are useful in demonstrating where this can manifest itself. In fluid dynamics, turbulence remains a largely unsolved problem. Turbulence has intrinsic and irreducible uncertainty associated with it, and no single model or modeling approach is adequate to elucidate the important details. In Lagrangian solid mechanics the technique of element death is pervasively utilized for highly strained flows where fracture and failure occur. It is essential for many simulations and often renders a simulation non-convergent under mesh refinement. In both cases the communities dependent upon modeling & simulation with these characteristics tend to under-emphasize the systematic issues involved. This produces systematically higher confidence and credibility than is technically justifiable. The general principle is to be intrinsically wary of unsolved problems in any given technical discipline.

Together the PIRT and PCMM, adapted and applied to any modeling & simulation activity, form part of the delivery of defined credibility for the effort. The PIRT gives context to the modeling efforts and the level of importance and knowledge of each part of the work. It is a structured manner for the experts in a given field to weigh in on the basis for model construction. The actual activities should strongly reflect the assessed importance and knowledge basis captured in the PIRT. Similarly, the PCMM can be used for a structured assessment of the specific aspects of the modeling & simulation.

The degree of foundational work providing the basis for confidence in the work is spelled out in the PCMM categories. Included among these are the major areas of emphasis, some of which may be drawn from outside the specific effort. Code verification is an exemplar: its presence and quality provide a distinct starting point for estimating the numerical error of the specific modeling & simulation activity being assessed. Each of the assessed categories forms a starting point for the specific credibility assessment.

One concrete way to facilitate the delivery of results is to consider the uncertainty budget for a given modeling activity. Here the delivery of results using PIRT and PCMM is enabled by treating them as resource guides for the concrete assessment of an analysis and its credibility. This credibility is quantitatively defined by the uncertainty and the intended application’s capability to absorb such uncertainty for the sort of questions to be answered or decisions to be made. If the application is relatively immune to uncertainty or only needs a qualitative assessment, then large uncertainties are not worrisome. If, on the other hand, an application is operating under tight constraints associated with other considerations (sometimes called a design margin), then the uncertainties need to be carefully considered in making any decisions based on modeling & simulation.
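One common way to make that comparison concrete is a margin-to-uncertainty ratio. The sketch below, with invented numbers, shows the kind of check implied here.

```python
# A minimal sketch of weighing a design margin against assessed uncertainty,
# often summarized as a margin-to-uncertainty ratio. The prediction, requirement
# and uncertainty values are invented for illustration.
def margin_to_uncertainty(prediction, requirement, uncertainty):
    """Return the margin M = requirement - prediction and the ratio M/U."""
    margin = requirement - prediction
    return margin, margin / uncertainty

# Hypothetical analysis: predicted peak temperature of 740 K against an 800 K
# limit, with a total assessed uncertainty of +/- 25 K.
M, ratio = margin_to_uncertainty(prediction=740.0, requirement=800.0, uncertainty=25.0)
verdict = "margin comfortably exceeds uncertainty" if ratio > 1.0 else "uncertainty swamps the margin"
print(f"margin = {M:.0f} K, M/U = {ratio:.1f} -> {verdict}")
```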

This gets to the topic of how modeling & simulation are being used. Traditionally, modeling & simulation goes through two distinct phases of use. The first phase is dominated by “what if” modeling efforts where the results are largely qualitative and exploratory in nature. The impact of decisions or options is considered on a qualitative basis and guides decisions in a largely subjective way. Here the standards of quality tend to focus on completeness and high-level issues. As modeling & simulation proves its worth for these sorts of studies, it begins to have greater quantitative demands placed on it. This forms a transition to a more demanding case for modeling & simulation where a design or analysis decision is made. In this case the standards for uncertainty become far more taxing. This is the place where these frameworks become vital tools in organizing and managing the assessment of quality.

This is not to say that these tools cannot assist in earlier uses of modeling & simulation. In particular the PIRT can be a great tool to engage with in determining modeling requirements for an effort. Similarly, the PCMM can be used to judge the appropriate level of formality and completeness for an effort to engage with. Nonetheless these frameworks are far more important and impactful when utilized for more mature, “engineering” focused modeling & simulation efforts.

Any high-level integrated view of credibility is built upon the foundation of the issues exposed in PIRT and PCMM. The problem that often arises in a complex modeling & simulation activity is managing the complexity of the overall activity. Invariably, gaps, missing efforts and oversights will creep into the execution of the work. The basic modeling activity is informed by the PIRT’s structure. Are there important parts of the model that are missing, or poorly grounded in available knowledge? From the PCMM, are the important parts of the model tested adequately? The PIRT becomes fuel for assessing the quality of the validation and for planning an appropriate level of activity around important modeling details. Questions regarding the experimental support for the modeling can be explored in a structured and complete manner. While the credibility is not built on the PCMM and PIRT, the ability to manage its assessment is enabled by the way they master the complexity of modeling & simulation.

In getting to a quantitative basis for the assessment of credibility, defining the uncertainty budget for a modeling & simulation activity can be enlightening. While the PCMM and PIRT provide a broadly encompassing, qualitative view of modeling & simulation quality, the uncertainty budget is ultimately a quantitative assessment of quality. Forcing the production of numerical values for the quality is immensely useful and provides important focus. For this to be a useful and powerful tool, the budget must be determined with well-defined principles and disciplined decision-making.

One of the key principles underlying a successful uncertainty budget is the determination of unambiguous categories for assessment. Each of these broad categories can be populated with sub-categories, and finer and finer categorization. Once an effort has committed to a certain level of granularity in defining uncertainty, it is essential that the uncertainty be assessed broadly and holistically. In other words, it is important, if not essential, that none of the categories be ignored.

This can be extremely difficult because some areas of uncertainty are truly uncertain: no information may exist to enable a definitive estimate. This is the core difficulty of uncertainty estimation: the unknown value of, and basis for, some quantitative uncertainties. Generally speaking, the unknown or poorly known uncertainties are more important to assess than the well-known ones. In practice the opposite happens: when something is poorly known, the value adopted in the assessment is often implicitly “zero”. It is implicit because the uncertainty is simply ignored; it is not mentioned, or assigned any value. This is dangerous. Again, the availability of the frameworks comes in handy to help the assessment identify major areas of effort.

A reasonable decomposition of the sources of uncertainty can fairly generically be defined at a high level: experimental, modeling and numerical sources. We would suggest that each of these broad areas be populated with a finite uncertainty, and that each of the finite values assigned be supported by well-defined technical arguments. Of course, each of these high-level areas will have a multitude of finer-grained components describing the sources of uncertainty, along with routes toward their quantitative assessment. For example, experimental uncertainty has two major components, observational uncertainty and natural variability. Each of these categories can in turn be analyzed via a host of additional detailed aspects. Numerical uncertainty lends itself to many sub-categories: discretization, linear, nonlinear, parallel consistency, and so on.
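A minimal sketch of such a budget, using the decomposition above with placeholder values and a root-sum-square roll-up that assumes the contributions are independent; both the numbers and the independence assumption would need explicit justification in a real assessment.

```python
# Sketch of an uncertainty budget: every category carries an explicit, non-zero
# value and a stated basis, and nothing is silently treated as zero. Values are
# placeholders; the root-sum-square roll-up assumes independent contributions.
import math

budget = {
    "experimental": {
        "observational uncertainty": (2.0, "instrument calibration report"),
        "natural variability":       (3.5, "repeat-test scatter"),
    },
    "modeling": {
        "model-form uncertainty":    (5.0, "expert judgment, stated explicitly"),
        "parameter uncertainty":     (2.5, "calibration study spread"),
    },
    "numerical": {
        "discretization":            (1.5, "mesh-refinement study"),
        "nonlinear/iterative":       (0.5, "tightened-tolerance study"),
    },
}

def rss(values):
    return math.sqrt(sum(v * v for v in values))

subtotals = {area: rss(v for v, _ in items.values()) for area, items in budget.items()}
total = rss(subtotals.values())

for area, items in budget.items():
    for name, (value, basis) in items.items():
        print(f"{area:12s} | {name:26s} | {value:4.1f} | basis: {basis}")
print("area subtotals:", {k: round(v, 2) for k, v in subtotals.items()})
print("total uncertainty (RSS):", round(total, 2))
```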

The key is to provide a quantitative assessment for each category at a high level with a non-zero value for uncertainty and a well-defined technical basis. We note that the technical basis could very well be “expert” judgment as long as this is explicitly defined. This gets to the core of the matter; the assessments should always be explicit and not leave essential content for implicit interpretation. A successful uncertainty budget would define the major sources of uncertainty for all three areas along with a quantitative value for each. In the case where the technical basis for the assessment is weak or non-existent, the uncertainty should be necessarily large to reflect the lack of technical basis. Like statistical sampling, the benefit to doing more work is a reduction in the magnitude of the uncertainty associated with the quantity. Enforcing this principle means that follow-on work that produces larger uncertainties requires the admission that earlier uncertainties were under-estimated. The assessment process and uncertainty budget are inherently learning opportunities for the overall effort. The assessment is simply an encapsulation of the current state of knowledge and understanding.

Too often in modeling & simulation, efforts receive a benefit by ignoring important sources of uncertainty. By doing nothing to assess uncertainty, they report no uncertainty associated with the quantity. Insult is added to this injury when the effort realizes that doing work to assess the uncertainty can only increase its value. This sort of dynamic becomes self-sustaining, and more knowledge and information result in more uncertainty. This is a common and often seen impact of uncertainty assessment. Unfortunately it is a pathological one. The reality is that this indicts the earlier assessment of uncertainty: the estimate that was made earlier was actually too small. A vital principle is that more work in assessing uncertainty should always reduce uncertainty. If this does not happen, the previous assessment of uncertainty was too small. This is an all-too-common occurrence when a modeling & simulation effort is attempting to convey too large a sense of confidence in its predictive capability. The value of assessed uncertainty should converge to the irreducible core of uncertainty associated with the true lack of knowledge or intrinsic variability of the thing being modeled. In many cases the uncertainty interacts with an important design or analysis decision where a performance margin needs to be balanced against the modeling uncertainty.

An ironic aspect of uncertainty estimation is the tendency to estimate large uncertainties where expertise and knowledge are strong, while under-estimating uncertainty in areas where expertise is weak. This is often seen with numerical error. A general trend in modeling & simulation is the tendency to treat computer codes as black boxes. As such, the level of expertise in the numerical methods used in modeling & simulation can be quite low. This has the knock-on effect of lowering the estimate of numerical uncertainty and of defaulting to the standard methodology for solving the equations numerically. Quite often the numerical error is completely ignored in analysis. In many cases the discretization error should dominate the uncertainty, but aspects of the solution methodology can color this assessment. Key among these issues is the nonlinear error, which can compete with the discretization error if care is not taken.

This problem is compounded by a lack of knowledge associated with the explicit details of the numerical algorithm and the aspects of the solution that can lead to issues. In this case the PCMM can assist greatly in deducing these structural problems. The PCMM provides several key points that allow the work to proceed with greater degrees of transparency with regard to the numerical solution. The code verification category provides a connection to the basis for confidence in any numerical method. Are the basic features and aspects of the numerical solution being adequately tested? The solution verification category asks whether the basic error analysis and uncertainty estimation is being done. Again the frameworks encourage a holistic and complete assessment of important details.
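The solution verification step mentioned here is commonly carried out with Richardson extrapolation over a sequence of systematically refined grids. The sketch below applies the standard observed-order and error-estimate formulas to made-up grid results.

```python
# A minimal solution-verification sketch: estimate the observed order of
# convergence and the fine-grid discretization error from three systematically
# refined grids via Richardson extrapolation. The three "results" are made up;
# in practice they would be an integral quantity pulled from the simulation.
import math

def richardson(f_coarse, f_medium, f_fine, r):
    """Observed order p, extrapolated value, and fine-grid error estimate
    for a constant grid refinement ratio r."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_extrap, abs(f_fine - f_extrap)

# Hypothetical drag-coefficient results on grids refined by a factor of 2.
p, f_star, err = richardson(f_coarse=0.352, f_medium=0.328, f_fine=0.321, r=2.0)
print(f"observed order of convergence p = {p:.2f}")
print(f"extrapolated value = {f_star:.4f}, estimated fine-grid error = {err:.4f}")
# A careful assessment would also compare p with the method's formal order and
# apply a safety factor (as in the grid convergence index) before entering the
# result as the numerical line of the uncertainty budget.
```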

The final aspect to highlight in the definition of credibility is the need for honesty and transparency in the assessment. Too often assessments of modeling & simulation lack the fortitude to engage in fundamental honesty regarding the limitations of the technology and science. If the effort is truly interested in not exposing its flaws, no framework can help. Much of the key value in the assessment is defining where effort can be placed to improve the modeling & simulation. It should help to identify the areas that drive the quality of the current capability.

If the effort is interested in a complete and holistic assessment of its credibility, the frameworks can be invaluable. Their key value is in making certain that important details and areas of focus are not over- or under-valued in the assessment. The areas of strong technical expertise are often focused upon, while areas of weakness can be ignored. This can produce systematic weaknesses in the assessment that may produce wrong conclusions. More perniciously, the assessment can, willfully or not, ignore systematic shortcomings in a modeling & simulation capability. This can lead to a deep under-estimate of uncertainty while significantly over-estimating confidence and credibility.

For modeling & simulation efforts properly focused on an honest and high-integrity assessment of their capability, the frameworks of PCMM and PIRT can be an invaluable aid. The assessment can be more focused and complete than it would be in their absence. The principal good of the frameworks is to make the assessment explicit and intentional, and to avoid unintentional oversights. Their use can go a long way toward providing direct evidence of due diligence in the assessment and highlighting the quality of the credibility provided to whomever utilizes the results.

We should not judge people by their peak of excellence; but by the distance they have traveled from the point where they started.

― Henry Ward Beecher

Resolving and Capturing As Paths To Fidelity

02 Friday Oct 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

Divide each difficulty into as many parts as is feasible and necessary to resolve it.

― René Descartes

In numerical methods for partial (and integral) differential equations there are two major routes to solving problems “well”. One is resolving the solution (the physics) with some combination of accuracy and discrete degrees of freedom. This is the path of brute force, and much of the impetus to build ultra-massive computers comes from it. It is the tool of pure scientific utilization of computing. The second path is capturing the physics, where ingenious methods allow important aspects of the solution to be found without every detail being known. This methodology forms the backbone of, and enables, most useful applications of computational modeling. Shock capturing is the archetype of this approach: the actual singular shock is smeared (projected) onto the macroscopic grid through the application of a carefully chosen dissipative term, instead of demanding infinite resolution.

In reality both approaches are almost always used in modeling, but their differences are essential to recognize. In many cases we resolve unique features in a model, such as the geometric influence on the result, while capturing more universal features like shocks or turbulence. In practice, modelers are rarely so intentional about this or about which aspect of numerical modeling practice is governing their solutions. If they were, the practice of numerical modeling would be so much better. Notions of accuracy, fidelity and general intentionality in modeling would improve greatly. Unfortunately we appear to be on a path that allows an almost intentional “dumbing down” of numerical modeling by dulling the level of knowledge associated with the details of how solutions are achieved. This is the black box mentality that dominates modeling and simulation in the realm of applications.

Nowhere does the notion of resolving physics come into play like in direct numerical simulation (DNS) of turbulence, or direct simulation of any other physics for that matter. Turbulence is the archetype of this approach. It also serves as a warning to anyone interested in attempting it. In DNS there is a conflict between fully resolving the physics and computing the most dynamic physics possible given existing computing resources. As a result the physics being computed by DNS rarely uses a mesh in the “asymptotic” range of convergence. Despite being nominally fully resolved, DNS is rarely if ever subjected to numerical error estimation. In the cases where this has been done, the accuracy of DNS falls short of expectations for “resolved” physics. In all likelihood truly resolving the physics would require far more refined meshes than current practice dictates, and would undermine the depth of scientific exploration (lower Reynolds numbers). The balance between quality and exploration in science is indeed a tension that remains, ironically, unresolved.

Perhaps more concerning is the tendency to only measure the integral response of systems subject to DNS. We rarely see a specific verification of the details of the small scales that are being resolved. Without the explicit and implicit work to assure the full resolution of the physics, one might be right to doubt the whole DNS enterprise and its results. It remains a powerful tool for science and a massive driver for computing, but due diligence on its veracity remains a sustained shortcoming in its execution. As such, a greater degree of faith in DNS results should be earned through science rather than simply granted by fiat, as we tend to do today.

Given the issues with fully resolving physics, where does “capturing” fit? In principle capturing means that the numerical method contains a model that allows it to function properly when the physics is not resolved. It usually means that the method will reliably produce the integral properties of the solution. This is achieved by building the right asymptotic properties into the method. The first and still archetypical method is shock capturing with artificial viscosity. The method was developed to marry a shock wave to a grid by smearing it across a small number of mesh cells and adding the inherent entropy production to the method. Closely related to this methodology is large eddy simulation, which allows under-resolved turbulence to be computed. The subgrid model in its simplest form is exactly the artificial viscosity from the first shock capturing method; it allows the flow to dissipate at the large scale without computing the small scales. It also stabilizes what would otherwise be an unstable computation.
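To make the idea concrete, here is a minimal sketch of capturing on the inviscid Burgers equation standing in for a compressible flow. The dissipation combines a simple Rusanov-style linear term with a von Neumann-Richtmyer-flavored quadratic term active only in compression; the coefficients, grid and initial step are invented for illustration and are not drawn from any production code.

```python
# Minimal shock-capturing sketch on inviscid Burgers, u_t + (u^2/2)_x = 0.
# A linear (Rusanov-like) artificial viscosity acts everywhere and a quadratic
# (von Neumann-Richtmyer-flavored) term switches on only in compression; the
# shock is smeared over a few cells instead of being resolved. Illustrative only.
import numpy as np

def burgers_capture(nx=200, t_end=0.3, c_quad=1.0, cfl=0.4):
    x = (np.arange(nx) + 0.5) / nx               # cell centers on [0, 1], periodic
    dx = 1.0 / nx
    u = np.where(x < 0.5, 1.0, 0.0)              # right-moving step becomes a shock
    t = 0.0
    while t < t_end - 1e-12:
        up = np.roll(u, -1)                      # neighbor u_{i+1}
        du = up - u                              # jump across face i+1/2
        nu = (0.5 * dx * np.maximum(np.abs(u), np.abs(up))     # linear viscosity
              + c_quad * dx * np.maximum(-du, 0.0))            # quadratic, compression only
        flux = 0.25 * (u**2 + up**2) - nu * du / dx            # face flux F_{i+1/2}
        dt = min(cfl * dx / max(np.abs(u).max(), 1e-12),       # advective limit
                 0.25 * dx * dx / max(nu.max(), 1e-12),        # diffusive limit
                 t_end - t)
        u = u - (dt / dx) * (flux - np.roll(flux, 1))          # conservative update
        t += dt
    return x, u

x, u = burgers_capture()
# The exact shock sits near x = 0.65 at t = 0.3; the captured one is spread
# over a handful of cells by the added dissipation.
window = (x > 0.55) & (x < 0.75)
print("cells inside the smeared shock:", int(np.sum((u[window] > 0.05) & (u[window] < 0.95))))
```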

Another major class of physics capturing is interface or shock tracking. Here a discontinuity is tracked with the presumption that it is a sharp transition between two states. These states could be the interface between two materials, or pre- and post-shock values. In any case a number of assumptions are encoded into the method about the evolution of the interface and how the states change. Included are rules for the representation of the solution, which define the method’s performance. Of course, stability of the method is of immense importance, and the assumptions made in the solution can have unforeseen side effects.

One of the key pragmatic issues associated with resolved versus captured solutions is that most modern methods blend the two concepts. It is a best-of-both-worlds strategy. Great examples exist in the world of high-order shock capturing methods. Once a shock exists, all of the rigor in producing high-order accuracy becomes a serious expense compared to the gains in accuracy. The case for using higher than second-order methods remains weak to this day. The question to be answered more proactively by the community is “how can high-order methods be used productively and efficiently in production codes?”

Combining the concepts of resolving and capturing is often done without any real thought about how this impacts accuracy and modeling. The desire is to have the convenience, stability and robustness of capturing with the accuracy associated with efficient resolution. Achieving this for anything practical is exceedingly difficult. A deep secondary issue is the modeling inherent in capturing physics. The capturing methodology is almost always associated with embedding a model into the method. People will then unthinkingly model the same physical mechanisms again, resulting in a destructive double counting of physical effects. This can confound any attempt to systematically improve models. The key questions to ask about the solution are: is this feature being resolved, or is this feature being captured? The demands on the computed solution are far different depending on the answer.

These distinctions and differences all become critical when it comes to assessing the credibility of computational models. The modeling aspects and the numerical error aspects, along with the overall physical representation philosophy (i.e., meshing), are critical in defining the credibility of a model. Too often those using computer codes to conduct modeling in scientific or engineering contexts are completely naïve and oblivious to the subtleties discussed here.

In many cases they are encouraged to be as oblivious as possible about many of the details important in numerical modeling. In those cases the ability to graft any understanding of the dynamics of the numerical solution of the governing equations onto their analysis becomes futile. This is common when the computer code solving the model is viewed as a turnkey, black box sort of model. Customers accepting results presented in this fashion should be inherently suspicious of the quality. Of course, the customers are often encouraged to be even more naïve and nearly clueless about any of the technical issues discussed above.

Resolve, and thou art free.

― Henry Wadsworth Longfellow

 
