
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Are we really stewarding anything but decline?

06 Friday Nov 2015

Posted by Bill Rider in Uncategorized



Never ascribe to malice that which is adequately explained by incompetence.

― Robert J. Hanlon

I’ve written mostly about modeling and simulation because that’s what I do and what I know best, but it’s part of a larger effort and a larger problem. I work for a massive effort known as science-based stockpile stewardship, where modeling and simulation is one of the major themes. The whole effort was conceived as a way of maintaining our confidence (faith) in our nuclear weapons in the absence of actually testing them. There is absolutely no technical reason not to test them; the idea of not testing is purely political. It is a good political stance from a moral and ethical point of view, and I have no issue with taking that stand on those grounds. From a scientific and engineering point of view it is an awful approach, clearly far from optimal and prone to difficulties. These difficulties can be a very good thing if harnessed appropriately, but today such utility is not present in the execution of our Labs’ mission. As one should always remember, nuclear weapons are political things, not scientific ones, and politics is always in charge.

The science-based stockpile stewardship program is celebrating its twenty-year anniversary. Our political leaders are declaring it to be a massive success. They have been busy taking a victory lap and crowing about its achievements, with high performance computing held up as the greatest part of that success. These proclamations are at odds with reality. The truth is that the past 20 years have marked the decline of the quality and scientific supremacy of our Labs. The program should have been a powerful hedge against decline, and perhaps it has been. Perhaps without stockpile stewardship the Labs would be in even worse shape than they are today. That is a truly terrifying thought. We see a broad-based decline in the quality of the scientific output of the United States, and our nuclear weapons Labs are no different. It appears that the best days are behind us. It need not be this way with proper leadership and direction.

Confidence is something you feel before you truly understand the situation.

― Julie E. Czerneda

Nonetheless, given the stance of not testing, we should be in the business of doing the very best job possible within these self-imposed rules (i.e., no full-up testing). We are not, and not by a small margin. This is not on purpose, but rather the result of a stunning lack of clarity in objectives and priorities. We have allowed a host of other priorities to undermine success in this essential endeavor. Taking the fully integrated testing of the weapons off the table requires that we bring our very best to everything else we do.

I’ve written a great deal about how bad our approach to modeling and simulation is, but it is the tip of the proverbial iceberg of incompetence and of steps that systematically undermine the work necessary to succeed. While modeling and simulation gets a lot of misdirected resources, the experimental and theoretical efforts at the Labs have been eviscerated. The impact of this evisceration on modeling and simulation is evident in issues with the actual credibility of simulation. This destruction has been done at the time when experiment and theory are needed the most. Instead, support for these essential engines of scientific progress has been “knee-capped”. Just as importantly, a positive work environment has been absolutely annihilated by how the Labs are managed.

Without the big integrated experiment to tell you what you need to know for confidence, all the other experiments need to be taken up a notch or two to fill the gap. Instead we have created an environment where experimental science has been lobotomized and exists in an atmosphere of extreme caution that almost assures the lack of results necessary for healthy science. Hand in hand with the destruction of experimental science is the loss of any vibrancy in theoretical science. The necessary bond between experimental and theoretical science has been torn asunder. When working well, the two approaches push and pull each other to assure progress. With neither functioning, science grinds to a halt. Engineering is similarly dysfunctional. We do not know enough today to execute the mission. In a very real sense we will never know enough, but our growth of knowledge is completely dependent on a functioning engine of discovery powered primarily by experiment, but also theory. Without either functioning properly, modeling and simulation is simply a recipe for over-confidence.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

― Alan Turing

We have gotten to this point with the best of intentions, but the worst in performance and understanding of what it takes to be successful. We are not talking about malice on the part of our national leadership, which would be tantamount to treason, but rather the sort of incompetence that arises from the political chaos of the modern era. When we add a completely dysfunctional and spoiled public consciousness governed principally by fear, we have the recipe for wholesale decline and the seemingly systematic destruction of formerly great institutions. Make no mistake, we are destroying our technical base as surely as our worst enemy would, but through our own inept management and internal discord.

Let’s start with the first nail in the coffin, the “Tiger Teams” of the 1990s. We decided to apply the same forces to the National Labs that have made nuclear power economically unviable (nuclear power has been made massively expensive through over-regulation and a legal environment that causes costs to explode through the time-integrated value of money). This isn’t actual safety, but rather the imposition of a massive paperwork and procedural burden on the Labs, which produces safety primarily by decreasing productivity to the level where nothing happens.

Science becomes so incremental that progress is glacial. You almost completely guarantee safety and, in the process, a complete lack of discovery. Experiments lose all their essence and utility in acting as a hedge against over-confidence by surprising us. Add the risk aversion we talk about below, and you have experimental science that does almost nothing. As a result we get very little for our experimental dollar, and allow ourselves to do almost nothing innovative or exciting. So yes, safety is really important, and we need to produce a safe working environment. This same environment must also be a productive place. The productivity gains that we have seen in the private world have been systematically undermined at the Labs, not just by safety, but by two other drivers: risk aversion and security.

Guaranteed security is another pox on the Labs. This pox is impacting society as a whole, but the Labs suffer under an added burden. We pay an immense tax on our lives by trying to defend ourselves from the minuscule risks associated with terrorism. We have given up privacy as a society so that our security forces can find the scant number of terrorists who represent almost no actual risk to citizens. The security stance at the Labs is no different. We face almost no risk or danger of anything, yet we pay a huge price in terms of privacy, productivity and work environment to avoid vanishingly small risks. Instead of producing Labs that are so fantastic that we constantly push back the barriers of knowledge and stay ahead of our enemies, we kill ourselves with security. We keep ourselves from communicating, producing work and collaborating effectively for virtually no true benefit aside from soothing irrational fear.

Finally, we have a focus on accountability where we want a guarantee that no money is wasted at any time. Part of this is risk aversion: research that might not pan out doesn’t get funded, because not panning out is viewed as failure. Yet these failures are at the core of learning and growing. Failure is essential to learning and acquiring knowledge. Our accountability system is working to destroy the scientific method, the development of staff, and our ability to be the best. To some extent we account not because we need to, but because we can. Computers allow us to sub-divide our sources of money into ever-smaller bins, along with everyone’s time and effort. In the process we lose the crosscutting nature of the Labs’ science. We get a destruction of the multi-disciplinary science that is absolutely essential to doing the work of stewardship. Without multi-disciplinary science we will surely fail at this essential mission, and we are managing the Labs in a way that assures this outcome.

All of this is systematically crushing our workforce and its morale. In addition, we are failing to build the next generation of scientists and engineers with the level of quality necessary for the job. We are allowing the quality of the staff to degrade through the mismanagement of the entire enterprise at a National level. Without a commitment to real, unequivocal success in the stewardship mission, the entire activity is simply an exercise in futility.

We seek guaranteed safety, and that simply cannot happen without doing nothing at all. We seek a guaranteed lack of risk and no chance of failure, which is the antithesis of research and learning. Science is powered by risk and the voyage into the unknown. Without the unknown, an inherently risky thing, science is simply the curating of existing knowledge. Our security stance seems totally rational, especially in the post-9/11 world. It is nothing more than fear mongering that strives to do the impossible: maintain tight control over information based on science, and maintain our advantage by keeping ourselves from using the best available technology. Instead of enhancing our productivity with technology and science, we hamstring ourselves to defend our possession of old knowledge. The power of the Labs and their staff is driven by achievement and discovery, and the push for safety, freedom from risk and total security is completely at odds with that strength and works to destroy it.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

When we look at the overall picture we see a system that is not working. We spend more than enough money on stockpile stewardship, but we spend the money foolishly. The money is being wasted on a whole bunch of things that have nothing to do with stewardship. Most of the resources are going into guaranteeing complete safety, complete absence of risk, complete security and complete accountability. It is a recipe for abject failure at the integrated job of safeguarding the Nation. We are failing in a monumental way while giving our country the picture of success. Of course the average American is easily fooled; if they weren’t, would our politics be so dysfunctional and dominated by fear-based appeals?

Evil people rely on the acquiescence of naive good people to allow them to continue with their evil.

― Stuart Aken

What could we be doing to make things better and step toward success?

The first thing we need is a big, audacious goal with enough resources and freedom to solve the problems. Stockpile stewardship itself should be enough of a challenge, and we do have the resources to solve the problem. What we are missing is the freedom to get the job done, and what we have instead is a general waste of resources on things that contribute nothing toward success. Actually, much of our resourcing goes directly into things that detract from success. Think about it: we spend most of our precious money undermining any chance at succeeding. One of the core issues is that we are not answering the new questions that today’s World is asking. Instead we are continuing to try to answer yesterday’s questions even when they are no longer relevant.

Theories might inspire you, but experiments will advance you.

― Amit Kalantri

Another way of making progress is to renew our intent towards building truly World-class scientists at the Labs. We can do this by harnessing the Labs’ missions to do work that challenges the boundaries of science. Today we are World class by definition and not through our actions. We can change this by simply addressing the challenges we have with a bold and aggressive research program. This would drive professional development to heights that today’s approach cannot match. Part of the key to developing people is to allow their work to be the engine of learning. For learning, failure and risk are key. Without failure we learn nothing; we just recreate the successes we already know about. World-class science is about learning new things and cannot happen without failure, and failure is not tolerated today. Without failure, science does not work.

A big piece of today’s issues with the Labs is the deep disconnect between experiment and theory, the very connection that is necessary to drive science forward. Along with the admonitions against failure, the push and pull of experiment and theory has broken down. This tie must be re-established if scientific health and vitality are to be restored. When it works properly we see a competition between experimental science and theory. Sometimes experiments provide results that theory cannot explain, driving theory forward. At other times theory makes predictions that experiments have to progress to measure and confirm. Today we simply work in a mode where we continually confirm existing theory, and fail to push either into the unknown. Science cannot progress under such conditions.

Much of the problem with the lack of progress can be traced to the enormous time, effort and resources that go into useless regulation, training and paperwork. These efforts go far beyond the necessary level of seriousness in assuring safety and security; they attempt to guarantee safety and security in all endeavors. Such guarantees are foolish and lead to an overly cautious workplace where failure is ruled out by dictum and the risks necessary for progress are avoided. This leads to a lack of progress, meaning and excellence in science. It is a recipe for decline. We do not have a system that prioritizes productivity, progress and quality of work. We have lost all perspective, tilting the balance of our efforts toward the seemingly safest and most secure mode of work.

The stupid, naïve and unremittingly lazy thinking that permeates high performance computing isn’t just found there. It dominates the approach to stockpile stewardship. We are stewarding our nuclear weapons with a bunch of wishful thinking instead of a well-conceived and executed plan. We are in the process of systematically destroying the research excellence that has been the foundation of our National security. It is not malice, but rather societal incompetence, that is leading us down this path. Increasingly, the faith in our current approach depends on the unreality of the whole nuclear weapons enterprise. The weapons haven’t been used for 70 years, and hopefully that lack of use will continue. If they are used we will be in a much different World, and a World we are no longer ready for. I seriously worry that our lack of seriousness and pervasive naivety about the stewardship mission will haunt us. If we have screwed this up, history will not be kind to us.

You have attributed conditions to villainy that simply result from stupidity.

― Robert A. Heinlein

“Preserve the Code Base” is an Awful Reason for Anything

30 Friday Oct 2015

Posted by Bill Rider in Uncategorized


The greater danger for most of us lies not in setting our aim too high and falling short; but in setting our aim too low, and achieving our mark.

― Michelangelo Buonarroti

The DOE ASC program has turned into “Let’s create a program that will replace the old generation of legacy codes with a new generation of legacy codes.” In this way the program, which just celebrated its 20th anniversary, has been a massive success. Unfortunately this end product is not in service to our National security; it is a threat to it.

One of the reasons I have been given for some of the work we are doing is the need to “preserve our code base”. This code base is described as a multi-billion-dollar investment that DOE has made and that needs to be maintained for the future. Nothing could be further from the truth. It is one of the most defeatist and insulting things I can imagine. It is naïve and simplistic at its core. This makes me want to puke.

Why should I have such a strong and visceral reaction to a statement of “support” for the importance of modeling and simulation work? After all, preserving the code base comes with funding for lots of work and the purchase of super-exotic supercomputers that seem really cool (they’re really big and complicated with lots of flashing lights, plus they cost a shit-ton of money). My problem comes from the lack of faith this approach denotes in the ability of our current scientists to produce anything of intellectual value. Instead of valuing the creativity and creation of knowledge by our current generation of scientists, we are implicitly valuing only the contributions of the past. We should value the work of the past, but as a foundation to build upon, not an idol to worship. The impact of this approach is that the value of today’s work is diminished, and the careers of current scientists are diminished to the point of their being simply caretakers. It makes today’s scientists mindless high priests of the past. We end up asking very little of them in terms of challenge and accomplishment, and end up harming the Nation’s future in the process. Hence the “makes me want to puke” comment.

So why the hell does this messaging exist?

It is a rather feeble attempt to justify the existence of the current work. It is feeble because it completely misrepresents the work and creates a harmful narrative. The narrative exists because people simply don’t understand what “code” is. They think of code like a bridge that, once built, simply does the job for a very long time. Code is nothing at all like a bridge or a building, and trying to manage it in the manner being promoted is dangerous, destructive and borders on incompetence. It is certainly an attitude born of complete ignorance.

Cooking requires confident guesswork and improvisation – experimentation and substitution, dealing with failure and uncertainty in a creative way.

― Paul Theroux

A much better analogy is cooking. Code is simply the ingredients used to cook a dish. Good ingredients are essential, but insufficient to assure you get a great meal. Moreover, food spoils and needs to be thrown out, replaced, or swapped for better choices. Likewise, parts of the code are in constant need of replacement or repair, or of simply being thrown out. The computing hardware is much like the cooking hardware: the stovetop, oven, food processors, etc., which are important to the process but never determine the quality of the meal. They may determine the ease of preparation, but almost never the actual taste and flavor. In the kitchen nothing is more important than the chef. Nothing. A talented chef can turn ordinary ingredients into an extraordinary culinary experience. Give that same talented chef great ingredients, and the resulting dining experience could be absolutely transcendent.

Our scientists are like the chefs, and their talents determine the value of the code and its use. Without their talents the same code can be rendered utterly ordinary. The code is merely a tool that translates simple instructions into something the computer can understand. In skilled hands it can render the unsolvable solvable and unveil an understanding of reality invisible to experiment. In unskilled hands, it can use a lot of electricity and fool the masses. With our current attitude toward computers we are turning Labs once stocked with outstanding ingredients and masterful chefs into fast-food fry cooks. The “preserve the code base” narrative isn’t just wrong, it is downright dangerous and destructive.

…no one is born a great cook, one learns by doing.

― Julia Child

This narrative does represent the state of modeling and simulation in support of our Nation’s nuclear weapons, and that should worry everyone a lot. Rather than talking about investing in knowledge, talent and people, we are investing our energy in keeping old, stale code alive and well as our computers change. Of course we are evolving our computers in utterly idiotic ways that do little or nothing to help us solve the problems we really care about. Instead we are designing and evolving our computers to solve problems that only matter for press releases. More and more, the computers that make for good press releases are the opposite for real problems; the new computers just suck that much harder at solving real problems.

Computing at the high end of modeling and simulation is undergoing great change in a largely futile endeavor to squeeze out what little life Moore’s law has left in it. The truth is that Moore’s law, for all intents and purposes, died a while ago, at least for real codes solving real problems. Moore’s law only lives on in its zombie guise: a benchmark involving dense linear algebra that has no relevance to the codes we actually buy computers for. So I am right in the middle of a giant bait-and-switch scheme that depends on even greater naivety and outright ignorance on the part of those cutting the checks for the computers than on the part of those defining the plan for the future of computing.

At the middle of this great swindle is code. What is code, or more properly, code for solving models used to simulate the real world? The simplest way to think about code is to view it as a recipe that a “master” chef created to produce a model of reality. A more subtle way to think about a code is as a record of the intellectual labor that went into defining and solving models proposed to simulate reality. If we dig deeper, we see that code is a way of taking a model of reality and solving it, generally without making the gross assumptions needed to render it analytically tractable. The model is only as good as the intellect and knowledge base used to compose it, in conjunction with the intellect and knowledge used to solve the problem. At the deepest level the code is only as good as the people using it. By not investing in the quality of our scientists we are systematically undermining the value of the code. For the scientists to be good, their talent must be developed through engaging in the solution of difficult problems.

If we stay superficial and dispense with any and all sophistication, then we get rid of the talented people and get by with trained monkeys. If you don’t understand what is happening in the code, it just seems like magic. With increasing regularity the people running these codes treat them like magical recipes for simulating “reality”. As long as the reality being simulated isn’t actually being examined experimentally, the magic works. If you have magic recipes, you don’t change them, because you don’t understand them. This is what we are creating at the Labs today: trained monkeys using magical recipes to simulate reality.

In a lot of ways the current situation is quintessentially modern and exceptionally American in tenor. We have massive computers purchased at great cost running magical codes written by long-dead (or just retired) wizards and maintained by a well-paid, well-educated cadre of peasants. Behind these two god-awful reasons to spend money is a devaluing of the people working at the Labs. Development of talent and the creation of intellectual capital by that talent are completely absent from the plan. It creates a working environment that is completely backward-looking and devoid of intellectual ownership. It is draining the Labs of quality and undermining one of the great engines of innovation and ingenuity for the Nation and the World.

The computers aren’t even built to run the magical code, but rather to run a benchmark that only produces results for press releases. Running the magical code is the biggest challenge for the serfs because the computers are so ill suited to their “true” purpose. The serfs are never given the license or ability to learn enough to create their own magic; all their efforts go into simply maintaining the magic of a bygone era.

What could we accomplish if we knew we could not fail?

― Eleanor Roosevelt

What would be better?

One option would be to stop buying these computers whose sole purpose is to create a splashy press release while we then struggle forever to run the magical codes on them. Instead we should build computers that are optimized, within constraints, to solve the problems of the agencies they are purchased for. We could work to push back against the ever-steeper decline in realized performance. Maybe we should actually design, build and buy computers we actually want to use. What a novel concept: buy a computer you actually want to use instead of one you are forced to use!

That, my friends, is the simplest thing to achieve. The much more difficult thing is overcoming the magical code problem. The first step in overcoming magical code is to show the magic for what it is: the product of superior intellect and clever problem solving, and nothing more. We have to allow ourselves to create new solutions to new problems, grounded in the past but never chained to it. The codes we are working with are solving the problems posed in the past, and the problems of today are different.

One of the biggest issues with the magical codes is their masterful solution of the problems they were created to solve. The problems they are solving are not the problems we need to solve today. The questions driving technological decision making today are different than yesterday. Even if there is a good reason to preserve the code base (there isn’t), the code base is solving the wrong problems; it is solving yesterday’s problems (really yesteryear’s or yester-decade’s problems).

All of this still avoids the impact of solution algorithms on efficiency. As others and I have written, algorithms can do far more than computers to improve the efficiency of solution. Current algorithms are an important part of the magical recipes in current codes. We generally are not doing anything to improve the algorithmic performance in our codes. We simply push the existing algorithms along into the future.
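To make the scale of that leverage concrete, here is a rough back-of-the-envelope sketch (my own illustration with assumed constants; only the scalings are the standard textbook ones) of the work needed to solve a model 2D Poisson problem with successively better solvers:

```python
# Rough operation-count comparison for solving a 2D Poisson problem with N
# unknowns.  The exponents are the standard textbook complexities; the
# constants (20, 50) are assumptions chosen only to make the numbers concrete.

N = 1_000_000  # a 1000 x 1000 grid

solvers = {
    "Gauss-Seidel":   N**2,      # O(N^2) work
    "Optimal SOR":    N**1.5,    # O(N^1.5)
    "FFT-based":      N * 20,    # ~O(N log N), log2(N) ~ 20
    "Full multigrid": N * 50,    # O(N), assumed modest constant
}

base = solvers["Gauss-Seidel"]
for name, work in solvers.items():
    print(f"{name:14s} ~{work:.1e} ops   speedup vs Gauss-Seidel: {base / work:,.0f}x")
```

At a million unknowns the algorithmic gap alone spans several orders of magnitude, which is the sense in which better mathematics has historically kept pace with, or outpaced, Moore’s law.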

This is another example of the kind of intellectual product (or lack thereof) that the current preserve-the-code-base attitude favors. We completely avoid the possibility of doing anything better algorithmically than we did in the past. Historically, improvements in algorithms provided vastly greater advances in capability than Moore’s law did. I say historically because these advances largely occurred prior to the turn of the century (i.e., 2000). In the 15 years since, progress due to algorithmic improvements has ground to a virtual halt.

Why?
All the energy in scientific computing has gone into implementing existing algorithms on the new generation of genuinely awful computers. Instead of investing in a proven intellectual path for progress that has historically paired with computer improvements, we have shifted virtually all effort into computers and their direct consequences. Algorithmic research is risky and produces many failures. It takes a great deal of tolerance for failure to invest sufficiently to get the big payoff.

Our funding agencies have almost no tolerance for failure, and without the tolerance for failure the huge successes are impossible. The result is a systematic lack of progress, and a complete reliance on computer hardware for improvement. This is a path that ultimately and undeniably leads to a complete dead end. In the process of reaching this dead end we will sacrifice an entire generation of scientists to this obviously sub-optimal and stupid approach.

Ultimately, the irritation over the current path is primarily directed at the horrible waste of opportunity it represents. There is so much important work that needs to be done to improve modeling and simulation’s quality and impact. At this point in time computing hardware might be the least important aspect to work on; instead it is the focal point.

Much greater benefits could be realized through developing better models, extending physical theories, and making fundamental improvements in algorithms. Each of these areas is risky and difficult research, but offers massive payoffs with each successful breakthrough. The rub is that breakthroughs are not guaranteed, but rather require an element of faith in the ability of the human intellect to succeed. Instead we are placing our resources behind an increasingly pathetic status quo approach. Part of the reason for the continuation of this approach is merely the desire of current leadership to take a virtual victory lap by falsely claiming the success of the approach they are continuing.

In a variety of fields, the key aspects of modeling and simulation that evade our grasp today are complex, chaotic phenomena that are not well understood. In fluid dynamics, turbulence continues to be vexing. In solid mechanics, fracture and failure play a similar role in limiting progress. Both areas are in dire need of fresh ideas that might break the collective failures down and allow progress. In neither area will massive computing provide the hammer blow that allows progress. Only by harnessing human creativity and ingenuity will these areas progress.

In many ways I believe that one of the key aspects limiting progress is our intrinsic devotion to deterministic models. Most of our limiting problems lend themselves more naturally to non-deterministic models. These require new mathematics, new methods and new algorithms to unleash their power. Faster computers are always useful, but without the new ideas these new, faster computers will simply waste resources. The issue isn’t that our emphasis is necessarily unambiguously bad, but rather that it is grossly imbalanced and out of step with where our priorities should be.

Mediocrity knows nothing higher than itself; but talent instantly recognizes genius.

― Arthur Conan Doyle

At the core of our current problems is human talent. Once we found our greatest strength in the talent of the scientists and engineers working at the Labs. Today we treat things (code, computers and machines) as the strength while starving the pipeline of human talent. When the topic of talent comes up, leadership speaks about hiring the best and brightest while paying them at the “market” rate. More damning is how that talent is treated once it hires on. The preserve-the-code mantra speaks to a systematic failure to develop, nurture and utilize talent. We hire people with potential and then systematically squander it through inept management and vacant leadership. Our efforts in theory and experiment are equally devoid of excellence, utility and vision.

Once we developed talent by providing tremendously important problems to solve and turning excellent people loose on them in an environment that encouraged risky, innovative solutions. In this way potentially talented people became truly talented and accomplished, ready to slay the next dragon using the experience of previously slain beasts. Today we don’t even let them see an actual dragon. Our staff never realize any of their potential because they are simply curating the accomplishments of the past. The code we are preserving is one of the artifacts we are guarding. This approach is steadily strangling the future.

Human resources are like natural resources; they’re often buried deep. You have to go looking for them, they’re not just lying around on the surface. You have to create the circumstances where they show themselves.

― Ken Robinson

We want no risk and complete safety; we get mediocrity and decline

23 Friday Oct 2015

Posted by Bill Rider in Uncategorized


Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

With each passing day I am more dismayed by the tendency of people to excuse mediocrity in the work they do. Instead of taking pride in our work, we are increasingly acting like mindless serfs toiling on land held by aristocratic overlords. The refrain so often amounts to ceding to others the authority to define the quality of your work, with statements like the following:

“It was what the customer wanted.”

“It met all requirements.”

“We gave them what they paid for.”

All of these statements end up paving the road for second-rate work and remove all responsibility from ourselves for the state of affairs. We are responsible because we don’t allow ourselves to take any real risks.

They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.

― Benjamin Franklin

The whole risk-benefit equation is totally out of whack for society as a whole. The issue is massive across the whole of the Western world, but nowhere is it more in play than in the United States. Acronyms like TSA and NSA immediately come to mind. We have traded a massive amount of time, effort and freedom for a modest to fleeting amount of added security. It is unequivocal that Americans have never been safer and more secure than now. Americans have also never been more fearful. Our fears have been amplified for political gain and focused on things that barely qualify as threats. Meanwhile we ignore real dangers and threats because they are relatively obscure and largely hidden from plain view (think income inequality, climate change, sugar, sedentary lifestyles, guns, …). Among the costs of this focus on removing the risk of bad things happening is the chance to do anything unique and wonderful in our work.

The core issue that defines the collapse of quality is the desire for absolute guarantees that nothing bad will ever happen. There are no guarantees in life. This hasn’t stopped us from trying to remove failure from the lexicon. What this desire creates is the destruction of progress and hope. We end up being so cautious and risk averse that we have no adventure, and never produce anything unintended. Unintended can be bad, so we avoid it; but unintended can also be wonderful, a discovery. We avoid that too.

I am reminded of chocolate chip cookies as an apt analogy for what we have created. If I go to the store, buy a package of Nestle Toll House morsels, and follow the instructions on the back, I will produce a perfectly edible, delicious cookie. These cookies are quite serviceable, utterly mediocre and totally uninspired. Our National Labs are well on their way to being the Toll House cookies of science.

I can make some really awesome chocolate chip cookies using a recipe that has taken 25 years to perfect. Along the way I have made some batches of truly horrific cookies while conducting “experiments” with new wrinkles on the recipe. If I had never made these horrible batches of cookies, the recipe I use today would be no better than the Toll House one I started with. The failures are completely essential for the success in the long run. Sometimes I make a change that is incredible and a keeper, and sometimes it destroys or undermines the taste. The point is that I have to accept the chance that any given batch of cookies will be awful if I expect to create a recipe that is in any way remarkable or unique.

The process that I’ve used to make really wonderful cookies is the same one science needs to make progress. It is a process that cannot be tolerated today. Today the failures are simply unacceptable. Saying that you cannot fail is equivalent to saying that you cannot discover anything and cannot learn. This is exactly what we are getting. We have destroyed discovery, and we have destroyed the creation of deep knowledge and deep learning in the process.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

What’s the Point of All This Stuff?

16 Friday Oct 2015

Posted by Bill Rider in Uncategorized


I find my life is a lot easier the lower I keep my expectations.

― Bill Watterson

I’ve been troubled a lot recently by the thought that the things I’m working on are not terribly important, or worse yet, not the right things to be focusing on. It’s not a very quieting thought that your life’s work is nigh on useless. So what the hell do I do? Something else?

My job is also bound to numerous personal responsibilities and the fates of my loved ones. I am left with a set of really horrible quandaries about my professional life. I can’t just pick up and leave, or at least do that and respect myself. My job is really well paying, but every day it becomes more of just a job. The worst thing is the overwhelming lack of intellectual honesty associated with the importance of the work I do. I’m working on projects with goals that are at odds with progress. We are spending careers and lives working on things that will not improve science and engineering, much less positively affect society as a whole.

I really believe that computational modeling is a boon to society and should be transformative if used properly. It needs capable computing to work well. All of this sounds great, but the efforts in this direction are slowly diverging from a path of success. In no aspect of the overall effort is this truer than high performance, or scientific, computing. We are on a path to squandering a massive amount of effort to achieve almost nothing of utility for solving actual problems in the so-called exascale initiative. The only exascale that will actually be achieved is on a meaningless benchmark, and the actual gains in computational modeling performance are fleeting and modest to the point of embarrassment. It is a marketing ploy masquerading as strategy, and it is professional malpractice, or simply mass incompetence.

Strong language, you might say, but it’s at the core of much of my personal disquiet. I’m honest almost to a fault, and the level of intellectual dishonesty in my work is implicitly at odds with my most closely held values. For the most part the impact is some degree of personal morale decline and a definite feeling of lost inspiration and passion for work. I’ve been fortunate to live large portions of my life feeling deeply passionate about and inspired by my work, but those positive feelings have waned considerably.

I’ve written a good bit about the shortcomings of our path in high performance computing. I have also discussed the nature of modeling and simulation activities and their relationship to reality. For modeling and simulation to benefit society, things in the real world need to be impacted by it. Yet the current focus is on the parts of computing furthest from reality. Every bit of evidence says the effort should be focused completely differently.

The only real mistake is the one from which we learn nothing.

― Henry Ford

The current program is recreating the mistakes made twenty years ago in high performance computing under the original ASCI program. Perhaps the greatest sin is not learning anything from the mistakes made then. We have had twenty years of history and lessons learned that have been conspicuously ignored in the present. This isn’t simply bad or stupid; it is the sin of being both unthinking and incompetent. We are going to waste entire careers and huge amounts of money in service of intellectually shallow efforts that could have been avoided by the slightest attention to history. To call what we do science when we haven’t bothered to learn the obvious lessons right in front of us is simply malpractice of the worst possible sort.

All of this really serves our aversion to risk. Real discovery and advances in science require risk, require failure, and cannot be managed like building a bridge. Science is an inherently error-prone and failure-driven exercise that requires leaps of faith in the ability of humanity to overcome the unknown. We must take risks, and the bigger the risk, the bigger the potential reward. If we are unwilling to take risks and perhaps fail, we will achieve nothing. The current efforts are constructed to avoid failure at all cost. We will spend a lot to achieve very little.

In a deep way it makes a lot of sense to clearly state what the point of all this work is. Is it “pure” research where we simply look to expand knowledge? Are we trying to better deliver a certain product? How grounded in reality are the discoveries needed to advance? Is the whole thing focused on advancing the economics of computing? Are we powering other scientific efforts and viewing computing as an engine of discovery?

None of these questions really matters, though; the current direction fails in every respect. I would answer that the current high performance computing trajectory is not focused on success in answering any of these questions, except as a near-term giveaway to the computing industry (like so much of government spending, it is simply largess to the rich). Given the size of the computing industry, this emphasis is somewhere between silly and moronic. If we are interested in modeling and simulation for any purpose of scientific and engineering performance, the current trajectory is woefully sub-optimal. We are not working on the aspects of computing that impact reality in any focused manner. The only product of the current trajectory is computers that are enormously wasteful with electricity and stupendously hard to use.

By seeking and blundering we learn.

― Johann Wolfgang von Goethe

We could do so much better with a little bit of thought, and probably spend far less money, or spend the same amount of money more intelligently. We need substantial work on the models we solve. The models we are working on today are largely identical to those we solved twenty years ago, but the questions being asked in engineering and science are far different. We need new models to answer these questions. We need to focus on algorithms for solving existing and new models. These algorithms are as effective as, or more effective than, computing power in improving the efficiency of solution. Despite this, the amount of effort going into improving algorithms is trivial and fleeting. Instead we are focused on a bunch of activities that have almost no impact on the efficiency or quality of modeling and simulation.

The efforts today simply focus on computing power at a time when increases in computing power are becoming increasingly challenged by the laws of physics. In a very real sense the community is addicted to Moore’s law, and the demise of Moore’s law threatens the risk-averse manner in which progress has been easily achieved for twenty years. We need to return to the high-risk, high-payoff research that once powered modeling and simulation but that the community eschews today. We are managed as if we are building bridges, and science is not like building a bridge at all. The management style for science today is so completely risk averse that it systematically undermines the very engine that powers discovery (risk and failure).

Models will generally not get better with greater computing resources. If the model is wrong, no amount of computing resources can fix it. It will simply converge to a better wrong answer. If the model is answering the wrong question, the faster computer cannot force it to answer the right one. The only thing that can improve matters is a better or more appropriate model. Today, working on better or more appropriate models receives little attention or resources; instead we pour our efforts into faster computers. These faster computers solve yesterday’s models faster, producing higher-fidelity wrong answers than ever before.
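A minimal numerical sketch of that point (the numbers are invented purely for illustration): refine the grid all you like and the discretization error vanishes, but the total error only converges to the model’s structural bias.

```python
# Toy illustration of "converging to a better wrong answer".  Assumed numbers:
# a fixed model bias from missing physics plus a second-order discretization
# error C*h^p that mesh refinement removes.

model_bias = 0.05          # assumed error from the wrong/incomplete model
C, p = 1.0, 2.0            # assumed discretization error constant and order

for h in [0.1, 0.05, 0.025, 0.0125, 0.00625]:
    numerical_error = C * h**p
    total_error = model_bias + numerical_error
    print(f"h = {h:7.5f}   numerical ~ {numerical_error:.2e}   total ~ {total_error:.2e}")

# The total error stalls at the model bias: a bigger computer buys a smaller h,
# never a better model.
```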

Most of our codes and efforts on the next generation of computers are simply rehashed versions of the codes of yesterday, including the models solved. Despite overwhelming evidence of these models’ intrinsic shortcomings, we continue to pour effort into solving these wrong models faster. In addition, the models being solved are simply ill-suited to the societal questions being addressed with modeling and simulation. A case in point is the simulation of extreme events (think 100-year floods, failures of engineered products, or economic catastrophes). If the model is geared to solving for the average behavior of a system, these extreme events cannot be directly simulated, only inferred empirically. New models are necessary, and they need new math and new algorithms to solve them. We are failing to do this in a wholesale way.
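As a toy sketch of the extreme-event point (synthetic data from an assumed heavy-tailed distribution chosen only for illustration): a model that predicts only the average behavior says nothing about the 1%-per-year “100-year” level, which a non-deterministic description can at least estimate.

```python
# Toy comparison (synthetic, assumed lognormal variability) of an average-
# behavior answer versus the tail that drives extreme-event questions.
import random
import statistics

random.seed(0)
# Pretend an annual peak (flood stage, load, demand ...) is heavy-tailed.
samples = [random.lognormvariate(1.0, 0.6) for _ in range(100_000)]

mean_level = statistics.mean(samples)                      # all a mean-behavior model offers
level_100yr = sorted(samples)[int(0.99 * len(samples))]    # empirical 1% exceedance level

print(f"mean annual level      ~ {mean_level:.2f}")
print(f"'100-year' (1%) level  ~ {level_100yr:.2f}  ({level_100yr / mean_level:.1f}x the mean)")
```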

Next in the hierarchy of activities in modeling and simulation come algorithms (and methods). These algorithms allow the solution of the aforementioned models. The algorithm defines the efficiency and character of the solution of a model. These algorithms are the recipes for solution that computer science supports and that the computers are built to run. Despite their centrality and importance to the entire enterprise, the development of better algorithms receives almost no support.

A better algorithm will positively influence the solution of models on every single computer that employs it. Any algorithmic support today is oriented toward the very largest and most exotic computers, with a focus on parallelism and efficiency of implementation. Issues such as accuracy and operational efficiency are simply not a focus. A large part of the reason for the current focus is the emphasis on, and difficulty of, simply moving current algorithms to the exotic and difficult-to-use computing platforms being devised today. This emphasis is squeezing everything else out of existence, and reflects a misguided and intellectually empty view of what would make a difference.

There. I feel a little better having vented, but only a little.

Don’t mistake activity for achievement.

― John Wooden

The role of Frameworks like PCMM and PIRT in Credibility

09 Friday Oct 2015

Posted by Bill Rider in Uncategorized


Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson

As tools, the frameworks of PIRT (Phenomena Identification and Ranking Table) and PCMM (Predictive Capability Maturity Model) are only as good as how they are used to support high quality work. Using these tools will not necessarily improve your credibility, but rather help you assess it holistically. Ultimately the credibility of your modeling & simulation capability is driven by the honesty and integrity of your assessment. It is easy to discuss what you do well and where you have mastery over a topic. The real key to assessment is a willingness to articulate where an effort is weak and where the foundational knowledge in a field is the limiting factor in your capacity for solving problems. The goal in credibility assessment is not a demonstration of mastery over a topic, but rather a demonstration of the actual state of affairs, so that decisions can be made with full knowledge of the weight to place on modeling & simulation and the risks inherent in it.

Quality is highly subjective and relative; what counts as good depends on the problem being solved and on the modeling & simulation capabilities of the relevant field or fields. The frameworks serve to provide a basic rubric and a commonly consistent foundation for the consideration of modeling & simulation quality. In that sense PIRT and PCMM are blueprints for numerical modeling. The craft of executing the modeling & simulation within the confines of the available resources is the work of the scientists and engineers. These resources include the ability to muster effort toward completing the work, but also the knowledge and capability base that can be drawn upon. The goal of the frameworks is to provide an honest and holistic approach to guiding the assessment of modeling & simulation quality.

In the assessment of quality, the most important aspect to get right is honesty about the limitations of a modeling & simulation capability. This may be the single most difficult thing to accomplish. There are significant psychological and social factors that lead to a lack of honesty in evaluation and assessment. No framework or system can completely overcome such tendencies, but it can act as a hedge against the tendency to overlook critical details that do not reflect well on the effort. The framework assures that each important category is addressed. The ultimate test of the overall integrity and honesty of an assessment of modeling & simulation credibility depends upon deeper technical knowledge than any framework can capture.

Quite often an assessment will avoid dealing with systematic problems in a given capability that have not been sufficiently solved. Several examples are useful in demonstrating where this can manifest itself. In fluid dynamics, turbulence remains a largely unsolved problem. Turbulence has intrinsic and irreducible uncertainty associated with it, and no single model or modeling approach is adequate to elucidate the important details. In Lagrangian solid mechanics, the technique of element death is pervasively utilized for highly strained flows where fracture and failure occur. It is essential for many simulations and often renders a simulation non-convergent under mesh refinement. In both cases, the communities dependent upon modeling & simulation with these characteristics tend to under-emphasize the associated systematic issues. This produces a systematically higher confidence and credibility than is technically justifiable. The general principle is to be intrinsically wary of unsolved problems in any given technical discipline.

Together, the PIRT and PCMM, adapted and applied to a modeling & simulation activity, form part of delivering a defined credibility for the effort. The PIRT gives context to the modeling efforts and the level of importance and knowledge of each part of the work. It is a structured manner for the experts in a given field to weigh in on the basis for model construction. The actual activities should strongly track the assessed importance and knowledge basis captured in the PIRT. Similarly, the PCMM can be used for a structured assessment of the specific aspects of the modeling & simulation.

The degree of foundational work providing the basis for confidence in the work is spelled out in the PCMM categories. Included among these are the major areas of emphasis, some of which may be drawn from outside the specific effort. Code verification is an exemplar: its presence and quality provide a distinct starting point for estimating the numerical error of the specific modeling & simulation activity being assessed. Each of the assessed categories forms the starting point for the specific credibility assessment.
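As one small, hedged illustration of what that numerical-error starting point can look like in practice (the solution values below are made up for the example), the observed order of accuracy and a Richardson-style error estimate can be computed from solutions on three systematically refined grids:

```python
# Observed order of accuracy and a Richardson-style discretization-error
# estimate from three systematically refined grids.  The solution values are
# hypothetical; the refinement ratio is assumed constant.
import math

r = 2.0                                                # grid refinement ratio
f_coarse, f_medium, f_fine = 0.9700, 0.9925, 0.9981    # hypothetical solution values

# Observed order of accuracy from the three solutions.
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)

# Richardson-style estimate of the remaining error on the fine grid.
error_fine = abs(f_fine - f_medium) / (r**p - 1.0)

print(f"observed order of accuracy  p ~ {p:.2f}")
print(f"estimated fine-grid error     ~ {error_fine:.2e}")
```

An observed order near the formal order of the method lends weight to the error estimate; a large mismatch is itself a finding worth recording in the assessment.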

One concrete way to facilitate the delivery of results is to consider the uncertainty budget for a given modeling activity. Here the delivery of results using PIRT and PCMM is enabled by treating them as resource guides for the concrete assessment of an analysis and its credibility. This credibility is quantitatively defined by the uncertainty and by the intended application’s capacity to absorb such uncertainty for the sort of questions to be answered or decisions to be made. If the application is relatively immune to uncertainty, or only needs a qualitative assessment, then large uncertainties are not worrisome. If, on the other hand, an application is operating under tight constraints associated with other considerations (sometimes called a design margin), then the uncertainties need to be carefully considered in making any decisions based on modeling & simulation.

This gets to the topic of how modeling & simulation are being used. Traditionally, modeling & simulation goes through two distinct phases of use. The first phase is dominated by “what if” modeling efforts where the results are largely qualitative and exploratory in nature. The impact of decisions or options is considered on a qualitative basis and guides decisions in a largely subjective way. Here the standards of quality tend to focus on completeness and high-level issues. As modeling & simulation proves its worth for these sorts of studies, it begins to have greater quantitative demands placed on it. This forms a transition to a more demanding case for modeling & simulation, where design or analysis decisions are made. In this case the standards for uncertainty become far more taxing. This is the place where these frameworks become vital tools in organizing and managing the assessment of quality.

This is not to say that these tools cannot assist in earlier uses of modeling & simulation. In particular, the PIRT can be a great tool for determining the modeling requirements for an effort. Similarly, the PCMM can be used to judge the appropriate level of formality and completeness for an effort. Nonetheless, these frameworks are far more important and impactful when utilized for more mature, “engineering”-focused modeling & simulation efforts.

Any high-level integrated view of credibility is built upon the foundation of the issues exposed in the PIRT and PCMM. The problem that often arises in a complex modeling & simulation activity is managing the complexity of the overall activity. Invariably, gaps, missing efforts and oversights will creep into the execution of the work. The basic modeling activity is informed by the PIRT’s structure: are there important parts of the model that are missing, or poorly grounded in available knowledge? From the PCMM: are the important parts of the model tested adequately? The PIRT becomes fuel for assessing the quality of the validation and for planning an appropriate level of activity around important modeling details. Questions regarding the experimental support for the modeling can be explored in a structured and complete manner. While the credibility is not built on the PCMM and PIRT, the ability to manage its assessment is enabled by the way they tame the complexity of modeling & simulation.

In getting to a quantitative basis for the assessment of credibility, defining the uncertainty budget for a modeling & simulation activity can be enlightening. While the PCMM and PIRT provide a broadly encompassing, qualitative view of modeling & simulation quality, the uncertainty budget is ultimately a quantitative assessment of quality. Forcing the production of numerical values for quality is immensely useful and provides important focus. For this to be a useful and powerful tool, the budget must be determined with well-defined principles and disciplined decision-making.

One of the key principles underlying a successful uncertainty budget is the determination of unambiguous categories for assessment. Each of these broad categories can be populated with sub-categories, and finer and finer categorization. Once an effort has committed to a certain level of granularity in defining uncertainty, it is essential that the uncertainty be assessed broadly and holistically. In other words, it is important, if not essential, that none of the categories be ignored.

This can be extremely difficult because some areas of uncertainty are truly uncertain; no information may exist to enable a definitive estimate. This is the core of the difficulty for uncertainty estimation: the unknown value and basis for some quantitative uncertainties. Generally speaking, the unknown or poorly known uncertainties are more important to assess than the well-known ones. In practice the opposite happens: when something is poorly known, the value adopted in the assessment is often implicitly “zero”. This is implicit because the uncertainty is simply ignored; it is not mentioned or assigned any value. This is dangerous. Again, the availability of the frameworks comes in handy to help the assessment identify major areas of effort.

A reasonable decomposition of the sources of uncertainty can fairly generically be defined at a high level: experimental, modeling and numerical sources. We would suggest that each of these broad areas be populated with a finite uncertainty, and that each of the finite values assigned be supported by well-defined technical arguments. Of course, each of these high-level areas will have a multitude of finer grained components describing the sources of uncertainty along with routes toward their quantitative assessment. For example, experimental uncertainty has two major components, observational uncertainty and natural variability. Each of these categories can in turn be analyzed through a host of additional detailed aspects. Numerical uncertainty lends itself to many sub-categories: discretization, linear, nonlinear, parallel consistency, and so on.
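
To make the budget idea concrete, here is a minimal sketch of a roll-up over such categories. The category names and values are purely illustrative, and the root-sum-square combination assumes the contributions are independent; a real budget would document the technical basis for every entry.

```python
# Minimal sketch of an uncertainty budget roll-up (illustrative values only).
from math import sqrt

budget = {
    "experimental": {"observational": 0.03, "natural_variability": 0.05},
    "modeling":     {"model_form": 0.08, "parameters": 0.04},
    "numerical":    {"discretization": 0.02, "nonlinear_solver": 0.01},
}

def rollup(categories):
    """Root-sum-square combination, flagging any category left at zero."""
    total_sq = 0.0
    for name, subs in categories.items():
        cat_sq = sum(v * v for v in subs.values())
        if cat_sq == 0.0:
            print(f"WARNING: '{name}' carries zero uncertainty -- an implicit ZERO?")
        total_sq += cat_sq
    return sqrt(total_sq)

print(f"Total relative uncertainty: {rollup(budget):.3f}")
```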

The key is to provide a quantitative assessment for each category at a high level, with a non-zero value for uncertainty and a well-defined technical basis. We note that the technical basis could very well be “expert” judgment as long as this is explicitly stated. This gets to the core of the matter; the assessments should always be explicit and not leave essential content to implicit interpretation. A successful uncertainty budget would define the major sources of uncertainty for all three areas along with a quantitative value for each. In the case where the technical basis for the assessment is weak or non-existent, the uncertainty should necessarily be large to reflect the lack of technical basis. Like statistical sampling, the benefit of doing more work is a reduction in the magnitude of the uncertainty associated with the quantity. Enforcing this principle means that follow-on work producing larger uncertainties requires the admission that the earlier uncertainties were under-estimated. The assessment process and uncertainty budget are inherently learning opportunities for the overall effort. The assessment is simply an encapsulation of the current state of knowledge and understanding.

Too often in modeling & simulation, efforts receive a benefit from ignoring important sources of uncertainty. By doing nothing to assess uncertainty they report no uncertainty associated with the quantity. Insult is added to this injury when the effort realizes that doing work to assess the uncertainty can then only increase its value. This sort of dynamic becomes self-sustaining: more knowledge and information results in more uncertainty. This is a common and pathological outcome of uncertainty assessment. The reality is that it indicts the earlier assessment: the estimate made earlier was actually too small. A vital principle is that more work in assessing uncertainty should always reduce uncertainty; if this does not happen, the previous assessment was too small. This is an all too common occurrence when a modeling & simulation effort is attempting to convey too large a sense of confidence in its predictive capability. The value of assessed uncertainty should converge to the irreducible core of uncertainty associated with the true lack of knowledge or intrinsic variability of the thing being modeled. In many cases the uncertainty interacts with an important design or analysis decision where a performance margin needs to be balanced against the modeling uncertainty.

An ironic aspect of uncertainty estimation is the tendency to estimate large uncertainties where expertise and knowledge are strong, while under-estimating uncertainty in areas where expertise is weak. This is often seen with numerical error. A general trend in modeling & simulation is the tendency to treat computer codes as black boxes. As such, the level of expertise in the numerical methods used in modeling & simulation can be quite low. This has the knock-on effect of lowering the estimates of numerical uncertainty while the standard solution methodology is used without scrutiny. Quite often the numerical error is completely ignored in analysis. In many cases the discretization error should dominate the uncertainty, but aspects of the solution methodology can color this assessment. Key among these issues is the nonlinear error, which can compete with the discretization error if care is not taken.

This problem is compounded by a lack of knowledge associated with the explicit details of the numerical algorithm and the aspects of the solution that can lead to issues. In this case the PCMM can assist greatly in deducing these structural problems. The PCMM provides several key points that allow the work to proceed with greater degrees of transparency with regard to the numerical solution. The code verification category provides a connection to the basis for confidence in any numerical method. Are the basic features and aspects of the numerical solution being adequately tested? The solution verification category asks whether the basic error analysis and uncertainty estimation is being done. Again the frameworks encourage a holistic and complete assessment of important details.

The final aspect to highlight in the definition of credibility is the need for honesty and transparency in the assessment. Too often assessments of modeling & simulation lack the fortitude to engage in fundamental honesty regarding the limitations of the technology and science. If the effort is truly interested in not exposing its flaws, no framework can help. Much of the key value in the assessment is defining where effort can be placed to improve the modeling & simulation. It should help to identify the areas that drive the quality of the current capability.

If the effort is interested in a complete and holistic assessment of its credibility, the frameworks can be invaluable. Their value lies in making certain that important details and areas of focus are not over- or under-valued in the assessment. The areas of strong technical expertise are often focused upon, while areas of weakness can be ignored. This can produce systematic weaknesses in the assessment that may produce wrong conclusions. More perniciously, the assessment can, willfully or not, ignore systematic shortcomings in a modeling & simulation capability. This can lead to a deep under-estimate of uncertainty while significantly over-estimating confidence and credibility.

For modeling & simulation efforts properly focused on an honest and high-integrity assessment of their capability, the frameworks of PCMM and PIRT can be an invaluable aid. The assessment can be more focused and complete than it would be in their absence. The principal good of the frameworks is to make the assessment explicit and intentional, and to avoid unintentional oversights. Their use can go a long way toward providing direct evidence of due diligence in the assessment and highlighting the quality of the credibility provided to whoever utilizes the results.

We should not judge people by their peak of excellence; but by the distance they have traveled from the point where they started.

― Henry Ward Beecher

Resolving and Capturing As Paths To Fidelity

02 Friday Oct 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

Divide each difficulty into as many parts as is feasible and necessary to resolve it.

― René Descartes

In numerical methods for partial (and integral) differential equations there are two major routes to solving problems “well”. One is resolving the solution (physics) with some combination of accuracy and discrete degrees of freedom. This is the path of brute force, where much of the impetus to build ultra-massive computers comes from. It is the tool of pure scientific utilization of computing. The second path is capturing the physics, where ingenious methods allow important aspects of the solution to be found without every detail being known. This methodology forms the backbone of, and enables, most useful applications of computational modeling. Shock capturing is the archetype of this approach: the actual singular shock is smeared (projected) onto the macroscopic grid through the application of a carefully chosen dissipative term, instead of demanding infinite resolution.

In reality both approaches are almost always used in modeling, but their differences are essential to recognize. In many cases we resolve unique features in a model, such as the geometric influence on the result, while capturing more universal features like shocks or turbulence. In practice, modelers are rarely so intentional about this or about which aspect of numerical modeling practice is governing their solutions. If they were, the practice of numerical modeling would be so much better. Notions of accuracy, fidelity and general intentionality in modeling would improve greatly. Unfortunately we appear to be on a path that allows an almost intentional “dumbing down” of numerical modeling by dulling the level of knowledge associated with the details of how solutions are achieved. This is the black box mentality that dominates modeling and simulation in the realm of applications.

Nowhere does the notion of resolving physics come into play like it does in direct numerical simulation (DNS) of turbulence, or direct simulation of any other physics for that matter. Turbulence is the archetype of this approach. It also serves as a warning to anyone interested in attempting it. In DNS there is a conflict between fully resolving the physics and computing the most dynamic physics possible given existing computing resources. As a result the physics being computed by DNS rarely uses a mesh in the “asymptotic” range of convergence. Despite being nominally fully resolved, DNS is rarely if ever subjected to numerical error estimation. In the cases where this has been done, the accuracy of DNS falls short of expectations for “resolved” physics. In all likelihood truly resolving the physics would require far more refined meshes than current practice dictates, and would undermine the depth of scientific exploration (lower Reynolds numbers). The balance between quality and exploration in science is indeed a tension that remains ironically unresolved.

Perhaps more concerning is the tendency to only measure the integral response of systems subject to DNS. We rarely see a specific verification of the details of the small scales that are being resolved. Without all the explicit and implicit work to assure the full resolution of the physics, one might be right to doubt the whole DNS enterprise and its results. It remains a powerful tool for science and a massive driver for computing, but due diligence on its veracity remains a sustained shortcoming in its execution. The great degree of faith placed in DNS results should be earned through science rather than simply granted by fiat, as we tend to do today.

Given the issues with fully resolving physics, where does “capturing” fit? In principle capturing means that the numerical method contains a model that allows it to function properly when the physics is not resolved. It usually means that the method will reliably produce integral properties of the solution. This is achieved by building the right asymptotic properties into the method. The first and still archetypal method is shock capturing via artificial viscosity. The method was developed to marry a shock wave to a grid by smearing it across a small number of mesh cells and adding the inherent entropy production to the method. Closely related to this methodology is large eddy simulation, which allows under-resolved turbulence to be computed. The subgrid model in its simplest form is exactly the artificial viscosity from the first shock capturing method, and allows the flow to dissipate at a large scale without computing the small scales. It also stabilizes what would otherwise be an unstable computation.
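
As a concrete illustration, here is a minimal sketch of a von Neumann–Richtmyer style artificial viscosity in one dimension, written at cell faces; the coefficients and the face-centered arrangement are typical choices for illustration, not a canonical production implementation.

```python
import numpy as np

def artificial_viscosity(rho, u, c, c_lin=0.5, c_quad=2.0):
    """Sketch of a von Neumann-Richtmyer style artificial viscosity.

    rho, u, c are cell-centered density, velocity and sound speed arrays.
    The dissipative pressure q is evaluated at the faces between cells and
    is switched on only where the flow is compressing (du < 0)."""
    du = np.diff(u)                         # velocity jump across each face
    rho_f = 0.5 * (rho[:-1] + rho[1:])      # face-averaged density
    c_f = 0.5 * (c[:-1] + c[1:])            # face-averaged sound speed
    q = np.where(du < 0.0,
                 rho_f * (c_quad * du**2 + c_lin * c_f * np.abs(du)),
                 0.0)
    return q  # added to the pressure in the momentum and energy updates
```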

Another major class of physics capturing is interface or shock tracking. Here a discontinuity is tracked with the presumption that it is a sharp transition between two states. These states could be the interface between two materials, or pre- and post-shock values. In any case a number of assumptions are encoded into the method regarding the evolution of the interface and how the states change. Included are rules for the representation of the solution, which define the method’s performance. Of course, stability of the method is of immense importance, and the assumptions made in the solution can have unforeseen side effects.

One of the key pragmatic issues with the resolved-versus-captured distinction is that most modern methods blend the two concepts. It is a best-of-both-worlds strategy. Great examples exist in the world of high-order shock capturing methods. Once a shock exists, all of the rigor in producing high-order accuracy becomes a serious expense compared to the gains in accuracy. The case for using higher than second-order methods remains weak to this day. The question to be answered more proactively by the community is “how can high-order methods be used productively and efficiently in production codes?”

Combining the concepts of resolving and capturing is often done without any real thought on how this impacts issues of accuracy and modeling. The desire is to have the convenience, stability and robustness of capturing with the accuracy associated with efficient resolution. Achieving this for anything practical is exceedingly difficult. A deep secondary issue is the modeling inherent in capturing physics. The capturing methodology is almost always associated with embedding a model into the method. People will then unthinkingly model the same physical mechanisms again, resulting in a destructive double counting of physical effects. This can confound any attempt to systematically improve models. The key questions to ask about the solution are: “is this feature being resolved, or is this feature being captured?” The demands on the computed solution are far different depending on the answer.

These distinctions and differences all become critical when the job turns to assessing the credibility of computational models. The modeling aspects and numerical error aspects, along with the overall physical representation philosophy (i.e., meshing), become critical in defining the credibility of a model. Too often those using computer codes to conduct modeling in scientific or engineering contexts are completely naïve and oblivious to the subtleties discussed here.

In many cases they are encouraged to be as oblivious as possible about many of the details important in numerical modeling. In those cases the ability to graft any understanding of the dynamics of the numerical solution of the governing equations onto their analysis becomes futile. This is common when the computer code solving the model is viewed as a turnkey, black box sort of tool. Customers accepting results presented in this fashion should be inherently suspicious of the quality. Of course, the customers are often encouraged to be even more naïve and nearly clueless about any of the technical issues discussed above.

Resolve, and thou art free.

― Henry Wadsworth Longfellow

 

Today We Value Form Over Substance

25 Friday Sep 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Our greatest fear should not be of failure but of succeeding at things in life that don’t really matter.

― Francis Chan

Vacations are a necessary disconnection from the drive and drain of the everyday events of our lives. They are also necessary to provide perspective on things that become too commonplace in the day-in, day-out. My recent little vacation was no different. I loved both Germany and France, but came to appreciate the USA more too. Europeans, and particularly Parisians, smoke in tremendous numbers. It makes a beautiful city like Paris a bit less ideal and a bit uncomfortable. My wife has recently had major surgery that limits her mobility, and European accommodations for disabilities and handicaps are terrible in comparison to the USA.

So the lesson to be learned is that despite many issues, the USA is still a great place and better than Europe in some very real ways. These interesting observations are not the topic here, which is another observation born of vacation time and its numerous benefits.

Bureaucracy destroys initiative. There is little that bureaucrats hate more than innovation, especially innovation that produces better results than the old routines. Improvements always make those at the top of the heap look inept. Who enjoys appearing inept?

― Frank Herbert

Another thing I noted was the irritation I feel when formality trumps substance at work. Formality has its place, but when it stifles initiative, innovation and quality, it does more harm than good. There was an activity at work that I had finished prior to leaving. Anything related to the actual work of it was utterly complete. It involved reviewing some work and denoting whether they had completed the work promised (it was, in our parlance, a milestone). They had, and in keeping with current practice for such things, the bar was set so low that they would have had an almost impossible time not succeeding. Despite the relative lack of substance to the entire affair, the old-fashioned memo to management was missing (the new-fashioned memo with electronic signature was finished and delivered). Here function was subservient to form, and effort was expended in a completely meaningless way.

I had committed to taking a real vacation; I was not going to waste Paris doing useless administrative work. I screened my e-mail and told the parties involved that I would deal with it upon my return. Yet people wouldn’t let go of this. They had to have this memo signed in the old-fashioned way. In the end I thought, what if they had put as much effort into doing old-fashioned work? What if, instead of dumbing down the work and making it meaningless to assure success, the work had been far-reaching and focused on extending the state of practice? Well then it might have been worth a bit of extra effort, but the way we do work today, this administrative flourish was simply insult added to injury.

Management cares about only one thing. Paperwork. They will forgive almost anything else – cost overruns, gross incompetence, criminal indictments – as long as the paperwork’s filled out properly. And in on time.

― Connie Willis

The experience makes it clear: today we value form over function, appearances over substance. It is an apt metaphor for how things work today. We’d rather be successful doing useless work than unsuccessful doing useful work. A pathetic success is more valuable than a noble failure. Any failure is something that induces deep and unrelenting fear. The inability to fail is hampering the success of the broader enterprise to such a degree as to threaten the quality of the entire institution (i.e., the National Lab system).

The issue of how to achieve the formality and process desired without destroying the essence of the Labs’ excellence has not been solved. Perhaps giving credit for innovative, risk-taking work, so that the inevitable failures are not penalized, would be a worthy start. In terms of impact on all the good that the Labs do, the loss of the engines of discovery they represent negatively impacts national security, the economy, the environment and the general state of human knowledge. Reversing these impacts and putting the Labs firmly in the positive column would be a monumental improvement.

It is hard to fail, but it is worse never to have tried to succeed.

― Theodore Roosevelt

The epitomes of these pathetic successes are milestones. Milestones measure our programs, and seemingly the milestones denote important, critical work. Instead of driving accomplishments, the milestones sow the seeds of failure. This is not explicitly recognized failure, but the failure driven by the long-term decay of innovative, aggressive technical work. The reason is the view that milestones cannot fail for any reason, and this knowledge drives any and all risk from the definition of the work. People simply will not take a chance on anything if it is associated with a milestone. If we are to achieve excellence this tendency must be reversed. Somehow we need to reward the taking of risks that power great achievements. We are poorer as a society for allowing the current mindset to become the standard.

Without risk, we systematically accomplish less innovative, important work, and ironically package the accomplishment of relatively pathetic activities as success. To ensure success, good work is rendered pathetic so that the risk of failure is completely removed. It happens all the time, over and over. It is so completely ingrained into the system that people don’t even realize what they are doing. To make matters worse, milestones command significant resources be committed toward their completion. So we have a multitude of sins revolving around milestones: lots of money going to execute low-risk research masquerading as important work.

Over time these milestones come to define the entire body of work. This approach to managing the work at the Labs is utterly corrosive and has aided the destruction of the Labs as paragons of technical excellence. We would be so much better off if a large majority of our milestones failed, and failed because they were so technically aggressive. Instead all our milestones succeed because the technical work is chosen to be easy. Reversing this trend requires some degree of sophisticated thinking about success. In a sense providing a benefit for conscientious risk-taking could help. We could still rely upon the current risk-averse thinking to provide systematic fallback positions, but we would avoid making the safe, low-risk path the default.

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

One place where formality and substance collide constantly is the world of V&V. The conduct of V&V is replete with formality and I generally hate it. We have numerous frameworks and guides that define how it should be conducted. Doing V&V is complex and deep, never being fully defined or complete. Writing down the process for V&V is important simply because it forces us to grapple with the broader boundaries of what is needed. It is work that I do, and continue to do, but following a framework or guide isn’t the essence of what is needed to do V&V. An honest and forward-looking quality mindset is what V&V is about.

It is a commitment to understanding, curiosity and quality of work. All of these things are distinctly lacking in our current culture of formality over substance. People can cross all the t’s and dot all the i’s, yet completely fail to do a good job. Increasingly the good work is being replaced by formality of execution without the “soul” of quality. This is what I see: lots of lip service paid to completion of work within the letter of the law, and very little attention to a spirit of excellence. We have created a system that embraces formality instead of excellence as the essence of professionalism. Excellence should remain the central tenet of our professional work, with formality providing structure but not the measure of it.

Bureaucracies force us to practice nonsense. And if you rehearse nonsense, you may one day find yourself the victim of it.

― Laurence Gonzales

Let’s see how this works in practice within V&V, and where a different perspective and commitment would yield radically different results.

I believe that V&V should first and foremost be cast as the determination of uncertainties in modeling and simulation (and necessarily experimentation as the basis for validation). Other voices speak to the need to define the credibility of the modeling and simulation enterprise, which is an important qualitative setting for V&V work. Both activities combine to provide a deep expression of commitment to excellence and due diligence that should provide a foundation for quality work.

I feel that uncertainties are the correct centering of the work in a scientific context. These uncertainties should always be quantitatively defined; that is, they should never be ZERO, but always have a finite value. V&V should push people to make concrete quantitative estimates of uncertainty based on technical evidence accumulated through focused work. Sometimes this technical evidence is nothing more than expert judgment or accumulated experience, but most of the time it is much more. The work done today, whether purely within the confines of customer work or research shown at conferences or in journals, does not meet these principles. The failure to meet these principles isn’t a small, quibbling amount, but a profound systematic failure. The failure isn’t really a broad moral problem, but a consequence of fundamental human nature at work. Good work does not provide a systematic benefit and in many cases actually provides a measurable harm to those conducting it.

Today, many major sources of uncertainty in the modeling, simulation or experimentation are unidentified, unstudied and systematically reported as being identically ZERO. Often this value of ZERO is simply implicit. This means that the work doesn’t state that it is “ZERO,” but rather fails to examine the uncertainties at all, leaving them a nonentity. In other words the benefit of doing no work at all is reporting a smaller uncertainty. The nature of the true uncertainty is invisible. This is a recipe for absolute disaster.

A basic principle is that doing more work should result in smaller uncertainties. This is like statistical sampling, where gathering more samples systematically produces a smaller statistical error (look at the standard error in frequentist statistics). The same thing applies to modeling or numerical uncertainty. Doing more work should always reduce uncertainties, but the uncertainty is always finite and never identically ZERO. Instead, by doing no work at all, we allow people to report ZERO as the uncertainty. Doing more work can then only increase the uncertainty. If doing more work increases the uncertainty, the proper conclusion is that your initial uncertainty estimate was too small. The current state of affairs is a huge problem that undermines progress.
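
The statistical analogy is easy to see in a few lines of code; this is a minimal sketch with made-up Gaussian samples, just to show how the standard error shrinks with more work.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    samples = rng.normal(loc=0.0, scale=1.0, size=n)   # synthetic "measurements"
    std_err = samples.std(ddof=1) / np.sqrt(n)          # frequentist standard error
    print(f"N = {n:6d}   standard error ~ {std_err:.4f}")
# More samples (more work) shrink the error like 1/sqrt(N),
# but the uncertainty never becomes identically zero.
```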

Here is a very common example of how this manifests itself in practice. The vast majority of computations, for all purposes, do nothing to estimate numerical errors, and get away with reporting an effective value of ZERO for the numerical uncertainty. Instead of ZERO, if you have done little or nothing to structurally estimate uncertainties, your estimates should be larger than the truth to account for your lack of knowledge. Less knowledge should never be rewarded with reporting a smaller uncertainty.
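
For the numerical piece, the basic estimate is not hard to produce. Below is a minimal sketch of Richardson extrapolation from three meshes refined by a constant ratio; the input values are illustrative, and the formula assumes monotone convergence in the asymptotic range, which is exactly the assumption that should be checked and reported.

```python
import numpy as np

def richardson_estimate(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of convergence and a discretization-error estimate on the
    finest mesh, from solutions on three meshes refined by the ratio r."""
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
    err_fine = abs(f_medium - f_fine) / (r**p - 1.0)
    return p, err_fine

# Illustrative values of some quantity of interest on meshes h, h/2 and h/4.
p, err = richardson_estimate(1.120, 1.058, 1.030)
print(f"observed order ~ {p:.2f}, fine-mesh error estimate ~ {err:.4f}")
```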

For example, you do some work and find out that the numerical uncertainty is larger than your original estimate. The consequence is that your original estimate was too small, and you should learn how to avoid this problem in the future. Next time, doing more work should reward you with a smaller uncertainty to report. You should also make the mea culpa and admit that your original estimate was overly optimistic. Remember, V&V is really about being honest about the limitations of modeling and simulation. Too often people get hung up on being able to do a complete technical assessment before reporting any uncertainty. If the full technical work cannot be executed, they end up presenting nothing at all, or ZERO.

People get away with not doing numerical error estimation in some funny ways. Here is an example that starts with the creation of the numerical model for a problem. If the model is created so that it uses all the reasonably available computing resources, it can avoid estimating numerical errors in a couple of ways. Often these models are created with an emphasis on computational geometric resolution. Elements of the problem are meshed using computational elements that are one element thick (or wide, or tall). As a result the model cannot be (simply) coarsened to produce a cheaper model that can assist in computational uncertainty estimation. Because it has used all the reasonable resources, refining the model and completing a simulation is impossible without heroic efforts. You effectively have only a single mesh resolution to work with, by fiat.

Then they often claim that their numerical errors are really very small, and that any effort to estimate these small errors would be a waste of time. This sort of twisted logic demands a firm, unequivocal response. First, if your numerical error is so small, then why are you using such a computationally demanding model? Couldn’t you get by with a bit more numerical error since it is so small as to be regarded as negligible? Of course their logic doesn’t go there, because their main idea is to avoid doing anything, not to actually estimate the numerical uncertainty or do anything with the information. In other words, this is a work avoidance strategy and complete BS, but there is more to worry about here.

It is troubling that people would rely upon meshes where a single element defines a length scale in the problem. Almost no numerical phenomena I am aware of are resolved in a single element, with the exception of integral properties such as conservation, and only if this is built into the formulation. Every quantity associated with the single element is simply there for integral effect, and could be accommodated with even less resolution. It is almost certainly not “fully resolved” in any way, shape or form. Despite these rather obvious realities of numerical modeling of physical phenomena, the practice persists and in fact flourishes. The credibility of such calculations should be regarded as quite suspect without extensive evidence to the contrary.

In the end we have embraced stupidity and naivety as principles packaged as formality and process. The form of the work, well planned and executed as advertised, substitutes for defining quality. Our work is delivered with an unwavering over-confidence that is not supported by the quality of the foundational work. We would be far better off looking to intelligence, curiosity and sophistication with a dose of wariness as the basis for work. Each of these characteristics forms a foundation that naturally yields the best effort possible, rather than the systematic reduction of risk that fails to push the boundaries of knowledge and capability. We need to structure our formal processes to encourage our best, rather than frighten away the very things we depend upon for success as individuals, institutions and a people.

 

Your Time Step Control is Wrong

18 Friday Sep 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment


No man needs a vacation so much as the man who has just had one.

― Elbert Hubbard

A couple of things before I jump in: I was on vacation early this week, and by vacation I mean vacation; nothing from work was happening, it was Paris and that’s too good to waste working. Also, the title of the post is only partially right; the situation is actually much worse than that.

Among the most widely held and accepted facts for solving a hyperbolic PDE explicitly is the time step estimate, and it is not good enough. Not good enough to be relied upon for production work. That’s a pretty stunning thing to say, you might think, but it’s actually pretty damn obvious. Moreover this basic fact also means that the same bounding estimates aren’t good enough to ensure proper entropy production either, since they are based on the same basic thing. In short order we have two fundamental pieces of the whole solution technology that can easily fall completely short of what is needed for production work.

It is probably holding progress back as more forgiving methods will be favored in the face of this shortcoming. What is more stunning is how bad the situation can actually be.

If you don’t already know, what is the accepted and standard approach? The time step size (inversely) and the dissipation (directly) are proportional to the characteristic speeds in the problem, along with other specifics of a given method. The characteristic speeds are the absolute key to everything. For simple one-dimensional gas dynamics, the characteristics are u \pm c and u, where u is the fluid velocity and c is the sound speed. A simple bound can be used, \lambda_{\max}=|u|+c. The famous CFL or Courant condition gives a time step of \Delta t = \min(A \Delta x/\lambda_{\max}), where A is a positive constant typically less than one. Then you’re off to the races and computing solutions to gas dynamics.
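
In code the standard practice just described is only a couple of lines; this is a minimal sketch for one-dimensional gas dynamics using the \lambda_{\max}=|u|+c bound, i.e., exactly the estimate the rest of the post argues is not good enough.

```python
import numpy as np

def cfl_time_step(u, c, dx, A=0.5):
    """Standard explicit time-step estimate: bound the local wave speed by
    |u| + c and scale the cell-crossing time by a CFL number A < 1."""
    lam_max = np.abs(u) + c          # cell-by-cell characteristic speed bound
    return A * np.min(dx / lam_max)
```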

For dissipation purposes you look for the largest local wave speeds for a number of simple Riemann solvers, or other dissipation mechanism. This can be done locally or globally. If you choose the wave speed large enough, you are certain to meet the entropy condition.

The problem is that this way of doing things is as pervasive as it is wrong. The issue is that it ignores nonlinearities that can create far larger wave speeds. This happens on challenging problems and at the most challenging times for codes. Moreover, the worst situation for this entire aspect of the technology happens under seemingly innocuous circumstances, a rarefaction wave, albeit it becomes acute for very strong rarefactions. The worst case is the flow into a vacuum state, where the issue can become profoundly bad. Previously I would account for the nonlinearity of sound speeds for shock waves by adding a term similar to an artificial viscosity to the sound speed. This is only proportional to local velocity differences, and does not account for pressure effects. For an ideal gas the approach looks like C = C_o + \frac{\gamma + 1}{2}|\Delta u|. This helps the situation a good deal and makes the time step selection or the wave speed estimate better, but far worse things can happen (the vacuum state issue mentioned above).
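
As a small worked illustration of the correction just given (a sketch of my reading of the formula, with illustrative numbers):

```python
def corrected_sound_speed(c0, du, gamma=1.4):
    """Sound-speed bound augmented for nonlinear steepening:
    C = c0 + (gamma + 1)/2 * |du|, with du a local velocity difference.
    This helps for shocks; it does not repair the strong-rarefaction/vacuum case."""
    return c0 + 0.5 * (gamma + 1.0) * abs(du)

# e.g. c0 = 1.0 and |du| = 0.8 give C = 1.96, nearly double the naive bound.
print(corrected_sound_speed(1.0, 0.8))
```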

Here is the problem that causes this issue to persist. To really get your arms around it requires a very expensive solution to the Riemann problem, the exact Riemann solver. Moreover, getting the bounding wave speed in general requires a nonlinear iteration using Newton’s method; checking my own code, that means 15 iterations of the solver. No one is going to do this to control time steps or estimate wave speeds.

Yuck!

I will postulate that this issue can be dealt with approximately. A first step would be to include the simplest nonlinear Riemann solver to estimate things; this uses the acoustic impedances to provide an estimate of the interior state of the Riemann fan. Here one puts the problem in a Lagrangian frame and solves the approximate wave curves, on the left side -\rho_l C_l (u_* - u_l) = p_* - p_l and on the right side \rho_r C_r (u_r -u_*) = p_r - p_*, solving for u_* and p_*. Then use the jump conditions for density, 1/\rho_* - 1/\rho_l = (u_* - u_l)/\rho_l C_l, to find the interior densities (and similarly on the right). Then compute the interior sound speeds to get the bounds. The problem is that for a strong shock or rarefaction this approximation comes up very short, very short indeed.
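
Here is a minimal sketch of that acoustic (linearized) estimate for an ideal gas; the relations follow the wave curves above, with signs chosen so that a compressive star state increases the density, and the resulting bound is only as good as the acoustic approximation itself.

```python
import numpy as np

def acoustic_wave_speed_bound(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4):
    """Acoustic Riemann solver sketch: star state and a bound on the fastest
    signal speed, assuming ideal-gas sound speeds."""
    c_l = np.sqrt(gamma * p_l / rho_l)
    c_r = np.sqrt(gamma * p_r / rho_r)
    Z_l, Z_r = rho_l * c_l, rho_r * c_r                  # acoustic impedances

    u_star = (Z_l * u_l + Z_r * u_r + p_l - p_r) / (Z_l + Z_r)
    p_star = (Z_r * p_l + Z_l * p_r + Z_l * Z_r * (u_l - u_r)) / (Z_l + Z_r)

    # Jump conditions for specific volume give the interior densities.
    rho_star_l = 1.0 / (1.0 / rho_l + (u_star - u_l) / Z_l)
    rho_star_r = 1.0 / (1.0 / rho_r + (u_r - u_star) / Z_r)

    # For strong rarefactions these can go non-physical -- exactly the failure
    # mode discussed in the text.
    c_star_l = np.sqrt(gamma * max(p_star, 0.0) / rho_star_l)
    c_star_r = np.sqrt(gamma * max(p_star, 0.0) / rho_star_r)
    lam_max = max(abs(u_l) + c_l, abs(u_r) + c_r,
                  abs(u_star) + c_star_l, abs(u_star) + c_star_r)
    return u_star, p_star, lam_max
```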

The way to do better is to use a quadratic approximation for the wave speeds, where the acoustic impedance changes to account for the strength of the interaction. I used these before to come up with a better approximate Riemann solver. The approximation is largely the same as before except that \rho C_o \rightarrow \rho C_o + \rho s (u_* - u_o). Now you solve a quadratic problem, which is still closed form, but you have to deal with an unphysical root (choosing the right one is straightforward using physical principles). For a strong rarefaction this still doesn’t work very well because the wave curve has a local minimum at a much lower velocity than the vacuum velocity.

I think this can be solved, but I need to work out the details and troubleshoot. Aside from this the details are similar to the description above.

Rider, William J. “An adaptive Riemann solver using a two-shock approximation.” Computers & Fluids 28.6 (1999): 741-777.

Multimat2015: A Biannual Festival on Computing Compressible Multiple Materials

11 Friday Sep 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.
― Werner Heisenberg

One of my first posts talked about Multimat2013, a biannual meeting of scientists specializing in the computation of multi-material compressible flows. Last time, in 2013, we met in San Francisco; this time in Würzburg, Germany. These conferences began as a minisymposium at an international congress in 2000. The first actual “Multimat” was 2002 in Paris. I gave the very first talk at that meeting (and it was a near disaster, an international jet lag tale with admonitions about falling asleep when you arrive in Europe: don’t do it!). The second conference was in 2005, and then every two years thereafter. The spiritual leader for the meetings and general conference chairman is Misha Shashkov, a Lab fellow at Los Alamos. Taken as a whole, the meetings mark a remarkable evolution and renaissance for numerical methods, particularly Lagrangian frame shock capturing.

Sometimes going to a conference is completely justified by witnessing a single talk. This was one of those meetings. Most of the time we have to justify going to conferences by giving our own talks. The Multimat2015 conference was a stunning example of just how wrong-headed this point of view is. The point of going to a conference is to be exposed to new ideas from a different pool of ideas than the one you usually swim in. It is not to market or demonstrate our own ideas. This is not to say that giving talks at conferences isn’t valuable; they just aren’t the principal or most important reason for going. This is a key way in which our management simply misunderstands science.

The morning of the second day I had the pleasure of seeing a talk by Michael Dumbser (University of Trento). I’ve followed his career for a while and deeply appreciate the inventive and interesting work he does. For example, I find his PnPm methods to be a powerful and interesting approach to attacking problems discretely. Nonetheless I was ill prepared for the magnificent work he presented at Multimat2015. One of the things that has held discontinuous Galerkin methods back for years is nonlinear stabilization. I believe Michael has “solved” this problem, at least conceptually.

Like many brilliant ideas, his takes a problem that cannot be solved well and recasts it into a problem that we know how to solve. The key idea is to identify elements that need nonlinear stabilization (or, in other words, the action of a limiter). Once identified, these elements are converted into a number of finite volume elements corresponding to the degrees of freedom in the discontinuous basis used to discretize the larger element. Then a nonlinear stabilization is applied to the finite volumes (using monotonicity limiters, WENO, etc., whatever you like). Once the stabilized solution is found on the temporary finite volumes, the evolved original discontinuous basis is recovered from the finite volume solution. Wow, what an amazingly brilliant idea! This provides a methodology that can retain high sub-element level resolution of discontinuous solutions.

The problem that remains is producing a nonlinear stabilization suitable for production use that goes beyond monotonicity preservation. This was the topic of my talk: how does one move to something better than mere monotonicity preservation as a nonlinear stabilization technique while being robust enough for production use? We need to produce methods that stabilize solutions physically, yet retain accuracy to a larger degree, while producing results robustly enough for use in a production setting. Once such a method is developed it would improve Dumbser’s method quite easily. A good step forward would be methods that robustly avoid damping isolated, well-resolved extrema. Just as first order methods are the foundation for monotonicity preserving methods, I believe that monotonicity preserving methods can form the basis for extrema-preserving methods.

I often use a quote from Scott Adams to describe the principle for designing high-resolution methods, “Logically all things are created by a combination of simpler less capable components,” pointed out by Culbert Laney in his book Computational Gas Dynamics. The work of Dumbser is a perfect exemplar of this principle in many ways. Here the existing state of the art methods for Gas Dynamics are used as fundamental building blocks for stabilizing the discontinuous Galerkin methods.

Another theme from this meeting is the continued failure by the broader hyperbolic PDE community to quantify errors, or to quantify the performance of the methods used. We fail to do this even when we have an exact solution… Well, this isn’t entirely true. We quantify the errors when we have an exact solution that is continuously differentiable. So when the solution is smooth we show the order of accuracy. Change the problem to something with a discontinuity and the quantification always goes away, replaced with hand-waving arguments and expert judgment.

The reason is that the rate of convergence for methods is intrinsically limited to first order at a discontinuity. Everyone then assumes that the magnitude of the error is meaningless in this case. The truth is that the magnitude of the error can be significantly different from method to method, and an array of important details changes the error character of the methods. We have completely failed to report on these results as a community. The archetype of this character is Sod’s shock tube, the “hello World” problem for shock physics. We have gotten into the habit of simply showing results for this problem that demonstrate that the method is reasonable, but never reporting the error magnitude. The reality is that this error magnitude can vary by a factor of 10 for commonly used methods at the same grid resolution. Even larger variations occur for more difficult problems.
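
Reporting the missing number takes very little code. Here is a minimal sketch of the discrete L1 error norm on a uniform grid; the exact_density callable is assumed to be supplied by, say, an exact Riemann solver for Sod’s problem evaluated at the final time.

```python
import numpy as np

def l1_error(x, rho_numerical, exact_density):
    """Discrete L1 error norm of the density against an exact solution
    sampled on the same (uniform) grid at the same time."""
    dx = x[1] - x[0]
    return dx * np.sum(np.abs(rho_numerical - exact_density(x)))

# Reporting this single number for Sod's problem, at a stated resolution,
# is what would let methods be compared quantitatively rather than by eye.
```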

The problems with a lack of quantification continue and magnify in more than one dimension. For problems with discontinuities there are virtually no exact solutions in multiple dimensions (genuinely multi-dimensional, as opposed to one-dimensional problems run in multiple dimensions). One of the key aspects of multiple dimensions is vorticity. This renders problems chaotic and non-unique. This only amplifies the hand waving and expert statements on methods and their relative virtues. I believe we should be looking for ways to move past this habit and quantify the differences.

This character in no small way is holding back progress. As long as hand waving and expert judgment are the guide for quality, progress and improvements in methods will be hostage to personality instead of letting the scientific method guide choices.

The general issue is quantifying the performance of methods: where it is easy it isn’t done, and where it is hard it isn’t even attempted. Interesting multidimensional problems all have vorticity, and the results are all compared in the infamous eyeball or viewgraph norm. Mixing and vorticity are essential, but never measured. All comparisons are expert-based and rest on hand-waving arguments, so the current experts and their methods will continue to hold sway and progress will wane.

The heart of excitement at previous meetings, collocated, cell-centered Lagrangian methods have now become so common and broadly used as to be passé. A number of good talks were given on this class of methods showing a broad base of progress. It is remarkable that such methods have now perhaps displaced the classical staggered mesh methods originating with Von Neumann as the stock-in-trade of this community. The constant and iterative progress with these methods awaits the next leap in performance, and the hard work of transitioning to being a workhorse in production solutions. This work is ongoing and may ultimately provide the impetus for the next great leap forward.

Aside from this work, the meeting operates within the natural tension between physics, mathematics, engineering and computer science, to its great benefit. In addition to this beneficial tension, the meeting also straddles the practical and pragmatic needs of code developers for production software, and university research. Over the years we have witnessed a steady stream of ideas and problems flowing back and forth between these disparate communities. As such the meeting is a potpourri of variety and perspective providing many great ideas for moving the overall technical community forward through the creative utilization of these tensions.

We tend to overvalue the things we can measure and undervalue the things we cannot.
― John Hayes

Now I am looking forward to the next meeting two years hence in Santa Fe.

What Do We Want From Our Labs?

04 Friday Sep 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Some men are born mediocre, some men achieve mediocrity, and some men have mediocrity thrust upon them.

― Joseph Heller


In taking a deep breath and pausing from my daily grind, I considered the question, “what do we want from our Labs?” By Labs I mean the loose collection of largely government-sponsored research institutions supporting everything from national defense to space exploration. By “we” I mean the Nation and its political, intellectual and collective leadership. Having worked at a Lab for the last 25-plus years under the auspices of the Department of Energy (working for two Labs actually), I think this question is worth a lot of deep thinking and consideration.

A reasonable conclusion drawn from the experiences given my span of employment would be,

“We want to destroy the Labs as competent entities.”

I’m fairly sure this isn’t the intent of our National Leadership, but rather an outcome of other desires. Otherwise well-intentioned directives that have deeply damaging unintended consequences drive the destruction of the Labs. While calling the directives well-intentioned is probably correct, the mindset driving them is largely a combination of fear, control and unremittingly short-term thinking. Such destruction is largely a consequence of changes in National behavior that sweep across every political and economic dimension. Many of the themes parallel the driving forces of inequality and a host of other societal transformations.

One of the key aspects of the Labs that deeply influenced my own decision to work there was their seemingly enduring commitment to excellence. In the years since, this commitment has wavered and withered to the point where excellence gets lip service and little else. The environment no longer supports the development or maintenance of truly excellent science and engineering (despite frequent protestations of being “World Class”). Our National Labs were once truly World Class, but no more. Decades of neglect and directives that conflict directly with achieving World Class results have destroyed any and all claim to such lofty status. We are now part of an acceptance of mediocrity that we willfully ignore while ignorantly claiming to be excellent and world class.

World Class requires a commitment to quality of the sort we cannot muster, because it also requires a degree of independence and vigorous intellectual exchange that our masters can no longer tolerate. So, here is my first answer to what they want from the Labs,

“obedience” and “compliance”

above all else. Follow rules and work on what you are told to work on. Neither of these values can ever lead to anything “World Class”. I would submit that these values could only undermine and work to decay everything needed to be excellent.

One place where the fundamental nature of excellence and the governance of the Labs intersect is the expectations on work done. If one principle dominates the expectations that the Labs must comply with is

“do not ever ever ever fail at anything”.

The kneejerk response from politicians, the public and the increasingly scandal-mongering media is “that seems like a really good thing.” If they don’t fail then money won’t be wasted and they’ll just do good work. Wrong and wrong! If we don’t allow failure, we won’t allow success either, the two are intertwined as surely as life and death.

In reality this attitude has been slowly strangling the quality from the Labs. By never allowing failure we see a steady march toward mediocrity and away from excellence. No risks are ever taken, and the environment to develop World Class people, research and results is smothered. We see an ever-growing avalanche of accounting and accountability to make sure no money ever gets wasted doing anything that wasn’t pre-approved. Meticulous project planning swallows up any and all professional judgment, risk-taking or opportunities. Breakthroughs are scheduled years in advance, absurd as that sounds.

People forget how fast you did a job – but they remember how well you did it

― Howard Newton

The real and dominant outcome of all this surety is a loss of excellence and innovation, and an ever-diminishing quality. The real losers will be the Nation, its security and its economic vitality into the future. Any vibrancy and supremacy we have in economics and security is the product of our science and technology excellence thirty to forty years ago. We are vulnerable to being preyed upon by whoever has the audacity to take risks and pursue the quality we have destroyed.

Another piece of the puzzle is the salaries and benefits for the employees and managers. At one level it is clear that we are paid well, or at least better than the average American. The problem is that in relative terms we are losing ground every year to where we once were. On the one hand we are told that we are World Class, yet we are compensated at a market-driven, market-average rate. Therefore we are either incredibly high-minded or the stupidest World Class performers out there. At the same time the management’s compensation has shot up, especially at the top of the management of the Labs. On one hand this is simply an extension of the market-driven approach; our executives are compensated extremely well, but not in comparison to their private industry peers. In sum the overall picture painted by the compensation at the Labs is one of confusion, and priorities that are radically out of step with the rhetoric.

All of this would be tragic enough in and of itself were it simply happening at the Labs alone. Instead these trends merely echo a larger chorus of destruction society-wide. We see all our institutions of excellence under a similar assault. Education is being savaged by many of the same forces. Colleges view the education of the next generation as a mere afterthought to balancing their books and filling their coffers. Students are a source of money rather than a mission. One need only look at what universities value; one thing is clear: educating students isn’t their priority. Businesses exist solely to enrich their executives and stockholders; customers and community be damned. Our national infrastructure is crumbling without anyone rising to even fix what already exists. Producing an infrastructure suitable for the current Century is a bridge too far. All the while the Country pours endless resources into the Military-Industrial Complex for the purpose of fighting paper tigers and imaginary enemies. Meanwhile the internal enemies of our future are eating us from within and winning without the hint of a struggle.

In the final analysis what are we losing by treating our Labs with such callousness and disregard? For the most part we are losing the opportunity to innovate, invent and discover the science and technology of the future. In the 25 years following World War II the Labs were an instrumental set of institutions that created the future and continued the discoveries that drove science and technology. The achievements of that era are the foundation of our supremacy in National defense and security, but also in economic power. By destroying the Labs as we are doing, we assure that this supremacy will fade into history and we will be overtaken by other powers. It is suicide by a slow-acting poison.

Quality means doing it right when no one is looking.

― Henry Ford

We have short-circuited the engines of excellence in trying to control risk and demonstrate accountability. The best way to control risk is to not take any, and we have excelled at this. By not taking risks we don’t have to deal with or explain failure in the short term, but in the long term we destroy quality and undermine excellence. Accountability is the same. We know how every dollar is spent, and what work is done every hour of the day, but that money no longer supports serendipity and the professional discretion and judgment that underpinned the monumental achievements of the past. One cannot claim to plan or determine a priori the breakthroughs and discoveries that haven’t been made yet. Now we simply have low-risk and fully accountable Laboratories that can only be described as mediocre.

Such mediocrity provides the Nation with no short-term issues to explain or manage, but makes our long-term prospects clearly oriented towards a decline in the Nation’s fortunes. We seem to uniformly lack the courage and ethical regard for our children to stop this headlong march away from excellence. Instead we embrace mediocrity because it’s an easy and comfortable short-term choice.

We don’t get a chance to do that many things, and every one should be really excellent. Because this is our life.

― Steve Jobs

The only sin is mediocrity.

― Martha Graham
