Accountability is generally a good thing. We are at our best when we are held accountable to our colleagues, for our efforts, and to ourselves. So how can accountability ever be a bad thing? The way it is practiced today makes it a vehicle of unparalleled destructive power.
There is nothing so useless as doing efficiently that which should not be done at all.
― Peter Drucker
Avoiding accountability is never a good thing. On the other hand, too much overbearing accountability starts to look like pervasive trust issues. The concomitant effects of working in a low-trust environment are corrosive to everything held accountable. As with most things, the key is balance between accountability and freedom; too much of either lowers performance. Today we have too little freedom and far too much accountability in a destructive form. For the sake of progress and quality, a better balance must be restored. Today’s research environment is held accountable in ways that reflect a lack of trust and a complete lack of faith in the people doing the work, and that, perhaps most importantly, produce a dramatic lack of quality in the work (https://williamjrider.wordpress.com/2014/10/23/excellence-and-accountability/).
Accountability can be implemented in many ways, and today in science it looks like micromanagement. How can we make accountability (a generally good thing!) destructive? We define work that should be innovative and creative in terms of well-defined deliverables and milestones (https://williamjrider.wordpress.com/2014/12/04/the-scheduled-breakthrough/), which must be executed without fail. One of the important things that comes from research is finding out which ideas are distinctly bad. The right thing to do is to stop when you discover something is a bad idea and find a new one. Today we continue to plow along a path even when we know it’s the wrong one because of the sort of contracts we are accountable to. Perhaps most importantly, the quality of the work rarely if ever enters into the accountability. We live in an environment where quality is simply assumed to be in place, and no one seems to have a direct and unbreakable commitment to it. In today’s accountability culture, quality is simply not part of the expectations.
It shows in everything we do.
We divvy up the work into smaller and smaller bins with well-defined deliverables and quarterly progress reports. The same principles that are corrupting our business world are being applied to science (https://williamjrider.wordpress.com/2014/10/10/corporate-principles-do-not-equal-good-management/). Where these principles are arguably appropriate for business (the whole shareholder-value concept as the point of business), they are unremittingly damaging to science. Yet apply them we do, gleefully and wantonly. This strangles the quality of the work being made accountable as surely as it wastes precious resources. Time and money are interchangeable, but the most unforgivable aspect of this is the waste of careers, talent and human potential on a cause that undermines more than it builds.
Small minds just like small stones can never create giant waves.
― Mehmet Murat ildan
The accountability we see today is destroying the ability to define, think about, and execute big ideas. We live in an era of small minds, small ideas and a general lack of accomplishment of anything that matters. People are encouraged to work very prescriptively and narrowly within their prescriptively and narrowly defined scope of work. Success often (if not always) depends on things outside the scope of the work we are accountable for. How can we do something “out of the box” if we are driven to always stay in “the box”? We then say that since something is outside our scope of work, it is outside what we are responsible for. We even feel that ignoring things outside the scope of our responsibilities is a duty we are accountable for. The present form of accountability allows one to ignore the big picture and execute the body of work promised whether it matters or not, whether it is useful or not, and whether it is quality work or not. It almost assures that the work done is not well integrated or adaptive to deeper understanding.
…If there is no risk, there is no reward.
― Christy Raedeke
Another impact of this small-minded thinking is a complete lack of ownership of anything bigger than what you are directly accountable for. You are encouraged to focus only on what you are directly being paid to focus on. Coupled with naïve, intellectually shallow management, you have a recipe for systematic mediocrity. Just as damning is the extreme risk aversion of management and, increasingly, of rank-and-file scientists. This pervasive risk aversion almost assures that nothing of significance will be accomplished. One can work hard on meaningless tasks and feel successful, empowering an ever-diminishing quality standard for all the work touched by accountability. It assures that we will never accomplish anything big or important. In many cases this sort of approach is appropriate: building bridges, repaving roads or putting up a skyscraper. For research, science or high-end engineering it is harmful, damaging and ultimately a giant waste of money. We follow plans that do not stand the test of time, and we fail to adjust to what we learn.
Our accounting systems are out of control. They spawn an ever-growing set of rules and accounts to manage the work. All of this is nothing more than a feel-good exercise for managers who mostly want to show “due diligence” and that they “manage risk”. No money is ever wasted doing anything (except that, increasingly, all the money is wasted). Instead we are squeezing the life out of our science, which manifests itself as low-quality work. In a very real way, low-quality science is easier to manage, far more predictable and easier to make accountable. One can easily argue that really great science, with discovery and surprise, completely undermines accountability, so we implicitly try to remove it from the realm of possibility. Without discovery, serendipity and surprise, the whole enterprise is much more fitting to tried-and-true business principles. In light of where the accountability has come from, it might be good to take a hard look at these business principles and the consequences they have wrought.
We exist in an increasingly risk-averse (https://williamjrider.wordpress.com/2015/10/23/we-want-no-risk-and-complete-safety-we-get-mediocrity-and-decline/) and fearful society beset by massive inequality of income, wealth and opportunity. Many of these terrible outcomes can be traced directly to the sorts of business principles being applied to science. Such principles are completely oriented toward driving outcomes preferentially toward the “haves” and away from the “have-nots”. Ultimately, the biggest threat to the rich and powerful is change in the status quo. The sorts of management and accountability used today mostly work to undermine any real progress, which favors the status quo. Science is one of the major societal engines of progress and change. The rich and powerful are fearful of progress, and work to kill it. We are tremendously successful at killing progress, and modern accountability is one of the best tools for doing it.
Creativity requires the courage to let go of certainties.
― Erich Fromm
Quality suffers because of the loss of breadth of perspective and of the injection of ideas from divergent points of view. Creativity and innovation (i.e., discovery) are driven by broad and divergent perspectives. Most discoveries in science are simply repurposed ideas from another field. Discovery is the thing we need to progress in science and society. It is the very thing that our current accountability climate is destroying. Accountability helps to drive away any thoughts from outside the prescribed boundaries of the work. Another maxim of today is that the customer is always right. For us, the customers are working under similar accountability standards. Since they are “right” and just as starved for perspective, the customer works to narrow the focus. We get a negative flywheel effect in which narrowing focus and narrowing perspective reinforce each other.
Never attribute to malice that which can be adequately explained by stupidity.
― Robert Heinlein
This has manifested itself as the loss of the Labs as honest brokers. The Labs are simply sycophants today who work on whatever they are paid to work on. This is a large-scale extension of the customer-is-always-right principle. They never provide even a scintilla of feedback to government programs for fear of having their funding cut. Instead they pile on to poorly constructed and intellectually shallow programs because they promise funding. Thus we get programs that are phenomenally shallow and intellectually empty, but are managed at a level that provides no freedom or innovation to rescue them from their mediocrity. The accountability means that the empty intellectual goals are executed to a tee, and any value that might have arisen from the resources is sacrificed on the altar of doing what you’re told to do.
When programs of the sort the government is funding are integrated over decades, you see an immense decline in the institutions due to the loss of autonomy of the staff. Our National leadership in science simply corrodes, and younger scientists do not develop in any sort of coherent way. Careers are starved of the sorts of efforts needed to build them. We have created a generation of mediocre scientists who excel at obedience and simply grinding through projects. They are distinguished by their ability to produce the deliverables they promised and little else. Once-great institutions are steaming cauldrons of mediocrity and mostly just pork-barrel spending (I often joke that the execution of the Lab Mission is best achieved by going out and buying a car).
An inappropriate focus on money is the root of many of these problems. These days we will do almost anything for money, and money is the primary measure of everything (https://williamjrider.wordpress.com/2014/08/29/money-makes-for-terrible-priorities/). In particular the accountability of what money is spent on provides the standard form of success. Did we do what the money was supposed to pay for? If so, success is declared. Never mind that the money has been sub-divided into ever-smaller bins that effectively destroy the ability to achieve anything bigger and more coherent. The big ideas that would really make a huge difference to everyone never happen because we can’t ever produce a body of work that is coherent enough to succeed. We are always doing work “in the box”.
The end result of our current form of accountability is small-minded success. In other words, we succeed at many small unimportant things, but fail at large important things. The management can claim that everything is being done properly, yet never produce anything that really succeeds in a big way. From the viewpoint of accountability we see nothing wrong: all deliverables are met and on time. True success would arise by attempting to succeed at bigger things, and sometimes failing. The big successes are the root of progress and the principal benefit of dreaming big and attempting to achieve big. In order to succeed big, one must be willing to fail big too. Today big failure surely brings congressional hearings and the all-too-familiar witch-hunt. Without the risk of failure, we are left with small-minded success as the best we can do.
Big goals, trust and leadership are the cures. We need to prioritize progress and discovery by producing an environment that is tailored to produce it. Hand in hand with this is a level of faith in the human spirit and ingenuity. Let people believe that their work matters with proof that they are contributing to a meaningful goal. Daniel Pink wrote a book called “Drive” where a workplace is described that is the utter antithesis to the sort of accountability science labors under today (http://www.amazon.com/Drive-Surprising-Truth-About-Motivates/dp/1594484805/ref=sr_1_1?ie=UTF8&qid=1447431195&sr=8-1&keywords=drive).
I was stunned by how empowering his description of work could be, and how far from this vision the workplace I labor in today is. I might simply suggest that my management read that book and implement everything in it. The scary thing is that they did read it, and nothing came of it. The current system seems to be completely impervious to good ideas (or perhaps following the book would have been too empowering to the employees!). Of course the book suggests a large number of practices that are completely impossible under current rules and opposed by the whole concept of accountability we are under today.
It is completely ironic that the very forces that are pushing accountability down our throats are completely free of any accountability themselves. Our current political class is virtually invulnerable to any accountability from the voters. The rich and powerful overlords rule the masses with impunity. Their degree of wealth makes them completely resistant to accountability. The accountability thrust upon the rest of us is simply a tool to maintain and magnify their power through killing progress and assuring that the status quo that favors them is never threatened. Accountability is simply a way of crushing progress, and making sure that the current societal order is maintained.
I worry that only some external force and/or event will be able to dismantle the current system, and it will not be pretty or pleasant for anyone. The forces in power today are quite entrenched and resist any move that might reduce their stranglehold on the World.
The best way to find out if you can trust somebody is to trust them.
― Ernest Hemingway

ical reason not to test them; the idea of not testing is purely political. It is a good political stance from a moral and ethical point-of-view and I have no issue with taking that stand on those grounds. From a scientific and engineering point-of-view it is an awful approach, and clearly far from optimal and prone to difficulties. These difficulties can be a very good thing if harnessed appropriately, but today such utility is not present in the execution of our Lab’s mission. As one should always remember, nuclear weapons are political things, not scientific, and politics is always in charge.
anniversary. Our political leaders are declaring it to be a massive success. They have been busy taking a victory lap and crowing about its achievements. The greatest part of this claimed success is high performance computing. These proclamations are at odds with reality. The truth is that the past 20 years have marked the downfall of the quality and superiority of our Labs and the scientific supremacy of these institutions. The program should have been a powerful hedge against decline, and perhaps it has been. Perhaps without stockpile stewardship the Labs would be in even worse shape than they are today. That is a truly terrifying thought. We see a broad-based decline in the quality of the scientific output of the United States, and our nuclear weapons Labs are no different. It appears that the best days are behind us. It need not be this way with proper leadership and direction.
Nonetheless, given the stance of not testing, we should be in the business of doing the very best job possible within these self-imposed rules (i.e., no full-up testing). We are not, and not by a small margin. This is not on purpose, but rather the result of a stunning lack of clarity in objectives and priorities. We have allowed a host of other priorities to undermine success in this essential endeavor. Taking the fully integrated testing of the weapons off the table requires that we bring our very best to everything else we do.
I’ve written a great deal about how bad our approach to modeling and simulation is, but it’s the tip of the proverbial iceberg of incompetence and of steps that systematically undermine the work necessary to succeed. Where modeling and simulation gets a lot of misdirected resources, the experimental and theoretical efforts at the Labs have been eviscerated. The impact of this evisceration on modeling and simulation is evident in issues with the actual credibility of simulation. This destruction has been done at the time when these efforts are needed most. Instead, support for these essential scientific engines of progress has been “knee-capped”. Just as importantly, a positive work environment has been absolutely annihilated by how the Labs are managed.
Science becomes so incremental that progress is glacial. You almost completely guarantee safety and, in the process, a complete lack of discovery. Experiments lose all their essence and utility in acting as a hedge against over-confidence by surprising us. Add the risk aversion we talk about below, and you have experimental science that does almost nothing. As a result we get very little for our experimental dollar, and allow ourselves to do almost nothing innovative or exciting. So yes, safety is really important, and we need to produce a safe working environment. This same environment must also be a productive place. The productivity gains we have seen in the private world have been systematically undermined at the Labs, not just by safety, but by two other drivers as well: risk aversion and security.
Finally, we have a focus on accountability in which we want to be guaranteed that no money is ever wasted at any time. Part of this is risk aversion: research that might not pan out doesn’t get funded, because not panning out is viewed as failure. Yet these failures are at the core of learning and growing. Failure is essential to learning and acquiring knowledge. Our accountability system is working to destroy the scientific method, the development of staff, and our ability to be the best. To some extent we account not because we need to, but because we can. Computers allow us to sub-divide our sources of money into ever-smaller bins, along with everyone’s time and effort. In the process we lose the crosscutting nature of the Labs’ science. The result is the destruction of the multi-disciplinary science that is absolutely essential to doing the work of stewardship. Without multi-disciplinary science we will surely fail at this essential mission, and we are managing the Labs in a way that assures this outcome.
All of this is systematically crushing our workforce and its morale. In addition, we are failing to build the next generation of scientists and engineers with a level of quality necessary for the job. We are allowing the quality of the staff to degrade through the mismanagement of the entire enterprise at a National level. Without a commitment to real unequivocal success in the stewardship mission, the entire activity is simply an exercise in futility.
When we look at the overall picture we see a system that is not working. We spend more than enough money on stockpile stewardship, but we spend it foolishly. The money is being wasted on a whole host of things that have nothing to do with stewardship. Most of the resources go into guaranteeing complete safety, complete absence of risk, complete security and complete accountability. It is a recipe for abject failure at the integrated job of safeguarding the Nation. We are failing in a monumental way while giving our country the picture of success. Of course the average American is easily fooled; if they weren’t, would our politics be so dysfunctional and dominated by fear-based appeals?
Another way of making progress is to renew our intent to build truly World-class scientists at the Labs. We can do this by harnessing the Labs’ missions to do work that challenges the boundaries of science. Today we are World class by definition and not through our actions. We can change this by simply addressing the challenges we face with a bold and aggressive research program. This will drive professional development to heights that today’s approach cannot match. Part of the key to developing people is to allow their work to be the engine of learning. For learning, failure and risk are key. Without failure we learn nothing; we just recreate the success we already know about. World-class science is about learning new things and cannot happen without failure, and failure is not tolerated today. Without failure science does not work.
The stupid, naïve and unremittingly lazy thinking that permeates high performance computing isn’t just found there. It dominates the approach to stockpile stewardship. We are stewarding our nuclear weapons with a bunch of wishful thinking instead of a well-conceived and executed plan. We are in the process of systematically destroying the research excellence that has been the foundation of our National security. It is not malice, but rather societal incompetence that is leading us down this path. Increasingly, faith in our current approach depends on the unreality of the whole nuclear weapons enterprise. The weapons haven’t been used for 70 years, and hopefully that lack of use will continue. If they are used, we will be in a much different World, and a World we are no longer ready for. I seriously worry that our lack of seriousness and pervasive naivety about the stewardship mission will haunt us. If we have screwed this up, history will not be kind to us.
A much better analogy is cooking. Code is simply the ingredients used to cook a dish. Good ingredients are essential, but insufficient to assure you get a great meal. Moreover, food spoils and needs to be thrown out, replaced, or better ingredients chosen. Likewise, parts of the code are in constant need of replacement, repair, or simply being thrown out. The computing hardware is much like the cooking hardware (the stovetop, oven, food processors, etc.), which is important to the process but never determines the quality of the meal. It may determine the ease of preparing the meal, but almost never the actual taste and flavor. In the kitchen nothing is more important than the chef. Nothing. A talented chef can turn ordinary ingredients into an extraordinary culinary experience. Give that same talented chef great ingredients, and the resulting dining experience could be absolutely transcendent.
Our scientists are like the chefs, and their talents determine the value of the code and its use. Without their talents the same code can be rendered utterly ordinary. The code is merely a tool that translates simple instructions into something the computer can understand. In skilled hands it can render the unsolvable solvable and unveil an understanding of reality invisible to experiment. In unskilled hands, it can use a lot of electricity and fool the masses. With our current attitude toward computers we are turning Labs once stocked with outstanding ingredients and masterful chefs into fast-food fry cooks. The “preserve the code base” narrative isn’t just wrong; it is downright dangerous and destructive.
Computing at the high end of modeling and simulation is undergoing great change in a largely futile endeavor to squeeze out what little life Moore’s law has left in it. The truth is that Moore’s law, for all intents and purposes, died a while ago, at least for real codes solving real problems. Moore’s law only lives on in its zombie guise of a benchmark involving dense linear algebra that has no relevance to the codes we actually buy computers for. So I am right in the middle of a giant bait-and-switch scheme that depends on even greater naivety and outright ignorance on the part of those cutting the checks for the computers than on the part of those defining the plan for the future of computing.
intellect and knowledge base used to comprise it in conjunction with the intellect and knowledge used to solve the problem. At the deepest level the code is only as good as the people using it. By not investing in the quality of our scientists we are systematically undermining the value of the code. For the scientists to be good their talent must be developed through engaging in the solution of difficult problems.
If we stay superficial and dispense with any and all sophistication, then we get rid of the talented people and get by with trained monkeys. If you don’t understand what is happening in the code, it just seems like magic. With increasing regularity, the people running these codes treat them like magical recipes for simulating “reality”. As long as the reality being simulated isn’t actually being examined experimentally, the magic works. If you have magic recipes, you don’t change them, because you don’t understand them. This is what we are creating at the Labs today: trained monkeys using magical recipes to simulate reality.
well-educated cadre of peasants. Behind these two god-awful reasons to spend money is a devaluing of the people working at the Labs. Development of talent and the creation of intellectual capital by that talent are completely absent from the plan. It creates a working environment that is completely backward-looking and devoid of intellectual ownership. It is draining the Labs of quality and undermining one of the great engines of innovation and ingenuity for the Nation and the World.
The computers aren’t even built to run the magical code, but rather to run a benchmark that only produces results for press releases. Running the magical code is the biggest challenge for the serfs because the computers are so ill-suited to their “true” purpose. The serfs are never given the license or ability to learn enough to create their own magic; all their efforts go into simply maintaining the magic of a bygone era.
All of this is still avoiding the impact of solution algorithms on the matter of efficiency. As others and I have written, algorithms can do far more than computers to improve the efficiency of solution. Current algorithms are an important part of the magical recipes in current codes. We generally are not doing anything to improve the algorithmic performance in our codes. We simply push the existing algorithms along into the future.
This is another form of the intellectual product (or lack thereof) that the current preserve-the-code-base attitude favors. We completely avoid the possibility of doing anything better than we did in the past algorithmically. Historically, improvements in algorithms provided vastly greater advances in capability than Moore’s law did. I say historically because these advances largely occurred before the turn of the century (i.e., 2000). In the 15 years since, progress due to algorithmic improvements has ground to a virtual halt.
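To make the scale of those historical algorithmic gains concrete, here is a back-of-the-envelope sketch in Python. The operation counts are the classic textbook ones for a model Poisson problem (simple iterative relaxation versus multigrid); the problem size and the hardware-doubling figure are illustrative assumptions, not numbers from any actual Lab code.

```python
# Illustrative comparison of hardware speedup versus algorithmic speedup
# for solving a discrete Poisson problem with N unknowns.
# Classic asymptotic operation counts:
#   simple iterative relaxation (e.g., Gauss-Seidel):  ~ N**2 operations
#   multigrid:                                         ~ N    operations

def work_relaxation(n):
    # work to converge a basic relaxation scheme (textbook estimate)
    return n ** 2

def work_multigrid(n):
    # work for an optimal multigrid solve (textbook estimate)
    return n

n = 10 ** 6                  # a million unknowns: a modest modern mesh
hardware_speedup = 2 ** 10   # ~15 years of doubling every 18 months

algorithmic_speedup = work_relaxation(n) // work_multigrid(n)
print(f"hardware speedup:    {hardware_speedup:.0e}")
print(f"algorithmic speedup: {algorithmic_speedup:.0e}")
```

On these assumptions the algorithm wins by roughly a factor of a thousand over the hardware, which is the kind of gap the historical record of solver development reflects.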
Much greater benefits could be realized through developing better models, extending physical theories, and making fundamental improvements in algorithms. Each of these areas is risky and difficult research, but offers massive payoffs with each successful breakthrough. The rub is that breakthroughs are not guaranteed, but rather require an element of faith in the ability of the human intellect to succeed. Instead we are placing our resources behind an increasingly pathetic status quo. Part of the reason for continuing the approach is merely the desire of current leadership to take a virtual victory lap by falsely claiming the success of the approach they are continuing.
Once, we developed talent by providing tremendously important problems to solve and turning excellent people loose to solve them in an environment that encouraged risky, innovative solutions. In this way potentially talented people became truly talented and accomplished, ready to slay the next dragon using the experience of previously slain beasts. Today we don’t even let them see an actual dragon. Our staff never realize their potential because they are simply curating the accomplishments of the past. The code we are preserving is one of the artifacts we are guarding. This approach is steadily strangling the future.
The whole risk-benefit equation is totally out of whack for society as a whole. The issue is massive across the whole of the Western world, but nowhere is it more in play than in the United States. Acronyms like TSA and NSA immediately come to mind. We have traded a massive amount of time, effort and freedom for a modest to fleeting amount of added security. It is unequivocal that Americans have never been safer and more secure than now. Americans have also never been more fearful. Our fears have been amplified for political gain and focused on things that barely qualify as threats. Meanwhile we ignore real dangers and threats because they are relatively obscure and largely hidden from plain view (think income inequality, climate change, sugar, sedentary lifestyles, guns, …). Among the costs of this focus on removing the risk of bad things happening is the chance to do anything unique and wonderful in our work.
If I go to the store, buy a package of Nestle Toll House morsels, and follow the instructions on the back, I will produce perfectly edible, delicious cookies. These cookies are quite serviceable, utterly mediocre and totally uninspired. Our National Labs are well on their way to becoming the Toll House cookies of science.
I can make some really awesome chocolate chip cookies using a recipe that has taken 25 years to perfect. Along the way I have made some batches of truly horrific cookies while conducting “experiments” with new wrinkles on the recipe. If I had never made these horrible batches, the recipe I use today would be no better than the Toll House one I started with. The failures are completely essential to success in the long run. Sometimes I make a change that is incredible and a keeper, and sometimes it destroys or undermines the taste. The point is that I have to accept the chance that any given batch of cookies will be awful if I expect to create a recipe that is in any way remarkable or unique.
The process that I’ve used to make really wonderful cookies is the same one as science needs to make progress. It is a process that cannot be tolerated today. Today the failures are simply unacceptable. Saying that you cannot fail is equivalent to saying that you cannot discover anything and cannot learn. This is exactly what we are getting. We have destroyed discovery and we have destroyed the creation of deep knowledge and deep learning in the process.
engineering, much less positively affect society as a whole.

money, or spend the same amount of money more intelligently. We need substantial work on the models we solve. The models we work on today are largely identical to those we solved twenty years ago, but the questions being asked in engineering and science are far different. We need new models to answer these questions. We need to focus on algorithms for solving existing and new models. These algorithms can be as effective as, or more effective than, computing power in improving the efficiency of solution. Despite this, the amount of effort going into improving algorithms is trivial and fleeting. Instead we are focused on a bunch of activities that have almost no impact on the efficiency or quality of modeling and simulation.


Together, the PIRT (Phenomena Identification and Ranking Table) and the PCMM (Predictive Capability Maturity Model), adapted and applied to any modeling & simulation activity, form part of the delivery of a defined credibility for the effort. The PIRT gives context to the modeling efforts and the level of importance and knowledge of each part of the work. It is a structured manner for the experts in a given field to weigh in on the basis for model construction. The actual activities should strongly reflect the assessed importance and knowledge basis captured in the PIRT. Similarly, the PCMM can be used for a structured assessment of the specific aspects of the modeling & simulation.
If the effort is interested in a complete and holistic assessment of its credibility, these frameworks can be invaluable. Their value lies in making certain that important details and areas of focus are neither over- nor under-valued in the assessment. Areas of strong technical expertise are often focused upon, while areas of weakness can be ignored. This can produce systematic weaknesses in the assessment that may lead to wrong conclusions. More perniciously, the assessment can, willfully or not, ignore systematic shortcomings in a modeling & simulation capability. This can lead to a deep under-estimate of uncertainty while significantly over-estimating confidence and credibility.
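As a rough illustration of how a PIRT-style ranking can surface exactly the gaps an assessment is tempted to ignore, here is a hypothetical sketch in Python. The phenomena, the rankings, and the `assessment_gaps` helper are all invented for illustration; they are not drawn from any real PIRT.

```python
# A minimal, hypothetical PIRT: each phenomenon is ranked by its importance
# to the application and by the adequacy of current knowledge about it.
# Rankings here use the common low/medium/high scale; entries are invented.

pirt = [
    # (phenomenon,          importance, knowledge)
    ("turbulent mixing",    "high",     "low"),
    ("equation of state",   "high",     "high"),
    ("radiative transfer",  "medium",   "medium"),
    ("wall heat transfer",  "low",      "high"),
]

def assessment_gaps(table):
    """Return phenomena that are highly important but poorly understood --
    the entries an honest credibility assessment must not skip over."""
    return [name for name, importance, knowledge in table
            if importance == "high" and knowledge == "low"]

print(assessment_gaps(pirt))  # -> ['turbulent mixing']
```

The point of the structure is precisely this kind of query: high-importance, low-knowledge entries are where the uncertainty lives, and a flat list of expert opinions without the ranking discipline makes them easy to overlook.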





Over time these milestones come to define the entire body of work. This approach to managing the work at the Labs is utterly corrosive and has aided the destruction of the Labs as paragons of technical excellence. We would be much better off if a large majority of our milestones failed, and failed because they were so technically aggressive. Instead, all our milestones succeed because the technical work is chosen to be easy. Reversing this trend requires some degree of sophisticated thinking about success. Providing a benefit for conscientious risk-taking could help. We could still rely upon the current risk-averse thinking to provide systematic fallback positions, but we would avoid making the safe, low-risk path the default.
demands a firm, unequivocal response. First, if your numerical error is so small, then why are you using such a computationally demanding model? Couldn’t you get by with a bit more numerical error, since it’s so small as to be regarded as negligible? Of course their logic doesn’t go there, because their main idea is to avoid doing anything, not to actually estimate the numerical uncertainty or do anything with the information. In other words, this is a work-avoidance strategy and complete BS, but there is more to worry about here.
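Estimating numerical uncertainty is not exotic; one standard technique is Richardson extrapolation from solutions on systematically refined grids. The sketch below uses synthetic numbers (a second-order method on a problem whose exact answer is 1.0), purely for illustration; it is not the procedure of any particular program.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence rate from solutions on three grids
    related by a constant refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def error_estimate(f_medium, f_fine, p, r):
    """Richardson estimate of the discretization error remaining
    in the fine-grid solution."""
    return (f_fine - f_medium) / (r ** p - 1.0)

# Synthetic data consistent with f(h) = 1.0 + 0.5*h**2 on grids
# h = 0.4, 0.2, 0.1 (refinement ratio r = 2); the exact answer is 1.0.
f_c, f_m, f_f = 1.08, 1.02, 1.005

p = observed_order(f_c, f_m, f_f, 2.0)   # close to 2, as expected here
err = error_estimate(f_m, f_f, p, 2.0)   # close to -0.005 on the fine grid
print(p, err)
print(f_f + err)                         # extrapolated value, close to 1.0
```

The procedure costs a few extra grid levels, which is exactly the work the "our error is negligible" crowd is trying to avoid; with it, a claimed-small error becomes a measured one.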