I’m a progressive. In almost every way that I can imagine, I favor progress over the status quo. This is true for science, music, art, and literature, among other things. The one place where I tend toward the status quo is the work and personal relationships that form the foundation for my progressive attitudes. These foundations are formed by several “social contracts” that serve to define roles and expectations. Without this foundation, the progress I so cherish is threatened, because people naturally retreat to conservatism for stability.
Without deviation from the norm, progress is not possible.
What I’ve come to realize is that the shortsighted, short-term thinking dominating our governance is demolishing many of these social contracts. Our social contracts are the basis of trust and faith in our institutions, whether they are the rule of government or the place we work. In each case we are left with a severe corrosion of the intrinsic faith once granted these cornerstones of public life. The cost is enormous, and may have created a self-perpetuating cycle: loss of trust precipitating more acts that undermine trust.
Stagnation is self-abdication.
Take the Labs where I’ve worked. At one time the Labs were trusted with the (nuclear) defense of the Nation. This happened in a time of immense threat and danger, yet the oversight was minimal. Substantial resources were given to the Labs to pursue the mission, and the Labs performed marvelously. The Labs fulfilled their social contract with the Nation, and similarly the Labs created a social contract with their employees: serve here, and we will take care of you. You will be given engaging work, paid well, and ultimately allowed to retire comfortably. Shape your scientific explorations in service of the National security mission, and you will be provided resources. Beyond the direct success in the nuclear work, the scientific work was part of the Nation’s international preeminence and produced much of the foundation for great economic success. We have almost systematically destroyed everything good about these Labs.
It takes strength and courage to admit the truth.
Even before the Cold War ended, this social contract began to unravel. The trust eroded and the money came with increasing strings attached. Similarly, the social contract with the employees became too “expensive” to fulfill. Over time the lack of trust and the associated “accountability” has spiraled out of control (we will spend ten dollars to save one). This has precipitated the no risk, no failure allowed environment that is choking innovation and progress out of our work. Increasingly the support for a career at the Labs is being removed, and it’s turning into just another job (not a bad one, but nothing special either).
These developments are paralleled by changes across the economy. They are manifestations of the short-term, quarterly-return mentality ruling industry. Research and development without immediate impact on the bottom line is increasingly missing from industrial research (and from government research too). Employees are commodities whose life and career prospects are of no concern to the business. The Labs benchmark themselves against these industries and share these attitudes because it benefits the short-term balance sheet.
What gets lost? Almost everything. Progress, quality, security, you name it. Our short-term balance sheet looks better, but our long-term prospects look dismal. The scary thing is that these developments help drive conservative thinking, which in turn drives these developments. As much as anything this could explain our Nation’s 50-year march to the right. We have taken the virtuous cycle we were granted and turned it into a vicious cycle. It is a cycle we need to get out of before it crushes our future.
Any defensiveness is a sign of failure. You can’t move forward if you are defensive.
We got here through overconfidence and loss of trust; we can only get out of it by combining realism with trust in each other. Right now the signs are particularly bad, with neither realism nor trust being part of the current public discourse on anything.

It seems to be a lot easier to metaphorically put our heads in the sand. A lot of the time we go to great lengths to convince ourselves of the opposite of the truth, to convince ourselves that we are the masters of the universe. Instead, we can only achieve the mastery we crave through the opposite. We should never consider our knowledge and capability to be flawless, but flawed and incomplete.
The people applying calibrated models are often lauded as models of success. The problems with this are deep and pernicious. We want to do much more than calibrate results; we want to understand and explore the unknown. The only way to do that is to systematically uncover our failings and shortcomings, with a keen focus on exposing our limits. The practical success of calibrated modeling stands squarely in the way of pushing the bounds of knowledge.

We are encouraged by everything around us to work on things that are important. Given the intrinsic differences between the messaging we are given explicitly and implicitly, it’s hard to really decide what’s important. Of course, if you work on what’s important you will personally make a bit more money. You really make a lot of money if you work specifically in the money-making industry…
These words are spoken whenever we go into planning “reportable” milestones in virtually every project I know about. If we are getting a certain chunk of money, we are expected to provide milestones that report our progress. It is a reasoned and reasonable thing, but the execution is horribly botched by the expectations that are grafted onto the milestone. Along with the guidance in the title of this post, we are told, “these milestones must always be successful, so choose your completion criteria carefully.” Along with this we make sure that these milestones don’t contain too much risk.
The real danger in the philosophy we have adopted is the creeping intrusion of mediocrity into everything we do. Nothing is important enough to take risks with. The thoughts expressed through these words are driving a mindless march toward mediocrity; once-great research institutions are being thrust headfirst into the realm of milquetoast also-rans. The scientific and engineering superiority of the United States is leaving in lockstep with every successfully completed milestone built this way.
Science depends on venturing bravely into the unknown, a task of inherent risk, and massive potential reward. The reward and risk are linked intimately; with nothing risked, nothing is gained. By making milestones both important and free of risk, we sap vitality from our work. Instead of wisely and competently stewarding the resources we are trusted with, they are squandered on work that is shallow and uninspired. Rather than being the best we can do, it becomes the thing we can surely do.
When push comes to shove, these milestones are always done, and always first in line for resource allocation. At the same time we have neutered them from the outset. The strategy (if you can call it that!) is self-defeating, and only yields the short-term benefit of the appearance of success. This appearance of success is believed to be necessary for continuing the supply of resources.
If you haven’t heard of “wicked problems” before, it’s a concept you should familiarize yourself with. Simply put, a wicked problem is a problem that can’t be stated or defined without attempting to solve it. Even then your definition will be woefully incomplete. Wicked problems are recursive: every attempt to solve the problem yields a better definition of the problem. They are the metaphorical onion where peeling back every layer produces another layer.
In code development this often takes the form of refactoring where the original design of part of the software is redone based on the experience gained through its earlier implementation. You understand the use of and form that the software should take once you’ve tried to write it (or twice or thrice or…). The point is that the implementation is better the second or third time based on the experience of the earlier work. In essence this is embracing failure in its proper role as a learning experience. A working, but ultimately failed form of the software is the best experience for producing a better piece of software.
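The refactoring loop described above can be sketched in a few lines. The example below is hypothetical (the function names and the quadrature example are mine, not drawn from any particular code), but it shows how a working first attempt teaches you the form the second attempt should take.

```python
# First attempt: works, but hard-codes both the integrand and the rule,
# because at this point we did not yet know what callers would need.
def integrate_x_squared(a, b, n):
    h = (b - a) / n
    total = 0.5 * (a * a + b * b)
    for i in range(1, n):
        x = a + i * h
        total += x * x
    return total * h

# Refactored after use: the earlier version taught us that the integrand
# and the quadrature rule should be separate, so the second implementation
# accepts any function f. Same arithmetic, better design.
def integrate(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The refactored form reproduces the original behavior exactly.
assert integrate(lambda x: x * x, 0.0, 1.0, 1000) == integrate_x_squared(0.0, 1.0, 1000)
```

The first version was not wasted effort; it was the experience that made the second version's design obvious.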
This principle applies far more broadly to scientific endeavors. An archetypal scientific wicked problem is climate change, not simply because of the complexity of the scientific aspects of the problem, but also because of the political and cultural dynamics it stirs up. In this way climate change connects back to the traditional wicked problems of the social sciences. A more purely scientific wicked problem is turbulence, because of its enormous depth in physics, engineering, and mathematics, with an almost metaphysical level of intractability arising naturally. Turbulence is also connected to a wealth of engineering endeavors with immense economic consequences.
Maintaining the perspective of wickedness as being fundamental is useful as it drives home the belief that your deep knowledge is intrinsically limited. The way that experts look at V&V (or any other wicked problem) is based on their own experience, but is not “right” or “correct” in and of itself. It is simply a workable structure that fits the way they have attacked the problem over time.
This is truly sad considering the transformative potential bound up in those hopeful, unrealistic dreams we allowed ourselves to express. We could be doing things that are magnificent; instead we withdraw to the world of the possible: the bureaucratically controlled, politically viable reality. The projects we hopefully envisioned would be transformative and create a far greater future than the path we are currently on. We are told that the people in Washington can’t envision anything greater either. Perhaps they are just like us, simply unwilling to honestly voice anything greater than our currently pedestrian path.
This is why the future is so bleak; the dreams are there, but no one has faith that these dreams can be realized. Support for working on the dream is missing; why start something that will never be finished? People have recently realized that the future was supposed to bring flying cars, and instead we got mini-supercomputers in our pockets (that do very little computing). Of course it doesn’t quite look like “Blade Runner” or “Minority Report” either. The problem is that the dystopian parts of those movies have a greater chance of becoming reality than the cool parts.
More often than not, such symmetry is missing. Quite often the value doesn’t seem to be aligned with the cost (that watch cost how much!?). The key is to look deeper: the costs and the value are aligned, but the truth is often hidden by shame. The shame is the explicit admission that we value things that are inherently superficial and ultimately damaging to our future.
Much of the actual value we take in items is hidden. It also matters greatly who does the evaluation. This is clearest with luxury items, where people are willing to pay a great deal for a brand name because of its cachet. This is how top designers and brands make their money; people are willing to pay a significant premium for having the product with a name. Examples can be found with shoes, cars, watches, and handbags, among many other items. It becomes a sign of distinction just to own certain items.
In the area of computing other opportunity costs are in evidence. We have made a significant effort to make high performance computing about big computers solving meaningless benchmarks. We have defined weak scaling as a success. These choices have come at significant cost, such as diminished efficiency in most of our computational work, because we have failed to focus on performance at the basic node level. Our pursuit of mistaken goals in high performance computing has also driven divestment from many important areas of algorithmic research. Perhaps the most powerful tool for effectively using computers, the algorithm, has been stripped of its vitality in the process.


The question is of supreme importance in moving forward with supercomputing. The Drucker quote fairly well captures both the importance of answering the metrics question moving forward and what happened in the recent past. I’ve come to the conclusion that the devotion to “weak scaling” is probably doing a lot of damage to high performance computing.
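A small sketch makes the weak-versus-strong scaling distinction concrete. The formulas below are the standard Amdahl (fixed problem size) and Gustafson (problem size grown with the machine) speedup models; the 5% serial fraction is an arbitrary illustrative value.

```python
def amdahl_speedup(serial_fraction, n_procs):
    # Strong scaling: fixed problem size. The serial fraction caps the
    # achievable speedup no matter how many processors are added.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

def gustafson_speedup(serial_fraction, n_procs):
    # Weak scaling: the problem grows with the machine, so the scaled
    # speedup stays nearly linear -- which is why it flatters big systems.
    return n_procs - serial_fraction * (n_procs - 1)

# With a 5% serial fraction, strong scaling saturates near 20x while the
# weak-scaling number keeps climbing with the processor count.
for n in (16, 1024, 65536):
    print(n, amdahl_speedup(0.05, n), gustafson_speedup(0.05, n))
```

Declaring weak scaling a success lets the headline number grow with the machine even when per-node performance, and hence the time to solve any fixed problem, barely improves.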
Solution verification/error estimation, validation, and uncertainty quantification are a good place to start, but inadequate for many projects. An example might be weather or climate modeling, where data assimilation is important enough to warrant its own category. In computational social science, geometry is irrelevant and needs to be replaced with an appropriate description of the environment in which things like agents are placed. In other cases the experimental work is sufficiently complex and focused that it should be expanded in far greater detail, including a data focus. The point is that PCMM is not a fixed framework, but an idea of how to organize your activity so as not to leave important things out.
How might I use PCMM to do something that isn’t V&V related first? Say, like writing a new code?
I would consider what the intended application of the code is, and how far beyond the original intent it might need to be supported. How essential is geometric fidelity to the quality of the simulations? How well established are the basic physical models and supporting constitutive relations? Are the numerical method and the equations supported by mathematical rigor? Are numerical errors well understood where the equations and method are to be applied? What experiments exist for validation, and will new validation experiments be conducted? What quantities of interest are needed, and how will their uncertainty be assessed? For every question: how critical is the quality of the answer, and what level of decision will be made with the results? Might any of this change over time, and can those changes be accommodated in the desired code?
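One way to act on these questions is to treat the PCMM categories as a planning checklist, recording per category the maturity you have versus the maturity the intended application demands. The sketch below is a hypothetical illustration: the numeric scoring scale and the example scores are invented, and the category names are my paraphrase of the usual PCMM elements.

```python
# PCMM-style categories, paraphrased; adjust them per project, as the text
# argues -- e.g. add data assimilation for weather or climate codes.
PCMM_CATEGORIES = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def assess(scores, required_level):
    """Return the categories whose current maturity score falls below
    what the intended application of the code demands."""
    return [c for c in PCMM_CATEGORIES
            if scores.get(c, 0) < required_level.get(c, 0)]

# Example: an early planning pass for a hypothetical new code, using an
# invented 0-3 maturity scale. Unlisted categories default to 0.
current = {"code verification": 1, "model validation": 0}
needed = {"code verification": 2, "model validation": 2,
          "uncertainty quantification and sensitivity analysis": 1}
gaps = assess(current, needed)
```

The value is not the scores themselves but the forced, explicit answer to each question, so nothing important gets left out.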
