Failing organizations are usually over-managed and under-led.
—Warren Bennis
We are living in the golden age of management, or at least in the golden age of looking to management for answers. Everything can be managed, and managed better or more completely. It doesn’t matter how poorly it is done; management is the due diligence for everything. It doesn’t matter if all the planning, Gantt charts, and other approaches actually lead to worse performance. We must plan. I agree that we need to plan, and we need to be comfortable throwing those plans away. As military greats have said, planning is essential, and the plans themselves are thrown out at the first contact with the enemy.
No battle plan survives contact with the enemy
—Colin Powell
It’s been a few years since I was so deeply engaged with a programmatic planning process. This year I’m doing it on several fronts. The experience has been jarring. Since my last engagement the entire planning activity has gotten further removed from any practical reality. The powers that be seem devoted to the belief that the plan is something we should be held to. We have to walk a tightrope between the genuine utility of planning and what we will be held culpable for. In terms of the quality of research and science, the whole thing ends up being deeply damaging. Management that suits construction projects is being applied to activities it was never intended for. We are at once fully engaged in being held to the objectives while not taking them seriously. I am deeply concerned about how damaging this process is to everything it touches.
Last week I saw a fascinating seminar by Kenneth Stanley (U. Central Florida) on “Innovation without Objectives”. He made a remarkably persuasive case that innovation is actually hurt by objectives. The planning we do is completely devoted to defining objectives. The irony is that innovation can happen and objectives can be met, but not necessarily the objectives we set out to achieve. Moreover, being overly constrained to achieving an objective actually hurts the ability to achieve it, because we focus on obvious solutions while ruling out innovative solutions that are more effective. His argument seemed to point toward the conclusion that the management we are devoting ourselves to is nearly completely counter-productive. Of course, implementing anything like this in today’s environment is completely unthinkable. There isn’t the necessary trust in science (or anything else, for that matter) to let it happen.
The Truth! You can’t handle the truth!
—Colonel Jessup in “A Few Good Men”
We live in an age where the line between truth and lying is blurred in ways that should trouble us. The reality is that you can get yourself into a lot of trouble telling the truth. Structured lying is the new truth. Just ask Jonathan Gruber, whose truth-telling created a firestorm. He said that Americans were stupid and that the legislation for the ACA (Obamacare, for those of you who don’t know what ACA means) wasn’t transparent. Both things are true, and saying them was politically stupid, but completely honest. It almost perfectly demonstrates that lying is the new truth.
Almost every single bit of legislation passed by Congress is completely opaque and full of crap that Congress wants to hide. It’s how everything is done, whether the bill is Democratic or Republican. Americans probably aren’t “stupid,” at least by the official definition of the word; Americans are willfully ignorant, lazy, uninformed, easily duped, and a variety of other awful things. Telling them the truth about this is the problem. We have a generation of people who were told that they are special and everyone is a winner. Calling bullshit on this is unimaginable, and that was Gruber’s crime. He spoke truth in a city devoted to lies.
Here we get to the core of things: we can’t tell the truth about things; we can’t innovate and eliminate harmful over-management; we can’t trust anyone. The devotion to lying as a society leads directly to the lack of trust. Leadership is born out of trust, so we have only ourselves to blame for its absence. Until we start to honor truthfulness again we can expect leadership to be absent from our lives.
Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing ever happened.
—Winston Churchill
I used to work at McDonald’s a long time ago. Most people know that a Big Mac is dressed with a secret sauce. It looks like Thousand Island dressing, but rest assured, it is a secret sauce of some sort. Indeed, the secret sauce is the literal trademark of the sandwich, its identity, and it’s a secret known only to a hallowed priesthood. Little did I know that in my chosen professional life I would be exposed to a completely different “secret sauce”.
A successful modeling and simulation code is very much the same thing; it has a trademark “secret sauce”. Usually the character of the code is determined by how it is made robust enough to run interesting applied problems. Someone special figured out how to take the combination of physical models, numerical methods, mesh, computer code, round-off error, input, output… and make it all work. This isn’t usually documented well, if at all. Quite often it is more than a little embarrassing. The naïve implementation of the same method usually doesn’t quite work. This is a dark art, the work of wizards, and the difference between success and failure.
The rub is that we are losing the recipes. In many places the people who developed the secret sauce are retiring and dying. They aren’t being replaced. We are losing the community knowledge of the practices that lead to success. We may be in for a rude awakening because these aspects of modeling and simulation are underappreciated, undocumented, and generally ignored. Sometimes the secret that makes the code work is somewhere between mildly and deeply embarrassing.
Conservatives are not necessarily stupid, but most stupid people are conservatives…
to allow that all those who conduct themselves as worthy members of the community are equally entitled to the protections of civil government. I hope ever to see America among the foremost nations of justice and liberality.
The assault on science and reason by conservatives is seemingly endless. Leading the charge is the denial of climate change and the threat it poses to humanity. The reasoning for the denial is two-fold: the risk that action on climate would pose to the ruling corporate class, and conservatives’ love of their wasteful, destructive lifestyles (which largely funnel profit to the corporate overlords). Further attacks come through their embrace of fundamentalist religious factions’ desire to indoctrinate our children with myths in place of science (i.e., creationism). In other venues conservatives attack science that goes against corporate greed, be it environmental science, medicine, and especially the social sciences. Conservatives deny the underlying biological basis of homosexuality because of its implications for their religious beliefs. Time and time again it is their commitment to traditional religious belief over science and reason that drives a wedge.
The attacks on science and reason are by no means completely one-sided. Both liberals and conservatives fail to repudiate the various science deniers and neo-Luddite factions in their midst. Liberal anti-science can be seen in the anti-vaccine, anti-GMO, and anti-nuclear movements. Each of these is based on fear of technology and is fundamentally irrational. For instance, the coupling of liberal environmental leanings and anti-nuclear mindsets undermines support for action on climate change.
What do you do when you’re in a leadership position for a project that you’re sure is moving in the wrong direction?
used twenty years ago. Furthermore, most of the innovative and creative energy has gone into implementing the codes on modern computers. The result is a phalanx of missed opportunities whose implicit costs are massive. I’d like to sketch out some of these opportunities and the costs associated with missing them.
What are some of the things we are missing? Clearly one of the greatest sacrifices of the “sunk cost” code is static discretizations and models. The numerical methods that implement the physical models are generally completely intertwined with the code’s basic structure. Over time, these aspects of the code become a virtual skeleton for everything else the code does. Skeletal replacement surgery usually kills the patient, and that cannot be allowed. Therefore we get stuck. New discretizations could provide far more accurate solutions, and new models could provide greater fidelity to reality, but this has been taken off the table to maintain continuity of effort. Part of the work that we need to conduct is a better understanding of how practical discretization accuracy is achieved. For most applications we don’t have smooth solutions, and the nominal notions of numerical accuracy do not hold. How do discretization choices impact this? And how can these choices be optimized given our resources? Furthermore, changes in these areas are risky and never sure to succeed, while risk reduction and fear of failure form the preeminent maxim of project management today.
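To make the non-smooth point concrete, here is a small sketch (a toy of my own in Python, not drawn from any production code) that measures the observed convergence rate of a nominally first-order upwind scheme advecting a discontinuous profile. The rate that comes out sits well below the formal order, which is precisely the gap between textbook accuracy statements and practical accuracy on realistic problems.

```python
import numpy as np

def upwind_l1_error(n_cells, cfl=0.5):
    """Advect a step profile once around a periodic domain with first-order
    upwind and return the L1 error against the exact (unchanged) profile."""
    x = (np.arange(n_cells) + 0.5) / n_cells          # cell centers on [0, 1]
    dx = 1.0 / n_cells
    dt = cfl * dx                                     # chosen so n_steps * dt = 1 exactly
    step = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
    u = step.copy()
    for _ in range(int(round(1.0 / dt))):
        u = u - (dt / dx) * (u - np.roll(u, 1))       # upwind update, wave speed a = 1
    return np.sum(np.abs(u - step)) * dx              # exact solution = initial step

errors = {n: upwind_l1_error(n) for n in (200, 400, 800)}
for coarse, fine in ((200, 400), (400, 800)):
    order = np.log2(errors[coarse] / errors[fine])
    print(f"N={fine:4d}  L1 error={errors[fine]:.3e}  observed order ~ {order:.2f}")
```

The formal order of the method is one, but the discontinuity drags the observed rate down toward a half; the same erosion happens, less visibly, in real multiphysics calculations.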
Moving on to other, more technical aspects of computing and potential benefits, I’ll touch on two other missing elements. One of these is stability theory. As I noted a couple of posts ago, robustness is a key to a code’s success. At a very deep level robustness is a crude form of stability. The crudeness is a symptom of failings in the current stability theory, which implies that we could do far better with a more extensive and useful theory. Part of this is defining a stability that mathematically captures the requirements for producing robust, physical results. Today we simply don’t have this. Stability theory is only a starting point, and we have to kludge our way to robustness.
A second area where we have suffered from a lack of progress is numerical linear algebra. We are thirty years on from the last big breakthrough, multigrid. Multigrid is viewed as the ultimate algorithm given its ideal scaling with respect to the number of unknowns (its cost is linear, while all other methods are super-linear). Since then we have moved to using multigrid as a preconditioner for Krylov methods, improving both, and we have implemented the method on modern computers (which is really hard). Thirty years is a long time, especially considering that earlier advances in this field came at a faster than decadal pace. A good question to ask is whether a sub-linear method can be defined. Is multigrid the ultimate algorithm? I suspect a sub-linear method can be discovered, and work on “big data” is pointing the direction. Beyond this, we typically solve linear algebra far more accurately (to very small residuals) than is probably necessary. It is done almost reflexively with a better-safe-than-sorry attitude. This is a huge waste of effort, and someone should come up with a sensible way to set solver tolerances and optimize computational resources.
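To illustrate the tolerance point with a toy of my own (a plain conjugate gradient on a small model problem, not a recommendation for any particular code): the iteration count, and hence the cost, is driven directly by the requested residual tolerance, and when discretization error already dominates, the extra digits bought by a tight tolerance are simply wasted work.

```python
import numpy as np

def conjugate_gradient(A, b, rtol):
    """Plain conjugate gradient; stops when ||r|| <= rtol * ||b||."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    target = rtol * np.linalg.norm(b)
    iters = 0
    while np.sqrt(rs) > target:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        iters += 1
    return x, iters

# 1D Poisson matrix (SPD) as a stand-in for a "real" linear solve
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_ref, _ = conjugate_gradient(A, b, 1e-12)            # the "better safe than sorry" solve

for rtol in (1e-2, 1e-4, 1e-8, 1e-12):
    x, iters = conjugate_gradient(A, b, rtol)
    err = np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref)
    print(f"rtol={rtol:.0e}  CG iterations={iters:4d}  error vs tight solve={err:.1e}")
```

The solver is deliberately unpreconditioned and the matrix is deliberately simple; the cost-versus-tolerance trade, not the solver itself, is the point.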
Expect to see a lot of money going into computing to support “extreme” or “exascale” initiatives. It is too bad that this effort is largely misplaced and inefficient. The chosen approach is grossly imbalanced and shows little historical perspective. The work we are not doing is risky, but capable of massive benefit. Current management models seem to be immune to measuring opportunity cost while amplifying the tendency to avoid risk and failure at all costs.
our single greatest weakness. It has become our defining characteristic. Fear is driving everything we do as a nation, and it is choking us. FDR spoke those words to a generation whose early lives spat at fear, but whose actions later in life paved the way to its control. More than the loss of innovation that I wrote about last, we have lost our courage and become a nation of frightened cowards. We fight wars against weak nations for terrible reasons. We allow our vastly armed police forces to terrorize our citizens. We imprison huge numbers of Americans without any thought to what it implies. We torture using methods we have executed people for. It’s all because we are afraid. We are a shadow of the nation that faced fascism because we have lost our courage.
diate attention. Without change the problems will move from festering to metastasizing and exploding. Whether it is the curse of massive economic inequality, with its risks to the bulk of the population and its toxic impact on politics, or our continuing racial inequities, we are shrinking from any progress on both. We are allowing inequality to continue undermining any reality behind the increasingly mythical “American Dream” while allowing the elite to buy elections “legally”. We have an abysmal level of social mobility; if you’re poor you’ll stay poor, and if you’re rich you’ll stay rich. Race is a continuing stain that could explode as minority and majority switch places. We run the risk of having the minority rule, which is a recipe for revolution, as is the scourge of inequality.
The war on terror is the epitome of our collective fear. While 9/11 was tragic, it never should have resulted in the sort of resources devoted to its response. We have lost much of our soul to it. Terror has bred terror, and America committed torture, murder, and other crimes in its wake. We have sacrificed freedom and privacy in the name of fear. Terrorism kills very few Americans, even factoring in 9/11 and the lives of soldiers fighting overseas. Americans do a much better job of killing other Americans than terrorists do, whether by citizen-on-citizen gunfire or our completely and utterly out-of-control police forces.
On top of this we have a completely out-of-control prison system. It has become a modern-day Jim Crow with its racial imbalances, and the complete lack of perspective on it reflects terribly on all of us. We destroy more lives of fellow citizens with the moronic war on drugs than the war on terror ever could have. The criminalization of drugs is mostly about subjugating minorities and little about public safety (alcohol is a very dangerous drug, but it is the drug of choice for the white power structure). The drug war isn’t about safety; it’s a replacement for Jim Crow.
Yesterday while working out I read a stunningly good article from Aeon by Hanlon on what he calls the “Golden Quarter.”

Cold War. The great stagnation that started in the early 1970s has seen each of those elements come to a halt, along with an immense rise of two paired elements: massive inequality and conservatism. The rebuilding of a class of oligarchs is destroying the vast middle class that marked that period of great progress. The conservative movements are a direct response to the vast social (and technological) progress. The conservatism is a reaction to the outright fear of change that Hanlon identifies.
creating wealth that is outside the established channels of the social order. The conservatives have come as a reaction to the sort of changes produced in the “Golden Quarter,” as Hanlon describes it. Fear of racial equality is driven by the loss of the white majority, and religious fundamentalism reacts to the sorts of freedoms earned during that period. All of this is amplified by the discomfort of new technology, while that new technology creates change in society that wreaks havoc with the traditional social order.
Old Europe started to die in World War I, and its wake helped set in motion the forces that created the depths of the Depression and the cataclysm of World War II, which marked the end of Old Europe and the birth of that Golden Quarter. One must also remember that the excesses of the hyper-rich and inequality played a key role in how WWI and the Depression unfolded. Those excesses unleashed a torrent of progressive action to fix the damage to society. It seems that the same thing could unfold in the future to end the current era of stagnation and greed. Let’s hope not. One might hope we have the wisdom to turn away before things get so bad.
The bind we are in today is largely about trust and faith in each other. We don’t trust because we know how selfish, self-centered, and fundamentally corrupt we are, and we assume everyone else is just as untrustworthy. Without trust the ability to do anything important or great simply doesn’t exist. No one is worth investing in for the good of the whole. Every action has become centered on the good of the self. Crisis and calamity are built by such selfishness. Unfortunately, America is the most selfish place in the world, bar none. You do the math: who is most likely to trigger the next calamity?
he process. Our mode of project/program management and accountability is crushing our ability to do meaningful work. We make plans for our research, which include milestones, Gantt charts, and the like. While I don’t have anything against planning per se, I have a lot against being held to the plans. The quarterly reports are exemplars of being held to a plan that should only be an initial trajectory, not the final destination.
I will grant you that this approach to project management has its place. A lot of rote construction projects should be done this way. A complex but often-executed project should run this way. But I am talking about research. Research is the exemplar of what should absolutely not be run this way. Yet we do it with almost Pavlovian glee.
I thought about what I wrote a few weeks ago, and realized that when I say robust, I mean almost the same thing as stable. Well, almost the same is not the same. Robust is actually a stronger statement, since it implies that the answer is useful in some sense. A stable calculation can certainly produce utter and complete gibberish (it may be even more dangerous to produce realistic-looking, but qualitatively and quantitatively useless, results). I might posit that robustness could be viewed as a stronger form of stability, one that provides a guarantee that the result should not be regarded as bullshit.
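Here is a small, hedged illustration of the distinction (a toy of my own construction): both schemes below satisfy the usual linear stability condition, yet the second-order one manufactures new extrema at a discontinuity, exactly the kind of stable-but-suspect output a robust code has to suppress.

```python
import numpy as np

def advect_step(scheme, n=200, cfl=0.8, t_final=0.25):
    """Linear advection u_t + u_x = 0 of a step profile on a periodic mesh."""
    x = (np.arange(n) + 0.5) / n
    u = np.where(x < 0.5, 1.0, 0.1)          # nonnegative "density"-like jump
    c = cfl                                  # Courant number (dt = cfl * dx, a = 1)
    for _ in range(int(round(t_final * n / cfl))):
        um, up = np.roll(u, 1), np.roll(u, -1)
        if scheme == "lax-wendroff":         # second order, linearly stable for c <= 1
            u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
        else:                                # first-order upwind: monotone, "robust"
            u = u - c * (u - um)
    return u

for scheme in ("lax-wendroff", "upwind"):
    u = advect_step(scheme)
    print(f"{scheme:13s}  min={u.min():+.3f}  max={u.max():.3f}  "
          f"new extrema? {u.min() < 0.1 - 1e-12 or u.max() > 1.0 + 1e-12}")
```

Both runs are stable in the von Neumann sense; only one of them respects the bounds of the data, which is closer to what I mean by robust.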
Perhaps this is the path forward I’m suggesting. The theory of PDE stability is rather sparse and barren compared to ODE theory. PDE stability is really quite simple conceptually, while ODE stability theory is rich with detail and nuance. The ODE side has useful and important concepts such as A-stability, L-stability, and so on. There are appealing concepts such as relative stability and order stars, which have no parallel in PDE stability. I might be so bold as to suggest that PDE stability theory is incomplete and unfinished. We have moved toward accuracy and efficiency and never returned to finish the foundation they should be built upon. We are left with a field that has serious problems with determining the quality and correctness of solutions.
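For anyone who hasn’t met those ODE notions, the standard definitions in terms of the scalar test equation are compact enough to state here (textbook material, nothing novel):

```latex
% Applied to the test equation y' = \lambda y with step size h and z = h\lambda,
% a one-step method advances the solution as y_{n+1} = R(z)\,y_n.
\[
  \text{A-stable:}\quad |R(z)| \le 1 \;\;\text{whenever } \operatorname{Re}(z) \le 0 .
\]
\[
  \text{Backward Euler: } R(z) = \frac{1}{1-z}, \qquad
  \text{L-stable: A-stable and } \lim_{\operatorname{Re}(z)\to -\infty} |R(z)| = 0 .
\]
```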
PDE stability theory was first clearly articulated by John von Neumann, first communicated during lectures in February 1947 and in a report that same year [VNR47]. The same concepts appeared in print, albeit obliquely, in von Neumann and Goldstine [VNG47] and in Crank and Nicolson’s classic [CN47]. Joe Grcar gives a stunning and full accounting of the work of von Neumann and Goldstine and its impact on applied mathematics and computing in SIAM Review [Grcar]. Since von Neumann had access to computing and saw its power, he encountered stability issues first hand and tackled them. He had to; they bit him hard in 1944 [MR14]. His stability analysis methodology is still the gold standard for PDEs.
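To make the methodology concrete, here is the textbook von Neumann (Fourier) analysis applied to first-order upwind advection; the example is my choice for brevity, not one taken from the 1947 report:

```latex
% First-order upwind for u_t + a\,u_x = 0 with a > 0 and Courant number
% c = a\,\Delta t / \Delta x.  Substitute the Fourier mode u_j^n = g^n e^{i j\theta}:
\[
  u_j^{n+1} = u_j^n - c\,(u_j^n - u_{j-1}^n)
  \quad\Longrightarrow\quad
  g(\theta) = 1 - c\left(1 - e^{-i\theta}\right),
\]
\[
  |g(\theta)|^2 = 1 - 2c\,(1-c)\,(1-\cos\theta) \;\le\; 1 \;\text{ for all }\theta
  \quad\Longleftrightarrow\quad 0 \le c \le 1 .
\]
```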
As Dahlquist relays, the PDE world had a head start, and other important work was conducted, perhaps most significantly the equivalence theorem of Lax [LaxEquiv]. This theorem was largely recreated independently by Dahlquist two or three years later (he reports that Lax gave the theory in a seminar in 1953). The equivalence theorem states that the combination of stability and consistency is equivalent to convergence. Being rather flip about this: stability means getting an answer, and consistency means solving the right problem.
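Stated a little less flippantly (the hypotheses matter):

```latex
% Lax equivalence theorem: for a well-posed linear initial value problem and a
% finite difference approximation to it that is consistent,
\[
  \text{stability} \;\Longleftrightarrow\; \text{convergence}
  \quad\text{as } \Delta t,\,\Delta x \to 0 .
\]
```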
Life comes with many choices about what to do, what to eat, buy, watch, listen to, and so on. Depending on your personal tastes these choices are a wonder or a burden. If you really care about something, quite often you demand choices in order to be happy. You won’t be pleased with limited options when you know something better isn’t even being offered. In other cases, where you aren’t emotionally invested, too many choices can be a burden and unwelcome. You just need something functional and aren’t willing to expend the effort to sift through a bunch of alternatives. This distinction happens over and over across our lives, both personal and professional.
What one person demands as a phalanx of options is a crushing affront to another. The demand for choice comes from the aficionado who sees the texture and variation among the options. Having no options can feel like being forced to accept something awful; this can even be true when the single option is acknowledged as the best and would be chosen from among many. On the other hand, for someone who doesn’t care about the details, the mediocre is just fine. It isn’t that they wouldn’t like something better; it is that they can’t tell the difference or don’t care. This dichotomy exists in everyone and varies from topic to topic. It plays a huge role in science and engineering. I am certainly guilty of it, and I suspect all of you are too.
A while back I wrote about what I don’t like about the finite element method (FEM).
Working at Sandia, one thing is always true: the code you write will implement the FEM. For a new project it would generally be very beneficial to have multiple valid discretizations on the same mesh. This would enable a number of things, such as error estimation, resilience against hardware errors, and more robust overall algorithms. The problem is that the FEM generally offers a single preferred discretization once the mesh and associated elements are chosen.
under-integrate, lump the mass matrix, or apply a stabilization method. Even then, the available options for discretizing are rather barren compared with finite volume or finite difference methods. It feels like a straightjacket compared to their relatively unconstrained freedom. Even the options I once worked with were too constrained compared with the universe of possibilities, as I discovered in my most recent paper (“Revisiting Remap Methods,” DOI 10.1002/fld.3950).
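To sketch the “multiple valid discretizations on one mesh” idea from above (a toy illustration of my own, not how any production code actually does it): advance the same data with two different updates and use their disagreement as a cheap local error indicator.

```python
import numpy as np

# Advance the same data with two different updates on one mesh and use their
# pointwise disagreement as a rough local error indicator.

n, c = 100, 0.5                               # cells and Courant number (a = 1)
x = (np.arange(n) + 0.5) / n
u = np.exp(-200.0 * (x - 0.3) ** 2)           # smooth pulse on a periodic mesh

um, up = np.roll(u, 1), np.roll(u, -1)
u_first = u - c * (u - um)                                          # first-order upwind
u_second = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um) # Lax-Wendroff

indicator = np.abs(u_second - u_first)        # large where the mesh under-resolves u
flagged = np.sort(np.argsort(indicator)[-5:]) # cells one might refine or distrust
print("flagged cells:", flagged)
print("indicator there:", np.round(indicator[flagged], 4))
```

The same trick supports cross-checking against hardware faults and building more robust hybrid updates, which is exactly what a single preferred discretization takes off the table.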
For me this is a particular issue in the area of shock physics. Most users of shock physics codes are completely happy with their options. For some, the code simply needs to run to completion and produce something that looks plausibly realistic. To me this seems like a god-awfully low standard, and I see methods that are antiquated and backwards. The code users usually only notice new methods when something bad happens: the code runs slower, the answer changes from the ones they’ve grown accustomed to, or the code crashes. It is a rarity for a new method to be greeted as a benefit. The result is stagnation and a dearth of progress.
We have become a community that sees options as a burden. Other burdens, such as changes in computers, are sapping the appetite for the options that do exist. As time goes by, the blessings of choice seem more and more distant and foreign to the thought process. Moreover, the users of codes don’t see the effort put into better methods as a virtue; they want to see a focus on improving the capacity to model the physical systems they are interested in. Part of this relates strongly to the missing elements in the education of people engaged in modeling and simulation. The impact of numerical methods on the modeling of physical systems is grossly under-appreciated, and this leads to a loss of perspective. The methods in codes are extremely important and impactful (artificial, numerical, and shock dissipation, anyone?). Users tend to come close to completely ignoring this aspect of their modeling due to the esoteric nature of its impact.