It’s never too late
…to start heading in the right direction.
― Seth Godin
What would be the best gift I could hope for? Honestly, the best gifts would be health and happiness in my private life, so I mean the best gift for my professional life. It would lead to a much more vibrant and rewarding research life than the zombie-like march toward mediocrity we’ve been on for several decades. The mediocrity is fueled by the tendency to celebrate false achievement rather than demand unvarnished success. So what would I like to find under the tree this morning?
Confidence is ignorance. If you’re feeling cocky, it’s because there’s something you don’t know.
― Eoin Colfer
How about an end to the orgy of self-congratulatory false achievement defining scientific computing these days? That would be refreshing for a change. There is a disturbing trend to report only success today; any sense of failure or challenge is simply not accepted by our funding agencies. So instead of computation honestly being the third way to practice science, as many would pronounce, it still falls a bit short. Perhaps with sufficient honesty over the past couple of decades, this would be an honest assessment instead of a hollow boast.
To share your weakness is to make yourself vulnerable; to make yourself vulnerable is to show your strength.
― Criss Jami
There are plenty of challenges to be had if we simply applied some integrity to our assessment of what we’re capable of. This doesn’t belittle what we are capable of, but rather highlights the size of the difficulties we face. The worst aspect of the current situation is that our present capability could be so much beyond what it is today if we treated the science with greater honesty. Computational science opens doors to understanding unavailable to experimental and theoretical science; it is a complement to each, but not a replacement. A large part of the problem is the extent to which it has been sold as a replacement for traditional science. This is at the core of the problem, but there is much more wrong with the direction.
The best way to predict the future is to invent it.
—Alan Kay
As the end of the era of cheap progress in computing power looms before us, we should look toward crafting a future where progress isn’t gifted by hardware following a law whose demise is vastly overdue (i.e., the end of Moore’s law). I see a field where Moore’s law is treated as the only path to success, and its demise is greeted as apocalyptic. Rather than seeing the end of Moore’s law as a problem, I believe it will force us to work smarter again. We will stop relying upon faster computers to gift us progress and start thinking about how to solve problems better. This lack of thought has created a dismal state of affairs in scientific computing research.
Before getting to what we are missing, it might be good to focus a little attention on hardware and how we got into this mess. Twenty some-odd years ago we had the “attack of the killer micros.” It became something real as it swallowed the high performance computing world whole. Today it is in turn being taken over by an attack of legions of portables. The entire computing industry has become enormous, dominated by cell phones and the impending “Internet of things.” Before the killer micros, we had custom machines and custom chips tailored to the scientific computing world, typified by Crays.
It might be worth thinking about what computers customized for the needs of scientific computing would have looked like if we hadn’t accepted the fool’s errand of chasing Moore’s law like a dog chases a dirty old tennis ball. I don’t like being overly nostalgic, but in many ways the state of affairs with computing hardware has gotten far worse over the last twenty years. Sure, the computers are much faster, but they are terrible to use. From a user’s perspective, the systems are worse. We have accepted the commodity bargain and a “worse is better” approach instead of demanding something focused on solving our problems. I would hypothesize that we would have been better off with somewhat slower computers that were more useful and better suited to our tasks. In relative terms we have accepted crap in pursuit of a politically correct, but technically corrupt, vision of computing. In pursuit of the fastest, biggest computer we have accepted worse actual, real performance.
Innovative solutions to new challenges seldom come from familiar places.
—Gyan Nagpal
What if we had stayed with that model? Of course it wasn’t a viable path from a business point of view, but stay with me. What would computer designs look like? What would the programming model look like? How would the emphasis be different? I’d like to think we would have been better off not trying to squeeze two more decades out of Moore’s law. The truth is that we never use all of these machines anyway, except for marginally useful (mostly useless) stunt computations. The whole Moore’s law thing is largely a lie anyway; the problem solved in measuring speed (LINPACK) is completely uncharacteristic of scientific computing. If we apply benchmarking to something more realistic (the HPCG benchmark is a step in the right direction), we see that supercomputers get 1-5% of the speed that the LINPACK benchmark gives. Most of our codes are even slower than that. Putting this all together we can see that high performance computing is a bit of a facade. The emphasis on hardware is central to the illusion, but this is only the tip of the illusory iceberg.
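Just to put rough numbers on that gap, here is a purely illustrative calculation; the 100 PFLOP/s peak is a made-up figure rather than any specific machine, and the fractions simply restate the 1-5% (and lower) range cited above.

```python
# Illustrative only: what "a few percent of LINPACK" means for a hypothetical
# machine with a 100 PFLOP/s LINPACK number. No specific system is implied,
# and the "typical application" fraction is an assumed, not measured, value.
linpack_pflops = 100.0
for label, fraction in [("HPCG-like (high end)", 0.05),
                        ("HPCG-like (low end)", 0.01),
                        ("typical application code", 0.001)]:
    delivered = linpack_pflops * fraction
    print(f"{label:>25s}: {fraction:6.1%} of LINPACK -> {delivered:6.2f} PFLOP/s delivered")
```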
Failure isn’t something to be embarrassed about; it’s just proof that you’re pushing your limits, trying new things, daring to innovate.
—Gavin Newsom
Research in scientific computing has not had enough impact over this time. In too many cases new numerical methods are still not being used in codes. The methods embedded in codes are largely and significantly the same ones we used twenty years ago. The models, too, are twenty years old. The older models should be invalid in many cases simply due to the refinement of the mesh and the requisite change in time and length scales. New capabilities in uncertainty estimation and coupled physics are still research rather than deployed and producing results. In many cases the codes are two, three or four generations of methods and models overdue for a complete change. Back in the days of Crays, and a bit more courage in research, new codes would be spawned every few years. Now we hold on to our old codes for decades. New codes are the vehicle for new ideas, new methods and new models to be used for useful, important work.
Perhaps a useful analogy would be to think of high performance computing as a car. In this view the computer is the engine, and the code is the steering, interior, stereo, and other features. What kind of car have we been driving? Basically we are driving a car from the 1980s with a series of new engines. The steering is the same, the interior has at most been reupholstered, and all the original equipment is still in place. Instead of being able to hook up your iPhone to the stereo, we are playing our old 8-tracks. No built-in navigation either, so make sure you buy a map at the gas station. This works fine, but you won’t get any warning about the major construction zone ahead. This is the bargain we’ve entered into; it is an absurd and unbalanced approach. If it were a car we wouldn’t stand for it, so why do we put up with it in computing?
If failure is not an option, then neither is success.
― Seth Godin
Over the years the entire program has suffered under the low-risk, false-success management of today’s research. We are expected to deliver progress while taking no risks. No one who understands how to make progress would buy into it. We labor under the assumption that we can manage our way to complete success while encountering no problems, and succeed without any need for failure. Failure and risk are the lifeblood of progress. The management of research in this manner is largely an illusion that manifests in a real lack of risk taking and pervasive incrementalism. It strains credibility to its limit to believe this. The end result is achievement without progress. Achievement is declared by fiat, and only because failure is never an option today. The utter lack of honesty is truly disturbing.
Change almost never fails because it’s too early. It almost always fails because it’s too late.
― Seth Godin
Part of the impact of this reign of mediocrity is the over-development of code bases. This enables achievement to take place within the incrementalism so intrinsic to the low-risk model of management. To achieve progress we continue to build upon bases of code long after they should have been discarded. As a consequence the amount of technical debt and inflation associated with our code is perilously large. We are hemmed into outdated ideas of how a code should be written, and into the methods and models implicit in the design. The ability to start fresh and put new ideas into action simply isn’t allowed under the current model.
The best way to get a good idea is to have a lot of ideas.
—Linus Pauling
A couple of other write-ups have appeared this week touching on the topic of progress and what’s holding us back (http://www.americanscientist.org/issues/pub/wheres-the-real-bottleneck-in-scientific-computing and http://www.hpcwire.com/2006/07/21/seven_challenges_of_high_performance_computing-1/). In the case of Greg’s commentary, he is right on the mark, but the use of modern software engineering is close to a necessary, yet wholly insufficient, condition for success. I see this where I work. We are about as good as anyone at doing the software end of things professionally, yet without the scientific vision and aggressive goal-setting it is a hollow victory.
What the software engineering reflects is the maturity of scientific software and the need to contend with its impact on the field. Codes have become large and complex. To solve big problems they need to be developed using real engineering. Like most infrastructure they crumble and show their age. If they are not invested in and kept up, they will fail to perform. The lifetime of software is much shorter than that of other infrastructure, but similarly we don’t have the political will to fix the problem.
Doug makes a number of good points in his commentary, but I think he misses the single biggest issue. The main problem with the list is the degree of political correctness associated with it. He still hails from the point of view that Moore’s law must be pursued. It is the tail wagging the dog, and it has been in control of our mindset for too long. Doug also hits upon some of the same issues that Greg touches on, and again, software engineering done professionally is a necessity. Finally, the increasing emphasis on V&V is touched upon, but as I will discuss, our current ideas about it miss something essential. V&V should be the discipline that points to where progress has been made, and where it has been lacking.
The biggest issue with scientific computing is the conclusion that we have already solved a bunch of problems with current capability. All that we need to do is build bigger computers, refine the mesh and watch the physics erupt from the computer. This is the sense upon which most of high performance computing is constructed. We already know how to do everything we need to do; it’s just a matter of getting the computers big enough to crush the problems into submission.
This is where verification and validation come in. Again, our current practice is permeated with a seeming belief that good results are merely a formality. Good V&V work provides balance: some sense of trust in results combined with an assessment of how limited our capability really is. It should provide a targeted view of where improvement is needed. Instead of honesty about the nature of our understanding, we have over-selling. V&V is expected to be a rubber stamp for the victories of scientific simulation. Bad news isn’t expected or accepted. We act as if we have complete mastery over the science, and it’s just a matter of engineering.
Remember the two benefits of failure. First, if you do fail, you learn what doesn’t work; and second, the failure gives you the opportunity to try a new approach.
—Roger Von Oech
Nothing could be further from the truth. The primary achievement of scientific computing has been to unveil new mysteries and limitations. This is the nature of the quest for knowledge. Answering good questions only yields the capacity to ask better, more refined questions. To answer these new questions, we need better computational science, but also vibrant experimental and theoretical science. Our current approach to the field in general is not providing it. The methods and models of yesterday are not sufficient to answer the questions of today or tomorrow. We need to quit perpetuating the illusion that they are.
Healthy curiosity is a great key in innovation.
—Ifeanyi Enoch Onuoha
The right way to make progress is to realize that sometimes the answer to a question raised by computation lies in experiment or theory. Conversely, a new theoretical question may find answers in experiment or computation. We benefit by having each area push the other. Computation simply adds to the capacity to solve problems, but does not replace the need for the traditional approaches. If we neglect theory and experiment, we are diminished. Ultimately our progress with computation will be harmed (if it hasn’t been already).
Let’s celebrate the holidays and give each other the gift of deep, open-minded questions that require every tool at our disposal to answer. Let’s stop giving ourselves false, self-congratulatory achievements that only perpetuate the wrong view of science. Let’s make real progress.
Only those who dare to fail greatly can ever achieve greatly.
— Robert Kennedy

…degree to which human ingenuity plays a role in progress. The program that funds a lot of what I work on, the ASC program, is twenty years old. It was part of a larger American effort toward science-based stockpile stewardship, envisioned to provide confidence in nuclear weapons when they are no longer tested.
For example, think about weather or climate modeling and how to improve it. If we model the Earth with a grid of 100 kilometers on a side (so about 25 mesh cells would describe New Mexico), we would assume that a grid of 10 kilometers on a side would be better because it now uses 2500 cells for New Mexico. The problem is that a lot else in the model needs to change too in order to take advantage of the finer grid, such as the way clouds, wind, sunlight, plant life, and a whole bunch of other things are represented. This is true much more broadly than weather or climate: almost every model that connects a simulation to reality needs to be significantly reworked as the grid is refined. Right now, insufficient work is being funded to do this. This is a big reason why the benefit of the faster computers is not being realized. There’s more.
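As a quick sanity check on that counting, here is a small sketch; treating New Mexico as a roughly 500 km by 500 km square is my own simplifying assumption for illustration, not a precise geography.

```python
# Rough cell counts for the example above, assuming New Mexico is approximated
# as a ~500 km x 500 km square (an assumption for illustration only).
side_km = 500.0
for dx_km in (100.0, 10.0):
    cells = (side_km / dx_km) ** 2
    print(f"{dx_km:5.0f} km cells -> about {cells:5.0f} cells covering New Mexico")
# 25 cells at 100 km, 2500 cells at 10 km: a hundredfold increase in cells,
# before any of the cloud, wind, or land-surface models have been reworked.
```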
One of the problems of modern supercomputing is the lack of effort to improve the solution of balance laws. We need to create methods with smaller errors, and when errors are unavoidable, bias them toward errors that are physical in character. There’s more.
For special sorts of linear systems associated with balance laws we can do a lot better than general-purpose solvers, whose work grows like the cube of the number of equations. This has been a major part of the advance of computing, and the best we can do is for the amount of work to scale exactly like the number of equations (i.e., linearly). As the number of equations grows large, the difference between cubic and linear growth is astounding. These linear-scaling algorithms were enabled by multigrid or multilevel methods invented by Achi Brandt almost 40 years ago, which came into widespread use 25 or 30 years ago.
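To make the cube-versus-linear gap concrete, here is a rough operation-count comparison. The (2/3)N^3 figure is the standard cost of dense LU factorization; the 50 operations per unknown for the linear-scaling solve is a ballpark assumption of mine, not a measured constant.

```python
# Rough work estimates: dense direct solve ~ (2/3) N^3 operations versus a
# linear-scaling (multigrid-like) solve ~ C * N with a modest constant C.
C = 50.0  # assumed operations per unknown for the linear-scaling solver
for N in (10**3, 10**6, 10**9):
    dense = (2.0 / 3.0) * N**3
    linear = C * N
    print(f"N = {N:>12,d}: dense ~ {dense:9.2e} ops, "
          f"linear ~ {linear:9.2e} ops, ratio ~ {dense / linear:9.2e}")
```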
We can’t really do any better today. The efforts of the intervening three decades of supercomputing have focused on making multilevel methods work on modern parallel computers, with no algorithmic improvement. Perhaps linear is the best that can be done, although I doubt it. Work with big data is spurring the development of methods that scale at less than linear cost. Perhaps these ideas can improve on multigrid’s performance. The key would be to allow inventiveness to flourish. In addition, risky and speculative work would need to be encouraged instead of the safe and dull work of porting methods to new computers.
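For readers who have never seen where that linear scaling comes from, here is a minimal geometric multigrid V-cycle for the 1D Poisson problem, the textbook setting for multilevel methods. It is a teaching sketch of the idea Brandt pioneered, not production code or any particular laboratory’s implementation.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free application of the 1D Poisson operator -u'' (Dirichlet BCs)."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps (the diagonal of A is 2/h^2)."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    """Full-weighting restriction of a fine-grid vector to the next coarser grid."""
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(ec, n_fine):
    """Linear-interpolation prolongation of a coarse-grid correction."""
    ef = np.zeros(n_fine)
    ef[1::2] = ec
    ef[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    ef[0] = 0.5 * ec[0]
    ef[-1] = 0.5 * ec[-1]
    return ef

def v_cycle(u, f, h, pre=2, post=2):
    """One V-cycle: total work is proportional to the number of unknowns."""
    n = u.size
    if n == 1:                               # coarsest level: solve (2/h^2) u = f exactly
        return f * h**2 / 2.0
    u = jacobi(u, f, h, pre)                 # pre-smooth: damp high-frequency error
    r = f - apply_A(u, h)                    # residual
    ec = v_cycle(np.zeros((n - 1) // 2), restrict(r), 2.0 * h, pre, post)
    u = u + prolong(ec, n)                   # coarse-grid correction
    return jacobi(u, f, h, post)             # post-smooth

if __name__ == "__main__":
    n = 2**7 - 1                             # 127 interior points; grids nest as 2^k - 1
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)         # manufactured source; exact solution sin(pi x)
    u = np.zeros(n)
    for cycle in range(8):
        u = v_cycle(u, f, h)
        print(f"cycle {cycle + 1}: residual = {np.linalg.norm(f - apply_A(u, h)):.2e}")
```

With a couple of pre- and post-smoothing sweeps, each cycle typically cuts the residual by roughly an order of magnitude independent of the grid size, which is exactly the property that makes the total work scale linearly with the number of unknowns.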

Like most Americans I speak primarily with people who are like me. In my case that means people educated in a scientific or technical field, working at a lab or university, doing research. I don’t have a lot of contact with people of a different background. I do have a handful of conservative friends, and the difference in their worldview is as understandable as it is stunning. What is really breathtaking is the difference in what we think is true. This is attributable to where our information comes from.
In large part the Internet has allowed everyone to draw from sources of information that suit their preconceptions. To a large extent we are fed information that drives us apart as a public.
The traditional news media is dominated by corporate interests, with Fox News leading the way, but the old three networks, ABC, CBS and NBC, are no different. MSNBC is viewed as the liberal vanguard, but again, it’s no different either. Once this dynamic was primarily associated with newspapers, but as they die it has been taken over by TV, and as TV begins to die, by the Internet. Big money is running the show in every case, finding a niche that provides it profit and power.
Sometimes it’s semi-innocuous such as advertising for a TV show or movie. The dangerous aspect is the continuous avoidance of issues that the big moneyed interests don’t want portrayed, discussed or explored. This lack of coverage for a class of issues associated with money and class is poisoning democracy, tilting the discussion, and ultimately serving only the short-term needs of the businesses themselves.
A more innocuous aspect is the slanting of the political dynamic, which is happening pervasively via Fox News and its use of a message firmly attached to a single political agenda. In the UK, Sky does the same thing. Across the board the impact has been to turn up the heat on partisan bickering and diminish the ability of the democratic process to function. Part of the problem becomes clear if you make the mistake of talking about such things: people no longer operate from the same facts. Each side of the debate simply cherry-picks facts to suit its aims and avoids facts that undermine its chosen message. As a result no one really gets the full story, and the truth is always slanted one way or another. The ability to compromise and move forward on vexing problems has become impossible.
The situation today favors those who have power, and today power stems from wealth. Money is paying for information, and this information is paving the way toward an even greater accumulation of power. The Internet is not providing the democratization of knowledge, but rather giving those already in power unparalleled capability to issue effective propaganda. Those in power are assisted by a relatively weak government whose ability to counter their stranglehold on society is stymied by inaction.
One of the things that bothers me most about the changes where I work is the waning importance of the individual. The single person is gradually losing power to a nameless and increasingly faceless management. Increasingly, everyone is viewed as a commodity, and each of us is interchangeable. Another scientist or engineer can be slotted into my place without any loss of function. My innate knowledge, experience, creativity and passion are each worthless when compared to the financial imperatives of my employer. We are encouraged, if not commanded, to be obedient sheep. Along the way we have lost the capability to foster deep, sustained careers; the current regime encourages the opposite. The reason given for sacrificing this aspect of work is financial: no one can pay the cost of the social contract necessary to enable it. I think that is BS; the real reason is power and control.
…employer. This model was the cornerstone of the National Laboratory system. The Nation benefited greatly from the model, both in terms of security and national defense, and from the scientific and engineering greatness it engendered.
The success of the scientists during the Manhattan Project provided the catalyst to extend this model more broadly. Their achievements fueled its continued support into the late 70s. Then it was deemed too expensive to maintain. The management of the Labs is choking this culture to death. If it isn’t already gone, it soon will be.
A deep part of its power was the enabling of individual achievement and independent thought. Perhaps more than the cost of the social contract, the Nation has allowed the forces of conformity, lack of trust and fear of intellect to undermine this model. The financial costs have escalated largely due to systematic mismanagement and the absence of political courage and leadership, yet cost has been the excuse for the changes. While the details are different at the Labs, the overall forces go hand-in-hand with the destruction of the middle class, who were offered a similar social contract in the postwar years. This has been replaced by cheap labor, or outsourcing, always with the excuse of cost.
…objectives actually undermined the achievement of innovation. It was a fascinating and provocative idea. I knew it would be the best thing I did all day (it was), and it keeps coming back to my thoughts. I don’t think it was the complete answer to innovation, but Professor Stanley was onto a key aspect of what is needed to innovate.
In other words, the system we have today is harmful. We are committed to continuing down the current path, results be damned. We have to plan and have milestones, just as business theory tells us, with no failure being accepted. In fact we seem to act as if failure can simply be managed away. Instead of recognizing that failure is essential to progress, and actually healthy, we attempt to remove it from the mix. Of course, failure is especially unlikely if you don’t try to do anything difficult and choose your objectives with the sort of mediocrity accepted today, because lack of failure is greeted as success.
The courage that once described Americans during the last century has been replaced by a fear of any change. Worries about a variety of risks have spurred Americans to accept a host of horrible decreases in freedom for a modest-to-negligible increase in safety. The costs to society have been massive, including the gutting of science and research vitality. Of course fear is the surest way to bring about the very outcomes you sought to avoid. We eschew risk, attempting to manage the smallest detail as if that might matter. This is combined with a telling lack of trust, one that invites a certain amount of self-reflection: people don’t trust because they are not trustworthy themselves, and they project that character onto others. The combination of risk avoidance and lack of trust is a toxic recipe for decline, and the opposite of an environment for innovation.
We collectively believe that running everything like a business is the path to success. This means applying business management principles to areas they have no business being applied to. It means applying principles that have been bad for business itself. Their application has destroyed entire industries in the name of short-term gains for a small number of shareholders. The problem is that these “principles” have been extremely good for a small cadre of people who happen to be in power. Despite the obvious damage they do, their wide application makes perfect sense to the chief beneficiaries. It is both an utterly reasonable and completely depressing conclusion.
Returning to the theme of how best to spur innovation and its antithetical relation to objectives, I become a bit annoyed. I can’t help but believe that before we can build the conditions for innovation we must destroy the false belief that business principles are the way to manage everything. This probably means a fundamental shift in which business principles we favor. We should trade the dictum of shareholder benefit for a broader social contract that benefits the company’s long-term health, its employees and its communities as well. Additionally, we need to recover our courage to take risks and let failure happen. We need to learn to trust each other again and stop trying to manage everything.
I’m sure that most people would take one look at the sort of things I read professionally and collectively gasp. The technical papers I read are usually greeted by questions like “do you really understand that?” It’s usually a private thing, but occasionally, on a plane ride, I’ll give a variety of responses depending on the paper, from “yeah, it actually makes sense” to “not really, this paper is terrible, but I think the work might be important.”
…and becomes increasingly unapproachable to anyone else. This tendency should mark the death knell of the area, but instead the current system seems to do a great deal to encourage this pathology.
Other areas seem to be so devoid of the human element of science that the work has no contextual basis. Science is an intrinsically human endeavor, yet scientists often work to divorce humanity from the work. A great deal of mathematics works this way, and it leads to a gap in the understanding of the flow of ideas. The source and inspiration for key ideas and work is usually missing from the writing. This leads to a lack of comprehension of the creative process. A foolhardy commitment to including only the technical detail in the writing loses the history. Part and parcel of this problem are horrific literature reviews.
In some fields the number of citations in the work is appallingly small. The author ends up providing no map for the uninitiated reader to figure out what they are talking about. Again, this works both to hide information and context and to make the author seem smarter than they really are.
The phrase “world class” appears so often in reviews I’ve seen that it has become a cliché. It is a completely unnecessary throwaway compliment handed out like candy on Halloween. It’s become expected, hollow praise that ultimately undermines any honest critique that follows its utterance. It’s time to stop handing out this compliment unless the situation calls for it, which is almost never.
…project-to-project, all of them being “world class.” The first few times I heard it, I felt great. Like wow, I’m a high-performing individual in a world class organization, doing world class research. I must be really great too. As time went on, I kept hearing this even if the review was a complete train wreck. The comment would come even if the content of the review was decisively mediocre.
…the ego-massaging part of the review.” Are the organizations I work for so weak-willed and pathetic that they need this sort of garbage? Is the overly defensive and meagerly technical content of the review actually worthy of such high praise? Increasingly this oft-heard phrase has become an excuse to dismiss everything the review has to say, mostly from the sentiment that if they think that was “world class,” these guys are a bunch of bozos. They are either stupid or dishonest, if not both. Are we paying them to give us this empty praise? How did we find people with such low standards?
This plays into the general theme of the role of bullshit in today’s society. Telling the truth is the new sin. It’s so much more acceptable to tell the lies you are intended to believe. If someone actually expressed the truth that needs to be said, they would be treated like a pariah. This explains why we pay external reviewers to come around every year and tell us that we are still “world class.” Along with the empty platitudes we get a handful of suggestions that can all be ignored, because why would a “world class” organization need to improve?



One of the keys to using a backup plan effectively is that it allows your primary plan to be more aggressive. Knowing that a viable alternative is available allows a more expansive primary plan to be envisioned. Presently, the sort of planning that yields a single path forward produces risk-averse objectives. This produces the state of affairs we see today. Plans are generally too short-term focused and contain relatively little risk. Having contingency plans to fall back upon would allow a much greater amount of risk to be absorbed in the primary plan.
…a high-order method, a low-order method, and a principle by which to combine them. The principle is to use the high-order method when the situation is safe and won’t produce oscillations, and fall back to the low-order method when danger ensues. Now multiple methods work well enough that people think nothing more needs to be done (I don’t agree!). The same approach has worked well in other areas and, in my opinion, could be employed far more broadly.
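Here is a minimal sketch of that combination principle, using a minmod-limited scheme for 1D linear advection as a stand-in example (my choice of illustration, not necessarily the particular high- and low-order pair any given production code uses): the limiter keeps the high-order slope where it is safe and collapses to the first-order upwind update where oscillations would otherwise appear.

```python
import numpy as np

def minmod(a, b):
    """Pick the smaller-magnitude slope when signs agree, otherwise zero.
    Zero is the low-order (upwind) choice, taken wherever a high-order slope
    would create a new extremum."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, c):
    """One step of 1D linear advection on a periodic grid with a limited
    MUSCL-type scheme; c = a*dt/dx is the Courant number, 0 < c <= 1, a > 0."""
    dl = u - np.roll(u, 1)                  # slope candidate from the left
    dr = np.roll(u, -1) - u                 # slope candidate from the right
    slope = minmod(dl, dr)                  # high-order slope, limited for safety
    u_face = u + 0.5 * (1.0 - c) * slope    # upwind-biased state at interface i+1/2
    flux = c * u_face                       # numerical flux scaled by dt/dx
    return u - (flux - np.roll(flux, 1))    # slope = 0 recovers first-order upwind

# Usage: a square wave stays monotone (no new oscillations) while smooth
# regions retain near second-order accuracy.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
for _ in range(100):
    u = advect_step(u, 0.5)
```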
Sports provide another way to look at adaptive approaches and planning. Some teams are exceptional in a single approach to playing the game, and fail when they come up against the perfect counter. Great teams can play multiple ways and can be effective with Plan A, Plan B, Plan C… They can attack and defend in a variety of ways. The best among them can switch between different approaches seamlessly, either to adapt to an opponent or to surprise and overwhelm one. Executing well in a variety of ways requires immense effort and practice, but this is the price of excellence.
The key element in this thought process is the devotion to solving problems. The secondary element is the development of multiple solutions to those problems. This requires the developer of the plans not to be over-committed to a single approach. Sometimes the biggest problem with a plan is the over-investment of those executing it in a single path to success.

I used to work at McDonald’s a long time ago. Most people know that a Big Mac uses a secret sauce in dressing the sandwich. It looks like Thousand Island dressing, but rest assured, it is a secret sauce of some sort. Ideally, the secret sauce is the literal trademark of the sandwich, its identity, and a secret known only to a hallowed priesthood. Little did I know that in my chosen professional life I would be exposed to a completely different “secret sauce.”
A successful modeling and simulation code is very much the same thing; it has a trademark “secret sauce.” Usually the character of the code is determined by how it is made robust enough to run interesting applied problems. Someone special figured out how to take the combination of physical models, numerical methods, mesh, computer code, round-off error, input, output… and make it all work. This isn’t usually documented well, if at all. Quite often it is more than a little embarrassing. The naïve implementation of the same method usually doesn’t quite work. This is a dark art, the work of wizards, and the difference between success and failure.
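As a purely hypothetical illustration of what that dark art can look like in code (the function, variable names, and thresholds below are all invented for this sketch, not taken from any real code), the secret sauce is often nothing grander than a few floors and clips that never appear in the method paper:

```python
import numpy as np

# Hypothetical "secret sauce": tiny floors applied after every update so the
# textbook method survives real problems. The thresholds are invented here;
# in practice their values are exactly the undocumented, embarrassing detail.
DENSITY_FLOOR = 1.0e-10   # keep density strictly positive
ENERGY_FLOOR = 1.0e-12    # keep internal energy from going negative

def sanitize_state(rho, e_int):
    """Clip an updated state back into physically admissible territory.

    The method as published never needs this step; the code in practice does,
    and the choice of floors rarely makes it into any documentation."""
    rho = np.maximum(rho, DENSITY_FLOOR)
    e_int = np.maximum(e_int, ENERGY_FLOOR)
    return rho, e_int
```

Nothing in the published description of a method requires lines like these, yet remove them and the “same” algorithm stops surviving hard problems.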
The rub is that we are losing the recipes. In many places the people who developed the secret sauce are retiring and dying. They aren’t being replaced. We are losing the community knowledge of the practices that lead to success. We may be in for a rude awakening, because these aspects of modeling and simulation are underappreciated, undocumented and generally ignored. Sometimes the secret that makes the code work is somewhat-to-very embarrassing.