The essence of the independent mind lies not in what it thinks, but in how it thinks.
― Christopher Hitchens
Even with a day off, work last week really, completely sucked. I got to spend very little time on my daily habit of focused writing. Every day at work was a pain, and the week ended with my hosting a group of visitors who are responsible for part of the new exascale computing initiative. Among the visitors were a few people with whom I have history, both good and bad. If you’ve read this blog you know that I’m not a fan of the exascale computing initiative. Despite this, I was expected to be on my best behavior (and I think that I was). It was neither the time nor the place to debate the program’s goals or wisdom (to be honest, I’m not sure what the right time is). It’s pretty clear to me that there hasn’t been much debate or thought put into the whole thing. That’s a discussion for a different day.
Thinking is the hardest work there is, which is probably the reason so few engage in it.
― Henry Ford
Nonetheless some good came from the experience, aside from demonstrating my own self-control. I won’t say much about the visit except that the exascale initiative is not terribly compelling as programs go, and I thought it went well from our official perspective. I have a better idea of how the visitors view the program, its objectives, and its priorities. We had a chance to talk about how we are approaching a similarly structured program. No one is thinking constructively about all the elements missing from the approach, and lots of old mistakes are being made all over again. People show a remarkable lack of historical perspective and an equally remarkable willingness to engage in revisionist history. The refrains of my “bullshit” post on the lack of honesty in how we view success rang in my ears.
Stop thinking, and end your problems.
― Lao Tzu
I also noted the distinct air of control from the visitors and from discussion of their colleagues who run our programs. The programs want to give us very little breathing room to exercise our own judgment on priorities. They want to define and narrow our focus to their priorities. Given the lack of technical prowess among those running things, this is dangerous. Awful programs like exascale are the direct result of this sort of control and of research being run without intellectual thought. Everything is politically engineered, and almost nothing is designed to maximize science. The result is the long-term malaise in research, progress, and science we are suffering from. Ultimately, the system we are laboring under will result in less growth and prosperity for us all. It is the inevitable result of basing our decisions on fear and risk avoidance instead of hope, faith, and boldness.
The conventional view serves to protect us from the painful job of thinking.
― John Kenneth Galbraith
Because our program is all about stockpile stewardship, the meeting was held in a classified setting. This means no electronics and a requirement that I unplug. It might have been a good excuse to get some reading done, but I had to look like I was paying attention all day. So I took copious notes. Not much interesting happened, so most of the notes were notes to self capturing my thoughts, reflections, and perspectives. This alone made the entire experience valuable from a personal-professional perspective. I managed to digest a lot of the backlog of thinking that the well-connected world constantly distracts you from. I had some well-structured time with my own thoughts, and that’s a really good thing.
The only freedom you truly have is in your mind, so use it.
― M.T. Dismuke
Getting away from the electronic world of web pages, text messages, and email for a while is a blessing. I could approach my thoughts with a literal clean sheet. I started by reflecting on all the good ideas I’ve had recently but haven’t gotten the time to work on. It was a lot, which has a depressing aspect. There is so much I could potentially work on that I can’t. It’s worse that I don’t exactly see the value in what I am working on. It’s a bit of a personal tragedy, and I suspect it’s one that plays out across the world of research. We have less and less time to work on things we judge to be important.
Aside from the deeper thoughts, I also realized that it pays to think in many different ways, even from a mechanical point of view. I walk each day, doing a walking meditation followed by free association. It ends up being a very effective way to self-brainstorm. I keep a notebook for each day in a cloud app. There is this blog, which allows for freeform prose, but in an electronic form. Writing things down on paper had subsided a lot, and last week I rediscovered the virtue of that medium, albeit by the nature of the circumstances. For a long time I kept a pad of lined Post-it notes in my car since a lot of good ideas would come to me driving to and from work. It might be good to force myself to use paper alone more often. Thanks to cameras and remarkable text recognition, the paper can go directly into my electronic notebook anyway.
The important thing to me is to capture the ideas that move from the background of my thinking to the foreground. Some of these thoughts are half-baked, but others are genuinely brilliant. The human mind is a remarkable thing, especially when it’s subjected to lots of disparate input. The day away from electronics was good for rebooting how I approach free thinking when it’s available. I’d like to think it’s what I’m paid for, but honestly that isn’t likely to be true. Everything about how I’m paid discourages thinking about the deeper meaning. We are encouraged to simply putter along doing as we’re told. The mantra of today is: quit thinking and get back to work.
Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.
― John Steinbeck
Now we get to the darker aspects of free association: you start to turn your gaze toward the shit show unfolding before you. Life today is full of things that should be regarded with contempt. Our overlords encourage us to ignore the carnage they are subjecting the world to, but it is there, hidden in plain sight. Today we live in a coarse and belligerent culture that threatens to undermine everything good. I’m not talking about the sort of moral decay social conservatives would point to. I’m talking about the fundamental rewards, checks, and balances that encourage an environment of selfish and greedy behavior. At the same time, these same forces work to undermine every effort to pay attention to the larger societal, organizational, and social imperatives that collectively make everything better. We act selfishly in the service of maintaining the power of others, avoiding the sort of collective service that raises everyone.
So I was offered a front-row seat at a primo shit show, and here is what it made me think.
Our research now runs on money as a scoring system, with no real concrete societal objectives in sight. In the 20th century many great things were accomplished, and the technology that dominates our economy was invented through scientific discovery. A great deal of that discovery was directly associated with fear: first of the Germans, then the Nazis, then the Soviets. The atomic bomb, the hydrogen bomb, jet aircraft, microprocessors, cell phones, GPS, and almost everything in our modern world owe their discovery to this response to fear of existential threats. These were real adversaries with well-developed technology, engineering, and science, requiring a serious response from our nation-state to the threat they represented. Today, we treat a bunch of disorganized barbarians as an existential threat. It is completely pathetic.
We really don’t have to have our collective act together to compete. It’s all fear with none of the benefit of accomplishing great things, and we aren’t accomplishing them. We just get the requisite reduction in freedom in response to this fear without any of the virtues. This dismal state of affairs results in a virtual emptying of meaning from work that used to be important. I work at a place where work ought to have value and importance, yet we’ve managed to ruin it.
Power attracts the corruptible. Suspect any who seek it.
― Frank Herbert
It is utterly stunning that working for an organization committed to national security does not provide me with any sense that my work is important. I don’t have enough latitude and capability to exercise my judgment to feel truly empowered at work. All the control and accountability at work primarily disempowers employees and sucks the meaning from work. I ought to feel an immense sense of the importance of what I do. My management, writ large, is managing to destroy something that ought to be completely easy to achieve. This malaise is something we see nationally, as the general sense that your work has little larger meaning is used to crush people’s wills. Instead of empowering people and eliciting their best efforts, we see control used to minimize contributions and destroy any attempt at deeper meaning. This sense is deeply reflected in the current political situation in the world and the broad, sweeping anger seen in the populace.
The love affair with corporate governance is another aspect of the current milieu that is deeply corrupting science. Our corporate culture is corrupting society as a whole, and science is no exception. The greed and bottom-line infatuation pervert and distort value systems and have systematically harshened the culture of everything they touch. Increasingly, the accepted moral thing to do is to make yourself as successful as possible. This includes lying, cheating, and stealing if necessary (and if you can get away with it). More corrosively, it means losing any view of broader social, societal, organizational, or professional responsibility and obligation. This undermines collaboration and the free exchange of ideas, which ultimately destroys innovation and discovery.
Accountability has been instituted in a way that allows people to ethically ignore the broader context in favor of a narrow focus. They are told that doing this is the “right” thing to do and that they should otherwise mind their own business. This attitude extends to society as a whole, and we are all poorer for it. We keep ideas to ourselves and to the narrowly defined parochial interests of those who pay us. Instead we should operate as engaged and collaborative stewards of our society, organizations, and professions. We have adopted a system that encourages the worst in people rather than the best. We should absolutely expect problems to be caused by this culture of selfishness. The symptoms are everywhere and threaten our society in a myriad of ways. The only portion of society that benefits from our present culture is the rich and powerful overlords. These systems maintain and expand their ability to keep their corrupt and poisonous stranglehold on everyone else.
A man who has never gone to school may steal a freight car; but if he has a university education, he may steal the whole railroad.
― Theodore Roosevelt
It is not enough merely to demonstrate rote knowledge; one needs to understand the principles underlying the knowledge. One way to demonstrate mastery over knowledge is to utilize the current knowledge and then extend it to something new. This gets to the core of our current problem in science: we are not being asked to extend knowledge, we are asked to curate it. As a result we are losing the ownership that denotes mastery.
…software libraries, and the mapping of all of these to modern computing architectures. Because of the demise of Moore’s law we are exploring a myriad of extremely exotic computing approaches. These exotic computer architectures are causing implementations to become similarly exotic. In a sense my concern is that the difficulty of simply using these computers sucks “all the oxygen” from the system and leaves precious little resource behind for any other creative endeavor, or for risk taking. As a result no real progress is being made in any of the activities in modeling and simulation beyond mere implementation.
Do we need improved models? Or improved methods? Or improved algorithms? The answers to these questions are not uniform by any means. There are times when the greatest need in modeling and simulation is the capacity of the computing hardware. At other times the models, methods, or algorithms are the limiting factors. The question we should answer is: what is the limiting factor today? It is not computing hardware. While we can always use more computing power, it is not what limits us today. I believe we are far more limited by our models of reality and by the manner in which we create, analyze, and assess those models. Despite this lack of need for improved hardware, computing hardware is the focus of our efforts.
In time those people go away, and the logic and rationale for the code’s form and function begin to fade. We often find that certain things in the code can never be changed lest the code become non-functional. We are left with something that looks and feels like magic. It works, and we don’t know why or understand how, but it does.
The space of knowledge is taken off the table and relegated to being purely curated. Full demonstration plays the important role of feeding reality back into the work being done. Reality is very good at injecting humility into the system when it is most needed. When knowledge is merely curated we rapidly lose important and essential aspects of stewardship. We have immense issues associated with the long-term responsibility of caring for a stockpile. New issues arise that are beyond the set of conditions the systems were originally designed for. All of this needs a fertile intellectual environment to be properly stewarded. We are not doing this today. Instead the intellectual environment is being steadily eroded in favor of curating knowledge. In computing, the creation of legacy codes is a key symptom of this environment.
…usually undocumented. The more obvious and logical approaches fail, and their failure is usually unexplained. Hence the production code works on the basis of tricks of the trade that are often history-dependent and rarely explained, yet utterly essential.
…for it is the standard way things unfold. The reason is that creating a new production code is risky; most of the time the effort fails. Creating a code requires a good environment that nurtures the effort. If the environment is not biased toward replacing older codes with new ones (i.e., toward progress and improving technology), the inertia of the status quo will almost invariably win. This inertia is based on the very human tendency to equate correctness with what you are already doing. The current answer carries far greater propriety than the new answer. In many cases the results of existing codes provide the strongest and clearest mental image of what a phenomenon looks like to people using modeling and simulation, especially in fields where experimental visuals do not exist.
The path toward better performance in modeling and simulation has focused to an unhealthy degree on hardware for the past quarter century. This focus has been driven to a very large degree by reliance on Moore’s law. It is a rather pathetic risk-avoidance strategy. Moore’s law is not really a law, but rather an empirical observation that computing power (or other equivalent measures) doubles roughly every 18 months. The observation has held since 1965, although its demise is now rapidly upon us. The reality is that Moore’s law has held far longer than anyone could have expected, and its demise is probably overdue.
We can be almost certain that Moore’s law will be completely and unequivocally dead by 2020. For most of us its death has already been a fact of life for nearly a decade. Its death during the last decade was actually a good thing and benefited the computing industry: vendors stopped trying to sell us new computers every year and instead unleashed the immense power of mobile computing and unparalleled connectivity. Could its demise actually be a good thing for scientific computing? Could it unleash innovation and positive change that we are currently denying ourselves?
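As a back-of-the-envelope check on what an 18-month doubling implies, here is a small sketch. The doubling period and the 25-year span come from the figures quoted above; the function itself is my own illustration, not anything from the programs discussed.

```python
def moores_law_factor(years, doubling_months=18):
    """Idealized Moore's-law growth factor: a doubling every
    `doubling_months` compounds to 2**(12*years / doubling_months)."""
    return 2 ** (years * 12 / doubling_months)

# Over the quarter century discussed here, the idealized gain is roughly
# a factor of 100,000 -- the scale of 'free' improvement that hardware
# delivered, and thus the scale of what its end takes off the table.
print(round(moores_law_factor(25)))
```

The point of the arithmetic is how quickly the compounding dwarfs any one-time engineering gain: each additional decade multiplies the total by about a hundredfold.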
Each of these questions can be answered deeply in the affirmative, but doing so requires a rather complete and well-structured alteration of our current path. The opportunity relies on recognizing that activities in modeling and simulation that have been under-emphasized for decades provide even greater benefits than advances in hardware. During the quarter century of reliance on hardware for advancing modeling and simulation, we have failed to reap the benefits of these other activities: modeling, solution methods, and algorithms. Each entails far higher risk than relying on hardware, but each also produces far greater benefits when breakthroughs are made. I’m a believer in humanity’s capacity for creation and in the inevitability of progress if we remove the artificial barriers to creation we have placed on ourselves.
If we look at the opportunities and performance lost through our failure to invest in these areas, we can easily see how much has been sacrificed. In a nutshell, we have (in all probability) lost as much performance as Moore’s law could have given us, and likely more. If we acknowledge that Moore’s law’s gains are not actually seen in real applications, we have lost even more. Our lack of taste for failure and unpredictable research outcomes is costing us a huge amount of capability. More troublingly, the outcomes from research in each of these areas could enable things that are completely different in character from the legacy applications. There are wonderful things we can’t do today for lack of courage and vision. Instead, the hardware path we are on almost assures that the applications evolve only in incremental, non-revolutionary ways.
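The structure of the loss is worth making explicit: gains from hardware, algorithms and methods, and models multiply rather than add, so neglecting any one source forfeits a multiplier. The numbers below are purely hypothetical placeholders of my own choosing, not measured data; only the compounding is the point.

```python
# Hypothetical illustration (invented factors, not measurements):
# independent sources of simulation speedup compound multiplicatively.
hardware_gain = 1.0e5    # assumed idealized hardware gain over ~25 years
algorithm_gain = 1.0e3   # assumed gain from better methods/algorithms
model_gain = 1.0e1       # assumed gain from improved physical models

combined = hardware_gain * algorithm_gain * model_gain

# Chasing hardware alone forfeits the other multipliers entirely:
forfeited_multiplier = combined / hardware_gain
print(f"combined: {combined:.0e}, forfeited: {forfeited_multiplier:.0e}")
```

Under these assumed numbers the hardware-only path leaves a ten-thousand-fold multiplier on the table, which is the sense in which we have lost "as much as Moore’s law could have given us, and likely more."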
Earlier this week I gave a talk on modernizing codes to a rather large audience at work. The abstract for the talk was based on the very first draft of my Christmas blog post. It was pointed and fiery enough to almost guarantee a great audience. I can only hope that the talk didn’t disappoint. A valid critique of the talk was my general lack of solutions to the problems I articulated. I countered that the solutions are dramatically more controversial than the statement of the problems. Nonetheless the critique is valid, and I will attempt to provide the start of a response here.
ASC is a prime example of failing to label and learn from failures. As a result we make the same mistakes over and over again. We are currently doing this in ASC in the march toward exascale, and the national exascale initiative is doing the same thing. This tendency to relabel failures as successes was the topic of my recent “bullshit” post. We need failure to be seen as such so that we can do better things instead of repeating our mistakes. Today the mistakes simply go unacknowledged and become the foundation of a lie. Such lies then become the truth, and we lose all contact with reality. Loss of contact with reality is the hallmark of today’s programs.
One of the serious problems with the science programs is their capacity to masquerade as applied programs. For example, ASC is sold as an applied program doing stockpile stewardship. It is not; it is a computer hardware program. Ditto for the exascale initiative, which is just a computing hardware program too. The science and stockpile stewardship missions are mere afterthoughts. The hardware focus persists regardless of any actual need for the hardware. Other activities that do not resonate with the hardware focus simply get shortchanged, even when they have the greatest leverage in the real world.
The beginning of the year is a prime time for such a discussion. Too often the question of importance is simply ignored in favor of simple and thoughtless subservience to others’ judgment. If I listen to my training at work, the guidance is simple: do what you’re paid to do as directed by your customer. This is an ethos of obedience in which your own judgment and prioritization are not really a guide. This is a rather depressing state of affairs for someone trained to do independent research: let someone else decide for you what is important, what is a priority. This seems to be what the government wants to do to the Labs: destroy them as independent entities and replace them with an obedient workforce doing whatever it is directed to do.
An important but depressing observation about my work currently is that I do what I am supposed to be doing, but it isn’t what is important to be doing. Instead of regularly exercising some degree of autonomy and judgment in my choice of daily activities, I do what I’m supposed to do. Part of the current milieu at work is the concept of accountability to customers. If a customer pays you to do something, you’re supposed to do it, even if the thing you’re being paid for is a complete waste of time. The truth is that most of what we are tasked to do at the Labs these days is wasteful and nigh on useless. It’s the rule of the day, so we just chug along doing our useless work, collecting a regular paycheck, and following the rules.
The real world is important. Things in the real world are important. This is an important thing to keep in mind at all times with modeling and simulation. We are supposed to be modeling the real world for the purpose of solving real-world problems. Too often in the programs I work on, this seemingly obvious maxim gets lost. Sometimes it is completely absent from the modeling and simulation narrative. Its absence is palpable in today’s efforts in high performance computing. All the energy is going into producing the “fastest” computers. The United States must have the fastest computer in the world, and if it doesn’t, it is a calamity. That this fastest computer will allow us to simulate reality better is treated as a foregone conclusion.
This is a rather faulty assumption. Not just a little bit faulty, but deeply and completely flawed. It holds only under a set of conditions that are increasingly under threat. If the model of reality is flawed, no computer, no matter how fast, can rescue it. A whole host of other activities can have an equal or greater impact on the effectiveness of modeling than a faster computer. Moreover, the title of fastest computer has less and less to do with having the fastest simulation. The benchmark that crowns the fastest computer is becoming ever less relevant to measuring simulation speed. In summary, efforts geared toward the fastest computer are not very important. Nonetheless, they are the priority for my customer.
The reason for the lack of progress is simple: high performance computing is still acting as if it were in the mainframe era. We still have the same sort of painful IT departments that typified that era. High performance computing is more Mad Men than Star Trek. The control of computing resources, the policy-based use, and the culture of specialization all contribute to this community-wide failing. We still rely upon centralized, massive computing resources as the principal delivery mechanism. Instead we should focus energy on getting computing for modeling and simulation to run seamlessly from the mobile computer all the way to the supercomputer, without all the barriers we self-impose. We are doing far too little to simply put it at our collective fingertips. Until we do, high performance computing will continue to be a niche activity and not fulfill its potential.
It goes without saying that we want to have modern things. A modern car is generally better functionally than its predecessors; classic cars primarily provide the benefit of nostalgia rather than performance, safety, or functionality. Modern things are even more favored in computing. We see computers, cell phones, and tablets replaced on an approximately annual basis with hardware of far greater capability. Software (or apps) gets replaced even more frequently. Research programs are supposed to be the epitome of modernity and pave the road to the future. In high-end computing, no program has applied more resources (i.e., lots of money!! $$) to scientific computing than the DoE’s Advanced Simulation & Computing (ASC) program and its original incarnation, ASCI. This program is part of a broader set of science campaigns to support the USA’s nuclear weapons stockpile in the absence of full-scale testing. It is referred to as “science-based” stockpile stewardship, and it is generally a commendable idea. It’s been going on for nearly 25 years now, and perhaps the time is ripe (over-ripe?) for assessing our progress.
My judgment is that ASC has succeeded in replacing the old generation of legacy codes with a new generation of legacy codes. This is now marketed to the unwitting masses as “preserving the code base.” This is a terrible reason to spend a lot of money, and it fails to recognize the real role of a code, which is to encode the expertise and knowledge of scientists into a working recipe. Legacy codes make this an intellectually empty exercise, rendering the intellect of current scientists subservient to the past. The codes of today have the same intellectual core as the codes of a quarter century ago. The lack of progress in developing new ideas into working code is palpable and hangs heavy around the entire modeling and simulation program like a noose.
A modern version of a legacy code is not modernization; it is surrender. We have surrendered to fear and risk aversion. We have surrendered to the belief that we already know enough. We have surrendered to the belief that today’s scientists aren’t good enough to create something better than what already exists. As I will outline, this modernization is more an attempt to avoid any risky or innovative work. It places all of the innovation in an inevitable change of computing platforms. The complexity of these new platforms makes programming so difficult that it swallows every bit of effort that could be going into more useful endeavors.
Is a code modern if it executes on the newest computing platforms? Is a code modern if it is implemented using a new computer language? Is a code modern if it utilizes new software libraries in its construction and execution? Is a code modern if it has embedded uncertainty quantification? Is a code modern if it does not solve today’s problems? Is a code modern if it uses methods developed decades ago? Is a code modern if it runs on my iPhone?
The conventional wisdom would have us believe that we are presently modernizing our codes in preparation for the next generation of supercomputers. This is certainly a positive take on the current efforts in code development, but it is not a terribly accurate characterization. The modernization program is largely limited to the aspects of a code that have the least impact on its results, and it avoids modernizing the aspects most responsible for a code’s utility. Understanding this rather bold statement requires a detailed explanation.
So this is where we are: stuck in the past, trapped by our own cowardice and lack of imagination. Instead of simply creating modern codes, we should be creating the codes of the future, applications for tomorrow. We should be trailblazers, but this requires risk and taking bold chances. Our current system cannot tolerate risk because it entails the distinct chance of failure or unintended consequences. If we had a functioning research program, there would be a distinct chance of creating something unintended and unplanned. It would be disruptive in a wonderful way, but it would require the sort of courage that is in woefully short supply today. Instead we want certain outcomes and control, which means that our chance of discovering anything unintended disappears from the realm of the possible.
The core of the issue is the difficulty of using the next generation of computers. These machines are monstrous in character. They raise parallelism to a level that makes implementing codes incredibly difficult. We are already in a massive deficit in terms of performance: for the last 25 years we have steadily lost ground in accessing the potential performance of our computers. Our lack of evolution in algorithms and methods plays a clear role here. By choosing the legacy code path, we are locked into methods and algorithms that are suboptimal in performance, accuracy, and utility on modern and future computing architectures. The technical debt is mounting, magnified by acute technical inflation.