
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: January 2016

The utility, joy and pain of unplugging – thinking deeply

29 Friday Jan 2016

Posted by Bill Rider in Uncategorized


The essence of the independent mind lies not in what it thinks, but in how it thinks.

― Christopher Hitchens

Even with a day off, work last week really, completely sucked. I got very little time for my daily habit of focused writing. Every day at work was a pain, and I ended the week hosting a group of visitors who are responsible for part of the new exascale computing initiative. Among the visitors were a few people with whom I have history, both good and bad. If you’ve read this blog you know that I’m not a fan of the exascale computing initiative. Despite this, I was expected to be on my best behavior (and I think that I was). It was neither the time nor the place to debate the program’s goals or wisdom (to be honest, I’m not sure what the right time is). It’s pretty clear to me that there hasn’t been much debate or thought put into the whole thing. That’s a discussion for a different day.

Thinking is the hardest work there is, which is probably the reason so few engage in it.

― Henry Ford

Nonetheless some good came from the experience, aside from demonstrating my own self-control. I won’t say much about the visit except that the exascale initiative is not terribly compelling as programs go, and I thought it went well from our official perspective. I have a better idea of how they are viewing the program and its objectives and priorities. We had a chance to talk about how we are approaching a similarly structured program. No one is thinking about the missing elements of the approach in a constructive way, and lots of old mistakes are being made all over again. People show a remarkable lack of historical perspective and a remarkable ability to engage in revisionist history. The refrains of my “bullshit” post on the lack of honesty in how we view success rang in my ears.

Stop thinking, and end your problems.

― Lao Tzu

I also noted the distinct air of control from the visitors and their discussion of the colleagues who run our programs. The programs want to give us very little breathing room to exercise our own judgment on priorities. They want to define and narrow our focus to their priorities. Given the lack of technical prowess among those running things, this is dangerous. Awful programs like exascale are the direct result of this sort of control and the lack of intellectual thought running research. Everything is politically engineered, and little is designed to maximize science. The result is the long-term malaise in research, progress and science we are suffering from. Ultimately, the system we are laboring under will result in less growth and prosperity for us all. It is the inevitable result of basing our decisions on fear and risk avoidance instead of hope, faith and boldness.

The conventional view serves to protect us from the painful job of thinking.

― John Kenneth Galbraith

Because our program is all about stockpile stewardship, the meeting was held in a classified setting. This means no electronics and a requirement that I unplug. It might have been a good excuse to get some reading done, but I had to look like I was paying attention all day. So I took copious notes. Not much interesting happened, so most of the notes were to self, capturing my thoughts, reflections and perspectives. This alone made the entire experience valuable from a personal-professional perspective. I managed to digest a lot of the backlog of thinking that the well-connected world constantly distracts you from. I had some well-structured time with my own thoughts, and that’s a really good thing.

The only freedom you truly have is in your mind, so use it.

― M.T. Dismuke

Getting away from the electronic world of web pages, text messages and email for a while is a blessing. I could approach my thoughts with a literal clean sheet. I started by reflecting on all the good ideas I’ve had recently but haven’t gotten the time to work on. It was a lot, which has a depressing aspect. There is so much I could potentially work on that I can’t. It’s worse that I don’t exactly see the value in what I am working on. It’s a bit of a personal tragedy, and I suspect it’s one that plays out across the world of research. We have less and less time to work on the things we judge to be important.

Aside from the deeper thoughts, I also realized that it pays to think in many different ways, even from a mechanical point of view. I try to walk each day, combining a walking meditation with free association. It ends up being a very effective way to self-brainstorm. I keep a notebook for each day in a cloud app. There is this blog, which allows for freeform prose, but in electronic form. Writing things down on paper has subsided a lot, and last week I rediscovered the virtue of that medium, albeit by the nature of the circumstances. For a long time I kept a pad of lined post-it notes in my car, since a lot of good ideas would come to me driving to and from work. It might be good to force myself to use paper alone more often. With the power of cameras and remarkable text recognition, the paper can go directly into my electronic notebook anyway.

The important thing to me is to capture the ideas that move from the background of my thinking to the foreground. Some of these thoughts are half-baked, but others are really genius. The human mind is a remarkable thing, especially when it’s subjected to lots of disparate input. The day away from electronics was good for rebooting how I approach free thinking when it’s available. I’d like to think it’s what I’m paid for, but honestly that isn’t really likely to be the truth. Everything about how I’m paid discourages thinking about the deeper meaning. We are encouraged to simply putter along doing as we’re told. The mantra of today is: quit thinking and get back to work.

Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.

― John Steinbeck

Now we get to the darker aspects of free association: you start to turn your gaze toward the shit show unfolding before you. Life today is full of things that should be regarded with contempt. Our overlords encourage us to ignore the carnage they are subjecting the world to, but it is there, hidden in plain sight. Today we live in a coarse and belligerent culture that threatens to undermine everything good. I’m not talking about the sort of moral decay social conservatives would point to. I’m talking about the fundamental rewards, checks and balances that encourage an environment of selfish and greedy behavior. At the same time these same forces work to undermine every effort to pay attention to the larger societal, organizational and social imperatives that collectively make everything better. We act selfishly in the service of maintaining the power of others, avoiding the sort of collective service that raises everyone.

So I was offered a front-row seat at a primo shit show, and here is what it made me think.


Our research is now run on the basis of money as a scoring system, with no concrete societal objectives in sight. In the 20th century many great things were accomplished, and the technology that dominates our economy was invented through scientific discovery. A great deal of that discovery was directly associated with fear, first of the Germans, then the Nazis, then the Soviets. The atomic bomb, the hydrogen bomb, jet aircraft, microprocessors, cell phones, GPS, and almost everything in our modern world owe their discovery to this response to fear of existential threats. These were real adversaries with well-developed technology, engineering and science, requiring a serious response from our nation-state to the threat they represented. Today, we treat a bunch of disorganized barbarians as an existential threat. It is completely pathetic. We really don’t have to have our collective act together to compete. It’s all fear and no benefit of accomplishing great things, and we aren’t. We just have the requisite reduction in freedom in response to this fear, without any of the virtues. This dismal state of affairs results in a virtual emptying of meaning from work that used to be important. I work at a place where work ought to have value and importance, yet we’ve managed to ruin it.

Power attracts the corruptible. Suspect any who seek it.

― Frank Herbert

It is utterly stunning that working for an organization committed to national security does not provide me with any sense that my work is important. I don’t have enough latitude and capability to exercise my judgment to feel truly empowered at work. All the control and accountability at work primarily disempowers employees and sucks the meaning from work. I ought to feel an immense sense of importance in what I do. My management, writ large, is managing to destroy something that ought to be completely easy to achieve. This malaise is something we see nationally, as the general sense that your work has little larger meaning is used to crush people’s wills. Instead of empowering people and eliciting their best efforts, we see control used to minimize contributions and destroy any attempt at deeper meaning. This sense is deeply reflected in the current political situation in the world and the broad, sweeping anger seen in the populace.

The love affair with corporate governance for science is another aspect of the current milieu that is deeply corrupting science. Our corporate culture is corrupting society as a whole, and science is no exception. The greed and bottom-line infatuation perverts and distorts value systems and has systematically harshened the cultures of everything it touches. Increasingly, the accepted moral thing to do is make yourself as successful as possible. This includes lying, cheating and stealing if necessary (and if you can get away with it). More corrosively, it means losing any view of broader social, societal, organizational or professional responsibility and obligation. This undermines collaboration and the free exchange of ideas, which ultimately destroys innovation and discovery.

Accountability has been instituted in a way that allows people to ethically ignore the broader context in favor of a narrow focus. They are told that doing this is the “right” thing to do, and that they should otherwise mind their own business. This attitude extends to society as a whole, and we are all poorer for it. We keep ideas to ourselves and to the narrowly defined parochial interests of those who pay us. Instead we should operate as engaged and collaborative stewards of our society, organizations and professions. We have adopted a system that encourages the worst in people rather than the best. We should absolutely expect problems to be caused by this culture of selfishness. The symptoms are everywhere and threaten our society in myriad ways. The only portion of society that benefits from our present culture is the rich and powerful overlords. These systems maintain and expand their ability to keep their corrupt and poisonous stranglehold on everyone else.

A man who has never gone to school may steal a freight car; but if he has a university education, he may steal the whole railroad.

― Theodore Roosevelt

Intellectual Ownership is Essential to All Stewardship

22 Friday Jan 2016

Posted by Bill Rider in Uncategorized


What I cannot create, I do not understand.

― Richard Feynman

The Richard Feynman quote “What I cannot create, I do not understand” appeared on his chalkboard at his death. I realized that I have generally run my professional life by this principle. It isn’t enough merely to demonstrate rote knowledge; one needs to understand the principles underlying the knowledge. One way to demonstrate mastery over knowledge is to utilize the current knowledge and then extend it to something new. This gets to the core of our current problem in science: we are not being asked to extend knowledge, we are asked to curate it. As a result we are losing the ownership that denotes mastery.

A core example of the issue is the capability to completely understand the material we are responsible for. If we are implementing things in a computer code, can we rederive all the expressions in the code? Do we understand the assumptions, conditions and caveats attached to those expressions? Can we extend or modify the expressions as the situation calls for it? Today, in very many cases, the answers to these questions are no. This is true for the models we use, the methods used for solution, and the algorithms we depend upon. This broad-based lack of intellectual ownership is a direct threat to our ability to ably provide stewardship of the missions dependent upon these products.
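As a toy illustration of what rederivation looks like in practice (my own sketch, invented for this post and not drawn from any production code), one can check that a coded update formula matches its derivation from first principles. Here `coded_update` stands in for an expression as it might appear in a code, and `derived_update` rebuilds it from the underlying model, a first-order upwind step for linear advection:

```python
import random

def coded_update(u, um, c, dt, dx):
    # The expression as it appears in a (hypothetical) code.
    return u - c * dt / dx * (u - um)

def derived_update(u, um, c, dt, dx):
    # Rederived from first principles: u_t + c u_x = 0,
    # forward Euler in time, one-sided (upwind) difference in space.
    dudx = (u - um) / dx
    return u + dt * (-c * dudx)

# Owning the expression means being able to show the two forms agree.
for _ in range(1000):
    args = [random.uniform(0.1, 2.0) for _ in range(5)]
    assert abs(coded_update(*args) - derived_update(*args)) < 1e-12
```

When the check fails, or when no one can write `derived_update` at all, that is exactly the loss of ownership the paragraph describes.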

I learned very early the difference between knowing the name of something and knowing something.

― Richard Feynman

I want to be very clear about what I’m commenting on: we have substantial intellectual ownership of the code we write, but not necessarily of what that code does. We own the implementation, the software libraries, and the mapping of all of these to modern computing architectures. What we often do not have ownership of is the content of that code: the models, the methods solving those models, and the algorithms the methods are based upon. Because of the demise of Moore’s law we are exploring a myriad of extremely exotic computing approaches. These exotic computer architectures are causing implementations to become similarly exotic. In a sense my concern is that the difficulty of simply using these computers has the effect of sucking “all the oxygen” from the system and leaves precious little resource behind for any other creative endeavor, or for risk taking. As a result no real progress is being made in any of the activities in modeling and simulation beyond mere implementations.

A large part of my argument hinges upon the intellectual core of the value proposition associated with modeling and simulation. The question is whether progress in modeling and simulation benefits most from greater computing power, improved models, improved methods, or improved algorithms. The answers are not uniform by any means. There are times when the greatest need in modeling and simulation is the capacity of the computing hardware. At other times the models, methods or algorithms are the limiting factors. The question we should answer is: what is the limiting factor today? It is not computing hardware. While we can always use more computing power, it is not limiting us today. I believe we are far more limited by our models of reality, and by the manner in which we create, analyze and assess those models. Despite this lack of need for improved hardware, computing hardware is the focus of our efforts.
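A back-of-envelope comparison (my own numbers, purely illustrative) shows why the answer to this question matters. Over a quarter century, Moore’s law promises roughly a hundred-thousand-fold speedup, while a single algorithmic advance, say replacing an O(N²) method with an O(N log N) one at a realistic problem size, delivers a comparable or larger factor on its own:

```python
import math

# Hardware gain from Moore's law over 25 years, at a doubling
# every 18 months.
hardware_gain = 2 ** (25 * 12 / 18)

# Algorithmic gain from replacing an O(N^2) solver with an
# O(N log N) one (e.g., an FFT-based fast method) at an assumed
# problem size of N = 1e8 unknowns.
N = 1e8
algorithm_gain = N * N / (N * math.log2(N))

print(f"hardware:  {hardware_gain:.1e}")   # ~1.0e+05
print(f"algorithm: {algorithm_gain:.1e}")  # ~3.8e+06
```

The specific problem size is an assumption, but the conclusion is robust: a one-time algorithmic breakthrough can outweigh decades of hardware improvement, which is the sense in which models, methods and algorithms can be the limiting factor.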

The program I work under is called stockpile stewardship, yet we act far more like stockpile curators. The general state of affairs is rapidly evolving to a point where no one really has intellectual ownership of the content of their work. There is frightfully little freedom in defining a path toward greater understanding. The inertia of “this is the way things are done around here” is so strong that it derails a great deal of progress. Intellectual ownership and progress are intimately related. True ownership of knowledge and the capacity to produce progress are often, if not almost always, one and the same.

In computational modeling we are largely in the business of stewarding legacy codes full of knowledge being curated. The choice of legacy codes is predicated on their ability to simulate issues of interest from an application point of view. It is a safe way to proceed in the short term, while immensely dangerous in the long term. These codes are immensely complicated and full of wide swaths of both explicit and implicit knowledge. In many ways the models, methods and algorithms are most completely documented in the code itself, and other forms of documentation are woefully incomplete. Much of the knowledge of the decisions made in defining the code is contained only in the heads of the people who originally wrote it. Over time those people go away, and the logic and rationale for the code’s form and function begin to fade. We often find that certain things in the code can never be changed lest the code become non-functional. We are left with something that looks and feels like magic. It works, and we don’t know why or understand how, but it does.

As I stated above, I run my professional life by recreating existing things. This makes dead certain that I understand something, and the only way to validate this understanding is to move past the strictures and barriers of existing understanding. To truly own knowledge is equivalent to expanding that knowledge. You confirm your ownership of knowledge through the process of creation. This creative process is part of the essence of research. Thus, by the very existence of legacy codes, we confirm the lack of research in the areas of knowledge vital to stewarding our stockpile.

I think nature’s imagination is so much greater than man’s, she’s never going to let us relax.

― Richard Feynman

With the stewardship program we are in a very precarious position. We are not allowed to design new weapons. We are not allowed to fully test our ideas either. An important part of the space of knowledge is taken off the table and relegated to being purely curated. Full demonstration plays the role of providing an essential feedback of reality to the work being done. Reality is very good at injecting humility into the system when it is most needed. When knowledge is merely curated we rapidly lose important and essential aspects of stewardship. We have immense issues associated with the long-term responsibility of caring for a stockpile. New issues arise that are beyond the set of conditions the systems were originally designed for. All of this needs a fertile intellectual environment to be properly stewarded. We are not providing one today. Instead the intellectual environment is being steadily eroded in favor of curating knowledge. In computing, the creation of legacy codes is a key symptom of this environment.

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Taking the raw intellectual material into a production code requires many things to be simultaneously integrated (by production code I mean the codes we use to do serious application modeling for applied programs, like stockpile stewardship). This integration of intellectual material includes the fundamental models, the closure of those models, the methods of solving those models, and the algorithms utilized by those methods. Furthermore there is a specific implementation in computer code that is suitable for, or better yet optimized for, the computing hardware.

A number of tricks, or practical accommodations, are made in getting the models, methods and algorithms to work effectively for solving “real” problems with all their dirty and realistic aspects. Most of these tricks are not something people are proud of, or can even explain in a coherent way. In many cases the trick simply works, and often a number of other tricks were tried first and didn’t produce the desired outcome. The trick itself is rarely documented as such, and the tricks that didn’t work are usually undocumented. Many of the discarded alternatives are far more obvious and logical, and their failure is usually unexplained. Hence the production code works on the basis of tricks of the trade that are history dependent, rarely explained, yet utterly essential.
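To make the idea concrete, here is a caricature of such a trick, invented for this post rather than taken from any real code: a magic floor constant guarding a division, the kind of accommodation that is essential, undocumented, and indistinguishable from the deliberate parts of the method:

```python
# A caricature (invented here, not from any real code) of an
# undocumented production-code "trick": a floor applied to density
# before a divide, with a magic constant nobody remembers choosing.
DENSITY_FLOOR = 1.0e-10   # why this value? lost to history

def specific_energy(total_energy, density):
    # The floor prevents a division blow-up on evacuated zones;
    # remove it and some "real" problems stop running. That is the trick.
    return total_energy / max(density, DENSITY_FLOOR)

print(specific_energy(1.0, 0.0))  # finite, instead of a ZeroDivisionError
```

Nothing in the code marks `DENSITY_FLOOR` as a trick rather than physics, which is precisely why such accommodations survive their authors unchallenged.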

It is at this point that the real horror show of legacy codes unfolds. The production code becomes a key asset in executing an applied program, and the tricks necessary to make the code work are encoded into all of its results. The author of the tricks gets older and ultimately retires, dies, or gets a better job. When this happens the tricks become part of legend or lore, and their capacity to make the code work achieves a magical status. The code’s results depend on all of the tricks, and custody of the code passes to new people. These new people can’t change the tricks because they don’t understand them. In very many cases the tricks don’t look any different from the rest of the code and are indistinguishable from the parts of the methodology that are coherent and logical. It is all one big mess of coherent and incoherent ideas that simply gets stewarded, much in the manner that monks steward old religious texts.

This is a generically awful situation that happens over and over again. In the programs I work for it is the standard way things unfold. The reason this happens is that creating a new production code is a risky thing; most of the time the effort fails. The creation of a code requires a good environment that nurtures the effort. If the environment is not biased toward replacing older codes with new codes (i.e., progress and improving technology), the inertia of the status quo will almost invariably win. This inertia is based on the very human tendency to base correctness on what you are already doing. The current answer has far greater propriety than the new answer. In many cases the results of existing codes provide the strongest and clearest mental image of what a phenomenon looks like to people utilizing modeling and simulation, especially in fields where experimental visuals do not exist.

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

― Richard Feynman

Why should things be easy to understand?

― Thomas Pynchon

 

Could the demise of Moore’s Law be a blessing in disguise?

15 Friday Jan 2016

Posted by Bill Rider in Uncategorized


The feeling is less like an ending than just another starting point.

― Chuck Palahniuk

This is another reply to the critique of my post on our code modernization program, which is really creating a whole new generation of legacy codes. Here I propose a solution: accepting the inevitable.

The path toward better performance in modeling and simulation has focused to an unhealthy degree on hardware for the past quarter century. This focus has been driven to a very large degree by reliance on Moore’s law, a rather pathetic risk-avoidance strategy. Moore’s law is not really a law, but rather an empirical observation that computing power (or other equivalent measures) doubles roughly every 18 months. This observation has held since 1965, although its demise is now rapidly upon us. The reality is that Moore’s law has held far longer than anyone could have expected, and its demise is probably overdue.
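The arithmetic of that observation is simple enough to state directly (using the 18-month doubling figure quoted above; other statements of the law use 24 months):

```python
def moore_factor(years, doubling_months=18):
    """Cumulative performance factor after `years` of doubling
    every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

print(round(moore_factor(3)))              # 3 years = 2 doublings = 4x
print(f"{moore_factor(2016 - 1965):.1e}")  # 1965-2016: ~1.7e+10
```

A ten-billion-fold compounded gain over five decades is what the whole hardware-first strategy has been banking on, and what now disappears.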

For microprocessors, Moore’s law died around 2007, and now lives on only via an increasing reliance upon parallelism (i.e., lots of processors). Getting performance out of such massive parallelism is enormously difficult and practically unachievable for an increasingly large span of methods, procedures and algorithms. Our failure to get the advertised performance out of computers has been a large and growing problem, systematically ignored for the same quarter century. It is papered over by measuring performance with a benchmark that bears virtually no resemblance to any useful application and is basically immaterial to real progress.
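The difficulty is not just engineering. Amdahl’s law (the standard result, not anything specific to this post) puts a hard ceiling on what parallelism can deliver once any serial fraction remains:

```python
# Amdahl's law: with serial fraction s, the speedup on P processors
# is 1 / (s + (1 - s) / P), bounded above by 1/s no matter how large
# P grows.
def amdahl_speedup(serial_fraction, procs):
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / procs)

# Even a 1% serial fraction caps a million-way machine near 100x.
print(round(amdahl_speedup(0.01, 1_000_000)))  # 100
```

This is the simplest reason why getting advertised performance out of massive parallelism is "practically unachievable" for so many methods: their serial or poorly scaling fractions bound the gains long before the processor count does.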

We can be almost certain that Moore’s law will be completely and unequivocally dead by 2020. For most of us its death has already been a fact of life for nearly a decade. Its death during the last decade was actually a good thing, and benefited the computing industry: vendors stopped trying to sell us new computers every year and unleashed the immense power of mobile computing and unparalleled connectivity. Could it actually be a good thing for scientific computing? Could its demise unleash innovation and positive change that we are denying ourselves?

Yes!

What if the death of Moore’s law is actually an opportunity and not a problem? What if accepting the death of Moore’s law is a virtue that the high performance computing community is denying itself? What might be gained through embracing this reality?

Each of these questions can be answered in a deeply affirmative way, but doing so requires a rather complete and well-structured alteration of our current path. The opportunity relies upon recognizing that activities in modeling and simulation that have been under-emphasized for decades provide even greater benefits than advances in hardware. During the quarter century of reliance on hardware for advancing modeling and simulation, we have failed to reap the benefits of these other activities. The neglected activities are modeling, solution methods and algorithms. Each entails far higher risk than relying upon hardware, but also produces far greater benefits when breakthroughs are made. I’m a believer in humanity’s capacity for creation and the inevitability of progress, if we remove the artificial barriers to creation we have placed upon ourselves.

The reasons for not emphasizing these other opportunities can be chalked up to the tendency to avoid high-risk work in favor of low-risk work with seeming guarantees. Such guarantees came from Moore’s law, hence the systematic over-reliance on its returns. Advances in models, methods and algorithms tend to be extremely episodic and require many outright failures, with the breakthroughs happening in unpredictable ways. The breakthroughs also depend upon creative, innovative work, which is difficult to manage with the project management techniques so popular today. So, under the spell of Moore’s law, why take the chance of having to explain research failures when you can bet on a sure thing?

If we look at the opportunities and performance lost to our failure to invest in these areas, we can easily see how much has been sacrificed. In a nutshell, we have (in all probability) lost as much performance as Moore’s law could have given us, and likely more. If we acknowledge that Moore’s law’s gains are not actually seen in real applications, we have lost even more. Our lack of taste for failure and unpredictable research outcomes is costing us a huge amount of capability. More troublingly, the outcomes from research in each of these areas can enable things that are completely different in character from the legacy applications. There are wonderful things we can’t do today because of a lack of courage and vision. Instead, the hardware path we are on almost assures that applications evolve only in incremental, non-revolutionary ways.

If we finally accept that Moore’s law is dead, can we finally stop shooting ourselves in the foot? Can we start to support these activities, which have a proven track record and undeniable benefits? If we do not, the attempts to utilize the hardware to produce exascale computers will siphon all the energy from the system. The starvation of effort toward models, methods and algorithms will only grow. The gulf between what we might have produced and what we actually have will only grow larger and more extreme. This is an archetypal opportunity cost. Moreover, we need to admit to ourselves that for any application we really care about, the term exascale is complete bullshit. If the press release says we have an exascale computer, for an actual application of real interest we might have one-hundredth of that speed, and this might be optimistic.
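In concrete numbers (the one-percent sustained figure is this post’s own rough estimate, not a measurement of any particular machine):

```python
peak_flops = 1.0e18        # "exascale" as the press release counts it
sustained_fraction = 0.01  # assumed efficiency on a real application
useful_flops = peak_flops * sustained_fraction

print(f"{useful_flops:.0e}")  # 1e+16: ten petaflops of useful work
```

On that estimate an "exascale" machine delivers roughly ten petaflops to the application that actually matters, which is why the marketing term rings hollow.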

To make matters worse, the imbalance in research and effort is poisoning the future. The hardware path has laid waste to an entire generation of modeling-and-simulation scientists who might have conducted groundbreaking work. Instead they have been marshaled onto the foolhardy hardware path. The only reasons for choosing hardware are the beliefs that it is easier to fund and yields guaranteed (lower) returns. Our management needs to stop embracing this low bar and begin to practice effective management of the future. The depth of my worry is that we do not have the capacity to manage a creative environment in a manner that accepts the failure necessary for success. We have become completely addicted to the “easy” progress of Moore’s law, and have forgotten how to do hard work.

Perhaps the end of Moore’s law can finally provide the scientific computing community with a much-needed crisis. I believe the way out of the crisis is simple, though not easy. The path has been trod before, but we have lost the ability to walk it. We need to allow, if not encourage, risk and failure in forgotten areas of endeavor. We need to balance the work appropriately, realizing the value of each activity, and focus on the work where we have need and opportunity. A quarter century of reduced effort in models, methods and algorithms probably means that renewed effort would yield an avalanche of breakthroughs. To unleash this creative tsunami, the steady march toward exascale needs to halt, because it swallows all effort. We are creating computers so completely ill-suited to scientific computation that simply using them is catastrophic to every other aspect of computing.

Another beneficial aspect of accepting Moore’s law’s death is a change in how we compute. Just as the death of Moore’s law in processors unleashed computing into the mobile market and an era of massive innovation in the use of computing, the same can happen for scientific computing. We only need to change our perspective. Today the way we use computing is stuck in the past. The way computing is managed is stuck in the past. Scientists and engineers still solve their problems as they did 25 years ago, and the exascale focus does little to push things forward. We still exist in the mainframe era, with binding policies from IT departments choking innovation and progress.

A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.

― Winston S. Churchill

A response to criticism: Are we modernizing our codes?

14 Thursday Jan 2016

Posted by Bill Rider in Uncategorized


To avoid criticism say nothing, do nothing, be nothing.

― Aristotle

One warning: there will be some bad words in the post today. If you don’t like it, don’t read on. It is the way most of us really talk about this stuff!

Earlier this week I gave a talk on modernizing codes to a rather large audience at work. The abstract for the talk was based on the very first draft of my Christmas blog post. It was pointed and fiery enough to almost guarantee me a great audience. I can only hope the talk didn’t disappoint. A valid critique of the talk was my general lack of solutions to the problems I articulated. I countered that the solutions are dramatically more controversial than the statement of the problems. Nonetheless the critique is valid, and I will attempt to provide the start of a response here.

Here are the solutions I would propose at a high level:

  1. Constantly question ourselves on whether we are going in the right direction. Today we follow directions mindlessly and refuse to make reasonable course corrections. (This point was part of the talk! I think it is the vital starting point.)
  2. Micromanagement is killing us; we need to macromanage and focus on large objectives.
  3. Focus on improving reality, and stop favoring large-scale projects that produce little impact in the real world.
  4. Destroy the mediocrity, obedience and compliance cultures that have arisen from fear-based management and decision making.
  5. Allow failure, encourage risk, and celebrate mistakes.
  6. If something really is a failure, allow it, say it, and be all right with it. No more bullshit statements of success where reality is a train wreck.
  7. Create a learning environment where failure and mistakes create opportunities for growth.
  8. Focus on excellence rather than subservience and mediocrity.
  9. Stop using money as a measure of success.
  10. Rid ourselves of short-term thinking and short-term measures of success. Start looking toward long-term success.
  11. Rid ourselves of the vast amount of wasted effort going into menial tasks that serve no purpose (obedience and compliance related).
  12. Get rid of project management of science. This concept is utterly and completely destructive to the conduct of science.
  13. Prepare for serendipitous outcomes and the changes of direction they should provide; today we never are.
  14. Stop being ruled by fear, and stop using that fear as the reason to avoid risk and to fail to reach high for achievements.
  15. Quit allowing our system to turn failure into success. Let failure happen. Celebrate it. Learn. Move forward. Allowing failure to be rebranded as success lets failure serve the wrong purpose.

If failure is not an option, then neither is success.

― Seth Godin

ASC is a prime example of failing to label and learn from failures. As a result we make the same mistakes over and over again. We are currently doing this in ASC in the march toward exascale, and the national exascale initiative is doing the same thing. This tendency to relabel failures as successes was the topic of my recent “bullshit” post. We need failure to be seen as such so that we do better things in place of repeating our mistakes. Today the mistakes simply go unacknowledged and become the foundation of a lie. Such lies then become the truth, and we lose all contact with reality. Loss of contact with reality is the hallmark of today’s programs.

Remember the two benefits of failure. First, if you do fail, you learn what doesn’t work; and second, the failure gives you the opportunity to try a new approach.

—Roger Von Oech

One of the serious problems with the science programs is their capacity to masquerade as applied programs. For example, ASC is sold as an applied program doing stockpile stewardship. It is not. It is a computer hardware program. Ditto for the exascale initiative, which is just a computing hardware program too. Science and the stockpile stewardship missions are mere afterthoughts. The hardware focus persists regardless of any demonstrated need for the hardware. Other activities that do not resonate with the hardware focus simply get shortchanged, even when they have the greatest leverage in the real world.

[Warning foul language ahead!]

In other words, when things are fucked up, someone needs to say so, and the fuck-up needs to be acknowledged. If we don’t, we can’t learn, and we end up fucking up again. The general bullshit way of reporting success for things that are not successful creates an environment where fuck-ups are not viewed as such. Good fuck-ups should not end careers; they should be highlights, as long as you learn from them. If you don’t learn from your mistakes, then you have a problem. Today we have the bigger problem of never allowing ourselves to learn from a genuine mistake, which only plants the seeds of ever greater fuck-ups in the future.


The ubiquity of the humor in the Dilbert comics lends perspective to all of this. Dilbert is dominated by issues from the corporate world, and it seems to indicate that the issues we are having are pervasive society-wide. Corporate governance is as messed up as government-funded research. I’m almost certain that academia is in similar or worse shape. Some places, like NASA, would envy even the screwed-up state of affairs we have in DOE. I probably don’t court too much controversy by saying that money is at the root of many problems, largely because it’s become the universal way of keeping score of success.

Judge a man by his questions rather than by his answers.

― Voltaire


What is important to work on?

08 Friday Jan 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

Dare to think for yourself.

― Voltaire

The beginning of the year is a prime time for such a discussion. Too often the question of importance is simply ignored in favor of simple and thoughtless subservience to others’ judgment. If I listen to my training at work, the guidance is simple: do what you’re paid to do as directed by your customer. This is an ethos of obedience, and your own judgment and prioritization are not really a guide. This is a rather depressing state of affairs for someone trained to do independent research; let someone else decide for you what is important and what is a priority. This seems to be what the government wants to do to the Labs: destroy them as independent entities and replace them with an obedient workforce doing whatever it is directed to do.

This is one of the innovator’s dilemmas: Blindly following the maxim that good managers should keep close to their customers can sometimes be a fatal mistake.

― Clayton M. Christensen

When one takes a long, critical look at the customers’ credentials and knowledge, the choice to seek their guidance on importance and priority becomes laughable. They are usually radically less competent to make such choices than most of us. Perhaps this is just my good old-fashioned Los Alamos arrogance showing. My customers don’t actually know best! The problem with the current standard of priority setting is that the customer’s guidance is destroying research and slowly suffocating formerly great sources of innovation and excellence. Where independent and competent scientific and engineering leadership once underpinned the Nation’s security, forced compliance and obedience have bred cultures of mediocrity and a pathological avoidance of risk driven by fear of any failure.

An important, but depressing, observation about my work currently is that I do what I am supposed to be doing, but it isn’t what is important to be doing. Instead of some degree of autonomy and judgment being regularly exercised in my choice of daily activities, I do what I’m supposed to do. Part of the current milieu at work is the concept of accountability to customers: if a customer pays you to do something, you’re supposed to do it, even if the thing you’re being paid for is a complete waste of time. The truth is most of what we are tasked to do at the Labs these days is wasteful and nigh on useless. It’s the rule of the day, so we just chug along doing our useless work, collecting a regular paycheck, and following the rules.

Actually, that isn’t completely true; I try to carve out time each day to write something. It’s a priority to myself that I exercise almost without fail. I have endeavored to make it a habit. It is actually several habits in one. For example, I write a personal journal each day to capture my innermost thoughts about my life, and it’s an invaluable sounding board for myself. It is writing, and that is good in and of itself, but it is completely private. Being private, I don’t share it, or expect anyone else to read it. As such it doesn’t have to be clear or make sense to anyone but me. On the other hand, the blog is intended to be read, although that isn’t its purpose. It serves several purposes: the habit of writing, venting off steam, and working on concepts that are occupying my thoughts. All three things are very important to me, and I make them a priority each and every day.

I wish that the same concept and discipline could be applied more broadly. Sadly it isn’t, and most of us spend our time doing useless stuff. Most of what all of us are paid to work on is a complete waste of time. I end up spending most of my time communicating between groups of people, negotiating agreements and troubleshooting. I do far too little of the creative technical work that I love. Most of my writing is focused on raging against the forces of mediocrity eating away at any focus on doing something useful, important and high in quality. I recently came to the realization that the first ten years of my career were easily characterized by having my generally high expectations over-delivered upon at work, while the last fifteen years have seen my ever-lowering expectations under-delivered upon in a seemingly endless search for the bottom. I wonder just how much lower my expectations need to drop to match the reality of today.

So, what might be more important than the crap they pay us to do? There are a lot of things in my personal life that qualify as more important than anything at work. This is rather obvious, but sad as the importance of work ought to put up a better fight. The problem with important work is that it is risky and difficult. We might fail at it, and failure is something we can’t bear to do these days. Ultimately we can’t muster the will to do important things because that would make our work real, consequential and meaningful, but also risky.

What’s important?

Judge a man by his questions rather than by his answers.

― Voltaire

The real world is important. Things in the real world are important. This is an important thing to keep in mind at all times with modeling and simulation. We are supposed to be modeling the real world for the purpose of solving real-world problems. Too often in the programs I work on, this seemingly obvious maxim gets lost. Sometimes it is completely absent from the modeling and simulation narrative. Its absence is palpable in today’s efforts in high performance computing. All the energy is going into producing the “fastest” computers. The United States must have the fastest computer in the world, and if it doesn’t, it is a calamity. That this fastest computer will allow us to simulate reality better is treated as a foregone conclusion.

This is a rather faulty assumption. Not just a little bit faulty, but deeply and completely flawed. The assumption holds only under a set of conditions that are increasingly under threat. If the model of reality is flawed, no computer, no matter how fast, can rescue the model. A whole bunch of other activities can provide an equal or greater impact on the efficiency of modeling than a faster computer. Moreover, the title of fastest computer has less and less to do with having the fastest simulation. The benchmark that measures the fastest computer is becoming ever less relevant to measuring speed with simulations. So, in summary, efforts geared toward the fastest computer are not very important. Nonetheless, they are the priority for my customer.

So what do I do? Being a good little trooper, I work on the fastest-computer stuff because it pays the bills.

So what should we be doing instead?

I might suggest that we take a page from the success of the wider computing industry and figure out how to make computing more ubiquitous in the daily life of a scientist. Well, that’s already happened by fiat, as scientists are functioning (just barely, in many cases) members of society. So, let me be a bit more specific: make modeling and simulation part of the daily life of a scientist or engineer. This is something we as a community are failing at. Progress is being made by pure inertia, but that progress is dismal compared to what it could be.

The reason for the lack of progress is simple: high performance computing is still acting as if it were in the mainframe era. We still have the same sort of painful IT departments that typified that era. High performance computing is more Mad Men than Star Trek. The control of computing resources, the policy-based use and the culture of specialization all contribute to this community-wide failing. We still rely upon centralized, massive computing resources as the principal delivery mechanism. Instead, we should focus energy on getting computing for modeling and simulation to run seamlessly from the mobile computer all the way to the supercomputer without all the barriers we self-impose. We are doing far too little to simply put it at our collective fingertips and make it a simple and ubiquitous part of what gets done.

Mobile computing is the key, and mobile computing is not embraced by high performance computing. The challenge is to get mobile computing to figure into the workflow of the average scientist: to do scientific work, not check Facebook, or email, or swipe right (they’ll continue to do all that). The work should move seamlessly between the big iron and the mobile device as appropriate. Remember, if Linpack is to be believed, my iPhone is what the Cray-2 once was, and certainly capable of doing some real work. If we can create this environment, modeling and simulation would take off and truly meet its potential. Without the change, it will continue to be a niche activity and not fulfill its potential.

So what needs to change to accomplish this sort of transformation and success? First, the emphasis on big iron needs to diminish, because it takes all the energy available. The trick is to make improvements in modeling, methods and algorithms, and to enable the more correct and efficient use of computing. Part of the modeling is the practice of modeling and simulation. Today we have generally poor practices, and a deep skewing of modeling by the tendency to adopt hero calculations. These calculations are focused on using the most massive computing available rather than being fitted to the purpose and suited to assessment. The methods and algorithms can provide fundamental efficiency and scaling that will enable more to be done at the mobile level of computing.
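The leverage of methods and algorithms over raw hardware can be made concrete with a classic example (my illustration, not from the original post): a tridiagonal system can be solved by a general dense elimination at O(n³) cost, or by the Thomas algorithm at O(n). No hardware upgrade delivers that kind of gain. A minimal sketch:

```python
# Sketch of the O(n) Thomas algorithm for tridiagonal systems, a classic
# example of an algorithmic win that no hardware upgrade can match.
# (Illustrative example, not code from any production effort.)

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d, in O(n) work and storage."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminate the sub-diagonal entry
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-like system: -x[i-1] + 2*x[i] - x[i+1] = 1 at each row.
n = 5
a = [0.0] + [-1.0] * (n - 1)      # sub-diagonal (a[0] unused)
b = [2.0] * n                     # main diagonal
c = [-1.0] * (n - 1) + [0.0]      # super-diagonal (c[-1] unused, padded)
x = thomas_solve(a, b, c, [1.0] * n)
```

The same structure-exploiting idea (here, bandedness) is what better methods and algorithms buy in general: work proportional to the problem, not to a generic worst case.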

People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.

― Clayton M. Christensen

One of the modern mantras of business is the “disruptive innovation.” These innovations are sought out as the route to marketplace supremacy, creating a product that changes the business landscape and overwhelms the competition. Disruptive innovations are viewed as only being good things. I will challenge that assertion with the belief that massively parallel computing was a disruptive innovation (a common assertion), but not a positive one. It has been a negative innovation that successfully undermined the broader pipeline of innovation and discovery in high performance computing, replacing it with a slavish devotion to hardware as the route to progress.

Putting increasingly outdated codes on these computers has taken all the available energy and robbed us of key innovations in models, methods and algorithms that would have been more beneficial than all the hardware gains. In addition, these innovations would have created bona fide intellectual products that the current scientists would own, understand and benefit from mastering. The computing hardware has become increasingly ill-suited for scientific computing as well as difficult to use. The next generation of computing promises to be worse in almost every regard. The focus on the massively parallel aspects of implementations has also starved the effort to create methods that are well suited to modern CPUs. As a result we utilize a tiny fraction of the available computing power of each CPU. In sum, the current and future hardware focus has been destructive, wasteful and counter-productive. It may be laying waste to an entire generation of computational scientists.
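The "tiny fraction" claim is easy to check for yourself with a back-of-envelope utilization measurement. The sketch below is purely illustrative (the peak figure is an assumed placeholder, not a measured value for any real machine): time an unoptimized inner loop, count its floating-point operations, and compare against a nominal peak.

```python
# Back-of-envelope CPU utilization estimate (illustrative only).
# PEAK_FLOPS is an assumed placeholder for a single core's nominal peak,
# not a measured figure for any particular processor.
import time

PEAK_FLOPS = 1e9  # assumed nominal peak, for illustration

n = 1_000_000
x = [1.0] * n
y = [2.0] * n

t0 = time.perf_counter()
s = 0.0
for i in range(n):           # naive scalar loop: one multiply + one add per step
    s += x[i] * y[i]
elapsed = time.perf_counter() - t0

achieved = 2 * n / elapsed   # 2 floating-point operations per iteration
utilization = achieved / PEAK_FLOPS
print(f"{achieved / 1e6:.1f} MFLOP/s, {100 * utilization:.2f}% of assumed peak")
```

On any real processor the naive loop lands far below the nominal peak; closing that gap is exactly the methods-and-implementation work the paragraph above says has been starved.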

The high performance computing community would be well advised to choose a different path toward progress before it’s too late. That path should be holistic, prioritizing work that impacts the world beyond computing. It would be inspiring to take a page from the broader computing industry’s embrace of mobile computing as the route to ubiquitous computing that impacts the lives of those who use it in lasting ways. I will note that the death of Moore’s law for microprocessors and the ascent of mobile computing happened in the same time frame. Worth thinking about: are these events correlated? We should realize that code is simply a computer-recognizable expression of intellectual ideas. It should be understood if it is to be used for anything serious.

All of us should give some serious thought to what is important to work on. I think most of the current research emphasis in high performance computing is treating the trivial as the essential, and effectively trivializing the essential. A future where real progress and impact are made depends on turning this dynamic around.

‘Controversial,’ as we all know, is often a euphemism for ‘interesting and intelligent.’

― Kevin Smith

Are we really modernizing our codes?

01 Friday Jan 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

Real generosity towards the future lies in giving all to the present.

― Albert Camus

It goes without saying that we want to have modern things. A modern car is generally better functionally than its predecessors; classic cars primarily provide the benefit of nostalgia rather than performance, safety or functionality. Modern things are even more favored in computing. We see computers, cell phones and tablets replaced on an approximately annual basis with hardware having far greater capability. Software (or apps) gets replaced even more frequently. Research programs are supposed to be the epitome of modernity and pave the road to the future. In high-end computing no program has applied more resources (i.e., lots of money!! $$) to scientific computing than the DOE’s Advanced Simulation & Computing (ASC) program and its predecessor, ASCI. This program is part of a broader set of science campaigns to support the USA’s nuclear weapons stockpile in the absence of full-scale testing. It is referred to as “science-based” stockpile stewardship, and it is generally a commendable idea. It’s been going on for nearly 25 years now, and perhaps the time is ripe (over-ripe?) for assessing our progress.

So, has ASC succeeded?

My judgment is that ASC has succeeded in replacing the old generation of legacy codes with a new generation of legacy codes. This is now marketed to the unwitting masses as “preserving the code base.” This is a terrible reason to spend a lot of money, and it fails to recognize the real role of code, which is to encode the expertise and knowledge of the scientists into a working recipe. Legacy codes simply make this an intellectually empty exercise, making the intellect of the current scientists subservient to the past. The codes of today have the same intellectual core as the codes of a quarter of a century ago. The lack of progress in developing new ideas into working code is palpable and hangs heavy around the entire modeling and simulation program like a noose.

It’s not technology that limits us. We’re the limitation. Our technology is an expression of our intelligence and creativity, so the limitations of our technology are a reflection of our own limitations. We can’t fundamentally advance technology until we fundamentally advance ourselves.

― Christian Cantrell

A modern version of a legacy code is not modernization; it is surrender. We have surrendered to fear and risk aversion. We have surrendered to the belief that we already know enough. We have surrendered to the belief that the current scientists aren’t good enough to create something better than what already exists. As I will outline, this modernization is more an attempt to avoid engaging in risky or innovative work. It places all of the innovation in an inevitable change in computing platforms. The complexity of these new platforms makes programming so difficult that it swallows every bit of effort that could be going into more useful endeavors.

The prevailing excuse for the modernization program we see today is the new computers we are buying. These computers are the embodiment of the death rattle of Moore’s law. They are still echoes of the mainframe era, which died so long ago yet lives on in scientific computing. The whole model of scientific computing is anything but modern; it is a throwback to a bygone era that needs to die. Mobile computing drives computing today, and the true power of computing is connectivity and mobility, or perhaps ubiquity. These characteristics have not been harnessed by scientific computing.

The future is already here – it’s just not evenly distributed.

― William Gibson

Is a code modern if it executes on the newest computing platforms? Is a code modern if it is implemented using a new computer language? Is a code modern if it utilizes new software libraries in its construction and execution? Is a code modern if it has embedded uncertainty quantification? Is a code modern if it does not solve today’s problems? Is a code modern if it uses methods developed decades ago? Is a code modern if it runs on my iPhone?

What makes a code, or anything else for that matter, modern?

For the most part, the core of our simulation codes is not changing in any substantive manner. Our codes will be solving the same models, with the same methods and algorithms, using the same meshing approaches and the same analysis procedures. The things that will be changing are the coding and implementation of these models, methods and algorithms. The operating systems, system software, low-level libraries and degree of parallelization will all change substantially. The computers we run the codes on will change dramatically too. So at the end of the process, will our codes be modern?

The conventional wisdom would have us believe that we are presently modernizing our codes in preparation for the next generation of supercomputers. This is certainly a positive take on the current efforts in code development, but not a terribly accurate characterization. The modernization program is largely limited to the aspects of the code that have the least impact on the results, and it avoids modernizing the aspects of a code most responsible for its utility. To understand this rather bold statement requires a detailed explanation.

Ultimately, if our masters are to be believed, the point of ASC, SBSS and our codes is the proper stewardship of the nuclear weapons stockpile. The stockpile exists in the real, physical world and consists of a decreasing number of complex engineered systems we are charged with understanding. Part of that understanding involves the process of modeling and simulation, which needs a chain of activities to succeed. Closest to reality is our model of reality, which is then solved by a combination of methods and algorithms, which in turn are implemented in code to run on a computer. All of this requires a set of lower-level libraries and software that effectively interface the coded implementation with the computer. Finally, we have the computer that runs the code.

Each one of these steps is essential and must work properly for the whole to succeed; the needs of each step must be balanced against the others in a holistic fashion. For example, no amount of computer power, computer science, or scaling will ever rescue a code whose models are flawed. If you believe that our models as presently stated are inappropriate to answer the questions facing the stockpile today (and I do), the current program does nothing to alleviate this problem. I believe we have failed to properly balance our efforts and allowed ourselves to create a new generation of legacy codes to replace the previous one. A legacy code is the opposite of a modern code, but it’s exactly what we have made.
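The point that no amount of computing rescues a flawed model can be made concrete with a toy calculation (my example, not the post’s). Suppose the true process is y' = -y + 0.1, but our model drops the source term and solves y' = -y. Refining the numerics drives the discretization error to zero, yet the total error against reality plateaus at the model error:

```python
# Toy demonstration: total error = model error + numerical error.
# The "true" process is y' = -y + 0.1; the deliberately flawed model
# omits the source term. Shrinking the time step removes only the
# numerical error; the model error remains no matter how hard we compute.
import math

def euler_model(h, t_end=1.0):
    """Forward-Euler solve of the flawed model y' = -y with y(0) = 1."""
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * (-y)
    return y

# Exact solution of the TRUE process y' = -y + 0.1 at t = 1.
y_true = 0.1 + 0.9 * math.exp(-1.0)

errors = {h: abs(euler_model(h) - y_true) for h in (0.1, 0.01, 0.001)}
# The error stops shrinking near |exp(-1) - y_true| ≈ 0.063: the model error.
```

The error does fall from h = 0.1 to h = 0.001, but it stalls at the irreducible model error; only fixing the model (not buying a faster computer) removes that floor.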

The goal of a life purpose is not what you will create, but what it will make you into for creating it.

― Shannon L. Alder

A major problem with the approach we have taken to computing is its impact on the careers of our staff. Instead of producing a cadre of professionals spanning the full spectrum of necessary knowledge and skills, we have a skewed base. With the bias toward stewardship by massive computer power, and without emphasis on modeling, methods or algorithms, the development of our scientists and engineers is similarly and unhealthily skewed. By not embracing a holistic path with an emphasis on creation and innovation, the development of the current generation of scientists and engineers is stunted. Our current path perpetuates an unbalanced approach and amplifies its harmful impact by eschewing risky research and avoiding both innovation and discovery. This produces the knock-on effect of killing the development of our staff.

It is notable that this is a New Year’s Day post, so the future is here. Given this, and upon some reflection, a research program isn’t really good enough if it is merely modern; it must be futuristic. Research should be creating the future, not simply living in the present. If research is stuck in the past, the future really can’t be accessed. My concern is that the view of low-risk endeavors is severely shaped by what has succeeded in the past. The best way to be successful, at least superficially, is to do what has worked before. This seems to be what we are doing in high performance computing: we build the codes that worked in the past and put them on our big mainframes. The truth is we can’t be modern if we are stuck in the past, and we will never create the future that way.

So this is where we are: stuck in the past, trapped by our own cowardice and lack of imagination. Instead of simply creating modern codes, we should be creating the codes of the future, applications for tomorrow. We should be trailblazers, but this requires risk and taking bold chances. Our current system cannot tolerate risk because it entails the distinct chance of failure, or unintended consequences. If we had a functioning research program, there is the distinct chance that we would create something unintended and unplanned. It would be disruptive in a wonderful way, but it would require the sort of courage that is in woefully short supply today. Instead we want certain outcomes and control, which means that our chance of discovering anything unintended disappears from the realm of the possible.

With relative ease this situation could be rescued. Balance could be restored and progress could proceed. We simply need to place greater focus and proper importance on the issues associated with modeling, methods and basic algorithms (along with appropriate doses of experiments, physics and real applied math). Each of these areas is greatly in need of an injection of real vitality and modernity, and each offers far greater benefits than our present focus on computing hardware. It is arguable that we have evolved to the point where the emphasis on hardware is undermining more valuable efforts. This would require a requisite reduction in some of the computer science and hardware focus, which is useless without better models anyway.

The core of the issue is the difficulty of using the next generation of computers. These machines are monstrous in character. They raise parallelism to a level that makes the implementation of codes incredibly difficult. We are already in a massive deficit in terms of performance on computers: for the last 25 years we have steadily lost ground in accessing the potential performance of our machines. Our lack of evolution in algorithms and methods plays a clear role here. By choosing to follow our legacy-code path, we are locked into methods and algorithms that are suboptimal in terms of performance, accuracy and utility on modern and future computing architectures. The amount of technical debt is mounting, magnified by acute technical inflation.

I’ll posit an even more controversial idea about massively parallel computers. These machines were a bona fide disruptive innovation, but instead of disrupting positively, as the concept usually implies, parallel computing has been destructive. The implementation of standard scientific computing models and methods has been so difficult that more valuable efforts have been decimated in the process. For example, numerical linear algebra has been algorithmically static for thirty years. The effort to merely implement multigrid on parallel computers has swallowed all of the innovation. The problem is that a single algorithmic breakthrough would dwarf the impact of all that implementation work. Have we been denied a breakthrough because our effort is all focused on implementation?
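For readers who have not met it, the multigrid idea mentioned above fits in a few lines. This is a minimal two-grid cycle for the 1D Poisson problem, my own illustrative sketch (not code from any production effort): smooth the high-frequency error on the fine grid, correct the smooth remainder on a coarser grid, and smooth again.

```python
# Minimal two-grid cycle for -u'' = f on [0,1], u(0) = u(1) = 0.
# Weighted-Jacobi smoothing on the fine grid; a crude (iterative) solve
# on the coarse grid. Illustrative sketch only.

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2 * u[i] - left - right) / h**2
    return r

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted Jacobi: damps the high-frequency error components."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [u[i] + w * r[i] * h**2 / 2 for i in range(len(u))]
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                      # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2                   # coarse point j sits at fine 2j+1
    rc = [0.25 * r[2*j] + 0.5 * r[2*j + 1] + 0.25 * r[2*j + 2]
          for j in range(nc)]                # full-weighting restriction
    ec = jacobi([0.0] * nc, rc, 2 * h, sweeps=100)  # crude coarse solve
    e = [0.0] * len(u)                       # prolong by linear interpolation
    for j in range(nc):
        e[2*j + 1] = ec[j]
    for i in range(0, len(u), 2):
        left = e[i - 1] if i > 0 else 0.0
        right = e[i + 1] if i < len(u) - 1 else 0.0
        e[i] = 0.5 * (left + right)
    u = [u[i] + e[i] for i in range(len(u))]
    return jacobi(u, f, h)                   # post-smooth

n, h = 15, 1.0 / 16
f = [1.0] * n
u = two_grid([0.0] * n, f, h)
```

The algorithmic content is a handful of lines; the thirty years of parallel-implementation effort the paragraph laments went into mapping exactly this kind of cycle onto distributed machines.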

The clincher is that the next generation of computing may be even more catastrophically disruptive than the previous one.

To accomplish this we would have to turn our back on the mantra of the last quarter century: that we just need a really fast computer (preferably the fastest one on Earth) and the stockpile will be OK. This mindset is so incredibly vacuous as to astound, but the true epitome of modernity is superficiality. The view that a super-fast computer is all we need to make modeling and simulation work effectively is simplistic in the extreme. In modern America, simplistic is what sells. Americans don’t do subtlety, and the current failures in high performance computing can all be linked to the difference between the simplistic messaging that gets funding and the subtle messaging of what would be effective. Our leaders have consistently chosen to focus on what would get funded over what would be effective. We cannot continue to make these choices and be successful; the deficit in intellectual depth will come due soon. Instead of allowing this to become a crisis, we have the opportunity to get ahead of the problem and change course.

The best way to predict your future is to create it.

― Abraham Lincoln

The real goal should not be modernizing our codes; it should be creating the codes of the future. First, we must throw off the shackles of the past and refuse to perpetuate the creation of a new generation of legacy codes. The codes of the future should solve the problems of the future using futuristic models, methods and algorithms. If we keep our attention on the past and promote the continued preservation of an antiquated code base, the future will never arrive. Simply implementing the codes of the past so that they work on new computers is merely a useful exercise in proofing concepts. The computers we purchase at great cost should be looking forward, not back, with fresh eyes and new ideas for solving the problems ahead of us, not yesterday’s.

Today’s science is tomorrow’s technology.

― Edward Teller

We owe the future nothing less. The future is in our hands; we can make it into what we want to.

You realize that our mistrust of the future makes it hard to give up the past.

― Chuck Palahniuk

