The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


What is the essence of computational science?

05 Monday Jan 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Crisis is Good. Crisis is a Messenger.

― Bryant McGill

Computational What? Science? Engineering? Physics? Mathematics?

My last post was about computational science’s impending multi-crisis: the loss of Moore’s law, exploding software complexity, and a failure to invest in the field’s intellectual foundation. A reasonable question is how broadly the issues described there apply to subsets of the field. What about computational physics? What about computational engineering? What about computer science? What about applied mathematics? What are the differences between these notions of computation’s broader role in the scientific world? What are the key characteristics that make up the essence of scientific computing?

One of the most important of disciplines is the one that never assumed its logical name: mathematical engineering.

—Nick Trefethen

Computational science is an umbrella for a variety of things that have gradations of difference and don’t form a terribly coherent whole. Instead it is a continuum, with one end held down by computer science and the other by computational engineering; or perhaps by the fields that birthed computing, physics and mathematics. The differences between engineering, mathematics and physics show themselves in computation as they do in other fields, but scientific computing should really be something of an amalgam of all of these areas.

We are not creators; only combiners of the created. Invention isn’t about new ingredients, but new recipes. And innovations taste the best.

― Ryan Lilly

The Origin: Physics and Mathematics
To start our discussion it is worth taking a look at the origins of computing, when mathematics and physics combined to create the field. This combination is embodied in John von Neumann, whose vision largely produced the initial instantiation of scientific computing. Scientific computing began in earnest under the aegis of the development of the atomic bomb. The application of computing was engineering analysis done by some of the greatest physicists in the world, most notably Hans Bethe and Richard Feynman, using methods devised by John von Neumann and Rudolf Peierls. Engineering was limited to the computer itself. Mathematicians played key roles in using computers more properly, notably through the efforts of Robert Richtmyer, Nicholas Metropolis and Richard Hamming. As a rule, the overall effort was conducted by a host of geniuses for an application of monumental international impact and importance. Practically speaking, they were exquisitely talented scientists who were also immensely motivated and had every resource available to them.

From this august origin, computing began to grow outside the womb of nuclear weapons work. Again, it was John von Neumann who provided the vision and leadership, this time from the Institute for Advanced Study in Princeton, focused on weather prediction and the development of better computers. Again, the application was largely in the realm of physics, with the engineering being applied to the computers. Meanwhile computing was broadening in its appeal, drawing attention from the successes at Los Alamos and Princeton along with work by colleagues at universities. Other labs in the United States and the Soviet Union also began exploring the topic. It still remained immature and speculative, especially in a world that scarcely comprehended what a computer was or could do.

Computers are incredibly fast, accurate, and stupid: humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination.

― Albert Einstein

Engineering Joins the Revolution

It wasn’t until the 1960s that engineering activities began to include computation. Part of the reason for this was that the initial set of methods had been developed by physicists and mathematicians, and that computing was unavailable in general, and computing power sufficient to contemplate engineering in particular. At first, the engineering uses of computing were exploratory and relegated to research activities, more like the physical or mathematical sciences than engineering practice. By the 1970s this had ended, led by a handful of pioneers in aerospace, mechanical and civil engineering. The growth of engineering use of computing also led to some bad things, like the hubris of the awful “numerical wind tunnel” affair: in the late 1970s, talk of replacing wind tunnel testing with numerical simulations became an embarrassment (a mistake we have naively made all over again). It represented a massive technical overreach and ultimately a setback, driving a wedge between computing and experiments.

Civil engineering made progress by utilizing the finite element method, which was ideally suited to that field’s intellectual basis. In mechanical engineering, heat transfer and fluid flow problems, dominated by heat exchanger design, led the way. Together with aerospace engineering, these fields created the important topic of computational fluid dynamics (CFD), the archetype of computational science in general. Nuclear engineering was birthed from physics and had computing at its heart almost from the beginning, especially for the problem of reactor core design. These methods came directly from the nuclear weapons program as a natural outgrowth of the peaceful exploration of nuclear power.

Science is the extraction of underlying principles from given systems, and engineering is the design of systems on the basis of underlying principles.

—Nick Trefethen

Mathematics Shrinks from View

Computer Science is no more about computers than astronomy is about telescopes

― Edsger Wybe Dijkstra

All of this was a positive outgrowth of the combination of physics and mathematics. During the same period the mathematical contributions to scientific computing went in several directions, with pure mathematics birthing computer science and applied mathematics going its own way. Computer science has become increasingly divorced from scientific computing over time and has failed to provide the sort of inspirational impetus mathematics had previously provided. For several decades applied mathematics filled this vacuum with great contributions to progress. In more recent times applied mathematics has withdrawn from this vital role. These twin developments have taken a terrible toll, depriving scientific computing of a strong pipeline of mathematical innovation. I will admit that statistics has made recent strides in connecting to scientific computing. While this is a positive development, it hardly makes up for the broader diminishing role of the rest of mathematics in computing.

We see that computation was born from physics and mathematics, with engineering joining after the field had been shaped by them. Over the past thirty or forty years engineering has come to play an ever larger part in scientific computing, the physical sciences have continued their part, but mathematics has withdrawn from centrality. Computer science has taken up pure mathematics’ mantle of detachment from utility. Applied mathematics leapt to fill this void, but has withdrawn from providing the full measure of much-needed intellectual vitality.

Computer science is one of the worst things that ever happened to either computers or to science.

—Neil Gershenfeld

Computing Becomes Something Monstrous

Part of the reason for this is a change in the cultural consciousness regarding computing. In the beginning there was physics and mathematics combining in the imagination of John von Neumann to produce something new, something wonderful and era defining. It gestated in the scientific community for several decades until computing exploded into the public consciousness. A combination of maturity in the use of computing and sufficient computing power available to the masses triggered the transition. Computing was no longer the purview of nerds and geeks; it was now owned by all of humanity. As such computing became somewhat pedestrian in nature and lost its sheen. This also explains the rise of engineering as an outlet for computing, and the loss of mathematics. In the absence of innovation we substituted raw power. Rather than continue to improve through better thinking we came to rely upon Moore’s law for progress. Where we used to out-smart problems, they now would be overpowered by an unparalleled force.

While scientists and big business owned computing until about 1995, all of a sudden it became public property. Soon it grew to be something that dominated the global economy. Powered by Moore’s law, computing became ubiquitous and ironically ceased being about computing; computers became about communication. Now everything valuable about computers is communication, not computation. Computation is an essential but minor element in the value proposition. A big part of the reason is that the power of computers is so great that the computational load has become trivial. The Internet gives access to information and data and connects people in ways never imaginable. As such the business possibilities are staggering. Computing is no longer so much about computers as it is about people and their money.

Moore’s law also became a cipher for technology and progress. It has infected computational science with its pervasively superficial nature. It isn’t that Moore’s law is superficial per se; it is the societal interpretation of its implications that is. Moore’s law is at death’s door, if it hasn’t already passed. Its death does not mean progress will die, just that progress’s path will change.

You can’t solve a problem with the management of technology with more technology.

— Bill Joy

Where do we go from here?

What we can conclude from this discussion is that the character of scientific computing has fundamentally changed. Where the use of computing for engineering work should have added to form a more complete whole, the withdrawal of mathematics has cheated us of that brighter future. Engineering is an essential human activity and the natural outgrowth of our scientific achievements, but it can lack creativity at times. Such creativity is always beneficial and is better when combining disciplines. The structure and rigor of mathematics is essential for putting this creativity on the strongest footing. To make progress in the future it is essential to include engineering, the physical sciences and mathematics with some degree of equality. The rather weak-minded approach of simply utilizing Moore’s law to drive scientific computing forward must end, both on fundamental grounds and because that source of progress is dying.

Postscript

I’ve decided to get off the daily writing thing, or more accurately the daily posting. I still need to write every single day, so that’s not changing. I’m a lot better off thinking a bit more about the topic to write about, and working it out over several days. My new goal is two or three blog posts a week.

The Future is Already Here, Every Day.

The future is already here – it’s just not evenly distributed.

― William Gibson

2015: Time for a New Era in Scientific Computing?

01 Thursday Jan 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Societies in decline have no use for visionaries.

― Anaïs Nin

So it’s a new year, with all the requisite reflective looks forward and backward. I’ll do both here and posit that perhaps an era is drawing to a close and it’s time for a big change in scientific computing. Even more, I’ll argue that a big change is being thrust upon us, and it’s time to get ahead of it. I’ve taken the history of scientific computing and laid it out in a series of eras, each 15-20 years long. These eras are defined by a combination of ideas, algorithms, methods, hardware and software. Changes in the composition of all of these define each era and trigger the transitions.

 A man’s shortcomings are taken from his epoch; his virtues and greatness belong to himself.

― Johann Wolfgang von Goethe

I believe that a combination of crises will trigger the change that is upon us. One of these crises has all the headlines, one is barely hidden from view and a third is silent, but each has a huge role to play. The key visible crisis is hardware driven and revolves around the viability of Moore’s law in computing and computational performance. We seem to have taken the approach that maintaining Moore’s law is essential, and we are willing to expend vast amounts of money to achieve it. This money could be spent more profitably elsewhere in the enterprise. The second crisis is software driven and associated with the complexity of scientific software and the ponderous nature it has taken on. Software is becoming increasingly unsustainable and expensive, threatening to swallow all of the available resources. The third, silent crisis is the dearth of new ideas in scientific computing and an inability to turn ideas into progress. This third crisis is primarily driven by the combination of hard-to-impossible-to-use hardware and software complexity exploding to strangle any new ideas in their proverbial cribs. Even when the ideas can be breathed to life, they are starved of the sort of resources and focus necessary to bring them to fruition. Dealing with the first two problems is simply taking all the resources available, leaving nothing for anything else.

History is a Rorschach test, people. What you see when you look at it tells you as much about yourself as it does about the past.

― Jennifer Donnelly

Yesterday I tweeted, “scientific computing was once the grand avenue of computing, now it is a dark alley in a bad neighborhood.” The scientific community once drove computing as a vanguard, and now has to adapt to whatever the market does. It has become a niche activity, economically enslaved to a colossus of global reach. A huge marketplace, which might benefit computing, now drives hardware and software innovation, but its direction is not optimal for science. We must react to directions that benefit the marketplace rather than determine the direction of the market.

Let us study things that are no more. It is necessary to understand them, if only to avoid them.

― Victor Hugo

The politics of the time have an enormous impact on focus and resource availability. Scientific computing was born in the crucible of a World War and matured in the urgency of the Cold War. Nothing like that exists today to focus the mind and open the pocketbook. On the other hand, computing has never been as important as it is today. Never have more of society’s resources gone in its direction. How can we harness this massive creative force for our benefit?

The greatest and most powerful revolutions often start very quietly, hidden in the shadows. Remember that.

― Richelle Mead

I’ve taken the history of scientific computing and broken it up into five distinct eras, and made the leap of defining 2015 as the beginning of a new one. I’ll grant that the dates are rounded up or down by a few years, so maybe we’re already in a new era, or it’s a few years off.

The farther backward you can look, the farther forward you are likely to see.

― Winston S. Churchill

  • Pre-1945 (prehistory): There was no scientific computing because computers were in their infancy, and their use for science was not yet seen. In this time the foundations in mathematics and physics were laid by a host of greats, along with numerical methods crafted for hand computation. The combination of computers with the vision of John von Neumann and the necessity of World War 2 brought scientific computing out of this womb and into practice.
  • 1945-1960 (creation): In this time scientific computing was largely taking place in the most important Labs, on the most important topic, with access to high priority and huge resources. Great innovations were taking place in computers and the practice of computing. Along with refinements in the engineering of computers, the practice of programming began to take shape. The invention of Fortran, with its capacity to express methods and algorithms in code, was one of the developments that brought this era to a close. In this time, the development of mathematical theory and numerical analysis was key. The invention of the concepts of stability and convergence for numerical methods was one of the great achievements. These provided a platform for systematic development in the 1960s.
  • 1960-1975 (foundations): During this period scientific computing emerged from the shadows into the light. Computers became increasingly available outside the top-secret environment, and computing began to be a valid academic endeavor. With this democratization of computing came extensive application to an ever-widening set of problems. Many of the key methods and algorithms for scientific computing were created in this period. The field of computational fluid dynamics (CFD) came into being, and CFD was then viewed as an enormous boon to aerospace science. By the time the period drew to a close there was great optimism and hope. Computers were becoming quite capable and more generally available, and were beginning to be indispensable tools for business. The Labs still led the world, especially because they always had access to better hardware and software than anyone else. Moore’s law had been defined, and a massive growth in computing power had begun.
  • 1975-1995 (glory years): I’ve described this time as the “golden age” of computational science. For the first time the computers and software were balanced with the methods and models. In many ways Seymour Cray defined the era, first with the CDC 6600 and 7600 computers and then with the machines bearing his name. The vision set forth by von Neumann came into force. Academic scientific computing became completely respectable, with mathematics, physics and engineering all taking part. The first hints of extreme hubris were witnessed; the “numerical wind tunnel” debacle unfolded in aerospace. The claim that CFD could displace physical wind tunnel testing in design and qualification was a massive overreach in capability. Great damage was done in the process, and no one seems to have learned from the experience. It foreshadows the developments of the current time with ASC, when the creation of “virtual underground testing” was proposed to make up for a ban on actual underground testing.
  • 1995-2015 (mid-life): Then the glory days ended with a bang through a combination of events. The Cold War ended and soon nuclear testing ceased. The Labs would have their own “numerical wind tunnel” moment, but no actual wind tunnel would be available to test against. At the same time the capacity of the supercomputers of the golden era to maintain Moore’s law came to an end. The entire ASC program hinged upon the premise that advances in computational performance would pave the way for predictive simulation. We had the attack of the killer micros and the birth of massively parallel computation to keep hardware performance on the increase. Getting the methods and models of old to work on these computers became an imperative; so did access to more computing power via Moore’s law. At the same time the complexity of the codes was growing by leaps and bounds. New programming paradigms were ushered into use, with C++ leading the way; its object-oriented principles were thought to be a way to handle the seemingly overwhelming complexity. With more resources flowing into hardware and software, the amount of energy going into methods and models waned. Where efforts in these endeavors had previously yielded gains larger than Moore’s law, such gains simply evaporated during this era.
  • 2015- (mid-life crisis): Now we get to today, when the elements of revolution are falling into place. We have three crises at hand, each having brewed during the era now ending. Hardware is in crisis with Moore’s law either already dead or at death’s door. The complexity of the software is beginning to threaten progress. Lack of innovation in methods, algorithms and modeling is killing other sources of improved performance. Let’s look at each crisis in turn and its threat to scientific computing’s progress.

All revolutions are impossible until they happen; then they become historical inevitabilities.

― David Mitchell

The most evident crisis is the demise of Moore’s law. Given the devotion to computing power as the route to predictive computational science, the loss of growth in computing power would be fatal. There are two worrying signs: the growth in computing power at the processor level has slipped to a crawl, and the ability to use all the power of the massively parallel computers for real problems is missing. At the low end of computing nothing will save Moore’s law, especially as the computing industry has moved on to other priorities; this is simply accepted. At the high end we grasp at terrible metrics like weak scaling or LINPACK to hide the problems, but the immensity of the issues becomes clearer every day. In the middle, Moore’s law is clinging to life, but the two sides are converging on the middle, and when they do Moore’s law will be dead. There are a host of hopes for life, but the laws of physics are arrayed against the continuation of this trend. With all the effort going into sustaining Moore’s law, what will be left to pick up the pieces?
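
To make the complaint about weak scaling concrete, here is a minimal sketch of the two classic scaling laws in Python. Everything in it is invented for illustration; the serial fraction and processor counts are not measurements of any real machine.

```python
# Hedged illustration: why weak scaling flatters. With serial fraction s,
# Amdahl's law caps the strong-scaling speedup of a fixed-size problem at
# 1/s, while Gustafson's weak-scaling "speedup" keeps climbing because the
# problem is grown along with the machine. All numbers are invented.

def amdahl(p, s):
    """Strong scaling: fixed problem size on p processors."""
    return 1.0 / (s + (1.0 - s) / p)

def gustafson(p, s):
    """Weak scaling: problem size grows proportionally with p."""
    return p - s * (p - 1)

for p in (10, 100, 1000, 10000):
    print(p, round(amdahl(p, s=0.01), 1), round(gustafson(p, s=0.01), 1))
# With a 1% serial fraction, p = 10000 gives ~99x by Amdahl (capped at
# 1/s = 100) but ~9900x by Gustafson, for the very same machine.
```

The same hardware looks stalled by one metric and triumphant by the other, which is precisely the sense in which such metrics can hide the problems.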

 

A second crisis simmering just below the surface is software complexity. The combination of trying to make codes work on cutting edge computers and the increasing desire for capability in codes is creating a software monstrosity. Earlier in the week I learned from readers of the blog about Gall’s law, which says that a complex system that works evolves from a simple system that worked, while a complex system designed from scratch will not work. We run a great risk of being stuck with massive code bases that are eroded by mountains of technical debt. These debts threaten to choke all progress and commit us to the fundamental methodology defined by the increasingly unwieldy code. The issue of software, and how we translate our intellectual labor to silicon, has to be dealt with soon or it will strangle progress more surely than the death of Moore’s law.

 

The third and most shadowy crisis is the lack of impact from methods, models and algorithms in the most modern era of scientific computing. As I said earlier, part of the problem is the twin crises of declining hardware gains and software bloat sapping the energy from the system. Before our infatuation with Moore’s law as the heartbeat of progress, innovation in algorithms, numerical methods and modeling produced more progress than hardware gains. These gains are harder to measure and far subtler than raw computational performance, but just as real. As hardware fades away as a source of progress, they are the natural place to turn for advances. The problem is that we have starved this side of scientific computing for nearly 20 years. Major changes are needed to reinvigorate this approach. As I’ve come to realize, the software languages are themselves a massive algorithmic achievement (the Fortran optimizing compiler is listed among the 10 greatest algorithms of the 20th century!). This is to say that intellectual labor toward figuring out how to program computers in the future is part of this issue and a necessary element in fixing two of the crises.

 

But I suppose the most revolutionary act one can engage in is… to tell the truth.

― Howard Zinn

 

The question is whether we will answer the call to action that the current day’s developments demand. The situation is so critical that the current state of affairs cannot continue for much longer. My greatest concern is the lack of leadership and the lack of appetite for the risk-taking necessary to take on the challenge of the day. If we can find the courage to step forward, new vistas await; it’s simply a matter of coming to terms with realities. If we don’t, the next era of scientific computing could be marked by decline and obsolescence. It need not be this way, but some significant changes are needed in attitudes and approaches to leading the field. Are we up to the challenge? Do we have the leadership we need? It is time to get ahead of the crises now, before they become overwhelming.

 

Those who make peaceful revolution impossible will make violent revolution inevitable.

― John F. Kennedy

 

A revolution is not a bed of roses. A revolution is a struggle between the future and the past.

― Fidel Castro

 

Is this really the best we can do?

30 Tuesday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The biggest human temptation is to settle for too little.

― Thomas Merton

Lately at work I find myself wondering, “Is this really the best we can do?” “Why aren’t people interested in better methods?” “Are the current methods and models good enough?” The reason for these questions is the relatively low bar we set for achievement these days. The outcome of all this lack of ambition is astonishing levels of mediocrity coming from ridiculously large investments. Ultimately it reflects our choice to invest too much in hardware and not enough in intellectual capacity.

Some men are born mediocre, some men achieve mediocrity, and some men have mediocrity thrust upon them.

― Joseph Heller

The problem with the money spent on computational science is one of lack of balance and aggression. There is plenty of money; the problem is what that money is spent on. Too much focus has been placed on hardware, with a reliance on Moore’s law for progress. Hardware improvement has been a sure thing for half a century, and thus a low risk. Recent investments in scientific computing have largely forgotten the history of scientific computing. A lot of issues remain unaddressed by current work. This is a theme I’ve touched on before (https://williamjrider.wordpress.com/2014/09/12/the-dangers-of-good-enough-thinking/, https://williamjrider.wordpress.com/2013/12/26/what-makes-a-computational-simulation-good/, https://williamjrider.wordpress.com/2014/03/20/legacy-code-is-terrible-in-more-ways-than-advertised/, https://williamjrider.wordpress.com/2014/10/06/supercomputing-is-a-zombie/).

Compromise is a sign you’ll pass on the road to mediocrity.

― Tim Fargo

There is a significant practical problem with maintaining the progress provided by Moore’s law: it has become very hard to do. This is because Moore’s law is already dead, at least in any practical sense. For a tiny, relatively unimportant and impractical class of applications we can still make it work, but at a ridiculous cost. For most of the things we really use computers to do, Moore’s law died about 10 years ago. To keep it alive we make computers that are incredibly hard to use, build and maintain. They use too much power and cost way too much money. The resources going into this fool’s errand are starving all the work that actually makes the computers useful. This downward spiral needs to end. The commercial computing world has already divested from Moore’s law, and now looks to software and communication capability for value. The hardware is improving, but modestly compared to the past.

Another critical issue is the complexity of the software. The software is much like an infrastructure that is crumbling. Like roads, we patch the codes, but rarely rebuild the software from the ground up. As a result we use the same basic software for years beyond the date it should have been rewritten. It becomes less useful and more expensive to maintain. These costs amplify over the long term, but in the short term the patch-and-kick-the-can-down-the-road approach is viable. The problem simply gets worse every year. We are unwilling to deal with rewriting the software because of the investment it represents. The problem is quite analogous to the Nation’s physical infrastructure problems.

We also have issues regarding methods and models. As codes become more capable, it becomes harder to develop new codes because it is so expensive to measure up. The old codes can do marvelous things for practical problems. Testing new methods and models becomes nearly intractable. Ideas to make methods and models abstract with “components” have largely failed to provide a path forward. Part of the issue is the inability of component-based methods to solve “real” applications, which requires a lot of dirty work (robustness, efficiency, reality). As a result the older methods and models have become ingrained. As this happens the community working on methods and models becomes estranged from computing. Additionally, the effort to put these older codes on new computers has become extremely difficult and expensive. This is compounded by the size of the code base. Together we have the recipe for disaster.

The highest level that can be reached by a mediocre but experienced mind is a talent for uncovering the weaknesses of those greater than itself.

― Georg Christoph Lichtenberg

This ends in the loss of expertise at the depth needed for excellence, because improving methods and models is what developed those experts, and that isn’t happening. This provides all of the makings of a vicious cycle. Eventually we will hollow out our expertise, leaving us with old methods and old models running on new computers that are outrageously expensive, but less useful every year. It simply cannot be sustained. The short-term thinking and the lack of tolerance for risk keep us from solving any of these problems. We end up settling for mediocrity as a result.

The only sin is mediocrity.

― Martha Graham

I think that the idea of not having Moore’s law operating terrifies some people. They won’t have the security of doing nothing and making progress. The problem is that they don’t recognize the exorbitant costs of propping up Moore’s law for the last decade, or the cost of what has been sacrificed. The terrible thing is that the costs and risks of the path we’ve taken are far higher. We are moving toward a collapse of astounding proportions. Instead of building a sustainable future we are building on the past while losing sight of how we actually got here. For decades the mathematics and physics were miles ahead of computers. During the late 1970s computers caught up, and for fifteen or twenty years there was a glorious balance of computing hardware and intellectual capital. We have lost sight of what made all of this possible, and we are taking a huge risk moving forward.

Caution is the path to mediocrity. Gliding, passionless mediocrity is all that most people think they can achieve.

― Frank Herbert

At some level it all stems from fear of failing, which ironically leads to actual failing, or at least success so modest that it seems indistinguishable from failure to success-minded folk. I simply don’t see an appetite for progress that can overwhelm the desire to never appear to fail. This outcome is assured by the belief that we can manage our way to success (and manage away failure), and by the short-term focus on everything.

As long as I have a want, I have a reason for living. Satisfaction is death.

― George Bernard Shaw

 

We need more bad ideas

27 Saturday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

What the world really needs now is a lot of bad ideas. We need an environment where bad ideas can thrive and fall flat. We need people who can guide those bad ideas into the grave with deftness and courage. The people responsible for these bad ideas should be celebrated and rewarded for their failure. If this sounds absurd, it is, and this seeming absurdity is the heart of our incapacity for real success.

The best way to get a good idea is to have a lot of ideas.

—Linus Pauling

I’m not talking about intentionally bad ideas, but ideas that honestly push to solve problems in innovative ways that most people criticize as being bad. If we can tolerate lots of terrible ideas and let them fail, and then come back for more bad ideas, we will succeed in creating a place where something great can happen. Today’s success-only world is creating an environment where nothing great can happen. Our fears are destroying our ability to be great because we don’t feel that anything audacious and “out of the box” that might flop can be tolerated.

Freedom means nothing unless it means the freedom to be different

― Marty Rubin

Right now you can’t support a bad idea; everything has to succeed, or at least be able to be spun into a success. This is producing an enforced mediocrity that is choking real success to death before it is even born. The thing is that some of the bad ideas are actually brilliantly good ideas. If we don’t support the bad ideas, the brilliant ones hiding in the shadows will never see the light. They are viewed as bad because they don’t fit conventions. Conventions are safe, and lead to the form of mediocrity masquerading as success today.

 

If failure is not an option, then neither is success.

― Seth Godin

Here’s to being big screw-ups. Let’s do a lot more of it. Let’s celebrate it, and come back for more. In the process we will see greater success than ever.

Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can’t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.

― Apple Inc.

 

Scientific computing’s achievement myth?

25 Thursday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

It’s never too late

…to start heading in the right direction.

― Seth Godin

What would be the best gift I could hope for? Honestly the best gifts would be health and happiness in my private life, so I mean the best gift for my professional life. That would be a much more vibrant and rewarding research life than the zombie-like march toward mediocrity we’ve been on for several decades. The mediocrity is being fueled by the tendency to celebrate false achievement to satisfy the need for unvarnished success. So what would I like to find under the tree this morning?

Confidence is ignorance. If you’re feeling cocky, it’s because there’s something you don’t know.

― Eoin Colfer

How about an end to the orgy of self-congratulating false achievement defining scientific computing these days? That would be refreshing for a change. There is a disturbing trend to report only success today; any sense of failure or challenge is simply not accepted by our funding agencies. So instead of computation honestly being the third way to practice science, as many would pronounce, it still falls a bit short. Perhaps with sufficient honesty over the past couple of decades, that pronouncement would be an honest assessment instead of a hollow boast.

To share your weakness is to make yourself vulnerable; to make yourself vulnerable is to show your strength.

― Criss Jami

There are plenty of challenges to be had if we simply applied some integrity to our assessment of what we’re capable of. This doesn’t belittle what we are capable of, but rather highlights the size of the difficulties we face. The worst aspect of the current situation is that our present capability could be so much beyond what it is today if we treated the science with greater honesty. Computational science opens doors to understanding unavailable to experimental and theoretical science; it is a complement to each, but not a replacement. A large part of the problem is the extent to which it has been sold as a replacement for traditional science. This is at the core of the problem, but there is much more wrong with the direction.

The best way to predict the future is to invent it.

—Alan Kay

As the end of the era of cheap progress in computing power looms before us, we should look toward crafting a future where progress isn’t gifted by hardware following a law whose demise is vastly overdue (i.e., the end of Moore’s law). I see a field where Moore’s law is seen as the only path to success, and its demise is greeted as apocalyptic. Rather than seeing the end of Moore’s law as a problem, I believe it will force us to work smarter again. We will stop relying upon faster computers to gift us progress and start thinking about how to solve problems better. This lack of thought has created a dismal state of affairs in scientific computing research.

Before getting to what we are missing, it might be good to focus a little attention on hardware and how we got into this mess. Twenty some-odd years ago we had the “attack of the killer micros.” This became something real as it swallowed the high performance computing world whole. It’s being taken over today by an attack of legions of portables. The entire computing industry has become enormous, dominated by cell phones and the impending “Internet of things.” Before the killer micros, we had custom machines and custom chips tailored to the scientific computing world, typified by Crays.

It might be worth thinking about what computers customized for the needs of scientific computing would have looked like if we hadn’t accepted the fool’s errand of chasing Moore’s law like a dog chases a dirty old tennis ball. I don’t like being overly nostalgic, but in many ways the state of affairs with computing hardware has gotten far worse over the last twenty years. Sure, the computers are much faster, but they are terrible to use. From a user’s perspective, the systems are worse. We have accepted the commodity bargain and a “worse is better” approach instead of demanding something focused on solving our problems. I would hypothesize that we would have been better off with somewhat slower computers that were more useful and better suited to our tasks. In relative terms we have accepted crap in pursuit of a politically correct, but technically corrupt, vision of computing. In pursuit of the fastest, biggest computer we have accepted worse actual, real performance.

Innovative solutions to new challenges seldom come from familiar places.

—Gyan Nagpal

What if we had stayed with that model? Of course it wasn’t a viable path from a business point of view, but stay with me. What would computer designs look like? What would the programming model look like? How would the emphasis be different? I’d like to think we would have been better off not trying to squeeze two more decades out of Moore’s law. The truth is that we never use all of these machines anyway, except for marginally useful (mostly useless) stunt computations. The whole Moore’s law thing is largely a lie anyway; the problem being solved in measuring speed is completely uncharacteristic of scientific computing (LINPACK). If we apply benchmarking to something more realistic (the HPCG benchmark is a step in the right direction), we see that supercomputers get 1-5% of the speed that the LINPACK benchmark gives. Most of our codes are even slower than that. Putting this all together we can see that high performance computing is a bit of a facade. The emphasis on hardware is central to the illusion, but this is only the tip of the illusory iceberg.
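
The arithmetic behind that 1-5% figure is worth spelling out. Here is a hedged sketch with an invented machine size; only the fractions come from the estimate above.

```python
# Invented example: a machine scoring 10 petaflops on LINPACK that sustains
# only 1-5% of that on realistic, HPCG-like work. The peak figure is made
# up for illustration; the fractions are the estimate from the text.
linpack_pflops = 10.0
for fraction in (0.05, 0.01):
    sustained = linpack_pflops * fraction
    print(f"{fraction:.0%} of LINPACK -> {sustained:.2f} petaflops sustained")
# And most of our codes are even slower than that.
```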

Failure isn’t something to be embarrassed about; it’s just proof that you’re pushing your limits, trying new things, daring to innovate.

—Gavin Newsom

Research in scientific computing has not had enough impact over this time. In too many cases new numerical methods are still not being used in codes. The methods embedded in codes are largely the same ones we used twenty years ago. The models too are twenty years old. The older models should be invalid in many cases simply due to the refinement of the mesh, and the requisite change in time and length scales. New capabilities in uncertainty estimation and coupled physics remain research rather than being deployed and producing results. In many cases the codes are two, three or four generations of methods and models overdue for a complete change. Back in the days of Crays, and a bit more courage in research, new codes would be spawned every few years. Now we hold on to our old codes for decades. New codes are the vehicle for new ideas, new methods and new models to be used for useful, important work.

Perhaps a useful analogy is to think of high performance computing as a car. In this view the computer is the engine, and the code is the steering, interior, stereo and other features. What kind of car have we been driving? Basically we are driving a car from the 1980s with a series of new engines. The steering is the same, the interior has at most been reupholstered, and all the original equipment is still in place. Instead of being able to hook up your iPhone to the stereo, we are playing our old 8-tracks. No built-in navigation either, so make sure you buy a map at the gas station. This works fine, but you won’t get any warning about the major construction zone. This is the bargain we’ve entered into; it is an absurd and unbalanced approach. If it were a car we wouldn’t stand for it, so why do we put up with it in computing?

If failure is not an option, then neither is success.

― Seth Godin

Over the years the entire program has suffered under the low-risk, false-success management of today’s research. We are expected to deliver progress without taking risks. No one who understands how to make progress would buy off on it. We labor under the assumption that we can manage our way to complete success while encountering no problems, and succeed without the need for failure. Failure and risk are the lifeblood of progress. The management of research in this manner is largely an illusion that manifests in a real lack of risk-taking and pervasive incrementalism. It strains credibility to its limit to believe in it. The end result is achievement without progress. Achievement is declared by fiat, and only because failure is never an option today. The utter lack of honesty is truly disturbing.

Change almost never fails because it’s too early. It almost always fails because it’s too late.

― Seth Godin

Part of the impact of this reign of mediocrity is the over-development of code bases. This enables the achievement to take place within the incrementalism so intrinsic to the low-risk model of management. To achieve progress we continue to build upon bases of code long after they should have been discarded. As a consequence the amount of technical debt and inflation associated with our code is perilously large. We are hemmed into outdated ideas of how a code should be written, and the methods and models implicit in the design. The ability to start fresh and put new ideas into action simply isn’t allowed under the current model.

 The best way to get a good idea is to have a lot of ideas.

—Linus Pauling

A couple of other write-ups have appeared this week touching on the topic of progress and what’s holding us back (http://www.americanscientist.org/issues/pub/wheres-the-real-bottleneck-in-scientific-computing and http://www.hpcwire.com/2006/07/21/seven_challenges_of_high_performance_computing-1/). In the case of Greg’s commentary, he is right on the mark, but the use of modern software engineering is close to a necessary, yet wholly insufficient, condition for success. I see this where I work. We are about as good as anyone at doing the software end of things professionally, yet without scientific vision and aggressive goal-setting it is a hollow victory.

What the software engineering reflects is the maturity of scientific software and the need to contend with its impact on the field. Codes have become large and complex. To solve big problems they need to be developed using real engineering. Like most infrastructure they crumble and show their age. If they are not invested in and kept up, they will fail to perform. The lifetime of software is much shorter than that of other infrastructure, but similarly we don’t have the political will to fix the problem.

Doug makes a number of good points in his commentary, but I think he misses the single biggest issue: the degree of political correctness in his list. He still hails from the point of view that Moore’s law must be pursued. It is the tail wagging the dog, and it has been in control of our mindset too long. Doug also hits upon some of the same issues that Greg touches on, and again, software engineering done professionally is a necessity. Finally, the increasing emphasis on V&V is touched upon, but as I will discuss, our current ideas about it miss something essential. V&V should be the discipline that points to where progress has been made, and where it has been lacking.

The biggest issue with scientific computing is the conclusion that we have already solved a bunch of problems with current capability. All that we need to do is build bigger computers, refine the mesh and watch the physics erupt from the computer. This is the premise that most of high performance computing is constructed upon: we already know how to do everything we need to do; it’s just a matter of getting the computers big enough to crush the problems into submission.

This is where verification and validation come in. Again, our current practice is permeated with a seeming belief that good results are merely a formality. Most V&V work provides balance: some sense of trust in results combined with an assessment of how limited capability really is. It should provide a targeted view of where improvement is needed. Instead of honesty about the nature of our understanding, we have over-selling. V&V is expected to be a rubber stamp for the victories of scientific simulation. Bad news isn’t expected or accepted. We act as if we have complete mastery over the science, and it’s just a matter of engineering.

Remember the two benefits of failure. First, if you do fail, you learn what doesn’t work; and second, the failure gives you the opportunity to try a new approach.

—Roger Von Oech

 

Nothing could be further from the truth. The primary achievement of scientific computing has been to unveil new mysteries and limitations. This is the nature of the quest for knowledge. Answering good questions only yields the capacity to ask better, more refined questions. To answer these new questions, we need better computational science, but also vibrant experimental and theoretical science. Our current approach to the field in general is not providing it. The methods and models of yesterday are not sufficient to answer the questions of today or tomorrow. We need to quit perpetuating the illusion that they are.

 

Healthy curiosity is a great key in innovation.

—Ifeanyi Enoch Onuoha

 

The right way to make progress is to realize that sometimes the answer to a question raised by computation lies in experiment or theory. Conversely, a new theoretical question may find answers in experiment or computation. We benefit by having each area push the others. Computation simply adds to the capacity to solve problems; it does not replace the need for the traditional approaches. If we neglect theory and experiment, we are diminished. Ultimately our progress with computation will be harmed (if it hasn’t been already).

 

Let’s celebrate the holidays and give each other the gift of deep open-minded questions that require every tool at our disposal to answer. Let’s stop giving ourselves false self-congratulating achievements that only perpetuate the wrong view of science. Let’s make real progress.

 

Only those who dare to fail greatly can ever achieve greatly.

— Robert Kennedy

 

The Idea That Won’t Die

22 Monday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The real zombie-apocalypse is the pandemic of drama and mediocrity.

― Bryant McGill

A while back I referred to supercomputing as being a zombie (https://williamjrider.wordpress.com/2014/10/06/supercomputing-is-a-zombie/). All my experience in the past few months has led me to reconsider this line of thinking: I was wrong. It is much worse than I had ever anticipated. We are continuing to favor computing hardware over more innovative problem solving despite the end of Moore’s law being upon us. The cost is wasted money and significant under-utilization of computing’s benefits.

This morning I awoke thinking about the same thing again, and realized that I had missed part of the analogy: not only is the supercomputing emphasis brainless, but it eats our brains too, just like a zombie would. It is rotting the thought out of computational science. The present trends in high performance computing are actually offensively belittling toward the degree to which human ingenuity plays a role in progress. The program that funds a lot of what I work on, the ASC program, is twenty years old. It was part of a larger American effort toward science-based stockpile stewardship, envisioned to provide confidence in nuclear weapons when they aren’t being tested.

Orthodoxy means not thinking–not needing to think. Orthodoxy is unconsciousness.

― George Orwell

It is now on the verge of making the name “science-based” ironic. Science is based on evidence, and the current approach to supercomputing is not; it is a faith-based program. The faith is founded primarily on the eminently reasonable prospect that faster, bigger computers bring better solutions in modeling and simulation. The whole concept rests on “convergence,” which implies that the computed solution approaches the “true” solution as the amount of computational effort increases. Computational effort is typically associated with a mesh, or grid, that defines how the real world is chopped up and represented on the computer.
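
Convergence in this sense can be demonstrated in a few lines. Here is a minimal, hedged sketch; the toy calculation (a centered-difference derivative of sin x) is my own choice, picked only to show how an observed order of convergence is measured as the “mesh” is refined.

```python
import numpy as np

# Toy verification exercise: a centered difference approximates a derivative
# with error O(h**2), so halving h should cut the error by ~4x and the
# observed order log2(error ratio) should approach 2. The problem is
# invented purely for illustration.

def derivative_error(h, x0=1.0):
    approx = (np.sin(x0 + h) - np.sin(x0 - h)) / (2.0 * h)
    return abs(approx - np.cos(x0))          # exact derivative is cos(x0)

hs = [0.1 / 2**k for k in range(5)]          # successively halved "meshes"
errs = [derivative_error(h) for h in hs]
for e_coarse, e_fine in zip(errs, errs[1:]):
    print("observed order ~", np.log2(e_coarse / e_fine))
```

Real codes do the same thing with entire solutions on coarse and fine grids; when the observed order fails to match the design order, something in the method, the model or the code is wrong.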

For example, think about weather or climate modeling and how to improve it. If we model the Earth with a grid of 100 kilometers on a side (so about 25 mesh cells would describe New Mexico), we would assume that a grid of 10 kilometers on a side would be better because it now uses 2500 cells for New Mexico. The problem is that a lot else in the model needs to change to take advantage of the finer grid, such as the way clouds, wind, sunlight, plant life and a whole bunch of other things are represented. This is true much more broadly than just weather or climate: almost every model that connects a simulation to reality needs to be significantly reworked as the grid is refined. Right now, insufficient work is being funded to do this. This is a big reason why the benefit of the faster computers is not being realized. There’s more.
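
First, a back-of-the-envelope sketch of what that tenfold refinement costs, under my own simplifying assumptions (a fixed number of vertical levels and an explicit time integrator whose step shrinks with the cell size):

```python
# Rough cost of refining the horizontal grid 10x in each direction, assuming
# (illustrative assumptions, not from the post) fixed vertical resolution
# and a CFL-limited explicit time step proportional to the cell size.
refinement = 10
cell_factor = refinement ** 2        # 100x more grid columns
step_factor = refinement             # ~10x more time steps (dt ~ dx)
print("work grows by roughly", cell_factor * step_factor, "x")   # ~1000x
```

So the 100 kilometer to 10 kilometer step above is roughly a thousandfold increase in raw work, before any of the physics packages are reworked to be valid at the new scale.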

The more pieces used to represent the world, the smaller the pieces are and the greater the effort. This is the drive for bigger computers. It’s not nearly so simple, but simplicity is what Americans do best these days. The premise would hold if we weren’t working toward this end with one hand tied behind our backs (maybe both hands). We have to do more than just make faster computers; we have to think about what we are doing, a lot more. The power of computers needs wisdom that we sorely lack.

There is safety in numbers. And science. Clone your way to being safe. Nobody can protect you like you. And you and you and you.

― Jarod Kintz

Beyond better models, we can do a better job of solving the balance laws that define the models and connect one grid cell with another. We solve these laws with numerical methods that produce errors in the solution. Better methods produce smaller errors, and beyond that, all errors are not equal. Some errors are closer to what is physical, while other errors are decidedly unphysical. Better methods often make errors that are more physical (i.e., numerical diffusion). One of the major problems of modern supercomputing is the lack of effort to improve the solution of balance laws. We need to create methods with smaller errors, and when errors are made, bias them toward physical errors. There’s more.
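
To make the point about error quality concrete, here is a hedged sketch comparing two textbook schemes for the simplest balance law, linear advection. The schemes and every parameter are my illustrative choices, not anything from a production code: first-order upwind errs diffusively (smearing that at least looks physical), while second-order Lax-Wendroff errs dispersively (unphysical oscillations).

```python
import numpy as np

# Linear advection u_t + u_x = 0 of a square wave on a periodic grid.
# Upwind is monotone but diffusive; Lax-Wendroff is second order but
# oscillatory near discontinuities. All parameters are illustrative.

def step(u, cfl, scheme):
    um, up = np.roll(u, 1), np.roll(u, -1)           # periodic neighbors
    if scheme == "upwind":
        return u - cfl * (u - um)
    # Lax-Wendroff
    return u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)

n, cfl, nsteps = 400, 0.5, 400
x = (np.arange(n) + 0.5) / n
u0 = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)     # square wave

for scheme in ("upwind", "lax-wendroff"):
    u = u0.copy()
    for _ in range(nsteps):
        u = step(u, cfl, scheme)
    exact = np.roll(u0, int(nsteps * cfl))           # wave shifted 200 cells
    print(f"{scheme:13s} L1 error: {np.abs(u - exact).mean():.4f}"
          f"  overshoot: {u.max() - 1.0:+.4f}")
```

Upwind typically shows essentially no overshoot but heavy smearing; Lax-Wendroff typically shows a smaller bulk error but a distinctly unphysical overshoot. Modern high-resolution methods exist precisely to get the best of both, and that is the kind of work being starved.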

 

As we use finer meshes the computer must use more data. The amount of work the computer needs to do to solve a problem is not necessarily proportional to the amount of data; sometimes (most of the time) the work grows faster than the data. A typical problem we solve on the computer is the simultaneous solution of linear equations, i.e., linear algebra. The classical way of solving such a problem is Gaussian elimination, where the work scales with the cube of the number of equations. Therefore a thousand times larger problem will require a billion times the work to solve.
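
The arithmetic in that last sentence, spelled out:

```python
# Work scaling for dense Gaussian elimination (~n**3 operations) versus
# the data (n equations). The growth factors are arbitrary illustrations.
for growth in (10, 100, 1000):
    print(f"{growth:>5}x more equations -> {growth**3:>13,}x more work")
# 1000x more equations -> 1,000,000,000x more work: the billion above.
```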

 

For special sorts of linear systems associated with balance laws we can do a lot better. This has been a major part of the advance of computing, and the best we can do is for the amount of work to scale exactly like the number of equations (i.e., linearly). As the number of equations grows large, the difference between cubic and linear growth is astounding. These linear algorithms were enabled by multigrid, or multilevel, methods invented by Achi Brandt almost 40 years ago, which came into widespread use 25 or 30 years ago.
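
Multigrid’s linear scaling is easiest to appreciate in code. Below is a minimal, hedged sketch of a V-cycle for the one-dimensional Poisson problem -u'' = f with zero boundary values; the smoother, the transfer operators and every parameter are textbook choices of mine, not anything from Brandt’s papers or a production solver.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1D Poisson operator (-u'') with zero Dirichlet BCs."""
    up = np.pad(u, 1)                       # zero ghost values at boundaries
    return (2 * u - up[:-2] - up[2:]) / h**2

def v_cycle(u, f, h, n_smooth=3):
    n = u.size
    if n == 1:                              # coarsest grid: solve exactly
        return f * h**2 / 2
    for _ in range(n_smooth):               # pre-smooth with weighted Jacobi
        u = u + (2 / 3) * (h**2 / 2) * (f - apply_A(u, h))
    r = f - apply_A(u, h)                   # fine-grid residual
    rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]  # restrict
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)     # coarse error
    e = np.zeros(n)                         # prolong by linear interpolation
    e[1::2] = ec
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    u = u + e                               # coarse-grid correction
    for _ in range(n_smooth):               # post-smooth
        u = u + (2 / 3) * (h**2 / 2) * (f - apply_A(u, h))
    return u

n = 2**10 - 1                               # interior points, h = 1/(n+1)
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)            # exact solution: sin(pi*x)
u = np.zeros(n)
for cycle in range(8):                      # each V-cycle costs O(n) work
    u = v_cycle(u, f, h)
    # error vs exact solution; stalls at the O(h**2) discretization error
    print(cycle, np.abs(u - np.sin(np.pi * x)).max())
```

Each V-cycle is a constant number of sweeps over grids that shrink geometrically, so the total work is proportional to n. The point of the paragraph above is that this decades-old result still has no successor.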

 

The desire for safety stands against every great and noble enterprise.

― Tacitus

 

We can’t really do any better today. The efforts of the intervening three decades of supercomputing have focused on making multilevel methods work on modern parallel computers, with no algorithmic improvement. Perhaps linear is the best that can be done, although I doubt this. Work with big data is spurring the development of methods that scale at less than linear. Perhaps these ideas can improve on multigrid’s performance. The key would be to allow inventiveness to flourish. In addition, risky and speculative work would need to be encouraged instead of the safe and dull work of porting methods to new computers.

 

As I’ve said before, risk avoidance is killing research in many fields, and scientific computing is no different (https://williamjrider.wordpress.com/2014/12/05/is-risk-aversion-killing-innovation/, https://williamjrider.wordpress.com/2014/03/03/we-only-fund-low-risk-research-today/, https://williamjrider.wordpress.com/2014/12/12/whats-your-backup-plan/). One sign of risk aversion is the inability to start new computer codes, the implementations of the algorithms, methods and models. We continue to work and rework old codes because of the capability they offer compared to a new code. We see these old codes as investments that we must continue to remodel. It’s time to tear them down and put up fresh structures with new ideas instead of continually putting a fresh coat of paint on the tired old ones.

The potential for good I’ve touched on here is the tip of the iceberg. Algorithms and models can add vastly more value to computational science than faster machines. The only issue is that we aren’t brave enough to take advantage of the opportunity. That is the saddest thing about all this.

 

Writing is thinking. To write well is to think clearly. That’s why it’s so hard.

― David McCullough

 

 

Truth, Justice and the American Way

21 Sunday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

There are no facts, only interpretations.

― Friedrich Nietzsche

If you haven’t figured it out by now, I’m what Americans call a liberal, or more properly a progressive. In Europe I’d be a centrist. In some parts of the USA they’d call me a communist because they don’t have the slightest idea of what communism is. The United States has always had an anti-intellectual strain, and it has reasserted itself with vigor. Ironically, the information age has helped to pave the way for this. The key is that we no longer know what the truth is. No one does, on either the left or the right of the spectrum.

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’

― Isaac Asimov


Like most Americans I speak primarily with people who are like me. In my case it’s those educated in a scientific or technical field, working at a lab or university, doing research. I don’t have a lot of contact with people of a different background. I do have a handful of conservative friends, and the difference in their worldview is as understandable as it is stunning. What is really breathtaking is the difference in what we think is true. This is attributable to where our information comes from.

In large part the Internet has allowed everyone to draw from sources of information that suit their preconceptions. To a large extent we are fed information that drives us further to one side or the other. The news media is now slanted toward particular points of view. In other words, everyone, left and right, chooses to be bombarded by propaganda that not simply fits, but reinforces and strengthens, his or her starting point. Of course, this change is only true for those choosing to be informed at all. As the rancor and vitriol have increased with the polarization, many people have simply tuned out. They use their access to information differently, to simply escape from “reality,” whatever the hell that is. This is reflected in the horrible voting habits of the public.

The traditional news media is dominated by corporate interests, with Fox News leading the way, but the old big three, ABC, CBS and NBC, are no different. MSNBC is viewed as the liberal vanguard, but again it’s no different either. Once this dynamic was primarily associated with newspapers; as they die it has been taken over by TV, and as TV begins to die, by the Internet. Big money is running the show in every case, finding a niche that provides it profit and power.

When we change the way we communicate, we change society.

― Clay Shirky

As such, big money’s interests are being represented in the media’s message. Sometimes it’s semi-innocuous, such as advertising for a TV show or movie. The dangerous aspect is the continuous avoidance of issues that the big moneyed interests don’t want portrayed, discussed or explored. This lack of coverage of a class of issues associated with money and class is poisoning democracy, tilting the discussion, and ultimately serving only the short-term needs of the businesses themselves.

Tragedy of the Commons: while each person can agree that all would benefit from common restraint, the incentives of the individuals are arrayed against that outcome.

― Clay Shirky

A more innocuous aspect is the slanting of the political dynamic, which is happening pervasively via Fox News and its use of a message firmly associated with a single political agenda. In the UK it’s Sky that does the same thing. Across the board the impact has been to turn up the heat on partisan bickering and diminish the ability of the democratic process to function. Part of the problem becomes clear if you make the mistake of talking about such things: people no longer operate with the same facts. Each side of the debate simply cherry-picks facts to suit its aims and avoids facts that undermine its chosen message. As a result no one really gets the full story, and the truth is always slanted one way or another. The ability to compromise and move forward on vexing problems has become impossible.

Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing ever happened.

—Winston Churchill

Ultimately the situation today favors those who have power, and today power stems from wealth. Money is paying for information, and this information is paving the way toward an even greater accumulation of power. The Internet is not providing the democratization of knowledge, but rather giving those already in power unparalleled capability to issue effective propaganda. Those in power are assisted by a relatively weak government whose ability to counter their stranglehold on society is stymied by inaction.

Integrity is telling myself the truth. And honesty is telling the truth to other people.

― Spencer Johnson


Where the ideal of libertarian politics is the empowerment of the individual, the reality of today is an Orwellian vision. The weakness of our governance is simply the fuel to maintain and enhance the power structure of inequality. The only individuals who are empowered are the wealthy and powerful; the rank and file of the public is in their thrall. The Internet has joined religion as the opiate of the masses. Conflict and disagreement are the lifeblood of this stasis, and the domination of partisan propaganda is its soulless center.


It’s not clear that even Orwell could have foreseen such a screwed-up situation.


Truth is life’s most precious commodity.

— Edwin Louis Cole


The Power of the Individual

19 Friday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The first duty of a man is to think for himself

― José Martí

One of the things that continually bothers me most about the changes where I work is the sunset of the individual’s importance. The single person is gradually losing power to the nameless and increasingly faceless management. Increasingly everyone is viewed as a commodity, and each of us is interchangeable. Another scientist or engineer could be slotted into my place without any loss of function. My innate knowledge, experience, creativity and passion are each worthless when compared to the financial imperatives of my employer. We are encouraged, if not commanded, to be obedient sheep. Along the way we have lost the capability to foster deep, sustained careers; the current regime encourages the opposite. The reason given for sacrificing this aspect of work is financial: no one can pay for the cost of the social construct necessary to enable it. I think that is BS; the real reason is power and control.

Conformity is the jailer of freedom and the enemy of growth.

― John F. Kennedy

What gets lost in this change? One thing is for sure: the quality of the work, the quality of the job, and the bond between worker and employer. Strong support for the development and sustainability of the individual was a cornerstone of the old social contract. Loyal, high-quality work with commitment and expertise was given to the employer in return. This model was the cornerstone of the National Laboratory system. The Nation benefited greatly from it, both in terms of security and national defense and in the scientific and engineering greatness it engendered.

The trick to forgetting the big picture is to look at everything close up.

― Chuck Palahniuk

For some reason we have collectively decided that this is too expensive to sustain. We can’t pay for retirements, and we don’t encourage careers or deep, sustained technical expertise. It still happens, but only by the will of individuals fighting against the tide of conformity. This spirit was present at all the National Labs, but never stronger than at Los Alamos. The legendary work of the scientists during the Manhattan Project provided the catalyst to extend this model more broadly. Their achievements fueled its continued support into the late ’70s. Then it was deemed too expensive to maintain. The management of the Labs is choking this culture to death; if it isn’t already gone, it soon will be.

Propaganda is what gives us the freedom to do as we are told.

― Markus W. Lunner

A deep part of its power was the enabling of individual achievement and independent thought. Perhaps more than the cost of the social contract, the Nation has allowed the forces of conformity, lack of trust and fear of intellect to undermine this model. The financial costs have escalated largely due to systematic mismanagement and the absence of political courage and leadership, yet cost has been the excuse for the changes. While the details are different at the Labs, the overall forces go hand-in-hand with the destruction of the middle class, which was offered a similar social contract in the postwar years. That contract has been replaced by cheap labor or outsourcing, always with cost as the excuse.

The opposite of courage in our society is not cowardice, it’s conformity.

― Rollo May

The question is: what is the value of what has been lost? What price do we put on that?

Paranoia is just having the right information.

― William S. Burroughs


What if he’s right?

16 Tuesday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment


A couple of weeks ago I saw a talk about “Innovation without Objectives.” Ken Stanley, a professor at the University of Central Florida, gave the talk, in which he proposed that novelty is a better spur to innovation than objectives. He went further to say that objectives actually undermine the achievement of innovation. It was a fascinating and provocative idea. I knew it would be the best thing I did all day (it was), and it keeps coming back to my thoughts. I don’t think it was the complete answer to innovation, but Professor Stanley was onto a key aspect of what is needed to innovate.

Great minds discuss ideas. Average minds discuss events. Small minds discuss people.

― Henry Thomas Buckle

What if he was right? What if the whole objective basis for our planning and customer focus is stifling the very thing it is supposed to be enabling? Even if the proof were unassailable, I don’t think it would mean anything. We aren’t capable of instituting the sort of system that would improve matters. As a result nothing would change.

In other words, the system we have today is harmful. We are committed to continuing down the current path, results be damned. We have to plan and have milestones, just as business theory tells us, with no failure being accepted. In fact we seem to act as if failure can simply be managed away. Instead of recognizing that failure is essential to progress, and actually healthy, we attempt to remove it from the mix. Of course, failure is especially unlikely if you don’t try to do anything difficult and choose your objectives with the sort of mediocrity accepted today, because the lack of failure is greeted as success.

 Cowardice is the most terrible of vices.

― Mikhail Bulgakov

Americans have adopted a fearful approach to managing research, one that stems from a fundamental lack of trust working hand-in-hand with fear. Two things stand out in today’s social order: a systematic fear of change and a lack of trust in anyone. It’s entirely arguable that dishonesty has become the accepted and expected social norm. Being honest is now a good way to get in trouble.

Creativity takes courage.

― Henri Matisse

The courage that described Americans during the last century has been replaced by a fear of any change. Worries about a variety of risks have spurred Americans to accept a host of horrible decreases in freedom for a modest-to-negligible increase in safety. The costs to society have been massive, including the gutting of science and research vitality. Of course, fear is the surest way to pave the way for the very outcomes you sought to avoid. We eschew risk, attempting to manage the smallest detail as if that might matter. This is combined with a telling lack of trust, one that should prompt some deep self-reflection: people don’t trust because they are not trustworthy themselves, and they project that character onto others. The combination of risk avoidance and lack of trust produces a toxic recipe for decline, the opposite of an environment for innovation.

The worst enemy to creativity is self-doubt.

― Sylvia Plath

We collectively believe that running everything like a business is the path to success. This means applying business management principles to areas they have no business being applied to. It means applying principles that have been bad for business itself; their application has destroyed entire industries in the name of short-term gains for a small number of shareholders. The problem is that these “principles” have been extremely good for a small cadre of people who happen to be in power. Despite the obvious damage they do, their wide application makes perfect sense to the chief beneficiaries. It is both an utterly reasonable and a completely depressing conclusion.

The urge to destroy is also a creative urge.

― Pablo Picasso

Returning to the theme of how best to spur innovation, and its antithetical relation to objectives, I become a bit annoyed. I can’t help but believe that before we can build the conditions for innovation we must destroy the false belief that business principles are the way to manage everything. This probably means a fundamental shift in which business principles we favor. We should trade the dictum of shareholder benefit for a broader social contract that benefits the company’s long-term health, its employees and its communities as well. Additionally, we need to recover our courage to take risks and let failure happen. We need to learn to trust each other again and stop trying to manage everything.

The chief enemy of creativity is good sense.

― Pablo Picasso


Write to look smart, or write to be understood?

15 Monday Dec 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment


That must be wonderful; I have no idea of what it means.

—Albert Camus

I’m sure that most people would take one look at the sort of things I read professionally and collectively gasp. The technical papers I read are usually greeted by questions like “do you really understand that?” It’s usually a private thing, but occasionally, on a plane ride, I’ll give a variety of responses depending on the paper, from “yeah, it actually makes sense” to “not really, this paper is terrible, but I think the work might be important.”

Most of it looks like impenetrable gobbledy-gook to all but the most trained eye. Some of it still looks like impenetrable gobbledy-gook to a trained eye. Even within this highly technical literature there are islands of complete confusion for me (turbulence theory, continuum mechanics and finite element mathematics are good examples). For the most part, those working in these fields are completely to blame for this state of affairs. Some subfields seem to be conspicuously attempting NOT to communicate with anyone outside their club, anyone who hasn’t been given their specific secret decoder ring.

If you can’t explain it simply, you don’t understand it well enough.

― Albert Einstein

Why do some fields write with clarity while others obfuscate and make their work as opaque as possible?

Write to be understood, write to teach. Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.

― Marie Curie

In some cases the writing is done in such a dense, coded fashion that one can only conclude the authors’ intent is to make themselves look smarter than you. They seem smarter than me because I can’t understand anything they say. I have grown to resist this notion and drawn another conclusion: they encapsulate their work in a notational din that hides their own lack of understanding. If they really understood what they were talking about, they could explain it in terms others could begin to comprehend. Such dense and impenetrable writing only masks the absence of any deeper understanding. If they really understood what they were doing, they could make it simple and walk the reader up to the complexity in a constructive manner.

One should use common words to say uncommon things

― Arthur Schopenhauer

Often the authors who take the confusing and technically opaque approach to writing are really only interested in communicating with a small cadre of peers. There are many oblique subfields in the scientific world where communities of several dozen people write only for one another. The same small clique reviews, reads and cites the work. Over time the writing climbs ever deeper into the rabbit hole and becomes increasingly unapproachable to anyone else. This tendency should mark the death knell of an area, but instead the current system seems to do a great deal to encourage the pathology.

Writing is thinking. To write well is to think clearly. That’s why it’s so hard.

― David McCullough

Other areas seem so devoid of the human element of science that the work has no contextual basis. Science is an intrinsically human endeavor, yet scientists often work to divorce humanity from the work. A great deal of mathematics works this way, and it leads to a gap in the understanding of the flow of ideas. The source and inspiration for key ideas and work is usually missing from the writing, which leads to a lack of comprehension of the creative process. A foolhardy commitment to including only the technical detail loses the history. Part and parcel of this problem are horrific literature reviews; in some fields the number of citations given is appallingly small. The author ends up providing no map for the uninitiated reader to figure out what they are talking about. Again, this works both to hide information and context and to make the author seem smarter than they really are.

Just because we don’t understand doesn’t mean that the explanation doesn’t exist.

― Madeleine L’Engle


I was buoyed to read about efforts to improve scientists’ communication with the outside world (http://www.theatlantic.com/education/archive/2014/12/how-scientists-are-learning-to-write/383685/), but scientists could also stand some work on learning to communicate with other scientists. If you can’t write to be understood by anyone outside your tiny subfield, it should be viewed as a problem. At the very least, other scientists should be able to fathom something about what you’re doing. If they can’t, the common citizen doesn’t stand a chance. This compounds the sort of deep political problems science has today. It doesn’t cause them, but it makes them worse.

 In the land of Gibberish, the man who makes sense, the man who speaks clearly, clearly speaks nonsense.

― Jarod Kintz
