
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: February 2015

Know where the value in work resides

27 Friday Feb 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

We all die. The goal isn’t to live forever, the goal is to create something that will.

― Chuck Palahniuk

When we achieve a modicum of success professionally, it usually stems from a large degree of expertise or achievement in a fairly narrow realm. At the same time this expertise or achievement comes at a price; it was gained through a great deal of focus, luck and specialization. Over time this erodes your perspective on the importance of your profession in the broader world. It is often difficult to understand why others can’t see the intrinsic value in what you’re doing. There is a good reason for this: you have probably lost sight of why what you do is valuable.

Ultimately, the value of an activity is measured in terms of its impact on the broader world. These days economic activity is often used to imply value fairly directly. This isn’t perfect by any means, but it is useful nonetheless. For some areas of achievement this can be a jarring realization, but a vital one. Many monumental achievements actually have distinctly little value in reality, or their value arrives long after the discovery. In many cases the discoverer lacks the perspective or skill to translate the work into practical value. Some of these achievements are nonetheless necessary for things of greater value. Striking the right balance in these cases is quite difficult, and rarely, if ever, managed.

It’s always important to keep the most important things in mind, and along with quality, the value of the work is always a top priority. In computing, value resides where computers change how reality is engaged. Computers’ original uses were confined to business, science and engineering. Historically, computers were mostly the purview of business operations such as accounting, payroll and personnel management. They were important, but not very important. People could easily go through life without ever encountering a computer, and computing’s impact on them was indirect.

As computing was democratized via the personal computer, the decentralization of access to computing power allowed it to grow to an unprecedented scale, but an even greater transformation lay ahead. Even this change made an enormous impact because people almost invariably had direct contact with computers. The functions that were once centralized were now at the fingertips of the masses. At the same time the scope of computing’s impact on people’s lives began to grow; more and more of people’s daily activities were being modified by what computing did. This coincided with the reign of Moore’s law, with its massive growth in computing power and corresponding decrease in cost. Now computing has become the dominant force in the world’s economy.

Why? It wasn’t Moore’s law, although that helped. The reason was simply that computing began to matter to everyone in a deep, visceral way.

Nothing is more damaging to a new truth than an old error.

— Johann Wolfgang von Goethe

The combination of the Internet with telecommunications and super-portable personal computers allowed computing to obtain massive value in people’s lives. The combination of ubiquity and applicability to day-to-day life made computing valuable. The value came from defining a set of applications that impact people’s lives directly and are always within arm’s reach. Once these computers became the principal vehicle of communication and the way to get directions, find a place to eat, catch up with old friends, and answer almost any question at will, the money started to flow. The key to the explosion of value wasn’t the way the applications were written, coded or run on computers; it was their impact on our lives. The way the applications work, their implementation in computer code, and the computers themselves just needed to be adequate. Their characteristics had very little to do with the success.

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

― Richard P. Feynman

Scientific computing is no different; the true value lies in its impact on reality. How can it impact our lives, the products we have, or the decisions we make? The impact of climate modeling is found in its influence on policy, politics and various economic factors. Computational fluid dynamics can impact a wide range of products through better engineering. Other computer simulation and modeling disciplines can inform military choices, or provide decision makers with ideas about the consequences of actions. In every case the ability of these things to influence reality is predicated on a model of reality. If the model is flawed, the advice is flawed. If the model is good, the advice is good. No amount of algorithmic efficiency, software professionalism or raw computer power can save a bad model from itself. When a model is good, the solution algorithms and methods, expressed in computer code and running on computers, enable its outcomes. Each of these activities needs to be competently and professionally executed. Each of these activities adds value, but without the path to reality and utility that value is at risk.

Despite this bulletproof assertion about the core of value in scientific computing, the amount of effort focused on improving modeling is scant. Our current scientific computing program is predicated on the proposition that the modeling is already good enough. It is not. If the scientific process were working, our models would be improving from feedback. Instead they are stagnant and the entire enterprise is focused almost exclusively on computer hardware. The false proposition is that the computers simply need to get faster and reality will yield to modeling and simulation.

So we have a national program that is focused on the least valuable thing in the process and ignores the most valuable piece. What is the likely outcome? Failure, or worse than that, abject failure. The most stunning thing about the entire program is that its focus is absolutely orthogonal to the value of the activities. Software is the next largest focus after hardware. Methods and algorithms are the next highest focus. If one breaks this area of work into its two pieces, new breakthroughs and computational implementation work, the trend continues. The less valuable implementation work has the lion’s share of the focus, while the groundbreaking kind of algorithmic work is virtually absent. Finally, modeling is almost entirely absent. No wonder the application case for exascale computing is so pathetically lacking.

It is sometimes an appropriate response to reality to go insane.

― Philip K. Dick

Alas, we are going down this road whether it is a good idea or not. Ultimately this is a complete failure of the scientific leadership of our nation. No one has taken the time or effort to think this shit through. As a result the program will not be worth a shit. You’ve been warned.

The difference between genius and stupidity is; genius has its limits.

― Alexandre Dumas-fils

Software is More Than An Implementation or Investment

20 Friday Feb 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

– Martin Fowler

I don’t think software gets the support or respect it deserves, particularly in scientific computing. It is simply too important to treat it the way we do. It should be regarded as an essential professional contribution and supported as such. Software shouldn’t be a one-time investment either; it requires upkeep and constant rebuilding to stay healthy. Too often we pay for the first version of the code and then do everything else on the cheap. The code decays and is ultimately overcome by technical debt. The final danger with code is the loss of the knowledge base behind the code itself. Too much scientific software is “magic” code that no one understands. If no one understands the code, the code is probably dangerous to use.

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.

– Rich Cook

Recently I’ve taken to harping on deconstructing the value proposition for scientific computing. The connection to work of importance and value is essential to understand, and the lack of such understanding explains why our current trajectory is so problematic. Just to reiterate, the value of computing, scientific or otherwise, is found in the real world. In scientific computing the real world is studied through models that are most often differential equations. We then solve these models using algorithms or methods. The models, as interpreted by their solution methods or algorithms, are expressed in computer code, which in turn runs on a computer.

Good code is its own best documentation. As you’re about to add a comment, ask yourself, ‘How can I improve the code so that this comment isn’t needed?’

– Steve McConnell 

Each piece of this stream of activities is necessary and must be competently executed, but the pieces are not equal. For example, if the model is poor, no method can make up for this. No computer code can rescue it, and no amount of computer power can solve it in a way that is useful. On the other hand, for some models or algorithms, no computer exists that is fast enough to solve the problem. The question is: where are the problems today? Do we lack enough computer power to solve the current models? Or are the current models flawed, and should the emphasis be on improving them? In my opinion the key problems are caused by inadequate models first, and inefficient algorithms and methods second. Software, while important, is the third most important aspect, and the computers themselves are the least important aspect of scientific computing.

With that said, we do have significant issues with software: its quality, its engineering and its upkeep. Scientific software simply isn’t developed with nearly enough professionalism. Too much effort is placed on implementing algorithms compared to the effort spent keeping the software up to date. Often software is written, but not maintained. Such maintenance is akin to the upkeep of roads and bridges; often the money only exists to patch the existing road rather than redesign and rebuild it to meet current needs. In this way technical debt explodes and often overwhelms the utility of the computer implementation. It simply becomes legacy code. The code is passed down from generation to generation and ported to new computers. Performance suffers, understanding suffers and ultimately quality dies. In many places the entire software enterprise amounts to writing the code once, then merely maintaining it and porting it to generation after generation of computers.

C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg.

– Bjarne Stroustrup

More importantly, software often outlives the people responsible for the intellectual capital represented in it. A real danger is the loss of expertise in what the software is actually doing. There is a specific and real danger in using software that isn’t understood. Many times the software is used as a library and not explicitly understood by the user. The software is treated as a storehouse of ideas, but if those ideas are not fully understood there is danger. It is important that the ideas in software be alive and fully comprehended.

Perhaps the biggest problem we have is the insistence that the most important issue is the hardware: our computers simply aren’t fast enough. This is an overly simplistic view and ultimately saps energy from solving more important problems, software being among them. Software isn’t the weakest part of the chain of value, but it is too weak for the health of the field. In total, the present national focus in computing is almost completely opposite to the value of the activities. The least valuable thing gets the most attention, and the most valuable thing gets the least. How things got so far out of whack is another story.

 People who are really serious about software should make their own hardware.

― Alan Kay

Not All Algorithm Research is Created Equal

14 Saturday Feb 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

In algorithms, as in life, persistence usually pays off.

― Steven S. Skiena

Over the past year I’ve opined that algorithm (method) research is under-supported and under-appreciated as a source of progress in computing. I’m not going to backtrack one single inch on this. We are not putting enough effort into using computers better, and we are putting too much effort into building bigger, less useful and very hard to use computers. Without the smarts to use these machines wisely this effort will end up being a massive misapplication of resources.

The problem is that the issues with algorithm research are even worse than this. The algorithm research we are supporting is mostly misdirected in a similar way; it turns out we are focused on the kind of algorithm research that does even more to damage our prospects for success. In other words, even within the spectrum of algorithm research, not all work has equal impact.

The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

— Nick Trefethen

There are fundamentally different flavors of algorithm research with intrinsically different value to the capability of computing. One flavor involves the development of new algorithms with improved properties compared to existing algorithms. The most impactful algorithmic research focuses on solving the unsolved problem. This research is groundbreaking and almost limitless in impact. Whole new fields of work can erupt from these discoveries. Not surprisingly, this sort of research is the most poorly supported. Despite its ability to have enormous and far-reaching impact, this research is quite risky and prone to failure.

If failure is not an option, then neither is success.

― Seth Godin

It is the epitome of the risk-reward dichotomy. If you want a big reward, you need to take a big risk, or really lots of big risks. We as a society completely suck at taking risks. Algorithm research is just one of innumerable examples. Today we don’t do risk and we don’t do long-term. Today we do low-risk and short-term payoff.

Redesigning your application to run multithreaded on a multicore machine is a little like learning to swim by jumping into the deep end.

—Herb Sutter

A second and kindred version of this research is the development of improved solution methods. These improvements can provide a lower cost of solution through better scaling of the operation count, or better accuracy. These innovations can provide new vistas for computing and enable the solution of new problems by virtue of efficiency. This sort of research can be groundbreaking when it enables something that was previously out of reach due to inefficiency.

This form of algorithm research has been a greater boon to the efficiency of computing than Moore’s law. A sterling example comes from numerical linear algebra, where costly methods have been replaced by ones that put the simultaneous solution of billions of equations well within reach of existing computers. Another good example is the breakthroughs of the 1970s by Jay Boris and Bram Van Leer, whose discretization methods allowed an important class of problems to be solved effectively. This powered a massive explosion in the capacity of computational fluid dynamics (CFD) to produce meaningful results. Without their algorithmic advances CFD might still be ineffective for most engineering and science problems.
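To make the linear algebra point concrete, here is a minimal sketch, not taken from the post, contrasting a dense direct solve with an iterative Krylov solver on the same sparse system. The specific matrix (a 1-D Poisson operator) and problem size are illustrative choices, not anything the post specifies.

```python
# Minimal sketch (not from the post): a dense direct solve scales roughly as
# O(n^3) in work and O(n^2) in memory, while an iterative method such as
# conjugate gradient only needs matrix-vector products with the sparse operator.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000
# Tridiagonal 1-D Poisson operator: 2 on the diagonal, -1 on the off-diagonals.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Dense direct solve: fine at this size, hopeless for billions of unknowns.
x_direct = np.linalg.solve(A.toarray(), b)

# Iterative solve: storage and work stay proportional to the nonzeros.
x_cg, info = cg(A, b)

print("direct residual:", np.linalg.norm(A @ x_direct - b))
print("cg residual:    ", np.linalg.norm(A @ x_cg - b), "(info == 0 means converged)")
```

Swapping an O(n^3) factorization for an iterative method of this kind, usually with a good preconditioner, is the sort of algorithmic gain the post has in mind when it says such work has rivaled or exceeded hardware improvements.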

The third kind of algorithm research is focused on the computational implementation of existing algorithms. Typically these days this involves making an algorithm work on parallel computers, and more and more it focuses on GPU implementations. This research certainly adds value and improves efficiency, but its impact pales in comparison to the other kinds of research. Not that it isn’t important or useful; it simply doesn’t carry the same “bang for the buck” as the other two.

In the long run, our large-scale computations must inevitably be carried out in parallel.

—Nick Trefethen

Care to guess where we’ve been focusing for the past 25 years?

The last kind of research gets the lion’s share of the attention. One key reason for this focus is the relatively low-risk nature of implementation research. It needs to be done and generally it succeeds. Progress is almost guaranteed because of the non-conceptual nature of the work. This doesn’t imply that it isn’t hard, or that it requires less expertise. It just can’t compete with the level of impact of the more fundamental work. The change in computing due to the demise of Moore’s law has brought parallelism, and we need to make stuff work on these computers.

Both kinds of work are necessary and valuable to conduct, but the proper balance between them is a necessity. The lack of tolerance for risk is one of the key factors contributing to this entire problem. Low-risk attitudes contribute to the dominance of the focus on computing hardware and the appetite for the continued reign of Moore’s law. It also compounds the dearth of focus on more fundamental and impactful algorithm research. We are buying massively parallel computers, and our codes need to run on them; therefore the algorithms that comprise our codes need to work on these computers. QED.

The problem with this point of view is its absolute disconnect from the true value of computing. Computing’s true value comes from the ability to solve models of reality. We solve those models with algorithms (or methods). These algorithms are then represented in code for the computer to understand. Then we run them on a computer. The computer is the most distant thing from the value of computing (ironic, but true). The models are the most important thing, followed by how we solve the models using methods and algorithms.

Our current view and the national “exascale” initiative represent a horribly distorted and simplistic view of how scientific value is derived from computing, and as such make for a poor investment strategy for the future. The computer, the thing at the greatest distance from the value, is the focus of the program. In fact, the emphasis of the national program sits at the opposite end of the spectrum from the value.

I only hope we get some better leadership before this simple-minded mentality savages our future.

Extraordinary benefits also accrue to the tiny majority with the guts to quit early and refocus their efforts on something new.

― Seth Godin


Why is Scientific Computing Still in the Mainframe Era?

12 Thursday Feb 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Conformity is the jailer of freedom and the enemy of growth.

― John F. Kennedy

In watching the ongoing discussions regarding the national exascale initiative, many observations can be made. I happen to think the program is woefully out of balance and focused on the wrong side of the value proposition for computing. In a nutshell, it is stuck in the past.

All the heroes of tomorrow are the heretics of today.

― E.Y. Harburg

The program is obsessively focused on hardware and the software most closely related to hardware. As the software gets closer to the application, the focus starts to drift. As the application itself and the modeling are approached, the focus is non-existent. It is simply assumed that the modeling just needs a really huge computer, the waters will magically part, and the path to the promised land of predictive simulation will appear. Science doesn’t work this way, or more correctly, well-functioning science doesn’t work like this. Science works with a push-pull relationship between theory, experiment and tools. Sometimes theory is pushing experiments to catch up. Sometimes tools are finding new things for theory to answer. Computing is such a tool, but it isn’t being allowed to push theory; more properly, theory should be changing to accommodate what the tools show us.

The opposite of courage in our society is not cowardice, it’s conformity.

― Rollo May

I’ve written a lot about all of these problems.

One of the other observations I haven’t written about is how antiquated this entire point of view is. The supercomputers are run in a manner consistent with the old-fashioned “mainframes” that IBM used to produce. Mainframes have faded from prominence, but still exist. They are no longer the central part of computing, and this change has been good for everyone. Yet the overly corporate and centralized computing model associated with mainframes is still in place in supercomputing. It is orthogonal to the nature of computing in most places. The decentralized computing associated with phones, laptops, tablets and the cloud democratized computing. That democratization led the way to everyone using computing, often without realizing they were. It was one of the keys to value and the explosion of information, data and computing. It is completely opposite to supercomputing.

 The conventional view serves to protect us from the painful job of thinking.

― John Kenneth Galbraith

The question is whether there is some way to learn from everyone else. How can this centralized supercomputing be broken down in a way that helps the productivity of the scientist? One of the things that happened when mainframes went away was an explosion of productivity. Centralized computing is quite unproductive and constrained. Computing today is the opposite: unconstrained and completely productive. It is integrated into the very fabric of our lives. Work and play are integrated too. Everything happens all the time at the same time. Instead of maintaining the old-fashioned model we should be looking into harvesting the best of modern computing to overthrow the old model.

Mainframes represent the old way and conformity; freedom from them represents the new way. For supercomputing, that freedom is the path to success.

Great people have one thing in common: they do not conform.

― P.K. Shaw

“No amount of genius can overcome a preoccupation with detail”

06 Friday Feb 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

No amount of genius can overcome a preoccupation with detail

—Levy’s Eighth Law

I’ve been inundated with thinking about exascale computing this week: programming models, code, computer languages, libraries, and massively parallel implementations of algorithms. At the end of all the talk about advanced computing, I’m left thinking that something really key is being ignored moving forward. We are already drowning in data, whether we are talking about the Internet in general, the coming “Internet of things” or the scientific use of computing. The future is going to be much worse and we are already overwhelmed. If we try to deal with every single detail, we are destined to fail.

How can we move forward and keep our sanity?

Of course reality is actually much simpler, or at least the part we care about. In almost every decision of any importance, the details fade away and we are left with only an important core of significance. This is a key concept moving forward in computing: sparsity. Not everything matters, and the important thing is discovering how to unmask this kernel of essential information. If we can’t, the data deluge will drown us.

Fortunately some concepts have emerged recently that hold promise. The whole area of compressed sensing is structured around the capacity to unveil the important signal in all the noise and represent it compactly and optimally. This class of ideas will be important in managing the tsunami of data that awaits us.
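As a flavor of what compressed sensing buys, here is a minimal sketch, not from the post, that recovers a sparse signal from far fewer random measurements than its length using a hand-rolled orthogonal matching pursuit. The signal length, measurement count and sparsity level are arbitrary illustrative choices.

```python
# Minimal sketch (illustrative): recover a k-sparse signal of length n from
# only m << n random linear measurements via orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                        # signal length, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                              # the m measurements we keep

# Orthogonal matching pursuit: greedily pick the column most correlated with
# the residual, then re-fit by least squares on the selected columns.
residual, selected = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    if j not in selected:
        selected.append(j)
    coef, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
    residual = y - A[:, selected] @ coef

x_hat = np.zeros(n)
x_hat[selected] = coef
print("recovered support matches:", set(selected) == set(support.tolist()))
print("max reconstruction error: ", np.abs(x_hat - x_true).max())
```

The same machinery is what makes it plausible to keep only a compact representation of a huge data set rather than every detail.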

The future will give us more data than we can ever wade through, and we need principled ways to manage our view of it. In many cases we won’t be able to get all of the data off the computer, only a part of it. If our code or calculation crashes we won’t be able to restart from exactly the same state. We are going to have to let go of the details. This should be easier because the reality is that they don’t matter, or more properly, the vast majority of the details don’t. The trick is holding on to the details that do matter.

Why haven’t models of reality changed more?

02 Monday Feb 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

Tradition becomes our security, and when the mind is secure it is in decay.

― Jiddu Krishnamurti

Over the past couple of posts I’ve opined that the essence of value in computing is best found in the real world. This is true for scientific computing as it is for the broader world. The ability of computers to impact reality more completely has powered an incredible rise in the value of computing and transformed the world. Despite this seemingly obvious proposition, in recent years and with current plans, the scientific community has focused its efforts on the part of computing most distant from reality: the computing hardware. The bridge from the real world to the artificial reality of the simulation is our models of reality.

Tradition is a fragile thing in a culture built entirely on the memories of the elders.

― Alice Albinia

In science these models are often cast in the esoteric form of differential equations to be solved by exotic methods and algorithms. Ultimately, these methods and algorithms must be expressed as computer code before the computers can be turned loose on their approximate solution. These models are relics. The whole enterprise of describing the real world through these models arose from the efforts of intellectual giants starting with Newton and continuing with Leibniz, Euler, and a host of brilliant 17th, 18th and 19th century scientists. Eventually, if not almost immediately, the models became virtually impossible to solve via available (analytical) methods except for a handful of special cases.

There is no creation without tradition; the ‘new’ is an inflection on a preceding form; novelty is always a variation on the past.

― Carlos Fuentes

When computing came into use in the middle of the 20th century, some of these limitations could be lifted. As computing matured, fewer and fewer limitations remained, and the models of the past 300 years became accessible to solution, albeit through approximate means. The success has been stunning: the combination of intellectual labor on methods and algorithms, computer code, and massive gains in hardware capability has transformed our view of these models. Along the way new phenomena have been recognized, including chaos in dynamical systems, opening doors to understanding the world. Despite the progress, I believe we have much more to achieve.

What might be holding us back? The models are not evolving and advancing in reaction to the access to solution via computing.

 The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes

Today we are largely holding to the models of reality developed before computing became available as a means of solution. The availability of solutions has not yielded a balanced re-examination of the models themselves. These models are artifacts of an age when the nature of solution was radically different. One might wonder what sorts of modifications of the existing paradigm would be in order should the means of solution be factored in. For example, the notion of deterministic, unique solutions to the governing equations is pervasive, yet reality clearly shows this to be wrong. Real outcomes are always a little bit, or sometimes very, different even given nearly identical initial conditions.
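The classic illustration of this sensitivity is the Lorenz system. Below is a minimal sketch, not from the post, that integrates two trajectories whose initial conditions differ by one part in a billion; their separation grows until the two solutions bear no resemblance to each other.

```python
# Minimal sketch (illustrative): two Lorenz trajectories started a tiny
# perturbation apart diverge to the size of the attractor.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(0.0, 40.0, 4001)
s0 = np.array([1.0, 1.0, 1.0])

sol_a = solve_ivp(lorenz, t_span, s0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_b = solve_ivp(lorenz, t_span, s0 + np.array([1e-9, 0.0, 0.0]),
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)

separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
print("separation at t=0, t=20, t=40:",
      separation[0], separation[2000], separation[-1])
```

Past a certain time, any statement about the state of such a system has to be statistical rather than a single deterministic trajectory.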

The assumption of an absolute determinism is the essential foundation of every scientific enquiry.

― Max Planck

Originally the models focused on the average or mean tendency of reality. This is reasonable for much of science and engineering, but as the point of view becomes more refined, other issues begin to crowd this out. The variations in outcome can dominate the utility of these models. In many cases the consequences in reality are driven by the uncommon or unusual outcomes (i.e., the tails of the distribution). Most of our current modeling approach and philosophy is utterly incapable of studying this problem effectively. This gets to the core of studying uncertainty in physical systems. We need to overhaul our approach to modeling reality to really come to grips with this. Computers, code and algorithms are probably at or beyond the point where this can be tackled.
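To make the point about tails concrete, here is a minimal sketch, not from the post, that propagates an uncertain input through a toy nonlinear response using Monte Carlo sampling; the response function, input distribution and sample size are arbitrary stand-ins.

```python
# Minimal sketch (illustrative): evaluating a nonlinear model at the mean input
# says little about the rare, large outcomes in the tail of the output.
import numpy as np

rng = np.random.default_rng(1)

def response(load):
    # A toy nonlinear response that grows sharply for large loads.
    return np.exp(load) / (1.0 + np.exp(load)) * load**2

loads = rng.normal(loc=1.0, scale=0.5, size=1_000_000)  # uncertain input
outputs = response(loads)

print("response at mean input:    ", response(np.array([1.0]))[0])
print("mean response:             ", outputs.mean())
print("99.9th percentile response:", np.quantile(outputs, 0.999))
```

The mean hides the rare events; only sampling the distribution (or something smarter) exposes the tail that often drives the real consequences.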

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.

― Arthur Stanley Eddington

Here is the problem: despite the need for this sort of modeling, the efforts in computing are focused at the opposite end of the spectrum. Current funding and focus are aimed at the computing hardware and code, with little effort applied to algorithms, methods and models. The entire enterprise needs a serious injection of intellectual energy on the proper side of the value proposition.

Cynics are – beneath it all – only idealists with awkwardly high standards.

― Alain de Botton