
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Uncertainty Quantification is Certain to be Incomplete

17 Friday Apr 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

Maturity, one discovers, has everything to do with the acceptance of ‘not knowing.’

― Mark Z. Danielewski

Uncertainty quantification is a hot topic. It is growing in importance and practice, but people should be realistic about it. It is always incomplete. We hope that we have captured the major forms of uncertainty, but the truth is that our assumptions about simulation blind us to some degree. This is the impact of “unknown knowns,” the assumptions we make without knowing we are making them. In most cases our uncertainty estimates are held hostage to the tools at our disposal. One way of thinking about this treats the codes as the tools, but the issue actually runs far deeper, to the basic foundation upon which we base our modeling of reality.

… Nature almost surely operates by combining chance with necessity, randomness with determinism…

― Eric Chaisson

One of the really uplifting trends in computational simulation is the focus on uncertainty estimation as part of the solution. This work serves the demands of decision makers who increasingly depend on simulation. The practice allows simulations to come with a multi-faceted “error” bar. Just like the simulations themselves, the uncertainty estimate is going to be imperfect, and typically far more imperfect than the simulations. It is important to recognize the nature of the imperfection and incompleteness inherent in uncertainty quantification. The uncertainty itself comes from a number of sources, some of them interchangeable.

Sometimes the hardest pieces of a puzzle to assemble, are the ones missing from the box.

― Dixie Waters

Let’s explore the basic types of uncertainty we study:

Epistemic: This is the uncertainty that comes from lack of knowledge. This could be associated with our imperfect modeling of systems and phenomena, or materials. It could come from our lack of knowledge regarding the precise composition and configuration of the systems we study. It could come from the lack of modeling for physical processes or features of a system (e.g., neglecting radiation transport, or relativistic effects). Epistemic uncertainty is the dominant form of uncertainty reported because tools exist to estimate it, and it treats simulation codes like “black boxes”.

Aleatory: This is uncertainty due to the variability of phenomena. This is the weather. The archetype of variability is turbulence, but also think about the detailed composition of every single device. They are all different to some small degree, never mind their history after being built. To some extent aleatory uncertainty is associated with a breakdown of the continuum hypothesis and is distinctly scale dependent. As things are simulated at smaller scales, different assumptions must be made. Systems vary over a range of length and time scales, and as those scales come into focus their variation must be simulated. One might argue that this is epistemic, in that if we could measure things precisely enough then they could be precisely simulated (given the right equations, constitutive relations and boundary conditions). This point of view is rational and constructive only to a small degree. For many systems of interest chaos reigns and measurements will never be precise enough to matter. By and large this form of uncertainty is simply ignored because simulations can’t provide the information.

Numerical: Simulations involve taking a “continuous” system and cutting it up into discrete pieces. Insofar as the equations describe reality, the solutions should approach the correct solution as these pieces become more numerous (and smaller). This is the essence of mesh refinement. Computational simulation is predicated upon this notion to an increasingly ridiculous degree. Regardless of the viability of the notion, the approximations made numerically are a source of error to be included in any error bar. Too often these errors are ignored, wrongly assumed to be small, or incorrectly estimated. There is no excuse for this today.

Users: The last source of uncertainty examined here is the people who use codes and construct the models to be solved. As problem complexity grows, the decisions in modeling become more subtle and prone to variability. Quite often modelers of equal skill will come up with distinctly different answers or uncertainties. Usually a problem is only modeled once, so this form of uncertainty (or the requisite uncertainty on the uncertainty) is completely hidden from view. Unless there is an understanding of how the problem definition and solution choices impact the solution, this uncertainty will go unquantified. It is almost always larger for complex problems, where it is less likely for the simulations to be conducted by independent teams. Studies have shown this source to be as large as or larger than the others! Almost the only place it has received any systematic attention is nuclear reactor safety analysis.

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

― Albert Einstein

One has to acknowledge that the line between epistemic and aleatory is necessarily fuzzy. In a sense the balance is tipped toward epistemic because the tools exist to study it. At some level this is a completely unsatisfactory state of affairs. Some features of systems arise from the random behavior of the constituent parts of the system. Systems and their circumstances are always just a little different, and these differences yield differences (sometimes slight) in the response. Sometimes these small differences create huge changes in the outcomes. It is these huge changes that drive a great deal of worry in decision-making. Addressing these issues is a huge challenge for computational modeling and simulation; a challenge that we simply aren’t taking on today.

Why?

The assumption of an absolute determinism is the essential foundation of every scientific enquiry.

― Max Planck

A large part of the reason for failing to address these matters is the implicit, but slavish, devotion to determinism. Simulations are almost always viewed as the solution to a deterministic problem. This means there is AN answer. Answers are almost never sought in the sense of a probability distribution. Even probabilistic methods like Monte Carlo are trying to approach the deterministic solution. Reality is almost never AN answer and almost always a distribution. What we end up solving is the mean expected response of a system to the average circumstance. What is actually observed is a distribution of responses to a distribution of circumstances. Often the real question to answer in any study (with or without simulation) is: what’s the worst that can reasonably happen? A level of confidence that says 95% or 99% of the responses will be less than some bad level usually defines the desired result. This sort of question is best thought of as aleatory, and our current simulation capability doesn’t begin to address it.
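To make the distinction concrete, here is a minimal sketch of the difference between the response to the average circumstance and the distribution of responses to a distribution of circumstances. The quadratic response function and the input distribution are invented purely for illustration; a real simulation would stand in for the one-line model.

```python
import numpy as np

rng = np.random.default_rng(42)

def response(load):
    """A hypothetical nonlinear system response; a real simulation would go here."""
    return 1.0 + 0.5 * load + 0.2 * load**2

# A distribution of circumstances rather than a single "average" input.
loads = rng.normal(loc=1.0, scale=0.3, size=100_000)
outputs = response(loads)

print("response to the mean input:", response(loads.mean()))
print("mean of the responses     :", outputs.mean())
print("95th percentile response  :", np.percentile(outputs, 95))
print("99th percentile response  :", np.percentile(outputs, 99))
```

Even for this toy model the response to the mean input is not the mean response, and neither answers the 95% or 99% question that decision makers actually care about.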

 

When your ideas shatter established thought, expect blowback.

― Tim Fargo

The key aspect of this entire problem is a slavish devotion to determinism in modeling. Almost every modeling discipline sees the solution being sought as utterly deterministic. This is logical if the conditions being modeled are known with exceeding precision. The problem is that such precision is virtually impossible for any circumstance. This is the core of the problem with simulating the aleatory uncertainty that so frequently goes untreated. It is almost completely ignored because of a host of fundamental assumptions in modeling that are inherited by simulations. These assumptions are holding back real progress in a host of fields of major importance.

Finally we must combine all these uncertainties to get our putative “error bar.” There are a number of ways to go about this combination, each with different properties. The most popular knee-jerk approach is to take the square root of the sum of the squares of the contributions (often loosely called the root mean square). The sum of the absolute values would be a better and safer choice, since it is always larger (hence more conservative) than the square root of the sum of squares. If you’re feeling cavalier and want to live dangerously, just use the largest single uncertainty. Each of these choices is tied to probabilistic assumptions; the sum of squares, for instance, assumes the contributions are independent and normally distributed.
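Here is a minimal sketch of the three combination rules, using made-up component uncertainties (epistemic, aleatory, numerical and user); the numbers are purely illustrative.

```python
import numpy as np

# Hypothetical component uncertainties: epistemic, aleatory, numerical, user.
u = np.array([0.04, 0.07, 0.02, 0.05])

rss     = np.sqrt(np.sum(u**2))  # square root of the sum of squares (independent, normal contributions)
sum_abs = np.sum(np.abs(u))      # sum of absolute values: always >= rss, hence more conservative
largest = np.max(np.abs(u))      # largest single contribution: the least conservative choice

print(f"root-sum-square: {rss:.3f}   sum of |u|: {sum_abs:.3f}   largest: {largest:.3f}")
```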

 

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.

― Arthur Stanley Eddington

One of the most pernicious and deep-seated issues associated with uncertainty quantification is “black box” thinking. In many cases the simulation code is viewed as a black box whose workings the user knows very little about beyond a purely functional level. This often results in generic and generally uninformed decisions being made about uncertainty. The expectations of the models and numerical methods are understood only superficially, and this results in a superficial uncertainty estimate. Often the black box thinking extends to the tool used to get the uncertainty too. We then get the result from a superposition of two black boxes. Not a lot of light gets shed on reality in the process. Numerical errors are ignored, or simply misdiagnosed. Black box users often simply do a mesh sensitivity study, and assume that small changes under mesh variation are indicative of convergence and small errors. They may or may not be such evidence. Without doing a more formal analysis this sort of conclusion is not justified. If the code and problem are not converging, the small changes may be indicative of very large numerical errors or even divergence and a complete lack of control.
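The formal analysis is not onerous. Below is a minimal sketch, with invented grid spacings and solution values, that estimates the observed order of convergence from three systematically refined meshes and uses Richardson extrapolation to estimate the fine-grid error; it assumes a constant refinement ratio and a monotone sequence. An observed order near the method’s design order is the kind of evidence a bare mesh sensitivity study cannot supply.

```python
import math

# Hypothetical solutions on three systematically refined meshes (coarse to fine),
# with a constant refinement ratio r between them.
h = [0.04, 0.02, 0.01]          # grid spacings
f = [1.0480, 1.0130, 1.0034]    # corresponding solution values (invented)
r = h[0] / h[1]                 # refinement ratio (here 2)

# Observed order of convergence from the three solutions.
p = math.log((f[0] - f[1]) / (f[1] - f[2])) / math.log(r)

# Richardson extrapolation to the zero-mesh-size limit and a fine-grid error estimate.
f_exact = f[2] + (f[2] - f[1]) / (r**p - 1.0)
error_fine = abs(f[2] - f_exact)

print(f"observed order of convergence: {p:.2f}")
print(f"extrapolated solution        : {f_exact:.5f}")
print(f"estimated fine-grid error    : {error_fine:.2e}")
```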

Whether or not it is clear to you,

no doubt the universe is unfolding

as it should.

― Max Ehrmann

The answer to this problem is deceptively simple: make the testing “white box.” The problem is that turning our black boxes into white boxes is far from simple. Perhaps the hardest thing is having the people doing the modeling and simulation possess sufficient expertise to treat the tools as white boxes. A more reasonable step forward is for people to simply recognize the dangers inherent in the black box mentality.

Science is a way of thinking much more than it is a body of knowledge.

― Carl Sagan

In many respects uncertainty quantification is in its infancy. The techniques are immature and terribly incomplete. Beyond that, we are deeply tied to modeling philosophies that hold us back from progress. The whole field needs to mature and throw off the shackles imposed by the legacy of Newton and the rule of determinism that still holds much of science under its spell.

The riskiest thing we can do is just maintain the status quo.

― Bob Iger

 

The Profound Costs of End of Life Care for Moore’s Law

10 Friday Apr 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

When you stop growing you start dying.

― William S. Burroughs

Moore’s law isn’t a law, but rather an empirical observation that has held sway for far longer than could have been imagined fifty years ago. In some way, shape or form, Moore’s law has provided a powerful narrative for the triumph of computer technology in our modern World. For a while it seemed almost magical in its gift of massive growth in computing power over the scant passage of time. Like all good things, it will come to an end, and soon if not already.

Its death is an inevitable event, and those who have become overly reliant upon its bounty are quaking in their shoes. For the vast majority of society Moore’s law has already faded away. Our phones and personal computers no longer become obsolete due to raw performance every two or three years. Today obsolescence comes from software, or advances in the hardware’s capability to be miserly with power (longer battery life on your phone!). Scientific computing remains fully in the grip of Moore’s law fever! Much in the same way that death is ugly and expensive for people, the death of Moore’s law for scientific computing will be the same.

Nothing can last forever. There isn’t any memory, no matter how intense, that doesn’t fade out at last.

― Juan Rulfo

One of the most pernicious and difficult problems with health care is end of life treatment (especially in the USA). An enormous portion of the money spent on a person’s health care is focused on the end of life (25% or more). Quite often these expenses actually harm people and reduce their quality of life. Rarely do the expensive treatments have a significant impact on the outcomes, yet we spend the money because death is so scary and final. The question I’m asking is whether we are about to do exactly the same thing with Moore’s law in scientific computing?

Yes.

Moore’s law is certainly going to end. In practical terms it may already be dead, holding on only in the case of completely impractical stunt calculations. If one looks at the scaling of calculations with modest practical importance, such as the direct numerical simulation of turbulence, the conclusion is that Moore’s law has passed away. The growth in capability has simply fallen dramatically off the pace we would expect from Moore’s law. If one looks at the rhetoric in the national exascale initiative, the opposite case is made. We are going forward guns blazing. The issue is just the same as end of life care for people: is the cost worth the benefit?

It’s hard to die. Harder to live.

― Dan Simmons

The computers that are envisioned for the next decade are monstrosities. They are impractical and sure to be nearly impossible to use. They will be unreliable. They will be horrors to program. Almost everything about these computers is utterly repulsive to contemplate. Most of all these computers will be immense challenges to conduct any practical work on. Despite all these obvious problems we are going to spend vast sums of money acquiring these computers. All of this stupidity will be in pursuit of a vacuous goal of the fastest computer. It will only be the fastest in terms of a meaningless benchmark too.

Real dishes break. That’s how you know they’re real.

― Marty Rubin

For those of us doing real practical work on computers this program is a disaster. Even doing the same things we do today will be harder and more expensive. It is likely that the practical work will get harder to complete and more difficult to be sure of. Real gains in throughput are likely to be far less than the reported gains in performance attributed to the new computers too. In sum the program will almost certainly be a massive waste of money. The plan is for most of the money to go to the hardware and the hardware vendors (should I say corporate welfare?). All of this will be done to squeeze another 7 to 10 years of life out of Moore’s law even though the patient is metaphorically in a coma already.

The bottom line is that the people running our scientific computing programs think that they can sell hardware. The parts of scientific computing where the value comes from can’t be persuasively sold. As a result modeling, methods, algorithms and all the things that make scientific computing actually worth doing are starved for support. Worse yet, the support they do receive is completely swallowed up by trying to simply make current models, methods and algorithms work on the monstrous computers we are buying.

What would be a better path for us to take?

Let Moore’s law die, hold a wake and chart a new path. Instead of building computers to be fast, build them to be useful and easy to use. Start focusing some real energy on modeling, methods and algorithms. Instead of solving the problems we had in scientific computing from 1990, start working toward methodologies that solve tomorrow’s problems. All the things we are ignoring have the capacity to add much more value than our present path.

For nothing is evil in the beginning.

― J.R.R. Tolkien

The irony of this entire saga is that computing could mean so much more to society if we valued the computers themselves less. If we simply embraced the inevitable death of Moore’s law we could open the doors to innovation in computing instead of killing it in pursuit of a foolish and wasteful extension of its hold.

The most optimistic part of life is its mortality… God is a real genius.

― Rana Zahid Iqbal

 

The obvious choice is not the best choice

03 Friday Apr 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.

― Isaac Asimov

If someone gives you some data and asks you to fit a function that “models” the data, many of you know the intuitive answer: “least squares.” This is the obvious, simple choice, and, perhaps not surprisingly, not the best answer. How bad this choice may be depends on the situation. One way to do better is to recognize the situations where the solution via least squares may be problematic, and may exert an undue influence on the results.

Most of our assumptions have outlived their uselessness.

― Marshall McLuhan

To say that this problem is really important to the conduct of science is a vast understatement. The reduction of data is quite often posed in terms of a simple model (linear in the important parameters) and solved via least squares. The data is often precious, or very expensive to measure. Given the importance of data in science, it is ironic that we should so often take the final hurdle so cavalierly and analyze the data in such a crude manner as least squares. More to the point, we don’t consider the consequences of such an important choice; usually it isn’t even thought of as a choice.

That’s the way progress works: the more we build up these vast repertoires of scientific and technological understanding, the more we conceal them.

― Steven Johnson

The key to this is awakening to the assumptions made in least squares. The central assumption concerns the nature of the errors in the fit, which least squares takes to be normally distributed (Gaussian). If you know this to be true then least squares is the right choice. If it is not true, then you might be introducing a rather significant assumption (a known unknown if you will) into your fit. In other words your results will be based upon an assumption you don’t even know that you made.

If your data and model match quite well and the deviations are small, it also may not matter (much). This doesn’t make least squares a good choice, just not a damaging one. If the deviations are large or some of your data might be corrupt (i.e., outliers), the choice of least squares can be catastrophic. The corrupt data may have a completely overwhelming impact on the fit. There are a number of methods for dealing with outliers in least squares, and in my opinion none of them is good.

 The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes

Fortunately there are existing methods that are free from these pathologies. For example the least median deviation fit can deal with corrupt data easily. It naturally excludes outliers from the fit because of a different underlying model. Where least squares is the solution of a minimization problem in the energy or L2 norm, the least median deviation uses the L1 norm. The problem is that the fitting algorithm is inherently nonlinear, and generally not included in most software.
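A minimal sketch of the contrast, on fabricated data with a single corrupted point: an ordinary least squares line fit next to a fit minimizing the L1 norm (least absolute deviations, one of the robust alternatives in the spirit described above). Since the L1 fit is nonlinear and rarely built into standard software, the sketch leans on a general-purpose optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic straight-line data with one gross outlier.
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)
y[15] += 25.0   # a single corrupted measurement

# Ordinary least squares (L2): closed form via the normal equations.
A = np.vstack([x, np.ones_like(x)]).T
slope_l2, intercept_l2 = np.linalg.lstsq(A, y, rcond=None)[0]

# Least absolute deviations (L1): no closed form, so use a nonlinear optimizer.
def l1_loss(params):
    a, b = params
    return np.sum(np.abs(y - (a * x + b)))

a_l1, b_l1 = minimize(l1_loss, x0=[slope_l2, intercept_l2], method="Nelder-Mead").x

print("true line     : slope 2.00, intercept 1.00")
print(f"least squares : slope {slope_l2:.2f}, intercept {intercept_l2:.2f}")
print(f"L1 (robust)   : slope {a_l1:.2f}, intercept {b_l1:.2f}")
```

The single bad point drags the least squares line noticeably, while the L1 fit stays close to the true line.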

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham Maslow

One of the problems is that least squares is virtually knee-jerk in its application. It is contained in standard software such as Microsoft Excel and can be applied with almost no thought. If you have to write your own curve-fitting program, by far the simplest approach is to use least squares. It often produces a linear system of equations to solve where the alternatives are invariably nonlinear. The key point is to realize that this convenience has a consequence. If your data reduction is important, it might be a good idea to think about what you ought to do a bit more.

Duh.

The explanation requiring the fewest assumptions is most likely to be correct.

― William of Ockham


The Dark Side of Publishing

27 Friday Mar 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Reviews are for readers, not writers. If I get a bad one, I shrug it off. If I get a good one, I don’t believe it
― William Meikle

A week ago I received bad news: the reviews for a paper were back. One might think that getting a review back would be good news, but it rarely is. These reviews are too often a horrible soul-crushing experience. In this case I had reports from two reviewers, and one of them delivered the ego thrashing I’ve come to fear.

 I’ve found the best way to revise your own work is to pretend that somebody else wrote it and then to rip the living shit out of it.

― Don Roff

In total the two reviews were generally consistent on the details of the paper, and the sorts of suggestions for bringing the paper into the condition needed to allow publication. The difference was the tone of the reviews. One of the reviews was completely constructive and detailed in its critique. Each and every critique was offered in a positive light even when the error was pure carelessness.

The other review couldn’t be more different in tone. From the outset it felt like an attack on me. It took me several days before I could read it in a manner that allowed me to take constructive action. For example, including a comment that says “the writing is terrible” is basically an attack on the authors (yes, it feels personal). This could be stated much more effectively: “I believe that you have something important to say here, but the ideas do not come across clearly.” Both say the same thing, but only one invites a positive and constructive response. I encourage readers to endeavor to write their own reviews in a manner that invites authors to improve. One of my co-authors, who has a somewhat more unbiased eye, noted that the referee’s report seemed a bit defensive.

So now I’m taking the path of revising the paper. A visceral report makes this much more difficult to accomplish. The constructive review is relatively easy to accommodate, and makes for a good blueprint for progress. The nasty review is much harder to employ in the same fashion. I feel that I’m finally on the path to do this, but it could have been much easier. There is nothing wrong with being critical, but the way it’s done matters a lot.

That’s the magic of revisions – every cut is necessary, and every cut hurts, but something new always grows.

― Kelly Barnhill

Just for the record, the paper is titled “Robust Verification Analysis” by myself, Jim Kamm (Los Alamos), Walt Witkowski and Tim Wildey (Sandia), and it was submitted to the Journal of Computational Physics. As part of the revision I’ve taken the liberty of rewriting the abstract:

We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. Our methodology is well suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a powerful optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification Analysis.

The practice of verification is a key aspect for determining the correctness of computer codes and their respective computational simulations. In practice verification is conducted through repeating simulations with varying discrete resolution and conducting a systematic analysis of the results. The accuracy of the calculation is computed directly against an exact solution, or inferred by the behavior of the sequence of calculations.

Nonlinear regression is a standard approach to producing the analysis necessary for verification results. We note that nonlinear regression is equivalent to solving a nonlinear optimization problem. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the underlying assumptions of the solution. Constraints applied in the solution can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics).

This provides self-contained, data-driven error estimation including uncertainties for both the solution and order of convergence. Our method will produce high quality results for well-behaved cases, consistent with existing practice. The methodology will also produce reliable results for ill-behaved circumstances. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and more challenging simulations. We pay particular attention to the case where few calculations are available and these calculations are conducted on coarse meshes. These are compared to analytical solutions, or calculations on highly refined meshes.

Here is the abstract from the original submission:

Code and solution verification are key aspects for determining the quality of computer codes and their respective computational simulations. We introduce a verification method that can produce quality results more generally with less well-behaved calculations. We have named this methodology Robust Verification Analysis. Nonlinear regression is a standard approach to producing the analysis necessary for verification results. Nonlinear regression is equivalent to solving a nonlinear optimization problem. We base our methodology on utilizing multiple constrained optimizations to solve the verification model. Constraints can include expert judgment regarding convergence rates and bounding values for physical quantities. This approach then produces a number of error models, which are then analyzed through robust statistical techniques (e.g., median instead of mean statistics). This provides self-contained, data driven error estimation including uncertainties for both the solution and order of convergence. Our method will produce high quality results for the well-behaved cases consistent with existing practice as well. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and challenging data sets.
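For readers who like to see the shape of an idea, here is a toy sketch of the flavor of the methodology: fit the usual error ansatz f(h) = f0 + A·h^p to a sequence of calculations under several different constraint sets on the convergence rate p, then summarize the family of fits with median (robust) statistics. This is my illustration only, with invented data and invented constraint choices, not the algorithm from the paper itself.

```python
import numpy as np
from scipy.optimize import minimize

# Invented verification data: grid spacings and the corresponding solution values.
h = np.array([0.08, 0.04, 0.02, 0.01])
f = np.array([0.9270, 0.9832, 0.9958, 0.9989])   # noisy, roughly second-order sequence

def fit(p_bounds):
    """Fit the error ansatz f(h) ~ f0 + A*h**p with the rate p constrained to p_bounds."""
    def misfit(params):
        f0, A, p = params
        return np.sum((f - (f0 + A * h**p))**2)
    res = minimize(misfit, x0=[f[-1], -1.0, 2.0],
                   bounds=[(None, None), (None, None), p_bounds],
                   method="L-BFGS-B")
    return res.x  # f0 (extrapolated solution), A, p (observed rate)

# Several constraint sets expressing different expert judgments about the rate.
constraint_sets = [(0.5, 3.0), (1.0, 2.5), (1.5, 2.5), (1.8, 2.2)]
fits = np.array([fit(b) for b in constraint_sets])

# Robust (median) summary across the family of constrained error models.
print(f"median extrapolated solution : {np.median(fits[:, 0]):.4f}")
print(f"median convergence rate      : {np.median(fits[:, 2]):.2f}")
print(f"median fine-grid error       : {np.median(np.abs(f[-1] - fits[:, 0])):.2e}")
```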

 There is a saying: Genius is perseverance. While genius does not consist entirely of editing, without editing it’s pretty useless.

― Susan Bell

When you print out your manuscript and read it, marking up with a pen, it sometimes feels like a criminal returning to the scene of a crime.
― Don Roff

Innovation is a big deal because we are so bad at it!

20 Friday Mar 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Innovation is the specific instrument of entrepreneurship…the act that endows resources with a new capacity to create wealth.

― Peter F. Drucker

Innovation as a focus is everywhere – because we can’t do it. It is essential to our economic and national future, yet we are terrible at it!

Plans are of little importance, but planning is essential.

― Winston Churchill

We have created a society that routinely crushes innovative thinking. We understand the importance of innovation, but refuse to create the conditions that nurture it. Most of the time we do the opposite. One sterling example of innovation-crushing behavior is the misapplication of project management to scientific research. We apply the same approach used for building a bridge or repaving a road to a supposedly “cutting-edge” research project. In the process the project is on time and under budget, but stripped of innovative research. The whole notion of “scheduled breakthroughs” is anathema to successful research, yet pervasive in current management practice. The only objective that is achieved in the process is control, but the soul of the work is destroyed.

To succeed, planning alone is insufficient. One must improvise as well.

― Isaac Asimov

The problem isn’t the planning per se, but rather trying to stick to the plans. Planning is useful, even essential, but plans are generally not fully actionable; adaptation is necessary to actually succeed. Too often in today’s climate, the plans are adhered to despite evidence of their inadequacy. The conditions that allow innovation are a threat to so much in the ordinary day-in, day-out conduct of business and social constructs. By producing a culture of conformity and safety, the conditions that spur new thinking (i.e., innovation) are not allowed to grow and bloom.

Innovation is about practical creativity – it’s about making new ideas useful…

Before innovation – or practical creativity – there is insight. You must see the world differently.

― Max McKeown

While innovation is one of the most effective engines of growth and progress, the conditions allowing it to happen threaten every other aspect of society. This is especially true in today’s hyper-safety, low-risk culture, which has been driven into overdrive by the threat of terrorism. In the long run the greatest damage to our long-term growth is the broad adoption of risk-averse policies and approaches. Terrorism is only a threat if we allow it to change us, and we have. These constructs provide safety and lower the risk of bad things, but also strangle progress and innovation.

The best way to predict your future is to create it

― Abraham Lincoln

A huge part of this problem is the lack of tolerance for risk. Innovation often fails, and lots of failure yields the opportunity for innovative success. As our society has squashed risk, it has also squeezed out the potential for breakthroughs. The consequence is a safer, more predictable, but much poorer future. Risk and reward are tied closely together. Nothing ventured, nothing gained is the old maxim, and it applies today. Today no venture that entails even the slightest tinge of risk can be tolerated. The result is that we undertake no ventures whose outcomes aren’t virtually pre-ordained. Success is broadly achieved only through the systematic diminishment of our objectives.

If you are deliberately trying to create a future that feels safe, you will willfully ignore the future that is likely.

― Seth Godin

These things we do to control outcomes, control people and manage our work all chip away at the conditions necessary for innovation. Innovation requires things to be slightly out of control, slightly unpredictable, to succeed. This success is the product of the mixing of ideas that aren’t “supposed” to be in contact. Hotbeds of innovation come from putting disparate people together and allowing interactions to occur in a natural way. A good example is the old AT&T labs, where a generally poor building design forced people of greatly differing backgrounds to interact closely. Common areas, dining areas, bathrooms, stairwells, etc. all provide some of the necessary lubrication for innovation. By allowing people to collide in an almost random way, serendipity erupts and innovation blooms.

Dreamers are mocked as impractical. The truth is they are the most practical, as their innovations lead to progress and a better way of life for all of us.

― Robin S. Sharma

Another key is a certain amount of freedom. The freedom to pursue the best outcome even if that outcome is not what was planned. Today the plan has become the arbiter of effort, and we penalize deviations from the plan. The results are disastrous for innovation, which is inevitably a departure from the original plan.

Throughout history, people with new ideas—who think differently and try to change things—have always been called troublemakers.

― Richelle Mead

 

Are we computing the right things?

13 Friday Mar 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

If you want a new tomorrow, then make new choices today.

― Tim Fargo

Ultimately the importance of what we compute is determined by how useful the results are. Are the results good at explaining something we see in nature, confirming an idea, providing concrete evidence of how a scenario might unfold, or helping create a better widget? The classical uses of scientific computing are solving initial value problems and large-scale data analysis, each of which can play a role in answering the above questions. How much have we moved beyond this classical view in the 70 or so years the field has existed?

I think the answer is “not nearly enough,” and computing is failing to deliver on its full potential as a result.

Never attribute to malice that which can be adequately explained by stupidity.

― Robert Hanlon

 

Scientific computing is still dominated by the same two big uses that existed at the beginning. Recently data analysis has reasserted itself as the big “new” thing. This is mostly the consequence of the deluge of data coming from the Internet, and the impending Internet of things. For mainstream science, the initial value problem still holds sway over a broader set of activities, although data is big in astronomy, geophysics and the social sciences.

 

To change ourselves effectively, we first had to change our perceptions.

― Stephen R. Covey

 

The problem is that bigger, better things are possible if we simply marshal our efforts properly. Computing has the potential to reshape our ability to design by combining our forward simulations with optimization. The same could be done with data analysis to power the calibration of models. Another powerful application would be a pervasive analysis of uncertainties in our modeling. Almost all of these cases have direct analogs in the World of data analysis. Together this array of untapped potential would contribute greatly to our understanding and mastery of nature.

 

Engineers like to solve problems. If there are no problems handily available, they will create their own problems.

― Scott Adams

 

What is holding us back?

 

Probably the greatest issue holding us back is our absolute intolerance of risk. It is always less risky to incrementally improve what you are already doing. This has become the singular focus of science today. Making small improvements to something that is already deemed a success is a path to avoiding failure, “building on success.” Most progress looks like this, and today almost all progress looks like this. To get more out of computing, we need to risk doing something really new, and with that risk comes the possibility of failure. Without that risk the level of success that may be achieved is also much lower. I believe that this is the main driver behind not taking advantage of computing.

 

Evolution is more about adaptivity than adaptability.

― Raheel Farooq

 

This modern pathology also creates a myriad of side effects. One of the engines of innovation is applied mathematics, where playing it safe is sapping the vitality from the field. Increasingly the applied math work is focused on ideal model problems, and eschews the difficult work of attacking real problems, or problems where the math is messy. Without a more applied and more daring approach to developing capabilities, the innovative energy will not be unleashed. Part of innovation means simply trying new things whether or not they are amenable to analysis. Work should be guided by importance and utility rather than tractability.

 

Life’s journey is built of crests and troughs, the movement is always going to be fast only towards the trough and the progress is bound to be slow towards the crest.

― Anuj Somany

 

A good place to look at where analysis should be applied is methods that work. The topic of compressed sensing is a great example. By the time compressed sensing was “invented” it had been in use for 30 years as a practical approach in several fields, but it lacked theoretical support. When the theoretical support arrived from some of the best mathematicians alive today, the field exploded. New uses for this old methodology are discovered almost every day. It is an example of what a coherent theory can do for a field. Without the theory, the topic was stranded as a “trick” and its applicability was limited. With the theory, the applications that could be attempted grew immensely (and continue to grow).

 

Our culture works hard to prevent change.

― Seth Godin

 

Another place where we have systematically failed to advance appropriately is the simulation of stochastic or random phenomena. We are still devoted to solving almost everything in terms of a mean field theory. While the mean field view of the World has served us well, today many of our most important applications are driven by statistics. How often will something really good, or really bad, happen? How much of a population of devices will fail in a certain way? How likely is a certain event? Today most of our simulation capability is ill suited to answering these questions. In many cases we try to answer them incorrectly by merely examining the uncertainty in the mean field solution (i.e., sampling uncertainty parametrically, which is not the same thing). Almost none of our simulation techniques are suitable for examining the variability of the systems being simulated.

 

If failure is not an option, then neither is success.

― Seth Godin

 

The foundation of our limitations is not our intellectual abilities, but rather our taste for risk and change. With change and risk comes the potential for failure or unexpected outcomes. Lately, these sorts of things can’t be tolerated by our society. Without tolerance for bad things, our capacity to experience good things is undermined. Instead we are left to swim in an era of unmitigated mediocrity. It is sad that we’ve come to accept this as our mantra.

 

Fear does that. We have become afraid of everything, and fearful of things we used to simply overcome.

 

I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration.

― Frank Herbert, Dune


Science Requires that Modeling be Challenged

06 Friday Mar 2015

Posted by Bill Rider in Uncategorized

≈ 6 Comments

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham Maslow

One of the most insidious and nefarious properties of scientific models is their tendency to take over, and sometimes supplant, reality.

— Erwin Chargaff

In scientific computing the quality of the simulations is slaved to the quality of the models being solved. The simulations cannot be more useful than the models allow. This absolute fact is too often left out of considerations of the utility of computing for science. Models are immensely important for the conduct of science, and their testing is essential to progress. When a model survives a test it is a confirmation of existing understanding. When a model fails and is overturned, science has the opportunity to leap forward. Both of these events should be cherished as cornerstones of the scientific method. Scientific computing as articulated today does not fully honor this point of view.

…all models are approximations. Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind…

— George E.P. Box

The purpose of models is not to fit the data but to sharpen the questions.

— Samuel Karlin

The utility of models is defined by their role in connecting simulations to reality. When a scientist steps back from the narrow point of view associated with computing and looks at science more holistically, the role of models becomes much clearer. Models are approximate, but tractable, visions of reality that have utility in their necessary simplicity. Our models also define in loose terms what we envision about reality. In science our models largely define how we understand the World. In engineering our models define how and what we build. If we expand our models, we expand our grasp of reality and our capacity for creation. Models connect the World’s reality to our intellectual grasp of that reality.

Science is not about making predictions or performing experiments. Science is about explaining.

― Bill Gaede

Computing has allowed more complex models to be used because it is freed of the confines of analytical techniques. Despite this freedom, the nature of models has been relatively stagnant, with the approach to modeling still tethered to the (monumental) ideas introduced in the 17th, 18th and 19th centuries. Despite the ability to solve much more complicated models of reality that should come closer to “truth,” we are still trapped in this older point of view. In total too little progress is being made in removing these restrictions on how we think about modeling the World. Ultimately these restrictions are holding us back from a more pervasive understanding and control over the natural World. The costs of this seeming devotion to an antiquated perspective are immense, essentially incalculable. Succinctly put, the potential that computing represents is far, far from being realized today.

It’s not an experiment if you know it’s going to work.

― Jeff Bezos

If science is to be healthy, the models of reality should constantly be challenged by experiment. Experiments should be designed to critically challenge or confirm our models. Too often this essential role is missing from computational experiments, and to some extent it can only come from reality itself, that is, classical experiments. This hasn’t stopped the hubris of some who define computations as replacements for experiments when they conduct direct numerical experiments and declare them to be ab initio.

The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.

― Albert Einstein

This is very common in turbulence, for example, and this approach should be blamed for helping to stagnate progress in the field. The truly dangerous trend is for real World experiments to be replaced by computations, which is happening with frightening regularity. This creates an intellectual disconnect between science’s lifeblood and its modeling by allowing modeling to replace experiments. With models taking on the role of experiment, a vicious cycle ensues where faulty models are not displaced by experimental challenges. Instead an incorrect or incomplete model can increase its stranglehold on thought.

The real world is where the monsters are.

― Rick Riordan

Nothing is more damaging to a new truth than an old error.

— Johann Wolfgang von Goethe

Indeed the lack of progress in understanding turbulence can largely be traced to the slavish devotion to classical ideas, and the belief that the incompressible Navier-Stokes equations somehow contain the truth. I feel that they do not, and it would be foolish to adopt this belief. That has not stopped the community from fully and completely adopting it. Incompressibility is itself an unphysical approximation (albeit a useful one), woefully unphysical in its implied infinite speed of propagation for sound waves. It also strains any connection of the flow to the second law of thermodynamics, which almost certainly plays a key role in turbulence. Incompressibility removes thermodynamics from the equations in the most brutish way possible. Computing has only worked to strengthen these old and stale ideas’ hold on the field, and has perhaps set progress back by decades. This need not be the case, but outright intellectual laziness has set in.

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

― Richard P. Feynman

Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.

― Sir Ronald Fisher

Classically experiments are conducted either to confirm our understanding or to challenge it. A convincing experiment that challenges our understanding is invaluable to the conduct of science. Experimental work that provides this sort of data is essential to progress. When the data is confirmatory, it provides the basis of validation or calibration of models. Too often the question of whether the models are right or wrong is not considered. As a result the models tend to drift out of applicability over time. The derivation and definition of different models based on the feedback from real data is too infrequent. Explaining data should be a far more important task in the day-in, day-out conduct of science.

Theories might inspire you, but experiments will advance you.

― Amit Kalantri

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

In computational modeling and simulation this is happening even less. Part of the reason is the lack of active questioning of the models by scientists. Models have been applied for decades without significant challenge to the assertion that all we need is a faster, bigger computer for reality to yield to the model’s predictive power. The incapacity of the model to be predictive is rarely even considered as an outcome. Another way of expressing this problem is the lingering and persistent weakness of validation (and its brother in arms, verification). Too often the validation received by models is actually calibration disguised as validation, without the correctness of the model even being considered. The ultimate correctness of a model should always be front and center in validation, yet this question is rarely asked. Properly done validation would expose models as being wrong, or otherwise hamstrung in their ability to model aspects of reality. The consequence is the failure to develop new models and too much faith placed in heavily calibrated old models.

Humans see what they want to see.

― Rick Riordan

 Remember, you see in any situation what you expect to see.

― David J. Schwartz

The current situation is not healthy. Science is based on failures, and failure is not allowed today. The validation outcome that a model is wrong is viewed as a failure. Instead it is an outstanding success that provides the engine for the scientific progress so vitally needed. In most computational simulations this outcome is ruled out from the outset. Rather than place the burden of evidence on the model being correct, we tend to do the opposite and place the burden on proving models wrong. This is backwards to the demands of progress. We might consider a different tack. This comes as an affront to the viewpoint that scientific computing is an all-conquering capability that only needs a big enough computer to enslave reality to its power. Nothing could be further from the truth. In the process we are wasting the massive investment in computing rather than harnessing it.

The formulation of the problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill.

― Albert Einstein

To succeed, scientific computing needs to embrace the scientific method again instead of distancing itself from the engine of progress so distinctly. We need leadership in science that demands a different path be taken. This path needs to embrace risk and allow for failure while providing a well-defined structure that puts experiment and modeling in their proper roles and appropriate contexts.

Never in mankind’s history have we so fundamentally changed our means of existence with so little thought.

― James Rozoff

Know where the value in work resides

27 Friday Feb 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

We all die. The goal isn’t to live forever, the goal is to create something that will.

― Chuck Palahniuk

When we achieve a modicum of success professionally it usually stems from a large degree of expertise or achievement in a fairly narrow realm. At the same time this expertise or achievement has a price; it was gained through a great degree of focus, luck and specialization. Over time this causes a loss of perspective on the importance of your profession in the broader world. It is often difficult to understand why others can’t see the intrinsic value in what you’re doing. There is a good reason for this: you have probably lost sight of why what you do is valuable.

Ultimately, the value of an activity is measured in terms of its impact on the broader world. Often these days economic activity is used to imply value fairly directly. This isn’t perfect by any means, but it is useful nonetheless. For some areas of necessary achievement this can be a jarring realization, but a vital one. Many monumental achievements actually have distinctly little value in reality, or the value arrives long after the discovery. In many cases the discoverer lacks the perspective or skill to translate the work into practical value. Some of these achievements are necessary to enable things of greater value. Achieving the necessary balance in these cases is quite difficult, and rarely, if ever, achieved.

It’s always important to keep the most important things in mind, and along with quality, the value of the work is always a top priority. In thinking about computing, the place where computers change how reality is engaged is where the value resides. Computers’ original uses were confined to business, science and engineering. Historically, computers were mostly the purview of business operations such as accounting, payroll and personnel management. They were important, but not very important. People could easily go through life without ever encountering a computer, and their impact was indirect.

As computing was democratized via the personal computer, the decentralization of access to computing power allowed it to grow to an unprecedented scale, but an even greater transformation lay ahead. Even this change made an enormous impact because people almost invariably had direct contact with computers. The functions that were once centralized were now at the fingertips of the masses. At the same time the scope of computing’s impact on people’s lives began to grow. More and more of people’s daily activities were being modified by what computing did. This coincided with the reign of Moore’s law and its massive growth in the power of computing and/or decrease in its cost. Now computing has become the most dominant force in the World’s economy.

Why? It wasn’t Moore’s law although it helped. The reason was simply that computing began to matter to everyone in a deep, visceral way.

Nothing is more damaging to a new truth than an old error.

— Johann Wolfgang von Goethe

The combination of the Internet with telecommunications and super-portable personal computers allowed computing to obtain massive value in people’s lives. The combination of ubiquity and applicability to day-to-day life is what made computing valuable. The value came from defining a set of applications that impact people’s lives directly and are always within arm’s reach. Once these computers became the principal vehicle of communication and the way to get directions, find a place to eat, catch up with old friends, and answer almost any question at will, the money started to flow. The key to the explosion of value wasn’t the way the applications were written, or coded, or run on computers; it was their impact on our lives. The way the applications work, their implementation in computer code, and the computers themselves just needed to be adequate. Their characteristics had very little to do with the success.

It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

― Richard P. Feynman

Scientific computing is no different; the true value lies in its impact on reality. How does it impact our lives, the products we have, or the decisions we make? The impact of climate modeling is found in its influence on policy, politics and various economic factors. Computational fluid dynamics can impact a wide range of products through better engineering. Other computer simulation and modeling disciplines can impact military choices, or provide decision makers with ideas about the consequences of actions. In every case the ability of these things to influence reality is predicated on a model of reality. If the model is flawed, the advice is flawed. If the model is good, the advice is good. No amount of algorithmic efficiency, software professionalism or raw computer power can save a bad model from itself. When a model is good, the solution algorithms and methods found in computer code, running on computers, enable its outcomes. Each of these activities needs to be competently and professionally executed. Each of these activities adds value, but without the path to reality and utility that value is at risk.

Despite this bulletproof assertion about the core of value in scientific computing, the amount of effort focused on improving modeling is scant. Our current scientific computing program is predicated on the proposition that the modeling is good enough already. It is not. If the scientific process were working, our models would be improving from feedback. Instead they are stagnant and the entire enterprise is focused almost exclusively on computer hardware. The false proposition is that the computers simply need to get faster and reality will yield to modeling and simulation.

So we have a national program that is focused on the least valuable thing in the process, and ignores the most valuable piece. What is the likely outcome? Failure, or worse than that, abject failure. The most stunning thing about the entire program is that the focus is absolutely orthogonal to the value of the activities. Software is the next largest focus after hardware. Methods and algorithms are the next highest focus. If one breaks this area of work into its two pieces, new breakthroughs and computational implementation work, the trend continues. The less valuable implementation work has the lion’s share of the focus, while the groundbreaking type of algorithmic work is virtually absent. Finally, modeling is nearly a complete absentee. No wonder the application case for exascale computing is so pathetically lacking.

It is sometimes an appropriate response to reality to go insane.

― Philip K. Dick

Alas, we are going down this road whether it is a good idea or not. Ultimately this is a complete failure of the scientific leadership of our nation. No one has taken the time or effort to think this shit through. As a result the program will not be worth a shit. You’ve been warned.

The difference between genius and stupidity is; genius has its limits.

― Alexandre Dumas-fils

Software is More Than An Implementation or Investment

20 Friday Feb 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

– Martin Fowler

I don't think software gets the support or respect it deserves, particularly in scientific computing. It is simply too important to treat it the way we do. It should be regarded as an essential professional contribution and supported as such. Software shouldn't be a one-time investment either; it requires upkeep and constant rebuilding to be healthy. Too often we pay for the first version of the code and then do everything else on the cheap. The code decays and ultimately is overcome by technical debt. The final danger with code is the loss of the knowledge base underlying the code itself. Too much scientific software is "magic" code that no one understands. If no one understands the code, the code is probably dangerous to use.

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.

– Rich Cook

Recently I've taken to harping on deconstructing the value proposition for scientific computing. The connection to work of importance and value is essential to understand, and the lack of such understanding explains why our current trajectory is so problematic. Just to reiterate, the value of computing, scientific computing included, is found in the real world. In scientific computing the real world is studied through models, most often differential equations. We then solve these models using algorithms or methods. The models, as interpreted by their solution methods or algorithms, are expressed in computer code, which in turn runs on a computer.
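
To make this chain concrete, here is a minimal sketch of the model-method-code progression. It is my own illustration, not code from any production effort: the toy model dy/dt = -k y, the forward Euler method chosen to solve it, and the rate constant, step size, and function names are all hypothetical choices made for this example.

# A toy model: exponential decay, dy/dt = -k*y (hypothetical parameters).
# The method: forward Euler time stepping.
# The code: the method expressed so a computer can execute it.

def forward_euler_decay(y0, k, dt, n_steps):
    """Advance dy/dt = -k*y from y0 with forward Euler and return the trajectory."""
    y = y0
    history = [y]
    for _ in range(n_steps):
        y = y + dt * (-k * y)   # the method applied to the model
        history.append(y)
    return history

if __name__ == "__main__":
    # Hypothetical inputs: initial value 1.0, rate 0.5, step 0.1, 100 steps.
    trajectory = forward_euler_decay(y0=1.0, k=0.5, dt=0.1, n_steps=100)
    print(trajectory[-1])   # compare against the exact answer, exp(-0.5 * 10.0)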

Good code is its own best documentation. As you’re about to add a comment, ask yourself, ‘How can I improve the code so that this comment isn’t needed?’

– Steve McConnell 

Each piece of this stream of activities is necessary and must be competently executed, but they are not equal. For example, if the model is poor, no method can make up for it. No computer code can rescue it, and no amount of computer power can solve it in a way that is useful. On the other hand, for some models or algorithms no computer exists that is fast enough to solve the problem. The question is: where are the problems today? Do we lack enough computer power to solve the current models? Or are the current models flawed, and should the emphasis be on improving them? In my opinion the key problems are caused by inadequate models first and inefficient algorithms and methods second. Software, while important, is the third most important aspect, and the computers themselves are the least important aspect of scientific computing.

With that said, we do have significant issues with software: its quality, its engineering, and its upkeep. Scientific software simply isn't developed with nearly enough professionalism. Too much effort is placed on implementing algorithms compared to the effort spent keeping the software up to date. Often software is written but not maintained. Such maintenance is akin to the upkeep of roads and bridges: often the money only exists to patch the existing road rather than redesign and rebuild it to meet current needs. In this way technical debt explodes and often overwhelms the utility of the computer implementation. It simply becomes legacy code. The code is passed down from generation to generation and ported to new computers. Performance suffers, understanding suffers, and ultimately quality dies. In many places the entire software enterprise allows the code to be written, then only maintained and ported to generation after generation of computer.

C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg.

– Bjarne Stroustrup

More importantly, software often outlives the people responsible for the intellectual capital represented in it. A real danger is the loss of expertise in what the software is actually doing. There is a specific and real danger in using software that isn't understood. Many times the software is used as a library and not explicitly understood by the user. The software is treated as a storehouse of ideas, but if those ideas are not fully understood there is danger. It is important that the ideas in software be alive and fully comprehended.

Perhaps the biggest problem we have is the insistence that the most important issue is the hardware; our computers simply aren't fast enough. This is an overly simplistic view of the issues and ultimately saps energy from solving more important problems, software being among them. Unfortunately, software isn't the weakest part of the chain of value, but it is still too weak for the health of the field. In total the present national focus in computing is almost completely opposite to the value of the activities. The least valuable thing gets the most attention, and the most valuable thing gets the least. How things got so far out of whack is another story.

 People who are really serious about software should make their own hardware.

― Alan Kay

Not All Algorithm Research is Created Equal

14 Saturday Feb 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

In algorithms, as in life, persistence usually pays off.

― Steven S. Skiena

Over the past year I've opined that algorithm (method) research is under-supported and under-appreciated as a source of progress in computing. I'm not going to backtrack one single inch on this. We are not putting enough effort into using computers better, and we are putting too much effort into building bigger, less useful, and very hard to use computers. Without the smarts to use these machines wisely, this effort will end up being a massive misapplication of resources.

The problem is that the issues with algorithm research are even worse than this. The algorithm research we are supporting is mostly misdirected in a similar way. It turns out we are focused on the kind of algorithm research that does even more damage to our prospects for success. In other words, even within the spectrum of algorithm research there isn't an equality of impact.

The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

— Nick Trefethen

There are, broadly, three flavors of algorithm research with intrinsically different value to the capability of computing. The first involves the development of new algorithms with improved properties compared to existing algorithms. The most impactful algorithmic research focuses on solving the unsolved problem. This research is groundbreaking and almost limitless in impact. Whole new fields of work can erupt from these discoveries. Not surprisingly, this sort of research is the most poorly supported. Despite its ability to have enormous and far-reaching impact, this research is quite risky and prone to failure.

If failure is not an option, then neither is success.

― Seth Godin

It is the epitome of the risk-reward dichotomy. If you want a big reward, you need to take a big risk, or really lots of big risks. We as a society completely suck at taking risks. Algorithm research is just one of innumerable examples. Today we don't do risk and we don't do long-term. Today we do low-risk and short-term payoff.

Redesigning your application to run multithreaded on a multicore machine is a little like learning to swim by jumping into the deep end.

—Herb Sutter

A second and kindred version of this research is the development of improved solutions. These improvements can provide a lower cost of solution through better scaling of operation count, or better accuracy. These innovations can provide new vistas for computing and enable the solution of new problems by virtue of efficiency. This sort of research can be groundbreaking when it enables something that previously couldn't be reached due to inefficiency.

This form of algorithm research has been a greater boon to the efficiency of computing than Moore's law. A sterling example comes from numerical linear algebra, where costly methods have been replaced by methods that put solving billions of equations simultaneously well within reach of existing computers. Another really good example was the breakthroughs in the 1970s by Jay Boris and Bram Van Leer, whose discretization methods allowed an important class of problems to be solved effectively. This powered a massive explosion in the capacity of computational fluid dynamics (CFD) to produce meaningful results. Without their algorithmic advances CFD might still be ineffective for most engineering and science problems.
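
To illustrate what "better scaling of operation count" means in the linear algebra case, here is a hedged sketch of my own. It contrasts a dense direct solve, whose work grows roughly like n cubed, with an iterative conjugate gradient solve that exploits sparsity and costs only a few vector operations per iteration. The tridiagonal test matrix and the problem size are hypothetical choices made for illustration, not examples drawn from the post.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000  # hypothetical problem size

# A sparse, symmetric positive definite test matrix (1-D Laplacian-like).
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Dense direct solve: roughly O(n^3) work and O(n^2) storage.
x_direct = np.linalg.solve(A.toarray(), b)

# Iterative conjugate gradient: each iteration is O(n) for this matrix,
# which is why much larger systems stay within reach of modest computers.
x_cg, info = cg(A, b)

print("CG convergence flag:", info)
print("max difference between the two solutions:", np.max(np.abs(x_direct - x_cg)))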

The third kind of algorithm research is focused on the computational implementation of existing algorithms. Typically, these days, this involves making an algorithm work on parallel computers, and more and more it focuses on GPU implementations. This research certainly adds value and improves efficiency, but its impact pales in comparison to the other kinds of research. Not that it isn't important or useful; it simply doesn't carry the same "bang for the buck" as the other two.
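
To give a small flavor of this implementation-focused work, here is a hedged sketch, entirely my own hypothetical example: an existing serial reduction is re-expressed so it runs across several processes. The algorithm itself is unchanged; only its mapping onto the hardware is.

from multiprocessing import Pool
import math

def partial_sum(bounds):
    """The existing serial algorithm, applied to one chunk of the index range."""
    lo, hi = bounds
    return sum(math.sin(i) * math.sin(i) for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 1_000_000, 4            # hypothetical problem size and process count
    step = n // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]

    # Same algorithm, re-implemented to run on several cores at once.
    with Pool(processes=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total)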

In the long run, our large-scale computations must inevitably be carried out in parallel.

—Nick Trefethen

Care to guess where we’ve been focusing for the past 25 years?

The last kind of research gets the lion's share of the attention. One key reason for this focus is the relatively low-risk nature of implementation research. It needs to be done, and it generally succeeds. Progress is almost guaranteed because of the non-conceptual nature of the work. This doesn't imply that it isn't hard or that it requires less expertise; it just can't compete with the impact of the more fundamental work. The change in computing due to the demise of Moore's law has brought parallelism, and we need to make stuff work on these computers.

Both the fundamental and the implementation-focused work are necessary and valuable to conduct, but a proper balance between them is a necessity. The lack of tolerance for risk is one of the key factors contributing to this entire problem. Low-risk attitudes contribute to the dominance of the focus on computing hardware and the appetite for the continued reign of Moore's law. They also compound and contribute to the dearth of focus on more fundamental and impactful algorithm research. We are buying massively parallel computers, and our codes need to run on them; therefore the algorithms that comprise our codes need to work on these computers. QED.

The problem with this point of view is its absolute disconnect from the true value of computing. Computing's true value comes from the ability to solve models of reality. We solve those models with algorithms (or methods). These algorithms are then represented in code for the computer to understand. Then we run them on a computer. The computer is the most distant thing from the value of computing (ironic, but true). The models are the most important thing, followed by how we solve the models using methods and algorithms.

Our current view and the national "exascale" initiative represent a horribly distorted and simplistic picture of how scientific value is derived from computing, and as such make for a poor investment strategy for the future. The computer, the thing at the greatest distance from the value, is the focus of the program. In fact the emphasis in the national program sits at the opposite end of the spectrum from the value.

I only hope we get some better leadership before this simple-minded mentality savages our future.

Extraordinary benefits also accrue to the tiny majority with the guts to quit early and refocus their efforts on something new.

― Seth Godin

