
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: April 2015

Progress should be Mandatory

Friday, 24 April 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

We appear to be living in a golden age of progress. I’ve come increasingly to the view that this is false. We are living in an age that is enjoying the fruits of a golden age and coasting on the inertia of a scientific golden age. The forces powering the “progress” we enjoy are not being replenished for future generations. So, what are we going to do when we run out of the gains made by our forebears?

Progress is a tremendous bounty to all. We can all benefit from wealth, longer and healthier lives, greater knowledge and general well-being. The forces arrayed against progress are small-minded and petty. For some reason these small-minded and petty interests have swamped the forces for good and beneficial efforts. Another way of saying this is that the forces of the status quo are working to keep change from happening. The status quo forces are powerful and well-served by keeping things as they are. Income inequality and conservatism are closely related because progress and change threaten those who benefit most from the current arrangement. The people at the top favor keeping things just as they are.

 Those who do not move, do not notice their chains.

― Rosa Luxemburg

Most of the technology that powers today’s world was actually developed a long time ago. Today the technology is simply being brought to “market”. Technology at a commercial level has a very long lead-time. The breakthroughs in science that surrounded the effort fighting the Cold War provide the basis of most of our modern society. Cell phones, computers, cars, planes, etc. are all associated with the science done decades ago. The road to commercial success is long and today’s economic supremacy is based on yesterday’s investments.

Without deviation from the norm, progress is not possible.

― Frank Zappa

Since the amount of long-term investment today is virtually zero, we can expect virtually zero return down the road. We aren’t effectively putting resources into basic or applied research, much as we aren’t keeping up with roads and bridges. Our low-risk approach to everything is sapping the vitality from research. We compound this by failing to keep our 20th Century infrastructure healthy, and completely failing to provide a 21st Century one (just look at our pathetic internet speeds). Even where we spend lots of money on things like science, little investment is happening because of the dysfunctional system. One of the big things hurting any march toward progress is the inability to take risks. Because failure is so heavily penalized, people won’t take the risks necessary for success. If you can’t fail, you can’t succeed either. It is an utterly vicious cycle that highlights the nearly complete lack of leadership. The lack of action by national leadership is simply destroying the quality of our future.

Restlessness is discontent — and discontent is the first necessity of progress. Show me a thoroughly satisfied man — and I will show you a failure.

― Thomas A. Edison

Take high-performance computing as an example. In many respects the breakthroughs in algorithms have been as important as the computers themselves. The lack of risk taking has elevated the computers as the source of progress because of Moore’s law. Algorithmic work is more speculative and hence risky. Payoffs are huge, but infrequent, and the effort expended might yield nothing at all. There shouldn’t be anything wrong with that! But because algorithms are risky, they are not favored.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

― Alan Turing

A secondary impact of the focus on computers is that the newer computing approaches are really hard to use. It is a very hard problem simply to get the old algorithmic approaches to work at all. With so much effort going into implementation, and being siphoned from new algorithmic research, the end product is stagnation. Numerical linear algebra is a good example of this terrible cycle in action. The last real algorithmic breakthrough was multigrid, roughly 30 years ago. Since then work has focused on making the algorithms work on massively parallel computers.

Progress always involves risk; you can’t steal second base and keep your foot on first.

― F.W. Dupee

The net result is lack of progress. Our leaders are seemingly oblivious to the depth of the problem. They are too caught up in trying to justify the funding for the path they are already taking. The damage done to long-term progress is accumulating with each passing year. Our leadership will not put significant resources into things that pay off far into the future (what good will that do them?). We have missed a number of potentially massive breakthroughs chasing progress from computers alone. The lack of perspective and balance in the course charted for progress shows a stunning lack of knowledge of the history of computing. The entire strategy is remarkably bankrupt philosophically. It is playing to the lowest intellectual denominator. An analogy that does the strategy too much justice would compare this to rating cars solely on the basis of horsepower.

A person who makes few mistakes makes little progress.

― Bryant McGill

The end product of our current strategy will ultimately starve the World of an avenue for progress. Our children will be those most acutely impacted by our mistakes. Of course we could chart another path that balanced computing emphasis with algorithms, methods and models. Improvements in our grasp of physics and engineering should probably be in the driver’s seat. This would require a significant shift in the focus, but the benefits would be profound.

One of the most moral acts is to create a space in which life can move forward.

― Robert M. Pirsig

What we lack is the concept of stewardship to combine with leadership. Our leaders are stewards of the future, or they should be. Instead they focus almost exclusively on the present with the future left to fend for itself.

 

Human progress is neither automatic nor inevitable… Every step toward the goal of justice requires sacrifice, suffering, and struggle; the tireless exertions and passionate concern of dedicated individuals.

― Martin Luther King Jr.

 

 

Uncertainty Quantification is Certain to be Incomplete

Friday, 17 April 2015

Posted by Bill Rider in Uncategorized

≈ 2 Comments

Maturity, one discovers, has everything to do with the acceptance of ‘not knowing.’

― Mark Z. Danielewski

Uncertainty quantification is a hot topic. It is growing in importance and practice, but people should be realistic about it. It is always incomplete. We hope that we have captured the major forms of uncertainty, but the truth is that our assumptions about simulation blind us to some degree. This is the impact of “unknown knowns,” the assumptions we make without knowing we are making them. In most cases our uncertainty estimates are held hostage to the tools at our disposal. One way of thinking about this treats the codes as the tools, but the issue actually goes far deeper, down to the basic foundation we build our modeling of reality upon.

… Nature almost surely operates by combining chance with necessity, randomness with determinism…

― Eric Chaisson

One of the really uplifting trends in computational simulation is the focus on uncertainty estimation as part of the solution. This work serves the demands of decision makers who increasingly depend on simulation. The practice allows simulations to come with a multi-faceted “error” bar. Just like the simulations themselves, the uncertainty estimate is going to be imperfect, and typically far more imperfect than the simulations themselves. It is important to recognize the nature of the imperfection and incompleteness inherent in uncertainty quantification. The uncertainty itself comes from a number of sources, some of which blur into one another.

Sometimes the hardest pieces of a puzzle to assemble are the ones missing from the box.

― Dixie Waters

Let’s explore the basic types of uncertainty we study:

Epistemic: This is the uncertainty that comes from lack of knowledge. It could be associated with our imperfect modeling of systems, phenomena, or materials. It could come from our lack of knowledge regarding the precise composition and configuration of the systems we study. It could come from the lack of modeling for physical processes or features of a system (e.g., neglecting radiation transport, or relativistic effects). Epistemic uncertainty is the dominant form of uncertainty reported because tools exist to estimate it, and those tools can treat simulation codes like “black boxes”.

Aleatory: This is uncertainty due to the variability of phenomena; the weather is the everyday example. The archetype of variability is turbulence, but also think about the detailed composition of every single device: they are all different to some small degree, never mind their history after being built. To some extent aleatory uncertainty is associated with a breakdown of the continuum hypothesis and is distinctly scale dependent. As things are simulated at smaller scales, different assumptions must be made. Systems vary across a range of length and time scales, and as those scales come into focus their variation must be simulated. One might argue that this is epistemic, in that if we could measure things precisely enough then the system could be precisely simulated (given the right equations, constitutive relations and boundary conditions). This point of view is rational and constructive only to a small degree. For many systems of interest chaos reigns and measurements will never be precise enough to matter. By and large this form of uncertainty is simply ignored because simulations can’t provide the information (a sketch of separating aleatory from epistemic uncertainty follows this list).

Numerical: Simulations involve taking a “continuous” system and cutting it up into discrete pieces. Insofar as the equations describe reality, the solutions should approach the correct solution as these pieces get more numerous (and smaller). This is the essence of mesh refinement. Computational simulation is predicated upon this notion to an increasingly ridiculous degree. Regardless of the viability of the notion, the approximations made numerically are a source of error to be included in any error bar. Too often these errors are ignored, wrongly assumed to be small, or incorrectly estimated. There is no excuse for this today.

Users: The last source of uncertainty examined here is the people who use the codes and construct the models to be solved. As problem complexity grows, the decisions in modeling become more subtle and prone to variability. Quite often modelers of equal skill will come up with distinctly different answers or uncertainties. Usually a problem is only modeled once, so this form of uncertainty (or the requisite uncertainty on the uncertainty) is completely hidden from view. Unless there is an understanding of how the problem definition and solution choices impact the solution, this uncertainty will go unquantified. It is almost always larger for complex problems, where the simulations are less likely to be conducted by independent teams. Studies have shown it to be as large as or larger than the other sources! Almost the only place this has received any systematic attention is nuclear reactor safety analysis.
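As a concrete illustration of keeping the first two categories separate, here is a minimal sketch in Python of the nested “double loop” idea: the outer loop samples an epistemic unknown (a model coefficient known only to lie in an interval), the inner loop samples aleatory variability, and each outer sample yields its own distribution of outcomes. The model f and every number in it are hypothetical stand-ins, not anything from a real code.

import numpy as np

rng = np.random.default_rng(1)

def f(k, load):
    # toy response model standing in for a simulation code (hypothetical)
    return k * load**2

n_epistemic, n_aleatory = 20, 1000
percentiles_95 = []
for _ in range(n_epistemic):
    # epistemic: coefficient only known to lie in an interval
    k = rng.uniform(0.8, 1.2)
    # aleatory: the load varies from unit to unit
    load = rng.normal(10.0, 1.0, size=n_aleatory)
    percentiles_95.append(np.percentile(f(k, load), 95))

# the epistemic spread shows up as a range of 95th percentiles,
# not as a single number
print(min(percentiles_95), max(percentiles_95))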

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

― Albert Einstein

One has to acknowledge that the line between epistemic and aleatory is necessarily fuzzy. In a sense the balance is tipped toward epistemic because the tools exist to study it. At some level this is a completely unsatisfactory state of affairs. Some features of systems arise from the random behavior of the constituent parts of the system. Systems and their circumstances are always just a little different, and these differences yield differences (sometimes slight) in the response. Sometimes these small differences create huge changes in the outcomes. It is these huge changes that drive a great deal of worry in decision-making. Addressing these issues is a huge challenge for computational modeling and simulation; a challenge that we simply aren’t addressing at all today.

Why?

The assumption of an absolute determinism is the essential foundation of every scientific enquiry.

― Max Planck

images-1A large part of the reason for failing to address these matters is the implicit, but slavish devotion to determinism. Simulations are almost always viewed as the solution to a deterministic problem. This means there is AN answer. Answers are almost never sought in the sense of a probability distribution. Even probabilistic methods like Monte Carlo are trying to approach the deterministic solution. Reality is almost never AN answer and almost always a distribution. What we end up solving is the mean expected response of a system to the average circumstance. What is actually observed is a distribution of responses to a distribution of circumstances. Often the real question to answer in any study (with or without simulation) is what’s the worse that can reasonably happen? A level of confidence that says 95% or 99% of the responses will be less than some bad level usually defines the desired result. This sort of question is best thought of as aleatory, and our current simulation capability doesn’t begin to address it.
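To make the distinction concrete, here is a minimal sketch (with a hypothetical response function and made-up numbers) contrasting the single answer obtained by running the “average” circumstance with the distribution of responses obtained from a distribution of circumstances, and the 95th-percentile question described above.

import numpy as np

rng = np.random.default_rng(0)

def response(x):
    # toy nonlinear response standing in for a simulation (hypothetical)
    return np.exp(0.3 * x)

x_mean = 5.0
x_samples = rng.normal(loc=x_mean, scale=1.0, size=100_000)

deterministic_answer = response(x_mean)   # "AN answer"
responses = response(x_samples)           # a distribution of answers

print("response at the mean input:", deterministic_answer)
print("mean of the responses:     ", responses.mean())  # not the same number
print("95th percentile response:  ", np.percentile(responses, 95))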

 

When your ideas shatter established thought, expect blowback.

― Tim Fargo

The key aspect of this entire problem is a slavish devotion to determinism in modeling. Almost every modeling discipline sees the solution being sought as utterly deterministic. This is logical if the conditions being modeled are known with exceeding precision. The problem is that such precision is virtually impossible for any circumstance. This is the core of the problem with simulating the aleatory uncertainty that so frequently remains untreated. It is almost completely ignored because of a host of fundamental assumptions in modeling that are inherited by simulations. These assumptions are holding back real progress in a host of fields of major importance.

Finally we must combine all these uncertainties to get our putative “error bar”. There are a number of ways to go about this combination, each with different properties. The most popular, knee-jerk approach is to use the root sum of squares of the contributions (the square root of the sum of the squares). The sum of the absolute values would be a better and safer choice, since it is always at least as large (hence more conservative) than the root sum of squares. If you’re feeling cavalier and want to live dangerously, just use the largest single uncertainty. Each of these choices is tied to probabilistic assumptions; the sum of squares, for instance, amounts to assuming independent, normally distributed contributions.
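For reference, the three combination rules read as follows in code; the numbers are hypothetical, and which rule is defensible depends on the probabilistic assumptions you are willing to make.

import numpy as np

u = np.array([2.0, 1.5, 0.5])   # hypothetical uncertainty contributions

u_rss = np.sqrt(np.sum(u**2))   # root sum of squares (independent, normal)
u_sum = np.sum(np.abs(u))       # sum of absolute values (more conservative)
u_max = np.max(np.abs(u))       # largest single contribution (optimistic)

print(u_rss, u_sum, u_max)      # about 2.55, 4.0, 2.0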

 

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.

― Arthur Stanley Eddington

One of the most pernicious and deepening issues associated with uncertainty quantification is “black box” thinking. In many cases the simulation code is viewed as a black box whose workings the user knows very little about beyond a purely functional level. This often results in generic and generally uninformed decisions being made about uncertainty. The expectations of the models and numerical methods are understood only superficially, and this results in a superficial uncertainty estimate. Often the black box thinking extends to the tool used to get the uncertainty too. We then get the result from a superposition of two black boxes. Not a lot of light gets shed on reality in the process. Numerical errors are ignored, or simply misdiagnosed. Black box users often simply do a mesh sensitivity study and assume that small changes under mesh variation are indicative of convergence and small errors. They may or may not be such evidence. Without doing a more formal analysis this sort of conclusion is not justified. If the code and problem are not converging, the small changes may be indicative of very large numerical errors or even divergence and a complete lack of control.
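The kind of formal analysis called for above can be quite small. Here is a sketch, with hypothetical numbers, of estimating the observed order of convergence and a Richardson-style error estimate from results on three systematically refined meshes; small changes alone, without a check like this, establish nothing.

import math

f_coarse, f_medium, f_fine = 1.120, 1.045, 1.012   # hypothetical results
r = 2.0                                            # mesh refinement ratio

# observed order of accuracy from the three solutions
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)

# Richardson extrapolation and an error estimate for the fine-mesh result
f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1.0)
error_estimate = abs(f_fine - f_extrapolated)

print("observed order p =", p)             # about 1.2 for these numbers
print("estimated error  =", error_estimate)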

Whether or not it is clear to you,

no doubt the universe is unfolding

as it should.

― Max Ehrmann

The answer to this problem is deceptively simple: make things “white box”. The problem is that making our black boxes into white boxes is far from simple. Perhaps the hardest thing about this is having the people doing the modeling and simulation possess sufficient expertise to treat the tools as white boxes. A more reasonable step forward is for people to simply realize the dangers inherent in the black box mentality.

Science is a way of thinking much more than it is a body of knowledge.

― Carl Sagan

In many respects uncertainty quantification is in its infancy. The techniques are immature and terribly incomplete. Beyond that, we are deeply tied to modeling philosophies that hold us back from progress. The whole field needs to mature and throw off the shackles imposed by the legacy of Newton and the rule of determinism that still holds much of science under its spell.

The riskiest thing we can do is just maintain the status quo.

― Bob Iger

 

The Profound Costs of End of Life Care for Moore’s Law

Friday, 10 April 2015

Posted by Bill Rider in Uncategorized

≈ 1 Comment

When you stop growing you start dying.

― William S. Burroughs

Moore’s law isn’t a law, but rather an empirical observation that has held sway for far longer than could have been imagined fifty years ago. In some way, shape or form, Moore’s law has provided a powerful narrative for the triumph of computer technology in our modern World. For a while it seemed almost magical in its gift of massive growth in computing power over the scant passage of time. Like all good things, it will come to an end, and soon if not already.

Its death is an inevitable event, and those who have become overly reliant upon its bounty are quaking in their shoes. For the vast majority of society Moore’s law has already faded away. Our phones and personal computers no longer become obsolete due to raw performance every two or three years. Today obsolescence comes from software, or advances in the hardware’s capability to be miserly with power (longer battery life on your phone!). Scientific computing remains fully in the grip of Moore’s law fever! Much in the same way that death is ugly and expensive for people, the death of Moore’s law for scientific computing will be the same.

Nothing can last forever. There isn’t any memory, no matter how intense, that doesn’t fade out at last.

― Juan Rulfo

One of the most pernicious and difficult problems with health care is end of life treatment (especially in the USA). An enormous portion of the money spent on a person’s health care is focused on the end of life (25% or more). Quite often these expenses actually harm people and reduce their quality of life. Rarely do the expensive treatments have a significant impact on the outcomes, yet we spend the money because death is so scary and final. The question I’m asking is whether we are about to do exactly the same thing with Moore’s law in scientific computing?

Yes.

Moore’s law is certainly going to end. In practical terms it may already be dead, holding only for completely impractical stunt calculations. If one looks at the scaling of calculations with modest practical importance, such as the direct numerical simulation of turbulence, the conclusion is that Moore’s law has passed away. The growth in capability has simply fallen dramatically off the pace we would expect from Moore’s law. If one looks at the rhetoric in the national exascale initiative, the opposite case is made. We are going forward guns blazing. The issue is just the same as end-of-life care for people: is the cost worth the benefit?
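One rough way to see this, using the commonly quoted scaling (my own back-of-envelope estimate, not a figure from the post): the cost of direct numerical simulation of turbulence grows roughly as the cube of the Reynolds number, so even enormous gains in raw computing power buy only modest gains in the flows we can resolve.

compute_gain = 1000.0                       # a thousand-fold increase in flops
reynolds_gain = compute_gain ** (1.0 / 3.0)
print(reynolds_gain)                        # only about a 10x higher Reynolds number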

It’s hard to die. Harder to live.

― Dan Simmons

The computers that are envisioned for the next decade are monstrosities. They are impractical and sure to be nearly impossible to use. They will be unreliable. They will be horrors to program. Almost everything about these computers is utterly repulsive to contemplate. Most of all these computers will be immense challenges to conduct any practical work on. Despite all these obvious problems we are going to spend vast sums of money acquiring these computers. All of this stupidity will be in pursuit of a vacuous goal of the fastest computer. It will only be the fastest in terms of a meaningless benchmark too.

Real dishes break. That’s how you know they’re real.

― Marty Rubin

For those of us doing real practical work on computers this program is a disaster. Even doing the same things we do today will be harder and more expensive. It is likely that the practical work will get harder to complete and more difficult to be sure of. Real gains in throughput are likely to be far less than the reported gains in performance attributed to the new computers too. In sum the program will almost certainly be a massive waste of money. The plan is for most of the money to go to the hardware and the hardware vendors (should I say corporate welfare?). All of this will be done to squeeze another 7 to 10 years of life out of Moore’s law even though the patient is metaphorically in a coma already.

The bottom line is that the people running our scientific computing programs think that they can sell hardware. The parts of scientific computing where the value comes from can’t be persuasively sold. As a result modeling, methods, algorithms and all the things that make scientific computing actually worth doing are starved for support. Worse yet, the support they do receive is completely swallowed up by trying to simply make current models, methods and algorithms work on the monstrous computers we are buying.

What would be a better path for us to take?

Let Moore’s law die, hold a wake and chart a new path. Instead of building computers to be fast, build them to be useful and easy to use. Start focusing some real energy on modeling, methods and algorithms. Instead of solving the problems we had in scientific computing from 1990, start working toward methodologies that solve tomorrow’s problems. All the things we are ignoring have the capacity to add much more value than our present path.

For nothing is evil in the beginning.

― J.R.R. Tolkien

The irony of this entire saga is that computing could mean so much more to society if we valued the computers themselves less. If we simply embraced the inevitable death of Moore’s law we could open the doors to innovation in computing instead of killing it in pursuit of a foolish and wasteful extension of its hold.

The most optimistic part of life is its mortality… God is a real genius.

― Rana Zahid Iqbal

 

The obvious choice is not the best choice

Friday, 3 April 2015

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.

― Isaac Asimov

If someone gives you some data and asks you to fit a function that “models” the data, many of you know the intuitive answer: “least squares”. This is the obvious, simple choice, and perhaps not surprisingly, not the best answer. How bad this choice is depends on the situation. One way to do better is to recognize the situations where a least squares solution may be problematic and exert an undue influence on the results.

Most of our assumptions have outlived their uselessness.

― Marshall McLuhan

To say that this problem is really important to the conduct of science is a vast understatement. The reduction of data is quite often posed in terms of a simple model (linear in the important parameters) and solved via least squares. The data is often precious, or very expensive to measure. Given the importance of data in science it is ironic that we should so often take the final hurdle so cavalierly and apply such a crude method as least squares to analyze it. More properly, we don’t consider the consequences of such an important choice; usually it isn’t even thought of as a choice.

That’s the way progress works: the more we build up these vast repertoires of scientific and technological understanding, the more we conceal them.

― Steven Johnson

The key to this is awakening to the assumptions made in least squares. The central assumption concerns the nature of the errors in the fit, which for least squares are taken to follow normally distributed (Gaussian) statistics. If you know this to be true then least squares is the right choice. If it is not true, then you might be introducing a rather significant assumption (a known unknown, if you will) into your fit. In other words, your results will be based upon an assumption you don’t even know that you made.
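To spell the assumption out (standard statistics, added here for reference rather than taken from the post): least squares chooses the parameters by minimizing the sum of squared residuals,

\[
\hat{\beta} = \arg\min_{\beta} \sum_i \bigl( y_i - f(x_i; \beta) \bigr)^2 ,
\]

and this is exactly the maximum-likelihood estimate when the residuals are independent and normally distributed with a common variance. If the errors are heavier-tailed than Gaussian, that justification evaporates along with the optimality of the fit.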

If your data and model match quite well and the deviations are small, it also may not matter (much). This doesn’t make least squares a good choice, just not a damaging one. If the deviations are large or some of your data might be corrupt (i.e., outliers), the choice of least squares can be catastrophic. The corrupt data may have a completely overwhelming impact on the fit. There are a number of methods for dealing with outliers within least squares, and in my opinion none of them is good.

 The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes

Fortunately there are existing methods that are free from these pathologies. For example the least median deviation fit can deal with corrupt data easily. It naturally excludes outliers from the fit because of a different underlying model. Where least squares is the solution of a minimization problem in the energy or L2 norm, the least median deviation fit uses the L1 norm. The problem is that the fitting algorithm is inherently nonlinear, and generally not included in most software.
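Here is a minimal sketch of the contrast, using a least absolute deviations (L1) fit as the robust alternative in the spirit of the discussion above; the data, the outlier, and the use of a general-purpose nonlinear optimizer are my own illustration rather than the post’s method.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, x.size)
y[15] += 30.0                      # one corrupt point (an outlier)

# ordinary least squares (L2) line fit
slope_l2, intercept_l2 = np.polyfit(x, y, 1)

# least absolute deviations (L1) fit via nonlinear minimization
def l1_objective(params):
    a, b = params
    return np.sum(np.abs(y - (a * x + b)))

result = minimize(l1_objective, x0=[slope_l2, intercept_l2], method="Nelder-Mead")
slope_l1, intercept_l1 = result.x

print("L2 fit:", slope_l2, intercept_l2)   # dragged toward the outlier
print("L1 fit:", slope_l1, intercept_l1)   # stays close to slope 2, intercept 1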

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham Maslow

One of the problems is that least squares is virtually knee-jerk in its application. It is contained in standard software such as Microsoft Excel and can be applied with almost no thought. If you have to write your own curve-fitting program, by far the simplest approach is to use least squares. It often produces a linear system of equations to solve, where the alternatives are invariably nonlinear. The key point is to realize that this convenience has a consequence. If your data reduction is important, it might be a good idea to think a bit more about what you ought to do.
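The convenience has a concrete form (standard material, added for reference): for a model that is linear in its parameters, \( y \approx X\beta \), minimizing the sum of squared residuals leads to the linear normal equations

\[
X^{\top} X \, \hat{\beta} = X^{\top} y ,
\]

a single matrix solve. Replacing squared residuals with absolute or median deviations destroys that linearity, which is why the robust alternatives need an iterative, nonlinear solver.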

Duh.

The explanation requiring the fewest assumptions is most likely to be correct.

― William of Ockham

