The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: September 2014

What is the source of the USA’s tilt to the right?

26 Friday Sep 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

The United States has become an exceedingly conservative country over the past 40 or so years. It is so far to the right politically that one can argue that the current president, who seemingly represents the left, is more conservative than the Republican president of 40 years ago, Nixon. Nixon's policies, such as founding the EPA, pulling out of Vietnam, his Court appointees, and détente with China, would all peg him to the left of Obama. Despite this, in a stunning departure from historical reality, the right constantly charges Obama with being socialist, communist and worse. None of these charges has even the slightest basis in fact, yet they continue. What the hell is going on? How did we get here?

I have a theory that at least should be thought about.

Belief can be manipulated. Only knowledge is dangerous.

― Frank Herbert

Perhaps the nature of the corporate environment has played a key role in paving the way for and culturally normalizing the conservative swing the USA has experienced. For the middle span of the 20th Century, corporations operated under conditions where they were expected to be profitable, but also to operate in the best interests of all their stakeholders, not just their stockholders. These stakeholders comprised customers, employees and the communities where they operated, in addition to the stockholders. This formed a deep social contract among business, the country and its citizens, and it became the operational ethos of that time. It was the time when the United States had its greatest standing relative to the rest of the World. It was the ethos that allowed the middle class to rise to prominence and vibrancy. It was the ethos that ultimately opened the door to progress and social change that benefited everyone.

This ethos is dead, and it has been replaced with an entirely different social contract.

So the question is, do corporate executives, provided they stay within the law, have responsibilities in their business activities other than to make as much money for their stockholders as possible? And my answer to that is, no they do not.

– Milton Friedman

The true depth of the issue is the capacity of American voters to vote, time and time again, against their own best interests. It is virtually a form of self-abuse. They have consistently voted for governance that hurts them, their families and the communities where they live. These are the so-called values voters. They consistently vote on issues such as abortion, race, guns, God and gays. They consistently get the shaft economically from their employers and the government. At the same time, their employers are benefiting from the same decisions that hurt these voters.

Perhaps another component is playing a key role in this dynamic. We are creating a national climate that harkens back to the nature of our corporate culture. The same dynamic that is burying the middle class in lower wages, debt and reduced social mobility is playing out in the public and private spheres. Are our dysfunctional workplaces paving the road for normalizing the dysfunction in our public lives? The loss of job security and the general economic malaise offered to the rank-and-file corporate employee is virtually identical to the sort of governance offered by the GOP and the associated swing to the right by the Democrats (the so-called liberals). Could it be that corporate life is simply paving the road for the sort of government envisioned by Grover Norquist (and Milton Friedman)? At the very least, the two have a deeply symbiotic relationship.

I am in favor of cutting taxes under any circumstances and for any excuse, for any reason, whenever it’s possible.

– Milton Friedman

Beginning in the 1970's, business adopted a different vision as an operating principle. The Chicago school of economics revolted against the social contract and made return to stockholders the sole principle of business. All other stakeholders were deemed unimportant and outside the responsibility of business. The only other stipulation was to play by the "rules". Over time, the Nation's laws have been modified to adopt these principles and to make their operation even more profitable (or at the very least a moneymaker for the ruling class). Businesses still play by the rules, even if they are writing these rules via forms of legalized bribery. This bribery has been legalized through the Supreme Court's ironically named Citizens United case. They follow the rules, but the morality of their actions is unquestionably dark and self-serving.

The people who can destroy a thing, they control it.

― Frank Herbert

We have seen greed and vicious self-interest replace socially conscious management. Gone are the days when businesses and businessmen were pillars of society. We have entered a period where modern-day Robber Barons rule our country. The question is what came first, the conservative political movement or the greedy business principles? To what extent are they the same thing? Or do they exist in a symbiotic relationship, self-reinforcing their capacity to damage our Nation so that a few people can siphon their fortunes from the carcass of a once great Nation? So did Friedman's perverse ideas lead the way, or did the spirit of the National Review and Nixon's petty tyranny give birth to the Reagan revolution?

The real heart of the votes that allow the conservative movement's rule is the simmering hatred of the Old South that forms the backbone of the Republican majority. This has formed a coalition that feeds off of deep racial hatred and fear, coupled with money and greed from the top of the corporate food chain. Our politics serves the interests of the rich, and they have turned the seething historical hatred and racism to the dark purpose of sustaining their stranglehold on the Nation's wealth.

Maybe the end of this reign of terror is in sight?

On the face of it, shareholder value is the dumbest idea in the world.

– Jack Welch (2009, Mr. Rank and Yank)

Shareholder value is a result, not a strategy…your main constituencies are your employees, your customers and your products.

– Jack Welch (2009, Mr. Rank and Yank)

The single thing that seems to be at the core of the philosophy we have adopted as a Nation is the inherently short-term horizon for all decisions. As a Nation we no longer act strategically; everything is tactical. Companies are run on the basis of the quarterly report, and investors will divest themselves once the purpose of maximizing their take has been fulfilled. They turn their attention to the next victim. Our government is equally short-sighted, and it is arguable that the longest time horizon available is the four-year presidential cycle, with the annual budget or biennial congressional elections forming a stronger basis for the time horizon that matters. Even these seem long compared to the manic pace at which business lives and dies. We act as if we have no future, and perhaps we don't.

If you want a new tomorrow, then make new choices today.

― Tim Fargo

The question is whether we can change our ways. Could a different business ethos help pave the way? Would our business leaders stand for a future where they make less money and take more responsibility for the future? The bottom line would be measured in more than dollars: in customers, prospering employees and a society that values their work. It would be a much better future than the one we are currently creating.

Rank does not confer privilege or give power. It imposes responsibility.

–Peter Drucker

Free enterprise cannot be justified as being good for business. It can be justified only as being good for society.

–Peter Drucker

Planning to Fail

23 Tuesday Sep 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

It is getting on toward late September, and we approach the vaunted holiday of the Fiscal New Year, when government projects, both new and old, start their annual cycle. In the last couple of months of the current fiscal year we make our plans for the following year. All this planning usually entails setting objectives and goals, with a lucky few getting major milestones to gauge progress. In most cases we have effectively planted the seeds of failure in the process. We are literally planning our failure in advance.

It is hard to fail, but it is worse never to have tried to succeed.

― Theodore Roosevelt

I'll be right up front with suggesting something better, something that might include the current plans as a part, but in a far different role. Why not choose to make the primary goals in planning something aspirational? Something worth achieving and worth pride of ownership and commitment, rather than the pre-packaged mediocrity we accept today. I would suggest having three distinct levels of deliverables and milestones:
1. What you would hope and love to achieve.
2. What you will probably achieve.
3. What the worst possible outcome would be.
The point is to define a set of goals that spans the potential outcomes, from the best to the worst of what might be achieved. Right now, we tend to define something between 2 and 3, which is serving us quite poorly. We are increasingly heirs to a legacy of mediocrity where greatness should have reigned.

Without leaps of imagination or dreaming, we lose the excitement of possibilities. Dreaming, after all, is a form of planning.

― Gloria Steinem

In the current milieu you almost never hear aspirational goals as part of research (I will touch on one case below!). Increasingly, the goal is to play it safe and deliver what you signed up for. This goes hand-in-hand with the mantra of "under-promise and over-deliver," which is almost never the actual outcome. Most projects are under-promised and then delivered in exactly that under-promised manner. I've been guilty of it myself; it is the standard MO. Our collective horizon of achievement is increasingly defined by fear of failure rather than hope for achievement. We lack a vision for accomplishing anything great, memorable and astonishing.

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

We are sacrificing our future on the altar of management while systematically undermining our technological and scientific supremacy in the process. In many key areas, such as applied mathematics, computational science and engineering, among others, we have already ceded the lead to other parts of the globe. I've watched this happening for the length of my career as once great institutions crumble under the weight of the current climate of incompetence masquerading as "best practices" borrowed from our morally and ethically bankrupt corporate culture. Just as the corporate culture is destroying the middle class in the United States, these practices are hollowing out our ability to achieve great things in science, engineering and technology. Key among these forces is the absolutely overwhelming attention to the short term, coupled with the devaluation of the long term. With the long term goes the ability to dream of achieving anything great. Great things take time and commitment, something our management culture does not allow.

Don’t mistake activity with achievement.

― John Wooden

The climate of planning and the demands for accountability are creating an environment that has stifled quality, innovation and progress. Its ability to choke the vibrancy from research has reached a crisis level, with the mindset creating the direct opposite of the intended outcome (at least I'm assuming the intent is positive). All of this is packed into the soul-crushing drumbeat of the quarterly progress report and constant reviews. Those of us going through this process have almost uniformly adopted a way of managing this cycle of mediocrity. One always picks deliverables that have already been completed, or can be completed with no risk. Milestones are the same; the work has already been done. All that is needed for success is just showing up and checking the boxes. No one is supposed to put forth any goals that might be challenging or might possibly be missed. No risks should be taken.

We have taken World-class scientific institutions and systematically weakened or destroyed them in the name of management. Leadership of any sort is virtually absent, relegated to the stuff of legend. Our science is woefully over-managed and depressingly under-led. Our project plans say otherwise: we are successful every year, decade after decade, as we march toward a future that will undoubtedly punish us for our naivety and sense of entitlement. Part of the problem is the inability of those in charge of our Country to properly articulate and recognize the key difference between failure due to incompetence and failure arising from a well-crafted attempt to succeed at something majestic. In today's America there isn't any difference, and people have responded with the safety delivered by mediocrity.

We can thank our increasingly greedy and dysfunctional corporate culture, which has been mindlessly adopted as the model for managing anything. This has created the quarterly progress mentality, which must always succeed. Corporate America has the same mantra: a company must show a good balance sheet every quarter or suffer the wrath of the stock market. The balance sheets are cooked and companies are shaken down regardless of the long-term damage done. The long-term perspective has been destroyed. Companies invest little or nothing in R&D, continually savaging their own future to assure a good quarterly report. Given this model of propriety, we shouldn't express any surprise over the damage done where this approach has been adopted. Still, some things slip through the cracks.

The National Ignition Facility (NIF) is a shining example of failure at high-level goals. I have to admit feeling a certain smugness when they failed, but perhaps I was too harsh. NIF had lofty and worthy goals; success would have been glorious for NIF, the Nation and the World. It is too bad they didn't succeed. Like all failures, the real failure would be not learning from the mistake. The jury is still out on whether or not they will learn.

Never was anything great achieved without danger.

― Niccolò Machiavelli

Questions remain about the nature of the failure to reach breakeven ignition. Could or should the failure have been predicted? What does it say about our knowledge of the science of inertial fusion in light of the failure? Was the confidence in ignition a sign of hubris or narcissistic chest beating? Taken in the best possible way, NIF had a lofty and worthy goal and they went for it. That is really a good thing. Now it is important that we all learn from this and that the science progresses toward a better attempt at ignition in the future. We need to understand how to predict what actually happened. We need our knowledge to grow and improve future performance. If these improvements do not materialize, then NIF really will be a failure.

What NIF probably highlights is a certain lack of quality in planning associated with the intrinsic uncertainty of outcomes. It is a case where the best-case scenario was the only one visible. In reality, there are multiple things that could happen: best-case, likely-case and worst-case outcomes. We ought to have all three firmly in mind when examining a research project. This is a place where my suggestion of spanning the possibilities would have lent much needed credibility to the research outcomes instead of leading to a public relations black eye. We should laud NIF for its vision and audacity; we need many more projects to shoot for audacious outcomes. We need this with enough honesty to admit that the likely outcome won't be anywhere near as majestic as what the best case promises.

What Would We Actually Do with An Exascale Computer?

19 Friday Sep 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

        The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.

– Nick Trefethen

The powerful truth of this observation by a famous mathematician seems to be lost in the dialog associated with computing. The result will be the waste of a lot of money and effort, with highly suboptimal results at the end. It shows the danger of defining public research policy on the basis of marketing slogans and poorly-thought-through conventional wisdom. Hopefully a more sensible and effective path can be crafted before too much longer.

One of the key unanswered questions in supercomputing is how the next generation of computers (i.e., exascale or 10^18 operations/second, http://en.wikipedia.org/wiki/Exascale_computing) would be used to the benefit of the Nation (or humanity). Thus far, the scientific community has offered up rather weak justifications for the need for exascale. In part, the scientific community should get a pass because the computing, when available, will undoubtedly be worthwhile and beneficial to discovery, industry and national security.

At a deeper level, the weakness of the case for exascale highlights the extent to which the tail is now wagging the dog. Historically, the application of computing has always been preeminent and the computers themselves were always inadequate to sate the appetite for problem solving. Now we have a meta-development program for computing that exceeds our grasp of vision for problems to solve (see for example http://www.seas.harvard.edu/news/2014/07/built-for-speed-designing-exascale-computers). In the broader computing industry the use of computing is still king, but scientific computing has for some reason lost this basic rule along the way.

Doing the right thing is more important than doing the thing right.

– Peter Drucker

The working assumption is that the bigger and faster a computer is, the better it is. This is the basis of current policy and the focus of research in supercomputing. By merely examining the broader computing industry it is easy to see how specious this assumption is. Better is a tight collaboration between the machine and how the machine is used. Apple is a great example of software and elegant design trumping pure power. The spate of recent developments in innovative applications of computing is driving the economic engine around computing. The convergence of computing with communication and mass media transcends almost anything done in computing hardware. All of these lessons are there for the picking, but completely lost on the supercomputing crowd, who are stuck in the past and following old ideas on an inertial trajectory.

This simply cannot continue; we must define problems to be solved that clearly benefit from the power provided by the computer. Where does this disconnect come from? I believe that the last 20-25 years have focused almost exclusively on developments associated with the nature of the hardware, with the need for computing taken as an a priori assumption. The failure to invest in application-focused programs, algorithms and modeling has left computing adrift. The march of the killer micros and the emergence of computing as a major axis in the economy have deepened this development. We are left with a supercomputing program lacking the visionary basis for justifying its own existence. Its basis is viewed as axiomatic. It isn't axiomatic; therefore we have a problem in need of immediate attention.

A large part of the reason for the current focus hinges upon the lack of appetite for risk taking in research. The capacity of computing to improve has been able to count on Moore's law for decades, and improvement happened without having to do much. It was basically a sure thing. Granted, the transition of computing from the Crays of the 80's and 90's to the massively parallel machines of the 21st Century was challenging, but Moore's law softened the difficulty. Risky algorithmic research was basically starved and ignored because the return from Moore's law was virtually guaranteed. The result has been a couple of decades where algorithmic advances have languished due to wanton neglect. We have been able to depend on Moore's law's almost mechanical reliability for 50 or more years. The beginning of the end is at hand, and the problems with its continuation will only mount (see the excellent post by Herb Sutter for more than I can begin to state on the topic: http://herbsutter.com/welcome-to-the-jungle/). The consequence is that risky research is the only path forward, and our current approach to managing research is woefully out of step with taking risks. If we don't embrace risky research and take some real chances, there will be a colossal crash.

It is useful to look at history for some much needed perspective on the current state of supercomputing. At the beginning of the computing age, the problems being solved were the focus of attention; the computers were necessary vehicles. For several decades the computers and the computing environment were woefully inadequate to the vision of what computational science could do. Sometime in the 1980's the computers caught up. For about 15-20 glorious years a golden age ensued where the computers and the problems being solved were in balance. Sure, the computers could have been faster, but the state of programming, visualization, methods, and all the various other aspects were in glorious harmony. This golden age of supercomputing is probably most clearly associated with Seymour Cray's computers like the Cray-1, X-MP, Y-MP and C90. Then computing became more than just mainframes for accounting and supercomputers for scientists; computing became mainstream. The harmony was upset, and suddenly the computers themselves lurched into the forefront of thought with the "killer micros" and massively parallel processors. Since that time supercomputing has become increasingly about hardware, and we have in large part lost the vision of what we are doing with it.

Doing more things faster is no substitute for doing the right things.

– S. R. Covey

This is in large part at the core of the problems with supercomputing's focus today. We are not envisioning difficult problems we want or need to solve and then finding computers that can assist us. Instead we are chasing computers designed through a Frankenstein-monster process that combines commercial viability and constraints increasingly coming from the mobile computing market with solving yesterday's problems with yesterday's methods, measured by yesterday's yardstick. Almost everything about this is wrong for supercomputing and computational science. In the process we are losing innovation in algorithms and problem-solving strategies. For example, the increasing irrelevance of applied math to computing (and particularly to the applications of computing) can be traced directly to the time when the golden era ended. In almost any area that participated in the golden era, the codes today are simply ports of the codes from that time to the new computers.

Management is doing things right. Leadership is doing the right things.

– Peter Drucker

What we need is a fresh vision of problems to solve. Problems that matter for society are essential, as are new ways of attacking old problems. As a matter of course we should not simply be solving the same old problems on a finer mesh with more resolved geometry. Instead we should be solving these problems in better ways: engaging in the design of devices and materials using optimization and deeply embedded error quantification, and solving problems with more accurate, physically faithful and efficient methods. In short, a new vision is needed, one ambitious enough to make today's computers woefully inadequate. At a minimum, the balance needs to be restored so that the application of the computers is as important as the computers themselves. We need to realize that our thinking about using supercomputers is basically "in the box". It is the same box that created computing 70 years ago, and it is time to envision something bolder. We remain stuck today in the original vision where forward simulations of physical phenomena drove the creation of supercomputing.

It has become almost a cliché to say that supercomputing is an essential element of our National Security. The forays of the Chinese into the lead in supercomputing have been used as an alarm bell to call for greater support from the government. Yesterday's missile gap has become today's supercomputer gap. Throughout the Cold War, supercomputing was used to support our national defense, especially at the cutting edge in the nuclear weapons labs. At the end of the Cold War, when underground nuclear testing ended, the USA put its nuclear stockpile's health in a scientific program known as stockpile stewardship, where supercomputing plays a major role. I have argued repeatedly that the mere power of the computers has received too much priority and the way we use the computers too little, but the overall idea has merit.

While the ascendance of the Chinese in computing power is alarming, we should be far more alarmed by the great leaps and bounds they are making in the methods to use on such computers. Their advances in computing methodology have been even more dramatic than the headline-catching fastest computers. Their investment in intellectual capital seems to greatly exceed our own. This is a far greater threat to our security than the computers themselves (or ISIS, for that matter!). It is coupled with strategic Chinese investments in their conventional and nuclear defense, including substantial improvement in technology (including innovative approaches). Today the dialog of supercomputing is cast solely in overly simplistic terms of raw power and raw speed, without the texture of how this is or can be used. We are poorly served by the dialog.

What gets measured gets improved.

– Peter Drucker

The public at large and our leaders are capable of much better thinking. In computing today the software is as important as (or more important than) the hardware. Computing and communication are becoming a single streamlined entity, and innovation is key to the industry. New applications and approaches to using the combined computing and communication capability are driving the entire area. The demands of these developments are driving the development of computing hardware in ways that are straining traditional supercomputing. Supercomputing research has been left in an extremely reactive mode of operation in response. One of the problems is the failure to pick up the mantle of innovation that has become the core of commercial computing. Another issue is a massive amount of technical debt resulting from a lack of investment and a failure to take risks.

Technical debt in computing is usually reserved for discussion of the base of legacy code that acts as an anchor on future capability. Supercomputing certainly has this issue in spades, but it may not be the deepest aspect of its massive technical debt. The knowledge base for computing is also suffering from technical debt, where we continue to use quite old methodology years past its sell-by date. Rather than developing computing along broad lines at the end of the Cold War, we produced a new cadre of legacy codes by simply porting the older methods onto a new generation of computers whole cloth. New methods for solving the equations that govern phenomena were largely avoided, and the focus was simply on getting the codes to work on the massively parallel computers. In addition, we continued to use the new computers just like the old computers. The whole enterprise was very much "inside the box" as far as the use of computers was concerned. It became a fait accompli that results would be better on the bigger computers.

One of the biggest chunks of technical debt is a well-known but very dirty secret: for real applications we get extremely poor performance on supercomputers. If a computer is rated at one petaflop, we see something like a few percent of that performance on an actual application, rendering our one-petaflop computer more like a 10 teraflop computer. This makes the government, who pays for the computing, unhappy in the extreme. The problem stems from competing interests and a genuine failure to tackle this difficult issue. The first interest is casting the new supercomputer in the best possible light. We have a benchmarking program that is functionally meaningless yet defines who is the fastest: LINPACK. This is the LU decomposition of a massive matrix, and it has almost nothing to do with any problem that people buy supercomputers to solve. It is extremely floating-point intensive and easily optimized for the machines. Even worse, its characteristics are getting further and further from what actual applications do. As a result, the distance between the advertised performance and the actual performance is growing with each passing year.

As bad as the problem is, it is about to get much worse. About 15 years ago this problem was raised as an issue by external reviews. At that time we were getting about 3-5% of the LINPACK-measured performance on new supercomputers. The numbers kept dropping (from the 20% we got on the old vector Crays of the 80's and 90's). Eventually the GAO got wind of this and started to put heat on the Labs. The end result was to play ostrich and stick our heads in the sand: the external reviews simply decided to quit asking the question because the answer was so depressing. The problem has not gotten any better; it has gotten worse. Any path to exascale computing creates computers that take all the trends that have been driving the actual performance lower and amplify them by orders of magnitude. We could be looking wistfully at the days of 1% of peak performance as the actual achieved result.

Fixing technical debt problems is much like dealing with a decaying physical infrastructure. In the USA we just patch things; we don't actually deal with the core issues. Our leadership has become increasingly superficial and incapable of investing any resources in the future value of anything. This is true of physical and cyber infrastructure, and of research and development. Our near-term risk aversion is a compounding issue, and again in supercomputing the lack of investment and risk aversion will eventually bite us hard. In many ways Moore's law has crippled our leadership in computing because it allowed a safe return on investment. Risk-averse managers could simply focus on making Moore's law work for them and guarantee a return on their work. They didn't have to invest effort in risky algorithms or applications that might have had a huge payoff but were more likely to prove to be busts. Historically these payoffs have in aggregate provided the same benefit as Moore's law, but the breakthroughs happened episodically and never followed the steady curve of progress that hardware advances produced.

Moore’s law is dying, and it might be the best thing to happen in a while. We have gotten lazy and stupid as a result, and losing its easy gains may have the benefit of waking the computing community up. Just because the hardware is not going to be improving for “free” doesn’t mean things can’t improve. You just have to look at other sources for progress. They are there; they have always been there. We just need to work harder now.

Let's move to the use cases for exascale. These come in two broad categories: "in the box" and "out of the box". As we will see, the in-the-box cases are a hard sell because they start to become ridiculous fairly fast. More than that, dealing with these cases forces us to take a long hard look at all the technical debt we have accumulated over the past couple of decades of risk-averse devotion to hardware. A perfect example is numerical linear algebra, where we haven't had an algorithmic breakthrough for 30 years. That was multigrid, which provided linear complexity. I participated in a smaller meta-breakthrough about 10 years later, when Krylov method aficionados realized that multigrid was a great preconditioner, and multigrid gurus realized that Krylov methods could make multigrid far more robust. In the 20 years since, almost all the effort has been on implementing these methods on "massively" parallel computers, and none on improving the algorithmic scaling (or complexity).
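
To make the multigrid-plus-Krylov combination concrete, here is a minimal sketch of a conjugate gradient solve preconditioned by a single algebraic multigrid V-cycle. It assumes the third-party pyamg and scipy packages and uses a model Poisson matrix from pyamg's gallery; it illustrates the idea discussed above and is not anyone's production solver.

# Sketch: multigrid-preconditioned Krylov iteration (CG), assuming pyamg and scipy.
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Model problem: 2D Poisson operator on a 500 x 500 grid (sparse, SPD).
A = pyamg.gallery.poisson((500, 500), format='csr')
b = np.random.rand(A.shape[0])

# Classical (Ruge-Stuben) AMG hierarchy, used here only as a preconditioner.
ml = pyamg.ruge_stuben_solver(A)
M = ml.aspreconditioner(cycle='V')  # one V-cycle per preconditioner application

# The Krylov method supplies robustness; the V-cycle supplies (near) linear work.
x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg stopped with info = {info}")

The division of labor is exactly the one described above: the multigrid cycle does roughly O(n) work per application, while the Krylov iteration keeps the combination robust when the hierarchy alone would stall.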

If you want something new, you have to stop doing something old.

-Peter Drucker

Of course, producing a method with sublinear scaling would be the way to go, but it isn't clear how to do it. Recently "big data" has come to the rescue, and sublinear methods are beginning to be studied. Perhaps these ideas can migrate over to numerical linear algebra and break the deadlock. It would be a real breakthrough, and for much of computational science it would make a difference equal to one or two generations of supercomputers. It would be the epitome of the sort of high-risk, high-payoff research we have been systematically eschewing for decades.

Let's get to a few use cases as concrete examples of the problem. I'll focus on computational fluid dynamics because it's classic and I know enough to be dangerous. We can look at some archetypes like direct numerical simulation of turbulence, design calculations and shock hydrodynamics. The variables to consider are the grid, the number of degrees of freedom, the operations per zone per cycle, and the number of cycles. We will apply the maxim from Edward Teller about computing,

A state-of-the-art calculation requires 100 hours of CPU time on the state-of-the-art computer, independent of the decade.

I will note that the recent study "CFD 2030" from NASA is a real tour de force and is highly, highly recommended (http://ntrs.nasa.gov/search.jsp?R=20140003093). It is truly a visionary document that steps in the right direction, including realistic estimates of the need for computing and visionary new concepts in using computing. I will use it as a resource for these ballpark, "back of the envelope" estimates. If one looks at one of the canonical problems in supercomputing, turbulence in a box, we can see what in-the-box thinking can get us (damn little). There are some fairly well established scaling laws that define what can be done with turbulence simulation. If we look at the history of direct numerical simulation (DNS) we can see the development of computing written clearly. The classical scaling of DNS goes as the Reynolds number (the non-dimensional ratio of convective to diffusive terms in the governing equations) to a high power; using Diego Donzis' thesis as a guide, the number of cells is connected to the Reynolds number by N=3.5 Re^1.5 (empirically derived). We take the number of operations per degree of freedom as 250, assume linear scaling, and assume a long computation time where the number of steps is 100 times the number of linear mesh points. Note that the proper Reynolds number in the table is the Taylor microscale Reynolds number, which scales as roughly the square root of the more commonly known macroscale Reynolds number.

N        Ops/second   Reynolds Number
32       1.2 Mflop    35
128      300 Mflop    90
256      5 Gflop      140
1024     1.3 Tflop    250
4096     325 Tflop    900
10,000   12 Pflop     1600
20,000   185 Pflop    2600
40,000   3 Eflop      4100

This assumes that the efficiency of calculations is constant over the fifty-year period this table spans. It most certainly is not. The decrease in efficiency is at least a factor of 10, if not more. Applying the factor-of-ten decrease in efficiency means that on an exascale computer, as defined by LINPACK, we would be solving the N=20,000 case at 5% efficiency, which implicitly assumes the efficiency was 50% back in the time of the first N=32 DNS calculations. Ironically, as the use case becomes smaller, the necessary efficiency decreases. Currently the largest DNS is equivalent to about N=6000, which seems to imply that the efficiency is on the order of 15% (this takes 1.5 Pflop by my estimation method). This exercise can then be applied almost whole cloth to large eddy simulation (LES), which is (almost) direct numerical simulation with a coarser grid and a modest amount of additional modeling. It is notable that this is the application of scaling to the most idealized turbulence imaginable, and it avoids the mass of complications people would really be interested in.
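
As a concrete illustration of how these estimates are assembled, here is a minimal back-of-the-envelope sketch. It uses the 250 operations per degree of freedom, the 100*N time steps, and Teller's 100-hour budget quoted above, plus an achieved-efficiency knob tied to the percent-of-peak discussion; the constants in front are uncertain, so read the output as the 100*N^4 operation-count scaling rather than as a reproduction of the table.

# Sketch: back-of-the-envelope DNS cost estimate; all constants are the
# illustrative assumptions stated in the text, not measurements.

def dns_required_flops(n, ops_per_cell_step=250.0, hours=100.0, efficiency=1.0):
    """Flop rate needed for an n^3 turbulence-in-a-box DNS.

    Assumes n^3 cells, 100*n time steps, a fixed operation count per cell per
    step, a fixed wall-clock budget, and a given fraction of peak actually
    achieved by the application (efficiency = 1.0 is the ideal machine).
    """
    cells = n ** 3
    steps = 100 * n
    sustained = ops_per_cell_step * cells * steps / (hours * 3600.0)
    return sustained / efficiency

for n in (32, 128, 256, 1024, 4096, 10_000, 20_000, 40_000):
    print(f"N = {n:>6}: ~{dns_required_flops(n):.2e} flop/s sustained")

# The N = 20,000 case at the 5% efficiency discussed above:
print(f"N = 20,000 at 5% efficiency needs ~{dns_required_flops(20_000, efficiency=0.05):.2e} flop/s of peak")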

The out of the box examples are much more interesting because they point to changing how computers are used. These are a mixture of evolutionary changes with revolutionary approaches. They are dangerous to the status quo, and success will require some serious risk-taking. For this reason they might not happen, our institutions and management are terrible at taking risks. As an example I will stick with fluid dynamics, but add embedded optimization with error and uncertainty estimation. In an aeronautical setting one might be changing the details or geometry of a wing or flight body to achieve the best possible performance (say for instance, lower drag) subject to constraints such as some standard for lift. Another interesting approach might examine the statistical response of a design to differences in the microstructure of the material. This sort of simulation might start to get at questions associated with failure of devices that have eluded simulation via the traditional approaches utilizing purely forward (classical) simulation.

Software gets slower faster than hardware gets faster.

– N. Wirth

One way to project to exascale would be to apply some large sample of forward calculations to statistically sample a use case of interest. For example, 1000 state-of-the-art DNS calculations (N=6000) would produce an exascale of demand. The problem is that most supercomputing sites view this sort of use of the computers as cheating! A less crude and more deeply embedded uncertainty quantification approach might use extra degrees of freedom to describe the evolution of uncertainty. The question is how the solution efficiency scales with the additional degrees of freedom, and how to implement and describe the results of the calculations.

For the slightly more out-of-the-box case for exascale, let's look at a solution of the Vlasov-Maxwell-Boltzmann equation(s) for the evolution of a plasma. This is a seven (!) dimensional problem, so it ought to get big really fast. In addition we can add some extra degrees of freedom that might provide the ability to solve the problem with embedded uncertainty, or with a high-order method like discontinuous Galerkin (or both!). I'll just continue with some assumptions from the DNS example regarding the floating-point intensity needed to integrate each degree of freedom (1000 operations per variable per degree of freedom, and a day of total runtime).

Nspace   Nvelocity   Ndof/variable   Ops/second
32       20          1               40 Gflop
64       40          1               2.5 Tflop
128      80          1               160 Tflop
256      160         1               10 Pflop
512      160         1               80 Pflop
1024     160         1               650 Pflop
2048     160         1               5.2 Eflop
32       20          20              776 Gflop
64       40          20              50 Tflop
128      80          20              3.2 Pflop
256      160         20              204 Pflop
512      160         20              1.6 Eflop
We immediately see that the calculations have a great potential to swamp any computer we can envision for the near future. I will fully acknowledge that these estimates are completely couched in the “bigger is better” mindset.
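
The same bookkeeping for the kinetic case is sketched below. It counts phase-space degrees of freedom and converts them to a required sustained rate using the 1000 operations per variable per degree of freedom and one-day runtime stated above; the step count tied to the spatial resolution is an extra assumption of mine, so the output should be read as a scaling estimate, not a reproduction of the table.

# Sketch: cost scaling for a Vlasov-type kinetic solve (3 space + 3 velocity
# dimensions). Constants are illustrative assumptions; only the scaling matters.

def kinetic_required_flops(n_space, n_vel, n_dof=1,
                           ops_per_dof_step=1000.0, runtime_s=24 * 3600.0):
    """Sustained flop rate for an n_space^3 x n_vel^3 phase-space grid with
    n_dof extra degrees of freedom per cell (e.g., high-order or UQ modes)."""
    steps = 100 * n_space  # assumed: step count tied to the spatial resolution
    dof = (n_space ** 3) * (n_vel ** 3) * n_dof
    return ops_per_dof_step * dof * steps / runtime_s

for ns, nv, nd in [(32, 20, 1), (128, 80, 1), (256, 160, 1), (256, 160, 20)]:
    print(f"Nspace={ns:5d} Nvel={nv:4d} Ndof={nd:3d} -> "
          f"~{kinetic_required_flops(ns, nv, nd):.2e} flop/s")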

What we really need is a way of harnessing innovative ways of applying computing. For example, what can be done to replace expensive, time-consuming testing of materials for failure? Can this be reliably computed? And if not, why not? Can we open up new spaces for designing items via computing and couple them to additive manufacturing? Can we rely upon this approach to get new efficiencies and agility in our industries? This is exactly the sort of visionary approach that is missing today from the dialog. The same approach could be used to work on the mechanical design of parts in a car, or almost anything else, if significant embedded uncertainty or optimization were included in the calculation. This is an immense potential use of computing to make the World better. We will quickly see that envisioning new applications much more rapidly saturates the capacity of computing to keep up. It also drives the development of new algorithms and mathematics to support their use.

“Results are gained by exploiting opportunities, not by solving problems.”

-Peter Drucker

Part of the need for out-of-the-box thinking comes from observing the broader computing world. Today much more value comes from innovative application than from the hardware itself. New applications for computing are driving massive economic and cultural change. Massive value is being created out of thin air. Part of this entails risk, and risk is something our R&D community has become extremely averse to. We are in dire need of new ideas and new approaches to applying computational science. The old ideas were great, but they now grow tired and really lack the capacity to advance the state much longer.

Much in the same fashion as Moore's law, the old vision of computing is running out of steam, and it's time to reboot.

 

The dangers of “good enough” thinking

12 Friday Sep 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

A lot of us look at our lives, the technology, the things we have, the things we use, and feel very fortunate. We are. A lot of us think that all of this is good enough. A lot of us are wrong. Too much of this thinking ultimately leads to decline, and "good enough" turns into old-fashioned, backwards-looking nostalgia. In the case of technical problem solving, the things that are good enough are a scaffolding for the hard problems you are asked to solve. Often this scaffolding becomes a box you construct around yourself. It can ultimately limit your ability to solve the problem rather than enabling you. As Clay Shirky says,

The systemic bias for continuity creates tolerance for the substandard.

Or if you prefer Frank Zappa,

Without deviation from the norm, progress is not possible.

I see the issue coming up even in the most high-tech settings. At national labs the super-high-tech computer models that are used to analyze deep scientific questions are often viewed as being good enough. This leads to the horrible tools called legacy codes. Today's new codes are tomorrow's legacy codes. Once you decide the tools you have are good enough, you've stepped down the road toward mediocrity.

I'm not talking about pursuing progress for the sake of progress, but rather seeking progress where it is available or needed. A mix of things is always available. Some forms of progress are available and are equivalent to simply keeping up. Other forms of progress are associated with an acute recognition of what is limiting your current capability. Balancing these aspects of progress can result in a healthy situation where we move forward. You should always have a deep appreciation for the shortcomings of your current technology and knowledge, with a taste for how and when they are holding you back.

What happens when the attitude is "things are good enough"? One form of this happens when people cut themselves off from the state of the art. You see it in technology, but also in things like entertainment. This is where "oldies" radio stations come from. People just get tired of keeping up and begin to live in the past. They get tired of trying to understand and appreciate new art forms (like rap, but remember, at one time Jazz was edgy!). Even worse is the inability to focus on improving situations that are problematic. This becomes an attitude of accommodation instead of pushing to do things better.

The attitude of accommodating limitations is part of the issue. Of course, one person's major advance is another person's painful technological change. Institutions are like people: as they get older, technology starts to become frightening, or simply tiring. This may be close to the core of the issue: technology fatigue. People just don't want to make the effort to do anything new because of a lack of proof of, or confidence in, improvement. It is simply easier to believe that what we have today is good enough.

Why go for a mobile phone? The phone in the kitchen or bedroom at home, or in your office, is good enough. It seems odd today, but many people went through that phone calculus and decided mobile phones weren't worth the cost or trouble. Seth Godin puts this in context,

 If it scares you, it might be a good thing to try.

It took a technology that clearly took phones to a whole new level, the iPhone, to push things forward. Now the people who haven't made the change are a small slice of the population without either the inclination or the resources for a mobile phone. What made the difference? The iPhone was more than a phone; it was a mini computer that put the Internet at your fingertips and brought new modes of communication into play. All of these pushed the mobile phone over the line. As Steve Jobs famously noted,

Creativity is just connecting things

In my home we haven't had a land-line phone for over a decade, but I still have one in my office. My employer should get rid of it. They won't, but they should. My wife and I are naturally progressive and interested in new things. I worry that my employer isn't quite as progressive as they should be, and their attitude on phones spills over to other things. They have made huge strides with modern electronics and I have hope. They still have issues with integrating cutting-edge research into the delivery of major products. There is too much acceptance of old technology, and good-enough thinking is far too easy to find. Given their devotion to Edisonian-mode engineering, they might do well to listen to Thomas Edison's advice,

Restlessness is discontent — and discontent is the first necessity of progress. Show me a thoroughly satisfied man — and I will show you a failure.

One of my colleagues has a great saying,

you’re never good enough, you can always improve.

This is the sort of thing that gives me hope. We need a lot more people like him.

The Beloved Engineering Safety Factor

04 Thursday Sep 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“The desire for safety stands against every great and noble enterprise.” ― Tacitus

I remember back in the days of my undergraduate education learning about safety factors for the first time (sometimes the "fudge" factor, where "fudge" is a polite way to say another word!). It was just an accepted practice to account for what isn't completely known, and it is a generally acknowledged good idea. The basic idea is to pad out the margin of safety on various design decisions or analyses to account for the possibility that the formulas being used are off, or maybe you made a mistake, or forgot to account for something. It's all the unknowns, including the "unknown unknowns" we keep hearing about. The true depth of the concept is simply passed over in undergraduate instruction, where a bit more contemplation would serve everyone so much better. It would serve to inject some necessary humility into the dialog and provide context for vibrant research in the future.

“Reports that say there’s — that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things that we know that we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.” – Donald Rumsfeld

When I was first introduced to safety factors, they weren't treated like anything profound or terribly important. Just a "best practice" with no philosophical context for what it means. Upon my return to this topic in the context of recent research, it began to dawn on me that safety factors are really deep and conceptually rich. The fact that there is simply a lot that we don't know, or don't even know we don't know, is potentially terrifying. We could be screwing up horribly and not even know it. At some other level, what the safety factor is really telling you is that the vaunted research embedded in the formulas you're applying is potentially wrong or inapplicable to your situation. What the professors didn't say to you so plainly is that the research of the greats might be just plain wrong. In other words, the greatest research in their field might not be worth completely trusting.

This is really OK. We are all adults here, right?

“Risk means ‘shit happens’ or ‘good luck’.” ― Toba Beta

The whole concept of safety factors is a way of pointing out where work is needed. A too generous safety factor is wasteful and expensive. Of course, a key tension is that a safety factor that is too small is dangerous. This is where accidents and disasters with technology lie. It is worth pointing out that to some degree accidents and disasters are unavoidable; sometimes the events in the tail of the probability distribution happen. No safety factor is going to prevent this. Some misery is unavoidable. Sometimes the problem is deeper than what a safety factor can deal with. The universe is complex and nonlinear, with lots of abnormal things. Most of our understanding of the universe is linear, simple and normal (statistically). Our best science is setting us up for a fall. Not often, but often enough that we shouldn't be too surprised when it happens.

“On a long enough timeline, the survival rate for everyone drops to zero.”- The Narrator from Fight Club

This isn't in any sense an attempt to belittle the accomplishments of the generations of scientists whose work set the foundation for the modern World and its wonders. Humanity has accomplished so much, and the understanding of the World we have today is an immense achievement, but let's not get carried away. We are not masters of the Universe; it is the master of us.

“We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning.” – Werner Heisenberg

My point is to throw down the gauntlet at those who think science is done and we understand everything. The truth is we don't know shit. So many important things are simply mysteries, and as we unveil their secrets, new mysteries will rise to take their place. We never really know the truth; we just know what we see, which is deeply influenced by how we ask the question in the first place. We can predict precious little with the sort of confidence that some would have us believe. Surprises are awaiting us around every corner, and we have so much hard work to do. Progress is needed on so many fronts.

This is why we need safety factors. The safety factor ought to be viewed as a sort of construction sign on the highway of knowledge. It is an invitation to slow down and watch the work needed to keep the road open, safe, and (relatively) free of potholes. We should be more emphatic in educating undergraduate engineers about the stark limits of knowledge so their hubris is leavened by an appropriate humility. I first came across the safety factor in learning basic fluid-thermal design for nuclear reactors. Given an analysis of the maximum temperature, we would multiply the controlling detail by a factor to make extra sure that the peak temperature did not cross over the threshold of danger. Another place where safety factors play a key role is the determination of stability and accuracy of numerical calculations. One will compute a time step size that is stable, then decrease it by some safety factor to make sure that nonlinearity doesn't bite you.
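
To make the last point concrete, here is a minimal sketch of a stability-limited time step with a safety factor applied, for an explicit one-dimensional advection-diffusion update; the parameter values and the 0.8 safety factor are illustrative assumptions, not recommendations.

# Sketch: stability-limited time step with a safety factor
# (explicit 1D advection-diffusion). The 0.8 factor and the physical
# parameters are illustrative assumptions.

def stable_dt(dx, velocity, diffusivity, safety=0.8):
    """Return a time step below the linear-theory stability limits.

    CFL (advection) limit:  dt <= dx / |u|
    Diffusion limit:        dt <= dx**2 / (2 * nu)
    The safety factor shrinks the bound to guard against the nonlinearity
    and everything else the linear analysis does not know about.
    """
    dt_advect = dx / abs(velocity) if velocity != 0.0 else float("inf")
    dt_diffuse = dx * dx / (2.0 * diffusivity) if diffusivity > 0.0 else float("inf")
    return safety * min(dt_advect, dt_diffuse)

print(stable_dt(dx=0.01, velocity=1.0, diffusivity=1e-3))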

Information is the resolution of uncertainty. – Claude Shannon

Most recently there is the case of the estimation of numerical error in calculations. The standard practice is to take the data and produce estimates of the convergence rate and the converged solution. The distance between your numerical solution and the estimate of the converged solution is the numerical error bar, which is multiplied by a factor that depends upon how hinky the estimates are: the hinkier the estimate, the larger the safety factor. Again we have precise and detailed analysis that is augmented by a factor to account for the slop (well, not slop, the nonlinearity). A truism is that if we lived in a linear world, safety factors wouldn't be necessary. Equally true is that a linear world would be dull as dirt.
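
A minimal sketch of that procedure, in the spirit of Richardson extrapolation and Roache's grid convergence index, is given below. The safety factors (1.25 when the observed rate is close to the nominal order, 3.0 when it is hinky) follow common verification practice but are assumptions here, not a description of any particular code's procedure.

# Sketch: observed convergence rate, extrapolated solution, and an error bar
# inflated by a safety factor. Thresholds and safety factors are illustrative.
import math

def numerical_error_bar(f_coarse, f_medium, f_fine, r, nominal_order=2.0):
    """Three solutions on grids refined by a constant ratio r (f_fine is finest).

    Returns (observed_rate, extrapolated_value, safety-factored error bar)."""
    # Observed convergence rate from the three solutions.
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    # Richardson-extrapolated estimate of the converged solution.
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    # Raw error estimate for the finest grid, then inflate it: the hinkier the
    # observed rate (far from the nominal order), the larger the safety factor.
    error = abs(f_fine - f_exact)
    safety = 1.25 if abs(p - nominal_order) < 0.1 * nominal_order else 3.0
    return p, f_exact, safety * error

print(numerical_error_bar(f_coarse=1.100, f_medium=1.025, f_fine=1.006, r=2.0))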

That is extremely important. That should be highlighted, underlined and screamed out in every education. We are not masters of the Universe. We are at the mercy of the whims of reality. We only have a tenuous grasp on what reality is. We know far less than we think we do. We have so much to learn. Life is uncertain and fragile. The whole damn thing needs a safety factor. Beyond that, no safety factor will be large enough to save you.

Maturity, one discovers, has everything to do with the acceptance of ‘not knowing.’ ― Mark Z. Danielewski

So, embrace the lack of certainty and knowledge; it is what makes life worth living. It gives life an edge and excitement. It gives us things to do, knowledge to be discovered, and new truths to revel in. Maybe safety factors are a little bit awesome, and not so pedestrian after all.

 

 
