The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: May 2016

Failure is not a bad thing!

27 Friday May 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Not trying or putting forth your best effort is.

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy

Last week I attended the ASME Verification and Validation Symposium, as I have for the last five years. One of the keynote talks ended with a discussion of the context of results in V&V. It used the following labels for an axis: Success (good) and Failure (bad). I took issue with it and made a fairly controversial statement: failure is not negative or bad, and we should not keep referring to it as being negative. I might even go so far as to suggest that we actually encourage failure, or at the very least celebrate it, because failure also implies you're trying. I would be much happier if the scale were related to effort and excellence of work. The greatest sin is not failing; it is not trying.

Keep trying; success is hard won through much failure.

― Ken Poirot

It is actually worse than simply being a problem that the best effort isn't put forth; a lack of acceptance of failure inhibits success. The outright acceptance of failure as a viable outcome of work is necessary for the sort of success one can have pride in. If nothing is risked enough to potentially fail, then nothing can be achieved. Today we have accepted the absence of failure as the telltale sign of success. It is not. This connection is desperately unhealthy and leads to a diminishing return on effort. Potential failure, while an unpleasant prospect, is absolutely necessary for achievement. As such, failures that occur when the best effort is put forth should be celebrated, lauded whenever possible, and encouraged. Instead we have a culture that crucifies those who fail without regard for the effort or the excellence of the work going into it.

Right now we are suffering in many endeavors from a deep, unremitting fear of failure. The outright fear of failing and the consequences of that failure are resulting in many efforts reducing their aggressiveness in attacking their goals. We reset our goals downward to avoid any possibility of being regarded as failing. The result is an extensive reduction in achievement. We achieve less because we are so afraid of failing at anything. This is resulting in the destruction of careers and the squandering of vast sums of money. We are committing to mediocre work that is guaranteed of "success" rather than attempting excellent work that could fail.

A person who never made a mistake, never tried anything new

― Albert Einstein

We have not recognized the extent to which failure energizes our ability to learn, and bootstrap ourselves to a greater level of achievement. Failure is perhaps the greatest means to teach us powerful lessons. Failure is a means to define our limits of understanding and knowledge. Failure is the fuel for discovery. Where we fail, we have work that needs to be done. We have mystery and challenge. Without failure we lose discovery, mystery, challenge and understanding. Our knowledge becomes stagnant and we cease learning. We should be embracing failure because failure leads to growth and achievement.

Instead, today we recoil and run from failure. Failure has become such a massive black mark professionally that people simply will not associate themselves with anything that isn't a sure thing. The problem is that sure things aren't research; they are developed science and technology. If one is engaged in research, the results are not certain. The promise of discovery is also tinged with the possibility of failure. Without the possibility of failure, discovery is not possible. Without an outright tolerance for a failed result or idea, the discovery of something new and wonderful cannot be had. At a personal level, the ability to learn, develop and master knowledge is driven by failure. The greatest and most compelling lessons in life are all driven by failures. With a failure you learn a lesson that sticks with you, and the learning sticks.

Creatives fail and the really good ones fail often.

― Steven Kotler

It is the inability to tolerate ambiguity in results that leads to much of the management response. Management is based on assuring results and defining success. Our modern management culture seems to be incapable of tolerating the prospect of failure. Of course, the differences between kinds of failure are not readily recognized by our systems today. There is a difference between an earnest effort that still fails and an incompetent effort that fails. One should be supported and celebrated, and the other is the definition of unsuccessful. We have lost the capacity to tolerate these subtleties. All failure is viewed as the same, and management is intolerant of it. Management requires results that can be predicted, and failure undermines this central tenet.

The end result of all of this failure avoidance is a broadly misplaced sense of what constitutes achievement. More deeply, we are losing the capacity to fully understand how to structure work so that things of consequence may be achieved. In the process we are wasting money, careers and lives in the pursuit of hollowed-out victories. The lack of failure is now celebrated even though the level of success and achievement is a mere shadow of the sort of success we saw a mere generation ago. We have become so completely under the spell of scandal avoidance that we shy away from doing anything bold or visionary.

A Transportation Security Administration (TSA) officer pats down photographer Elliott Erwitt as he works his way through security at San Francisco International Airport in San Francisco, Wednesday, Nov. 24, 2010. (AP Photo/Jeff Chiu)

We live in an age where the system cannot tolerate a single bad event (e.g., a failure, whether of an engineered system, a security system, …). In the real world, failures are utterly and completely unavoidable. There is a price to be paid for reducing bad events, and one can never have an absolute guarantee. The cost of reducing the probability of bad events escalates rather dramatically as you look to reduce the tail probabilities beyond a certain point. The mass media and demagoguery by politicians take any bad event and stoke fears, using the public's response as a tool for their own power and purposes. We are shamelessly manipulated into being terrified of things that have always been one-off, minor risks to our lives. Our legal system does its level best to amplify all of this fearful behavior for its own selfish interest of siphoning as much money as possible from whoever has the misfortune of tapping into the tails of extreme events.

In the area of security, the lack of tolerance for bad events is immense. More than this, the pervasive security apparatus produces a side effect that greatly empowers things like terrorism. Terror's greatest weapon is not high explosives, but fear, and we go out of our way to do the terrorists' job for them. Instead of tamping down fears, our government and politicians go out of their way to scare the shit out of the public. This allows them to gain power and fund more activities to answer the security concerns of the scared-shitless public. The best way to get rid of terror is to stop getting scared. The greatest weapon against terror is bravery, not bombs. A fearless public cannot be terrorized.

The end result of all of this risk intolerance is a lack of achievement as individuals, organizations, and society itself. Without the acceptance of failure, we relegate ourselves to a complete lack of achievement. Without the ability to risk greatly, we lack the ability to achieve greatly. Risk, danger and failure all improve our lives in every respect. The dial is turned too far away from accepting risk to allow us to be part of progress. All of us will live poorer lives with less knowledge, achievement and experience because of the attitudes that exist today. The deeper issue is that the lack of appetite for obvious risks and failure actually kicks the door open for even greater risks and more massive failures in the long run. These sorts of outcomes may already be upon us in terms of massive opportunity cost. Terrorism is something that has cost our society vast sums of money and undermined the full breadth of society. We should have had astronauts on Mars already, yet the reality of this is decades away, so our societal achievement is actually deeply pathetic. The gap between "what could be" and "what is" has grown into a yawning chasm. Somebody needs to lead with bravery and pragmatically take the leap over the edge to fill it.

We have nothing to fear but fear itself.

― Franklin D. Roosevelt

The Lax Equivalence Theorem, its importance and limitations

20 Friday May 2016

Posted by Bill Rider in Uncategorized

≈ 3 Comments

 

Advances are made by answering questions. Discoveries are made by questioning answers.

― Bernard Haisch

In the constellation of numerical analysis theorems, the Lax equivalence theorem may have no equal in its importance. It is simple, and its impact on the whole business of numerical approximation is profound. The theorem basically says that if you provide a consistent approximation to the differential equations of interest and it is stable, the solution will converge. The devil, of course, is in the details. Consistency is defined by having an approximation with ordered errors in the mesh or time discretization, which implies that the approximation is at least first-order accurate, if not better. A key aspect of this that is often overlooked is the necessity of having mesh spacing sufficiently small to achieve the defined error; failure to do so renders the solution erratically convergent at best.
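
To make this concrete, here is a minimal sketch in Python of the theorem in action (the problem, wave speed, grid sizes and error norm are all my own illustrative choices): first-order upwind differencing for linear advection is consistent and, under a CFL restriction, stable, so the error should shrink at first order as the mesh is refined.

```python
import numpy as np

def upwind_advection(N, a=1.0, cfl=0.8, t_final=0.5):
    """Solve u_t + a u_x = 0 on [0, 1) with periodic BCs using first-order upwind."""
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    dx = 1.0 / N
    dt = cfl * dx / a                                   # stability requires CFL <= 1
    u = np.sin(2.0 * np.pi * x)                         # smooth initial data
    t = 0.0
    while t < t_final:
        step = min(dt, t_final - t)
        u = u - a * step / dx * (u - np.roll(u, 1))     # upwind difference (a > 0)
        t += step
    exact = np.sin(2.0 * np.pi * (x - a * t_final))     # exact translated solution
    return dx, np.max(np.abs(u - exact))                # max-norm error

# Consistent + stable => convergent: the error should roughly halve
# every time the mesh spacing is halved (first-order accuracy).
for N in (50, 100, 200, 400):
    dx, err = upwind_advection(N)
    print(f"N = {N:4d}  dx = {dx:.5f}  error = {err:.3e}")
```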

The importance of this theorem as the basis for our commitment to high performance computing cannot be overstated. The generally implicit assumption that more computer power is the path to better calculations is incredibly widespread. The truth and limits of this assumption are important to understand. If things are working well, it is a good assumption, but it should not be taken for granted. A finer mesh is almost a priori assumed to be better. Under ideal circumstances it is a fine assumption, but this basic premise is not challenged nearly as often as it should be. Part of this narrative is a description of when it should be challenged and of the circumstances where verification comes into outright conflict with the utility of the model. We end with the prospect that the verified model might only be accurate and verified in cases where the model is actually invalid.

For example, we can carry out a purely mathematical investigation of the convergence of a numerical solution without any regard for its connection to a model's utility. Thus we can solve a problem using time and length scales that render the underlying model ridiculous while still satisfying the mathematical conditions we are examining. Stepping away from this limited perspective to look at the combination of convergence with model utility is necessary. In practical terms, the numerical error and convergence where a model is valid are what is essential to understand. Ultimately we need accurate numerical solutions in regimes where a model is valid. More than just being accurate, we need to understand what the numerical errors are in these same regimes. Too often we see meshes refined to a ridiculous degree without regard to whether the length scales in the mesh render the model invalid.

I have always thought the actions of men the best interpreters of their thoughts.

― John Locke

A second widespread and equally troubling perspective is the wrong-headed application of the concept of grid-independent solutions. Achieving a grid-independent solution is generally viewed as a good, even great, thing. It is, if the conditions are proper. It is not under almost identical-looking conditions. The difference between the good, almost great thing and the awful, almost horrible thing is a literal devil in the details, but it is essential to the conduct of excellent computational science. One absolutely must engage in the quantitative interrogation of the results. Without a quantitative investigation, the good grid-independent results could actually be an illusion.

If the calculation is convergent, grid independence actually means the calculation is not sensitive to the grid in a substantive manner. This requires that the solution have small errors and that the grid independence be the effect of those errors being ordered and negligible. In fact, it probably means that you're engaged in overkill and could get away with a coarser (cheaper) grid without creating any substantial side effects. On the other hand, a grid-independent solution could mean that the calculation is complete garbage numerically because the grid doesn't affect the solution at all; it is not convergent. These two cases are virtually indistinguishable in practice and are only revealed by doing detailed quantitative analysis of the results. In the end, grid independence is neither necessary nor sufficient for quality computations. Yet at the same time it is often the advised course of action in conducting computations!
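
Here is a minimal sketch of the kind of quantitative interrogation meant here, assuming you have a scalar quantity of interest computed on three systematically refined grids with a constant refinement ratio (the function name and the sample numbers are purely illustrative): estimate the observed order of accuracy and a Richardson-extrapolated value, and compare the observed order with the formal order of the scheme.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order p and a Richardson-extrapolated estimate,
    from a scalar quantity computed on three grids with refinement ratio r."""
    p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
    f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_extrapolated

# Genuinely convergent data: the grid-to-grid differences shrink in an ordered way,
# and the observed order lands near the formal order (here, 2).
p, f_star = observed_order(0.9200, 0.9800, 0.9950, 2.0)
print(f"observed order = {p:.2f}, extrapolated value = {f_star:.4f}")

# Spurious "grid independence" looks different: the answer barely changes because the
# calculation is insensitive to the grid, and the observed order comes out erratic or
# meaningless rather than settling near the formal order.
```

If the observed order is erratic, negative, or far from the formal order, "grid independence" is telling you about insensitivity, not accuracy.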

Stability then becomes the issue: you must assure that the approximations produce bounded results under appropriate control of the solution. Usually stability is defined as a character of the time stepping approach and requires that the time step be sufficiently small. A lesser-known equivalence theorem is due to Dahlquist and applies to linear multistep methods for integrating ordinary differential equations. From this work the whole notion of zero stability arises, where you have to assure that a non-zero time step size gives stability in the first place. More deeply, Dahlquist's version of the equivalence theorem applies to nonlinear equations but is limited to multistep methods, whereas Lax's applies to linear equations.
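
As a small sketch of what a zero-stability check looks like in practice (the coefficients below are standard textbook examples, chosen by me purely for illustration): for a k-step method, zero stability is the root condition on the first characteristic polynomial built from the alpha coefficients, which requires every root to lie in the closed unit disk, with any root on the unit circle being simple.

```python
import numpy as np

def is_zero_stable(alpha, tol=1e-8):
    """Root condition for rho(z) = alpha[0] + alpha[1] z + ... + alpha[k] z**k:
    all roots inside the closed unit disk, roots on the unit circle simple."""
    roots = np.roots(alpha[::-1])                      # np.roots wants highest degree first
    if np.any(np.abs(roots) > 1.0 + tol):
        return False                                   # a root escapes the unit disk
    for z in roots[np.abs(np.abs(roots) - 1.0) <= tol]:
        if np.sum(np.abs(roots - z) <= tol) > 1:       # repeated root on the unit circle
            return False
    return True

# Two-step Adams-Bashforth: rho(z) = z**2 - z, roots {0, 1} -> zero-stable
print(is_zero_stable([0.0, -1.0, 1.0]))                # True

# A deliberately bad two-step method: rho(z) = z**2 - 3z + 2, roots {1, 2} -> not zero-stable
print(is_zero_stable([2.0, -3.0, 1.0]))                # False
```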

Here are the theorems in all their glory (based on the Wikipedia entries).

The Lax Equivalence Theorem:

For the numerical solution of partial differential equations: a consistent finite difference method for a well-posed linear initial value problem is convergent if and only if it is stable. The theorem was published in 1956 and discussed in a seminar by Lax in 1954.

The Dahlquist Equivalence Theorem:

Suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values $y_0, y_1, \ldots, y_{k-1}$ all converge to the initial value $y(t_0)$ as $h \rightarrow 0$. Then the numerical solution converges to the exact solution as $h \rightarrow 0$ if and only if the method is zero-stable.

Something to take serious note of regarding these theorems is that the concept of numerical stability was relatively new and novel in 1956. Both theorems helped lock the concept of stability into a central place in numerical analysis, and as such both have their place among the cornerstones of modern numerical analysis. The issue of stability was first addressed by von Neumann for PDEs in the late 1940s and independently explored by Dahlquist for ODEs in the early 1950s. These theorems were published when numerical stability was a new concept, not yet completely accepted or understood for its importance.

That we can very effectively solve nonlinear equations using the principles underlying Lax's theorem is remarkable. Even though the theorem doesn't formally apply to the nonlinear case, the guidance is remarkably powerful and appropriate. We have a simple and limited theorem that produces incredible consequences for any approximation methodology applied to partial differential equations. Moreover, the whole thing was derived in the early 1950s and generally thought through even earlier. The theorem came to pass because we knew that approximations to PDEs and their solution on computers do work. Dahlquist's work is founded on a similar path; the availability of computers shows us what the possibilities are and the issues that must be elucidated. We see a virtuous cycle where the availability of computing capability spurs developments in theory. This is an important aspect of healthy science, where different aspects of a given field push and pull each other. Today we are counting on hardware advances to push the field forward. We should be careful that our focus is set where advances are ripe; it's my opinion that hardware isn't it.

The equivalence theorem also figures prominently in the exercise of assuring solution quality through verification. This requires the theorem to be turned ever so slightly on its head in conducting the work. Consistency is not something that can be easily demonstrated; instead verification focuses on convergence, which may be. We can control the discretization parameters, mesh spacing or time step size to show how the solution changes with respect to them. Thus in verification, we use the apparent convergence of solutions to imply both stability and consistency. Stability is demonstrated by fiat in that we have a bounded solution to use to demonstrate convergence. Having a convergent solution with ordered errors then provides consistency. The whole thing barely hangs together, but makes for the essential practice in numerical analysis.

At this time we are starting to get at the practical limitations of this essential theorem. One way to get around the issues is the practice of hard-nosed code verification as a tool. With code verification you can establish that the code's discretization produces the correct solution when compared against an exact solution of the PDEs. In this way the consistency of the approximation becomes firmly established. Part of the remaining issue is the fact that the analytical exact solution is generally far nicer and better behaved than the general solutions you will be applying the code to. Still, this is a vital and important step in the overall practice. Without firm code verification as a foundation, the practice of verification without exact solutions is simply untethered. Here we have firmly established the second major limitation of the equivalence theorem, the first being its restriction to linear equations.
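
As a sketch of what code verification with an exact solution can look like in practice, here is a minimal manufactured-solutions setup (the equation, the chosen solution, and the symbol names are all my own illustrative choices, not anything tied to a particular code): pick a smooth solution, derive the source term it implies with a computer algebra system, feed that source to the discretization under test, and check that the observed convergence rate matches the formal order.

```python
import sympy as sp

# Method of manufactured solutions for a model heat equation u_t = nu * u_xx + S(x, t):
# choose a smooth u_exact, then derive the source term S that makes it an exact solution.
x, t, nu = sp.symbols("x t nu")
u_exact = sp.exp(-t) * sp.sin(sp.pi * x)                  # chosen for smoothness, nothing special
S = sp.simplify(sp.diff(u_exact, t) - nu * sp.diff(u_exact, x, 2))
print("manufactured source term S(x, t) =", S)

# u_exact supplies initial/boundary data and the error measure; S is handed to the
# code under test, and the observed order computed from a sequence of refined grids
# should match the scheme's formal order.
S_numeric = sp.lambdify((x, t, nu), S, "numpy")           # callable source for a solver
```
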
One of the very large topics in V&V that is generally overlooked is models and their range of validity. All models are limited in their range of applicability by time and length scales. For some phenomena this is relatively obvious, e.g., multiphase flow. For other phenomena the range of applicability is much more subtle. Among the first important topics to examine is the satisfaction of the continuum hypothesis, the capacity of a homogenization or averaging to be representative. The degree to which homogenization is satisfied depends on the scale of the problem and degrades as the phenomenon becomes smaller in scale. For multiphase flow this is obvious, as the example of bubbly flow shows: as the number of bubbles becomes smaller, any averaging becomes highly problematic. This argues that the models should be modified in some fashion to account for the change in scale.

Let me give another, extreme example of the role of scale size and the capacity of standard models to apply. Take the rather pervasive principle associated with the second law of thermodynamics. As the size of the system being averaged becomes smaller and smaller, violations of this law can occasionally be observed. This clearly means that natural and observable fluctuations may cause the macroscopic laws to be violated. The usual averaged continuum equations do not support this behavior. It implies that the models should be modified for appropriate scale dependence, so that such violations of the fundamental laws might naturally appear in solutions.

Another, more pernicious and difficult issue concerns homogenization assumptions that are not so fundamental. Consider the situation where a solid is being modeled in a continuum fashion. When the mesh is very large, a solid composed of discrete grains can be modeled by averaging over those grains because there are so many of them. Over time we are able to solve problems with smaller and smaller mesh scales, and ultimately we now solve problems where the mesh size approaches the grain size. Clearly under this circumstance the homogenization used for averaging loses its validity. The structural variations removed by the homogenized equations become substantial and should not be ignored as the mesh size becomes small. In the quest for exascale computing this issue is completely and utterly ignored. Some areas of high performance computing do consider these issues carefully, most notably climate and weather modeling, where the mesh size issues are rather glaring. I would note that these fields are subjected to continual and rather public validation.

Let's get to some other, deeper limitations of the power of this theorem. For example, the theorem gets applied to models without any sense of the validity of the model. Without consideration for validity, the numerical exploration of convergence may be pursued with vigor and be utterly meaningless. The notion of validation is important in considering whether verification should be explored in depth. As such, one gets to the nature of V&V as holistic and iterative: the overall exploration should always look back at results to determine whether everything is consistent and complete. If the verification exercise takes the model outside its region of validity, the verification error estimation may be regarded as suspect.

The theorem is applied to the convergence of the model's solution in the limit where the "mesh" spacing goes to zero. Models are always limited in their applicability as a function of length and time scale. The equivalence theorem will be applied and will take many models outside their true applicability. An important thing to wrangle in the grand scheme of things is whether models are being solved and are convergent in the actual range of scales where they are applicable. A true tragedy would be a model that is only accurate and convergent in regimes where it is not applicable. This may actually be so in many cases, most notably the aforementioned multiphase flow. It calls into question the nature of the modeling and the numerical methods used to solve the equations.

Huge volumes of data may be compelling at first glance, but without an interpretive structure they are meaningless.

― Tom Boellstorff

The truth is we have an unhealthy addiction to high performance computing as a safe and trivial way to make progress computationally. We continually move forward with complete faith that larger calculations are intrinsically better without doing the legwork to demonstrate that they are. We avoid focusing on other routes to better solutions in modeling, methods and algorithms in favor of simply porting old models, methods and algorithms to much larger computers. Without a balanced program with appropriate intellectual investment and ownership of the full breadth of computational science, our approach is headed for disaster. We run a significant risk of producing a computational science program that completely fools itself. We skip due diligence in favor of simply assuming that a finer mesh computed on a bigger computer is a priori better than taking another route to improvement.

The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes

Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on Pure and Applied Mathematics 9.2 (1956): 267-293.

 

Dahlquist, Germund. “Convergence and stability in the numerical integration of ordinary differential equations.” Mathematica Scandinavica 4 (1956): 33-53.

 

WTF: What the …

13 Friday May 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

…fuck?

WTF has become the catchphrase for today's world. "What the fuck" moments fill our days, and nothing is more WTF than Donald Trump. We will be examining the viability of the reality show star, general douchebag, and celebrity rich guy as a presidential candidate for decades to come. Some view his success as a candidate apocalyptically, or characterize it as an "extinction level event" politically. In the current light it would seem to be a stunning indictment of our modern society. How could this happen? How could we have come to this moment as a nation where such a completely insane outcome is a reasonably high-probability result of our political process? What else does it say about us as a people? WTF? What in the actual fuck!

The phrase "what the fuck" came into the popular lexicon along with Tom Cruise in the movie "Risky Business" back in 1983. There the lead character, played by Cruise, exclaims, "Sometimes you gotta say, what the fuck." It was a mantra for going for broke and trying stuff without obvious regard for the consequences. Given our general lack of appetite for risk and failure, the other side of the coin took the phrase over. Over time the phrase has morphed into a general commentary on things that are simply unbelievable. Accordingly, the acronym WTF came into being by 1985. I hope that 2016 is peak what-the-fuck, because things can't get much more what-the-fuck without everything turning into a complete shitshow. Taking a deeper view, the real story behind where we are is the victory of bullshit as a narrative element in society. In a sense, WTF has transmogrified from a mantra for risk taking into a mantra for the breadth of impact of not taking risks!

Indeed, if one looks to the real core of the Donald Trump phenomenon, it is the victory of bullshit over logic, facts and rational thought. He simply spouts off a bunch of stuff that our generally uneducated, bigoted, and irrationally biased public wants to hear, and the rubes vote for him in droves. It does not matter that almost everything he says is complete and utter bullshit. A deeper question is how the populace has become so conditioned to accept a candidate who regularly lies to them. Instead of uttering difficult truths and dealing with problems in a rational and pragmatic manner (i.e., actual leadership), we are regularly electing people who simply tell us something we want to hear. Today's world has real, difficult and painful problems to solve, and irrational solutions will only make them worse. With our political discourse so completely broken, none of the problems will be addressed, much less solved. Our nation will drown in the bullshit we are being fed.

It is the general victory of showmanship and entertainment. The superficial and bombastic rule the day. I think that Trump is committing one of the greatest frauds of all time. He is completely and utterly unfit for office, yet has a reasonable (or perhaps unreasonable) chance to win the election. The fraud is being committed in plain sight: he speaks falsehoods at a marvelously high rate without any of the normal ill effects. Trump's victory is testimony to how gullible the public is to complete bullshit. This gullibility reflects the lack of will on the part of the public to address real issues. With the sort of "leadership" that Trump represents, the ability to address real problems will further erode. The big irony is that Trump's mantra of "Make America Great Again" is the direct opposite of his actual impact. Trump's sort of leadership destroys the capacity of the nation to solve the sort of problems that lead to actual greatness. He is hastening the decline of the United States by choking our will to act in a tidal wave of bullshit.

There is a lot more bullshit in society than just Trump; he is just the most obvious example right now. Those who master bullshit win the day, and that drives the depth of the WTF moments. Fundamentally, there are forces in society today that are driving us toward the sorts of outcomes that cause us to think, "WTF?" For example, we are prioritizing a high degree of micromanagement over achievement because of the risks associated with giving people freedom. Freedom encourages achievement, but also carries the risk of scandal when people abuse their freedom. Without the risks you cannot have the achievements. Today the priority is no scandal, and accomplishment simply isn't important enough to empower anyone. We are developing systems of management that serve to disempower people so that they don't do anything unpredictable (like achieve something!).

The effect of all of this is an increasing disconnect between achievement and the concept of accountability. The end point is that the transformation of achievement into bullshit is driven by the need to market results, and by the fact that most people cannot distinguish marketed results from actual results. This is the sort of path that leads to our broken political discourse. When bullshit becomes indistinguishable from facts, the value and power of facts are destroyed. We are left with a situation where falsehoods are viewed as having equal footing with truth. Things like science, and the general concept of leadership by the elites in society, become something to be reviled. We get Donald Trump as a viable candidate for President.

I wonder deeply about the extent to which things like the Internet play into this dynamic. Does the Internet allow bullshit to be presented on an equal footing with bona fide facts? Do the Internet and computers allow a degree of micromanagement that strangles achievement? Does the Internet produce new patterns in society that we don't understand, much less have the capacity to manage? What is the value of information if it can't be managed or understood in any way that is beyond superficial? The real danger is that people will gravitate toward what they want to view as facts instead of confronting issues that are unpleasant. That danger seems to be playing out in the political events in the United States and beyond.

Like any technology, the Internet is not either good or bad by definition. It is a tool that may be used for either the benefit or the detriment of society. One way to look at the Internet is as an accelerator for what would happen societally anyway. Information and change are happening faster, and the Internet is lubricating changes so that they take effect at a high pace. On the one hand we have an incredible ability to communicate with people that was beyond the capacity to even imagine a generation ago. On the other, the same communication mechanisms produce a deluge of information; we are drowning in input to a degree that is choking people's capacity to process what they are being fed. What good is the information if the people receiving it are unable to comprehend it sufficiently to take action? Or if people are unable to distinguish the proper, actionable information from the complete garbage?

A force that adds to the entire mix is a system that increasingly values only the capacity to monetize things. This has destroyed the ability of the media to act as a referee in society. Facts and lies are now a free-for-all. Scandal and fiction sell far better than facts and truth. As a result we scandalize everything, and small-probability events are drummed up to drive better ratings (the monetization of the news) and scare the public. The scared public then acts to try to control these false risks and fears through accountability, and the downward spiral of bullshit begins. Society today is driven by fear and the desire to avoid being afraid. Risk is avoided and failure is punished. Today failure is a scandal, and our ability to accomplish anything of substance is sacrificed to avoid the potential for scandal.

All of these forces are increasingly driving the elites in society (and increasingly the elites are simply those who have a modicum of education) to look at events and say "WTF?" I mean, what the actual fuck is going on? The movie Idiocracy was set 500 years in the future, and yet we seem to be moving toward that vision of the future at an accelerated pace that makes anyone educated enough to see what is happening tremble with fear. The sort of complete societal shitshow in the movie seems to be unfolding in front of our very eyes today. The mind-numbing effects of reality show television and pervasive low-brow entertainment are spreading like a plague. Donald Trump is the most obvious evidence of how bad things have gotten.

The sort of terrible outcomes we see so obviously in our broken political discourse are happening across society. The scientific world I work in is no different. The low-brow and superficial are dominating the dialog. Our programs are dominated by strangling micromanagement that operates in the name of accountability but really speaks volumes about the lack of trust. Furthermore, the low-brow dialog simply reflects the societal desire to eliminate the technical elites from the process. This also connects back to the micromanagement, because the elites can't be trusted either. It's become better to listen to the uneducated common man you can "have a beer with" than to trust the gibberish coming from an elite. As a result, the rise of complete bullshit masquerading as technical achievement has occurred. When the people in charge can't distinguish between Nobel Prize-winning work and complete pseudo-science, the low bar wins out. Those of us who know better are left with nothing to do but say, What the Fuck? Over and over again.

The bottom line is that the Donald Trump phenomenon isn't a localized event; it is the result of a deeply broken society. These types of WTF events are all around us every day, albeit in lesser forms. Our workplaces are teeming with WTF moments. We take WTF training for WTF problems. We have WTF micromanagement that serves no purpose except to provide people with a false sense of security that nothing scandalous is going on. The WTF moment is that the micromanagement is scandalous in and of itself. All the while our economic health stands at the abyss of calamity because most business systems are devised as wealth-transfer mechanisms rather than value creation or customer satisfaction mechanisms.

We live in a world where complete stupidity such as anti-vaxxers, climate change denial, creationists, … all seem to be taken as seriously as the science stacked against them. WTF? When real scientists are ignored in favor of celebrities and politicians as the oracles of truth, we are lost. We waste immense amounts of time, energy, money and karma preventing things that will never happen, yet doing these things is a priority. WTF? What the fuck! Indeed! All of this is corrosive in the extreme to the very fabric of our society. The corrosion is visible in the most repugnant canary ever seen in our collective coal mine, the Donald. He is a magnificent WTF indictment of our entire society.

 

HPC is just a tool; Modeling & Simulation is what is Important

04 Wednesday May 2016

Posted by Bill Rider in Uncategorized

≈ 5 Comments

People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.

― Clayton M. Christensen

The truth in this quote is aimed at those who aspire to the greater things that a drill can help build. At the same time, we all know people who love their tools and aspire to the greatest toolset money can buy, yet never build anything great. They have the world's most awesome set of tools in their garage or workroom, yet do nothing but tinker. People who build great things have a similar set of great tools, but focus on what they are building. The whole area of computing risks becoming the hobby enthusiast who loves to show you an awesome toolbox, but never intends to do more than show it off while building nothing in the process. Our supercomputers are the new prize for such enthusiasts. The core of the problem is the lack of anything important to which to apply these powerful tools.

High performance computing has become the central focus for scientific computing. It is just a tool. This is a very bad thing and extremely unhealthy for the future of scientific computing. The problems with making it the focal point for progress are manifestly obvious if one thinks about what it takes to make scientific computing work. The root of the problem is the lack of thought going into our current programs, and ultimately a failure to understand that HPC isn't what we should be focused on; it is a necessary part of how scientific computing is delivered. The important part of what we do is modeling and simulation, which can transform how we do science and engineering. HPC can't transform anything except electricity into heat, and relies upon modeling and simulation for its value. While HPC is important and essential to the whole enterprise, the other aspects of proper delivery of scientific computing are so starved for attention that they may simply cease to exist soon.

For all the talk of creating a healthy ecosystem for HPC, the programs constituted today are woefully inadequate to achieving that end. The computer hardware focus exists because it is tangible; one can point at a big mainframe and say, "I bought that!" It misses the key aspects of the field necessary for success, and even worse, the key value in scientific computing. Scientific computing is a set of tools that allow the efficient manipulation of models of reality for the exploration, design and understanding of the world without actually having to experiment with the real thing. Everything valuable about computing lies in its connection to reality, and as I will explain below, the computer hardware is the most distant and least important aspect of the ecosystem that makes scientific computing important, useful and valuable to society.

Effectively we are creating an ecosystem where the apex predators are missing, and this isn't a good thing. The models we use in science are the key to everything. They are the translation of our understanding into mathematics that we can solve and manipulate to explore our collective reality. Computers allow us to solve much more elaborate models than otherwise possible, but little else. The core of the value in scientific computing is the capacity of the models to explain and examine the physical world we live in. They are the "apex predators" in the scientific computing system; taking this analogy further, our models are becoming virtual dinosaurs where evolution has ceased to take place. The models in our codes are becoming a set of fossilized skeletons, not at all alive, evolving and growing.

The process of our models becoming fossilized is a form of living death. Models need to evolve, change, grow or even become extinct for science to be healthy. In the parlance of computing, models are embedded in codes and form the basis of a code's connection to reality. When a code becomes a legacy code, the model(s) in the code become legacy as well. A healthy ecosystem would allow the models (codes) to confront reality and come away changed in substantive ways, including evolving, adapting and even dying as a result. In many cases the current state of scientific computing, with its focus on HPC, does not serve this purpose. The forces that change codes are diverse and broad. Being able to run codes at scales never before seen should produce outcomes that sometimes lead to a code's (model's) demise. The demise or failure of models is an important and healthy part of an ecosystem that is missing today.

People do not seem to understand that faulty models render the entirety of the computing exercise moot. Yes, the computational results may be rendered into exciting and eye-catching pictures suitable for entertaining and enchanting various non-experts including congressmen, generals, business leaders and the general public. These eye-catching pictures are getting better all the time and now form the basis of a lot of the special effects in the movies. All of this does nothing for how well the models capture reality. The deepest truth is that no amount of computer power, numerical accuracy, mesh refinement, or computational speed can rescue a model that is incorrect. The entire process of validation against observations made in reality must be applied to determine whether models are correct. HPC does little to solve this problem. If the validation provides evidence that the model is wrong and a more complex model is needed, then HPC can provide a tool to solve it.

The modern computing environment, whether seen in a cell phone or a supercomputer, is a marvel of the modern world. It requires a host of technologies working together seamlessly to produce incredible things. We have immensely complex machines that produce important outcomes in the real world through a set of interwoven systems that translate electrical signals into instructions understood by the computer and humans, into discrete equations, solved by mathematical procedures that describe the real world and ultimately compared with measured quantities in systems we care about. If we look at our focus today, it falls on the part of the technology that connects very elaborate, complex computers to the instructions understood both by computers and people. This is electrical engineering and computer science. The focus begins to dampen in the part of the system where the mathematics, physics and reality come in. These activities form the bond between the computer and reality. They are not a priority, and they are conspicuously and significantly diminished by today's HPC.

HPC today is structured in a manner that eviscerates fields that have been essential to the success of scientific computing. A good example is our applied mathematics programs. In many cases applied mathematics has become little more than scientific programming and code development. Far too little actual mathematics is happening today, and far too much focus is seen in productizing mathematics in software. Many people with training in applied mathematics only do software development today and spend little or no effort doing analysis and development away from their keyboards. It isn't that software development isn't important; the issue is the lack of balance in the overall ratio of mathematics to software. The power and beauty of applied mathematics must be harnessed to achieve success in modeling and simulation. Today we are simply bypassing this essential part of the problem to focus on delivering software products.

Similar issues are present with applied physics work. A healthy research environment for making progress in HPC would see far greater changes in modeling. A key aspect of modeling is the presence of experiments that challenge the ability of modeling to produce good, useful representations of reality. Today such experimental evidence is sorely lacking, significantly hampered by the inability to take risks and push the envelope with real things. If we push the envelope on things in the real world, it will expose our understanding to deep scrutiny. Such scrutiny will necessitate changes to our modeling. Without this virtuous cycle the drive to improve is utterly lacking.

Another missing element in the overall mix is the extent to which modeling and simulation supports activities pushing society forward. We seem to be in an era where society is not interested in progressing at anything. Instead we are trying to work toward a risk-free world where everyone is completely safe and entirely divorced from ever failing at anything. Because of this environment, the overall push for better technology is blunted to the degree that nothing of substance ever gets done. The maxim of modern life is that vast amounts of effort will be expended to assure that vanishingly unlikely things are assured of not happening. No possibility of a dire outcome is too small to prevent the expenditure of vast resources to make it smaller still. This explains so much of what is wrong with the world today. We will bankrupt ourselves to achieve many expensive and unimportant outcomes that are completely unnecessary. Taking this view of the world allows us to explain the utter stupidity of the HPC world.

The origin, birth and impact of modeling and simulation arise from its support of activities essential to making societal progress. Without activities working toward societal progress (or at least the scientific-technological aspects of it), modeling and simulation is stranded in stasis. Progress in modeling and simulation is utterly tied to work in areas where big things are happening. High performance computing rose to prominence as a tool to allow modeling and simulation to tackle problems of greater complexity and difficulty. That said, HPC is only one of the tools allowing this to happen. There is a complex and vast chain of tools necessary for modeling and simulation to succeed. It is arguable that HPC isn't even the most important or lynchpin tool in the mix. If one looks at the chain of things that need to work together, the actual computer is the farthest removed from the reality we are aiming to master. If anything along the chain of tools closer to reality breaks, the computer is rendered useless. In other words, the computer can work perfectly and be infinitely fast and efficient, yet still be useless unless the software running on it is correct. Furthermore, in the exercise of modeling and simulation, the software must be based on firm mathematical and physical principles, or it will be similarly useless. This last key step is exactly the part of the overall approach we are putting little or no effort into today. Despite this, we have made HPC central to success. Too much focus on a tool of limited importance will swallow resources that could have been expended on more impactful activities. Ultimately, the drivers for progress at a societal level are necessary for any of this work to have actual meaning.

If I'm being honest, modeling and simulation is just a tool as well. As a tool it is much closer to the building of something great than the computers are. Used properly, it can have a much greater impact on the capacity to help produce great things than the computers can. What we miss more than anything is the focus on achieving great things as a society. We are too busy trying to save ourselves from a myriad of minute and vanishing threats to our safety. As long as we are so unremittingly risk averse we will accomplish nothing. Our focus on big computers over big achievements is just a small reflection of this vast societal ill.

To the man who only has a hammer, everything he encounters begins to look like a nail.

― Abraham H. Maslow

 

 
