
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: July 2016

My Job Should Be Awesome; It’s Not. Why?

29 Friday Jul 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

There is nothing quite so useless, as doing with great efficiency, something that should not be done at all.

― Peter F. Drucker

This post is going to be a bit more personal than usual; I’m trying to get my head around why work is deeply unsatisfying, and how the current system seems to conspire to destroy all the awesome potential it should have. My job should have all the things we desire: meaning, empowerment and a quest for mastery. At some level we seem to be in an era that belittles all dreams and robs work of the meaning it should have, disempowers most, and undermines mastery of anything. Worse yet, mastery of things seems to invoke outright suspicion, being regarded more as a threat than a resource. On the other hand, I freely admit that I’m lucky and have a good, well-paying, modestly empowering job compared to the average Joe or Jane. So many have it so much worse. The flipside of this point of view is that we need to improve work across the board; if the best jobs are this crappy, one can scarcely imagine how bad things are for normal or genuinely shitty jobs.

In my overall quest for quality and mastery I refuse to settle for this, and it only makes my dilemma all the more confounding. As I state in the title, my job has almost everything going for it and should be damn close to unambiguously awesome. In fact my job used to be awesome, and lots of forces beyond my control have worked hard to completely fuck that awesomeness up. Again getting to the confounding aspects of the situation, the powers that be seem to be absolutely hell bent on continuing to fuck things up, and turn awesome jobs into genuinely shitty ones. I’m sure the shitty jobs are almost unbearable. Nonetheless, I know that, grading on a curve, my job is still awesome, but I don’t settle for success being defined as being less shitty than what other people have. That’s just a recipe for things to get shittier! If I have these issues with my work, what the hell is the average person going through?

So before getting to all the things fucking everything up, let’s talk about why the job should be so incredibly fucking awesome. I get to be a scientist! I get to solve problems, do math and work with incredible phenomena (some of which I’ve tattooed on my body). I get to invent things like new ways of solving problems. I get to learn and grow and develop new skills, hone old skills and work with a bunch of super smart people who love to share their wealth of knowledge. I get to write papers that other people read and build on, I get to read papers written by a bunch of people who are way smarter than me, and if I understand them I learn something. I get to speak at conferences (which can be in nice places to visit) and listen at them too on interesting topics, and get involved in deep debates over the boundaries of knowledge. I get to contribute to solving important problems for mankind, or my nation, or simply for the joy of solving them. I work with incredible technology that is literally at the very bleeding edge of what we know. I get to do all of this and provide a reasonably comfortable living for my loved ones.

If failure is not an option, then neither is success.

― Seth Godin

All of the above is true, and here we get to the crux of the problem. When I look at each day I spend at work almost nothing in that day supports any of this. In a very real sense all the things that are awesome about my job are side projects or activities that only exist in the “white space” of my job. The actual job duties that anyone actually gives a shit about don’t involve anything from the above list of awesomeness. Everything I focus on and drive toward is the opposite of awesome; it is pure mediocre drudgery, a slog that starts on Monday and ends on Friday, only to start all over again. In a very deep and real sense, the work has evolved into a state where all the awesome things about being a scientist are not supported at all, and every fucking thing done by society at large undermines it. As a result we are steadily and completely hollowing out value, meaning, and joy from the work of being a scientist. This hollowing is systematic, but serves no higher purpose that I can see other than to place a sense of safety and control over things.

The Cul-de-Sac (French for “dead end”) … is a situation where you work and work and work and nothing much changes

― Seth Godin


So the real question to answer is how did we get to this point? How did we create systems whose sole purpose seems to be robbing life of meaning and value? How are formerly great institutions being converted into giant steaming piles of shit? Why is work becoming such a universal shit show? Work with meaning and purpose should be something society values, both from the standpoint of pure productivity and from a sense of respect for humanity. Instead we are turning away from making work meaningful, and making steady choices that destroy the meaning in work. The forces at play are a combination of fear, greed, and power. Each of these forces has a role to play in a widespread and deep destruction of a potentially better future. These forces provide short-term comfort, but long-term damage that ultimately leaves us poorer both materially and spiritually.

Men go to far greater lengths to avoid what they fear than to obtain what they desire.

― Dan Brown

Of these forces, fear is the most acute and widespread. Fear is harnessed by the rich and powerful to hold onto and grow their power, their stranglehold on society. Across society we see people looking at the world and saying “Oh shit! This is scary, make it stop!” The rich and powerful harness this chorus of fear to hold onto and enhance their power. The fear comes from the unknown and from change, which drives people into attempting to control things, which also suits the needs of the rich and powerful. This control gives people a false sense of safety and security at the cost of empowerment and meaning. For those at the top of the food chain, control is what they want because it allows them to hold onto their largess. The fear is basically used to enslave the population and cause them to willingly surrender for promises of safety and security against a myriad of fears. In most cases we don’t fear the greatest thing threatening us, the forces that work steadfastly to rob our lives of meaning. At work, fear is the great enemy of all that is good, killing meaning, empowerment and mastery in one fell swoop.

Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.

― John Steinbeck

How does this manifest itself in my day-to-day work? A key mechanism in undermining meaning in work is the ever more intrusive and micromanaged money running research. The control comes under the guise of accountability (who can argue with that, right?). The accountability leads to a systematic diminishment in achievement and has much more to do with a lack of societal trust (which embodies part of the mechanics of fear). Instead of ensuring better results and money well spent, the whole dynamic creates a virtual straitjacket for everyone in the system that assures they actually create, learn and produce far less. We see research micromanaged, and projectized in ways that are utterly incongruent with how science can be conducted. The lack of trust translates to a lack of risk, and the lack of risk equates to a lack of achievement (with empowerment and mastery sacrificed at the altar of accountability). This is only one aspect of how the control works to undermine work. There are so many more.

Our greatest fear should not be of failure but of succeeding at things in life that don’t really matter.

― Francis Chan

Part of these systematic control mechanisms at play is the growth of the management culture in all these institutions. Instead of valuing the top scientists and engineers who produce discovery, innovation and progress, we now value the management class above all else. The managers manage people, money and projects that have come to define everything. This is as true at the Labs as it is at universities, where the actual mission of both has been sacrificed to money and power. Neither the Labs nor the universities are producing what they were designed to create (weapons, students, knowledge). Instead they have become money-laundering operations whose primary service is the careers of managers. All one has to do is see who the headline grabbers are from any of these places; it’s the managers (who by and large show no leadership). These managers are measured in dollars and people, not any actual achievements. All of this is enabled by control, and control enables people to feel safe. As long as reality doesn’t intrude we will continue down this deep death spiral.

We have priorities and emphasis in our work and money that have nothing to do with the reason our Labs, universities or even companies exist. We have useless training that serves absolutely no purpose other than to check a box. The excellence or quality of the work done has no priority at all. We have gotten to the point where peer review is a complete sham, and any honest assessment of the quality of the work is met with hostility. We should all wrap our heads collectively around this maxim of the modern workplace: it can be far worse for your career to demand technical quality as part of what you do than to do shoddy work. We are heading headlong into a mode of operation where mediocrity is enshrined as a key organizational value to be defended against potential assaults by competence. All of this can be viewed as the ultimate victory of form over substance. If it looks good, it must be good. The result is that appearances are managed, and anything of substance is rejected.

The result of the emphasis on everything except the core mission of our organizations is the systematic devaluation of those missions, along with a requisite creeping incompetence and mediocrity. In the process the meaning and value of the work takes a fatal hit. Actually expressing a value system of quality and excellence is now seen as a threat and becomes a career-limiting perspective. A key aspect of the dynamic to recognize is the relative simplicity of running mediocre operations without any drive for excellence. It’s great work if you can get it! If your standards are complete shit, almost anything goes, and you avoid the need for conflict almost entirely. In fact the only source of conflict becomes the need to drive away any sense of excellence. Any hint of quality or excellence has the potential to overturn this entire operation and the sweet deal of running it. So any quality ideas are attacked and driven out as surely as the immune system attacks a virus. While this might be a tad hyperbolic, it’s not too far off at all, and it is the actual bull’s-eye for an ever growing swath of our modern world.

The key value is money and its continued flow. The whole system runs on a business model of getting money regardless of what it entails doing or how it is done. Of course having no standards makes this so much easier; if you’ll do any shitty thing as long as they pay you for it, management is easier. Without standards of quality this whole operation becomes self-replicating. In a very direct way the worst thing one can do is get hard work, so the system is wired to drive good work away. You’re actually better off doing shitty work held to shitty standards. Doing the right thing, or the thing right, is viewed as a direct threat to the flow of money and generates an attack. The prime directive is money to fund people and measure the success of the managers. Whether or not the money generates excellent meaningful work or focuses on something of value simply does not matter. It becomes a completely vicious cycle where money breeds more money, and more money can be bred by doing simple shoddy work than by asking hard questions and demanding correct answers. In this way we can see how mediocrity becomes the value that is tolerated while excellence is reviled. Excellence is a threat to power; mediocrity simply accepts being lorded over by the incompetent.


At some level it is impossible to disconnect what is happening in science from the broader cultural trends. Everything happening in the political climate today is part of the trends I see at work. The political climate is utterly and completely corrosive, and the work environment is the same. In the United States we have had 20 years of government that has been engineered not to function. This supports the argument that government is bad and doesn’t work (and should be smaller). The fact is that it is engineered not to work by the proponents of this philosophy. The result is a literal self-fulfilling prophecy: government doesn’t work if you don’t try to make it work. If we actually put effort into making it work, valued expertise and excellence, it would work just fine. We get shit because that’s what we ask for. If we demanded excellence and performance, and actually held people accountable for it, we might actually get it, but it would be hard, it would be demanding. The problem is that success would disprove the maxim that government is bad and doesn’t work.

One of my friends recently pointed out that the people managing and running the programs that fund the work at the Labs in Washington actually make less than our Postdocs at the Labs. The result is that we get what we pay for, incompetence, which grows more manifestly obvious with each passing year. If we want things to work we need to hire talented people and hold them to high standards, which means we need to pay them what they are worth.

We see a large body of people in society who are completely governed by fear above all else. The fear is driving people to make horrendous and destructive decisions politically. The fear is driving the workplace into the same set of horrendous and destructive decisions. It’s not clear whether we will turn away from this mass fear before things get even worse. I worry that both work and politics will be governed by these fears until they push us over the precipice to disaster. Put differently, the shit show we see in public through politics mirrors the private shit show in our workplaces. The shit is likely to get much worse before it gets better.

There are two basic motivating forces: fear and love. When we are afraid, we pull back from life. When we are in love, we open to all that life has to offer with passion, excitement, and acceptance. We need to learn to love ourselves first, in all our glory and our imperfections. If we cannot love ourselves, we cannot fully open to our ability to love others or our potential to create. Evolution and all hopes for a better world rest in the fearlessness and open-hearted vision of people who embrace life.

― John Lennon

A More Robust, Less Fragile Stability for Numerical Methods

25 Monday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

 

Science is the process that takes us from confusion to understanding…

― Brian Greene

Stability is essential for computation to succeed. Better stability principles can pave the way for greater computational success. We are in dire need of new, expanded concepts for stability that provide paths forward toward uncharted vistas of simulation.

Without stability numerical methods are completely useless. Even a modest amount of instability can completely undermine and destroy the best simulation intentions. Stability became a thing right after computers became a thing. Early work on ordinary differential equations encountered instability, but because the computations were done by hand the results were always suspect. The availability of automatic computation via computers ended the speculation, and it became clear that numerical methods could become unstable. With proof of a clear issue in hand, great minds went to work to put this potential chaos in order. This is the kind of great work we should be asking applied math to be doing today, and sadly are not because of our overreliance on raw computing power.

Von Neumann devised the first technique for stability analysis after encountering instability at Los Alamos during World War 2. This method is still the gold standard for analysis today in spite of rather profound limitations on its applicability. In the early 1950’s Lax came up with the equivalence theorem (interestingly, both Von Neumann and Lax worked with Robert Richtmyer, https://williamjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/), which only highlighted the importance of stability more boldly. Remarkably, ordinary differential equation methods came to stability later than partial differential equations, in Dahlquist’s groundbreaking work. He produced a stability theory and equivalence theory that paralleled the work of Von Neumann and Lax for PDEs. All he needed were computers to drive the need for the work. We will note that the PDE theory is all for linear methods and linear equations, while the ODE theory is for linear methods but applies to nonlinear ODEs.

Once a theory was established for stability, computations could proceed with enough guarantee of solution to make progress. For a very long time this stability work was all that was needed. Numerical methods, algorithms and general techniques galore came into being and application, covering a broad swath of the physics and engineering world. Gradually, over time, we started to see computation spoken of as a new complementary practice in science that might stand shoulder to shoulder with theory and experiment. These views are a bit on the grandiose side of things, where a more balanced perspective might rationally note that numerical methods allow complex nonlinear models to be solved where classical analytical approaches are quite limited. At this point it’s wise to confront the issue that might be creeping into your thinking: our theory is mostly linear while the utility of computation is almost all nonlinear. We have a massive gap between theory and utility with virtually no emphasis, focus or effort to close it.

This is so super important that I’ve written about it before: doing the basic Von Neumann methodology using Mathematica, https://williamjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/ & https://williamjrider.wordpress.com/2014/07/21/von-neumann-analysis-of-finite-difference-methods-for-first-order-hyperbolic-equations/, in the guise of thoughts about robustness, https://williamjrider.wordpress.com/2014/12/03/robustness-is-stability-stability-is-robustness-almost/, and practical considerations for hyperbolic PDEs, https://williamjrider.wordpress.com/2014/01/11/practical-nonlinear-stability-considerations/. Running headlong through this arc of thought are lessons learned from hyperbolic PDEs. Hyperbolic PDEs have always been at the leading edge of computation because they are important to applications and difficult, which has attracted a lot of real, unambiguous genius to solving them. I’ve mentioned a cadre of geniuses who blazed the trails 60 to 70 years ago (Von Neumann, Lax, Richtmyer, https://williamjrider.wordpress.com/2014/05/30/lessons-from-the-history-of-cfd-computational-fluid-dynamics/, https://williamjrider.wordpress.com/2015/06/25/peter-laxs-philosophy-about-mathematics/). We are in dire need of new geniuses to slay the nonlinear dragons that stand in the way of progress. Unfortunately there is little or no appetite or desire for progress, and the general environment stands squarely in opposition. The status quo is viewed as all we need, https://williamjrider.wordpress.com/2015/07/10/cfd-codes-should-improve-but-wont-why/, and progress in improving basic capabilities and functionality has disappeared, except for utilizing ever more complex and complicated hardware (with ever-vanishing practical returns).
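Stepping back to the linear machinery for a moment, here is a minimal sketch of the basic Von Neumann check in Python rather than Mathematica (the choice of the first-order upwind scheme for linear advection and the helper names are my own illustrative assumptions, not taken from the linked posts). Substituting a Fourier mode u_j^n = g^n e^{i j \theta} into the scheme gives an amplification factor g(\theta); the scheme is stable when |g(\theta)| \le 1 for every angle.

```python
import numpy as np

def upwind_amplification(nu, theta):
    """Amplification factor g(theta) for first-order upwind applied to u_t + a u_x = 0:
    u_j^{n+1} = u_j^n - nu*(u_j^n - u_{j-1}^n), with Courant number nu = a*dt/dx > 0."""
    return 1.0 - nu * (1.0 - np.exp(-1j * theta))

def von_neumann_stable(nu, n_angles=721):
    """Sweep the Fourier angle over [0, 2*pi] and check |g(theta)| <= 1 everywhere."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles)
    return np.max(np.abs(upwind_amplification(nu, theta))) <= 1.0 + 1e-12

if __name__ == "__main__":
    for nu in (0.25, 0.5, 1.0, 1.1):
        print(f"nu = {nu:4.2f}: stable = {von_neumann_stable(nu)}")  # fails only for nu > 1
```

The sweep recovers the familiar condition 0 ≤ ν ≤ 1 for this scheme; the same two-line substitution works for any linear, constant-coefficient difference scheme, which is exactly where the classical theory stops.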

Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.

― Nassim Nicholas Taleb

I’ve been thinking about concepts of nonlinear stability for a while, wondering if we can move past the simple and time-honored concepts developed so long ago. Recently I’ve taken a look at Taleb’s “anti-fragile” concept and realized that it might have some traction in this arena. In a sense the nonlinear stability concepts developed heretofore are akin to anti-fragility, where the methodology is developed for, and works best in, the worst-case scenario. In the case of nonlinear stability for hyperbolic PDEs the worst-case scenario is a linear discontinuity, where the solution has a jump and the linear solution is utterly unforgiving of any errors. In this crucible all the bad things that can happen to a solution arise: either overly diffusive, low-accuracy solutions, or oscillations with high accuracy producing demonstrably unphysical results.

No structure, even an artificial one, enjoys the process of entropy. It is the ultimate fate of everything, and everything resists it.

― Philip K. Dick

When the discontinuity is associated with a real physical solution these twin and opposing maladies are both unacceptable. The diffusion leads to a complete waste of computational effort that dramatically inhibits any practical utility of numerical methods. For more complex systems of equations where turbulent chaotic solutions would naturally arise, the diffusive methods drive all solutions to be laminar and boring (not matching physical reality in essential aspects!). On the plus side of the ledger, the diffusive solution is epically robust and reliable, a monument to stability. On the other hand, high-order methods based on the premise that the solution is smooth and differentiable (i.e., nice and utterly ideal) are the epitome of fragility. The oscillations can easily put the solution into unphysical states that render the solution physically absurd.

Difficulty is what wakes up the genius

― Nassim Nicholas Taleb

Now we get to the absolute genius of nonlinear stability that arose from this challenge. Rather than forcing us to accept one or the other, we introduce a concept by which we can have the best of both, using whatever discretization is most appropriate for the local circumstances. Thus we have a solution adaptive method that chooses the right approximation for the solution locally. Therefore a different method may be used for every place in space and time. The key concept is the rejection of the use of a linear discretization where the same method is applied everywhere in the solution, which caused the entire problem elucidated above. Instead we introduce a mechanism to analyze the solution and introduce an approximation appropriate for the local structure of the solution.

The desired outcome is to use the high-order solution as much as possible, but without inducing the dangerous oscillations. The key is to build upon the foundation of the very stable, but low-accuracy, dissipative method. The theory that can be utilized makes the dissipative structure of the solution a nonlinear relationship. This produces a test of the local structure of the solution, which tells us when it is safe to be high-order, and when the solution is so discontinuous that the low-order solution must be used. The result is a solution that is high-order as much as possible, and inherits the stability of the low-order solution, gaining purchase on its essential properties (asymptotic dissipation and entropy principles). These methods are so stable and powerful that one might utilize a completely unstable method as one of the options with very little negative consequence. This class of methods revolutionized computational fluid dynamics, and allowed relative confidence in the use of these methods to solve practical problems.
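As a concrete illustration of this kind of blending, here is a minimal sketch, assuming plain linear advection and a minmod-limited, MUSCL-type reconstruction (an illustrative choice on my part, not the specific scheme of any particular code). The limiter supplies a second-order slope where the data are smooth and collapses to the robust first-order upwind method at a jump.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero when the arguments disagree in sign (a local extremum
    or a jump), otherwise the argument of smaller magnitude."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, nu):
    """One step of u_t + a u_x = 0 (a > 0, periodic grid), nu = a*dt/dx.
    Piecewise-linear reconstruction with minmod slopes: second-order where the
    data are smooth, falling back to first-order upwind at discontinuities."""
    du_minus = u - np.roll(u, 1)           # u_j - u_{j-1}
    du_plus = np.roll(u, -1) - u           # u_{j+1} - u_j
    slope = minmod(du_minus, du_plus)      # limited slope in each cell
    u_face = u + 0.5 * (1.0 - nu) * slope  # upwind-biased value at face j+1/2
    return u - nu * (u_face - np.roll(u_face, 1))

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square wave: the worst case
    for _ in range(100):
        u = advect_step(u, nu=0.5)
    print("min/max after 100 steps:", u.min(), u.max())  # remains within [0, 1]
```

Run on a square wave, the solution stays within its initial bounds; that boundedness is exactly the nonlinear stability the low-order ingredient contributes, while the limited slope recovers second-order accuracy in the smooth regions.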

Instead of building on the lessons learned with these methods, we seem to have entered an era governed by the belief that no further progress on this front is needed. We have lost sight of the benefits of looking to produce better methods. Better methods, like those that are nonlinearly stable, open new vistas of simulation that presently are closed to systematic and confident exploration. The methods described above did just this and produced the current (over)confidence in CFD codes.

A good question is why haven’t these sorts of ideas spread to a wider swath of the numerical world? The core answer is “good enough” thinking, which is far too pervasive today. Part of the issue is that the immediacy of need present for hyperbolic PDEs isn’t there in other areas. Take time integration methods, where the consensus view is that ODE integration is good enough, thank you very much, and we don’t need to put in the effort. To be honest, it’s the same idea as with CFD codes, which are deemed good enough and not in need of effort. Other areas of simulation like parabolic and elliptic PDEs might also use such methods, but again the need is far less obvious than for hyperbolic PDEs. The truth is that we have made rather stunning progress in both areas and the breakthroughs have put forth the illusion that the methods of today are good enough. We need to recognize that this is awesome for those who developed the status quo, but a very bad thing if there are other breakthroughs ripe for the taking. In my view we are at such a point and missing the opportunity to make the “good enough” into “great” or even “awesome”. Nonlinear stability is deeply associated with adaptivity and ultimately with more optimal and appropriate approximations for the problem at hand.

If you’re any good at all, you know you can be better.
― Lindsey Buckingham

So what might a more general principle look like as applied to ODE integration? Let’s explore this along with some ideas of how to extend the analysis to support such paths. The acknowledged bullet-proof method is backwards Euler, u^{n+1} = u^n + h f(u^{n+1}), which is A-stable and the ODE gold standard of robustness. It is also first-order accurate. One might like to use something higher order with equal reliability, but alas this is exactly the issue we have with hyperbolic PDEs. What might a nonlinear approximation look like?

Let’s assume we will stick with the BDF (backwards differentiation formula) methods, and we have an ODE that should produce positive definite (or some sort of sign- or property-preserving character). We will stick with positivity for simplicity’s sake. The fact is that perfectly linearly stable solutions may well produce solutions that lose positivity. Staying with simplicity of exposition, the second-order BDF method is \frac{3}{2} u^{n+1} = 2 u^n - \frac{1}{2} u^{n-1} + h f(u^{n+1}). This can be usefully rearranged to \frac{3}{2} u^{n+1} - h f(u^{n+1}) = 2 u^n -\frac{1}{2} u^{n-1}. If the right hand side of this expression is positive, 2 u^n -\frac{1}{2} u^{n-1} > 0, and the eigenvalue of f(u^{n+1}) < 0, we have faith that u^{n+1} > 0. If the right hand side is negative, it would be wise to switch to the backwards Euler scheme for this time step. We could easily envision taking this approach to higher and higher order.
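Here is a minimal sketch of that switching logic for the scalar test equation u' = \lambda u with \lambda < 0 (the scalar problem, the step size, and the startup step are my own illustrative assumptions): take the BDF2 step whenever its explicit right-hand side 2 u^n - \frac{1}{2} u^{n-1} is positive, and drop to backward Euler for that step otherwise.

```python
def bdf2_step(u_n, u_nm1, h, lam):
    """Plain BDF2 for u' = lam*u: (3/2)u^{n+1} = 2u^n - (1/2)u^{n-1} + h*lam*u^{n+1}."""
    return (2.0 * u_n - 0.5 * u_nm1) / (1.5 - h * lam)

def positivity_guarded_step(u_n, u_nm1, h, lam):
    """Use BDF2 when its explicit right-hand side 2u^n - (1/2)u^{n-1} is positive
    (which guarantees u^{n+1} > 0 for lam < 0); otherwise fall back to the
    unconditionally positivity-preserving backward Euler step."""
    rhs = 2.0 * u_n - 0.5 * u_nm1
    if rhs > 0.0:
        return rhs / (1.5 - h * lam)        # second-order BDF2 step
    return u_n / (1.0 - h * lam)            # first-order backward Euler step

if __name__ == "__main__":
    lam, h = -50.0, 0.1                     # stiff decay: h*lam = -5
    u0 = 1.0
    u1 = u0 / (1.0 - h * lam)               # start both schemes with one backward Euler step
    plain, guarded = [u0, u1], [u0, u1]
    for n in range(1, 10):
        plain.append(bdf2_step(plain[n], plain[n - 1], h, lam))
        guarded.append(positivity_guarded_step(guarded[n], guarded[n - 1], h, lam))
    print("plain BDF2 goes negative:", min(plain) < 0.0)         # True at this step size
    print("guarded scheme stays positive:", min(guarded) > 0.0)  # True
```

For this step size plain BDF2 oscillates below zero, while the guarded scheme quietly gives up an order of accuracy on the offending steps and keeps the solution positive, which is the trade the nonlinear test is designed to make.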

Further extensions of nonlinear stability would be useful for parabolic PDEs. Generally parabolic equations are fantastically forgiving, so doing anything more complicated is not prized unless it produces better accuracy at the same time. Accuracy is eminently achievable because parabolic equations generate smooth solutions. Nonetheless these accurate solutions can still produce unphysical effects that violate other principles. Positivity is rarely threatened, although this would be a reasonable property to demand. It is more likely that the solutions will violate some sort of entropy inequality in a mild manner. Instead of producing something demonstrably unphysical, the solution would simply not generate enough entropy to be physical. As such we can see solutions approaching the right solution, but in a sense from the wrong direction, which threatens to produce non-physically admissible solutions. One potential way to think about this might be in an application to heat conduction. One can examine whether or not the computed flow of heat matches the proper direction of heat flow locally, and if a high-order approximation does not, either choose another high-order approximation that does, or limit to a lower-order method with unambiguous satisfaction of the proper direction. The intrinsically unfatal impact of these flaws means they are not really addressed.
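A hedged sketch of what such a check might look like for 1-D heat conduction (the fourth-order interface gradient and the specific admissibility test are my own illustrative choices, not an established scheme): build a high-order interface flux, test whether it carries heat from hot to cold across each face, and limit to the two-point flux wherever it does not.

```python
import numpy as np

def limited_heat_flux(T, dx, k=1.0):
    """Interface fluxes for 1-D conduction on a periodic grid, Fourier's law q = -k dT/dx.
    A fourth-order estimate of the interface gradient is used wherever the resulting
    flux drives heat from hot to cold across that face (q * (T_right - T_left) <= 0);
    otherwise the flux is limited to the two-point form, which satisfies the
    condition by construction."""
    dT = np.roll(T, -1) - T                               # T_{j+1} - T_j across face j+1/2
    grad_low = dT / dx
    grad_high = (27.0 * dT - (np.roll(T, -2) - np.roll(T, 1))) / (24.0 * dx)
    q_low, q_high = -k * grad_low, -k * grad_high
    wrong_way = q_high * dT > 0.0                         # flux would run cold -> hot
    return np.where(wrong_way, q_low, q_high)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 64, endpoint=False)
    T = np.where((x > 0.25) & (x < 0.75), 2.0, 1.0)       # hot slab in a cold background
    q = limited_heat_flux(T, dx=x[1] - x[0])
    assert np.all(q * (np.roll(T, -1) - T) <= 0.0)        # no face drives heat from cold to hot
    print("max |q| =", np.abs(q).max())
```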

Mediocrity will never do. You are capable of something better.
― Gordon B. Hinckley

Thinking about the marvelous magical median (https://williamjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/) spurred these thoughts to mind. I’ve been taken with this function’s ability to produce accurate approximations from approximations that are lower order, and pondering whether this property can extend to stability, either linear or nonlinear (empirical evidence is strongly in favor). Could the same functionality be applied to other schemes or approximation methods? In the case of the median, if we take two of the three arguments to have a particular property, the result will inherit this property whenever those two arguments bound it. This is immensely powerful, and you can be off to the races by simply having two schemes that possess a given desirable property. Using this approach more broadly than it has been applied thus far would be an interesting avenue to explore.
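A tiny sketch of the median mechanism (the values and the admissible interval are purely illustrative): the median of three numbers always lies between any two of them, so if two arguments satisfy an interval-type bound, the median satisfies it too, even when the third, more accurate argument does not.

```python
import numpy as np

def median3(a, b, c):
    """Elementwise median of three values; equivalently, a clipped into [min(b, c), max(b, c)]."""
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

if __name__ == "__main__":
    high_order = 1.4            # accurate but overshoots the admissible range [0, 1]
    safe_1, safe_2 = 0.9, 1.0   # two low-order values that both lie in [0, 1]
    blended = median3(high_order, safe_1, safe_2)
    print(blended)              # 1.0: inherits the bound shared by the two safe values
```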

If you don't have time to do it right, when will you have the time to do it over?
― John Wooden

 

The Death of Peer Review

16 Saturday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 6 Comments

 

If you want to write a negative review, don’t tickle me gently with your aesthetic displeasure about my work. Unleash the goddamn Kraken.

― Scott Lynch

Sadly, this spirit is not what we see today either on the giving or receiving end of peer review, and we are all poorer for it.

As surely as the sun rises in the East, peer review, that treasured and vital process for the health of science, is dying or dead. In many cases we still try to conduct meaningful peer review, but increasingly it is a mere animated zombie form of peer review. The zombie peer review of today is a shadow of the living soul of science it once was. Its death is merely a manifestation of bigger, broader societal trends, such as those unraveling the political process or transforming our economies. We have allowed the quality of the work being done to become an assumption that we do not actively interrogate through a critical process (e.g., peer review). Instead, if we examine the emphasis in how money is spent in science and engineering, everything but the quality of the technical work is focused on and made the subject of demands. There is an inherent assumption that the quality of the technical work is excellent, and the focus remains organizational or institutional. With sufficient time this lack of emphasis erodes the quality presumptions to the point where they no longer hold sway.

Being the best is rarely within our reach. Doing our best is always within our reach.

― Charles F. Glassman

In science, peer review takes many forms, each vital to the healthy functioning of productive work. Peer review forms an essential check and balance on the quality of work, a wellspring of ideas and a vital communication mechanism. Whether in the service of publishing cutting-edge research, or providing quality checks for laboratory research or engineering design, its primal function is the same: quality, defensibility and clarity are derived through its proper application. In each of its fashions peer review has an irreplaceable core of community wisdom, culture and self-policing. With its demise, each of these is at risk of dying too. Rebuilding everything we are tearing down is going to be expensive, time-consuming and painful.

Let’s get to the first conclusion of this thought process: peer review is healthiest in its classic form, the academic publishing review, and it is in crisis there. The scientific community widely acknowledges that the classic anonymous peer review is absolutely riddled with problems and abuses. The worst bit is that this is where it works the best. So at its best, peer review is terrible. The critiques are many and valid. For example, there is widespread abuse of the process by the powerful and established. The system is driven by a corrupt academic system that feeds the overall dysfunction (i.e., publish or perish). Corruption and abuse by the journals themselves is deep and getting worse, never mind the exploding costs. Then we have issues around teaming, conflicts of interest and deeply passive-aggressive behavior veiled behind anonymity. Despite all these problems, peer review here still largely works, albeit in a deeply suboptimal manner.

Another complaint is the time and effort that these reviews take, along with suggestions to make things better with modern technology. Online publishing and the ubiquity of the Internet are capable of radically reducing the time and effort (equals money) of publishing a paper. I will say that the time and effort issue for peer review is barking up the wrong tree. The problem with the time and effort is that peer review isn’t valued sufficiently. Doing peer review isn’t given much weight professionally, whether you’re a professor or working in a private or government lab. Peer review won’t give you tenure, or pay raises, or other benefits; it is simply a moral act as part of the community. This character as an unrewarded moral act gets to the issue at the heart of things. Moral acts and “doing the right thing” are not valued today, nor are there definable norms of behavior that drive things. It simply takes the form of an unregulated professional tax, pro bono work. The way to fix this is to change the system to value and reward good peer review (and by the same token punish bad reviewing in some way). This is a positive side of modern technology, which would be good to see, as the demise of peer review is driven to some extent by negative aspects of modernity, as I will discuss at the end of this essay.

Honest differences are often a healthy sign of progress.

― Mahatma Gandhi

Let’s step away from the ideal context of the classical academic peer review of a paper to an equally common practice, the peer review of organizations, programs and projects. This is a practice of equal or greater importance as it pertains to the execution of technical work across the world. We see it in action in the form of design reviews for software, engineered products and analyses used to inform decisions. In my experience peer review in these venues is in complete free-fall, collapsing under the weight of societal pressures that cannot support the proper execution of the necessary practices. My argument is that we are living within a profoundly low-trust world, and peer review relies upon implicit expectations of trust to be executed with any competence. This lack of trust is present on both ends of the peer review system. When the trust is low, honesty cannot be present; instead honesty will be punished.

First let’s talk about the source of critique, the reviewers. Reviewers have little trust and faith that their efforts will be taken seriously if they find problems, and if they do raise an issue it is just as likely that they, the messenger, will be punished instead. As a result reviewers rarely do a complete or good job of reviewing things, as they understand what the expected result is. Thus the review gets hollowed out from its foundation, because the recipient of the review expects to get more than just a passing grade; they expect to get a giant pat on the back. If they don’t get their expected results, the reaction is often swift and punishing to those finding the problems. Often those looking over the shoulder are equally unaccepting of problems being found. Those overseeing work are highly political and worried about appearances or potential scandal. The reviewers know this too, and know that a bad review won’t result in better work, it will just be trouble for those being reviewed. The end result is that peer review is broken by the review itself being hollow, the reviewers going easy on the work because of the explicit expectations, and the implicit punishments and lack of follow-through for any problems that might be found.

Those who can create, will create. Those who cannot create, will compete.

― Michael F. Bruyn

If we look to those being reviewed, the system only gets worse. The people being reviewed see only downside to engaging in peer review, no upside at all. Increasingly, any sort of spirit or implied expectation of technical quality has been left behind. Peer review is done to provide the veneer of quality regardless of its actual presence. As a consequence, any result from peer review that doesn’t say the work is the best and executed perfectly is ripe to be ignored (or to punish those not complying with the implied directive). Those being reviewed have no desire or intent to take any corrective action or address any issue that might be surfaced. As a result the peer review is simply window dressing and serves no purpose other than marketing. The reasons for the evolution to this dysfunctional state are many and clear. The key to the problem is the lack of ability to politically confront problems. A problem is often taken as a death sentence rather than a call to action. Since no issues or actual challenges will be confronted, much less solved, the only course of action is to ignore and bury them.

We then get to the level above those being reviewed and closer to the source of the problem, the political system. Our political systems are poisonous to everything and everyone. We do not have a political system, perhaps anywhere in the world, that is functioning to govern. The result is a collective inability to deal with issues, problems and challenges at a massive scale. We see nothing but stagnation and blockage. We have a complete lack of will to deal with anything that is imperfect. Politics is always present and important because science and engineering are still intrinsically human activities, and humans need politics. The problem is that truth and reality must play some normative role in decisions. The rejection of effective peer review is a rejection of reality as being germane and important in decisions. This rejection is ultimately unstable and unsustainable. The only question is when and how reality will impose itself, but it will happen, and in all likelihood through some sort of calamity.

To get to a better state vis-à-vis peer review, trust and honesty need to become a priority. This is a piece of a broader rubric for progress toward a system that values work that is high in quality. We are not talking about excellence as superficially declared by the current branding exercise peer review has become, but the actual achievement of unambiguous excellence. The combination of honesty, trust and the search for excellence and achievement is needed to begin to fix our system. Much of the basic structure of our modern society is arrayed against this sort of change. We need to recognize the stakes in this struggle and prepare ourselves for difficult times. Producing a system that supports something that looks like peer review will be a monumental struggle. We have become accustomed to a system that feeds on false excellence and achievement and celebrates scandal as an opiate for the masses.

One only needs to look to the nature of the current public discourse and political climate. We are rapidly moving into a state where the discourse is utterly absent of any substance and the poisonous climate is teetering over into the destructive. Reality is beginning to fight back against the flaws in the system. Socially we are seeing increased fear, violence and outright conflict. The problems with peer review pale in comparison to the tide rolling in, but reflect many of the same issues. Peer review is an introverted view of our numerous ills, where the violence and damaging environment evident in our mass media is the extroverted side of the same coin. In this analysis peer review is simply another side effect of the massive issues confronting our entire world, projected into the environment of science and engineering. Fixing all these issues is in the best interests of humanity, but it’s going to be hard and unpleasant. Because of the difficulty of fixing any of this, we will avoid it until the problems become unbearable for a large enough segment of humanity. Right now it is easier and simpler to just accept an intrinsically uncritical perspective and simply lie to ourselves about how good everything is and how excellent all of us are.

If one doesn’t have the stomach to examine things through such a social lens, one might consider the impact of money on the system. In many ways the critical review of research can now be measured almost entirely in monetary terms. This is especially true in the organizational or laboratory environment, where most people managing view money and its continued flow as the only form of review they care about. In such a system a critical peer review becomes a threat instead of a source of renewal. Gradually, over time, the drive for technical excellence is replaced by the drive for financial stability. We have allowed financial stability to become disconnected from technical achievement, and in doing so killed peer review. When technical excellence and achievement become immaterial to any measure of success, and only money matters, peer review is something to be ignored, avoided and managed because no perceived good can come from it.

Self-consciousness kills communication.

― Rick Steves

Worse than having no perceived good associated with it, peer review, if done properly, becomes evidence of problems. The problems exposed in peer review represent calls to action that today’s systems cannot handle because they are an affront to planning, schedules, milestones and budgetary allocations. Problems also expose flaws in the fundamental assumption of today’s world that the technical work is high quality and does not need active, focused (appropriate) management to succeed. As a result any problems induce a “shoot the messenger” mentality that acts to destroy the critique and send a clear message that peer review should not be done honestly or seriously. The result has been a continual erosion of the technical quality so often assumed to be present a priori. This is a vicious cycle where technical problems remain unexposed, or hidden by a lack of vigorous, effective peer review. The problems then fester and grow, because problems like these do not cure themselves, and the resistance to peer review or any form of critique only becomes further reinforced. This doesn’t end well, and the end results are perfectly predictable. Nothing stems the decay, and its encroachment ultimately ends the ability to conduct a peer review at all. Moreover, the culture that is arising in science acts as a further inhibition to effective review by removing the attitudes necessary for success from the basic repertoire of behaviors.

Some men are born mediocre, some men achieve mediocrity, and some men have mediocrity thrust upon them.

― Joseph Heller


10 Big Things For the Future of (Computational) Science

10 Sunday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

 

The future depends on what you do today.

― Mahatma Gandhi

The future is already here – it’s just not evenly distributed.

― William Gibson

When did the future switch from being a promise to being a threat?

― Chuck Palahniuk

It has been a long time since I wrote a list post, and it seemed a good time to do one. They’re always really popular online, and it’s a good way to survey something. Looking forward into the future is always a nice thing when you need to be cheered up. There are lots of important things to do, and lots of massive opportunities. Maybe if we can muster our courage and vision we can solve some important problems and make a better world. I will cover science in general, and hedge the conversation toward computational science, because that’s what I do and know the most about.

Mediocrity will never do. You are capable of something better.

― Gordon B. Hinckley

Here is the list:

  1. Fixing the research environment and encouraging risk taking, innovation and tolerance for failure
  2. CRISPR
  3. Additive manufacturing
  4. Exascale computing
  5. Nontraditional computing paradigms
  6. Big data
  7. Reproducibility of results
  8. Algorithmic breakthroughs
  9. The upcoming robotic revolution (driverless cars)
  10. Cyber-security and cyber-privacy
  1. Fixing the research environment and encouraging risk taking, innovation and tolerance for failure. I put this first because it impacts everything else so deeply. There are many wonderful things that the future holds for all of us, but the overall research environment is holding us back from the future we could be having. The environment for conducting good, innovative, game changing research is terrible, and needs serious attention. We live in a time where all risk is shunned and any failure is punished. As a result innovation is crippled before it has a chance to breathe. The truth is that it is a symptom of a host of larger societal issues revolving around our collective governance and capacity for change and progress. Somehow we have gotten the idea that research can be managed like a construction project, and that such management is a mark of quality. Science absolutely needs great management, but the current brand of scheduled breakthroughs, milestones and micromanagement is choking the science away. We have lost the capacity to recognize that current management is only good for leeching money out of the economy for personal enrichment, and terrible for the organizations being managed, whether it’s a business, laboratory or university. These current fads are oozing their way into every crevice of research, including higher education where so much research happens. The result is a headlong march toward mediocrity and the destruction of the most fertile sources of innovation in the society. We are living off the basic research results of 30-50 years past, and creating an environment that will assure a less prosperous future. This plague is the biggest problem to solve but is truly reflective of a broader cultural milieu and may simply need to run its disastrous course. Over the weekend I read about the difference between first- and second-level thinking. First-level thinking looks for the obvious and superficial as a way of examining problems, issues and potential solutions. It deals with things in an obvious and completely intellectually unengaged manner. Let’s just say that science today is governed by first-level thinking, and it’s a very bad thing. This is contrasted with second-level thinking, which teases problems apart, analyzes them, and looks beyond the obvious and superficial. It is the source of innovation, serendipity and inspiration. Second-level thinking is the realm of expertise and depth of thought, and we all should know that in today’s world the expert is shunned and reviled as being dangerous. We will all suffer the ill effects of devaluing expert judgment and thought as applied to our very real problems.
  2. CRISPR. What can I really say here? This technology is huge, enormous, and an absolute game changer. When I learned about it the first time it literally stopped me in my tracks and I said “holy shit this could change everything!” If you don’t know, CRISPR is the first easy-to-use and flexibly programmable method for manipulating the genetic code of living beings, as well as short-circuiting the rules of natural selection. Just like nuclear energy, CRISPR could be a massive force for good or evil. It has the potential to change the rules of how we deal with a host of diseases and plagues upon mankind. It also has the capacity to produce weapons of mass destruction and unleash carnage upon the world. We must use far more wisdom than we typically show in wielding its power. How we do this will shape the coming decades in ways we can scarcely imagine. It also emerges in the current era where great ideas are allowed to wither and die. It seems reasonable to say that we don’t know how to wield the very discoveries we make, and CRISPR seems like the epitome of this.
  3. Additive manufacturing. In engineering circles this is a massive opportunity for innovation and a challenge to a host of existing practices and knowledge. It will both impact and draw upon other issues from this list in how it plays out. It is often associated with the term 3-D printing, where we can produce full three-dimensional objects in a process free of classic manufacturing processes like molds and production lines. The promise is to break free of the tyranny of traditional manufacturing approaches, limitations and design, for small lots of designer, custom parts. Making the entire process work well enough for customers to rely upon it and have faith in the manufacturing quality and process is a huge aspect of the challenge. This is especially true for high-performance parts where the requirements on quality are very high. The other end of the problem is the opportunity to break free of traditional issues in design and open up the possibility of truly innovative approaches to optimality. Additional problems are associated with the quality and character of the material used in the design, since its use in the creation of the part is substantially different from that of traditionally manufactured parts’ materials. Many of these challenges will be partially attacked using modeling & simulation drawing upon cutting-edge computing platforms.
  4. Exascale computing. The push for more powerful computers for conducting societally important work is as misguided as it is a big deal. I’ve written so much recently about this that I find little need to say more. It is an epitome of the first item on my list, a wrongly managed, risk-intolerant solution to a real issue, which will end up doing more harm than good in the long run. It is truly the victory of first-level thinking over the deeper and more powerful second-level thinking we need. Perhaps I’m being a bit haughty in my contention that what I’ve laid out in my blog constitutes second-level thinking about high performance computing, but I stand by it, and by the summary that the first-level thinking governing our computing efforts today is hopelessly superficial. Really solving problems and winning at scientific computing requires a sea change toward applying the fruits of in-depth thinking about how to succeed at using computing as a means for societal good, including the conduct of science.
  5. Nontraditional computing paradigms. We stand at the brink of a deep change in computing one way or another. We are seeing the end of Moore’s law (actually it’s done, at all scales, already), which has powered computing into a central role societally, whether in business or science. The only way the power of computing will continue to grow is through a systematic change in the principles by which computers are built. There are two potential routes being explored, both rather questionable in their capability to deliver the sort of power necessary to succeed. The most commonly discussed route is quantum computing, which promises incredible (almost limitless) power for a very limited set of applications. It also features hardware that is difficult or impossible to manage, among other problems limiting its transition to reality. The second approach is neuromorphic, or brain-inspired, computing, which may be more tangible and possible than quantum, but a longer shot at being a truly game changing technology. The jury is out on both technology paths, and we may just have to live with the end of Moore’s law for a long time.
  6. Big data. The Internet brought computing to the masses, and mobile computing brought computing to everyone in every aspect of our lives. Along with this ubiquity of computing came a wealth of data on virtually every aspect of everyone’s lives. This data is enormous and wildly varied in its structure, teeming with possibility for uses of all stripes, good and bad. Big data is the route toward wealth beyond measure, and the embodiment of the Orwellian Big Brother we should all fear. Taming big data is the combination of computing, algorithms, statistics and business all rolled into one. It is one of the places where scientific computing is actually alive with holistic energy, driving innovation all the way from models of data (reality), to algorithms for taming the data, to hardware to handle the load. New sensors and measurement devices are only adding to the wealth as the Internet of things moves forward. In science, medicine and engineering new instruments and sensors are flooding the world with huge data sets that must be navigated, understood and utilized. The potential for discovery and progress is immense, as is the challenge of grappling with the magnitude of the problem.
  7. Reproducibility of results. The trust of science and expertise is seemingly at an all-time low. Part of this is caused by the information (and misinformation) deluge we live in. It feeds on and is fed by the lack of trust in expertise within society. As such there has been some substantial focus on being able to reproduce the results of research. Some fields of study are having veritable crises driven by the failures of studies to be reproducible. In other cases the stakes are high enough that the public is genuinely worried about the issue. A common such situation is a drug trial, which has massive stakes for anyone who might be treated with or need to be treated by a drug. Other areas of science such as computation have fallen under the same suspicion, but may have the capacity to provide greater substance and faith in the reproducibility of their work. Nonetheless, this is literally a devil-is-in-the-details area, and getting right all the details that contribute to a research finding is really hard. The less often spoken subtext to this discussion is the general societal lack of faith in science that is driving this issue. A more troubling thought regarding how replicable research actually is comes from considering how uncommon replication actually is. It is uncommon to see actual replication, and difficult to fund or prioritize such work. Seeing how commonly such replication fails under these circumstances only heightens the sense of the magnitude of this problem.
  8. Algorithmic breakthroughs. One way of accelerating the progress in computers and the work they do is to focus on innovations in algorithms. Instead of relying on computational hardware to increase our throughput, we rely on innovation in how we use those computers or implement our methods on them. Over time improvements in methods and algorithms have outpaced improvements in hardware. Recently this bit of wisdom has been lost to the sort of first-level thinking so common today. In big data we see needs for algorithm development overcoming the small-minded focus people rely upon. In scientific computing the benefits and potential are there for breakthroughs, but the vision and will to put effort into this is lacking. So I’m going to hedge toward the optimistic and hope that we see through the errors in our thinking and put faith in algorithms to unleash their power on our problems in the very near future!
  9. The upcoming robotic revolution (driverless cars). The fact is that robots are among us already, but their scope and presence is going to grow. Part of the key issue with robots is the lack of brainpower to really replace human decision-making in tasks. Computing power, and the ubiquity of the Internet in all its coupled glory, is making problems like this tractable. It would seem that driverless robot cars are solving this problem in one huge area of human activity. Multiple huge entities are working this problem and by all accounts making enormous progress. The standard for the robot cars would seem to be set very much higher than for humans, and the system is biased against this sort of risk. Nonetheless, it would seem we are very close to seeing driverless cars on a road near you in the not too distant future. If we can see the use of robot cars on our roads, with all the attendant complexity, risks and issues associated with driving, it is only a matter of time before robots begin to take their place in many other activities.
  1. Cyber-security and cyber-privacy The result of computing at such an enormous societal scale, particularly with mobile computing penetrating every aspect of our lives, is the twin security-privacy dilemma. On the one hand, we are potentially victimized by cyber-criminals as more and more commerce and finance takes place online, driving a demand for security. The government-police-military-intelligence apparatus also sees both the potential for incredible security issues and possible avenues through the virtual records being created. At the same time the ability to have privacy or be anonymous is shrinking away. People have the desire to not have every detail of their lives exposed to the authorities (employers, neighbors, parents, children, spouses,…), meaning that cyber-privacy will become a big issue too. This will lead to immense technical-legal-social problems and conflict over how to balance the needs-demands-desires for security and privacy. How we deal with these issues will shape our society in huge ways over the coming years.

The fantastic advances in the field of electronic communication constitute a greater danger to the privacy of the individual.

― Earl Warren

How to Win at Supercomputing

04 Monday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 3 Comments

The best dividends on the labor invested have invariably come from seeking more knowledge rather than more power.

— Wilbur Wright

Here is a hint; it’s not how we are approaching it today. The approach today is ultimately doomed to fail and may take a generation of progress with it. We need to emphasize the true differentiating factors and embrace the actual sources of progress. Computer hardware is certainly a part of the success, but by no means the dominant factor in true progress. As a result we are starving key aspects of scientific computing of the intellectual lifeblood needed for advancing the state of the art. Even if we “win” following our current trajectory, the end result will be a loss because of the opportunity cost incurred in pursuing the path we are on today. Supercomputing is a holistic activity embedded in a broader scientific enterprise. As such it needs to fully embrace the scientific method and structure its approach more effectively.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

The news of the Chinese success in solidifying their lead in supercomputer performance “shocked” the high-performance-computing World a couple of weeks ago. To make things even more troubling to the United States, the Chinese achievement was accomplished with homegrown hardware (a real testament to the USA’s export control law!). It comes as a blow to the American efforts to retake the lead in computing power. It wouldn’t matter if the USA, or anyone else for that matter, were doing things differently. Of course the subtext of the entire discussion around supercomputer speed is the supposition that raw computer power measures the broader capability in computing, which defines an important body of expertise for National economic and military security. A large part of winning in supercomputing hinges on the degree to which this supposition is patently false. As falsehoods go, this one is not ironclad; it is a matter of debate over lots of subtle details that I elaborated upon last week. The truth depends on how idiotic the discussion needs to be and one’s tolerance for subtle technical arguments. In today’s world arguments can only be simple, verging on moronic, and technical discussions are suspect as a matter of course.

Instead of concentrating just on finding good answers to questions, it’s more important to learn how to find good questions!

― Donald E. Knuth

If you read that post you might guess the answer of how we might win the quest for supercomputing supremacy. In a sense we need to do a number of things better than today. First, we need to stop measuring computer power with meaningless and misleading benchmarks. These do nothing but damage the entire field by markedly skewing the overall articulation of both the successes and the challenges of building useful computers. Secondly, we need to invest our resources in the most effective areas for success: modeling, methods and algorithms, all of which are far greater sources of innovation and true performance for the accomplishment of modeling & simulation. The last thing is to change the focus of supercomputing to modeling & simulation, because that is where the societal value of computing is delivered. If these three things were effectively executed upon, victory would be assured to whoever made the choices. The option of taking more effective action is there for the taking.

Discovery consists of looking at the same thing as everyone else and thinking something different.

― Albert Szent-Györgyi

The first place to look for effort that might dramatically tilt the fortunes of supercomputing is modeling. Our models of the World are all wrong to some degree; they are all based on various limiting assumptions, and may be improved. None of these limitations can be ameliorated by supercomputing power, accuracy of discretization, or algorithmic efficiency. Modeling limitations are utterly impervious to anything but modeling improvement. The subtext to the entire discussion of supercomputing power is the supposition that our models today are completely adequate and only in need of faster computers to fully explain reality. This is an utterly specious point of view that basically offends the foundational principles of science itself. Modeling is the key to understanding and irreplaceable in its power and scope to transform our capability.

And a step backward, after making a wrong turn, is a step in the right direction.

― Kurt Vonnegut

We might take a single example to illustrate the issues associated with modeling: gradient diffusion closures for turbulence. The diffusive closure of the fluid equations for the effects of turbulence is ubiquitous, useful, and a dead end without evolution. It is truly a marvel of science going back to Prandtl’s mixing length theory. Virtually all the modeling of fluids done with supercomputing relies on its fundamental assumptions and intrinsic limitations. The only place its reach does not extend is direct numerical simulation, where the flows are computed without the aid of modeling, i.e., a priori (which for the purposes here I will take as a given, although it actually needs a lot of conversation itself). All of this said, the ability of direct numerical simulation to answer our scientific and technical questions is limited because turbulence is such a vigorous and difficult multiscale problem that even an exascale computer cannot slay it.
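For concreteness, the gradient diffusion closure with Prandtl’s mixing length can be written in its simplest textbook form for a thin shear flow (this is the generic model, not any particular code’s implementation):

-\overline{u'v'} = \nu_t \frac{\partial \bar{u}}{\partial y}, \qquad \nu_t = \ell_m^2 \left| \frac{\partial \bar{u}}{\partial y} \right|

The unresolved turbulent stress is modeled as diffusion of the mean flow, with an eddy viscosity \nu_t built from a mixing length \ell_m and the local mean shear; everything about the model’s character follows from that diffusive form.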

So let’s return to what we need to do to advance the serious business of turbulence modeling. In a broad sense one of the biggest limitations of diffusion as a subgrid closure is its inability to describe behavior that is not diffusive. While turbulence is a decisively dissipative phenomenon, it is not always and only dissipative locally. The diffusive subgrid closure makes this assumption and hence carries deep limitations. In key areas of a flow field the proper subgrid model is actually non-dissipative or even anti-dissipative. The problem is that diffusion is a very stable and simple way to model phenomena, which in many ways exaggerates its success. We need to develop non-diffusive models that extend our capacity to model flows not fully or well described by diffusive closure approaches.
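The limitation can be stated compactly. With an eddy-viscosity closure \tau_{ij} = -2 \nu_t \bar{S}_{ij}, the subgrid energy transfer is sign-definite whenever \nu_t \ge 0 (again a standard textbook relation, not a new model):

\varepsilon_{sgs} = -\tau_{ij} \bar{S}_{ij} = 2 \nu_t \bar{S}_{ij} \bar{S}_{ij} \ge 0

Backscatter, the local transfer of energy from unresolved to resolved scales, corresponds to \varepsilon_{sgs} < 0, which a non-negative eddy viscosity simply cannot represent; that is exactly the behavior a non-diffusive closure has to supply.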

Once a model is conceived of in theory we need to solve it. If the improved model cannot yield solutions, its utility is limited. Methods for computing solutions to models beyond the capability of analytical tools were the transformative aspect of modeling & simulation. Before this, many models were only solvable in very limited cases through applying a number of even more limiting assumptions and simplifications. Beyond just solving the model, we need to solve it correctly, accurately and efficiently. This is where methods come in. Some models are nigh on impossible to solve, or entail connections and terms that evade tractability. Thus coming up with a method to solve the model is a necessary element in the success of computing. In the early years of scientific computing many methods came into use that tamed models into ease of use. Today’s work on methods has slowed to a crawl, and in a sense our methods development research is a victim of its own success.

Arthur C. Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic.

An example of this success is the nonlinear stabilization methods I’ve written about recently. These methods are the lifeblood of the success computational fluid dynamics (CFD) codes have had. Without their invention the current turnkey utility of CFD codes would be unthinkable. Before their development CFD codes were far more art and far less science than today. Unfortunately, we have lost much of the appreciation for the power and scope of these methods. We have little understanding of what came before them and the full breadth of their magical powers. Before these methods came to the fore, one faced the daunting task of choosing between an overly diffusive stable method (i.e., donor cell–upwind differencing) and a more accurate, but unphysically oscillatory method. These methods allowed one to have both, adaptively using whatever was necessary under the locally determined circumstances, but they can do much more. While their power to allow efficient solutions was absolutely immense, these methods actually opened doors to physically reasonable solutions for a host of problems. One could have both accuracy and physical admissibility in the same calculation.
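As a minimal sketch of the idea (a generic minmod-limited scheme for linear advection, not the particular methods used in any production code; the grid size, wave speed, and square-wave test below are arbitrary choices for illustration), the following Python fragment shows how a limiter blends the diffusive donor-cell flux with a higher-order correction only where the solution is locally smooth:

import numpy as np

def minmod(a, b):
    # Zero at local extrema (fall back to donor cell), smallest slope otherwise.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_rhs(u, a, dx):
    # Limited slopes from neighboring differences on a periodic grid.
    dl = u - np.roll(u, 1)       # u_i - u_{i-1}
    dr = np.roll(u, -1) - u      # u_{i+1} - u_i
    s = minmod(dl, dr)
    uL = u + 0.5 * s             # left state at face i+1/2
    flux = a * uL                # upwind flux, assuming a > 0
    return -(flux - np.roll(flux, 1)) / dx

def advect(u0, a=1.0, dx=0.01, cfl=0.4, t_end=0.25):
    # SSP-RK2 time stepping preserves the limiter's non-oscillatory behavior.
    u = u0.copy()
    dt = cfl * dx / abs(a)
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        u1 = u + step * muscl_rhs(u, a, dx)
        u = 0.5 * (u + u1 + step * muscl_rhs(u1, a, dx))
        t += step
    return u

if __name__ == "__main__":
    n = 200
    x = (np.arange(n) + 0.5) / n
    u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square wave
    u = advect(u0, a=1.0, dx=1.0 / n)
    print("min/max after advection:", u.min(), u.max())  # stays within [0, 1]

Setting the slope to zero everywhere recovers pure donor-cell upwinding; dropping the limiter recovers an unlimited second-order scheme that oscillates at the discontinuity. The limiter makes that choice locally and automatically, which is exactly the point.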

This is where the tale turns back toward modeling. These methods actually provide some modeling capability for “free”. As such, the modeling under the simplest circumstances is completely equivalent to Prandtl’s mixing-length approach, but with the added benefit of computability. More modern stabilized differencing actually provides modeling that goes beyond the simple diffusive closure. Because of the robust stability properties of the method one can compute solutions with backscatter stably. This stability is granted by the numerical approach, but provides the ability to solve the non-dissipative model with the asymptotic stability needed for physically admissible modeling. If one had devised a model with the right physical effect of local backscatter, these methods would provide the stable implementation. In this way these methods are magical and make the seemingly impossible, possible.
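The sense in which this modeling comes for “free” can be seen through a modified equation analysis. For semi-discrete first-order upwinding of linear advection, the leading truncation error is itself a diffusion term (a standard textbook result; the exact coefficient changes with the time discretization):

u_t + a u_x = \frac{|a| \Delta x}{2} u_{xx} + O(\Delta x^2)

The implicit viscosity scaling like |a| \Delta x / 2 plays the same role as a mixing-length eddy viscosity with the length tied to the grid spacing; limited, nonlinearly stabilized schemes refine this built-in model rather than discarding it.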

This naturally takes us to the next activity in the chain of activities that add value to computing: algorithm development. This is the development of new algorithms with greater efficiency, as distinct from the focus of algorithm work today, which is simply implementing old algorithms on the new computers and comes down to dealing with the increasingly enormous amount of parallelism demanded. The sad thing is that no implementation can overcome the power of algorithmic scaling, and this power is something we are systematically denying ourselves. Indeed we have lost massive true gains in computational performance because of the failure to invest in this area, and the inability to recognize the opportunity cost of a focus on implementing the old.

A useful place to look in examining the sort of gains coming from algorithms is numerical linear algebra. The state of the art here comes from multigrid, and it came to the fore over 30 years ago. Since then we have had no breakthroughs, when before a genuine breakthrough occurred about every decade. It is no coincidence that 30 years ago is when parallel computing began its eventual takeover of high performance computing. Making multigrid or virtually any other “real” algorithm work at massive parallel scale is very difficult, incredibly challenging work. This difficulty has swallowed up all the effort and energy in the system, effectively starving out the invention of new algorithms. What is the cost? We might understand the potential cost of these choices by looking back at what previous breakthroughs have gained.

We can look at the classical example of solving Poisson’s equation (\nabla^2 u = f) on the unit square or cube to instruct us on how incredibly massive the algorithmic gains might be. The crossover point between a relaxation method (Gauss-Seidel, GS, or Jacobi) and an incomplete Cholesky conjugate gradient (ICCG) method is at approximately 100 unknowns. For a multigrid algorithm the crossover point in cost occurs at around 1000 unknowns. Problems of 100 or 1000 unknowns can now be accomplished on something far less capable than a cell phone. For the problems associated with supercomputers, the differences in the cost of these algorithms are utterly breathtaking to behold.

Consider a relatively small problem today of solving Poisson’s equation on a unit cube with 1000 unknowns in each direction (10^9 unknowns). If we take the cost of multigrid as “one”, GS now takes ten million times more effort, and ICCG almost 1000 times the effort. Scale the problem up to something we might dream of doing on an exascale computer, a cube of 10,000 on a side with a trillion unknowns, and we easily see the tyranny of scaling and the opportunity of algorithmic breakthroughs we are denying ourselves. For this larger problem, GS now costs ten billion times the effort of multigrid, and ICCG is now 30,000 times the expense. Imagine the power of being able to solve something more efficiently than multigrid! Moreover, multigrid can withstand incredible levels of inefficiency in its implementation and still win compared to the older algorithms. The truth is that a parallel computing implementation drives the constant in front of the scaling up to a much larger value than on a serial computer, so these gains are partly offset by the lousy hardware we have to work with.
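The flavor of this arithmetic is easy to reproduce from textbook asymptotics. The sketch below assumes iteration counts that grow roughly like n^2 for Gauss-Seidel, n for ICCG, and O(1) for multigrid on an n x n x n grid, with O(N) work per iteration; these exponents are assumptions for illustration, and constants and logarithmic factors are deliberately ignored, so the output is order-of-magnitude only and somewhat smaller than the figures quoted above, which include such factors.

# Rough scaling calculator for 3D Poisson solver costs, relative to multigrid.
# Assumed textbook asymptotics on an n x n x n grid with N = n**3 unknowns:
#   Gauss-Seidel / Jacobi : ~ n**2 iterations of O(N) work -> O(N**(5/3)) total
#   ICCG                  : ~ n    iterations of O(N) work -> O(N**(4/3)) total
#   Multigrid             : O(1)   V-cycles   of O(N) work -> O(N)        total
# Constants and log factors are ignored; treat the ratios as order-of-magnitude.

def relative_cost(n):
    N = float(n) ** 3
    cost_mg = N                     # multigrid cost taken as the unit of effort
    cost_iccg = N ** (4.0 / 3.0)
    cost_gs = N ** (5.0 / 3.0)
    return cost_gs / cost_mg, cost_iccg / cost_mg

for n in (10, 100, 1_000, 10_000):
    gs_ratio, iccg_ratio = relative_cost(n)
    print(f"n = {n:>6}: GS ~ {gs_ratio:.1e}x multigrid, ICCG ~ {iccg_ratio:.1e}x multigrid")

Even with generous allowances for implementation constants, the gap grows without bound as the problem grows; that is the scaling no amount of hardware can buy back.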

Here is the punch line to this discussion. Algorithmic power is massive, almost to a degree that defies belief. Yet algorithmic power is vanishingly small compared to methods, which in turn is dwarfed by modeling. Modeling connects the whole simulation endeavor to the scientific method and is irreplaceable. Methods make these models solvable and open the doors of capability. All of these activities are receiving little tangible priority or support in the current high performance computing push, resulting in the loss of incredible opportunities for societal benefit. Moreover, we have placed our faith in the false hope that mere computing power is transformative.

Never underestimate the power of thought; it is the greatest path to discovery.

― Idowu Koyenikan

Both models and methods transcend the sort of gains computing hardware produces and can never be replaced by it. Algorithmic advances can be translated to the language of efficiency via scaling arguments, but provide gains that go far beyond hardware’s capacity for improvement. The problem is that all of these rely upon faith in humanity’s ability to innovate, think and produce things that had previously been beyond the imagination. This is an inherently risky endeavor that is prone to many failures or false hopes. This is something today’s World seems to lack tolerance for, and as such the serendipity and marvel of discovery is sacrificed at the altar of fear.

We have to continually be jumping off cliffs and developing our wings on the way down.

― Kurt Vonnegut

The case for changing the focus of our current approach is airtight and completely defensible. Despite the facts, the science and the benefits of following rational thinking, there is precious little chance of seeing change. The global effort in supercomputing is utterly and completely devoted to the foolish hardware path. It wins by a combination of brutal simplicity and eagerness to push money toward industry. So what we have is basically a cash-driven funeral pyre for Moore’s law. The risk-taking, innovation-driven approach necessary for success is seemingly beyond the capability of our society to execute today. The reasons why are hard to completely grasp; we have seemingly lost our nerve and taste for subtlety. Much of the case for doing the right things, and those things that lead to success, is bound to a change of mindset. Today the power, if not the value, of computing is measured in the superficial form of hardware. The reality is that the power is bound to our ability to model, simulate and ultimately understand or harness reality. Instead we blindly put our faith in computing hardware instead of the intellectual strength of humanity.

The discussion gets to a number of misconceptions and inconsistencies that plague the field of supercomputing. The biggest issue is the disconnect between the needs of science and engineering and the definition of success in supercomputing (i.e., what constitutes a win). Winning in supercomputing programs is tied to being able to put an (American) machine at the top of the list. Increasingly, success at having the top computer on the increasingly useless Top500 list is completely at odds with acquiring machines useful for conducting science. A great deal of the uselessness of the list is the benchmark used to define its rankings, LINPACK, which is less relevant to applications with every passing day. It has come to the point where it is hurting progress in a very real way.

The science and engineering needs are varied, all the way from QCD, MD and DNS to climate modeling and integrated weapons calculations. The pure science needs of QCD, MD and DNS are better met by the machines being built today, but even in this idealized circumstance the machines we buy to top the computing list are fairly suboptimal for these pure science applications. The degree of suboptimality for running our big integrated calculations has become absolutely massive over time, and the gap only grows larger with each passing year. Like most things, inattention to this condition is only allowing it to become worse. The machines being designed for winning the supercomputing contest are actual monstrosities that are genuinely unusable for scientific computing. Worse yet, the execution of the exascale program is acting to make this worse in every way, not better.

Adding to the damaging execution of the supercomputing program is the systematic hollowing out of the science and engineering content from our programs. We are systematically diminishing our efforts in experimentation, theory, modeling, and mathematics despite their greater importance and impact on the entire enterprise. The end result will be a lost generation of computational scientists who are left using computers completely ill-suited to the conduct of science. If National security is a concern, the damage we are doing is real and vast in scope.

We need supercomputing to be a fully complementary part of the scientific enterprise, used and relied upon only as appropriate, with limits rationally chosen based on evidence. Instead we have made supercomputing a prop and a marketing stunt. There is a certain political correctness about how it contributes to our national security, and our increasingly compliant Labs offer no resistance to the misuse of taxpayer money. The mantra is “don’t rock the boat”; we are getting money to do this, and whether or not it’s sensible is immaterial. The current programs are ineffective and poorly executed and do a poor job of providing the sorts of capability claimed. It is yet another example of, and evidence for, the culture of bullshit and pseudo-science that pervades our modern condition.

The biggest issue is the death of Moore’s law and our impending failure to produce the results promised. Rather than reform our programs to achieve real benefits for science and national security, we will see a catastrophic failure. This will be viewed through the usual lens of scandal. It is totally foreseeable and predictable. It would be advisable to fix this before disaster, but my guess is we don’t have the intellect, foresight, bravery or leadership to pull it off. The end is in sight and it won’t be pretty. Yet there is a different path that would be genuinely glorious and successful. Does anyone have the ability to turn away from the disastrous path and consciously choose success?

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

Some Background reading on the Top500 list and benchmarks that define it:

https://en.wikipedia.org/wiki/TOP500

https://en.wikipedia.org/wiki/LINPACK_benchmarks

https://en.wikipedia.org/wiki/HPCG_benchmark

A sample of prior posts on topics related to this one:

https://williamjrider.wordpress.com/2016/06/27/we-have-already-lost-to-the-chinese-in-supercomputing-good-thing-it-doesnt-matter/

https://williamjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-modeling-simulation-is-what-is-important/

https://williamjrider.wordpress.com/2016/01/15/could-the-demise-of-moores-law-be-a-blessing-in-disguise/

https://williamjrider.wordpress.com/2016/01/01/are-we-really-modernizing-our-codes/

https://williamjrider.wordpress.com/2015/11/19/supercomputing-is-defined-by-big-money-chasing-small-ideas-draft/

https://williamjrider.wordpress.com/2015/10/30/preserve-the-code-base-is-an-awful-reason-for-anything/

https://williamjrider.wordpress.com/2015/10/16/whats-the-point-of-all-this-stuff/

https://williamjrider.wordpress.com/2015/07/24/its-really-important-to-have-the-fastest-computer/

https://williamjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/

https://williamjrider.wordpress.com/2015/06/05/the-best-computer/

https://williamjrider.wordpress.com/2015/05/29/focusing-on-the-right-scaling-is-essential/

https://williamjrider.wordpress.com/2015/04/10/the-profound-costs-of-end-of-life-care-for-moores-law/

https://williamjrider.wordpress.com/2015/03/06/science-requires-that-modeling-be-challenged/

https://williamjrider.wordpress.com/2015/02/14/not-all-algorithm-research-is-created-equal/

https://williamjrider.wordpress.com/2015/02/12/why-is-scientific-computing-still-in-the-mainframe-era/

https://williamjrider.wordpress.com/2015/02/06/no-amount-of-genius-can-overcome-a-preoccupation-with-detail/

https://williamjrider.wordpress.com/2015/02/02/why-havent-models-of-reality-changed-more/

https://williamjrider.wordpress.com/2015/01/05/what-is-the-essence-of-computational-science/

https://williamjrider.wordpress.com/2015/01/01/2015-time-for-a-new-era-in-scientific-computing/
