The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Author Archives: Bill Rider

The Real Problem with Classified E-mail

09 Friday Sep 2016

Posted by Bill Rider in Uncategorized


The news is full of stories and outrage at Hillary Clinton’s e-mail scandal. I don’t feel that anyone has remotely the right perspective on how this happened, and why it makes perfect sense in the current system. It epitomizes a system that is prone to complete breakdown because of the deep neglect of information systems, both unclassified and classified, within the federal system. We just don’t pay IT professionals enough to get good service. The issue also gets to the heart of the overall treatment of classified information by the United States, which is completely out of control. The tendency to classify things is running amok far beyond anything that is in the actual best interests of society. To compound things further, it highlights the utter and complete disparity in how laws and rules do not apply to the rich and powerful. All of this explains what happened, and why; yet it doesn’t make what she did right or justified. Instead it points out why this sort of thing is both inevitable and much more widespread (i.e., John Deutch, Condoleezza Rice, Colin Powell, and what is surely a much longer list of violations of the same thing Clinton did).

Management cares about only one thing. Paperwork. They will forgive almost anything else – cost overruns, gross incompetence, criminal indictments – as long as the paperwork’s filled out properly. And in on time.

― Connie Willis

Last week I had to take some new training at work. It was utter torture. The DoE managed to find the worst person possible to train me, and then managed to drain him of all remaining personality, seemingly with the help of sedatives. I already take a massive amount of training, most of which is utterly and completely useless. The training is largely compliance based, and generically a waste of time. Still, by these already appalling standards, the new training was horrible. It is the Hillary-induced E-mail classification training where I now have the authority to mark my classified E-mails as an “E-mail derivative classifier”. We are constantly taking reactive action via training that only undermines the viability and productivity of my workplace. Like most of my training, this current training is completely useless, and only serves the “cover your ass” purpose that most training serves. Taken as a whole, our environment is corrosive and undermines any and all motivation to give a single fuck about work.

Let’s get to the point: why was Hillary compelled to use a private e-mail system in the first place? Why did classified information appear there? Why do people in positions of power feel they don’t have to follow the rules?

Most people watching the news have little or no idea about the classified computing or e-mail systems. So let’s explain a few things about the classified systems people work on that will get to the point of why all of this is so fucking stupid. For starters, the classified computing systems are absolutely awful to use. Anyone trying to get real work done on these systems is confronted with the utter horror they are to use. No one interested in productively doing work would tolerate them. In many government places the unclassified computing systems are only marginally better. The biggest reasons are a lack of appropriately skilled IT professionals and a lack of investment in infrastructure. Fundamentally we don’t pay the IT professionals enough to get first-rate service, and anyone who is good enough to get a better private sector job does. Moreover these professionals work on old hardware with software restrictions that serve outlandish and obscene security regulations, which in many cases are actually counter-productive. So, if Hillary were interested in getting anything done she would be quite compelled to leave the federal network for greener, more productive pastures.

The more you leave out, the more you highlight what you leave in.

― Henry Green

While one might think that the government would give classified work the highest priority, the environment for working there is the worst. Keep in mind that it is worse than the already shitty and atrocious unclassified environment. The seeming purpose of everything is not my or anyone’s actual productivity, but rather the protection of information, or at least the appearance of protection. Our approach to everything is administrative compliance with directives. Actual performance on anything is completely secondary to the appearance of performance. The result of this pathetic approach to providing the taxpayer with benefit for money expended is a dysfunctional system that provides little in return. It is primed for mistakes and outright systematic failures. Nothing stresses the system more than a high-ranking person hell-bent on doing their job. The sort of people who ascend to high positions like Hillary Clinton find the sort of compliance demanded by the system awful (because it is), and have the power to ignore it.

Of course I’ve seen this abuse of power live and in the flesh. Take the former Los Alamos Lab Director, Admiral Pete Nanos, who famously shut the Lab down and denounced the staff as “Butthead Cowboys!” He blurted out classified information in an unclassified meeting in front of hundreds if not thousands of people. If he had taken his training and been compliant, he would have known better. Instead of being issued a security infraction like any of the butthead cowboys in attendance would have gotten, he got a pass. The powers that be simply declassified the material and let him slide by. Why? Power comes with privileges. When you’re in a position of power you find the rules are different. This is a maxim repeated over and over in our World. Some of this looks like white privilege, or rich white privilege, where you can get away with smoking pot, or raping unconscious girls, with no penalty or lightened penalties. If you’re not white or not rich you pay a much stiffer penalty including prison time.

I learned the lesson again at Los Alamos in another episode that will remain slightly vague in this post. I went to a meeting that honored a Lab scientist’s career. During the course of the meeting another Lab director read an account of this person’s work noting their monumental accomplishments and contributions to national security. All of the account was good, true and correct, except that it was classified in its content. I took the written text to the classification office at the Lab and noted its issues. They agreed that it was indeed classified. Because the person who wrote the account (a very high-ranking DoE official) and the person who read it were so high ranking, they would not touch this with the proverbial ten-foot pole. They knew a violation had occurred, but their experience also told them that it was foolish to pursue it. This pursuit would only hurt those who pointed out the problem, and those committing the violations were immune.

Let me ask you, dear reader, how do you think someone would treat the Secretary of State of the United States? How much more untouchable would they be? It would certainly be wrong in a perfect world, but we live in a very imperfect one.

A secret’s worth depends on the people from whom it must be kept.

― Carlos Ruiz Zafón


The core philosophy in all of this is that we have lots of secrets to protect because we are the biggest and baddest country on Earth. That was certainly true at one time, but every day I wonder whether we still are, and gain more assurance that we are not. So we have created a system that is predicated on our lead in science and technology, but completely and utterly undermines our ability to keep that lead. We have a system that is completely devoted to undermining our productivity at every turn in the service of protecting information that loses its real value every day. To put it differently, our current approach and policy is utter and complete fucking madness!

I also want to be clear that classification of a lot of material is absolutely necessary. It is essential to the safety and security of the Nation and the World. The cavalier and abusive way that classification is applied today runs utterly counter to this. By classifying everything in sight, we reduce the value and importance of the things that must be classified. By using classification of documents to cover everything with a blanket, the real need and purpose of classification is obscured and harmed deeply. All of this said, I have not discussed the most widely abused version of classification, “Official Use Only,” which is applied in an almost entirely unregulated manner. It is abused widely and casually. Among the areas regulated by this awful policy is Export Controlled Information, which is easily one of the worst laws I’ve ever come in contact with. It is, simply put, stupid and incompetent. It probably does much more harm than good to the national security of the nation.

Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.

― John Steinbeck

[Photo caption: Republican presidential candidate, businessman Donald Trump stands during the Fox Business Network Republican presidential debate at the North Charleston Coliseum, Thursday, Jan. 14, 2016, in North Charleston, S.C. (AP Photo/Chuck Burton)]

Let’s be clear about the Country and World we live in. The rich and powerful are corrupt. The rich and powerful are governed by entirely different rules than everyone else. Mistakes, violations of the law, and morality itself are fundamentally different for the rich and powerful than for the common man. So to be clear, Hillary Clinton committed abuses of power. Donald Trump has committed abuses of power too. Barack Obama has as well. Either Hillary or Trump will continue to do so if elected President. Until the basic attitudes toward power and money change we should expect this to continue. The same set of abuses of power happens across the spectrum of society in every organization and business. The larger the organization or business, the worse the abuse of power can be expected to be. As long as it is tolerated it can be expected to continue.

A man who has never gone to school may steal a freight car; but if he has a university education, he may steal the whole railroad.

― Theodore Roosevelt

Our societal approach to classification of documents is simply a tool of this sort of rampant abuse of power. Any sense of a viable “whistleblower” protection is complete and utter bullshit. People who have highlighted huge systematic abuses of power involving murder and vast violations of constitutional law are thrown to the proverbial wolves. There is no protection; it is viewed as treason and these people are treated as harshly as possible (Snowden, Assange, and Manning come to mind). As I’ve noted above, people in positions of authority can violate the law with utter impunity. At the same time classification is completely out of control. More and more is being classified with less and less control. Such classification often only serves to hide information and serve the needs of the status quo power structure.

In the end, Hillary had really good reasons to do what she did, and to believe that she had the right to do so. Everything in the system is going to provide her with the evidence that the rules for everyone else do not apply to her. Hillary wasn’t correct, but we have created an incompetent, unproductive computing system that virtually compelled her to choose the path she took. We have created a culture where the most powerful people do not have to follow the rules that bind the regular guy. The system has been structured by fear and lack of trust without any regard for productivity. If we want to remain the most powerful country, we need to change our priorities on productivity, secrecy and the corruption of power.

The whole issue of runaway classification, classified e-mails and our inability to produce a productive work environment in National Security sits at the nexus of incompetence, lack of trust, and corruption, resulting in a systematic devotion to societal mediocrity.

Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.

– Edward Snowden

 

How to strive for excellence in modeling & simulation

02 Friday Sep 2016

Posted by Bill Rider in Uncategorized


Where the frontier of science once was is now the centre.

― Georg Christoph Lichtenberg

I’ll just say up front that my contention is that there is precious little excellence to be found today in many fields. Modeling & simulation is no different. I will also contend that excellence is relatively easy to obtain, or at the very least a key change in mindset will move us in that direction. This change in mindset is relatively small, but essential. It deals with how satisfied we are with the current state, and whether enough restlessness exists to allow progress to be sought. Too often there seems to be an innate satisfaction with too much of the “ecosystem” for modeling & simulation, and not enough agitation for progress. We should continually seek the opportunity and need for progress across the full spectrum of work. Our obsession with planning and micromanagement of research ends up choking the success from everything it touches by short-circuiting the entire natural process of progress, discovery and serendipity.

In my view the desire for continual progress is the essence of excellence. When I see the broad field of modeling & simulation, the need for progress seems pervasive and deep. When I hear our leaders talk, such needs are muted and progress seems to depend on only a few simple areas of focus. Such a focus is always warranted if there is an opportunity to be taken advantage of. Instead we seem to be in an age where the technological opportunity being sought, computer hardware, is arrayed against progress. In the process of trying to force progress where it is less available, the true engines of progress are being shut down. This represents mismanagement of epic proportions and needs to be met with calls for sanity and intelligence in our future.

If you find that you’re spending almost all your time on theory, start turning some attention to practical things; it will improve your theories. If you find that you’re spending almost all your time on practice, start turning some attention to theoretical things; it will improve your practice.

–Donald Knuth

So how do we get better at modeling & simulation? The first thing is mindset: do we go into our work thinking, “my goal is to make this thing good”? Or “state of the art”? Or simply “how can I make it better”? The final question is the “right” one; you can always make it better, and in the process the other two questions will get answered. Too often today we never get into the fundamental mode of simply working toward continual improvement as our default mode of operation. The manner of energizing our work to do this is frighteningly simple to pursue, but rarely in evidence today.

The way toward excellence, innovation and improvement is to figure out how to break what you have. Always push your code to its breaking point; always know what reasonable (or even unreasonable) problems you can’t successfully solve. Lack of success can be defined in multiple ways including complete failure of a code, lack of convergence, lack of quality, or lack of accuracy. Generally people test their code where it works, and if they are good code developers they continue to test the code all the time to make sure it still works. If you want to get better you push at the places where the code doesn’t work, or doesn’t work well. You make the problems where it didn’t work part of the ones that do work. This is the simple and straightforward way to progress, and it is stunning how few efforts follow this simple, obvious path. It is the golden path that we deny ourselves today.

The reasons for not engaging in this golden path are simple and completely and utterly pathological. The golden path is not easy to manage. This golden path is epitomized by out-of-the-box thinking. Today we prize in-the-box thinking because it is suitable for management and strict accountability. This strict accountability is the consequence of societal structures that lack trust and implicitly fear independent thought. Out-of-the-box thinking is unstructured and innovative, eschewing management control. As such we introduce systems that push everything inside the proverbial box. Establishing results that are predictable has become tantamount to being trustworthy. Out-of-the-box thinking is dangerous and the subject of fear because it cannot be predicted. This is the core of our current lack of innovation, and the malaise in modeling & simulation.

The element of thinking that is poisoning how things currently progress is a sense of satisfaction with too much of what has driven the success of modeling & simulation to date. We are too satisfied that the state of the art is fine and good enough. We lack a general sense that improvements and progress are always possible. Instead of a continual striving to improve, the approach of focused and planned breakthroughs has beset the field. We have a management approach that provides narrowly oriented improvements while ignoring important swaths of the technical basis for modeling & simulation excellence. The result of this ignorance is an increasingly stagnant status quo that embraces “good enough” implicitly through a lack of support for “better”.

There seems to be a belief that the current brand of goal-oriented micromanagement is good for technical achievement. Nothing could be further from the truth; the current goal based management philosophy is completely counter-productive and antithetical to good science and achievement. It leads to systematic goal reduction and lack of risk-taking on the part of organizations. A big part of this is the impact of the management style on the intrinsic motivations of the scientists. Scientists tend to be quite easily and intrinsically motivated by curiosity and achievement while the management system is focused on extrinsic motivation.

The test of a man isn’t what you think he’ll do. It’s what he actually does.

― Frank Herbert

We end up undermining all of the natural and simple aspects that lead to productive, innovative excellence in work, replacing these factors with a system that undermines what comes naturally. All of this has a single root, lack of trust and faith in the people doing the work. Without rebuilding the fundamental trust in providing intrinsically motivated and talented people a productive environment, I fear nothing can be done to improve our outcomes and grasp the excellence that is there for the taking. People would gravitate toward excellence naturally if the management would simply trust them and work to resonate with people’s natural inclinations.

Modeling & simulation rose to utility in support of real things. It owes much of its prominence to the support of national defense during the cold war. Everything from fighter planes to nuclear weapons to bullets and bombs utilized modeling & simulation to strive toward the best possible weapon. Similarly modeling & simulation moved into the world of manufacturing, aiding in the design and analysis of cars, planes and consumer products across the spectrum of the economy. The problem is that we have lost sight of the necessity of these real world products as the engine of improvement in modeling & simulation. Instead we have allowed computer hardware to become an end unto itself rather than simply a tool. Even in computing, hardware has little centrality to the field. In computing today, the “app” is king and holds the keys to the market; hardware is simply a necessary detail.

To address the proverbial “elephant in the room,” the national exascale program is neither a good goal, nor bold in any way. It is the actual antithesis of what we need for excellence. The entire program will only power the continued decline in achievement in the field. It is a big project that is being managed the same way bridges are built. Nothing of any excellence will come of it. It is not inspirational or aspirational either. It is stale. It is following the same path that we have been on for the past 20 years, improvement in modeling & simulation by hardware. We have tremendous places we might harness modeling & simulation to help produce and even enable great outcomes. None of these greater societal goods is in the frame with exascale. It is a program lacking a soul.

The question always comes down to this: what am I suggesting be done instead? We need to couch our overall efforts in modeling & simulation in supporting real world objectives. Something like additive manufacturing comes to mind as a modern example that would serve us far better than faster computers. We need to define a default attitude that progress is always possible and always something to be sought. Unfortunately, this more sensible and productive approach is politically untenable today. The real problem isn’t intellectual or bound in thoughtful dialog, but rather bound to a deep lack of faith and trust in science and scientists. As a direct result we have poorly thought through programs focused on marketing and micromanagement. Progress be damned.

There is a very real danger present when we suppress our feelings to act on inspiration in exchange for the “safety” of the status quo.

We risk sacrificing the opportunity to live a more fulfilling and purpose driven life. We risk sacrificing the opportunity to make a difference in the lives of others. We risk sacrificing the beautiful blessing of finding a greater sense of meaning in our own lives.

In short, we run the very real risk of living a life of regret.

― Richie Norton

 

Progress is incremental; then it isn’t

22 Monday Aug 2016

Posted by Bill Rider in Uncategorized


Taking a new step, uttering a new word, is what people fear most.

― Fyodor Dostoyevsky

The title is a bit misleading so that it could be concise. A more precise one would be “Progress is mostly incremental; then progress can be (often serendipitously) massive.” Without accepting incremental progress as the usual, typical outcome, the massive leap forward is impossible. If incremental progress is not sought as the natural outcome of working with excellence, progress dies completely. The gist of my argument is that attitude and orientation are the key to making things better. Innovation and improvement are the result of having the right attitude and orientation rather than having a plan for them. You cannot schedule breakthroughs, but you can create an environment and work with an attitude that makes them possible, if not likely. The maddening thing about breakthroughs is their seemingly random nature: you cannot plan for them, they just happen, and most of the time they don’t.

For me, the most important aspect of the work environment is the orientation toward excellence and progress. Is work focused on being “the best” or the best we can be? Are we trying to produce “state of the art” results, or are we trying to push the state of the art further? What is the attitude and approach to critique and peer review? What is the attitude toward learning, and adaptively seeking new connections between ideas? How open is the work to accepting, even embracing, serendipitous results? Is the work oriented toward building deep sustainable careers where “world class” expertise is a goal and resources are extended to achieve this end?

Increasingly when I honestly confront all these questions, the answers are troubling. There seems to be the attitude that all of this can be managed, but control of progress is largely an illusion. Usually the answers are significantly oriented away from those that would signify these values. Too often the answers are close to the complete opposite of the “right” ones. What we see is a broad aegis of accountability used to bludgeon the children of progress to death in their proverbial cribs. If accountability isn’t enough to kill progress, compliance is wheeled out as progress’ murder weapon. Used in combination we see advances slow to a crawl, and expertise fail to form where talent and potential were vast. The tragedy of our current system is lost futures: first among the humans whose potential greatness is squandered, and second in the progress and immense knowledge they would have created. Ultimately all of this damage is heaped upon the future in the name of a safety and security that feeds upon pervasive and malignant fear. We are too afraid as a culture to allow people the freedoms needed to be great and do great things.

So much of modern management seems to think that innovation is something to be managed for and that everything can be planned. Like most things where you just try too damn hard, this management approach has exactly the opposite effect. We are unintentionally, but actively, destroying the environment that allows progress, innovation and breakthroughs to happen. The fastidious planning does the same thing. It is a different thing than having a broad goal and charter that pushes toward a better tomorrow. Today we are expected to plan our research like we are building a goddamn bridge! It is not even remotely the same! The result is the opposite and we are getting less for every research dollar than ever before.

Without deviation from the norm, progress is not possible.

― Frank Zappa

In a lot of respects getting to an improved state is really quite simple. Two simple changes in how we plan and how we view success at work can make an enormous difference. First, we need to always strive to improve and get better, whether we are talking personally or in terms of our work. Second, we need to not simply be “state of the art” or “world class,” we need to advance the state of the art, or define what it means to be world class. The driving aim is to strive to be the best and make things better as our default setting. The power of the default setting is incredible. The default is so often the unconscious choice that setting the default may be the single most important decision commonly made. As soon as we accept that we, or our work, are “good enough” and “fit to purpose” we have lost the battle for the future. The frequency of the default setting of “good enough” is sufficient to ensure that mediocrity creeps inevitably into the frame.

A goal ensures progress. But one gets much further without a goal.

― Marty Rubin

A large part of the problem with our environment is an obsession with measuring performance by the achievement of goals or milestones. Instead of working to create a super productive and empowering work place where people work exceptionally by intrinsic motivation, we simply set “lofty” goals and measure their achievement. The issue is the mindset implicit in the goal setting and measuring; this is the lack of trust in those doing the work. Instead of creating an environment and work processes that enable the best performance, we define everything in terms of milestones. These milestones and the attitudes that surround them sow the seeds of destruction, not because goals are wrong or bad, but because the behavior driven by achieving management goals is so corrosively destructive.

The result is the loss of an environment that can enable the best results as a focus, and goal setting that becomes increasingly risk averse. When goals and milestones are used to judge people, they start to set the bar lower to make sure they meet the standard. The better approach is to create the environment, culture and processes that enable the work to be the best, and reap the rewards that flow naturally. Moreover, in the process of creating the environment, culture and process, the workplace is happier as well as higher performing. Intrinsic motivation is harnessed instead of crushed. Everyone benefits from a better workplace and better performance, but we lack the trust needed to do this. Setting goals and milestones simply overcharges the achievement and leaves little or no room for the risk necessary for innovation. We find ourselves in a system where innovation is killed by the lack of risk taking that milestone-driven management creates.

So how does progress really work? The truth is that there are really very few major breakthroughs, and almost none of them are ever planned. Most of the time people simply make incremental changes and improvements, which have small, but positive, effects on what they work on. These are bricks in the wall and gentle nudges to the status quo. Occasionally these small positive changes cause something greater. Occasionally the little thing becomes something monumental and creates a massive improvement. The trick is that you typically can’t tell what little change will have the big impact in advance. Without looking for the small changes as a way of life, and a constant property, the next big thing never comes.

This is the trap of planning. You can’t plan breakthroughs and can’t schedule a better future. Getting to massive improvements is more about creating an environment of excellence, and continuous improvement than any sort of change agenda. The key to getting breakthroughs is to get really good people to work on improving the state of the art or state of the knowledge continuously. We need broad and expansive goals with aspirational character. Instead we have overly specific goals that simply ooze a deep distrust for those conducting the work. With the lack of trust and faith in how the work is done people retract to promising the sure thing, or simply the thing they have already accomplished. The death of progress is found by having a culture of simply implementing and staying at the state of the art or being world class.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

Lots of examples exist in the technical world whether it is new numerical methods, or technology (like GPS for example). Almost none of these sought to change the World, but they did by simply taking a key step over a threshold where the change became great. Social movements are another prime example.


Take the fight for marriage equality as a great example of the small things leading to huge changes. A county clerk in New Mexico (Doña Ana County, where Las Cruces is located) stood up and granted marriage licenses to gay and lesbian citizens. This step, along with other small actions across the country, launched a tidal wave of change that culminated in making marriage equality the law for the entire nation.

So the difference is really simple and clear. You must be expanding the state of the art, or defining what it means to be world class. Simply being at the state of the art or world class is not enough. Progress depends on being committed and working actively at improving upon and defining state of the art and world-class work. Little improvements can lead to the massive breakthroughs everyone aspires toward, and really are the only way to get them. Generally all these things are serendipitous and depend entirely on a culture that creates positive change and prizes excellence. One never really knows where the tipping point is, and getting to the breakthrough depends mostly on the faith that it is out there waiting to be discovered.

 

Be the change that you wish to see in the world.

― Mahatma Gandhi

Getting Real About Computing Shock Waves: Myth versus Reality

18 Thursday Aug 2016

Posted by Bill Rider in Uncategorized


Taking a new step, uttering a new word, is what people fear most.

― Fyodor Dostoyevsky

Computing the solution to flows containing shock waves used to be exceedingly difficult, and for a lot of reasons it is now modestly difficult. Solutions for many problems may now be considered routine, but numerous pathologies exist and the limits of what is possible mean research progress is still vital. Unfortunately there seems to be little interest in making such progress from those funding research; it goes in the pile of solved problems. Worse yet, there are numerous preconceptions about results, and standard practices about how results are presented, that conspire to inhibit progress. Here, I will outline places where progress is needed and how people discuss research results in ways that further these inhibitions.

I’ve written on this general topic before along with general advice on how to make good decisions in designing methods, https://williamjrider.wordpress.com/2015/08/14/evolution-equations-for-developing-improved-high-resolution-schemes-part-1/. In a nutshell, shocks (discontinuities) provide a number of challenges and bring some difficult realities to the table. To do the best job means making some hard choices that often fly in the face of ideal circumstances. By making these hard choices you can produce far better methods for practical use. It often means sacrificing things that might be nice in an ideal linear world for the brutal reality of a nonlinear world. I would rather have something powerful and functional in reality than something of purely theoretical interest. The published literature seems to be opposed to this point-of-view with a focus on many issues of little practical importance.

It didn’t used to be like this. I’ve highlighted the work of Peter Lax before, https://williamjrider.wordpress.com/2015/06/25/peter-laxs-philosophy-about-mathematics/, and it would be an understatement to say that his work paved the way for progress in compressible fluid mechanics. Other fields such as turbulence, solid mechanics, and electromagnetics have all suffered from the lack of similar levels of applied mathematical rigor and foundation. Despite this shining beacon of progress, other fields have failed to build upon this example. Worse yet, the difficulty of extending Lax’s work is monumental. Moving into high dimensions invariably leads to instability and flow that begins to become turbulent, and turbulence is poorly understood. Unfortunately we are a long way from recreating Lax’s legacy in other fields (see e.g., https://williamjrider.wordpress.com/2014/07/11/the-2014-siam-annual-meeting-or-what-is-the-purpose-of-applied-mathematics/).

If one takes a long hard look at the problems that pace our modeling and simulation, turbulence figures prominently. We don’t understand turbulence worth a damn. Our physical understanding is terrible and not sufficient to simply turn the problem over to supercomputers to crush (see https://williamjrider.wordpress.com/2016/07/04/how-to-win-at-supercomputing/). In truth, this is an example where our computing hubris exceeds our intellectual grasp considerably. We need significantly greater modeling understanding to power progress. Such understanding is far too often assumed to exist where it does not. Progress in turbulence is stagnant and clearly lacks the key conceptual advances necessary to chart a more productive path. It is vital to do far more than simply turn codes loose on turbulent problems and expect great solutions to come out, because they won’t. Nonetheless, it is the path we are on. When you add shocks and compressibility to the mix, everything gets so much worse. Even the most benign turbulence is poorly understood, much less anything complicated. It is high time to inject some new ideas into the study rather than continue to hammer away at the failed old ones. In closing this vignette, I’ll offer up a different idea: perhaps the essence of turbulence is compressible and associated with shocks rather than being largely divorced from these physics. Instead of building on the basis of the decisively unphysical aspects of incompressibility, turbulence might be better built upon a physical foundation of compressible (thermodynamic) flows with dissipative discontinuities (shocks) that fundamental observations call for and current theories cannot explain.

Further challenges with shocked systems occur with strong shocks, where nonlinearity is ramped up to a level that exposes any lingering shortcomings. Multiple materials are another key physical difficulty that brings any solution methodology’s weaknesses into acute focus. Again and again the greatest rigor in simpler settings provides a foundation for good performance when things get more difficult. Methods that ignore a variety of difficult and seemingly unfortunate realities will underperform compared to those that confront these realities directly. Usually the methods that underperform simply add more dissipation to overcome things. The dissipation usually is added in a rather heavy-handed manner because it is unguided by theory and works in opposition to unpleasant realities. Confronting these realities is not pessimism; it is pragmatism. The result of being irrationally optimistic is always worse than pragmatic realism.

Let’s get to one of the biggest issues that confounds the computation of shocked flows: accuracy, convergence and order-of-accuracy. For computing shock waves, the order of accuracy is limited to first-order for everything emanating from any discontinuity (Majda & Osher 1977). Furthermore, nonlinear systems of equations will invariably and inevitably create discontinuities spontaneously (Lax 1973). In spite of these realities the accuracy of solutions with shocks still matters, yet no one ever measures it. The reasons why it matters are far more subtle and refined, and the benefits of accuracy are less decisive. When a flow is smooth enough to allow high-order convergence, the accuracy of the solution with high-order methods is unambiguously superior. With smooth solutions the highest order method is the most efficient if you are solving for equivalent accuracy. When convergence is limited to first-order, the high-order methods effectively just lower the constant in front of the error term, which is less efficient. One then has the situation where the gains with high-order must be balanced against the cost of achieving high-order. In very many cases this balance is not achieved.
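To make that trade-off concrete, here is a minimal sketch (in Python, with made-up error numbers purely for illustration; none of it comes from the papers cited) of how the observed order of accuracy is measured in a grid-refinement study. A formally second-order method returns roughly two on a smooth problem and roughly one once a shock is present; only the error constant changes.

    import numpy as np

    def observed_order(h, err):
        # Fit err ~ C * h**p in log-log space; the slope is the observed order p.
        p, _ = np.polyfit(np.log(h), np.log(err), 1)
        return p

    # Illustrative (made-up) numbers: the mesh spacing halved three times.
    h = np.array([1.0/100, 1.0/200, 1.0/400, 1.0/800])

    # A smooth problem with a formally second-order method: error ~ h**2.
    print(observed_order(h, 2.0 * h**2))   # prints roughly 2.0

    # The same method applied to a problem containing a shock: error ~ h**1.
    print(observed_order(h, 0.5 * h))      # prints roughly 1.0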

What we see in the published literature is convergence and accuracy only being assessed for smooth problems where the full order of accuracy may be seen. In the cases that are actually driving the development of methods, where shocks are present, accuracy and convergence are ignored. If you look at the published papers and the examples, the order of accuracy is measured and demonstrated on smooth problems almost as a matter of course. Everyone knows that the order of accuracy cannot be maintained with a shock or discontinuity, so no one measures the solution accuracy or convergence. The problem is that these details still matter! You need convergent methods, and you have interest in the magnitude of the numerical error. Moreover there are still significant differences in these results on the basis of methodological differences. To up the ante, the methodological differences carry significant changes in the cost of solution. What one finds typically is a great deal of cost to achieve formal order of accuracy that provides very little benefit with shocked flows (see Greenough & Rider 2004; Rider, Greenough & Kamm 2007). This community, in the open or behind closed doors, rarely confronts the implications of this reality. The result is a damper on all progress.

The standard for complex flow is well-known and documented before (i.e., “swirlier is better” https://williamjrider.wordpress.com/2014/10/22/821/). When combined with our appallingly poor understanding of turbulence, you have a perfect recipe for computing and selling complete bullshit (https://williamjrider.wordpress.com/2015/12/10/bullshit-is-corrosive/). The side-dish for the banquet of bullshit is the even broader use of the viewgraph norm (https://williamjrider.wordpress.com/2014/10/07/the-story-of-the-viewgraph-norm/) where nothing quantitative is used for comparing results. At its worst, the viewgraph norm is used in comparing results where an analytical solution is available. So we have a case where an analytical solution is available to do a complete assessment of error, and we ignore its utility, perhaps only using it for plotting. What a massive waste! More importantly it masks problems that need attention.

Underlying this awful practice is a viewpoint that the details and magnitude of the error do not matter. Nothing could be further from the truth: the details matter a lot and there are huge differences from method to method. All these differences are systematically swept under the proverbial rug. With shock waves one has a delicate balance between the sharpness of the shock and the creation of post-shock oscillations. Allowing a shock wave to be slightly broader can remove many pathologies and produce a cleaner looking solution, but also increases the error. Determining the relative quality of the solutions is left to expert pronouncements, and experts determine what is good and bad instead of the data. I’ve written about how to do this right several times before, and it’s not really difficult, https://williamjrider.wordpress.com/2015/01/29/verification-youre-doing-it-wrong/. What ends up being difficult is honestly confronting reality and all the very real complications it brings to the table. It turns out that most of us simply prefer to be delusional.
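As a concrete alternative to the viewgraph norm, when the exact solution is available on the computational grid the quantitative comparison costs a few lines. The following is a hedged sketch under that assumption; the function and argument names are mine, not taken from any particular code or paper.

    import numpy as np

    def error_norms(u_exact, u_computed, dx):
        # Discrete L1, L2 and L-infinity norms of the error on a uniform mesh.
        e = u_computed - u_exact
        return {"L1": np.sum(np.abs(e)) * dx,
                "L2": np.sqrt(np.sum(e**2) * dx),
                "Linf": np.max(np.abs(e))}

Reporting these norms at every resolution, for every method being compared, turns a qualitative picture into a convergence statement and exposes exactly the method-to-method differences discussed above.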

In the end shocks are a well-trod field with a great deal of theoretical support for a host of issues of broader application. If one is solving problems in any sort of real setting, the behavior of solutions is similar. In other words you cannot expect high-order accuracy; almost every solution is converging at first-order (at best). By systematically ignoring this issue, we are hurting progress toward better, more effective solutions. What we see over and over again is utility with high-order methods, but only to a degree. Rarely does the fully rigorous achievement of high-order accuracy pay off with better accuracy per unit computational effort. On the other hand, methods which are only first-order accurate formally are complete disasters and virtually useless practically. Is the sweet spot second-order accuracy (Margolin and Rider 2002)? Or just second-order accuracy for the nonlinear parts of the solution with a limited degree of high-order applied to the linear aspects of the solution? I think so.

Perfection is not attainable, but if we chase perfection we can catch excellence
― Vince Lombardi Jr.

Lax, Peter D. Hyperbolic systems of conservation laws and the mathematical theory of shock waves. Vol. 11. SIAM, 1973.

Majda, Andrew, and Stanley Osher. “Propagation of error into regions of smoothness for accurate difference approximations to hyperbolic equations.” Communications on Pure and Applied Mathematics 30, no. 6 (1977): 671-705.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.

 

The benefits of using “primitive variables”

08 Monday Aug 2016

Posted by Bill Rider in Uncategorized


 

Simplicity is the ultimate sophistication.

― Clare Boothe Luce

When one is solving problems involving a flow of some sort, conservation principles are quite attractive since these principles follow nature’s “true” laws (true to the extent we know things are conserved!). With flows involving shocks and discontinuities, conservation brings even greater benefits as the Lax-Wendroff theorem demonstrates (https://williamjrider.wordpress.com/2013/09/19/classic-papers-lax-wendroff-1960/). In a nutshell you have guarantees about the solution through the use of conservation form that are far weaker without it. A particular set of variables is the obvious choice because they arise naturally in conservation form. For fluid flow these are density, momentum and total energy. The most seemingly straightforward thing to do is use these same variables to discretize the equations. This is generally a bad choice and should be avoided unless one does not care about the quality of results.

While straightforward and obvious, the choice of using conserved variables is almost always a poor one, and far better results can be achieved through the use of primitive variables for most of the discretization and approximation work. This is even true if one is using characteristic variables (which usually imply some sort of entirely one-dimensional character). The primitive variables have simple and intuitive physical meaning, and often equate directly to what can be observed in nature (conserved variables don’t). The beauty of primitive variables is that they trivially generalize to multiple dimensions in ways that characteristic variables do not. The other advantages are equally clear, specifically the ability to extend the physics of the problem in a natural and simple manner. This sort of extension usually causes the characteristic approach to either collapse or at least become increasingly unwieldy. A key aspect to keep in mind at all times is that one returns to the conserved variables for the final approximation and update of the equations. Keeping the conservation form for the accounting of the complete solution is essential.

To keep the bulk of the discussion simple, I will focus on the Euler equations of fluid dynamics. These equations describe the conservation of mass, \rho_t + m_x = 0, momentum, m_t + (m^2/\rho + p)_x = 0, and total energy, E_t + \left[m/\rho(E + p) \right]_x = 0, in one dimension. Even in this very simple setting the primitive variables are immensely useful, as demonstrated by H. T. Huynh in another of his massively under-appreciated papers (Huynh 1995). In this paper he masterfully covers the whole of the techniques and utility of primitive variables. Arguably, the use of primitive variables went mainstream with the papers of Colella and Woodward. In spite of the broad appreciation of those papers, the use of primitive variables in working practice is still more a niche than the norm. The benefits become manifestly obvious whether one is analyzing the equations (where the primitive form is equivalent to the more complex variable set!), or discretizing the solutions.

Study the past if you would define the future.

― Confucius

The use of the “primitive variables” came from a number of different directions. Perhaps the earliest use of the term “primitive” came from meteorology in the work of Bjerknes (1921), whose primitive equations formed the basis of early work in computing weather in an effort led by Jule Charney (1955). Another field to use this concept is the solution of incompressible flows. There the primitive variables are the velocities and pressure, which is distinguished from the vorticity-streamfunction approach (Roache 1972). In two dimensions the vorticity-streamfunction solution is more efficient, but lacks a simple connection to measurable quantities. The same sort of notion separates the conserved variables from the primitive variables in compressible flow. The use of primitive variables as an effective approach computationally may have begun in the computational physics work at Livermore in the 1970s (see e.g., DeBar). The connection of the primitive variables to classical analysis of compressible flows and simple physical interpretation also plays a role.

What are the primitive variables? The basic conserved variables for compressible fluid flow are density, \rho, momentum, m=\rho u, and total energy, E = \rho e + \frac{1}{2} \rho u^2. Here the velocity is u and the internal energy is e. One also has the equation of state p=P(\rho,e) as the constitutive relation. Let’s take the Euler equations and rewrite them using the primitive variables: the conservation of mass, \rho_t + (\rho u)_x = 0, momentum, (\rho u)_t + (\rho u^2 + p)_x = 0, and total energy, \left[\rho (e + \frac{1}{2}u^2)\right]_t + \left[u\left(\rho (e + \frac{1}{2}u^2)+ p\right) \right]_x = 0. Except for the energy equation, the expressions are simpler to work with, but this is the veritable tip of the proverbial iceberg.
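Because the rest of the discussion moves back and forth between the two variable sets, a small sketch of the bookkeeping may help. It assumes a gamma-law gas, p = (\gamma - 1)\rho e, purely for concreteness, and the function names are illustrative rather than taken from any code in the references below.

    import numpy as np

    GAMMA = 1.4  # gamma-law (ideal) gas, assumed only for this illustration

    def cons_to_prim(rho, m, E):
        # Conserved (density, momentum, total energy) -> primitive (density, velocity, pressure).
        u = m / rho
        e = E / rho - 0.5 * u**2       # specific internal energy
        p = (GAMMA - 1.0) * rho * e    # gamma-law equation of state
        return rho, u, p

    def prim_to_cons(rho, u, p):
        # Primitive (density, velocity, pressure) -> conserved (density, momentum, total energy).
        e = p / ((GAMMA - 1.0) * rho)
        return rho, rho * u, rho * e + 0.5 * rho * u**2

The discretization work described below happens in the primitive variables; the final flux assembly and update go back through the conserved variables so the conservation accounting is never lost.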

What are the equations for the primitive variables? The primitive variables can be expressed and evolved using simpler equations, which are primarily evolution equations dependent on differentiability, which must be present for any sort of accuracy to be in play anyway. The mass equation is the same although one might expand the derivative, \rho_t + u \rho_x + \rho u_x = 0. The momentum equation is replaced by an equation of motion, u_t + u u_x + \frac{1}{\rho} p_x = 0. The energy equation can be replaced with a pressure equation, p_t + u p_x + \gamma p u_x = 0 (where \gamma is the generalized adiabatic index defined through the isentropic derivative, \gamma p = \rho\, \partial_\rho p |_S = \rho c^2), or an internal energy equation, \rho e_t + \rho u e_x + p u_x = 0. One can use either energy representation to good measure, or better yet, use both and avoid having to evaluate the equation of state. Moreover, if one wants, the difference between the pressure from the evolution equation and from the state relation can be used as an error measure.
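Collecting the three evolution equations just given into matrix form (anticipating the notation V_t + A_p V_x = 0 introduced below, with V = (\rho, u, p)^T and the rows ordered mass, motion, pressure), the quasilinear primitive system is simply a restatement:

V_t + A_p V_x = 0, \qquad A_p = \begin{pmatrix} u & \rho & 0 \\ 0 & u & 1/\rho \\ 0 & \gamma p & u \end{pmatrix}.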

How does one convert to the primitive variables, and convert back to the conserved variables? If one is interested in analysis of the conservative equations, then one linearizes the equations about a point, U_t + \left(F(U)\right)_x = 0 \rightarrow U_t + \partial_U F(U) U_x = 0, where U is the vector of conserved variables, and F(U) is the flux function. The matrix A_c = \partial_U F(U) is the flux Jacobian. One does an eigenvalue decomposition of this matrix to analyze the equations. From this decomposition, A_c = R_c \Lambda L_c, one can get the eigenvalues, \Lambda, and the characteristic variables, L_c \Delta U. The analysis is difficult and non-intuitive with the conserved variables.

Here we get to the cool part of this whole thing: there is a much easier and more intuitive path through the primitive variables. One can get a matrix representation for the primitive variables, which I’ll call V in vector form, V_t + A_p V_x = 0. One can get the terms in A_p easily from the differential forms, and recognizing that \gamma p = \rho c^2, with c being the speed of sound, the eigen-analysis is so simple that it can be done by hand (and it’s a total piece of cake for Mathematica). Using similar notation as the conserved form, A_p = R_p \Lambda L_p. The first thing to note is that \Lambda is exactly the same, i.e., the eigenvalues are identical. One then gets a result for the characteristics, L_p \Delta V, that matches the textbooks, and that L_p \Delta V = L_c \Delta U. All the differences in the transformation are bound up in the right eigenvectors R_c and R_p, and the ease of physical insight provided by the analysis.
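The “done by hand” claim is also trivial to check symbolically. Here is a minimal SymPy sketch using the A_p written out above (again under the ideal-gas assumption); it is a verification aid of my own, not anything from the references.

    import sympy as sp

    rho, u, p, gamma = sp.symbols('rho u p gamma', positive=True)

    # Primitive-variable coefficient matrix A_p for V = (rho, u, p).
    A_p = sp.Matrix([[u, rho,     0    ],
                     [0, u,       1/rho],
                     [0, gamma*p, u    ]])

    # The eigenvalues should be u - c, u, u + c with c = sqrt(gamma*p/rho);
    # the eigenvectors returned here are the columns of R_p.
    for lam, mult, vecs in A_p.eigenvects():
        print(sp.simplify(lam), [sp.simplify(v) for v in vecs])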

Now we can elucidate how to move between these two forms, and even use the primitive variables for the analysis of the conserved form directly. Using Huynh’s paper as a guide and repeating the main results, one defines a matrix of partial derivatives of the conserved variables, U, with respect to the primitive variables, V, M = \partial_V U. This matrix can then be inverted into M^{-1} and we may define an identity, A_c = M A_p M^{-1}, which allows the conserved eigen-analysis to be executed in terms of the more convenient primitive variables. The eigenvalues of A_c and A_p are the same. We can get the left and right eigenvectors through L_c = L_p M^{-1} and R_c = M R_p. All of this follows from the simple application of the chain rule to the linearized versions of the governing equations.
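For the ideal-gas case used in the sketches above, M is small enough to write down explicitly; the following is my own illustration of the chain-rule identity, not a formula quoted from Huynh’s paper. With U = (\rho, \rho u, \rho e + \frac{1}{2}\rho u^2), V = (\rho, u, p), and e = p/\left[(\gamma-1)\rho\right],

M = \partial_V U = \begin{pmatrix} 1 & 0 & 0 \\ u & \rho & 0 \\ \frac{1}{2}u^2 & \rho u & \frac{1}{\gamma-1} \end{pmatrix},

and A_c = M A_p M^{-1} can then be formed directly, for instance by appending sp.simplify(M * A_p * M.inv()) with this M to the SymPy sketch above.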

The primitive variable idea can be extended in a variety of nifty and useful ways. One can augment the variable set in ways that can yield some extra efficiency to the solution by avoiding extra evaluations of the constitutive (or state) relations. This would most classically involve using both a pressure and energy equation in the system. Miller and Puckett provide a nice example of this technique in practice, building upon the work of Colella, Glaz and Ferguson where expensive equation of state evaluations are avoided. One must note that the system of equations being used to discretize the system is carrying redundant information that may have utility beyond efficiency.

One can go beyond this to add variables to the system of equations that are redundant, but carry information implicit in their approximation that may be useful in solving the equations. One might add an equation for the specific volume of the fluid to compare with density. Similar things could be done with kinetic energy, vorticity, or entropy. In each case the redundancy might be used to discover or estimate the error or smoothness of the underlying solution, and perhaps adapt the solution method on the basis of this information.

Using the primitive variables for discretization is almost as good as using characteristic variables in terms of solution fidelity. Generally if you can get away with 1-D ideas, the characteristic variables are unambiguously the best. The primitive variables are almost as good. The key is to use a local transformation to the primitive variables for the work of discretization even when your bookkeeping is all in conserved variables. Even if you are doing characteristic variables, the construction and use of them is enabled by primitive variables. The resulting expressions for the characteristics are simpler in primitive variables. Perhaps almost as important the expressions for the variables themselves are far more intuitively expressed in primitive variables.

A real source of power of the primitive variables comes when you extend past the simpler case of the Euler equations to things like magnetohydrodynamics (MHD, i.e., compressible magnetic fluids). Discretizing MHD with conserved variables is a severe challenge and analysis of its characteristic structure can be a descent into utter madness. Doing the work in these more complex systems using the primitive variables is extremely advantageous. It is an approach that is far too often left out, and the quality and fidelity of numerical methods suffers as a result.

Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction.

― Ernst F. Schumacher

Lax, Peter, and Burton Wendroff. “Systems of conservation laws.” Communications on Pure and Applied Mathematics 13, no. 2 (1960): 217-237.

Huynh, Hung T. “Accurate upwind methods for the Euler equations.” SIAM Journal on Numerical Analysis 32, no. 5 (1995): 1565-1619.

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.

Van Leer, Bram. “Upwind and high-resolution methods for compressible flow: From donor cell to residual-distribution schemes.” Communications in Computational Physics 1, no. 192-206 (2006): 138.

Bjerknes, V. “The Meteorology of the Temperate Zone and the General Atmospheric Circulation. 1.” Monthly Weather Review 49, no. 1 (1921): 1-3.

Charney, J. “The use of the primitive equations of motion in numerical prediction.” Tellus 7, no. 1 (1955): 22-26.

Roache, Patrick J. Computational Fluid Dynamics. Hermosa Publishers, 1972.

DeBar, R. B. Method in two-D Eulerian hydrodynamics. No. UCID-19683. Lawrence Livermore National Lab., CA (USA), 1974.

Miller, Gregory Hale, and Elbridge Gerry Puckett. “A high-order Godunov method for multiple condensed phases.” Journal of Computational Physics 128, no. 1 (1996): 134-164.

Colella, P., H. M. Glaz, and R. E. Ferguson. “Multifluid algorithms for Eulerian finite difference methods.” preprint (1996).

 

My Job Should Be Awesome; It’s Not. Why?

29 Friday Jul 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

There is nothing quite so useless, as doing with great efficiency, something that should not be done at all.

― Peter F. Drucker

This post is going to be a bit more personal than usual; I’m trying to get my head around why work is deeply unsatisfying, and how the current system seems to conspire to destroy all the awesome potential it should have. My job should have all the things we desire: meaning, empowerment and a quest for mastery. At some level we seem to be in an era that belittles all dreams and robs work of the meaning it should have, disempowers most, and undermines mastery of anything. Worse yet, mastery of things seems to invoke outright suspicion, being regarded more as a threat than a resource. On the other hand, I freely admit that I’m lucky and have a good well-paying, modestly empowering job compared to the average Joe or Jane. So many have it so much worse. The flipside of this point of view is that we need to improve work across the board; if the best jobs are this crappy one can scarcely imagine how bad things are for normal or genuinely shitty jobs.

In my overall quest for quality and mastery I refuse to settle for this, and it only makes my dilemma all the more confounding. As I state in the title, my job has almost everything going for it and it should be damn close to unambiguously awesome. In fact my job used to be awesome, and lots of forces beyond my control have worked hard to completely fuck that awesomeness up. Again getting to the confounding aspects of the situation, the powers that be seem to be absolutely hell-bent on continuing to fuck things up, and turn awesome jobs into genuinely shitty ones. I’m sure the shitty jobs are almost unbearable. Nonetheless, I know that, grading on a curve, my job is still awesome, but I don’t settle for success being defined by being less shitty than other people. That’s just a recipe for things to get shittier! If I have these issues with my work, what the hell is the average person going through?

So before getting to all the things fucking everything up, let’s talk about why the job should be so incredibly fucking awesome. I get to be a scientist! I get to solve problems, and do math and work with incredible phenomena (some of which I’ve tattooed on my body). I get to invent things like new ways of solving problems. I get to learn and grow and develop new skills, hone old skills and work with a bunch of super smart people who love to share their wealth of knowledge. I get to write papers that other people read and build on, I get to read papers written by a bunch of people who are way smarter than me, and if I understand them I learn something. I get to speak at conferences (which can be in nice places to visit) and listen at them too on interesting topics, and get involved in deep debates over the boundaries of knowledge. I get to contribute to solving important problems for mankind, or my nation, or simply for the joy of solving them. I work with incredible technology that is literally at the very bleeding edge of what we know. I get to do all of this and provide a reasonably comfortable living for my loved ones.

If failure is not an option, then neither is success.

― Seth Godin

All of the above is true, and here we get to the crux of the problem. When I look at each day I spend at work almost nothing in that day supports any of this. In a very real sense all the things that are awesome about my job are side projects or activities that only exist in the “white space” of my job. The actual job duties that anyone actually gives a shit about don’t involve anything from the above list of awesomeness. Everything I focus on and drive toward is the opposite of awesome; it is pure mediocre drudgery, a slog that starts on Monday and ends on Friday, only to start all over again. In a very deep and real sense, the work has evolved into a state where all the awesome things about being a scientist are not supported at all, and every fucking thing done by society at large undermines it. As a result we are steadily and completely hollowing out value, meaning, and joy from the work of being a scientist. This hollowing is systematic, but serves no higher purpose that I can see other than to place a sense of safety and control over things.

The Cul-de-Sac (French for “dead end”) … is a situation where you work and work and work and nothing much changes

― Seth Godin


So the real question to answer is how did we get to this point? How did we create systems whose sole purpose seems to be robbing life of meaning and value? How are formerly great institutions being converted into giant steaming piles of shit? Why is work becoming such a universal shit show? Work with meaning and purpose should be something society values both from the standpoint of pure productivity and from a sense of respect for humanity. Instead we are turning away from making work meaningful, and making steady choices that destroy the meaning in work. The forces at play are a combination of fear, greed, and power. Each of these forces has a role to play in a widespread and deep destruction of a potentially better future. These forces provide short-term comfort, but inflict long-term damage that ultimately leaves us poorer both materially and spiritually.

Men go to far greater lengths to avoid what they fear than to obtain what they desire.

― Dan Brown

Of these forces, fear is the most acute and widespread. Fear is harnessed by the rich and powerful to hold onto and grow their power, their stranglehold on society. Across society we see people looking at the world and saying “Oh shit! This is scary, make it stop!” The rich and powerful can harness this chorus of fear to hold onto and enhance their power. The fear comes from the unknown and from change, which drives people into attempting to control things, which also suits the needs of the rich and powerful. This control gives people a false sense of safety and security at the cost of empowerment and meaning. For those at the top of the food chain, control is what they want because it allows them to hold onto their largesse. The fear is basically used to enslave the population and cause them to willingly surrender to promises of safety and security against a myriad of fears. In most cases we don’t fear the greatest thing threatening us, the forces that work steadfastly to rob our lives of meaning. At work, fear is the great enemy of all that is good, killing meaning, empowerment and mastery in one fell swoop.

Power does not corrupt. Fear corrupts… perhaps the fear of a loss of power.

― John Steinbeck

How does this manifest itself in my day-to-day work? A key mechanism in undermining meaning in work is the ever more intrusive and micromanaged money running research. The control comes under the guise of accountability (who can argue with that, right?). The accountability leads to a systematic diminishment in achievement and has much more to do with a lack of societal trust (which embodies part of the mechanics of fear). Instead of ensuring better results and money well spent, the whole dynamic creates a virtual straitjacket for everyone in the system that assures they actually create, learn and produce far less. We see research micromanaged, and projectized in ways that are utterly incongruent with how science can be conducted. The lack of trust translates to a lack of risk, and the lack of risk equates to a lack of achievement (with empowerment and mastery sacrificed at the altar of accountability). This is only one aspect of how the control works to undermine work. There are so many more.

Our greatest fear should not be of failure but of succeeding at things in life that don’t really matter.

― Francis Chan

Part of these systematic control mechanisms at play is the growth of the management culture in all these institutions. Instead of valuing the top scientists and engineers who produce discovery, innovation and progress, we now value the management class above all else. The managers manage people, money and projects that have come to define everything. This is as true at the Labs as it is at universities, where the actual mission of both has been sacrificed to money and power. Neither the Labs nor Universities are producing what they were designed to create (weapons, students, knowledge). Instead they have become money-laundering operations whose primary service is the careers of managers. All one has to do is see who the headline grabbers are from any of these places; it’s the managers (who by and large show no leadership). These managers are measured in dollars and people, not any actual achievements. All of this is enabled by control, and control enables people to feel safe. As long as reality doesn’t intrude we will go down this deep death spiral.

We have priorities and emphasis in our work and money that have nothing to do with the reason our Labs, Universities or even companies exist. We have useless training that serves absolutely no purpose other than to check a box off. The excellence or quality of the work done has no priority at all. We have gotten to the point where peer review is a complete sham, and any honest assessment of the quality of the work is met with hostility. We should all wrap our heads collectively around this maxim of the modern workplace: it can be far worse for your career to demand technical quality as part of what you do than to do shoddy work. We are heading headlong into a mode of operation where mediocrity is enshrined as a key organizational value to be defended against potential assaults by competence. All of this can be viewed as the ultimate victory of form over substance. If it looks good, it must be good. The result is that the appearances are managed, and anything of substance is rejected.

The result of the emphasis on everything else except the core mission of our organizations is the systematic devaluation of those missions, along with a requisite creeping incompetence and mediocrity. In the process the meaning and value of the work takes a fatal hit. Actually expressing a value system of quality and excellence is now seen as a threat and becomes a career-limiting perspective. A key aspect of the dynamic to recognize is the relative simplicity of running mediocre operations without any drive for excellence. It’s great work if you can get it! If your standards are complete shit, almost anything goes, and you avoid the need for conflict almost entirely. In fact the only source of conflict becomes the need to drive away any sense of excellence. Any hint of quality or excellence has the potential to overturn this entire operation and the sweet deal of running it. So any quality ideas are attacked and driven out as surely as the immune system attacks a virus. While this might be a tad hyperbolic, it’s not too far off at all, and it is the actual bull’s-eye for an ever-growing swath of our modern world.

The key value is money and its continued flow. The whole system runs on a business model of getting money regardless of what it entails doing or how it is done. Of course having no standards makes this so much easier; if you’ll do any shitty thing as long as they pay you for it, management is easier. Without standards of quality this whole operation becomes self-replicating. In a very direct way the worst thing one can do is take on hard work, so the system is wired to drive good work away. You’re actually better off doing shitty work held to shitty standards. Doing the right thing, or doing the thing right, is viewed as a direct threat to the flow of money and generates an attack. The prime directive is money to fund people and measure the success of the managers. Whether or not the money generates excellent meaningful work or focuses on something of value simply does not matter. It becomes a completely vicious cycle where money breeds more money, and more money can be bred by doing simple shoddy work than by asking hard questions and demanding correct answers. In this way we can see how mediocrity becomes the value that is tolerated and excellence is reviled. Excellence is a threat to power; mediocrity simply accepts being lorded over by the incompetent.


At some level it is impossible to disconnect what is happening in science from the broader cultural trends. Everything happening in the political climate today is part of the trends I see at work. The political climate is utterly and completely corrosive, and the work environment is the same. In the United States we have had 20 years of government engineered to not function. This is to support the argument that government is bad and doesn’t work (and should be smaller). The fact is that it is engineered not to work by the proponents of this philosophy. The result is a literal self-fulfilling prophecy: government doesn’t work if you don’t try to make it work. If we actually put effort into making it work, valued expertise and excellence, it would work just fine. We get shit because that’s what we ask for. If we demanded excellence and performance, and actually held people accountable for it, we might actually get it, but it would be hard, it would be demanding. The problem is that success would disprove the maxim that government is bad and doesn’t work.

One of my friends recently pointed out that the people managing and running the programs that fund the work at the Labs in Washington actually make less than our Postdocs at the Labs. The result is that we get what we pay for, incompetence, which grows more manifestly obvious with each passing year. If we want things to work we need to hire talented people and hold them to high standards, which means we need to pay them what they are worth.

We see a large body of people in society who are completely governed by fear above all else. The fear is driving people to make horrendous and destructive decisions politically. The fear is driving the workplace into the same set of horrendous and destructive decisions. It’s not clear whether we will turn away from this mass fear before things get even worse. I worry that both work and politics will be governed by these fears until it pushes us over the precipice to disaster. Put differently, the shit show we see in public through politics mirrors the private shit show in our workplaces. The shit is likely to get much worse before it gets better.

There are two basic motivating forces: fear and love. When we are afraid, we pull back from life. When we are in love, we open to all that life has to offer with passion, excitement, and acceptance. We need to learn to love ourselves first, in all our glory and our imperfections. If we cannot love ourselves, we cannot fully open to our ability to love others or our potential to create. Evolution and all hopes for a better world rest in the fearlessness and open-hearted vision of people who embrace life.

― John Lennon

A More Robust, Less Fragile Stability for Numerical Methods

25 Monday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 2 Comments

 

Science is the process that takes us from confusion to understanding…

― Brian Greene

Stability is essential for computation to succeed. Better stability principles can pave the way for greater computational success. We are in dire need of new, expanded concepts for stability that provide paths forward toward uncharted vistas of simulation.

Without stability numerical methods are completely useless. Even a modest amount of instability can completely undermine and destroy the best simulation intentions. Stability became a thing right after computers became a thing. Early work on ordinary differential equations encountered instability, but the handcrafted computations were always suspect. The availability of automatic computation via computers ended the speculation, and it became clear that numerical methods could become unstable. With the proof of a clear issue in hand, great minds went to work to put this potential chaos to order. This is the kind of great work we should be asking applied math to be doing today, and sadly we are not because of our over-reliance on raw computing power.

Von Neumann devised the first technique for stability analysis after encountering instability at Los Alamos during World War 2. This method is still the gold standard for analysis today in spite of rather profound limitations in applicability. In the early 1950’s Lax came up with the equivalence theorem (interestingly both Von Neumann and Lax worked with Robert Richtmyer, https://williamjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/), which only highlighted the importance of stability more boldly. Remarkably, ordinary differential equation methods came to stability later than partial differential equations, in Dahlquist’s groundbreaking work. He produced a stability theory and equivalence theory that paralleled the work of Von Neumann and Lax for PDEs. All he needed were computers to drive the need for the work. We will note that the PDE theory is all for linear methods and linear equations, while the ODE theory is for linear methods, but applies to nonlinear ODEs.

Once a theory for stability was established, computations could proceed with enough guarantee of solution to make progress. For a very long time this stability work was all that was needed. Numerical methods, algorithms and general techniques galore came into being and found application covering a broad swath of the physics and engineering World. Gradually, over time, we started to see computation spoken of as a new complementary practice in science that might stand shoulder to shoulder with theory and experiment. These views are a bit on the grandiose side of things where a more balanced perspective might rationally note that numerical methods allow complex nonlinear models to be solved where classical analytical approaches are quite limited. At this point it’s wise to confront the issue that might be creeping into your thinking: our theory is mostly linear while the utility for computation is almost all nonlinear. We have a massive gap between theory and utility with virtually no emphasis or focus or effort to close it.

This is so super important that I’ve written about it before, doing the basic Von Neumann methodology using Mathematica, https://williamjrider.wordpress.com/2014/07/15/conducting-von-neumann-stability-analysis/ & https://williamjrider.wordpress.com/2014/07/21/von-neumann-analysis-of-finite-difference-methods-for-first-order-hyperbolic-equations/, in the guise of thoughts about robustness, https://williamjrider.wordpress.com/2014/12/03/robustness-is-stability-stability-is-robustness-almost/, and practical considerations for hyperbolic PDEs, https://williamjrider.wordpress.com/2014/01/11/practical-nonlinear-stability-considerations/. Running headlong through this arc of thought are lessons learned from hyperbolic PDEs. Hyperbolic PDEs have always been at the leading edge of computation because they are important to applications and difficult, which has attracted a lot of real unambiguous genius. I’ve mentioned the cadre of geniuses who blazed the trails 60 to 70 years ago (Von Neumann, Lax, Richtmyer, https://williamjrider.wordpress.com/2014/05/30/lessons-from-the-history-of-cfd-computational-fluid-dynamics/, https://williamjrider.wordpress.com/2015/06/25/peter-laxs-philosophy-about-mathematics/). We are in dire need of new geniuses to slay the nonlinear dragons that stand in the way of progress. Unfortunately there is little or no appetite or desire for progress, and the general environment stands squarely in opposition. The status quo is viewed as all we need, https://williamjrider.wordpress.com/2015/07/10/cfd-codes-should-improve-but-wont-why/, and progress in improving basic capabilities and functionality has disappeared except for utilizing ever more complex and complicated hardware (with ever more vanishing practical returns).
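
As a minimal sketch of what the basic Von Neumann calculation looks like (in Python rather than Mathematica; the scheme and the test values are my own illustrative choices), one substitutes a Fourier mode into the update and checks that the amplification factor stays at or below one for every wavenumber:

import numpy as np

def amplification_factor(nu, theta):
    # symbol of first-order upwind for u_t + a u_x = 0 with a > 0:
    # u_j^{n+1} = u_j^n - nu (u_j^n - u_{j-1}^n), nu = a dt / dx
    return 1.0 - nu * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, np.pi, 400)
for nu in (0.5, 1.0, 1.5):
    g = np.abs(amplification_factor(nu, theta))
    verdict = "stable" if g.max() <= 1.0 + 1.0e-12 else "unstable"
    print(f"nu = {nu}: max |g| = {g.max():.3f} ({verdict})")
# demanding |g| <= 1 for every theta recovers the CFL condition nu <= 1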

Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.

― Nassim Nicholas Taleb

I’ve been thinking about concepts of nonlinear stability for a while, wondering if we can move past the simple and time-honored concepts developed so long ago. Recently I’ve taken a look at Taleb’s “anti-fragile” concept and realized that it might have some traction in this arena. In a sense the nonlinear stability concepts developed heretofore are akin to anti-fragility, where the methodology is developed for and works best in the worst-case scenario. In the case of nonlinear stability for hyperbolic PDEs the worst-case scenario is a linear discontinuity where the solution has a jump and the linear solution is utterly unforgiving of any errors. In this crucible, all the bad things that can happen to a solution arise: either overly diffusive low-accuracy solutions, or oscillations with high accuracy producing demonstrably unphysical results.

No structure, even an artificial one, enjoys the process of entropy. It is the ultimate fate of everything, and everything resists it.

― Philip K. Dick

When the discontinuity is associated with a real physical solution these twin and opposing maladies are both unacceptable. The diffusion leads to a complete waste of computational effort that dramatically inhibits any practical utility of numerical methods. For more complex systems of equations where turbulent chaotic solutions would naturally arise, the diffusive methods drive all solutions to be laminar and boring (not matching physical reality in essential aspects!). On the plus side of the ledger, the diffusive solution is epically robust and reliable, a monument to stability. On the other hand, high-order methods based on the premise that the solution is smooth and differentiable (i.e., nice and utterly ideal) are the epitome of fragility. The oscillations can easily put the solution into unphysical states that render the solution physically absurd.

Difficulty is what wakes up the genius

― Nassim Nicholas Taleb

Now we get to the absolute genius of nonlinear stability that arose from this challenge. Rather than forcing us to accept one or the other, we introduce a concept by which we can have the best of both, using whatever discretization is most appropriate for the local circumstances. Thus we have a solution adaptive method that chooses the right approximation for the solution locally. Therefore a different method may be used for every place in space and time. The key concept is the rejection of the use of a linear discretization where the same method is applied everywhere in the solution, which caused the entire problem elucidated above. Instead we introduce a mechanism to analyze the solution and introduce an approximation appropriate for the local structure of the solution.

The desired outcome is to use the high-order solution as much as possible, but without inducing the dangerous oscillations. The key is to build upon the foundation of the very stable, but low accuracy, dissipative method. The theory that can be utilized makes the dissipative structure of the solution a nonlinear relationship. This produces a test of the local structure of the solution, which tells us when it is safe to be high-order, and when the solution is so discontinuous that the low-order solution must be used. The result is a solution that is high-order as much as possible, and inherits the stability of the low-order solution, gaining purchase on its essential properties (asymptotic dissipation and entropy principles). These methods are so stable and powerful that one might utilize a completely unstable method as one of the options with very little negative consequence. This class of methods revolutionized computational fluid dynamics, and allowed relatively confident use of these methods to solve practical problems.
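
Here is a minimal sketch of the flavor of such a nonlinear switch (Python; the minmod choice and the test data are my own illustrative assumptions, not any specific scheme endorsed above): the reconstruction keeps a high-order slope where the data is smooth and collapses to the piecewise-constant, very dissipative representation at a jump or extremum.

import numpy as np

def minmod(a, b):
    # the smaller-magnitude argument when the signs agree, zero otherwise
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    # candidate high-order slopes from the left and right differences
    dl = u - np.roll(u, 1)
    dr = np.roll(u, -1) - u
    # nonlinear test of local structure: smooth data keeps a slope (high
    # order), a jump or extremum drives the slope to zero (the robust,
    # low-order, piecewise-constant method)
    return minmod(dl, dr)

u = np.array([0.0, 0.0, 0.1, 0.4, 1.0, 1.0, 1.0, 0.0, 0.0])
print(limited_slopes(u))   # slopes survive on the ramp, vanish at the jump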

Instead of building on the lessons learned with these methods, we seem to have entered an era governed by the belief that no further progress on this front is needed. We have lost sight of the benefits of looking to produce better methods. Better methods like those that are nonlinearly stable open new vistas of simulation that presently are closed to systematic and confident exploration. The methods described above did just this and produced the current (over) confidence in CFD codes.

A good question is why haven’t these sorts of ideas spread to a wider swath of the numerical world? The core answer is “good enough” thinking, which is far too pervasive today. Part of the issue is that the immediacy of need for hyperbolic PDEs isn’t there in other areas. Take time integration methods, where the consensus view is that ODE integration is good enough, thank you very much, and we don’t need effort. To be honest, it’s the same idea as with CFD codes, which are deemed good enough and not in need of effort. Other areas of simulation like parabolic and elliptic PDEs might also use such methods, but again the need is far less obvious than for hyperbolic PDEs. The truth is that we have made rather stunning progress in both areas and the breakthroughs have put forth the illusion that the methods today are good enough. We need to recognize that this is awesome for those who developed the status quo, but a very bad thing if there are other breakthroughs ripe for the taking. In my view we are at such a point and missing the opportunity to make the “good enough” into “great” or even “awesome”. Nonlinear stability is deeply associated with adaptivity and ultimately more optimal and appropriate approximations for the problem at hand.

If you’re any good at all, you know you can be better.
― Lindsey Buckingham

So what might a more general principle look like as applied to ODE integration? Let’s explore this along with some ideas of how to extend the analysis to support such paths. The acknowledged bullet-proof method is backwards Euler, u^{n+1} = u^n + h f(u^{n+1}), which is A-stable and the ODE gold standard of robustness. It is also first-order accurate. One might like to use something higher order with equal reliability, but alas this is exactly the issue we have with hyperbolic PDEs. What might a nonlinear approximation look like?

Let’s assume we will stick with the BDF (backwards differentiation formula) methods, and we have an ODE that should produce a positive (or some sort of sign- or property-preserving) solution. We will stick with positivity for simplicity’s sake. The fact is that perfectly linearly stable methods may well produce solutions that lose positivity. Staying with simplicity of exposition, the second-order BDF method is \frac{3}{2} u^{n+1} = 2 u^n - \frac{1}{2} u^{n-1} + h f(u^{n+1}). This can be usefully rearranged to \frac{3}{2} u^{n+1} - h f(u^{n+1}) = 2 u^n -\frac{1}{2} u^{n-1}. If the right hand side of this expression is positive, 2 u^n -\frac{1}{2} u^{n-1} > 0, and the eigenvalue of \partial f/\partial u at u^{n+1} is negative, we have faith that u^{n+1} > 0. If the right hand side is negative, it would be wise to switch to the backwards Euler scheme for this time step. We could easily envision taking this approach to higher and higher order.
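
Here is a minimal sketch of that switching logic (Python) on the linear test problem u' = \lambda u with \lambda < 0, where the implicit solves are available in closed form; for a general f one would use a Newton iteration instead. The step size, \lambda and the starting history are my own assumptions, chosen so that both branches fire.

def backward_euler(u_n, lam, h):
    # closed-form backward Euler solve of u' = lam*u (first order, A-stable)
    return u_n / (1.0 - h * lam)

def bdf2_step(u_n, u_nm1, lam, h):
    # second-order BDF2 step with the positivity test described above:
    # if 2 u^n - (1/2) u^{n-1} > 0 (and lam < 0) the result is provably
    # positive; otherwise retreat to the bullet-proof backward Euler step
    rhs = 2.0 * u_n - 0.5 * u_nm1
    if rhs > 0.0 and lam < 0.0:
        return rhs / (1.5 - h * lam), "BDF2"
    return backward_euler(u_n, lam, h), "backward Euler"

# assumed test: decay with a step size and history that stress BDF2
lam, h = -2.0, 0.5
u_nm1, u_n = 1.0, 0.1   # assumed history containing a sharp initial transient
for n in range(6):
    u_np1, method = bdf2_step(u_n, u_nm1, lam, h)
    print(f"step {n}: {method:14s} u = {u_np1:.5f}")   # stays positive
    u_nm1, u_n = u_n, u_np1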

Further extensions of nonlinear stability would be useful for parabolic PDEs. Generally parabolic equations are fantastically forgiving, so doing anything more complicated is not prized unless it produces better accuracy at the same time. Accuracy is eminently achievable because parabolic equations generate smooth solutions. Nonetheless these accurate solutions can still produce unphysical effects that violate other principles. Positivity is rarely threatened, although this would be a reasonable property to demand. It is more likely that the solutions will violate some sort of entropy inequality in a mild manner. Instead of producing something demonstrably unphysical, the solution would simply not have enough entropy generated to be physical. As such we can see solutions approaching the right solution, but in a sense from the wrong direction, which threatens to produce non-physically admissible solutions. One potential way to think about this might be in an application to heat conduction. One can examine whether or not the computed flow of heat matches the proper direction of heat flow locally, and if a high-order approximation does not, either choose another high-order approximation that does, or limit to a lower order method with unambiguous satisfaction of the proper direction. The intrinsically unfatal impact of these flaws means they are not really addressed.
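
A minimal sketch of that heat-conduction check (Python; the fourth-order face gradient, the grid and the kinked temperature profile are my own illustrative assumptions): compute an accurate face flux, compare its sign against the simple two-point flux that always moves heat from hot to cold, and limit wherever the accurate flux points the wrong way.

import numpy as np

def face_fluxes(T, k, dx):
    # conductive flux F = -k dT/dx at the interior faces
    lo = -k * (T[2:-1] - T[1:-2]) / dx            # two-point: always hot to cold
    hi = -k * (27.0 * (T[2:-1] - T[1:-2])
               - (T[3:] - T[:-3])) / (24.0 * dx)  # fourth-order face gradient
    # keep the accurate flux only when it moves heat in the physically proper
    # direction (its sign agrees with the two-point flux); otherwise limit
    return np.where(lo * hi >= 0.0, hi, lo)

# assumed demo: a temperature profile with a kink that trips the wide stencil
dx, k = 0.1, 1.0
x = np.arange(0.0, 2.0, dx)
T = np.where(x < 1.0, 1.0, 0.0) + 0.05 * np.sin(8.0 * x)
print(face_fluxes(T, k, dx))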

Mediocrity will never do. You are capable of something better.
― Gordon B. Hinckley

Thinking about the marvelous magical median (https://williamjrider.wordpress.com/2016/06/07/the-marvelous-magical-median/) spurred these thoughts to mind. I’ve been taken with this function’s ability to produce accurate approximations from approximations that are lower order, and I’ve been pondering whether this property can extend to stability, either linear or nonlinear (the empirical evidence is strongly in favor). Could the same functionality be applied to other schemes or approximation methods? In the case of the median, if two of the three arguments possess a particular property and they bound the third, the result of the median will inherit that property. This is immensely powerful, and you can be off to the races by simply having two schemes that possess a given desirable property. Using this approach more broadly than it has been applied thus far would be an interesting avenue to explore.
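
A minimal sketch of the median and the bounding property just described (Python; the values and bounds are my own illustrative choices), using the classic identity median(a, b, c) = a + minmod(b - a, c - a):

def minmod(a, b):
    # the smaller-magnitude argument when the signs agree, zero otherwise
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def median(a, b, c):
    # three-argument median written with minmod
    return a + minmod(b - a, c - a)

# two "safe" arguments that possess a property (here: lying in [0, 1]) bound
# the result, so the median inherits the property even when the remaining
# argument is a wild high-order value that violates it
safe_lo, safe_hi, wild = 0.2, 0.9, 1.7
print(median(wild, safe_lo, safe_hi))   # 0.9, pulled back inside the bounds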

If you don't have time to do it right, when will you have the time to do it over?
― John Wooden

 

The Death of Peer Review

16 Saturday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 6 Comments

 

If you want to write a negative review, don’t tickle me gently with your aesthetic displeasure about my work. Unleash the goddamn Kraken.

― Scott Lynch

Sadly, this spirit is not what we see today either on the giving or receiving end of peer review, and we are all poorer for it.

As surely as the sun rises in the East, peer review, that treasured and vital process for the health of science, is dying or dead. In many cases we still try to conduct meaningful peer review, but increasingly it is simply an animated zombie form of peer review. The zombie peer review of today is a mere shadow of the living soul of science it once was. Its death is merely a manifestation of bigger, broader societal trends such as those unraveling the political processes, or transforming our economies. We have allowed the quality of the work being done to become an assumption that we do not actively interrogate through a critical process (e.g., peer review). Instead, if we examine the emphasis in how money is spent in science and engineering, everything but the quality of the technical work is focused on and subjected to demands. There is an inherent assumption that the quality of the technical work is excellent, and the focus remains organizational or institutional. With sufficient time this lack of emphasis is eroding the quality presumptions to the point where they no longer hold sway.

Being the best is rarely within our reach. Doing our best is always within our reach.

― Charles F. Glassman

In science, peer review takes many forms, each vital to the healthy functioning of productive work. Peer review forms an essential check and balance on the quality of work, a wellspring of ideas and a vital communication mechanism. Whether in the service of publishing cutting edge research, or providing quality checks for Laboratory research or engineering design, its primal function is the same; quality, defensibility and clarity are derived through its proper application. In each of its fashions peer review has an irreplaceable core of community wisdom, culture and self-policing. With its demise, each of these is at risk of dying too. Rebuilding everything we are tearing down is going to be expensive, time-consuming and painful.

Let’s get to the first conclusion of this thought process: peer review is healthiest in its classic form, the academic publishing review, and it is in crisis there. The scientific community widely acknowledges that the classic anonymous peer review is absolutely riddled with problems and abuses. The worst bit is that this is where it works the best. So at its best, peer review is terrible. The critiques are many and valid. For example there is widespread abuse of the process by the powerful and established. The system is driven by a corrupt academic system that feeds the overall dysfunction (i.e., publish or perish). Corruption and abuse by the journals themselves is deep and getting worse, never mind the exploding costs. Then we have issues about teaming conflicts of interest and deeply passive aggressive behavior veiled behind the anonymity. Despite all these problems peer review here tends to still largely work, albeit in a deeply suboptimal manner.

Another complaint is the time and effort that these reviews take, along with suggestions to make things better with modern technology. Online publishing and the ubiquity of the Internet are capable of radically reducing the time and effort (equals money) of publishing a paper. I will say that the time and effort issue for peer review is barking up the wrong tree. The problem with the time and effort is that peer review isn’t valued sufficiently. Doing peer review isn’t given much weight professionally whether you’re a professor or working in a private or government lab. Peer review won’t give you tenure, or pay raises or other benefits; it is simply a moral act as part of the community. This character as an unrewarded moral act gets to the issue at the heart of things. Moral acts and “doing the right thing” are not valued today, nor are there definable norms of behavior that drive things. It simply takes the form of an unregulated professional tax, pro bono work. The way to fix this is to change the system to value and reward good peer review (and by the same token punish bad in some way). This is a positive side of modern technology, which would be good to see, as the demise of peer review is driven to some extent by negative aspects of modernity, as I will discuss at the end of this essay.

Honest differences are often a healthy sign of progress.

― Mahatma Gandhi

Let’s step away from the ideal context of the classical academic peer review of a paper to an equally common practice, the peer review of organizations, programs and projects. This is a practice of equal or greater importance as it pertains to the execution of technical work across the World. We see it taking action in the form of a design review for software, engineered products and analyses used to inform decisions. In my experience peer review in these venues is in complete free-fall and collapsing under the weight of societal pressures that cannot support the proper execution of the necessary practices. My argument is that we are living within a profoundly low-trust world, and peer review relies upon implicit expectations of trust to be executed with any competence. This lack of trust is present on both ends of the peer review system. When the trust is low, honesty cannot be present and instead honesty will be punished.

First let’s talk about the source of critique, the reviewers. Reviewers have little trust and faith that their efforts will be taken seriously if they find problems, and if they do raise an issue it is just as likely that they, the messenger, will be punished instead. As a result reviewers rarely do a complete or good job of reviewing things, as they understand what the expected result is. Thus the review gets hollowed out from its foundation because the recipient of the review expects to get more than just a passing grade; they expect to get a giant pat on the back. If they don’t get their expected results, the reaction is often swift and punishing to those finding the problems. Often those looking over the shoulder are equally unaccepting of problems being found. Those overseeing work are highly political and worried about appearances or potential scandal. The reviewers know this too, and that a bad review won’t result in better work, it will just be trouble for those being reviewed. The end result is that the peer review is broken by the review itself being hollow, the reviewers being easy on the work because of the explicit expectations, and the implicit punishments and lack of follow-through for any problems that might be found.

Those who can create, will create. Those who cannot create, will compete.

― Michael F. Bruyn

If we look to those being reviewed, the system only gets worse. The people being reviewed only see downside to engaging in peer review, no upside at all. Increasingly any sort of spirit or implied expectation of technical quality has been left behind. Peer review is done to provide the veneer of quality regardless of its actual presence. As a consequence any result from peer review that doesn’t say the work is the best and executed perfectly is ripe to be ignored (or used to punish those not complying with the implied directive). Those being reviewed have no desire or intent to take any corrective action or address any issue that might be surfaced. As a result the peer review is simply window dressing and serves no purpose other than marketing. The reasons for the evolution to this dysfunctional state are many and clear. The key to the problem is the lack of ability to politically confront problems. A problem is often taken as a death sentence rather than a call to action. Since no issues or actual challenges will be confronted, much less solved, the only course of action is to ignore and bury them.

We then get to the level above those being reviewed, and closer to the source of the problem, the political system. Our political systems are poisonous to everything and everyone. We do not have a political system, perhaps anywhere in the World, that is functioning to govern. The result is a collective inability to deal with issues, problems and challenges at a massive scale. We see nothing but stagnation and blockage. We have a complete lack of will to deal with anything that is imperfect. Politics is always present and important because science and engineering are still intrinsically human activities, and humans need politics. The problem is that truth and reality must play some normative role in decisions. The rejection of effective peer review is a rejection of reality as being germane and important in decisions. This rejection is ultimately unstable and unsustainable. The only question is when and how reality will impose itself, but it will happen, and in all likelihood through some sort of calamity.

To get to a better state vis-à-vis peer review, trust and honesty need to become a priority. This is a piece of a broader rubric for progress toward a system that values work that is high in quality. We are not talking about excellence as superficially declared by the current branding exercise peer review has become, but the actual achievement of unambiguous excellence. The combination of honesty, trust and the search for excellence and achievement is needed to begin to fix our system. Much of the basic structure of our modern society is arrayed against this sort of change. We need to recognize the stakes in this struggle and prepare ourselves for difficult times. Producing a system that supports something that looks like peer review will be a monumental struggle. We have become accustomed to a system that feeds on false excellence and achievement and celebrates scandal as an opiate for the masses.

One only needs to look to the nature of the current public discourse and political climate. We are rapidly moving into a state where the discourse is utterly absent of any substance, and the poisonous climate is teetering over into the destructive. Reality is beginning to fight back against the flaws in the system. Socially we are seeing increased fear, violence and outright conflict. The problems with peer review pale in comparison to the tide rolling in, but reflect many of the same issues. Peer review is an introverted view of our numerous ills, where the violence and damaging environment evident in our mass media is the extroverted side of the same coin. In this analysis peer review is simply another side effect of the massive issues confronting our entire world, projected into the environment of science and engineering. Fixing all these issues is in the best interests of humanity, but it’s going to be hard and unpleasant. Because of the difficulty of fixing any of this, we will avoid it until the problems become unbearable for a large enough segment of humanity. Right now it is easier and simpler to just accept an intrinsically uncritical perspective and simply lie to ourselves about how good everything is and how excellent all of us are.

If one doesn’t have the stomach to examine things through such a social lens, one might consider the impact of money on the system. In many ways the critical review of research can now be measured almost entirely in monetary terms. This is especially true in the organizational or laboratory environment where most people managing view money and its continued flow as the only form of review they care about. In such a system a critical peer review becomes a threat instead of a source of renewal. Gradually, over time, the drive for technical excellence is replaced by the drive for financial stability. We have allowed financial stability to become disconnected from technical achievement, and in doing so killed peer review. When technical excellence and achievement become immaterial to any measure of success, and only money matters, peer review is something to be ignored, avoided and managed because no perceived good can come from it.

Self-consciousness kills communication.

― Rick Steves

Worse than having no perceived good associated with it, peer review, if done properly, becomes evidence of problems. The problems exposed in peer review represent calls to action that today’s systems cannot handle because they are an affront to planning, schedules, milestones and budgetary allocations. Problems also expose flaws in the fundamental assumption of today’s world that the technical work is high quality and does not need active, focused (appropriate) management to succeed. As a result any problems induce a “shoot the messenger” mentality that acts to destroy the critique and send a clear message that peer review should not be done honestly or seriously. The result has been a continual erosion of the technical quality so often assumed to be present a priori. This is a vicious cycle where technical problems remain unexposed, or hidden by a lack of vigorous, effective peer review. The problems then fester and grow because problems like these do not cure themselves, and the resistance to peer review or any form of critique only becomes further reinforced. This doesn’t end well, and the end results are perfectly predictable. Nothing stems the decay, and the encroachment of decay ultimately ends the ability to conduct a peer review at all. Moreover, the culture that is arising in science acts as a further inhibition to effective review by removing the attitudes necessary for success from the basic repertoire of behaviors.

Some men are born mediocre, some men achieve mediocrity, and some men have mediocrity thrust upon them.

― Joseph Heller


10 Big Things For the Future of (Computational) Science

10 Sunday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

 

The future depends on what you do today.

― Mahatma Gandhi

The future is already here – it’s just not evenly distributed.

― William Gibson

When did the future switch from being a promise to being a threat?

― Chuck Palahniuk

It has been a long time since I wrote a list post, and it seemed a good time to do one. They’re always really popular online, and it’s a good way to survey something. Looking forward into the future is always a nice thing when you need to be cheered up. There are lots of important things to do, and lots of massive opportunities. Maybe if we can muster our courage and vision we can solve some important problems and make a better world. I will cover science in general, and hedge the conversation toward computational science, cause that’s what I do and know the most about.

Mediocrity will never do. You are capable of something better.

― Gordon B. Hinckley

Here is the list:

  1. Fixing the research environment and encouraging risk taking, innovation and tolerance for failure
  2. CRISPR
  3. Additive manufacturing
  4. Exascale computing
  5. Nontraditional computing paradigms
  6. Big data
  7. Reproducibility of results
  8. Algorithmic breakthroughs
  9. The upcoming robotic revolution (driverless cars)
  10. Cyber-security and cyber-privacy
  1. Fixing the research environment and encouraging risk taking, innovation and tolerance for failure. I put this first because it impacts everything else so deeply. There are many wonderful things that the future holds for all of us, but the overall research environment is holding us back from the future we could be having. The environment for conducting good, innovative, game changing research is terrible, and needs serious attention. We live in a time where all risk is shunned and any failure is punished. As a result innovation is crippled before it has a chance to breathe. The truth is that it is a symptom of a host of larger societal issues revolving around our collective governance and capacity for change and progress. Somehow we have gotten the idea that research can be managed like a construction project, and such management is a mark of quality. Science absolutely needs great management, but the current brand of scheduled breakthroughs, milestones and micromanagement is choking the science away. We have lost the capacity to recognize that current management is only good for leeching money out of the economy for personal enrichment, and terrible for the organizations being managed whether it’s a business, laboratory or university. These current fads are oozing their way into every crevice of research including higher education where so much research happens. The result is a headlong march toward mediocrity and the destruction of the most fertile sources of innovation in the society. We are living off the basic research results of 30-50 years past, and creating an environment that will assure a less prosperous future. This plague is the biggest problem to solve but is truly reflective of a broader cultural milieu and may simply need to run its disastrous course. Over the weekend I read about the difference between first- and second-level thinking. First-level thinking looks for the obvious and superficial as a way of examining problems, issues and potential solutions. It is dealing with things in an obvious and completely intellectually unengaged manner. Let’s just say that science today is governed by first-level thinking, and it’s a very bad thing. This is contrasted with second-level thinking, which teases problems apart, analyzes them, and looks beyond the obvious and superficial. It is the source of innovation, serendipity and inspiration. Second-level thinking is the realm of expertise and depth of thought, and we all should know that in today’s World the expert is shunned and reviled as being dangerous. We will all suffer the ill-effects of devaluing expert judgment and thought as applied to our very real problems.
  2. CRISPR What can I really say here, this technology is huge, enormous and an absolute game changer. When I learned about it the first time it literally stopped me in my tracks and I said “holy shit this could change everything!” If you don’t know, CRISPR is the first easy to use and flexibly programmable method for manipulating the genetic code of living beings as well as short-circuiting the rules of natural selection. Just like nuclear energy, CRISPR could be a massive force for good or evil. It has the potential to change the rules of how we deal with a host of diseases and plagues upon mankind. It also has the capacity to produce weapons of mass destruction and unleash carnage upon the world. We must use far more wisdom than we typically show in wielding its power. How we do this will shape the coming decades in ways we can scarcely imagine. It also emerges in the current era where great ideas are allowed to wither and die. It seems reasonable to say that we don’t know how to wield the very discoveries we make, and CRISPR seems like the epitome of this.
  3. Additive manufacturing In engineering circles this is a massive opportunity for innovation and a challenge to a host of existing practices and knowledge. It will both impact and draw upon other issues from this list in how it plays out. It is often associated with the term 3-D printing, where we can produce full three dimensional objects in a process free of classic manufacturing processes like molds and production lines. The promise is to break free of the tyranny of traditional manufacturing approaches, limitations and design, for small lots of designer, custom parts. Making the entire process work well enough for customers to rely upon it and have faith in the manufacturing quality and process is a huge aspect of the challenges. This is especially true for high performance parts where the requirements on the quality are very high. The other end of the problem is the opportunity to break free of traditional issues in design and open up the possibility of truly innovative approaches to optimality. Additional problems are associated with the quality and character of the material used in the design since its use in the creation of the part is substantially different than traditionally manufactured parts’ materials. Many of these challenges will be partially attacked using modeling & simulation drawing upon cutting edge computing platforms.
  4. Exascale computing The push for more powerful computers for conducting societally important work is as misguided as it is a big deal. I’ve written so much recently about this I find little need to say more. It is an epitome of the first item on my list, as a wrongly managed, risk intolerant solution to a real issue, which will end up doing more harm than good in the long run. It is truly the victory of first-level thinking over the deeper and more powerful second-level thinking we need. Perhaps I’m being a bit haughty in my contention that what I’ve laid out in my blog constitutes second-level thinking about high performance computing, but I stand by it, and by the summary that the thinking governing our computing efforts today constitutes hopelessly superficial first-level thought. Really solving problems and winning at scientific computing requires a sea change toward applying the fruits of in-depth thinking about how to succeed at using computing as a means for societal good including the conduct of science.
  5. Nontraditional computing paradigms We stand at the brink of a deep change in computing one way or another. We are seeing the end of Moore’s law (actually it’s done, at all scales, already), which has powered computing into a central role societally whether it is business or science. The only way the power of computing will continue to grow is through a systematic change in the principles by which computers are built. There are two potential routes being explored, both being rather questionable in their capability to deliver the sort of power necessary to succeed. The most commonly discussed route is quantum computing, which promises incredible (almost limitless) power for a very limited set of applications. It also features rather difficult, if not impossible, to manage hardware among the problems limiting its transition to reality. The second approach is neuromorphic, or brain-inspired, computing, which may be more tangible and possible than quantum, but a longer shot at being a truly game changing technology. The jury is out on both technology paths, and we may just have to live with the end of Moore’s law for a long time.
  6. Big data The Internet brought computing to the masses, and mobile computing brought computing to everyone in every aspect of our lives. Along with this ubiquity of computing came a wealth of data on virtually every aspect of everyone’s lives. This data is enormous and wildly varied in its structure, teeming with possibility for uses of all stripes, good and bad. Big data is the route toward wealth beyond measure, and the embodiment of the Orwellian Big Brother we should all fear. Taming big data is the combination of computing, algorithms, statistics and business all rolled into one. It is one of the places where scientific computing is actually alive with holistic energy driving innovation all the way from models of data (reality), to algorithms for taming the data, to hardware to handle the load. New sensors and measurement devices are only adding to the wealth as the Internet of things moves forward. In science, medicine and engineering new instruments and sensors are flooding the World with huge data sets that must be navigated, understood and utilized. The potential for discovery and progress is immense as is the challenge of grappling with the magnitude of the problem.
  7. Reproducibility of results The trust of science and expertise is seemingly at an all-time low. Part of this is caused by the information (and misinformation) deluge we live in. It is feeding on and fed by the lack of trust in expertise within society. As such there has been some substantial focus on being able to reproduce the results of research. Some fields of study are having veritable crises driven by the failures of studies to be reproducible. In other cases the stakes are high enough that the public is genuinely worried about the issue. Such a common situation is a drug trial, which has massive stakes for anyone who might be treated with or need to be treated by a drug. Other areas of science such as computation have fallen under the same suspicion, but may have the capacity to provide greater substance and faith in the reproducibility of their work. Nonetheless, this is literally a devil-is-in-the-details area and getting all the details right that contribute to a research finding is really hard. The less oft spoken subtext to this discussion is the general societal lack of faith in science that is driving this issue. A more troubling thought regarding how replicable research actually is comes from considering how uncommon replication actually is. It is uncommon to see actual replication, and difficult to fund or prioritize such work. Seeing how commonly such replication fails under these circumstances only heightens the sense of the magnitude of this problem.
  8. Algorithmic breakthroughs One way of accelerating the progress in computers and the work they do is to focus on innovations in algorithms. Instead of relying on computational hardware to increase our throughput we rely on innovation in how we use those computers or implement our methods on those computers. Over time improvements in methods and algorithms have outpaced improvements in hardware. Recently this bit of wisdom has been lost to the sort of first-level thinking so common today. In big data we see needs for algorithm development overcoming the small-minded focus people rely upon. In scientific computing the benefits and potential are there for breakthroughs, but the vision and will to put effort into this is lacking. So I’m going to hedge toward the optimistic and hope that we see through the errors in our thinking and put faith in algorithms to unleash their power on our problems in the very near future!
  2. The upcoming robotic revolution (driverless cars) The fact is that robots are among us already, but their scope and presence are going to grow. Part of the key issue with robots is the lack of brainpower to really replace human decision-making in tasks. Computing power, and the ubiquity of the Internet in all its coupled glory, is making problems like this tractable. It would seem that driverless robot cars are solving this problem in one huge area of human activity. Multiple huge entities are working this problem and by all accounts making enormous progress. The standard for the robot cars would seem to be very much higher than for humans, and the system is biased against this sort of risk. Nonetheless, it would seem we are very close to seeing driverless cars on a road near you in the not-too-distant future. If we can see the use of robot cars on our roads, with all the attendant complexity, risks and issues associated with driving, it is only a matter of time before robots begin to take their place in many other activities.
  1. Cyber-security and cyber-privacy With the advent of computing at such an enormous societal scale, particularly with mobile computing penetrating every aspect of our lives, comes the twin security-privacy dilemma. On the one hand, we are potentially victimized by cyber-criminals as more and more commerce and finance takes place online, driving a demand for security. The government-police-military-intelligence apparatus also sees incredible security issues, and possible avenues of investigation, in the virtual records being created. At the same time the ability to have privacy or be anonymous is shrinking away. People have the desire not to have every detail of their lives exposed to the authorities (employers, neighbors, parents, children, spouses,…), meaning that cyber-privacy will become a big issue too. This will lead to immense technical-legal-social problems and conflict over how to balance the needs-demands-desires for security and privacy. How we deal with these issues will shape our society in huge ways over the coming years.

The fantastic advances in the field of electronic communication constitute a greater danger to the privacy of the individual.

― Earl Warren

How to Win at Supercomputing

04 Monday Jul 2016

Posted by Bill Rider in Uncategorized

≈ 3 Comments

The best dividends on the labor invested have invariably come from seeking more knowledge rather than more power.

— Wilbur Wright

Here is a hint; it’s not how we are approaching it today. The approach today is ultimately doomed to fail and could take a generation of progress with it. We need to emphasize the true differentiating factors and embrace the actual sources of progress. Computer hardware is certainly a part of the success, but by no means the dominant factor in true progress. As a result we are starving key aspects of scientific computing of the intellectual lifeblood needed for advancing the state of the art. Even if we “win” following our current trajectory, the end result will be a loss because of the opportunity cost incurred in pursuing the path we are on today. Supercomputing is a holistic activity embedded in a broader scientific enterprise. As such it needs to fully embrace the scientific method and structure its approach more effectively.

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

― George Bernard Shaw

The news of the Chinese success in solidifying their lead in supercomputer performance “shocked” the high-performance-computing World a couple of weeks ago. To make things even more troubling to the United States, the Chinese achievement was accomplished with home-grown hardware (a real testament to the USA’s export control law!). It comes as a blow to the American efforts to retake the lead in computing power. It wouldn’t matter if the USA, or anyone else for that matter, were doing things differently. Of course the subtext of the entire discussion around supercomputer speed is the supposition that raw computer power measures the broader capability in computing, which defines an important body of expertise for National economic and military security. A large part of winning in supercomputing is recognizing the degree to which this supposition is patently false. As falsehoods go, this one is not ironclad; it is a matter of debate over lots of subtle details that I elaborated upon last week. The truth depends on how idiotic the discussion needs to be and one’s tolerance for subtle technical arguments. In today’s world arguments can only be simple, verging on moronic, and technical discussions are suspect as a matter of course.

Instead of concentrating just on finding good answers to questions, it’s more important to learn how to find good questions!

― Donald E. Knuth

If you read that post you might guess the answer to how we might win the quest for supercomputing supremacy. In a sense we need to do a number of things better than today. First, we need to stop measuring computer power with meaningless and misleading benchmarks. These do nothing but damage the entire field by markedly skewing the overall articulation of both the successes and the challenges of building useful computers. Secondly, we need to invest our resources in the most effective areas for success: modeling, methods and algorithms, all of which are far greater sources of innovation and true performance for the accomplishment of modeling & simulation. The last thing is to change the focus of supercomputing to modeling & simulation, because it is where the societal value of computing is delivered. If these three things were effectively executed upon, victory would be assured to whoever made the choices. The option of taking more effective action is there for the taking.

Discovery consists of looking at the same thing as everyone else and thinking something different.

― Albert Szent-Györgyi

The first place to look for effort that might dramatically tilt the fortunes of supercomputing is modeling. Our models of the World are all wrong to some degree; they are all based on various limiting assumptions, and may be improved. None of these limitations can be ameliorated by supercomputing power, accuracy of discretization, or algorithmic efficiency. Modeling limitations are utterly impervious to anything but modeling improvement. The subtext to the entire discussion of supercomputing power is the supposition that our models today are completely adequate and only in need of faster computers to fully explain reality. This is an utterly specious point of view that basically offends the foundational principles of science itself. Modeling is the key to understanding and irreplaceable in its power and scope to transform our capability.

And a step backward, after making a wrong turn, is a step in the right direction.

― Kurt Vonnegut

We might take a single example to illustrate the issues associated with modeling: gradient diffusion closures for turbulence. The diffusive closure of the fluid equations for the effects of turbulence is ubiquitous, useful and a dead end without evolution. It is truly a marvel of science going back to Prandtl’s mixing length theory. Virtually all the modeling of fluids done with supercomputing relies on its fundamental assumptions and intrinsic limitations. The only place its reach does not extend is direct numerical simulation, where the flows are computed without the aid of modeling, i.e., a priori (which for the purposes here I will take as a given, although it actually needs a lot of conversation itself). All of this said, the ability of direct numerical simulation to answer our scientific and technical questions is limited, because turbulence is such a vigorous and difficult multiscale problem that even an exascale computer cannot slay it.

So let’s return to what we need to do to advance the serious business of turbulence modeling. In a broad sense one of the biggest limitations of diffusion as a subgrid closure is its inability to describe behavior that is not diffusive. While turbulence is a decisively dissipative phenomenon, it is not always and only dissipative locally. The diffusive subgrid closure makes this assumption and hence carries deep limitations. In key areas of a flow field the proper subgrid model is actually non-dissipative or even anti-dissipative. The problem is that diffusion is a very stable and simple way to model phenomena, which in many ways exaggerates its success. We need to develop non-diffusive models that extend the capacity to model flows not fully or well described by diffusive closure approaches.
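To make the built-in assumption concrete, here is the textbook form of the gradient-diffusion closure with a Prandtl mixing-length eddy viscosity, written in standard notation (this is the generic textbook statement, not a quote of any particular model or code):

\[
-\overline{u'v'} \;=\; \nu_t \,\frac{\partial \bar{u}}{\partial y},
\qquad
\nu_t \;=\; \ell_m^2 \left| \frac{\partial \bar{u}}{\partial y} \right| \;\ge\; 0 .
\]

Because the eddy viscosity \(\nu_t\) is constrained to be non-negative, the modeled subgrid stress can only drain energy from the resolved flow; locally anti-dissipative behavior (backscatter) is simply outside the vocabulary of this closure.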

Once a model is conceived of in theory we need to solve it. If the improved model cannot yield solutions, its utility is limited. Methods for computing solutions to models beyond the capability of analytical tools were the transformative aspect of modeling & simulation. Before this, many models were only solvable in very limited cases through applying a number of even more limiting assumptions and simplifications. Beyond just solving the model, we need to solve it correctly, accurately and efficiently. This is where methods come in. Some models are nigh on impossible to solve, or entail connections and terms that evade tractability. Thus coming up with a method to solve the model is a necessary element in the success of computing. In the early years of scientific computing many methods came into use that tamed models into ease of use. Today’s work on methods has slowed to a crawl, and in a sense our methods development research is a victim of its own success.

Arthur C. Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic.

An example of this success is the nonlinear stabilization methods I’ve written about recently. These methods are the lifeblood of the success computational fluid dynamics (CFD) codes have had. Without their invention the current turnkey utility of CFD codes would be unthinkable. Before their development CFD codes were far more art and far less science than today. Unfortunately, we have lost much of the appreciation for the power and scope of these methods. We have little understanding of what came before them and the full breadth of their magical powers. Before these methods came to the fore one faced the daunting task of choosing between an overly diffusive stable method (i.e., donor cell-upwind differencing) and a more accurate, but unphysically oscillatory method. These methods allowed one to have both, and to adaptively use whatever was necessary under the locally determined circumstances, but they can do much more. While their power to allow efficient solutions was absolutely immense, these methods actually opened doors to physically reasonable solutions to a host of problems. One could have both accuracy and physical admissibility in the same calculation.
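For readers who have not met these methods, here is a minimal sketch of the idea in one dimension: a limited (minmod) slope nonlinearly blends the diffusive donor-cell flux with a second-order reconstruction, keeping accuracy in smooth regions and falling back toward upwinding at jumps. The names, the linear-advection test and the limiter choice are illustrative textbook pieces, not a transcription of any particular code.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: take the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, c):
    """One step of 1D linear advection (positive speed, CFL number c <= 1)
    on a periodic grid.  With the slope forced to zero this is pure
    donor-cell upwinding (stable but diffusive); with an unlimited slope
    it is second order but oscillatory near discontinuities.  The limiter
    adaptively gives the best of both."""
    up = np.roll(u, -1)                      # u[i+1]
    um = np.roll(u, 1)                       # u[i-1]
    slope = minmod(u - um, up - u)           # limited slope in each cell
    face = u + 0.5 * (1.0 - c) * slope       # value at the cell's right face
    flux = c * face                          # upwind flux across that face
    return u - (flux - np.roll(flux, 1))     # conservative update

# Example: a square pulse advected once around the periodic domain stays
# sharp and free of overshoots.
n, c = 200, 0.5
u = np.zeros(n)
u[40:80] = 1.0
for _ in range(int(n / c)):
    u = advect_step(u, c)
```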

This is where the tale turns back toward modeling. These methods actually provide some modeling capability for “free”. As such, the modeling under the simplest circumstances is completely equivalent to Prandtl’s mixing-length approach, but with the added benefit of computability. More modern stabilized differencing actually provides modeling that goes beyond the simple diffusive closure. Because of the robust stability properties of the method one can compute solutions with backscatter stably. This stability is granted by the numerical approach, but provides the ability to solve the non-dissipative model with the asymptotic stability needed for physically admissible modeling. If one had devised a model with the right physical effect of local backscatter, these methods provide the stable implementation. In this way these methods are magical and make the seemingly impossible, possible.

This naturally takes us to the next activity in the chain of activities that add value to computing: algorithm development. By this I mean the development of new algorithms with greater efficiency, as distinct from the focus of algorithm work today, which is simply implementing old algorithms on the new computers and comes down to dealing with the increasingly enormous amount of parallelism demanded. The sad thing is that no implementation can overcome the power of algorithmic scaling, and this power is something we are systematically denying ourselves. Indeed we have lost massive true gains in computational performance because of the failure to invest in this area, and the inability to recognize the opportunity cost of a focus on implementing the old.

A useful place to look in examining the sort of gains coming from algorithms is numerical linear algebra. The state of the art here comes from multigrid, and it came to the fore over 30 years ago. Since then we have had no breakthroughs, whereas before a genuine breakthrough occurred about every decade. It is no coincidence that 30 years ago is when parallel computing began its eventual takeover of high performance computing. Making multigrid or virtually any other “real” algorithm work at a massive parallel scale is very difficult, incredibly challenging work. This difficulty has swallowed up all the effort and energy in the system, effectively starving out the invention of new algorithms. What is the cost? We might understand the potential cost of these choices by looking back at what previous breakthroughs have gained.
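For context, the entire multigrid idea fits in a few dozen lines in serial. The sketch below is a bare-bones 1D V-cycle for -u'' = f with damped-Jacobi smoothing, full-weighting restriction and linear-interpolation prolongation; these are illustrative textbook choices, not a production solver. The difficulty described above is not in writing something like this, but in making it run efficiently across a massively parallel machine.

```python
import numpy as np

def residual(u, f, h):
    """Residual of -u'' = f on interior points (zero Dirichlet boundaries)."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, omega=2.0/3.0, sweeps=3):
    """A few damped-Jacobi sweeps to knock down high-frequency error."""
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = v
    return u

def v_cycle(u, f, h):
    """One recursive V-cycle: smooth, restrict the residual, correct, smooth."""
    u = smooth(u, f, h)
    if len(u) <= 3:                      # coarsest grid: one interior point, solve exactly
        u[1] = 0.5 * h**2 * f[1]
        return u
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)     # full-weighting restriction to the coarse grid
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)                 # prolong the coarse correction by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)

# Example: -u'' = pi^2 sin(pi x) on [0, 1], whose exact solution is sin(pi x).
n = 2**7 + 1
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
```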

We can look at the classical example of solving Poisson’s equation (\nabla^2 u = f) on the unit square or cube to instruct us on how incredibly massive the algorithmic gains might be. The crossover point in cost between a relaxation method (Gauss-Seidel, GS, or Jacobi) and an incomplete Cholesky conjugate gradient (ICCG) method is at approximately 100 unknowns. For a multigrid algorithm the crossover point occurs at around 1000 unknowns. Problems of 100 or 1000 unknowns can now be accomplished on something far less capable than a cell phone. For problems associated with supercomputers the differences in the cost of these algorithms are utterly breathtaking to behold.

Consider a relatively small problem today: solving Poisson’s equation on a unit cube with 1000 unknowns in each direction (10^9 unknowns). If we take the cost of multigrid as “one,” GS now takes ten million times more effort, and ICCG almost 1000 times the effort. Scale up the problem to something we might dream of doing on an exascale computer, a cube of 10,000 on a side with a trillion unknowns, and we easily see the tyranny of scaling and the opportunity of algorithmic breakthroughs we are denying ourselves. For this larger problem, GS now costs ten billion times the effort of multigrid, and ICCG 30,000 times the expense. Imagine the power of being able to solve something more efficiently than multigrid! Moreover, multigrid can withstand incredible levels of inefficiency in its implementation and still win compared to the older algorithms. The truth is that parallel implementation drives the constant in front of the scaling up to a much larger value than on a serial computer, so these gains are partly offset by the lousy hardware we have to work with.
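The scaling argument is easy to reproduce as a back-of-envelope calculation. The sketch below uses only the textbook asymptotic exponents for a 3D Poisson solve with n unknowns (Gauss-Seidel roughly O(n^{5/3}), CG/ICCG-type roughly O(n^{4/3}), multigrid O(n)) and ignores constants and stopping tolerances, so the exact ratios differ from the figures quoted above depending on those assumptions; the tyranny of the scaling is the point.

```python
# Relative work of older solvers versus multigrid for a 3D Poisson problem,
# using textbook asymptotic exponents only (no constants, no tolerances):
#   Gauss-Seidel ~ O(n^(5/3)),  CG/ICCG-type ~ O(n^(4/3)),  multigrid ~ O(n).
for n in (1e9, 1e12):
    gs_over_mg = n ** (5.0 / 3.0) / n    # = n^(2/3)
    cg_over_mg = n ** (4.0 / 3.0) / n    # = n^(1/3)
    print(f"n = {n:.0e}:  GS/MG ~ {gs_over_mg:.0e},  ICCG/MG ~ {cg_over_mg:.0e}")
```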

Here is the punch line to this discussion. Algorithmic power is massive almost to a degree that defies belief. Yet algorithmic power is vanishingly small compared to methods, which itself is dwarfed by modeling. Modeling connects the whole simulation endeavor to the scientific method and is irreplaceable. Methods make these models solvable and open the doors of capability. All of these activities are receiving little tangible priority or support in the current high performance computing push resulting in the loss of incredible opportunities for societal benefit. Moreover we have placed our faith in the false hope that mere computing power is transformative.

Never underestimate the power of thought; it is the greatest path to discovery.

― Idowu Koyenikan

Both models and methods transcend the sort of gains computing hardware produces, and hardware can never replace them. Algorithmic advances can be translated into the language of efficiency via scaling arguments, but provide gains that go far beyond hardware’s capacity for improvement. The problem is that all of these rely upon faith in humanity’s ability to innovate, think and produce things that had previously been beyond the imagination. This is an inherently risky endeavor that is prone to many failures or false hopes. This is something that today’s World seems to lack tolerance for, and as such the serendipity and marvel of discovery is sacrificed at the altar of fear.

We have to continually be jumping off cliffs and developing our wings on the way down.

― Kurt Vonnegut

The case for changing the focus of our current approach is airtight and completely defensible. Despite the facts, the science and the benefits of following rational thinking, there is precious little chance of seeing change. The global effort in supercomputing is utterly and completely devoted to the foolish hardware path. It wins by a combination of brutal simplicity and an eagerness to push money toward industry. So what we have is basically a cash-driven funeral pyre for Moore’s law. The risk-taking, innovation-driven approach necessary for success is seemingly beyond the capability of our society to execute today. The reasons why are hard to completely grasp; we have seemingly lost our nerve and our taste for subtlety. Much of the case for doing the right things, the things that lead to success, is bound to a change of mindset. Today the power, if not the value, of computing is measured in the superficial form of hardware. The reality is that the power is bound to our ability to model, simulate and ultimately understand or harness reality. Instead we blindly put our faith in computing hardware instead of the intellectual strength of humanity.

The discussion gets to a number of misconceptions and inconsistencies that pervade the field of supercomputing. The biggest issue is the disconnect between the needs of science and engineering and the success of supercomputing (i.e., what constitutes a win). Winning in supercomputing programs is tied to being able to put an (American) machine at the top of the list. Increasingly, success at having the top computer on the increasingly useless Top500 list is completely at odds with acquiring machines useful for conducting science. A great deal of the uselessness of the list is the benchmark used to define its rankings, LINPACK, which is less relevant to applications with every passing day. It has come to the point where it is hurting progress in a very real way.

The science and engineering needs are varied, all the way from QCD, MD and DNS to climate modeling and integrated weapons calculations. The pure science needs of QCD, MD and DNS are better met by the machines being built today, but even in this idealized circumstance the machines we buy to top the computing list are fairly suboptimal for these pure science applications. The degree of suboptimality for running our big integrated calculations has become absolutely massive over time, and the gap is only growing larger with each passing year. Like most things, inattention to this condition is only allowing it to become worse. The machines being designed for winning the supercomputing contest are actual monstrosities that are genuinely unusable for scientific computing. Worse yet, the execution of the exascale program is acting to make this problem more severe in every way, not better.

Compounding the damaging execution of the supercomputing program is the systematic hollowing out of the science and engineering content from our programs. We are systematically diminishing our efforts in experimentation, theory, modeling, and mathematics despite their greater importance and impact on the entire enterprise. The end result will be a lost generation of computational scientists who are left using computers completely ill-suited to the conduct of science. If National security is a concern, the damage we are doing is real and vast in scope.

We need supercomputing to be a fully complementary part of the scientific enterprise, used and relied upon only as appropriate, with limits rationally chosen based on evidence. Instead we have created supercomputing as a prop and a marketing stunt. There is a certain political correctness about how it contributes to our national security, and our increasingly compliant Labs offer no resistance to the misuse of taxpayer money. The mantra is “don’t rock the boat”; we are getting money to do this, and whether or not it’s sensible is immaterial. The current programs are ineffective, poorly executed, and do a poor job of providing the sorts of capability claimed. It is yet another example and piece of evidence of the culture of bullshit and pseudo-science that pervades our modern condition.

The biggest issue is the death of Moore’s law and our impending failure to produce the results promised. Rather than reform our programs to achieve real benefits for science and national security, we will see a catastrophic failure. This will be viewed through the usual lens of scandal. It is totally foreseeable and predictable. It would be advisable to fix this before disaster, but my guess is we don’t have the intellect, foresight, bravery or leadership to pull it off. The end is in sight and it won’t be pretty. Yet there is a different path, one that would be both glorious and successful. Does anyone have the ability to turn away from the disastrous path and consciously choose success?

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

Some Background reading on the Top500 list and benchmarks that define it:

https://en.wikipedia.org/wiki/TOP500

https://en.wikipedia.org/wiki/LINPACK_benchmarks

https://en.wikipedia.org/wiki/HPCG_benchmark

A sample of prior posts on topics related to this one:

https://williamjrider.wordpress.com/2016/06/27/we-have-already-lost-to-the-chinese-in-supercomputing-good-thing-it-doesnt-matter/

https://williamjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-modeling-simulation-is-what-is-important/

https://williamjrider.wordpress.com/2016/01/15/could-the-demise-of-moores-law-be-a-blessing-in-disguise/

https://williamjrider.wordpress.com/2016/01/01/are-we-really-modernizing-our-codes/

https://williamjrider.wordpress.com/2015/11/19/supercomputing-is-defined-by-big-money-chasing-small-ideas-draft/

https://williamjrider.wordpress.com/2015/10/30/preserve-the-code-base-is-an-awful-reason-for-anything/

https://williamjrider.wordpress.com/2015/10/16/whats-the-point-of-all-this-stuff/

https://williamjrider.wordpress.com/2015/07/24/its-really-important-to-have-the-fastest-computer/

https://williamjrider.wordpress.com/2015/07/03/modeling-issues-for-exascale-computation/

https://williamjrider.wordpress.com/2015/06/05/the-best-computer/

https://williamjrider.wordpress.com/2015/05/29/focusing-on-the-right-scaling-is-essential/

https://williamjrider.wordpress.com/2015/04/10/the-profound-costs-of-end-of-life-care-for-moores-law/

https://williamjrider.wordpress.com/2015/03/06/science-requires-that-modeling-be-challenged/

https://williamjrider.wordpress.com/2015/02/14/not-all-algorithm-research-is-created-equal/

https://williamjrider.wordpress.com/2015/02/12/why-is-scientific-computing-still-in-the-mainframe-era/

https://williamjrider.wordpress.com/2015/02/06/no-amount-of-genius-can-overcome-a-preoccupation-with-detail/

https://williamjrider.wordpress.com/2015/02/02/why-havent-models-of-reality-changed-more/

https://williamjrider.wordpress.com/2015/01/05/what-is-the-essence-of-computational-science/

https://williamjrider.wordpress.com/2015/01/01/2015-time-for-a-new-era-in-scientific-computing/
