
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Category Archives: Uncategorized

We are just sitting on our lead in Science & Technology

22 Friday Aug 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

Today the United States is the predominant power in the World, with its technological advantage leading the way. American technological superiority expresses itself in both economic and military power. Whether through drones or the Internet, it sits on top of the heap. The technology that drives this supremacy is largely the product of military research conducted during the more than fifty years of the Cold War. The Internet, for instance, was born from a defense-related research project designed to enable communication during and after a nuclear conflict. The United States appears to be smugly holding its lead almost as if it were part of the natural order. While none of this is terribly arguable, the situation isn’t so rosy that the United States can sit back and assume its lead will persist indefinitely. Yet that is exactly what is happening, and it is an absolute risk to the Country.

Several factors contribute directly to the risk the USA is taking. A number of other nations are right behind the United States, and they are acting like it by aggressively investing in technology. The technology that the United States depends upon is old, mature and far from the cutting edge. Most of it reflects the investments, risks and philosophy of 40 or 50 years ago when the Cold War was at its height. With the Cold War fading from sight and victory at hand, the United States took the proverbial victory lap while pulling back from the basics that provided the supremacy.

A large part of this is a lack of aggressive pursuit of R&D and a remarkably passive, fear-based approach to investment and management. The R&D goals of excellence, innovation and risk have been replaced by acceptable mediocrity, incrementalism and safe bets. We have seen a wholesale change in the federal approach to supporting science. Almost without exception these changes have made the USA less competitive and actively worked toward destroying the systems that once led the World. This is true for research institutions such as federal laboratories and universities. Rather than improving the efficiency or effectiveness of our R&D foundations, we have weakened them across the board. It is arguable that our political system has grown to take the USA’s supremacy completely for granted.

Without returning to a fresh set of investments and a forward-looking philosophy, the United States can expect its superiority to fade in the next 20 years. It doesn’t have to happen, but it will if something doesn’t change. The issues have been brewing and building for my entire adult life. Americans have become literally and metaphorically fat and lazy, with a sense of entitlement that will be overthrown in a manner likely to range from profoundly disturbing to catastrophic. We have no one to blame other than ourselves. The best analogy to what is happening is a team that is looking to preserve its victory by sitting on the lead. We have gone into the prevent defense, which as the saying goes “only prevents you from winning” (if you like soccer, we have gotten a lead and decided to “park the bus” hoping our opponents won’t score!).

The signs are everywhere; we don’t invest in risky, far-out research, our old infrastructure (roads, bridges, power plants) is crumbling, and our new infrastructure is non-existent. Most other first-World nations are investing (massively) in modern, efficient Internet and telecommunications while we allow greedy, self-interested monopolies to starve our population of data. Our economy and ultimately our National defense will suffer from this oversight. All of these categories will provide the same outcome; we will have a weaker economy, weaker incomes, poorer citizens, and an unreliable and increasingly inferior defense. If things don’t change we will fall from the summit and lose our place in the World.

To maintain a lead in technology and economic growth the Nation must aggressively fund research. This needs to happen in a wide range of fields and entail significant risk. Risk in research has been decreasing with each passing year. Perhaps the beginning of the decline can be traced to the attitude expressed by Senator William Proxmire. Proxmire went to great lengths to embarrass the scientific research he didn’t understand or value with his Golden Fleece Awards. In doing so he did an immense disservice to the Nation. Proxmire is gone, but his attitude is stronger than ever. The same things are true for investing in our National infrastructure; we need aggressive maintenance and far-sighted development of new capabilities. Our current political process does not value our future and will not invest in it. Because of this our future is at risk.

Another key sign of our concern about holding onto our lead is the expansion in government secrecy and classification. The expansion of classification is a direct result of the post-9/11 World, but also of fears of losing our advantage. Where science and technology are concerned, the approach depends upon the belief that hiding the secrets can keep the adversary from solving the same problems we have. In some cases this is a completely reasonable approach, where elements in the secret make it unique; however, in situations where the knowledge is more basic, the whole approach is foolhardy. Beyond the basic classification of things, there is an entire category of classification that is “off the books”. This is the designation of documents as “Official Use Only”, which removes them from consideration under the Freedom of Information Act. This designation is exploding in use. While it does have a reasonable purpose, quite often it is used as another de facto classification. It lacks the structure and accountability that formal classification has. It is unregulated and potentially very dangerous.

The one place where this has the greatest danger is the area of “export control”, which is a form of “Official Use Only”. In most cases standard classification is well controlled and highly technically prescribed. Export control has almost no guidance whatsoever. The information falling under export control is much less dangerous than classified information, yet the penalties for violating the regulations are much worse. Along with the more severe penalties comes almost no technical guidance for how to determine what is export controlled. Together it is a recipe for disaster. It is yet another area where our lawmakers are utterly failing the Nation.

Ultimately the worst thing that the United States does is allow extreme over-confidence to infect its decision-making. Just because the United States has overwhelming technological superiority today does not grant that for the future. As I noted above, the superiority of today is based on the research of decades ago. If the research is not happening today, the superiority of the future will fade away. This is where the devotion to secrecy comes in. There is the misbegotten belief that we can simply hide the source of our supremacy, which is the equivalent of sitting on a lead and playing “prevent” defense. As we know, the outcome of that strategy is often the opposite of what was intended. We are priming ourselves to be overtaken and surprised; we can only pray that the consequences will not be catastrophic and deadly.

The way to hold onto a lead is to continue doing the things that provided you the advantage in the first place. Aggressive, risk-taking research with a blend of open-ended objectives applied to real-world problems is the recipe we followed in the past. It is time to return to that approach, and drop the overly risk-averse, cautious, over- and micro-managed, backwards-looking approach we have taken over the past quarter of a century. The path to maintaining supremacy is crystal clear; it is only a matter of following it.

The United States Strategic Deterrence Symposium – Welcome to the Echodome!

15 Friday Aug 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Over the past two days I transitioned from finding the discussion and content of a meeting provocative and thoughtful to increasing unease about everything happening around me. More and more the feeling seeped into my consciousness that the dialog wasn’t quite as deep as I had first thought, and that I had been plunged into an echo chamber where various delusions exist unchallenged by countering viewpoints. The organizers, despite meritorious efforts, had failed to provide sufficiently broad viewpoints on the important topics to be engaged during the symposium.

I attended an odd meeting this week, or at least odd for me, the US Strategic Deterrence Symposium put on by US Strategic Command. This week’s offering comes from Omaha, Nebraska, the home of the Cornhuskers (Big Red) as I was repeatedly reminded over the last two days. Aside from all too much celebration of the upcoming college football season, the symposium was exceedingly well run and professional. Even more remarkably, it took place in a hotel conference center, not “on site” or “on base”.

As always on travel to a new place I am extra observant. Indeed traveling is a great way to see the country and the world while gaining much needed perspective on how others live. Omaha offers me the chance to see a real sunrise that the Sandia Mountains deny me. Omaha also seems to eschew the practice of supplying sidewalks for its citizens. This is irritating given my newfound habit of walking every morning, and might explain part of the (big) red state propensity towards obesity.

Part of my watchful state was a growing sense of being an interloper at this symposium. I really felt like a complete outsider. The first thing I noticed was the complete lack of technology in the audience aside from cell phones (I was using an iPad to take notes, which made me an immense outlier! This grew into a sense that this was the tip of the proverbial iceberg). Furthermore the footprint of the meeting on Twitter was nearly zero (until the press release from our last keynote speaker from the State Department). For a topic so ripe with technological angles and implications, the seemingly Luddite mass attending the meeting was very troubling. I’ll note that the demographics of the meeting appeared to be about 40% military from all the services, plus an array of beltway bandits and think tank thinkers with a sprinkling of international attendees and high-level government officials. It wasn’t clear how many academics were there, but not many beyond those on the panels.

I will note that the meeting did offer the laudable idea of letting attendees text or email questions to the panels. I availed myself of this, with limited success. Here are my two questions:

Q1: to what extent is the current peaceful Europe more of a reflection of the collective memory of WW2 rather than a permanent change in the political dynamic? How can we effectively extend the peace?

Q2: to what extent are we vulnerable to technological surprise and potentially overconfident about our technological superiority?

The first one was never asked. It was a response to an observation that the Napoleonic wars of the 19th Century precipitated a pause in European conflict, and it pondered whether WWII is the basis of the current, relatively peaceful Europe. Is the memory of the mass destruction caused by that war tilting governments toward peaceful resolution of differences? The second question was asked, albeit in a modified form and directed to an interesting subject. We had been treated to a relatively jingoistic discussion of American military superiority, and I wanted to know if we were a bit overconfident and could be surprised by an unforeseen technological advance. Instead the moderator asked the question of the Chinese member of the panel, who responded that yes, they are afraid of this. Maybe the Americans shouldn’t be so sure of themselves. The Chinese are doing something about their fear, and the Americans appear to be grossly overconfident of their technological hegemony.

When the meeting closed, the Admiral who hosted the symposium highlighted the importance of youth and their role in showing us the way forward, especially with technology. He had everyone 35 years old and younger stand up. Given my embrace of technology, I think this approach isn’t good enough, not by a long shot. We need to challenge everyone in the field to be technologically advanced and learn how to live in the modern World. Just because one is old shouldn’t let one off the hook. Frankly this was one of the most disappointing moments of the whole meeting. This community needs to challenge itself to be current and up to date with a deep, broad understanding of the technology that our security hinges upon.

I’ll highlight three of the talks before I close. One that was very good, one that was off the mark and a third one that made me angry.

The dinner talk at the end of the first day came from Dr. Zuhdi Jasser. I found it to be very thought provoking, with a message of supporting the rise of secular Islam as a policy. He was passionate and focused, with a key message of supporting the liberal, secular forces that allow the Western ideals of liberty and freedom to flourish (I noted, embarrassingly, that “liberty” and “liberal” have the same root, but are perceived entirely differently in politics). One troubling aspect of Dr. Jasser’s speech was his failure to take on the forces of anti-secular Christianity (probably present in the room!). These forces actually legitimize the sort of philosophy he is speaking out against, and they effectively make the Jihad two-sided, with Christian soldiers squaring off in a modern Crusade. This is a lost opportunity. It would also speak against his advocacy of dropping the “enemy of my enemy is my friend” philosophy that has defined too much of US foreign policy for the last 70 years (is he guilty of this very practice by embracing the American Right?).

General Frank Klotz, the NNSA administrator, gave the opening talk of the second day. In many ways the talk was not very interesting, but it was relevant to me. Ultimately, the item that stuck with me was the discussion of the maintenance of a World-class workforce in science and technology. General Klotz described the NNSA support for re-capitalizing the facilities as central to this. He reiterated the importance of the workforce several times. From my perspective we are failing at this goal, and failing badly. The science that the United States is depending on is in virtual free fall. Our military supremacy is dependent on the science of 20-40 years ago, and the pipeline is increasingly empty. We have fallen behind Europe, and may fall behind China in the not too distant future. The entire scientific establishment is receding from prominence, in large part due to a complete lack of leadership and compelling mission as a Nation. It is a crisis. It is a massive threat to National security. The concept of deterrence by capability used to be important. It is now something that we cannot defend because our capabilities are in such massive decline. It needs to come back; it needs to be addressed with an eye towards recapturing its importance. Facilities are no replacement for a vibrant scientific elite doing cutting edge work. Today, for some reason, we seem to accept this state of affairs.

One of the final panels offered a talk that just made me angry in a visceral, deep way. It came from DHS. The speaker offered up a vision of a walled off, gated community as a response to potential terrorism. In my view this sort of approach to terrorism is the absolute failure of deterrence. He outlined an America where terrorism has won and our freedom has been sacrificed on the altar of fear. It is the surrender of our lifestyle to the forces of terror. He represented a view that hands victory to our enemies. One could argue that our response to terrorism has already handed them success through the massive amount of resources squandered in fighting it, and the replacement of liberty, freedom and our fundamental principles by surveillance, torture and perpetual war. As the events in Ferguson, Missouri have demonstrated, too much of the war has been exported to our streets by police disguised as an occupying army (the negativity of this was alluded to at the meeting, although not directly).

The last thing that stood out to me was the political attitude of the attendees. People usually shy away from expressing deep political sentiment, but not here. I felt like a group preparing the talking points for Fox News surrounded me. I heard open climate denial without a hint of reservation. Interestingly, the climate issue looms large over US-Russian dynamics, with the Arctic being a potentially huge flashpoint. Other climate-related topics such as increased regional conflict due to crop failures and energy were avoided. This is a huge problem. I felt that the audience was largely only tolerating the current administration, and deeply wanted to see a more strident policy aligned with Neo-Con ideals. The topic of strategic deterrence is too important not to be subject to a deeper, more nuanced debate, but this wasn’t happening here.

In summary, this was an important and good meeting with lots of provocative content, but it needs to sharpen its edge and challenge the audience’s conventional wisdom. The organizers need to tear open the echo chamber naturally arising from this community. If the USA isn’t careful we will all be surprised by dangers and risks hiding in plain sight.

 

What came first? The method? Or the math?

08 Friday Aug 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

I’ll post a footnote to my thoughts of a week ago (it wasn’t my plan A). It comes from trying to piece together the early history of CFD and my frustration with the lack of detail and context in the scientific and mathematical writing. Upon reflection I think it is actually a deeper problem with deeper consequences.

“A generation which ignores history has no past — and no future.” ― Robert A. Heinlein

The title is the proverbial chicken and egg question, but for each case there is a definite chicken or egg answer. The problem is that far too often the literature does not contain the information by which the answer may be determined. Our community history is thus lost, and the lessons of how knowledge was obtained are not passed along.

“We make our own monsters, then fear them for what they show us about ourselves.” ― Mike Carey & Peter Gross

The question in the title relates to how advances in computational science are related to the math used to explain them. The core of the question is whether the method is first demonstrated, or a problem is first found before the mathematical analysis explains why. My general belief is that most of the time the computed results precede the math. The math provides rigor, explanation and bounds for applying techniques. This reflects upon our considerations of where the balance of effort should be placed in driving innovative solutions. Generally speaking, I would posit that the computational experimentation should come first, followed by mathematical rigor, followed by more experimentation, and so on… This structure is often hidden by the manner in which mathematics is presented in the literature.

In developing the history of CFD I am trying to express a broader perspective than currently exists on the topic. Part of the perspective is defining the foundation that existed before computational science was even a conceptual leap in Von Neumann’s mind. I knew that a number of numerical methods existed including integration of ODE’s (the work of Runge, Kutta, Adams, Bashforth, etc…). One of Von Neumann’s great contributions to numerical methods was stability analysis, and now I’m convinced it was even greater than I had imagined.

I had incorrectly assumed that ODE stability theory preceded Von Neumann’s work, but instead it came in its wake. To me, this is utterly remarkable because the ODE theory is much simpler. Note that a few weeks ago I used it to introduce the analysis of methods before Von Neumann’s stability technique, yet historically the more difficult thing was done first.
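To give a sense of just how much simpler the ODE case is, here is a minimal sketch of my own (an illustration, not taken from the earlier post) for forward Euler applied to the model problem u' = c u, where the whole stability question reduces to the size of a single amplification factor:

(* forward Euler on u' = c u amplifies the solution by g = 1 + c dt each step; stability requires Abs[g] <= 1 *)
g[z_] := 1 + z  (* z = c dt *)
Abs[g[-1/2]] <= 1  (* True: a stable step size *)
Abs[g[-5/2]] <= 1  (* False: an unstable step size *)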

Think about it. None of the precursors to the modern era in ODE integration had explored the time stability of the methods. The issue was clearly present and surely observed. It took the availability of (mechanical) computers to generate the impetus to study the topic. Perhaps the human computing of the earlier era was too dubious for the instability to warrant a deeper mathematical investigation. The problem is that the writing about the topic shines little or no light on the reasoning. None. This comes down to the style of the writing, which provides no context for the work; instead it hops right into the math. Any context in the literature seems to only come when the work is completed and the author is famous (and old). Then the work is discussed in a historical overview, which provides details that are completely absent from the earlier (technical) works. If the author dies early (e.g., Von Neumann) no such retrospective is available.

“For balance to be restored, lessons must be learned.” ― Sameh Elsayed

The true reasoning and inspiration for many of the great works of numerical mathematics is hidden by the accepted practices of the field. This is counter-productive and antithetical to pedagogical discourse. Too often in the modern literature, work is done without any reason to believe it will manifest any utility in actual computing, and in many cases it never does. In my opinion, over the past several decades the literature has moved away from numerical analysis that demonstrates numerical utility. In reading the older ODE literature I see that this is an amplification of previous tendencies.

This personally infuriates me because I often find no reason to actually digest the detailed mathematics without some sense that it will be useful. It also encourages the publication of results that have no practical value. This frustrating state of affairs is at the core of my comments last week, which, in hindsight, may have been aimed at the wrong target.

What is lost from the literary record is profound. Often the greatest discoveries in applied math come from trying a well-crafted heuristic on a difficult problem and finding that it works far better than could be expected. The math then comes in to provide an ordered, structural explanation for the empirical observation. Lost in the fray is the fact that the device was heuristic and perhaps a leap or inspiration from some other source. In other cases progress comes from a failure or problem with something that should work. We explain why it doesn’t in a rigorous fashion with a barrier theorem. These barrier theorems are essential to progress. The math then forms the foundation for the next leap. The problem is that the process is undocumented, and this ill prepares the uninitiated for how to make the next leap. Experimentation and heuristics are key, and often the math only follows.

Worse yet, this tendency is only getting more acute. I’m not sure why the literature is like this. Is it that people are too insecure to admit the pedestrian events that led to creation? Do parts of the work just seem too close to engineering? I think these tendencies lead to bigger problems than simply historical inaccuracy and incompleteness; they lead to less progress and less innovation. This tendency is actually holding the field of numerical methods for scientific computing back.

I’ve noted a general lack of progress with algorithms in the last 20-30 years. Perhaps part of the issue is related to the lack of priority given to simply experimenting with methods and trying things, then doing the math. Instead there is too much just doing math, or even worse only doing the methods that produce the math you already know how to do. We need methods that work, and we need to invent math that explains the things that work. A more fruitful path would involve working hard to solve problems that we don’t know how to attack, finding some fruitful avenues for progress, and then trying to systematically explain that progress. Along the way we might try being a bit more honest about how the work was accomplished.

“In science if you know what you are doing you should not be doing it. In engineering if you do not know what you are doing you should not be doing it. Of course, you seldom, if ever, see the pure state.” – Richard Hamming

 

What do I have against the finite element method?

01 Friday Aug 2014

Posted by Bill Rider in Uncategorized

≈ 4 Comments

Over the past couple of weeks I’ve experienced something very irritating time and time again. Each time I’ve been left more frustrated and angry than before. It has been a continual source of disappointment. I went into a room expecting to learn something and left knowing less than when I entered. What is it? “The finite element method”

“If you can’t explain it to a six year old, you don’t understand it yourself.” ― Albert Einstein

In short, the answer to my title is nothing at all and everything. Nothing is technically wrong with the finite element method, absolutely nothing at all. Yet given that nothing is wrong with it, there is a lot wrong with what it does to the practice of mathematics and scientific computing. More specifically, there isn’t a thing wrong with the method except how people use it, which is too damn abstractly. Much of the time the method is explained in a deep code undecipherable to anyone except a small cadre of researchers working in the field. Explaining finite elements to a six year old is a tall order, but a respectable goal. Too often you can’t explain what you’re doing to a 46 year old with a PhD unless they are part of the collective of PhD’s working directly in the field and have received the magic decoder ring during their graduate education.

A common occurrence is for someone to begin their research career with papers that clearly state what they are doing, and then, as the researcher becomes successful, all clarity leaves their writing. I saw a talk at a meeting where a researcher who used to write clearly had simultaneously obscured their presentation while pivoting toward research on easier problems. This is utter madness! The mathematics of finite element research tends to take a method that works well on hard problems, and analyze it on simpler problems while making the whole thing less clear. One of the key reasons to work on simpler problems is to clarify, not complicate. Too often the exact opposite is done.

Sometimes this blog is about working out stuff that bugs me in a hopefully articulate way. I’ve spent most of the last month going to scientific meetings and seeing a lot of technical talks, and one of the things that bugs me the most is finite element methods (FEM). More specifically, the way FEM is presented. There really isn’t a lot wrong with FEM per se, it’s a fine methodology that might even be optimal for some problems. I can’t really say, because its proponents so often do such an abysmal job of explaining what they are doing and why. That is the crux of the matter.

Scientific talks on the finite element method tend to be completely opaque and I walk out of them knowing less than I walked in. The talks are often given in a manner that seems to intentionally obscure the topic, with the apparent objective of making the speaker seem much smarter than they actually are. I’m not fooled. The effect they have achieved is to piss me off, and cause me to think less of them. Presenting a simple problem in an intentionally abstract and obtuse way is simply a disservice to science. It serves no purpose but to make the simple grandiose and distant. It ultimately hurts the field, deeply.

The point of a talk is to teach, explain and learn, not to make the speaker seem really smart. Most FEM talks are about making the speaker seem smart instead of explaining why something works. The reality is that the simple, clear explanation is actually the hallmark of intellectual virtue. Simplicity is a virtue that seems to be completely off the map with FEM; FEM is about making the simple complex instead. To make matters more infuriating, much of the current research on FEM is focused on attacking the least important and most trivial mathematical problems instead of the difficult problems that are pacing computational science. Computational science is being paced today by issues such as multiphysics (where multiple physical effects interact to define a problem), particularly involving transport equations (defined by hyperbolic PDE’s). In addition, uncertainty quantification along with verification and validation is extremely important.

Instead FEM research is increasingly focused on elliptic PDE’s, which are probably the easiest thing to solve in the PDE world. In other words, if you can solve an elliptic PDE well, that tells me very little about a methodology’s capacity to attack the really hard, important problems. It is nice, but not very interesting (the very definition of necessary and insufficient). Frankly, the desire and interest in taking a method designed for solving hyperbolic PDE’s, such as discontinuous Galerkin, and applying it to elliptic PDE’s is worthwhile, but should not receive anywhere near the attention I see. It is not important enough to get the copious attention it is getting.

The effect is that we are focused on the areas of less importance, which has the impact of taking the methodology backwards. The research dollars are focused on less important problems instead of more important ones. Difficult important problems should be the focus of research, not the kind of “Mickey Mouse” stuff I’ve seen the whole month. On top of Mickey Mouse problems, the talks make the topic as complex as possible, and seem to be focused on trying not to explain anything in simple terms.

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.” ― Edsger Wybe Dijkstra

I think Dijkstra was talking about something entirely different, but the point is similar: complexity sells, and that is why it is trotted out time and time again. While it sells, it also destroys the sort of understanding that allows ideas to be extended and modified to solve new problems. The complexity tends to box ideas in rather than making them more general and less specific. There is a lot at stake beyond style; the efficacy of science is impacted by this manufactured lack of simplicity. Ultimately it is the lack of simplicity that works against FEM, not the method itself. This is a direct failure of the practice of FEM rather than the ideas embedded within.

The people who work on FEM tend to significantly overelaborate things. I’m quite close to 100% convinced that the overelaboration is completely unnecessary, and it actually serves a supremely negative purpose in the broader practice of science. One of the end products is short-changing the FEM. In a nutshell, people can solve harder problems with finite volume methods (FVM) than FEM. The quest for seemingly rigorous mathematics has created a tendency to work toward problems with well-developed math. Instead we need to be inventing math to attack important problems even if the rigor is missing. Additionally, researchers over time have been far more innovative with FVM than with FEM.

The FEM folks usually trot out that bullshit quip that FEM is exactly like FVM with the properly chosen test function. OK, fair enough, FEM is equivalent to FVM, but this fails to explain the generic lack of innovation in numerical methods arising from the FEM community. In the long run it is the innovations that determine the true power of a method, not the elaborate theories surrounding relatively trivial problems. These elaborations actually undermine methods and lead to a cult of complexity that so often defines the practice.

Where FEM excels is the abstraction of geometry from the method and the ability to include geometric detail in the simulation within a unified framework. This is extremely useful and explains the popularity of FEM for engineering analysis where geometric detail is important, or assumed to be important. Quite often the innovative methodology is shoehorned into FEM after having been invented and perfected in the finite volume (or finite difference) world. Frequently the innovative devices have to be severely modified to fit into the FEM’s dictums. These modifications usually diminish the overall effectiveness of the innovations relative to their finite volume or finite difference forebears. These innovative devices are necessary to solve the hard multiphysics problems often governed by highly nonlinear hyperbolic (conservation or evolution) equations. I personally would be more convinced by FEM if some of the innovation happened within the FEM framework instead of continually being imported.

Perhaps most distressingly, FEM allows one to engage in mathematical masturbation. I say this with complete sincerity because the development of methods in FVM is far more procreative, where methods are actually born of the activity. Too often FEM leads to mathematical fantasies that have no useful end product aside from lots of self-referential papers in journals, and opaque talks at meetings such as those I’ve witnessed in the last month. For example, computational fluid dynamics (CFD) is dominated by FVM; CFD solvers are predominantly FVM, not FEM, largely for the very reason that innovative methods are derived first and used best in FVM. Without the innovative methods CFD would not be able to solve many of its most important and challenging problems today.

Mathematically speaking, I think the issue comes down to regularity. For highly regular and well-behaved problems FEM works very well, and it’s better than FVM. In a sense FEM often doubles down on regularity with its test functions. When the solution is highly regular this yields benefits. The issue is that highly regular problems actually define the easier and less challenging problems to be solved, not the hard technology-pacing ones. FVM on the other hand hedges its bets. Discontinuous Galerkin (DG) is a particular example. It is a really interesting method because it sits between FEM and FVM. The DG community puts a lot of effort into making it a FEM method with all the attendant disadvantages of assumed regularity. This is the heart of the maddening case of taking a method so well suited to very hard problems and studying it incessantly on very easy problems with no apparent gain in utility. It seems to me that DG methods have actually gone backwards in the last decade due to this practice.

In a sense the divide is defined by whether you don’t assume regularity and add it back, or you assume it is there and take measures to deal with it when it’s not there. Another good example comes from the use of FEM for hyperbolic PDE’s where conservation form is important. Conservation is essential, and the weak form of the PDE should give conservation naturally. Instead, with the most common Galerkin FEM, if one isn’t careful the implementation can destroy conservation. This should not happen; conservation should be a constraint, an invariant that comes for free. It does with FVM, it doesn’t with FEM, and that is a problem. Simple mistakes should not cause conservation errors. In FVM this would have been structurally impossible because of how it was coded. The conservation form would have been built in. In FEM the conservation is a special property, which is odd for something built on the weak form of the PDE. This goes directly to the continuous basis selected in the construction of the scheme.
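To make the structural point concrete, here is a minimal symbolic sketch of my own (an illustration, not taken from any particular code; the symbols u, f and nu are placeholders) showing why conservation comes for free in a flux-form finite volume update: with periodic indexing every interface flux appears exactly once with each sign, so the total of the cell averages cannot change no matter what values the fluxes take.

(* minimal sketch: discrete conservation of a flux-form, periodic finite volume update *)
n = 8;  (* number of cells; any value works *)
unew[j_] := u[j] - nu (f[j] - f[Mod[j - 1, n, 1]])  (* f[j] stands for the flux at edge j+1/2 *)
Simplify[Sum[unew[j], {j, n}] - Sum[u[j], {j, n}]]  (* the fluxes telescope and the result is 0 *)

Whatever scheme supplies the edge fluxes, an update written this way cannot create or destroy the conserved quantity; the analogous statement in a Galerkin FEM setting holds only if the basis and the implementation are chosen with care.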

Another place where the FEM community falls short is stability and accuracy analysis. With all the mathematical brouhaha surrounding the method one might think that stability and accuracy analysis would be ever-present in FEM practice. Quite the contrary is true. Code and solution verification are common and well practiced in the FVM world and almost invisible in FEM. It makes no sense. A large part of the reason is the abstract mathematical focus of FEM instead of the practical approach of FVM. At the practical end where engineering and science are being accomplished with the aid of scientific computing, the mathematical energy seems to yield very little. It is utterly baffling.

“Simplicity is the ultimate sophistication.” ― Leonardo da Vinci

The issue is where the math community spends its time; do they focus on proving things for easy problems, or expand the techniques to handle hard problems? Right now, it seems to focus on making the problem easier and proving things rather than expanding the techniques available and creating structures that would work on the harder problems. The difference is rather extreme. The goal should be to solve the hard problems we are offered, not transform the hard problems into easy problems with existing math. If the math needed for the hard problems isn’t there, we need to invent it and start extending ourselves to provide the rigor we want to see. Too often the opposite path is chosen.

A big issue is the importance or prevalence of problems for which strong convergence can be expected. How much of the work in the World is focused where this doesn’t or can’t happen? How much is? Where is the money or importance?

I think a much better path for FEM in the future is to focus first on making the style and focus of presentation simple and pedagogical. Secondarily, the focus should be pushed toward solving harder problems that pace computational science rather than toys that are amenable to well-defined mathematical analysis. The advantages of FEM are clear; the hardest thing we have to do is make the method clear, comprehensible and extensible.


Gil Strang is a good example of presenting the FEM in a clear manner free of jargon and emphasizing understanding.

I fully expect to catch grief over what I’m saying. Instead I’d like to spur those working on FEM to both attack harder problems, and make their explanation of what they are doing simple. The result will be a better methodology that more people understand. Maybe then the FEM will start to be the source of more innovative numerical methods. Everyone will benefit from this small, but important change in perspective.

“Any darn fool can make something complex; it takes a genius to make something simple.” ― Pete Seeger

What is the future of Computational Science, Engineering and Mathematics?

24 Thursday Jul 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

I spent the week at a relatively massive conference (3000 attendees) in Barcelona, Spain, the World Congress on Computational Mechanics. The meeting was large enough that I was constantly missing talks that I wanted to see because other talks were even more interesting. Originally I wanted to give four talks, but the organizers allowed only one, so I was attending more talks and giving far fewer. Nonetheless, such meetings are great opportunities to learn about what is going on around the World, get lots of new ideas, meet old friends and make new ones. It is exactly what I wrote about a few weeks ago; giving a talk is second, third or even fourth on the list of reasons to attend such a meeting.

The span and scope of the Congress is truly impressive. Computational modeling has become a pervasive aspect of modern science and engineering. The array of applications is vast and impressively international in flavor. While all of this is impressive, such venues offer an opportunity to take stock of where I am, and where the United States and the rest of the World stand. All of this provides a tremendously valuable opportunity to gain much needed perspective.

An honest assessment is complex. On the one hand, the basic technical and scientific progress is immense; on the other hand, there are concerns lurking around every corner. While the United States probably remains in a preeminent state for computational science and engineering, the case against this is getting stronger every day. Europe and Asia are catching up quickly, if they have not already overtaken the USA in many subfields. Across the board there are signs of problems and stagnation in the field. It would seem clear that people know this, and it isn’t clear whether there is any action to address the problems. Among these issues is the increased use of a narrow set of commercial or open source computational tools, with a requisite lack of knowledge and expertise in the core methods and algorithms used inside them. In addition, the nature of professional education and the state of professionalism are under assault by societal forces.

Despite the massive size of the meeting there are substantial signs that support for research in the field is declining in size and changing in character. It was extremely difficult to see “big” things happening in the field. The question is whether this is the sign of a mature field where slow progress is happening, or the broad lack of support for truly game-changing work. It could also be a sign that the creative energy in science has moved to other areas that are “hotter” such as biology, medicine, materials, … There was a notable lack of exciting keynote lectures at the meeting. There didn’t seem to be any “buzz” with any of them. This was perhaps the single most disappointing aspect of the conference.

A couple of things are clear: in the United States and Europe the research environment is in crisis, under assault from short-term thinking, funding shortfalls (after making funding the end-all and be-all), and educational malaise. For example, I was horrified that Europeans are looking to the USA for guidance on improving their education. This comes on top of my increasing concern about the nature of professional development at the sort of Labs where I work, and the general lack of educational vitality at universities. More and more it is clear that the chief measure of academic success for professors is monetary. Claims of research quality are measured in dollars and in the publish-or-perish mentality that has ravaged the scientific literature. It is a system in dire need of focused reform and should not be the blueprint for anything but failure. The monetary drive comes from the lack of support that education is receiving from the government, which has driven tuition higher at a stunning pace. At the same time the monetary objective of research funding is hollowing out the educational focus universities should possess. The research itself has a short-term focus, and the lack of emphasis or priority for developing people, be they students or professionals, shares the same short-sighted outcome. We are draining our system of the vital engine of innovation that has been the key to our recent economic successes.

Another clear trend that resonates with my attendance at the SIAM annual meeting a few weeks ago is the increasing divide between applied mathematics (or theoretical mechanics) and applications. The disparity in focus between the theoretically minded scientists and the application-focused scientist-engineers is growing, to the detriment of the community. The application side of things is increasingly using commercial codes that tend to reflect a deep stagnation in capability (aside from the user interface). The theoretical side is focused on idealized problems stripped of the real features that complicate the results, making for lots of results that no one on the applied side cares about or can use. The divide is only growing, with fewer and fewer reaching across the chasm to connect theory to application.

The push from applications has in the past spurred the theoretical side to advance by attacking more difficult problems. Those days appear to be gone. I might blame the prevalence of the sort of short-term thinking infecting other areas for this. Both sides of this divide seem to be driven to take few chances and place their efforts into the safe and sure category of work. The theoretical side is working on problems where results can surely be produced (with the requisite publications). By the same token the applied side uses tried and true methods to get some results without having to wait or hope for a breakthrough. The result is a deep sense of abandonment of progress on many fronts.

The increasing dominance of a small number of codes, either commercial or open source, is another deep concern. Part of the problem is a reality (or perception) of extensive costs associated with the development of software. People choose to use these off-the-shelf systems because they cannot afford to build their own. On the other hand, by making these choices they and their students or staff are denied the hands-on knowledge of the methodology that leads to deep expertise. This is all part of the short-term focus that is bleeding the entire community of the deep expertise development necessary for excellence. The same attitudes and approach happen at large laboratories that should seemingly not have the sort of financial and time pressures operating in academia. This whole issue is exacerbated by the theoretical versus applied divide. So far we haven’t made scientific and engineering software modular or componentized. Further, the leading edge efforts with “modules” often are so divorced from real problems that they can’t really be relied upon for hard-core applications. Again we have problems with adapting to the modern world, confounded with the short-term focus and success measures that do not measure success.

Perhaps what I’m seeing is a veritable mid-life crisis. The field of computational science and engineering has become mature. It is remarkably broad and making inroads into new areas and considered a full partner with traditional activities in most high-tech industries. At the same time there is a stunning lack of self-awareness, and a loss of knowledge and perspective on the history of the past fifty to seventy years that led to this point. Larger societal pressures and trends are pushing the field in directions that are counter-productive and work actively to undermine the potential of the future. All of this is happening at the same time that computer hardware is either undergoing a crisis or phase transition to a different state. Together we are entering an exciting, but dangerous time that will require great wisdom to navigate. I truly fear that the necessary wisdom while available will not be called upon. If we continue to choose the shortsighted path and avoid doing some difficult things, the outcome could be quite damaging.

A couple of notes about the venue should be made. Barcelona is a truly beautiful city with wonderful weather, people, architecture, food and mass transit. I really enjoyed the visit, and there is plenty to comment on. Too few Americans have visited other countries to put their own country in perspective. After a short time you start to hone in on the differences between where you visit and where you live. Coming from America and hearing about the Spanish economy I expected far more homelessness and obvious poverty. I saw very little of either societal ill during my visit. If this is what economic disaster looks like, then it’s hard to see it as an actual disaster. Frankly, the USA looks much worse by comparison with a supposedly recovering economy. There are private security guards everywhere; the amount of security at the meeting was actually a bit distressing. In contrast, over a week at a hotel across the street from a hospital I heard exactly one siren, which is amazing. As usual, getting away from my standard environment is thought provoking, which is always a great thing.

von Neumann Analysis of Finite Difference Methods for First-Order Hyperbolic Equations

21 Monday Jul 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Last week I showed how the accuracy, stability and general properties of an ODE integrator might be studied with the aid of Mathematica. This week I will do the same for the solution of a partial differential equation. Again, I will provide the commands used in Mathematica to conduct the analysis, reported at the end of the post.

It is good to start as simple as possible. That was the reason for retreading the whole ODE stability analysis last week. Now we can steadily go forward toward something a bit harder, partial differential equations, starting with a first-order method for a first-order hyperbolic equation, the linear advection equation,

u_t + u_x=0, where the subscript denotes differentiation with respect to the variable. This equation is about as simple as PDEs get, but it is notoriously difficult to solve numerically.

Before getting to the analysis we can state a few properties of the equation. The exact solution is outrageously simple, u \left(x, t \right) = u(x-t,0). This means that the temporal solution is simply defined by the initial condition translated by the velocity (which is one in this case) and time. Nothing changes; it simply moves in space. This is a very simple form of space-time self-similarity. If we are solving this equation numerically, any change in the waveform is an error. We can also note that the integral of the value is preserved (of course), making this a “conservation law”. Later, when you’d like to solve harder problems, this property is exceedingly important.
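As a quick sanity check in the same Mathematica spirit as the listing at the end of the post (a sketch of my own; the profile f is an arbitrary differentiable function), any shape advected at unit speed satisfies the equation exactly:

(* any smooth profile translated at unit speed solves u_t + u_x = 0 *)
Simplify[D[f[x - t], t] + D[f[x - t], x]]  (* -> 0 for an arbitrary f *)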

Now we can proceed to the analysis. The basic process is to replace the function with an analytical representation, and similar to the ODEs we use the complex exponential (Fourier transform), \exp\left(\imath j \theta\right), where j is the grid index of our discretized function, and \theta is the angle parameterizing the frequency of the waveform. The analysis then proceeds much in the style of the ODE work from last week; one substitutes this function into the numerical scheme and works out the modification of the waveform by the numerical method. We then take this modification to be the symbol of the operator A\left(\theta\right) = \left| A \right| \exp\left(\imath\alpha\right). In this form we have divided the symbol into two effects: its amplitude and its modulation of the waveform, or phase. Finishing our conceptual toolbox is the expression of the exact solution over a single time step as the initial data multiplied by \exp\left(-\imath \nu \theta\right).

We are now ready to apply the analysis technique to the scheme. We can start off with something horribly simple like first-order upwind. The numerical method is easy to write down as u_j^{n+1}=u_j^n-\nu\left(u_{j+1/2}^n-u_{j-1/2}^n\right) where \nu= \Delta t / \Delta x is the Courant or CFL number (for the unit velocity here) and u_{j+1/2}^n = u_j^n is the upwind edge value. The CFL number is the similarity variable (dimensionless) of greatest importance for numerical schemes for hyperbolic PDEs. Now we plug our Fourier function into the grid values in the scheme and evaluate for a single grid point j=0. Without showing the trivial algebraic steps this gives A = 1 - \nu\left(1-\exp(-\imath \theta)\right). We can make the substitution of the trigonometric functions for the complex exponential, \exp\left(-\imath \theta\right) = \cos\left(\theta\right) - \imath \sin\left(\theta\right).
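A minimal sketch of that substitution in Mathematica (variable names are mine; t plays the role of \theta and v the role of \nu, just as in the full listing at the end of the post):

(* substitute the Fourier mode into first-order upwind and evaluate at j = 0 *)
T[j_] := Exp[I j t]
upwind[j_] := T[j] - v (T[j] - T[j - 1])
Simplify[upwind[0]]  (* equal to 1 - v (1 - Exp[-I t]), the symbol A quoted above *)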

Now it is time to use these relations to provide the properties of the numerical scheme. We will divide these effects into two categories: changes in the amplification of the function, \left| A \right|, which will define stability, and the phase error \alpha. The exact solution has an amplitude of one, and a phase of \nu \theta. Once we have separated the symbol into its pieces we can then examine the formal truncation error of the method (as \theta\rightarrow 0 is equivalent to \Delta x\rightarrow 0) in a straightforward manner.

 

We can also expand these in a Taylor series to get a result for the truncation error. For the amplitude we get the following: \left|A\right| \approx 1 -\frac{1}{2} \left(\nu-\nu^2 \right)\theta^2 + O\left(\theta^4\right). The phase error can be similarly treated, \alpha \approx 1 + \frac{1}{6}\left(1-2\nu + \nu^2\right)\theta^2 + O\left(\theta^4\right). Please note that the phase error is actually one order higher than I’ve written because of its definition, where I have divided through by \nu\theta.

The last bit of analysis we conduct is to make an estimate of the rate of convergence as a function of the mesh spacing and CFL number. Given the symbol we can compute the error E=A - \exp\left(-\imath \nu\theta\right). We then compute the error on a grid refined by a factor of two, noting that the symbol must be applied twice to get the solution to the same point in time. The error for the refined calculation is therefore E_{\frac{1}{2}} = A_{\frac{1}{2}}^2 - \exp\left(-\imath \nu \theta\right), where A_{\frac{1}{2}} = A\left(\theta/2\right). Given these errors the local rate of convergence is simple, n = \log\left(\left|E\right|/\left|E_{\frac{1}{2}}\right| \right)/\log\left(2\right). We can then plot the function, where we see that the convergence rate deviates significantly from one (the expected value) for finite values of \theta and \nu.
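As a small numerical illustration of this estimate (a check of my own, using the upwind symbol A = 1 - \nu\left(1-\exp(-\imath \theta)\right) derived above; the function names are mine), the measured local rate comes out near the formal first order for a modest wavenumber and CFL number:

(* check of the local convergence-rate estimate for the first-order upwind symbol *)
A[v_, t_] := 1 - v (1 - Exp[-I t])
rate[v_, t_] := Log[Abs[A[v, t] - Exp[-I v t]]/Abs[A[v, t/2]^2 - Exp[-I v t]]]/Log[2]
N[rate[1/2, 1/10]]  (* approximately 1, the formal order of accuracy *)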

We can now apply the same machinery to more complex schemes. Our first example is the time-space coupled version of Fromm’s scheme, which is a second-order method. Conducting the analysis is largely a function of writing the numerical scheme in Mathematica much in the same fashion we would use to write the method into a computer code.

The first version of Fromm’s scheme uses the combined space-time differencing introduced by Lax-Wendroff, implemented using a methodology similar to Richtmyer’s two-step scheme, which makes the steps clear. First, define a cell-centered slope s_j^n =\frac{1}{2}\left( u_{j+1}^n - u_{j-1}^n\right) and then use this to define an edge-centered, time-centered value, u_{j+1/2}^{n+1/2} = u_j^n + \frac{1}{2}\left(1 - \nu\right) s_j^n. This choice has a “built-in” upwind bias. If the velocity in the equation were oriented oppositely, this choice would be u_{j+1/2}^{n+1/2} = u_{j+1}^n - \frac{1}{2}\left(1 - \nu\right)s_{j+1}^n instead (\nu<0). Now we can write the update for the cell-centered variables as u_j^{n+1} = u_j^n - \nu\left(u_{j+1/2}^{n+1/2} - u_{j-1/2}^{n+1/2}\right), substitute in the Fourier transform and apply all the same rules as for the first-order upwind method.

Just note that in Mathematica the slope and edge variables are defined as general functions of the mesh index j, and the substitution is accomplished without any pain. This property is essential for analyzing complicated methods that effectively have very large or complex stencils.

(Figures: amplification factor contours, phase error, and convergence rate for Fromm’s scheme.)

The results then follow as before. We can plot the amplitude and phase error easily, and the first thing we should notice is the radical improvement over the first-order method, particularly the amplification error at large wavenumbers (i.e., the grid scale). We can go further and use the Taylor series expansion to express the formal accuracy for the amplification and phase error. The amplification error is two orders higher than upwind, \left| A \right| \approx 1 + O\left(\theta^4\right). The phase error is smaller than the upwind scheme, but the same order, \alpha\approx 1 + O\left(\theta^2\right). This is the leading order error in Fromm’s scheme.

We can finish by plotting the convergence rate as a function of finite time step and wavenumber. Unlike the upwind scheme, as the wavenumber approaches one, the rate of convergence is larger than the formal order of accuracy.

The Mathematica commands used to conduct the analysis above:

(* 1st order 1-D *)

U[j_] := T[j]

U1[j_] := U[j] - v (U[j] - U[j - 1])

sym = 1/2 U[0] + 1/2 U1[0] - v/2 (U1[0] - U1[-1]);

T[j_] := Exp[I j t] (* the Fourier mode; t plays the role of the angle theta *)

Series[sym - Exp[-I v t], {t, 0, 5}]

Simplify[sym]

Sym[v_, t_] := 1/2 E^(-2 I t) (-2 E^(I t) (-1 + v) v + v^2 + E^(2 I t) (2 - 2 v + v^2))

rg1 = Simplify[ComplexExpand[Re[sym]]];

ig1 = Simplify[ComplexExpand[Im[sym]]];

amp1 = Simplify[Sqrt[rg1^2 + ig1^2]];

phase1 = Simplify[ ArcTan[-ig1/rg1]/(v t)];

Series[amp1, {t, 0, 5}]

Series[phase1, {t, 0, 5}]

Plot3D[amp1, {t, 0, Pi}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["amp-1.jpg", %]

ContourPlot[amp1, {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 1}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["amp-1-cont.jpg", %]

Plot3D[phase1, {t, 0, Pi}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["phase-1.jpg", %]

ContourPlot[phase1, {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {-1, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 0.9, 0.99, 1, 1.25, 1.5}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["phase-1-cont.jpg", %]

err = Sym[v, t] - Exp[-I v t];

err2 = Sym[v, t/2]^2 - Exp[-I v t];

Plot3D[Log[Abs[err]/Abs[err2]]/Log[2], {t, 0.01, Pi/2}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["conv-rate-1.jpg", %]

ContourPlot[Log[Abs[err]/Abs[err2]]/Log[2], {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {0.5, 0.75, 0.9, 0.95, 0.99}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Plot3D[Abs[sym/Exp[-I v t]], {t, 0, Pi}, {v, 0, 5}]

ContourPlot[ If[Abs[sym/Exp[-I v t]] <= 1, Abs[sym/Exp[-I v t]], 0], {t, 0, Pi}, {v, 0, 5}, PlotPoints -> 250, Contours -> {0.1, 0.25, 0.5, 0.75, 0.9, 1}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

errs = Sym[v, t/2]^2 - Sym[v, t];

errs2 = Sym[v, t/4]^4 - Sym[v, t/2]^2;

Plot3D[Log[Abs[errs]/Abs[errs2]]/Log[2], {t, 0.01, Pi/2}, {v, 0, 1}]

errt = Sym[v/2, t]^2 - Sym[v, t];

errt2 = Sym[v/4, t]^4 - Sym[v/2, t]^2;

Plot3D[Log[Abs[errt]/Abs[errt2]]/Log[2], {t, 0, Pi/2}, {v, 0, 1}]

(* classic fromm *)

U[j_] := T[j]

S[j_] := 1/2 (U[j + 1] - U[j - 1])

Ue[j_] := U[j] + 1/2 S[j] (1 - v)

sym2 = U[0] - v (Ue[0] - Ue[-1]);

T[j_] := Exp[I j t]

Series[sym2 - Exp[-I v t], {t, 0, 5}];

Simplify[Normal[%]];

Collect[Expand[Normal[%]], t]

Simplify[sym2]

Sym2[v_, t_] := 1/32 E^(-4 I t) (v^2 - 10 E^(I t) v^2 + E^(6 I t) v^2 + 2 E^(5 I t) v (-4 + 3 v) - 4 E^(3 I t) v (-10 + 7 v) + E^(2 I t) v (-8 + 31 v) - E^(4 I t) (-32 + 24 v + v^2))

rg2 = Simplify[ComplexExpand[Re[sym2]]];

ig2 = Simplify[ComplexExpand[Im[sym2]]];

amp2 = Simplify[Sqrt[rg2^2 + ig2^2]];

phase2 = Simplify[ ArcTan[-ig2/rg2]/(v t)];

Series[amp2, {t, 0, 5}]

Series[phase2, {t, 0, 5}]

Plot3D[amp2, {t, 0, Pi}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["amp-2.jpg", %]

ContourPlot[amp2, {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 1}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["amp-2-cont.jpg", %]

Plot3D[phase2, {t, 0, Pi}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["phase-2.jpg", %]

ContourPlot[phase2, {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {-1, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 0.9, 0.99, 1, 1.25, 1.5}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["phase-2-cont.jpg", %]

err = Sym2[v, t] - Exp[-I v t];

err2 = Sym2[v, t/2]^2 - Exp[-I v t];

Plot3D[Log[Abs[err]/Abs[err2]]/Log[2], {t, 0.01, Pi/2}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["conv-rate-2.jpg", %]

ContourPlot[ Log[Abs[err]/Abs[err2]]/Log[2], {t, 0.01, Pi}, {v, 0.01, 1}, PlotPoints -> 250, Contours -> {1, 1.5, 1.75, 1.9, 2, 2.1, 2.2, 2.3, 2.4}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["conv-rate-2-cont.jpg", %]

ContourPlot[ If[Abs[sym2/Exp[-I v t]] <= 1, Abs[sym2/Exp[-I v t]], 0], {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {0.1, 0.25, 0.5, 0.75, 0.9, 1}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

errs = Sym2[v, t] - Sym2[v, t/2]^2;

errs2 = Sym2[v, t/4]^4 - Sym2[v, t/2]^2;

Plot3D[Log[Abs[errs]/Abs[errs2]]/Log[2], {t, 0.01, Pi/2}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

errt = Sym2[v, t] - Sym2[v/2, t]^2;

errt2 = Sym2[v/4, t]^4 - Sym2[v/2, t]^2;

Plot3D[Log[Abs[errt]/Abs[errt2]]/Log[2], {t, 0.0, Pi/2}, {v, 0, 0.1}]

(* 2nd order Fromm - RK *)

U[j_] := T[j]

S[j_] := 1/2 (U[j + 1] - U[j - 1])

Ue[j_] := U[j] + 1/2 S[j]

U1[j_] := U[j] - v (Ue[j] - Ue[j - 1])

S1[j_] := 1/2 (U1[j + 1] - U1[j - 1])

Ue1[j_] := U1[j] + 1/2 S1[j]

sym2 = 1/2 U[0] + 1/2 U1[0] - v/2 (Ue1[0] - Ue1[-1]);

T[j_] := Exp[I j t]

Series[sym2 - Exp[-I v t], {t, 0, 5}];

Simplify[Normal[%]];

Collect[Expand[Normal[%]], t]

Simplify[sym2]

Sym2[v_, t_] := 1/32 E^(-4 I t) (v^2 - 10 E^(I t) v^2 + E^(6 I t) v^2 + 2 E^(5 I t) v (-4 + 3 v) - 4 E^(3 I t) v (-10 + 7 v) + E^(2 I t) v (-8 + 31 v) - E^(4 I t) (-32 + 24 v + v^2))

rg2 = Simplify[ComplexExpand[Re[sym2]]];

ig2 = Simplify[ComplexExpand[Im[sym2]]];

amp2 = Simplify[Sqrt[rg2^2 + ig2^2]];

phase2 = Simplify[ ArcTan[-ig2/rg2]/(v t)];

Series[amp2, {t, 0, 5}]

Series[phase2, {t, 0, 5}]

Plot3D[amp2, {t, 0, Pi}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["amp-2rk.jpg", %]

ContourPlot[amp2, {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 1}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["amp-2rk-cont.jpg", %]

Plot3D[phase2, {t, 0, Pi}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["phase-2rk.jpg", %]

ContourPlot[phase2, {t, 0, Pi}, {v, 0, 1}, PlotPoints -> 250, Contours -> {-1, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 0.9, 0.99, 1, 1.25, 1.5}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["phase-2rk-cont.jpg", %]

err = Sym2[v, t] - Exp[-I v t];

err2 = Sym2[v, t/2]^2 - Exp[-I v t];

Plot3D[Log[Abs[err]/Abs[err2]]/Log[2], {t, 0.01, Pi/2}, {v, 0, 1}, AxesLabel -> {t, v}, LabelStyle -> Directive[18, Bold, Black]]

Export["conv-rate-2rk.jpg", %]

ContourPlot[ Log[Abs[err]/Abs[err2]]/Log[2], {t, 0.01, Pi}, {v, 0.01, 1}, PlotPoints -> 250, Contours -> {1, 1.5, 1.75, 1.9, 2, 2.1, 2.2, 2.3, 2.4}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

Export["conv-rate-2rk-cont.jpg", %]

ContourPlot[ If[Abs[sym2/Exp[-I v t]] <= 1, Abs[sym2/Exp[-I v t]], 0], {t, 0, Pi}, {v, 0, 1.5}, PlotPoints -> 250, Contours -> {0.1, 0.25, 0.5, 0.75, 0.9, 1}, ContourShading -> False, Axes -> {True, True}, AxesLabel -> {t, v}, ContourLabels -> All, LabelStyle -> Directive[18, Bold, Black]]

errs = Sym2[v, t] - Sym2[v, t/2]^2;

errs2 = Sym2[v, t/4]^4 - Sym2[v, t/2]^2;

Plot3D[Log[Abs[errs]/Abs[errs2]]/Log[2], {t, 0.01, Pi/2}, {v, 0, 1}]

errt = Sym2[v, t] - Sym2[v/2, t]^2;

errt2 = Sym2[v/4, t]^4 - Sym2[v/2, t]^2;

Plot3D[Log[Abs[errt]/Abs[errt2]]/Log[2], {t, 0.0, Pi/2}, {v, 0, 0.1}]

Conducting von Neumann stability analysis

15 Tuesday Jul 2014

Posted by Bill Rider in Uncategorized

≈ 4 Comments

In order to avoid going on another (epic) rant this week, I’ll change gears and touch upon a classic technique for analyzing the stability of numerical methods, along with extensions of the traditional approach.

Before diving into partial differential equations, I thought it would be beneficial to analyze the stability of ordinary differential equation integrators first. This provides the basis of the approach. Next I will show how the analysis proceeds for an important second-order method for linear advection. I will then provide the analysis of second-order discontinuous Galerkin methods, which introduce an important wrinkle on the schemes. I will close by giving the Mathematica commands used to produce the results.

It is always good to have references that can be read for more detail and explanation, so I will give a few seminal ones here:

* Ascher, Uri M., and Linda R. Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Vol. 61. SIAM, 1998.

* Durran, Dale R. Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. No. 32. Springer, 1999.

* LeVeque, Randall J. Numerical Methods for Conservation Laws. Vol. 132. Basel: Birkhäuser, 1992.

* LeVeque, Randall J. Finite Volume Methods for Hyperbolic Problems. Vol. 31. Cambridge University Press, 2002.

* Strikwerda, John C. Finite Difference Schemes and Partial Differential Equations. SIAM, 2004.

Let’s jump into the analysis of ODE solvers by looking at a fairly simple method, the forward Euler method. We can write the solver for a simple ODE, u_t =\lambda u, as simply u^{n+1} = u^n + \Delta t \lambda u^n. We take the right-hand side coefficient to be complex, \lambda = a + b \imath, and do some algebra. We have several principal goals: establishing conditions for stability, determining the accuracy, and characterizing the overall behavior of the method.

For stability we determine how much the value of the solution is amplified by the action of the integration scheme, u^{n+1}= A u^n = u^n + \Delta t \lambda u^n. Removing the variable gives the “symbol” of the integrator, A = 1 + \Delta t \lambda. Writing A=\left| A \right| \exp(-\imath \alpha), stability requires the magnitude \left| A \right| to be less than or equal to one. We can write down this answer explicitly, \left| A \right| = \sqrt{(1+\Delta t a)^2 + (\Delta t b)^2}. We can also plot this result easily (see the commands I used in Mathematica at the end of the post). On all the plots the horizontal axis is the real value a \Delta t and the vertical axis is the imaginary value b\Delta t.
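A minimal, self-contained sketch of that computation (a slight variant of the commands at the end of the post, using RegionPlot rather than the ContourPlot construction in the full listing):

Clear[A, L]
A = 1 + h L;                    (* symbol of forward Euler *)
L = (a + b I)/h;                (* lambda = a + b I, so h L = a dt + I b dt *)
absA = ComplexExpand[Abs[A]]    (* gives Sqrt[(1 + a)^2 + b^2], with a, b standing for a dt, b dt *)
RegionPlot[absA <= 1, {a, -3, 1}, {b, -2, 2}, FrameLabel -> {"a dt", "b dt"}]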

 

This plot just includes the values where the amplitude of the symbol is less than or equal to one.

[Figure: forwardEuler]

Next we look at accuracy using a Taylor series expansion. The Taylor series is simple given the analytical solution to the ODE, u(t)=\exp(\lambda t), and the expansion is classical, \exp(\lambda t)\approx 1 + \lambda t + \frac{1}{2}(\lambda t)^2 + \frac{1}{6}(\lambda t)^3 + O(t^4). We simply subtract the symbol of the operator from this Taylor series and look at the remainder, E= \frac{1}{2}(\lambda \Delta t)^2+ O(\Delta t^3), where the time has been replaced by the time step size.
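In Mathematica this check is essentially one Series call (compare the opening lines of the listing at the end of the post; the lowercase name soln is just for this sketch):

Clear[A, L]
soln = Normal[Series[Exp[h L], {h, 0, 4}]]   (* Taylor series of the exact solution, with h L = lambda dt *)
A = 1 + h L;                                  (* forward Euler symbol *)
Collect[Expand[soln - A], h]                  (* remainder (h L)^2/2 + (h L)^3/6 + ..., i.e. E = (1/2)(lambda dt)^2 + O(dt^3) *)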

The last couple of twists can add some significant texture to the behavior of the integration scheme. We can plot the “order stars,” which show whether the numerical scheme changes the amplitude of the answer more or less than the exact operator. These are called stars because they start to show star-like shapes for higher-order methods (mostly starting at third- and higher-order accuracy). Here is the plot for forward Euler.

[Figure: forwardEuler-star]

The last thing we will examine for the forward Euler scheme is the order of accuracy you should see during a time step refinement study as part of a verification exercise. Usually this is thought of as being the same as the order of the numerical scheme, but for a finite time step size the result deviates from the analytical order substantially. Computing this is really quite simple: one simply computes the symbol of the operator for half the time step size, \Delta t/2, giving A_\frac{1}{2}, and applies it for two time steps (so that the time ends up at the same place as a single step with \Delta t); this is simply the square of the operator at the smaller time step size. To get the order of accuracy you take the operators, subtract the exact solution, take the absolute value of the result, then compute the order of accuracy as in the usual verification exercise,

a=\frac{\log\frac{\left|A-\exp(\lambda \Delta t)\right|}{\left| A_\frac{1}{2}^2 -\exp(\lambda \Delta t )\right|} }{\log(2)}.

We can plot the result easily with Mathematica. It is notable how different the results are from the asymptotic value of one for reasonable, but finite, values of \Delta t. As the operator becomes unstable, the convergence rate actually becomes very large. This is a word of warning to the practitioner: very high rates of convergence can actually be a very bad sign for a calculation.
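A compact, self-contained version of that calculation (the names Asym, AsymHalf, and nRate are just for this sketch; the full listing at the end of the post does the same thing with A and Aab2):

Asym = 1 + (a + b I);                  (* forward Euler symbol, with a = Re(lambda) dt, b = Im(lambda) dt *)
AsymHalf = (1 + (a + b I)/2)^2;        (* two steps at dt/2, reaching the same time *)
nRate = Log[Abs[Asym - Exp[a + b I]]/Abs[AsymHalf - Exp[a + b I]]]/Log[2];
Plot3D[nRate, {a, -3, 3}, {b, -2, 2}, AxesLabel -> {"a dt", "b dt", "n"}, PlotRange -> All]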

[Figure: forwardEuler-order]

[Figure: forwardEuler-order-contour]

We can now examine a second-order method with relative ease. Doing the analysis is akin to writing a computer code, albeit symbolically. The second-order method uses a predictor-corrector format where a half step is taken using forward Euler and this result is used to advance the solution the full step. This is an improved forward Euler method. It is explicit in that the solution can be evaluated solely in terms of the initial data. The scheme is the following: u^{n+1/2} = u^n + \frac{\Delta t}{2} \lambda u^n for the predictor, and u^{n+1} = u^n + \Delta t \lambda u^{n+1/2} for the corrector. The symbol is computed as before giving A=1+ \Delta t \lambda \left( 1+\Delta t \lambda /2\right). Getting the full properties of the method now just requires “turning the crank” as we did for the forward Euler scheme.
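The symbol follows from writing the two stages down directly, just as in the listing at the end of the post:

Clear[A, A1, L]
A1 = 1 + 1/2 h L;      (* predictor: a half step of forward Euler *)
A = 1 + h L A1;        (* corrector *)
Expand[A]              (* gives 1 + h L + (h L)^2/2 *)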

The truncation error has gained an order of accuracy and now is E= \frac{1}{6}(\lambda \Delta t)^3+ O(\Delta t^4).

The stability plot is more complex, giving a larger stability region, particularly along the imaginary axis.

[Figure: rk2]

The order star looks much more like a star.

[Figure: rk2-star]

Finally, the convergence rate plot is much less pathological, although some of the same conclusions can be drawn from the behavior where the scheme is unstable (giving the very large convergence rates).

[Figures: rk2-order, rk2-order-contour]

We will finish this week’s post by turning our attention to a second-order implicit scheme, the backward differentiation formula (BDF2). Everything will follow from the previous two examples, but the scheme adds an important twist (or two). The first twist is that the method is implicit, meaning that the left and right hand sides of the method are coupled, and the second is that the method depends on three time levels of data, not two as in the first couple of methods.

The update for the method is written \frac{3}{2} u^{n+1}-2 u^n +\frac{1}{2} u^{n-1} = \Delta t \lambda u^{n+1}, and the amplification now satisfies a quadratic equation, \left( \frac{3}{2} - \Delta t \lambda\right) A^2 -2 A +\frac{1}{2} = 0, with two roots. One of these roots has a Taylor series expansion that demonstrates second-order accuracy for the scheme; the other does not. The inaccurate root must still be stable for the scheme to be stable. The accurate root is

A=\frac{2+\sqrt{1+2\Delta t \lambda}}{3 - 2\Delta t \lambda}

with an error of E= \frac{1}{3}(\lambda \Delta t)^3+ O(\Delta t^4).

The second, inaccurate root is also called spurious and has the form

A=\frac{2-\sqrt{1+2\Delta t \lambda}}{3 - 2\Delta t \lambda}.

[Figure: bdf2]

The stability of the scheme requires taking the maximum of the magnitude of both roots.
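A sketch of the two roots and the combined stability region, following the same steps as the listing at the end of the post (the roots are written in an equivalent, slightly tidier form, and RegionPlot is used in place of the ContourPlot construction):

Clear[A1, A2, L]
A1 = (2 + Sqrt[1 + 2 h L])/(3 - 2 h L);    (* accurate root *)
A2 = (2 - Sqrt[1 + 2 h L])/(3 - 2 h L);    (* spurious root *)
Series[A1, {h, 0, 3}]                       (* agrees with Exp[h L] through (h L)^2 *)
L = (a + b I)/h;
RegionPlot[Max[Abs[A1], Abs[A2]] <= 1, {a, -8, 6}, {b, -5, 5}, FrameLabel -> {"a dt", "b dt"}]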

Using the accurate root we can examine the order star, and the rate of convergence of the method as before.

 

Next week we will look at a simple partial differential equation analysis, which adds new wrinkles.

[Figures: bdf2-order-contour, bdf2-star, bdf2-order]

While this sort of analysis can be done by hand, the greatest utility can be achieved by using symbolic or numerical packages such as Mathematica. Below I’ve included the Mathematica code used for the analyses given above.

Soln = Collect[Expand[Normal[Series[Exp[h L], {h, 0, 6}]]], h]

(* Forward Euler *)

a =.; b =.

A = 1 + h L

Aab2 = (1 + h L/2 )^2

Collect[Expand[Normal[Series[A, {h, 0, 6}]]], h];

Soln - %

L = ( a + b I)/h;

ContourPlot[If[Abs[A] < 1, -Abs[A], 1], {a, -3, 1}, {b, -2, 2},
PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]

Export["forwardEuler.jpg", %]


ContourPlot[
If[Abs[A]/Abs[Exp[a + b I]] < 1, -Abs[A]/Abs[Exp[a + b I]],
1], {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]

Export["forwardEuler-star.jpg", %]


Plot3D[Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/
Log[2], {a, -3, 3}, {b, -2, 2}, PlotPoints -> 100,
AxesLabel -> {a dt, b dt, n},
LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]

Export["forwardEuler-order.jpg", %]


ContourPlot[
Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2], {a, -3,
3}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {0, 0.25, 0.5, 0.75, 0.9, 1, 1.1, 1.25, 1.5, 2, 3, 4, 5,
10}, ContourShading -> False, ContourLabels -> All,
Axes -> {True, True}, AxesLabel -> {a dt, b dt},
LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]

Export["forwardEuler-order-contour.jpg", %]


ContourPlot[
If[Abs[A] < 1, Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]],
0], {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {2, 2.5, 3, 3.5, 4}, ContourShading -> False,
Axes -> {False, True}]

Plot3D[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]], {a, -3,
3}, {b, -2, 2}, PlotPoints -> 100]

ContourPlot[
If[Abs[A]/Abs[Exp[a + b I]] < 1, -Abs[A]/Abs[Exp[a + b I]],
1], {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {False, True}]

(* RK 2 *)

L =.

A1 = 1 + 1/2 h L

A = 1 + h L A1

A12 = 1 + 1/4 h L;

Aab2 = (1 + h L A12/2 )^2

Collect[Expand[Normal[Series[A, {h, 0, 6}]]], h] - Soln

L = ( a + b I)/h;

ContourPlot[If[Abs[A] < 1, -Abs[A], 1], {a, -3, 1}, {b, -2, 2},
PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]

Export["rk2.jpg", %]

ContourPlot[
If[Abs[A]/Abs[Exp[a + b I]] < 1, -Abs[A]/Abs[Exp[a + b I]],
1], {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]

Export["rk2-star.jpg", %]

Plot3D[Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/
Log[2], {a, -3, 3}, {b, -2, 2}, PlotPoints -> 100,
AxesLabel -> {a dt, b dt, n},
LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]

Export["rk2-order.jpg", %]

ContourPlot[
Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2], {a, -3,
3}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {0, 0.25, 0.5, 0.75, 0.9, 1, 1.1, 1.25, 1.5, 2, 3, 4, 5,
10}, ContourShading -> False, ContourLabels -> All,
Axes -> {True, True}, AxesLabel -> {a dt, b dt},
LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]

Export["rk2-order-contour.jpg", %]

(* BDF 2 *)

A =.

L =.

Solve[3/2 A^2 - 2 A + 1/2 == h A^2 L, A]

A1 = (-2 - Sqrt[1 + 2 h L])/(-3 + 2 h L); A2 = (-2 + Sqrt[1 + 2 h L])/(-3 + 2 h L);

Solve[3/2 A^2 - 2 A + 1/2 == 1/2 h A^2 L, A]

Aab2 = ((-2 - Sqrt[1 + h L])/(-3 + h L))^2;

Collect[Expand[Normal[Series[A1, {h, 0, 6}]]], h] - Soln

Collect[Expand[Normal[Series[A2, {h, 0, 6}]]], h] - Soln

L = ( a + b I)/h;

ContourPlot[
If[Max[Abs[A1], Abs[A2]] < 1, -Max[Abs[A1], Abs[A2]], 1], {a, -8,
6}, {b, -5, 5}, PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]

Export["bdf2.jpg", %]

ContourPlot[

If[Abs[A1]/Abs[Exp[a + b I]] < 1, -Abs[A1]/Abs[Exp[a + b I]],
1], {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
ContourShading -> False, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]

Export["bdf2-star.jpg", %]

Plot3D[Log[Abs[A1 - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/
Log[2], {a, -8, 6}, {b, -5, 5}, PlotPoints -> 100,
AxesLabel -> {a dt, b dt, n},
LabelStyle -> Directive[18, Bold, Black]]

Export["bdf2-order.jpg", %]

ContourPlot[
Log[Abs[A1 - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2], {a, -8,
6}, {b, -5, 5}, PlotPoints -> 250,
Contours -> {-1, 0, 0.5, 0.9, 1, 1.1, 1.5, 2, 5, 10},
ContourShading -> False, ContourLabels -> All, Axes -> {True, True},
AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black],
PlotRange -> All]

Export["bdf2-order-contour.jpg", %]

 

The 2014 SIAM Annual Meeting, or what is the purpose of Applied Mathematics?

11 Friday Jul 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Donald Knuth -“If you find that you’re spending almost all your time on theory, start turning some attention to practical things; it will improve your theories. If you find that you’re spending almost all your time on practice, start turning some attention to theoretical things; it will improve your practice.”

This week I visited Chicago for the 2014 SIAM Annual Meeting (Society for Industrial and Applied Mathematics). It was held at the Palmer House, which is an absolutely stunning venue swimming in old-fashioned style and grandeur. It is right around the corner from Millennium Park, which is one of the greatest urban green spaces in existence, which itself is across the street from the Art Institute. What an inspiring setting to hold a meeting. Chicago itself is one of the great American cities with a vibrant downtown and numerous World-class sites.

The meeting included a lot of powerful content and persuasive applications of applied mathematics. Still, some of the necessary gravity for the work seems to be missing from the overall dialog, with most of the research missing the cutting edge of reality. There just seems to be a general lack of vitality and importance to the overall scientific enterprise, and applied mathematics is suffering likewise. This isn’t merely the issue of funding, which is relatively dismal, but overall direction and priority. In total, we aren’t asking nearly enough from science, and mathematics is no different. The fear of failure is keeping us from collectively attacking society’s most important problems. The distressing part of all of this is the importance and power of applied mathematics and the rigor it brings to science as a whole. We desperately need some vision moving forward.

The importance of applied mathematics to the general scientific enterprise should not be in doubt, but it is. I sense a malaise in the entire scientific field stemming from the overall lack of long-term perspective for the Nation as a whole. Is the lack of vitality specific to this field, or a general description of research?

I think it is useful to examine how applied mathematics can be an important force for order, confidence and rigor in science. Indeed applied mathematics can be a powerful force to aid the practice of science. For example, there is the compelling case of compressed sensing (told in the Wired article http://www.wired.com/2010/02/ff_algorithm/). The notion that the L1 norm had magical properties to help unveil the underlying sparsity in objects was an old observation, but not until mathematical rigor was put in place to underpin this observation did the practice take off. There is no doubt that the entire field exploded in interest when the work of Candes, Tao and Donoho put a rigorous face on the magical practice of regularizing a problem with the L1 norm. It shouldn’t be under-estimated that the idea came at the right time; this is a time when we are swimming in data from an increasing array of sources, and compressed sensing conceptually provides a powerful tool for dealing with this. At the same time, the lack of rigor limited the interest in the technique prior to 2004 or 2005.

One of the more persuasive cases where applied mathematics has provided a killer theory is the work of Peter Lax on hyperbolic conservation laws. He laid the groundwork for stunning progress in modeling and simulating with confidence and rigor. There are other examples, such as the total variation diminishing theory of Harten, whose mathematical order and confidence powered the penetration of high-resolution methods into broad usage for solving hyperbolic PDEs. Another example is the relative power and confidence brought to the solution of ordinary differential equations, or numerical linear algebra, by the mathematical rigor underlying the development of software. These are examples where the presence of applied mathematics makes a consequential and significant difference in the delivery of results with confidence and rigor. Each of these is an example of how mathematics can unleash a capability in truly “game-changing” ways. A real concern is why this isn’t happening more broadly or in a targeted manner.

I started the week with a tweet of Richard Hamming’s famous quote – “The purpose of computing is insight, not numbers.” During one of the highlight talks of the meeting we received a modification of that maxim by the lecturer Leslie Greengard,

“The purpose of computing is to get the right answer”.

A deeper question with an uncertainty quantification spin would be “which right answer?” My tweet in response to Greengard then said

“The purpose of computing is to solve more problems than you create.”

This entire dialog is the real topic of the post. Another important take was by Joseph Teran on scientific computing in special effects for movies. Part of what sat wrong with me was the notion that looking right becomes equivalent to being right. On the other hand the perception and vision of something like turbulent fluid flow shouldn’t be underestimated. If it looks right there is probably something systematic lying beneath the superficial layer of entertainment. The fact that the standard for turbulence modeling for science and movies might be so very different should be startling. Ideally the two shouldn’t be that far apart. Do special effects have something to teach us? Or something worthy of explanation? I think these questions make mathematicians very uncomfortable.

If it makes you uncomfortable, it might be a good or important thing to ask. That uncomfortable question might have a deep answer that is worth attacking. I might prefer to project this entire dialog into the broader space of business practice and advice. This might seem counter-intuitive, but the broader societal milieu today is driven by business.

“Don’t find customers for your products, find products for your customers.” ― Seth Godin

One of the biggest problems in the area where I work is the maturity of the field. People simply don’t think about what the entire enterprise is for. Computational simulation and modeling is about using a powerful tool to solve problems. The computer allows certain problem solving approaches to be used that aren’t possible without it, but the problem solving is the central aspect. I believe that the fundamental utility of modeling and simulation is being systematically taken for granted. The centrality of the problem being solved has been lost and replaced by simpler, but far less noble pursuits. The pursuit of computational power has become a fanatical desire that has swallowed the original intent. Those engaging in this pursuit have generally good intentions, but lack the well-rounded perspective on how to achieve success. For example, the computer is only one small piece of the toolbox and, to use a mathematical term, necessary, but gloriously insufficient.

Currently the public policy is predicated upon the notion that a bigger faster computer provides an unambiguously better solution. Closely related to this notion is a technical term in computational modeling and mathematics known as convergence. The model converges or approaches a solution as more computational resource is applied. If you do everything right this will happen, but as problems become more complex you have to do a lot of things right. The problem is that we don’t have the required physical or mathematical knowledge to have the expectation of this in many cases. These are the very cases that we use to justify the purchase of new computers.

The guarantee of convergence ought to be at the very heart of where applied mathematics is striving; yet the community as a whole seems to be shying away from the really difficult questions. Today too much applied mathematics focuses upon simple model equations that are well behaved mathematically, but only capture cartoon aspects of the real problems facing society. Over the past several decades focused efforts on attacking these real problems have retreated. This retreat is part of the overall base of fear of failure in research. Despite the importance of these systems, we are not pushing the boundaries of knowledge to envelop them with better understanding. Instead we redouble our efforts to understand simple model equations. This lack of focus on real problems is one of the deepest and most troubling aspects of the current applied mathematics community.

We have evolved to the point in computational modeling and simulation where today we don’t actually solve problems any more. We have developed useful abstractions that have taken the place of the actual problem solving. In a deep sense we now solve cartoonish versions of actual problems. These cartoons allow the different sub-fields to work independently of one another. For example, the latest and greatest computers require super high-resolution 3-D (or 7-D) solutions to the model problems. Actual problem solving rarely (never) works this way. If the problem can be solved in a lower-dimensional manner, it is better. Actual problem solving always starts simple and builds its way up. We start in one dimension and gain experience, run lots of problems, add lots of physics to determine what needs to be included in the model. The mantra of the modern day is to short-circuit this entire approach and jump to add in all the physics, and all the dimensionality, and all the resolution. It is the recipe for disaster, and that disaster is looming before us.

The reason for this is a distinct lack of balance in how we are pursuing the objective of better modeling and simulation. To truly achieve progress we need a return to a balanced problem solving perspective. While this requires attention to computing, it also requires physical theory and experiment, deep engineering, computer science, software engineering, mathematics, and physiology. Right now, aside from computers themselves and computer science, the endeavor is woefully out of balance. We have made experiments almost impossible to conduct, and starved the theoretical aspects of science in both physics and mathematics.

Take our computer codes as an objective example of what is occurring. The modeling and simulation is no better than the physical theory and the mathematical approximations used. In many cases these ideas are now two or three decades old. In a number of cases the theory gives absolutely no expectation of convergence as the computational resource is increased. The entire enterprise is predicated on this assumption, yet it has no foundation in theory! The divorce between what the codes do and what the applied mathematicians at SIAM do is growing. The best mathematics is more and more irrelevant to the codes being run on the fastest computers. Where excellent new mathematical approximations exist they cannot be applied to the old codes because of the fundamental incompatibility of the theories. Despite these issues little or no effort exists to rectify this terrible situation.

Why?

Part of the reason is our fixation on short-term goals, and inability to invest in long-term ends. This is true in science, mathematics, business, roads, bridges, schools, universities, …

Long-term thinking has gone the way of the dinosaur. It died in the 1970’s. I came across a discussion of one of the key ideas of our time, the perspective that business management is all about maximizing shareholder value. It was introduced in 1976 by the Nobel Prize-winning economist Milton Friedman and took hold like a leech. It is arguable that it is the most moronic idea ever in business (“the dumbest idea ever”). Nonetheless it has become the lifeblood of business thought, and by virtue of being a business mantra, the lifeblood of government thinking. It has been poisoning the proverbial well ever since. It has become the reason for the vampiric obsession with short-term profits, and a variety of self-destructive business practices. The only “positive” side has been its role in driving the accumulation of wealth within chief executives, and financial services. Stock is no longer held for any significant length of time, and business careers hinge upon the quarterly balance sheet. Whole industries have been ground under the wheels of the quarterly report. Government research, in a lemming-like fashion, has followed suit and driven research to be slaved to the quarterly report too.

The consequences for the American economy have been frightening. Aside from the accumulation of wealth by upper management, we have had various industries completely savaged by the practice, rank and file workers devalued and fired, and no investment in future value. The stock trading frenzy created by this short-term thinking has driven the creation of financial services that produce nothing of value for the economy, and have succeeded in destabilizing the system. As we have seen in 2008 the results can be nearly catastrophic. In addition, the entire business-government system has become unremittingly corrupt and driven by greed and influence peddling. Corporate R&D used to be a vibrant source of science funding and form a pipeline for future value. Now it is nearly barren with the great corporate research labs fading memories. The research that is funded is extremely short-term focused and rarely daring or speculative. The sorts of breakthroughs that have become the backbone of the modern economy no longer get any attention.

The government has been similarly infested as anything that is “good” business practice is “good” for government management. Science is no exception. We now have to apply similar logic to our research and submit quarterly reports. Similar to business we have had to strip mine the future and inflate our quarterly bottom line. The result has been a systematic devaluing of the future. The national leadership has adopted the short-term perspective whole cloth.

At least in some quarters there is recognition of this trend and a push to reverse it. It is going to be a hard path to reversing the problem as the short-term focus has been the “goose that laid the golden egg” for many. These ideas have also distorted the scientific enterprise in many ways. The government’s and business’ investment in R&D has become inherently shortsighted. This has caused the whole approach to science to become radically imbalanced. Computational modeling and simulation is but one example that I’m intimately familiar with. It is time to turn things around.

Irrational fear is killing our future?

04 Friday Jul 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“Fear is the mind-killer.” — Frank Herbert

The United States likes to think of itself as a courageous country (a country full of heroes). This picture is increasingly distant from the reality of a society of cowards who are almost scared of their own shadows. Why? What is going on in our society to drive this trend to be scared of everything? Calling the United States a bunch of cowards seems rather hyperbolic, and it is. The issue is that the leadership of the nation is constantly stoking the fires of irrational fear as a tool to drive political goals. By failing to aspire toward a spirit of shared sacrifice and duty, we are creating a society that looks to avoid anything remotely dangerous or risky. The consequences of this cynical form of gamesmanship are slowly ravaging the United States’ ability to be a dynamic force for anything good. In the process we are sapping the vitality that once brought the nation to the head of the international order. In some ways this trend is symptomatic of our largess as the sole military and economic superpower of the last half of the 20th Century. The fear is drawn from the societal memory of our fading role in the World, and the evolution away from the mono-polar power we once represented.

Where is the national leadership that calls on citizens to reach for the stars? Where are the voices asking for courage and sacrifice? Once upon a time we had leaders who asked much of us.

“For of those to whom much is given, much is required. “
and
“And so, my fellow Americans: ask not what your country can do for you — ask what you can do for your country.” – John F. Kennedy.

The consequences of this fear go well beyond mere name-calling or implications associated with the psychological aspects of fear, but undermine the ability of the Country to achieve anything of substance, or spend precious resources rationally. The use of fear to motivate people’s choices by politicians is rampant, as is the use of fear in managing work. Fear moves people to make irrational choices, and our Nation’s leaders, whether in government or business, want people to choose irrationally in favor of outcomes that benefit those in power. Fear is a powerful way to achieve this. All of this is a serious negative drain on the nation. In almost any endeavor trying to do things you are afraid of leads to diminished performance. One works harder to avoid the negative outcome than achieve the positive one. Fear is an enormous tax on all our efforts, and usually leads to the outcomes that we feared in the first place. We live in a world where broad swaths of public policy are fear-driven. It is a plague on our culture.

Like many of you, my attention has been drawn to the events in Iraq (and Syria) with the onslaught of ISIS. A chorus of fear mongering by politicians bent on scaring the public to support military action to stem the tide of anti-Western factions in the region has coupled this. Supposedly ISIS is worse than Al Qaeda, and we should be afraid. The idea is that you will be so afraid that you will demand action. In fact that hamburger you are stuffing into your face is a much larger danger to your well being than ISIS will ever be. Worse yet, we put up with the fear-mongers whose fear baiting is aided and abetted by the news media because they see ratings. When we add up the costs, this chorus of fear is savaging us and it is hurting our Country deeply.

[Image: Fighters of the al-Qaeda-linked Islamic State of Iraq and the Levant parade at the Syrian town of Tel Abyad]

“Stop letting your fear condemn you to mediocrity.” ― Steve Maraboli,

We have collectively lost the ability to judge the difference between a real threat and an unfortunate occurrence. Even if we include the loss of life on 9-11 the threat to you due to terrorism is minimal. Despite this reality we expend vast sums of money, time, effort and human lives trying to stop it. It is an abysmal investment of all of these things. We could do so much more with those resources. To make matters worse, the “War on Terror” has distorted our public policy in numerous ways. Starting with the so-called Patriot Act we have sacrificed freedom and privacy at the altar of public safety and national security. We created the Department of Homeland Security (a remarkably Soviet sounding name at that), which is a monument to wasting taxpayer money. Perhaps the most remarkable aspect of the DHS is that entering the United States is now more arduous than entering the former Soviet Union (Russia). This fact ought to be absolutely appalling to the American psyche. Meanwhile, numerous bigger threats go completely untouched by action or effort to mitigate their impact.

For starters, as the news media became more interested in ratings than news, they began to amplify the influence of exotic events. Large, unusual, violent events are ratings gold, and their presence in the news is grossly inflated. The mundane everyday things that are large risks are also boring or depressing, and people would just as soon ignore them. In many cases the mundane everyday risks are huge moneymakers for the owners and advertisers in the media, and they have no interest in killing their cash cow even at the expense of human life (think the medical-industrial complex, and agri-business). Given that people are already horrific at judging statistical risks, these trends have only tended to increase the distance between perceived and actual danger. Politicians know all these things and use them to their advantage. The same things that get ratings for the news grab voters’ attention, and the cynics “leading” the country know it.

When did all this start? I tend to think that the tipping point was the mid-1970’s. This era was extremely important for the United States with a number of psychically jarring events taking center stage. The upheaval of the 1960’s had turned society on its head with deep changes in racial and sexual politics. The Vietnam War had undermined the Nation’s innate sense of supremacy while scandal ripped through the government. Faith and trust in the United States took a major hit. At the same time it marked the apex of economic equality with the beginnings of the trends that have undermined it ever since. This underlying lack of faith and trust in institutions has played a key role in powering our decline. The anti-tax movement that set in motion the public policy driving the growing inequality in income and wealth began then, arising from these very forces. These coupled to the insecurities of national defense, gender and race to form the foundation of the modern conservative movement. These fears have been used over and over to drive money and power into the military-intelligence-industrial-complex at a completely irrational rate.

“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” ― Benjamin Franklin

[Image: President Bush renews the USA Patriot Act]

The so-called Patriot Act is an exemplar of the current thinking. There seems to be no limit to the amount of freedom Americans will sacrifice to gain a marginal and inconsequential amount of safety. The threat of terrorism in no way justifies the cost. Dozens of other issues are a greater threat to the safety of the public, yet receive no attention. We can blame the unholy alliance of the news media and politicians for fueling this completely irrational investment in National security coupled to a diminishment of personal, and societal liberty. We have created a nation of cowards who will be declared to be heroes by the same forces that have fueled the unrelenting cowardice. The fear that 9-11 engendered in the culture unleashed a number of demons on our culture that we continue to hold onto. In addition to the reduction in the freedoms we supposedly cherish, we have allowed our nation to conduct itself in a manner opposed to our deepest principles for more than a decade.

“If failure is not an option, then neither is success.” ― Seth Godin

We are left with a society that commits major resources and effort into managing inconsequential risks. Our public policy is driven by fear instead of hope. Our investments are based on fear, and lack of trust. Very little we end up doing now is actually bold or far-sighted. Instead we are over-managed and choose investments with a guarantee of payoff however small it might be.

Fear of failure is killing progress. Research is about doing new things, things that have never been done before. This entails a large amount of risk of failure. Most of the time there is a good reason why things haven’t been done before. Sometimes it is difficult, or even seemingly impossible. At other times technology is opening doors and possibilities that didn’t exist. Nonetheless the essence of good research is discovery and discovery involves risk. The better the research is, the higher the chance for failure, but the potential for higher rewards also exists. What happens when research can’t ever fail? It ceases being research. More and more our public funding of research is falling prey to the fear-mongering, risk avoiding attitudes, and suffering as a direct result.

At a deep level research is a refined form of learning. Learning is powered by failure. If you are not failing, you are not learning or more deeply stretching yourself. One looks to put themselves into the optimal mode for learning by stretching themselves beyond their competence just enough. Under these conditions people should fail a lot, not so much as to be disastrous, but enough to provide feedback. Research is the same. If research isn’t failing it is not pushing boundaries and the efforts are suboptimal. This nature of suboptimality defines the current research environment. The very real conclusion is that our research is not failing nearly as much as it needs to. Too much success is actually a sign that the management of the research is itself failing.

A huge amount of the problem is directly related to short-term thinking, where any profit made now is celebrated regardless of how the future works out. This is part of the whole “maximize shareholder value” mindset that has created a pathological business climate. Future value and long-term planning has become meaningless in business because any money invested isn’t available for short-term shareholder value. More than this, the shareholder is free to divest themselves of their shares once the value has been sucked away. Over the long-term this has created a lot of wealth, but slowly and steadily hollowed out the long-term future prospects for broad swaths of the economy.

To make matters worse government has become addicted to these very same business practices. Research funding is no exception. The results must be immediate and any failure to give an immediate return is greeted as a failure. The quality and depth of long-term research is being destroyed by the application of these ideas. These business ideas aren’t good for business either, but for science they are deadly. We are slowly and persistently destroying the vitality of the future for fleeting gains in the present.

“Anyone who says failure is not an option has also ruled out innovation.” ― Seth Godin

If the United States is going to proudly proclaim itself the “home of the brave and the land of the free,” we might make an effort to actually act like it. Instead we just proclaim it like another empty slogan. Right now this slogan is increasingly false advertising.

Keeping it real in high performance computing

27 Friday Jun 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“Theories might inspire you, but experiments will advance you.” ― Amit Kalantri

This week I have a couple of opportunities to speak directly with my upper management. At one level this is nothing more than an enormous pain in the ass, but that is my short-sighted monkey-self speaking. I have to prepare two talks and spend time vetting them with others. It is enormously disruptive to getting “work” done.

On the other hand, a lot of my “work” is actually a complete waste of time. Really. Most of what I get paid for is literally a complete waste of a very precious resource, time. So it might be worthwhile making good use of these opportunities. Maybe something can be done to provide work more meaning, or perhaps I need to quit feeling the level of duty to waste my precious time on stupid meaningless stuff some idiot calls work. Most of the time wasting crap is feeding the limitless maw of the bureaucracy that infests our society.

Now we can return to the task at hand. The venues for both engagements are somewhat artificial and neither is ideal, but it’s what I have to work with. At the same time, it is the chance to say things that might influence change for the better. Making this happen to the extent possible has occupied my thoughts. If I do it well, the whole thing will be worth the hassle. So with hope firmly in my grasp, I’ll charge ahead.

I always believe that things can get better, which could be interpreted as whining, but I prefer to think of this as a combination of the optimism of continuous improvement and the quest for excellence. I firmly believe that actual excellence is something we have a starkly short supply of. Part of the reason is the endless stream of crap that gets in the way of doing things of value. I’m reminded of the phenomenon of “bullshit jobs” that has recently been observed (http://www.salon.com/2014/06/01/help_us_thomas_piketty_the_1s_sick_and_twisted_new_scheme/). The problem with bullshit jobs is that they have to create more work to keep them in business, and their bullshit creeps into everyone’s life as a result. Thus, we have created a system that works steadfastly to keep excellence at bay. Nonetheless in keeping with this firmly progressive approach, I need to craft a clear narrative arc that points the way to a brighter, productive future.


High performance computing is one clear over-arching aspect of what I work on. Every single project I work on connects to this. The problem is that to a large extent HPC is becoming increasingly disconnected from reality. Originally computing was an important element in various applied programs starting with the Manhattan project. Computing had grown in prominence and capability through the (first) nuclear age in supporting weapons and reactors alike. NASA also relied heavily on contributions from computing, and the impact of computational modeling improved the efficiency of delivery of science and engineering. Throughout this period computing was never the prime focus, but rather a tool for effective delivery of a physical product. In other words there was always something real at stake that was grounded in the physical “real” world. Today, more and more, there seems to have been a transition to a World where the computers have become the reality.

More and more, the argument that we lack support for the next supercomputer is taking on the tone and language of the past, as if we have a “supercomputer gap” with other countries. The tone and approach is reminiscent of the “missile gap” of a generation ago, or the “bomber gap” two generations ago. Both of those gaps were BS to a very large degree, and I firmly believe the supercomputer gap is too. These gaps are effective marketing ploys to garner support for building more of our high performance computers. Instead we should focus on the good high performance computing can do for real problem solving capability, and let the computing chips fall where they may.

There is a gap, but it isn’t measured in terms of FLOPS, CPUs, or memory; it is measured in terms of our practice. Our supercomputers have lost touch with reality. Supercomputing needs to be connected to a real tangible activity where the modeling assists experiments, observations and design in producing something that services a societal need. These societal needs could be anything from national defense, cyber-security, space exploration, to designing better more fuel-efficient aircraft, or safer more efficient energy production. The reality we are seeing is that each of these has become secondary to the need for the fastest supercomputer.

A problem is that the supercomputing efforts are horribly imbalanced, having become primarily a quest for hardware capable of running the LINPACK benchmark the fastest. LINPACK does not reflect the true computational character of the real applications supercomputers run. In many ways it is almost ideally suited towards demonstrating a high operation count. Ironically it is nearly optimal in its lack of correspondence to applications. As a result, the dynamic that has emerged is that real application power has become a secondary, optional element in our thinking about supercomputing.

These developments highlight our disconnect from reality. In the past, the reality of the objective was the guiding element in computing. If the computing program got out of balance, reality would intercede to slay any hubris that developed. This formed a virtuous cycle where experimental data would push theory, or computed predictions would drive theorists to explain, or design experiments to provide evidence.

In fact, we have maimed this virtuous cycle by taking reality out of the picture.

The Stockpile Stewardship program was founded as the alternative to the underground testing of nuclear weapons, and supercomputing was its flagship. We even had a certain official say that a computer could be “Nevada in a box” and pushing the return key would be akin to pressing the button on a nuclear test. It was a foolish and offensive thing to say, and almost everyone else in the room knew it was; yet this point of view has taken root, and continues to wreak havoc. Then and now, the computer hardware has become nearly the sole motivation, with a loss of the purpose for the entire activity far too common. Everything else needed to be successful has been short-changed in the process. With the removal of the fully integrated experiments of the nuclear test from the process, the balance in everything else needed to be carefully guarded. Instead, this balance was undermined almost from the start. We have not put together a computing program with sufficient balance, support and connections to theory and experiment to succeed, as the Country should demand.

“The real world is where the monsters are.” ― Rick Riordan


I have come to understand that there is something essential in building something new. In the nuclear reactor business, the United States continues to operate old reactors, and fails to build new ones. Given the maturity of the technology, the tendency in high performance computing is to allow highly calibrated models to be used. These models are highly focused on working within a parameter space that is well trodden and continues to be the focus. If the United States were building new reactors with new designs the modeling would be taxed by changes in the parameter space. The same is true for nuclear weapons. In the past there were new designs and tests that either confirmed existing models, or yielded a swift kick to the head with an unexplained result. It is the continued existence of the inexplicable that would jar models and modeling out of an intellectual slumber. Without this we push ourselves into realms of unreasonable confidence in our ability to model things. Worse yet we allow ourselves to pile all our uncertainty into calibration, and then declare confidently that we understand the technology.


At the core of the problem is the simple, easy and incorrect view that bigger, faster supercomputers are the key. The key is deep thought and a problem solving approach devised by brilliant scientists exercising the full breadth of scientific tools available. The computer in many ways is the least important element in successful stewardship; it is necessary, but woefully insufficient to provide success.

“Never confuse movement with action.” ― Ernest Hemingway

Supercomputing was originally defined as the use of powerful computers to solve problems. Problem solving was the essence of the activity. Today this is only true by fiat. Supercomputing has become almost completely about the machines, and the successful demonstration of the machines’ power on stunt applications or largely irrelevant benchmarks. Instead of defining the power of computing by problems being solved, the raw power of the computer has become the focus. This has led to a diminishment in the focus on algorithms and methods, which actually have a better track record than Moore’s law for improving computational problem solving capability. The consequence of this misguided focus is a real diminishment in our actual capability to solve problems with supercomputers. In other words, our quest for the fastest computer is ironically undermining our ability to use computers as effectively as possible.

The figure below shows how improvements in numerical linear algebra have competed with Moore's law over a period of nearly forty years. This figure was created in 2004 as part of a DOE study (the Scales workshop URL?). The figure has several distinct problems: the dates are not included, and the algorithm curve is smooth. Adding texture to this is very illuminating, because the last big algorithmic breakthrough occurred in the mid 1980's (twenty years prior to the report). Previous breakthroughs had occurred on an even more frequent time scale, every 7-10 years. Therefore in 2004 we were already overdue for a new breakthrough, which has not come yet. On the other hand, one might conclude that multigrid is the ultimate linear algebra algorithm for computing (I for one don't believe this). Another meaningful theory might be that our attention was drawn away from improving the fundamental algorithms toward making these algorithms work on massively parallel supercomputers. Perhaps improving on multigrid is simply a difficult problem, and we have already snatched all the low-hanging fruit. I'd even grudgingly admit that multigrid might be the ultimate linear algebra method, but my faith is that something better is out there waiting to be discovered. New ideas and differing perspectives are needed to advance. Today, we are a full decade further along without a breakthrough, and even more overdue for one. The problem is that we aren't thinking along the lines of driving for algorithmic advances.

[Figure: algorithmic improvements in numerical linear algebra compared with Moore's law]
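
To put rough numbers on that figure, here is a back-of-the-envelope sketch. The scalings are the textbook asymptotic operation counts for the model 3D Poisson problem (constants ignored); the 64^3 grid and the 36-year Moore's-law window are my own illustrative choices, not values taken from the report.

```python
# Rough sketch of "algorithms vs. Moore's law" using textbook asymptotic
# operation counts for the model 3D Poisson problem; the grid size and the
# Moore's-law window are illustrative assumptions.
n = 64                 # an illustrative grid: 64 x 64 x 64
N = n**3               # number of unknowns

methods = {
    "banded Gaussian elimination": N ** (7 / 3),
    "optimal SOR":                 N ** (4 / 3),
    "conjugate gradients":         N ** (4 / 3),   # similar asymptotics to SOR here
    "full multigrid":              float(N),
}

base = methods["banded Gaussian elimination"]
for name, work in methods.items():
    print(f"{name:28s} speedup over banded GE ~ {base / work:,.0f}x")

# For comparison, Moore's law over ~36 years at a doubling every 18 months:
print(f"Moore's-law gain over ~36 years  ~ {2 ** (36 / 1.5):,.0f}x")
```

The punch line is the familiar one: on this problem the leap from banded elimination to multigrid is worth roughly the same factor as decades of Moore's law.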

I believe in progress; I think there are discoveries to be made. The problem is we are putting all of our effort into moving our old algorithms onto the massively parallel computers of the past decade. Part of the reason for this is the increasingly perilous nature of Moore's law. We have had to increase the level of parallelism in our codes by immense degrees to continue following Moore's law. Around 2005 the clock speeds of microprocessors stopped their steady climb. For Moore's law this is the harbinger of doom. The end is near: the combination of microprocessor limits and parallelism limits is conspiring to make computers amazingly power intensive, and the steady rise of the past cannot continue. At the same time, we are suffering from the failure to continue supporting the improvements in problem-solving capability from algorithmic and method investments, which had provided more than a Moore's-law's worth of increased capability.
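
A crude way to see what the flat clock speeds imply: if performance is to keep doubling on the old schedule with no help from frequency, all of the doubling has to come from concurrency. The dates and the doubling period below are rough, illustrative assumptions, not measured data.

```python
# Minimal sketch of why flat clock speeds force exploding parallelism.
# The numbers here are rough, illustrative assumptions.
years_of_flat_clocks = 10     # clocks roughly flat since ~2005, about a decade
doubling_period = 1.5         # the old Moore's-law-like performance doubling, in years

# With no help from frequency, every doubling must come from concurrency.
required_parallelism_growth = 2 ** (years_of_flat_clocks / doubling_period)
print(f"concurrency growth needed to keep the old pace: ~{required_parallelism_growth:,.0f}x")
```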

A second problematic piece of this figure is the smooth curve of advances in algorithmic power. This is not how it happens. Algorithms advance through breakthroughs, and in the case of numerical linear algebra the breakthrough is in how the solution time scales with the number of unknowns. This results in quantum leaps in performance when a method allows us to access a new scaling. In between these leaps we have small improvements as the new method is made more efficient or procedural improvements are made. This is characteristically different from Moore's law in a key way. Moore's law is akin to a safe bond investment that provides steady returns in a predictable manner. Program managers and politicians love this because it is safe, whereas algorithmic breakthroughs are like tech stocks: sometimes they pay off hugely, but most of the time the return is small. This dynamic is beginning to fall apart; Moore's law will soon fail (or maybe it won't).
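
The bond-versus-tech-stock contrast is easy to quantify with a toy calculation; the problem size and the particular scalings here are illustrative assumptions, not values read off the figure.

```python
import math

# Toy comparison: one algorithmic change of scaling vs. steady hardware doublings.
# The problem size and scalings are illustrative assumptions.
N = 10**8                     # a large but plausible number of unknowns

# One "quantum leap": a method that reduces the work from O(N^2) to O(N log N).
leap = N**2 / (N * math.log2(N))
print(f"one scaling breakthrough at N = 1e8: ~{leap:,.0f}x less work")

# Steady Moore's-law-style progress: a doubling every 18 months.
for years in (3, 6, 12, 24):
    print(f"{years:2d} years of steady doublings:     ~{2 ** (years / 1.5):,.0f}x")
```

A single change of scaling at this problem size out-earns decades of steady doubling, which is exactly why the smooth curve in the figure undersells how algorithmic progress actually arrives.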

I might even forecast that the demise of Moore's law, even for a short while, might be good for us. Instead of relying on power to grow endlessly, we might have to think a bit harder about how we solve problems. We won't have an enormously powerful computer that will simply crush problems into submission. This doesn't happen in reality, but listening to supercomputing proponents you'd think it is common. Did I mention bullshit jobs earlier?

The truth of the matter is that computing might benefit from a discovery that allows the continuation of the massive progress of the past 70 years, but there is no reason to believe that some new technology will bail us out. The deeper issue regards the overall balance of the efforts. The hardware and software technologies have always worked together in a sort of tug-of-war that bears similarity to the tension between theoretical and experimental science. One field drives the other depending on the question and the availability of emergent ideas or technologies that open new vistas. Insofar as computing is concerned, my worry is plain: hardware has had preeminence for twenty or thirty years while the focus on algorithms and methods has waned. The balance has been severely compromised. Enormous value has been lost to this lack of balance.

This gets to the core of what computing is about. Computing is a tool. It is a different way to solve problems, manage or discover information, and communicate. For some, computing has become an end unto itself rather than a tool for modern society. We have allowed this perspective to infect scientific computing as a discipline because the utility of acquiring new supercomputers outweighs the utility of using them effectively. This is the root of the problem and the cause of the lack of balance we see at present. It is coupled to a host of other issues in society, not the least of which is a boundless superficiality that drives a short-term focus and disallows real achievement because the risk of failure has been deemed unacceptable.

We should work steadfastly to restore the necessary balance and perspective for success. We need to allow risk to enter into our research agenda and set more aggressive goals. Commensurate with this risk, we should provide greater freedom and autonomy to those striving for the goals. Supercomputing should recognize that the core of its utility is computing as a problem-solving approach that relies upon computing hardware for success. There is an unfortunate tendency to simply declare supercomputing a national security resource regardless of the actual utility of the computer for problem solving. These claims border on being unethical. We need computers that are primarily designed to solve important problems. Problems don't become important because a computer can solve them.

* Nevada is the location of the site the United States used for underground nuclear testing.
