The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.
― Nick Trefethen
Modern modeling and simulation is viewed as a transformative technology for science and engineering. Invariably, its utility is grounded in the solution of models via numerical approximation. The fact that numerical approximation is the key to unlocking this potential seems largely lost in the modern perspective, which engages it in an increasingly naïve manner. For example, much of the dialog around high performance computing is predicated on the notion of convergence: in principle, the more computing power one applies to a problem, the better the solution. This is applied axiomatically, yet it relies upon a deep mathematical result in numerical approximation. That heritage and emphasis is absent from the conversation, to the detriment of its intellectual depth.
Where all think alike there is little danger of innovation.
― Edward Abbey
At this point, the mathematics and specifics of numerical approximation are systematically ignored by the dialog. The impact of this willful ignorance is felt across the modeling and simulation world: a general lack of progress and emphasis on numerical approximation is evident. We have produced a situation where one of the most valuable aspects of numerical modeling is not getting focused attention. People behave as if the major problems are all solved and not worthy of attention or resources. In truth, the nature of the numerical approximation is the second most important and impactful aspect of modeling and simulation work. Virtually all the emphasis today is on the computers themselves, based on the assumption that they will produce better answers. The most important aspect is the modeling itself; the nature and fidelity of the models define the power of the whole process. Once a model has been defined, the numerical solution of the model is the second most important aspect, and the quality of that solution depends far more on the approximation methodology than on the power of the computer.
The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.
― Anthony Jay
People act as if the numerical error is so small as to be unimportant, while simultaneously encouraging great focus on computing power whose implicit justification is the reduction of numerical error. To make this corrupt logic worse, the most effective way to reduce numerical error is being starved of attention and resources, with little or no priority. The truth is that numerical errors are still too large, and increasing computing power is a lousy, inefficient way to make them smaller. We are committed to a low-risk path that is also highly inefficient, because the argument is accessible to the most naïve people in the room.
What is important is seldom urgent and what is urgent is seldom important.
― Dwight D. Eisenhower
Another way of getting to the heart of the issue is the efficacy of using gains in computer power to get better solutions. Increases in computing power are a terrible way to produce better results; they are woefully inefficient. One simply needs to examine the rate of solution improvement based on scaling arguments. First, we need to recognize that practical problems converge quite slowly as computational resources are added. For almost any problem of true real-world applicability, high-order convergence (higher than first-order) is never seen. Generally we might expect solutions to improve at first order with the inverse of mesh size. For a three-dimensional, time-dependent problem, halving the numerical error requires at least 16 times the computing power. Usually convergence rates are less than first order, so the situation is actually even worse. As a result we are investing an immense amount in progressing in an incredibly inefficient manner, and starving more efficient means of progress. To put more teeth on the impact of current programs: the exascale initiative wants to compute things fifty times faster, which will only reduce errors by slightly more than half. So we will spend huge effort and billions of dollars making numerical errors smaller by a factor of two. What an utterly shitty return on investment! It is doubly shitty when you realize how much more could be done to improve matters by other means.
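The scaling argument above can be put in a few lines of arithmetic. This is a back-of-envelope sketch under the stated assumptions (cost growing like N to the fourth power for three space dimensions plus time, error shrinking like the inverse mesh size to the order of accuracy), not a model of any particular code:

```python
# Error improvement bought by extra compute for a d-dimensional,
# time-dependent calculation converging at order p:
#   cost  ~ N**(d+1)   (d space dimensions plus the time axis)
#   error ~ N**(-p)    =>  error improves as cost**(p/(d+1))

def error_reduction(compute_factor: float, order: float = 1.0, dims: int = 3) -> float:
    """Factor by which the error shrinks when compute grows by compute_factor."""
    return compute_factor ** (order / (dims + 1))

print(error_reduction(16.0))   # -> 2.0: 16x the compute halves the error at first order
print(error_reduction(50.0))   # -> ~2.66: a 50x machine cuts error to roughly 38%
```

Sub-first-order convergence makes the exponent even smaller, and the return on compute even worse.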
The first thing we need to recognize for progress is the relative efficacy of different modes of investment. The most effective way to progress in modeling and simulation is better models. Better models require work on theory and experiment, with deeply innovative thinking based on inspiration and on evidence of the limitations of current theory and modeling. For existing models and any new ones, the next step is solving the models numerically, which involves detailed and innovative numerical approximation. The power of modeling and simulation with computers is predicated on the ability to solve complex models that cannot be understood analytically (or at least not without severe restrictions or assumptions). The fidelity of the numerical approximations is the single most effective lever for improving results once modeling errors have been addressed. Numerical approximations can make a huge difference in the accuracy of simulations, far more effectively than computer power.
Don’t tell me about your effort. Show me your results.
― Tim Fargo
So why are we so hell-bent on investing in the less efficient way of progressing? Our mindless addiction to Moore’s law, which has provided improvements in computing power over the last fifty years, essentially for free, to the modeling and simulation community.
Our modeling and simulation programs are addicted to Moore’s law as surely as a crackhead is addicted to crack. Moore’s law has provided a means to progress without planning or intervention for decades: time passes and capability grows almost as if by magic. The problem is that Moore’s law is dead, and rather than moving on, the modeling and simulation community is attempting to raise the dead. By this analogy, the exascale program is basically designed to create zombie computers that completely suck to use. They are not built to get results or do science; they are built to hit exascale performance on some sort of bullshit benchmark.
This gets to the core of the issue: our appetite for risk and failure. Improving numerical approximations is risky and depends on breakthroughs and innovative thinking. Moore’s law has sheltered the modeling and simulation community from risk and failure in computing hardware for a very long time. If you want innovation you need to accept risk and failure; innovation without risk and failure simply does not happen. We are intolerant of risk and failure as a society, and this intolerance dooms innovation, strangling it in its crib. Moore’s law allowed progress without risk, as if it came for free. The exascale program will be the funeral pyre for Moore’s law, and we are threatening the future of modeling and simulation with our unhealthy addiction to it.
If failure is not an option, then neither is success.
― Seth Godin
There is only one thing that makes a dream impossible to achieve: the fear of failure.
― Paulo Coelho
The key thing to realize about this discussion is that improving numerical approximations is risky and highly prone to failure. You can invest in improving numerical approximations for a very long time without any seeming progress until a quantum leap in performance arrives. The issue in the modern world is the lack of predictability of such improvements. Breakthroughs cannot be predicted and cannot be relied upon to happen on a regular schedule. A breakthrough requires innovative thinking and a lot of trial and error. The ultimate quantum leap in performance is founded on many failures and false starts. If these failures are engaged in a mode where we continually learn and adapt our approach, we eventually solve problems. The problem is that this must be approached as an article of faith; it cannot be planned. Today’s management environment is completely intolerant of such things and demands continual results. The result is squalid incrementalism and an utter lack of innovative leaps forward.
Civilizations… cannot flourish if they are beset with troublesome infections of mistaken beliefs.
― Harry G. Frankfurt
What is the payoff for methods improvement?
If we improve a method we can achieve significantly better results without a finer computational mesh. This yields a large saving in computational cost as long as the improved method isn’t too expensive. As I mentioned before, one needs 16 times the computational resources to knock the error down by half for a 3-D, time-dependent calculation. If I produce a method with half the error, it is more efficient whenever it costs less than 16 times as much. In other words, the method can use up to 16 times the computational resources and still come out ahead. That is a lot of headroom to work with!
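The headroom calculation generalizes: the break-even cost factor for an improved method is the compute that mesh refinement alone would need for the same error reduction. A minimal sketch, assuming a 3-D, time-dependent calculation:

```python
# A method that reduces error by a factor r beats mesh refinement as long as
# its cost multiplier stays below r**((d+1)/p), the compute refinement would need
# at convergence order p in d space dimensions plus time.

def breakeven_cost_factor(error_reduction: float, order: float = 1.0, dims: int = 3) -> float:
    """Max cost multiplier at which the improved method still beats refinement."""
    return error_reduction ** ((dims + 1) / order)

print(breakeven_cost_factor(2.0))           # -> 16.0: halve the error, spend up to 16x
print(breakeven_cost_factor(2.0, order=2))  # -> 4.0: a second-order baseline leaves 4x
```

Note that the headroom shrinks as the baseline method improves, which is exactly why better methods pay off most against today’s low-order practice.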
The most dangerous ideas are not those that challenge the status quo. The most dangerous ideas are those so embedded in the status quo, so wrapped in a cloud of inevitability, that we forget they are ideas at all.
― Jacob M. Appel
For some cases the payoff is far more extreme than these simple arguments suggest. The archetype of this extreme payoff is the difference between first- and second-order monotone schemes. For general fluid flows, second-order monotone schemes produce results that are almost infinitely more accurate than first-order schemes. The reason for this stunning claim lies in the form of the truncation error as expressed via the modified equations (the equations the numerical method actually solves more accurately than the original ones). For first-order methods there is a large viscous effect that renders all flows laminar. Second-order methods are necessary for simulating high Reynolds number turbulent flows because their dissipation does not interfere directly with the fundamental physics.
As technology advances, the ingenious ideas that make progress possible vanish into the inner workings of our machines, where only experts may be aware of their existence. Numerical algorithms, being exceptionally uninteresting and incomprehensible to the public, vanish exceptionally fast.
― Nick Trefethen
We don’t generally have good tools for estimating numerical error in non-standard (or unresolved) cases. One digestion of the key problems is found in Banks, Aslam, and Rider, where sub-first-order convergence is described and analyzed for discontinuous solutions of the one-way wave equation. The key result in this paper is the nature of mesh convergence for discontinuous or non-differentiable solutions: one sees sub-linear, fractional-order convergence, with a general relationship between the observed convergence rate p and the formal order of accuracy n of the method, p = n/(n+1). This comes from analyzing the solution of the modified equation including the leading-order truncation error. For nonlinear discontinuous solutions the observed result is first order, where a balance is established between the regularization and the self-steepening in shock waves. At present there is no theory of what this looks like. Seemingly this system of equations could be analyzed as we did for the linear equations, and perhaps that would provide guidance for numerical method development. It would be worthy progress if we could analyze such systems theoretically, providing a way to understand actual accuracy.
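This sub-linear behavior is easy to observe numerically. The following sketch (my own illustrative setup, not the configuration from the paper) advects a square pulse with first-order upwind, for which the formal order is one and the relationship above predicts an L1 convergence rate near one half:

```python
import numpy as np

def upwind_step_error(N: int, cfl: float = 0.5, T: float = 0.25) -> float:
    """L1 error after advecting a square pulse (speed a = 1) with first-order upwind."""
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h                      # cell centers on periodic [0, 1)
    u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)   # initial square pulse
    dt = cfl * h
    for _ in range(int(round(T / dt))):
        u = u - cfl * (u - np.roll(u, 1))             # upwind update, periodic wrap
    xs = (x - T) % 1.0                                # exact solution: shifted pulse
    exact = np.where((xs > 0.25) & (xs < 0.75), 1.0, 0.0)
    return h * float(np.abs(u - exact).sum())

e1, e2 = upwind_step_error(200), upwind_step_error(400)
rate = float(np.log2(e1 / e2))
print(f"observed L1 rate ~ {rate:.2f}")               # near 0.5, well below first order
```

The discontinuity is smeared over a region of width proportional to the square root of the mesh spacing, which is exactly where the half-order rate comes from.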
Another key limitation of existing theory is chaotic solutions, classically associated with turbulent or turbulent-like flows. These solutions are extremely (perhaps even infinitely) sensitive to initial conditions. It is impossible to get convergence results for point values; the only convergence is for integral measures, and these generally converge very slowly and are highly mesh-dependent. This issue is huge in high performance computing. One area of study is measure-valued solutions, where convergence is examined statistically. This is a completely reasonable approach for convergence of general solutions to hyperbolic PDEs.
The much less well-appreciated aspect comes with the practice of direct numerical simulation (DNS) of turbulence (really DNS of anything). One might think that having a DNS means the solution is completely resolved and highly accurate. It is not! Indeed DNS is not highly convergent even for integral measures; generally speaking, one gets first-order accuracy or less under mesh refinement. The problem is the highly sensitive nature of the solutions and the scaling of the mesh with the Kolmogorov scale, which is a mean-square measure of the turbulence scale. Clearly there are effects from scales much smaller than the Kolmogorov scale, associated with highly intermittent behavior. To fully resolve such flows would require the scale of turbulence to be set by the maximum norm of the velocity gradient instead of the RMS.
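To see why this mesh scaling is so punishing, the textbook Kolmogorov estimate of DNS cost can be sketched in a couple of lines (a standard scaling argument, and an underestimate if intermittency demands resolving below the Kolmogorov scale, as argued above):

```python
# Standard Kolmogorov-scale cost estimate for DNS: the Kolmogorov length is
# eta ~ L * Re**(-3/4), so resolving it takes (L/eta)**3 ~ Re**(9/4) grid
# points and ~ Re**(3/4) time steps, for total work scaling like Re**3.

def dns_cost_ratio(re_hi: float, re_lo: float) -> float:
    """How much more work a DNS at Reynolds number re_hi costs than one at re_lo."""
    return (re_hi / re_lo) ** 3

print(dns_cost_ratio(1e5, 1e4))   # -> 1000.0: 10x the Reynolds number, 1000x the work
```

This is why raw computing power, by itself, moves DNS toward practical Reynolds numbers so slowly.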
If you want something new, you have to stop doing something old
― Peter F. Drucker
When we get to the real foundational aspects of numerical error and its limitations, we come to the fundamental theorem of numerical analysis, the Lax equivalence theorem. For PDEs it applies only to linear equations, and it states that for a consistent approximation to a well-posed problem, stability is equivalent to convergence. Everything is tied to this. Consistency means the scheme is a valid and correct approximation of the equations you mean to solve; stability means the computed result does not blow up. What is missing is the theoretical extension to more general nonlinear equations, along with deeper relationships among accuracy, consistency and stability. This theorem dates from the early 1950s, and we probably need something more, but there is no effort or emphasis on this today. We need great effort and immensely talented people to make progress. While I’m convinced that we have no shortage of talent today, we lack the effort, and perhaps do not develop or encourage the talent appropriately.
Beyond the issues with hardware emphasis, today’s focus on software is almost equally harmful to progress. Our programs work steadfastly at maintaining large volumes of source code full of the ideas of the past. Instead of building on the theory, methods, algorithms and ideas of the past, we are simply worshiping them. This is the construction of a false ideology. We would do far greater homage to the work of the past if we were building on it. The theory is not done, not by a long shot. Our current attitudes toward high performance computing are a travesty, embodied in a national program that makes the situation worse only to serve the interests of the willfully naïve. We are undermining the very foundation upon which the utility of computing is built. We are going to end up wasting a lot of money and getting very little value for it.
We now live in a world where counter-intuitive bullshitting is valorized, where the pose of argument is more important than the actual pursuit of truth, where clever answers take precedence over profound questions.
― Ta-Nehisi Coates
Margolin, Len G., and William J. Rider. “A rationale for implicit turbulence modelling.” International Journal for Numerical Methods in Fluids 39, no. 9 (2002): 821-841.
Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.
Banks, Jeffrey W., T. Aslam, and William J. Rider. “On sub-linear convergence for linearly degenerate waves in capturing schemes.” Journal of Computational Physics 227, no. 14 (2008): 6985-7002.
Fjordholm, Ulrik S., Roger Käppeli, Siddhartha Mishra, and Eitan Tadmor. “Construction of approximate entropy measure-valued solutions for hyperbolic systems of conservation laws.” Foundations of Computational Mathematics (2015): 1-65.
Lax, Peter D., and Robert D. Richtmyer. “Survey of the stability of linear finite difference equations.” Communications on Pure and Applied Mathematics 9, no. 2 (1956): 267-293.
On Saturday I participated in the March for Science in downtown Albuquerque, along with many other marches across the World. It was advertised as a non-partisan event, but to anyone there it was clearly and completely partisan. Two things united the people at the march: a progressive, liberal philosophy, and opposition to conservatism and Donald Trump. The election of a wealthy paragon of vulgarity and ignorance has done wonders for uniting the left wing of politics. Of course, the left wing in the United States is really a moderate wing, made to seem liberal by the extreme views of the right. Among the causes the left champions is science as an engine of knowledge and progress. The reason for this dichotomy is the right wing’s embrace of ignorance, fear and bigotry as electoral tools. The right is really the party of money and the rich, with fear, bigotry and ignorance wielded as tools to “inspire” enough people to vote against their own best (long-term) interests. Part of this embrace is a logical opposition to virtually every principle science holds dear.
The premise that a march for science should be non-partisan is utterly wrong on the face of it; science is and has always been a completely political thing. The reasoning is simple and persuasive. Politics is the way human beings settle their affairs, assign priorities and make decisions; it is an essential human endeavor. Science is equally human in its composition, being a structured vehicle for societal curiosity leading to the creation of understanding and knowledge. When the political dynamic is arrayed in the manner we see today, science is absolutely and utterly political. We have two opposing views of the future: one consistent with science, favoring knowledge and progress; the other inconsistent with science, favoring fear and ignorance. In such an environment science is completely partisan and political. To expect things to be different is foolish and naïve.
One of the key things to understand is that science has always been political, although the contrast has been turned up in recent years. The thing driving the political context is the rightward movement of the Republican Party, which has led to its embrace of extreme views including religiosity, ignorance and bigotry. Of course, these extreme views are not really the core of the GOP’s soul; money is, but the cult of ignorance and anti-science is useful in propelling its political interests. The Republican Party has embraced extremism in a virulent form because it pushes its supporters toward unthinking devotion and obedience. They will support their party without regard for their own best interests. The Republican voter base hurts its own economic standing in favor of policies that empower hatred and bigotry while calming fear. All forms of fact and truth have become utterly unimportant unless they support the preferred world-view. The upshot is the rule of a political class hell-bent on establishing a ruling class in the United States composed of the wealthy. Most of the people voting for Republican candidates are simply duped by appeals to extreme fear, hate and bigotry. The Democratic Party is only marginally better, since it has been seduced by the same money, leaving voters with no one to work for them. The rejection of science by the right will ultimately be the undoing of the Nation, as other nations eventually usurp the United States militarily and economically.
values of progress, love and knowledge are viewed as weakness. In this lens it is no wonder that science is rejected.
id this problem is to kill the knowledge before it is produced. We can find example after example of science being silenced because it is likely to produce results that do not match their view of the world. Among the key engines of the ignorance of conservatism is its alliance with extreme religious views. Historically, religion and science are frequently at odds because faith and truth are often incompatible. This isn’t necessarily true of all religious faith, but rather of faith stemming from a fundamentalist approach, which is usually grounded in old and antiquated notions (i.e., classically conservative and opposed to anything that looks like progress). Fervent religious belief cannot deal with truths that do not align with its dictums. The best way to avoid this problem is to get rid of the truth. When the government is controlled by extremists this translates to reducing and controlling
not support deeper research that forms the foundation allowing us to develop technology. As a result our ability to be the best at killing people is at risk in the long run. Eventually the foundation of science used to create all our weapons will run out, and we no longer will be the top dogs. The basic research used for weapons work today is largely a relic of the 1960’s and 1970’s. The wholesale diminishment in societal support for research during the 1980’s and onward will start to hurt us more obviously. In addition we have poisoned the research environment in a fairly bipartisan way leading to a huge drop in the effectiveness and efficiency of the fewer research dollars spent.
they hate so much. The right wing has been engaged in an all-out assault on universities, in part because it views them as the center of left wing views. Attacking and contracting science is part of this assault. In addition to the systematic attack on universities, there is an increasing categorization of certain research as unlawful because its results will almost certainly oppose right wing views. Examples include drug research (marijuana in particular), anything sexual, climate research, the health effects of firearms, evolution, and the list grows. The deepest wounds to science are more subtle: the right has created an environment that poisons intellectual approaches and undermines the education of the population, because educated, intellectual people naturally oppose its ideas.
upset the status quo, hurting the ability of traditional industry to make money (with the exception of geological work associated with energy and mining). Much ecological research has informed us how human activity is damaging the environment. Industry does not want to adopt practices that preserve the environment, primarily out of greed. Moreover, religious extremism opposes ecological research because it contradicts the dictums of a faith whose adherents see themselves as chosen people who may exploit the Earth to the limits of their desire. Climate change is the single greatest research threat to the conservative worldview. The fact that mankind is a threat to the Earth is blasphemous to right wing extremists, impacting either their greed or their religious convictions. The denial of climate change is based primarily on religious faith and greed, the pillars of modern right wing extremism.
enable the implicit bigotry in how laws are enforced and people are imprisoned ignoring the damage to society at large.
punishment. This works to destroy people’s potential for economic advancement and burden the World with poor, unwanted children. Sex education for children is another example where ignorance is promoted as the societal response. Science could make the World far better and more prosperous, and the right wing stands in the way. It has everything to do with sex and nothing to do with reproduction.
ach even though it is utterly and completely ineffective and actually damages people. Those children are then spared the knowledge of their sexuality and their reproductive rights through an educational system that suppresses information. Rather than teach our children about sex in a way that honors their intelligence and arms them to deal with life, we send them out ignorant. This ignorance is yet another denial of reality and science by the right. The right wing does not want to seem to endorse basic human instinct around reproduction and sex for pleasure. The result is more problems: more STDs, more unwanted children, and more abortions. It gets worse, because we also ensure that a new generation of people will reach sexual maturity without the knowledge that could make their lives better. We know how to teach people to take charge of their reproductive decisions, sexual health and pleasure. Our government denies them this knowledge, driven largely by antiquated moral and religious ideals, which only serve to deliver the right its voters.
For a lot of people working at a National Lab there are two divergent paths for work: the research path, which leads to lots of publishing, deep technical work and strong external connections, or the mission path, which leads to internal focus and technical shallowness. The research path is for the more talented and intellectual people who can compete in that difficult world. For the less talented, creative or intelligent, the mission world offers greater security at the price of intellectual impoverishment. Those who fail at the research focus can fall back on mission work and be comfortably employed after such failure. This perspective is a cynical truth for those who work at the Labs, and it represents a false dichotomy. If properly harnessed, the mission focus can empower and energize better research, but it must be mindfully approached.
much better for engaging innovation and producing breakthrough results. It also means a great amount of risk and lots of failure. Pure research can chase unique results, but the utility of those results is often highly suspect. This sort of research entails less risk and less failure as well. If the results are necessarily impactful on the mission, the utility is obvious. The difficulty is noting the broader aspects of research applicability that mission application might hide.
Ubiquitous aspects of modernity such as the Internet, cell phones and GPS all owe their existence to Cold War research focused on some completely different mission. All of these technologies were created through steadfast focus on utility that drove innovation as a mode of problem solving. This model for creating value has fallen into disrepair because of its uncertainty and risk. Risk is something we have lost the capacity to withstand; as a result, the failure necessary to learn and succeed at research never happens.
and the mission will suffer as a result. This is both shortsighted and foolhardy. The truth is vastly different from this fear-based reaction: the only thing that suffers when mission-based work shies away from research is the quality of the mission-based work itself. Doing research causes people to work with deep knowledge and understanding of their area of endeavor. Research is basically the process of learning taken to the extreme of discovery. In the process of getting to discovery one becomes an expert in what is known and capable of doing exceptional work. Today too much mission-focused work is technically shallow and risk-averse. It is over-managed and under-led in pursuit of the false belief that risk and failure are bad things.
In my own experience the drive to connect mission and research can provide powerful incentives for personal enrichment. For much of my early career the topic of turbulence was utterly terrifying, and I avoided it like the plague. It seemed like a deep, complex and ultimately unsolvable problem that I was afraid of. As I began to become deeply engaged with a mission organization at Los Alamos it became clear to me that I had to understand it. Turbulence is ubiquitous in highly energetic systems governed by the equations of fluid dynamics. The modeling of turbulence is almost always done using dissipative techniques, which end up destroying most of the fidelity in numerical methods used to compute the underlying ostensibly non-turbulent flow. These high fidelity numerical methods were my focus at the time. Of course these energy rich flows are naturally turbulent. I came to the conclusion that I had to tackle understanding turbulence.
ty to work on my computer codes over the break. So I went back to my office (those were the days!) and grabbed seven books on turbulence that had been languishing unread on my bookshelves due to my overwhelming fear of the topic. I started to read these books cover to cover, one by one, and learn about turbulence. I’ve included some of these references below for your edification. The best and most eye-opening was Uriel Frisch’s “Turbulence: The Legacy of A. N. Kolmogorov”. In the end, the mist began to clear and turbulence began to lose its fearful nature. As with most things one fears, the lack of knowledge of a thing gives it power, and turbulence was no different. Turbulence is actually kind of a sad subject: it’s not understood, and very little progress is being made.
The main point is that the mission focus energized me to attack the topic despite my fear of it. The result was a deeply rewarding and successful research path resulting in many highly cited papers and a book. All of a sudden the topic that had terrified me was understood, and I could actually conduct research in it. All of this happened because I took contributing to the mission as an imperative; I did not have the option of turning my back on the topic because of my discomfort with it. I also learned a valuable lesson about fearsome technical topics: most of them are fearsome because we don’t know what we are doing and over-elaborate the theory. Today the best things we know about turbulence are simple and old, discovered by Kolmogorov as he evaded the Nazis in 1941.
results in software. Producing software in the conduct of applied mathematics used to be a necessary side activity instead of the core of value and work. Today software is the main thing produced, and actual mathematics is often virtually absent. Actual mathematical research is difficult, failure-prone and hard to measure. Software, on the other hand, is tangible and manageable. It is still hard to do, but ultimately software is only as valuable as what it contains, and increasingly our software is full of someone else’s old ideas. We are collectively stewarding other people’s old intellectual content, not producing our own, and not progressing in our knowledge.
of the problem rather than the problem to the research. The result of this model is the tendency to confront difficult, thorny issues rather than shirk them. At the same time, this form of research can also lead to risk and failure manifesting themselves. This tendency is the rub, and it leads people to shy away from it. We are societally incapable of supporting failure as a viable outcome. The result is the utter and complete inability to do anything hard. This all stems from a false sense of the connection between risk, failure and achievement.
Research is about learning at a fundamental, deep level, and learning is powered by failure. Without failure you cannot effectively learn, and without learning you cannot do research. Failure is one of the core attributes of risk. Without the risk of failing there is a certainty of achieving less. This lower achievement has become the socially acceptable norm for work. Acting in a risky way is a sure path to being punished, and we are being conditioned to not risk and not fail. For this reason the mission-focused research is shunned. The sort of conditions that mission-focused research produces are no longer acceptable and our effective social contract with the rest of society has destroyed it.
tics and deep physical principle daily. All of these things are very complex and require immense amounts of training, experience and effort. For most people the things I do, think about, or work on are difficult to understand or put into context. Yet none of this is the hardest thing I do every day. The thing we trip up on and fail at more than anything is simple: communication. Scientists fail to communicate effectively with each other in a myriad of ways, leading to huge problems in marshaling our collective efforts. Given that we can barely communicate with each other, the prospect of communicating with the public becomes almost impossible.
t they give people the impression that communication has taken place when it hasn’t. The meeting doesn’t provide effective broadcast of information, and it’s even worse as a medium for listening. Our current management culture seems to have gotten the idea that a meeting is sufficient to do the communication job. Meetings seem efficient in the sense that everyone is there, words are spoken, and even time for questions is granted. With the meeting, the managers go through the motions. The problems with this approach are vast. The first issue is that the messaging is aimed at a large audience and lacks the texture that individuals require. The message isn’t targeted to people’s acute and individual interests. Conversations don’t happen naturally, and people’s questions are usually equally limited in scope. To make matters worse, the managers think they have done their communication job.
nt assumes the exchange of information was successful. A lot of the necessary vehicles for communication are overlooked or discounted in the process. Managers avoid the one-on-one conversations needed to establish deep personal connections and understanding. We have filled managers’ schedules with lots of activities involving other managers and paperwork, but not prioritized and valued the task of communication. We have strongly tended to try to make it efficient, and not held it in the esteem it deserves. Many hold office hours where people can come talk to them rather than adopting the more effective habit of seeking people out. All of these mechanisms give the advantage to the extroverts among us, and fail to engage the quiet introverted souls or the hardened cynics whose views and efforts have equal value and validity. All of this gets to a core message: communication is pervasive and difficult. We have many means of communicating, and all of them should be utilized. We also need to assure and verify that communication has taken place and is two-way.
ays of delivering information in a professional setting. It forms a core of opportunity for peer review in a setting that allows for free exchange. Conferences are an orgy of this and should form a backbone of information exchange. Instead conferences have become a bone of contention. People are assumed to have a role there only as speakers, not as part of the audience. Again the role of listening as an important aspect of communication is completely disregarded in the dynamic. Digesting information, learning, and providing peer feedback supply none of the official justification for going to conferences, yet they all provide invaluable conduits for communication in the technical world.
communication, which varies depending on people’s innate taste for clarity and focus. We have issues with transparency of communication even with automatic and pervasive use of all the latest technological innovations. These days we have e-mail, instant messaging, blogging, Internet content, various applications (Twitter, Snapchat,…), social media and other vehicles for moving information through people. The key to making the technology enable better performance still comes down to people’s willingness to pass along ideas within the vehicles available. This problem is persistent whether communications are on Twitter or in person. Again, the asymmetry between broadcasting and receiving is amplified by the technology. I am personally guilty of the sin I’m pointing out: we never prize listening as a key aspect of communicating. If no one listens, it doesn’t matter who is talking.
sually associated with vested interests, or solving the problems might involve some sort of trade space where some people win or lose. Most of the time we can avoid the conflict for a bit longer by ignoring it. Facing the problems means entering into conflict, and conflict terrifies people (this is where ghosting and breadcrumbing come in quite often). The avoidance is usually illusory; eventually the situation will devolve to the point where the problems can no longer be ignored. By then the situation is usually much worse, and the solution is much more painful. We need to embrace means of facing up to problems sooner rather than later, and seek solutions while problems are small and well confined.
neral public is almost impossible to manage. In many ways the modern world acts to amplify the issues that the technical world has with communication to an almost unbearable level. Perhaps excellence in communication is too much to ask, but the inability to talk and listen effectively with the public is hurting science. If science is hurt, then society also suffers from the lack of progress and knowledge advancement science provides. When science fails, everyone suffers. Ultimately we need to have understanding and empathy across our societal divides, whether between scientists and lay people or red and blue. Our failure to focus on effective, deep two-way communication is limiting our ability to succeed at almost everything.
a very grim evolution of the narrative. Two things have happened to undermine the maturity of V&V. One I’ve spoken about in the past: the tendency to drop verification and focus solely on validation, which is bad enough. In the absence of verification, validation becomes rather strained and drifts toward calibration. Assurances that one is properly solving the model one claims to be solving are unsupported by evidence. This is bad enough all by itself. The use of V&V as a vehicle for improving modeling and simulation credibility is threatened by this alone, but something worse looms even larger.
al cycles thus eliciting lots of love and support in HPC circles. Validation must be about experiments and a broad cross section of uncertainties that may only be examined through a devotion to multi-disciplinary work and collaboration. One must always remember that validation can never be separated from measurements in the real world whether experimental or observational. The experiment-simulation connection in validation is primal and non-negotiable.
one without much deep thought even though it touches upon many deep technical topics. Actual validation is far harder and more holistic. The core of any work in validation is serious experimental expertise and hard-nosed comparison with simulations. The detailed nature of the experiment and its intrinsic errors and uncertainties is the key to any comparison. Without knowing the experimental uncertainty, any computational uncertainty is context free. My grumpy intellectual side would quip that validation requires thinking, and that leads people to avoid it because thinking is so hard. The deeper issue is that validation is complex and multi-disciplinary in nature, making it collaborative and difficult. Experts in a single discipline can do UQ, so it is an easy out.
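To make the point concrete, here is a minimal, hypothetical sketch (function name and all numbers invented for illustration) of why computational uncertainty is context free without its experimental counterpart: a simulation-experiment discrepancy only carries a verdict relative to the combined uncertainty.

```python
import math

def validation_metric(sim, sim_unc, exp, exp_unc):
    # Discrepancy normalized by the combined (root-sum-square) uncertainty.
    # A ratio well below one indicates consistency; well above one signals
    # a model (or experiment) deficiency worth chasing down.
    combined = math.hypot(sim_unc, exp_unc)
    return abs(sim - exp) / combined

# Hypothetical values: the same 0.2 discrepancy is benign here (ratio ~0.4),
# but would be damning if both uncertainties were ten times smaller.
ratio = validation_metric(sim=10.2, sim_unc=0.3, exp=10.0, exp_unc=0.4)
```

Without `exp_unc` the denominator loses its meaning, which is exactly the sense in which computational uncertainty alone is context free.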
computational solutions due to changes in these parameters. Among the methods used are Markov chain Monte Carlo (MCMC), polynomial chaos, and other sampling methods. The results from this work are useful and sound, but form a rather incomplete view of uncertainty. Even in these cases the sampling itself is often uncertain, with the assessment limited by the difficulty of determining uncertainty in high-dimensional spaces. Modeling and simulation suffers from a host of other uncertainties not covered by these methodologies. For example, most simulations have some degree of numerical error that may be quite large, and numerous techniques exist for exploring its magnitude and nature. Many systems being modeled have some stochastic character or variability associated with them. Modeling assumptions are often made in simulating a system or experiment, and the solution may change greatly on the basis of these assumptions or approximations. A different computational modeler may make quite different assumptions and produce a different solution.
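As a sketch of what this parametric, sampling-based UQ looks like in its simplest form (the model and the assumed parameter distribution are invented for illustration), forward Monte Carlo propagates an input distribution through the model and nothing more; numerical error, model-form error, and intrinsic variability all lie outside its reach.

```python
import random
import statistics

def model(k):
    # Hypothetical stand-in for an expensive simulation with one parameter k.
    return k * (1.0 - k / 4.0)

random.seed(1)  # reproducible sampling
# Assumed parametric uncertainty: k ~ Normal(1.0, 0.1).
samples = [model(random.gauss(1.0, 0.1)) for _ in range(10_000)]
mean = statistics.fmean(samples)    # propagated central value
spread = statistics.stdev(samples)  # propagated parametric uncertainty
```

The spread that comes out is only the uncertainty that went in through the parameter, which is precisely why such results give an incomplete view.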
The whole of the uncertainty provides an important source of scientific tension. If experimental uncertainty is small, the modeling and simulation must be equally precise before good results can be claimed; this pushes the modeling to improve to meet the high standard of the experiment. If the modeling and simulation is very good, but the experiments have large uncertainty, that should push the experiments to improve because they fail to constrain and distinguish between models. By having a deep and complete understanding of uncertainty, we can define where we need to put resources to improve. We know what aspects of our current knowledge are most in need of attention and most limiting to progress.
issue falls into the area of relying upon expert judgment or politics to make the decisions. We fail to understand where our knowledge is weak and potentially overlook experiments necessary for understanding. We may have the right experiments, but cannot make measurements of sufficient accuracy. We might have models of insufficient complexity, or numerical solutions with too much numerical error. All of these spell out different demands for resource allocation.
provide real value to our applied programs, and with this value comes generous financial support. This support is necessary for the codes to do their job and creates financial stability. With this support comes acute responsibility that needs to be acknowledged, and serious effort needs to be applied to meeting it.
capability for real problems. While this character is primary in defining production codes, everything else important in high performance computing is eclipsed by this modeling imperative. These codes are essential for the utility of high performance computing resources and often become the first codes to make their way onto and use high-end resources. They quite often are the explicit justification for the purchase of such computing hardware. This character usually dominates and demands a certain maturity of software professionalism.
On the flip side there are significant detrimental aspects of such codes. For example, the methods and algorithms in production codes are often crude and antiquated in comparison to the state of the art. The same can be said for the models, the algorithms and often the computer code itself. The whole of a production code’s credibility is deeply shaped by these pedigrees and their impact on real-world programs and things. This issue comes from several directions: the codes are often old and used for long periods of time, and the experts who traditionally define the credibility drive this to some extent. It often takes a long time to develop a code to the level needed to solve hard real-world problems, as well as the expertise to turn the code’s capability into results with real-world meaning. Older methods are robust, proven and trusted (low order and dissipative is usually how robust happens). Newer methods are more fragile, or simply can’t deal with all the special cases and issues that threaten the solution of real problems. Again, the same issues are present with models, algorithms and the nature or quality of the computer code itself.
In far too many instances, the systematic pedigree-defining steps are being skipped in favor of the old system of person-centered credibility. The old person-centered system is simple and straightforward: you trust somebody and develop a relationship that supports credibility. This person’s skills include detailed technical analysis, but also inter-personal relationship building. If such a system is in place there is no problem, as long as the deeper modern credibility is also present. Too often the modern credibility is absent or shorted, effectively replaced by a cult of personality. If we put our trust in people who do not value the best technical work available in favor of their force of personality or personal relationships, we probably deserve the substandard work that results.
will touch upon at the very end of this post, Riemann solvers (numerical flux functions) can also benefit from this, but some technicalities must be proactively dealt with.
approximations. This issue can easily be explained by looking at the smoothed sign function compared with the standard form. Since the dissipation in the Riemann solver is proportional to the characteristic velocity, we can see that the smoothed sign function is everywhere smaller in magnitude than the standard function, resulting in less dissipation. This is a stability issue analogous to concerns around limiters, where these smoothed functions are slightly more permissive. Using the exponential version of “softabs,” whose value is always greater than the standard absolute value, can modulate this permissive nature.
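A minimal sketch of the two functions in question (function names and the smoothing scale eps are my own choices): the hyperbolic tangent gives a smoothed sign that is everywhere smaller in magnitude than sign(x), while the exponential (log-sum-exp) form of “softabs” bounds the standard absolute value from above, so the dissipation never falls below the standard amount.

```python
import math

def smooth_sign(x, eps=0.5):
    # Smoothed sign: tanh(x/eps). Its magnitude never reaches 1, so a
    # Riemann solver built on it is slightly less dissipative (more permissive).
    return math.tanh(x / eps)

def softabs_exp(x, eps=0.5):
    # Exponential "softabs": eps*log(exp(x/eps) + exp(-x/eps)) >= |x| for
    # all x, keeping the dissipation at or above the standard amount.
    # Rearranged into log1p form to avoid overflow for large |x|/eps.
    a = abs(x) / eps
    return eps * (a + math.log1p(math.exp(-2.0 * a)))
```

Both functions approach sign(x) and |x| as eps shrinks, so eps trades smoothness against fidelity to the standard forms.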
computing. Worse yet, computing cannot be a replacement for reality, but is simply a tool for dealing with it better. In the final analysis the real world still needs to be in the center of the frame. Computing needs to be viewed in the proper context, and this perspective should guide our actions in its proper use.
many ways computing is driving enormous change societally and creating very real stress in the real world. These stresses are stoking fears and lots of irrational desire to control dangers and risks. All of this control is expensive, and drives an economy of fear. Fear is very expensive. Trust, confidence and surety are cheap and fast. One totally irrational way to control fear is to ignore it, allowing reality to be replaced. For people who don’t deal with reality well, the online world can be a boon. Still, the relief from a painful reality ultimately needs to translate to something tangible physically. We see this in an over-reliance on modeling and simulation in technical fields. We falsely believe that experiments and observations can be replaced, and that the human endeavor of communication can be done away with through electronic means. In the end reality must be respected, and people must be engaged in conversation. Computing only augments, but never replaces, the real world, real people, or real experience. This perspective is a key realization in making the best use of technology.
on computer power coupled to an unchanging model as the recipe for progress. Focus and attention on improving modeling is almost completely absent in the modeling and simulation world. This ignores one of the greatest truths in computing: no amount of computer power can rescue an incorrect model. These truths do little to alter the approach, although we can be sure that we will ultimately pay for the lack of attention to these basics. Reality cannot be ignored forever; it will make itself felt in the end. We could make it more important now to our great benefit, but eventually our lack of consideration will demand more attention.
independently. The proper decomposition of error allows the improvement of modeling in a principled manner.
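One standard way to perform such a decomposition, sketched here with hypothetical numbers and an assumed second-order method, is Richardson extrapolation: two grid solutions estimate the model’s exact answer, splitting the simulation-experiment gap into a numerical part and a model-form part.

```python
def richardson_extrapolate(f_h, f_2h, p):
    # Estimate of the exact solution of the *model* (not reality) from
    # solutions on grids h and 2h for a method of convergence order p.
    return f_h + (f_h - f_2h) / (2**p - 1)

# Hypothetical numbers: fine/coarse grid values, second-order method.
f_h, f_2h, p = 1.01, 1.04, 2
f_star = richardson_extrapolate(f_h, f_2h, p)

numerical_error = f_h - f_star   # attributable to the discretization
measured = 0.95                  # hypothetical experimental value
model_error = f_star - measured  # attributable to the model itself
```

Separating the two errors this way is what lets improvement be targeted: a large `numerical_error` calls for better resolution or methods, a large `model_error` for better physics.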
ng the model we believe we are using, the validation is powerful evidence. One must recognize that the degree of understanding is always relative to the precision of the questions being asked. The more precise the question, the more precise the model needs to be. This useful tension can help drive science forward. Specifically, the improving precision of observations can spur model improvement, and the improving precision of modeling can drive observation improvements, or at least the necessity of improvement. In this creative tension the accuracy of solution of models and computer power play but a small role.