Science is what we have learned about how to keep from fooling ourselves.
― Richard Feynman
Last week was another trip to the Multimat conference, a biennial meeting of scientists who solve the equations of multiple material flows under highly energetic and compressible conditions. I’ve always greeted the meeting with great enthusiasm and returned to work bristling with new ideas and inspiration. In many ways this community is the tip of the intellectual spear for modeling and simulation capability. It also marks an anniversary of sorts, four years of blogging. My very first post here was about the San Francisco edition of the meeting that coincided with my 50th birthday (https://williamjrider.wordpress.com/2013/09/13/thoughts-about-multimat2013/). Two years ago we had a wonderful meeting in Würzburg, Germany
(https://williamjrider.wordpress.com/2015/09/11/multimat2015-a-biannual-festival-on-computing-compressible-multiple-materials/). Every meeting has been wonderful and this one was no exception, except in one very important and perhaps essential regard: the meeting seemed devoid of the usual exciting intellectual inspiration. What happened to the big ideas that flowed so easily at every previous meeting? Is it my imagination, or have the exciting new ideas dried up?
Do stuff. Be clenched, curious. Not waiting for inspiration’s shove or society’s kiss on your forehead. Pay attention. It’s all about paying attention. Attention is vitality. It connects you with others. It makes you eager. Stay eager.
― Susan Sontag
This assessment might seem rather harsh, but upon reflecting on the previous meetings, it holds up under considerable scrutiny. Each previous meeting was full of moments where you watch someone else’s talk and think, “I wish I’d thought of that, this is brilliant.” This is exactly what makes conferences so wonderful and important to attend; you get to cherry-pick people’s best ideas, accumulated at great effort, all at once. In the moment these ideas seem like Athena springing fully formed from Zeus’ brow! Your colleagues get to look like the geniuses they are and present their most creative and ingenious thoughts in an intellectual banquet (https://williamjrider.wordpress.com/2014/06/13/why-do-scientists-need-to-attend-conferences/, https://williamjrider.wordpress.com/2014/10/27/a-holistic-view-of-attending-conferences/). The reason for attending conferences isn’t to give talks; it is to learn new things taught by the smartest people you know. It is to meet and let ideas breed openly over laughter, food and drinks. You give talks as an act of repayment for the knowledge you are granted by
being in the audience. Giving talks is pretty low on the list of reasons to attend, but not in the minds of our overlords, which starts to get at the problems I’ll discuss below. Given the track record of this meeting, my expectations were sky-high, and the lack of inspiring ideas left me slightly despondent.
A few more thoughts about the meeting are worth pointing out before getting to the dialog about fresh ideas, their importance, and postulates for their recent absence. The meeting is attended by a collection of computational scientists (mathematics, physics, engineering, …) dominated by the nuclear “club”. This means Americans, French and British with a smattering of Russians and Chinese – who couldn’t come this year for undisclosed reasons. These scientists for the most part work at their nations’ respective nuclear weapons labs. Occasionally others attend, like the Israelis (an unacknowledged member of the club), along with a handful of Czechs, Italians, and Germans. As such the meeting serves as a proverbial checkup on the intellectual health of this important area of science at the West’s nuclear weapons labs. This year’s checkup should give everyone pause: the state of health is declining. There is a real lack of creative energy surrounding the heart of our most important codes. Many important codes are built around a powerful hydro-solver that produces accurate, physically relevant solutions to the multi-material “hydrodynamic” equations. Previous meetings have seen a healthy resurgence of new ideas, but that upswing seems to have come to a grinding halt. These labs have also provided a deep well of inspired research that has benefited the broader scientific community, including weather, climate, astrophysics and a broad swath of engineering uses of computation.
In my opinion the reasons for this halt in creative energy are simple and straightforward. The foolhardy worldwide push for exascale computers is sucking the air out of the room. It is gobbling up all the resources and attention, leaving nothing for new ideas. This completely obsessive and unwise focus on the hardware is an attempt to continue the already dead Moore’s law. The push is draining the community of vitality, resources and focus. The reasons for the push are worth considering because they help define the increasingly hostile nature of the modern world toward science. The computers being built for the future are abysmal to use, and the efforts to move our codes to them are sucking all the energy from the Labs. Nothing is left for creative work; nothing is left for new ideas. Simply put, keeping even our old ideas working is hard enough once you add these generally unusable computers to the mix. The reason is simple; the new computers completely suck. They are true monstrosities (in the classic definition of the word) and complete pieces of shit as scientific hardware. They are exactly the computers we don’t want to use. The price of forcing them down our throats is the destruction of any research that isn’t associated with simply making these awful computers work. Worse yet, the return on the massive investment of effort will be vanishingly small in terms of added modeling and simulation capability.

As noted, this whole direction is a foolish attempt to breathe life into the already rigid corpse of Moore’s law, now dead at every scale of computing and already a decade deceased at the level of computer chips (note that the death of Moore’s law and the ascendancy of cell phones are strongly correlated, and that is probably not a coincidence). The truth of our real performance on computers is far more dire and damning of this entire effort. We have been getting an ever-lower proportion of the potential performance of our computers for 25 years. Each computer has a peak performance measured on silly hardware-friendly benchmarks that no one gives a flying fuck about (the dense linear algebra LU decomposition, Linpack). This silly and useless benchmark is how we crown the fastest computer! Our actual code performance on these machines is truly atrocious, and the dirty little secret is that it gets worse every year. It was god-awful 20 years ago, and it has only deteriorated since. Zero is a pretty good approximation to the proportion of the performance we get – generally much less than one percent. We willfully ignore the situation just as one would ignore a cancer threatening to devour our lives. The attitude is generally, “look away, nothing to see here”. The exascale program is that cancer metastasized.
Part of the discussion about exascale needs to acknowledge the nature of choices and priorities in research. In isolation, the exascale program is an unambiguous good; it would be genuinely awesome to have – usable – exascale computers (https://williamjrider.wordpress.com/2014/09/19/what-would-we-actually-do-with-an-exascale-computer/). This good needs to be weighed in terms of its cost and the impact of alternatives. It needs to be viewed through the lens of reality too. If one looks at the raw cost, the opportunity cost, and the collateral damage, the exascale program is a massively negative force in science (https://williamjrider.wordpress.com/2016/06/27/we-have-already-lost-to-the-chinese-in-supercomputing-good-thing-it-doesnt-matter/, https://williamjrider.wordpress.com/2016/05/04/hpc-is-just-a-tool-modeling-simulation-is-what-is-important/, https://williamjrider.wordpress.com/2016/10/19/why-china-is-kicking-our-ass-in-hpc/). In isolation, without considering anything else, it is a clear positive. In the context of lost opportunities and effective use of available resources, the program is an unmitigated disaster. We will all be poorer for it as it lays waste to potential breakthroughs we will be denied in its wake. In today’s world we talk about things in isolation, free of the nuance and trade space that would make for a more robust and responsible conversation. Our leaders are irresponsible in the extreme for taking us down this path with no real discussion or debate. The message in the trenches is “do what you’re paid to do and quit asking questions”.
The really dirty secret is that chasing exascale as a route to scientific discovery is simply bullshit of the highest and most expensive order. We would be far better served by simply figuring out how to use the hardware we already have. Figuring out how to efficiently use hardware we have had for decades would be a difficult and worthy endeavor. The punch line is that we could get orders of magnitude of improved performance out of the hardware we’ve been using for decades. Simply getting our codes to work more efficiently on existing computers would meet most scientific goals without eviscerating the rest of computational science in the process. Instead we chase goals that are utterly meaningless. In the process we are destroying the research that has true and lasting value. The areas being ignored in the push for exascale have the capacity to provide far more scientific capability than even the most successful exascale program could possibly deliver. This brings me back to the meeting in Santa Fe and the lack of energy and exciting ideas. In the past the meeting has been a great survey of the active work from a creative and immensely talented group of people. As such this meeting is the proverbial canary in the coal mine. The ideas are dying right in front of our eyes.
This outcome is conflated with the general lack of intellectual vigor in public discourse. The same lack of intellectual vigor has put this foolish exascale program in place. Ideas are viewed as counter-productive today in virtually every public square. Alarmingly, science is now suffering from the same ill. Experts and the intellectual elite are viewed unfavorably and their views held in suspicion. Their work is not supported, nor are projects and programs that depend on deep thinking, ideas or intellectual labor. The fingerprints of this systematic dumbing down of our work have reached computational science, and they are reaping a harvest of poisoned fruit. Another sign of the problem is the lack of engagement of our top scientists in driving new directions in research. Today, managers with no active research of their own define the new directions. Every year our managers’ work gets further from any technical content. We have the blind leading the sighted, telling them to trust that they can see where we are going. This problem highlights the core of the issue: the only thing that matters today is money. What we spend the money on, and the value of that work to advance science, is essentially meaningless.
Effectively we are seeing the crisis that has infested our broader public sphere moving into science. The lack of intellectual thought and vitality pushing our public discourse to the lowest common denominator is now attacking science. Rather than integrate the best in scientific judgment into our decisions on research direction, it is ignored. The experts are simply told to get in line with the right answer or be silent. In addition, the programs defined through this process then feed back on the scientific community, savaging the expertise further. The fact that this science is intimately connected to national and international security should put an even sharper point on the topic. We are caught in a vicious cycle, and we are seeing the evidence in the hollowing out of good work at this conference. If one is looking for a poster child for bad research directions, the exascale programs are a good place to look. I’m sure other areas of science are suffering through similar ills. This global effort is genuinely poorly thought through and lacks any sort of intellectual curiosity.
Moving our focus back to exascale provides a useful case study of what is going wrong. We see that programs are defined by “getting funding” rather than by what needs to be done or what should be done. Arguments for funding need to be as simple as possible, and faster computers are simple enough for unrefined people to buy into. It sounds good, and technically unsophisticated people buy it hook, line and sinker. Computers are big, loud and have lots of flashing lights to impress managers, politicians and business people who know no better. Our scientists have been cowed into compliance and simply act happy to get money for doing something. A paycheck beats the alternative, and we should feel happy that we have that. The level of inspiration in the overall approach has basically fallen off a cliff, and new ideas are shunned because they just make things complicated. We are left with the lowest common denominator as the driving force. We have no stomach for nuance or subtlety.
Priority is placed on getting our existing codes working on the new super-expensive computers. The up-front cost of these computers is the tip of the proverbial cost iceberg. The explicit costs of the computers are their purchase price, their massive electrical bill and the cost of using these monstrosities. The computers are not the computers we want to use; they are the ones we are forced to use. As such, the cost of developing codes on these computers is extreme. These new computers are immensely unproductive environments. They are a huge tax on everyone’s efforts. This sucks the creative air from the room and leads to a reduction in the ability to do anything else. Since all the things being suffocated by exascale are more useful for modeling and simulation, the ability to actually improve our computational modeling is hurt. The only things that benefit from the exascale program are trivial and already exist as well-defined modeling efforts.
Increasingly, everything is run through disconnected projects that are myopic by construction. The ability to do truly unique and groundbreaking science is completely savaged by this approach to management. Breakthroughs are rarely “eureka” moments where someone simply invents something completely new. Instead, most good research is made through connections to other good research. Conferences are great incubators for these connections. Well-defined and proven ideas are imported and redefined to make contributions to a new area. This requires people to work across discipline boundaries, and to learn about new things in depth. People need to engage deeply with one another, which is similarly undermined today by project management and the focus on information security. The key thing is exposure to new and related areas of endeavor and basic learning. The breakthroughs come episodically and do not lend themselves to the sort of project management in vogue today.
It isn’t like I came back with nothing. There were a couple of new things that really fall into the category of follow-up. In one case there was a continuation of a discussion of verification of shock tube problems with someone from Los Alamos. The discussion started in Las Vegas at the ASME VVUQ meeting, and continued in Santa Fe. In a nutshell, we were trying to get cleaner verification results by dividing the problem into specific regions, each associated with a particular solution feature and the expectation of a different rate of convergence. We found something unexpected in the process that doesn’t seem to follow theoretical expectations. It’s worth some significant follow-up; a mysterious result is always something worth getting to the bottom of. The second bit of new intellectual blood came in direct response to my talk. I will freely admit that my contribution wasn’t my best. I haven’t had much free energy at work to put into energizing my own research; the same exascale demon is sucking my intellectual lifeblood out. I simply reported on a heretofore unreported structural failing of solvers. In summary, we find systematic but small violations of the second law of thermodynamics in rarefactions for modern and classical methods. This shouldn’t happen, and violations of the second law lead to unphysical solutions. All of this stems from identifying a brutal problem (https://williamjrider.wordpress.com/2017/06/09/brutal-problems-make-for-swift-progress/) that every general-purpose code fails at – what I call “Satan’s shock tube”, with 12-order-of-magnitude jumps in density and pressure approximating the propagation of waves into a vacuum.
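To make the region-by-region verification idea concrete, here is a minimal sketch of the bookkeeping involved: measure the error against an exact Riemann solution separately in each feature region of a shock tube, then compute the observed convergence rate for each region. The region names, error values and refinement ratio below are hypothetical placeholders for illustration, not results from the work described above.

```python
# A minimal sketch (not the actual workflow from the meeting) of region-wise
# verification for a shock tube: estimate the observed convergence rate
# separately for each solution feature, since shocks, contacts and
# rarefactions are expected to converge at different rates.
import math

# Hypothetical L1 errors versus an exact Riemann solution on three grids,
# each refined by a factor r = 2 (numbers are made up for illustration).
errors = {
    "rarefaction": [4.0e-3, 1.1e-3, 2.9e-4],  # ~2nd order expected in smooth flow
    "contact":     [6.0e-3, 3.1e-3, 1.6e-3],  # ~1st order or worse is typical
    "shock":       [8.0e-3, 4.2e-3, 2.1e-3],  # ~1st order is typical at a discontinuity
}
r = 2.0  # grid refinement ratio

for region, e in errors.items():
    # Observed order between successive grids: p = log(e_coarse/e_fine) / log(r)
    rates = [math.log(e[i] / e[i + 1]) / math.log(r) for i in range(len(e) - 1)]
    print(f"{region:12s} observed rates: " + ", ".join(f"{p:.2f}" for p in rates))
```

The payoff of splitting by region is immediate: the smooth rarefaction can converge at close to the design order while the shock and contact drag the global rate down toward first order or below, and departures from those expectations are exactly the kind of mysterious result worth chasing.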
We cannot live only for ourselves. A thousand fibers connect us with our fellow men; and among those fibers, as sympathetic threads, our actions run as causes, and they come back to us as effects.
― Herman Melville
Before closing I can say a thing or two about the meeting itself. None of the issues dulled the brilliance of the venue in Santa Fe, “the City Different”. While I was disappointed about not enjoying the meeting in some exotic European venue, Santa Fe is a fabulous place for a meeting. It is both old (by American standards) and wonderfully cosmopolitan. There is genuine beauty in the area, and our hotel was nearly perfect. Santa Fe boasts exceptional weather in the fall, and the week didn’t disappoint. It has a vibrant art community, including the impressive and psychedelic Meow Wolf. The hotel was the Drury Plaza, housed in a remodeled (and supposedly haunted) old hospital. Two short blocks from the plaza, the hotel is enchanting and comfortable. We all shared two meals each day catered by the hotel’s exceptional restaurant. Having meals at the conference, together with the participants, is optimal and makes for a much-improved meeting compared to going out to restaurants.
We had a marvelous reception on the hotel’s rooftop bar, enjoying a typically gorgeous New Mexico early-autumn sunset with flowing drinks, old friends and incredibly stimulating conversation. American laws virtually prohibit government funds paying for alcohol, so the drinks were courtesy of the British and French governments. It is one more idiotic prohibition, at odds with productivity and common sense, that only undermines our collective efforts, especially the creative and collaborative ones. These laws have only gotten more prescriptive and limiting. We can no longer pay for meals for interview lunches and dinners, much less business meetings. None of this reflects best practice for any business. The power of breaking bread and enjoying a drink to lubricate human interactions is well known. We only hurt our productivity and capacity to produce valuable work with these restrictions. We are utterly delusional about the wisdom of these policies. All of this only serves to highlight the shortfall in creative energy evident in the meeting’s lack of exciting new ideas.
Never underestimate the power of human stupidity.
– Robert A. Heinlein
Most of the activity of working scientists is at the boundaries of our knowledge, working to push back the current limits on what is known. The scientific method is there to provide structure and order to the expansion of knowledge. We have well-chosen and well-understood ways to test proposed knowledge. One method of using and testing our theoretical knowledge in science is computational simulation. Within computational work, the use of verification and validation with uncertainty quantification is basically the scientific method in action.
If the uncertainty is irreducible and unavoidable, the problem with not assessing uncertainty, and thereby taking an implied value of ZERO for it, becomes truly dangerous and may prove deadly in rather commonly encountered situations. As systems become more complex and energetic, chaotic character becomes more acute and common. This chaotic character leads to solutions that have natural variability. Understanding this natural variability is essential to understanding the system. Building this knowledge is the first step in moving toward a capability to control and engineer it, and perhaps, if we are wise, to reduce it. If one does not understand what the variability is, it cannot be addressed via systematic engineering or accommodation.
Failing to address this variability systematically is an ever-growing limit for science. We have a major scientific gap open in front of us, and we are failing to acknowledge it and attack it with our scientific tools. It is simply ignored, almost by fiat. Changing our perspective would make a huge difference in experimental and theoretical science, and would remove our collective heads from the sand on this matter.
This is willful ignorance of uncertainty. Probably the most common uncertainty to be willfully ignorant of is numerical error. The key numerical error is discretization error, which arises from the need to make a continuous problem discrete and computable. The basic premise of computing is that more discrete degrees of freedom should produce a more accurate answer. By examining the rate at which this happens, the magnitude of the error can be estimated. Other estimates can be had through making some assumptions about the solution and relating the error to the nature of the solution (like the magnitude of estimated derivatives). Other, generally smaller, numerical errors arise from solving systems of equations to a specified tolerance, parallel consistency error and round-off error. In most circumstances these are much smaller than discretization error, but they are still non-zero.
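As an illustration of the convergence-rate route to estimating discretization error, here is a minimal sketch of Richardson extrapolation applied to three grids. The solution values and refinement ratio are made up for illustration; this is the textbook procedure, not any particular code’s verification machinery.

```python
# A minimal sketch, with hypothetical numbers, of turning a grid-refinement
# study into a discretization-error estimate: compute the observed convergence
# rate from three grids, then use Richardson extrapolation to estimate the
# error remaining on the finest grid.
import math

f_coarse, f_medium, f_fine = 0.9120, 0.9345, 0.9402  # made-up solution functionals
r = 2.0                                              # refinement ratio between grids

# Observed order of accuracy from the three solutions.
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)

# Richardson estimate of the exact value and of the error on the finest grid.
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
error_est = abs(f_exact_est - f_fine)

print(f"observed order p = {p:.2f}")
print(f"estimated discretization error on the fine grid = {error_est:.2e}")
```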
The last area of uncertainty is modeling uncertainty. In the vast majority of cases this will be the largest source of uncertainty, though of course there will be exceptions. It has three major components: the choice of the overall discrete model, the choice of the models or equations themselves, and the coefficients defining the specific model. The first two are usually the largest part of the uncertainty and, unfortunately, the most commonly ignored in assessments. The last is the most commonly addressed because it is amenable to automatic evaluation. Even in this case the work is generally incomplete and lacks full disclosure of the uncertainty.
The calculation is then repeated using values drawn to efficiently sample the probability space of the calculation and produce the uncertainty. This sampling is done in a very high-dimensional space and carries significant errors of its own. More often than not, the degree of error associated with the under-sampling is not included in the results. It most certainly should be.
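Here is a minimal sketch of why the sampling error itself should be reported: with a finite number of samples the computed statistics carry their own error, which shrinks only like one over the square root of the sample count. The toy model, input distribution and sample size below are hypothetical.

```python
# A minimal sketch (hypothetical model and numbers) of reporting the sampling
# error alongside the sampled uncertainty.
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Stand-in for an expensive simulation driven by one uncertain parameter.
    return np.exp(-x) + 0.1 * x**2

n_samples = 200                                      # far too few for a high-dimensional space
x = rng.normal(loc=1.0, scale=0.2, size=n_samples)   # uncertain input distribution
y = model(x)

mean = y.mean()
std = y.std(ddof=1)                                  # spread of the output: the "uncertainty"
sampling_error = std / np.sqrt(n_samples)            # standard error of the mean: the under-sampling error

print(f"output mean = {mean:.4f} +/- {sampling_error:.4f} (sampling error)")
print(f"output standard deviation = {std:.4f}")
```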
Every September my wife and I attend the local TEDx event here in Albuquerque. It is a marvelous way to spend the day, and it leaves a lasting impression on us. We immerse ourselves in inspiring, fresh ideas surrounded by like-minded people. It is empowering and wonderful to see the local community of progressive people together at once, listening, interacting and absorbing a selection of some of the best ideas in our community. This year’s event was great and, as always, several talks stood out, particularly Jannell MacAulay (Lt. Col., USAF) talking about applying mindfulness to work and life, and Olivia Gatwood’s inspiring poetry about the seemingly mundane aspects of life that speak to far deeper issues in society. The smallest details are illustrative of the biggest concerns. Both of these talks made me want to think deeply about applying their lessons in some fashion and improving my life as a consequence.
We have transitioned from animals fighting for survival during brief, violent lives to beings capable of higher thought and aspiration during unnaturally long and productive lives. We can think and invent new things instead of simply fighting to feed ourselves and reproduce a new generation of humans to struggle in an identical manner. We can also produce work whose only value is beauty and wonder. TED provides a beacon for humanity’s best characteristics along with a hopeful, forward-looking community committed to positive common values. It is a powerful message that I’d like to take with me every day. I’d like to live out this promise with my actions, but the reality of work and life comes up short.
TED talks are often the focus of criticism for their approach and their marketing character, which is strongly tied to their performance-art nature. These critiques are valid and worth considering, including the often superficial way difficult topics are covered. In many ways, where research papers can increasingly be criticized as merely the marketing of the actual work, TED talks are the 30-second mass-market advertisement of big ideas for big problems. Still, the talks provide a deeply inspiring pitch for big ideas that one can follow up on, and they provide an entry to something much better. I find a talk is a perfect opening to learning or thinking more about a topic, or merely to being exposed to something new.
I could not identify a single thing recommended in Pink’s book that made it to the workplace. It seemed to me that the book simply inspired the management to a set of ideals that could not be realized. The managers aren’t really in charge; they are simply managing for corporate compliance instead of managing in a way that maximizes the performance of their people. The Lab isn’t about progress any more; it is about everything but progress. Compliance and subservience have become the raison d’être.
The book’s message is progressive in terms of the business world. The problem is that the status quo and the central organizing principle today are anti-progressive. Progress is something everyone is afraid of, and the future appears terrifying and worth putting off for as long as possible. We see a genuinely horrible lurch toward an embrace of the past along with all its anger, bigotry, violence and fear. Fear is the driving force for avoiding anything that looks progressive.
Still, I can offer a set of TED talks that have both inspired me and impacted my life for the better. They have either encouraged me to learn more, to make a change, or simply to change perspective. I’ll start with a recent one in which David Baron gave an incredibly inspiring call to see a total eclipse in its totality.
We ended up in Durkee, finding a wonderful community center with a lawn, and watched the eclipse with 50 people from all over the local area plus a couple from Berlin! The totality of the eclipse lasted only two minutes. It was part of a 22-hour day of driving over 800 miles, and it was totally and completely worth every second! Seeing the totality was one of the greatest experiences I can remember. My life was better for it, and my life was better for watching that TED talk.
Another recent talk really provoked me to think about my priorities. It is a deep consideration of what your priorities are in terms of your health. Are you better off going to the gym, or going to a party or the bar? Conventional wisdom says the gym will extend your life the most, but perhaps not. Susan Pinker provides a compelling case that social connection is the key to a longer life.
The struggle is there for good reasons, and knowing the reasons provides insight into solutions. Perel powerfully explains the problem and speaks to working toward solutions.
other reason that I usually don’t. I will close by honoring the inspirational gift of Olivia Gatwood’s talk on poetry about seeking beauty and meaning in the mundane. I’ll write a narrative of a moment in my life that touched me deeply.
movie “Fight Club” again. This is my 300th blog post here. It’s been an amazing experience; thanks for reading.
I worked at McDonald’s for my first job. I was a hard worker, and a kick-ass grill man, opener, closer, and whatever else I did. I became a manager and ultimately the #2 man at a store. Still, I was 100% replaceable and in no way essential; the store worked just fine without me. I was interchangeable with any other hard-working person. It isn’t really the best feeling; you’d like to be a person whose imprint on the world means something. This is an aspiration worth having, and when your work is truly creative, you add value in a way that no one else can replicate.
Los Alamos in those days was something of an incubator for aspiring scientists. You were encouraged to think of the big picture and the long term while learning and growing. The Lab was a warm and welcoming place where people were generous with knowledge, expertise and time. It was still hard work and incredibly demanding, but all in the spirit of service and work with value. I repaid the generosity through learning and growing as a professional. It was an amazing place to work, an incredible place to be, an environment to be treasured, and it made me who I am today.
The people who embodied the scientific culture there were relabeled as “butthead cowboys,” troublemakers, and failures. The culture that was generous, long-term in its thinking, focused on the big picture and on national service was haphazardly dismantled. Empowerment was ripped away from the scientists and replaced with control. Caution replaced boldness, management removed generosity, all in the name of a formality of operations that removes anything unforeseen from outcomes. The modern world wants assured performance. Today Los Alamos is a mere shadow of itself, stumbling forward toward the abyss of mediocrity. Witnessing this happen was one of the greatest tragedies of my life.
Everything is process today, and anything bad can supposedly be managed out of existence. No one looks at the downside of this, and the downside is corrosive to the quality of the workplace.
Instead of encouraging and empowering our people to take risks while tolerating and learning from failure, we do the opposite. We steer people away from risky work, punish failure and discourage learning lessons from it. It is as if we had suddenly become believers in the “free lunch”. True achievement is extremely difficult, and true achievement is powered by the ability to attempt risky, almost impossible things. If failure is not used as an opportunity to learn, people will become disempowered and avoid the risks. This in turn will kill achievement before it can even be conceived. The entire system would seem to be designed to disempower people and lower their potential for achievement.
Part of expertise is the knowledge necessary to mentor others. This was a key aspect of my early career experience at Los Alamos. At that time the Lab was teeming with experts who were generous with their time and knowledge. All you had to do was reach out and ask, and people helped you. The experts were eager to share their experience and knowledge with others in a spirit of collective generosity. Today we are managed to completely avoid this, with managed time and managed focus. We are trained not to be generous because that generosity would rob our “customers” of our effort and time. The flywheel of the experts of today helping to create the experts of tomorrow is being undone. People are trained neither to ask for, nor to provide, expertise freely.
This is where we find ourselves today. We also know that the state of affairs could be significantly better. How can we get there from here? The first step would be some sort of collective decision that the current system isn’t working. From my perspective, the malaise and lack of effectiveness of our current system are so pervasive and evident that action to correct them is overdue. On the other hand, the current system serves the purposes of those in control quite well, and they are not predisposed to be agents of change. As such, the impetus for change is almost invariably external. It is usually extremely painful because the status quo does not want to be rooted out unless it is forced to be. The circumstances need to demand performance that the current system cannot produce, and as systems degrade this becomes ever more likely.
The challenge is to change and not lose all the good things in the process. Bad things, bad outcomes and bad behavior happen, and perhaps need to happen to have all the good (in other words, “shit happens”). Today we are gripped by a belief that negative outcomes can be managed away. In the process of managing away bad outcomes, we destroy the foundation of everything good. To put it differently, we need to value the good and accept the bad as a necessary condition for enabling good outcomes. If one looks at failure as the engine of learning, we begin to realize that the bad is the foundation of the good. If we do not allow the bad things to happen, let people fuck things up, we can’t have really good things either. One requires the other, and our attempts to control bad outcomes remove a lot of good or even great outcomes at the same time.
The reasons for not estimating uncertainties are legion. Sometimes it is just too hard (or people are lazy). Sometimes the way of examining a problem is set up to ignore the uncertainty by construction (a common route to ignoring experimental variability and numerical error). In other cases the uncertainty is large and it is far more comfortable to be delusional about its size. Small uncertainty is comforting and implies a level of mastery that exudes confidence. Large uncertainty is worrying and implies a lack of control. For this reason, getting away with choosing a zero uncertainty is a source of false confidence and unfounded comfort, but it is a deeply common human trait.
If we can manage to overcome the multitude of human failings underpinning the choice of the default zero uncertainty, we are still left with the task of doing something better. To be clear, the major impediment is recognizing that the zero estimate of uncertainty is not acceptable (most “customers” like the zero estimate because it seems better, even though it assuredly is not!). Most of the time we have a complete absence of information on which to base uncertainty estimates. In some cases we can avoid zero uncertainty estimates by being more disciplined and industrious; in other cases we can think about the estimation from the beginning of the study and build it into the work. In many cases we have only expert judgment to rely upon for estimation. In that case we need to employ a very simple and well-defined technique for providing an estimate.
Generally speaking, there will be a worst case to consider, or something more severe than the scenario at hand. Such large uncertainties are likely to be quite uncomfortable to those engaging in the work. They should be uncomfortable if we are doing things right. The goal of this exercise is not to minimize uncertainties, but to get things right. If such bounding uncertainties are unavailable, one does not have the right to do high-consequence decision-making with the results. This is the unpleasant aspect of the process: it demands delivery of the worst case. To be more concrete about the need for this part of the bounding exercise: if you don’t know how bad the uncertainty is, you have no business using the results for anything serious. As stated before, the bounding process needs to be evidence-based; the assignment of lower and upper bounds for uncertainty should have a specific and defensible basis.
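One simple, well-defined technique of the sort described above (in the spirit of a GUM-style “Type B” evaluation, and not necessarily the exact procedure the author has in mind) is to turn defensible expert-judgment bounds into both a standard uncertainty and a bounding uncertainty, as in this sketch with hypothetical numbers.

```python
# A minimal sketch of converting expert-judgment bounds into an uncertainty
# estimate; the bounds themselves are hypothetical and must be evidence-based.
import math

lower, upper = 0.8, 1.6                  # defensible lower and upper bounds on the quantity
best_estimate = 0.5 * (lower + upper)

# If all we can defend is "the value lies between the bounds", treat the value
# as uniformly distributed between them: standard uncertainty = (b - a)/sqrt(12).
u_standard = (upper - lower) / math.sqrt(12.0)

# If the bounds are meant as a conservative worst case, report the half-width
# itself as the bounding uncertainty rather than the smaller standard value.
u_bounding = 0.5 * (upper - lower)

print(f"best estimate = {best_estimate:.3f}")
print(f"standard uncertainty (uniform assumption) = {u_standard:.3f}")
print(f"bounding uncertainty (half-width) = {u_bounding:.3f}")
```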
To some extent this is a rather easy lift intellectually. The cultural difficulty is another thing altogether. The indefensible optimism associated with the default zero uncertainty is extremely appealing. It provides the user with a feeling that the results are good. People tend to feel that there is a single correct answer, and the smaller the uncertainty, the better they feel about that answer. Large uncertainty is associated with a lack of knowledge and with low achievement. The precision usually communicated with the default, standard approach is highly seductive. It takes a great deal of courage to take on the full depth of uncertainty along with the honest admission of how much is not known. It is far easier to simply do nothing and assert far greater knowledge while providing no evidence for the assertion.
The default is to consider this experiment to be a completely determined event with no uncertainty at all. The knee-jerk response is to treat the single event as utterly and completely deterministic, with no variation whatsoever. Yet if the experiment were repeated, with every attempt to make the repetition as perfect as possible, it would turn out slightly differently. This comes from the myriad of details associated with the experiment that determine the outcome. Generally, the more complex and energetic the phenomenon being examined, the greater the variation (unless there are powerful forces attracting a very specific solution). There is always variation; the only question is how large it is. It is never, ever identically zero. The choice to view the experiment as perfectly repeatable is usually an unconscious choice that has no credible basis. It is an incorrect and unjustified assumption that is usually made without a second thought. As such, the choice is unquestionably bad for science or engineering. In many cases this unconscious choice is dangerous, and it represents nothing more than wishful thinking.
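The bare-minimum alternative to assuming zero variability is to estimate it from repeats of the experiment. A minimal sketch with made-up measurements:

```python
# A minimal sketch (made-up data) of estimating shot-to-shot variability from
# repeated, nominally identical experiments instead of assuming it is zero.
import numpy as np

repeats = np.array([102.3, 98.7, 101.1, 99.4, 103.0, 100.2])  # hypothetical measurements

mean = repeats.mean()
std = repeats.std(ddof=1)              # sample standard deviation: the shot-to-shot variability
sem = std / np.sqrt(len(repeats))      # uncertainty in the mean itself

print(f"mean = {mean:.2f}")
print(f"shot-to-shot variability (1 sigma) = {std:.2f}")
print(f"uncertainty of the mean = {sem:.2f}")
```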