
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Monthly Archives: April 2014

Self-fulfilling Prophecies

Friday, 25 April 2014

Posted by Bill Rider in Uncategorized


While I've opined about the dismaying state of the National Labs (meaning the DOE, NNSA, or weapons labs), the state of affairs I deal with is splendid compared to what NASA is dealing with. NASA is in terrible shape, particularly with respect to aeronautics. I'd say the support for planetary exploration is dismal, but it is vibrant compared to the support for anything aircraft related. When I've visited their centers, the sense of decay and awful morale is palpable. Aeronautics and almost everything about it is woefully stagnant, with research support having dried up. Part of this lack of support has been a signal from industry that research isn't needed. The tragedy is that it shouldn't be this way. How can something that has been so transformative to society be so abysmally supported?

Air flight has been one of the several things to utterly transform the world in the past century. Travel across the country, or even more so across the ocean, used to be life altering, lasting months or years, with the potential to completely change the course of one's life. Now we can do any of these things in less than a day. Prior to the Internet, the ubiquity of air travel and the speed of transport had already remade the globe. Despite our massive investment in air travel (planes, airports, defense, etc.), the support for scientific research has all but disappeared. I think the current lack of progress is primarily a self-fulfilling prophecy: if no effort is put into progress, progress will stop. There are several issues at play here, not the least of which is a lack of vision, along with prognostications that are blatantly pessimistic, including one that has become infamous. These prognostications have lacked balance and perspective on where the engines of progress arise.

I was reminded of this state of affairs during my weekly visit to "Another Fine Mesh" (http://blog.pointwise.com). Every Friday, this site publishes a set of links to interesting computational fluid dynamics (CFD) stories, and I usually find at least one item of significant interest. Last Friday a real gem was first up in their weekly post: a pointer to a NASA white paper with a fantastic vision for CFD in 2030 (the CFD Vision 2030 study, http://ntrs.nasa.gov/search.jsp?R=20140003093). The report is phenomenal. It provides a positive and balanced view of what can and needs to be accomplished to push CFD for aeronautics forward. A lot of what they discuss is totally on target. The biggest idea is that if we invest in progress, we will be able to do some great things for aeronautics. As visionary as it is, I would say the authors don't go far enough in spelling out what could be accomplished.

This NASA vision runs counter to the trend of declining effort, which has a lot of foundational reasons, not the least of which is disappearing federal support for research. Moreover, the money spent on federal R&D is horribly inefficient due to the numerous useless strings attached. In spite of significant money spent on research, the amount of real effort has been seriously declining through systematic mismanagement and other disruptive forces. Congress, which whines incessantly about wasteful spending, is actually the chief culprit. It adds enormous numbers of wasteful requirements and needless accounting structure onto an already declining budget. It propagates an environment that kills risk taking by demanding that no effort fail, and by virtue of this imperative virtually assures failure.

Intellectually, the decline in aeronautics is often tied to a paper by Philippe Spalart of Boeing. It isn't clear whether this is more of a reason or simply an excuse. Spalart projected that the next turbulence modeling advance, known as Large Eddy Simulation (LES), would not be truly useful until 2030 or even 2045 because of the computational cost of computing full wing or aircraft flows at flight conditions. You can read Spalart's important paper at https://info.aiaa.org/tac/ASG/GTTC/Future%20of%20Ground%20Test%20Working%20Group/Reference%20Material/spalart-2000-DNS-scaleup-limits-IJHFF.pdf .
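It is worth seeing the basic shape of such an estimate, because it exposes exactly where the pessimism enters. The sketch below is my own illustration, using assumed numbers rather than Spalart's: it divides the operation count of a notional full-wing LES by the sustained speed of an affordable machine, and converts the required speedup into years of Moore's-law doubling. The last two lines show how strongly the projected date depends on whether you allow any gain at all from better models and algorithms.

```python
# A back-of-the-envelope sketch of how a Spalart-style feasibility date arises.
# Every number below is an illustrative assumption (not a figure from the paper);
# the point is the shape of the arithmetic, not the specific values.

import math

# assumed cost of one wall-resolved LES of a full wing at flight Reynolds number
grid_points  = 1.0e12            # assumed grid size
time_steps   = 1.0e7             # assumed number of time steps
flops_per_pt = 1.0e4             # assumed work per point per step
total_flops  = grid_points * time_steps * flops_per_pt

# assumed machine available to a single engineering calculation in the base year
base_year       = 2000
sustained_flops = 1.0e11         # ~100 Gflop/s sustained (assumption)
turnaround      = 7 * 24 * 3600  # demand a one-week turnaround, in seconds

doubling_time = 1.5              # Moore's-law-style doubling, assumed to hold forever

# required speedup if hardware alone must close the gap
needed = total_flops / (sustained_flops * turnaround)
year_hardware_only = base_year + doubling_time * math.log2(needed)
print(f"hardware-only estimate: feasible around {year_hardware_only:.0f}")

# the same arithmetic if models and algorithms contribute an assumed 100x gain
year_with_algorithms = base_year + doubling_time * math.log2(needed / 100.0)
print(f"with a 100x algorithmic gain: around {year_with_algorithms:.0f}")
```

With these made-up inputs, a hardware-only path lands in the 2030s while a modest hundredfold algorithmic gain pulls the date back by roughly a decade; the structure of the estimate, not the particular numbers, is what matters for the argument that follows.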

Philosophically, the largest issue with his approach is the fundamental scarcity mentality built into the assumptions behind the estimates. Unfortunately, the thinking involved in the LES estimates seems to be common today; it is both too common and dangerous, if not out-and-out destructive to our future.

There are serious problems with how Spalart approached his estimates. Most critically, he applied the estimation techniques of 1999 too far into the future. He assumed that no major discoveries would be made that impact the efficiency of LES. Such discoveries would be major model improvements, algorithms, and theory developments that change computational efficiency more radically than the computers themselves. I've written over the past couple of months about this: the elements of computational science outside computing hardware have always yielded more effective gains. Instead of waiting until 2030 or 2045, we might be looking at meaningful LES capability for applied aeronautics now, or within the next 10 years. Instead we disinvested in aeronautical research and killed the possibility. We have a literal self-fulfilling prophecy.

It gets worse, much worse. The estimate of 2030 to 2045 is based on the advance of computing hardware continuing unabated for that period. This almost certainly will not happen without a sea change in how computers are made. Moore's law is dying; by 2020 it will be gone. Without advances in theory, models, methods, and algorithms we will never get there. In other words, LES of a full aircraft will not yield to being overpowered with hardware. We need to think, we need to innovate, and we need to invent new ideas to solve this problem. Thankfully, this path is being described by the new NASA vision, which hopefully will overthrow the stale viewpoint justifying the decline of aeronautics.

Even worse than the estimates of computing power are the assumptions that we will continue to use computers as we do today. Each new generation of computing has brought new ways of using it. New applications and new approaches to problem solving will arise that render such estimates ridiculous. Ingenuity is not limited to increasing the efficiency of our current approaches; it also creates new problem-solving regimes. Beyond the realm of computing are deeper discoveries in knowledge. For example, we are long overdue for meaningful developments in our understanding of turbulence. These will likely come through experiments that utilize advances in materials science, computing, optics, and other fields to yield heretofore-impossible diagnostics. We will likely observe things that our present theory cannot explain, which in turn will drive new theory. The entire notion of what LES is may be transformed into something unforeseeable today. In other words, the future will probably look nothing like we think it will today, because we can't imagine what our fellow man will create. We can, however, believe in our fellow man's potential to solve seemingly impossible problems.

Another argument is that we don't need to develop better aeronautics because our aircraft are not changing any more. In fact, the aircraft can't change due to the regulatory environment. The belief is that current work is adequate for the mainstream issues in aircraft design. This might be true for now, but eventually things will need to change. I have a hard time imagining that in 100 years we will be flying planes that look just like today's planes. Instead, someone will decide to push knowledge forward. They will advance science, including aerospace science, and in doing so will develop new airplanes that are much better than the current ones. The people who do this will own that future. If it isn't the USA, then we will all be riding in planes built somewhere else. It doesn't have to be that way, but it will be if we don't change. The same principles hold for computers, cars, toasters, TVs, etc. If we allow ourselves to believe that we can't change, can't do better, we won't. Someone else who does believe they can do better will invent the future; when they invent the future they will own it, and us.

The last bit of wisdom missing from the whole dialog is the serendipity of finding entirely new applications for computational science. Part of progress is inventing entirely new ways of doing business, new ways of solving problems, and ways of thinking that are completely beyond our current imagination. Our lack of investment in aerospace helps to assure this won't happen. Even a casual examination of humanity's march forward shows the folly of our current approach. Man has continuously created new ways of doing things and combined ideas and technology into new things. In fact, looking out from the year 1999, it was unreasonable to assume that one could even begin to understand how things would be done in 2045, and certainly unreasonable to assume such a dismal outlook. A more defensible and honest assessment would have foreseen processes and progress that would seem otherworldly from the 1999 perspective, including discoveries that would undo our limitations. Imagining that current limitations will hold in 2045 is blindness.

This whole episode with aeronautics is just one cautionary tale, in one field. There are many more examples today where small-minded, scarcity-based thinking is killing our future.

Codes of Myth and Legend

Friday, 18 April 2014

Posted by Bill Rider in Uncategorized


If you work actively in modeling and simulation, you will encounter the codes of bygone days. If I were more of a historian they could come in really handy, although the comments in these codes often leave much to be desired. These codes are the stuff of myth and legend, or at least so it seems. The authors of the codes are mythically legendary: they did things we can't do any longer; they created useful tools for conducting simulation. This is a big problem, because we should be getting steadily better at conducting simulations. It becomes an even bigger problem when these people are no longer working, and their codes live on.

“What I cannot create, I do not understand.” – Richard Feynman

Too large a portion of the simulation work done today is not understood in any deep way by those doing the simulation. In other words, the people using the code and conducting the simulation don't really understand much about how the answer they are using was arrived at. People run the code, get an answer, and do analysis without any real idea of how the code actually got the answer. This is dangerous; it actually works to undermine the scientific enterprise. Moreover, this trend is completely unnecessary, but it is driven by deep cultural undercurrents that extend well beyond the universities, labs, and offices where simulation should be having a massively positive impact on society.

The people who created these codes are surely proud of their work. I've been privileged to work with some of them, and they are generally horrified by how long their codes continue to be used to the exclusion of newer replacements. It was their commitment to applying technical solutions to real problems that made a difference; they were committed to applying technology to solving problems, and they were good at it. The spirit of discovery that allowed them to create codes and then see them used for meaningful work has dissipated in broad swaths of science and engineering. The disturbing point is that we don't seem to be very good at it any more, or at least we aren't very good at getting our tools to solve problems. The developers of the mythic codes generally feel quite distressed by the continued reliance on their aging methods and the lack of viable replacements.

Why?

I don't believe it is the quality of the people, nor is it the raw resources available. Instead, we lack the collective will to get these things done. Our systems are letting us down; society is not allowing progress. Our collective judgment is that the risk of change outweighs the need for or benefit of progress. Progress still lives on in other areas such as "big data" and businesses associated with the Internet, but even there you can see the forces of stagnation looming on the horizon. The areas where society should place its greatest hope for the future are under threat by the same forces that are choking the things I work on. This entire narrative needs to change for the good of society and the beneficial aspects of progress.

Remarkably, the systems devised and implemented to achieve greater accountability are themselves at the heart of achieving less. The accountability is a ruse, a façade put into place to comfort the small-minded. The wonder of solving interesting problems on a computer seems to have worn off, replaced by a cautious pessimism about the entire enterprise. None of these factors is necessary, and all of them are absolutely self-imposed limitations. Let's look at each of these issues and suggest something better.

All these codes were created in the day when computing was in its infancy and supported ambitious technological objectives. Usually a code would cut its teeth on the most difficult problems available, and if it proved useful, the legend would be born. The mythic quality is related to the code's ability to usefully address the problems of importance. The success of the technology supported by the code would lend itself to the code's success and mythic status. The success of the code's users would transfer to the code; the code was part of the path to a successful career, and ambition would be satisfied through the code's good reputation. As such, the code was part of a flywheel, with ambitious projects and ambitious people providing the energy. The legacy of the code creates something that is quite difficult to overcome. It may require more willpower to move on than the code originally harnessed in taking on its mantle of legend.

We seem to have created a federal system that is maximizing the creation of entropy. It is almost as if the government were expressing a deep commitment to the second law of thermodynamics. Despite being showered with resources, the ability to get anything of substance done is elusive, and the elusiveness of progress is growing in prominence. Creating a code that has real utility for real applied problems takes focus, ingenuity, luck, and commitment. Each of these is in limited supply, and the research system of today seems to sap each of them in a myriad of ways. It seems almost impossible to focus on anything today. If I told you how many projects I work on, you'd immediately see part of the problem (7 or 8 a year). This level of accounting comes at me from a myriad of sources, some entirely local and some national in character, all of it tinged with the sense that I can't be trusted.

It takes a great deal of energy to drive these projects toward anything that looks coherent, and none of this equals the creation and stewardship of a genuine capability. Ingenuity is being crushed by an increasingly risk-averse and politically motivated research management system. Lack of commitment is easy to see in the flighty support for most projects. Even when projects are supported well, the management system slices and dices the effort into tiny bite-sized pieces and demands success in each. Failure is not tolerated. Wisdom dictates that the lack of tolerance for failure is tantamount to destroying the opportunity for success. In other words, our risk aversion is causing the very thing it was designed to avoid. Between half-hearted support and risk aversion, the chance for real innovation is being choked to death.

The management of the Labs where I work is becoming ever more intrusive. Take, for example, the financial system. Every year my work is parceled into ever-smaller chunks. This is done in the name of accountability; instead, the freedom to execute anything big is being choked by all this accountability. The irony is that the detailed accounting is actually assuring that less is accomplished, and the people driving the micromanagement aren't in the slightest accountable for the damage they have caused. The micro-accounting of my time is also driving a level of incrementalism into the work that destroys the ability to do anything game changing. This incrementalism goes hand-in-hand with the lack of any risk-taking. We are dictated to succeed by fiat, and by the same logic success on a large scale is also inhibited.

When it comes to code development, the incremental attitude results in work being accreted onto the same ever-older code base. The low-risk path is to add a little bit more onto the already useful (legacy) code. This is done despite the lack of real in-depth knowledge of how the code actually works to solve problems. The part of the code that leads to its success is almost magical, and as magic it can't be tampered with. The foundation for all the new work is corrupted by this lack of understanding, which then poisons the quality of the work built on top of the flawed base. As such, the work done on top of the magical foundation is intrinsically superficial. Given the way we manage science today, superficiality should actually be expected: our science and engineering management is focused, almost to the exclusion of everything else, on the most superficial aspects of the work.

The fundamental fact is that a new code is a risk. It may not replace or improve upon the existing capability. Success can never be guaranteed, nor should it be. Yet we have created a system of managing science that cannot tolerate any failure. Existing codes already solve the problem well enough for somebody to get answers, and the low-risk path is to build upon this. Instead of building upon the foundation of knowledge and applying it to better solutions, it is cheaper and lower risk to simply refurbish the old code. Like much of our culture today, the payoff is immediate rather than delayed. You get new capability right away rather than a much better code later. Right away and crummy beats longer term and great every time. Why? Short attention spans? No real accountability? Deep institutional cynicism?

A good analogy is the state of our crumbling physical infrastructure. The United States' 20th century infrastructure is rotting before our eyes. When we should be thinking of a 21st century infrastructure, we are trying to make the last century's limp along. Think of an old bridge that desperately needs replacement. It is in danger of collapse and represents a real risk rather than a benefit to its users. More often than not in today's America, the old bridge is simply repaired or retrofitted regardless of its state of repair. You can bumble along this path until the bridge collapses. Most bridges don't, but some do, with tragic consequences. Usually there is a tremendous amount of warning that is simply ignored. Reality can't be fooled: if the bridge needs replacing and you fail to do so, a collapse is a reasonable outcome. Most of the time we just do it on the cheap.

We are doing science exactly the same way. In cases where no one can see the bridge collapse, the guilty are absolved of the damage they do. Just like physical infrastructure, we are systematically discounting the future value for the present cost. The management (I can't really call them leaders) in charge are simply not stewards of our destiny; they are just trying to hold the crumbling edifice together until they can move on with a hollow declaration of success. Sooner or later, this lack of care will yield negative consequences.

All this caution is creating an environment that fails to utilize existing talent and embodies a pessimistic view of man's capacity to create. This might be the saddest aspect of the overall malaise: the waste of potential. Our "customers" actually ask very little of us, and the efforts don't really push our abilities; except, perhaps, our ability to withstand work that is utter dreck. The objectives of the work are small-minded, with a focus on producing sure results and minimizing risks. The system does little to encourage the big thoughts, dreams, or risks behind creating big results. Politicians heighten this sense by constantly discussing how deeply in debt our country is as an excuse for not spending money on things of value. Not every dollar spent is the same; a dollar invested in something of value is not the same as a dollar spent on something with no return. All of this is predicated on a mentality of scarcity, and a failure to see our fellow man or ourselves as engines of innovation and unseen opportunities. History will not be kind to our current leadership when it is realized how much was squandered. The evidence that we should have faith in man's innate creative ability is great, and ignorance of the possibility of a better world is hard to stomach.

The first step toward a better future is a change in the assumptions regarding what an investment in the future looks like. One needs to overthrow the scarcity mentality and realize that money invested wisely will yield greater future value. Education and lifetime learning are one such investment. Faith in creativity and innovation is another. Big audacious goals and lofty objectives are another. The big goal is more than just an achievement; it is an aspiration that lifts the lives of all who contribute, and of all who are inspired by it. Do we have any large-scale societal goals today? If we don't, how can we tolerate such a lack of leadership? We should be demanding something big and meaningful in our lives, something worth doing and something that would make the current small-minded micromanagement and lack of risk taking utterly unacceptable. We should be outraged by the state of things, and by the degree to which we are being led astray.

All of us should be actively engaged in creating a better world, and solving the problems that face us. Instead we seem to be just hanging on to the imperfect world handed to us. We need to have more faith in our creative problem solving abilities, and less reverence for what was achieved in the past. The future is waiting to be created.

“If you want something new, you have to stop doing something old” ― Peter F. Drucker

 

What constraints are keeping us from progressing?

Friday, 11 April 2014

Posted by Bill Rider in Uncategorized


“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” – Clarke’s first law

It would be easy to point fingers at the crushing bureaucratic load we face at many of our premier research institutes. I think this only compounds the real forces holding us back, acting as a sort of mindless ally in the quest for mediocrity. I for one can feel my ability to think and create being siphoned away by meaningless paperwork, approvals, training, and mindless formality. The personal toll is heartbreaking, and the taxpayers should be up in arms. Of course most of this is driven by our scandal-mongering political system and the increasingly tabloidesque media. These items are merely various forms of societal dissipation driving entropy toward its all-consuming conclusion.

When I came across an article in the Daily Beast yesterday ("Our Mindless Government Is Heading for a Spending Disaster") on the book "The Rule of Nobody" by Philip K. Howard, it became clear that I'm not alone in feeling this way. Our Labs are actually not run by anyone, and certainly not by the management of the Lab. The problem is not partisan, but rather associated with a tendency to be lazy in how we govern. At the core of this trend is the inability to reinvent our governance, and this failure to reinvent stems from the deeper issue: the fear of risk or failure. We have a society-wide inability to see failure for what it is; failure is a necessary vehicle for success. Risk is the thing that allows us to step forward toward both accomplishment and failure. You cannot have one without the other. Somehow, as a culture, we have forgotten how to strive, to accept failure as a necessary element of a healthy country. This aversion has crept into our collective consciousness, and it is sapping our ability to accomplish anything of substance.

In scientific research, the inability to accept risk and the requisite failure is incredibly destructive. Research at its essence is doing something that has never been done before. It should be risky and thus highly susceptible to failure. Our ability to learn the limits of knowledge is intimately tied to failure. Yet failure is the very thing we are not encouraging as a society; in fact, failure is punished without mercy. The aggregate impact of this is the failure to accept the sort of risk that leads to large-scale success. To get a "Google" or a "moon landing" we have to fund, accept, and learn from innumerable failures. Without the failures, the large successes will elude us as well.

Another issue is the artificial limitation we place on our thinking in the guise of declaring "it's impossible." The impossible also implies risk and a large chance of outright failure. We quit pushing the limits of what might be possible and escape into the comfortable confines of the safely possible. A third piece is the inability to marshal our collective efforts in the pursuit of massive societal goals. These goals capture the imagination and drive the orientation toward success beyond ourselves to greater achievements. Again, it is the inability to accept risk. The last I'll touch upon is the lack of faith in the creative abilities of mankind. Man's creative energies have continually overcome limitations for millennia, and there is no reason to think this won't continue. The impact of algorithmic improvement on computing is but one version of the larger theme of man's ability to create a better world.

It seems that my job is all about NOT taking risks. The opposite should be true. Instead we spend all our time figuring out how not to screw up, how to avoid any failure. This, of course, is antithetical to success. All success, all expertise, is built upon the firm foundation of glorious failure and risk. Failure is how we learn, and risk helps to stoke the flames of failure. Instead we have grown to accept creeping mediocrity as the goal of our entire society. When the biggest goal at work is "don't screw up," it is hard to think of a good reason to do anything. We have projects with scheduled breakthroughs and goals that are easy to meet. Very few projects are funded that actually attack big goals. Instead incrementalism abounds, and the best way to get funded is to solve the problem first and then use the result to justify more funding. It's a vicious cycle, and it is swallowing too much of our effort.

Strangely enough, the whole vicious cycle also keeps us from doing the mundane. Since our efforts are so horrifically over-managed, there is no energy left to actually execute what should be the trivial aspects of the job. Part of this is related to the slicing and dicing of our work into pieces so small that any coherence is lost. The second part is the lack of any overarching vision of where we are going. The lack of big projects with scope kills the ability to do consequential tasks that should be easy. Instead we do all sorts of things that seem hard, but really amount to nothing. We are a lot of motion without any real progress. Some of us noted a few weeks ago that new computer codes used to be started every five to seven years. Then, about 25 years ago, that stopped. Now everything has to be built upon existing codes because it lowers the risk. We have literally missed four or five generations of new codes. This is failure on an epic scale, because no one will risk something new.

"Can we travel faster than the speed of light?" my son once asked me. A reading of the standard, known theories of physics would give a clear, unequivocal "No, it is impossible." I don't buy this as the ultimate response. A better and more measured response would be, "Not with what we know today, but there are always new things to be learned about the universe. Maybe we can, using physical principles that haven't been discovered yet." Some day we might travel faster than light, or effectively so, but it won't look like Star Trek's warp drive (or maybe it will, who knows). The key is to understand that what is possible or impossible is only a function of what we know today, and our state of knowledge is always growing.

In mathematics these limits on possibility often take the form of barrier theorems, which state what cannot be done. These barriers can be overcome if they are looked at liberally, with an eye toward loopholes. A common loophole is linearity. Linearity infuses many mathematical proofs and theorems, and the means of overcoming the limitations is an appeal to nonlinearity. One important example is Godunov's theorem, where formal accuracy and monotonicity were linked. The limit only exists for linear numerical methods; a nonlinear numerical method can be both better than first-order accurate and monotone. The impossible was possible! It was simply a matter of thinking about the problem outside the box of the theorem.
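To make the loophole concrete, here is a minimal sketch (my own illustration, not drawn from any production code) of linear advection of a step profile solved two ways: a linear first-order upwind scheme, and a nonlinear scheme whose slopes are limited by the minmod function. The limited scheme is second-order accurate where the solution is smooth, yet creates no new extrema at the discontinuity, precisely the combination Godunov's theorem forbids for linear methods.

```python
# Linear advection u_t + a u_x = 0 on a periodic grid, solved with a linear
# first-order upwind scheme and with a nonlinear minmod-limited (MUSCL-type)
# scheme. Illustrative only.

import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u0, a=1.0, dx=1.0 / 200, cfl=0.5, steps=200, limited=True):
    u = u0.copy()
    dt = cfl * dx / a
    nu = a * dt / dx
    for _ in range(steps):
        # limited slope in each cell; zero slope recovers first-order upwind
        s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1)) if limited else np.zeros_like(u)
        # upwind interface state (a > 0), advanced a half step in time
        u_face = u + 0.5 * s * (1.0 - nu)
        flux = a * u_face                       # flux at interface i+1/2
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
step = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
exact = np.roll(step, 100)                      # profile advected half a period

low  = advect(step, limited=False)              # linear, monotone, heavily smeared
high = advect(step, limited=True)               # nonlinear, sharper, still monotone

print("L1 error, first-order upwind :", np.abs(low - exact).mean())
print("L1 error, minmod-limited     :", np.abs(high - exact).mean())
print("new extrema from the limiter?", bool(high.max() > 1 + 1e-12 or high.min() < -1e-12))
```

The nonlinearity lives entirely in the limiter: the scheme adapts its stencil to the data, which is exactly the kind of thinking outside the box of the theorem that the barrier invites.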

Most of the areas that have traditionally supported scientific computing are languishing today. Almost nothing in the way of big, goal-oriented projects exists to spur progress. The last such program was the ASCI program from the mid-1990s, which unfortunately focused too much on pure computing as the route to progress. ASCI bridged the gap between the CPU-dominated early era and the growth in massively parallel computation. In fact, parallel computing has masked the degree to which we are collectively failing to use our computers effectively. This era is drawing to a close, and Moore's law is rapidly dying.

While some might see the death of Moore's law as a problem, it may be an opportunity to reframe the quest for progress. In the absence of computational improvements driven by the technology, the ability to progress could again be given to the scientific community. Without hardware growing in capability, the source of progress resides in the ability of algorithms, methods, and models to improve. Even under the spell of Moore's law, these three factors have accounted for more improvement in computational capability than hardware. What will our response be to losing Moore's law? Will we invest appropriately in progress? Will we refocus our efforts on improving algorithmic efficiency, better numerical methods, and improved modeling? Hope springs eternal!
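The claim that algorithms, methods, and models have out-delivered hardware is easy to make concrete with the classic linear-solver example. The sketch below uses standard textbook operation-count scalings for solving a 3-D Poisson problem (the constants and the problem size are my own illustrative assumptions) and converts each algorithm's speedup over banded Gaussian elimination into an equivalent number of years of Moore's-law doubling.

```python
# Illustrative operation counts for solving a Poisson problem on an n x n x n grid,
# using standard textbook complexity scalings. Constants are rough assumptions; the
# point is the scale of algorithmic gains, not precise figures.

import math

n = 512                          # assumed grid points per direction
N = n ** 3                       # total unknowns
doubling_time = 1.5              # years per Moore's-law doubling (assumed)

work = {
    "banded Gaussian elimination": float(N) * n ** 4,  # ~ N * bandwidth^2, bandwidth ~ n^2
    "optimally tuned SOR":         float(N) * n,       # ~ n sweeps of O(N) work each
    "full multigrid":              50.0 * N,           # ~ O(N) with a modest constant
}

base = work["banded Gaussian elimination"]
for method, ops in work.items():
    speedup = base / ops
    doublings = math.log2(speedup) if speedup > 1.0 else 0.0
    print(f"{method:28s} {ops:10.2e} ops   speedup {speedup:9.2e}"
          f"   ~ {doubling_time * doublings:5.1f} years of Moore's law")
```

With these assumptions, the jump from elimination to multigrid alone is worth several decades of hardware doubling, which is the sense in which the mathematical side of the ledger has matched the silicon side.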

In the final analysis, such an investment requires a great deal of faith in man's eternal ability to create, to discover, and to be inspired. History provides an immense amount of evidence that this faith would be well placed. As noted above, we have created as much, if not more, computational capability through ingenious algorithms, methods, heuristics, and models than through our massive strides in computational hardware.

It is noteworthy that the phone in my pocket today has the raw computational power of a Cray 2. It sits idle most of the time and gets used for email, phone calls, texts, and light web browsing. If you had told me 25 years ago that I'd have this power available to me like this, I would have been dumbstruck. Moreover, I don't really use it for anything like I'd have used a Cray 2. The difference is that the same almost certainly will not happen in the next 25 years. The "easy" progress of simply riding the coattails of Moore's law is over. We will have to think hard to progress and take a different path. I believe the path is clear; we have all the evidence needed to continue our progress.

Unrecognized Bias can govern modeling & simulation quality

Friday, 4 April 2014

Posted by Bill Rider in Uncategorized


We are deeply biased by our perceptions and preconceptions all the time. We constantly make decisions without knowing we are making them. Any recognition of this would probably terrify most rational people. We often frame our investigations to prove the conclusion we have already reached. Computer modeling and simulation has advanced to the point where it is forming such biases. If one's most vivid view of an unseeable event is a simulation, a deep bias can form in favor of the simulation that unveiled the unseeable. We are now at the point where we need to consider whether improvement in modeling and simulation can be blocked by such biases.

For example, one modeling effort for high explosives favored a computer code that is Lagrangian (meaning the mesh moves with the material). The energy release from explosives causes the fluid to rotate vigorously, and this rotation can render the mesh a tangled mess. Besides becoming inaccurate, the tangled mesh will invariably endanger the entire simulation. To get rid of the problem, this code converts tangled mesh elements into particles. This is a significant upgrade over the practice of "element death," where the tangled grid is completely removed when it becomes a problem, along with its mass, momentum, and energy. Conservation laws are laws, not suggestions! The conversion to particles allows the simulation to continue, but brings along all the problems with accuracy and ultimately conservation that particles entail (I'm not a fan of particles).

More tellingly, competitor codes and alternative simulation approaches will add particles to their simulations. The only reason the particles are added is to give the users something that looks more like what they are used to. In other words, the users expect particles in interesting parts of the flow, and the competitors are eager to give them that, whether it is a good idea or not (it really isn't!). Rather than develop an honestly and earnestly better capability, the developers focus on providing the familiar particles.

Why? The analysts running the simulations have come to expect particles, and the particles are common where the simulations are the most energetic and interesting. To help the analysts believe the new codes, the particles come along. I, for one, think particles are terrible. Particles are incredibly seductive and appealing for simulation, but ultimately terrible because of their inability to satisfy even more important physical principles, or to provide sufficient smoothness for stable approximations. Their discrete nature creates an unfortunate trade space to be navigated without sufficiently good alternatives. In some cases you have to choose between smoothness for accuracy and conservation. Particles are often chosen because they can be integrated without dissipation, but dissipation is fundamental to physical, causal events. Causality, dissipation, and conservation all trump a calculation with particles that lack these characteristics. In the end, the only reason for the particles is the underlying bias of the analysts who have grown to look for them. Nothing else, no reason based on science; it is based on providing the "customer" what they want.

“If I had asked people what they wanted, they would have said faster horses.”– Henry Ford.

There you have it: give people what they don't even know they need. This is a core principle in innovation. If we just keep giving people what they think they want, improvements will be killed. This is the dynamic that code-related biases create. Users are biased strongly toward what they already have instead of what is possible.

Modeling and simulation has been outrageously successful over the decades. This success has spawned the ability to trick the human brain into believing that what it sees is real. The fact that simulations look so convincing is a mark of the massive progress that has been made. This is a rather deep achievement, but it is fraught with the danger of coloring perceptions in ways that cannot be controlled. The anchoring bias I spoke of above is part of that danger. The success now provides a barrier to future advances; in other words, enough success has been achieved that the human element in determining quality may be a barrier to future improvements.

It might not come as a surprise that I'll say V&V is part of the answer.

V&V has a deep role to play in improving upon this state of affairs. In a nutshell, the standard for accepting and using modeling and simulation must improve in order to allow the codes to improve. A colleague of mine has the philosophy, “you can always do better.” I think this is the core of innovation, success and advances. There is always a way to improve. This needs to be a steadfast belief that guides our choices, and provides the continual reach toward bettering our capabilities.

What can overcome this very human reaction to the visual aspects of simulation?

First, the value of simulation needs to be based upon the comparisons with experimental measurements, not human perceptions. This is easier said than done. Simulations are prone to being calibrated to remove differences from experimental measurements. Most simulations cannot match experimental observables without calibration, and/or the quality standards cannot be achieved without calibration. The end result is the inability to assess the proper value of a simulation without the bias that calibration brings. An unambiguously better simulation will require a different calibration, and potentially a different calibration methodology.

 

In complex simulations, the full breadth of calibration is quite difficult to grapple with. There are often multiple sources of calibration in a simulation, including any subgrid physics or closure relations associated with physical properties. Perhaps the most common place to see calibration is the turbulence model. Being an inherently poorly understood area of physics, turbulence modeling is prone to becoming a dumping ground for uncertainty. For example, ocean modeling often uses a value for the viscous dissipation that far exceeds reality. As a friend of mine likes to say, "if the ocean were as viscous as we model it, you could drive to England (from the USA)." Without strong bounds being put on the form and value of the parameters in the turbulence model, the values can be modified to give better matches to more important data. This is the essence of the heavy-handed calibration that is common. Another example might be the detailed equation of state for a material; often a simulation code has been used in determining various aspects of the material properties or in analyzing the experimental data used.

 

I have witnessed several difficult areas of applied modeling and simulation overwhelmed by calibration. The use of calibration is so commonly accepted that communities engage in it without thinking. If one isn't careful, the ability to truly validate the state of "true" modeling knowledge becomes nearly impossible. The calibration becomes intimately intertwined with what seems to be fundamental knowledge. For example, a simulation code might be used to help make sense of experimental data. If one isn't careful, errors in the simulation used in reducing the experimental data can be transferred over to the data itself. Worse yet, the code used in interpreting the data might itself rely on a calibration (it almost certainly does). At that point you are deep down the proverbial rabbit hole. Deep. How the hell do you unwind this horrible knot? You have calibrated the calibrator. An even more pernicious error is the failure to characterize the uncertainties in the modeling and simulation used to help interpret the experiment. In other cases, calibrations are used so frequently that they simply get transferred over into what should be fundamental physical properties. If these sorts of steps are allowed to proceed, the original intent can be lost.

These steps are in addition to a lot of my professional V&V focus: code verification and numerical error estimation. These practices can provide unambiguous evidence that a new code is a better solution on analytical problems and real applications. Too often code verification simply focuses upon the correctness of implementations as revealed by the order of convergence; the magnitude of the numerical error can be revealed as well, and it is important to provide this evidence along with the proof of correctness usually associated with verification. What was called solution verification should be called numerical error estimation, and it provides important evidence on how well real problems are solved numerically. Moreover, if part of a calibration is accounting for numerical error, the error estimation will unveil this issue clearly.
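To show what numerical error estimation looks like in practice, here is a minimal sketch using made-up values of a quantity of interest computed on three systematically refined grids. It extracts the observed order of convergence, forms a Richardson-extrapolated value, and reports a grid convergence index (Roache's GCI is one common way to package the estimate; I'm not claiming it is the only or required form).

```python
# Solution verification as numerical error estimation: observed order of convergence
# and a Richardson-style error estimate from three grids. The "computed" values
# below are invented purely for illustration.

import math

h = [0.04, 0.02, 0.01]               # coarse, medium, fine spacings (ratio r = 2)
f = [0.971200, 0.992300, 0.998100]   # quantity of interest on each grid (assumed)

r = h[0] / h[1]                                                   # refinement ratio
p = math.log(abs(f[1] - f[0]) / abs(f[2] - f[1])) / math.log(r)   # observed order

f_extrap   = f[2] + (f[2] - f[1]) / (r ** p - 1.0)  # Richardson-extrapolated value
error_fine = abs(f_extrap - f[2])                   # error estimate on the fine grid

# Roache's grid convergence index with the customary safety factor of 1.25
gci_fine = 1.25 * abs((f[2] - f[1]) / f[2]) / (r ** p - 1.0)

print(f"observed order of convergence : {p:.2f}")
print(f"Richardson-extrapolated value : {f_extrap:.6f}")
print(f"estimated fine-grid error     : {error_fine:.2e}")
print(f"GCI on the fine grid          : {gci_fine:.2e}")
```

If the observed order falls well below the formal order of the method, or the extrapolated value shifts a calibrated result, the estimate is telling you that numerical error is hiding inside the calibration, which is exactly the issue raised above.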

The bottom line is to ask questions. Ask lots of questions, especially ones that might seem to be stupid. You’ll be surprised how many stupid questions actually have even stupider answers!
