The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: March 2018

We preach diversity and inclusion. We practice exclusion. We need better leadership.

Friday, March 30, 2018

Posted by Bill Rider in Uncategorized


White privilege is an absence of the consequences of racism. An absence of structural discrimination, an absence of your race being viewed as a problem first and foremost.

― Reni Eddo-Lodge

Every Monday I stare at a blank page and write. Sometimes the subject is obvious, with work demands driving my thoughts in a very focused direction. I’d love to be there right now. Work is pissing me off and providing no such inspiration, only anger. It’s especially unfortunate because I ought to have something to say on differential equations. Last week’s post was interesting, and the feedback has provided wonderful topics to pick up. One comment focused on other constraints and invariants for modeling that equations commonly violate. A wonderful manager I had from Los Alamos argued with me about nonlinearity, and about integral laws with respect to hyperbolic PDEs and causality. Both of these topics are worthy of deep exploration, but my head isn’t there. The issue of societal inclusivity and the maintenance of power structures looms in my thinking. The acceptance of individuality and personal differences within our social structures is another theme that resonates. So, I take a pause from the technical to ponder the ability of our modern World to honor and harness the life work and experience of others, especially those who are different. The gulf between our leaders’ rhetoric and their actions is vast, leading to poor results and the maintenance of traditional power.

Why am I concerned?

It does strange things to you to realize that the conservative establishment is forcing you to be a progressive liberal fighter for universal rights.

― Brandon Sanderson

This whole topic might seem odd coming from me. I am a middle-aged white man who is a member of the intellectual elite. I’m not a racial minority, a foreigner or part of the LGBTQ community, although I have a child who is. Of course, my (true) identity is far more complex than that. For example, my personality is an outlier at work. It makes many of my co-workers extremely uncomfortable. I’m an atheist, and this makes people of faith uncomfortable. Other aspects of my life remain hidden too; maybe they aren’t germane; maybe they simply would make people uncomfortable; maybe they would result in implicit shunning and rejection. All of this runs against the theme of inclusion and diversity, which society as a whole and our institutions pay lip service to. In the backdrop of this lip service are entrenched behaviors, practices and power that undermine inclusion at every turn. Many forms of diversity have no protection at all, and the power structures attack them without relief. The talk is all about diversity and inclusion; the actions are all about exclusivity. Without genuine leadership and commitment, diversity and inclusion fail. Worse yet, a great deal of our leadership has moved to explicit attacks on diversity as a social ill.

A great democracy has got to be progressive or it will soon cease to be great or a democracy.

― Theodore Roosevelt

Let’s pull this thread a bit with respect to the obvious diversity that we see highlighted societally: race, gender, or sexual identity. These groups are protected legally and are the focus of most of our diversity efforts. The discrimination against these groups of people has been pervasive and chronic. Moreover, there are broad swaths of our society that seek to push back progress and reinstitute outright discrimination. The efforts resisting diversity are active and vigorous. In place of explicit and obvious discrimination we have plenty of implicit institutional discrimination and oppression. The most obvious societal form that implicitly targets minorities is the drug war and the plague of mass incarceration. It is a way of instituting “Jim Crow” laws within a modern society. Any examination of the prisons in the United States shows massive racial inequity. Police enforce laws in a selective fashion, letting whites go while imprisoning blacks and Hispanics for the same offenses. Prison sentences are also harsher toward minorities. Other minority groups have been targeted and selected for similar treatment. In these cases, everyone is claimed to be under the same legal system, but the execution of the law lacks uniformity. At work we have policies and practices, often revolving around hiring, that have the same effect. They work to implicitly exclude people and undermine the actual impact of diversity.

Equality before the law is probably forever unattainable. It is a noble ideal, but it can never be realized, for what men value in this world is not rights but privileges.

― H.L. Mencken

Other practices hurt women in an implicit manner. These practices are relabeled as “traditional values” and are reflected in health care and sexual harassment. The implicit message is that women should be barefoot and pregnant, second-class citizens whose role in society is child rearing and homemaking. Young women are also there to provide sexual outlets to men, although those who do are branded sluts. They are not held worthy of enjoying the same sexual pleasure the men are free to pursue. The whole of the behavior is focused on propping up men as the center of society, work and power. None of women’s needs or contributions in the realms where men traditionally rule is welcome. Our health care coverage and laws completely reflect this polarity, where women’s needs are controversial and men’s needs are standard. Nothing says this more than the attitudes toward reproductive and sexual health. Men are cared for; women are denied. The United States is a nexus of backwards thought, with labor laws that penalize women in the name of the almighty dollar using religious freedom as an excuse.

If we cannot end now our differences, at least we can help make the world safe for diversity.

― John F. Kennedy

In the past decades we have seen huge strides for the LGBTQ community. The greatest victory is marriage equality. Discrimination is still something many people in society would love to practice, and at an implicit level they do. Everywhere that discrimination can be gotten away with, it happens. Many people remain in the closet and do not feel free letting people know who they really are. Of course, this is a bigger issue for transgender people, for whom hiding is almost impossible. At the same time, the level of discrimination is still largely approved by society. Most of the discrimination is the result of people’s innate discomfort with LGBTQ sexuality and with opening themselves to even considering being somewhere on a non-standard spectrum of sexuality. It becomes an issue and a thought they would just as soon submerge. This discomfort extends to other forms of non-standard expressions of sexuality that invariably leak out into people’s everyday lives. This sort of discomfort is greater in the United States, where sexuality is heavily suppressed and viewed as morally inferior. Our sexuality is an important part of our self-identity. For many people it is an aspect of self that must be hidden lest they suffer reprisals.

This is a global issue

President Donald Trump reacts before speaking at a rally at the Phoenix Convention Center, Tuesday, Aug. 22, 2017, in Phoenix. (AP Photo/Alex Brandon)
Isis fighters, pictured on a militant website verified by AP.

Hate and oppression are rising worldwide. This hate is a reaction to many changes in society. Ethnic changes, migrations, displacement and demographics are undermining traditional majorities. Global economics and broad information technology and telecommunication are also stressing traditional structures at every turn. Disparate people and communities can now form through the power of this medium. At the same time, disinformation, propaganda and wholesale manipulation are empowering the masses to both good and evil. Some of these online communities are wonderful, such as LGBTQ people who can form greater bonds and reach out to similar people. The same medium also allows hate to flourish and bond in the same way. The technology isn’t bad or good, and its impacts are similarly divided. It is immensely destabilizing. Traditional culture and society are also rising up to push back. The change makes people uncomfortable. Exclusion and casting out the different is a way for them to push back. They respond with a defense of traditional values and judgments grounded in exclusion of those who don’t fit in. As with most bigotry and exclusion, it is fear based. Fear is promoted across the globe by the enemies of inclusion and progress. The same fear is being harnessed by the traditional powers to solidify their hold on hegemony. This fear is particularly acute in the older part of the population, who also tend to be more politically active. These two things form the most active implicit threat to achieving diversity.

In many cases this whole issue is framed in terms of religion. Many traditional religious views are really excuses and justifications for exclusion and bigotry. The aspects of the religious traditions focused on love, compassion and inclusion are diminished or ignored. This sort of perversion of religious views is a common practice of authoritarian regimes, who harness the fear-based aspects of faith to enhance their power and sway over the masses. This is true for Christians, Jews, Muslims, Hindus, … virtually every major faith. It is manifesting itself in the movement toward authoritarian rule in the United States. The forces of hate are cloaked in faith to immunize themselves from critical views. Hatred, discrimination and bigotry are justified by their faith and freed from much critical feedback. They also complain about being repressed by society even when they are the majority and the ruling social order. This is a common response when the forces of inclusion complain about their institutionalized bigotry. In the United States, the minority that is truly oppressed are atheists. An atheist generally can’t get elected to office and in many places needs to hide this identity lest they be subject to persecution. It is among the personal identities that I need to hide. At the same time, Christians can be utterly open and brazen in the self-expression of their faith.

In virtually every case the forces against inclusion are fear-driven. Many people are not comfortable with people who are different because of how it reflects on them. For example, many of the greatest homophobes actually harbor homosexual feelings of their own that they are trying to squash. Openly accepted homosexuality is something they resist because it seems to implicitly encourage them to act on their own feelings. In response to these fears they engage in bigotry. Generally, gender and sexual identities bring these attitudes out because of the discomfort with the possibilities the identities offer people. These sorts of dynamics are present with religious minorities too. Rather than question their own faith, gender or sexuality, people drive the different into their respective closets and out of view.

Getting at Subtle Exclusion

‘Controversial’, as we all know, is often a euphemism for ‘interesting and intelligent’.

― Kevin Smith


I’m writing this essay on diversity as a white middle-aged male who is seemingly a member of the ruling class, so what gives? I represent a different diversity in many respects than the obvious forms we focus on. Discrimination against these differences happens without consequence. I’m outgoing and extroverted in a career and work environment dominated by introverts who are uncomfortable with emotion and human contact. I’m opinionated, outspoken and brave enough to be controversial in a sea of committed conformists. Both of these traits are punished with advice to blunt my natural tendencies. You are expected to toe the line and conform. The powers that be are not welcoming to any challenge or debate; the mantras of the day are “sit down, shut up, and do as you’re told” and “you’re lucky to be here”. The message is clear: you aren’t here as an equal, you need to be a compliant cog who plays your role. Individual contributions, opinions and talent are not important or welcome unless they fit neatly into the master’s plan. Increasingly in corporate America, you are either part of the ruling class or simply a disempowered serf whose personal value is completely dismissed. In this sense any diversity is discouraged and squashed by our overlords.

If one trait defines the vast number of people today, it is disempowerment. Your personal value and potential are meaningless to our management. Do your job; do what you are told to do; comply with their requirements. Whatever you do, don’t make waves or question the powers. I’ve done this, and the reaction is to punish people through marginalization and lack of access. Retribution is implicit and subtle. In the end you don’t know how much you were punished through opportunity and information denial. The key is that the powers that be are in control of you, and if you don’t play ball with them, you are an outsider. A way of not getting into trouble is being a compliant, boring and utterly vanilla worker bee. If you toe the party line you simply get the benefit of employment “success”. Eventually, if you play your cards right, you can join the power structure as one of them. In this way diversity of any sort is punished. They see diversity as a threat and work to drive it out.

You could jump so much higher when you had somewhere safe to fall.

― Liane Moriarty

Given that relatively benign forms of diversity are met with implicit resistance, one might reasonably ask how edgier forms of diversity will be welcomed. Granted, one’s personality can have a distinct work-related impact, and behavior is certainly a work-appropriate concern, but what are the bounds on how that is managed? My experience is that the workplace goes too far in moderating people’s innate tendencies. My particular workplace has promoted the aspirational view toward diversity of bringing your true and best self to work. In all honesty, it is an aspiration they fail at to a spectacular degree. When relatively common and universal aspects of our humanity are not accepted, how would more controversial aspects be accepted? A fair reaction is the conclusion that they wouldn’t be accepted at all. Added to the mix is a broader governance that is getting less accepting of differences, not more. As a result, people whose lives are outside the conservatively defined norms are prone to hide major aspects of their identity. The accepted majority and accepted identity is white, male, Christian, heterosexual, and monogamous. We still accept political differences, but to varying degrees we expect people to fall into only a few narrow bins.

It is time for parents to teach young people early on that in diversity there is beauty and there is strength.

― Maya Angelou

What happens when someone falls outside these identities? If one is female and/or non-white, there are legal protections, and discrimination and bias are purely implicit. What about being an atheist? What about being gay, or transgender? The legal protections begin to be weaker, and major societal elements seek to remove them. What if you’re non-monogamous, or committed to some other divergent sexual identity? What if you’re a communist or a fascist? Then you are likely to be actively persecuted and discriminated against. Where is the line where this should happen? What life choices are so completely outside the norm of societal acceptance that they should be subject to effective banishment? If you have made one of these life choices, your choice is often hidden and secret. You are in the closet. This is a very real issue, especially when large parts of society want you in the closet or want to put you back there. Moreover, this part of society is emboldened to try to put gays, women and people of color back in their traditional closets, kitchens and ghettos.

We are in a time where progress in diversity and acceptance has stopped, and elements of society are moving backwards. We are making America Great Again by enhancing the power of whites, hiding people of color, closeting gays, and making women subservient. All of this is done in service of those in power to maintain their power, wealth and control. Any sort of commitment to diversity is viewed as an assault on the power and wealth of the ruling class. As a result, we see a continued concentration of wealth and power in the hands of the few. General social mobility has been diminished so that people keep their place in society. For the rank and file, we see disempowerment and meager rewards for slavish conformity. Step out of line and draw attention to yourself and expect punishment and shunning. The slavishly conformant masses provide the peer pressure and the bulk of the enforcement, but all of it serves those in power.

Instead of creating a society and system that gets the best out of people and maximizes human potential, we have pure maintenance of power by the ruling class. We let people act out their basest fears to squash people and ideas that are uncomfortable. We are not leading people to be better; we are encouraging them to be worse. Rather than act out of love, we are emboldening hate. Rather than accept people and allow them to flourish using their distinct individuality and achieve satisfying lives, we choose conformity and order. The conformity and order that is imposed serves only those in power and limits any aspirational dreams of the masses. The masses are controlled by the fear that naturally arises toward those who are different. Humans are naturally fearful of other humans who are different. Rather than encourage people to accept these differences (i.e., accept and promote diversity), we encourage them to discriminate by promoting fear of the other. This fear is the tool of the rich and powerful to create systems that maintain their grip on society.

We need leadership that blazes a trail toward courage and something better. Without leadership and bravery, the progress we have achieved will be turned back and lost. There is so much more to do to get the best out of people, and it cannot happen when our leaders allow and even encourage the worst in people to serve their own selfish needs. We have a choice, and it is past time to choose excellence and progress through inclusion and diversity.

 Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can’t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.

― Rob Siltanen

 

The Primal Nature of Hyperbolic Conservation Laws

Thursday, March 15, 2018

Posted by Bill Rider in Uncategorized


Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.

― Isaac Asimov

In conducting science, the importance of models is central to practice. Modeling is paired with observation as Man’s abstraction for understanding the World around us. Models need to be both descriptive and tractable for examining nature, and these two aspects can be in direct conflict with each other. Observation under natural or controlled circumstances provides the core of scientific knowledge. Observation becomes science when we provide a systematic explanation for what we see. More often than not this explanation has a mathematical character as the mechanism we use. Among our mathematical devices, differential equations are among our most powerful tools. In the most basic form these equations are rate-of-change laws for some observable in the World. Most crudely, these rate equations can be empirical vehicles for taking observations into a form useful for prediction, design and optimization. A more basic form is partial differential equations (PDEs) that describe the underlying physics in a more expansive form. It is important to consider the consequences of the model forms we use. Several important categories of models are intrinsically unphysical in some respects, thus highlighting the George Box aphorism that “essentially, all models are wrong”!

Assumptions are the most damaging enemies of our mind’s equilibrium…An assumption is an imaginary truth.

― A.A. Alebraheem

Partial differential equations come in three basic flavors: hyperbolic, parabolic and elliptic. These flavors describe some of the basic character of the equations and have fundamental differences in how they are solved, how they are understood as mathematical objects and, more importantly, their physical content. The core of this essay is physical in nature, and the point is that only hyperbolic equations are primal in physics. This is to say that at the basic level everything we might describe as a physical law is hyperbolic in character. This is for a simple and very good reason: the principle of causality, meaning cause and effect, the flow of time and the presence of a cosmic speed limit. If we adhere to these maxims, the conclusion is utterly obvious. Other forms of PDEs produce instantaneous global effects that violate this principle. This in no way implies that parabolic or elliptic models are not incredibly useful; they are. Their utility and other properties often outweigh the issues with causality violations.


More on that point soon, but first a bit of digression on the other forms of PDEs. The classical elliptic equation is Laplace’s equation, \partial_{xx} u + \partial_{yy} u = 0. Elliptic equations are the simplest form and often describe physics where spatial terms are in equilibrium and there are no temporal, rate terms. Elliptic equations can include time terms, but this usually implies something so deeply unphysical as to be utterly outlawed. If time is elliptic, the past is determined by the future, and since we know that time flows in one direction, this is deeply and fundamentally unphysical. In other uses, the elliptic PDEs are found through ignoring temporal terms. This is a philosophical violation of the second law of thermodynamics, which can be used to establish the arrow of time. In this sense we find that elliptic equations are an asymptotic simplification of more fundamental laws. Another implication of ellipticity of PDEs is an infinite speed of information, or more correctly an absence of time. If elliptic equations are found within a set of equations, we can be absolutely sure that some physics has been chosen to be ignored. In many cases these ignored physics are not important and some benefit is achieved through the simplification. On the other hand, we shouldn’t lose sight of what has been done and its potential for mischief. At some point this mischief will become relevant and disqualifying.
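To make the absence of time concrete, here is a minimal sketch (assuming only NumPy, with a one-dimensional setup chosen purely for illustration) that solves a discrete Poisson problem with a localized source. A single linear solve couples every unknown to the source at once; there is no notion of a signal propagating outward, which is the discrete face of an infinite information speed.

import numpy as np

n = 21                        # interior points on (0, 1) with u(0) = u(1) = 0
h = 1.0 / (n + 1)
# standard second-difference Laplacian matrix
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

f = np.zeros(n)
f[2] = 1.0 / h                # approximate point source near the left boundary

u = np.linalg.solve(A, f)     # one "instantaneous" solve, no time stepping
print("fraction of interior points influenced by the source:",
      float(np.mean(np.abs(u) > 1e-14)))

Every interior value responds to the single localized source in the same solve; contrast this with the hyperbolic examples below, where influence is confined to a finite domain of dependence.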

Assumptions aren’t facts; they’re opportunities for research and testing.

― Laurie Buchanan

Next along the way we have parabolic equations, and we can repeat the above discussion. Most classically, the equation of heat transfer is parabolic (along with other diffusion processes). The classical form is the heat equation, \partial_{t} u - \partial_{xx} u = 0. We often learn that these diffusion processes are fundamental, leading to the second law of thermodynamics. This comes with a deep problem that we should acknowledge: the parabolic equations imply an infinite propagation speed. Physically, the process of diffusion is quite discrete, associated with the collisionality of the particles that make up materials, or with discrete effects in solids (where electrons are particles that move, exchange and interact). This physical effect is utterly bound by finite speeds of propagation.
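A worked equation makes the infinite speed explicit. For the heat equation with unit diffusivity, the solution launched from a point source is the Gaussian heat kernel,

\partial_t u - \partial_{xx} u = 0, \quad u(x,0) = \delta(x) \quad\Rightarrow\quad u(x,t) = \frac{1}{\sqrt{4\pi t}} \exp\left(-\frac{x^2}{4t}\right),

which is strictly positive at every x for any t > 0, no matter how small. The signal is exponentially weak far from the source, which is why the sin is usually forgivable, but it is never exactly zero.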

With elliptic equations the strength of the signal is unabated in time, but with parabolic equations the signal diminishes in time. As such, the sin of causality violation isn’t quite so profound, but it is a sin nonetheless. As before, we get parabolic equations by ignoring physics. Usually this is a valid thing to do based on the time and length scales of interest. We need to remember that at some point this ignorance will damage the ability to model. We are making simplifications that are not always justified. This point is lost quite often. People are allowed to think the elliptic or parabolic equations are fundamental when they are not.

We now get to the third category of PDEs, the hyperbolic kind. The simplest form is the wave equation, \partial_{tt} u - \partial_{xx} u = 0. This can be written as a system of first-order PDEs, \partial_{t} u + \partial_{x} v = 0 and \partial_{t} v + \partial_{x} u = 0. We can derive the simple wave equation by differentiating the first equation in time and the second in space, then substituting to eliminate v. The propriety of these steps depends on the variables being continuously differentiable, i.e., smooth. The second, first-order form is the entry point for the beautiful mathematics of hyperbolic conservation laws. As we will show, the elliptic and parabolic equations are simplifications of the hyperbolic equations made upon applying some assumptions.
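Written out, the elimination runs as follows: differentiating \partial_{t} u + \partial_{x} v = 0 in time gives \partial_{tt} u + \partial_{tx} v = 0, and differentiating \partial_{t} v + \partial_{x} u = 0 in space gives \partial_{xt} v + \partial_{xx} u = 0; subtracting the two, and using the equality of mixed partials \partial_{tx} v = \partial_{xt} v, yields

\partial_{tt} u - \partial_{xx} u = 0.

The equality of mixed partials is exactly the smoothness assumption noted above; at a shock or other discontinuity the elimination is no longer legitimate, and the first-order conservation form is the one that retains its meaning.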

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

― Abraham H. Maslow

A key example where the presumption of a basic law of physics is generally wrong is diffusion. One might consider Fourier’s law to be a fundamental law of physics as applied to heat conduction in a parabolic form, C \partial_t T = \nabla\cdot q; \; q = k \nabla T \rightarrow C \partial_t T = \nabla\cdot\left(k \nabla T\right). Instead, this is a simplification of a more broadly valid law where heat flows according to a hyperbolic equation. This requires a simple modification of the Fourier law to \tau \partial_t q + q = k \nabla T. For most applications heat flow can be modeled in the parabolic form, as the hyperbolic corrections matter only over very short time and space scales. Still, the more fundamental law is the hyperbolic form, and the classical parabolic form is derived by assuming that certain aspects of the dynamics can be ignored. We must always remember that the standard modeling of diffusion processes has an unphysical aspect baked into the equations.
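The contrast can be seen in a few lines of code. The sketch below (assuming only NumPy, with illustrative parameters not tied to any real material) eliminates q from the pair above to get the damped wave, or telegraph, equation C\tau \partial_{tt} T + C \partial_t T = k \partial_{xx} T, integrates it next to the classical Fourier equation, and checks for signal beyond the finite second-sound speed c = \sqrt{k/(C\tau)}.

import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
C, k, tau = 1.0, 0.1, 0.1                       # heat capacity, conductivity, relaxation time
c = np.sqrt(k / (C * tau))                      # finite second-sound speed (= 1 here)

dt = 0.4 * min(dx / c, 0.5 * C * dx**2 / k)     # respect wave and diffusion stability limits
t_end = 2.0 * tau                               # run for a couple of relaxation times
nsteps = int(round(t_end / dt))

width = 0.03
hot = np.exp(-((x - 0.5 * L) / width) ** 2)     # narrow initial hot spot
T_par = hot.copy()                              # Fourier (parabolic) temperature
T_old, T_now = hot.copy(), hot.copy()           # Cattaneo (hyperbolic) temperature, starting at rest

def lap(f):
    # second difference with zero-flux (insulated) ends
    g = np.empty_like(f)
    g[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
    g[0], g[-1] = f[1] - f[0], f[-2] - f[-1]
    return g / dx**2

a, b = C * tau / dt**2, C / (2.0 * dt)
for _ in range(nsteps):
    # centered-in-time scheme for  C*tau*T_tt + C*T_t = k*T_xx
    T_new = (k * lap(T_now) + a * (2.0 * T_now - T_old) + b * T_old) / (a + b)
    T_old, T_now = T_now, T_new
    # forward Euler for  C*T_t = k*T_xx
    T_par = T_par + dt * k * lap(T_par) / C

# points the finite-speed front (plus the tail of the initial hot spot) has essentially not reached
beyond = x > 0.5 * L + 5.0 * width + c * t_end
print("hyperbolic signal beyond the front :", float(np.abs(T_now[beyond]).max()))
print("parabolic signal at the same points:", float(T_par[beyond].max()))

The parabolic model has already warmed the far end of the bar; the hyperbolic model has not, because its front moves at the finite speed c.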


The hyperbolic character of heat conduction may be more important than it first appears. It is related to the property of materials called second sound. This property has been measured and is known to be significant under cryogenic conditions where quantum effects are significant. It is also very hard to measure. The leading and rather compelling fact is its relation to the sound speed; the second sound is slower than the sound speed. This would mean that its effective time scale is also longer than the acoustic time. If this aspect is generally true, then the time scale can’t be ignored under many conditions. The deeper question is how all of this plays with thermodynamics. So much of the grounding of thermodynamics is in an equilibrium setting, and this phenomenon adds a natural and potentially important relaxation time scale.

There is an issue with hyperbolic diffusion that we should acknowledge. This form of the equations can violate the second law of thermodynamics, which underpins the macroscopic dynamics of the universe and the arrow of time. By the same token, the imposition of the second law through a physical process in continuum physics is invariably tied to diffusion. As such we have formed a veritable technical Möbius strip. A question is whether a fundamentally different equation can actually violate a less physical law that is itself grounded in the parabolic form. This might call the violations of the second law by hyperbolic diffusion into question rather directly! In other words, what would change about the second law of thermodynamics if the diffusion process itself were hyperbolic? Perhaps this is a specific inroad to discussions of non-equilibrium thermodynamics. This may provide a necessary and distinct framing of a deeper discussion. Clearly, infinite speeds of propagation for information are unphysical; functionally, the second law could be reformulated to account for these temporal effects.

There is nothing so expensive, really, as a big, well-developed, full-bodied preconception.

― E.B. White

The incompressible Navier-Stokes equations are a second primal example where hyperbolic equations are replaced by a parabolic-elliptic system. A starting point would be the compressible equations, which are purely hyperbolic without viscosity. Of course, the viscosity could itself be replaced with hyperbolic equations to make the compressible flow totally hyperbolic. The incompressible equations are the following, \partial_t {\bf u} + {\bf u} \cdot\nabla{\bf u} + \nabla p = \nu \nabla^2 {\bf u}; \; \nabla\cdot {\bf u} = 0. Previously, we discussed the replacement of hyperbolic diffusion by parabolic terms. For incompressibility we remove sound waves analytically. The key to doing this is to remove any connection between pressure and density via the divergence-free constraint, \nabla\cdot {\bf u} = 0. This also turns density into a passively advected scalar. This is a useful model for low-speed flows, but the diffusion and the suppressed sound waves both produce infinite speeds of propagation. This violates the principle of causality, of cause and effect. Instead, everywhere is impacted by everything immediately.
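The elliptic character hides in the pressure. Taking the divergence of the momentum equation and using \nabla\cdot {\bf u} = 0 (the viscous term drops out for constant \nu) eliminates the time derivative entirely and leaves a Poisson equation,

\nabla^2 p = -\nabla\cdot\left({\bf u}\cdot\nabla{\bf u}\right),

so the pressure at every point is set instantaneously by the velocity field everywhere else. This is exactly the infinite signal speed left behind when the sound waves were removed, and it is why incompressible solvers typically contain a global elliptic solve in some form.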

As noted repeatedly, these infinite speeds are definitely and unabashedly unphysical and are signs that the equations are intrinsically limited in modeling scope. These issues are almost routinely ignored by most scientists and engineers. The reason is that the assumptions associated with the parabolic or elliptic equations are valid for the uses at hand. This will not always be true, and it should stay in the back of their minds. The message is clear: the equations will become invalid under some conditions; some length or time scale will unveil this invalidity. The question is what are these scales, and have we stumbled upon them yet? More generally, the use of parabolic or elliptic equations produces these unphysical effects as a matter of course. This implies that the model equations will lose utility at some point under some conditions. We simply need to guard against this potential and keep it firmly in the back of our minds. The issue in this regard is the lack of capability to solve the non-standard models and make a complete assessment of model validity. By the same token, the non-standard models are harder to solve and may have deleterious side effects if the full physics is retained.

A very good example of these side effects occurs with compressible flows when the Mach number is small. Solving low-Mach-number flows with compressible codes is terribly inefficient and prone to significant approximation errors. This has a great deal to do with the separation of scales. As a result, the solutions often do not adhere to expectations. The consequence is that there are many “fixes” to compressible flow solvers to remove the difficulties. The odd thing about this issue is that the compressible flow equations are definitely more physically realistic than the incompressible equations. This might imply that the conditioning of the equations is the greatest problem. In addition, modern shock-capturing methods have an implied discontinuity built into their construction. It would seem that a continuous approximation might alleviate the problems, but the conditioning issues with the separation of scales remain.
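The scale separation can be quantified with a little arithmetic. For an explicit compressible solver, the time step is limited by the fastest wave, |u| + a, while the flow of interest evolves on the advective scale set by |u| alone, so

\frac{\Delta t_{\text{advective}}}{\Delta t_{\text{acoustic}}} \approx \frac{|u| + a}{|u|} = 1 + \frac{1}{M}.

At a Mach number of 0.01, the acoustics force roughly a hundred time steps for every step the physics of interest actually needs, and because the pressure fluctuations that drive the flow scale like M^2 relative to the background pressure, they are also increasingly vulnerable to round-off and discretization error. Both effects motivate the “fixes” mentioned above.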

For modeling and numerical work, the selection of the less physical parabolic and elliptic equations provides better conditioning. The conditioning provides a better numerical and analytical basis for the solutions. That the equations are less physical is not commonly appreciated. A broader and common appreciation might provide impetus for identifying when these differences are significant. Holding models that are unphysical as sacrosanct is always dangerous. It is important to recognize the limitations of models and allow ourselves to question them regularly. Even models that are fully hyperbolic are wrong themselves; this is the very nature of models. By using hyperbolic models, we remove an obviously unphysical aspect of a given model. Models are abstractions of reality, not the operating system of the universe. We must never lose sight of this.

Everything must be made as simple as possible. But not simpler.

― Albert Einstein

Courant, Richard, and David Hilbert. Methods of Mathematical Physics, Vol. 1. CUP Archive, 1965.

Lax, Peter D. Hyperbolic partial differential equations. Vol. 14. American Mathematical Soc., 2006.

Körner, C., and H. W. Bergmann. “The physical defects of the hyperbolic heat conduction equation.” Applied Physics A 67, no. 4 (1998): 397-401.

Chester, Marvin. “Second sound in solids.” Physical Review 131, no. 5 (1963): 2013.

Christov, C. I., and P. M. Jordan. “Heat conduction paradox involving second-sound propagation in moving media.” Physical review letters 94, no. 15 (2005): 154301.

Fefferman, Charles L. “Existence and smoothness of the Navier-Stokes equation.” The millennium prize problems 57 (2006): 67.

Doering, Charles R. “The 3D Navier-Stokes problem.” Annual Review of Fluid Mechanics 41 (2009): 109-128.

 

 

Our Models of Reality are Fundamentally Flawed

Friday, March 9, 2018

Posted by Bill Rider in Uncategorized


… Nature almost surely operates by combining chance with necessity, randomness with determinism…

― Eric Chaisson

On many occasions I’ve noted the tendency for science to see the World through a highly deterministic lens. We do this despite living in a World that includes a large degree of chance and random events. In science we might consider highly deterministic experiments to be well designed and useful. In a sense this is correct, as such experiments confirm our existing theories, which are grounded heavily in determinism. When we take this attitude into the real World of observation of nature, or of engineered systems, the deterministic attitude runs aground. The natural World and engineered systems rarely behave in a completely deterministic manner. We see varying degrees of non-determinism and chance in how things work. Some of this is the action of humans in a system; some of it is complex initial conditions, or structure that deterministic models ignore. This variability, chance, and structure is typically not captured by our modeling, and as such modeling is limited in its utility for understanding reality.

The assumption of an absolute determinism is the essential foundation of every scientific enquiry.

― Max Planck

Determinism: the universe has a starting point (the Big Bang?); correct formulations of the laws of nature allow the histories of all particles to be traced and predicted into the future; everything is predictable and the universe functions like clockwork. What of free will? (Sir Isaac Newton)

Mathematical models of reality are heavily grounded in a deterministic assumption. This grounding is largely the legacy of Newton, whose assumptions were heavily influenced by his religious faith and an almighty God. This God controlled the universe and determined the outcomes. These beliefs ran headlong into reality in the 20th Century with quantum physics and the need for probabilities in models. The power of non-determinism for the most fundamental laws of physics was undeniable, but at larger scales determinism rules supreme. We explain that the law of large numbers pushes the laws of physics over into determinism. On the other hand, we have pervasive laws like the second law of thermodynamics that encapsulate the disorder in the World within the deterministic view. Is this sufficient to capture all of non-determinism’s role? I think not. In this sense the work of Newton and 19th Century thought still controls much of science today. Almost every modeling exercise follows determinism as an unspoken underlying assumption. This happens without regard to what we see each day in the real World. The second law of thermodynamics and the power of entropy are not adequate to capture the full span of disorder’s impact on our World. This assumption does untold damage, and it is time to overthrow some aspects of determinism; it has outlived its utility.

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.

― Pierre-Simon Laplace


Complex systems and experiments have a great deal of non-determinism in their fundamental behavior and outcomes. Commonly this non-determinism is completely ignored and modeled with a fully deterministic approach (e.g., the second law). A better assumption is that a combination of deterministic and stochastic effects is present. The stochastic effects are largely ignored today and swept up into the deterministic model in a heavy-handed, one-size-fits-all manner. This approach isn’t usually even considered a problem because the behavior is assumed to be totally deterministic. The consequence of this misattribution is an inability to consider the proper source of the behavior. We are putting physical effects that are non-deterministic into a model that is deterministic. This should seriously limit the predictive power of our modeling.

To move forward we should embrace some degree of randomness in the fundamental models we solve. This random response naturally arises from various sources. In our deterministic models, the random response is heavily incorporated in boundary and initial conditions. The initial conditions include things like texture and structure that the standard models homogenize over. Boundary conditions are the means for the model to communicate with the broader world, whose vast complexities are grossly simplified. In reality both the initial and boundary conditions are far more complex than our models currently admit.

The sort of deterministic models we use today attempt to include the entire system without explicitly modeling the non-deterministic aspects. These effects are either incorporated into the deterministic model or end up increasing the uncertainty of the modeling effort. Our efforts could advance significantly by directly modeling the stochastic aspects. This would produce an ability to separate the modeling effects that are completely deterministic from those that are random, plus the interaction between the two. We might expect that producing models with an appropriate separation would make the deterministic part lower in uncertainty. Some amount of uncertainty in any of these systems is irreducible, and proper modeling of the non-deterministic part would capture these effects properly. Instead of being irreducible, this aspect would simply be part of the model and part of the result. It would move from being uncertain to being part of the answer. We should not expect modeling non-deterministic dynamics with deterministic models to be the best we can do.
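A minimal sketch of what this separation can mean in practice, assuming only NumPy and a toy relaxation model of my own choosing (not taken from any particular application): the same deterministic drift is integrated once as an ODE and many times as a stochastic differential equation via Euler-Maruyama. The ensemble carries information, namely the spread, that the single deterministic run simply cannot represent.

import numpy as np

rng = np.random.default_rng(0)

# Toy model: relaxation toward equilibrium with stochastic forcing,
#   du = -lam * (u - u_eq) dt + sigma dW   (Euler-Maruyama time stepping)
lam, u_eq, sigma = 2.0, 1.0, 0.3
dt, nsteps, nsamples = 1.0e-3, 5000, 2000

u_det = 0.0                          # single deterministic trajectory
u_sto = np.zeros(nsamples)           # ensemble of stochastic trajectories
for _ in range(nsteps):
    u_det = u_det - dt * lam * (u_det - u_eq)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(nsamples)
    u_sto = u_sto - dt * lam * (u_sto - u_eq) + noise

print("deterministic value:", u_det)
print("ensemble mean      :", u_sto.mean())
print("ensemble spread    :", u_sto.std())               # irreducible, but now part of the answer
print("theoretical spread :", sigma / np.sqrt(2 * lam))   # known result for this toy problem

In this framing the deterministic run is the mean-field model, and the ensemble statistics are the piece that current practice either ignores or lumps into calibration and uncertainty.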

Applying logic to potentially illogical behavior is to construct a house on shifting foundations. The structure will inevitably collapse.

― Stewart Stafford

Another aspect of the complexity that current modeling ignores, or lumps whole cloth into the model’s closure, is the dynamics associated with the stochastic phenomena. In a real system the stochastic aspects of the model evolve over time, including nonlinear interactions between deterministic and stochastic aspects. When the dynamics are completely confined to deterministic models, these nonlinearities are ignored or lumped into the deterministic mean field. When models lack the proper connection to the correct dynamics, the modeling capability is diminished. The result is greater uncertainty and less explanation of what is happening in nature. From an engineering point of view, the problem is that the ability to explicitly control for the non-deterministic aspect of systems is diminished because its influence on results isn’t directly exposed. If the actual dynamics were exposed, we could work proactively to design better. This is the power of understanding in science: if we understand, we can attempt to mitigate and control the phenomena. Without proper modeling we are effectively flying blind.

Die Quantenmechanik ist sehr achtung-gebietend. Aber eine innere Stimme sagt mir, daß das doch nicht der wahre Jakob ist. Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß der nicht würfelt.

Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the “old one.” I, at any rate, am convinced that He does not throw dice.

–Albert Einstein, in a letter to Max Born

Today’s modeling paradigm is relentlessly deterministic. We attempt to model experiments as a single well-determined event even when significant aspects of the experiment are non-deterministic. Effectively, the non-deterministic aspects are ignored or misattributed to determinism. A second experiment then looks inconsistent because it is treated as a different instance, instead of as the same deterministic case with a different stochastic forcing. If we model the stochastic element of the phenomena directly, we can get to understanding its impact. With our current modeling we simply drive a fundamental misunderstanding of what is happening. We are left with models that have fundamental limitations. None of these issues is going to be handled by brute force. Neither computer power, nor computational accuracy, nor algorithmic efficiency will impact these problems. The answer is centered on modeling and increasing the span of physical phenomena it addresses.


The impediments to changing our modeling are massive. We have a strong tendency to lump all of the non-deterministic effects into constitutive laws and closures, which lend themselves to relative ease of modification. Changing or expanding the governing equations in a code can be utterly daunting and is usually not supported by current funding. The entire enterprise of developing new equations is difficult and risky in nature. Our system today is utterly opposed to anything risky and actively undermines attempting anything difficult at every turn. Our computational science is extremely invested in existing models, and most paths to improvement are routed through them. Increasingly we are invested in the most brutish and painfully naïve path to improvement by investing almost entirely in faster computers. The most painful aspect of this path is its lack of timeliness; the ease of creating faster computers has ended with the death of Moore’s law. Getting a faster computer is now extremely expensive and inefficient. Other paths to improvement are not favored, and we have almost forgotten how to do science like that. The capper to this sad tale is the utter inability of these computers to help fix faulty models. We have lost the ability to conduct intellectually rigorous work.

The sort of science needed is enormously risky. I am proposing that we have reached the end of utility for models used for hundreds of years. This is a rather bold assertion on the face of it. On the other hand, the models we are using have a legacy going back to when only analytical solutions, or only very crude numerical tools, were available. Now our modeling is dominated by numerical solutions and by computing from the desktop (or handheld) to supercomputers of unyielding size and complexity. Why should we expect models derived in the 18th and 19th centuries to still be used today? Shouldn’t our modeling advance as much as our solution methods have? Shouldn’t all the aspects of modeling and simulation be advancing? The answer is a dismal no.

The reasons for this dismal state of affairs are somewhat understandable. The models defined over the past few centuries defied general solution. Computing offered a path to solution that analytical methods failed to provide. As a result, we saw computing provide useful solutions to models that had been of limited utility for a very long time. Models are now routinely solved numerically whose solutions had been unavailable for ages. The numerical work is often done quite poorly with marginal quality control. Assessment of the quality of numerical work is usually slipshod and casual. The “eyeball” and “view graph” norms rule in place of quantified uncertainty and error. Most good results using these models are heavily calibrated and lack any true predictive power. In the absence of experiments, we are generally lost and rarely hit the mark. Instead of seeing any of this as shortcomings in the models, we seek to continue using the same models and focus primarily on computing power as a remedy. This is both foolhardy and intellectually empty, if not outright dishonest.

As such, the evidence that our models are inadequate is overwhelming. Our response to this evidence has been to virtually ignore the conclusion. We continue to invest in the same areas that have failed to improve results over a long period of time. We continue to sell massive computing power as the fix-it-all remedy for problems. We fail to recognize that neither computing power nor solution accuracy will cure any problems if the fundamental model is flawed. Our fundamental models are flawed, and the routes taken for improving modeling and simulation will not help. If the basic model has flaws, a faster computer, a better method, a more accurate discretization, or better scaling will not help. The only cure is to fix or change the model. One of the biggest places where modeling fails is the separation of the deterministic and non-deterministic aspects of our models.


A simple and familiar setting to see how this might help is weather. If we look at any of our models, at any of our scales, it is obvious that enormous variability and detail are being excluded from our modeling. One of the biggest needs of weather modeling is extreme weather events, which dominate the financial and political consequences of weather. Analogous issues exist in a myriad of other fields where modeling and simulation impact the science. A reasonable supposition is that interactions among the averaged-over and ignored fine-scale structures help produce extreme events when they couple with the large-scale weather. It is well known that large-scale weather phenomena set the stage for, or increase the likelihood of, extreme events. The actual phenomenology of extreme events depends on how the large-scale weather interacts with local detail such as the surface topography.

Analogous phenomena happen in many other fields such as material failure and turbulence. These models are strained under the demands of the modern World, and progress is desperately needed. The needed solutions are not being supported; instead the focus is on risk-averse and rather pedestrian approaches while eschewing riskier work like model creation. The focus on computing power reflects this intellectual cowardice quite acutely. Our current models are limited by their fundamental structure rather than by solution methods or computing power. Our science programs need to address these challenges in a credible manner by coupling a focus on theory with innovations in experimental science. The challenge is not refining old ideas but allowing ourselves to pursue new ones with sufficient freedom and aggression. Our greatest challenge is not the science, but rather our inability to conceive of solutions in today’s World. This work could be enormously valuable to society as a whole if we could envision it and take the risks necessary to reach success.

The difficulty lies not so much in developing new ideas as in escaping from old ones.

― John Maynard Keynes

Integrating Modeling and Simulation for Predictive Science

Friday, March 2, 2018

Posted by Bill Rider in Uncategorized


Science is not about making predictions or performing experiments. Science is about explaining.

― Bill Gaede

We would be far better off removing the word “predictive” as a focus for science. If we replaced the emphasis on prediction with a focus on explanation and understanding, our science would improve overnight. The sense that our science must predict carries connotations that are unrelentingly counter-productive to the conduct of science. The side effects of demanding predictivity undermine the scientific method at every turn. The goal of understanding nature and explaining what happens in the natural world is consistent with the conduct of high-quality science. In many respects, large swaths of the natural world are unpredictable in highly predictable ways. Our weather is a canonical example of this. Moreover, we find the weather to be unpredictable in a bounded manner as time scales become longer. Science that has focused on understanding and explanation has revealed these truths. Attempting to force prediction under some circumstances is both foolhardy and technically impossible. As such, the pursuit of prediction needs to be entered into carefully and thoughtfully under well-chosen circumstances. We also need the freedom to find out that we are wrong and incapable of prediction. Ultimately, we need to find the limits on prediction and work to improve or accept these limits.

“Predictive Science” is mostly just a buzzword. We put it in our proposals to improve the chances of hitting funding. A slightly less cynical take would treat predictivity as a completely aspirational objective for science. In the context of our current world, we strive for predictive science as a means of confirming our mastery over a scientific subject. In this context the word predictive implies that we understand the science well enough to foresee outcomes. We should also practice some deep humility in what this means. Predictivity is always a limited statement, and these limitations should always be firmly in mind. First, predictions are limited to some subset of what can be measured and fail for other quantities. The question is whether the predictions are correct for what matters. Secondly, the understanding is always waiting to be disproved by a reality that is more complex than we realize. Good science is acutely aware of these limitations and actively probes the boundary of our understanding.

In the modern world we constantly have new tools to help expand our understanding of science. Among the most important of these new tools is modeling and simulation. Modeling and simulation is simply an extension of the classical scientific approach. Computers allow us to solve our models in science more generally than classical means. This has increased the importance and role of models in science. We can envision more complex models having more general solutions through computation. Part of this power comes with substantial responsibility; computational simulations are highly technical and difficult. They come with a host of potential flaws, errors and uncertainties that cloud results and need focused assessment. Getting the science of computation correct and assessed well enough to play a significant role in the scientific enterprise requires a broad, multidisciplinary approach with substantial rigor. Playing a broad integrating role in predictive science is verification and validation (V&V). In a nutshell, V&V is the scientific method as applied to modeling and simulation. Its outcomes are essential for making any claims regarding how predictive your science is.

Experiment is the sole source of truth. It alone can teach us something new; it alone can give us certainty.

― Henri Poincaré

We can take a moment to articulate the scientific method and then restate it in a modern context using computational simulation. The scientific method involves making hypotheses about the universe and testing those hypotheses against observations of the natural world. One of the key ways to make observations is experiments, where the measurements of reality are controlled and focused to elucidate nature more clearly. These hypotheses or theories usually produce models of reality, which take the form of mathematical statements. These models can be used to make predictions about what an observation will be, which can then confirm the hypothesis. If the observations are in conflict with the model’s predictions, the hypothesis and model need to be discarded or modified. Over time observations become more accurate, often showing the flaws in models. This usually means a model needs to be refined rather than thrown out. This process is the source of progress in science. In a sense it is a competition between what we observe and how well we observe it, and the quality of our models of reality. Predictions are the crucible where this tension is realized.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

One of the best ways to understand how to do predictive science in the context of modeling and simulation is a simple realization: V&V is basically a methodology that encodes the scientific method into modeling and simulation. All of the content of V&V is assuring that science is being done with a simulation and that we aren’t fooling ourselves. Verification is all about making sure the implementation of the model and its solution are credible and correct. The second half of verification is associated with estimating the errors in the numerical solution of the model. We need to assess the numerical uncertainty and the degree to which it clouds the model’s solution.
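As a sketch of that second half, the snippet below (a toy problem of my own choosing, assuming only NumPy; not any specific V&V tool) solves a problem with a known answer on a sequence of grids, computes the observed order of accuracy from successive errors, and forms a Richardson-style estimate of the numerical uncertainty on the finest grid.

import numpy as np

def trapezoid_pi(n):
    # composite trapezoidal rule for the integral of sin(x) on [0, pi]; exact value is 2
    x = np.linspace(0.0, np.pi, n + 1)
    f = np.sin(x)
    h = np.pi / n
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

exact = 2.0
ns = [16, 32, 64, 128]
vals = np.array([trapezoid_pi(n) for n in ns])
errs = np.abs(vals - exact)

# observed order of accuracy from successive grid halvings (should approach 2)
orders = np.log(errs[:-1] / errs[1:]) / np.log(2.0)
print("observed orders:", orders)

# Richardson-style error estimate on the finest grid, using only computed values
p = orders[-1]
est_error = abs(vals[-1] - vals[-2]) / (2.0 ** p - 1.0)
print("estimated finest-grid error:", est_error, " actual:", errs[-1])

The observed order matching the formal order is verification evidence; the estimated error is the kind of numerical uncertainty that must then be carried into any validation comparison.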

Validation is then the structured comparison of the simulated model’s solution with observations. Validation is not something that is completed; rather, it is an assessment of work. At the end of the validation process, evidence has been accumulated as to the state of the model. Is the model consistent with the observations? The uncertainties in the modeling and simulation process, along with the uncertainties in the observations, can lead to the conclusion that the model is correct enough to be used. In many cases the model is found to be inadequate for the purpose and needs to be modified or changed completely. This process is simply the hypothesis testing so central to the conduct of science.
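In the same spirit, a validation comparison is only meaningful against the combined uncertainties. Here is a minimal sketch with purely illustrative numbers, using a simple root-sum-square combination chosen as an assumption rather than a standard anyone must follow.

# simulation result with its estimated numerical uncertainty
sim_value, u_num = 2.013, 0.004
# measurement with its reported experimental uncertainty
obs_value, u_exp = 1.982, 0.020

comparison_error = abs(sim_value - obs_value)
u_val = (u_num**2 + u_exp**2) ** 0.5        # combined uncertainty of the comparison

if comparison_error <= u_val:
    print("the difference is within the noise; this test cannot discriminate")
else:
    print("discrepancy", round(comparison_error, 3), "exceeds the combined uncertainty",
          round(u_val, 3), "-- evidence of model (or experiment) inadequacy")

The second branch is the productive one: a disagreement larger than everything we cannot control is exactly the evidence that demands model improvement.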

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

― George Box

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an “ideal” gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

― George Box

The George Box maxim about all models being wrong, but some being useful, is key to the conduct of V&V. It is also central to modeling and simulation's most important perspective: the constant necessity for improvement. Every model is a mathematical abstraction that has limited capacity for explaining nature. At the same time the model may have a utility that is sufficient for explaining everything we can measure. This does not mean that the model is right, or perfect; it means the model is adequate. The creative tension in science is the narrative arc of refining hypotheses and models of reality, or improving measurements and experiments to test the models more acutely. V&V is a process for achieving this end in computational simulations. Our goal should always be to find inadequacy in models and define the demand for improvement. If we do not have the measurements to demonstrate a model's incorrectness, the experiments and measurements need to improve. All of this serves progress in science in a clear manner.
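
The PV = RT example in the Box quote above can be made quantitative with a few lines of Python. The van der Waals constants for CO2 used here are approximate textbook values, included only to illustrate how a "wrong" parsimonious model is useful in one regime and measurably off in another.

    # Ideal gas law versus the van der Waals model for CO2.
    # The constants are standard approximate values, for illustration only.
    R = 8.314               # J/(mol K)
    a, b = 0.364, 4.27e-5   # van der Waals constants for CO2 (approximate)

    def p_ideal(T, Vm):
        return R * T / Vm

    def p_vdw(T, Vm):
        return R * T / (Vm - b) - a / Vm**2

    T = 300.0   # K
    for Vm in (2.5e-2, 1.0e-3):   # molar volume: dilute gas vs. dense gas (m^3/mol)
        pi, pv = p_ideal(T, Vm), p_vdw(T, Vm)
        print(f"Vm = {Vm:.1e}: ideal = {pi/1e5:.2f} bar, vdW = {pv/1e5:.2f} bar, "
              f"difference = {abs(pi - pv)/pv:.1%}")

At a molar volume typical of room conditions the two models agree to a fraction of a percent; compress the gas by a factor of 25 and the ideal gas law is off by roughly ten percent. The model did not become more wrong; our use of it simply left its region of usefulness.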

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

― Sir Arthur Stanley Eddington

Let's take a well-regarded and highly accepted model, the incompressible Navier-Stokes equations. This model is thought to contain most of the proper physics of fluid mechanics, most notably turbulence. Perhaps this is true, although our lack of progress on turbulence might indicate that something is amiss. I will state without doubt that the incompressible Navier-Stokes equations are wrong in some clear and unambiguous ways. The deepest problem with the model is incompressibility itself. Incompressible fluids do not exist, and the form of the mass equation, a divergence-free velocity field, implies several deeply unphysical things. All materials in the universe are compressible and support sound waves, and this constraint opposes that truth. Incompressible flow is largely divorced from thermodynamics, and materials are thermodynamic. The system of equations violates causality rather severely: pressure signals travel at infinite speed. All of this is true, but at the same time this system of equations is undeniably useful. There are large categories of fluid physics that it explains quite remarkably. Nonetheless the equations are also obviously unphysical. Whether or not this unphysical character is consequential is something people should keep in mind.
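
For reference, a standard way to write the model under discussion, for constant density ρ and kinematic viscosity ν, is:

    \nabla \cdot \mathbf{u} = 0, \qquad
    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u}

The first relation is the divergence-free mass equation referred to above. The pressure is not evolved by its own equation; it is determined instantaneously from the velocity field through an elliptic (Poisson) equation, which is exactly where the infinite signal speed, and hence the causality problem, enters.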

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.

― Arthur Stanley Eddington

In conducting predictive science one of the most important things you can do is make a prediction. While you might start with something where you expect the prediction to be correct (or correct enough), the real learning comes from making predictions that turn out to be wrong. It is wrong predictions that will teach you something. Sometimes the thing you learn is that your measurement or experiment needs to be refined. At other times the wrong prediction can be traced back to the model itself. This is your demand, and your opportunity, to improve the model. Is the difference due to something fundamental in the model's assumptions? Or is it simply something that can be fixed by adjusting the closure of the model? Too often we view failed predictions as problems when instead they are opportunities to improve the state of affairs. I might posit that a successful prediction is also a call to improvement: either improve the measurement and experiment, or the model. Experiments should set out to show flaws in the models. If they succeed, the model needs to be improved. Successful predictions are simply not vehicles for improving scientific knowledge; they tell us we need to do better.
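
To make the closure-versus-structure question concrete, here is a hedged sketch with made-up data and a deliberately trivial one-parameter model: tune the closure coefficient by least squares, then ask whether the remaining misfit sits inside the measurement uncertainty (a closure fix) or outside it (a structural flaw).

    # Hedged sketch of the closure-versus-structure question; data are hypothetical.
    import numpy as np

    x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
    y_obs = np.array([0.9, 1.7, 2.8, 3.8, 5.1])      # hypothetical measurements
    sigma = 0.2                                       # measurement uncertainty

    def model(x, c):
        # Hypothetical model with a single adjustable closure coefficient c.
        return c * x

    # Least-squares estimate of the closure coefficient.
    c_best = np.sum(x * y_obs) / np.sum(x * x)
    residual = y_obs - model(x, c_best)
    rms = np.sqrt(np.mean(residual**2))

    if rms <= sigma:
        print(f"closure adjustment suffices (c = {c_best:.2f}, rms misfit {rms:.2f})")
    else:
        print(f"misfit {rms:.2f} exceeds uncertainty: likely a structural model flaw")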

When the number of factors coming into play in a phenomenological complex is too large scientific method in most cases fails. One need only think of the weather, in which case the prediction even for a few days ahead is impossible.

― Albert Einstein

In this context we can view predictions as things that, at some level, we want to fail at. If the prediction is too easy, the experiment is not sufficiently challenging. Success and failure exist on a continuum. For simple enough predictions our models will always work, and for complex enough predictions the models will always fail. The trick is finding the spot where the predictions are on the edge of credibility, and progress is needed and ripe. Too often the mindset is taken that predictions need to be successful. An experiment that is easy to predict is not a success, it is a waste. I would rather see predictions focused at the edge of success and failure. If we are interested in making progress, predictions need to fail so that models can improve. By the same token, a successful prediction indicates that the experiment and measurement need to be improved to more properly challenge the models. The real art of predictive science is working at the edge of our predictive modeling capability.

A healthy focus on predictive science with a taste for failure produces a strong driver for lubricating the scientific method and successfully integrating modeling and simulation as a valuable tool. Prediction requires two sides of science to work in concert: the experimental observation of the natural world, and the modeling of the natural world via mathematical abstraction. The better the observations and experiments, the greater the challenge to models. Conversely, the better the model, the greater the challenge to observations. We need to tee up the tension between how we sense and perceive the natural world, and how we understand that world through modeling. It is important to examine where the ascendancy in science exists. Are the observations too good for the models? Or can no observation challenge the models? The answer tells us clearly where we should prioritize.

We need to understand where progress is needed to advance science, and we need to take advantage of technology in moving ahead in either vein. If observations are already quite refined, but new technology exists to improve them, it behooves us to take advantage of it. By the same token, modeling can be improved via new technology such as solution methods, algorithmic improvements and faster computers. What is lacking from the current dialog is a clear focus on where the progress imperative exists. Part of integrating predictive science well is determining where progress is most needed. We can bias our efforts toward where progress is most needed while keeping other opportunities for improvement in mind.

The important word I haven't mentioned yet is "uncertainty". We cannot have predictive science without dealing with uncertainty and its sources. In general, we systematically, perhaps even pathologically, underestimate how uncertain our knowledge is. We like to believe our experiments and models are more certain than they actually are. This is really easy to do in practice. For many categories of experiments we ignore sources of uncertainty and simply get away with an estimate of zero for that uncertainty. If we do a single experiment, we never have to explicitly confront the fact that the experiment isn't completely reproducible. On the modeling side we treat the particular experiment as something to be modeled precisely even if the phenomena of interest are highly variable. This is common and a source of willful cognitive dissonance. Rather than confront this rather fundamental uncertainty, we willfully ignore it. We do not run replicate experiments and measure the variation in results. We do not subject the modeling to reasonable variations in the experimental conditions and check the variation in the results. We pretend that the experiment is completely well-posed, and that the model is too. In doing this we fail at the scientific method rather profoundly.
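
Neither check is expensive. As a minimal sketch, with entirely hypothetical numbers and a placeholder model, here is what measuring replicate scatter and propagating a plausible variation in an experimental condition might look like:

    # Two uncertainty checks the paragraph says we routinely skip:
    # (1) replicate scatter, and (2) output spread under input variation.
    import numpy as np

    rng = np.random.default_rng(0)

    # (1) Replicate experiments: the scatter is the aleatory uncertainty we
    # implicitly set to zero when only one experiment is run.
    replicates = np.array([101.3, 98.7, 102.1, 99.5, 100.9])   # hypothetical
    print(f"replicate mean {replicates.mean():.1f}, std {replicates.std(ddof=1):.1f}")

    # (2) Propagate plausible variation in an input (e.g., a boundary
    # condition known only to within 2%) through a hypothetical model.
    def model(bc):
        return 3.0 * bc**1.5        # placeholder response

    nominal_bc = 10.0
    samples = rng.normal(nominal_bc, 0.02 * nominal_bc, size=2000)
    outputs = model(samples)
    print(f"model output {outputs.mean():.1f} +/- {outputs.std(ddof=1):.1f}")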

Another key source of uncertainty is numerical error. It is still common to present results without any sense of the numerical error. Typically, the mesh used for the calculation is asserted to be fine enough without any evidence; more commonly the results are simply given without any comment at all. At the same time the nation is investing huge amounts of money in faster computers, implicitly assuming, a priori, that faster computers yield better solutions. This entire dialog often proceeds without any support from evidence. It is 100% assumption. When one examines these issues directly, there is often a large amount of numerical error being ignored. Numerical error is small in simple problems without complications. For real problems with real geometry, real boundary conditions and real constitutive models, the numerical errors are invariably significant. One should expect some evidence to be presented regarding their magnitude, and one should be suspicious if it is not there. Too often we simply give simulations a pass on this detail and fail at due diligence.
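
Evidence of this kind is not hard to produce. Below is a hedged sketch, with made-up solution values, of the standard Richardson-extrapolation estimate from three systematically refined meshes; the observed order and the extrapolated value give a defensible estimate of the error on the fine mesh.

    # Estimate numerical error from three systematically refined meshes
    # using Richardson extrapolation (the solution values are hypothetical).
    import math

    f_coarse, f_medium, f_fine = 0.9712, 0.9825, 0.9863   # solutions, coarse to fine
    r = 2.0                                                 # mesh refinement ratio

    # Observed order of accuracy from the three solutions.
    p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)

    # Richardson-extrapolated estimate of the mesh-converged value and the
    # resulting error estimate on the fine mesh.
    f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    error_fine = abs(f_exact_est - f_fine)

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated value = {f_exact_est:.4f}, fine-mesh error estimate = {error_fine:.4f}")

None of this requires exascale hardware; it requires running the same problem three times and reporting what the numbers say.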

Truth has nothing to do with the conclusion, and everything to do with the methodology.

― Stefan Molyneux

In this sense the entirety of V&V is a set of processes for collecting evidence about credibility and uncertainty. In one respect verification is mostly an exercise in collecting evidence of credibility and due diligence for quality in computational tools. Are the models, codes and methods implemented in a credible and high-quality manner? Has the code development been conducted carefully, with the developers checking their work and doing a reasonable job of producing code without obvious bugs? Validation could be characterized as collecting uncertainties. We find upon examination that many uncertainties are ignored in both computational and experimental work. Without these uncertainties and the evidence surrounding them, the entire practice of validation is untethered from reality. We are left to investigate through assumption and supposition. This sort of validation practice has a tendency to simply regress to commonly accepted notions. In such an environment models are usually accepted as valid, and the evidence is skewed toward that preordained conclusion. Without care and evidence, the engine of progress for science is disconnected.

In this light we can see that V&V is simply a structured way of collecting the evidence necessary for the scientific method. Collecting this evidence is difficult and requires assumptions to be challenged. Challenging assumptions is courting failure. Making progress requires failure and the invalidation of models. It requires doing experiments that we fail to predict with existing models. We need to assure that the model is the problem, and that the failure isn't due to numerical error. Determining these predictive failures requires a good understanding of uncertainty in both experiments and computational modeling. The more genuinely high quality the experimental work is, the more severely the validation tests the model. We can then collect evidence about the correctness of the model and set clear standards for judging improvements in the models. The same goes for the uncertainty in computations, which needs evidence so that progress can be measured.

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Now we get to the rub in the context of modeling and simulation in modern predictive science. To make progress we need to fail to be predictive. In other words, we need to fail in order to succeed. Success should be denoted by making progress in becoming more predictive. We should take the perspective that predictivity is a continuum, not a state. One of the fundamental precepts of stockpile stewardship is predictive modeling and simulation. We want confident and credible evidence that we are capable of faithfully predicting certain essential aspects of reality. The only way to succeed at this mission is to continually challenge and push ourselves at the limit of our capability. This means that failure should be an almost constant state of being. The problem is projecting the sense of success that society demands while continually failing. We do not do this well. Instead we feel compelled to project a sense that we continually succeed at everything we promise.

In the process we create conditions where the larger goal of prediction is undermined at every turn. Rather than define success in terms of real progress, we produce artificial measures of success. A key to improving this state of affairs is an honest assessment of all of our uncertainties, both experimental and computational. There are genuine challenges to this honesty. Generally, the more work we do, the more uncertainty we unveil. This is true of experiments and computations alike. Think about examining replicate uncertainty in complex experiments. In most cases the experiment is done exactly once, and the prospect of reproducing it is completely avoided. As soon as replicate experiments are conducted, the uncertainty becomes larger. Before the replicates, this uncertainty was simply taken to be zero, and no one challenged the assertion. If we go back and adjust our past assessments based on current knowledge, we run the very real risk of looking like we are moving backwards. The answer is not to continue this willful ignorance, but to offer a mea culpa and admit our former shortcomings. These mea culpas are similarly avoided, backing the forces of progress into an ever-tighter corner.

The core of the issue is relentlessly psychological. People are uncomfortable with uncertainty and want to believe things are certain. They are uncomfortable with random events, and a sense of determinism is comforting. As such, modeling reflects these desires and beliefs. Experiments are similarly biased toward these beliefs. When we allow these beliefs to go unchallenged, the entire basis of scientific progress becomes unhinged. Confronting and challenging these comforting implicit assumptions may be the single most difficult task for predictive science. We are governed by assumptions that limit our actual capacity to predict nature. Admitting flaws in these assumptions and measuring how much we don't know is essential for creating the environment necessary for progress. The fear of saying, "I don't know" is our biggest challenge. In many respects we are managed to never give that response. We need to admit what we don't know and challenge ourselves to seek those answers.

Only a few centuries ago, a mere second in cosmic time, we knew nothing of where or when we were. Oblivious to the rest of the cosmos, we inhabited a kind of prison, a tiny universe bounded by a nutshell.

How did we escape from the prison? It was the work of generations of searchers who took five simple rules to heart:

  1. Question authority. No idea is true just because someone says so, including me.
  2. Think for yourself. Question yourself. Don’t believe anything just because you want to. Believing something doesn’t make it so.
  3. Test ideas by the evidence gained from observation and experiment. If a favorite idea fails a well-designed test, it's wrong. Get over it.
  4. Follow the evidence wherever it leads. If you have no evidence, reserve judgment.

And perhaps the most important rule of all…

  5. Remember: you could be wrong. Even the best scientists have been wrong about some things. Newton, Einstein, and every other great scientist in history — they all made mistakes. Of course they did. They were human.

Science is a way to keep from fooling ourselves, and each other.

― Neil deGrasse Tyson
