The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Author Archives: Bill Rider

The Default Uncertainty is Always ZERO

Friday, 22 April 2016

Posted by Bill Rider in Uncategorized


As long as you’re moving, it’s easier to steer.

― Anonymous

Just to be clear, this isn’t a good thing; it is a very bad thing!

I have noticed a phenomenally common and undeniably unfortunate practice: when uncertainty goes unassessed, the uncertainty reported (acknowledged, accepted) is identically ZERO. In other words, if we do nothing at all, no work, no judgment, the work (modeling, simulation, experiment, test) is allowed to carry an uncertainty of ZERO. This encourages scientists and engineers to continue doing nothing, because the wildly optimistic assessment looks like a benefit. If somebody does the work to estimate the uncertainty, the reported uncertainty always gets larger as a result. This practice is desperately harmful to the practice and progress of science, and it is incredibly common.

Of course this isn’t the reality; the uncertainty is actually some value, but the lack of an assessed uncertainty is allowed to pass as ZERO. The problem is the failure of other scientists and engineers to demand an assessment instead of simply accepting the lack of due diligence, or of outright curiosity and common sense. In reality, when the lack of knowledge is that dramatic, the estimated uncertainty should be much larger to account for it. Instead we create a cynical cycle where more information is greeted by more uncertainty rather than less. The only way to create a virtuous cycle is to acknowledge that little information should mean large uncertainties, and that part of the reward for good work is greater certainty (and lower uncertainty).

This entire post is built on a rather simple observation with broad implications for how science and engineering are practiced today. A great deal of work has this zero uncertainty writ large, i.e., there is no reported uncertainty at all, none, ZERO. Yet, despite the demonstrable and manifest shortcomings, a gullible or lazy community readily accepts the incomplete work. Some of the better work has uncertainties associated with it, but almost always with varying degrees of incompleteness. Of course one should acknowledge up front that uncertainty estimation is always incomplete, but the degree of incompleteness can be spellbindingly large.

One way to deal with all of this uncertainty is to introduce a taxonomy of uncertainty so that we can start to organize our lack of knowledge. For modeling and simulation exercises I’m suggesting three big bins: numerical, epistemic modeling, and modeling discrepancy (a minimal bookkeeping sketch follows the list below). Each of these categories has additional subcategories that may be used to organize the work toward a better and more complete technical assessment. The definition of each category gives an idea of the texture within it, and an explicit view of its intrinsic incompleteness.

  • Numerical: Discretization (time, space, distribution), nonlinear approximation, linear convergence, mesh, geometry, parallel computation, roundoff,…
  • Epistemic Modeling: black box parametric, Bayesian, white box testing, evidence theory, polynomial chaos, boundary conditions, initial conditions, statistical,…
  • Modeling discrepancy: Data uncertainty, model form, mean uncertainty, systematic bias, boundary conditions, initial conditions, measurement, statistical, …
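
As an illustration only, here is a minimal sketch (in Python) of how such a taxonomy might be carried alongside a simulation result. The category names simply mirror the bins above; everything else (class names, fields, the quadrature roll-up) is a hypothetical bookkeeping choice, not an established standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UncertaintyItem:
    source: str               # e.g., "discretization", "model form", "measurement"
    value: Optional[float]    # None = not yet assessed -- NOT the same as zero!
    evidence: str             # mesh study, expert judgment, prior experience, ...

@dataclass
class UncertaintyBudget:
    numerical: list = field(default_factory=list)
    epistemic_modeling: list = field(default_factory=list)
    modeling_discrepancy: list = field(default_factory=list)

    def total_reported(self) -> float:
        """Roll up in quadrature, refusing to treat unassessed items as zero."""
        items = self.numerical + self.epistemic_modeling + self.modeling_discrepancy
        if any(item.value is None for item in items):
            raise ValueError("Unassessed uncertainty present; the default is not ZERO.")
        return sum(item.value ** 2 for item in items) ** 0.5
```

The point of the sketch is only the `None` check: a missing assessment should block the report rather than silently contribute nothing.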

A very specific thing to note is that the ability to assess any of these uncertainties is always incomplete and inadequate. Admitting this, and giving it some deference, is extremely important in getting to a better state of affairs. A general principle to strive for in uncertainty estimation is a state where greater effort yields smaller uncertainties. A way to achieve this is to penalize the uncertainty estimate to account for incomplete information. Statistical methods do this for sampling: the standard error shrinks in proportion to the square root of the number of samples, so there is an explicit benefit to gathering more data to reduce the uncertainty. This sort of measure is well suited to encourage a virtuous cycle of information collection. Instead, modeling and simulation accepts a poisonous cycle where more information implicitly penalizes the effort by increasing uncertainty.
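
A minimal sketch of that virtuous statistical cycle, using made-up samples: the standard error of the mean shrinks like one over the square root of the sample count, so more work means a smaller reported uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
population_sigma = 2.0

# Standard error of the mean ~ sigma / sqrt(N): more samples, smaller
# reported uncertainty -- the reward structure described above.
for n in [4, 16, 64, 256]:
    samples = rng.normal(loc=10.0, scale=population_sigma, size=n)
    std_error = samples.std(ddof=1) / np.sqrt(n)
    print(f"N = {n:4d}   mean = {samples.mean():6.3f}   standard error = {std_error:.3f}")
```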

This whole post is predicated on the observation that we willingly enter into a system where effort increases the reported uncertainty. The objective should be the direct opposite: more effort results in smaller uncertainty. We also need to embrace a state where we recognize that the universe has an irreducible core of uncertainty. Admitting that perfect knowledge and prediction are impossible will allow us to focus more acutely on what we can predict. This is really a situation where we are willfully ignorant and over-confident about our knowledge. One might tie some of the general issues with reproducibility and replicability in science to the same phenomenon. Any effort that purports to provide a perfect set of data perfectly predicting reality should be rejected as utterly ridiculous.

One of the next things to bring to the table is the application of expert knowledge and judgment to fill in where stronger technical work is missing. Today expert judgment is implicitly present in the lack of assessment. It is a dangerous situation where experts simply assert that things are true or certain. Instead of this expert input being directly identified, it is embedded in the results. A much better state of affairs is to ask for the uncertainty and the evidence for its value. If there has been work to assess the uncertainty, that work can be provided. If instead the uncertainty is based on some sort of expert judgment or previous experience, the evidence can be provided in that form.

Now let us be more concrete about what this sort of evidence might look like within the expressed taxonomy for uncertainty. I’ll start with numerical uncertainty estimation, which is the most commonly and completely non-assessed uncertainty. Far too often a single calculation is simply shown and used without any discussion. In slightly better cases, the calculation will be given with some comments on the sensitivity of the results to the mesh and the statement that numerical errors are negligible at the mesh given. Don’t buy it! This is usually complete bullshit! In every case where no quantitative uncertainty is explicitly provided, you should be suspicious. In other cases, unless the reasoning is stated as being expertise or experience, it should be questioned. If it is stated as experiential, then the basis for this experience and its documentation should be given explicitly along with evidence that it is directly relevant.

So what does a better assessment look like?

Under ideal circumstances you would use a model for the error (uncertainty) and do enough computational work to determine the model. The model or models would characterize all of the numerical effects influencing the results. Most commonly, the discretization error is assumed to be the dominant numerical uncertainty (again, evidence should be given). If the error can be defined as depending on a single spatial length scale, the standard power-law error model can be used, and determining its coefficients requires three meshes. This best practice is remarkably uncommon in practice. If fewer meshes are used, the model is under-determined and information in the form of expert judgment should be added. I have worked on the case of only two meshes being used, and it is clear what to do in that case. A minimal sketch of the three-mesh procedure follows.
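
As an illustration only, here is a sketch of the three-mesh estimate, assuming uniform refinement by a constant factor and a single dominant error term of the form A h^p; the solution values are made up.

```python
import numpy as np

def three_mesh_error_model(f_coarse, f_medium, f_fine, refinement_ratio):
    """Fit f(h) ~ f_exact + A*h^p from solutions on three uniformly refined meshes.

    Assumes a single dominant error term and a constant refinement ratio r,
    so the observed order is p = log((f_c - f_m)/(f_m - f_f)) / log(r).
    """
    r = refinement_ratio
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)   # Richardson-style estimate
    error_fine = abs(f_fine - f_exact)                      # fine-mesh error estimate
    return p, f_exact, error_fine

# Hypothetical mesh study: a scalar result on meshes refined by a factor of 2.
p, f_exact, err = three_mesh_error_model(1.120, 1.035, 1.010, refinement_ratio=2.0)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_exact:.4f}, "
      f"fine-mesh error estimate ~ {err:.4f}")
```

The error estimate on the finest mesh is what should be carried forward as the numerical uncertainty, rather than the default ZERO.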

In many cases there is no second mesh to provide any basis for standard numerical error estimation. Far too many calculational efforts provide a single calculation without any idea of the requisite uncertainties. In a nutshell, the philosophy in many cases is that the goal is to complete the best single calculation possible, and creating a calculation that is capable of being assessed is not a priority. In other words, the value proposition for computation is either the best single calculation without any idea of the uncertainty, or a lower quality simulation with a well-defined assessment of uncertainty. Today the best single calculation is the default approach. This best single calculation then uses the default uncertainty estimate of exactly ZERO because nothing else is done. We need to adopt an attitude that rejects this approach because of the dangers of accepting a calculation without any quality assessment.

In the absence of data and direct work to support a strong technical assessment of uncertainty, we have no choice except to provide evidence via expert judgment and experience. A significant advance would be a general expectation that such assessments be made and that the default ZERO uncertainty never be accepted. For example, there are situations where single experiments are conducted without any knowledge of how the results fit within any distribution of results. The standard approach to modeling is a desire to exactly replicate the results, as if the experiment were a well-posed initial value problem instead of one realization from a distribution of results. We end up chasing our tails in the process and inhibiting progress. Again we are left in the same boat as before: the default uncertainty in the experimental data is ZERO. We make no serious attempt to examine the width and nature of the distribution in our assessments. The result is a lack of focus on the true nature of our problems and an inhibition of progress.

The problems continue across the various uncertainty sources. In many cases the practice of uncertainty estimation is viewed only as establishing the degree of uncertainty in the modeling parameters used in various closure models. This is often termed epistemic uncertainty, or lack of knowledge. This sometimes provides the only identified uncertainty in a calculation because tools exist for creating this data from calculations (often using a Monte Carlo sampling approach; a minimal sketch is given below). In other words, the parametric uncertainty is often presented as being all of the uncertainty! Such studies are rarely complete and always fail to include the full spectrum of parameters in the modeling. They are also intrinsically limited by being embedded in a code that has other unchallenged assumptions.
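
As an illustration only, a minimal sketch of parametric uncertainty propagation by Monte Carlo sampling; the closure model, parameter distributions, and evaluation point are entirely hypothetical stand-ins for what sits inside a real code.

```python
import numpy as np

def closure_model(x, c1, c2):
    """Hypothetical closure: the response depends on two uncertain coefficients."""
    return c1 * x + c2 * x**2

rng = np.random.default_rng(1)
n_samples = 5000
x_eval = 0.7

# Sample the uncertain closure coefficients from assumed (made-up) distributions.
c1 = rng.normal(loc=1.0, scale=0.1, size=n_samples)
c2 = rng.uniform(low=0.2, high=0.4, size=n_samples)

response = closure_model(x_eval, c1, c2)
print(f"mean response    = {response.mean():.4f}")
print(f"std (parametric) = {response.std(ddof=1):.4f}")
print(f"95% interval     = ({np.percentile(response, 2.5):.4f}, "
      f"{np.percentile(response, 97.5):.4f})")
```

Note that this covers only the parameters that were sampled; the model-form and numerical pieces of the taxonomy are untouched by it, which is exactly the limitation described above.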

This is a virtue, but it ignores broader modeling issues almost completely. For example, the basic equations and model used in simulations are rarely, if ever, questioned. The governing equations, minus the closure, are assumed to be correct a priori. This is an extremely dangerous situation because these equations are not handed down from the creator on stone tablets; they are full of assumptions that should be challenged and validated with regularity. Instead this happens with remarkable rarity, despite being the dominant source of error in some cases. When it is the dominant source, creating a predictive simulation is simply impossible. Take the incompressible flow equations, which are rarely questioned. These equations rest on a number of stark approximations that are taken as the truth almost without thought. The various unphysical aspects of the approximation are ignored. For compressible flow the equations are based on equilibrium assumptions, which are rarely challenged or studied.

A second area of systematic and egregious oversight by the community is aleatory, or random, uncertainty. This sort of uncertainty is overlooked by our modeling approach in a way that most people fail to appreciate. Our models and governing equations are oriented toward solving for the average or mean solution of a given engineering or science problem. This key point is usually muddled in modeling by adopting an approach that mixes a specific experimental event with a model focused on the average, yielding a model with an unclear separation of the general and the specific. Few experiments or events being simulated are viewed as a single instantiation of a distribution of possible outcomes. The distribution of possible outcomes is generally completely unknown and not even considered. This leads to an important source of systematic uncertainty that is completely ignored.

It doesn’t matter how beautiful your theory is … If it doesn’t agree with experiment, it’s wrong.

― Richard Feynman

Almost every validation exercise tries to treat the experiment as a well-posed initial value problem with a single correct answer, instead of as a single possible realization from an unknown distribution. More and more, the nature of the distribution is the core of the scientific or engineering question we want to answer, yet our modeling approach is hopelessly stuck in the past because we are not framing the question thoughtfully. Often the key question is how likely a certain bad outcome will be. We want to know the likelihood of extreme events given a set of changes in a system. Think about what a hundred-year flood looks like under a scenario of climate change, or the likelihood that a mechanical part might fail under normal usage. Instead our fundamental models describe the average response of the system, and we are left to infer these extreme events from the average, often without any knowledge of the underlying distributions (a small sketch of the distinction follows). This implies a need to change the fundamental approach we take to modeling, but we won’t until we start to ask the right questions and characterize the right uncertainties.
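
As an illustration only, a minimal sketch of why the mean response says little about extreme events; the two response distributions and the failure threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
failure_threshold = 2.0
n = 200_000

# Two hypothetical systems with the SAME mean response but different spread.
narrow = rng.normal(loc=1.0, scale=0.3, size=n)
wide = rng.normal(loc=1.0, scale=1.0, size=n)

for name, response in [("narrow", narrow), ("wide", wide)]:
    p_fail = np.mean(response > failure_threshold)
    print(f"{name:6s}: mean = {response.mean():.2f}, "
          f"P(response > {failure_threshold}) = {p_fail:.2e}")
# A model that only predicts the mean cannot distinguish these two systems,
# yet their probabilities of the extreme (bad) outcome differ by orders of magnitude.
```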

One should avoid carrying out an experiment requiring more than 10 per cent accuracy.

― Walther Nernst

The key to progress is working toward best practices that avoid these pitfalls. First and foremost, a modeling and simulation activity should never allow itself to report, or even imply, that key uncertainties are ZERO. If one has lots of data and makes the effort to assess it, uncertainties can be assigned through strong technical arguments. This is terribly, even embarrassingly, uncommon even today. If one does not have the data or calculations to support uncertainty estimation, then significant amounts of expert judgment and strong assumptions are necessary to estimate uncertainties. The key is to make a serious commitment to being honest about what isn’t known, and to take a penalty for the lack of knowledge and understanding. That penalty should be well grounded in evidence and experience. Making progress in these areas is essential if modeling and simulation is to become a vehicle worthy of the hype we hear all the time.

Stagnation is self-abdication.

― Ryan Talbot

Modeling and simulation is looked at as one of the great opportunities for industrial, scientific and engineering improvements for society. Right now we are hinging our improvements on a mass of software being moved onto increasingly exotic (and powerful) computers. Increasingly, the whole of our effort in modeling and simulation is being reduced to nothing but a software development activity. The holistic and integrating nature of modeling and simulation is being hollowed out and lost to a series of fatal assumptions. One thing computing power cannot change is how we practice our computational efforts; it can only enable those practices by making it possible to do more computation. The key to fixing this dynamic is a commitment to understanding the nature and limits of our capability. Today we just assume that our modeling and simulation has mastery and that no such assessment is needed.

Computational capability does nothing to improve experimental science’s necessary value in challenging our theory. Moreover, the whole sequence of necessary activities, model development and analysis, method and algorithm development, along with experimental science and engineering, is receiving almost no attention today. These activities are absolutely necessary for modeling and simulation success, along with the sort of systematic practices I’ve elaborated on in this post. Without a sea change in the attitude toward how modeling and simulation is practiced and what it depends upon, its promise as a technology will be stillborn, nullified by our collective hubris.

It is high time for those working to advance modeling and simulation to focus energy and effort where it is needed. Today we are avoiding a rational discussion of how to make modeling and simulation successful, and relying on hype to govern our decisions. The goal should not be to assure that high performance computing is healthy, but rather that modeling and simulation (or big data analysis) is healthy. High performance computing is simply a necessary tool for these capabilities, not the soul of either. We need to make sure the soul of modeling and simulation is healthy rather than the corrupted mass of stagnation we have.

You view the world from within a model.

― Nassim Nicholas Taleb

 

The Essential Asymmetry in Fluid Mechanics

Friday, 15 April 2016

Posted by Bill Rider in Uncategorized


…beauty is not symmetry of parts- that’s so impotent -as Mishima says, beauty is something that attacks, overpowers, robs, & finally destroys…

― John Geddes

In much of physics a great deal is made of the power of symmetry. The belief is that symmetry is a powerful tool, but also a compelling source of beauty and depth. In fluid mechanics the really cool stuff happens when the symmetry is broken. The power and depth of consequence comes from the asymmetric part of the solution. When things are symmetric they tend to be boring and uninteresting, and nothing beautiful or complex arises. I’ll be so bold as to say that the power of this essential asymmetry hasn’t been fully exploited, but could be even more magnificent.

Fluid mechanics at its simplest is something called Stokes flow: motion so slow that it is governed solely by viscous forces. This is the asymptotic state where the Reynolds number (the ratio of inertial to viscous forces) is identically zero. It’s a bit oxymoronic since this limit is never actually reached; it’s the equations of motion without any motion, or where the motion can be ignored. In this limit flows preserve their basic symmetries to a very high degree.

Basically nothing interesting happens and the whole thing is a giant boring glop of nothing. It is nice because lots of pretty math can be done in this limit. The equations are very well behaved, and solutions have tremendous regularity and simplicity. Let the fluid move, taking the Reynolds number away from zero, and cool things almost immediately happen. The big thing is that the symmetry is broken and the flow begins to contort and wind into amazing shapes. Continue to raise the Reynolds number and the asymmetries pile up; we get turbulence and chaos, and our understanding goes out the window. At the same time the whole thing produces patterns and structures of immense and inspiring beauty. With symmetry fluid mechanics is dull as dirt; without symmetry it is amazing and something to be marveled at.

So, let’s dig a bit deeper into the nature of these asymmetries and the opportunity to take them even further.

The fundamental asymmetry in physics is the arrow of time and its close association with entropy. The connection between asymmetry and entropy is quite clear and strong for shock waves, where the mathematical theory is well developed and accepted. The simplest case to examine is Burgers’ equation, u_t + u u_x = 0, or its conservation form u_t + \frac{1}{2} \left[u^2 \right]_x = 0. This equation supports shocks and rarefactions, and their formation is determined by the sign of u_x. If one takes the gradient of the governing equation in space, the solution forms a Riccati equation along characteristics, \left( u_x \right)_t + u\, u_{xx} + \left( u_x \right)^2 = 0. The solution along characteristics tells one the fate of the solution, u_x\left(t\right) = \frac{u_x\left(0\right)}{1 + t\, u_x\left(0\right)}. The thing to recognize is that the denominator will go to zero if u_x\left(0\right)<0, and the value of the derivative will become unbounded, i.e., a shock forms.
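
A minimal sketch of that gradient dynamics: evaluating the Riccati solution along a characteristic shows the derivative blowing up at t = -1/u_x(0) when the initial gradient is negative, and relaxing when it is positive. The initial values are arbitrary.

```python
import numpy as np

def gradient_on_characteristic(ux0, t):
    """Exact Riccati solution for Burgers': u_x(t) = u_x(0) / (1 + t*u_x(0))."""
    return ux0 / (1.0 + t * ux0)

for ux0 in [+0.5, -0.5]:
    times = np.linspace(0.0, 1.8, 7)
    values = [gradient_on_characteristic(ux0, t) for t in times]
    print(f"u_x(0) = {ux0:+.1f}:", "  ".join(f"{v:8.2f}" for v in values))
# For u_x(0) = -0.5 the denominator vanishes at t = 2: the gradient diverges
# (a shock forms); for the positive gradient the solution simply rarefies.
```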

The process in fluid dynamics is similar. If the viscosity is sufficiently small and the gradient of velocity is negative, a shock will form; it is as inevitable as death and taxes. Moving back to Burgers’ equation briefly, we can also see another aspect of the dynamics that isn’t so commonly known: the presence of dissipation in the absence of viscosity. Without viscosity, for rarefied flow where gradients diminish, there is no dissipation. For a shock there is dissipation, and its form will be quite familiar by the end of the post. If one forms an equation for the evolution of the kinetic energy in a Burgers’ flow and looks at the solution for a shock via the jump conditions, a discrepancy is uncovered: the rate of kinetic energy dissipation is \ell \left(\tfrac{1}{2} u^2\right)_t = \frac{1}{12}\left(\Delta u\right)^3. The same basic character is shared by shock waves and incompressible turbulent flows. It implies the presence of a discontinuity in the model of the flow.
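
As a check on that claim, a minimal sketch computing the kinetic-energy production across a Burgers’ shock directly from the jump conditions; the left and right states are arbitrary, and the sign convention here makes the rate negative (energy is dissipated) when the jump Δu = u_left − u_right is positive.

```python
def burgers_shock_dissipation(u_left, u_right):
    """Kinetic-energy production rate across a Burgers' shock from jump conditions.

    For u_t + (u^2/2)_x = 0 with entropy pair eta = u^2/2, q = u^3/3 and shock
    speed s = (u_left + u_right)/2, the production rate is s*[eta] - [q],
    which equals -(1/12)*(u_left - u_right)**3 (negative => dissipation).
    """
    s = 0.5 * (u_left + u_right)
    jump_eta = 0.5 * u_left**2 - 0.5 * u_right**2
    jump_q = u_left**3 / 3.0 - u_right**3 / 3.0
    return s * jump_eta - jump_q

for uL, uR in [(1.0, 0.0), (2.0, -1.0), (0.3, 0.1)]:
    rate = burgers_shock_dissipation(uL, uR)
    formula = -(uL - uR) ** 3 / 12.0
    print(f"uL={uL:+.1f}, uR={uR:+.1f}: jump-condition rate = {rate:+.5f}, "
          f"-(1/12)(du)^3 = {formula:+.5f}")
```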

On the one hand the form seems unavoidable dimensionally; on the other it is a profound result that provides the basis of the Clay prize for turbulence. It gets to the core of my belief that, to a very large degree, the understanding of turbulence will elude us as long as we use the intrinsically unphysical incompressible approximation. This may seem controversial, but incompressibility is an approximation to reality, not a fundamental relation. As such its utility depends upon the application. It is undeniably useful, but has limits, which are shamelessly exposed by turbulence. Without viscosity the equations governing incompressible flows are pathological in the extreme. Deep mathematical analysis has been unable to find singular solutions of the nature needed to explain turbulence in incompressible flows.

The real key to understanding the issues goes to a fundamental misunderstanding about shock waves and compressibility. First, it helps to see how the same dynamic manifests itself for the compressible Euler equations. For all intents and purposes, shock formation in the Euler equations acts just like Burgers’ equation along the nonlinear characteristics. In its simplest form the Euler equations have three fundamental characteristic modes: two nonlinear modes associated with acoustic (sound) waves, and one linear mode associated with material motion. The nonlinear acoustic modes act just like Burgers’ equation, and propagate at a velocity of u\pm c, where u is the fluid velocity and c is the speed of sound.

Once the Euler equations are decomposed into characteristics and the flow is smooth, everything follows as for Burgers’ equation. Along the appropriate characteristic, the flow is modulated according to the nonlinearity of the equations, which differs from Burgers’ in an important manner. The nonlinearity now depends on the equation of state in a key way, through the curvature of an isentrope, G=\left.\partial_{\rho\rho} p\right|_S . This quantity is dominantly and asymptotically positive (i.e., convex), but may be negative. For ideal gases G=\left(\gamma + 1\right)/2. For convex equations of state, shocks always form given enough time if the velocity gradient is negative, just like Burgers’ equation.

One key thing to recognize is that the formation of the shock does not depend on the underlying Mach number of the flow. A shock always forms if the velocity gradient is negative, even as the Mach number goes to zero (the incompressible limit). Almost everything else follows as with Burgers’ equation, including the dissipation relation associated with a shock wave, T\, d S=\frac{G}{12c}\left(\Delta u\right)^3. Once the shock forms, the dissipation rate is proportional to the cube of the jump across the shock. In addition, this limit is actually most appropriate in the zero Mach number limit (i.e., the same limit as incompressible flow!).
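
As an illustration of these relations, a minimal sketch evaluating the ideal-gas nonlinearity G = (γ+1)/2 and the weak-shock dissipation T dS ≈ (G/12c)(Δu)³ quoted above; the gas properties and velocity jumps are arbitrary, and the numbers are only meant to show the cubic scaling.

```python
def ideal_gas_G(gamma):
    """Nonlinearity parameter for an ideal gas, G = (gamma + 1)/2, as in the text."""
    return 0.5 * (gamma + 1.0)

def weak_shock_dissipation(delta_u, gamma=1.4, sound_speed=340.0):
    """T*dS ~ (G / (12*c)) * (delta_u)**3, the relation quoted above."""
    return ideal_gas_G(gamma) / (12.0 * sound_speed) * delta_u**3

print(f"G (gamma = 1.4) = {ideal_gas_G(1.4):.2f}")
for du in [1.0, 5.0, 20.0]:   # velocity jumps small compared with c
    print(f"delta_u = {du:5.1f}  ->  T*dS ~ {weak_shock_dissipation(du):.3e}")
# The cubic dependence on the jump mirrors the Burgers' dissipation result above,
# and it holds even as the jump (and the Mach number) becomes small.
```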

Shocks aren’t just supersonic phenomena; they are a result of solving the equations in a limit where the viscous terms are small enough to neglect (i.e., the high Reynolds number limit!). So, to sum up, shock formation along with intrinsic dissipation is most valid in the limits where we think of turbulence. We see that this key effect is a direct result of the asymmetric effect of a velocity gradient on the flow. For most flows, where the equation of state is convex, negative velocity gradients sharpen flow features into shocks that dissipate energy regardless of the value of viscosity. Positive velocity gradients smooth and rarefy the flow. Note that physically admissible non-convex equations of state (really isolated regions in state space) have the opposite character. If one could run a classical turbulence experiment where the fluid is non-convex, the conceptual leap I am suggesting could be tested directly, because the asymmetry in turbulence would be associated with positive rather than negative velocity gradients.

Now we can examine the basic known theory of turbulence that is so vexing to everyone. Kolmogorov came up with three key relations for turbulent flows. The spectral nature of turbulence is the best known, where one looks at the frequency decomposition of the turbulent flow and finds a distinct region where the decay of energy shows a -5/3 slope. There is a lesser-known relation for velocity correlations known as the 2/3 law. I believe the most important relation is the 4/5 law for the asymptotic decay of kinetic energy in a high Reynolds number turbulent flow. This relation implies that dissipation occurs in the absence of viscosity (sound familiar?).

The law is stated as \frac{4}{5} \left\langle K_t \right\rangle \ell = \left\langle\left(\Delta_L u\right)^3 \right\rangle, where the subscript L means longitudinal: the velocity differences are taken in the direction the velocity points, over a distance \ell. This relation implies a distinct asymmetry in the equations, meaning negative gradients are intrinsically sharper than positive gradients. This is exactly what happens in compressible flows. Kolmogorov derived this relation from the incompressible flow equations, and it has been strongly confirmed by observations. The whole issue associated with the (in)famous Clay prize is the explanation of this law within the mathematically admissible solutions of the incompressible equations. The law suggests that the incompressible flow equations must support singularities that are in essence like a shock. My point is that the compressible equations support exactly the phenomena we seek in the right limits for turbulence. The compressible equations have none of the pathologies of the incompressible equations, have a far greater physical basis, and remove the unphysical aspects of the physical-mathematical description.
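
A minimal sketch of the quantity the 4/5 law concerns: the third-order longitudinal structure function ⟨(Δ_L u)³⟩ computed from a sampled velocity signal. The signal here is a synthetic sawtooth (gentle rises, abrupt drops) standing in for a field with sharp negative gradients, not turbulence data, so only the sign of the asymmetry is meaningful.

```python
import numpy as np

def third_order_structure_function(u, separation):
    """< (u(x + r) - u(x))^3 > for a periodic 1D velocity sample."""
    du = np.roll(u, -separation) - u
    return np.mean(du**3)

# Synthetic sawtooth: slow rises, abrupt drops -- mimicking the asymmetry that
# shocks (and the 4/5 law) describe.
n = 4096
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = (x * 8.0) % 1.0 - 0.5

for r in [2, 8, 32]:
    s3 = third_order_structure_function(u, r)
    print(f"separation = {r:3d} cells: <(du_L)^3> = {s3:+.4e}")
# The negative third moment is the signature of the asymmetry: steep negative
# jumps outweigh the gentle positive gradients.
```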

The result is the conclusion that the incompressible equations are inappropriate for understanding what is happening fundamentally in turbulence. The right way to think about it is that the turbulent relations are supported by the basic physics of compressible flows in the right asymptotic limits: zero Mach number and high Reynolds number.

Symmetry is what we see at a glance; based on the fact that there is no reason for any difference…

― Blaise Pascal

The Singularity Abides

Friday, 8 April 2016

Posted by Bill Rider in Uncategorized


It isn’t all over; everything has not been invented; the human adventure is just beginning.

― Gene Roddenberry

Listening to the dialog on modeling and simulation is so depressing. There seems to be an assumption implicit in every discussion that all we need to do to unleash predictive simulation is build the next generation of computers. The proposition is so shallow on the face of it as to be utterly laughable. Except no one is laughing; the programs are predicated on it. The whole mentality is damaging because it intrinsically limits our thinking about how to balance the various elements needed for progress. We see a lack of the sort of approach that can lead to progress, with experimental work starved of funding and focus, and without the mathematical modeling effort necessary for utility. Actual applied mathematics has become a veritable endangered species, seen only rarely in the wild.

Often a sign of expertise is noticing what doesn’t happen.

― Malcolm Gladwell

One of the really annoying aspects of the hype around computing these days is the lack of practical and pragmatic perspective on what might constitute progress. Among the topics revolving around modeling and simulation practice is the pervasive presence of singularities of various types in realistic calculations of practical significance. Much of the dialog and technical dynamic seems to completely avoid the issue and acts as if it isn’t a driving concern. The reality is that singularities of various forms and functions are an ever-present aspect of realistic problems, and their mediation is absolutely essential for modeling and simulation’s impact to be fully felt. We still have serious issues because of our somewhat delusional dialog on singularities.

At a fundamental level we can state that singularities don’t exist in nature, but very small or thin structures do, whose details don’t matter for large scale phenomena. Thus singularities are a mathematical feature of models for large scale behavior that ignore small scale details. As such, when we talk about the behavior of singularities, we are really just looking at models and asking whether the model’s behavior is good. The important aspect of the things we call singularities is their impact on the large scale and the capacity to do useful things without resolving the small-scale details. Much, if not all, of the current drive for computational power is focused on brute-force submission of the small-scale details. This approach fails to ignite the sort of deep understanding that a model which ignores the small scales requires. Such understanding is the real role of science, not simply overwhelming things with technology.

The role of genius is not to complicate the simple, but to simplify the complicated.

― Criss Jami

The important thing to capture is the universality of the small scale’s impact on the large scale. It is closely and intimately related to ideas around the role of stochastic, random structures and models for average behavior. One of the key things to really straighten out is the nature of the question we are asking the model to answer. If the question isn’t clearly articulated, the model will provide deceptive answers that send scientists and engineers in the wrong direction. Getting this model-to-question dynamic sorted out is far more important to the success of modeling and simulation than any advance in computing power. It is also completely and utterly off the radar of the modern research agenda. I worry that the present focus will produce damage to the forces of progress that may take decades to undo.

A key place where singularities regularly show up is the representation of geometry. It is really useful to represent things with sharp corners and rapid transitions geometrically. Our ability to simulate anything engineered would suffer immensely if we had to compute the detailed smooth parts of the geometry. In many cases the detail is then handled with a sort of subgrid model, like surface roughness, to represent the impact of the true non-idealized geometry. This is a key example of the treatment of such details being almost entirely specific to a physics domain; there is no systematic view of this across fields. The same sort of effect shows up when we marry parts together, whether of the same or different materials.

An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.

― Werner Heisenberg

Again the details are immense; the simplification is an absolute necessity. The question that looms over all these discussions is the availability of a mathematical theory that allows the small scale to be ignored while still explaining the physical phenomena. Such a theory would imply a structure for the regularized singularity, and a recipe for successful simulation. For geometric singularities any theory is completely ad hoc and largely missing. Any such theory needs detailed and focused experimental confirmation and attention. As things work today, the basic structure is missing and is relegated to being applied in a domain-science manner. This structure is strong in fluid dynamics, and perhaps plasma physics, but almost absent in many other fields like solid mechanics, and the utility of modeling and simulation suffers mightily from this.

If there is any place where singularities are dealt with systematically and properly, it is fluid mechanics. Even in fluid mechanics there is a frighteningly large amount of missing territory, most acutely in turbulence. The place where things really work is shock waves, and we have some very bright people to thank for the order. We can calculate an immense amount of physical phenomena where shock waves are important while ignoring a tremendous amount of detail. All that matters is for the calculation to provide the appropriate integral content of dissipation from the shock wave, and the calculation is wonderfully stable and physical. It is almost never necessary, and almost certainly wasteful, to compute the full gory details of a shock wave.

Fluid mechanics has many nuances and details with important applications. The mathematical structure of fluids is remarkably well in hand. Boundary layer theory is another monument where our understanding is well defined. It isn’t quite as profoundly satisfying as shocks, but we can do a lot of wonderful things. Many important technological items are well defined and engineered with the able assistance of boundary layer theory. We have a great deal of faith in this knowledge and in the understanding of what will happen; the state of affairs is more good than problematic. As boundary layers get more and more exciting they lead to a place where problems abound, the problems that appear when a flow becomes turbulent. All of a sudden the structure becomes much more difficult, and prediction with deep understanding starts to elude us.

The same can’t be said by and large for turbulence. We don’t understand it very well at all. We have a lot of empirical modeling and conventional wisdom that allows useful science and engineering to proceed, but an understanding like we have for shock waves eludes us. It is so elusive that we have a prize (the Clay prize) focused on providing a deep understanding of the mathematical physics of its dynamics. The problem is that the physics strongly implies that the behavior of the governing equations (incompressible Navier-Stokes) admits a singularity, yet the equations don’t seem to. Such a fundamental incongruence is limiting our ability to progress. I believe the issue is the nature of the governing equations and a need to move this model away from incompressibility, which is a useful but unphysical approximation, not a fundamental physical law. In spite of all the problems, the state of affairs in turbulence is remarkably good compared with solid mechanics.

Another discontinuous behavior of great importance in practical matters is the material interface. Again, these interfaces are never truly singular in nature, but it is essential for utility to represent them that way. The capacity to use such a simple representation is challenged by many things, such as chemistry. More and more physics challenges the ability to use the singular representation without empirical and heavy-handed modeling. The ability to use well-defined mathematical models, as opposed to ad hoc modeling, implies the essential understanding that underlies a compelling science. The better the equations, the better the understanding, which is the essence of science and what should give us faith in its findings.

An example of lower mathematical maturity can be seen in the field of solid mechanics. In solids, the mathematical theory is stunted by comparison to fluids. A clear part of the issue is that the fathers of the field did not provide a clear path for combined analytical-numerical analysis as fluids had. The result is a numerical practice left completely adrift of the analytical structure of the equations. In essence the only option is to fully resolve everything in the governing equations. No structural and systematic explanation exists for the key singularities in materials, which is absolutely vital for computational utility. In a nutshell, the notion of the regularized singularity, so powerful in fluid mechanics, is foreign. This has a dramatically negative impact on the capacity of modeling and simulation to have maximal impact.

All of these principles apply quite well to a host of other fields; in my work these are the areas of radiation transport and plasma physics. The merging of mathematics and physical understanding in these areas is better than in solid mechanics, but not as advanced as in fluid mechanics. In many respects the theories holding sway in these fields have profitably borrowed from fluid mechanics, but not to the extent necessary for the thoroughly vetted mathematical-numerical modeling framework needed for ultimate utility. Both fields suffer from immense complexity, and the mathematical modeling tries to steer understanding, but ultimately various factors are holding them back. Not the least of these is a prevailing undercurrent and intent in modeling for the World to be a well-oiled machine prone to precise determination.

I would posit that one of the key aspects holding fields back from progress toward a fully utilitarian capability is the death grip that Newtonian-style determinism has upon our models of the World. Its stranglehold on the philosophy of solid mechanical modeling is nearly fatal and retards progress like a proverbial anchor. To the extent that it governs our understanding in other fields (i.e., plasma physics, turbulence, …), progress is harmed. In any practical sense the World is not deterministic, and modeling it as such has limited, if not negative, utility. It is time to release this concept as a blueprint for understanding. A far more pragmatic and useful path is to focus much greater energy on understanding the degree of unpredictability inherent in physical phenomena.

The key to making everything work is an artful combination of physical understanding with mathematical structure. The capacity of mathematics to explain and predict nature is profound and unremittingly powerful. In the case of the singularity it is essential for useful, faithful simulations that we may put confidence in. Moreover, the proper mathematical structure can alleviate the need for ad hoc mechanisms, which naturally produce less confidence and lower utility. Even where the mathematics seemingly exists, as with incompressible flow and turbulence, the lack of a tidy theory that reproduces the observed properties limits progress in profound ways. When the mathematics is even more limited and does not provide a structural explanation for what is seen, as with fracture and failure in solids, the simulations become untethered and progress is stunted by the gaps.

I suppose my real message is that the mathematical-numerical modeling book is hardly closed and complete. It represents very real work that is needed for progress. The current approach to modeling and simulation dutifully ignores this and produces a narrative presenting the value proposition that all we need is a faster computer. A faster computer, while useful and beneficial to science, is not the long pole in the tent insofar as improving the capacity of mathematical-numerical modeling to become more predictive. Indeed, the long pole in the tent may well be changing the narrative about the objectives, from prediction back to fundamental understanding.

The title of the post is a tongue-in-cheek homage to the line from The Big Lebowski, a Coen Brothers masterpiece. Like the Dude in the movie, a singularity is the essence of coolness and ease of use.

If you want something new, you have to stop doing something old

― Peter F. Drucker

 

Our Collective Lack of Trust and Its Massive Costs

Friday, 1 April 2016

Posted by Bill Rider in Uncategorized


Low-trust environments are filled with hidden agendas, a lot of political games, interpersonal conflict, interdepartmental rivalries, and people bad-mouthing each other behind their backs while sweet-talking them to their faces. With low trust, you get a lot of rules and regulations that take the place of human judgment and creativity; you also see profound disempowerment. People will not be on the same page about what’s important.

— Stephen Covey

Being thrust into a leadership position at work has been an eye-opening experience, to say the least. It makes crystal clear a whole host of issues that need to be solved. Being a problem-solver at heart, I’m searching for a root cause for all the problems that I see. One can see the symptoms all around: poor understanding, poor coordination, lack of communication, hidden agendas, ineffective vision, and intellectually vacuous goals. I’ve come more and more to the view that all of these things, evident as the day is long, are simply symptomatic of a core problem. The core problem is a lack of trust so broad and deep that it rots everything it touches.

Only those who dare to fail greatly can ever achieve greatly.

― Robert F. Kennedy 

What is the basis of the lack of trust and how can it be cured or at the very least ameliorated? Are we simply fated to live in a World where the trust in our fellow man is intrinsically low?

Failure is simply the opportunity to begin again, this time more intelligently.

— Henry Ford

First, it is beneficial to understand what is at stake. With trust firmly in hand, people are unleashed and new efficiencies are possible. Trust and faith in each other cause people to work better, faster and more effectively. Second-guessing is short-circuited. The capability to achieve big things is harnessed, and lofty goals can be achieved. Communication is easy and lubricated. Without trust and faith the impacts are the complete opposite, harming the capacity for excellence, progress and achievement. Whole books are written on the virtues of trust (Stephen Covey’s Speed of Trust comes to mind). Trust is a game changer and a great enabler. It is very clear that trust is something we are in short supply of, and its absence is harming society as a whole. Its fingerprints at work leave deep bruises and make anyone focused on progress frustrated.

Distrust is like a vicious fire that keeps going and going, even put out, it will reignite itself, devouring the good with the bad, and still feeding on empty.

― Anthony Liccione

The causes of low trust are numerous and deeply ingrained in the structure of society today. For example, the pervasive greed and profit motive in almost all things undermines any view that people are generous. A general opening ante in any interaction is the feeling that someone is trying to gain advantage by whatever means necessary. The Internet has undermined authority, making information (and disinformation) so ubiquitous that we’ve lost the ability to sift fact from fiction. Almost every institution in society is under attack on its legitimacy. All interests are viewed with suspicion, and corruption seemingly abounds. We’ve never had a greater capacity to communicate with one another, yet we understand less than ever.

I’ll touch more on greed and corruption because I think they are real and corrosive in the extreme. The issue is that the assumption of greed and corruption is wider than their actuality, and it causes a lot of needless oversight and bureaucracy that lays waste to efficiency. Of course greed is a very real thing, manifest in our corporate culture and even celebrated by the very people who are hurt the most. The rise of Donald Trump as a viable politician is ample evidence of just how incredibly screwed up everything has gotten. How a mildly wealthy, greedy, reality show character could ever be considered a viable President is the clearest sign of a very sick culture. It masks a very real problem of a society that celebrates the rich while those rich systematically prey on the whole of society like leeches. They claim to be helping society even while they damage our future to feed their hunger. The deeper psychic wound is the feeling that everyone is so motivated, leading to the broad-based lack of trust in your fellow man.

So where do I see this at work? The first thing is the incredibly short leash we are kept on and the pervasive micromanagement. We hear words like “accountability” and “management assurance”, but it’s really “we don’t trust you at all”. Every single activity is burdened by oversight that acts to question and second-guess every decision and even the motivations behind them. Rather than everyone knowing the larger aims, objectives and goals and assuming that there is a broad-based approach to solving them, people assume that folks are out to waste and defraud. We impose huge costs in time, money and effort to assure that no one wastes a dime or an hour doing anything except what they are assigned to do. All of this oversight comes at a huge cost, and the expense of the oversight itself is actually the tip of the proverbial iceberg. The micromanagement is so deep that it kneecaps any and all ability to be agile and adaptive in how work is done. Plans have to be followed to a tee even when it is evident that the plans didn’t really match the reality that develops upon meeting the problem.

Suspicion ruins the atmosphere of trust in a team and makes it ineffective

― Sunday Adelaja

The micromanagement has even deeper impacts on the ability to combine, collaborate and envision broader perspectives. People’s work is closely scrutinized, planned and defined. Rather than engage in deep, broad perspectives on the nature of the work, people are encouraged, if not implored, to focus on their specific assignments to the exclusion of all else. I’ve seen such narrowness produce deeply pathological effects, such as people treating different projects they personally work on as if they were executed by different people, incapable of expressing or articulating connections between the projects even when they are obvious. These impacts are crushing the level of quality in both the direct execution of the work and the development of people, in the sense of having a deep, sustained career that builds toward personal growth.

In the overall execution of work, another aspect of the current environment can be characterized as the proprietary attitude. Information hiding and lack of communication seem to be a growing problem even as the capacity for transmitting information grows. Various legal or political concerns seem to outweigh the needs for efficiency, progress and transparency. Today people seem to know much less than they used to instead of more. People are narrower and more tactical in their work rather than broader and strategic. We are encouraged to simply mind our own business rather than seek a broader attitude. The thing that really suffers in all of this is the opportunity to make progress toward a better future.

Don’t be afraid to fail. Don’t waste energy trying to cover up failure. Learn from your failures and go on to the next challenge. It’s ok to fail. If you’re not failing, you’re not growing.

— H. Stanley Judd

Milestones and reporting are the epitome of bad management and revolve completely around a lack of trust. Instead of being tools for managing effort and lubricating communication, these tools are used to dumb down work and assure low quality, low impact work as the standard of delivery. The reporting has to have marketing value instead of information and truth-value. It is used to sell the program and project an image of achievement rather than provide an accurate picture of status. Problems, challenges and issues tend to be soft-pedaled and deep-sixed rather than discussed openly and deeply. Project plans and milestones are anchors against progress instead of aspirational goals. There is significant structure in the attitude toward goals that drives quality and progress away from objectives. This drive is the inability to accept failure as a necessary and positive aspect of the conduct of any work that is aggressive and progressive. Instead we are encouraged to always succeed, and this encouragement means goals are defined too low and are trivially achievable.

I’ve written before about the preponderance of bullshit as a means of communicating work. Instead of honest and clear communication of information, we see communication constructed with the purpose of deceiving being rewarded. Part of the issue is the system’s inability to accept failure or unexpected results, which contributes to the bullshit. Another contributor is the lack of expertise in the value system. True experts are not trusted, or are simply viewed as having a hidden agenda. The notions of nuance that color almost anything an expert might tell you are simply not trusted. Instead we favor a simple and well-crafted narrative over the truth. It is much easier to craft fiction into a compelling message than a nuanced truth. Once this path is taken, it is a quick trip to complete bullshit.

How do we fix any of this?

The simplest thing to do is to value the truth, value excellence, and cease rewarding the sort of trust-busting actions enumerated above. Instead of allowing slipshod work to be reported as excellence, we need to make strong value judgments about the quality of work, reward excellence, and punish incompetence. Truth and fact need to be valued above lies and spin. Bad information needs to be identified as such and eradicated without mercy. Many greedy, self-interested parties are strongly inclined to seed doubt and push lies and spin. The battle is for the nature of society. Do we want to live in a World of distrust and cynicism, or one of truth and faith in one another? The balance today is firmly stuck at distrust and cynicism. The issue of excellence is rather pregnant. Today everyone is an expert, and no one is an expert, with the general notion of expertise being highly suspect. The impact of such a milieu is absolutely damaging to the structure of society and the prospects for progress. We need to seed, reward and nurture excellence across society instead of doubting and demonizing it.

Of course, deep within this value system is the concept of failure. Failure achieved while trying to do excellent, good things must cease to be punished. Today failure is equated with fraud and is punished even when the objectives were good and laudable. This breeds all the bad things that are corroding society. Failure is absolutely essential for learning and the development of expertise. To be an expert is to have failed, and failed in the right way. If we want to progress societally, we need to allow, even encourage, failure. We have to stop the attempts to create a fail-safe system, because fail-safe quickly becomes do nothing.

What do you do with a mistake: recognize it, admit it, learn from it, forget it.

— Dean Smith

Ultimately we need to conscientiously drive for trust as a virtue in how we approach each other. A big part of trust is the need for truth in our communication. The sort of lying, spinning and bullshit in communication does nothing but undermine trust and empower low quality work. We need to empower excellence through our actions rather than simply declare things to be excellent by definition and fiat. Failure is a necessary element in achievement and expertise. It must be encouraged. We should promote progress and quality as a necessary outcome across the broadest spectrum of work. Everything discussed above needs to be based in a definitive reality and have an actual basis in facts, instead of simply bullshitting about it or it only having “truthiness”. Not being able to see the evidence of reality in claims of excellence and quality simply amplifies the problems with trust, and risks a vicious cycle dragging us down instead of a virtuous cycle that lifts us up.

 

If you are afraid of failure you don’t deserve to be successful!

— Charles Barkley

There is only one thing that makes a dream impossible to achieve: the fear of failure.

― Paulo Coelho

Hyperviscosity is a Useful and Important Computational Tool

Thursday, 24 March 2016

Posted by Bill Rider in Uncategorized


The conventional view serves to protect us from the painful job of thinking.

― John Kenneth Galbraith

I chose the name “The Regularized Singularity” because regularization is so important to the conduct of computational simulations of significance. For real world computations, the nonlinearity of the models dictates that the formation of a singularity is almost a foregone conclusion. To remain well behaved and physical, the singularity must be regularized, which means the singular behavior is moderated into something computable. This is almost always accomplished with the application of a dissipative mechanism, which effectively imposes the second law of thermodynamics on the solution.

A useful, if not vital, tool is something called “hyperviscosity”. Taken broadly, hyperviscosity covers a spectrum of mathematical forms arising in numerical calculations; I’ll elaborate a number of the useful forms and options. Basically, a hyperviscosity is a viscous operator that has a higher differential order than regular viscosity. As most people know, but I’ll remind them, the regular viscosity is a second-order differential operator, and it is directly proportional to a physical value of viscosity. Such viscosities are usually a weakly nonlinear function of the solution, depending on the intensive variables (like temperature and pressure) rather than the structure of the solution. Hyperviscosity falls into a couple of broad categories: the linear form and the nonlinear form.

Unlike most people, I view numerical dissipation as a good thing and an absolute necessity. This doesn’t mean it should be wielded cavalierly or brutally, because it can be, and that gives computations a bad name. Conventional wisdom dictates that dissipation should always be minimized, but this is wrong-headed. One of the key aspects of important physical systems is the finite amount of dissipation produced dynamically. The asymptotically correct solution with a small viscosity does not have zero dissipation; it has a non-zero amount of dissipation arising from the proper large-scale dynamics. This knowledge is useful in guiding the construction of good numerical viscosities that enable us to efficiently compute solutions to important physical systems.

One of the really big ideas to grapple with is the utter futility of using computers to simply crush problems into submission. For most problems of any practical significance this will not be happening, ever. In terms of the physics of the problems, this is often the coward’s way out. In my view, if nature were going to submit to our mastery via computational power, it would have already happened. The next generation of computing won’t do the trick either. Progress depends on actually thinking about modeling. A more likely outcome is the diversion of resources away from the sort of thinking that allows progress to be made. Most systems do not depend on the intricate details of the problem anyway. The small-scale dynamics are universal and driven by the large scales. The trick to modeling these systems is to unveil the essence and core of the large-scale dynamics leading to what we observe.

Given that we aren’t going to be crushing our problems out of existence with raw computing power, hyperviscosity ends up being a handy tool to get more out of the computing we have. Viscosity depends upon having enough computational resolution to effectively allow it to dissipate energy from the computed system. If the computational mesh isn’t fine enough, the viscosity can’t stably remove the energy and the calculation blows up. This provides a very stringent limit on the resolution that can be computationally achieved.

The first form of viscosity to consider is the standard linear form, which in its simplest expression is a second-order differential operator, \nu \nabla^2 u. If we apply the operator to a Fourier mode \exp \left( \imath k {\bf x} \right) we can see how simple viscosity works: \nu \nabla^2 u = - \nu k^2 \exp\left( \imath k {\bf x}\right) (just substitute the Fourier description for the function into the operator). The viscosity grows in magnitude with the square of the wavenumber k. Only when the product of the viscosity and the wavenumber squared becomes large will the operator remove energy from the system effectively.

Linear dissipative operators only come from even orders of the differential. Moving to a fourth-order bi-Laplacian operator, it is easy to see how the hyperviscosity works: \nu \nabla^4 u = \nu k^4 \exp\left( \imath k {\bf x}\right). The dissipation now kicks in faster (k^4) with the wavenumber, allowing the simulation to be stabilized at comparatively coarser resolution than the corresponding simulation stabilized only by a second-order viscous operator. As a result the simulation can attack more dynamic and energetic flows with the hyperviscosity. One detail is that the sign of the operator changes with each step up the ladder; a sixth-order operator will have a negative sign, and attack the spectrum of the solution even faster, k^6, and so on.
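To make the wavenumber scaling concrete, here is a minimal sketch in Python (the coefficient, time, and wavenumbers are illustrative values I chose, nothing prescribed above) comparing how quickly a single Fourier mode decays under the second-order and fourth-order operators:

import numpy as np

# A Fourier mode exp(i k x) decays like exp(-nu k^2 t) under standard viscosity
# (u_t = nu u_xx) and like exp(-nu k^4 t) under a dissipative bi-Laplacian
# hyperviscosity (u_t = -nu u_xxxx).  The values of nu, t, and k are illustrative.
nu = 1.0e-3
t = 1.0
wavenumbers = np.array([1.0, 4.0, 16.0, 64.0])

damp_visc = np.exp(-nu * wavenumbers**2 * t)   # second-order operator
damp_hyper = np.exp(-nu * wavenumbers**4 * t)  # fourth-order operator

for k, d2, d4 in zip(wavenumbers, damp_visc, damp_hyper):
    print(f"k = {k:5.1f}   viscosity: {d2:9.3e}   hyperviscosity: {d4:9.3e}")

The low wavenumbers are left nearly untouched by the fourth-order operator while the high wavenumbers are damped far more aggressively, which is exactly the selectivity described above.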

Taking the linear approach to hyperviscosity is simple, but has a number of drawbacks from a practical point of view. First, the linear hyperviscosity operator becomes quite broad in its extent as the order of the method increases. The method is also still predicated on a relatively well-resolved numerical solution and does not react well to discontinuous solutions. As such the linear hyperviscosity is not entirely robust for general flows. It is better as an additional dissipation mechanism alongside more industrial strength methods, and for studies of a distinctly research flavor. Fortunately there is a class of methods that removes most of these difficulties, nonlinear hyperviscosity. Nonlinear is almost always better, or so it seems; not easier, but better.

Linearity breeds contempt

– Peter Lax

The first nonlinear viscosity came about from Prandtl’s mixing length theory and still forms the foundation of most practical turbulence modeling today. For numerical work the original shock viscosity derived by Richtmyer is the simplest hyperviscosity possible, \nu \ell \left| \nabla u\right| \nabla^2 u. Here \ell is a relevant length scale for the viscosity; in purely numerical work, \ell = C \Delta x. It provides what linear hyperviscosity cannot, stability and robustness, taking flows that would otherwise be computed with pervasive instability and making them stable and practically useful. It provides the fundamental foundation for shock capturing and the ability to compute discontinuous flows on grids. In many respects the entire CFD field is grounded upon this method. The notable aspect of the method is the dependence of the dissipation on the product of the coefficient \nu and the absolute value of the gradient of the solution.
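Here is a minimal one-dimensional sketch of that idea in Python. The grid, the coefficient C, the discontinuous test profile, and the simple centered differences are all assumptions for illustration, not a prescription from Richtmyer’s paper; the point is only that the coefficient \nu \ell \left| \nabla u\right| switches itself on where the gradients are steep and essentially vanishes where the solution is smooth.

import numpy as np

# Sketch of a Richtmyer-style nonlinear viscosity coefficient on a 1-D grid:
# nu_art = C * dx * |du/dx|, applied as an explicit dissipative update.
# Grid size, C, the test profile, and the time step rule are assumptions.
nx, C = 200, 2.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.where(x < 0.5, 1.0, 0.0)          # a steep (discontinuous) profile

dudx = np.gradient(u, dx)                 # the gradient senses the steep region
nu_art = C * dx * np.abs(dudx)            # large at the jump, ~0 in smooth regions

# One explicit dissipative step: u_t = nu_art * u_xx, with a conservatively
# small time step based on the largest local coefficient.
d2udx2 = np.gradient(dudx, dx)
dt = 0.2 * dx**2 / max(nu_art.max(), 1.0e-12)
u_new = u + dt * nu_art * d2udx2

Because the coefficient is proportional to \left| \nabla u \right|, the dissipation concentrates at the discontinuity and leaves the rest of the solution essentially alone, which is the behavior that makes shock capturing work.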

Looking at the functional form of the artificial viscosity, one sees that it is very much like the Prandtl mixing length model of turbulence. The simplest model used for large eddy simulation (LES) is the Smagorinsky model, developed first by Joseph Smagorinsky and used in the first three-dimensional model for global circulation. This model is significant as the first LES and as a precursor of the modern codes used to predict climate change. The LES subgrid model is really nothing more than Richtmyer’s (and von Neumann’s) artificial viscosity and is used to stabilize the calculation against instability that invariably creeps in with enough simulation time. The suggestion to do this was made by Jule Charney upon seeing early weather simulations. The significance of the first useful numerical method for capturing shock waves and the first practical approach to computing turbulence being one and the same is rarely commented upon. I believe this connection is important and profound. Equally valid arguments can be made that the form of nonlinear dissipation is fated by the dimensional form of the governing equations and the resulting dimensional analysis.

Before I derive a general form for the nonlinear hyperviscosity, I should discuss a little bit about another shortcoming of the linear hyperviscosity. In its simplest form the operator for classical linear viscosity produces a positive-definite operator. Its application in a numerical solution will keep positive quantities positive. This is actually a form of strong nonlinear stability. The solutions will satisfy discrete forms of the second law of thermodynamics, and provide so-called “entropy solutions”. In other words the solutions are guaranteed to be physically relevant.

This isn’t generally considered important for viscosity, but in the context of more complex systems of equations it may have importance. The reason to bring this up is that, generally speaking, linear hyperviscosity will not have this property, but we can build nonlinear hyperviscosity that will preserve it. At some level this probably explains the utility of nonlinear hyperviscosity for shock capturing. In nonlinear hyperviscosity we have immense freedom in designing the viscosity as long as we keep it positive. We then have a positive viscosity multiplying a positive-definite operator, and this provides the deep form of stability we want along with a connection that guarantees physically relevant solutions.

With the basic principles in hand we can go wild and derive forms for the hyperviscosity that are well-suited to whatever we are doing. If we have a method with high-order accuracy, we can derive a hyperviscosity to stabilize the method that will not intrude on the accuracy of the method. For example, let’s just say we have a fourth-order accurate method, so we want a viscosity with at least a fifth-order operator, \nu \ell^3 \left| \nabla u \nabla^2 u\right| \nabla^2 u . If one wanted better high-frequency damping a different form would work, like \nu \ell^3 \left| \nabla^3 u\right| \nabla^2 u . To finish the generalization of the idea consider that you have an eighth-order method; now a ninth- or tenth-order viscosity would work, for example, \nu \ell^8 \left( \nabla^2 u\right)^4 \nabla^2 u . The point is that one can exercise immense flexibility in deriving a useful method.
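As a small illustration, here is one of the forms above, \nu \ell^3 \left| \nabla^3 u\right| \nabla^2 u, evaluated on a one-dimensional grid in Python. The finite-difference stencils (repeated applications of np.gradient), the choice \ell = \Delta x, and the test field are my assumptions; a production code would use stencils matched to the underlying discretization.

import numpy as np

def hyperviscous_term(u, dx, nu=1.0):
    """Sketch of the nonlinear hyperviscosity nu * l^3 * |d^3u/dx^3| * d^2u/dx^2
    in one dimension with l = dx.  Stencils and nu = 1 are illustrative choices."""
    d1 = np.gradient(u, dx)
    d2 = np.gradient(d1, dx)
    d3 = np.gradient(d2, dx)
    return nu * dx**3 * np.abs(d3) * d2

# Usage: a smooth field plus a small amount of high-frequency content.
x = np.linspace(0.0, 2.0 * np.pi, 256)
u = np.sin(4.0 * x) + 0.05 * np.sin(60.0 * x)
dissipation = hyperviscous_term(u, x[1] - x[0])

On the smooth part of the field the \ell^3 factor makes the term tiny, so the formal accuracy of a fourth-order method is left alone, while the high-frequency content produces a large third derivative and gets damped.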

I’ll finish by making a brief observation about how to apply these ideas to systems of conservation laws, \partial_t{\bf U} + \partial_x {\bf F} \left( {\bf U} \right) = 0. This system of equations will have characteristic speeds, \lambda, determined by the eigen-analysis of the flux Jacobian, \partial_{\bf U} {\bf F} \left( {\bf U} \right). A reasonable way to think about hyperviscosity would be to write the nonlinear version as \nu \ell^q \left| \partial_x^p \lambda \right| \partial_{xx} {\bf U}, where p is the number of derivatives taken of the characteristic speed. A second approach, suited to Godunov-type methods, would compute the absolute value of the jump in the characteristic speeds at the cell interfaces where the Riemann problem is solved and use it to set the magnitude of the viscous coefficient. This jump is of the order of the approximation error, and would multiply the cell-centered jump in the variables, {\bf U}. This would guarantee proper entropy production through a hyperviscous flux that augments the flux computed via the Riemann solver. The hyperviscosity would not impact the formal accuracy of the method.
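A minimal sketch of the second (Godunov-flavored) approach in Python follows. The function name, the use of a single characteristic speed, and the coefficient are all hypothetical; the only content taken from the paragraph above is the structure: the dissipative flux scales with the interface jump in a characteristic speed times the jump in the conserved variables, and it is added to whatever flux the Riemann solver returns.

import numpy as np

def hyperviscous_interface_flux(U_L, U_R, lam_L, lam_R, coeff=0.5):
    """Dissipative flux at one interface, ~ coeff * |lam_R - lam_L| * (U_R - U_L).
    U_L, U_R are the left/right conserved-variable states (arrays); lam_L and
    lam_R are a characteristic speed evaluated from each state.  The sign
    convention assumes this is added to the interface flux of a finite-volume
    update alongside the Riemann-solver flux."""
    return -coeff * abs(lam_R - lam_L) * (U_R - U_L)

# Example with made-up states: a smooth interface (tiny jump in lambda) gets
# almost no extra dissipation, a strongly compressive one gets a finite amount.
U_L, U_R = np.array([1.0, 0.8, 2.5]), np.array([0.9, 0.7, 2.3])
print(hyperviscous_interface_flux(U_L, U_R, lam_L=1.20, lam_R=1.18))
print(hyperviscous_interface_flux(U_L, U_R, lam_L=1.60, lam_R=0.40))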

We can not solve our problems with the same level of thinking that created them

― Albert Einstein

I spent the last two posts railing against the way science works today and its rather dismal reflection in my professional life. I’m taking a week off from that. It wasn’t that last week was any better; it was actually worse. The rot in the world of science is deep, but the rot is simply part of the larger world of which science is a part. Events last week were even more appalling and pregnant with concerns. Maybe if I can turn away and focus on something positive, it might be better, or simply more tolerable. Soon I have a trip to Washington and into the proverbial belly of the beast; it should be entertaining at the very least.

Till next Friday, keep all your singularities regularized.

Think before you speak. Read before you think.

― Fran Lebowitz

Von Neumann, John, and Robert D. Richtmyer. “A Method for the Numerical Calculation of Hydrodynamic Shocks.” Journal of Applied Physics 21.3 (1950): 232-237.

Borue, Vadim, and Steven A. Orszag. “Local Energy Flux and Subgrid-Scale Statistics in Three-Dimensional Turbulence.” Journal of Fluid Mechanics 366 (1998): 1-31.

Cook, Andrew W., and William H. Cabot. “Hyperviscosity for Shock-Turbulence Interactions.” Journal of Computational Physics 203.2 (2005): 379-385.

Smagorinsky, Joseph. “General Circulation Experiments with the Primitive Equations: I. The Basic Experiment.” Monthly Weather Review 91.3 (1963): 99-164.

 

Balance must be restored

18 Friday Mar 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

The greatest danger of a terrorist’s bomb is in the explosion of stupidity that it provokes.

― Octave Mirbeau

Sometimes the blog is just an open version of a personal journal. I feel myself torn between wanting to write about some thoroughly nerdy topic that holds my intellectual interest (like hyperviscosity, for example), but end up ranting about some aspect of my professional life (like last week). I genuinely felt like the rant from last week would be followed this week by a technical post because things would be better. Was I ever wrong! This week is even more appalling! I’m getting to see the rollout of the new national program reaching for Exascale computers. As deeply problematic as the current NNSA program might be, it is a paragon of technical virtue compared with the broader DOE program. It’s as if we already had a President Trump in the White House to lead our Nation over the brink toward chaos, stupidity and making everything an episode in the World’s scariest reality show. Electing Trump would just make the stupidity obvious; make no mistake, we are already stupid.

The fantastic advances in the field of electronic communication constitute a greater danger to the privacy of the individual.

― Earl Warren

A while back I talked about the impending conflict brewing in society and the threat of its explosion in the coming year. I fear this is coming to pass. The multiple events of the American presidential election, the refugee crisis, and terrorism are all playing out in a maelstrom of societal upheaval, stoking flames into a conflagration. The really clear theme is a general lack of trust and faith in the establishment. Science is suffering greatly from this problem. Expertise is viewed with suspicion and generally associated with elitism. This is manifested in the actions of our government and legislatures, but authorized by the public. Instead of viewing expert judgment as something to be respected and trusted, it is viewed as being biased and self-serving. The governance of science is being crippled by these attitudes, and the quality of our science programs and labs is being destroyed in the process.

I ponder the imprint of all of this in the events that unfold at work. Why is work so completely unremarkable and dull? Why are the things we are supposed to work on so utterly lacking in inspiration, thought and rational basis? Why are workplaces becoming so completely antithetical to progress, empowerment and satisfaction? How does the combination of the pervasive Internet and our reality show politics reflect all of these trends?

Then I think about the public face of the World today. Why are hatred, fear and racism so openly present in public life? Is violence becoming more commonplace? Has the Internet been a positive or a negative force? Are we freer than in the past, or placed in less visible shackles? Why is more information available than ever before, yet society has never seemed more at the mercy of the uninformed?

How are these two worlds connected? Is there a common thread to be explored and understood?

I think there are parallels that are worth discussing in depth. Something big is happening and right now it looks like a great unraveling. People are choosing sides and the outcome will determine the future course of our World. On one side we have the forces of conservatism, which want to preserve the status quo through the application of fear to control the populace. This allows control, lack of initiative, deprivation and a herd mentality. The prime directive for the forces on the right is the maintenance of the existing structures of power in society. The forces of liberalization and progress are arrayed on the other side, wanting freedom, personal meaning, individuality, diversity, and new societal structures. These forces are colliding on many fronts and the outcome is starting to be violent. The outcome still hangs in the balance.

The Internet is the first thing that humanity has built that humanity doesn’t understand, the largest experiment in anarchy that we have ever had.

― Eric Schmidt

Society is greatly out of balance and eventually balance must be restored. This lack of balance is extreme enough to assure it will result in conflict and probably violence. We can see how close to violence parts of the political climate are today. It will get worse before it gets better. How it plays out and who wins is still not determined. I favor the left, but the right probably has the advantage for now. The right controls the levers of power and dominates resources, be they weaponry, money or influence. The left lacks an element that unifies progress aside from the vast degree of inequality that has arisen, and the connective power of the “Internet”. Insofar as the Internet and connectivity are concerned, the impact cuts both ways and may favor the right’s establishment cause of preserving the status quo. The right has the resources to harness the Internet to further their cause. The elements at play are worth laying out because of how they affect everyone’s life.

The internet was supposed to liberate knowledge, but in fact it buried it, first under a vast sewer of ignorance, laziness, bigotry, superstition and filth and then beneath the cloak of political surveillance. Now…cyberspace exists exclusively to promote commerce, gossip and pornography. And of course to hunt down sedition. Only paper is safe. Books are the key. A book cannot be accessed from afar, you have to hold it, you have to read it.

― Ben Elton

The Internet is a great liberalizing force, but it also provides a huge amplifier for ignorance, propaganda and the instigation of violence. It is simply a technology and it is not intrinsically good or bad. On the one hand the Internet allows people to connect in ways that were unimaginable mere years ago. It allows access to incredible levels of information. The same thing creates immense fear in society because new social structures are emerging. Some of these structures are criminal or terrorist, some are dissidents against the establishment, and some are viewed as immoral. The availability of information for the general public becomes an overload. This works in favor of the establishment, which benefits from propaganda and ignorance. The result is a distinct tension between knowledge and ignorance, freedom and tyranny, hinging upon fear and security. I can’t see who is winning, but the signs are not good.

Withholding information is the essence of tyranny. Control of the flow of information is the tool of the dictatorship.

― Bruce Coville

Encryption is another pregnant topic. On the one hand it allows elements that the establishment does not like to communicate and exist in privacy. Some of these elements are criminals or terrorists, and some are political dissidents or other social deviants. Encryption has some degree of equivalence to freedom in a digital World. I feel that the establishment is not trustworthy enough to have the keys to it. Can we really be both completely safe and free? The issue is that we can never be either, and the attempt to be completely safe will enslave us. Any attempt to be completely free endangers us as well. The trick is the balance of the two extremes. I choose freedom as the greater good, but clearly many, if not most, choose safety as the priority. The line between safety and tyranny is thin, and our freedom may be sacrificed in favor of safety and security.

How do you defeat terrorism? Don’t be terrorized.

― Salman Rushdie

Terrorism is another huge problem that crystallizes the issues of freedom, safety and security. It is used to frighten and enslave populations. Terrorism has successfully harnessed the will of the American population to support further profit taking by the wealthy. In fact, the key to curing terrorism is brutally simple. It is so simple and yet hard; the cure is to not be terrorized. Our fear of terrorists is their greatest force, and it amplifies the damage done by actual terrorist acts by orders of magnitude. If we refuse to be terrorized, terrorists lose all their power. The problem is that terrorists are used by the establishment to frighten, control and corral the population to do the establishment’s bidding. It is an incredibly powerful political tool to mobilize the population to support tyranny. It drives the desire to have strong, protective and violent governance. It encourages a populace to consume itself with fear and hatred. It has led us to consider Trump as a viable candidate for President.

Our media, including the Internet, does an immense amount of exaggeration of the risks in the World, and amplifies the impact of those risks on society. As one of my Facebook friends likes to say, “we are terrorism’s greatest force multiplier”. The risk from terrorism is vastly less than our actions would indicate. The deluge of information is making terrorism seem commonplace while the reality is how utterly uncommon and rare it is. For the media, terrorism is a great source of customer attention and a source of money. For politicians on the right, terrorism is a way of channeling the ignorance and hatred in society to their side. For the interests of the wealthy, terrorism is a great source of money for defense and intelligence industries to line their pockets with taxpayer money. All of these actions help along society’s unraveling by opposing the forces of progress and liberalization and strengthening the power of the establishment, whether it is industry, police or military.

Abandoning open society for fear of terrorism is the only way to be defeated by it.

— Edward Snowden

We need to strike a balance that allows freedom and progress to continue. Too many in the public do not realize that fear and security concerns are being used to enslave them. The politics of fear and hatred are the tool of the rich and powerful. They are driving maintenance of the status quo that hurts the general population and only benefits those already in power. It is continuing to drive an imbalance that can only end up with societal conflict. The larger the imbalance grows and the longer it goes unchecked, the greater the resulting conflict. If things don’t blow up this year, the blowup will only grow in severity.

You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time.

― Abraham Lincoln

 

 

 

 

How more management becomes less leadership

11 Friday Mar 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Action expresses priorities.

― Mahatma Gandhi

There is nothing so useless as doing efficiently that which should not be done at all.

― Peter F. Drucker

I know that I’ve written on this general topic before, but it keeps coming up as one of the biggest issues in my work life. We are getting more and more management while less and less leadership is evident. I know the two things shouldn’t be mutually exclusive, but in practice they seemingly are. With each passing year we get more and more management assurance and more measurement of compliance, all the while our true performance slips. We are “managed” in the modern sense of the word better than ever, yet our science and research is a mere shadow of its former glory. Perhaps this is the outcome society desires, even if only implicitly, where a lack of problems and readily identifiable fuck ups is valued far more than accomplishments. A complete lack of leadership nationally that values accomplishment certainly shares part of the collective blame.

The core of the issue is an unhealthy relationship to risk, fear and failure. Our management is focused upon controlling risk, fear of bad things, and outright avoidance of failure. The result is an implemented culture of caution and compliance manifesting itself as a gulf in leadership. The management becomes about budgets and money while losing sight completely of purpose and direction. The focus on leading ourselves in new directions gets lost completely. The ability to take risks gets destroyed because of fear, outright fear of failure. People are so completely wrapped up in trying to avoid ever fucking up that all the energy behind doing progressive, forward-moving things is completely sapped. We are so tied to compliance that plans must be followed even when they make no sense at all.

Any imperative revolving around progress and overall technical quality has absolutely no gravity in this environment. The drive to be managed well simply overwhelms us. Of course “managed well” means that nothing identifiable as a fuck up happens; it almost never means doing something great, wonderful or revolutionary. Accomplishment is limited to safe, incremental things that couldn’t possibly go wrong. Part of the issue is our adoption of modern management principles, which put a massive emphasis on the short term. To be clear, modern business management is obsessively short-term focused. This short-term focus is completely contrary to progress, quality and imagination. These impacts are felt deeply in the private sector and manifest themselves profoundly in the public sector where I work. One of the key issues is the structural aspect of modern management practice. We are too obsessed with following our management plans to completion as opposed to being flexible and adaptive.

We put management practices that are intrusively damaging on a virtual pedestal. A prime example is the quarterly progress obsession. Business is massively damaged by the short-term focus embodied by demands for unwavering quarterly profits. The same idea manifests itself more broadly in public sector management to a deeply distressing degree. The entire mentality is undermining the long-term quality of our scientific base nationally and internationally. We are unwilling to change directions even when it makes the best sense and the change is based on a rational analysis of lessons learned and produces the best outcomes.

All of it produces a lack of the energy and focus necessary for leadership. We do not exercise the art of saying NO. We are managed to a very high degree; we are led to a very small degree. Our managers are human and limited in capacity for complexity and in time available to provide focus. If all of the focus is applied to management, nothing is left for leadership. The impact is clear: the system is full of management assurance, compliance and surety, yet virtually absent of vision and inspiration. We are bereft of aspirational perspectives with clear goals looking forward. The management focus breeds an incremental approach that grounds future vision too concretely on what is possible today. All of this is brewing in a sea of risk aversion and intolerance for failure.

Start with the end in mind.

― Stephen R. Covey

The focus of our management is not performance of our jobs in the accomplishment of our missions, science or engineering. The focus of our management is to keep fuck ups to a minimum. If someone fucks up, they are generally thrown to the wolves, or the fuck up is rebranded as a glorious success. This increasingly means that our management, insofar as the actual work is concerned, contributes to the systematic generation and encouragement of bullshit. The best managers can bullshit their way out of a fuckup and spin it into a glorious success.

This is incredibly corrosive to the overall quality of the institutions that I work for. It has resulted in the wholesale divestiture of quality because quality no longer matters to success. It is creating a thoroughly awful and untenable situation where truth and reality are completely detached from how we operate. Every time that something of low quality is allowed to be characterized as being high quality, we undermine our culture. The capability to make real progress is completely undermined because progress is extremely difficult and prone to failure and setbacks. It is much easier to simply incrementally move along doing what we are already doing. We know that will work, and frankly those managing us don’t know the difference anyway. Doing what we are already doing is simply the status quo and orthogonal to progress.

Things which matter most must never be at the mercy of things which matter least.

― Johann Wolfgang von Goethe

Managing and leading are different, but strongly related. We need both in the right measure and they shouldn’t be exclusive, but time and energy are limited. Today we have too much management and virtually no leadership because the emphasis is on managing a whole bunch of risks and fears. We are creating systems that try to push away the possibility of any number of identified bad things. We soak up every minute of time and every ounce of available effort in this endeavor, leaving nothing left. Leadership and the actual practice of good personnel management are left without any time or energy to be practiced. The result is a gulf in both areas that becomes increasingly evident with each passing day.

Most of us spend too much time on what is urgent and not enough time on what is important.

― Stephen R. Covey

Leadership and the positive qualities of management do not stop or control all the bad things directly. Leadership and management impact these things in a soft and indirect way. Rather than step away from the overly prescriptive and failed approach of controlling every little thing that might go wrong, we continue down the path of micromanagement. Each step in micromanagement produces another tax on the time and energy of everyone impacted by the system and diminishes the good that can be done. In essence we are draining our system of energy for creating positive outcomes. The management system is unremittingly negative in its focus, trying to stop stuff from happening rather than enable stuff. It is ultimately a losing battle, which is gutting our ability to produce great things.

Producing great things serves the National interest in the best way. Not producing great things, and calling things that are not great, great, undermines the National interest. Today we are doing exactly this and letting ourselves off the hook. We have made management of risks and failure the focus of our energy. We have sidelined leadership by fiat, allowed mediocrity to creep into our psyche, and let progress and quality drift. Embracing quality, progress and risk, and allowing failure in service of greater achievement, can make change happen in ways that matter.

What’s measured improves

― Peter F. Drucker

The issue isn’t that most of the management work shouldn’t be done in the abstract. Almost all of the management stuff is a good idea and “good”. It is bad in the sense of what it displaces from the sorts of efforts we have the time and energy to engage in. We all have limits in terms of what we can reasonably achieve. If we spend our energy on good but low value activities, we do not have the energy to focus on difficult high value activities. A lot of these management activities are good, easy, and time consuming, and directly displace lots of hard, high value work. The core of our problems is the inability to focus sufficient energy on hard things. Without focus the hard things simply don’t get done. This is where we are today, consumed by easy low value things, and lacking the energy and focus to do anything truly great.

I think there needs to be a meeting to set an agenda for more meetings about meetings.

― Jonah Goldberg

Examples of this abound in the day-to-day life of Lab employees. If you are a manager at one of the Labs, your days are choked with low value work. A very large amount of this low value work seems like the application of due diligence and responsibility. I think a more rational view is to see these activities through the lens of micromanagement. Our practices lead to our micromanaging people’s time, work and budgets so as to absorb all the available time. This effectively leaves no time or effort available for people’s judgment. These steps also act to effectively remove the staff’s ability to act as independent professionals. We are transitioning our staff from an active, independent community of world-class scientists to a disconnected collection of hourly employees.

Your behavior reflects your actual purposes.

― Ronald A. Heifetz

Perhaps the core issue is a general ambiguity regarding the purpose of our Labs, the goals of our science and the importance of the work. None of this is clear. It is the generic implication of the lack of leadership within the specific context of our Labs, or federally supported science. It is probably a direct result of a broader and deeper vacuum of leadership nationally infecting all areas of endeavor. We have no visionary or aspirational goals as a society either.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

 

Entropy, vanishing viscosity, physically relevant solutions and ink

04 Friday Mar 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

You should never be surprised by or feel the need to explain why any physical system is in a high entropy state.

― Brian Greene

For those of you who know me, it’s neither a secret, nor a widely known fact, that I’ve gotten some tattoos recently. They aren’t the usual dreck most dudes get (like those tribal ones), but meaningful things to me. Now I have five in total; four of them are science related. One of the things that I wanted was an equation (yeah, I’m a total nerd). The question is what equation do I believe in enough to get permanently inscribed on my skin? A “common” choice for a science tattoo is Maxwell’s equations, and a friend of mine has the Euler equations on his arm from his PhD thesis. This post is about the equation I chose to care enough about to go through with it.

I’ll write the equation in TeX and show all of you a picture; you can make out a little of my other ink too, a lithium-7 atom and a Rayleigh-Taylor instability (I also have my favorite dog’s paw on my right shoulder and the famous von Kármán vortex street on my forearm). The equation is how I think about the second law of thermodynamics in operation through the application of a vanishing viscosity principle tying the solution of equations to a concept of physical admissibility. In other words I believe in entropy and its power to guide us to solutions that matter in the physical world.

The all-knowing yesterday is obsolete today.

― Jarod Kintz

This is in contrast to much of the mathematical world that often cares about equations that are beautiful, but mean nothing in reality. A lot of the tension is related to following the beauty of Newtonian determinism and its centrality to continuum mathematical physics, versus the need to embrace the stochastic nature of the real world. The real world is random and flows along time’s arrow, and needs to embrace entropy and uncertainty. Our education and foundational knowledge of the physical world is based on Newton’s simplified view of things (a simplified view that revolutionized science, if not mankind’s understanding and mastery over nature). Newton’s principles can only take us so far, and we are probably reaching the end of their grasp. It is past time to push forward toward incorporating new principles into our model of reality.

Here is the equation in all its glorious mathematical statement: \frac{\partial U}{\partial t} + \nabla \cdot F(U) = \nabla \cdot \left( \nu \nabla U \right), taking the limit as \nu \rightarrow 0^+, which implies \frac{d S}{d t} \ge 0. The equation is the time rate of change of a variable determined by a flux balance and a diffusive term, where the limit of the diffusion is taken to zero, implying the satisfaction of the second law of thermodynamics. OK, but WTF is it all about? In words, the equation is a hyperbolic conservation law with a diffusive right hand side where the coefficient of viscosity goes to a limit of zero. In this limit we find solutions that are physically admissible, that is, ones that could exist in the real World. These solutions satisfy the second law of thermodynamics, which implies that entropy or disorder monotonically increases in time. The second law can be viewed as the thing that gives time a direction (time’s arrow!); without the increase of entropy, time can flow equally well forward or backward, that is, it is symmetric. We know time flows forward in the real world we all live in, so we want that (or at least I want that, and believe you should too).
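The vanishing viscosity limit can be seen concretely in Burgers’ equation, u_t + u u_x = \nu u_{xx}, which has an exact traveling-wave solution connecting a left state u_L to a right state u_R. Here is a minimal Python sketch (the states, grid, and the crude width measure are illustrative assumptions) showing that the viscous transition layer shrinks in proportion to \nu, so the \nu \rightarrow 0^+ limit is precisely the physically admissible shock:

import numpy as np

# Exact traveling-wave (shock-profile) solution of viscous Burgers' equation:
#   u(x,t) = s - (u_L - u_R)/2 * tanh( (u_L - u_R) * (x - s*t) / (4*nu) ),
# with shock speed s = (u_L + u_R)/2.  States and grid are illustrative.
u_L, u_R = 1.0, 0.0
s = 0.5 * (u_L + u_R)
x = np.linspace(-0.5, 0.5, 1001)

def viscous_profile(nu, t=0.0):
    return s - 0.5 * (u_L - u_R) * np.tanh((u_L - u_R) * (x - s * t) / (4.0 * nu))

# The width of the transition layer scales like nu; as nu -> 0+ the profile
# steepens into the discontinuous entropy (physically admissible) solution.
for nu in (1.0e-1, 1.0e-2, 1.0e-3):
    layer = x[np.abs(viscous_profile(nu) - s) < 0.4 * (u_L - u_R)]
    print(f"nu = {nu:7.1e}   transition width ~ {np.ptp(layer):.4f}")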

A lot of effort is spent studying the equations of inviscid flow, flow without dissipative forces, most commonly the Euler equations. This form of equation is studied a lot because it is so pure. One can really get some awesomely beautiful mathematics out of it. Commonly the math leads to some great structure by studying these systems through their Hamiltonian and its evolution. Unfortunately, this endeavor, while beautiful and hard, has no physical merit whatsoever. No physical system really adheres to this Hamiltonian structure (except perhaps isolated systems of very small scale, and I really don’t give much of a shit about those). They are seductive, pretty and almost without any physical utility. I care about stuff that appears in nature.

The important thing for equations to represent is physical reality (unless you’re doing math for math’s sake). As Wigner pointed out, mathematics has an incredible, almost mystic capacity to model our reality; as he says, it is unreasonably effective. Exploiting this power should be a privilege we exercise whenever possible. In that vein the equations that connect to reality should be favored. In many cases inviscid equations are incredibly useful for modeling, but an important caveat should be exercised: the solutions to the inviscid equations that are favored are those associated with the presence of viscosity. These solutions are found through the application of an asymptotic principle, vanishing viscosity. The application of vanishing viscosity provides a route for these equations to satisfy the second law of thermodynamics, and its demands for increasing disorder.

These principles actually don’t go far enough in distinguishing themselves from the inviscid dynamics associated with Hamiltonian systems. Let me explain how they need to go even further. A couple of the most profound observations associated with fluid dynamics concern shock waves and turbulence, and they share a remarkable similarity (it might be argued that both are inevitable through dimensional similarity arguments!). For shock waves the amount of dissipation occurring via the passage of a shock is proportional to the size of the jump in the variables across the shock cubed (Bethe came up with this in 1942). For turbulence the amount of dissipation in a high Reynolds number flow is proportional to the size of the velocity variation cubed (Kolmogorov came up with this in 1941). Both relations are independent of the specific value of the molecular viscosity.
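Written schematically (the proportionality constants depend on the equation of state and the flow, so take these as scaling statements rather than precise formulas), the shock-wave result says the entropy production across a weak shock scales as \Delta S \propto \left( \Delta u \right)^3, while the Kolmogorov result says the mean turbulent dissipation rate scales as \varepsilon \sim \left( \Delta u \right)^3 / \ell, where \Delta u is the large-scale variation and \ell the large-scale length. The molecular viscosity \nu appears in neither expression, which is exactly the point being made.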

What people resist is not change per se, but loss.

― Ronald A. Heifetz

These relations are profound in their implications, which are not intuitively obvious upon first seeing them. The dissipation rate being independent of the value of viscosity means that the flow contains something that approaches a singularity. These singularities are called shock waves in compressible flow and have no name at all in turbulence because we don’t know what they are. These singularities are the mystery of turbulence and they are surely as ephemeral as they are important. In other words we don’t see the turbulent singularities like we see shocks, but they must be there. Moreover the supposed equations of turbulence, the incompressible Euler equations, don’t appear to contain obvious singularity-producing features. This whole issue has produced an utterly stagnant scientific endeavor of immense practical importance.

Of course what is usually not covered is the horribly degenerate and unphysical nature of the incompressible Euler or Navier-Stokes equations. The key term is incompressible, which is intrinsically unphysical and removes sound waves from the system by making their propagation speed infinite. What if these sound waves, which are always present, contain the essence of what drives dissipation in turbulent systems? Incompressibility also removes thermodynamics from the equations and can only be derived from the compressible Navier-Stokes equations by considering the flow to be adiabatic. Turbulence in its essential character is non-adiabatic and intrinsically dissipative. Anyone see the problem(s)? Perhaps it’s time to start considering that the lack of progress in turbulence is exposing fundamental flaws in our modeling paradigm.

I would posit that we are trying to solve this monumentally important problem with a set of equations that we have systematically crippled. These equations were posed during an era where the fundamental issues discussed above were not well known. There really isn’t an excuse today. Could our lack of progress with turbulence be completely related to focusing on the wrong set of equations (yes!)?

Let’s dig just a bit deeper into the philosophical implications of Bethe’s and Kolmogorov’s relations for the dissipation of energy. Both of these relations also imply a satisfaction of the second law of thermodynamics by these systems. The limiting value for the satisfaction of the second law is not simply the inequality at zero, but rather an inequality at a finite value of dissipation. This finite value of dissipation is directly related to the large-scale flow structure and quantitatively proportional to the cube of the variations in the inertial range. Thus, the limit of zero dissipation is not physically relevant; the limit is a finite amount of dissipation set by the large scales of the system of interest. This deepens the implications associated with any study of completely dissipation-free dynamics being utterly unphysical. The dissipation-free system is separated from the real world by a finite and non-vanishing distance.

This feature of the physical world should be reflected in how we numerically model things (this is my philosophical point of view). It gets to the core of why I chose the equation to ink on my skin. A lot of numerical work is focused on trying to remove every single bit of dissipation from the method while maintaining stability. This mantra is tied to the belief that dissipation is bad and one is fundamentally interested in the numerical solution to the dissipation-free Euler equations. I believe this is utterly foolhardy and unproductive. The dissipation-free Euler equations are close to useless. The key dissipative relations I’ve introduced above tell us that the dissipation is never zero, but rather non-zero in a very specific way that is irreducible.

Some would argue that this non-zero dissipation should be the target of modeling, and the numerical methods should be pure and not intrude into the modeling. I believe that this perspective is laudable, foolish and unworkable practically. I favor more holistic approaches that combine modeling and numerical methods into a seamless package. This approach works wonderfully well in numerical methods for shocks, and produces a set of methods that revolutionized the field of computational fluid dynamics (CFD). I believe the grasp of these methods is far greater and extends into turbulence through implicit large eddy simulation (ILES). ILES implies strongly that the turbulence modeling is strongly addressed by techniques that practically solve compressible flows in the vanishing viscosity limit.

Generally for turbulence this approach has not been taken and the reason is clear to see. Turbulence modeling remains to this day a dominantly empirical activity. The core reason for this is the comment above about not knowing what the dissipative structures are in turbulence. In compressible flows we know that shock waves are the thing to focus on and where the invariant dissipation occurs. Shocks are the hard thing to compute numerically, and we know what to do. For turbulence the same thing cannot happen, and the result is empirical modeling without targeted numerical methods. What remains is a philosophy that drives numerical methods to be innocuous, and allows the modeling to hold sway. The problem is that the modeling is blind to what the real physics is doing and to the precise mechanisms that connect the large-scale flow to the dissipation of energy.

Nothing limits you like not knowing your limitations.

― Tom Hayes

As far as the tattoos are concerned, I haven’t decided yet if I’m getting more ink or not (it’s probably a yes). Maybe I’ll keep to the theme of science on the left side of my body and personal meaning on the right side of my body. Ideas are hatching, and I need to mind my tendency towards obsessive-compulsive behavior.

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

— Sir Arthur Stanley Eddington

Wigner, Eugene P. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11, 1959.” Communications on Pure and Applied Mathematics 13.1 (1960): 1-14.

Bethe, H. A. “On the Theory of Shock Waves for an Arbitrary Equation of State.” Classic Papers in Shock Compression Science. Springer New York, 1998. 421-495.

Kolmogorov, Andrey Nikolaevich. “Dissipation of Energy in Locally Isotropic Turbulence.” Akademiia Nauk SSSR Doklady. Vol. 32. 1941.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

 

Play is essential to happiness, creativity and productivity

26 Friday Feb 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Combinatory play seems to be the essential feature in productive thought.

― Albert Einstein

My wife and I take part in a discussion group twice a month at our church. We get an innocuous sounding word to focus upon and set about answering deep questions about it. Everyone gets a chance to speak without interruption and everyone else focuses on listening. It’s hard (I’m really bad at listening), and it’s rewarding. Last week the word was “play”. In talking about what the concept of play means to me, first in the context of childhood then adulthood, I had several epiphanies about the health and vitality of our current society and workplaces. Basically, the concept of play is under siege by forces that find it too frivolous to be supported. Societally we have destroyed play as a free wheeling unstructured activity for children, and crushed the freedom to play at work under the banner of accountability. We are poorer and more unhappy as a result, and it is yet another manifestation of unremitting fear governing our behaviors.

We are never more fully alive, more completely ourselves, or more deeply engrossed in anything, than when we are at play.

― Charles E. Schaefer

The greatest realization in the dialog came when I took note of how I used to play at work and all the good that came from it. The times when I have been the most productive, creative and happy with work have all been associated with being allowed to play at work. By play I mean being allowed to experiment, test, and create new ideas in an environment allowing for failure and risk (essentially by placing very few constraints and limitations on what I was doing). The key was the creation of and commitment to very high level goals and the freedom to pursue these goals in a relatively free way. The key is the pursuit of the broad objectives using methods that are not strongly prescribed a priori.

Work and play is the same thing just with a different perspective.

― Debasish Mridha

When I was a child, I had immense freedom to play. I would ride bikes around the neighborhood and play at the creek. My parents had a general idea where I was, but not specifically. This level of independence and freedom is almost impossible to imagine today. Children have scheduled and scripted lives where parents know their precise location at almost any time. Instead of learning to manage their lives with a high degree of independence, we teach our children to always be in control. Most of what is being controlled is a set of highly improbable risks that should not warrant such a high degree of control. We are subverting so much of the positive influence that comes from independence to control exotic and tiny probabilities. Societally, the overall impact is counter-productive and hurts us far more than protects us. The treatment of our children is good training for their lives as adults.

The same basic dynamic is working in the adult world of work. We spend an immense amount of time and effort controlling a host of miniscule risks and dangers. The feeling for children and adults alike is that controlling potential bad outcomes is worth the effort. People say things like “if we can prevent just one needless death…” which sounds compelling, but is stupid and inane. Bad things happen all the time, bad things are supposed to happen and the amount of effort spent preventing them is immense. How many lives worth of effort are spent to prevent a single death? No one ever asks if the steps being taken actually have an overall balanced positive effect pro and con.

You cannot build character and courage by taking away people’s initiative and independence

― Abraham Lincoln


The TSA and airline screening is a good example. What is the cost, in lives’ worth of time wasted going through their idiotic screening procedures, of the problems prevented? We also appear not to be able to control our reaction to bad things. A terrorist act unleashes an avalanche of reaction that magnifies any harm the terrorists intend by orders of magnitude. Yet we continue to act the same without any realization that our fear is in fact the greatest weapon the terrorists possess. Terrorism is quite effective because the public is afraid and the societal response to terror will assist the aims of the terrorists. We have given up an incredible amount of resources, freedom and independence to protect ourselves from minuscule threats. There is a lot of evidence that we will continue to empower terrorists through our fearful responses.

Of course these trends are not solely limited to our response to terrorism. Terrorism simply amplifies the generic response of society. These trends in response occur in a variety of settings and drive short-term, low-risk behavior almost across the board. We typically encourage adults to focus on very short-term goals and take very few risks in working. The result is a loss of long-term goals and objectives in almost all settings in work. In addition the goals and objectives that do exist almost always entail little or no risk. The impact of the environment we have created is a systematic undermining of achievement, innovation and creativity in work. One way to capture this outcome is the recognition that play is not encouraged; it is actively discouraged.

We are being overwhelmed in the workplace, in the schoolroom and in every aspect of life with the concept of accountability. Accountability is one of those things that sounds uniformly good, and no one can argue that it’s bad. Unfortunately I have come to the conclusion that the form of accountability we are subjecting ourselves to is damaging and destructive. Accountability is used to control people and their activities. It is used to make sure people are doing what they are supposed to be doing. These days we are supposed to be doing what we are told to do. We are not supposed to be creative or innovative and do something that is unpredictable. Accountability is the box we are all being put in, which limits what we can do.

We end up working extremely hard across everything in society to make sure that bad things don’t ever happen. We put all sorts of measures in place to prevent bad things. We don’t seem to have the capacity to realize that bad things just happen and it’s a fact of life. We spend so much effort trying to manage all the risks that life is just passing us by. This manifests itself in the destructive belief that the government’s job is to protect all of us from bad things (like terrorism). We are willing to give up freedom, accomplishment and productivity to assure a slight increase in safety. Often the risks we are sacrificing so much to diminish are vanishingly small and trivial (like terrorism), yet we are making this trade over and over again. We are allowing ourselves to drown in a sea of safety measures against risks that are inconsequential. The aggregate cost of all of these risk control measures exceeds the value of almost any of the measures. It represents the true threat to our future.

In today’s world, we are in the box all the time, whether as children or as adults. Children’s playtime used to be unscripted and free more often than not. Today it is highly scripted and controlled. Uncontrolled children are viewed quite unfavorably by society as a whole. As adults the exact same thing is happening. Life and work are to be highly scripted and controlled. Anything off script or uncontrolled is considered to be dangerous and highly suspect. The desired result of this scripting and control is predictability and reliability without risk and failure. The other impact is less happiness and less creativity, less innovation and generally worse outcomes.

Another thread to this thought process is the avoidance of passion in my work. Increasingly I find that expressing any passion or commitment at work is viewed negatively. Work is being driven to be dispassionate and free of deep emotional connection. In the past, when play was a very deep part of my productive work life, I also felt great passion for what I did. That passion was tied to the entire way that I worked, and included commitments to quality and learning. More and more today such passion seems to bring nothing but condemnation and seems to be unwelcome. I don’t think that this is disconnected from the issue of play and its diminished role too.

Men do not quit playing because they grow old; they grow old because they quit playing.

― Oliver Wendell Holmes Sr.

Roads not taken

19 Friday Feb 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Our most significant opportunities will be found in times of greatest difficulty.

― Thomas S. Monson

In economic policy it is well known that monopolies are bad. They are bad for everyone except the people who own and control those monopolies (who invest a lot in retaining their power!). They are drags on growth, innovation and progress. They are the essence of the too-big-to-fail problem. In a very real sense the same thing is happening in science. We are being swallowed by monopolistic ideas. We are too invested in a variety of traditional solutions to problems (which solve traditional problems). Innovation, invention and progress are falling victim to this seemingly society-wide trend.

We are seeing this in both computers and the codes running on the computers. The seductive nature of these quite capable behemoths is holding sway over a future that offers so much more than we are getting. The moment of epiphany came to me a while ago during some strategic planning for shock physics at work. Basically we have no strategy at all. We have 25 and 30-year-old legacy codes that we continue to develop because they are the only platforms viable within our fragmented funding picture today. The way we fund and manage the work in science is undermining progress and innovation as surely as the sun rises in the East every day.

Looking at our soon to be, if not already, ancient codes based on ancient technology, I asked how often we built a new code in the old days. Sure as could be, the answer was radically different from today’s world: we built new codes every five to seven years. FIVE TO SEVEN YEARS!!!! Today we are shepherding codes that are at least a quarter of a century old, and nothing new is in sight. We just continue to accrete capability onto these old codes, horribly constrained by sets of decisions increasingly divorced from today’s reality, technology and problems. It is a recipe for failure, but not the good kind of failure, the kind of failure that crushes the future slowly and painlessly like the hardening of the arteries.

The deeper question is why we are functioning in this manner. I’d posit an initial answer: a tendency to be obsessively short-term focused in our goals. The stream of decisions leading to our current legacy codes is surely optimal on a per annum basis, just as it is surely suboptimal in the long run. The problem is that the long run has no constituency today. This stems from a rather fundamental societal lack of leadership and vision. We are too easily swayed by the arguments of optimal short-term thinking and unwilling to take risks or invest in long run success. We see this spirit manifested in our political, business and scientific communities.

Perhaps no greater emblem of our addiction to shortsightedness exists than our crumbling infrastructure. The roads, bridges, electrical grids, airports, sewers, water systems, power plants,… that our core economy depends upon are in horrible shape and no will exists to support them. We can’t even conjure up the vision to create the infrastructure for the new century, and we leave it to privatized interests that will never deliver it. We are setting ourselves up to be permanently behind the rest of the World. We have no pride as a nation, no leadership and no vision of anything different. We just have short-term narcissistic self-interest embodied by the low tax, low service mentality. The same dynamic is happening at work.

When you do what you fear most, then you can do anything.

― Stephen Richards

We want short term, sure payoff work without the sacrifice, risk and effort needed for any long term vision or leadership. That is exactly what we are getting. We are creating a shell of our former greatness. In terms of codes and the opportunity they provide for modeling and simulation, our reliance on legacy codes is deeply damaging. In days past we created new codes on a regular basis along with new modeling capability and philosophy. As a result our modeling approaches would step forward with each new code, along with providing a vehicle for innovation in methods, algorithms and computer science. We could try out new ideas for size without completely divesting from what came before. Without the new codes we are straitjacketed into old ideas and technology passes us by. The inability to replace our old codes, resulting in legacy codes, produces a massive cost in terms of lost opportunity.

How much I missed, simply because I was afraid of missing it.

― Paulo Coelho

The loss of opportunity is becoming increasingly unacceptable. We are producing a future that is shorn of possibilities that should be lying in front of us. Instead of vast possibilities energized by continual changes in our foundations, we have stale old codes, models, methods and algorithms that ill-serve our potential. The application of too-big-to-fail to our codes is creating a slow-motion failure of epic proportions. The basis for the failure is the loss of innovation and of the sense that we are creating the future. Instead we simply curate the past. Our best should be ahead of us, and any leadership worth its salt would demand that we work steadfastly to seize greatness. In modeling and simulation the creation of new codes should be an energizing factor, creating effective laboratories for innovation, invention and creativity and providing new avenues for progress.

I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.

― Stephen Jay Gould
