The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: March 2016

Hyperviscosity is a Useful and Important Computational Tool

24 Thursday Mar 2016

Posted by Bill Rider in Uncategorized

≈ 3 Comments

The conventional view serves to protect us from the painful job of thinking.

― John Kenneth Galbraith

I chose the name the “Regularized Singularity” because regularization is so important to the conduct of significant computational simulations. For real-world computations, the nonlinearity of the models dictates that the formation of a singularity is almost a foregone conclusion. To remain well behaved and physical, the singularity must be regularized, meaning the singular behavior is moderated into something computable. This is almost always accomplished through the application of a dissipative mechanism, which effectively imposes the second law of thermodynamics on the solution.

A useful, if not vital, tool is something called “hyperviscosity”. Taken broadly, hyperviscosity covers a spectrum of mathematical forms arising in numerical calculations, and I’ll elaborate on a number of the useful forms and options. Basically, a hyperviscosity is a viscous operator of higher differential order than regular viscosity. As a reminder, regular viscosity is a second-order differential operator directly proportional to a physical value of viscosity. Such viscosities are usually weakly nonlinear functions of the solution, depending on the intensive variables (like temperature and pressure) rather than on the structure of the solution. Hyperviscosity falls into two broad categories: the linear form and the nonlinear form.

Unlike most people I view numerical dissipation as a good thing and an absolute necessity. This doesn’t mean it should be wielded cavalierly or brutally, because misuse can and does give computations a bad name. Conventional wisdom dictates that dissipation should always be minimized, but this is wrong-headed. One of the key aspects of important physical systems is the finite amount of dissipation produced dynamically. The asymptotically correct solution with a small viscosity is not zero dissipation; it is a non-zero amount of dissipation arising from the proper large-scale dynamics. This knowledge is useful in guiding the construction of good numerical viscosities that enable us to efficiently compute solutions to important physical systems.

One of the really big ideas to grapple with is the utter futility of using computers to simply crush problems into submission. For most problems of any practical significance this will not be happening, ever. In terms of the physics of the problems, this is often the coward’s way out of the issue. In my view, if nature were going to submit to our mastery via computational power, it would have already happened. The next generation of computing won’t be doing the trick either. Progress depends on actually thinking about modeling. A more likely outcome will be the diversion of resources away from the sort of thinking that will allow progress to be made. Most systems do not depend on the intricate details of the problem anyway. The small-scale dynamics are universal and driven by the large scales. The trick to modeling these systems is to unveil the essence and core of the large-scale dynamics leading to what we observe.

Given that we aren’t going to be crushing our problems out of existence with raw computing power, hyperviscosity ends up being a handy tool to get more out of the computing we have. Viscosity depends upon having enough computational resolution to effectively allow it to dissipate energy from the computed system. If the computational mesh isn’t fine enough, the viscosity can’t stably remove the energy and the calculation blows up. This provides a very stringent limit on the resolution that can be computationally achieved.
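
To make this concrete, here is a minimal sketch (my own illustration, not from any production code) of why resolution limits the viscosity you can apply: an explicit step for u_t = \nu u_{xx} is stable only when \nu \Delta t / \Delta x^2 \le 1/2, and stepping just past that limit blows the calculation up.

```python
import numpy as np

def diffusion_step(u, nu, dx, dt):
    """One forward-Euler step of u_t = nu * u_xx on a periodic grid."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    return u + dt * nu * lap

def run(nu, dx, dt, steps=200):
    x = np.arange(0.0, 1.0, dx)
    u = np.sin(2.0 * np.pi * x)  # smooth initial data
    for _ in range(steps):
        u = diffusion_step(u, nu, dx, dt)
    return u

nu, dx = 0.1, 0.02
u_ok = run(nu, dx, dt=0.4 * dx ** 2 / nu)   # inside dt <= dx^2/(2 nu): decays
u_bad = run(nu, dx, dt=0.6 * dx ** 2 / nu)  # just outside the limit: blows up
print(np.abs(u_ok).max(), np.abs(u_bad).max())
```

Refining the mesh shrinks the allowable time step quadratically, which is exactly the stringent cost of applying viscosity at high resolution.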

The first form to consider is standard linear viscosity in its simplest form, a second-order differential operator, \nu \nabla^2 u. If we apply a Fourier representation u = \exp \left( \imath k {\bf x} \right) to the operator we can see how simple viscosity works: \nu \nabla^2 u = - \nu k^2 \exp\left( \imath k {\bf x}\right) (just substitute the Fourier description for the function into the operator). The viscosity grows in magnitude with the square of the wavenumber k. Only when the product of the viscosity and the wavenumber squared becomes large will the operator remove energy from the system effectively.
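
As a quick sanity check of the Fourier argument, the sketch below (using NumPy’s FFT, purely illustrative) verifies that the Laplacian’s symbol is exactly -k^2 on each Fourier mode:

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
# angular wavenumbers matching the FFT ordering
kvec = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi

def laplacian_spectral(u):
    """Second derivative computed in Fourier space: multiply by -k^2."""
    return np.fft.ifft(-(kvec ** 2) * np.fft.fft(u))

for k in (1, 4, 16):
    u = np.exp(1j * k * x)
    # the Laplacian returns the mode scaled by exactly -k^2
    assert np.allclose(laplacian_spectral(u), -(k ** 2) * u)
print("symbol of the Laplacian is -k^2 on every tested mode")
```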

Linear dissipative operators only come from even orders of the differential. Moving to a fourth-order bi-Laplacian operator it is easy to see how the hyperviscosity works: \nu \nabla^4 u = \nu k^4 \exp\left( \imath k {\bf x}\right), so the dissipative term must enter the equation with a minus sign, -\nu \nabla^4 u. The dissipation now kicks in faster (k^4) with the wavenumber, allowing the simulation to be stabilized at comparatively coarser resolution than the corresponding simulation stabilized only by a second-order viscous operator. As a result the simulation can attack more dynamic and energetic flows with the hyperviscosity. One detail is that the sign of the operator’s symbol changes with each step up the ladder; a sixth-order operator carries a negative symbol, -k^6, attacks the spectrum of the solution even faster, and so on.
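
A tiny numerical comparison makes the scale selectivity obvious; the numbers below are just the symbols \nu k^2 and \nu k^4 evaluated at a few wavenumbers (the value of \nu is arbitrary):

```python
import numpy as np

nu = 1.0e-4
k = np.array([1.0, 8.0, 64.0])   # low, middle, high wavenumbers
rate2 = nu * k ** 2              # ordinary viscosity: damping rate nu*k^2
rate4 = nu * k ** 4              # fourth-order hyperviscosity: rate nu*k^4
# damping at each k relative to the k=1 mode
print(rate2 / rate2[0])          # broad-band: the low modes also feel it
print(rate4 / rate4[0])          # scale-selective: concentrated at high k
```

Relative to the lowest mode, the hyperviscosity hits the highest wavenumber 64^2 times harder than ordinary viscosity does, which is why it can stabilize a calculation while barely touching the resolved scales.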

Taking the linear approach to hyperviscosity is simple, but has a number of drawbacks from a practical point of view. First, the linear hyperviscosity operator becomes quite broad in its extent as its order increases. The method is also still predicated on a relatively well-resolved numerical solution and does not react well to discontinuous solutions. As such, linear hyperviscosity is not entirely robust for general flows. It is better as an additional dissipation mechanism alongside more industrial-strength methods, and for studies of a distinctly research flavor. Fortunately there is a class of methods that removes most of these difficulties: nonlinear hyperviscosity. Nonlinear is almost always better, or so it seems; not easier, but better.

Linearity breeds contempt

– Peter Lax

The first nonlinear viscosity came about from Prandtl’s mixing length theory and still forms the foundation of most practical turbulence modeling today. For numerical work the original shock viscosity derived by Richtmyer is the simplest hyperviscosity possible, \nu \ell \left| \nabla u\right| \nabla^2 u. Here \ell is a relevant length scale for the viscosity; in purely numerical work, \ell = C \Delta x. It provides what linear hyperviscosity cannot, stability and robustness, taking flows that would be computed with pervasive instability and making them stable and practically useful. It provides the fundamental foundation for shock capturing and the ability to compute discontinuous flows on grids. In many respects the entire CFD field is grounded upon this method. The notable aspect of the method is the dependence of the dissipation on the product of the coefficient \nu and the absolute value of the gradient of the solution.
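
A minimal sketch of the idea, with an assumed coefficient C and a smooth tanh front standing in for a shock, shows how the |\nabla u| factor localizes the viscosity to the steep region:

```python
import numpy as np

def artificial_viscosity(u, dx, C=2.0):
    """Richtmyer-style coefficient C * dx * |du/dx| from face jumps.

    The coefficient C and this discretization are illustrative choices.
    """
    du = np.diff(u)                  # face-centered jumps u[i+1] - u[i]
    return C * dx * np.abs(du / dx)  # = C * |du|: large only at steep fronts

n = 100
x = np.linspace(-1.0, 1.0, n)
u = np.tanh(x / 0.02)                # a steep front standing in for a shock
nu_art = artificial_viscosity(u, x[1] - x[0])
print(nu_art.max())                  # O(1) at the front ...
print(nu_art[:10].max())             # ... essentially zero where u is flat
```

The viscosity turns itself on where the solution steepens and vanishes in smooth regions, which is exactly the self-regulating behavior that makes shock capturing work.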

Looking at the functional form of the artificial viscosity, one sees that it is very much like the Prandtl mixing length model of turbulence. The simplest model used for large eddy simulation (LES) is the Smagorinsky model, developed first by Joseph Smagorinsky and used in the first three-dimensional model for global circulation. This model is significant as the first LES and a precursor of the modern codes used to predict climate change. The LES subgrid model is really nothing more than Richtmyer (and Von Neumann’s) artificial viscosity and is used to stabilize the calculation against instability that invariably creeps in with enough simulation time; the suggestion to do this was made by Jule Charney upon seeing early weather simulations. The significance of the first useful numerical method for capturing shock waves and a method for computing turbulence being one and the same is rarely commented upon. I believe this connection is important and profound. Equally valid arguments can be made that the form of the nonlinear dissipation is fated by the dimensional form of the governing equations and the resulting dimensional analysis.
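
The parallel is easy to see in one dimension; the sketch below writes the 1-D skeleton of the Smagorinsky eddy viscosity, (C_s \Delta)^2 \left|\nabla u\right|, with the classical constant C_s = 0.17 assumed:

```python
import numpy as np

def smagorinsky_nu(u, dx, Cs=0.17):
    """1-D skeleton of the Smagorinsky eddy viscosity.

    nu_t = (Cs * dx)^2 * |du/dx| -- the same dimensional form as the
    artificial viscosity, with the mesh spacing as the mixing length.
    Cs = 0.17 is the classical constant (an assumed value here).
    """
    return (Cs * dx) ** 2 * np.abs(np.gradient(u, dx))

dx = 0.01
x = np.arange(0.0, 1.0, dx)
nu_t = smagorinsky_nu(np.sin(2.0 * np.pi * x), dx)
print(nu_t.max())  # peaks where the strain |du/dx| = 2*pi is largest
```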

Before I derive a general form for the nonlinear hyperviscosity, I should discuss another shortcoming of the linear hyperviscosity. In its simplest form the operator for classical linear viscosity is positive-definite. Applied in a numerical solution it will keep positive quantities positive. This is actually a form of strong nonlinear stability. The solutions will satisfy discrete forms of the second law of thermodynamics and provide so-called “entropy solutions”. In other words the solutions are guaranteed to be physically relevant.

This isn’t generally considered important for viscosity, but in the context of more complex systems of equations it may have importance. The key reason to bring this up is that, generally speaking, linear hyperviscosity will not have this property, but we can build nonlinear hyperviscosity that will preserve it. At some level this probably explains the utility of nonlinear hyperviscosity for shock capturing. In nonlinear hyperviscosity we have immense freedom in designing the viscosity as long as we keep the coefficient positive. We then have a positive viscosity multiplying a positive-definite operator, and this provides the deep form of stability we want along with a connection that guarantees physically relevant solutions.
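
The positivity argument can be checked directly; with \lambda = \nu \Delta t / \Delta x^2 \le 1/2 the explicit update is a convex combination of neighboring values, so it can never create new extrema or negative values (a small self-contained demonstration, not from the post):

```python
import numpy as np

def diffusion_step(u, lam):
    """Explicit diffusion step with lam = nu*dt/dx^2.

    For lam <= 1/2 the weights (1-2*lam, lam, lam) are nonnegative and
    sum to one: a convex combination of neighboring values.
    """
    return (1.0 - 2.0 * lam) * u + lam * (np.roll(u, 1) + np.roll(u, -1))

rng = np.random.default_rng(0)
u = rng.random(64) + 0.1          # strictly positive, rough initial data
lo, hi = u.min(), u.max()
for _ in range(500):
    u = diffusion_step(u, lam=0.4)
# no new extrema, and positivity is preserved
print(u.min() >= lo, u.max() <= hi)
```

This discrete maximum principle is the “deep form of stability” in miniature: a positive coefficient on a positive-definite operator cannot manufacture unphysical values.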

With the basic principles in hand we can go wild and derive forms for the hyperviscosity that are well-suited to whatever we are doing. If we have a method with high-order accuracy, we can derive a hyperviscosity to stabilize the method that will not intrude on its accuracy. For example, say we have a fourth-order accurate method, so we want a viscosity with at least a fifth-order operator, \nu \ell^3 \left| \nabla u \nabla^2 u\right| \nabla^2 u. If one wanted better high-frequency damping a different form would work, like \nu \ell^3 \left| \nabla^3 u\right| \nabla^2 u. To finish the generalization of the idea, suppose you have an eighth-order method; now a ninth- or tenth-order viscosity would work, for example \nu \ell^8 \left( \nabla^2 u\right)^4 \nabla^2 u. The point is that one can exercise immense flexibility in deriving a useful method.
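
One way to see that such a term will not intrude on a high-order method is to measure how it shrinks under refinement; this sketch (assumed periodic central differences, with \ell = \Delta x) shows the fifth-order form \nu \ell^3 \left| \nabla u \nabla^2 u\right| \nabla^2 u contracting like \Delta x^3 on smooth data:

```python
import numpy as np

def d1(u, dx):
    """Periodic central first difference."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def d2(u, dx):
    """Periodic central second difference."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2

def hyper_term(u, dx, nu=1.0):
    """Nonlinear hyperviscous term nu * dx^3 * |u_x * u_xx| * u_xx."""
    ux, uxx = d1(u, dx), d2(u, dx)
    return nu * dx ** 3 * np.abs(ux * uxx) * uxx

norms = []
for n in (64, 128, 256):
    dx = 2.0 * np.pi / n
    u = np.sin(np.arange(n) * dx)   # smooth periodic data
    norms.append(np.abs(hyper_term(u, dx)).max())

# for smooth u the derivative factors are O(1), so halving dx should
# shrink the whole term by roughly 2^3 = 8
print(norms[0] / norms[1], norms[1] / norms[2])
```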

I’ll finish by making a brief observation about how to apply these ideas to systems of conservation laws, \partial_t{\bf U} + \partial_x {\bf F} \left( {\bf U} \right) = 0. This system of equations will have characteristic speeds \lambda determined by the eigen-analysis of the flux Jacobian, \partial_{\bf U} {\bf F} \left( {\bf U} \right). A reasonable way to write the nonlinear hyperviscosity would be \nu \ell^p \left| \partial_x^p \lambda \right| \partial_{xx} {\bf U}, where p is the number of derivatives to take. A second approach, which would work with Godunov-type methods, would compute the absolute value of the jump in the characteristic speeds at the cell interfaces where the Riemann problem is solved, and use it to set the magnitude of the viscous coefficient. This jump is the order of the approximation, and would multiply the cell-centered jump in the variables {\bf U}. This would guarantee proper entropy production through the hyperviscous flux augmenting the flux computed via the Riemann solver, and the hyperviscosity would not impact the formal accuracy of the method.
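
For a concrete system, the sketch below computes the characteristic speeds from the flux Jacobian of the 1-D shallow-water equations (my choice of example system); these \lambda are the wave speeds a hyperviscosity coefficient would be built from:

```python
import numpy as np

# 1-D shallow water: U = (h, m) with m = h*u and
# F(U) = (m, m^2/h + g*h^2/2); the eigenvalues of dF/dU are u +/- sqrt(g*h).
g = 9.81

def flux_jacobian(h, u):
    """dF/dU for U = (h, m), written in terms of h and u = m/h."""
    return np.array([[0.0, 1.0],
                     [g * h - u ** 2, 2.0 * u]])

h, u = 2.0, 3.0
lam = np.sort(np.linalg.eigvals(flux_jacobian(h, u)).real)
c = np.sqrt(g * h)
print(lam)  # the characteristic speeds, approximately [u - c, u + c]
```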

We can not solve our problems with the same level of thinking that created them

― Albert Einstein

I spent the last two posts railing against the way science works today and its rather dismal reflection in my professional life. I’m taking a week off from that. It isn’t that last week was any better; it was actually worse. The rot in the world of science is deep, but the rot is simply part of a larger world of which science is a part. Events last week were even more appalling and pregnant with concerns. Maybe if I can turn away and focus on something positive, it might be better, or simply more tolerable. Soon I have a trip to Washington and into the proverbial belly of the beast; it should be entertaining at the very least.

Till next Friday, keep all your singularities regularized.

Think before you speak. Read before you think.

― Fran Lebowitz

Von Neumann, John, and Robert D. Richtmyer. “A method for the numerical calculation of hydrodynamic shocks.” Journal of Applied Physics 21.3 (1950): 232-237.

Borue, Vadim, and Steven A. Orszag. “Local energy flux and subgrid-scale statistics in three-dimensional turbulence.” Journal of Fluid Mechanics 366 (1998): 1-31.

Cook, Andrew W., and William H. Cabot. “Hyperviscosity for shock-turbulence interactions.” Journal of Computational Physics 203.2 (2005): 379-385.

Smagorinsky, Joseph. “General circulation experiments with the primitive equations: I. The basic experiment.” Monthly Weather Review 91.3 (1963): 99-164.

 

Balance must be restored

18 Friday Mar 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

 

The greatest danger of a terrorist’s bomb is in the explosion of stupidity that it provokes.

― Octave Mirbeau

Sometimes the blog is just an open version of a personal journal. I feel myself torn between wanting to write about some thoroughly nerdy topic that holds my intellectual interest (like hyperviscosity, for example), but end up ranting about some aspect of my professional life (like last week). I genuinely felt like the rant from last week would be followed this week by a technical post because things would be better. Was I ever wrong! This week is even more appalling! I’m getting to see the rollout of the new national program reaching for exascale computers. As deeply problematic as the current NNSA program might be, it is a paragon of technical virtue compared with the broader DOE program. It’s as if we already had a President Trump in the White House to lead our Nation over the brink toward chaos, stupidity and making everything an episode in the World’s scariest reality show. Electing Trump would just make the stupidity obvious; make no mistake, we are already stupid.

The fantastic advances in the field of electronic communication constitute a greater danger to the privacy of the individual.

― Earl Warren

A while back I talked about the impending conflict brewing in society and the threat of its explosion in the coming year. I fear this is coming to pass. The multiple events of the American presidential election, the refugee crisis, and terrorism, all playing out in a maelstrom of societal upheaval, are stoking flames into a conflagration. The really clear theme is a general lack of trust and faith in the establishment. Science is suffering greatly from this problem. Expertise is viewed with suspicion and generally associated with elitism. This is manifested in the actions of our government and legislatures, but authorized by the public. Instead of viewing expert judgment as something to be respected and trusted, it is viewed as being biased and self-serving. The governance of science is being crippled by these attitudes, and the quality of our science programs and labs is being destroyed in the process.

I ponder the imprint of all of this in the events that unfold at work. Why is work so completely unremarkable and dull? Why are the things we are supposed to work on so utterly lacking in inspiration, thought and rational basis? Why are workplaces becoming so completely antithetical to progress, empowerment and satisfaction? How does the combination of the pervasive Internet and our reality show politics reflect all of these trends?

Then I think about the public face of the World today. Why are hatred, fear and racism so openly present in public life? Is violence becoming more commonplace? Has the Internet been a positive or a negative force? Are we freer than in the past, or placed in less visible shackles? Why is more information available than ever before, yet society has never seemed more at the mercy of the uninformed?

How are these two worlds connected? Is there a common thread to be explored and understood?

I think there are parallels that are worth discussing in depth. Something big is happening, and right now it looks like a great unraveling. People are choosing sides and the outcome will determine the future course of our World. On one side we have the forces of conservatism, which want to preserve the status quo through the application of fear to control the populace. This produces control, lack of initiative, deprivation and a herd mentality. The prime directive for the forces on the right is the maintenance of the existing structures of power in society. The forces of liberalization and progress are arrayed on the other side, wanting freedom, personal meaning, individuality, diversity and new societal structures. These forces are colliding on many fronts and the collision is starting to be violent. The outcome still hangs in the balance.

The Internet is the first thing that humanity has built that humanity doesn’t understand, the largest experiment in anarchy that we have ever had.

― Eric Schmidt

Society is greatly out of balance and eventually balance must be restored. This lack of balance is extreme enough to assure it will result in conflict and probably violence. We can see how close to violence parts of the political climate are today. It will get worse before it gets better. How it plays out and who wins is still not determined. I favor the left, but the right probably has the advantage for now. The right controls the levers of power and dominates resources, be they weaponry, money or influence. The left lacks an element that unifies progress, aside from the vast degree of inequality that has arisen and the connective power of the “Internet”. Insofar as the Internet and connectivity are concerned, the impact plays both ways and may favor the right’s establishment cause of preserving the status quo. The right has the resources to harness the Internet to further their cause. The elements at play are worth laying out because of how they affect everyone’s life.

The internet was supposed to liberate knowledge, but in fact it buried it, first under a vast sewer of ignorance, laziness, bigotry, superstition and filth and then beneath the cloak of political surveillance. Now…cyberspace exists exclusively to promote commerce, gossip and pornography. And of course to hunt down sedition. Only paper is safe. Books are the key. A book cannot be accessed from afar, you have to hold it, you have to read it.

― Ben Elton

The Internet is a great liberalizing force, but it also provides a huge amplifier for ignorance, propaganda and the instigation of violence. It is simply a technology, and it is not intrinsically good or bad. On the one hand the Internet allows people to connect in ways that were unimaginable mere years ago. It allows access to incredible levels of information. The same thing creates immense fear in society because new social structures are emerging. Some of these structures are criminal or terrorist, some are dissidents against the establishment, and some are viewed as immoral. The availability of information becomes an overload for the general public. This works in favor of the establishment, which benefits from propaganda and ignorance. The result is a distinct tension between knowledge and ignorance, freedom and tyranny, hinging upon fear and security. I can’t see who is winning, but the signs are not good.

Withholding information is the essence of tyranny. Control of the flow of information is the tool of the dictatorship.

― Bruce Coville

The topic of encryption is another pregnant one. On the one hand it allows elements the establishment does not like to communicate and exist in privacy. Some of these elements are criminal or terrorist, and some are political dissidents or other social deviants. Encryption has some degree of equivalence to freedom in a digital World. I feel that the establishment is not trustworthy enough to hold the keys to it. Can we really be completely safe and completely free? The issue is that we can never be either, and the attempt to be completely safe will enslave us. Any attempt to be completely free endangers us as well. The trick is the balance of the two extremes. I choose freedom as the greater good, but clearly many, if not most, choose safety as the priority. The line between safety and tyranny is thin, and our freedom may be sacrificed in favor of safety and security.

How do you defeat terrorism? Don’t be terrorized.

― Salman Rushdie

Terrorism is another huge problem that crystallizes the issues of freedom, safety and security. It is used to frighten and enslave populations. Terrorism has successfully harnessed the will of the American population to support further profit taking by the wealthy. In fact, the key to curing terrorism is brutally simple. It is so simple and yet so hard; the cure is to not be terrorized. Our fear of terrorists is their greatest force, and amplifies the damage done by actual terrorist acts by orders of magnitude. If we refuse to be terrorized, terrorists lose all their power. The problem is that terrorists are used by the establishment to frighten, control and corral the population to do the establishment’s bidding. It is an incredibly powerful political tool to mobilize the population to support tyranny. It drives the desire for strong, protective and violent governance. It encourages a populace to consume itself with fear and hatred. It has led us to consider Trump a viable candidate for President.

Our media, including the Internet, immensely exaggerates the risks in the World and amplifies the impact of those risks on society. As one of my Facebook friends likes to say, “we are terrorism’s greatest force multiplier”. The risk from terrorism is vastly less than our actions would indicate. The deluge of information makes terrorism seem commonplace while the reality is how utterly uncommon and rare it is. For the media, terrorism is a great source of customer attention and money. For politicians on the right, terrorism is a way of channeling the ignorance and hatred in society to their side. For the interests of the wealthy, terrorism is a great way for the defense and intelligence industries to line their pockets with taxpayer money. All of these actions help society’s unraveling by opposing the forces of progress and liberalization and strengthening the power of the establishment, whether it is industry, police or military.

Abandoning open society for fear of terrorism is the only way to be defeated by it.

— Edward Snowden

We need to strike a balance that allows freedom and progress to continue. Too many in the public do not realize that fear and security concerns are being used to enslave them. The politics of fear and hatred are the tool of the rich and powerful. They are driving maintenance of the status quo that hurts the general population and only benefits those already in power. It is continuing to drive an imbalance that can only end up with societal conflict. The larger the imbalance grows and the longer it goes unchecked, the greater the resulting conflict. If things don’t blow up this year, the blowup will only grow in severity.

You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time.

― Abraham Lincoln


How more management becomes less leadership

11 Friday Mar 2016

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Action expresses priorities.

― Mahatma Gandhi

There is nothing so useless as doing efficiently that which should not be done at all.

― Peter F. Drucker

I know that I’ve written on this general topic before, but it keeps coming up as one of the biggest issues in my work life. We are getting more and more management while less and less leadership is evident. I know the two things shouldn’t be mutually exclusive, but seemingly in practice they are. With each passing year we get more and more management assurance and more measurement of compliance, all while our true performance slips. We are “managed” in the modern sense of the word better than ever, yet our science and research is a mere shadow of its former glory. Perhaps this is the desired outcome, even if only implicitly, by a society where the lack of problems and readily identifiable fuck ups is valued far more than accomplishments. A complete lack of national leadership that values accomplishment certainly shares part of the collective blame.

The core of the issue is an unhealthy relationship to risk, fear and failure. Our management is focused upon controlling risk, fear of bad things, and outright avoidance of failure. The result is an implemented culture of caution and compliance manifesting itself as a gulf of leadership. The management becomes about budgets and money while completely losing sight of purpose and direction. The focus on leading ourselves in new directions gets lost completely. The ability to take risks gets destroyed by fear and outright fear of failure. People are so completely wrapped up in trying to avoid ever fucking up that all the energy behind doing progressive things is completely sapped. We are so tied to compliance that plans must be followed even when they make no sense at all.

Any imperative revolving around progress and overall technical quality has absolutely no gravity in this environment. The drive to be managed well simply overwhelms us. Of course “managed well” means that nothing identifiable as a fuck up happens; it almost never means doing something great, wonderful or revolutionary. Accomplishment is limited to safe, incremental things that couldn’t possibly go wrong. Part of the issue is our adoption of modern management principles, which put a massive emphasis on the short term. To be clear, modern business management is obsessively short-term focused. This short-term focus is completely contrary to progress, quality and imagination. These impacts are felt deeply in the private sector and are manifesting themselves profoundly in the public sector where I work. One of the key issues is the structural aspect of modern management practice. We are too obsessed with following our management plans to completion as opposed to being flexible and adaptive.

We put intrusively damaging management practices on a virtual pedestal. A prime example is the quarterly progress obsession. Business is massively damaged by the short-term focus embodied in demands for unwavering quarterly profits. The same idea manifests itself more broadly in public sector management to a deeply distressing degree. The entire mentality is undermining the long-term quality of our scientific base nationally and internationally. We are unwilling to change directions even when the change makes the best sense, is grounded in a rational analysis of lessons learned, and would produce the best outcomes.

All of it produces a lack of the energy and focus necessary for leadership. We do not exercise the art of saying NO. We are managed to a very high degree; we are led to a very small degree. Our managers are human, limited in their capacity for complexity and in the time available to provide focus. If all of the focus is applied to management, nothing is left for leadership. The impact is clear: the system is full of management assurance, compliance and surety, yet virtually absent of vision and inspiration. We are bereft of aspirational perspectives with clear goals looking forward. The management focus breeds an incremental approach that grounds future vision too concretely on what is possible today. All of this is brewing in a sea of risk aversion and intolerance for failure.

Start with the end in mind.

― Stephen R. Covey

The focus of our management is not the performance of our jobs in the accomplishment of our missions, science or engineering. The focus of our management is to keep fuck ups to a minimum. If someone fucks up, they are generally thrown to the wolves, or the fuck up is rebranded as a glorious success. This increasingly means that our management, insofar as the actual work is concerned, contributes to the systematic generation and encouragement of bullshit. The best managers can bullshit their way out of a fuckup and spin it into a glorious success.

This is incredibly corrosive to the overall quality of the institutions that I work for. It has resulted in the wholesale divestiture of quality because quality no longer matters to success. It is creating a thoroughly awful and untenable situation where truth and reality are completely detached from how we operate. Every time something of low quality is allowed to be characterized as high quality, we undermine our culture. The capability to make real progress is undermined because progress is extremely difficult and prone to failure and setbacks. It is much easier to simply move along incrementally, doing what we are already doing. We know that will work, and frankly those managing us don’t know the difference anyway. Doing what we are already doing is simply the status quo, and orthogonal to progress.

Things which matter most must never be at the mercy of things which matter least.

― Johann Wolfgang von Goethe

Managing and leading are different, but strongly related. We need both in the right measure and they shouldn’t be exclusive, but time and energy are limited. Today we have too much management and virtually no leadership because the emphasis is on managing a whole bunch of risks and fears. We are creating systems that try to push away the possibility of any number of identified bad things. We soak up every minute of time and every ounce of available effort in this endeavor, leaving nothing left. Leadership and the actual practice of good personnel management are left without any time or energy to be practiced. The result is a gulf in both areas that becomes increasingly evident with each passing day.

Most of us spend too much time on what is urgent and not enough time on what is important.

― Stephen R. Covey

Leadership and the positive qualities of management do not stop or control all the bad things directly; they impact these things in a soft and indirect way. Rather than step away from the overly prescriptive and failed approach of controlling every little thing that might go wrong, we continue down the path of micromanagement. Each step in micromanagement produces another tax on the time and energy of everyone impacted by the system and diminishes the good that can be done. In essence we are draining our system of energy for creating positive outcomes. The management system is unremittingly negative in its focus, trying to stop stuff from happening rather than enable stuff. It is ultimately a losing battle, which is gutting our ability to produce great things.

Producing great things serves the National interest in the best way. Not producing great things, and calling not-great things great, acts to undermine the National interest. Today we are doing exactly this and letting ourselves off the hook. We have made management of risks and failure the focus of our energy. We have sidelined leadership by fiat, allowed mediocrity to creep into our psyche, and let progress and quality drift. Embracing quality, progress and risk, and allowing failure in service of greater achievement, can make change happen in ways that matter.

What’s measured improves

― Peter F. Drucker

The issue isn’t that most of the management work shouldn’t be done in the abstract. Almost all of the management activities are good ideas and “good” in themselves. They are bad in the sense of what they displace from the sorts of efforts we have the time and energy to engage in. We all have limits on what we can reasonably achieve. If we spend our energy on good but low-value activities, we do not have the energy to focus on difficult, high-value activities. A lot of these management activities are good, easy, and time consuming, and they directly displace lots of hard, high-value work. The core of our problems is the inability to focus sufficient energy on hard things. Without focus the hard things simply don’t get done. This is where we are today: consumed by easy, low-value things, and lacking the energy and focus to do anything truly great.

I think there needs to be a meeting to set an agenda for more meetings about meetings.

― Jonah Goldberg

Examples of this abound in the day-to-day life of Lab employees. If you are a manager at one of the Labs, your days are choked with low-value work. A very large amount of this low-value work looks like the application of due diligence and responsibility. A more honest view is to see these activities through the lens of micromanagement. Our practices lead us to micromanage people’s time, work, and budgets so thoroughly as to absorb all the available time. This effectively leaves no time or effort available for people’s judgment. These steps also effectively remove the staff’s ability to act as independent professionals. We are transitioning our staff from an active, independent community of world-class scientists into a disconnected collection of hourly employees.

Your behavior reflects your actual purposes.

― Ronald A. Heifetz

Perhaps the core issue is a general ambiguity regarding the purpose of our Labs, the goals of our science, and the importance of the work. None of this is clear. It is the generic implication of the lack of leadership within the specific context of our Labs, or of federally supported science. It is probably a direct result of a broader and deeper vacuum of leadership nationally, infecting all areas of endeavor. We have no visionary or aspirational goals as a society either.

The quest for absolute certainty is an immature, if not infantile, trait of thinking.

― Herbert Feigl

 

Entropy, vanishing viscosity, physically relevant solutions and ink

04 Friday Mar 2016

Posted by Bill Rider in Uncategorized

≈ Leave a comment

You should never be surprised by or feel the need to explain why any physical system is in a high entropy state.

― Brian Greene

For those of you who know me, it’s neither a secret nor a widely known fact that I’ve gotten some tattoos recently. They aren’t the usual dreck most dudes get (like those tribal ones), but things meaningful to me. I now have five in total, four of them science related. One of the things I wanted was an equation (yeah, I’m a total nerd). The question is: what equation do I believe in enough to get permanently inscribed on my skin? A common choice for a science tattoo is Maxwell’s equations, and a friend of mine has the Euler equations on his arm from his PhD thesis. This post is about the equation I chose to care enough about to go through with it.

I’ll write the equation in TeX and show all of you a picture; you can make out a little of my other ink too, a lithium-7 atom and a Rayleigh-Taylor instability (I also have my favorite dog’s paw on my right shoulder and the famous von Kármán vortex street on my forearm). The equation is how I think about the second law of thermodynamics in operation, through the application of a vanishing viscosity principle tying the solution of equations to a concept of physical admissibility. In other words, I believe in entropy and its power to guide us to solutions that matter in the physical world.

The all-knowing yesterday is obsolete today.

― Jarod Kintz

This is in contrast to much of the mathematical world, which often cares about equations that are beautiful but mean nothing in reality. A lot of the tension is related to following the beauty of Newtonian determinism and its centrality to continuum mathematical physics, versus the need to embrace the stochastic nature of the real world. The real world is random and flows along time’s arrow, and our models need to embrace entropy and uncertainty. Our education and foundational knowledge of the physical world are based on Newton’s simplified view of things (a simplified view that revolutionized science, if not mankind’s understanding and mastery of nature). Newton’s principles can only take us so far, and we are probably reaching the end of their grasp. It is past time to push forward toward incorporating new principles into our model of reality.

Here is the equation in all its glorious mathematical statement: \frac{\partial U}{\partial t} + \nabla \cdot F(U) = \nabla \cdot \nu \nabla U, taking the limit as \nu \rightarrow 0^+, which implies \frac{d S}{d t} \ge 0. The equation is the time rate of change of a variable determined by a flux balance and a diffusive term, where the limit of the diffusion coefficient is taken to zero; this limit selects solutions that satisfy the second law of thermodynamics, which says that entropy increases. OK, but WTF is it all about? In words, the equation is a hyperbolic conservation law with a diffusive right-hand side, where the coefficient of viscosity goes to a limit of zero. In this limit we find solutions that are physically admissible, that is, ones that could exist in the real world. These solutions satisfy the second law of thermodynamics, which implies that entropy, or disorder, monotonically increases in time. The second law can be viewed as the thing that gives time a direction (time’s arrow!); without the increase of entropy, time can flow equally well forward or backward, that is, it is symmetric. We know time flows forward in the real world we all live in, so we want that (or at least I want that, and believe you should too).
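To make this concrete, here is a little sketch of my own (nothing fancy; the discretization and parameter choices are purely illustrative, not anything canonical). The scalar Burgers equation, u_t + (u^2/2)_x = \nu u_{xx}, is the classic toy model for this limit. Solving it with a tiny viscosity from step-function data produces a shock, and the “energy” \int u^2\, dx decays as the shock does its work, exactly the one-way, entropy-producing behavior I’m talking about:

```python
import numpy as np

def burgers(nu, nx=400, t_final=0.3):
    """Viscous Burgers u_t + (u^2/2)_x = nu*u_xx on a periodic [0,1) grid.
    Upwind flux (valid here since u >= 0) plus explicit central diffusion."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.where(x < 0.5, 2.0, 0.0)   # step data: the shock moves at speed 1
    t = 0.0
    while t < t_final:
        # step size stable for both the convective and diffusive terms
        dt = min(0.4 * dx / max(u.max(), 1e-12),
                 0.2 * dx**2 / max(nu, 1e-12),
                 t_final - t)
        f = 0.5 * u**2
        u = (u - dt * (f - np.roll(f, 1)) / dx                       # upwind convection
               + dt * nu * (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2)
        t += dt
    return x, u

x, u = burgers(nu=1e-3)
energy0 = 2.0                          # integral of u^2 at t = 0
energy = np.sum(u**2) * (x[1] - x[0])  # strictly smaller: entropy at work
```

Run it with ever smaller \nu and the energy loss does not go away; the shock keeps dissipating at its own finite rate.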

A lot of effort is spent studying the equations of inviscid flow, flow without dissipative forces, most commonly the Euler equations. This form of the equations is studied so much because it is so pure. One can really get some awesomely beautiful mathematics out of it. Commonly the math leads to some great structure by studying these systems through their Hamiltonian and its evolution. Unfortunately, this endeavor, while beautiful and hard, has no physical merit whatsoever. No physical system really adheres to this Hamiltonian structure (except perhaps isolated systems of very small scale, and I really don’t give much of a shit about those). They are seductive, pretty, and almost without any physical utility. I care about stuff that appears in nature.

The important thing for equations is to represent physical reality (unless you’re doing math for math’s sake). As Wigner pointed out, mathematics has an incredible, almost mystic capacity to model our reality; as he says, it is “unreasonably effective.” Exploiting this power should be a privilege we exercise whenever possible. In that vein, the equations that connect to reality should be favored. In many cases the inviscid equations are incredibly useful for modeling, but an important caveat should be exercised: the solutions to the inviscid equations that are favored are those associated with the presence of viscosity. These solutions are found through the application of an asymptotic principle, vanishing viscosity. The application of vanishing viscosity provides a route for these equations to satisfy the second law of thermodynamics and its demand for increasing disorder.

These principles actually don’t go far enough in distinguishing themselves from the inviscid dynamics associated with Hamiltonian systems. Let me explain how they need to go even further. A couple of the most profound observations in fluid dynamics are associated with shock waves and turbulence, and they share a remarkable similarity (it might be argued that both are inevitable through dimensional similarity arguments!). For shock waves, the amount of dissipation occurring via the passage of a shock is proportional to the cube of the jump in the variables across the shock (Bethe came up with this in 1942). For turbulence, the amount of dissipation in a high Reynolds number flow is proportional to the cube of the velocity variation (Kolmogorov came up with this in 1941). Both relations are independent of the specific value of the molecular viscosity.
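This viscosity-independence can be checked directly on the Burgers toy problem. The exact viscous shock profile (the Taylor shock) is a tanh whose width scales with \nu, and its total dissipation \int \nu\, u_x^2\, dx works out analytically to \Delta u^3/12, with no \nu left in the answer. A quick quadrature sketch of my own (the function name and parameters are purely illustrative) confirms it:

```python
import numpy as np

def taylor_shock_dissipation(du, nu, n=20001, width=40.0):
    """Total dissipation, integral of nu*(du/dx)^2 dx, across the exact
    viscous Burgers shock profile u = -(du/2)*tanh(du*x/(4*nu)) + const."""
    L = width * 4.0 * nu / du        # window wide enough to resolve the O(nu) layer
    x = np.linspace(-L, L, n)
    ux = -(du**2 / (8.0 * nu)) / np.cosh(du * x / (4.0 * nu))**2
    return np.sum(nu * ux**2) * (x[1] - x[0])

# shrink nu by two decades: the dissipation stays pinned at du^3/12
for nu in (1e-1, 1e-2, 1e-3):
    print(nu, taylor_shock_dissipation(du=2.0, nu=nu))   # each close to 8/12
```

The shock layer gets thinner as \nu shrinks, but the gradients steepen in exact compensation; the total dissipation is set by the jump, not the viscosity.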

What people resist is not change per se, but loss.

― Ronald A. Heifetz

These relations are profound in their implications, which are not intuitively obvious upon first seeing them. The dissipation rate being independent of the value of viscosity means that the flow contains something that approaches a singularity. These singularities are called shock waves in compressible flow, and have no name at all in turbulence because we don’t know what they are. These singularities are the mystery of turbulence, and they are surely as ephemeral as they are important. In other words, we don’t see the turbulent singularities the way we see shocks, but they must be there. Moreover, the supposed equations of turbulence, the incompressible Euler equations, don’t appear to contain obvious singularity-producing features. This whole issue has produced an utterly stagnant scientific endeavor of immense practical importance.

Of course, what is usually not covered is the horribly degenerate and unphysical nature of the incompressible Euler or Navier-Stokes equations. The key term is incompressible, which is intrinsically unphysical and removes sound waves from the system by making their propagation speed infinite. What if these sound waves, which are always present, contain the essence of what drives dissipation in turbulent systems? Incompressibility also removes thermodynamics from the equations, and can only be derived from the compressible Navier-Stokes equations by considering the flow to be adiabatic. Turbulence in its essential character is non-adiabatic and intrinsically dissipative. Anyone see the problem(s)? Perhaps it’s time to start considering that the lack of progress in turbulence is exposing fundamental flaws in our modeling paradigm.

I would posit that we are trying to solve this monumentally important problem with a set of equations that we have systematically crippled. These equations were posed in an era when the fundamental issues discussed above were not well known. There really isn’t an excuse today. Could our lack of progress with turbulence be completely related to focusing on the wrong set of equations (yes!)?

Let’s dig just a bit deeper into the philosophical implications of Bethe’s and Kolmogorov’s relations for the dissipation of energy. Both of these relations imply that these systems satisfy the second law of thermodynamics. The limiting value for the satisfaction of the second law is not simply the inequality at zero, but rather an inequality at a finite value of dissipation. This finite value of dissipation is directly related to the large-scale flow structure, and is quantitatively proportional to the cube of the variations in the inertial range. Thus, the physically relevant limit is not zero dissipation; it is a finite amount of dissipation set by the large scales of the system of interest. This deepens the implication that any study of completely dissipation-free dynamics is utterly unphysical. The dissipation-free system is separated from the real world by a finite and non-vanishing distance.

This feature of the physical world should be reflected in how we numerically model things (this is my philosophical point of view). It gets to the core of why I chose this equation to ink on my skin. A lot of numerical work is focused on trying to remove every single bit of dissipation from a method while maintaining stability. This mantra is tied to the belief that dissipation is bad, and that one is fundamentally interested in the numerical solution of the dissipation-free Euler equations. I believe this is utterly foolhardy and unproductive. The dissipation-free Euler equations are close to useless. The key dissipative relations I’ve introduced above tell us that the dissipation is never zero, but rather non-zero in a very specific way that is irreducible.
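For what it’s worth, numerical dissipation is quantifiable, not some vague contaminant. For first-order upwind advection, a modified-equation analysis gives an effective viscosity \nu_{num} = a\,\Delta x\,(1-c)/2, where c is the CFL number. A short experiment of my own (parameter choices are just illustrative) shows a sine wave decaying at almost exactly that predicted rate:

```python
import numpy as np

# First-order upwind advection: u_t + a u_x = 0, periodic sine initial data.
# Modified-equation analysis says the scheme's truncation error acts like a
# real viscosity nu_num = a*dx*(1 - c)/2, so the wave amplitude should decay
# by exp(-nu_num * k^2 * t) after time t.
a, nx, cfl = 1.0, 200, 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
k = 2.0 * np.pi
u = np.sin(k * x)
t_final = 1.0                           # one full trip around the domain
for _ in range(round(t_final / dt)):
    u = u - cfl * (u - np.roll(u, 1))   # upwind difference, c = a*dt/dx
amp = 0.5 * (u.max() - u.min())         # measured amplitude after one period
nu_num = a * dx * (1.0 - cfl) / 2.0
amp_pred = np.exp(-nu_num * k**2 * t_final)   # both come out near 0.95
```

The point is not that this dissipation is good in itself, but that it is knowable and controllable; a good method puts it where the physics demands it.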

Some would argue that this non-zero dissipation should be the target of modeling, and that the numerical methods should be pure and not intrude into the modeling. I believe this perspective is laudable, but foolish and practically unworkable. I favor more holistic approaches that combine modeling and numerical methods into a seamless package. This approach works wonderfully well in numerical methods for shocks, and produced a set of methods that revolutionized the field of computational fluid dynamics (CFD). I believe the grasp of these methods is far greater, and extends into turbulence through implicit large eddy simulation (ILES). ILES strongly implies that turbulence modeling is substantially addressed by techniques that practically solve compressible flows in the vanishing viscosity limit.

Generally this approach has not been taken for turbulence, and the reason is clear to see. Turbulence modeling remains to this day a dominantly empirical activity. The core reason is the point above about not knowing what the dissipative structures in turbulence are. In compressible flows we know that shock waves are the thing to focus on, and where the invariant dissipation occurs. Shocks are the hard thing to compute numerically, and we know what to do about them. For turbulence the same thing cannot happen, and the result is empirical modeling without targeted numerical methods. What remains is a philosophy that drives numerical methods to be innocuous and allows the modeling to hold sway. The problem is that the modeling is blind to what the real physics is doing, and to the precise mechanisms connecting the large-scale flow to the dissipation of energy.

Nothing limits you like not knowing your limitations.

― Tom Hayes

As far as the tattoos are concerned, I haven’t decided yet if I’m getting more ink or not (it’s probably a yes). Maybe I’ll keep to the theme of science on the left side of my body and personal meaning on the right. Ideas are hatching, and I need to mind my tendency toward obsessive-compulsive behavior.

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

— Sir Arthur Stanley Eddington

Wigner, Eugene P. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11, 1959.” Communications on Pure and Applied Mathematics 13.1 (1960): 1-14.

Bethe, H. A. “On the Theory of Shock Waves for an Arbitrary Equation of State.” Classic Papers in Shock Compression Science. Springer New York, 1998. 421-495.

Kolmogorov, Andrey Nikolaevich. “Dissipation of Energy in Locally Isotropic Turbulence.” Akademiia Nauk SSSR Doklady. Vol. 32. 1941.

Grinstein, Fernando F., Len G. Margolin, and William J. Rider, eds. Implicit Large Eddy Simulation: Computing Turbulent Fluid Dynamics. Cambridge University Press, 2007.

 
