There is nothing artificial about “Artificial Viscosity”

The name of the original successful shock capturing method is “artificial viscosity,” and it is terrible.  John Von Neumann and Robert Richtmyer were both mathematical geniuses, but, at least in this case, they were poor marketers.  To be fair, they struggled with the name, and artificial viscosity was better than “mock” or “fictitious” viscosity, which Richtmyer considered in his earlier Los Alamos reports (LA-671 and LA-699).  Nonetheless, we are left with this less than ideal name.

I’ve often wished I could replace this name with “shock viscosity” or, better, “shock dissipation,” because the impact of artificial viscosity is utterly real.  It is physical, and it is necessary for computing shock solutions automatically without resolving the ridiculously small length and time scales associated with the physical dissipation.  In a very real sense, the magnitude of viscosity used in artificial viscosity is the correct amount of dissipation for the manner in which the shock is being represented.
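
For concreteness, here is the classic quadratic form in one dimension (a schematic sketch in my notation, not a quotation of the original papers; $C_Q$ is a dimensionless coefficient and $\Delta x$ the local mesh spacing), which acts only in compression and is added to the pressure:

$$q = \begin{cases} C_Q\,\rho\,(\Delta x)^2\left(\dfrac{\partial u}{\partial x}\right)^{2}, & \dfrac{\partial u}{\partial x} < 0 \ \text{(compression)},\\[6pt] 0, & \text{otherwise.} \end{cases}$$

Because $q$ scales with the square of the velocity gradient, it is negligible in smooth flow and switches on strongly at a shock, spreading the jump over a few cells no matter how small the physical viscosity is.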

The same issue arises in models of turbulence.  Again, the effects of nonlinearity enormously augment the impact of physical dissipation.  The nonlinearity is associated with complex small-scale structures whose details are unimportant to the large-scale evolution of the flow.  When simulating these circumstances numerically we are often interested only in the large-scale flow field, and we can use an enhanced (nonlinear) numerical viscosity to compute the flow stably.  For both shocks and turbulence this approach has been ridiculously successful.  In turbulence this approach goes by the name implicit large eddy simulation.  It has been extremely controversial, but in the last decade it has grown in acceptance.

The key to this whole line of thinking is that dissipation is essential for the physical behavior of numerical simulations.  While too much dissipation is undesirable, too little dissipation is a disaster.  The most dangerous situation is when a simulation is stable (that is, runs to completion) and produces seemingly plausible results, but has less dissipation than is physically called for.  In this case the simulation will produce an “entropy-violating” solution.  In other words, the result will be unphysical, that is, not achievable in reality.  This is truly dangerous and far less desirable than the physical, but overly dissipated, result (IMHO).  I’ve often applied a maxim to simulations: “when in doubt, diffuse it out.”  In other words, extra dissipation, while not ideal (a play on words!), is better than too little dissipation, which allows physically unachievable solutions to persist.

Too often numerical practitioners seek to remove numerical dissipation without being mindful of the delicate balance between dissipation that is excessive and the dissipation necessary to guarantee physically relevant (or admissible) results.  It is a poorly appreciated aspect of nonlinear solutions of physical systems that the truly physical solutions are not those that have no dissipation, but rather those that produce a finite amount of dissipation.  This finite amount is determined by the large-scale variations in the flow, and is proportional to the third power of these variations.

Very similar scaling laws exist for ideal incompressible and compressible flows.  In the incompressible case, Kolmogorov discovered the scaling law in the 1940s in the form of his refined similarity hypothesis, or the “4/5ths law.”  Remarkably, Kolmogorov did this work in the middle of the Nazi invasion of the Soviet Union, during the darkest days of the war for the Soviets.  At nearly the same time in the United States, Hans Bethe discovered a similar scaling law for shock waves.  Both can be written in stunningly similar forms where the time rate of change of kinetic energy due to dissipative processes (or the change in entropy) is proportional to the third power of large-scale velocity differences, and completely independent of the precise value of viscosity.
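
Written schematically (the notation is mine and the constants are only indicative), the two results look nearly identical.  Kolmogorov’s “4/5ths law” for the longitudinal velocity increment in the inertial range, and the resulting dissipation estimate, are

$$\left\langle [\delta u(r)]^{3}\right\rangle = -\tfrac{4}{5}\,\varepsilon\, r, \qquad \varepsilon \sim \frac{(\Delta u)^{3}}{\ell},$$

while for a weak shock the entropy production depends on the cube of the jump,

$$\Delta s \;\propto\; \left(\frac{\partial^{2} p}{\partial v^{2}}\right)_{\!s} (\Delta v)^{3} \;\sim\; (\Delta u)^{3},$$

with no explicit dependence on the viscosity or heat conduction in either case.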

These scaling laws are responsible for the success of artificial viscosity and large eddy simulation.  In fact, the origin of large eddy simulation is artificial viscosity.  The original suggestion that led to the development of the first large eddy simulation by Smagorinsky was made in 1956 by Von Neumann’s collaborator, Jule Charney, to remove numerical oscillations from early weather simulations.  Smagorinsky implemented this approach in three dimensions in what became the first large eddy simulation and the first general circulation model.  This work was the origin of a major theme in both turbulence modeling and climate modeling.  The depth of this common origin has rarely been elaborated upon, but I believe the commonality has profound implications.
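
For reference, the eddy viscosity at the heart of Smagorinsky’s model has the familiar form (standard notation, not a quotation from his paper; $C_s$ is the Smagorinsky constant, $\Delta$ the grid scale, and $\bar S_{ij}$ the resolved strain rate),

$$\nu_t = (C_s\,\Delta)^{2}\,|\bar S|, \qquad |\bar S| = \sqrt{2\,\bar S_{ij}\bar S_{ij}},$$

which, like the quadratic artificial viscosity, is a grid-scale length squared multiplying a velocity gradient; the family resemblance is not an accident.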

At the very least, it is important to realize that these different fields have a common origin.  Dissipation isn’t something to be avoided at all costs, but rather something to manage carefully.  Better yet, dissipation is something to be modeled carefully, and controlled.  In numerical simulations, stability is the most important thing because without stability, the simulation is worthless.  The second priority is producing a physically meaningful (or realizable) simulation.  In other words, we want a simulation that produces a result that matches a situation achievable in the real world.  The last condition is accuracy.  We want a solution that is as accurate as possible, without sacrificing the previous two conditions. 

Too often, the researcher gets hung up on these conditions in the wrong order (like prizing accuracy above all else).  The practitioner applying simulations in an engineering context often does not prize accuracy enough.  The goal is to apply these constraints in balance and in the right order (stability, then realizability, then accuracy).  Getting the order and balance right is the key to high-quality simulations.

Big Science


After providing a deep technical discussion of an important paper last time, I thought a complete change of pace might be welcome.  I’m going to talk about “big science” and why it is good science.  The standard thought about science today is that small-scale, researcher-driven (i.e., curiosity-driven) science is “good science” and “big science” is bad science.  I am going to spend some time explaining why I think this is wrong on several counts.

First, let’s define big science with some examples.  The archetype of big science is the Manhattan Project, which, if you aren’t aware, was the effort by the United States (and Great Britain) to develop the atomic bomb during World War II.  This effort transformed into an effort to maintain scientific supremacy during the Cold War and included the development of the hydrogen bomb.  Many premier institutions were built upon these efforts, along with innumerable spinoffs that power today’s economy and might not have been developed without the societal commitment to national security associated with that era.  A second archetype is the effort to land a man (an American one) on the Moon during the 1960s.  Both efforts were wildly successful and captured the collective attention of the world.  The fact is that both projects were born from conflict, in one case a hot war to defeat the Nazis, and in the other the race to beat the Soviets during the Cold War.  Today it is hard to identify any societal goals of such audacity or reach.  In fact, I am sadly at a loss to identify anything right now, with the Human Genome Project being the last big national effort.

One of the keys to the importance and power of big science is motivation.  The goals of big science provide a central narrative and raison d’être for the work.  They also provide constraints on the work, which curiously spurs innovation rather than stifling it.  If you don’t believe me, visit the Harvard Business Review and read up on the topic.  Perhaps the best years of my professional career were spent during the early days of the science-based stockpile stewardship program (SBSS), where the goals were lofty and the aspirations were big.  I had freedom to explore science that was meaningfully tied to the goals of the project.  It was the combination of resources and intellectual freedom, tempered by the responsibility to deliver results toward the larger end goals.

The constraints of achieving a larger goal serve as continual feedback and course correction for research, and provide the researcher with a rubric to guide the work.  The work on the atomic bomb and the subsequent Cold War provided an immensely important sense of purpose to science.  The quality of the science done in the name of these efforts was astounding, yet it was conspicuously constrained to achieving concrete goals.  Today there are no projects with such lofty or existentially necessary outcomes.  Science and society itself are much poorer for it.  The collateral benefits for society are equally massive, including ideas that grow the economy and entire industries as side effects of the research.  I shouldn’t have to mention that today’s computer industry, telecommunications, and the Internet were all direct outgrowths of Cold War, defense-related research efforts.

Why no big science today?  The heart of the reason is the largely American phenomenon of overwhelming distrust of government that started in the 1970s and is typified by the Watergate scandal.  Government scandal-mongering has become regular sport in the United States, and a systematic loss of trust in everything the government does is the result.  This includes science, and particularly large projects.  Working at a government lab is defined by a massive amount of oversight, which amounts to the following fact: the government is prepared to waste an enormous amount of money to make sure that you are not wasting money.  All this oversight does is make science much more expensive and far less productive.  The government is literally willing to waste $10 to make sure that $1 is not wasted in the actual execution of useful work.  The goal of a lot of work is to not screw up rather than to accomplish anything important.

The second and more pernicious problem is the utter and complete inability to take risks in science.  The term “scheduled breakthrough” has become common.  Managers do very little managing, and mostly just push paper around while dealing with the vast array of compliance-driven initiatives.  Strategic thinking and actual innovation happen only every so often, and the core of government work is based around risk-averse choices, which ends up squandering the useful work of scientists across the country.  The entire notion of professional duty as a scientist has become a shadow of what it once was.

Frankly, too much of the science that does get funded today is driven by purely self-serving reasons that masquerade as curiosity.  When science is purely self-defined curiosity it can rapidly become an intellectual sandbox and end up in alleys and side streets that offer no practical benefit to society.  It does, perhaps, provide intellectual growth, but the distance from societal needs and utility is often massive.  Furthermore, the legacy of purely curiosity-driven science is significantly overblown and is largely legend rather than fact.  Yes, some purely curiosity-driven science has made enormous leaps, but much of the best science in the last 50 years has been defined by big science pushing scientists to solve practical problems that comprise aspects of an overarching goal.  The fast Fourier transform (FFT) is a good example, as are most numerical methods for partial differential equations.  Each of these areas had most of its focus derived from some sort of project related to national defense.  Many, if not most, of the legendary curiosity-driven scientific discoveries are from before the modern era and are not relevant to today’s world.

The deeper issue is the lack of profoundly important and impactful goals for the science to feed into.  This is where big science comes in.  If we had some big important societal goals for the science to contribute toward, it would make all the difference in the world.  If science were counted on to solve problems rather than simply not screw something up, the world would be a better place.  There are a lot of goals worth working on; I will try to name a few as I close.

How do we deal with climate change? And the closely related issue of meeting our energy needs in a sustainable way?

How do we tame the Internet?

How can we maintain the supply of clean water needed for society?

How do we explore the Moon, Mars, and the rest of the solar system?

Can we preserve natural resources on the Earth by using extraterrestrial resources?

What can we do to halt the current mass extinction event? Or recover from it?

How can computer technology continue to advance given the physical limits we are rapidly approaching?

How can genetics be used to improve our health?

How can we feed ourselves in a sustainable way?

What is our strategy for nuclear weapons for the remainder of the century?

How should we transform our national defense into a sustainable, modern deterrent without bankrupting the country?

How can we invest appropriately in national infrastructure?

As you can see, we have more than enough things to worry about; we need to harness science to help solve our problems by unleashing it and providing it some big, audacious goals to work on.  Let’s use big science to get big results.  Unfortunately, a great deal blocks progress.  First and foremost is the public’s pervasive lack of trust and confidence in government primarily, and in science more generally.  The fact is that the combination of big, important goals, societal investment (i.e., government), and scientific talent is the route to prosperity.

Classic Papers: Lax-Wendroff (1960)


P. D. Lax and B. Wendroff (1960). “Systems of conservation laws.” Communications on Pure and Applied Mathematics 13(2): 217–237. doi:10.1002/cpa.3160130205

This paper is fundamental to much of CFD, and that is an understatement.

The paper is acknowledged to have made two essential contributions to the field: a critical theorem, and a basic method.  It also contains another nice method for shock dissipation that is not widely known.  I will discuss all three and try to put things into perspective.

As I recently said at a conference, “Read the classics, don’t just cite them.”  In today’s world of too many research papers, finding a gem is rare.  The older mines are full of diamonds, and the Lax-Wendroff paper is a massive one.  By reading the archival literature you can be sure to read something good, and often the older papers have results in them that are not fully appreciated.

Why would I say this?  Textbooks will tell you what is important in a paper, and you either believe what you are told or just read the relevant portions of the paper.  Sometimes later developments can show the older papers in a new light.  I believe this is the case with Lax-Wendroff.  By read, I mean really read, cover-to-cover.

1. The theorem has enormous implications for how methods are designed because of the benefits of using discrete conservation form.  In this way the numerical method mimics nature, and solutions have an intrinsically better chance of getting the right answer.

What does the theorem say?

Basically, if a numerical method for a first-order hyperbolic partial differential equation is in discrete conservation form and is consistent, then whenever its solutions converge they converge to a weak solution of the equation (stability is what makes that convergence possible in the first place).  Stable and consistent mean the solution is well-behaved numerically (doesn’t blow up) and approximates the differential equation.

If one follows this guidance, it is unabashedly good, that is, we want to get valid weak solutions, but there is a catch.  The catch is that there are literally infinitely many weak solutions, and most of them are not desirable.  What we actually desire is the correct, or physically valid, weak solution.  This means we want a solution that is an “entropy” solution, which is essentially the limiting solution of the hyperbolic PDE with vanishing dissipation.  This leads to the use of numerical dissipation (i.e., things like artificial viscosity) to create the entropy needed to select physically relevant solutions.
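
Stated a bit more concretely (in schematic notation of my own), the entropy solution is the vanishing-viscosity limit, and it can be characterized by an entropy inequality,

$$u^{\epsilon}_t + f(u^{\epsilon})_x = \epsilon\, u^{\epsilon}_{xx}, \qquad u = \lim_{\epsilon \to 0^{+}} u^{\epsilon}, \qquad \eta(u)_t + \psi(u)_x \le 0 \ \ \text{(in the weak sense)},$$

for convex entropy pairs $(\eta,\psi)$ with $\psi' = \eta' f'$.  A scheme with too little dissipation can converge to a perfectly valid weak solution that violates this inequality.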

The simplest hyperbolic PDE where this issue comes to a head is Burgers’ equation with an associated upwind discretization assuming the velocity is positive,

$u_j^{n+1} = u_j^n - \dfrac{\Delta t}{\Delta x}\left[\tfrac{1}{2}\left(u_j^n\right)^2 - \tfrac{1}{2}\left(u_{j-1}^n\right)^2\right]$,         (1)

which can be rewritten equivalently as

$u_j^{n+1} = u_j^n - \dfrac{\Delta t}{\Delta x}\, u_j^n\left(u_j^n - u_{j-1}^n\right)$.         (2)

Equations (1) and (2) are identical if and only if the solution is continuously differentiable (and in the limit where the error is small), which is not true at a discontinuity.  What the conservation form effectively means is that the amount of stuff (u) that leaves one cell on the mesh exactly enters the next cell.  This allows all the fluxes on the interior of the mesh to telescopically cancel, leaving only the boundary terms, and provides a direct approximation to the weak form of the PDE.  Thus the numerical approximation will exactly mimic the dynamics of a weak solution.  For this reason we want to solve (1) and not (2), because it is in conservation form.  With this theorem, the use of conservation form went from mere suggestion to an ironclad recommendation.
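
A small numerical experiment makes the point vivid.  The sketch below (my own illustrative Python, not code from the paper) advances a right-moving shock with the conservative update (1) and the non-conservative update (2).  The exact shock speed for states 1 and 0 is 1/2, and the conservative form tracks it; the non-conservative form leaves the shock sitting where it started.

```python
import numpy as np

# Illustrative sketch: a 1 -> 0 shock in Burgers' equation, advanced with the
# conservative upwind update (1) and the non-conservative update (2),
# assuming u >= 0 everywhere so upwinding looks to the left.
nx = 200
dx = 1.0 / nx
dt = 0.5 * dx          # CFL number of 0.5 for max(u) = 1
nsteps = 100           # final time t = 0.25; exact shock position is x = 0.375

x = (np.arange(nx) + 0.5) * dx
u_cons = np.where(x < 0.25, 1.0, 0.0)      # shock initially at x = 0.25
u_noncons = u_cons.copy()

for _ in range(nsteps):
    # (1) conservation form: difference of the fluxes f(u) = u^2 / 2
    f = 0.5 * u_cons**2
    u_cons[1:] -= dt / dx * (f[1:] - f[:-1])
    # (2) non-conservative (advective) form: u * du/dx, upwinded
    u_noncons[1:] -= dt / dx * u_noncons[1:] * (u_noncons[1:] - u_noncons[:-1])

# Locate the (smeared) shock by where the solution first drops below 1/2.
print("conservative shock position     ~", x[np.argmax(u_cons < 0.5)])
print("non-conservative shock position ~", x[np.argmax(u_noncons < 0.5)])
```

The conservative run puts the half-height point near x ≈ 0.375, while the non-conservative run never moves the jump at all, a wrong answer delivered with complete numerical confidence.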

For a general conservation law the conservation form gives,

$u_j^{n+1} = u_j^n - \dfrac{\Delta t}{\Delta x}\left(F_{j+1/2} - F_{j-1/2}\right)$,         (3)

which generalizes to higher-order methods depending on how the cell-edge fluxes $F_{j\pm1/2}$ are defined.  The last bit of detail is picking the right weak solution, since there are an infinite number of them.  This depends on numerical dissipation, which mimics the dynamics of vanishingly small physical viscosity.  The presence of viscosity produces entropy, as demanded by the second law of thermodynamics.  The inclusion of sufficient dissipation will choose a unique, physically relevant solution to the conservation law.

http://en.wikipedia.org/wiki/Lax–Wendroff_theorem

2. The method introduced in this paper is another centerpiece and icon of the numerical solution of hyperbolic PDEs.  It is a stable, second-order, centered method, which was developed because the stable centered method introduced earlier was first-order accurate and quite dissipative (this is the Lax-Friedrichs method from 1954).  The Lax-Wendroff method is derived using a Taylor series in time, where the time derivatives are systematically replaced with space derivatives.

For a simple PDE, say the linear advection equation, the procedure is quite straightforward,

$u_t + a\,u_x = 0, \qquad u^{n+1} = u^n + \Delta t\,u_t + \tfrac{1}{2}\Delta t^2\,u_{tt} + O(\Delta t^3),$           (4)

replace the time derivative with space derivatives,

$u^{n+1} = u^n - a\,\Delta t\,u_x + \tfrac{1}{2}a^2\Delta t^2\,u_{xx} + O(\Delta t^3),$         (5)

and discretize with centered differences.  This method is second-order accurate in space and time, but will create oscillations near discontinuities.  This provides the motivation for the third result from the paper, a shock dissipation, which is usually overlooked but deserves new appreciation.
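
For the linear advection case above, discretized with centered differences (notation mine, with $\nu = a\Delta t/\Delta x$), the single-step update is the familiar

$$u_j^{n+1} = u_j^n - \frac{\nu}{2}\left(u_{j+1}^n - u_{j-1}^n\right) + \frac{\nu^2}{2}\left(u_{j+1}^n - 2u_j^n + u_{j-1}^n\right),$$

which is stable for $|\nu| \le 1$.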

Several years later Richtmyer presented this as a two-step, predictor-corrector method that made for simple and efficient implementation (NCAR Technical Note 63-2).  The Lax-Wendroff method then rapidly became one of the methods of choice for explicit-in-time aerodynamic calculations.  Perhaps the most famous of these variants was developed by Bob MacCormack at NASA Ames/Stanford in 1969.  It modified Richtmyer’s approach by utilizing a sequence of forward-biased, followed by backward-biased, differences.
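
For a general flux $f(u)$, Richtmyer’s two-step version (again my notation, not a transcription of the note) reads

$$u_{j+1/2}^{n+1/2} = \tfrac{1}{2}\left(u_j^n + u_{j+1}^n\right) - \frac{\Delta t}{2\Delta x}\left[f(u_{j+1}^n) - f(u_j^n)\right], \qquad u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\left[f\!\left(u_{j+1/2}^{n+1/2}\right) - f\!\left(u_{j-1/2}^{n+1/2}\right)\right],$$

which avoids forming the flux Jacobian and reduces to the single-step scheme in the linear case.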

http://en.wikipedia.org/wiki/MacCormack_method

All of this set the stage for much bigger things in the mid-1970s: the introduction of genuinely nonlinear schemes where the finite difference stencil depends on the structure of the solution itself.

The procedure of replacing time derivatives with space derivatives has reappeared in the past few years, often as “Lax-Wendroff” time-differencing.  It can be very complex for systems of conservation laws, or for higher than second-order accurate differencing of genuinely nonlinear equations.  This is a consequence of the explosion of terms arising from the systematic application of the chain rule.  Nonetheless, it can be an effective alternative to a method-of-lines approach, yielding more satisfying CFL stability limits.

http://en.wikipedia.org/wiki/Lax–Wendroff_method

3. The dissipation introduced by Lax and Wendroff was needed because of the oscillatory solutions (Godunov’s theorem explains why, something to discuss in detail later).  Because of this they introduced a shock dissipation that looks, at first blush, much different from the Von Neumann-Richtmyer mechanism.  I will show briefly that it is basically identical in an approximate mathematical sense.  Hopefully this connection will allow the result to be more widely appreciated.

In the paper they introduce a shock dissipation based on the variation of the product of the density and the sound speed, c (i.e., the acoustic impedance; I have removed the constant and the mesh spacing from the formula, which appears on p. 232 of the paper),

[Equation (6): the Lax-Wendroff shock dissipation, built from the variation of ρc],

which I will show, through the application of the chain rule, to be equivalent to

[Equation (7): the equivalent form obtained via the chain rule],

where the second transformation has used the differential equations themselves (or the Rankine-Hugoniot conditions). This is just like the Von Neumann-Richtmyer approach if we take,

[Equation (8): the implied Von Neumann-Richtmyer viscosity coefficient.]

This then implies that the viscosity coefficient actually arises from the equation of state because this quantity is known as the fundamental derivative,

[Equation (9): the fundamental derivative, defined from the equation of state],

the only proviso is that the fundamental derivative is computed at constant entropy.

Bethe, Zeldovich, and Thompson each studied the fundamental derivative, but Menikoff and Plohr’s paper is the best available discussion of its impact on shock wave dynamics (Reviews of Modern Physics, Volume 61, 1989).  The coefficient is defined by the thermodynamics of the equation of state, as opposed to the mindset that the coefficient of the Von Neumann-Richtmyer viscosity is arbitrary.  It is not!  This coefficient is determined by the state of the material being simulated.  Thus the quadratic viscosity can be reinterpreted as

[Equation (10): the quadratic viscosity with the fundamental derivative as its coefficient.]

We hope that this observation will allow artificial viscosity to be selected without appealing to arbitrarily chosen constants, but rather tailored to the specific material being simulated.
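
In rough outline, and in my own schematic notation (the constants, and the exact symbols used in the paper, are not reproduced here), the chain from (6) to (10) plausibly runs along these lines: start from a dissipation built on the jump in the acoustic impedance, apply the chain rule, and use the weak-shock relation $\Delta\rho \approx (\rho/c)\,\Delta u$ from the Rankine-Hugoniot conditions,

$$q \;\sim\; \left|\Delta(\rho c)\right|\,\Delta u \;\approx\; \left.\frac{\partial(\rho c)}{\partial \rho}\right|_{s}\,\Delta\rho\,\Delta u \;\approx\; \left.\frac{\partial(\rho c)}{\partial \rho}\right|_{s}\,\frac{\rho}{c}\,(\Delta u)^{2}.$$

Matching this to the Von Neumann-Richtmyer form $q = C_Q\,\rho\,(\Delta u)^2$ gives

$$C_Q \;\sim\; \frac{1}{c}\left.\frac{\partial(\rho c)}{\partial \rho}\right|_{s} \;=\; 1 + \frac{\rho}{c}\left.\frac{\partial c}{\partial \rho}\right|_{s} \;=\; \mathcal{G}, \qquad\text{so}\qquad q \;\sim\; \mathcal{G}\,\rho\,(\Delta u)^{2},$$

with $\mathcal{G}$ the fundamental derivative evaluated at constant entropy.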

Historical footnotes:

Wendroff still works at Los Alamos (every day, as Misha Shashkov notes).  The Los Alamos report number is LA-2285, from 1958/1959.

I asked Burt Wendroff whether anyone working on the Lab’s non-conservative codes, based on the staggered-mesh, artificial viscosity approach of Von Neumann and Richtmyer, had inquired about the necessity of conservation form.  The answer was no, never.  So the theorem that shaped much of CFD outside the Lab had no impact inside the Lab, despite being developed there.  Los Alamos was literally the birthplace of CFD.  Its first baby was artificial viscosity and the staggered-mesh methods, and its second baby was driven by Lax and includes Lax-Wendroff.  The Lab orphaned the second baby, but it found good parents elsewhere and flourished.  It would be akin to the first child taking over the family business and living a comfortable, but somewhat unfulfilled, life, while the second child left home and found greatness.

Why would this be?  I believe that after Von Neumann and Richtmyer showed the way to make shock capturing work, the Lab turned its energy almost completely toward the development of the H-bomb, and the improved methods Lax pointed the way toward received little attention.  Stunningly, this pattern continued for decades.  In the past 10-15 years, the work of Lax has started to have an impact there, albeit 50 years after it should have.

I will note that the term finite volume scheme didn’t come into use until 1973 (in a paper by Rizzi), but the Lax-Wendroff method pushed the field in this direction.  Lax used finite volume ideas immediately, as published in a Los Alamos report from 1952 (LA-1205).  He called it a finite difference method, as was the fashion, but it is clearly a finite volume method, and the conservation form was immediately appreciated by Lax as being special.  It is notable that Von Neumann’s method is not in conservation form, and does not conserve in any simple way.

The authors are famous enough to have biographies in Wikipedia.

http://en.wikipedia.org/wiki/Peter_Lax

http://en.wikipedia.org/wiki/Burton_Wendroff

Thoughts about Multimat2013

Multimat2013, or My Biennial Geekfest.

In all honesty, most of you would consider every conference I go to a “Geekfest,” so that label is overdoing it.


https://multimat13.llnl.gov

Last week I attended a meeting of one of the communities I participate in actively.  This meeting goes by the catchy name of “Multimat,” which is shorthand for multimaterial computational hydrodynamics.  Most of the attendees work at their nation’s nuclear weapons labs, although we are getting broader attendance from countries and labs outside that special community.  This year’s meeting had really good energy despite the seemingly overwhelming budgetary constraints in both the United States and Europe.

Why was the energy at the meeting so good, when the funding picture is so bleak?  New ideas.

That’s it: the community has new ideas to work on, and it energizes everything.  What new ideas, you ask?  Cell-centered and high-order methods for Lagrangian hydrodynamics, with concomitant spillover from other areas of science such as optimization.

Let me explain why this might be important, given that cell-centered and high-order methods are commonplace in the aerospace community.  In fact, as I will discuss at length in a future post, these fields were intimately connected at their origins, but the ties have become estranged over the intervening decades.

Using cell-centered methods for Lagrangian hydrodynamics was long thought to be unworkable, with episodic failures over the preceding decades.  Lagrangian hydrodynamics has long followed the approach provided by the combination of John Von Neumann’s staggered-mesh method, published in 1944, and the essential artificial viscosity of Richtmyer, developed in 1948.*  The staggered mesh has material quantities at cell centers, but velocities (kinematics) at the cell edges.  Everything done within the confines of this community proceeded using this approach for decades (including in France, England, the Soviet Union/Russia, China, and Israel).  All of these methods are also either first- or second-order accurate.  Cell-centered approaches based upon Godunov’s method appear every so often, but are viewed as practical failures (for example, the Caveat code from Los Alamos).

A second historical footnote is that cell-centered methods started at Los Alamos shortly after the famous Von Neumann-Richtmyer paper appeared in 1950.  By 1952 Peter Lax had introduced a cell-centered finite difference method, which we know as the Lax-Friedrichs method (really a finite volume method, but that term didn’t exist until 1973).  Godunov’s method was developed independently between 1954 and 1956.

Multimat2013 took place over five days with about 50 talks and 25 posters.  In particular, I thought the first three days were fantastic.  As I noted, a great deal of the positive energy comes from the development of cell-centered Lagrangian methods, starting with the work of Després and Maire in France.  Similar methods have been developed from that foundation in England and the United States.  Further developments have been made to these methods with high-order approaches, including discontinuous Galerkin and high-order “traditional” finite elements.  This seems to have opened the door to high-order methods, an area that has been active worldwide since the 1980s.

This was, in part, the inspiration for my talk.  Recently, I attended the JRV symposium (http://dept.ku.edu/~cfdku/JRV.html), which preceded the AIAA CFD meeting in June.  JRV stands for Jameson, Roe and Van Leer.  Bram Van Leer (http://en.wikipedia.org/wiki/Bram_van_Leer) gave a talk that largely chided the community for referencing classical papers (he has several!) without really reading their content.  I decided to discuss one of Bram’s papers from that perspective (Journal of Computational Physics, Volume 23, 1977).  To make a long story short, the Multimat community has focused on one of the six methods in Bram’s paper.  In fact, the method has been given the name “Van Leer method” in the community of code developers represented at Multimat!  When I met Bram and relayed this, he found it off-putting, and it slightly horrified him.  This method is the worst of the six from some basic perspectives.  The other methods may gain a second life with new computers, but require some effort to get them up to snuff.  I focused to some degree on the fifth method, which has very nice properties, and, unbeknownst to many of the researchers, has been rediscovered without reference to the original work of Van Leer.  Perhaps this method can be the topic of another future post.

Being a multimaterial conference, techniques for evolving material interfaces are of natural interest.  Again, the conference featured a neat mix of traditional and modern approaches, with some trends.  Part of this included the use of optimization/minimization principles for solving particularly pernicious problems.  There is also notable improvement in level set techniques in this area.  I’ll note that Jamie Sethian once told me he thought this area provided some of the greatest challenges for level sets (in other words, level sets are ideally suited to the other problems they are used for).  Nonetheless, progress has been immense over the past 15 years.

Ann Mattsson gave a talk on our joint work on artificial viscosity.  It received mixed reviews, largely due to Ann’s most valuable characteristic.  She isn’t one of us, in that she is an accomplished atomic physicist and not a numerical hydrodynamics expert.  She took her unique professional perspective and tried to build artificial viscosity from the ground up.  She also started from the viewpoint of the less widely known first report on the method, written by Richtmyer in 1948.  These conditions conspire to create a functionally different perspective and a different method from the classical viscosity arising from the Von Neumann-Richtmyer paper.  I then took her results and put together an initial implementation of the method (I am probably significantly biased by the classical approach, which has had 65 years of use).  One other notable aspect of the Richtmyer report is that it was classified secret until 1993.  It is nothing but mathematical physics, and its classification only robbed us of a correct history of the lineage of shock capturing methods.

To be clear, Von Neumann conceived of shock capturing as a concept, but needed Richtmyer’s contribution to make it practical.

I also gave a poster on the goings-on and progress with the code I support.  This included the introduction of a meme to the proceedings to explain why things are difficult.  It turns out this is a common issue (not surprising at all!).


The last two days seemed a bit less exciting, with more traditional themes taking over.  That might have been simply a function of an over-celebration of my birthday, which occurred fortuitously on the night of the banquet (and the wonderful hospitality of my fellow travelers, leading to less sleep than I normally need).

The meeting has been held in Europe previously (Paris, Oxford, Prague, Pavia in Italy, and Arcachon in France), and it was very well executed.  We Americans had a high bar to meet, and I think the organizers from Lawrence Livermore Lab did very well.  The choice of San Francisco was inspired, and did a great deal to help make the meeting successful.  We managed to provide hospitality that didn’t embarrass the United States.**  So hats off to Rob Rieben, Mike Owen, and Doug Miller for a job well done.  They also had wonderful assistance from Darlene Henry and Jenny Kelley, who kept everything humming for the entire week.


Here is a picture of the sunset at the banquet.  Really beautiful (Hans and Vince got in the way).

* I will briefly note that artificial viscosity is an example of a method that regularizes a singularity, which leads to this blog’s name.

** I am often taken aback by the degree to which our European colleagues offer far greater hospitality than we Americans can.  We literally can’t match them.  It is a continual issue with working for the government, and a source of personal embarrassment.  We government contractors are required to be run “like a business” yet we offer hospitality that no business would allow.  Frankly, it is complete bullshit.

Approaching the Singularity


This is just the start of this endeavor.  Part of the reason is to provide a platform for opinions of mine that aren’t fit for work; the second part is to practice writing; and the third is to try this form of media.  Next time I’ll introduce myself in more detail, then get to some real content.

What does “the regularized singularity” mean?

Compressible fluid flow forms shock waves naturally.  Mathematically these are singular, meaning they represent a discontinuous change in the fluid.  In the real world, viscosity and heat conduction, being diffusion processes, actually make a shock wave continuous and help enforce the second law of thermodynamics (produce entropy).  Numerically, we often do the same thing as a way of computing the discontinuous solution on a computational grid.  This is a regularized singularity.  This is the beginning of fun and games.
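
As a concrete example of a regularized singularity (a standard result, written in my own notation), the viscous Burgers equation $u_t + u\,u_x = \nu\,u_{xx}$ has a smooth traveling-wave shock connecting a left state $u_L$ to a right state $u_R < u_L$,

$$u(x,t) = \frac{u_L + u_R}{2} - \frac{u_L - u_R}{2}\,\tanh\!\left(\frac{(u_L - u_R)\,(x - s\,t)}{4\,\nu}\right), \qquad s = \frac{u_L + u_R}{2}.$$

As the viscosity $\nu$ shrinks, the profile steepens toward a true discontinuity moving at the same speed $s$, which is exactly the game a shock capturing method plays on a grid: replace the singular jump with a thin, smooth, entropy-producing layer.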

More later. Bill