
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent


Weak and Strong Forms for Boundary Conditions

07 Thursday Nov 2013

Posted by Bill Rider in Uncategorized

≈ 4 Comments

There are two standard ways of thinking about differential equations: the strong form and the weak form.  The strong form is usually the more familiar, involving derivatives that all exist.  By the same token it is less useful, because derivatives don't always exist, or at least not in a familiar form (i.e., singularities form, exist, and evolve).  The weak form is less familiar, but more general and more generally useful.  The weak form involves integrals of the strong-form differential equations, and for this reason describes more general solutions.  We can integrate across the regions that undermine the strong solution and access the underlying well-behaved solution; functions with undefined derivatives still have perfectly well-defined integrals.  One key to the utility of the weak form is that it admits many more solutions than the associated strong form.  With these many solutions comes the additional burden of finding the solutions that are meaningful.  By meaningful I mean physical, or more plainly, those that can be found in the natural universe.*
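To make this concrete, the standard weak-form statement for a scalar conservation law u_t + f(u)_x = 0 integrates the equation against smooth test functions, moving all the derivatives off the (possibly discontinuous) solution:

```latex
% u is a weak solution of u_t + f(u)_x = 0 with data u(x,0) = u_0(x)
% if, for every smooth test function \phi with compact support,
\int_0^{\infty}\!\!\int_{-\infty}^{\infty}
  \bigl( u\,\partial_t \phi + f(u)\,\partial_x \phi \bigr)\, dx\, dt
  \;+\; \int_{-\infty}^{\infty} u_0(x)\,\phi(x,0)\, dx \;=\; 0 .
```

No derivative of u appears anywhere, which is exactly why solutions with shocks and other singularities remain admissible.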

Just as the differential equations can be cast in strong and weak forms, so can initial and boundary conditions, and just as with the differential equations these differences have consequences for how the solutions behave.  I have found that both ideas are useful in the context of successfully running problems.  If you run problems you have to deal with boundary conditions; boundary conditions are essential, and essentially ignored in most discussions.  Perhaps not coincidentally, the boundary conditions are the ugliest and kludgiest part of most codes.  If you want to look at bad code, look at how boundary conditions are implemented.

The classic approach for setting boundary conditions in a finite volume code is the use of "ghost" cells.  These ghost cells are added to the grid in layers outside the domain where the solution is computed.  The values in the ghost cells are set so that the desired boundary effect is achieved when the method's stencil is applied (i.e., as in a finite difference method).  For a finite volume method, the values in the cells are updated through the application of fluxes in and out of each cell.  This step is where issues occur: the fluxes are not necessarily the same fluxes one would get by applying the boundary condition correctly.  Ghost cells embody the strong form of the PDE in their mindset; they impose the continuously differentiable equation outside the domain.  One proviso is that a control volume method is inherently a weak-form concept, so there is a sort of mixed boundary condition once you decide how to compute the flux at the boundary.

Fluxing is where the weak boundary condition comes in.  In the weak boundary condition, the flux itself, or the process of the flux calculation, is used to impose the boundary condition.  If one is using a method based upon a Riemann solver, then the state entering the Riemann solution at the boundary from outside the domain imposes the boundary condition.  The best thing about this approach is that the values in the cells are updated in a manner consistent with the boundary condition.
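Here is a minimal sketch of a weak reflecting-wall boundary condition.  I use a Rusanov (local Lax-Friedrichs) flux as a simple stand-in for a full Riemann solver, and a gamma-law gas; the function names and states are my own choices for illustration, not anything from a particular code.  The point is that the exterior state fed to the solver carries the boundary condition, and by symmetry the resulting wall flux carries no mass or energy, only a momentum (pressure) flux.

```python
import numpy as np

GAMMA = 1.4  # gamma-law gas, an assumption for this sketch

def euler_flux(rho, u, p):
    """Physical flux of the 1-D Euler equations."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
    return np.array([rho * u, rho * u**2 + p, u * (E + p)])

def rusanov_flux(left, right):
    """Rusanov (local Lax-Friedrichs) flux, a very simple approximate
    Riemann solver: F = (F_L + F_R)/2 - (smax/2) (U_R - U_L)."""
    (rl, ul, pl), (rr, ur, pr) = left, right
    cl = np.sqrt(GAMMA * pl / rl)
    cr = np.sqrt(GAMMA * pr / rr)
    smax = max(abs(ul) + cl, abs(ur) + cr)
    El = pl / (GAMMA - 1.0) + 0.5 * rl * ul**2
    Er = pr / (GAMMA - 1.0) + 0.5 * rr * ur**2
    UL = np.array([rl, rl * ul, El])
    UR = np.array([rr, rr * ur, Er])
    return 0.5 * (euler_flux(*left) + euler_flux(*right)) - 0.5 * smax * (UR - UL)

def weak_wall_flux(interior):
    """Weak reflecting-wall BC: feed the flux routine an exterior state
    that mirrors the interior one with negated normal velocity."""
    rho, u, p = interior
    exterior = (rho, -u, p)
    return rusanov_flux(interior, exterior)

# Interior state extrapolated to the wall (density, velocity, pressure).
F = weak_wall_flux((1.0, 0.3, 1.0))
# Mass and energy fluxes cancel by symmetry; only momentum flux remains.
```

Because the mirrored Riemann problem is symmetric, the interface velocity is exactly zero, so the update next to the wall is automatically consistent with the no-penetration condition.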

If you are using a first-order method for integrating the equations of motion, the weak boundary condition is the only one that matters.  For most modern shock capturing methods, the first-order method is essential for producing quality results.  In this case the weak boundary condition determines the wave interaction at the boundary because it defines the state of the fluid at the boundary on the outside of the domain.

Just to be more concrete, I will provide a bit of detail on one particular boundary condition, a reflection (or inviscid wall).  At a wall the normal velocity is zero, but the velocity extrapolated to the boundary from cell centers is rarely identically zero.  The other variables all have zero gradient there.  The normal velocity in the ghost cell is the mirror of the interior value, taking on the negative of the value in the physical grid cells; the other variables take values identical to those inside the domain.  More specifically, for the strong boundary condition the first ghost cell takes the same values as the first interior cell, except for the normal velocity, which is the negative of the interior value.  If you have a second ghost cell, the same is applied using values from the second interior cell, and so on.  For the weak boundary condition, the same procedure is applied, but to the values at the boundary extrapolated from the last interior cell.
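The strong-form ghost-cell fill just described can be sketched in a few lines.  This assumes a particular array layout (ghost cells at the start of each array, wall at the left edge), which is my choice for illustration:

```python
import numpy as np

def fill_reflective_ghosts(rho, un, p, ng=2):
    """Strong-form reflecting-wall BC at the left edge: ghost cell k
    mirrors interior cell k (counting outward from the wall face).
    Scalars (density, pressure) copy; the normal velocity flips sign.
    Arrays include `ng` ghost cells at the start; filled in place."""
    for k in range(ng):
        ghost, interior = ng - 1 - k, ng + k  # mirror about the wall face
        rho[ghost] = rho[interior]
        p[ghost] = p[interior]
        un[ghost] = -un[interior]
    return rho, un, p

# Two ghost cells (indices 0, 1) and four interior cells (indices 2..5).
rho = np.array([0.0, 0.0, 1.0, 1.1, 1.2, 1.3])
un  = np.array([0.0, 0.0, 0.4, 0.3, 0.2, 0.1])
p   = np.array([0.0, 0.0, 2.0, 2.1, 2.2, 2.3])
fill_reflective_ghosts(rho, un, p)
```

After the fill, ghost cell 1 mirrors interior cell 2 and ghost cell 0 mirrors interior cell 3, with the normal velocities negated.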

In practice, I use both strong and weak boundary conditions to minimize problems in each step of the finite volume method.  In this manner the boundary conditions are reinforced by each step and the solution is more consistent with the desired conditions.  This is clearest when a scheme with a wide stencil is used where non-boundary cells access ghost cells.  In practice the discrepancy between the strongly and weakly imposed boundary conditions is small; however, when limiters and other nonlinear numerical mechanisms are used in a solver these differences can become substantial.

 If you want to read more I would suggest the chapter on boundary conditions from Culbert Laney’s excellent “Computational Gasdynamics” that touches upon some of these issues.

* Mathematicians can be interested in more general unphysical solutions and do lots of beautiful mathematics.  The utility of this work may be questionable, or at least deserves more scrutiny than it often gets.  Its beauty is a matter of aesthetics, much like art.  An example would be the existence of solutions to the incompressible Euler equations, where the lack of connection to the physical world is twofold**: incompressibility isn't entirely physical (infinite sound speeds! no thermodynamics), and setting viscosity equal to zero isn't physical either (physical solutions come from viscosity being positive definite, which is a different limiting process).  These distinctions seem to be lost in some math papers.

**Remarkably, the lack of connection to the physical world doesn't render things useless.  For many applications the unphysical approximation of incompressibility is useful.  Many engineering applications profitably use incompressible flow because it gets rid of sound waves that are not particularly important or useful to get right.  The same is true for inviscid flows such as potential flow, which can be used for aircraft design at a basic level.  The place where this lack of physical connection should be more worrisome is in physics.  Take turbulent flow, which is thought of as a great unsolved physics problem.  That turbulence is believed to be associated with the mathematics of incompressible flows, yet remains largely unsolved, should come as no surprise.  I might posit that the prospect of making progress on physical problems with unphysical approximations is dubious.

12 ways a Riemann solver is better than artificial viscosity, and 9 ways artificial viscosity is better than a Riemann solver

01 Friday Nov 2013

Posted by Bill Rider in Uncategorized

≈ 2 Comments

For the compressible Euler equations one requires some sort of dissipation mechanism to get stable numerical solutions.  Without dissipation you are limited to solving useless and uninteresting problems.  For almost any choice of initial data, shocks will automatically form, and dissipation must arise.  This is the "regularized" singularity of this blog's title.  The original dissipation is the Richtmyer-Von Neumann artificial viscosity.
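For readers who haven't seen it, the Richtmyer-Von Neumann viscosity in its common textbook form adds a pressure-like term q that is quadratic in the velocity jump and active only in compression.  The coefficient name and the staggered-grid layout below are my assumptions for the sketch:

```python
import numpy as np

def quadratic_artificial_viscosity(rho, u, Cq=2.0):
    """Von Neumann-Richtmyer quadratic viscosity on a 1-D staggered
    grid (velocities at cell edges, density at centers): for each cell,
    q = Cq^2 * rho * (du)^2 when the cell is compressing (du < 0),
    zero otherwise.  q is added to the pressure in the momentum and
    energy updates."""
    du = np.diff(u)  # velocity jump across each cell
    return np.where(du < 0.0, Cq**2 * rho * du**2, 0.0)

rho = np.array([1.0, 1.0, 1.0])
u = np.array([1.0, 0.5, 0.5, 1.0])  # leftmost cell is compressing
q = quadratic_artificial_viscosity(rho, u)
```

Only the compressing cell picks up a nonzero q; expansions are left untouched, which is what keeps the dissipation localized to shocks.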

A few years later (1951-1952) Lax recognized the sensibility of using the conservation form of the equations to compute shocks.  The result was the "Lax-Friedrichs" method, written as a finite difference scheme but in fact a finite volume method (with a sneaky Riemann solver hidden underneath, though no one really realized this for a long time).  Lax's method was first-order accurate and very dissipative (in fact one can show it is the most dissipative stable method!).  It is notable that Lax did his original work in Los Alamos, the same place where the Richtmyer-Von Neumann method arose.
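The Lax-Friedrichs update is simple enough to state in full.  Below is a sketch for a scalar conservation law on a periodic grid (the problem setup is my own); note how the scheme can be read either as a finite difference formula or as a finite volume update with a very dissipative flux:

```python
import numpy as np

def lax_friedrichs_step(u, f, dt_dx):
    """One Lax-Friedrichs step for u_t + f(u)_x = 0, periodic BCs:
        u_i^{n+1} = (u_{i-1} + u_{i+1})/2
                    - (dt/2dx) * (f(u_{i+1}) - f(u_{i-1})).
    Equivalently a finite-volume update with the LF flux
        F_{i+1/2} = (f_i + f_{i+1})/2 - (dx/2dt)(u_{i+1} - u_i)."""
    up = np.roll(u, -1)  # u_{i+1}
    um = np.roll(u, +1)  # u_{i-1}
    return 0.5 * (up + um) - 0.5 * dt_dx * (f(up) - f(um))

# Advect a square wave: f(u) = u, CFL number 0.8.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
u = u0.copy()
for _ in range(50):
    u = lax_friedrichs_step(u, lambda w: w, 0.8)
```

The conservation form guarantees the total integral of u is preserved exactly (up to roundoff), while the heavy dissipation visibly erodes the square wave's corners.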

Around the same time in Russia, Sergei Godunov was working on his PhD thesis.  He was asked to demonstrate a real computer that his institute would soon install by developing a method to solve the Euler equations.  Finding the methods provided to him unworkable and unstable, he developed something new.  This became Godunov's method, distinguished by using a Riemann solver to evolve the solution in time.  The Riemann problem was posed in Germany in 1859 as an idealized initial value problem, the evolution of two semi-infinite discontinuous states.  Godunov realized its solution could be used for a finite period of time (a time step) as a semi-analytical portion of a numerical method.

Things percolated along for a decade or so.  Artificial viscosity became the workhorse method at nuclear weapons labs across the world.  The basic method changed relatively little, save a handful of improvements in the early 50's in Los Alamos and extensions to two dimensions (this is almost certainly an overly harsh assessment).  Lax worked with Wendroff to introduce a second-order method that improved on Lax-Friedrichs, but gave oscillations too.  Lax also sorted out a general theory on the mathematics of hyperbolic conservation laws; this work formed much of the basis for his Abel prize.  Godunov, on the other hand, received little support and eventually quit working on fluid dynamics, moving into numerical linear algebra when he moved to Siberia in 1969.

Then everything changed.  At virtually the same time Godunov gave up, the importance of his work was rediscovered by two young researchers in the West.  In fact it was never lost in the West, as the coverage of Godunov's method in Richtmyer and Morton's book, "Difference Methods for Initial-Value Problems," attests; it was in the Soviet Union that he was really ignored.  First, Jay Boris overcame Godunov's barrier theorem with flux-corrected transport; then Bram Van Leer did the same while also embracing Godunov's method as a framework.  Boris' papers don't actually mention Godunov, but upon reflection it is clear he was aware of Godunov's work and its importance.  Finally, Kolgan did little-appreciated work that also extended Godunov's method directly, but it went largely unnoticed until Van Leer's recent efforts to bring it to the attention of the international community.  Unfortunately Kolgan died before his work was discovered or completed.  Boris' work was popular in the (plasma) physics community, while Van Leer's approach had more traction with astrophysicists and aerospace engineers.  With Van Leer's work, Riemann solvers became an important part of numerical methods and spawned what James Quirk described as a "cottage industry" in their creation.

Artificial viscosity kept advancing at the labs, moving to three dimensions and incorporating innovations from other fields, such as the limiters that Boris and Van Leer used to overcome Godunov's barrier theorem.  More recently, a couple of brilliant French researchers (Bruno Depres and Pierre-Henri Maire) have brought Riemann solvers to multidimensional Lagrangian codes, basically allowing Godunov's method to improve upon the methods that evolved from Richtmyer and Von Neumann's original efforts.

It is largely a matter of philosophy and the same results can be achieved via either approach.  Nonetheless, the philosophy for developing a method can greatly influence the method developed and the results achieved.  Think about the relative success for finite volume and finite element methods in inventing new approaches for compressible flows.  The finite volume philosophy has been far more successful even though finite element aficionados would claim that they can be completely equivalent.  The same applies here, philosophy matters.

So what methodology is better?  Neither, but I do favor on balance Riemann solvers.  Why?  Here are 12 reasons:

  1. Riemann solvers force you to analyze the equations deeply; this is a good thing, right?  The better your analysis, the better your Riemann solver and the better your method will be.
  2. Riemann solvers have exact solutions, and a sequence of well-controlled approximations.
  3. Riemann solvers allow one to control the dissipation quite well using deep knowledge of the mathematics and the physics.
  4. Riemann solvers have a deep level of nonlinearity.
  5. Some of the approximations are really cool.  The paper by Harten, Lax and Van Leer (HLL, SIAM Review 1983) is wonderful and the solvers derived from that work have a wonderful form.
  6. Multi-dimensional approaches are around the corner, courtesy of the French work mentioned at the end of the narrative.
  7. You can use results from artificial viscosity to make your Riemann solver better, especially in the case of strong shocks (see my Computers & Fluids paper from 1999).
  8. You can design your level of detail and dissipation wave-by-wave with enormous flexibility, meaning you can ignore details, but in a mindful, planned manner.
  9. The Riemann solver is abstracted from the details of the mesh (it is mesh independent).  This is a huge problem for artificial viscosity, where most forms make an explicit choice of length scale.
  10. All first-order monotone schemes can be placed into a single unified framework.  Godunov's method is the least dissipative one, and Lax-Friedrichs is the most dissipative one.
  11. Riemann solvers work well in both physics and engineering settings.
  12. Riemann solvers naturally work with the thermodynamics of real materials, while artificial viscosity appears to have arbitrary coefficients that can be tuned by users (not a good thing!).  Riemann solver work can provide a clear story about the true, non-arbitrary nature of these coefficients (ironically recognized for the first time by another Soviet scientist, Kuropatenko).
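As a taste of the "wonderful form" of the HLL solvers mentioned in point 5, here is a sketch of the two-wave HLL flux for the 1-D Euler equations.  The simple acoustic wave-speed bounds are my own choice; the HLL framework deliberately leaves the wave-speed estimates open:

```python
import numpy as np

GAMMA = 1.4  # gamma-law gas, an assumption for this sketch

def cons_and_flux(rho, u, p):
    """Conserved variables U and physical flux F for the 1-D Euler
    equations with a gamma-law equation of state."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
    U = np.array([rho, rho * u, E])
    F = np.array([rho * u, rho * u**2 + p, u * (E + p)])
    return U, F

def hll_flux(left, right):
    """HLL approximate Riemann solver (after Harten, Lax & van Leer,
    1983) with simple acoustic bounds S_L, S_R on the wave speeds."""
    (rl, ul, pl), (rr, ur, pr) = left, right
    UL, FL = cons_and_flux(rl, ul, pl)
    UR, FR = cons_and_flux(rr, ur, pr)
    cl, cr = np.sqrt(GAMMA * pl / rl), np.sqrt(GAMMA * pr / rr)
    SL = min(ul - cl, ur - cr)
    SR = max(ul + cl, ur + cr)
    if SL >= 0.0:
        return FL  # all waves move right: pure upwinding on the left state
    if SR <= 0.0:
        return FR  # all waves move left: pure upwinding on the right state
    # Subsonic case: the single intermediate HLL state supplies the flux.
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Sanity check: with equal left and right states the HLL flux must
# collapse to the exact physical flux.
state = (1.0, 0.1, 1.0)
F = hll_flux(state, state)
```

The dissipation enters only through the wave-speed bounds, which is exactly the wave-by-wave control over dissipation that point 8 is talking about.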

Artificial viscosity has advantages too; here is my list of these:

  1. The nonlinearity of the dissipation for shocks is clear from the get-go, with the quadratic viscosity arising from an analysis of the Riemann solution by Richtmyer.   He didn’t recognize this, but it is true.
  2. The method extends to multiple dimensions sensibly
  3. They perform well with multiple physical effects
  4. They have a low computational overhead
  5. They can use results from Riemann solvers to make themselves better (point 12 above).
  6. They are robust
  7. They have a direct analogy to physical dissipation and you have control over the magnitude of dissipation directly through the coefficients.
  8. The basic concepts can be extended to hyperviscosity, which is a viscosity consistent with a higher-order solver.
  9. Artificial viscosity was one of the first turbulence models!

In the final analysis you should probably use a combination.  My preferred mode of operation is to make the Riemann solver primary and use artificial viscosity to clean up loose ends.

13 good reasons to verify your code and calculations

26 Saturday Oct 2013

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Code and calculation verification are an overlooked step in assuring the quality of codes and calculations.  Perhaps people don't have enough good reasons to do this work.  It can often be labor intensive, frustrating and disappointing.  That is a really awful sales pitch!  I really believe in doing this sort of work, but the need to provide better reasoning is clear.  I'll get at the core of the issue right away, and then expound on the ancillary benefits.

In the final analysis, not doing code verification is simply asking for trouble.  Doing code verification well is another matter altogether, but doing it well is not as necessary as simply doing it.  Many developers, students, and researchers simply compare solutions to benchmarks by visual means (i.e., comparing to the benchmark solution in the infamous eyeball norm, or the "viewgraph" norm if it's presented).  This is better than nothing, but not by much at all.  It is certainly easier and more convenient to simply not verify calculations.

Very few of us actually create codes that are free of bugs (in fact I would posit none of us). To not verify is to commit an act of extreme hubris.  Nevertheless, admitting one’s intrinsic fallibility is difficult; dealing with the impact of one’s fallibility is inevitable.

So, without further philosophizing, here is my list:

  1. Don’t assert that your code is correct; prove it’s correct.  This is the scientific method; respect it, and apply it appropriately.  Treat a code as you would an experiment, and apply many of the same procedures and measures to ensure quality results.
  2. Mistakes found later in the code development process are harder and more expensive to fix.  There is vast evidence of this and I recommend reading the material on the Capability Maturity Model for Software, or better yet Steve McConnell’s book, “Code Complete,” which is a masterpiece.
  3. Once completed (and you aren't ever really done), you will be confident in how your code will perform.  You will be confident that you've done things correctly.  You can project this confidence to others, along with the results you and your handiwork produce.
  4. You will find the problems with your code before someone else, like a code customer or user, does.  As much as we dislike finding problems, someone else finding our problems is more painful.
  5. Verification will allow you to firmly establish the accuracy properties of your method if you look at errors and convergence rates.  You might have to confront the theory behind your method and problem, and this might help you learn something.  All of this is really good for you.
  6. Doing the same thing with your calculations will allow you to understand the error associated with solving the equation approximately.  Again, confront the available theory, or its lack of availability.  It will provide you much needed humility.
  7. It is embarrassing when a numerical error is influencing or hiding a model deficiency.  Worse yet, it is badly conducted science and engineering.  Don’t be responsible for more embarrassments.
  8. When you are calibrating your model (admit it, you do it!), you might just be calibrating the model to the numerical error.  You want to model physics, not truncation error, right?
  9. Verification results will force you to confront really deep questions that will ultimately make you a better scientist or engineer.  Science is about asking good questions, and verification is about asking good numerical questions.
  10. You are a professional, right?  Doing verification is part of due diligence; it is being the best you can be.  Adopting a personal quality mentality is important in one's development.  If you are in the business of numerical solutions, verification is a key part of the quality arsenal.
  11. You won't really understand your code until you look really hard at the results, and verification helps you understand the details you are examining.  You will look at your work more deeply than before, and that is a good thing.
  12. Conducting real error analysis can help you make sure, and prove, that your mesh is adequate for the problem you are solving.  Just because a mesh looks good, or looks like the thing you're simulating, isn't evidence that it actually allows you to simulate that thing.
  13. It is 2013, so come on!  Please do things in a way that reflects a modern view of how to do computational science.

Robust Verification

18 Friday Oct 2013

Posted by Bill Rider in Uncategorized

≈ Leave a comment

The excuses for not doing verification of numerical solutions are myriad.  One of the best, although it is usually unstated, is that verification just doesn't work all the time.  The results are not "robust" even though the scientist "knows" the results are OK.  A couple of things happen to produce this outcome:

  1. The results are actually not OK,
  2. The mesh isn't in the "asymptotic" range of convergence, or
  3. The analysis of the verification is susceptible to numerical problems.

I’ll comment on each of these in turn and suggest some changes to mindset and analysis approach.

First, I'd like to rant about a few more pernicious ways people avoid verification.  These are especially bad because the people involved often think they are doing verification.  For example, let's say you have an adaptive mesh code (this happens without AMR too, but it is more common with AMR because changing resolution is so easy).  You get solutions at several resolutions, and you "eyeball" the solutions.  The quantity you or your customer cares about isn't changing much as you refine the mesh.  You declare victory.

What have you done?  It is not verification; it is a mesh sensitivity study.  You could actually have a convergent result, but you don't have proof.  The solution could be staying the same simply because it is insensitive to the mesh.  What would verification bring to the table?  It would provide a convergence rate and an error estimate.  In fact, error estimates are the true core of verification.  The error estimate is a much stronger statement about the influence of the mesh on your solution, and you almost never see it.
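The convergence rate and error estimate can be extracted from solutions on three meshes by Richardson extrapolation.  A sketch, under the assumption of a single dominant error term (the synthetic data at the bottom is mine, purely to exercise the formula):

```python
import math

def richardson(S_h, S_h2, S_h4, r=2.0):
    """Observed convergence rate and error estimate from solutions on
    meshes of size h, h/r, h/r^2, assuming one dominant error term,
    S(h) = S* + C h^p.  Returns (rate p, error estimate on the finest
    mesh)."""
    p = math.log(abs(S_h - S_h2) / abs(S_h2 - S_h4)) / math.log(r)
    err_finest = abs(S_h2 - S_h4) / (r**p - 1.0)
    return p, err_finest

# Synthetic mesh study: exact answer S* = 1 with a second-order error.
S = lambda h: 1.0 + 0.5 * h**2
p, err = richardson(S(0.1), S(0.05), S(0.025))
```

For this clean data the recovered rate is 2 and the error estimate equals the true error on the finest mesh; real mesh studies are messier, which is exactly the subject of the rest of the post.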

Why?  Quite often the act of computing the error estimate actually undermines your faith in the seemingly wonderful calculation and leaves you with questions you can't answer.  It is much easier to simply exist in comfortable ignorance and believe that your calculations are awesome.  This state of ignorance is the most common way that people almost do verification of calculations, but fail at the final hurdle.  Even at the highest levels of achievement in computational science, the last two paragraphs describe the state of affairs.  I think it's rather pathetic.

OK, rant off.

Let's get the zeroth point out of the way first.  Verification is first and foremost about estimating numerical errors.  This counters the oft-stated purpose associated with "order" verification, where the rate of convergence for a numerical method is computed.  Order verification is used exclusively in code verification, where a solution known to be sufficiently differentiable is used to demonstrate that a method achieves the right rate of convergence as a function of the mesh parameters.  It is an essential part of the verification repertoire, but it overshadows a more important reason to verify: error quantification, whether true or estimated.

The first point is the reason for doing verification in the first place.   You want to make sure that you understand how large the impact of numerical error is on your numerical solution.  If the numerical errors are large they can overwhelm the modeling you are interested in doing.  If the numerical errors grow as a function of the mesh parameters, something is wrong.  It could be the code, or it could be the model, or the mesh, but something is amiss and the solution isn’t trustworthy.  If it isn’t checked, you don’t know.

The second point is much more subtle, so let's name the elephant in the room: meshes used in practical numerical calculations are almost never asymptotic.  This is true even in the case of what is generously called "direct numerical simulation (DNS)," where it is claimed that the numerical effects are small.  Rarely is there an error estimate in sight.  I've actually looked into this; the errors are much larger than scientists would have you believe, and the rates of convergence are clearly not asymptotic.

All of this is bad enough, but there is more, and it is not easy to understand.  Unless the mesh parameters are small, the rate of convergence will systematically deviate from the theoretical rate in a non-trivial way.  Depending on the size of the parameter and the nature of the equation being solved, the observed convergence rate could be smaller or larger than expected.  All of this can be analyzed for ideal equations such as a linear ordinary differential equation; depending on the details of the ODE method and the solution, one can get radically different rates of convergence.
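This deviation is easy to see even in the simplest possible setting.  A sketch with forward Euler on y' = -y (my choice of model problem): the observed rate between coarse meshes sits noticeably above the asymptotic first order, and only approaches 1 as the step size shrinks.

```python
import math

def euler_error(h, t_end=1.0):
    """Global error of forward Euler for y' = -y, y(0) = 1, at t_end."""
    n = round(t_end / h)
    y = 1.0
    for _ in range(n):
        y += h * (-y)  # forward Euler step
    return abs(y - math.exp(-t_end))

def observed_rate(h):
    """Observed convergence rate between step sizes h and h/2."""
    return math.log2(euler_error(h) / euler_error(h / 2))

coarse_rate = observed_rate(0.5)      # well above the theoretical 1
fine_rate = observed_rate(1.0 / 64)   # close to the theoretical 1
```

Neither rate is "wrong"; the coarse one simply reflects the higher-order error terms that have not yet died away, which is exactly what happens on practical, non-asymptotic meshes.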

The third point is the analysis of the numerical solutions.  Usually we just take our sequence of solutions and apply standard regression to solve for the convergence rate and an estimate of the converged solution.  This simple approach hides many unstated assumptions that we shouldn't make without at least thinking about them.  Standard least squares relies on a strong assumption about the data and its errors to begin with: it assumes the residuals from the regression are normally distributed (i.e., Gaussian).  Very little about numerical error leads one to believe this is true.  Perhaps in the case where the errors are dominated by dissipative dynamics a Gaussian would be plausible, but again this analysis itself only holds in the limit where the mesh is asymptotic.  If one is granted the luxury of analyzing such data, the analysis methodology, frankly, matters little.

What would I suggest as an alternative?

One of the problems that plagues verification is bogus results associated with bad data, changes in convergence behavior, or outright failure of the (nonlinear) regression.  Any of these should be treated as an outlier and disregarded.  Most common outlier analysis itself relies on the assumption of Gaussian statistics; again, making this assumption is unwarranted.  Standard statistics using the mean and standard deviation carry the same problem.  Instead one should use median statistics, which can withstand up to half the data being outliers without trouble.  This is the definition of robust, and this is what you should use.

Do not use a single regression to analyze the data; instead do many regressions using different formulations of the regression problem, and apply constraints to the solution using your knowledge.  If you have the luxury of many mesh solutions, run the regression over various subsets of your data.  For example, you may know that a certain quantity is positive, or better yet must take on values between certain limits.  The same applies to convergence rates: you generally have an idea from analysis of what would be reasonable; e.g., a first-order method might converge at a rate between one-half and two.  Use these constraints to make your regression fits better and more likely to produce results you can use.  Make sure you throw out results claiming that your second-order method produces 25th-order convergence; that is simply numerical garbage and there is no excuse for it.

At the end of this you will have a set of numerical estimates for the error and convergence rate.  Use median statistics to choose the best result and the variation from the best so that outliers along the way are disregarded naturally.
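One simple robust variant of the above: fit the convergence rate from every pair of mesh levels and take the median of the pairwise slopes, so that a single corrupted level cannot drag the answer.  The synthetic data and names below are mine, purely to show the mechanics:

```python
import math
from itertools import combinations
from statistics import median

def pairwise_rates(hs, errors):
    """Convergence rate from every pair of mesh levels:
    p_ij = log(e_i / e_j) / log(h_i / h_j)."""
    return [math.log(ei / ej) / math.log(hi / hj)
            for (hi, ei), (hj, ej) in combinations(zip(hs, errors), 2)]

hs = [0.1, 0.05, 0.025, 0.0125, 0.00625]
errors = [2.0 * h**2 for h in hs]  # a clean second-order sequence...
errors[2] *= 30.0                  # ...with one badly corrupted level

rates = pairwise_rates(hs, errors)
robust_rate = median(rates)        # the outlier cannot move the median
```

A least-squares fit through all five levels would be pulled far from 2 by the corrupted level; the median of the pairwise rates recovers it, because the bad level contaminates fewer than half of the pairs.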

Verification should (almost) always produce a useful result and injecting ideas from robust statistics can do this.

Going beyond this point leads us to some really beautiful mathematics that is hot property now (L1 norms leading to compressed sensing, robust statistics, ...).  The key is not using the standard statistical toolbox without at least thinking about it and justifying its use.  Generally in verification work it is not justifiable.  For general use a constrained L1 regression would be the starting point; maybe it will be more generally available soon.

Why do verification?

10 Thursday Oct 2013

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Verification is an important activity for numerical computation that is too seldom done.  Almost anybody with a lick of common sense will acknowledge how important and vital it is.  On the other hand, very few actually do it.

Why?

 It is where arrogant hubris intersects laziness.  It is where theory goes to die.  Verification produces an unequivocal measure of your work.  Sometimes the result you get is unpleasant and sends you back to the drawing board.  It basically checks whether you are full of shit or not.  Most people who are full of shit would rather not admit it. 

 What is actually worse is to do verification of someone else’s work.  This puts one in the position of telling someone else that they are full of shit.  This never makes you popular or welcome.  For this reason I’ve compared V&V work to the “dark side” of the force, and by virtue of this analogy, myself to a Sith lord.   At least it’s a pretty cool bad guy.

Areas where I've worked a good bit are methods for shock physics.  There is a standard problem almost everyone working on shock physics uses to test their method: Sod's shock tube.  Gary Sod introduced his problem in a 1978 paper in the Journal of Computational Physics to compare existing methods.  By modern standards the methods available in 1978 were all bad.  The results were only presented as graphical comparisons between the numerical solutions and the exact solution.  This is important to be sure, but so much more could have been done.  Sod's problem is a Riemann problem, which can be solved and evaluated exactly (more precisely, solved accurately using Newton's method).  The cool thing about an exact solution is that you can compute the errors in your numerical solution.  Unfortunately Sod didn't do this.

 Again, why?

Instead he reported run times for the methods, as if to say, "all of these suck equally, so what matters is getting the bad answer the fastest."  A more technical response to this query is that shock-captured solutions to problems with discontinuous solutions (Sod's problem has a rarefaction, a contact discontinuity and a shock wave) are intrinsically limited to first-order accuracy.  At a detailed level the solution to Sod's shock tube is typically less than first-order accurate, because the contact discontinuity converges at less than first order and eventually this dominates the numerical error under mesh refinement (covered in a paper by Banks, Aslam, and Rider (me)).  The mindset is "well, it's going to be first order anyway, so what's the point of computing convergence rates?"

I'm going to tell you what the point is.  The point is that the magnitude of the errors actually differs a lot even among a whole bunch of methods of similar construction and accuracy.  Not every method is created equal, and there is a good bit of difference between the performance of Godunov's original method, Van Leer's method, Colella's refined PLM method, WENO, PPM, etc.  Ultimately what someone really cares about is the overall accuracy of the solution for a given amount of computational resources, as well as scheme robustness and extensibility.

So going back to reality: people continued to use Sod's problem and presented results with a graphical comparison, usually on a single grid, and a comparison of run times.  I never saw anyone present the numerical error.  People would only provide numerical errors for problems where they expected their method to achieve its full order of accuracy, i.e., smooth problems like advecting a Gaussian or a sine wave.  The only exception came from the flux-corrected transport world, where numerical accuracy was presented for the propagation of square waves, but not for systems of equations.

In 2004-2005 Jeff Greenough and I attacked this low-hanging fruit. Jeff's lab was using a weighted ENO (WENO) code developed at Brown University (where the inventor of WENO is a professor). Jeff also worked on a code that used Colella's PLM method (basically an improved version of Van Leer's second-order Godunov method). I had my own implementations of both methods in a common code base. We undertook a head-to-head comparison of the two methods with the following question in mind: "which method is the most efficient for different classes of problems?" We published the answer in the Journal of Computational Physics.

 Before giving the answer to the question a little more background is needed.  WENO methods are enormously popular in the research community with the fifth-order method being the archetype.  I would say that the implementation of a WENO method is much more elegant and beautiful than the second-order Godunov method.  It is compact and modular, and actually fun to implement.   It is also expensive.  

We used a set of 1-D problems: Sod's shock tube, a stronger shock tube, a popular blast wave problem, and a shock hitting smooth density perturbations. For the first three problems, the 2nd-order method either outperformed or was even with the 5th-order method at the same mesh resolution. On the final problem WENO was better, until we asked the efficiency question. Accounting for the runtime needed to achieve the same accuracy, the 2nd-order method outperformed WENO on every problem. If the problem was shock dominated, the difference in efficiency wasn't even close. WENO was only competitive on the problem with lots of smooth structure. On the advection of a Gaussian pulse, WENO wins at almost any level of error.

This raises a deeper question: why would a second-order method be more efficient?

This answer gets to the heart of method design. The 2nd-order method cheaply introduces some elements of high-order methods, i.e., it uses 4th-order approximations to the slope. Its nonlinear stability mechanism is cheap, and it uses a single-step time integrator rather than a multistep integrator. The single-step integrator permits a more generous time step size. The combination of the larger time step, single-step integrator, and simpler nonlinear stability mechanism equates to a factor of six in runtime. In addition, to accommodate the fifth-order formal accuracy, the WENO method actually increases the dissipation used near shocks, which increases the error.

 The bottom line is that verification is about asking good questions, answering these questions, and then going back to the well… it is in essence the scientific method. 

There is nothing artificial about “Artificial Viscosity”

04 Friday Oct 2013

Posted by Bill Rider in Uncategorized

≈ 3 Comments

The name of the original successful shock capturing method is "artificial viscosity," and it is terrible. John Von Neumann and Robert Richtmyer were both mathematical geniuses, but perhaps, at least in this case, poor at marketing. To be fair, they struggled with the name, and artificial viscosity was better than "mock" or "fictitious" viscosity, which Richtmyer considered in his earlier Los Alamos reports (LA-671 and LA-699). Nonetheless, we are left with this less than ideal name.

I’ve often wished I could replace this name with “shock viscosity” or better “shock dissipation” because the impact of artificial viscosity is utterly real.  It is physical and necessary to compute the solutions of shocks automatically without resolving the ridiculously small length and time scales associated with the physical dissipation.  In a very true sense, the magnitude of viscosity used in artificial viscosity is the correct amount of dissipation for the manner in which the shock is being represented.
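To make the discussion concrete, here is a minimal sketch of a quadratic, compression-only artificial viscosity in the spirit of Von Neumann and Richtmyer on a 1-D staggered mesh (the function name and the coefficient value are my own illustrative choices, not the original formulation verbatim):

```python
import numpy as np

def vnr_viscosity(rho, u, c_quad=2.0):
    """Quadratic artificial viscosity, Von Neumann-Richtmyer style (1-D sketch).

    rho: zone-centered densities, shape (N,)
    u:   node-centered velocities, shape (N+1,)
    Returns a zone-centered, pressure-like term q that is nonzero only
    in compression (du < 0), as the classical formulation prescribes.
    """
    du = np.diff(u)              # velocity jump across each zone
    q = c_quad * rho * du**2     # quadratic in the velocity jump
    q[du >= 0.0] = 0.0           # no dissipation in expansion
    return q

# A zone being compressed picks up dissipation; an expanding one does not.
rho = np.array([1.0, 1.0])
u = np.array([1.0, 0.0, 0.5])    # zone 0 compresses, zone 1 expands
print(vnr_viscosity(rho, u))     # -> [2. 0.]
```

The q term is simply added to the pressure in the momentum and energy updates, which is what spreads the shock over a few zones and produces the entropy the physics demands.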

The same issue comes about in the models of turbulence.  Again, the effects of nonlinearity enormously augment the impact of physical dissipation.  The nonlinearity is associated with complex small-scale structures whose details are unimportant to the large-scale evolution of the flow.  When simulating these circumstances numerically we are often only interested in the large-scale flow field and we can use an enhanced (nonlinear) numerical viscosity to stably compute the flow.  This approach in both shocks and turbulence has been ridiculously successful.  In turbulence this approach goes by the name implicit large eddy simulation.  It has been extremely controversial, but in the last decade this approach has grown in acceptance.

The key to this whole line of thinking is that dissipation is essential for physical behavior of numerical simulations.  While too much dissipation is undesirable, too little dissipation is a disaster.  The most dangerous situation is when a simulation is stable (that is, runs to completion), and produces seemingly plausible results, but has less dissipation than physically called for.  In this case the simulation will produce an “entropy-violating” solution.  In other words, the result will be unphysical, that is, not achievable in reality.  This is truly dangerous and far less desirable than the physical, but overly dissipated result (IMHO).  I’ve often applied a maxim to simulations, “when in doubt, diffuse it out”.   In other words, dissipation while not ideal (a play on words!) is better than too little dissipation, which allows physically unachievable solutions to persist.

Too often numerical practitioners seek to remove numerical dissipation without being mindful of the delicate balance between excessive dissipation and the dissipation necessary to guarantee physically relevant (or admissible) results. It is a poorly appreciated aspect of nonlinear solutions for physical systems that the truly physical solutions are not those that have no dissipation, but rather those that produce a finite amount of dissipation. This finite amount is determined by the large-scale variations in the flow, and is proportional to the third power of these variations.

Very similar scaling laws exist for ideal incompressible and compressible flows.  In the incompressible case, Kolmogorov discovered the scaling law in the 1940’s in the form of his refined similarity hypothesis, or the “4/5ths” law.  Remarkably, Kolmogorov did this work in the middle of the Nazi invasion of the Soviet Union, and during the darkest days of the war for the Soviets.  At nearly the same time in the United States, Hans Bethe discovered a similar scaling law for shock waves.  Both can be written in stunningly similar forms where the time rate of change of kinetic energy due to dissipative processes (or change in entropy), is proportional to the third power of large-scale velocity differences, and completely independent of the precise value of viscosity. 
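The two results can be written in the following textbook forms (these are the standard statements as I recall them, not quotations from either original work):

```latex
% Kolmogorov's 4/5ths law: the dissipation rate is set by
% large-scale velocity differences, independent of viscosity.
\left\langle \left[\delta u_\parallel(r)\right]^3 \right\rangle
  = -\tfrac{4}{5}\,\varepsilon\, r
\quad\Longrightarrow\quad
\varepsilon \sim \frac{|\delta u|^3}{r}.

% Weak-shock entropy production: third order in the jump across the
% shock, with no explicit dependence on the viscosity coefficient.
\Delta s \approx \frac{1}{12\,T}
  \left(\frac{\partial^2 p}{\partial v^2}\right)_{\!s} (\Delta v)^3 .
```

Both say the same thing: the rate of dissipation is fixed by cubes of large-scale differences, not by the small value of the viscosity itself.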

These scaling laws are responsible for the success of artificial viscosity and large eddy simulation. In fact, the origin of large eddy simulation is artificial viscosity. The original suggestion that led to the development of the first large eddy simulation by Smagorinsky was made by Von Neumann's collaborator, Jule Charney, in 1956 to remove numerical oscillations from early weather simulations. Smagorinsky implemented this approach in three dimensions in what became the first large eddy simulation and the first global circulation model. This work was the origin of a major theme in turbulence modeling, and in climate modeling. The depth of this common origin has rarely been elaborated upon, but I believe the commonality has profound implications.

At the very least, it is important to realize that these different fields have a common origin.  Dissipation isn’t something to be avoided at all costs, but rather something to manage carefully.  Better yet, dissipation is something to be modeled carefully, and controlled.  In numerical simulations, stability is the most important thing because without stability, the simulation is worthless.  The second priority is producing a physically meaningful (or realizable) simulation.  In other words, we want a simulation that produces a result that matches a situation achievable in the real world.  The last condition is accuracy.  We want a solution that is as accurate as possible, without sacrificing the previous two conditions. 

Too often, the researcher gets hung up on these conditions in the wrong order (like prizing accuracy above all else).   The practitioner applying simulations in an engineering context often does not prize accuracy enough.   The goal is to apply these constraints in balance and in the right order (stability then realizability then accuracy).  Getting the order and balance right is the key to high quality simulations.

Big Science

27 Friday Sep 2013

Posted by Bill Rider in Uncategorized

≈ 3 Comments

“Big Science”

After providing a deep technical discussion of an important paper last time, I thought a complete change of pace might be welcome.  I’m going to talk about “big science” and why it is good science.  The standard thought about science today is that small-scale researcher-driven (i.e., curiosity-driven) science is “good science” and “big science” is bad science.  I am going to spend some time explaining why I think this is wrong on several accounts.

First, let's define big science with some examples. The archetype of big science is the Manhattan Project, which, if you aren't aware, was the effort by the United States (and Great Britain) to develop the atomic bomb during World War II. This effort transformed into an effort to maintain scientific supremacy during the Cold War and included the development of the hydrogen bomb. Many premier institutions were built upon these efforts, along with innumerable spinoffs that power today's economy and might not have been developed without the societal commitment to National security associated with that era. A second archetype is the effort to land a Man (an American one) on the Moon during the 1960's. Both efforts were wildly successful and captured the collective attention of the World. The fact is that both projects were born from conflict, in one case a hot war to defeat the Nazis, and in the second a race to beat the Soviets during the Cold War. Today it is hard to identify any societal goals of such audacity or reach. In fact, I am sadly at a loss to identify anything right now, with the Human Genome Project being the last big National effort.

One of the keys to the importance and power of big science is motivation.  The goals of big science provide a central narrative and raison d’etre for work.  It also provides constraints on work, which curiously spurs innovation rather than stifles it.    If you don’t believe me, visit the Harvard Business Review and read up on the topic.   Perhaps the best years of my professional career were spent during the early days of the science-based stockpile stewardship program (SBSS) where goals were lofty and aspirations were big.  I had freedom to explore science that was meaningfully tied to the goals of the project.  It was the combination of resources and intellectual freedom tempered by the responsibility to deliver results to the larger end goals.

The constraints of achieving a larger goal serve as continual feedback and course correction to research, and provide the researcher with a rubric to guide work. The work on the atomic bomb and the subsequent Cold War provided an immensely important sense of purpose to science. The quality of the science done in the name of these efforts was astounding, yet it was conspicuously constrained to achieving concrete goals. Today there are no projects with such lofty or existentially necessary outcomes. Science and society itself are much poorer for it. The collateral benefits for society are equally massive, including ideas that grow the economy and entire industries from the side effects of the research. I shouldn't have to mention that today's computer industry, telecommunications, and the Internet were all direct outgrowths of Cold War-driven, defense-related research efforts.

Why no big science today? The heart of the reason goes to the largely American phenomenon of overwhelming distrust of government that started in the 70's and is typified by the Watergate scandal. Government scandal-mongering has become regular sport in the United States, and a systematic loss of trust in everything the government does is the result. This includes science, and particularly large projects. Working at a government lab is defined by a massive amount of oversight, which amounts to the following fact: the government is prepared to waste an enormous amount of money to make sure that you are not wasting money. All this oversight does is make science much more expensive and far less productive. The government is literally willing to waste $10 to make sure that $1 is not wasted in the actual execution of useful work. The goal of a lot of work becomes not screwing up, rather than accomplishing anything important.

The second and more pernicious problem is the utter and complete inability to take risks in science. The term "scheduled breakthrough" has become common. Managers do very little management, mostly just pushing paper around while dealing with the vast array of compliance-driven initiatives. Strategic thinking and actual innovation happen only every so often, and the core of government work is based around risk-averse choices, which ends up squandering the useful work of scientists across the country. The entire notion of professional duty as a scientist has become a shadow of what it once was.

Frankly, too much of what science does get funded today is driven by purely self-serving reasons that masquerade as curiosity. When science is purely self-defined curiosity it can rapidly become an intellectual sandbox and end up in alleys and side streets that offer no practical benefit to society. It does, perhaps, provide intellectual growth, but the distance between societal needs and utility is often massive. Furthermore, the legacy of purely curiosity-driven science is significantly overblown and is largely legend rather than fact. Yes, some purely curiosity-driven science has made enormous leaps, but much of the best science in the last 50 years has been defined by big science pushing scientists to solve practical problems that comprise aspects of an overarching goal. The fast Fourier transform (FFT) is a good example, as are most numerical methods for partial differential equations. Each of these areas had most of its focus derived from some sort of project related to National defense. Many if not most of the legendary curiosity-driven scientific discoveries are from before the modern era and are not relevant to today's World.

The deeper issue is the lack of profoundly important and impactful goals for the science to feed into.  This is where big science comes in.  If we had some big important societal goals for the science to contribute toward, it would make all the difference in the world.  If science were counted on to solve problems rather than simply not screw something up, the world would be a better place.  There are a lot of goals worth working on; I will try to name a few as I close.

How do we deal with climate change? And the closely related issue of meeting our energy needs in a sustainable way?

How do we tame the Internet?

How can we maintain the supply of clean water needed for society?

Explore the Moon, Mars and the solar system?

Can we preserve natural resources on the Earth by using extraterrestrial resources?

What can we do to halt the current mass extinction event? Or recover from it?

How can computer technology continue to advance given the physical limits we are rapidly approaching?

How can genetics be used to improve our health?

How can we feed ourselves in a sustainable way?

What is our strategy for nuclear weapons for the remainder of the century?

How should we transform our National defense into a sustainable modern deterrent without bankrupting the Country?

How can we invest appropriately in a National infrastructure?

As you can see, we have more than enough things to worry about; we need to harness science to help solve our problems by unleashing it and providing it some big, audacious goals to work on. Let's use big science to get big results. Unfortunately, a great deal blocks progress. First and foremost is the pervasive lack of public trust and confidence in government primarily, and in science generally. The fact is that the combination of big important goals, societal investment (i.e., government), and scientific talent is the route to prosperity.

Classic Papers: Lax-Wendroff (1960)

19 Thursday Sep 2013

Posted by Bill Rider in Uncategorized

≈ 5 Comments

Tags

approximation, artificial viscosity, conservation, finite difference, finite volume, fundamental derivative, Lax-Wendroff, numerical, theorem

P. D. Lax and B. Wendroff (1960). "Systems of conservation laws". Communications on Pure and Applied Mathematics 13 (2): 217-237. doi:10.1002/cpa.3160130205

This paper is fundamental to much of CFD, and that is an understatement.

The paper is acknowledged to have made two essential contributions to the field: a critical theorem, and a basic method.  It also has another nice method for shock dissipation that is not widely seen.  I will discuss all three and try to put things into perspective.

As I recently said at a conference, "Read the classics, don't just cite them." In today's world of too many research papers, finding a gem is rare. The older mines are full of diamonds, and the Lax-Wendroff paper is a massive one. By reading the archival literature you can be sure to read something good, and often the older papers contain results that are not fully appreciated.

Why would I say this? Textbooks will tell you what is important in a paper, and you either believe what you are told or just read the relevant portions of the paper. Sometimes developments made later can show the older papers in a new light. I believe this is the case with Lax-Wendroff. By read, I mean really read, cover-to-cover.

1. The theorem has enormous implications for how methods are designed because of the benefits of using discrete conservation form. In this way the numerical method mimics nature, and solutions have an intrinsically better chance of getting the right answer.

What does the theorem say?

Basically, if a numerical method for a first-order hyperbolic partial differential equation is in discrete conservation form, it must converge to a weak solution (assuming it is stable and consistent). Stable and consistent mean the solution is well-behaved numerically (doesn't blow up) and approximates the differential equation.

If one follows the guidance, it is unabashedly good, that is, we want to get valid weak solutions, but there is a catch. The catch is that there are literally infinitely many weak solutions, and most of them are not desirable. What we actually desire is the correct, or physically valid, weak solution. This means we want an "entropy" solution, which is essentially the limiting solution to the hyperbolic PDE with vanishing dissipation. This leads to the use of numerical dissipation (i.e., things like artificial viscosity) to create the entropy needed to select physically relevant solutions.

The simplest hyperbolic PDE where this issue comes to a head is Burgers' equation with an associated upwind discretization, assuming the velocity is positive,

u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\left[\tfrac{1}{2}(u_j^n)^2 - \tfrac{1}{2}(u_{j-1}^n)^2\right],         (1)

which can be rewritten equivalently as

u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\, u_j^n \left(u_j^n - u_{j-1}^n\right).         (2)

Equations (1) and (2) are identical if and only if the solution is continuously differentiable (and in the limit where the error is small), which is not true at a discontinuity. What the conservation form means, effectively, is that the amount of stuff (u) that leaves one cell on the mesh exactly enters the next cell. This allows all the fluxes on the interior of the mesh to telescopically cancel, leaving only the boundary terms, and provides a direct approximation to the weak form of the PDE. Thus the numerical approximation will exactly mimic the dynamics of a weak solution. For this reason we want to solve (1) and not (2), because (1) is in conservation form. With this theorem, the use of conservation form went from mere suggestion to ironclad recommendation.
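A quick numerical experiment makes the distinction vivid. The sketch below (my own illustration, with an illustrative grid and step count) advances both forms for a shock with left state 1 and right state 0, whose exact speed is 1/2; the conservative form moves the front at the right speed, while the non-conservative form leaves it frozen:

```python
import numpy as np

def burgers_step(u, lam, conservative=True):
    """One upwind step for Burgers' equation, assuming u >= 0.

    lam = dt/dx.  conservative=True differences the flux f(u) = u^2/2;
    conservative=False differences u * u_x directly.  Updates u in place.
    """
    un = u.copy()
    if conservative:
        f = 0.5 * un**2
        u[1:] = un[1:] - lam * (f[1:] - f[:-1])
    else:
        u[1:] = un[1:] - lam * un[1:] * (un[1:] - un[:-1])
    return u

# Shock with left state 1, right state 0: the exact shock speed is 1/2.
N, lam, steps = 200, 0.5, 200
x = np.linspace(0.0, 1.0, N)
u_c = np.where(x < 0.25, 1.0, 0.0)   # conservative run
u_n = u_c.copy()                     # non-conservative run
for _ in range(steps):
    burgers_step(u_c, lam, conservative=True)
    burgers_step(u_n, lam, conservative=False)

# Locate each front: the conservative scheme has moved it to about x = 0.5,
# while the non-conservative front never leaves x = 0.25.
print(np.argmax(u_c < 0.5) / N, np.argmax(u_n < 0.5) / N)
```

The non-conservative failure mode is stark here: any cell holding u = 0 never updates, so the front cannot move at all, let alone at the correct weak-solution speed.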

For a general conservation law the conservation form gives,

u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\left(f_{j+1/2} - f_{j-1/2}\right),         (3)

which generalizes to higher-order methods depending on how the cell-edge fluxes are defined. The last bit of detail is picking the right weak solution, since there are infinitely many of them. This depends on numerical dissipation, which mimics the dynamics of vanishingly small physical viscosity. The presence of viscosity produces entropy as demanded by the second law of thermodynamics. The inclusion of sufficient dissipation will choose a unique, physically relevant solution to the conservation law.

http://en.wikipedia.org/wiki/Lax–Wendroff_theorem

2. The method introduced in this paper is another centerpiece and icon of the numerical solution of hyperbolic PDEs. It is a stable, second-order, centered method, developed because the stable centered method introduced earlier (the Lax-Friedrichs method from 1954) was first-order accurate and quite dissipative. The Lax-Wendroff method is derived using a Taylor series where the time derivatives are systematically replaced with space derivatives.

For a simple PDE the procedure is quite straightforward,

u^{n+1} = u^n + \Delta t\, u_t + \tfrac{1}{2}\Delta t^2\, u_{tt} + \mathcal{O}(\Delta t^3) \quad\text{for}\quad u_t + a\, u_x = 0,           (4)

replace the time derivative with space derivatives,

u^{n+1} = u^n - a\,\Delta t\, u_x + \tfrac{1}{2} a^2 \Delta t^2\, u_{xx},         (5)

and discretize with centered differences. This method is second-order accurate in space and time, but it will create oscillations near discontinuities. This provides the motivation for the third result from the paper, a shock dissipation, which is usually overlooked but deserves new appreciation.
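For the advection equation the resulting scheme is a few lines of code. A minimal sketch (the function name and test parameters are my own):

```python
import numpy as np

def lax_wendroff(u, nu, steps):
    """Lax-Wendroff for u_t + a u_x = 0 on a periodic mesh.

    nu = a*dt/dx is the Courant number; the scheme is stable for |nu| <= 1.
    Each step applies:
      u_j - nu/2 (u_{j+1} - u_{j-1}) + nu^2/2 (u_{j+1} - 2 u_j + u_{j-1}).
    """
    for _ in range(steps):
        up = np.roll(u, -1)   # u_{j+1}
        um = np.roll(u, 1)    # u_{j-1}
        u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
    return u

# A smooth profile advects with second-order accuracy; a square wave
# would instead develop the oscillations discussed above.
N = 128
x = np.linspace(0.0, 1.0, N, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
u1 = lax_wendroff(u0.copy(), nu=0.5, steps=2 * N)  # one full period
print(np.max(np.abs(u1 - u0)))  # small dispersive error, O(dx^2)
```

After one full transit of the periodic domain the profile returns nearly unchanged; the small residual is the dispersive phase error characteristic of the method.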

Several years later Richtmyer presented this as a two-step, predictor-corrector method that made for simple and efficient implementation (NCAR Technical Note 63-2). The Lax-Wendroff method then rapidly became one of the methods of choice for explicit-in-time aerodynamic calculations. Perhaps the most famous of these methods was developed by Bob MacCormack at NASA Ames/Stanford in 1969. It was a variant of Richtmyer's approach utilizing a sequence of forward-biased, followed by backward-biased, differences.
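A sketch of the two-step form as I understand it: a half-step Lax-Friedrichs predictor at the cell edges, followed by a conservative corrector (the function name and the Burgers' flux choice are my own illustration):

```python
import numpy as np

def richtmyer_step(u, lam, f=lambda u: 0.5 * u**2):
    """One Richtmyer two-step (predictor-corrector) Lax-Wendroff step.

    lam = dt/dx; f is the flux function (Burgers' flux by default).
    Periodic boundaries via np.roll.
    """
    up = np.roll(u, -1)
    # Predictor: half-step Lax-Friedrichs values at cell edges j+1/2
    uh = 0.5 * (u + up) - 0.5 * lam * (f(up) - f(u))
    # Corrector: conservative difference of the half-step fluxes
    return u - lam * (f(uh) - f(np.roll(uh, 1)))

# Because the corrector is in conservation form, the total amount of u
# is preserved to round-off, per the Lax-Wendroff theorem's premise.
N = 64
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.sin(2.0 * np.pi * x) + 1.5    # keep u > 0
s0 = u.sum()
for _ in range(50):
    u = richtmyer_step(u, lam=0.2)
print(abs(u.sum() - s0))  # conservation: interior fluxes telescope away
```

The appeal over the one-step form is that no chain-rule expansion of f is ever needed; the half-step values do that work implicitly.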

http://en.wikipedia.org/wiki/MacCormack_method

All of this set the stage for much bigger things in the mid 1970’s, the introduction of genuinely nonlinear schemes where the finite difference stencil depended on the structure of the solution itself.

The procedure of replacing time derivatives with space derivatives has reappeared in the past few years, often as "Lax-Wendroff" time-differencing. It can be very complex for systems of conservation laws, or for higher than second-order accurate differencing of genuinely nonlinear equations. This is a consequence of the explosion of terms arising from the systematic application of the chain rule. Nonetheless it can be an effective alternative to a method-of-lines approach, yielding more satisfying CFL stability limits.

http://en.wikipedia.org/wiki/Lax–Wendroff_method

3. The dissipation introduced by Lax and Wendroff was needed because of the oscillatory solutions (Godunov's theorem explains why, something to discuss in detail later). Because of this they introduced a shock dissipation that at first blush looks much different from the Von Neumann-Richtmyer mechanism. We will show briefly that it is basically identical in an approximate mathematical sense. Hopefully this connection will allow this result to be more widely appreciated.

In the paper they introduce a shock dissipation based on the variation of the acoustic impedance (the product of density and the speed of sound, c; I have removed the constant and the mesh spacing from the formula, from p. 232 of the paper),

Q \sim \left|\Delta(\rho c)\right|\,\Delta u,                        (6)

which I will show through the application of the chain rule as being equivalent to,

Q \approx \frac{\partial(\rho c)}{\partial \rho}\,\Delta\rho\,\Delta u \approx \frac{\rho}{c}\,\frac{\partial(\rho c)}{\partial \rho}\,(\Delta u)^2,             (7)

where the second transformation has used the differential equations themselves (or the Rankine-Hugoniot conditions). This is just like the Von Neumann-Richtmyer approach if we take,

C_Q = \frac{1}{c}\,\frac{\partial(\rho c)}{\partial \rho}.       (8)

This then implies that the viscosity coefficient actually arises from the equation of state because this quantity is known as the fundamental derivative,

\mathcal{G} = \frac{1}{c}\left.\frac{\partial(\rho c)}{\partial \rho}\right|_s = 1 + \frac{\rho}{c}\left.\frac{\partial c}{\partial \rho}\right|_s,                  (9)

the only proviso is that the fundamental derivative is computed at constant entropy.

Bethe, Zeldovich and Thompson each studied the fundamental derivative, but Menikoff and Plohr's paper is the best available discussion of its impact on shock wave dynamics (Reviews of Modern Physics, Volume 61, 1989). The coefficient is defined by the thermodynamics of the equation of state, as opposed to the mindset that the coefficient of the Von Neumann-Richtmyer viscosity is arbitrary. It is not! This coefficient is determined by the state of the material being simulated. Thus the quadratic viscosity can be reinterpreted as

q = \mathcal{G}\,\rho\,(\Delta u)^2.         (10)

We hope that this observation will allow artificial viscosity to be selected without appealing to arbitrarily selected constants, but rather for the specific material being simulated.
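As a sanity check on this interpretation, the fundamental derivative can be evaluated for a gamma-law gas, where the analytic value is (gamma + 1)/2. A small sketch of the calculation (my own illustration; the function name and finite-difference evaluation are not from the paper):

```python
import numpy as np

def fundamental_derivative_gamma_law(gamma, rho):
    """Fundamental derivative G = 1 + (rho/c) dc/drho at constant entropy,
    evaluated numerically for a gamma-law gas, p = K rho^gamma along an
    isentrope.  The analytic answer is G = (gamma + 1) / 2.
    """
    K = 1.0                                               # isentrope constant
    c = lambda r: np.sqrt(gamma * K * r**(gamma - 1.0))   # sound speed on it
    dr = 1.0e-6 * rho
    dcdr = (c(rho + dr) - c(rho - dr)) / (2.0 * dr)       # central difference
    return 1.0 + rho / c(rho) * dcdr

print(fundamental_derivative_gamma_law(1.4, 1.0))  # -> ~1.2 = (1.4 + 1)/2
```

For air-like gamma = 1.4 this gives a quadratic viscosity coefficient near 1.2, squarely in the range practitioners have long chosen by hand, which is the observation being made here.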

Historical footnotes:

Wendroff still works at Los Alamos (every day as Misha Shashkov notes).   The Los Alamos report number is LA-2285 from 1958/1959.

I asked Burt Wendroff whether anyone working on the Lab's non-conservative codes, based on the staggered mesh, artificial viscosity approach of Von Neumann and Richtmyer, had inquired about the necessity of conservation form. The answer was no, never. So the theorem that shaped much of CFD outside the Lab had no impact inside the Lab despite being developed there. Los Alamos was literally the birthplace of CFD. Its first baby was artificial viscosity and the staggered mesh methods, and its second baby was driven by Lax and includes Lax-Wendroff. The Lab orphaned the second baby, but it found good parents elsewhere and flourished. It would be akin to the first child taking over the family business and living a comfortable, but somewhat unfulfilled life, while the second child left home and found greatness.

Why would this be? I believe that after Von Neumann and Richtmyer showed the way to make shock capturing work, the Lab turned its energy almost completely toward the development of the H-bomb, and the development of the improved methods that Lax pointed toward was not paid attention to. Stunningly, this pattern continued for decades. In the past 10-15 years, the work of Lax has started to have an impact there, albeit 50 years on from when it should have.

I will note that the term finite volume scheme didn’t come into use until 1973 (in a paper by Rizzi), but the Lax-Wendroff method pushed the field in this direction.  Lax used finite volume ideas immediately as published in a Los Alamos report from 1952 (LA-1205).  He called it a finite difference method, as was the fashion, but it is clearly a finite volume method, and the conservation form was immediately appreciated by Lax as being special.  It is notable that Von Neumann’s method is not in conservation form, and does not conserve in any simple way.

The authors are famous enough to have biographies in Wikipedia.

http://en.wikipedia.org/wiki/Peter_Lax

http://en.wikipedia.org/wiki/Burton_Wendroff

Thoughts about Multimat2013

13 Friday Sep 2013

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Multimat2013 or My Biannual Geekfest.

In all honesty most of you would consider every conference I go to a "Geekfest," so that label is overdoing it.


https://multimat13.llnl.gov

Last week I attended a meeting of one of the communities I actively participate in. This meeting goes by the catchy name of "multimat," which is shorthand for multimaterial computational hydrodynamics. Most of the attendees work at their nation's nuclear weapons labs, although we are getting broader attendance from countries and labs outside that special community. This year's meeting had really good energy despite the seemingly overwhelming budgetary constraints in both the United States and Europe.

Why was the energy at the meeting so good, when the funding picture is so bleak?  New ideas.

That's it; the community has new ideas to work on, and it energizes everything. What new ideas, you ask? Cell-centered and high-order methods for Lagrangian hydrodynamics, with concomitant spillover from other areas of science such as optimization.

Let me explain why this might be important, given that cell-centered and high-order methods are commonplace in the aerospace community. In fact, as I will discuss at length in a future post, these fields were intimately connected at their origins, but the ties have become estranged over the intervening decades.

Using cell-centered methods for Lagrangian hydrodynamics was long thought to be unworkable, with episodic failures over the preceding decades. Lagrangian hydrodynamics has long followed the approach provided by the combination of John Von Neumann's staggered mesh method published in 1944 and the essential artificial viscosity of Richtmyer developed in 1948.* The staggered mesh has material quantities at cell centers, but velocities (kinematics) at the cell edges. Everything done within the confines of this community proceeded using this approach for decades (including in France, England, the Soviet Union/Russia, China, and Israel). All of these methods are also either first- or second-order accurate. Cell-centered approaches based upon Godunov's method appear every so often, but are viewed as practical failures (for example, the Caveat code from Los Alamos).

A second historical footnote is that cell-centered methods started at Los Alamos shortly after the famous Von Neumann-Richtmyer paper appeared in 1950. By 1952 Peter Lax had introduced a cell-centered finite difference method, which we know as the Lax-Friedrichs method (really finite volume, but that term didn't exist until 1973). Godunov's method was developed independently between 1954 and 1956.

Multimat2013 took place over five days with about 50 talks and 25 posters. In particular I thought the first three days were fantastic. As I noted, a great deal of the positive energy comes from the development of cell-centered Lagrangian methods, starting with the work of Depres and Maire in France. Similar methods have been developed from that foundation in England and the United States. Further developments have added high-order approaches, including discontinuous Galerkin and high-order "traditional" finite elements. This seems to have opened the door to high-order methods, which have been an active area World-wide since the 1980's.

This was in part the inspiration for my talk. Recently, I attended the JRV symposium (http://dept.ku.edu/~cfdku/JRV.html), which preceded the AIAA CFD meeting in June. JRV stands for Jameson, Roe and Van Leer. Bram Van Leer (http://en.wikipedia.org/wiki/Bram_van_Leer) gave a talk that largely chided the community for referencing classical papers (he has several!) without really reading their content. I decided to discuss one of Bram's papers from that perspective (Journal of Computational Physics, Volume 23, 1977). To make a long story short, the Multimat community has focused on one of the six methods in Bram's paper. In fact, the method has been given the name "Van Leer Method" in the community of code developers represented at Multimat! When I met Bram and relayed this, he found it off-putting, and it slightly horrified him. This method is the worst of the six from some basic perspectives. The other methods may gain a second life with new computers, but require some effort to get them up to snuff. I focused to some degree on the fifth method, which has very nice properties and, unbeknownst to many researchers, has been rediscovered without referencing the original work of Van Leer. Perhaps this method can be the topic of another future post.

Being a multimaterial conference, techniques for evolving material interfaces are of interest.  Again, the conference featured a neat mix of traditional and modern approaches, with some trends.  Part of this included the use of optimization/minimization principles for solving particularly pernicious problems.  There has also been notable improvement in level set techniques in this area.  I'll note that Jamie Sethian once told me he thought this area provided some of the greatest challenges for level sets (in other words, level sets are better suited to the other problems they are used for).  Nonetheless, progress has been immense over the past 15 years.

Ann Mattsson gave a talk on our joint work on artificial viscosity.  It received mixed reviews, largely due to Ann's most valuable characteristic: she isn't one of us.  She is an accomplished atomic physicist, not a numerical hydrodynamics expert, and she brought that unique professional perspective to building artificial viscosity from the ground up.  She also started from the less widely known first report on the method, written by Richtmyer in 1948.  These conditions conspire to create a functionally different perspective, and a different method, from the classical viscosity arising from the Von Neumann-Richtmyer paper.  I then took her results and put together an initial implementation of the method (I am probably significantly biased by the classical approach, which has had 65 years of use).  One other notable aspect of the Richtmyer report is that it was classified secret until 1993.   It is nothing but mathematical physics, and its classified status only robbed us of a correct history of the lineage of shock-capturing methods.

To be clear, Von Neumann conceived of shock capturing as a concept, but needed Richtmyer’s contribution to make it practical.
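
For context, the classical quadratic viscosity from the Von Neumann-Richtmyer paper (the 65-year-old baseline, not Ann's reformulation) can be sketched in a few lines.  This is my own illustration on a staggered grid, with a hypothetical function name and a typical dimensionless coefficient c0:

```python
import numpy as np

def vnr_artificial_viscosity(rho, u, c0=2.0):
    """Classical Von Neumann-Richtmyer quadratic artificial viscosity.

    q_j = c0^2 * rho_j * (du_j)^2 in cells under compression (du_j < 0),
    and zero otherwise, where du_j = u_{j+1} - u_j is the velocity jump
    across cell j (velocities live on nodes, density in cells).
    """
    du = np.diff(u)                     # velocity jump across each cell
    q = c0**2 * rho * du**2             # quadratic in the jump
    return np.where(du < 0.0, q, 0.0)   # active only in compression

# A converging velocity field: only the compressed cell gets viscosity.
rho = np.ones(4)
u = np.array([1.0, 0.5, 0.5, 0.5, 1.0])
q = vnr_artificial_viscosity(rho, u)
```

The pressure-like term q is added to the momentum and energy equations; the switch that turns it off in expansion is what keeps the dissipation localized to shocks.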

I also gave a poster on the goings-on and progress with the code I support.  This included introducing a meme to the proceedings to explain why things are difficult.  It turns out this is a common issue (not surprising at all!).

The last two days seemed a bit less exciting with more traditional themes taking over.  That might have been simply a function of an over-celebration of my birthday, which occurred fortuitously on the night of the banquet (and the wonderful hospitality of my fellow travelers leading to less sleep than I normally need).

The meeting has previously been held in Europe (Paris; Oxford; Prague; Pavia, Italy; Arcachon, France), and very well executed each time.  We Americans had a high bar to meet, and I think the organizers from Lawrence Livermore Lab did very well. The choice of San Francisco was inspired, and did a great deal to help make the meeting successful.  We managed to provide hospitality that didn't embarrass the United States.**  So hats off to Rob Rieben, Mike Owen, and Doug Miller for a job well done.  They also had wonderful assistance from Darlene Henry and Jenny Kelley, who kept everything humming for the entire week.

Here is a picture of the sunset at the banquet.  Really beautiful (Hans and Vince got in the way).

* I will briefly note that artificial viscosity is an example of a method that regularizes a singularity, and leads to this blog’s name.

** I am often taken aback by the degree to which our European colleagues offer far greater hospitality than we Americans can.  We literally can’t match them.  It is a continual issue with working for the government, and a source of personal embarrassment.  We government contractors are required to be run “like a business” yet we offer hospitality that no business would allow.  Frankly, it is complete bullshit.

About Through The Looking Glass

13 Friday Sep 2013

Posted by Bill Rider in Uncategorized

≈ Leave a comment

About this.
