The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: January 2014

What’s wrong with how we talk about verification?

31 Friday Jan 2014

Posted by Bill Rider in Uncategorized

≈ 3 Comments

Solution and code verification are done for quite different reasons.  They reinforce each other, but they serve different purposes in conducting quality computational science.  Code verification is done to provide clear evidence that numerical methods are implemented to solve the governing equations correctly.  Solution verification provides an estimate of the discretization error for an applied problem.  As evidence of the correctness of an implementation, solution verification is simply a “feel-good” exercise.  This seeming confusion is actually quite understandable; in retrospect it is the fault of a poorly chosen taxonomy.

Lately, it has become fairly clear to me that people are starting to believe that actively doing verification is a good idea.  This is great: the message has been received and action has been taken.  The good work of code verification helps to provide faith that the implementation of a solution method is correct.  To remind the reader, code verification compares a numerical solution to an analytical solution.  One of the key measures in conducting code verification is the observed rate of convergence for a method.  This is compared directly to what is theoretically expected.  If the values match, the verification is confirmed, and this evidence is amassed to indicate that the method is implemented correctly.  If one completes this exercise over and over for different problems having analytical solutions, the evidence can become overwhelming.  I’d note for the experts out there that the expected rate of convergence depends not only on the method, but on the nature of the solution.  In other words, if the analytical solution lacks smoothness (differentiability) or contains a discontinuity, the expected rate of convergence will degrade from the ideal case.
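To make the mechanics concrete, here is a minimal sketch of how an observed convergence rate is typically computed from errors on successively refined grids; the function name and the sample numbers are illustrative, not taken from any particular code.

```python
import numpy as np

def observed_order(errors, h):
    """Observed convergence rate from errors measured on grids with spacings h."""
    errors, h = np.asarray(errors, float), np.asarray(h, float)
    # p is estimated pairwise from consecutive grid levels, assuming e ~ C h^p
    return np.log(errors[:-1] / errors[1:]) / np.log(h[:-1] / h[1:])

# Example: errors dropping by 4x for each halving of h indicate second order
print(observed_order([1.0e-2, 2.5e-3, 6.25e-4], [0.1, 0.05, 0.025]))  # ~[2.0, 2.0]
```

If the observed rate matches the theoretical expectation (suitably degraded for non-smooth solutions), that is one more piece of code verification evidence.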

As I stated above, solution verification has an entirely different purpose.  It is conducted to help provide an estimate of the numerical error in a solution.  That is the key thing: the error estimate, not the rate of convergence.  The rate of convergence is an outcome of secondary interest, an auxiliary quantity.  If the convergence rate does not match the given order of a numerical method, it does not necessarily mean the method is implemented incorrectly.  It might mean that.  Instead it is a quantity that invites caution and examination by the code’s user.  The reason is that rarely do we have firm theoretical expectations for convergence in “real” applied problems.  Often the quantity of interest in a problem is a functional of the solution that is immune to firm theoretical estimates for convergence.  The problem being studied does not generally have an analytical solution, although the estimates could be applied to the same problems used for code verification.  In a sense, “code” verification techniques should be used to verify the error estimates produced by solution verification.
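For solution verification the target is the error estimate itself.  One common sketch, in the spirit of Richardson extrapolation, is shown below; the function name, argument names, and sample numbers are hypothetical, and the order p would come from a computation like the one above.

```python
def discretization_error_estimate(f_fine, f_coarse, r, p):
    """Estimated discretization error in the fine-grid result f_fine.

    r is the grid refinement ratio (h_coarse / h_fine) and p is the observed
    (or assumed) order of convergence.
    """
    return (f_fine - f_coarse) / (r**p - 1.0)

# Hypothetical results for some functional of the solution on two grids
err = discretization_error_estimate(f_fine=0.1250, f_coarse=0.1280, r=2.0, p=2.0)
print(err, 0.1250 + err)  # error estimate and the Richardson-extrapolated value
```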

While that last sentence was correct, the language is strained, and prone to confusion.  Perhaps the best thing to do is eliminate the confusion by renaming solution verification something clearer, “numerical error estimation” or “numerical uncertainty”.

That is my suggestion; we should replace the term “solution verification” by “numerical error estimation”.

What Is Computational Science?

26 Sunday Jan 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

A recent post on a Wired blog by Rhett Allain posed the question, “what kind of science is computational science?”  It came to the conclusion that computational science is neither experimental nor theoretical, but not worthy of being a third type of science either (http://www.wired.com/wiredscience/2014/01/what-kind-of-science-is-computational-science/).  This post prompted a rebuttal by Tammy Kolda in a SIAM blog post (http://blogs.siam.org/what-kind-of-science-is-computational-science-a-rebuttal/).  Her defense of computational science is earnest and accurate, but doesn’t quite get to the heart of the philosophical question Allain poses.  Each post contains some fair points, although I find the spirit of Allain’s post to be almost completely off base, and his final conclusion quite dubious.  More than dubious, it fails to recognize how computational science is something different, and this difference should be embraced rather than dismissed as second-rate.  Kolda’s post is much closer to the mark, but misses some key aspects that would give computational science a sterner defense against Allain’s dismissal.

In my opinion, computational science is not a third type or pillar of science, but rather a new way of doing science that builds upon the foundation of theoretical and experimental science.  Rather than being something entirely separate, it is an innovative product of both.  Its validity is utterly dependent upon the foundation in theory and observation or experiment.  Seeing computational science as something different implies scarcity, as though it detracts from the traditional scientific approaches.  I believe it is much more accurate to see computational science as a new, different and complementary approach that only adds to our repertoire of scientific tools.  The entirety of computational science is dependent upon the invention of the digital computer, and computational science developed to allow the utilization of this tool.  Kolda’s post captures much of the depth of scientific disciplines necessary to make this approach to science effective, and some of the key areas where it is currently contributing.

As a computational scientist I think the blame for the miscommunication lies with computational scientists’ collective message.  The value of computational science lies directly in its capacity to force the combination of disciplines that drives innovation in ways the traditional scientific approach does not.  By opening the lines of communication between fields, science is enriched in ways that are difficult to measure.  Such innovation is a key to progress and spurs the generation of new ideas.  Computational science is as much about how science is conducted as what that science is.  Too often computational science gets stuck in its own little world of computers, algorithms, mathematics, code and data, and avoids deep dialog with domain science.  This inward looking emphasis is short-sighted, failing to capitalize on the greatest strength of computational science, its inherently multi-disciplinary point-of-view.

Beyond the difference in approach that computation offers, the importance of modeling on computers is its natural ability to handle complexity under which analytical approaches falter.  This complexity spurs connections between disparate scientific disciplines that ultimately power innovation.  New ideas are usually not new at all, but the combination of different ideas in new ways.  A new technology is the combination and packaging of existing concepts together to offer functionality that its base technologies did not offer.  Rarely are the ideas explored computationally completely new; traditional science supplies most of the concepts explored.  More often, new ideas are a melange of existing ideas engaged in a fundamentally different context.  As such, computational science provides an engine of discovery merely by providing an effective vehicle for combining disciplines, and it serves as a powerful new “integrating factor” for science.

While I largely disagree with Allain’s assessment, his comments provide a window into the physicist’s mind and suggest some stark conclusions about why computations are so often poorly done.  Doing computational work is difficult and requires deep technical skills usually acquired with years of professional study and practice.  Physicists are responsible for some of the best examples of computational science, but also some of the worst.  I would express a deep concern about an attitude that projects such a degree of dismissal of the field as a valid scientific endeavor.  To be done well, computational science can’t simply be approached as a hobby, but rather as an equal contributing partner in the scientific enterprise.

Computational science has been an emergent technology over the last 70 years.  In the past 20 years there has been a veritable explosion in capability.  Computation has risen to societal prominence through the Internet with all the good and bad it brings.  All of this new capability and connectivity will allow new problems to be posed and solved by providing a meaningful path to solving them.  Hype today circles around big data, and will likely end up at some sort of rational equilibrium where big data contributes meaningfully to scientific progress.  Analytical tractability has always been a limitation on meaningful theory.  Today computers offer different paths to tractable solutions for theory.  For a theory to be solvable no longer requires access to analytical solutions, or their near relatives, asymptotic expansions.  Instead, the conditions are loosened by access to a numerical solution.  Of course getting the numerical solutions correct is a subtle technical matter requiring immense skill.  Accurate, robust and convergent numerical approximation can be just as challenging as analytical work, if not more so.  In spite of this difficulty, numerical approximation is an improvement provided by computational science and a boon to scientific progress in general.

Computers and computation are now an indispensable part of science and engineering.  Doing it well is important, and this may be the heart of what is wrong with Allain’s discussion.  His dismissal of computational science as an equal is really about defending the primal place of experiment and theory.  This is old fashioned and shortsighted.  The key to advancing science is to embrace new tools, new perspectives and new approaches in the desire to discover and understand.  Computational science can bring new perspective and allow old problems to be solved anew.  I do worry that Allain’s perspective is commonplace among the physics community and leads to computational practice that is less than completely rigorous.  Just as rigor and professionalism are absolutely necessary in theory and experiment, they are required for computational science to flourish.

The key argument for defining computational science as a new way of doing investigations is computational experiments that blend complex theoretical models together.  Such blends of models were functionally impossible in the past due to the inability to tackle their solutions analytically.  Thus a numerical, i.e., computational approach is necessary.  The classical example of such models comes originally from defense science, e.g., nuclear weapons, but the approach rapidly spawned spinoffs to weather and climate via the efforts of visionaries such as John von Neumann.  As such these efforts are central to the tension between science, policy and politics when their results indicate the root cause of climate change is human activity.  Before the advent of serious computational power, such a modeling activity would have been impossible.  This is an argument for including computational science as something new.

Science creates knowledge that ultimately comes into common use through engineering.  This is another place where computations are reshaping how work is done and what sort of activities are possible.  With the new power comes the danger of over-reliance on the virtual experiment over the cruel mastery of nature.  Often the theory that underpins our models is too weak to capture the uncommon events that dominate issues such as safety.  These tail events quite often become those that shape history.  Think of 9/11, Fukushima, earthquakes, tsunamis, Katrina and other massive events that sit in the tails of distributions.  Our mean-field-theory-based science is ill prepared to provide good answers to these issues, much less engineer a robust technological response.  The important thing that computational science brings to the table is the ability to identify the issue more clearly.  Only then can we begin to address solutions.  Traditional science had not provided enough progress in that direction prior to the advent of computational science.

While computational science brings new perspectives to the table, it should remain firmly entrenched in reality.  This is where theoretical and experimental science in their largely traditional forms should come in.  This is the realm of verification and validation.  Verification is the way of tying computations directly to theory; that is, can I show that my code correctly solves the theory I think I’m solving?  In the same way, validation is the way I demonstrate that my model provides some reasonable representation of the observed universe.  Together these techniques tie the three branches of science together as a cohesive whole, and provide some measure of confidence in the computational view.  Add to this uncertainty quantification and we can start to meaningfully engage with people making decisions.

A key to progress is a natural tension between theory and experiment.  Sometimes a new theory will drive science to make new observations that may confirm or deny an important theory.  At other times observations are made that push theory to explain them.  This is a useful and important tug-of-war.  Computational science now offers a third mechanism to achieve this important dynamic.  A calculation can sometimes serve as the “experiment” that drives new theory and observation, as with the problem of climate change.  Sometimes it fits more firmly into the theoretical camp, as with turbulent fluid mechanics, where experimental techniques are playing catch up in measuring the energetic dynamics at small scales.  At other times it plays the role of experimental evidence, as with simulations of the evolution of the large-scale structure of the universe.  The important aspect is that it plays both roles in a way that pushes knowledge and understanding forward.

In the end we just have science.  Theory is the way we try to conceptually understand the world and equip ourselves to predict what might happen.  Experiments are the way we record phenomena and observe the world.  Experiments and observations are used to confirm or deny theory.  Computation is another path that stands between these two approaches in a different (but perhaps more unified) manner.  It is a new tool to examine and understand.  It needs to be used properly, but also respected as a field of meaningful endeavor.  I read the Wired blog post and didn’t really feel a lot of respect.  Computation was characterized as being a bit less important than the traditional approaches.  This is not the way to progress.

Computational science is a new approach to conducting science that enhances the older traditional approaches.  It can offer new solutions to problems and provide a greater chance for success.  It only adds to our knowledge, and poses no risk to the tried and true approaches to science so cherished by many.  Rather than competing with traditional scientific practice, computational science enriches it and provides new connections and ideas to solve today’s most important challenges.

Polynomials For High-Resolution Schemes

19 Sunday Jan 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Van Leer’s methods for high-resolution differencing are based fundamentally on polynomial interpolation for accuracy and then monotonicity preservation (i.e., nonlinear stability). This section will show the basic forms used in Van Leer’s 1977 paper along with several new forms we will present as extensions. Each of these will interpolate an analytical function over a set of five cells, and we will compare the integrated error over these five cells as a measure of the accuracy of each. The function we use has both sharp unresolved and smoother oscillatory characteristics, so it should show the basic nature of the interpolations. The function is 2 + \frac{1}{2}\sin(\frac{1}{4}(x-\frac{1}{2}))+\tanh(3(x+\frac{1}{4})).

Function for displaying polynomial reconstructions

The first reconstruction we show is the piecewise constant, P(\theta) = P_0 = u_j, which is the foundation of Godunov’s method, a first-order but monotone method. The integrated difference between the analytical function and this piecewise polynomial is 0.882015, which will serve as the baseline for measuring the success of the higher order reconstructions.

Piecewise Constant – Basis of Godunov’s Method
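As a rough illustration of how these integrated errors can be measured, here is a small Python sketch. The post does not specify the grid, so the five unit-width cells centered at x = -2, ..., 2 and the L1-style integrated difference below are my assumptions; the value it prints will therefore not match the quoted numbers exactly.

```python
import numpy as np
from scipy.integrate import quad

# Test function from the post
f = lambda x: 2 + 0.5*np.sin(0.25*(x - 0.5)) + np.tanh(3*(x + 0.25))

# Assumed grid: five unit-width cells centered at x = -2, -1, 0, 1, 2
centers = np.arange(-2.0, 3.0)
edges = np.concatenate(([centers[0] - 0.5], centers + 0.5))

# Exact cell averages (cell width is 1, so the integral equals the average)
ubar = np.array([quad(f, a, b)[0] for a, b in zip(edges[:-1], edges[1:])])

# Integrated |f - reconstruction| for the piecewise-constant reconstruction
err_const = sum(quad(lambda x, u=u: abs(f(x) - u), a, b)[0]
                for u, a, b in zip(ubar, edges[:-1], edges[1:]))
print(err_const)
```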

The next reconstruction is the piecewise linear scheme based on a centered slope computed from cell-centered values. The polynomial is P(\theta) = P_0 + P_1 \theta = u_j + s_j \theta. The improvement over the piecewise constant is obvious, and the integrated error is now 0.448331, about half the size of the piecewise constant error.

Piecewise Linear – Van Leer’s Scheme 1 and Scheme 2

The piecewise linear based on moments is even better, cutting the error in half again to 0.210476.

Piecewise Linear based on first moments – Van Leer’s Scheme 3

The next set of plots will be based on parabolas, P(\theta) = P_0 + P_1\theta +P_2(\theta^2 -\frac{1}{12}). We stay with Van Leer’s use of Legendre polynomials because they are mean preserving without difficulty. The first is the parabola determined by the three centered cell-center values, which gives a relatively large error of 0.427028, almost as large as the piecewise linear interpolation.

Piecewise Parabolic – Van Leer’s Scheme 4

Our second parabola is much better; it is based on a single cell-centered value and the edge values. It gives an error of 0.08038, which is much better than the last parabola (by a factor of five!).

Piecewise Parabolic average plus edges – Van Leer’s Scheme 5
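The average-plus-edges parabola is simple enough to write down directly. The sketch below builds the mean-preserving parabola for a single unit cell, assuming exact point values of the function at the cell edges (the post does not state how the edge data are obtained, so that is my assumption).

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 2 + 0.5*np.sin(0.25*(x - 0.5)) + np.tanh(3*(x + 0.25))

# One unit-width cell centered at x = 0, so theta = x on [-1/2, 1/2]
a, b = -0.5, 0.5
ubar = quad(f, a, b)[0]        # cell average
uL, uR = f(a), f(b)            # assumed: exact edge values

# Mean-preserving parabola P(theta) = P0 + P1*theta + P2*(theta**2 - 1/12)
P0 = ubar
P1 = uR - uL
P2 = 3.0*(uL + uR - 2.0*ubar)
P = lambda t: P0 + P1*t + P2*(t**2 - 1.0/12.0)

# The construction reproduces the edge values and preserves the cell mean
print(P(a) - uL, P(b) - uR)
print(quad(lambda x: abs(f(x) - P(x)), a, b)[0])  # integrated error on this cell
```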

Using the second moment provides even better results and lowers the error to about 0.044, roughly half that of the center-edge reconstruction. As we will see, this result cannot be improved upon greatly.

Piecewise Parabolic – based on two moments, Van Leer’s Scheme 6

Now we increase the order of reconstruction to piecewise cubic, P(\theta) = P_0 + P_1 \theta + P_2 (\theta^2 - \frac{1}{12}) + P_3 \theta^3. We will look at two reconstructions, the first based on three cell-centers and an integral of the derivative in the center cell. This is a Hermite scheme, and based on previous experience with Schemes 1 and 4 we should expect its performance to be relatively poor; its error of 0.14884 is about one-third of that found with the piecewise parabolic Scheme 4. The second cubic reconstruction, which uses the cell-center, the edges and the first moment, provides excellent results: 0.044014, approximately on par with the two-moment piecewise parabola that is the basis of Scheme 6. The method to update the degrees of freedom is arguably simpler (and the overall number of degrees of freedom is equivalent).

Piecewise Cubic – Using slopes our first new scheme

Piecewise Cubic – using moments our second scheme

I will now finish the post with the last set of reconstructions, based on piecewise quartics, P(\theta) = P_0 + P_1 \theta + P_2 (\theta^2 - \frac{1}{12}) + P_3 \theta^3 + P_4 (\theta^4 -\frac{1}{80}). Four different methods will be used to determine the coefficients. The first is a fairly important approach because the evaluation provides the stencil used for the upstream-centered fifth-order WENO flux. This polynomial is determined by the cell-centered values in the cell and the two cells on either side. The figure shows that the polynomial is relatively poor in terms of accuracy, which is confirmed by the integrated error of 0.404114, barely better than the parabola used for Scheme 4. A quartic reconstruction ought to provide greater value than this.

Piecewise Quartic – Cell Centers only basis of the classical fifth-order WENO edge scheme

The second is based on three cell centers and two edge values, and is an obvious extension of Scheme 5. This method delivers significantly better results, with an integrated error of 0.090751. This is actually disappointing when compared with the parabolas for Schemes 5 and 6. Clearly the roughness of the function is a problem. This strongly points to the value of compact schemes and of nonlinear stability mechanisms.

Piecewise Quartic – three cell centers and edge values

We close with a scheme based on the cell center along with the edge values and edge derivatives (slopes). This method is quite compact and we expect good results. It provides the best results thus far, with an integrated error of 0.0145172, about a third of the Scheme 6 reconstruction with the same number of degrees of freedom. I will note that Phil Roe suggested this approach to me, noting that it was outstanding. He is right.

Piecewise Quartic – one cell-center, edges and edge derivatives

We look at one last scheme with a quartic reconstruction using the first and second moments, the cell-center value, and the edges. The extra degree of freedom produces outstanding results with an integrated error of 0.00726529, half the size of the error of the previous scheme.

A piecewise quartic reconstruction based on the cell-center, first and second cell-centered moments and edge values. This is the best reconstruction shown here.

Smart things that other people say.

17 Friday Jan 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

I’ve made a commitment to blogging as a concrete step towards writing more.  It seems to be a success so far.  I have a conundrum this week; I’ve been writing a lot and not really blogging per se.   The writing has been for one of the papers I’m writing, and it’s providing a lot of meaningful progress.  Today I’ll post something that is a bit cheaper to put together in terms of time.

A couple of things have happened.  The deadline for submission has been extended by a month, so the completion of the paper, once two weeks away, is now six weeks away.  We had a visitor at work who has been a major player in the development of high-resolution methods, Paul Woodward.  Paul is full of interesting history, particularly because he played a significant role in helping to transfer a key technology from its inventor to the world of the National Laboratories.  This was Paul’s collaboration with Bram Van Leer, one of the inventors of high-resolution methods.  The techniques that Bram developed are key to making the solution of hyperbolic PDEs better, or, in the parlance of the Labs, making remap tractably accurate.  I had the chance to talk to Paul, and he passed along lots of interesting nuggets of knowledge about high-resolution methods, the history of their development, and high performance computing, where he actively works today.

While this is on topic, the real point of the post is a bunch of quotes that I like from wise, articulate people.  I think these all say something important about the conduct of modeling and simulation with an emphasis on verification, validation and uncertainty quantification.  Enjoy.

“But the only way of discovering the limits of the possible is to venture a little way past them into the impossible.” –Arthur C. Clarke [Clarke’s Second Law]

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

“Nothing is destroyed until it is replaced.” –Auguste Comte

“There is no democracy in physics.”–Luis Alvarez

“No amount of genius can overcome a preoccupation with detail” – Levy’s Eighth Law

“V&V takes the fun out of computational simulation”– Tim Trucano

“We make no warranties, express or implied, that the programs contained in this volume are FREE OF ERROR, or are consistent with any particular merchantability, or that they will meet your requirements for any particular application. THEY SHOULD NOT BE RELIED UPON FOR SOLVING A PROBLEM WHOSE SOLUTION COULD RESULT IN INJURY TO A PERSON OR LOSS OF PROPERTY…”– (from Numerical Recipes in Fortran, Press, Teukolsky, Vetterling, and Flannery)

“I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.” – Horace Lamb –

“Turbulence is the most important unsolved problem of classical physics.” – Richard Feynman

“It is dangerous to be right in matters on which the established authorities are wrong.”
– Voltaire

“Thinking is the hardest work there is, which is probably why so few people engage in it.”
– Henry Ford

“Consistency is the last refuge of the unimaginative.”
– Oscar Wilde

“Think? Why think! We have computers to do that for us.”– Jean Rostand

“One of the great tragedies of life is the murder of a beautiful theory by a gang of brutal facts.”
– Benjamin Franklin

“Progress in science depends on new techniques, new discoveries, and new ideas, probably in that order.” — Sidney Brenner

“The purpose of computing is insight, not pictures”–Richard Hamming

“Most daily activity in science can only be described as tedious and boring, not to mention expensive and frustrating.” –Stephen J. Gould, Science, Jan 14, 2000.

“rules are necessary when people lack judgment”.–John Clifford

“The plural of ‘anecdote’ is not ‘evidence’.” Alan Leshner, publisher of Science

“…what can be asserted without evidence can also be dismissed without evidence.”–Christopher Hitchens

“A computer lets you make more mistakes faster than any invention in human history — with the possible exceptions of handguns and tequila.” –Mitch Ratliffe, Technology Review, April, 1992

“…who may regard using finite differences as the last resort of a scoundrel that the theory of difference equations is a rather sophisticated affair, more sophisticated than the corresponding theory of partial differential equations.”–Peter Lax

“How fortunate that in this best of all possible worlds the equations of ideal flow are nonlinear!”–Peter Lax

“For the numerical analyst there are two kinds of truth; the truth you can prove and the truth you see when you compute.” – Ami Harten

“An expert is someone who knows some of the worst mistakes that can be made in his (her) subject, and how to avoid them.” – Werner Heisenberg

“Aristotle maintained that women have fewer teeth than men; although he was twice married, it never occurred to him to verify this statement by examining his wives’ mouths.” -Bertrand Russell

“At the end of your private question, add the words “you idiot.” Now you’re saying to yourself, “Why do you think I asked you to finish the work before the end of this fiscal year, you idiots?” If the question still sounds natural with “you idiots” at its end, don’t ask it. It’s really a statement — a pointed rhetorical question.” –Roger Schwarz

“I imagine one of the reasons people cling to their hates so stubbornly is because they sense, once hate is gone, they will be forced to deal with pain.”–James Baldwin

“Dilbert isn’t a comic strip, it’s a documentary” – Paul Dubois

Practical nonlinear stability considerations

11 Saturday Jan 2014

Posted by Bill Rider in Uncategorized

≈ 2 Comments

This section is woefully incomplete without references.  I will have to come back to it and put those in as well as hooks into the previous section where a bunch of new methods are devised.  The section is largely narrative and a bit presumptive.  The devil would indeed be in the details.

When Van Leer’s fourth installment in his series was published in 1977, the concept of limiters to provide non-oscillatory results was still novel.  Boris and Book had introduced a similarly motivated technique, flux-corrected transport, in a series of papers.  Loosely speaking, the idea was to provide the non-oscillatory character of first-order methods such as upwind (or donor cell) while providing the higher accuracy (resolution) associated with second-order or higher methods.  The geometric character of the approach was particularly appealing and surely encouraged the subsequent adoption of the methodology by the remap community starting in the early 1980’s.

The adoption of these ideas was by no means limited to the remap community, as non-oscillatory methods rapidly became the standard for solving hyperbolic conservation laws.  Extending and improving these non-oscillatory methods became an acute research focus, and many new methods arose in the 1980’s and 1990’s, although the pace of research has abated in the 2000’s.  A key aspect of the work on new non-oscillatory methods is the tendency of early limiters to reduce the accuracy of the approximation to first order at discontinuities and at extrema in the solution.  Along with the diminished accuracy comes a large amount of the numerical dissipation associated with first-order methods.  To combat these unappealing aspects of first-order results, the non-oscillatory research sought to create approaches that could maintain accuracy at extrema.  At discontinuities the solution will be first-order for any practical method based on the shock capturing philosophy.

One manner of viewing this effort is as seeking nonlinear stability for the approximations.  This complements the linear stability so essential in basic numerical method development.  Nonlinear stability is associated with the detailed nature of the solution, and not simply the combination of accuracy and classical stability.

One of the most focused efforts to overcome the limitations of the first generation of limiters is the essentially non-oscillatory (ENO) methods.  These methods are designed to maintain accuracy by adaptively selecting the stencil from the smoothest part of the local solution.  ENO methods were extended successfully to a more practical algorithm in the weighted ENO (WENO) methods, which define weights that blend the stencils together continuously, allowing the method to reach higher order accuracy with the same stencil when the solution is sufficiently smooth.  These methods are among the most actively developed current methods.

Take for example third-order ENO methods based on deciding which of three stencils will approximate the flux to update u_j^n.  Five cell-center values participate in the scheme, u_j^n, u_{j\pm1}^n, u_{j\pm2}^n.  The standard approach then derives parabolic approximations based on the stencils using cells (u_{j-2}^n, u_{j-1}^n, u_j^n), (u_{j-1}^n, u_{j}^n, u_{j+1}^n), and (u_j^n, u_{j+1}^n, u_{j+2}^n).  By some well-defined means the smoothest stencil among the three is chosen to update u_j^n.  In the semi-discrete case only the edge value for the update is chosen, but the selection follows similar principles.  WENO methods predominantly adopt this approach.  There the parabolic stencils are simply evaluated at the edge; for the case described above, u_{j+1/2}^n=\frac{1}{6}(2u_{j-2}^n-7u_{j-1}^n+11 u_j^n), u_{j+1/2}^n=\frac{1}{6}(-u_{j-1}^n+5u_{j}^n+2 u_{j+1}^n), and u_{j+1/2}^n=\frac{1}{6}(2u_{j}^n+5u_{j+1}^n- u_{j+2}^n).  Weights are defined by using the derivatives of the corresponding parabolas to measure the smoothness of each.  The weights are chosen so that in smooth regions they yield a fifth-order approximation to the edge value, u_{j+1/2}^n=\frac{1}{60}(2 u_{j-2}^n -13 u_{j-1}^n + 47 u_j^n + 27 u_{j+1}^n -3 u_{j+2}^n).  The point is that the same principles could be used for other schemes; indeed they already have been in the case of Hermitian WENO methods.
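For concreteness, here is a sketch of the edge-value computation just described, written with the classical Jiang-Shu smoothness indicators and linear weights (an assumption on my part; the post does not commit to a particular weighting).

```python
import numpy as np

def weno5_edge(u, eps=1e-6):
    """u_{j+1/2} from five cell averages u = [u_{j-2}, u_{j-1}, u_j, u_{j+1}, u_{j+2}]."""
    # The three third-order candidate edge values quoted in the text
    q0 = (2*u[0] - 7*u[1] + 11*u[2]) / 6.0
    q1 = (-u[1] + 5*u[2] + 2*u[3]) / 6.0
    q2 = (2*u[2] + 5*u[3] - u[4]) / 6.0
    # Jiang-Shu smoothness indicators built from derivatives of the parabolas
    b0 = 13/12*(u[0] - 2*u[1] + u[2])**2 + 1/4*(u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12*(u[1] - 2*u[2] + u[3])**2 + 1/4*(u[1] - u[3])**2
    b2 = 13/12*(u[2] - 2*u[3] + u[4])**2 + 1/4*(3*u[2] - 4*u[3] + u[4])**2
    # Linear weights chosen so that smooth data recover the fifth-order edge value
    d = np.array([0.1, 0.6, 0.3])
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    w = alpha / alpha.sum()
    return w @ np.array([q0, q1, q2])

# Smooth (linear) data: all three candidates agree and the result is 3.5
print(weno5_edge(np.array([1.0, 2.0, 3.0, 4.0, 5.0])))
```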

Given a set of self-consistent stencils for methods utilizing cell-averages, edge values, moments, and edge-derivatives, one can easily envision an essentially non-oscillatory approach to providing new schemes with nonlinear stability.  The key is to not use stencils that cross discontinuous structures in the solution.  The question to answer through future research is whether stencil selection can provide stability with the compact stencils offered by the alternative methods.

Schemes III and VI are based on moments and are equivalent to discontinuous Galerkin (DG) methods.  DG methods are quite actively being developed due to their excellent theoretical properties, as exemplified by the results Van Leer showed.  The development of nonlinear stability for discontinuous Galerkin has not been entirely successful because limiters often undermine the properties of the linear schemes so severely.

Cell based schemes have been the focus of virtually all of the work on non-oscillatory methods starting with Van Leer (and collaterally with FCT).  A key example is that associated with PPM, where monotonicity-preserving techniques were employed on the higher order approximations.  Extrema-preserving approaches have been devised by several groups, including the author, and in similar work by Sekora in collaboration with Colella.  As noted by the author in his work, the broad efforts with ENO methods, including the wildly successful WENO, could easily be incorporated into the PPM framework because of its notable flexibility.  A third approach is the edge-based limiters developed by Suresh and Huynh, which have been somewhat successfully incorporated to reduce some of the poor behavior of very high-order WENO methods.  These methods can be thought of as combining the monotonicity-preserving limiters with aspects of the essentially non-oscillatory approaches based on the nature of the solution.  Results indicate that these methods produce better practical accuracy.

For the new methods, Suresh and Huynh’s approach would appear to be particularly useful, especially with the semi-discrete formalism.  The application of their approach would be rather straightforward.

Designing New Schemes Based On Van Leer’s Ideas

11 Saturday Jan 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Given the broader spectrum of variables to choose from, new schemes with exceptional properties can be developed.  One can blend volume-averaged, point, moment and derivative values together.  The resulting schemes are all compact and have exceptional numerical properties.  We introduced them earlier as extensions of the basic interpolations employed by Van Leer, built using the same principles.

One thing to recognize immediately is that the famous PPM scheme is a cell-centered implementation of the basic approach associated with Scheme V.  The difference is the construction of edge-centered data from cell-centered quantities.  The way the original PPM overcame the issue was to use higher order approximations for the cell-edge quantities, specifically fourth-order edges for what is an intrinsically third-order scheme.  This allows the approximation to asymptotically approach fourth-order accuracy in the limit of vanishing Courant number.
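The fourth-order edge values referred to here come from the standard interpolation of interface values from cell averages on a uniform grid; the sketch below uses that familiar formula (the periodic wrap is purely for convenience and is my assumption, not PPM’s boundary treatment).

```python
import numpy as np

def fourth_order_edges(u):
    """Interface values u_{j+1/2} from cell averages on a uniform grid:
    u_{j+1/2} = 7/12*(u_j + u_{j+1}) - 1/12*(u_{j-1} + u_{j+2})."""
    um1 = np.roll(u, 1)    # u_{j-1}
    up1 = np.roll(u, -1)   # u_{j+1}
    up2 = np.roll(u, -2)   # u_{j+2}
    return (7.0*(u + up1) - (um1 + up2)) / 12.0
```

In PPM these interface values then seed a mean-preserving parabola in each cell, which is subsequently limited.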

More recently fifth, sixth and seventh order approximations have been used with PPM to good effect.  These have been applied to the use of PPM in global climate model dynamic cores as well as by Colella.  The author used this approach in his extrema-preserving methodology, where the edge values are utilized to produce useful approximations in regions with smooth extrema.  The bottom line is that this approach has proven to be enormously flexible in practice due to the choice of the edge values coupled with a cell-average preserving reconstructing polynomial.  It is reasonable to have faith in the ability of the other schemes presented next to be used in a similarly flexible manner.  Indeed the construction of approximations via polynomial reconstruction is highly amenable to such extensions, as the geometric picture it offers greatly aids the subsequent method development and testing.

One route to nonlinear stability is the ENO methods, which offer adaptive stencil selection.  These methods are based upon the choice of the “smoothest” stencil, defined in some sense.  Another maxim from the ENO literature is the need to bias the stencils toward being upstream-centered.  Put differently, Van Leer’s paper presents a set of upstream-centered approximations that should be favored over other approximations.  Given this approach we might define an ENO scheme based on Scheme V as predominantly using that scheme, but choosing other more upwind or downwind stencils if the function being approximated is insufficiently smooth.

For a positive characteristic velocity an upwind-biased stencil could define a parabola, using u^n_{j-1}, u^n_{j+1/2} and u^n_{j}.  This gives a parabola of

P(\xi) = u_j^n + (-\frac{5}{4}u_j^n - \frac{1}{4} u_{j-1}^n + \frac{3}{2} u^n_{j+1/2})\xi + (-\frac{9}{4} u_j^n + \frac{3}{4} u_{j-1}^n + \frac{3}{2} u_{j+1/2}^n)(\xi^2 -\frac{1}{12}).

This produces a semi-discrete scheme with third-order accuracy and a dispersion relation of \imath \theta + \frac{1}{18} \theta^4 + \frac{1}{270}\imath\theta^5+ O(\theta^6) .

A second stencil to choose from resulting in a third-order method is defined by u^n_{j-1}, u^n_{j-1/2} and u^n_{j}. This gives a parabola of

P(\xi) = u_j^n + (\frac{5}{2}u_j^n + \frac{1}{2} u_{j-1}^n - 3 u^n_{j-1/2})\xi + (\frac{3}{2} u_j^n + \frac{3}{2} u_{j-1}^n - 3 u_{j-1/2}^n)(\xi^2 -\frac{1}{12}).

This produces a semi-discrete scheme with third-order accuracy and a dispersion relation of \imath \theta - \frac{5}{72} \theta^4 + \frac{1}{270}\imath\theta^5+ O(\theta^6) .

The downwind stencil defined by  u^n_{j+1}, u^n_{j+1/2} and u^n_{j} ends up not being useful because it results in a scheme with centered support.  This gives a parabola of

P(\xi) = u_j^n + (3 u^n_{j+1/2}-\frac{5}{2}u_j^n - \frac{1}{2} u_{j+1}^n)\xi + (\frac{3}{2} u_j^n + \frac{3}{2} u_{j+1}^n - 3 u_{j+1/2}^n)(\xi^2 -\frac{1}{12}).

The updates become effectively decoupled because of cancellation of terms, and the order of accuracy drops to second order.  This produces a semi-discrete scheme with second-order accuracy and a dispersion relation of \imath \theta - \frac{1}{24} \imath\theta^3 + \frac{1}{1920}\imath\theta^5+ O(\theta^7) .

The same thing happens for a stencil of u^n_{j+1}, u^n_{j-1/2} and u^n_{j}.
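Coefficients like these follow mechanically from the reconstruction constraints, which makes them easy to check with a computer algebra system. The sketch below solves for the first upwind-biased parabola above (cell averages of cells j-1 and j plus the point value at the right edge, unit cell width); the symbol names are mine.

```python
import sympy as sp

xi = sp.symbols('xi')
P0, P1, P2 = sp.symbols('P0 P1 P2')
u_j, u_jm1, u_edge = sp.symbols('u_j u_jm1 u_edge')  # cell averages and the edge value

half = sp.Rational(1, 2)
# Mean-preserving (Legendre-style) parabola on cell j, xi in [-1/2, 1/2]
P = P0 + P1*xi + P2*(xi**2 - sp.Rational(1, 12))

eqs = [
    sp.Eq(sp.integrate(P, (xi, -half, half)), u_j),       # average over cell j
    sp.Eq(sp.integrate(P, (xi, -3*half, -half)), u_jm1),  # average over cell j-1
    sp.Eq(P.subs(xi, half), u_edge),                      # point value at xi = +1/2
]
sol = sp.solve(eqs, [P0, P1, P2])
print(sp.expand(sol[P1]))  # expect 3/2*u_edge - 5/4*u_j - 1/4*u_jm1
print(sp.expand(sol[P2]))  # expect 3/2*u_edge - 9/4*u_j + 3/4*u_jm1
```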

Now we can move to describing the performance of high-order methods based on Van Leer’s ideas, but extended to slightly larger support.  All these methods will, at most, reference their nearest neighbor cells.  The first is rather obvious and gives fifth-order accuracy.  The reconstruction is defined by cell-averages, u_j^n, u_{j-1}^n, u_{j+1}^n and cell-edge values, u_{j-1/2}^n, u_{j+1/2}^n.  This defines a quartic polynomial,

P(\xi) = u_j^n + \frac{1}{8}(10 u_{j+1/2}^n-10 u_{j-1/2}^n +u_{j-1}^n - u_{j+1}^n)\xi

+ \frac{1}{8} (30 u_{j-1/2}^{n} + 30 u_{j+1/2}^{n} -u_{j-1}^{n}-58 u_j^n -u_{j+1}^{n}) (\xi^2-\frac{1}{12})

+ \frac{1}{2}(u_{j+1}^n-u_{j-1}^n - 2u_{j+1/2}^n+2u_{j-1/2}^n)\xi^3

+\frac{1}{12} (-30 u_{j-1/2}^n - 30u_{j+1/2}^n +5u_{j-1}^n+50 u_{j}^n +5u_{j+1}^n)(\xi^4-\frac{1}{80}).

This produces a semi-discrete scheme with fifth-order accuracy and a dispersion relation of \imath \theta + \frac{1}{900} \theta^6 + \frac{31}{31500}\imath\theta^7+ O(\theta^8) .  This is exceedingly accurate with the dispersion relation peaking at \theta=\pi with an error of only 2.5%.

The next method was discussed and recommended by Phil Roe at the JRV symposium and in a conversation at another meeting.  It uses the cell-average u_j^n, the edge values u_{j\pm1/2}^n, and the edge first derivatives, s_{j\pm1/2}^n.  It is also similar to the PQM method proposed for climate studies.  The PQM method is implemented much like PPM in that the edge variables are approximated in terms of the cell-centered degrees of freedom.  Here we will describe these variables as independent degrees of freedom.  The polynomial reconstruction defined by these conditions is

P(\xi) = u_j^n + \frac{1}{4}(6 u_{j+1/2}^n-6 u_{j-1/2}^n -s_{j-1/2}^n - s_{j+1/2}^n)\xi

+ \frac{1}{4} (3 s_{j-1/2}^{n} - 3 s_{j+1/2}^{n} +30 u_{j-1/2}^{n}-60 u_j^n +30u_{j+1/2}^{n}) (\xi^2-\frac{1}{12})

+ (s_{j+1/2}^n+s_{j-1/2}^n - 2u_{j+1/2}^n+2u_{j-1/2}^n)\xi^3

+\frac{1}{2} (-30 u_{j-1/2}^n - 30u_{j+1/2}^n -5 s_{j-1/2}^n+60 u_{j}^n +5 s_{j+1/2}^n)(\xi^4-\frac{1}{80}).

This produces a semi-discrete scheme with fifth-order accuracy and a dispersion relation of \imath \theta + \frac{1}{7200} \theta^6 + \frac{1}{42000}\imath\theta^7+ O(\theta^8) .  This is exceedingly accurate, with the dispersion relation peaking at \theta=\pi with an error of only 1.2%.

We will close with a couple of schemes that are more speculative, but use the degrees of freedom differently than before.  Both will be based on a cubic polynomial reconstruction and involve the first moment of the solution along with either edge-values or edge-slopes.  Aside from the use of the first moment it is similar to the schemes above.  The reconstruction for the moment, cell-average and cell-edge values is

P(\xi) = u_j^n + \frac{3}{2}(20 m_{1,j}^n- u_{j+1/2}^n+ u_{j-1/2}^n)\xi

+ 3(u_{j-1/2}^n+ u_{j+1/2}^n - 2 u_j^n)(\xi^2 -\frac{1}{12})

+ (10 u_{j+1/2}^n-10 u_{j-1/2}^n - 120 m_{1,j}^n)\xi^3 .

As previous experience would indicate, the use of the moment has remarkable properties, with this method producing the same error as the quartic reconstruction based on edge values and slopes plus the cell average, but with a reconstruction one order lower.

I will close by presenting three schemes based on cubic reconstructions, but utilizing the first moments.  Two will achieve fifth-order accuracy, and the last will suffer from cancellation effects that lower the accuracy to fourth-order.  These methods are unlike the discontinuous Galerkin methods in that their support will go beyond a single cell.  The first of the three will use the three cell-centered variables and the moment of the central cell, giving a polynomial,

P(\xi) = u_j^n+\frac{3}{44}(200 m_{1,j}^n - u_{j+1}^n + u_{j-1}^n)\xi

+ \frac{1}{2}(u_{j-1}^n+ u_{j+1}^n - 2 u_j^n)(\xi^2 -\frac{1}{12})

+ \frac{5}{11} (u_{j+1}^n - u_{j-1}^n - 24 m_{1,j}^n) \xi^3 .

This produces a semi-discrete scheme with fifth-order accuracy and a dispersion relation of \imath \theta + \frac{11}{7200} \theta^6 + \frac{1157}{3024000}\imath\theta^7+ O(\theta^8) .  This is exceedingly accurate, with the dispersion relation peaking at \theta=\pi with an error of only 3.1%.

The next method uses the cell-centered values of the cell and its upwind neighbor and the moments in each of those cells.  The reconstructing polynomial is

P(\xi) = u_j^n+\frac{1}{8}(78 m_{1,j}^n -18 m_{1,j-1}^n +3 u_{j}^n -3 u_{j-1}^n)\xi

+ \frac{1}{4}(66 m_{1,j-1}^n+114 m_{1,j}^n -15 u_j^n+15 u_{j-1}^n)(\xi^2 -\frac{1}{12})

+ \frac{1}{2} (5 u_{j-1}^n - 5 u_{j}^n + 30 m_{1,j}^n +30 m_{1,j-1}^n) \xi^3 .

This produces a semi-discrete scheme with fifth-order accuracy and a dispersion relation of \imath \theta - \frac{1}{1350} \theta^6 + \frac{1}{23625}\imath\theta^7+ O(\theta^8) .  This is accurate, with the dispersion relation peaking at \theta=\pi with an error of about 12%.

Our final method uses the cell-centered value of the cell and its downwind neighbor along with the first moment in each of those cells.  The reconstruction takes the form,

P(\xi) = u_j^n+\frac{1}{8}(78 m_{1,j}^n -18 m_{1,j+1}^n +3 u_{j+1}^n -3 u_{j}^n)\xi

+ \frac{1}{4}(-66 m_{1,j+1}^n-114 m_{1,j}^n -15 u_j^n+15 u_{j+1}^n)(\xi^2 -\frac{1}{12})

+ \frac{1}{2} (5 u_{j}^n - 5 u_{j+1}^n + 30 m_{1,j}^n +30 m_{1,j+1}^n) \xi^3 .

We note that this stencil produces cancellation that lowers its accuracy by one order.  This produces a semi-discrete scheme with fourth-order accuracy and a dispersion relation of \imath \theta - \frac{1}{1080} \imath\theta^5 + \frac{1}{54432}\imath\theta^7+ O(\theta^8) .  This is still accurate, with the dispersion relation peaking at \theta=\pi with an error of 10%.  The last two methods may not be particularly interesting in and of themselves, but may be useful as alternative stencils for ENO-type methods.

Van Leer’s 1977 paper – Paper IV in the Quest for the Ultimate

06 Monday Jan 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Some of these options were profitably framed a long time ago in one of Van Leer’s papers (the fourth of his series, published in 1977).  In that paper he described two methods that came into widespread use: the second-order accurate method that lives on as the origin of much of the limiter work in the 1980’s, and is the most common modern approach used in aerodynamic codes, unstructured codes, and for remap in ALE codes.  He also introduced the base method that became the PPM method.  Ironically, these two methods were the worst of the six methods introduced.  Van Leer labeled them as Schemes 1 and 4.  Van Leer’s Scheme 1 is called the “piecewise linear method” and Scheme 4 is the “piecewise parabolic method (PPM).”  The polynomial approximation used in Scheme 1 uses the integral average of the cell value and a derived gradient (or slope) to define the linear approximation.  Other linear approximations could be developed using biased approximations for the slope, but these methods would not be considered upstream centered.  The PPM approximation is built by defining a parabola that reconstructs the integrals of a cell and its two neighbors.  The more famous PPM method uses a polynomial matching the cell average and high-order approximations to the values at the edges of the cell.

Two other classes of methods were provided.  One of these is based on the moments of a solution (the zeroth moment is the cell average value).  These methods are equivalent to the discontinuous Galerkin (DG) method, which is the focus of copious research today and one of the key approaches being explored to deal with the upcoming changes in computing architecture.  This is enabled by the scheme being developed to carry extra degrees of freedom in each computational cell (i.e., the moments; a single moment gives a linear basis, a second moment a quadratic basis).  Van Leer labeled these as Schemes 3 and 6.  Scheme 6 was not described in detail, but has been developed as the PPB (piecewise parabolic Boltzmann) scheme by Paul Woodward, and as a quadratic finite element method within the DG research.  The polynomial derivation will match the cell average and the first and/or second moment of the distribution.

It is notable that the bane of DG methods is the development of nonlinear limiters that do not destroy the increase in accuracy and fidelity that the basic (linear) DG methods provide.  Thus far it is the development of effective limiters for DG that has most greatly “limited” their adoption by the broader community.  This is coupled to the next issue, related to stability.

The methods in Van Leer’s paper used characteristic information to enhance their stability, resulting in a Courant condition that was not restrictive.  Most DG methods developed today rely upon method-of-lines approaches, which means the spatial part of the operator is discretized and the time integration is done using an ODE method.  This approach, when using explicit methods, results in a very restrictive Courant condition.  A challenge is to produce methods of this sort with greater stability.  Selectively reintroducing the characteristic information may be an answer.

The third class of approaches in Van Leer’s paper is significantly under-developed and deserves attention.  Moreover, I believe it can profitably be extended to encompass even broader design principles.  As briefly commented on above, these methods utilize point values of the quantity being integrated and derivative values.  In order to capture shocks, the integral of the quantity (i.e., the cell-averaged value) is also updated using the familiar control volume form.  Thus these methods can be described as combinations of finite volume and finite difference methods.  Van Leer’s Scheme 2 uses the integral average and a cell-centered derivative to define a linear polynomial.  The difference between this method and Scheme 1 is that Scheme 2’s derivative is evolved in time using the derivative of the PDE.  Scheme 5 is different.  The parabola is defined by the integral average and the edge values.  The difference between this method and the classic PPM method is that the edge values are evolved in time by the continuous version of the PDE.  By continuous version of the PDE I mean the strong, differentiable form of the PDE and not the weak form used to integrate the integral average values or moments.

There is no reason that this approach can’t be taken even further to include moments as well.  This provides the capability to design methods with tailored performance, both in terms of accuracy and memory use.  The key is to provide the capability to introduce appropriate nonlinear stability mechanisms of equivalent impact to those common with Van Leer’s piecewise linear or parabolic methods.  For example, a cubic polynomial could be derived using the integral average, the first moment and the cell-edge values.  A suggestion made by Phil Roe is to use the integral average with the cell-edge and cell-edge-derivative values to define a quartic polynomial and thus a fifth-order method.

Stability and accuracy analysis for these schemes becomes necessarily complex.  Using tools such as Mathematica renders the problem tractable.  I will note that Van Leer analyzed the method without such help, and this makes the analysis contained in his paper all the more masterful.
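A computer algebra system makes this kind of accuracy analysis nearly mechanical. As a much simpler illustration of the mechanics (first-order upwinding for linear advection rather than any of the schemes above, which is my choice purely for brevity), the sketch below expands the semi-discrete Fourier symbol in the wavenumber θ and reads off the leading error term; the same procedure, applied to the coupled updates of averages, edges, moments or derivatives, produces dispersion relations like those quoted earlier.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

# Semi-discrete first-order upwind for u_t + u_x = 0 on a unit grid:
#   du_j/dt = -(u_j - u_{j-1}); a Fourier mode u_j = exp(i*j*theta) gives the symbol
symbol = -(1 - sp.exp(-sp.I*theta))

# The exact symbol is -i*theta; the series of the difference exposes the error terms
print(sp.series(symbol + sp.I*theta, theta, 0, 5))
# leading term -theta**2/2: a first-order accurate, dissipative error
```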

 

The paper’s abstract and introduction

06 Monday Jan 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment

A Basis for Reconsidering Remap Methods

William J. Rider

Sandia National Laboratories, Albuquerque 

Abstract

Methods for discretizing remap are based on algorithms developed for hyperbolic conservation laws.  Since its introduction in 1977, Van Leer’s monotonicity-preserving piecewise linear method and its extensions have been ubiquitous in remap.  In that 1977 paper Van Leer introduced another five algorithms, which largely have not been used for remap despite the observation that the piecewise linear method had the least favorable properties.  This adoption parallels the algorithmic choices in other related fields.  Two factors have led to the lack of attraction to the five algorithms: the simplicity and effectiveness of the piecewise linear method, and complications in the practical implementation of the other methods.  Plainly stated, Van Leer’s piecewise linear method enabled ALE methods to move forward by providing a high resolution, monotonicity-preserving remap.  As a cell-centered scheme, the extension to remap was straightforward.  Several factors may be conspiring to make us reconsider these methods anew: computing architectures are now more favorable towards floating-point-intensive methods with little data movement, and we have 30 years of experience in devising nonlinear stability mechanisms (e.g., limiters).  In particular, one of the methods blends characteristics of finite volume and finite difference methods together in an ingenious manner that has exceptional numerical properties, and it should be considered a viable alternative to the ubiquitous piecewise linear method.

Introduction

In the numerical solution of hyperbolic PDEs, a series of papers by Bram Van Leer is considered seminal.  It was a five paper series with the august title “Towards the Ultimate Differencing Scheme…” In the fourth paper of the series a whole set of different methods were proposed (six in total).  Each of the methods is based on piecewise polynomial interpolation using the degrees of freedom to define the polynomials.   A first order polynomial leads to a second-order method, and a second-order polynomial leads to a third order method because of the integration involved in the solution.  Furthermore, each method developed was fundamentally upwind biased.  The broad class of methods can be described as being upstream centered, which is a centered basis for the polynomial with an upwind bias for information propagation.  This class of methods is well known to be advantageous mathematically.

Subsequently, most of the development of numerical methods was based upon two of these methods.  The starkest aspect of this thread is that the two base methods were actually the least attractive in terms of their fundamental numerical properties (accuracy, dissipation and propagation or phase error).  The reason for this is related to the more elaborate techniques necessary for nonlinear stability of solutions and practical use.  These techniques go by the name of “limiters” and serve to keep solutions physical and properly bounded.

The relatively unexplored methods are interesting for several reasons.  Some of the basic design principles are quite different.  The favored methods are all based on integrating over control volumes, implying the use of conservation form (as essentially dictated to the community by the famous Lax-Wendroff theorem).  A couple of the methods use other integrated quantities, such as moments of a solution, where the first moment is the integral of the quantity multiplied by a linear function, the second moment is the integral of the quantity multiplied by a quadratic function, and so on.  The evolution equations for the moments are derived using the same procedure, and naturally invoke the weak form of the PDE.  The other form uses the differential form of the PDEs and involves introducing point quantities, such as the solution at a point, or derivatives of the solution at a point.  One of the key aspects of this development is to make sure the quantities are all dependent upon each other so that the degrees of freedom are well coupled.

The present day interest should be driven by the needs of modern computers, which are already far different from previous generations, with massive and pervasive levels of parallelism.  New computers will greatly penalize memory references and communication compared to floating point operations.  In other words, once you’ve got something in your computer’s memory, you will be greatly rewarded for doing lots of floating point operations on it.  This character will offer benefits to methods having their data local and high-order accuracy requiring many operations.  The key for all these methods is to develop robust and sensible stability enhancing techniques that do not undermine the fundamental properties of the methods they are applied to.

As noted, by far the greatest amount of effort for solving hyperbolic PDEs* has gone into cell-centered schemes due to their intrinsic simplicity and connection to existing computational approaches.  This work has been enormously successful over the past thirty-some-odd years.  Starting in the early 1970’s, key concepts were added to the repertoire of scientists that allowed one to simultaneously achieve the physically appealing results provided by first-order methods such as upwind differencing with the accuracy of second-order methods such as Lax-Wendroff.  These techniques were probably first introduced in the form of flux-corrected transport (FCT) by Jay Boris (and later in collaboration with David Book), but also independently by Bram Van Leer in his “Toward the Ultimate” series of papers.  There is also the compelling case of a third invention of these methods by Kolgan in the Soviet Union.  Kolgan died before receiving any attention, and his invention was only recently uncovered.  A fourth stream of development, which looks similar with the passage of time, is due to the Israeli mathematician Ami Harten, who figures prominently in our tale later.

These methods are known as monotonicity preserving where the numerical solution of the PDE does not produce any new minimum or maximum values.  This is a property of the simplest form of the equation used extensively in the development of numerical methods.   Upwind differencing is the archetypical monotonicity-preserving linear scheme, and Lax-Friedrichs is another example.  Monotone schemes are first-order accurate.  The work of Boris, Van Leer and Kolgan all sought to produce monotonicity preserving methods that were not limited to first-order accuracy.

All three developments were made with knowledge of Godunov’s theorem, which disallows the possibility of developing a linear method that is at once second-order accurate and monotone.   The key to the theorem is the word linear.  The key advancement made was to introduce nonlinearity into the method even in the case of solving a linear PDE.  What is a linear scheme?  A linear method uses the same finite difference stencil (or formula) everywhere, while a nonlinear scheme adapts the finite difference scheme to the solution itself.  In other words, the scheme detects the nature of the solution and adapts the finite difference scheme to optimally solve it. 
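To make the linear/nonlinear distinction concrete, here is a minimal sketch in Python of a minmod-limited slope (my own illustration of the general idea, not any particular published limiter); the slope used in each cell adapts to the local data, so the resulting scheme is nonlinear even when the PDE is linear.  The function names are mine.

import numpy as np

def minmod(a, b):
    # Classic minmod: take the smaller-magnitude one-sided slope when the two
    # agree in sign, and zero at an extremum (this is the nonlinearity).
    return np.where(a*b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    # Cell-wise slopes that adapt to the data on a periodic grid: smooth data
    # gets a second-order slope, while extrema are flattened to first order.
    backward = u - np.roll(u, 1)
    forward = np.roll(u, -1) - u
    return minmod(backward, forward)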

At the detailed level, the developments were much different, but the conceptual framing is nearly identical.  FCT uses a sequential process where the limiter adds back as little of the first-order monotone dissipation as possible to still produce monotone results.  Both Van Leer and Kolgan developed second-order extensions of Godunov’s method**.

In the field of hyperbolic PDEs, nonlinear schemes changed everything.  In many fields of computational science this development was like an event horizon.  These nonlinear differencing schemes were an enabling technology for many numerical simulations, and resulted in an order of magnitude or more increase in the effective accuracy of simulations.

From this foundation there was a flurry of research activity in the 1980s and 1990s developing, extending and using this broad class of methods.  Examples abound in the literature and the success of these approaches is undeniable.  Among the more successful methods from this line of work are the piecewise parabolic method (PPM) and the weighted essentially non-oscillatory (WENO) methods, which dominate modern use.  Both of these approaches have the appeal of utilizing elements of method design that are higher-order accurate, and they produce results of significantly higher fidelity than the original nonlinear methods.  For example, PPM dominates the field of astrophysical simulation and WENO dominates mathematical research and related simulations.  Both methods are based on developing a numerical solution for variables all defined at the center of control volumes.

A clear downside to this approach is the need to communicate the information of many control volumes to update the values for any other control volume.  This communication is becoming increasingly problematic on modern computing platforms because of the cost of communication relative to floating-point operations or memory access.  In order to get the most out of modern computers, both communication of information and memory access must be carefully managed.  Relatively speaking, floating point operations are “free,” which more accurately means that they are not the rate-limiting aspect of computing.   Therefore we can stand to have algorithms that are very floating-point intensive as long as they have relatively low communication intensity and memory requirements.  The key is striking the right sort of balance.  Because of this issue it is time to rethink the design of algorithms in a major way.  Recent results at a conference stunned me by showing, for the first time I can remember, new computing architectures actually providing worse performance.  The new chips were better for many things, but not scientific computing.  This has the potential to be a true “wake-up call.”

Some of Van Leer’s methods have received attention and extension, most notably the PPM method developed by Colella and Woodward.  This method is extensively used in astrophysics, but not significantly in remap methods, although a very close variant of Van Leer’s approach (scheme IV) has been adopted by ALEGRA.    The PPM method can be thought of as a reasonable cell-centered approach to scheme V where the edge-centered variables are replaced with temporary high-order variables. Perhaps the most extensive development of this method was undertaken by Eymann in conjunction with Roe.  This extension provided a number of advances that may prove useful in the setting of remap.  More direct implementations of scheme V have been pursued by Guinot, and by Popov (with Norman).  These approaches highlight the positive aspects of the method, as we will reiterate below, but thus far have not achieved widespread adoption.  Zeng’s hybrid finite-difference-finite-volume method is yet another closely related scheme that may include extensions of the approach important for practical adoption.

Discontinuous Galerkin methods have received the most attention, although the direct descendants of Van Leer’s approach are far more limited, with the line of work arising from Cockburn and Shu being the most widely adopted.  Woodward has developed Van Leer’s sixth scheme in detail despite the sparseness of its description in the 1977 paper.

*Hyperbolic PDE methods can also be described as solving an advection equation, convection or remapping values from one grid to another.  Where I work, remap is the most common term.  Remap can also be done purely geometrically, and the advection is applied using a mathematical transformation to achieve equivalence.

**Godunov’s work was done largely in isolation from the progress made in the West.  He produced the theorem mentioned above, which is fundamental, and a unique numerical method for solving systems of hyperbolic PDEs.  This method is a fairly pure extension of first-order upwinding to systems.  Godunov used the Riemann problem to provide upwinding to the system, a truly inspired choice.  The Riemann problem is the analytical solution to the pure initial value problem of two semi-infinite states interacting.   In this solution we must note that the number of independent modes of transport, or waves, is equal to the number of equations in the system.  The magnitudes of the wave speeds can vary significantly and the waves can differ in direction.   This muddies the meaning of upwind in the classical sense.  The Riemann solution is used to sort this out in a physically appealing way.
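As a rough sketch of how the Riemann solution sorts out upwinding for a system, the Python fragment below (my own illustration, restricted to a constant-coefficient linear system u_t + A u_x = 0) decomposes the jump between the left and right states into characteristic waves and upwinds each wave according to the sign of its speed; for a scalar equation this collapses to ordinary upwind differencing.  The function name and interface are assumptions for illustration.

import numpy as np

def linear_riemann_flux(A, uL, uR):
    # Upwind (Godunov) interface flux for u_t + A u_x = 0.
    lam, R = np.linalg.eig(A)            # wave speeds and right eigenvectors
    lam = lam.real                       # eigenvalues of a hyperbolic system are real
    alpha = np.linalg.solve(R, uR - uL)  # wave strengths in the jump uR - uL
    flux = A @ uL                        # start from the left state...
    for k in range(len(lam)):
        if lam[k] < 0.0:                 # ...and add each left-going wave
            flux = flux + lam[k] * alpha[k] * R[:, k]
    return flux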

Review of the analysis of Van Leer’s Six Schemes

05 Sunday Jan 2014

Posted by Bill Rider in Uncategorized

≈ 1 Comment

Van Leer’s analysis of these methods is a tour-de-force, especially considering the sort of resources available to him at that time.  He did not have use of tools such as Mathematica, upon which I rely.  Nonetheless, the modern tools do allow more flexibility in analysis, and exploration of extensions of the methods without undue effort.  This is not to say that the analysis is particularly easy as Mathematica analyses go; it requires considerable skill, experience and attention.  My hope for this rearticulation of the results is clarity, along with a substantial appreciation for the skill expressed in the original exposition.

Here we briefly review the six schemes presented by Van Leer in that 1977 paper in terms of their basic structure, and numerical properties.  We will utilize Von Neumann analysis for both the fully discrete and semi-discrete schemes.  In Von Neumann analysis the variable on the mesh is replaced by the Fourier transform,

u_j = \exp(\imath j \theta) or u_j^n = A^n\exp(\imath j \theta)

with \theta being the angle and A being the amplification factor.

The fully discrete analysis was Van Leer’s vehicle of presentation, and the semi-discrete analysis provides additional detail.   The fully discrete schemes all fulfill the characteristic condition: they recover the exact solution as the CFL number goes to one, which also marks the stability limit.  The semi-discrete analysis only provides the character of the spatial scheme.

Scheme I. The Piecewise Linear Method.

This method uses a piecewise linear reconstruction

P(\xi) = P_0 + P_1 \xi with \xi = (x-x_j)/\Delta x, P_0 = u_j^n, and the slope computed from local data, P_1 = \frac{1}{2} (u_{j+1}^n - u_{j-1}^n).

The full update is

u^{n+1}_{j} = u^{n}_{j} - \nu (u^{n+1/2}_{j+1/2} - u^{n+1/2}_{j-1/2})

with u^{n+1/2}_{j+1/2} = P(1/2-\frac{1}{2}\nu), where \nu = \Delta t/\Delta x is the CFL number (for unit wave speed).
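For concreteness, a minimal Python sketch of one Scheme I step as written above, assuming unit wave speed, a uniform periodic grid, and NumPy for the index shifts; the function name is my own.

import numpy as np

def scheme1_step(u, nu):
    # One step of Scheme I for u_t + u_x = 0 with CFL number nu = dt/dx.
    P1 = 0.5*(np.roll(u, -1) - np.roll(u, 1))   # centered slope P_1
    uhalf = u + 0.5*(1.0 - nu)*P1               # u_{j+1/2}^{n+1/2} = P(1/2 - nu/2)
    return u - nu*(uhalf - np.roll(uhalf, 1))   # conservative update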

This method is second-order accurate as verified by expanding the method function in a Taylor series.  This can also be verified by expanding the Fourier transformed mesh function in a Taylor series expansion around \theta = 0.
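This check is easy to automate with a computer algebra system.  The SymPy sketch below (my own construction, playing the role of the Mathematica analysis) substitutes the Fourier mode into the Scheme I update, forms the amplification factor A(\theta), and expands A - \exp(-\imath\nu\theta) about \theta = 0; for a second-order scheme the residual is O(\theta^3).

import sympy as sp

theta, nu = sp.symbols('theta nu', real=True)
z = sp.exp(sp.I*theta)                     # the Fourier mode shift u_{j+1}/u_j
fluxR = 1 + (1 - nu)/4*(z - 1/z)           # \bar{u}_{j+1/2}/u_j for Scheme I
fluxL = fluxR/z                            # same formula shifted one cell left
A = 1 - nu*(fluxR - fluxL)                 # amplification factor A(theta)
print(sp.series(A - sp.exp(-sp.I*nu*theta), theta, 0, 4))   # residual is O(theta^3)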

[Figure: Scheme I amplification factor surface plot, showing large error at the mesh scale.]

The amplification factor is A =1-\frac{1}{8} \left(\nu-2 \nu^2+2\nu^3-\nu^4 \right)\theta^4+O\left(\theta^6\right)

The phase error is \phi = 1+\frac{1}{12}(1-2\nu+3\nu^2)\theta^2 + O(\theta^4); this is the leading-order error (being one order higher due to how the phase error is written).  A key aspect of this scheme is that it has zero phase error at CFL numbers of one-half and one.

[Figure: Scheme I phase error surface plot; as with the amplification, Scheme I has large errors at the mesh scale.]

The semi-discrete form is u_{j+1/2} - u_{j-1/2} = \imath \theta + \frac{1}{12}\imath \theta^3 + \frac{1}{8}\theta^4+ O(\theta^5), which is useful if the scheme is used with a method-of-lines time integration.

[Figure: Scheme I semi-discrete dispersion error, which grows large past \theta = \pi/2.]

Scheme II. The Piecewise Linear Method with Evolving Slopes.

This method evolves both the variable and its first derivative and requires the analysis of a system of variables.  The scalar analysis of stability is replaced by the analysis of a matrix, determined by the behavior of its eigenvalues.  One eigenvalue will define the behavior of the scheme, while the second eigenvalue has been referred to as spurious.  More recently it has been observed that the “spurious” eigenvalue describes the subcell behavior at the mesh scale.

The method uses the same reconstruction as Scheme I, but evolves both P_0 = u_j^n and P_1 = s_j^n  in time.  The update is then two equations, one for the cell-centered data,

u^{n+1}_{j} = u^{n}_{j} - \nu (u^{n+1/2}_{j+1/2} - u^{n+1/2}_{j-1/2}),

which is identical to Scheme I and a second for the slopes

s^{n+1}_j = u_j^n+\frac{1}{2} s_j^n-u_{j-1}^n - \frac{1}{2} s_{j-1}^n - \nu(s_j^n - s_{j-1}^n) ,

where the usual initial slope s_j^n is replaced by the first expression on the right-hand side of the equation, driving a coupling of the variable and the slope in the update.

The slope update is unusual and causes significant error as the CFL number goes to zero.  This method does not have an obvious method of lines form due to the nature of this update.

The amplification factor is A =1-\frac{1}{8} \left(\nu^4-2 \nu^3+2 \nu^2\right)\theta^4+O\left(\theta^6\right).

[Figure: Scheme II amplification factor surface plot.]

The phase error is \phi = 1+\frac{1}{12}(1-3\nu+2\nu^2)\theta^2 + O(\theta^4)  and is the leading order error as with all of the first three methods. Again, this method does not lend itself to a well-defined semi-discrete method.

[Figure: Scheme II phase error surface plot.]

Scheme III. The Piecewise Linear Method with Evolving First Moment.

This method uses a reconstruction that is similar to the piecewise linear one, but instead of the slope (first derivative), the linear term is proportional to the first moment of the solution,

P(\xi) = P_0 + P_1 \xi

again P_0=u_j^n, but P_1 = m_1 = \int_{-1/2}^{1/2} P(\xi)\xi d\xi.  Here, as with Scheme II, we evolve both P_0, and P_1, but the evolution equation for P_1 is much different, and naturally yields a coupled system.  This also provides a direct path to writing the scheme in a semi-discrete form.

The update for P_0 is the same as the previous two schemes,

u^{n+1}_{j} = u^{n}_{j} - \nu (u^{n+1/2}_{j+1/2} - u^{n+1/2}_{j-1/2}).

The update for P_1 is based on the weak form of the PDE and uses the definition of the first moment,

\int_{-1/2}^{1/2} \xi(\partial_t u + \partial_x u)\,d\xi =0, which after integrating the flux term by parts in the cell coordinate becomes \partial_t m_1 - \int_{-1/2}^{1/2} u \,d\xi + \frac{1}{2}\left[u(\tfrac{1}{2})+u(-\tfrac{1}{2})\right] = 0.

Discretely, this equation can be evaluated for the scalar wave equation as

m_j^{n+1} = m_j^n - \frac{1}{2}\nu (u^{n+1/2}_{j-1/2}+u^{n+1/2}_{j+1/2}) + \frac{1}{6}(u_j^n + 4 u_j^{n+1/2} + u_j^{n+1}), where u^{n+1/2}_{j} = u^{n}_{j} - \frac{1}{2}\nu (u^{n+1/2}_{j+1/2} - u^{n+1/2}_{j-1/2}).

The amplification factor is A =1-\frac{1}{72} \left(\nu-2 \nu^2+2 \nu^3-\nu^4\right)\theta^4+O\left(\theta^6\right).

[Figure: Scheme III amplification factor surface plot.]

The phase error is \phi = 1+\frac{1}{540}(2-5\nu+5\nu^3-2\nu^4)\theta^4 + O(\theta^6); now the amplification error is leading and the method is confirmed as being third-order. This scheme is obviously more accurate than the previous two methods.

[Figure: Scheme III phase error surface plot.]

The semi-discrete form follows from the standard updates for both the variable and its moment without any problem.  The update is remarkably accurate with small coefficients, u_{j+1/2} - u_{j-1/2} = \imath \theta + \frac{1}{72} \theta^4 + \frac{1}{540}\imath \theta^5+ O(\theta^6), which the keen observer will notice is third-order accurate instead of the expected second order.  Van Leer made a significant note of this.

[Figure: Scheme III semi-discrete dispersion (imaginary part) and dissipation (real part) errors.]

Scheme IV. The Piecewise Parabolic Method.

This method uses a piecewise parabolic reconstruction

P(\xi) = P_0 + P_1 \xi + P_2(\xi^2 - \frac{1}{12}),

and uses the cell average together with its two nearest neighbors to determine the fit.  The fit is defined as recovering the integral average in all three cells.  The coefficients are then simply defined by P_0=u_j^n, P_1=\frac{1}{2}(u_{j+1}^n - u_{j-1}^n), and P_2=\frac{1}{2}(u_{j+1}^n - 2u_j^n +u_{j-1}^n) . This is a third-order method.

The update is simple, based on the evaluation of edge- and time-centered values that need to be treated slightly differently for third-order methods,

\bar{u}_{j+1/2} = \frac{1}{\nu}\int_{1/2-\nu}^{1/2} P(\xi)\, d\xi = P_0 + P_1\left(\frac{1}{2} - \frac{\nu}{2}\right) + P_2 \left(\frac{1}{6} - \frac{\nu}{2} + \frac{\nu^2}{3}\right).  The update then is simply the standard conservative variable form,

u_j^{n+1} = u_j^n -\nu(\bar{u}_{j+1/2} - \bar{u}_{j-1/2}) .
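A minimal Python sketch of one Scheme IV step, under the same assumptions as before (unit wave speed, periodic grid, names my own); it simply evaluates the reconstruction coefficients and the time-averaged edge value given above.

import numpy as np

def scheme4_step(u, nu):
    # One step of Scheme IV for u_t + u_x = 0 with CFL number nu = dt/dx.
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    P0 = u
    P1 = 0.5*(up1 - um1)
    P2 = 0.5*(up1 - 2.0*u + um1)
    # average of P over the upstream interval of width nu behind the edge
    ubar = P0 + P1*(0.5 - 0.5*nu) + P2*(1.0/6.0 - 0.5*nu + nu*nu/3.0)
    return u - nu*(ubar - np.roll(ubar, 1))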

The amplification factor is A =1-\frac{1}{24} \left(\nu^2-2 \nu^3+\nu^4 \right)\theta^4+O\left(\theta^6\right)

[Figure: Scheme IV amplification factor surface plot.]

The phase error is \phi = 1-\frac{1}{60}(2-5\nu+5\nu^3-2\nu^4)\theta^4 + O(\theta^6); this is the leading-order error (being one order higher due to how the phase error is written).  A key aspect of this scheme is that it has zero phase error at CFL numbers of one-half and one.

[Figure: Scheme IV phase error surface plot.]

The semi-discrete form is u_{j+1/2} - u_{j-1/2} = \imath \theta + \frac{1}{12}\theta^4 - \frac{1}{30}\imath\theta^5 + O(\theta^6), which is useful if the scheme is used with a method-of-lines time integration.

[Figure: Scheme IV semi-discrete dispersion (imaginary part) and dissipation (real part) errors.]

Scheme V. The Piecewise Parabolic Edge-Based Method.

This method uses a piecewise parabolic reconstruction

P(\xi) = P_0 + P_1 \xi + P_2(\xi^2 - \frac{1}{12}),

and uses the cell average and its two edge values to determine the fit.  The fit recovers the integral of the cell j and the value at the edges j\pm1/2.  The coefficients are then simply defined by P_0=u_j^n, P_1 =u_{j+1/2}^n - u_{j-1/2}^n , and P_2 = 3(u_{j+1/2}^n-2u_j^n+u_{j-1/2}^n) . This is a third-order method.

This method is distinctly different from Scheme IV in that both the cell-centered, conserved quantities and the edge-centered (non-conserved) quantities are evolved.  The forms of the updates are distinctly different and derived accordingly.  The time-integrated edge value follows the polynomial form from Scheme IV,

\bar{u}_{j+1/2} = \frac{1}{\nu}\int_{1/2-\nu}^{1/2} P(\xi)\, d\xi = P_0 + P_1\left(\frac{1}{2} - \frac{\nu}{2}\right) + P_2 \left(\frac{1}{6} - \frac{\nu}{2} + \frac{\nu^2}{3}\right) = u_{j+1/2}^n -\frac{\nu}{2} P_1 -\left(\frac{\nu}{2} - \frac{\nu^2}{3}\right)P_2,

giving a cell centered update, u_j^{n+1} = u_j^n -\nu(\bar{u}_{j+1/2} - \bar{u}_{j-1/2}) .  The edge update is defined by a time-integrated, upstream-biased first derivative evaluated at the cell-edge position,

u_{j+1/2}^{n+1}= u_{j+1/2}^n - \nu P_1-\nu P_2+\nu^2 P_2 .
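For concreteness, a sketch of one Scheme V step in Python carrying both the cell averages and the edge values, again assuming unit wave speed and a periodic grid; the storage convention (ue[j] holding u_{j+1/2}) and the function name are my choices.

import numpy as np

def scheme5_step(u, ue, nu):
    # One step of Scheme V for u_t + u_x = 0; u holds cell averages u_j and
    # ue holds edge values with ue[j] = u_{j+1/2}; nu = dt/dx is the CFL number.
    uem = np.roll(ue, 1)                              # u_{j-1/2}
    P1 = ue - uem
    P2 = 3.0*(ue - 2.0*u + uem)
    ubar = ue - 0.5*nu*P1 - (0.5*nu - nu*nu/3.0)*P2   # time-averaged edge value
    u_new = u - nu*(ubar - np.roll(ubar, 1))          # conservative cell update
    ue_new = ue - nu*P1 - nu*P2 + nu*nu*P2            # characteristic edge update
    return u_new, ue_new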

The amplification factor is A =1-\frac{1}{72} \left(\nu-2 \nu^2+2 \nu^3-\nu^4\right)\theta^4+O\left(\theta^6\right).

[Figure: Scheme V amplification factor surface plot.]

The phase error is \phi = 1+\frac{1}{540}(2-5\nu+5\nu^3-2\nu^4)\theta^4 + O(\theta^6); now the amplification error is leading and the method is confirmed as being third-order. The scheme is very close in performance and formal error form to Scheme III, but is arguably simpler.

[Figure: Scheme V phase error surface plot.]

The semi-discrete form follows from the standard updates for both the variable and the edge value without any problem.  The update has an accuracy of third order with small coefficients, u_{j+1/2} - u_{j-1/2} = \imath \theta + \frac{1}{72} \theta^4 + \frac{1}{270}\imath \theta^5+ O(\theta^6) .

[Figure: Scheme V semi-discrete dispersion (imaginary part) and dissipation (real part) errors.]

Looking at the recent PPML (piecewise parabolic method – local) scheme of Popov and Norman produces an extremely interesting result.  The interpolation used by PPML is identical to Scheme V, but there is a key difference in the update of the edge values.  The differential form of evolution is not used and instead a semi-Lagrangian type of backwards characteristic interpolation is utilized,

u^{n+1}_{j+1/2} = P(1/2 -\nu) .

This produces exactly the same scheme as the original Scheme V for the scalar equation.  In many respects the semi-Lagrangian update is simpler and potentially more extensible to complex systems. Indeed this may already have been actualized by the PPML authors, who have applied the method to integrating the equations of MHD.
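The equivalence is easy to verify symbolically: evaluating the parabola at \xi = 1/2 - \nu reproduces the differential edge update term by term.  A short SymPy check (my own) follows.

import sympy as sp

nu, P0, P1, P2 = sp.symbols('nu P0 P1 P2')
xi = sp.Rational(1, 2) - nu
parabola = P0 + P1*xi + P2*(xi**2 - sp.Rational(1, 12))    # P(1/2 - nu)
edge = P0 + P1/2 + P2*sp.Rational(1, 6)                    # P(1/2) = u_{j+1/2}^n
differential = edge - nu*P1 - nu*P2 + nu**2*P2             # Scheme V edge update
print(sp.simplify(parabola - differential))                # prints 0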

Scheme VI. The Piecewise Parabolic Method with Evolving First and Second Moments.

This method uses a piecewise parabolic reconstruction

P(\xi) = P_0 + P_1 \xi + P_2(\xi^2 - \frac{1}{12}),

and uses the cell average together with its first and second moments to determine the fit.  The fit recovers the cell average over cell j along with the first and second moments.  The coefficients are then simply defined by P_0=u_j^n , P_1 =12 m_{1,j}^n , and P_2= 180 m_{2,j}^n - 15 u_j^n . m_1 is the first moment of the solution, and m_2 is the second moment.  The moments are computed by integrating the solution multiplied by the coordinate (or its square) over the mesh cell.  This is a fifth-order method.
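The coefficients can be checked by direct integration: with P_1 = 12 m_1 and P_2 = 180 m_2 - 15 u_j the parabola reproduces the cell average and both moments.  A short SymPy verification (my own) is below.

import sympy as sp

xi, u, m1, m2 = sp.symbols('xi u_j m1 m2')
P = u + 12*m1*xi + (180*m2 - 15*u)*(xi**2 - sp.Rational(1, 12))
half = sp.Rational(1, 2)
print(sp.simplify(sp.integrate(P, (xi, -half, half))))         # cell average: u_j
print(sp.simplify(sp.integrate(P*xi, (xi, -half, half))))      # first moment: m1
print(sp.simplify(sp.integrate(P*xi**2, (xi, -half, half))))   # second moment: m2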

For all the power Mathematica provides, this scheme’s detailed analysis has remained elusive, so the results are only presented for the semi-discrete case where Mathematica remained up to the task.  Indeed I did get results for the fully discrete scheme, but the plot of the amplification error was suspicious and for that reason will not be shown, nor will the detailed truncation error associated with the Taylor series.

The semi-discrete form follows from the standard updates for both the variable and its moments without any problem.  The update is remarkably accurate with small coefficients, u_{j+1/2} - u_{j-1/2} = \imath \theta + \frac{1}{720} \theta^6 + \frac{1}{4200}\imath \theta^7+ O(\theta^8), which the keen observer will notice is fifth-order accurate instead of the expected third order.  The semi-discrete update for the first moment is \frac{1}{2}(u_{j+1/2} + u_{j-1/2}) - u_j, and the second moment is

\frac{1}{4}(u_{j+1/2} - u_{j-1/2}) - 2 m_{1,j}.

[Figure: Scheme VI semi-discrete dispersion (imaginary part) and dissipation (real part) errors.]

Aside

Trying something new while writing a paper

04 Saturday Jan 2014

Posted by Bill Rider in Uncategorized

≈ Leave a comment


I’m writing a paper right now that reviews some suggestions for solving the ubiquitous scalar advection equation,

\partial_t u + \partial_x u = 0,

which is the basis for remap in arbitrary Lagrangian-Eulerian codes and more generally hyperbolic PDEs.  I will post different sections from the paper week to week as I work toward the submission of the manuscript in a month.  A lot of the paper is already written, so the posts will show the progression of polished text.

The standard method for solving scalar advection has been Van Leer’s piecewise linear method (with an appropriate limiter), and its generalizations.   The method is based on polynomial interpolation normalized within a cell

\xi =\frac{x-x_j}{\Delta x}; \quad P(\xi)=P_0 + P_1 \xi, where P_0=u_j; P_1 = \frac{\partial u}{\partial x}\Delta x.

Van Leer’s method is really only one of the many schemes he suggested in his 1977 paper with the other methods being largely ignored (with the exception of the PPM method).

My paper will review Van Leer’s schemes and the analysis from the 1977 paper and then suggest how these concepts can be extended.  Some of the methods have direct connections to discontinuous Galerkin, but others are unique.  Some of these methods are completely different from anything else out there and worth a lot of attention.  These methods have the character of both finite volume and finite difference methods, plus a compactness that might be quite advantageous on modern computers.  Their numerical error characteristics are superior to the standard methods.

Stay tuned.
