We’ve taken the world apart but we have no idea what to do with the pieces.
― Chuck Palahniuk
There are a lot of ways to turn differential equations into a discrete system and numerically solve them. The options usually come down to three different, but slightly ambiguous, choices: finite differences, finite volumes, or finite elements. One of the important, but fuzzy, pieces of knowledge is the actual meaning of the variables you’re solving for in the first place. Some clarity regarding the variables’ detailed identity can come in handy, if not be essential to a high fidelity solution. With any luck we can shed some light on this.
The place to start is writing down the governing (usually differential) equations. If your problem is geometrically complex, finite element methods have a distinct appeal. The finite element method has a certain “turn the crank” approach that takes away much of the explicit decision-making in discretization, but the decisions and choices run much deeper than this if you really care about the answer.
Tiny details imperceptible to us decide everything!
― W.G. Sebald
I’ve never been a fan of not thinking deeply, and in a very specific way, about what you are doing. Finite element aficionados seem to be very keen on avoiding as much thought as possible in discretization, with a philosophy of choosing your element and associated shape functions and letting the chips fall where they may. For simple problems with a lot of regularity (smoothness) this can work well and even be advantageous, but for difficult problems (i.e., hyperbolic PDEs) this can be disastrous. In the end people would be far better served by putting more thought into the transition from the continuous to the discrete.
My recent posts have been examples of the sorts of details that really matter a lot in determining the quality of results in computing. Ideas like convergence, limiting solutions, dissipation, and accuracy all matter to a tremendous degree and make the difference between stability and instability, high and low fidelity, and run of the mill and state of the art, ultimately determining quality. Failure to pay acute attention to the details of the discretization will result in mediocrity.
It should come as no surprise that I don’t particularly care for the finite element method, so in keeping with my tastes I’m focusing on the finite volume and finite difference methods today. There are significant differences between the two that should be taken into account when deriving discrete approximations. Perhaps more interestingly, there is a fairly well defined way to translate between the two points of view. This translation makes for a useful addition to anyone’s discretization “toolbox”.
Once upon a time there was no such thing as the “finite volume method”; it was simply a special kind of finite difference method. Some finite differences employed a discrete conservation principle, and could be thought of as directly updating conserved quantities. These distinctions are not terribly important until methods go beyond second-order accuracy. Gradually the methodology became distinct enough that it was worth making the distinction. The term “finite volume” came into the vernacular in about 1973 and stuck (an earlier paper in 1971 coined the “finite area” method in 2-D).
The start of the distinctions between the two approaches involves how the equations are updated. For a finite difference method the equations should be updated using the differential equation form of the equations at a point in space. For a finite volume method the equations should be updated in a manner consistent with an integral conservation principle for a well-defined region. So the variables in a finite difference method are defined at a point, and the values in a finite volume method are defined as the integral of that quantity over a region. Transfers between adjoining regions via fluxes are conserved: what leaves one volume should enter the next one. Nothing about the finite difference method precludes conservation, but nothing dictates it either. For a finite volume method conservation is far more intrinsic to its basic formulation.
Consider a conservation law, $u_t + f(u)_x = 0$. One might be interested in approximating it with either finite differences or finite volumes. Once a decision is made, the approximation approach falls out naturally. In the finite difference approach, one takes the solution at points in space, $u_i = u(x_i)$ (or in the case of this PDE, the fluxes, $f_i = f(u_i)$), and interpolates these values in some sort of reasonable manner. Then the derivative of the flux, $\partial f/\partial x$, is evaluated at each point. The update equation is then $du_i/dt = -\left.\partial f/\partial x\right|_{x_i}$. This can then be used to update the solution by treating the time dependence like an ODE integration. This is often called the “method of lines”.
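To make the method of lines concrete, here is a minimal sketch for the linear flux $f(u) = a\,u$ on a uniform periodic grid. The first-order upwind difference, forward Euler time step, and Gaussian initial data are all illustrative assumptions on my part, not recommendations; the point is the structure: point values, a differenced flux, and an ODE-style update.

```python
import numpy as np

# Method-of-lines sketch for u_t + f(u)_x = 0 with f(u) = a*u (linear advection).
# Point values u_i live at grid points. The flux derivative is approximated with
# a first-order upwind difference (valid for a > 0), and the resulting system of
# ODEs du_i/dt = -df/dx|_i is advanced with forward Euler.

a = 1.0                                   # advection speed
n = 200                                   # number of grid points
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)       # smooth initial pulse

dt = 0.5 * dx / a                         # CFL-limited time step
for _ in range(int(round(0.2 / dt))):
    f = a * u                             # flux evaluated at the grid points
    dfdx = (f - np.roll(f, 1)) / dx       # upwind approximation to df/dx
    u = u - dt * dfdx                     # the "ODE" update at each point

print("peak value after advection:", u.max())
```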
For a finite volume method the approach is demonstrably different. The update of the variable makes explicit contact with the notion that it is a quantity conserved in space: the unknown is the cell average, $\bar{u}_j = \frac{1}{\Delta x}\int_{x_{j-1/2}}^{x_{j+1/2}} u\,dx$. For a simple PDE like the one above the update equation is $d\bar{u}_j/dt = -\left(f_{j+1/2} - f_{j-1/2}\right)/\Delta x$, with the fluxes computed at the edges of the cell. Notice that the fluxes have to be evaluated at the edges, which assures that the variable is conserved discretely. The trick to doing the finite volume method properly is the conversion of the set of cell (element) conserved values $\bar{u}_j$ to point values at the edges where the fluxes are evaluated. It isn’t entirely simple. Again, an interpolation can be utilized to achieve this end, but the interpolation must adhere to the character of the conservation of the variable. Thus when the interpolant is integrated over the cell it must return the conserved quantity in that cell precisely. This can be accomplished through the application of the classical Legendre polynomial basis, for example.
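The finite volume counterpart of the sketch above makes the conservation explicit: the unknowns are cell averages, a single numerical flux is defined at each edge, and every edge flux enters one cell and leaves its neighbor, so the discrete total is preserved to round-off. The upwind edge flux and the grid are again illustrative assumptions.

```python
import numpy as np

# Finite volume sketch for u_t + f(u)_x = 0 with f(u) = a*u on a periodic grid.
# ubar[j] approximates the average of u over cell j. Fluxes are evaluated at the
# cell edges and shared between neighbors, which telescopes in the sum and makes
# the update discretely conservative.

a = 1.0
n = 200
dx = 1.0 / n
xc = (np.arange(n) + 0.5) * dx                   # cell centers
ubar = np.exp(-200.0 * (xc - 0.3) ** 2)          # initial cell averages (to leading order)

dt = 0.5 * dx / a
total_start = ubar.sum() * dx                    # conserved total before the run
for _ in range(int(round(0.2 / dt))):
    f_edge = a * ubar                            # upwind flux at edge j+1/2, taken from cell j
    ubar = ubar - dt / dx * (f_edge - np.roll(f_edge, 1))

print("change in total:", ubar.sum() * dx - total_start)   # zero to round-off
```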
Perhaps you’re now asking how to translate between these two points of view. It turns out to be a piece of cake, and it is another useful technique to place into the proverbial toolbox. The translation between the two can be derived from the interpolations discussed above, and the two directions rightfully mirror each other. If one takes a set of control volume, integrally averaged, values and recovers the corresponding point values, the formula is remarkably simple. Here I will refer to control volume values by $\bar{u}_j$ and the point values by $u_j$. Then, on a uniform grid, we can transform to point values via $u_j \approx \bar{u}_j - \frac{1}{24}\left(\bar{u}_{j+1} - 2\bar{u}_j + \bar{u}_{j-1}\right)$. The inverse operation is $\bar{u}_j \approx u_j + \frac{1}{24}\left(u_{j+1} - 2u_j + u_{j-1}\right)$ and is derived by integrating the pointwise interpolation over a cell. For higher order approximations these calculations are a bit more delicate than these formulas imply!
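A quick numerical check of these conversions, on a uniform periodic grid with a smooth solution, shows why the distinction matters: simply treating a cell average as a point value is second-order accurate, while the corrected conversion is fourth-order. The test function and grid below are just an example.

```python
import numpy as np

# Convert between cell averages and point values using the second-difference
# corrections quoted above, and compare with naively identifying the two.

n = 64
dx = 2.0 * np.pi / n
edges = np.linspace(0.0, 2.0 * np.pi, n + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

# Exact cell averages of u(x) = sin(x): integrate analytically over each cell.
ubar = (np.cos(edges[:-1]) - np.cos(edges[1:])) / dx

# Cell averages -> point values at cell centers.
u_point = ubar - (np.roll(ubar, -1) - 2.0 * ubar + np.roll(ubar, 1)) / 24.0

# Point values -> cell averages (integrating the local parabolic interpolant).
ubar_back = u_point + (np.roll(u_point, -1) - 2.0 * u_point + np.roll(u_point, 1)) / 24.0

print("corrected point values :", np.abs(u_point - np.sin(centers)).max())   # ~ O(dx^4)
print("naive identification   :", np.abs(ubar - np.sin(centers)).max())      # ~ O(dx^2)
print("round trip discrepancy :", np.abs(ubar_back - ubar).max())
```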
For finite element methods the standard flavor is continuous finite elements. The variables in that case are defined by nodal values, the “shape functions” describing their variation in space, and appropriately weighted integrals. A more modern and exciting approach is discontinuous Galerkin, which does not require continuity of the solution across element boundaries. The lowest order version of this method is equivalent to a low-order finite volume scheme. The variable is the zeroth moment of the solution over a cell. One way of looking at high order discontinuous Galerkin methods is as taking successive moments of the solution over the cells (elements). This method holds great promise because of its high fidelity and great locality.
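To illustrate the moment point of view, here is a sketch that projects a function onto the first few Legendre polynomials over a single cell. The zeroth moment is exactly the cell average a finite volume method carries; the higher moments capture the in-cell variation a discontinuous Galerkin method would also evolve. The particular cell, function, and quadrature order are arbitrary choices for the example.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre moments of u(x) over one cell [xl, xr], mapped to xi in [-1, 1]:
#   m_k = (2k + 1)/2 * integral of u(xi) * P_k(xi) over [-1, 1].
# m_0 is precisely the cell average, i.e. the finite volume unknown.

xl, xr = 0.2, 0.3
xi, w = legendre.leggauss(5)                    # 5-point Gauss quadrature on [-1, 1]
x = 0.5 * (xl + xr) + 0.5 * (xr - xl) * xi      # quadrature points in the cell
u = np.sin(2.0 * np.pi * x)                     # an example solution on the cell

moments = [(2 * k + 1) / 2.0 * np.sum(w * u * legendre.Legendre.basis(k)(xi))
           for k in range(3)]

cell_average = 0.5 * np.sum(w * u)              # direct quadrature of the cell average
print("zeroth moment vs cell average:", moments[0], cell_average)   # identical
print("higher moments (in-cell variation):", moments[1:])
```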
This is just the start of this exploration, but the key is knowing what your variables really mean.
Little details have special talents in creating big problems!
― Mehmet Murat ildan
The development of modern high-resolution methods has been an outstanding achievement for computational physics. These methods have provided an essential balance of accuracy (fidelity or resolution) with physical admissibility and computational tractability. While these methods were a tremendous achievement, their progress has stalled in several respects. After a flurry of development, the pace has slowed and adoption of new methods into “production” codes has come to a halt. There are good reasons for this worth exploring if we wish to see whether progress can be restarted. This post builds upon the two posts from last week, which describe tools that may be used to develop methods.
That earlier flurry of development marked the heyday for introducing new methods in CFD codes. Methods that were introduced at that time remain at the core of CFD codes today. The reason was the development of new methods that were so unambiguously better than the previous alternatives that the change was a fait accompli. Codes produced results with the new methods that were impossible to achieve with previous methods. At that time a broad and important class of physical problems in fluid dynamics was suddenly open to successful simulation. Simulation results were more realistic and physically appealing, and the artificial and unphysical results of the past were no longer a limitation.
The new methods were better by virtually any conceivable standard. In addition, they were neither overly complex nor expensive to use. The principles associated with their approach to solving the equations combined the best, most appealing aspects of previous methods in a novel fashion. They became the standard method almost overnight.
This was accomplished because the methods were nonlinear even for linear equations, meaning that the domain of dependence for the approximation is a function of the solution itself. Earlier methods were linear, meaning that the approximation was the same without regard for the solution. Before the high-resolution methods you had two choices: either a low-order method that would wash out the solution, or a high-order method that would produce unphysical solutions. Theoretically the low-order solution is superior in a sense because it could be guaranteed to be physical. This happened because the solution was found using a great deal of numerical or artificial viscosity. The solutions were effectively laminar (meaning viscously dominated), thus not having the energetic structures that make fluid dynamics so exciting, useful, and beautiful.
The idea is to use the accurate, high-order method wherever the solution allows it (where it is safe to do so), and only use the lower accuracy, dissipative method when absolutely necessary. Making these choices on the fly is the core of the magic of these methods. The new methods alleviated the bulk of this viscosity, but did not entirely remove it. This is good and important because some viscosity in the solution is essential to connect the results to the real world. Real world flows all have some amount of viscous dissipation. This fact is essential for success in computing shock waves, where having dissipation allows the selection of the correct solution.
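The solution-dependent choice can be made concrete with a sketch of a limited slope reconstruction in the MUSCL spirit: where neighboring differences agree in size, the reconstruction keeps a second-order slope; across a jump the slope is clipped to the small value, which is the safe, dissipative behavior. The minmod limiter used here is only one of many possible choices, and the data are made up for illustration.

```python
import numpy as np

# Minmod-limited slopes: a nonlinear reconstruction whose effective stencil
# depends on the data. Where the one-sided differences agree in sign, the
# smaller one is kept; where they disagree, the slope is set to zero, falling
# back to the dissipative first-order choice.

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

ubar = np.array([0.0, 0.02, 0.05, 1.0, 1.02, 1.05, 1.08])  # cell averages with a jump
back = ubar[1:-1] - ubar[:-2]        # backward difference in each interior cell
fwd = ubar[2:] - ubar[1:-1]          # forward difference
slope = minmod(back, fwd)            # limited slope per interior cell

print("limited slopes:", slope)      # the ~0.95 jump difference is clipped to the small neighbor slope
```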
In the case of simple hyperbolic conservation laws that define the inertial part of fluid dynamics, the low order accuracy methods solve an equation with classical viscous terms that match those seen in reality, although generally the magnitude of the viscosity is much larger than in the real world. Thus these methods produce laminar (syrupy) flows as a matter of course. This makes these methods unsuitable for simulating most conditions of interest to engineering and science. It also makes these methods very safe to use and virtually guarantees a physically reasonable (if inaccurate) solution.
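A concrete instance of those classical viscous terms comes from the modified equation analysis of the simplest such method, first-order upwind applied to linear advection $u_t + a u_x = 0$ with $a > 0$:

$$ u_t + a u_x = \frac{a\,\Delta x}{2}\left(1 - \nu\right) u_{xx} + O(\Delta x^2), \qquad \nu = \frac{a\,\Delta t}{\Delta x}. $$

To leading order the scheme solves an advection-diffusion equation whose viscosity is set by the mesh spacing rather than by the fluid, which is exactly why such methods produce syrupy, laminar-looking flows on practical grids.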
The new methods get rid of these large viscous terms and replace them with a smaller viscosity that depends on the structure of the solution. The results with the new methods are stunningly different and produce the sort of rich nonlinear structures found in nature (or something closely related). Suddenly codes produced solutions that matched reality far more closely. It was a night and day difference in method performance; once you tried the new methods there was no going back.