Over the past couple of weeks I’ve experienced something very irritating time and time again, and each time I’ve been left more frustrated and angry than before. It has been a continual source of disappointment: I went into a room expecting to learn something and left knowing less than when I entered. What is it? “The finite element method.”
“If you can’t explain it to a six year old, you don’t understand it yourself.” ― Albert Einstein
In short, the answer to my title is nothing at all, and everything. Technically, nothing is wrong with the finite element method, absolutely nothing at all. But given that nothing is wrong with the method itself, there is a lot wrong with what it does to the practice of mathematics and scientific computing. More specifically, there isn’t a thing wrong with the method except how people use it, which is too damn abstract. Much of the time the method is explained in a dense code undecipherable to anyone except a small cadre of researchers working in the field. Explaining finite elements to a six year old is a tall order, but a respectable goal. Too often you can’t explain what you’re doing to a 46 year old with a PhD unless they are part of the collective of PhDs working directly in the field and received the magic decoder ring during their graduate education.
It is a common occurrence for someone to begin their research career with papers that clearly state what they are doing, and then, as the researcher becomes successful, all clarity leaves their writing. I saw a talk at a meeting where a researcher who used to write clearly had obscured their presentation while simultaneously pivoting toward research on easier problems. This is utter madness! The mathematics of finite element research tends to take a method that works well on hard problems and analyze it on simpler problems while making the whole thing less clear. One of the key reasons to work on simpler problems is to clarify, not complicate. Too often the exact opposite is done.
Sometimes this blog is about working out stuff that bugs me in a hopefully articulate way. I’ve spent most of the last month going to scientific meetings and seeing a lot of technical talks, and one of the things that bugs me the most is finite element methods (FEM), or more specifically, the way FEM is presented. There really isn’t a lot wrong with FEM per se; it’s a fine methodology that might even be optimal for some problems. I can’t really say, because its proponents so often do such an abysmal job of explaining what they are doing and why. That is the crux of the matter.
Scientific talks on the finite element method tend to be completely opaque, and I walk out of them knowing less than when I walked in. The talks are often given in a manner that seems to intentionally obscure the topic, with the seeming objective of making the speaker appear much smarter than they actually are. I’m not fooled. The effect they actually achieve is to piss me off and make me think less of them. Presenting a simple problem in an intentionally abstract and obtuse way is simply a disservice to science. It serves no purpose but to make the simple grandiose and distant. It ultimately hurts the field, deeply.
The point of a talk is to teach, explain and learn, not to make the speaker seem really smart. Most FEM talks are about making the speaker seem smart instead of explaining why something works. The reality is that the simple, clear explanation is the hallmark of intellectual virtue. Simplicity is a virtue that seems to be completely off the map with FEM; FEM is about making the simple complex instead. To make matters more infuriating, much of the current research on FEM is focused on attacking the least important and most trivial mathematical problems instead of the difficult problems that are pacing computational science. Computational science is being paced today by issues such as multiphysics (where multiple physical effects interact to define a problem), particularly involving transport equations (defined by hyperbolic PDEs). In addition, uncertainty quantification along with verification and validation is extremely important.
Instead, FEM research is increasingly focused on elliptic PDEs, which are probably the easiest things to solve in the PDE world. In other words, if you can solve an elliptic PDE well, I know very little about your methodology’s capacity to attack the really hard, important problems. It is nice, but not very interesting (the very definition of necessary but insufficient). Frankly, the desire and interest in taking a method designed for solving hyperbolic PDEs, such as discontinuous Galerkin, and applying it to elliptic PDEs is worthwhile, but it should not receive anywhere near the attention I see. It is not important enough to justify the copious attention it is getting.
The effect is a focus on areas of less importance, which takes the methodology backwards. Research dollars flow to less important problems instead of more important ones. Difficult, important problems should be the focus of research, not the kind of “Mickey Mouse” stuff I’ve seen all month. On top of the Mickey Mouse problems, the talks make the topic as complex as possible and seem focused on trying not to explain anything in simple terms.
“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.” ― Edsger Wybe Dijkstra
I think Dijkstra was talking about something entirely different, but the point is similar: complexity sells, and that is why it is trotted out time and time again. While it sells, it also destroys the sort of understanding that allows ideas to be extended and modified to solve new problems. Complexity tends to box ideas in rather than making them more general. There is a lot at stake beyond style; the efficacy of science is impacted by this manufactured lack of simplicity. Ultimately it is the lack of simplicity that works against FEM, not the method itself. This is a direct failure of the practice of FEM rather than the ideas embedded within it.
The people who work on FEM tend to significantly overelaborate things. I’m quite close to 100% convinced that the overelaboration is completely unnecessary, and that it serves a supremely negative purpose in the broader practice of science. One of the end products is short-changing the FEM itself. In a nutshell, people can solve harder problems with finite volume methods (FVM) than with FEM. The quest for seemingly rigorous mathematics has created a tendency to work toward problems with well-developed math. Instead we need to be inventing math to attack important problems even if the rigor is missing. Additionally, researchers over time have been far more innovative with FVM than with FEM.
The FEM folks usually trot out that bullshit quip that FEM is exactly like FVM with a properly chosen test function. OK, fair enough, FEM can be made equivalent to FVM, but this fails to explain the generic lack of innovation in numerical methods arising from the FEM community. In the long run it is the innovations that determine the true power of a method, not the elaborate theories surrounding relatively trivial problems. These elaborations actually undermine methods and lead to the cult of complexity that so often defines the practice.
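The quip can at least be made concrete with the standard textbook derivation: in the element-local (discontinuous Galerkin style) weak form of a conservation law, choosing a piecewise-constant test function kills the interior term and leaves exactly the finite volume flux balance.

```latex
% Element-local weak form of u_t + f(u)_x = 0 on cell i, after
% integration by parts, with test function \phi and numerical fluxes F:
\int_{x_{i-1/2}}^{x_{i+1/2}} u_t \,\phi \, dx
  - \int_{x_{i-1/2}}^{x_{i+1/2}} f(u)\, \phi_x \, dx
  + F_{i+1/2}\,\phi(x_{i+1/2}^-) - F_{i-1/2}\,\phi(x_{i-1/2}^+) = 0 .

% Choose \phi \equiv 1 on the cell (the lowest-order DG basis), so
% \phi_x = 0; with the cell average
% \bar{u}_i = \tfrac{1}{\Delta x}\int_{x_{i-1/2}}^{x_{i+1/2}} u \, dx,
\Delta x\, \frac{d\bar{u}_i}{dt} + F_{i+1/2} - F_{i-1/2} = 0 ,

% which is precisely the semi-discrete finite volume method.
```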
Where FEM excels is in the abstraction of geometry from the method and the ability to include geometric detail in the simulation within a unified framework. This is extremely useful and explains the popularity of FEM for engineering analysis where geometric detail is important, or assumed to be important. Quite often, though, an innovative methodology is shoehorned into FEM after having been invented and perfected in the finite volume (or finite difference) world. Frequently the innovative devices have to be severely modified to fit the FEM’s dictums. These modifications usually diminish the overall effectiveness of the innovations relative to their finite volume or difference forebears. These innovative devices are necessary to solve the hard multiphysics problems often governed by highly nonlinear hyperbolic (conservation or evolution) equations. I personally would be more convinced by FEM if some of the innovation happened within the FEM framework instead of continually being imported.
Perhaps most distressingly, FEM allows one to engage in mathematical masturbation. I say this with complete sincerity because the development of methods in FVM is far more procreative; methods are actually born of the activity. Too often FEM leads to mathematical fantasies that have no useful end product aside from lots of self-referential papers in journals and opaque talks at meetings such as those I’ve witnessed in the last month. For example, computational fluid dynamics (CFD) is dominated by FVM: CFD solvers are predominantly FVM, not FEM, largely for the very reason that innovative methods are derived first and used best in FVM. Without those innovative methods, CFD would not be able to solve many of its most important and challenging problems today.
Mathematically speaking, I think the issue comes down to regularity. For highly regular and well-behaved problems FEM works very well, and it’s better than FVM. In a sense FEM doubles down on regularity with its test functions; when the solution is highly regular this yields benefits. The issue is that highly regular problems are the easier, less challenging problems, not the hard, technology-pacing ones. FVM, on the other hand, hedges its bets. Discontinuous Galerkin (DG) is a particular example. It is a really interesting method because it sits between FEM and FVM. The DG community puts a lot of effort into making it an FEM method, with all the attendant disadvantages of assumed regularity. This is the heart of the maddening case of taking a method so well suited to very hard problems and studying it incessantly on very easy problems with no apparent gain in utility. It seems to me that DG methods have actually gone backwards in the last decade due to this practice.
In a sense the divide is defined by whether you don’t assume regularity and add it back, or assume it is there and take measures to deal with it when it’s not. Another good example comes from the use of FEM for hyperbolic PDEs where conservation form is important. Conservation is essential, and the weak form of the PDE should give conservation naturally. Instead, with the most common Galerkin FEM, if one isn’t careful the implementation can destroy conservation. This should not happen; conservation should be a constraint, an invariant that comes for free. It does with FVM, it doesn’t with FEM, and that is a problem. Simple mistakes should not cause conservation errors. In FVM such a mistake would be structurally impossible because of how the method is coded: the conservation form is built in. In FEM, conservation is a special property, which is odd for something built on the weak form of the PDE. This goes directly to the continuous basis selected in the construction of the scheme.
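The structural nature of conservation in FVM is easy to demonstrate. In a minimal sketch (my own toy example, not any particular production code), each cell update is the difference of the same two interface fluxes, so the total telescopes and is preserved to round-off no matter what flux function is used:

```python
import math

def fvm_step(u, nu):
    """One forward-Euler finite volume step for u_t + u_x = 0 (periodic).

    The update is a difference of shared interface fluxes, so whatever
    leaves one cell enters its neighbor: conservation is structural.
    """
    n = len(u)
    # Upwind numerical flux at interface i+1/2 (advection speed +1) is u[i].
    flux = [u[i] for i in range(n)]  # flux[i] = F_{i+1/2}
    # Negative indexing gives the periodic wrap for F_{-1/2} = F_{n-1/2}.
    return [u[i] - nu * (flux[i] - flux[i - 1]) for i in range(n)]

# Smooth initial data on a periodic grid.
n = 64
u = [math.sin(2 * math.pi * i / n) + 2.0 for i in range(n)]
total0 = sum(u)

for _ in range(200):
    u = fvm_step(u, nu=0.5)

# Total "mass" is preserved to round-off, by construction of the flux form.
assert abs(sum(u) - total0) < 1e-10 * abs(total0)
```

The point is that no care or cleverness is required: the sum of the updates telescopes identically, so even a buggy or wildly inaccurate flux function still conserves.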
Another place where the FEM community falls short is stability and accuracy analysis. With all the mathematical brouhaha surrounding the method one might think that stability and accuracy analysis would be ever-present in FEM practice. Quite the contrary is true. Code and solution verification are common and well practiced in the FVM world and almost invisible in FEM. It makes no sense. A large part of the reason is the abstract mathematical focus of FEM instead of the practical approach of FVM. At the practical end where engineering and science are being accomplished with the aid of scientific computing, the mathematical energy seems to yield very little. It is utterly baffling.
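Solution verification of the kind routine in the FVM world is not exotic; a minimal sketch (using first-order upwind advection as a stand-in scheme, my choice for illustration) measures the observed order of accuracy from two grids and checks it against the design order:

```python
import math

def upwind_error(n, t_final=0.5, nu=0.5):
    """Solve u_t + u_x = 0 with first-order upwind on a periodic grid of n
    cells; return the max-norm error against the exact translated solution."""
    dx = 1.0 / n
    dt = nu * dx
    steps = round(t_final / dt)
    t = steps * dt
    u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
    for _ in range(steps):
        u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
    exact = [math.sin(2 * math.pi * (i * dx - t)) for i in range(n)]
    return max(abs(a - b) for a, b in zip(u, exact))

# Observed order from grid refinement: p = log2(E_h / E_{h/2}).
e1, e2 = upwind_error(100), upwind_error(200)
p = math.log(e1 / e2, 2)
print(f"errors: {e1:.3e} {e2:.3e}, observed order: {p:.2f}")
assert 0.8 < p < 1.2  # first-order design order verified
```

This is the whole discipline in miniature: if the observed order falls short of the design order, either the code or the analysis is wrong, and you know it immediately.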
“Simplicity is the ultimate sophistication.” ― Leonardo da Vinci
The issue is where the math community spends its time: do they focus on proving things for easy problems, or expand the techniques to handle hard problems? Right now the community seems to focus on making the problem easier and proving things rather than expanding the techniques available and creating structures that would work on the harder problems. The difference is rather extreme. The goal should be to solve the hard problems we are offered, not transform the hard problems into easy problems with existing math. If the math needed for the hard problems isn’t there, we need to invent it and start extending ourselves to provide the rigor we want to see. Too often the opposite path is chosen.
A big issue is the importance, or prevalence, of problems for which strong convergence can be expected. How much of the work in the world is focused where this doesn’t, or can’t, happen? How much is? Where is the money, and where is the importance?
I think a much better path for FEM in the future is to focus first on making the style and focus of presentation simple and pedagogical. Secondarily, the focus should be pushed toward solving the harder problems that pace computational science rather than toys that are amenable to well-defined mathematical analysis. The advantages of FEM are clear; the hardest thing we have to do is make the method clear, comprehensible and extensible.

Gil Strang is a good example of someone who presents the FEM in a clear manner, free of jargon and with an emphasis on understanding.
I fully expect to catch grief over what I’m saying. Rather than grief, I’d like to spur those working on FEM to both attack harder problems and make their explanations of what they are doing simple. The result will be a better methodology that more people understand. Maybe then the FEM will start to be the source of more innovative numerical methods. Everyone will benefit from this small, but important, change in perspective.
“Any darn fool can make something complex; it takes a genius to make something simple.” ― Pete Seeger
Scientific meetings are great opportunities to learn about what is going on around the world, get lots of new ideas, meet old friends and make new ones. This is exactly what I wrote about a few weeks ago: giving a talk is second, third or even fourth on the list of reasons to attend such a meeting.

Frankly, the USA looks much worse by comparison with a supposedly recovering economy. There are private security guards everywhere, and the amount of security at the meeting was actually a bit distressing. In contrast, in a week at a hotel across the street from the hospital, I heard exactly one siren; amazing. As usual, getting away from my standard environment is thought provoking, which is always a great thing.
We can now apply the same machinery to more complex schemes. Our first example is the time-space coupled version of Fromm’s scheme, which is a second-order method. Conducting the analysis is largely a function of writing the numerical scheme in Mathematica much in the same fashion we would use to write the method into a computer code.
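The "machinery" here is symbolic amplification-factor (von Neumann) analysis. Since the Mathematica code is not reproduced in this excerpt, here is a minimal numerical stand-in in Python, assuming the standard construction of Fromm's scheme as the average of the Lax-Wendroff and Beam-Warming schemes for linear advection:

```python
import math
import cmath

def g_fromm(nu, theta):
    """Amplification factor of Fromm's scheme for u_t + u_x = 0 at CFL
    number nu and Fourier angle theta.

    Fromm's scheme averages Lax-Wendroff and Beam-Warming, so its symbol
    is the average of their symbols; this is the same quantity a symbolic
    Mathematica analysis would produce in closed form.
    """
    zp = cmath.exp(1j * theta)    # e^{+i theta}
    zm = cmath.exp(-1j * theta)   # e^{-i theta}
    zmm = cmath.exp(-2j * theta)  # e^{-2i theta}
    g_lw = 1 - nu / 2 * (zp - zm) + nu**2 / 2 * (zp - 2 + zm)
    g_bw = 1 - nu / 2 * (3 - 4 * zm + zmm) + nu**2 / 2 * (1 - 2 * zm + zmm)
    return (g_lw + g_bw) / 2

# Von Neumann stability check: |g| <= 1 over all frequencies at CFL = 0.5.
thetas = [2 * math.pi * k / 1000 for k in range(1000)]
gmax = max(abs(g_fromm(0.5, th)) for th in thetas)
print(f"max |g| at nu=0.5: {gmax:.6f}")
assert gmax <= 1 + 1e-12
```

Sweeping nu over (0, 1] the same way reproduces the textbook stability bound for the scheme; the symbolic route simply gives the same answer as an exact expression in nu and theta.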

This is achieved by using symbolic or numerical packages such as Mathematica. Below I’ve included the Mathematica code used for the analyses given above.
The Palmer House is an absolutely stunning venue swimming in old-fashioned style and grandeur. It is right around the corner from Millennium Park, one of the greatest urban green spaces in existence, which itself is across the street from the Art Institute. What an inspiring setting to hold a meeting. Chicago itself is one of the great American cities with a vibrant downtown and numerous world-class sites.
ortance to the overall scientific enterprise, and applied mathematics is suffering likewise. This isn’t merely the issue of funding, which is relatively dismal, but overall direction and priority. In total, we aren’t asking nearly enough from science, and mathematics is no different. The fear of failure is keeping us from collectively attacking society’s most important problems. The distressing part of all of this is the importance and power of applied mathematics and the rigor it brings to science as a whole. We desperately need some vision moving forward.
the work of Peter Lax on hyperbolic conservation laws. He laid the groundwork for stunning progress in modeling and simulating with confidence and rigor. There are other examples, such as the mathematical order and confidence of Harten’s total variation diminishing theory powering the penetration of high-resolution methods into broad usage for solving hyperbolic PDEs. Another example is the relative power and confidence brought to the solution of ordinary differential equations, or to numerical linear algebra, by the mathematical rigor underlying the development of software. These are examples where the presence of applied mathematics makes a consequential and significant difference in the delivery of results with confidence and rigor. Each of these is an example of how mathematics can unleash a capability in truly “game-changing” ways. A real concern is why this isn’t happening more broadly or in a targeted manner.



This may sound rather hyperbolic, and it is. The issue is that the leadership of the nation is constantly stoking the fires of irrational fear as a tool to drive political goals. By failing to aspire toward a spirit of shared sacrifice and duty, we are creating a society that looks to avoid anything remotely dangerous or risky. The consequences of this cynical form of gamesmanship are slowly ravaging the United States’ ability to be a dynamic force for anything good. In the process we are sapping the vitality that once brought the nation to the head of the international order. In some ways this trend is symptomatic of our largess as the sole military and economic superpower of the last half of the 20th century. The fear is drawn from the societal memory of our fading role in the world, and the evolution away from the mono-polar power we once represented.
Supposedly ISIS is worse than Al Qaeda, and we should be afraid, so afraid that we will demand action. In fact, that hamburger you are stuffing into your face is a much larger danger to your well-being than ISIS will ever be. Worse yet, we put up with the fear-mongers, whose fear baiting is aided and abetted by the news media because they see ratings. When we add up the costs, this chorus of fear is savaging us and hurting our country deeply.
Entering the United States is now more arduous than entering the former Soviet Union (Russia). This fact ought to be absolutely appalling to the American psyche. Meanwhile, numerous bigger threats go completely untouched by action or any effort to mitigate their impact.
When did all this start? I tend to think the tipping point was the mid-1970s. This era was extremely important for the United States, with a number of psychically jarring events taking center stage. The upheaval of the 1960s had turned society on its head with deep changes in racial and sexual politics. The Vietnam War had undermined the nation’s innate sense of supremacy while scandal ripped through the government. Faith and trust in the United States took a major hit. At the same time the era marked the apex of economic equality and the beginning of the trends that have undermined it ever since. This underlying lack of faith and trust in institutions has played a key role in powering our decline. The anti-tax movement that set in motion the public policy driving the growing inequality in income and wealth began then, arising from these very forces. These forces coupled with the insecurities of national defense, gender and race to form the foundation of the modern conservative movement. These fears have been used over and over to drive money and power into the military-intelligence-industrial complex at a completely irrational rate.


There is a gap, but it isn’t measured in terms of FLOPS, CPUs or memory; it is measured in terms of our practice. Our supercomputers have lost touch with reality. Supercomputing needs to be connected to a real, tangible activity where the modeling assists experiments, observations and design in producing something that serves a societal need. These societal needs could be anything from national defense, cyber-security and space exploration to designing better, more fuel-efficient aircraft or safer, more efficient energy production. The reality we are seeing is that each of these has become secondary to the need for the fastest supercomputer.

The fastest computer itself has become the focus. This has led to a diminishment in the focus on algorithms and methods, which actually have a better track record than Moore’s law for improving computational problem-solving capability. The consequence of this misguided focus is a real diminishment in our actual capability to solve problems with supercomputers. In other words, our quest for the fastest computer is ironically undermining our ability to use computers as effectively as possible.
We should work steadfastly to restore the necessary balance and perspective for success. We need to allow risk to enter into our research agenda and set more aggressive goals. Requisite with this risk we should provide greater freedom and autonomy to those striving for the goals. Supercomputing should recognize that the core of its utility is computing as a problem solving approach that relies upon computing hardware for success. There is an unfortunate tendency to simply state supercomputing as a national security resource regardless of the actual utility of the computer for problem solving. These claims border on being unethical. We need computers that are primarily designed to solve important problems. Problems don’t become important because a computer can solve them.



I’ve quipped that we should have a special conference center in some awful place where no one would want to go. That way Congress and the public would know that we go to conferences to engage in technical work. On the other hand, part of going to conferences involves getting inspired to do better work. Why not go to some place that is inspiring? Why not go to some place that has great restaurants, so that sharing a meal can be memorable on multiple levels? Why not make the entire event memorable, worthwhile and enriching at a personal level? At the core of the attitude of many in government is a sense that life should be suffered, with work being the most unpleasant aspect of all. It is a rather pathetic point of view that leads to nothing positive. We shouldn’t be punished for working in the public sphere, yet punishment seems to be the objective.
