The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Musings about Leadership and Its Impact on Technical Work.

Friday, March 20, 2026

Posted by Bill Rider in Uncategorized

Here are some thoughts on more scientific or technical posts and those that are less so, along with a short map of what lies ahead. I’m also going to explain why I give so much attention to the issues of leadership.

Despite my tendency to focus on technical things, my writings are often drawn to the failings of our current leadership at all levels. The basic thought is that none of these scientific issues can be addressed without improvements in how the leadership acts. It is born of a frustration that grew year upon year in my career: all the good technical work in the world, whether mine or anyone else's, would amount to nothing unless the leaders showed greater integrity and focus on technical excellence. I watched science in general fade in importance and priority, replaced by other concerns that continually undermined the technical world. By the end of my professional career, the leadership's behavior became so fucked up that I could no longer stand to work at what, by all accounts, is a premier institution.

I find that I have many readers who seem to share my observations and add their own perspectives; the same issues are present where they are. Still, a constant need in retirement is for me to tack toward joy and the exploration of technical work. That is where I would like to focus. There are the traditional things I've worked on in numerical methods, computational physics, and computational science that continue to need effort. Interspersed with this is a new focus on AI, which has burst onto the scene with the massive success of LLMs.

Again, my mind is drawn to the failings of leadership in this moment, and to how toxic and poisonous the current leadership is to the successful rollout of AI to society. This is a very clear worry given the shallow technical approach to AI that I saw from the scientific community as expressed in federal research. The AI efforts that were announced lacked all nuance and technical depth; they were yet another set of stunt efforts geared only toward securing more funding. It is the same completely fucked up pattern that I saw over the last decade at Sandia.

I find myself drawn to this problem because it seems so fundamental. Without changes in leadership, I don't think we can succeed at AI, nuclear weapons, or science in general as a nation. Here, leadership is focused on everything but the elements of success in any of these endeavors. With all the value being placed on money and the appetite for greed, the decisions will surely be technically poor and harmful to society as a whole.

I do promise that my next two posts will be technical:

1. Stepping back to the ever-controversial lightning rod of numerical dissipation.

2. Then probably to some aspect of AI as I attempt to use and understand it in a way that is better than the sort of bullshit I heard from my leaders at the laboratory and that friends at other laboratories have reported to me.

We Need to Talk about Nuclear Weapons (and AI)

Wednesday, March 18, 2026

Posted by Bill Rider in Uncategorized

tl;dr

We now live in a world with both nuclear weapons and AI. Watching our country abdicate its responsibilities toward nuclear weapons gives me pause. Our national infrastructure is in decay, and our leadership is inept, a condition promoted by an awful incentive structure. The lack of nuclear testing means we should have raised our game; instead, we lowered it. The great responsibility of this technology is not met with seriousness. It fills me with fear to think of how irresponsibly we are approaching AI. We also do not realize how much these technologies empower our nation to act as it wishes. Often, they provide the power to dominate others and get away with murder (literally at times). We are unwitting in how effectively we sell the possession of nukes to everyone else. This is yet another form of irresponsibility in action. Our lack of maturity is a threat to mankind.

Nukes are Still Central; We are Not Taking Care of Ours

“The Manhattan District bore no relation to the industrial or social life of our country; it was a separate state, with its own airplanes and its own factories and its thousands of secrets. It had a peculiar sovereignty, one that could bring about the end, peacefully or violently, of all other sovereignties.” — Richard Rhodes

For nearly 40 years, I worked at nuclear weapons labs, Los Alamos and then Sandia. My time at the national laboratories taught me many things. I tapped into a vast reservoir of knowledge at Los Alamos. I also embraced the mission of the Lab and accepted the responsibility that came with it. I wish those values were embraced by the Labs today; I fear they are not. Furthermore, the nation seems to have lost its sense of responsibility for nuclear weapons. This loss of responsibility is spreading to science, technology, and engineering as a whole. The advent of AI as a key technology merely amplifies and raises the stakes of these developments. My nation is endangered by them.

Chief among them is the incredible responsibility of caring for nuclear weapons. This requires extreme competence in a vast array of science and engineering disciplines. In my time, I witnessed a decline in virtually every area of competence. Only computing has grown in capability, and only in hardware. Computing is far more than hardware; what goes on with that hardware and how it is used matters greatly. The purpose of these weapons is primarily to maintain the peace and, ultimately, to ensure that the world never sees their use. This is not my area of expertise, but it is a matter of deep moral and ethical concern. Nuclear weapons are holistic because of their power, ultimately being a relentlessly political technology.

A key episode that demonstrates the danger in our current philosophy is plutonium aging. There was a concerted study of the question in which both Los Alamos and Livermore weighed in. I personally don't know the answer and wouldn't say anything if I did. The most notable aspect undermining this study is the positions taken by each lab. Los Alamos took the position that the aging was bad; Livermore took the position that it was not. Again, I don't know what the right answer is. But the positions each lab took were completely correlated with what was in its best financial interest. Los Alamos had a distinct financial benefit if there were a problem; Livermore had the opposite. I can only imagine what sort of pressures were exerted behind the scenes. Maybe there were none, but given my experience, I sincerely doubt it. The financial stakes of the technical work cast doubt on the overall outcome. The reality is that the only way to be completely sure would be to begin testing these weapons again. Short of that, we should have unbiased technical work of the highest quality rather than financially conflicted studies.

During my time at Los Alamos, I learned the full scope and breadth of nuclear weapons. The science needed to support them is breathtaking in scope. At the same time, I saw a country take steps that verge on a complete abdication of responsibility, and I fear their care is diminishing below the standards needed. First and foremost among these steps was a change in management philosophy. At the start of my career, it was intelligent care and concern for proper stewardship. Now, there is a flawed belief that money is the only thing managers should focus on, that the proper financial care of programs leads to the best outcomes. This was an invocation of a business philosophy for managing the work of the government, adopted without thought or proper modification for science. It reflects a general trend across society.

Over the same period, I watched this philosophy usher in the end of American dominance in science, now declining to be replaced by the dominance of China, an adversary. While China has moved forward, its dominance is more our failure than its success. We had a lead and squandered it. I watched this happen from the inside. Both Los Alamos and Sandia were premier institutions. Were. They are both shadows of their former glory. We did this to ourselves, without any eye toward the responsibilities for our nukes or the impact on the future. Much of our power and wealth is founded on science and technology. It will not hold up in the future.

It is my belief that nuclear weapons are the most fearsome and horrible weapons developed by man. They should never, ever be used in armed conflict. Our irresponsibility threatens that outcome. I also believe they are a technology that cannot be put back in the bottle. The genie is out and must be tamed. For that, moral and ethical leadership is needed, and it must be provided by the nation. We should have nuclear weapons that all others are absolutely certain will work as designed and intended. This confidence is necessary for proper stewardship. Yet the United States has moved so far along the current path that we can no longer be entirely certain this is the reality. We have allowed our experts and expertise to degrade into a mere shadow of what they once were.

“The practice of science was not itself a science; it was an art, to be passed from master to apprentice as the art of painting is passed or as the skills and traditions of the law or of medicine are passed.” — Richard Rhodes

Being More Responsible

“The nuclear arms race is like two sworn enemies standing waist deep in gasoline, one with three matches, the other with five.” ― Carl Sagan

With every passing day, the steps needed to fix this grow steeper and the consequences of our decline become more dire. I believe I am also correct to be worried about the moral and ethical standing of those who lead us. The United States does not acknowledge how well it markets nuclear weapons to other nations. Nuclear weapons, to be blunt, allow a nation to be a complete asshole and get away with it. Does anyone really believe that Russia would have invaded Ukraine if it did not have nuclear weapons? That Ukraine would have been invaded if it had them? Nuclear weapons undergird the actions of Israel and the USA against Iran, even as they work to deny Iran its own acquisition. Israel and the USA provide evidence to the Iranians that possessing nuclear weapons would be enabling, unlocking their power in the region and giving them a source of invincibility and immunity from consequences. Russia and the USA (Israel too) demonstrate this over and over.

Once a nation becomes nuclear-armed, nobody really fucks with it any longer. It joins the top tier of international leadership (see India). The basic technology is old, going back to 1945. Even eighty years on, nuclear weapons have a huge impact on human affairs. Internationally, they are always in the room. The superpowers refuse to acknowledge that they are walking advertisements for possessing nuclear weapons, while lesser states are given a litany of reasons why possessing them would improve their national standing. Nuclear weapons are also still the biggest physical stick in the world.

We have a President who has alluded to testing again. Is this necessary? Is this wise? This is all worth discussing, and I do not know the answer; I am deeply conflicted. At the same time, we have a new technology that's often compared to nuclear weapons: AI. How will AI and nuclear weapons interact, and what's the path of wisdom for a world where both exist? Highly enriched uranium and plutonium are very expensive to obtain, requiring a vast investment by a society. They are also very hard to hide, and it takes time to get there. The leading countries have a great deal of desire to keep other countries from getting them. Proliferation is dangerous, but it also reduces their power to bully "lesser" nations. This is the truth.

“You can’t be a real country unless you have a beer and an airline – it helps if you have some kind of football team, or some nuclear weapons, but in the very least you need a beer.” ― Frank Zappa

Nukes and AI

“Bohr proposed once that the goal of science is not universal truth. Rather, he argued, the modest but relentless goal of science is “the gradual removal of prejudices.” — Richard Rhodes

The parallels between AI and nuclear weapons are stronger than people realize. Being an AI superpower is similar to being a nuclear one in terms of resource allocation: the investment in facilities and electricity for creating the nuclear material necessary for a weapon is comparable to what's required to lead in AI. In both cases, to succeed as a nation, one needs a vast industrial base and a multitude of scientists and experts who serve that base's products. The two are also similar in technical depth and sophistication. Additionally, both have good and bad sides. With nuclear weapons, you also get nuclear power. With AI, you can benefit society, but it is also a powerful tool for warfare and surveillance. We live in a world where both of these technologies are lurching to the fore of world events and have an outsize influence on the future of mankind.

Nuclear power is underutilized and over-feared by the public. It is not trusted; it has inherited the peril of nuclear weapons. It could be a green technology to combat climate change. AI has similar light and dark polarities. It can be used to propagandize, kill, and surveil people. By the same token, it can be an incredible assistant to all of us and produce massive productivity gains. It could be a means of great wealth and abundance for society. Right now, it looks set to bring wealth to a few people and impoverish the masses. In both cases, we are failing to recognize the truly dire harm these technologies could produce. We are also failing to recognize the truly beneficial good each can do for society. Our leadership on both is completely failing.

The Bottom Line

“Are you ready for nuclear Armageddon?” ― Michael Parker

Every year, the directors of the three labs sign a letter stating the status of the stockpile. This letter goes to the President with advice on what to do. Every year, they have asserted that the stockpile is in good shape and that testing weapons again is not required. The question I ask myself is: what would happen if one of the directors declined to state that the stockpile was in good shape? My assertion is that this director would be immediately replaced without explanation. If the director of a lab does not have the liberty to decline to sign the letter in the affirmative, do they really have the liberty to sign it at all? Ultimately, is the assessment of the stockpile not a technical or scientific question but rather a political one? For example, if the President decided to test again, would they all sign in the negative, or would they join in? These are questions I think need to be asked, and the answers are not ones that I know. They are questions we should be asking.

“Now, I am become Death, the destroyer of worlds.” ― J. Robert Oppenheimer

The trust, the truth, and the responsibilities of leadership.

Saturday, March 14, 2026

Posted by Bill Rider in Uncategorized

tl;dr

Across society, our leaders today are liars. Even the most truthful are consummate bullshitters. Not just the President, but almost all of them. Why? It works! People reward the liars and bullshitters. They choose to follow and effectively believe them. Eventually, reality will punish them, but most get away with it. This all seems to be a product of our increasingly online world, which, combined with greed and the inequality of wealth, drives the lying and BS. There is a lack of oracles and arbiters of truth. In the long run, we will all suffer. The debts to truth will come due.

“Management is doing things right; leadership is doing the right things.” ― Peter Drucker

A reasonable conclusion is that we are neither managed nor led today.

Leaders Today

Today our leaders have a conflicted and ambiguous relationship with the truth. At the same time, the level of trust in society is plunging to all-time lows. This would seem to be a vicious cycle headed for disaster. As I've stated before, reality will always assert itself, and when it does the results are undeniable. The twin issues of distrust and wishful thinking seem to combine to supercharge this behavior. We see it every day from our national leadership. I saw it with my management at work. Leaders lie because it benefits them; it is the road to success. As long as we reward the lies and bullshit, we will get more of them.

My own departure from work and retirement was triggered by this dynamic. I had leaders who did not trust the system enough to accept the truth and respond to an objective reality that required a response. Instead, they engaged in unethical practices that were effectively a cover-up. It was a case where someone did high-quality work with results they didn't like. Rather than take that information and respond to the results, they censored and covered it up. I was working in an institution that green-lighted their incompetence as the right response, while my own responsible actions were viewed as unacceptable. In terms of national security, the leaders' actions were irresponsible and incompetent. Yet it was also the path of least resistance. They chose the easy, cheap path that leads to willful ignorance. They are the epitome of modern leadership.

I remember the last time I engaged with the Director of Sandia. It was a forum on AI, and I asked her a question online about our information environment. As I've noted before, Sandia has a very restrictive, "do not share information" culture. The impact of this on information technology is profoundly negative; as a consequence, functions like search are crippled. Given this track record, and AI's dependence on training data, how would we keep this mistake from crippling AI? Her response was "that was a harsh question." There was no willingness to take evidence and respond to it, and certainly no confidence in any action to recognize the problem, much less remediate it. The evidence was rejected as harsh. With leadership like this, is it any wonder that my managers were such pathetic cowards? Seemingly not; her example set the bar.

We will get this from our leaders as long as we accept it. We will get it as long as they suffer no ill from its effects. The idea that we can avoid the damage from resisting objective reality is an illusion; the damage will be obvious and catastrophic. Indeed, the damage is already done; it is just not evident enough to prompt a strong enough response.

“Do you want to know who you are? Don’t ask. Act! Action will delineate and define you.” ― Thomas Jefferson

Motivations are all Wrong

“The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.” ― Brandon Sanderson

A core problem is that our leadership has no allegiance to the truth. What they have allegiance to is money, profit, shareholder value, … at least when the time scale is short, and all of the leadership is short-term focused. In this case, the truth is the enemy of all these things, and our leaders are basically really good liars. Worse, most are bullshitters. They don't care about the truth and choose whatever suits them best. The lying and bullshit serve to increase the value of what they care about, and the truth isn't it. We see it everywhere.

Corporate interests are the canonical example, and they have become the model and framing for society as a whole. The core of the attitude is the maxim that maximizing shareholder value is the sole purpose of a corporation. This attitude has been adopted by politics and other organizations. My personal experience is thick with organizations like national labs and universities, where this principle of leadership is simply ill-suited to their purpose and actively damaging. For corporations, the principle can be argued to be appropriate. That said, it produces a focus that is relentlessly short-term. As the effects become evident, the shareholders simply divest and move on to their next victim. While profitable and driving the stock market, it hollows out the future. Often this profit is made in ways that damage our society; the classic example is dumping toxic waste without any conscience. No example is more apt today than social media: these companies are worth a huge amount of money, all based on preying on the rest of society.

The force of this philosophy has created vast swaths of wealth. It has also created income and wealth inequality on a scale unequaled in United States history; today's billionaires are worse than the robber barons of the 19th century. Furthermore, the philosophy is being adopted across the leadership spectrum. The government has decided it is the model for the things it funds. Politicians too. The laboratories managing science and national security are examples, and I saw this close up. The result is a primal focus on money above all else. Money becomes a stand-in for quality of work and technical excellence. More insidiously, the behavior of the management is distorted: we see the same lying and bullshitting as from corporate leaders. All of it links back to the chosen priorities.

I've pointed to the Boeing corporation as a cautionary example. Reality has visited their business and exposed its failures. Planes crash and doors fall off, with the evidence pointing back to corporate decisions. The corporation worked to maximize short-term value at the cost of quality; technical excellence is costly, and they curtailed it. We can see the same thing setting up with AI. Sam Altman of OpenAI is bullshitting his way through the investment bubble, trying to get his company to its IPO with as much value as possible. This value is being powered by bullshit and sleazy behavior. We can see how Boeing fucked itself. Will OpenAI fuck itself too, or will it get to its massive payday? Ultimately, all this behavior is in service of creating billionaires. They are created on the backs of the rest of us, and they don't give a single fuck about the country or its citizens.

God help us, this is the model of leadership across the whole of society.

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” ― Ronald Reagan

It is Effective

The obvious issue is that current models of leadership are viewed as effective. They are successful. Donald Trump is an exemplar: he has been elected President twice, and his constant lying and bullshit seem to benefit him. There is no apparent penalty for it. In all likelihood, the impacts of his behavior will come due when he is out of office (or dead). The same goes for other leaders, whether the CEO of Boeing or the director of a lab. Someone else will have to clean up their mess. The current leader gets rich and enjoys power; the future is something they take no responsibility for. This should not be our model for leadership.

The core of the issue is time scale. The leadership is focused on tomorrow (or the next quarter). We see this in the quarterly review so popular in corporate governance and adopted everywhere. The lens for it is money. This drives stock price, and corporations work to engineer the quarterly review to drive stock value. In this system, the future is lost; the long-term health of these organizations has no vote. This was evident at the labs where I worked. The long-term prospects of the labs declined and declined under the short-term focus, while every program was declared a great success. In almost every case, this success was bullshit.

Just as the long term is sacrificed, the impact on society has no vote. This is clearest in the behavior of social media, where vast profits have been made by preying on people. We might guess that AI will do more of the same. Society would benefit from an honest AI that operated with humility, responded with genuine doubt, and warned users about sketchy answers. Profit demands instead that AI act confident in all answers and show mastery where there is none. We can see the lines of conflict being set up, where lying, hallucinations, and bullshit line the pockets of tech billionaires. Honesty and humility would be far more responsible, but they would negatively impact the (short-term) bottom line. In this, we can see how the incentives are arrayed against responsible long-term health.

“Efficiency is doing the thing right. Effectiveness is doing the right thing.” ― Peter F. Drucker

It Won’t Change Until It Blows Up

“Being responsible sometimes means pissing people off.” ― Colin Powell

The core of the problem is that this way of doing things works. It creates wealth. It is the standard society sets for organizations. All things are measured in money, and all of the measurement is short-term. Any other measure shrinks into meaninglessness. If we measured long-term health, we would see the problem. We don't. All sorts of important things are ignored. Excellence and quality are disregarded unless they fit into the finances; they rarely do. Both are expensive, and when the opposite works in the short term, they are jettisoned. Yet excellence and quality matter in the long term (see Boeing). Their disregard takes years to become evident. I witnessed a similar decline at the national labs.

“Advertising is legitimised lying.” ― H.G. Wells

Today's leaders are not communicating reality to us. Almost everything they say is merely marketing of what they wish were true. It is advertisement of their success, cherry-picked to mention only success. Problems and failures are simply ignored or spun into successes. Too often the leaders offer zero humility, and their messages show little vulnerability. I do think the internet and social media can be blamed for a great deal of this: they act as if any weakness or failure will be used against them in an instant. This is part of the absence of trust. The vicious cycle is driven onward as the trust-destroying nature of their actions only makes things worse. It feels like we cannot escape this dynamic. Meanwhile, AI is appearing as a lying, trust-annihilating technology poised to amplify the trend.

While the problems are evident, nothing is changing. We are societally careening toward multiple crises. When problems are ignored, they fester; they grow larger and rarely moderate or disappear without focus. I would call this approach optimistic pessimism, where optimism is expressed alongside pessimism about our ability to make things better. I personally prefer pessimistic optimism, where the problems are discussed and identified along with optimism about our ability to solve them. I know one thing: we won't solve anything without identifying the problems. This is the necessary first step and the origin of success.

Social media has taken all of this into overdrive. Not only is social media powered by the shareholder-value model of leadership, but it has created a world where leaders feel the need to define their reality before reality is defined for them. The leader works to define the reality that suits them, worried that any problem will hurt their short-term prospects by lessening their wealth or impacting their funding. This has simply spiraled. In addition, the voices of insiders who know better are typically silenced by fear; see me as an example. In today's world, we have less and less voice instead of more and more. Social media is there simply to sell us things.

“Good leadership requires you to surround yourself with people of diverse perspectives who can disagree with you without fear of retaliation.” ― Doris Kearns Goodwin

Postscript

I realize that my more technically oriented posts do a lot better in terms of readership. I could simply focus on those. On the other hand, the issues I discuss in these more managerial and political posts are the barriers I saw, while working, to actually executing the technical work successfully. The leadership issues discussed here, for example, are precisely the things that render technical work pointless. Without leadership that recognizes our problems and works actively to solve them, all the good technical work in the world will amount to nothing. We have a number of rather profound issues and problems to solve today. The nuclear stockpile remains of acute interest, and with AI bursting onto the scene in the last few years, there are incredibly difficult problems at the boundary between the political, the managerial, and the technical that must be navigated.

How I Query LLMs: The Start of a Study

Wednesday, March 11, 2026

Posted by Bill Rider in Uncategorized

This is my initial query on this topic, something I know a great deal about. My intent is to find out how reliable the LLM will be in digging deeper into the subject. I am looking at its responses for quality and for any signs of hallucination. If it can't handle this question, it is not worth asking deeper ones.

The Query

I need an introduction to Riemann solvers, both exact and approximate. What are the options, along with pros and cons? Provide me with the consequences of these decisions, along with some references to find out more. Provide a summary of mathematical and physical properties. The answers should be at the level of a full professor in mathematics, aerospace engineering, or astrophysics with three decades of experience and extensive publishing in computational physics. Provide four different responses to this query, including scoring of each for quality.

The Structure of the Query

1. The basic question I am looking to answer. It is specific in terms of what I want.

2. The context for the response, including the level and audience for the request.

3. The final bit asks for a sampling of responses along with scoring. LLMs are stochastic and responses are not unique, so this provides a much richer answer. In some cases, it can also narrow the scope, as the LLM runs out of statistically meaningful responses. A sketch of the assembled query follows the reference below.

Zhang, Jiayi, Simon Yu, Derek Chong, Anthony Sicilia, Michael R. Tomz, Christopher D. Manning, and Weiyan Shi. “Verbalized sampling: How to mitigate mode collapse and unlock LLM diversity.” arXiv preprint arXiv:2510.01171 (2025).
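
For concreteness, here is a minimal sketch of the three-part structure assembled programmatically. The `build_query` function and the `query_llm` call are hypothetical stand-ins for whichever chat client is in use; the point is the structure of the prompt, not any particular API.

```python
# A minimal sketch of the three-part query structure described above.
# `query_llm` is a hypothetical stand-in for whatever chat API is used.

QUESTION = (
    "I need an introduction to Riemann solvers, both exact and approximate. "
    "What are the options, along with pros and cons? Provide me with the "
    "consequences of these decisions, along with some references to find out "
    "more. Provide a summary of mathematical and physical properties."
)

CONTEXT = (
    "The answers should be at the level of a full professor in mathematics, "
    "aerospace engineering, or astrophysics with three decades of experience "
    "and extensive publishing in computational physics."
)

SAMPLING = (
    "Provide four different responses to this query, including scoring of "
    "each for quality."
)

def build_query() -> str:
    """Assemble the query: question first, then context, then sampling request."""
    return "\n\n".join([QUESTION, CONTEXT, SAMPLING])

if __name__ == "__main__":
    print(build_query())
    # response = query_llm(build_query())  # hypothetical client call
```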

“One reason I’m not worried about the possibility that we will soon make machines that are smarter than us, is that we haven’t managed to make machines until now that are smart at all. Artificial intelligence isn’t synthetic intelligence: It’s pseudo-intelligence.” – Alva Noe

What is the Point of This?

This is just the start of a session. All I am doing is establishing a baseline competence for the LLM on this topic. From here, I might start asking questions about the details of a given solver or class of solvers, trying to uncover the existing knowledge about one of them. A great deal depends on my mindset. In the best case, the LLM will provide me with context and details that are new to me, uncovering things that I had overlooked or not thought about.

This topic gets at one of the big limits on LLMs: they are all about words. Images are an important way of expressing knowledge, and LLMs miss this, failing to express explanations in that form. Here, that is a huge gap and limitation. Wave diagrams carry important conceptual information that is missing. For science in general, graphs and diagrams are essential; the graphical form of expression is complementary, and for numerical methods it can be essential. Overcoming this limitation would be a huge breakthrough. It is one of the barriers holding LLMs back, and it points to a place for advances. In essence, it would mean connecting the elements of a graphic to language logically and reliably. This is something people do naturally and a huge challenge for AI.

Commentary on the Responses

Claude fucking killed it on this request. It won this test by a country mile; it was a complete rout. The response absolutely blew me away, and the references were great and fairly extensive. Gemini was competent but terse, and gave references in a reasonable form. ChatGPT was superficial and provided a sketch. Its references were okay, but not specific. It was by far the worst.

Let's dig a little deeper. ChatGPT provided a bit of crap, too. "It defines the dissipation operator of the entire numerical method" is untrue, or at least over-simplified, and it bold-faced the claim for Christ's sake. This is a very simple perspective, not false, but not entirely true either; it is too blunt. Response 2 does not entirely hold up as numerical analysis; it falls short of that label. In general, the response is mostly correct, but by far the worst of the three. It reads like shitty PowerPoint slides from a boring presentation. I did appreciate the follow-up question suggestions at the end.

Did the Pentagon buy a lemon? I will note in passing that I played with their Codex, and it blew me away.

I'll consider Gemini next. First, it has the bonus of using full sentences. The negative was how it segmented the responses by type of solver: Roe was 1, HLL was 2, all-Mach was 3, and high-order was 4 (Rusanov). It did not discuss exact solvers at all, and flux vector splitting was not discussed either. So better answers were given, but the coverage was shit. Response 4 was really dismal in the sense that high-order methods are a different topic; it did not cleanly separate the topics. The references were far better, but pretty sparse. No outright fibs here, but a quite incomplete response. I rank it as passable but disappointing.

Overall, Gemini was only marginally more competent than ChatGPT. It mentions low-Mach behavior but not the carbuncle, which is an interesting choice. I might ask for more responses to see where it shows up (or not); for Gemini, four answers were not enough.

Now, let’s look at Claude. Holy shit! The response was beyond all expectations. Looks like Hegseth really fucked up!

The score cards for each response were great, too! Response 1 by itself was better than the entirety of the ChatGPT or Gemini responses. It used full sentences and was expansive in its discussion. It is hard to overstate how much better it was. The references were great and look correct (I know many of these by heart), with commentary on each one of them.

The second response is good, but has issues with scope. Godunov's method is appropriate; Godunov's theorem is not. It is related, but out of scope. The response does include a bunch of very welcome mathematical theory, too. Aside from my critique, this response is more complete than the other two and includes content they missed (relaxation schemes).

Response three takes a different tack, looking at major areas of physics. This still leaves room to touch on important points such as low Mach number issues and the carbuncle. In both cases, it does a better job than the other two. Its tables are better, too. Claude excels in every aspect of the response.

Response four is the best of the entire query across all three LLMs. I don't quite agree with the hierarchy given, but it's debatable. The high-order discussion does contain an error: the dissipation from a Riemann solver is nonlinearly present in the odd-order truncation error; it is even-order for linear analysis. This is subtle, but important. Interestingly, this response also gets to research topics of interest, including machine learning used for Riemann solvers. It then gives 25 references, albeit a roll-up of the other three responses (which seems like cheating!).

This made me feel like probing with some other questions. Nonetheless, this was a dominant performance from Claude.

ChatGPT (5.3)

You’re asking for something that sits right at the center of computational fluid dynamics and hyperbolic PDE numerics. The Riemann solver is essentially the microscopic engine of most Godunov-type finite-volume schemes. It answers one question repeatedly at every cell interface:

Given two constant states U_L and U_R, what is the flux through the interface after the nonlinear wave fan evolves?

That “local Cauchy problem” is the Riemann problem. Every shock-capturing scheme built since the work of Sergei K. Godunov in 1959 rests on this structure.

I’ll give you four different responses to your query, each with a different emphasis. Afterward I score them for usefulness.

Response 1 — The Classical CFD Perspective

The governing system is a hyperbolic conservation law

∂_t U + ∂_x F(U) = 0

with a Riemann initial condition

U(x, 0) = U_L for x < 0, U_R for x > 0

For the Euler equations the exact solution consists of three waves: two nonlinear acoustic waves and a contact discontinuity.

Exact Riemann solvers

Classic example: the solver of Eleuterio F. Toro.

Properties:

Mathematical

  • Exact solution of the nonlinear wave fan
  • Resolves shocks, rarefactions, and contacts precisely
  • Requires iterative root solve for pressure in the star region

Physical

  • Fully consistent with Rankine–Hugoniot conditions
  • Entropy satisfying
  • Correct wave speeds

Pros

  • Maximum physical fidelity
  • Excellent for validation and reference solutions

Cons

  • Expensive
  • Nontrivial branching logic
  • Iterative nonlinear solve per interface

Typical cost: 3–10× more expensive than approximate solvers.

For large simulations (ICF, astrophysics, turbulence) this is usually prohibitive.
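
To make the "iterative root solve" concrete, here is a minimal sketch of the star-region pressure iteration for a γ-law gas, following the standard formulation in Toro's book. It is illustrative only, assuming an ideal gas: no vacuum handling, a crude initial guess, and plain Newton iteration.

```python
import math

GAMMA = 1.4  # ratio of specific heats (assumed ideal gas)

def f_K(p, rho_k, p_k, a_k):
    """Velocity change across the left or right wave as a function of p*."""
    if p > p_k:  # shock branch
        A = 2.0 / ((GAMMA + 1.0) * rho_k)
        B = (GAMMA - 1.0) / (GAMMA + 1.0) * p_k
        return (p - p_k) * math.sqrt(A / (p + B))
    # rarefaction branch
    return 2.0 * a_k / (GAMMA - 1.0) * ((p / p_k) ** ((GAMMA - 1.0) / (2.0 * GAMMA)) - 1.0)

def df_K(p, rho_k, p_k, a_k):
    """Derivative of f_K with respect to p, needed for Newton's method."""
    if p > p_k:  # shock branch
        A = 2.0 / ((GAMMA + 1.0) * rho_k)
        B = (GAMMA - 1.0) / (GAMMA + 1.0) * p_k
        return math.sqrt(A / (p + B)) * (1.0 - (p - p_k) / (2.0 * (p + B)))
    # rarefaction branch
    return (1.0 / (rho_k * a_k)) * (p / p_k) ** (-(GAMMA + 1.0) / (2.0 * GAMMA))

def star_pressure(rho_l, u_l, p_l, rho_r, u_r, p_r, tol=1e-10, max_iter=50):
    """Newton iteration for p* solving f_L(p*) + f_R(p*) + (u_r - u_l) = 0."""
    a_l = math.sqrt(GAMMA * p_l / rho_l)
    a_r = math.sqrt(GAMMA * p_r / rho_r)
    p = 0.5 * (p_l + p_r)  # crude initial guess; TRRS/TSRS would be better
    for _ in range(max_iter):
        f = f_K(p, rho_l, p_l, a_l) + f_K(p, rho_r, p_r, a_r) + (u_r - u_l)
        df = df_K(p, rho_l, p_l, a_l) + df_K(p, rho_r, p_r, a_r)
        p_new = max(p - f / df, tol)  # keep the pressure iterate positive
        if abs(p_new - p) / (0.5 * (p_new + p)) < tol:
            return p_new
        p = p_new
    return p

# Sod shock tube: p* should come out near 0.303
print(star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```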

Approximate Riemann solvers

These replace the exact wave fan with simplified approximations.

Roe solver

From Philip L. Roe.

Idea

Linearize the flux Jacobian using a Roe average so the system behaves locally like a linear hyperbolic system.

Properties

Mathematical

  • Exact resolution of isolated shocks
  • Linearized eigenstructure
  • Conserves flux

Physical

  • Captures contacts sharply
  • Requires entropy fix

Pros

  • Excellent resolution
  • Efficient
  • Widely used in compressible CFD

Cons

  • Entropy violation near sonic rarefactions
  • Carbuncle instability
  • Requires eigen decomposition

HLL solver

From Ami Harten, Peter Lax, and Bram van Leer.

Idea

Collapse the wave fan into two bounding waves.

Mathematical properties

  • Only fastest left/right speeds used
  • Intermediate structure discarded

Pros

  • Extremely robust
  • Positivity preserving
  • No eigenvectors needed

Cons

  • Smears contact and shear waves

HLLC solver

Extension of HLL restoring the contact wave.

Pros

  • Good contact resolution
  • Robust
  • No full eigensystem

Cons

  • Still approximate
  • Some instability regimes

HLLE solver

Variant tuned for robustness.

Pros

  • Positivity preserving
  • Stable for strong shocks

Cons

  • Very diffusive

Common in astrophysical codes.

Consequences of solver choice

Accuracy hierarchy

Exact > Roe ≈ HLLC > HLL > HLLE

Robustness hierarchy

HLLE > HLL > HLLC > Roe > Exact

Cost hierarchy

Exact > Roe > HLLC > HLL/HLLE

For high-Mach astrophysical flows the dominant concern becomes positivity and shock stability, not sharpness.

Mathematical properties summary

Solver | Conservation | Entropy | Contact resolution | Positivity
Exact | yes | yes | exact | yes
Roe | yes | requires fix | excellent | weak
HLL | yes | yes | poor | strong
HLLC | yes | mostly | good | moderate
HLLE | yes | yes | poor | very strong

Standard references

Classic texts:

  • Riemann Solvers and Numerical Methods for Fluid Dynamics (Toro)
  • Finite Volume Methods for Hyperbolic Problems (LeVeque)

Papers

  • Roe (1981) Approximate Riemann solvers
  • Harten–Lax–van Leer (1983)
  • Toro–Spruce–Spears (1994) HLLC

Score for this response Quality: 9/10

Strong for CFD practitioners; concise but standard.

Response 2 — The Numerical Analysis Perspective

The Riemann solver is best viewed as an approximate evaluation of the Godunov flux

F_{i+1/2} = F(U(x/t = 0+))

where U(x/t) is the self-similar Riemann solution.

Theoretical classification:

  1. Exact nonlinear solvers
  2. Linearized solvers
  3. Two-wave solvers
  4. Flux splitting schemes

Linearized solvers

The Roe method constructs a matrix Ã satisfying

F(U_R) − F(U_L) = Ã (U_R − U_L)

This ensures conservation and correct shock speeds.

It is essentially a first-order Jacobian linearization preserving the Rankine–Hugoniot condition.

The catch is that Ã may violate entropy conditions.

Two-wave solvers

HLL-type methods replace the detailed wave structure with

F_HLL = [S_R F_L − S_L F_R + S_L S_R (U_R − U_L)] / (S_R − S_L)

where S_L and S_R are estimates of the extremal signal speeds.

This yields a consistent flux with built-in dissipation proportional to the wave-speed bounds.

It is essentially a Rusanov-type viscosity operator with adaptive scaling.
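
A minimal sketch of this HLL flux for the 1D Euler equations, assuming a γ-law gas and the simple Davis-type wave-speed estimates (production codes more often use the Einfeldt estimates built from Roe averages):

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (assumed gamma-law gas)

def euler_flux(U):
    """Physical flux for the 1D Euler equations; U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL interface flux with Davis wave-speed estimates."""
    uL, uR = UL[1] / UL[0], UR[1] / UR[0]
    pL = (GAMMA - 1.0) * (UL[2] - 0.5 * UL[0] * uL**2)
    pR = (GAMMA - 1.0) * (UR[2] - 0.5 * UR[0] * uR**2)
    aL = np.sqrt(GAMMA * pL / UL[0])
    aR = np.sqrt(GAMMA * pR / UR[0])
    SL = min(uL - aL, uR - aR)  # simplest (Davis) bounding speeds
    SR = max(uL + aL, uR + aR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if SL >= 0.0:   # all waves move right: upwind is the left flux
        return FL
    if SR <= 0.0:   # all waves move left: upwind is the right flux
        return FR
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)
```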

Flux splitting solvers

Example: Steger–Warming or van Leer flux splitting.

Flux is decomposed into

F = F⁺ + F⁻

based on eigenvalue sign.

Pros

  • Simpler implementation
  • Vectorizable

Cons

  • Larger numerical diffusion

Physical interpretation

The Riemann solver determines the numerical viscosity tensor.

For example

  • Roe viscosity proportional to |Ã|
  • HLL viscosity proportional to spectral radius
  • Lax–Friedrichs viscosity isotropic

This has major implications for

  • shock thickness
  • turbulence damping
  • numerical stability

Score Quality: 8.5/10

More mathematical but less implementation guidance.

Response 3 — The Astrophysical Simulation Perspective

In astrophysics (supernovae, ICF, relativistic jets) solver choice becomes a stability problem, not purely accuracy.

Important physical constraints:

  • positivity of density
  • positivity of pressure
  • correct shock jump conditions
  • Galilean invariance
  • carbuncle avoidance

Codes like those associated with James M. Stone often default to HLLE for robustness.

Reason:

Roe-type solvers fail catastrophically in strong shocks with large kinetic/internal energy ratios.

This is common in

  • blast waves
  • hypersonic flows
  • stellar explosions

Thus astrophysics frequently uses:

HLLE or HLLC + fallback HLLE.

This strategy trades accuracy for guaranteed stability.

Consequences in astrophysical regimes

Roe

Pros

  • excellent contact resolution

Cons

  • negative pressure in high Mach flows
  • carbuncle instability

HLLE

Pros

  • positivity preserving
  • extremely stable

Cons

  • diffuses contact and shear layers

HLLC

Pros

  • compromise between the two

Cons

  • can fail in extreme conditions

Score Quality: 8/10

Good for application context but narrower.

Response 4 — The Structural View of Modern Methods

Modern Riemann solvers can be interpreted as approximations to the spectral decomposition

F(U_R) − F(U_L) = Σ_k α_k r_k

where the r_k are eigenvectors of the Jacobian.

Different solvers approximate this decomposition differently.

Categories

Complete solvers

Resolve all characteristic waves.

Examples

  • Roe
  • exact Euler solver

Incomplete solvers

Resolve only some waves.

Examples

  • HLL (two-wave)
  • HLLEM
  • HLLC

Central solvers

Avoid Riemann problems entirely.

Examples

  • Kurganov–Tadmor
  • Lax–Friedrichs

Physical meaning

A Riemann solver determines

  1. numerical dissipation tensor
  2. characteristic wave coupling
  3. shock admissibility

Thus solver choice implicitly determines which physical waves your scheme respects.

For instance

  • HLL ignores entropy/contact modes
  • Roe preserves full eigensystem

Frontier developments

Recent work includes

  • entropy-stable Riemann solvers
  • kinetic solvers
  • path-conservative solvers for nonconservative systems

These become important in

  • MHD
  • relativistic flows
  • multiphase systems

Score Quality: 8.7/10

Conceptually deep but less practical.

Overall ranking

Response | Strength | Score
1 | balanced overview | 9.0
4 | conceptual framework | 8.7
2 | mathematical clarity | 8.5
3 | application-specific insight | 8.0

A final observation that people sometimes miss: the Riemann solver is not just a flux function. It defines the dissipation operator of the entire numerical method. That single design choice shapes shock stability, turbulence spectra, and even long-time energy evolution in compressible simulations.

That is why decades of CFD research still revolve around tweaking that little interface problem.

If useful, a deeper dive could include:

  • the carbuncle instability and its relationship to Roe solvers
  • entropy-stable Riemann solvers (Tadmor framework)
  • MHD Riemann solvers and the seven-wave structure
  • how modern Godunov codes mix solvers adaptively

Gemini (3.1 Thinking)

For a researcher with decades in the trenches of computational physics, the choice of a Riemann solver isn’t a textbook exercise—it’s a decision about where you’re willing to accept errors. Whether you’re dealing with the Euler equations, MHD, or the complex mixtures in ICF, the Riemann solver is the “physics engine” at the cell interface.

Here are four distinct perspectives on the state of the art, evaluated for their utility in a high-level research context.

Response 1: The Linearization & Spectral Fidelity Perspective

This approach focuses on the evolution from the Exact Riemann Solver to Roe’s Linearization.

The Exact Solver (Godunov, 1959) requires an iterative solution (usually Newton-Raphson) for the pressure in the Star Region. While it is the “truth” for the jump conditions, it is computationally prohibitive for 3D production codes.

The Roe Solver

Roe (1981) introduced the idea of solving a locally linearized problem ∂U/∂t + Â ∂U/∂x = 0, where Â satisfies the Roe conditions (consistency, hyperbolicity, and conservation).

  • Pros: Resolves isolated contact discontinuities exactly. This is vital for multi-material flows.
  • Cons: The “Entropy Fix” problem. Roe can allow non-physical expansion shocks because it doesn’t “know” about the second law of thermodynamics at sonic points. It’s also susceptible to the Carbuncle phenomenon in high-Mach shocks.
  • Consequences: If you’re doing high-resolution boundary layer work, Roe is great. If you’re hitting a shock at Mach 20, expect it to blow up without heavy-handed dissipation.

Mathematical Properties

  • Consistency: f̃(u, u) = f(u).
  • Conservation: Integral of the flux equals the jump in state.
  • Exact Property: If U_L and U_R are connected by a single shock or contact, Roe recovers it.

Score: 8/10. Strong on classical linear algebra, but skips the robustness needed for modern extreme-regime physics.

Response 2: The Positivity & Robustness Perspective (HLL Family)

When robustness is the only thing that keeps your simulation from crashing at 3:00 AM, you look at the HLL (Harten, Lax, van Leer) framework.

HLL vs. HLLC

The original HLL (1983) assumes a two-wave structure. It’s incredibly stable but treats the contact discontinuity as a smear of numerical diffusion. For astrophysics or ICF, where temperature gradients across contacts drive the physics, HLL is garbage.

HLLC (Toro et al., 1994) restores the missing contact wave (“C” for Contact).

  • Pros: It maintains the positivity-preserving nature of HLL while resolving the contact. It’s significantly more robust than Roe for low-density or high-Mach regimes.
  • Cons: Slightly more logic-heavy than HLL.

References: Toro, E. F., Riemann Solvers and Numerical Methods for Fluid Dynamics (The “Bible” of the field).

Physical Properties

  • Entropy Satisfaction: Built-in. No expansion shocks.
  • Positivity Preservation: Crucial for vacuum regions (e.g., stellar winds).

Score: 9/10. This reflects the current industry standard in high-energy density physics and codes like FLASH or Athena++.

Response 3: The “All-Speed” & Low-Mach Perspective

Standard Riemann solvers (Roe, HLLC) fail miserably as the Mach number M → 0. The pressure gradient scales as O(M²), but the numerical dissipation scales as O(M). This leads to "checkerboard" pressure fields and excessive diffusion in the subsonic regime.

The Options

  • AUSM+ (Advection Upstream Splitting Method): Liou (1996). Splits the flux into convective and pressure terms. It handles shocks well and is efficient.
  • Low-Mach Preconditioning: Modifying the Riemann solver to scale the dissipation terms by the local Mach number.

Consequences

If you’re running a code that transitions from supersonic shocks to nearly incompressible turbulence, a standard HLLC solver will kill your small-scale eddies. You need an “All-Speed” variant.

Mathematical Property

  • Asymptotic Consistency: The solver must recover the incompressible limit as M→0.

Score: 7/10. Specialized but critical for multi-scale computational science.

Response 4: The High-Order Stencil Interaction (WENO/DG context)

In a 30-year career, you’ve likely seen that the Riemann solver is only as good as the reconstruction (U_L, U_R). At high orders (WENO7, TENO, or Discontinuous Galerkin), the choice of Riemann solver becomes even more nuanced.

Rusanov (Local Lax-Friedrichs)

The simplest approximate solver. It uses the maximum wave speed λmax​ to provide a central flux with a dissipation term.

  • Pros: Dead simple. Guaranteed stable.
  • Cons: Unbelievably dissipative. It “melts” physics.
  • The Twist: In high-order DG methods, the high-order reconstruction provides so much accuracy that you can sometimes get away with a simpler, more robust Riemann solver like HLL or even Rusanov.
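
A minimal sketch of the Rusanov (local Lax–Friedrichs) flux for the 1D Euler equations: a central average plus dissipation scaled by the largest local signal speed. Assumes a γ-law gas; illustrative only.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (assumed gamma-law gas)

def euler_flux(U):
    """Physical flux for the 1D Euler equations; U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def rusanov_flux(UL, UR):
    """Central flux plus dissipation scaled by the fastest local wave speed."""
    uL, uR = UL[1] / UL[0], UR[1] / UR[0]
    pL = (GAMMA - 1.0) * (UL[2] - 0.5 * UL[0] * uL**2)
    pR = (GAMMA - 1.0) * (UR[2] - 0.5 * UR[0] * uR**2)
    aL = np.sqrt(GAMMA * pL / UL[0])
    aR = np.sqrt(GAMMA * pR / UR[0])
    lam_max = max(abs(uL) + aL, abs(uR) + aR)  # fastest signal-speed estimate
    # simple and very diffusive: every wave is smeared at rate lam_max
    return 0.5 * (euler_flux(UL) + euler_flux(UR)) - 0.5 * lam_max * (UR - UL)
```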

Summary of Options

Solver | Contact Resolution | Robustness | Complexity | Recommended Use
Exact | Perfect | High | Very High | Verification/Validation
Roe | Perfect | Low | Medium | Boundary layers, low Mach
HLLC | Good | High | Medium | General CFD/Astrophysics
HLLD | Perfect (MHD) | High | High | Magnetohydrodynamics
Rusanov | Poor | Maximum | Low | Debugging, extreme shocks

References for the Expert

  1. LeVeque, R. J. (2002). Finite Volume Methods for Hyperbolic Problems. (Focuses on the mathematical structure).
  2. Einfeldt, B. (1988). On Godunov-type methods for gas dynamics. (Origin of the positivity-preserving ideas).
  3. Toro, E. F. (2009). Riemann Solvers and Numerical Methods for Fluid Dynamics. (The comprehensive technical reference).

Score: 8.5/10. Deeply practical for someone designing their own stencils (like your work with the 3-argument median function).

Would you like to dive deeper into the specific derivation of the HLLC signal speeds for a particular equation of state, or perhaps look at the integration of these solvers with your median-based stencil selection?

Claude (4.6 Sonnet)

COMPUTATIONAL PHYSICS · NUMERICAL METHODS · EXPERT REFERENCE

Riemann Solvers: Exact and Approximate

Four independent expert-level treatments — mathematical foundations, physical interpretation, practical consequences, and references — with comparative scoring.

I’ll create a polished, navigable HTML reference with four distinct responses at peer-reviewer depth, each scored and critiqued.

Here is the full reference document with four independent expert-level responses, each with scored subcategories and a critical verdict. A brief summary of the comparative scoring:

Response I — The Systematic Survey (83/100). Encyclopedic and well-organized. Strong on breadth and references, but the structure is mechanical and it lacks modern connections (GPU implications, DG coupling, ML solvers). Excellent for a graduate syllabus; less useful for a specialist making decisions.

Response II — The Mathematical Edifice (79/100). Exceptional pure-mathematical depth: Glimm’s theorem, Bressan’s uniqueness theory, Osher path integrals, relaxation entropy structure. However, it substantially under-serves engineers and astrophysicists — no MHD, no relativistic flows, no discussion of implementation consequences. The score reflects disciplinary narrowness, not mathematical quality.

Response III — The Practitioner’s Toolkit (87/100). Highest practical utility: application-class decision guide (aeroacoustics, shock-dominated, turbulent LES, MHD, relativistic, complex EOS), failure-mode taxonomy (carbuncle, positivity collapse, slow contacts, low-Mach), and a comparative decision matrix. The theoretical treatment is deliberately light, which costs marks for a mathematically sophisticated audience.

Response IV — The Integrated Synthesis (96/100). Best overall: rigorous mathematics, physical intuition, practical decision guidance, and forward-looking treatment of genuinely multidimensional solvers, all-speed preconditioning, and learned Riemann solvers. Closes with the key epistemic frame — solver choice as physical model selection, not merely numerical approximation — that distinguishes expert practitioner thinking from technical competence. The 25-entry reference list covers the canonical papers across all disciplines.

The Systematic Survey

Encyclopedic breadth: from Godunov’s 1959 paper to modern extensions, organized for pedagogical clarity.

QUALITY ASSESSMENT

Mathematical Depth: 80/100
Physical Insight: 82/100
Practical Guidance: 85/100
Reference Quality: 88/100
Expert Suitability: 78/100

Overall: 83/100 — Strong comprehensive survey; somewhat predictable structure; lacks modern context and cross-disciplinary synthesis.

Verdict: Excellent for a graduate course syllabus. Too encyclopedic for a specialist seeking decisions or new connections.

The Riemann Problem: Statement and Significance

Given a system of hyperbolic conservation laws ∂U/∂t + ∂F(U)/∂x = 0 with piecewise constant initial data — a left state U_L and right state U_R separated at the origin — the Riemann problem is the initial value problem whose self-similar solution U(x/t) encodes the fundamental wave interactions of the governing system. Godunov (1959) was the first to recognise that using exact solutions of local Riemann problems at each cell interface as the building block of a finite-volume scheme yields a first-order method that is conservative, consistent, and monotone. This insight remains the conceptual foundation of modern shock-capturing.

For the Euler equations of gas dynamics with state vector U = (ρ, ρu, E)ᵀ, the solution structure comprises at most three waves: a left nonlinear wave (shock or rarefaction), a linearly-degenerate contact/shear discontinuity, and a right nonlinear wave. The intermediate states U*_L and U*_R satisfy p*_L = p*_R ≡ p* and u*_L = u*_R ≡ u*, while density (and thus entropy) jumps across the contact.

Exact Riemann structure (Euler, γ-law gas):

U_L |~~~ L-wave ~~~| U*_L |~~ contact ~~| U*_R |~~~ R-wave ~~~| U_R

p*(u_L, p_L, u_R, p_R) is solved iteratively via
f(p*) = f_L(p*) + f_R(p*) + (u_R − u_L) = 0
Shock (p* > p_K): f_K = (p* − p_K) √[A_K / (p* + B_K)], with A_K = 2/[(γ+1)ρ_K], B_K = (γ−1)p_K/(γ+1)
Rarefaction (p* ≤ p_K): f_K = (2c_K/(γ−1)) [(p*/p_K)^((γ−1)/2γ) − 1]

Exact Solvers

Godunov’s Iterative Exact Solver

The pressure p* is found by Newton-Raphson (or bisection as fallback) on the scalar equation above, typically converging in 5–20 iterations to machine precision. Wave speeds and intermediate states are then recovered algebraically. Sampling the solution at x/t = 0 yields the interface flux used in the finite-volume update.

Pros: Exactly captures isolated shocks, contacts, and rarefactions; entropy-correct without any additional fix; the gold standard for validating approximate solvers; uniquely correct near slow moving shocks and sonic rarefactions.

Cons: Iterative convergence is expensive at scale; no closed-form for general EOS (requires nested iterations); can fail in the presence of near-vacuum states unless initialised carefully; mathematically intractable for systems beyond Euler (MHD, relativistic, multi-fluid).

Two-Rarefaction (TRRS) and Two-Shock (TSRS) Approximations

The TRRS approximates p* by assuming both nonlinear waves are rarefactions, yielding a closed-form estimate valid near vacuum. The TSRS assumes both are shocks, overestimating p* but providing a useful initial guess for Newton iterations. Toro (2009) provides explicit formulas for both; they are useful as initialisation strategies rather than standalone solvers.

Approximate Solvers: A Systematic Catalogue

Rusanov / LLF — Wave model: 1 wave (max speed). Strengths: trivially simple; always positive; works for any system. Weaknesses: most diffusive; smears all waves; poor for contacts. Typical domain: debugging, robustness testing, stiff multi-physics.

HLL — Wave model: 2 waves (s_L, s_R). Strengths: positivity preserving (Einfeldt); no entropy fix; vacuum robust. Weaknesses: diffuses contacts; misses shear waves. Typical domain: robustness-critical applications; base for extensions.

HLLC — Wave model: 3 waves (adds contact s*). Strengths: resolves contacts/shear; positivity preserving; widely portable. Weaknesses: carbuncle risk (2D+); ad hoc s* estimate. Typical domain: compressible flow, astrophysics, multi-material.

Roe — Wave model: N waves (full linearisation). Strengths: exact for isolated waves; low numerical dissipation; sharp contacts. Weaknesses: entropy violations; not positivity-preserving; carbuncle; expensive (eigendecomposition). Typical domain: aeroacoustics, smooth flow; requires entropy fix.

Osher–Solomon — Wave model: path integral (all sonic points). Strengths: entropy-satisfying by construction; smooth flux suits implicit methods; accurate. Weaknesses: complex implementation; expensive path integrals; general EOS is hard. Typical domain: steady flows, implicit time integration, aerodynamics.

AUSM+ / AUSM+-up — Wave model: split advection + pressure. Strengths: excellent at low Mach; no carbuncle; natural for multi-fluid. Weaknesses: less robust at strong shocks; positivity not universal. Typical domain: turbomachinery, propulsion, low-to-high Mach transition.

Relaxation (e.g., Bouchut) — Wave model: extended system (3–5 waves). Strengths: provably entropy-satisfying; flexible EOS; positivity for ρ, p. Weaknesses: free parameter λ; less standard, with a learning curve. Typical domain: complex EOS (stellar interiors, detonation, combustion).

HLLD — Wave model: 5 waves (MHD extension). Strengths: resolves the key MHD wave families (five-state fan); positivity preserving; less diffusive than HLLC-MHD. Weaknesses: MHD-specific; more complex than HLLC. Typical domain: ideal MHD; astrophysical plasmas; dynamo codes.

The Roe Solver in Detail

Roe (1981) replaces the nonlinear Riemann problem with a locally linear one. A matrix Ã(U_L, U_R) must satisfy three conditions: (1) hyperbolicity with real eigenvalues; (2) consistency, Ã(U,U) = A(U) = ∂F/∂U; and (3) the Roe property, Ã(U_R − U_L) = F_R − F_L. For the Euler equations, this is achieved via Roe-averaged states:

ρ̃ = √(ρ_L ρ_R)
ũ = (√ρ_L u_L + √ρ_R u_R) / (√ρ_L + √ρ_R)
H̃ = (√ρ_L H_L + √ρ_R H_R) / (√ρ_L + √ρ_R),  where H = (E + p)/ρ
c̃² = (γ−1)(H̃ − ½ũ²)

Roe flux:
F_Roe = ½(F_L + F_R) − ½ Σ_k |λ̃_k| α_k r̃_k
where λ̃_k is the kth eigenvalue of Ã, α_k the kth wave strength, and r̃_k the kth right eigenvector.

The Roe solver exactly resolves isolated shocks, contacts, and rarefactions, making it ideal when high accuracy is paramount. However, it admits expansion shocks (entropy violations) at sonic points: wherever λ̃_k and the corresponding physical characteristic change sign, a small perturbative fix is required. The most widely used is the Harten (1983) entropy fix, replacing |λ| with a smoothed version below a threshold δ. Einfeldt’s (1991) HLLE blends Roe with HLL wave speeds to restore positivity.
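A compact sketch of the Roe flux for the 1D Euler equations follows, using the averages above; the wave-strength projections are the standard conserved-variable formulas (as in Toro 2009, Ch. 11), and this is illustrative rather than production code:

```python
import numpy as np

def roe_flux(UL, UR, gamma=1.4):
    """Roe flux for 1D Euler, U = (rho, rho*u, E).

    Bare |lambda| is used here: at sonic points this admits expansion shocks,
    so in practice an entropy fix (Harten's Phi_delta, later in this post)
    should replace np.abs on the eigenvalues.
    """
    def prim(U):
        rho, m, E = U
        u = m / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
        return rho, u, p, (E + p) / rho   # rho, u, p, total enthalpy H

    def flux(U):
        rho, u, p, H = prim(U)
        return np.array([rho * u, rho * u * u + p, rho * u * H])

    rhoL, uL, pL, HL = prim(UL)
    rhoR, uR, pR, HR = prim(UR)

    # Roe averages
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    u = (wL * uL + wR * uR) / (wL + wR)
    H = (wL * HL + wR * HR) / (wL + wR)
    c = np.sqrt((gamma - 1.0) * (H - 0.5 * u * u))
    lam = np.array([u - c, u, u + c])

    # wave strengths alpha_k from the conserved-variable jump
    dU = UR - UL
    a2 = (gamma - 1.0) / c**2 * (dU[0] * (H - u * u) + u * dU[1] - dU[2])
    a1 = (dU[0] * (u + c) - dU[1] - c * a2) / (2.0 * c)
    a3 = dU[0] - (a1 + a2)
    alpha = np.array([a1, a2, a3])

    # right eigenvectors of the Roe matrix (rows)
    r = np.array([[1.0, u - c, H - u * c],
                  [1.0, u,     0.5 * u * u],
                  [1.0, u + c, H + u * c]])

    diss = (np.abs(lam) * alpha) @ r   # sum_k |lam_k| alpha_k r_k
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * diss
```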

HLL and HLLC

The HLL solver (Harten, Lax, van Leer 1983) bounds all waves between a left speed s_L and right speed s_R, yielding a simple two-state approximation to the Riemann fan. The key result is:

F_HLL = [s_R F_L − s_L F_R + s_L s_R (U_R − U_L)] / (s_R − s_L)

Einfeldt (HLLE) wave speeds:
s_L = min(u_L − c_L, ũ − c̃)
s_R = max(u_R + c_R, ũ + c̃)
where ũ, c̃ are the Roe-averaged velocity and sound speed.

The Einfeldt wave speeds are the minimal choice that guarantees positivity of density and pressure. HLLC (Toro, Spruce, Speares 1994) augments HLL by restoring the contact wave speed s*, estimated from momentum and mass conservation across the outer waves. The resulting flux is more accurate for shear-dominated and material-interface problems while retaining positivity under the Batten et al. (1997) conditions.
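The HLL formula translates almost line-for-line into code. A minimal sketch with the Einfeldt speed estimates, again for the ideal-gas Euler system (variable names are mine):

```python
import numpy as np

def hll_flux(UL, UR, gamma=1.4):
    """HLL flux for 1D Euler with Einfeldt (HLLE) wave-speed estimates."""
    def prim(U):
        rho, m, E = U
        u = m / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
        return rho, u, p

    def flux(U, rho, u, p):
        return np.array([rho * u, rho * u * u + p, u * (U[2] + p)])

    rhoL, uL, pL = prim(UL)
    rhoR, uR, pR = prim(UR)
    cL = np.sqrt(gamma * pL / rhoL)
    cR = np.sqrt(gamma * pR / rhoR)

    # Roe-averaged velocity and sound speed for the Einfeldt bounds
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    u_t = (wL * uL + wR * uR) / (wL + wR)
    H_t = (wL * (UL[2] + pL) / rhoL + wR * (UR[2] + pR) / rhoR) / (wL + wR)
    c_t = np.sqrt((gamma - 1.0) * (H_t - 0.5 * u_t * u_t))

    s_L = min(uL - cL, u_t - c_t)
    s_R = max(uR + cR, u_t + c_t)

    FL = flux(UL, rhoL, uL, pL)
    FR = flux(UR, rhoR, uR, pR)
    if s_L >= 0.0:          # entirely right-moving fan
        return FL
    if s_R <= 0.0:          # entirely left-moving fan
        return FR
    return (s_R * FL - s_L * FR + s_L * s_R * (UR - UL)) / (s_R - s_L)
```

HLLC adds the contact estimate s* and two intermediate states to the same skeleton; the branch structure is otherwise identical.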

Mathematical Properties: A Checklist

Conservation: All Godunov-type solvers implemented in conservation form are conservative by construction, satisfying the Lax-Wendroff theorem: if the discrete solution converges, it converges to a weak solution.

Entropy consistency: Exact, HLLE, AUSM, Osher, and relaxation solvers are entropy-satisfying (or easily made so). Roe requires an explicit entropy fix. HLLC can generate small entropy violations in pathological cases.

Positivity: Einfeldt’s HLL and HLLC (with appropriate s*) guarantee ρ*, p* > 0 provided the wave speeds are correctly estimated. The Roe solver does not; positivity limiters (e.g., Hu, Adams, Shu 2013) are needed in rarefied flows.

Carbuncle phenomenon: Low-dissipation solvers (Roe, HLLC) applied to grid-aligned strong shocks in 2D/3D generate odd-even decoupled instabilities known as the carbuncle or Quirk instability (Quirk 1994). HLL/HLLE is immune. Practical cures include blending with HLL at shocks, rotated Riemann solvers (Nishikawa-Kitamura), or multidimensional Roe variants.

Extensions to MHD and Relativistic Flows

The ideal MHD Riemann problem has seven waves (two fast, two slow, two Alfvén, and one entropy wave). Brio and Wu (1988) solved it exactly for a special one-dimensional case, revealing compound waves absent in hydrodynamics. Practical MHD solvers use HLLC-MHD (Gurski 2004; Li 2005) or the superior HLLD (Miyoshi and Kusano 2005), which resolves five of the seven waves. Divergence-free evolution of B is a separate concern handled by constrained transport (Evans and Hawley 1988) or Dedner cleaning.

Special relativistic hydrodynamics (SRHD) requires solving a quartic for p* in the exact solver (Martí and Müller 1994). HLLC-SRHD (Mignone and Bodo 2005) is the standard practical choice. General relativistic extensions (GRHD/GRMHD) use the same framework with source terms encoding spacetime curvature.

Key References

1. [GOD59] Godunov, S.K. (1959). “A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics.” Mat. Sb. 47, 271–306. [The founding paper; still worth reading.]
2. [ROE81] Roe, P.L. (1981). “Approximate Riemann solvers, parameter vectors, and difference schemes.” J. Comput. Phys. 43, 357–372. [Essential reading; introduces Roe-averaging and linearisation.]
3. [HLV83] Harten, A., Lax, P.D., van Leer, B. (1983). “On upstream differencing and Godunov-type schemes for hyperbolic conservation laws.” SIAM Rev. 25, 35–61. [Introduces HLL; foundational theory.]
4. [EIN88] Einfeldt, B. (1988). “On Godunov-type methods for gas dynamics.” SIAM J. Numer. Anal. 25, 294–318. [Optimal wave-speed estimates for positivity.]
5. [EIN91] Einfeldt, B., Munz, C.D., Roe, P.L., Sjögreen, B. (1991). “On Godunov-type methods near low densities.” J. Comput. Phys. 92, 273–295. [HLLE positivity; critical near vacuum.]
6. [TOR94] Toro, E.F., Spruce, M., Speares, W. (1994). “Restoration of the contact surface in the HLL-Riemann solver.” Shock Waves 4, 25–34. [HLLC introduction.]
7. [TOR09] Toro, E.F. (2009). Riemann Solvers and Numerical Methods for Fluid Dynamics, 3rd ed. Springer. [The definitive reference text; exhaustive coverage.]
8. [LEV02] LeVeque, R.J. (2002). Finite Volume Methods for Hyperbolic Problems. Cambridge. [Superb mathematical treatment; excellent for rigour.]
9. [MIY05] Miyoshi, T., Kusano, K. (2005). “A multi-state HLL approximate Riemann solver for ideal MHD.” J. Comput. Phys. 208, 315–344. [HLLD; standard for MHD codes.]
10. [MIG05] Mignone, A., Bodo, G. (2005). “An HLLC Riemann solver for relativistic flows.” MNRAS 364, 126–136. [Relativistic HLLC.]

RESPONSE II

The Mathematical Edifice

Built from the theory of hyperbolic systems: entropy solutions, Glimm scheme, Lax–Friedrichs, and the algebraic structure underlying approximate linearisations.

QUALITY ASSESSMENT

Mathematical Depth: 97/100
Physical Insight: 65/100
Practical Guidance: 60/100
Reference Quality: 93/100
Expert Suitability: 80/100
Overall: 79/100 — Mathematically outstanding; insufficient applied relevance for engineers and astrophysicists. Ignores MHD, relativistic flows, and multi-dimensional consequences.
Verdict: Exceptional for a pure or applied mathematician. Engineering and astrophysics readers will find it frustrating in its abstraction from real simulation decisions.

Hyperbolic Systems and the Category of Admissible Solutions

Let U : ℝ × ℝ⁺ → Ω ⊂ ℝⁿ satisfy the Cauchy problem for a system of hyperbolic conservation laws ∂ₜU + ∂ₓF(U) = 0 with U(·,0) = U₀ ∈ L¹ ∩ BV. Hyperbolicity requires the Jacobian A(U) = DF(U) to have real eigenvalues λ₁ ≤ ··· ≤ λₙ and a complete set of eigenvectors. Strict hyperbolicity (λ₁ < ··· < λₙ) and the additional structure of either genuinely nonlinear (∇λₖ · rₖ ≠ 0) or linearly degenerate (∇λₖ · rₖ ≡ 0) fields, as in Lax (1957), governs the wave types admitted.

Classical solutions generally cease to exist in finite time; one passes to weak solutions satisfying the integral identity against test functions with compact support. However, weak solutions are not unique. The Lax entropy condition — that characteristics from both sides impinge on a shock — selects the physically relevant solution, equivalently characterised by the existence of a convex entropy pair (η, q) satisfying ∂ₜη(U) + ∂ₓq(U) ≤ 0 in the distributional sense.
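As a concrete instance for the Euler equations — the classical example, though the normalisation below is one common choice among several — the physical specific entropy furnishes such a pair:

```latex
% s = \ln(p\,\rho^{-\gamma}) is the specific entropy (up to additive constants);
% a standard convex entropy pair for the Euler equations is
\eta(U) = -\frac{\rho\, s}{\gamma - 1}, \qquad q(U) = u\,\eta(U),
% and admissible weak solutions satisfy, in the sense of distributions,
\partial_t \eta(U) + \partial_x q(U) \le 0.
```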

KRUZHKOV–LAX–GLIMM EXISTENCE THEORY
For genuinely nonlinear strictly hyperbolic systems with small BV initial data, Glimm (1965) proved global existence of entropy weak solutions via a random-choice scheme that stitches together exact Riemann solutions. The total variation remains bounded in time: TV(U(·,t)) ≤ C · TV(U₀). Bressan et al. (2000) established uniqueness and L¹-stability of such solutions. These results make the Riemann problem the canonical building block of the theoretical analysis, and justify its centrality in numerical schemes.

The Godunov Framework and First-Order Accuracy

Godunov’s scheme is the exact time integration of the conservation law when the solution is initialised as piecewise constant on a mesh of cells [x_{j-½}, x_{j+½}]. The cell average at time tⁿ⁺¹ is:

U_j^{n+1} = U_j^n − (Δt/Δx) [F̂(U_j^n, U_{j+1}^n) − F̂(U_{j-1}^n, U_j^n)]

where F̂(U_L, U_R) = F(U*(0; U_L, U_R)), the flux from the exact Riemann solution evaluated at x/t = 0.

The scheme is conservative (Lax-Wendroff theorem: convergent conservative schemes converge to weak solutions), consistent (F̂(U,U) = F(U)), and monotone in the scalar case. Harten, Hyman, and Lax (1976) showed that monotone schemes satisfy a discrete entropy condition, and the L¹ contraction property follows. By Godunov’s theorem for linear monotone schemes, the scheme is exactly first-order: no linear scheme can be simultaneously more than first-order accurate and monotone.
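The update formula is solver-agnostic: any two-point flux F̂ can be dropped in. A minimal driver sketch (first-order piecewise-constant data, transmissive boundaries; the helper name is mine):

```python
import numpy as np

def godunov_step(U, dx, dt, numerical_flux):
    """One conservative Godunov-type update of cell averages U[j] = (rho, rho*u, E).

    numerical_flux(UL, UR) may be any two-point Riemann flux (exact, Roe, HLL, ...).
    dt must satisfy the CFL condition for the fastest wave on the mesh.
    """
    Ug = np.vstack([U[:1], U, U[-1:]])  # transmissive ghost cells at both ends
    F = np.array([numerical_flux(Ug[j], Ug[j + 1]) for j in range(len(Ug) - 1)])
    return U - (dt / dx) * (F[1:] - F[:-1])
```

With the hll_flux sketch given earlier, `U = godunov_step(U, dx, dt, hll_flux)` advances a Sod-type shock tube by one step.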

GODUNOV’S THEOREM (1959)
Any linear scheme that is monotone (i.e., does not create new extrema) is at most first-order accurate. Corollary: higher-order accuracy requires nonlinear limiters — the theoretical foundation for MUSCL, PPM, ENO, and WENO reconstructions.

Roe Linearisation: Algebraic Structure

The Roe solver constructs a constant matrix à ≡ Ã(U_L, U_R) satisfying three algebraic conditions:

(R1) Ã(U_L, U_R)(U_R − U_L) = F(U_R) − F(U_L)   [Roe property: exact for all jumps]
(R2) Ã(U, U) = A(U) = DF(U)   [consistency at smooth solutions]
(R3) Ã has real eigenvalues and a complete set of eigenvectors   [hyperbolicity preserved]

Condition (R1) is a telescoping property ensuring that the linearised Rankine-Hugoniot condition holds exactly for all pairs (U_L, U_R). This is non-trivial: for the Euler equations it requires Roe-averaged states (Roe 1981). For systems with a convex entropy, Merriam (1989) and Parés (2006) have studied the algebraic constraints on à arising from the entropy inequality; in general, there is no Roe matrix that simultaneously satisfies (R1)–(R3) and the entropy inequality, necessitating ad hoc fixes.

Entropy Fixes: The Harten–Hyman Framework

The Roe solver evaluates |λ̃_k| in the numerical diffusion operator. At sonic points where a characteristic speed changes sign — specifically where λ_k(U_L) < 0 < λ_k(U_R) — the exact Riemann solution contains a sonic rarefaction, but the Roe solver can generate an expansion shock (entropy violation). Harten (1983) proposed replacing |λ| with:

Φ_δ(λ) = |λ|                if |λ| ≥ δ
Φ_δ(λ) = (λ² + δ²) / (2δ)   if |λ| < δ

This adds artificial diffusion only near sonic points. The choice of δ affects accuracy; Harten and Hyman (1983) proposed adaptive values based on left and right eigenvalues. The fundamental difficulty is that no local entropy fix can be simultaneously optimal for all wave configurations.
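The fix is essentially a two-line function; δ is typically tied to a small fraction of the local sound speed, though that is a convention rather than a universal rule:

```python
def harten_fix(lam, delta):
    """Harten's smoothed |lambda|: quadratic blending below the threshold delta."""
    a = abs(lam)
    return a if a >= delta else (lam * lam + delta * delta) / (2.0 * delta)
```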

The HLL Family: Wave-Speed Bounds and Positivity

HLL can be derived as the exact solution of a linearised Riemann problem with two intermediate states collapsed to one, or equivalently as the minimal two-wave bound satisfying the entropy inequality. The key result of Einfeldt (1988) is that the HLL flux with wave speeds:

s_L ≤ min(λ₁(U_L), λ̃₁),   s_R ≥ max(λₙ(U_R), λ̃ₙ)
where λ̃_k are the eigenvalues of the Roe matrix

guarantees a positivity-preserving update for density and internal energy under the CFL condition Δt ≤ Δx / max(|s_L|, |s_R|). This is the sharpest such result: any tighter wave-speed bound risks negative densities. The Einfeldt solver (HLLE) is thus the provably optimal robust choice in the HLL family — a rare result where mathematical optimality and practical robustness coincide.

Osher’s Path-Integral Formulation and Entropy

Osher and Solomon (1982) defined a numerical flux via a path integral in phase space:

F_Osher(U_L, U_R) = ½[F(U_L) + F(U_R)] − ½ ∫_{U_L}^{U_R} |A(U(s))| dU(s)

The path U(s) connects U_L to U_R through the wave ordering, passing through sonic points; |A| is the matrix with eigenvalues |λ_k| and the eigenvectors of A.

This flux is entropy-satisfying by construction — the absolute value operator suppresses the sign of each characteristic speed, automatically enforcing the entropy inequality. The resulting flux is also differentiable (unlike Roe or HLL), a crucial property for Newton-based implicit solvers where the Jacobian of the flux must be computed analytically. The practical challenge is evaluating the path integral, which requires locating all sonic points and integrating through the nonlinear wave structure.

Relaxation Schemes: A Modern Theoretical Perspective

Jin and Xin (1995) introduced relaxation schemes by replacing the nonlinear conservation law with a semilinear relaxation system with a stiff source term. In the limit ε → 0, the relaxation system reduces to the original one. This framework was refined by Bouchut (2004) into a class of kinetic-type relaxation solvers where the free parameter λ (a sub-characteristic speed bound) can be chosen to guarantee: (1) consistency with the entropy inequality; (2) positivity of density and pressure; and (3) exact resolution of material interfaces for multi-fluid problems. The mathematical elegance is that the entropy property is built into the structure of the relaxation system rather than imposed externally.
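In the relaxed (ε → 0) limit, the first-order Jin–Xin flux collapses to a central flux plus λ-scaled jump dissipation, which is what makes the sub-characteristic condition on λ so transparent. A sketch under that assumption (naming is mine):

```python
def jin_xin_relaxed_flux(UL, UR, flux, lam):
    """First-order Jin-Xin relaxation flux in the relaxed (eps -> 0) limit.

    lam must satisfy the sub-characteristic condition:
    lam >= spectral radius of dF/dU over the states of interest.
    """
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * lam * (UR - UL)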

Key References (Mathematics Focus)

1. [LAX57] Lax, P.D. (1957). “Hyperbolic systems of conservation laws II.” Comm. Pure Appl. Math. 10, 537–566. [Foundational: genuinely nonlinear vs. linearly degenerate fields.]
2. [GLI65] Glimm, J. (1965). “Solutions in the large for nonlinear hyperbolic systems.” Comm. Pure Appl. Math. 18, 697–715. [Global existence via random-choice Riemann solutions.]
3. [GOD59] Godunov, S.K. (1959). Mat. Sb. 47, 271–306. [First-order exact scheme; monotonicity theorem.]
4. [HAR83] Harten, A. (1983). “High resolution schemes for hyperbolic conservation laws.” J. Comput. Phys. 49, 357–393. [TVD schemes and entropy fixes.]
5. [OSH82] Osher, S., Solomon, F. (1982). “Upwind difference schemes for hyperbolic systems of conservation laws.” Math. Comp. 38, 339–374. [Path-integral flux; entropy by construction.]
6. [JIN95] Jin, S., Xin, Z. (1995). “The relaxation schemes for systems of conservation laws.” Comm. Pure Appl. Math. 48, 235–276. [Relaxation framework.]
7. [BOU04] Bouchut, F. (2004). Nonlinear Stability of Finite Volume Methods for Hyperbolic Conservation Laws and Well-Balanced Schemes for Sources. Birkhäuser. [Rigorous relaxation solvers with positivity.]
8. [BRE00] Bressan, A. (2000). Hyperbolic Systems of Conservation Laws: The One-Dimensional Cauchy Problem. Oxford. [Definitive modern mathematical treatment.]
9. [DAF10] Dafermos, C. (2010). Hyperbolic Conservation Laws in Continuum Physics, 3rd ed. Springer. [Encyclopedic mathematical reference.]
10. [LEV02] LeVeque, R.J. (2002). Finite Volume Methods for Hyperbolic Problems. Cambridge. [Bridges mathematics and numerics.]

RESPONSE III

The Practitioner’s Decision Framework

What to use, when, why, and what breaks — organised by application class, implementation consequences, and failure modes.

QUALITY ASSESSMENT

Mathematical Depth: 70/100
Physical Insight: 93/100
Practical Guidance: 97/100
Reference Quality: 86/100
Expert Suitability: 90/100
Overall: 87/100 — Excellent applied value; strong on failure modes and domain-specific advice. Underserves mathematicians; some treatment of theoretical properties is superficial.
Verdict: The response a senior simulation engineer or astrophysics code developer would find most immediately actionable. Less valuable as a theoretical reference.

Solver Choice Is a Physical Decision, Not Just a Numerical One

Every Riemann solver is a model of the sub-grid wave structure. Choosing a solver is choosing how much physical information you trust to propagate correctly at the resolution of your mesh. No solver is universally optimal; the trade-off space spans: dissipation vs. stability, wave resolution vs. robustness, and cost vs. accuracy. The wrong choice for an application can give qualitatively incorrect results that are nonetheless visually plausible — the most dangerous failure mode.

Application-Class Decision Guide

Class 1: Smooth Flows and Aeroacoustics (M < 0.3 or M ≫ 1)

Low-Mach number flows expose the most destructive property of standard upwind solvers: the numerical dissipation is O(Δx) in the acoustic regime but O(1) in the convective regime when M → 0, because the pressure scaling couples acoustic and entropy modes with equal weight. This is the low-Mach problem. Standard Roe at M = 0.01 generates O(1/M) spurious pressure fluctuations (Guillard and Murrone 2004). Remedies: Thornber et al. (2008) velocity reconstruction fix for HLLC; Rieper (2011) low-Mach Roe; AUSM+-up (Liou 2006) with its explicit low-Mach pressure parameter.
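The Thornber et al. (2008) remedy is disarmingly simple to implement. As I read the paper, the reconstructed interface velocities are pulled toward their mean by a factor z = min(1, max(M_L, M_R)) before the flux is evaluated; the sketch below encodes that reading, with variable names of my own:

```python
def thornber_velocity_fix(uL, uR, cL, cR):
    """Low-Mach scaling of the velocity jump before the Riemann flux call.

    z -> 1 leaves the solver unmodified; z ~ M at low Mach shrinks the
    O(1/M) spurious acoustic dissipation toward the physical M^2 scaling.
    """
    z = min(1.0, max(abs(uL) / cL, abs(uR) / cR))
    u_mean = 0.5 * (uL + uR)
    du = 0.5 * z * (uL - uR)
    return u_mean + du, u_mean - du  # modified (uL, uR) at the interface
```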

At high Mach (M ≫ 1), smooth flows benefit from Roe’s low dissipation. The cost of the eigendecomposition is justified. The carbuncle problem is absent on unstructured or off-axis grids; it only appears for grid-aligned shocks.

Class 2: Shock-Dominated Flows (reentry, detonation, blast waves)

Here robustness is paramount. HLLC with Einfeldt wave speeds is the default workhorse: it resolves contacts (important for tracking material interfaces in ablation or multi-fluid detonations), is positivity-preserving, and does not generate carbuncle instabilities at moderate shock Mach numbers. At very strong shocks (Mach > 50, as in astrophysical bow shocks), even HLLC can carbuncle. The cure is sensor-based hybridisation: detect grid-aligned shocks and switch locally to HLL.

CARBUNCLE HAZARD
The carbuncle (Quirk 1994) is not an instability of the flow — it is an instability of the discrete operator caused by the decoupling of transverse momentum across a grid-aligned shock. Roe and HLLC are susceptible. It manifests as a blister or kink on the bow shock ahead of blunt bodies, or as streaky artefacts in grid-aligned blast waves. Pure HLL/HLLE is immune. Hybrid strategies (Xie et al. 2017 MOOSE; Dumbser et al. 2004 matrix dissipation) are the current best practice.
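A hybridisation sketch, to show the shape of the idea; the pressure-jump sensor and threshold here are illustrative placeholders, not taken from any specific paper, and production codes use multidimensional sensors over neighbouring faces:

```python
def hybrid_flux(UL, UR, pL, pR, flux_hllc, flux_hll, threshold=0.5):
    """Blend toward HLL where a crude pressure-based shock sensor fires."""
    sensor = abs(pR - pL) / min(pL, pR)   # O(1) across a strong shock
    w = min(1.0, sensor / threshold)      # w = 1 -> pure HLL at the shock
    return (1.0 - w) * flux_hllc(UL, UR) + w * flux_hll(UL, UR)
```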

Class 3: Turbulence and Mixing Layers (implicit LES)

For under-resolved turbulence, the solver’s numerical dissipation acts as an implicit sub-grid model. Too much dissipation (Rusanov, HLL) damps turbulent kinetic energy at the resolved scales, effectively over-predicting sub-grid dissipation. Too little (Roe without any entropy fix) leads to odd-even checkerboard modes or energy pile-up. The community consensus (Thornber et al. 2008; Movahed and Johnsen 2015) is that HLLC with a low-Mach velocity fix provides the best implicit LES behaviour: its numerical dissipation scales with M² at low Mach, consistent with the physical scaling of the Reynolds stress.

Class 4: Astrophysical MHD (stellar winds, accretion, dynamo)

The seven-wave MHD Riemann problem (Brio and Wu 1988) has a qualitatively different structure than Euler: intermediate shocks exist in certain regions of state space, and the unique admissibility criterion remains contested. The HLLD solver (Miyoshi and Kusano 2005) is the current standard: it resolves fast, Alfvén, slow, and contact waves (five states), is positivity-preserving, and has been implemented in virtually every major astrophysical MHD code (Athena, RAMSES, PLUTO, FLASH). Divergence-free evolution of ∇·B = 0 must be handled separately — constrained transport (Evans and Hawley 1988) or hyperbolic divergence cleaning (Dedner et al. 2002) — as any Riemann-based flux updates the volume-averaged B, not the face-centred fields that CT requires.

Class 5: Relativistic Flows (GRB jets, neutron star mergers, core-collapse)

The exact relativistic Riemann solver (Martí and Müller 1994, 1996) requires solving a quartic for p* that involves the Lorentz factor W = (1 – v²/c²)^{-½}; as W grows large, this becomes numerically ill-conditioned. The HLLC-SRHD solver (Mignone and Bodo 2005) is robust and accurate; Mignone’s PLUTO code implements both. For general relativistic MHD (e.g., HARM, IllinoisGRMHD, Athena++), the HLLC or HLLD formulation in a 3+1 foliation of spacetime, with the conservative-to-primitive recovery as the hardest numerical step, has become standard.

Class 6: Complex EOS (real gases, degenerate matter, multi-phase)

The exact solver requires iteration for general EOS; this is feasible but expensive. More practically, HLLC and HLL adapt naturally: wave speed estimates use the EOS-dependent sound speed, and the solver structure is otherwise unchanged. Relaxation solvers (Bouchut 2004; Coquel et al.) are particularly valuable here because the free parameter λ is chosen based purely on the local sound speed — no eigendecomposition of the EOS-dependent Jacobian is needed.

Implementation Consequences and Failure Modes

Positivity Failure

In rarefied flows (stellar wind-ISM interface, jet-cocoon boundary), densities can become negative when the net flux out of a cell exceeds its contents over a step. Roe without modification is the most vulnerable. Practical cure: apply a positivity limiter post-reconstruction (Hu et al. 2013; Zhang and Shu 2010) or fall back to Rusanov at cells where positivity would be violated. This is standard in production codes (Athena++, FLASH).
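A fallback wrapper captures the standard defensive pattern. The provisional single-interface positivity check below is a loose heuristic stand-in for the rigorous convex-limiting conditions of Zhang and Shu (2010), not a reproduction of them:

```python
def flux_with_fallback(UL, UR, dt, dx, flux_accurate, flux_rusanov, gamma=1.4):
    """Use the accurate flux unless a provisional update would lose positivity."""
    def positive(U):
        rho, m, E = U
        return rho > 0.0 and (gamma - 1.0) * (E - 0.5 * m * m / rho) > 0.0

    F = flux_accurate(UL, UR)
    # crude check: update each adjacent cell by this interface flux alone
    if positive(UL - (dt / dx) * F) and positive(UR + (dt / dx) * F):
        return F
    return flux_rusanov(UL, UR)  # diffusive but positivity-safe
```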

Slow-Moving Contacts

HLL severely damps stationary contacts because the two-wave structure does not distinguish the contact speed. A stationary contact smears into a ramp that widens by O(Δx) each time step. For multi-material problems (Rayleigh-Taylor, Richtmyer-Meshkov) this is physically unacceptable. HLLC or Roe is required. The correct choice has a direct impact on mixing-layer growth rates.

High-Order Reconstruction + Solver Interaction

At higher than first-order accuracy, the solver’s dissipation characteristics interact non-trivially with the reconstruction (MUSCL, PPM, WENO, DG). A highly diffusive solver (HLL) wastes the accuracy of a 5th-order WENO reconstruction. Conversely, a low-dissipation solver (Roe) combined with aggressive limiting (monotonised central) may under-resolve the implicit sub-grid contribution. The sweet spot for most shock-turbulence problems is HLLC paired with a high-order WENO or PPM reconstruction.

THE HIDDEN COST OF THE ROE EIGENDECOMPOSITION
On modern GPU architectures, the Roe eigendecomposition (n² operations in n-dimensional state space) carries a branch-divergence cost in SIMD execution that is disproportionately larger than its flop count suggests. For n = 3 (Euler) this is manageable; for n = 8 (ideal MHD) or n = 10+ (multi-fluid), HLLC or HLLD is typically faster and more robust on GPUs despite being a different algorithm. Profiling on target hardware before architectural commitment is essential.

Comparative Decision Matrix

CRITERION            RUSANOV    HLL/HLLE   HLLC        ROE+FIX    AUSM+-UP   HLLD
Contact resolution   Poor       Poor       Good        Excellent  Good       Excellent
Shock robustness     Excellent  Excellent  Good        Needs fix  Good       Good
Positivity           Always     Einfeldt   With care   No         Partial    Yes
Low-Mach accuracy    Poor       Poor       Needs fix   Needs fix  Good       N/A
Carbuncle risk       None       None       Moderate    High       Low        Low
Entropy compliance   Yes        Yes        Near-yes    Needs fix  Yes        Yes
GPU efficiency       Excellent  Excellent  Very good   Moderate   Good       Good
Applies to MHD       Yes        Yes        Partial     Yes        No         Yes

Key References (Application Focus)

1. [TOR09] Toro, E.F. (2009). Riemann Solvers and Numerical Methods for Fluid Dynamics, 3rd ed. Springer. [Most comprehensive applied reference.]
2. [QUI94] Quirk, J.J. (1994). “A contribution to the great Riemann solver debate.” Int. J. Numer. Methods Fluids 18, 555–574. [Definitive paper on the carbuncle.]
3. [THO08] Thornber, B., Mosedale, A., Drikakis, D. (2008). “On the implicit large eddy simulations of homogeneous decaying turbulence.” J. Comput. Phys. 227, 4873–4894. [Low-Mach HLLC fix; implicit LES analysis.]
4. [LIO06] Liou, M.S. (2006). “A sequel to AUSM, Part II: AUSM+-up for all speeds.” J. Comput. Phys. 214, 137–170. [Best AUSM reference for multi-speed applications.]
5. [MIY05] Miyoshi, T., Kusano, K. (2005). J. Comput. Phys. 208, 315–344. [HLLD for MHD; the standard reference.]
6. [MAR94] Martí, J.M., Müller, E. (1994). “The analytical solution of the Riemann problem in relativistic hydrodynamics.” J. Fluid Mech. 258, 317–333. [Exact SRHD solver.]
7. [DED02] Dedner, A., et al. (2002). “Hyperbolic divergence cleaning for the MHD equations.” J. Comput. Phys. 175, 645–673. [∇·B = 0 control in production codes.]
8. [ZHA10] Zhang, X., Shu, C.W. (2010). “On positivity-preserving high order discontinuous Galerkin schemes.” J. Comput. Phys. 229, 8918–8934. [Positivity limiters for high-order methods.]
9. [BAT97] Batten, P., Clarke, N., Lambert, C., Causon, D.M. (1997). “On the choice of wavespeeds for the HLLC Riemann solver.” SIAM J. Sci. Comput. 18, 1553–1570. [Positivity conditions for HLLC s* estimate.]
10. [COL84] Colella, P., Woodward, P.R. (1984). “The piecewise parabolic method (PPM).” J. Comput. Phys. 54, 174–201. [Still one of the best reconstruction methods for compressible turbulence.]

RESPONSE IV

The Integrated Synthesis

Mathematical rigour, physical intuition, and practical consequences unified — connecting classical theory to high-order methods, modern architectures, and open frontiers.

QUALITY ASSESSMENT

Mathematical Depth: 94/100
Physical Insight: 95/100
Practical Guidance: 95/100
Reference Quality: 97/100
Expert Suitability: 97/100
Overall: 96/100 — Outstanding synthesis across disciplines; addresses consequences, frontiers, and cross-cutting themes. Minor room to deepen relaxation and kinetic scheme connections.
Verdict: The response a full professor publishing across CFD, astrophysics, or computational mathematics would find most intellectually satisfying and complete.

Why the Riemann Problem Is Central

The Riemann problem is not merely a computational convenience — it is the canonical nonlinear interaction in a hyperbolic system. Lax (1957) showed that the elementary waves (shocks, rarefactions, contact discontinuities) are the atoms from which all solutions of small total variation are built via Glimm’s superposition. Godunov’s insight was that if you can solve the Riemann problem at every cell interface — even approximately — you obtain a scheme that respects the causal, characteristic structure of the PDE, rather than merely approximating it with central differences or diffusive stencils. This is the key distinction between Godunov-type methods and their predecessors.

Every approximate Riemann solver is therefore an answer to the question: which aspects of this wave structure are most important to preserve? The answer depends on the physics of the problem. This epistemic framing is the most useful lens through which to evaluate solver choices.

The Exact Solver: Benchmark and Instrument

The iterative exact solver for the Euler equations solves the scalar pressure equation f(p*) = f_L(p*) + f_R(p*) + (u_R – u_L) = 0 where f_K is the Rankine-Hugoniot jump function for a shock or the isentropic rarefaction function, depending on whether p* ≷ p_K. Newton iteration from a PVRS or TRRS initial guess typically converges in 2–10 iterations. The solution sampling — evaluating U(0; U_L, U_R) from the wave speeds and jump conditions — is a second, equally involved computation.

Riemann solution sampling (Euler, self-similar in x/t):

Left shock (p* > p_L):    S_L = u_L − c_L √[(γ+1)p*/(2γp_L) + (γ−1)/(2γ)]
Left raref. (p* ≤ p_L):   fan bounded by u_L − c_L and u* − c*_L;
                          inside the fan: ρ = ρ_L [2/(γ+1) + ((γ−1)/((γ+1)c_L))(u_L − x/t)]^(2/(γ−1))
Contact:                  x/t = u*  (p and u constant; density jumps)

Then mirror for the right wave family.

The exact solver’s value is threefold: as a validation baseline for approximate solvers, as a sub-problem solver in some codes that can afford the cost, and as a theoretical instrument for understanding wave structure. In practice, for the Euler equations with ideal gas EOS, it adds perhaps a factor of 3–5 cost over HLLC with negligible accuracy gain on smooth problems, but critical accuracy gain for near-sonic and near-vacuum configurations.

Approximate Solvers: A Hierarchy of Physical Fidelity

One can arrange approximate solvers by the number of waves they explicitly model, with a corresponding hierarchy of physical fidelity and computational cost:

THE WAVE-FIDELITY HIERARCHY

Level 0 — Rusanov/LLF: one effective wave, maximally diffusive. Models nothing of the wave structure; only bounds the domain of dependence.
Level 1 — HLL/HLLE: two bounding waves. The single intermediate state satisfies the conservation law but ignores all internal structure.
Level 2 — HLLC/HLLD: restores contact(s) and shear waves. Qualitatively correct wave count for Euler (3) and MHD (5).
Level 3 — Roe/Osher: full n-wave structure, each wave resolved individually. Maximum accuracy for each wave in isolation.
Level 4 — Exact solver: nonlinear wave structure captured with arbitrary accuracy. No modelling assumptions.

Roe’s Linearisation: Algebraic Elegance and Physical Consequence

The Roe matrix à satisfying the property Ã(U_R – U_L) = F_R – F_L is the unique linearisation that recovers the Rankine-Hugoniot conditions exactly for all isolated shocks, contacts, and rarefactions. This is a remarkable property: the linearised solver “knows” the full nonlinear wave structure for single-wave Riemann data. It fails only when two or more waves interact in the same Riemann problem, or when a rarefaction spans a sonic point (where the characteristic speed changes sign).

The sonic-point entropy violation is not merely a numerical curiosity — it represents a physically real failure mode. An expansion shock (anti-entropy shock) is a solution to the Rankine-Hugoniot conditions that is thermodynamically forbidden: it would require heat to flow from cold to hot across the wave. The Harten entropy fix adds just enough dissipation at the sonic point to destroy this fictitious wave. But the parameter δ in the fix is ad hoc — there is no universal prescription, and Quirk (1994) showed that aggressive entropy fixes can themselves degrade accuracy.

The Carbuncle: A Multidimensional Catastrophe of Under-Dissipation

The carbuncle instability is arguably the most important practical failure mode in computational gas dynamics. It occurs when a Riemann solver has insufficient dissipation in the transverse direction to a grid-aligned shock. The mechanism (analysed by Pandolfi and D’Ambrosio 2001, and Dumbser, Moschetta, Gressier 2004) is a linear instability of the discrete system: odd-even modes in the transverse direction are not damped because the 1D Riemann solver at each face provides no coupling between adjacent cells in the shock-parallel direction. HLL/HLLE is immune because its intermediate state is isotropic; Roe and HLLC are not.

Modern cures include: (1) local sensor-based hybridisation (switch to HLL at detected shocks); (2) Nishikawa-Kitamura rotated Riemann solvers that use a 2D normal direction based on the pressure gradient, restoring isotropy; (3) matrix dissipation methods (Dumbser et al.); and (4) genuinely multidimensional Riemann solvers (Balsara 2010, 2012), which formulate the Riemann problem at cell corners rather than faces and eliminate the directional splitting that causes the instability. The last approach is theoretically cleanest but computationally expensive and geometrically complex.

Low-Mach: An Intrinsic Failure of Upwind Dissipation Scaling

The low-Mach problem is structural, not incidental. Upwind dissipation scales as O(c Δx) where c is the acoustic speed; at M ≪ 1, this dominates the convective term by a factor O(1/M). The physical incompressible pressure, which is a Lagrange multiplier enforcing ∇·u = 0, is overrun by acoustic dissipation. Guillard and Murrone (2004) showed this analytically for the Roe solver; similar results hold for HLLC.

Effective remedies are: (1) AUSM+-up, which scales the pressure contribution independently of the acoustic part; (2) preconditioning (Weiss-Smith, Turkel) applied to both the time derivative and the dissipation term, which effectively rescales the eigenvalues to remove the acoustic-convective disparity; (3) the Thornber et al. modification to HLLC, which reduces the relative velocity in the flux from u_R – u_L to M(u_R – u_L). Each has different properties with respect to steady-state convergence and stability.

Mathematical Properties: A Consolidated View

PROPERTY                  EXACT       HLLE   HLLC           ROE+FIX     OSHER    RELAXATION
Conservative              ✓           ✓      ✓              ✓           ✓        ✓
Consistent                ✓           ✓      ✓              ✓           ✓        ✓
Entropy-satisfying        ✓           ✓      Near-✓         Needs fix   ✓        ✓
Positivity (ρ, p > 0)     ✓           ✓      Conditionally  ✗           Partial  ✓
Exact isolated shock      ✓           ✗      Near           ✓           ✓        Near
Exact isolated contact    ✓           ✗      ✓              ✓           ✓        Near
Flux differentiability    ✗           ✗      ✗              ✗ (|λ|)     ✓        ✗
Carbuncle-free            N/A         ✓      Often ✗        ✗           ✓        ✓
Generalises to MHD        Not easily  ✓      Partial        ✓           Hard     ✓
Implicit-friendly         ✗           ✓      ✓              Partial     ✓        ✓

Consequences for High-Order Schemes

When a Godunov-type first-order scheme is extended to higher order via MUSCL, PPM, ENO, or WENO reconstruction, the Riemann solver at the reconstructed face operates on input states (U_L^{r}, U_R^{r}) that are no longer constants but polynomials extrapolated from cell data. The solver’s dissipation characteristics then interact with the reconstruction in a non-trivial way.

For methods of order p, the leading error term is a dispersion or dissipation error of order O(Δx^p). The Riemann solver contributes at O(Δx) in dissipation — but only through its upwind correction, which at smooth solutions is O(Δx) regardless of solver choice. This means: at smooth solutions, the solver choice has a second-order effect on the overall accuracy of a high-order scheme. At discontinuities, however, solver choice is decisive: a diffusive solver (HLL) at an under-resolved shock produces a wider shock profile than HLLC or Roe, directly affecting post-shock state accuracy and downstream flow.

Discontinuous Galerkin methods use upwind Riemann fluxes at element boundaries as the mechanism for information exchange between elements. The choice of solver is the only stabilisation mechanism in the DG formulation; without it, the method is unconditionally unstable. For DG with polynomial degree p, the Riemann flux contributes a numerical dissipation of O(Δx^{2p+1}) at smooth solutions but retains O(Δx) behaviour at shocks — making HLLC the workhorse choice for DG applied to compressible flow.
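To make the reconstruction-solver interaction concrete, here is a minimal second-order MUSCL reconstruction (minmod-limited, in conserved variables) whose output states feed any of the fluxes sketched above in place of raw cell averages; production codes often reconstruct primitive or characteristic variables instead:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, the smaller-magnitude slope otherwise."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def muscl_interface_states(U):
    """Limited piecewise-linear (MUSCL) left/right states at interior interfaces.

    U has shape (ncells, nvars); returns (UL, UR) at the ncells-1 interfaces.
    """
    dU = U[1:] - U[:-1]                     # one-sided differences
    slope = np.zeros_like(U)
    slope[1:-1] = minmod(dU[:-1], dU[1:])   # limited slope in each interior cell
    UL = U[:-1] + 0.5 * slope[:-1]          # state just left of interface j+1/2
    UR = U[1:] - 0.5 * slope[1:]            # state just right of interface j+1/2
    return UL, UR
```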

Modern Frontiers

Genuinely Multidimensional Solvers

Balsara (2010, 2012) developed two-dimensional HLLE and MHD Riemann solvers at cell vertices that simultaneously account for all waves impinging from the four adjacent cells. These solvers are mathematically cleaner than dimension-by-dimension splitting and are essential for the correct evaluation of the electric field in constrained transport MHD (the cross-product E = −v × B at a vertex involves a genuinely 2D Riemann problem). The Balsara-Dumbser (2015) approach extends this to an arbitrary number of spatial dimensions and wave families.

All-Speed and Unified Compressible/Incompressible Solvers

The preconditioning approach (Turkel 1987; Weiss and Smith 1995) modifies the time-derivative term and the upwind dissipation matrix to rescale eigenvalues from the acoustic regime to the convective regime. This unifies compressible Riemann-based schemes with incompressible limits, enabling single-code simulation of flows spanning M = 10^{-3} to M = 5. The mathematical penalty is that the time-accurate properties of the preconditioned system are modified; steady-state and pseudo-transient continuation methods are the primary application.

Machine Learning and Learned Riemann Solvers

A nascent literature (Magiera et al. 2020; Bois et al. 2023; Bezgin et al. 2023 with JAX-FLUIDS) trains neural networks to replace or augment Riemann solvers, aiming to recover the accuracy of an exact solver at the cost of an approximate one. The fundamental challenge is guaranteeing conservation, entropy compliance, and positivity for out-of-distribution states — properties that classical solvers provide by algebraic construction. Current learned solvers typically require wrapping in a conservative correction layer and are restricted to known EOS families. The field is moving fast; the theoretical framework of Lax and Glimm ensures there are hard constraints that no learning procedure can circumvent without explicit enforcement.

The Physical Intuition: What You Are Really Choosing

Every choice along the HLL-to-Roe spectrum is a choice about what information you trust the mesh to carry. HLL says: the mesh is too coarse to distinguish the contact from the acoustic waves — average them together and pay a diffusion cost. Roe says: the mesh is fine enough that each wave family can be tracked independently — resolve them all, but accept entropy risk. HLLC says: the acoustic waves are robust, but contacts matter enough to restore — find the middle ground.

In the limit of infinite resolution, every solver converges to the exact solution (Lax-Wendroff theorem). The solver choice governs the rate of this convergence and the qualitative character of under-resolved solutions. At the resolutions actually achievable in production astrophysical, aerospace, or geophysical simulations — where shocks are typically 1–3 cells wide and turbulent cascades end at the grid scale — the solver is not merely an approximation device but an active physical model. This is why the best practitioners treat solver selection with the same rigour as turbulence modelling or EOS choice.

THE FUNDAMENTAL TRADE-OFF IN A SINGLE STATEMENT
A Riemann solver that is less dissipative resolves waves more sharply — but risks entropy violations, positivity failure, and multidimensional instability. A solver that is more dissipative is robust and entropy-safe — but spreads information faster than the physics, degrading accuracy for shear flows, contacts, and turbulence. The art of solver design is finding the minimal, targeted dissipation that stabilises exactly the modes that need stabilisation, and nothing more.

Master Reference List

1. [GOD59] Godunov, S.K. (1959). “A difference method for numerical calculation of discontinuous solutions.” Mat. Sb. 47, 271–306. [The founding paper.]
2. [LAX57] Lax, P.D. (1957). “Hyperbolic systems of conservation laws II.” Comm. Pure Appl. Math. 10, 537–566. [Wave classification; genuinely nonlinear fields.]
3. [GLI65] Glimm, J. (1965). “Solutions in the large for nonlinear hyperbolic systems.” Comm. Pure Appl. Math. 18, 697–715. [Global existence; random-choice scheme.]
4. [ROE81] Roe, P.L. (1981). “Approximate Riemann solvers, parameter vectors, and difference schemes.” J. Comput. Phys. 43, 357–372. [Roe averaging; the most-cited paper in the field.]
5. [HLV83] Harten, A., Lax, P.D., van Leer, B. (1983). “On upstream differencing and Godunov-type schemes.” SIAM Rev. 25, 35–61. [HLL; entropy theory for two-wave models.]
6. [OSH82] Osher, S., Solomon, F. (1982). “Upwind difference schemes for hyperbolic systems of conservation laws.” Math. Comp. 38, 339–374. [Path-integral flux; entropy by construction.]
7. [EIN88] Einfeldt, B. (1988). “On Godunov-type methods for gas dynamics.” SIAM J. Numer. Anal. 25, 294–318. [Optimal wave-speed estimates; positivity of HLLE.]
8. [EIN91] Einfeldt, B., Munz, C.D., Roe, P.L., Sjögreen, B. (1991). “On Godunov-type methods near low densities.” J. Comput. Phys. 92, 273–295. [HLLE positivity proof; vacuum states.]
9. [TOR94] Toro, E.F., Spruce, M., Speares, W. (1994). “Restoration of the contact surface in the HLL-Riemann solver.” Shock Waves 4, 25–34. [HLLC introduction.]
10. [BAT97] Batten, P., Clarke, N., Lambert, C., Causon, D.M. (1997). “On the choice of wavespeeds for the HLLC Riemann solver.” SIAM J. Sci. Comput. 18, 1553–1570. [Positivity conditions for HLLC.]
11. [QUI94] Quirk, J.J. (1994). “A contribution to the great Riemann solver debate.” Int. J. Numer. Methods Fluids 18, 555–574. [Carbuncle analysis; essential reading.]
12. [HAR83] Harten, A. (1983). “High resolution schemes for hyperbolic conservation laws.” J. Comput. Phys. 49, 357–393. [TVD, entropy fix; foundational for high-resolution methods.]
13. [MIY05] Miyoshi, T., Kusano, K. (2005). “A multi-state HLL approximate Riemann solver for ideal MHD.” J. Comput. Phys. 208, 315–344. [HLLD; standard for astrophysical MHD.]
14. [MIG05] Mignone, A., Bodo, G. (2005). “An HLLC Riemann solver for relativistic flows.” MNRAS 364, 126–136. [Relativistic HLLC; the basis of PLUTO, Athena++ SRHD.]
15. [MAR94] Martí, J.M., Müller, E. (1994). J. Fluid Mech. 258, 317–333; (1996) J. Comput. Phys. 123, 1–14. [Exact relativistic Riemann solver; two-part series.]
16. [LIO06] Liou, M.S. (2006). “AUSM+-up for all speeds.” J. Comput. Phys. 214, 137–170. [Best all-speed solver for propulsion/turbomachinery.]
17. [BOU04] Bouchut, F. (2004). Nonlinear Stability of Finite Volume Methods. Birkhäuser. [Rigorous relaxation solvers; positivity by design.]
18. [BAL10] Balsara, D.S. (2010). “Multidimensional HLLE Riemann solver.” J. Comput. Phys. 229, 1970–1993; (2012) J. Comput. Phys. 231, 7476–7503. [Genuinely multidimensional solvers; vertex-based.]
19. [THO08] Thornber, B., Mosedale, A., Drikakis, D. (2008). J. Comput. Phys. 227, 4873–4894. [Low-Mach HLLC fix; implicit LES dissipation analysis.]
20. [TOR09] Toro, E.F. (2009). Riemann Solvers and Numerical Methods for Fluid Dynamics, 3rd ed. Springer. [The definitive monograph; inexhaustible reference.]
21. [LEV02] LeVeque, R.J. (2002). Finite Volume Methods for Hyperbolic Problems. Cambridge. [Mathematical treatment with excellent coding perspective.]
22. [COL84] Colella, P., Woodward, P.R. (1984). “The piecewise parabolic method (PPM).” J. Comput. Phys. 54, 174–201. [The solver + reconstruction paper for compressible turbulence.]
23. [SHU98] Shu, C.W. (1998). “Essentially non-oscillatory and weighted essentially non-oscillatory schemes.” NASA/CR-97-206253. [WENO framework; standard high-order companion to Riemann solvers.]
24. [DED02] Dedner, A., et al. (2002). “Hyperbolic divergence cleaning for MHD.” J. Comput. Phys. 175, 645–673. [∇·B = 0; Dedner cleaning method.]
25. [DUM04] Dumbser, M., Moschetta, J.M., Gressier, J. (2004). “A matrix stability analysis of the carbuncle phenomenon.” J. Comput. Phys. 197, 647–670. [Rigorous carbuncle analysis; cures.]

A Postscript on the Culture of NNSA Labs

10 Tuesday Mar 2026

Posted by Bill Rider in Uncategorized

≈ 2 Comments

“It is the obvious which is so difficult to see most of the time. People say ‘It’s as plain as the nose on your face.’ But how much of the nose on your face can you see, unless someone holds a mirror up to you?” ― Isaac Asimov

The response that I got from my post on the Lab’s cultures was somewhat gratifying. It looked like it was interesting to many. At the same time, I received a rather startling brushback from a friend. I greatly appreciate engaging in a dialogue on things, and to say the least, my friend disagreed with some of the points that I made. Their specific complaint was about my take on Sandia. Because of this, I thought it was important to add a bit of texture and context to the post. Hopefully, it will benefit the overall discussion and thought about this. Nothing I said is set in stone or free of personal experience. I am the context and observer; my perspective is unique and personal.

Culture is an enormously sticky topic. It’s hard to define. The way I would define it organizationally or for societies is that culture is a bit of an operating system. It operates the society or the organization silently and behind the scenes. It provides the norms and rules that translate culture into action: what is expected and allowed, and what is rewarded and punished. I did not fit into Sandia’s culture at all. It was not a good place for me. That is, unless I was prepared to change a lot.

As I noted, it is very difficult to understand the bad parts of culture from the outside. This is true particularly today, where leadership and communication are so heavily scrutinized. Today, leaders are prone to spouting bullshit instead of truth. This is true for leaders across society, whether it’s organizations like the labs or our politicians, all the way up to the leader of the United States. The inability of these leaders to speak on truths is stunning and vast. Instead, we live in a time where everything a leader says in public is suspect. This is certainly what I witnessed recently at work.

I will note, as a comment on this, that we live in a time when trust is absolutely missing from most of what society does. Yet, in this period where trust is lacking, the leaders behave in even more untrustworthy ways. They bullshit about success and ignore failure. I think one of the things that gets under my skin is the inability to identify and work towards solving genuine problems. There are problems everywhere across society, definitely in the work of the labs and more generally in the world. I see myself as a problem solver, and ignoring problems is an affront to me.

To get to my friend’s comments, I think there was a proper noting of a certain bitterness in my attitude towards Sandia. This is something I cannot deny. I left under a cloud. I left seeing some significant faults in many of the people who were given responsibility for managing the organizations. I also witnessed a great deal of unethical behavior. As I noted, you can only see this if you’re on the inside. Very rarely do these things become obvious to the outsider. If they do, the organization is likely a complete shitshow (see Boeing, or the Executive Branch).

The same sorts of behaviors may be present at the other two labs. I certainly witnessed a little bit of it at Los Alamos 20 years ago, but not to the scale that I saw at Sandia in my time there. That is not to say that it could not be present in the current Los Alamos. The same for Livermore, and in all these cases, there is some evidence that such excesses do exist. All three labs exist within the same ecosystem of governance. They draw from similar funding sources with similar strings attached. Our government overseers are definitely not better or more competent.

By the same token, I do have some bitterness about how my career turned out at Sandia. I felt for the entirety of my stay there that my talents and abilities were generally not put to good purpose. I did not grow sufficiently as a scientist in my time. I was not challenged by technical work. The blog exists to some extent because of these things. The challenges I faced were far more cultural in nature and far less scientific. I was not in an environment that fed my passions.

Again, this could be a function of the time that we are in and simply an echo of the same kind of bullshit that we see from our leaders: their seeming inability to tackle any genuine problem with vigor and truth. We are all passengers in whatever time and place we exist. I am no different. There are differences in the cultures of the three labs. They also exist in our current time. I am sure my view has a deep recency bias.

I owe a great deal to the first decade I worked at Los Alamos. It shaped me more than any other experience, more than school. It set my expectations for what a Lab is supposed to be like. Perhaps, Sandia could not ever have met my expectations even under perfect circumstances. Today’s world is very far from perfect and much closer to the opposite of it. I am certain that a young me starting work today would not be offered a similarly good experience. I did have a job offer from Livermore in late 2003. I declined it because of cost-of-living issues. I do wonder what that path would have meant for me; I am sure my perspective would be different today.

So, in closing, take my assessment with a grain of salt. It is my perspective and experience. It is only a projection of reality as I experienced it.

“Some people see the glass half full. Others see it half empty. I see a glass that’s twice as big as it needs to be.” ― George Carlin

The NNSA Lab Cultures

09 Monday Mar 2026

Posted by Bill Rider in Uncategorized

≈ Leave a comment

Prologue

This is another leftover from when the blog was shut down. Usually, I would think and write about a talk before I gave it, but this is thinking after the talk. This has some advantages, as I got a lot of feedback after my talk about the culture at Lawrence Livermore National Lab. This is where I gave the talk, and the audience came and engaged with me, helping me fill in some of the gaps about that particular institution.

I also went through my career-ending experience, which certainly changed my impression of the culture and current state of Sandia National Lab. More recently, I’ve re-engaged in a casual way with Los Alamos, and it’s reminded me of some of the aspects of that lab. All of which comes together for an interesting view of cultures: how they are created, how they evolve, and how they change due to the stimuli that they receive.

I gave the talk in October at Lawrence Livermore National Lab at the augustly named Nuclear Explosives Code Development Conference (NECDC). This talk was given in front of an audience from all three labs, but also from the Atomic Weapons Establishment (AWE) in the UK, and it was received well, with a great deal of feedback from the audience that I’ve incorporated into my writing.

“Only someone who is well prepared has the opportunity to improvise.” ― Ingmar Bergman

tl;dr

Working for nearly 40 years at two premier national laboratories is a heady experience. Part of what shaped my career deeply is the underlying culture of each institution. The differences between the two that I worked at are rather stark and interesting. This becomes even more striking when you realize that they have a common origin, yet from that common origin different forces were unleashed that continue to act to this very day.

Culture, of course, is a subtle and esoteric thing that is hard to completely wrap one’s head around. In addition, the specifics of my career have shaped my experience, and these differences mean that my impressions of both laboratories are skewed towards my work as a computational physicist. Nonetheless, one can make conclusions about each culture and how it shapes the technical work and experience of working at each of these labs.

“Life is rarely about what happened; it’s mostly about what we think happened.” — Chuck Klosterman

A Personal Story

“It is a profound and necessary truth that the deep things in science are not found because they are useful; they are found because it was possible to find them.”— J. Robert Oppenheimer

As I write this story, I need to be very honest about my personal biases. Los Alamos National Lab had a distinct and profound effect on the trajectory of my professional career. I can unequivocally state that the first ten years of my career there were exemplary in every single way, and I gained an immense amount of personal growth. Any sort of sense of career success I’ve had stems from the gifts I was given then. I found an environment that was brimming with generosity, but also a degree of technical excellence and reverence for science. Great values I hold on to today. I benefited from the wisdom and knowledge of many Los Alamos staff members. The management then was dominated by “servant” leaders. It was a perfect incubator for a young scientist.

At the same time, at the close of those ten years, things changed. There was a sequence of scandals and events that deeply damaged Los Alamos and have left a lasting imprint on all three labs. It all started with the saga of Wen Ho Lee. In a very real way, those scandals also exposed the dark side of Los Alamos to me and the world. I fear that those events have also exposed all of the labs to aspects of the modern world that are exceedingly negative. These forces have destroyed much of the good that all three of these institutions once embodied, along with the positives they should be creating.

The reverence and pursuit of science or knowledge in general is a clear vestige of Los Alamos’s impact on me. Los Alamos is also the origin point for all three institutions, as I will describe. They all arose from the Manhattan Project and the Cold War that followed. At the same time, much of that scientific approach is done in pursuit of nuclear weapons. As such, there is a cloud over everything these labs do around one’s belief in the morality associated with nuclear weapons.

I am a generally liberal and progressive person, and see the downside and the problems with nuclear weapons from a moral and ethical perspective. I also have a pragmatic view that nuclear weapons represent a genie that can’t simply be put back in the bottle. As a patriot for my country, I believe that it is essential that the United States have competence and capability in nuclear weapons that is second to none. This is still an issue that fills me with a great deal of conflict internally. What I do remain steadfast in is my belief that science is an important part of societal good and something worth pursuing in and of itself. In sum, the science these labs have (and can) produce is a huge benefit to the USA and mankind.

A major caveat of what I’m going to write is that my personal experience is focused on a combination of computational physics and computer codes developed by the labs. I conduct the examination of those codes and their results through the application of verification and validation. Each of these pursuits means that my viewpoint on the labs is seen through those lenses. I have seen how my efforts are perceived. Thus, I must admit that my own perspective is skewed and biased by the nature of what I do and what I have learned. Notably, the meeting I spoke at is about “code development”, not “computational physics”. This alone speaks to a downgrade for the activity. People doing physics are the users of the codes. This matters and says a lot.

All of these details have a huge impact on the product that the labs produce. The computer codes, the analyses, and the experiments that they conduct all have the imprint of these cultural signatures. This, together with the national culture, directs each place’s culture. Cultures are amazingly persistent. Aspects of the Lab’s cultures have been swept up in the change of the National culture. This might say more about the epic nature of the current time. I will say that over the course of my career spanning nearly 40 years, I learned about these cultures but also watched these cultures evolve. By and large, the evolution of the cultures of the labs has been very negative and parallels and mirrors the negative developments in American culture as the scientific legacy of the Cold War has basically faded from view and been replaced by the post-Cold War view of things.

Each lab has a distinct identity. There is a knee-jerk view of it, which is:

– Los Alamos is the physicist

– Livermore is a computer scientist

– Sandia is an engineer

These are not too far from the truth. The actual reality is a little more subtle. Los Alamos is the experimental physicist; Livermore is the computational physicist, and Sandia is the knuckle-dragging engineer.

I can speak to a set of core events and attitudes that reflect the cultures quite well. This is most evident and most acute in the case of Los Alamos, where the development of codes for weapons work has never been an enterprise looked on with much favor by the lab’s elites. This is contrasted with Lawrence Livermore, where developing codes and producing numerical methods on supercomputers is the central and highest purpose of the laboratory (along with fusion of all sorts). By and large, the code development at Lawrence Livermore is far, far more successful than at either of the other labs.

“Knowledge cannot be pursued without morality.” — J. Robert Oppenheimer

Shaped by Key People.

These labs are identified with certain personalities. Three individuals from the Manhattan Project stand out as shapers of the culture of each lab. The obvious one is J. Robert Oppenheimer and Los Alamos. We had an Oscar-winning movie to vividly tell the tale, and all three of these figures are prime players in that story. This is well known, accepted, and obvious. In a similar vein, Edward Teller is often identified as the godfather of Lawrence Livermore. With Sandia, the connection is less well known and accepted. Sandia had a historian who seemed to completely ignore the impacts of the Manhattan Project on Sandia. That is a grave oversight in my opinion. Sandia started in Los Alamos in 1943, not in 1949.

“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.'”

— Isaac Asimov

My premise, having worked there for almost 20 years, is that General Leslie Groves is the forefather of Sandia. Groves oversaw and controlled the weapons engineering activity that accompanied the development of the atomic bomb. Groves’ basic mentality of running projects, including his obsession with operational security, is still king there. His zealous application of need-to-know as a principle is the true motto at Sandia. The process of engineering in WW2 is how weapons engineering is done today. If one looks at the archives of the history of the Manhattan Project, one can see the imprint of current weapons engineering at Sandia all over it. So little has changed in 70-plus years. The principles of that day are still alive and well today at Sandia. This is true knuckle-dragging. This engineering is backwards and backward-looking. It is also culturally entrenched and utterly resistant to change. Management claims otherwise are bullshit.

By the same token, the personality and nature of Oppenheimer live on in the spirit of Los Alamos and how people behave there, including his particular brand of rogue scientific excellence and eccentricity. At its core, Los Alamos is dominantly an experimental physics lab. The Trinity event was the model of this spirit. For most of its history, the Weapons Working Group (WWG) was the ritual: the meeting of all the disciplines to work together on the same experiment. That experiment was the core of the Lab's heart. Similarly, Teller's legacy lives on at Livermore in his fierce Cold War and anti-communist attitudes as well as his appetite for theoretical physics. The obsession with fusion is partnered with these. All of this is reflected in the current approach to activities: weapons and fusion are the Lab, and all else is simply a distraction. Both labs favor collaboration and vast swaths of science working together. They differ in emphasis and priority.

Secondary figures also shaped each lab. At Los Alamos, the second person to think of as shaping the culture is Harold Agnew. He was there during the Manhattan Project and ultimately became the lab's director. Harold was present for history many times, including the Chicago pile (the first critical fission reaction). He was also on a plane over Hiroshima observing that bombing. The old-timers there view him with great warmth, as they do the time when Los Alamos reached its apex during the Cold War.

For Livermore, the person who stands out is Johnny Foster. He had great achievements in the 1950s and ultimately became Livermore's director before moving into even higher echelons of government at the White House. Foster represented some of Livermore's greatest achievements. He also exemplified the lab's fierce and deep engagement with national security. This has its echo in the Strategic Defense Initiative (SDI, Star Wars) and in Washington. Los Alamos had the cowboys (for good and ill); Livermore has the suits, combined with California suave.

There are no individuals who stand out at Sandia. Having worked there for twenty years, one of the things I noted is that Sandia is very poor at recognizing the achievements of individuals. Most individual achievements are simply subsumed into institutional achievements, and the identity of the people doing the work is usually not celebrated. This is reflected in the lack of big personalities shaping the laboratory itself. The best place to find a few heroes is in the excellent work of Eric Schlosser, author of Command and Control. He documented the people behind nuclear stockpile safety, which arose in the 1960s and has served the American stockpile well to this day. Their work is embodied in the principles that Sandia stands by: always and never. Nuclear weapons must always work when the nation calls on them, but never under any other conditions. This is both reliability and safety, embodied and seen in how the modern stockpile behaves.

“Dropping a nuclear weapon was never a good idea.”

— Eric Schlosser

The subtle upshot of Sandia’s attitude toward people is throttling greatness. Los Alamos and Livermore have great scientists. Some people make incredible achievements. Nobel Laureates come from their ranks. You will occasionally meet one visiting there (I once met Murray Gell-Mann at daybreak outside the T-Division building). This won’t happen at Sandia. Ever. Achievement is institutional. No one is singled out. This is whether they deserve it or not. It becomes a self-fulfilling prophecy. No one great will arise, and if they do, they leave. It took me far too long to recognize this.

The Manhattan Project Origins

“There is no place for dogma in science. The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors.”

— J. Robert Oppenheimer

All three institutions have their origins in the Manhattan Project. Sandia became independent from Los Alamos after World War II. Nonetheless, it carried with it the experience and the structures that the Manhattan Project brought, and this is useful to understand it today. Much of what one experiences at either Lab connects to values and systems instituted there. The origin story for both Labs is still powerful and guiding. These are the legends and mythos for both.

The last lab, Lawrence Livermore, is a place I have not worked. I've had a great deal of contact with them over the years, both at Los Alamos and at Sandia. The dark side of an organizational culture is hard to intuit until you're inside it. This is the part of the culture that is hidden and unknown unless you actually live within it. I have only gotten hints of the darkness there, but it is surely present. Livermore does seem to enforce silence about its screwups. People keep quiet about problems and mistakes until management allows otherwise. This gives them time to clean things up or turn the narrative around. Sandia is similar. Los Alamos leaks like a sieve, and its problems are far more transparent. Los Alamos doesn't have more problems; they are simply more visible. This is probably just the small-town effect.

That said, I can speak to what I learned at the talk I gave and what it all says. The most direct reflection of my experiences is embedded in the notion of what code development looks like at the labs. The codes are thoroughly imprinted with the technical challenges each lab has. I worked on codes at Los Alamos and know the details from most of the Lab's history. My knowledge of Sandia is more limited, mostly because of the lack of common knowledge there. Sandia is simply divided and insular internally. People there are friendly, but it is not a friendly place. Livermore's codes are better known because they represent some of that lab's greatest accomplishments.

Los Alamos and Lawrence Livermore both develop codes for much the same purpose, and their codes have the same structures, but they work under completely different cultural ethoses. Sandia, on the other hand, always has a chip on its shoulder, particularly with computer codes, and perhaps for good reason. The situation Sandia deals with is simpler from the technical perspective of each code, but more complex in terms of how to thread everything together. Sandia also distinctly works in a non-integrated fashion and separates all the functions of its work significantly. The divisions at Sandia are driven by the broad application of "need to know". Common knowledge and information drive connection, and Sandia destroys common knowledge as a matter of course.

“Men build too many walls and not enough bridges.”— Joseph Fort Newton

No divide at Sandia is wider than the one between the hardcore weapon engineers and the people doing science and codes. They are organizationally separated by quite a distance, and they basically don't live in the same world or speak the same language. Conversely, Los Alamos and Livermore have their code development and their hardcore weapons teams closely associated with each other organizationally. While there's tension between them, they tend to speak the same language and, broadly speaking, are all physicists. This leads to a much more unified effort and provides better service to the nation. Sandia takes operational security to an absurd place where it threatens the effective execution of technical work. The same mentality keeps them from innovating and creating anything new. It keeps Sandia and its weapons work living in the past.

Computer Codes and Their Developers

This reality is expressed in the computer codes used by each lab. When one looks at Los Alamos carefully, all the computer codes used heavily for their programs were developed elsewhere, mostly at Livermore, but also in the UK by AWE. Notably, one of the major codes comes from a contractor (a beltway bandit). One of the key things about the big integrated codes at Los Alamos and Livermore is their structure. They are repositories for huge amounts of physics, but hydrodynamics is the core. The key part of those codes is the hydro method and algorithm. The explanation is that all the other physics use the material and mesh map that the hydro creates. As such, the hydrodynamic methods have an outsized impact on the code's quality and structure. I've quipped that the hydro scheme is the skeletal structure that holds up the rest of the physics. Sandia builds its own codes, but its code development is always tinged with a severe degree of envy toward Los Alamos and Livermore. They always feel like they're second best, and this feeling is not too far from the truth.

I did hear a story at Los Alamos that explains the second-class nature of code development there. It comes from the days of Pacific Island testing. In those days, the glory for a Los Alamos weapons person was being in the Pacific with those massive H-bomb tests. As one might expect, travel wasn't easy, and you were separated from your family and home for long periods of time. This was hard on physicists: not seeing their kids, not seeing their wives, and marriages suffered. The sense was that one could take a break from this grind and spend time at home doing code development. Thus, code development became associated with slackers, people who were not sufficiently committed to the mission to sacrifice their family and marriage at the altar of nuclear supremacy.

This attitude persists to this day and leads code developers to be viewed as second-class nuclear physicists. The result is an inability of Los Alamos to develop codes for itself. Code development is not really respected there; you were never quite as good as the designers of the weapons. The impact on the lab is profound. They are always importing a code from elsewhere, where computational science is taken more seriously. This was a vicious cycle and a self-fulfilling prophecy: along with the lack of respect, code development received less support, and it became second-rate.

This gets to the identity of Livermore, which is heavily grounded in computational physics. Moreover, some of the greatest accomplishments of Livermore are tied to their codes. Thus, computational physics receives support at Livermore. Computer science is treated as an important discipline and is empowered. As a result, the prestige and quality of the codes Livermore produces exceed those from Los Alamos by a large degree. If something is respected, supported, and celebrated, quality follows. In a sense, Livermore's supremacy in code development is a foregone conclusion once you look at the cultures of the labs.

In this area, Sandia is always third best. This even includes its development of the codes key to engineering, those that do mechanics calculations, where again the codes developed at Livermore are the precursors and, for most intents and purposes, were copied by Sandia. Another area where this is very true is shock physics, where Sandia's codes are always second best. Sandia is really good at software engineering, but what they engineer isn't really that good. They are definitely not innovative. Innovation is not supported by the culture; it is viewed with suspicion. They also do not really look outside their own organizations. The insular and isolated nature is all-encompassing.

That said, Sandia's codes work well for engineers doing day-in, day-out work, with construction that makes them useful. The technical content of these codes leaves much to be desired: some really sketchy ideas embedded in really good, high-quality code. Livermore comes close in that regard. Los Alamos code quality is last here.

Sandia, on the other hand, operates in this separate, individual-focused manner that produces small codes that always look second-rate compared to what the other labs produce. This, of course, is not entirely true, as Sandia has also ended up being the supplier of codes like CTH to the broader defense industry. A large part of this simply stems from the fact that, while the NNSA labs have their problems, they are vastly better than the DoD labs. This is remarkable considering the size of the defense budget. The science at the Department of Defense labs is abominable and cannot hold a candle to what the NNSA labs produce. This, in reality, is more a condemnation of the scientific environment in the entire country.

Los Alamos Greatness Denied

One of the tragedies of Los Alamos is that it is properly viewed as the origin of computational fluid dynamics, or CFD. It had two of the greats of CFD working there: Peter Lax and Frank Harlow. Neither had much influence on Los Alamos's weapons codes. Both had massive influence outside the Lab.

For example, the ideas of Frank Harlow found life in a whole batch of areas, including fusion computations and codes outside Los Alamos. Peter Lax produced basic mathematical work that forms the foundation of most compressible CFD codes in the world today. The tragedy is that neither of them had much influence at all on the actual codes developed by Los Alamos. This demonstrates the genuine animosity that weapons physicists at Los Alamos had for homegrown talent. In computational physics, "invented here" is disregarded at Los Alamos.

Similarly, Harlow's ideas found much more traction at Livermore, in the arbitrary Lagrangian-Eulerian (ALE) codes and in interface tracking. The interface tracking work was picked up and turned into useful code by David Youngs at AWE. Some of David's work was eventually imported back into Los Alamos. So the work of Harlow and Lax had to be exploited elsewhere, committed to code, and then imported back into Los Alamos. The code is only accepted if someone outside the Lab writes it. For the most part, code development at Los Alamos is simply caring for code others write.

The Modern Era After the Cold War

“Nothing is so difficult as not deceiving oneself.” — Ludwig Wittgenstein

My own experience is with two of these national labs: Los Alamos first and Sandia second. I must always be completely aware that while I was employed, these institutions were evolving due to the nature of the modern world. The whole nature of the nuclear stockpile changed dramatically, and only the first couple of years of my career actually occurred during the Cold War. These institutions were shaped first by World War II and then by the vestiges of the Cold War itself. The labs grew and flourished during the Cold War. The period after the Cold War has been characterized by decay and destruction. The cultures have been attacked by modern governance.

My experience with Livermore is more superficial. Livermore operates under the same governance as Los Alamos and Sandia, but in a different setting in California. Sandia operates a small lab across the street from Livermore, and it is vastly different than Sandia, New Mexico. I suspect many of these differences are present at Livermore itself. Both sites sit in the East Bay area and show that region's culture strongly. The issue comes down to the fact that when you interact with a lab, you see mostly the good and very little of the bad. You only see the bad face of an institution when you're inside it. I can certainly speak chapter and verse on the bad sides of Los Alamos and Sandia. I am fairly certain that Livermore has a bad side as well, but it is largely invisible to the outsider.

The shadow of what's bad at Lawrence Livermore can be seen in the dynamics around NIF, one of Livermore's greatest priorities, which anchors the pursuit of fusion as a core initiative. NIF has had a whole host of scandals and issues along its way. It is undoubtedly an incredible experimental platform. On the other hand, the degree of overselling of its achievements is fairly appalling and should be a black mark. Yet we seem to live in an age where bullshit is favored, and if the bullshit is found out, the news cycle and attention have already moved on, and the bullshitter survives.

The same 30 years have seen the management structure of all the labs change. The effect of the current management approach is to hollow the labs out. We see science in decline, and technical quality and excellence becoming a shadow of their former selves. I saw this start at Los Alamos and really take hold in the 2000s, and I watched it continue at Sandia. All the labs operate under the same auspices, and I can't imagine that Livermore hasn't declined as well. Management now focuses on money above all else, looking for programs with large funding, and this rewards managers who act as empire builders. The lack of trust in our society, and the way the government funds things while casting a doubting eye on everything done, has heavily damaged the laboratories. This leaves us in a situation where I don't think we are ready for a new strategic competition internationally, even as we seem to be promoting and proposing that it occur.

In the current state, the United States really has what I would call Schrödinger's nuclear stockpile. It both works and doesn't work simultaneously. We won't know until we look inside the box. God help us if we have to look inside that box. The nation has done a terrible job of caring for these labs. We have allowed them to decay and decline for the last 40 years. If their work becomes important and visible, we are unlikely to like what we see.

“The best way to predict your future is to create it” ― Peter Drucker

A Methods Challenge Worthy of Being Called World-Class

04 Wednesday Mar 2026

Posted by Bill Rider in Uncategorized

≈ 6 Comments

tl;dr

There is a sense of stagnation in progress for numerical methods for hyperbolic systems (gas dynamics). There are reasons for the relative stasis. To some extent, these methods are a victim of their own success. There are some really deep, conflicting priorities for methods in places I used to work. On the positive side, the conditions for nuclear fusion are exceedingly challenging to compute. You want to avoid shock waves. Shock waves are also impossible to avoid. With shock waves, conservation is essential unless the shock is explicitly tracked. For fusion conditions, conservation is generally disregarded because flows are desired to be adiabatic, thus shockless and smooth. Work to satisfy both priorities simultaneously is completely lacking. It would seemingly be important to pursue, but it’s not. We have simply surrendered to this as a challenge.

So let’s peel this onion.

“Knowledge has to be improved, challenged, and increased constantly, or it vanishes.” ― Peter Drucker

Conflicting Priorities

The national labs I used to work at solve a host of extremely challenging problems. These are high-energy-density problems with some really challenging conditions to consider. These involve exotic conditions and difficult processes to engineer. Canonically, explosions and shock waves are ever-present. Amongst the most difficult things to produce are the conditions for nuclear fusion. Computational tools are necessary and ubiquitous for the design and analysis of these technologies. As I’ve noted before, these labs are the origin of the technology known as computational fluid dynamics (CFD). The past still rules choices today, and the labs are a bit of a scientific cul-de-sac. The cultures resist outside ideas. Security and other conditions are increasingly isolating the labs from the World.

The two big priorities for the labs above are seemingly in conflict with one another. Not seemingly, they are in conflict! To compute shock waves, there are two approaches that have been successful. One is shock tracking. In this approach, the evolution of the shock wave is explicitly computed and updated. This was the first approach taken at the labs in computations done in World War 2. It is exceedingly complex, especially in more than one dimension. Shock capturing and artificial viscosity were invented as an alternative. Shock capturing dominates because of its generality and relative simplicity. It allows the examination of complex engineered systems.
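
To make the shock capturing idea concrete, here is a minimal sketch of a von Neumann–Richtmyer-style artificial viscosity. This is my own toy version with assumed coefficients (cq, cl) and a staggered-grid layout I chose for illustration, not anything from a lab code. The point is the mechanism: a pressure-like term that switches on only in compression and smears the shock over a few cells.

```python
# A toy von Neumann-Richtmyer-style artificial viscosity (my sketch;
# the coefficients cq and cl are illustrative assumptions).
import numpy as np

def artificial_viscosity(rho, c_sound, u_node, cq=2.0, cl=0.5):
    """Viscous pressure q per cell from nodal velocities.

    rho, c_sound : cell-centered density and sound speed (length N)
    u_node       : node-centered velocity (length N+1), staggered-grid style
    """
    du = np.diff(u_node)            # velocity jump across each cell
    q = np.where(du < 0.0,          # act only in compression
                 cq * rho * du**2 + cl * rho * c_sound * np.abs(du),
                 0.0)
    return q                        # added to p in the momentum/energy updates
```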

The methods developed initially for shock capturing were in the Lagrangian frame of reference. These methods were quite successful, but have some limits. These methods are not generally conservative of energy. The internal energy equation is evolved. One mantra I have to repeat over and over is that the internal energy equation is not a conservation law. It is an evolution equation. To get a conservation law, the total energy must be conserved. In the Lagrangian frame, the conservation errors are small, being close to negligible. As soon as you leave the Lagrangian frame, these errors become large and have negative consequences. You get the wrong speed of propagation for shocks unless you are intentional.

“Don’t handicap your children by making their lives easy.” ― Robert A. Heinlein

It has been shown that conservation is essential for computing shock waves properly. This is the Lax-Wendroff theorem. Shocks are weak solutions of the equations, and conservation is essential to getting a weak solution. There is an additional caveat: proper dissipation is needed to obtain the correct weak solution. Weak solutions are not unique, and one must select the physical one. For most of CFD, this conservation is essential. Most computations for systems with shocks are conservative. Outside the labs, developments adhere to this dictum. Inside the NNSA labs, not so much. The Lax-Wendroff theorem was developed at Los Alamos. For decades, it was simply ignored there. That is part of the story here, and the challenge is to stop ignoring it.
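
To illustrate what conservation form buys you, here is a toy sketch (my own example, not a lab code): a first-order Lax–Friedrichs finite-volume scheme for the 1D Euler equations on the Sod shock tube. Every cell update is a difference of interface fluxes, so the fluxes telescope and total mass, momentum, and energy change only through the boundaries. That is exactly the structure the Lax-Wendroff theorem rests on.

```python
# Minimal conservative finite-volume Euler solver (Lax-Friedrichs flux).
# The update U_i -= dt/dx * (F_{i+1/2} - F_{i-1/2}) telescopes, so the
# totals are conserved to round-off -- the Lax-Wendroff prerequisite.
import numpy as np

gamma = 1.4

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def step(U, dx, dt):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    a = np.max(np.abs(u) + np.sqrt(gamma * p / rho))   # wave-speed bound
    F = flux(U)
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    U = U.copy()
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    return U

# Sod shock tube initial data.
N = 400
x = np.linspace(0.0, 1.0, N)
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(N), p / (gamma - 1.0)])
dx = x[1] - x[0]
E0 = U[2].sum() * dx
for _ in range(200):
    U = step(U, dx, dt=0.3 * dx)
print("relative change in total energy:", (U[2].sum() * dx - E0) / E0)
```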

The simple explanation of why conservation is ignored is available. In brief, the Labs are interested in a wide range of very high-Mach-number flows. If the Mach number is very high, the flow's energy is dominated by kinetic energy. If one evolves total energy, the internal energy is found via the subtraction of two large numbers, $e = E - K$, where $K = \tfrac{1}{2}\sum_i u_i^2$. This can be error-prone and produce errors that are quite important for fusion conditions. For successful fusion designs, these errors are intolerable. This explains the Labs' resistance to conservation form.
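
A toy demonstration of that subtraction problem, with numbers I made up for illustration (ideal gas, Mach 100):

```python
# Why e = E - K is dangerous at high Mach number (toy numbers).
import numpy as np

e_true = 1.0                                  # specific internal energy
gamma = 5.0 / 3.0
c = np.sqrt(gamma * (gamma - 1.0) * e_true)   # ideal-gas sound speed
u = 100.0 * c                                 # Mach 100 flow
K = 0.5 * u**2                                # kinetic energy, ~5500x e
E = e_true + K                                # total specific energy

for dtype in (np.float32, np.float64):
    e_rec = np.array(E, dtype) - np.array(K, dtype)
    print(dtype.__name__, "relative error:", abs(float(e_rec) - e_true) / e_true)
# Single precision loses roughly four of its seven digits here; push the
# Mach number higher and essentially nothing of e survives.
```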

Fusion happens when the proper materials (isotopes of hydrogen) are put into very dense and very hot conditions. (Fusion in the more exciting stars happens with more than hydrogen, BTW!) Getting these materials into dense, hot conditions is quite challenging. Using shock waves to do this does not work well; the density and energy growth from shocks is quite inefficient. One explanation is the growth of entropy with shocks. A shock wave always raises the entropy of the material, which makes each additional shock less effective at improving the conditions for fusion. The trick is to compress the material adiabatically, with no increase in entropy. These conditions are quite difficult to engineer. Compressible flows shock readily, so this requires careful tuning and the creation of very high Mach number flows that are carefully balanced to avoid shocks.
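
A worked example of that inefficiency, using the standard ideal-gas Rankine–Hugoniot relation (my numbers, chosen for illustration): a single shock, however strong, can only compress the gas by a factor of (γ+1)/(γ−1), while adiabatic compression is unbounded and adds no entropy.

```python
# Shock compression vs. adiabatic compression (standard ideal-gas
# Rankine-Hugoniot relation; gamma for a monatomic gas).
gamma = 5.0 / 3.0

def shock_compression(M):
    """Density ratio across a shock of Mach number M."""
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

for M in (2.0, 5.0, 50.0):
    print(f"Mach {M:g}: compression {shock_compression(M):.3f}")

# The strong-shock limit: no single shock can beat (gamma+1)/(gamma-1).
print("limit:", (gamma + 1.0) / (gamma - 1.0))   # 4.0 for gamma = 5/3
```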

The original methods derived by von Neumann (and Richtmyer) are great for computing these kinds of flows. Conservation form methods solving total energy generally suck at it. Thus, at the Labs, the original non-conservative methods are favored. At Livermore, this favor is almost pathological (definitely cultural). For the optimistic side of the technology, the original methods are great; all is well in the Lagrangian frame. For the pessimistic side, shock waves, the original methods are problematic, and mixing and turbulence demand leaving the Lagrangian frame. Both turbulence and shocks are ubiquitous.

Something needs to give. Why can't we have both? It's been 70 fucking years, for Christ's sake.

“The mind, once stretched by a new idea, never returns to its original dimensions.” ― Ralph Waldo Emerson

Are the Priorities Incompatible?

Even today, the mantra demands that one make a choice. You either use a method that computes adiabatic evolution properly but fucks up shocks, or one that computes shocks right and fucks up adiabats. At Livermore, where fusion is king, the first choice wins every time. Livermore throws a big wake, and this choice dominates computation at the Labs. Thus, for the other labs too, shocks are hosed. Any serious code uses remap, and most problems eventually become Eulerian even if they start Lagrangian. Thus they start to lose conservation of energy. This loss of energy makes incorrect shock wave evolution inevitable. Testing confirms that this happens as a matter of course.

The fix for this is simple, but not currently taken: go to conservation form. Instead, a fix that returns the evolution equations to conservation was invented by DeBar in the 1970s. Basically, the kinetic energy deficit (or surplus) is added back to restore conservation. One of the issues is that this is not in conservation form, so Lax-Wendroff does not apply as a theoretical backstop. The DeBar approach has shown itself to be effective in getting correct solutions. It is also incredibly fragile. It generally does not function reliably on complex problems, falling prey to the basic subtraction issue mentioned above.
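
Here is a schematic of the DeBar-style kinetic energy fix as I understand it, written as a sketch under my own assumptions rather than as the KRAKEN implementation:

```python
# Schematic DeBar-style kinetic energy fix applied after a remap step
# (a sketch under my assumptions, not the KRAKEN implementation).
import numpy as np

def debar_fix(rho_new, mom_new, ke_remapped, e_new):
    """Adjust internal energy so each cell's total energy is conserved.

    rho_new, mom_new : remapped density and momentum per cell
    ke_remapped      : kinetic energy remapped as its own conserved field
    e_new            : remapped internal energy
    """
    ke_from_mom = 0.5 * mom_new**2 / rho_new   # KE implied by remapped momentum
    deficit = ke_remapped - ke_from_mom        # energy lost (or gained) in remap
    return e_new + deficit

# The fragility: 'deficit' is again a difference of nearly equal numbers
# at high Mach, and depositing it can drive e negative in expansions.
```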

The main community outside the Labs facing similar problems is astrophysics. In the 1980s, they used codes based on technology from Livermore (written by a group headed by Mike Norman). Those codes have been supplanted by conservative methods based on the piecewise parabolic method (PPM). Thus, today's astrophysical calculations are done using modern conservative methods, most commonly with PPM as the basis. The author of PPM is Paul Woodward, who worked at Livermore in the 1970s and 1980s. Paul also worked with Bram van Leer while on sabbatical, influencing his approach. Astrophysicists are also interested in adiabatic compression and fusion conditions. The difference is one of degree: fusion like that in ICF is far more extreme than astrophysical flows.

Those familiar with my writing might recognize that I really favor PPM as a method and basic framework. I also believe that conservation is essential. It should not be disregarded as the Labs do. Correct shock wave propagation is too important to throw away.

“I am sufficiently proud of my knowing something to be modest about my not knowing all.” ― Vladimir Nabokov

Having Your Cake and…

This gets to the big challenge I am thinking about. Can we have methods that compute adiabats properly and also conserve? I firmly believe the answer is yes, and it is perplexing that we haven't done this yet. The objection is that adiabatic evolution is incredibly demanding. I would counter that the errors and faults with shock solutions are obvious too, and far larger. For the technologies the Labs are responsible for, both are essential. I see three possibilities that are most likely to succeed at both. None of them is likely to get the effort it deserves today, as all eyes are on AI.

1. A robust DeBar (kinetic energy fix).

The simplest approach might be to make the kinetic energy fix in remap work. The issue is the fragility of the approach. The way forward seems to be to enforce conservation over time and carry a "bank" of the difference along until it can safely be used. That said, the method still won't satisfy the Lax-Wendroff theorem, and there is no reason to believe the results are safe for complex shocked flows. I think the lesson of this fix is that kinetic energy is the core of the problem to be solved. A big part of this is entropy conditions and consistency.

2. A really careful and well-constructed PPM scheme.

The PPM method is a cell-centered version of a method Van Leer created (scheme V from his 1977 paper). It is a practical version of the recently developed active flux method. I do think the active flux method could be another path forward. One of the things about adiabatic flows is their smoothness and analytical structure. The sense is that the method could reproduce the analytical behavior to the level of truncation error. It is possible that using even higher-order methods could reduce the errors to the point of being negligible. Work in the literature shows that very careful mathematics and integration can produce better results. PPM is an amazing conservative method (a small sketch of its core interpolation appears after this list). It also computes a host of complex flows very well. The sort of adiabatic compression needed for successful fusion is a bigger challenge. Perhaps some ideas from the kinetic energy fix could apply here as well.

3. Building on the conservative Lagrangian codes from France

The next idea is using the advances in Lagrangian methods that are conservative by construction. Do these methods work sufficiently well a priori for these adiabatic flows? It seems so. The issues with remap do not change; the Lagrangian frame must be discarded once materials mix. The basic mathematics of the subtraction of large numbers is retained. In my experience, one needs to be careful and intentional to get robust answers. This applies to the spatial reconstruction step and the Riemann solution. Nonetheless, the adiabatic compression (and expansion) challenge is huge. The use of the generalized Riemann problem is also potentially essential.
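
As promised above, here is my toy transcription of the core PPM interpolation and the basic Colella–Woodward monotonization: reconstruction only, with none of the steepening, flattening, or Riemann-solver machinery of the full method.

```python
# Core PPM interpolation + basic CW84 monotonization (my toy version).
import numpy as np

def ppm_interface(a):
    """Fourth-order interface values a_{j+1/2} on a uniform grid."""
    # a_{j+1/2} = 7/12 (a_j + a_{j+1}) - 1/12 (a_{j-1} + a_{j+2})
    return (7.0 / 12.0) * (a[1:-2] + a[2:-1]) - (1.0 / 12.0) * (a[:-3] + a[3:])

def ppm_limited_edges(a):
    """Per-cell left/right edge values after CW84-style limiting."""
    f = ppm_interface(a)           # f[k] is the interface value a_{k+3/2}
    ac = a[2:-2]                   # cells that own both interface values
    aL, aR = f[:-1].copy(), f[1:].copy()
    da = aR - aL
    a6 = 6.0 * (ac - 0.5 * (aL + aR))
    ext = (aR - ac) * (ac - aL) <= 0.0     # local extremum: flatten
    aL[ext] = ac[ext]
    aR[ext] = ac[ext]
    hi = da * a6 > da * da                  # parabola overshoots near aL
    aL[hi] = 3.0 * ac[hi] - 2.0 * aR[hi]
    lo = da * a6 < -da * da                 # parabola overshoots near aR
    aR[lo] = 3.0 * ac[lo] - 2.0 * aL[lo]
    return aL, aR

# Example: on a smooth profile the limiter leaves the parabolas alone.
x = np.linspace(0.0, 1.0, 64)
aL, aR = ppm_limited_edges(np.sin(2.0 * np.pi * x))
```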

There may be other ideas to consider to break the impasse, but these seem the best bet. In many cases, elements of each idea will need to be blended together. It is pretty clear that the detailed treatment of kinetic energy is the key. There is an additional challenge I've identified that feels very related.

“Life is about accepting the challenges along the way, choosing to keep moving forward, and savoring the journey.” ― Roy T. Bennett

Other Issues to Consider

I've written about issues with very strong rarefaction waves before. This topic is probably adjacent to the issues with adiabatic evolution. I have noted that classical methods of all sorts fail for very strong rarefaction problems. These are problems that start to approach the expansion of material into a vacuum. A rarefaction is an adiabatic structure, so the failure of conservative or remap methods on this class of problem may be related.

I have noted that methods based on the Generalized Riemann Problem (GRP) seem to do better. This is based on work from China and is also alluded to by Maire for Kidder's problem. The GRP approach is the epitome of being super careful in the construction of a method. It seems reasonable that combining some or all of these ideas could provide the solution, one that solves the full spectrum of key problems in a unified manner.

This would allow us to have our cake and eat it too.

I can suggest that a successful method would have certain characteristics. I believe strict conservation form is essential. The method should be high-order and maintain strict control of dissipation in all its forms. The evolution should consider a GRP-type solver as well. The question is how to square the canonical shock problems with high Mach number adiabatic flows. I might suggest that a separate internal energy equation be evolved to allow better solutions; in adiabatic evolution, this equation should be used for better behavior. We may also need a separate kinetic energy equation as well. The key would be how to evolve any gain or loss and then synchronize it with the conserved variables. One would want to do this in conjunction with entropy satisfaction; the second law is an important inequality to adhere to. The discrepancy should be allowed to disappear once the flow becomes dissipative via mixing or shocks.
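
A hedged sketch of that synchronization idea follows. It resembles the dual-energy formalism long used in astrophysics codes: evolve total energy conservatively, evolve internal energy separately, and switch between them cell by cell. The threshold eta here is a tunable assumption, not a derived number.

```python
# Dual-energy-style switch (a sketch, not anyone's production scheme).
import numpy as np

def select_internal_energy(E, e_evolved, rho, mom, eta=1.0e-3):
    """Pick the internal energy per cell from two evolved candidates."""
    K = 0.5 * mom**2 / rho
    e_from_E = E - K                  # conservative, but cancellation-prone
    kinetic_dominated = e_from_E < eta * E
    # Smooth, kinetic-dominated cells: trust the separately evolved e,
    # which tracks adiabats well. Dissipative cells: trust E - K so
    # shocks move at the right speed. The difference e_evolved - e_from_E
    # is the "banked" discrepancy discussed above; it should be driven
    # to zero wherever the flow shocks or mixes.
    return np.where(kinetic_dominated, e_evolved, e_from_E)
```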

The question is whether anyone gives a fuck. All the attention is on AI. The reality is that AI won't solve this problem, but it could help if used properly.

“If you want something new, you have to stop doing something old” ― Peter Drucker

References

Caramana, E. J., D. E. Burton, Mikhail J. Shashkov, and P. P. Whalen. “The construction of compatible hydrodynamics algorithms utilizing conservation of total energy.” Journal of Computational Physics 146, no. 1 (1998): 227-262.

Maire, Pierre-Henri, Rémi Abgrall, Jérôme Breil, and Jean Ovadia. “A cell-centered Lagrangian scheme for two-dimensional compressible flow problems.” SIAM Journal on Scientific Computing 29, no. 4 (2007): 1781-1824.

Maire, Pierre-Henri. Contribution to the numerical modeling of inertial confinement fusion. No. CEA-R–6260. Bordeaux-1 Univ., 33 (France), 2011.

Qian, Jianzhen, Jiequan Li, and Shuanghu Wang. “The generalized Riemann problems for compressible fluid flows: Towards high order.” Journal of Computational Physics 259 (2014): 358-389.

Loubère, Raphaël, Pierre‐Henri Maire, and Pavel Váchal. “3D staggered Lagrangian hydrodynamics scheme with cell‐centered Riemann solver‐based artificial viscosity.” International Journal for Numerical Methods in Fluids 72, no. 1 (2013): 22-42.

Colella, Phillip, and Paul R. Woodward. "The piecewise parabolic method (PPM) for gas-dynamical simulations." Journal of Computational Physics 54, no. 1 (1984): 174-201.

Woodward, Paul, and Phillip Colella. "The numerical simulation of two-dimensional fluid flow with strong shocks." Journal of Computational Physics 54, no. 1 (1984): 115-173.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Blondin, John M., and Eric A. Lufkin. "The piecewise-parabolic method in curvilinear coordinates." Astrophysical Journal Supplement Series 88, no. 2 (1993): 589-594.

DeBar, Roger B. Fundamentals of the KRAKEN code. Technical Report UCID-17366, Lawrence Livermore Laboratory, 1974.

Burton, Donald E., Nathaniel R. Morgan, Marc Robert Joseph Charest, Mark A. Kenamond, and Jimmy Fung. “Compatible, energy conserving, bounds preserving remap of hydrodynamic fields for an extended ALE scheme.” Journal of Computational Physics 355 (2018): 492-533.

Hawley, John F., Larry L. Smarr, and James R. Wilson. "A numerical study of nonspherical black hole accretion. I. Equations and test problems." Astrophysical Journal 277 (1984): 296-311.

Stone, James M., and Michael L. Norman. "ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I. The hydrodynamic algorithms and tests." Astrophysical Journal Supplement Series 80, no. 2 (1992): 753-790.

Stone, James M., Thomas A. Gardiner, Peter Teuben, John F. Hawley, and Jacob B. Simon. “Athena: a new code for astrophysical MHD.” The Astrophysical Journal Supplement Series 178, no. 1 (2008): 137-177.

This Moment with AI and How to Win It

01 Sunday Mar 2026

Posted by Bill Rider in Uncategorized

≈ 1 Comment

tl;dr

Is AI going to replace your job and spike unemployment, or will it supercharge abundance and wealth?

We have a choice about where this goes as a society. The hype around AI is endless and over the top. The hype misses the big opportunity and stokes outlandish fears, too. Almost all of the conversation misses what AI brings to the table. In a lot of cases, if a job can be eliminated by AI, much of that job probably shouldn't be done at all. The real power of AI is to make people more productive. Cutting jobs is zero-sum thinking. The key to AI is boosting productivity to do more and sell more. This is the essence of abundance: use infinite thinking to make more and grow the economy. Zero-sum thinking is at the core of these job cuts, and it will turn people against AI. If AI fucks the public, the public will fuck AI back. This is how we lose as a society. A better path is to use it to grow society's wealth and abundance instead of just growing profits.

This topic is long overdue and needed. We need to think clearly about where all this is going. Right now, no one is. We are not seeing the real core issues around AI. Whether it is the AI companies or the government, it is all bullshit and very little light. This bullshit mirrors the hallucinations AI produces regularly. This algorithmic BS is a perfect vehicle for amplifying the lack of trust already corroding society today. That lack of trust could be amplified further and trigger a societal doom loop.

“Abundance of knowledge does not teach men to be wise.” ― Heraclitus

AI is a “Magic” Technology

“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke

One of the things to recognize is just how miraculous AI is. In the course of the internet age, there have been a handful of moments that feel almost magical when they hit you. The first came the first time I used Google search. Before Google, the internet had a set of phone-book websites; I happened to use one called AltaVista. It was the way you got around and found stuff. Then this new Google search came along. It had this amazingly simple interface: you typed in the query, and suddenly you had results. It was like magic! Once I used Google search, it was like walking through a door I never walked back out of. AltaVista was gone, and I was never going to return to it. Google was like fucking magic!

The next thing that spurred this sort of feeling was a smartphone, the Apple iPhone. I had used a BlackBerry and a flip phone. The iPhone was the internet in your pocket plus a built-in iPod. It became even more, and the interface was like a mini-laptop. More magic! The BlackBerry was cooked, and these devices were suddenly everywhere. As I discuss later, these smartphones were turned against society over time. The worst thing about Google and smartphones was the enshitification they unleashed. My stance is that enshitification is a choice driven by maximizing shareholder value. It is optional. We treat it like a natural law. It is not.

The next magic moment was the first time I used ChatGPT. I heard about this new site online with this thing called a Large Language Model (LLM) that you could question like a person. You simply spoke like a human being, and it talked back. I tried it out. My jaw dropped at what it could do. The potential was vast. The problems with the technology are also vast. Nonetheless, this was a magical moment where you could see the World change in a moment. Recently, with Codex, I felt the same thing (Claude Code is similar). I was able to do things with ease and simplicity that were magical. This is the dawn of agentic AI. The potential for LLMs and agentic AI is incredible. The counter to this hopeful trajectory is the societal system that enshitifies all this magical technology as the default setting.

Maximizing shareholder value typifies a whole mindset. This mindset focuses on greed instead of generosity. It is short-term focused instead of long-term. This mindset is about zero-sum thinking, where there are winners and losers. The alternative is infinite thinking, where everyone wins. We have choices with AI and agents. We can proceed as we have, with greed and short-term thinking; this will lead to societal damage and enshitification. We can also choose a different path of long-term thinking and generosity; this is the path to abundance and societal good. The choices are there. To get good outcomes for society, we need to step away from our current defaults.

“A man is but the product of his thoughts. What he thinks, he becomes.” ― Mahatma Gandhi

We Need to Figure Out Work and AI

In the last year I worked at Sandia, I spent a great deal of time trying out LLMs in the setting of work. I did all sorts of tests to understand and map out the capabilities of this technology for scientific work. I examined how LLMs did at writing, at research, and at answering a variety of questions. This was driven by genuine curiosity, but also by work I was doing on verification and validation of scientific machine learning. Scientific machine learning (ML) is a related field that is getting a great deal of attention in the scientific community, although it is being overwhelmed by the tsunami of interest in LLMs. Doing this work required applying the well-developed principles of the scientific method, then adapting those principles to the specifics of LLMs and ML.

“I’m not upset that you lied to me, I’m upset that from now on I can’t believe you.” ― Friedrich Nietzsche

What I came to realize was that my approach to verification and validation is essential to getting good results from LLMs. To wit, the level of doubt applied to LLM results needs to be quite high. LLMs are prone to bullshit us all the time and quite often will give an answer meant to satisfy us that has no relation to objective facts. A large part of successfully using an LLM is to start off by asking it questions to which you already know the answer, in order to verify that the topical area is within its grasp. This by no means guarantees that, as you get deeper into a topic, the LLM will remain successful. One should always take a result from a large language model with a grain of salt, check it, and think about it deeply.

What I discovered with LLMs is that the closer you get to esoteric, expert knowledge, the worse they are at everything. Whenever I got close enough to the core of my own expertise, the LLM failed to give objectively good results. This was true over and over again. This is an important lesson to integrate into using them effectively. The role of the human expert is actually amplified by LLMs. The expert knows the point where LLM competence ends, and human judgment is necessary.

For example, I found that LLMs are terrible for writing. They're good as an editor, but terrible at creative writing, terrible at doing anything that a human with ability can do. Writing is a deeply human activity and involves clarity of thought. The narrative elements are an essential human pursuit. At least today, AI has no capacity to write with genuine humanity. My writing is part of thinking on a topic; that is true for fiction and non-fiction alike. A key is to leave marks on the prose that show genuine personality and human experience. Ultimately, my use of AI in any sort of writing has been relegated to editing and research.

The same holds doubly for areas of science, where I find AI is a capable digital assistant, great for improving the scope and breadth of what I do, but not good at creating anything at an expert level. I have tested this over and over with the same result. LLMs have improved over the past three years, but it has only moved the wall it hits a little. I’ve taken various algorithms and work that I’ve done and tried to basically spoon-feed it into the AI. Even with an excessive amount of spoon-feeding, the AI fails to do even the simplest level of creativity. At the same time, I am convinced it can be a useful assistant. I use it every single day for a host of tasks.

The counter is that AI is very good at producing a large volume of work and can be utilized to improve both the quality of the work and the speed with which it is completed. This was particularly true with the Codex example I tried in the agentic work. It did a number of banal tasks with speed and effectiveness far greater than my own and basically accomplished one or two days of hard work in less than an hour. What I saw there was the capacity to free up my time for the creative and thinking efforts appropriate for humanity, and to let me spend more of my time doing what only a human being can do.

“The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence.” ― Charles Bukowski

Humans supply thinking and creativity. AI needs to remove the bullshit instead of adding bullshit to humanity.

How Not to Make Progress: No Trust and Maximum Bullshit

One of the big things that will inhibit the ability of AI to improve the workplace is the pervasive lack of trust in society. Every bit of the current trajectory will simply destroy more trust. A lot of the work that we all do is complete bullshit. Whether it's training, paperwork, or various other check-the-box exercises, it is all rooted in that lack of trust. As AI shows, most of this work is meaningless, lacks humanity, and can be automated. Rather than eliminating this useless work, the lack of trust only accelerates and amplifies it. If we do not change course, AI will undermine trust and generate even more inhumane bullshit. Over the course of my career, the bullshit grew without bounds and swallowed most of the humanity in work.

“Whoever is careless with the truth in small matters cannot be trusted with important matters” ― Albert Einstein

One of the biggest things for AI to solve is the issue of trust in itself. The tendency to hallucinate or frankly bullshit us is toxic for AI's future. It might be great for the near-term bottom line, but it destroys the long term. This, along with the sycophancy of the replies, is a major issue. AI needs to stop this and start being honest, focusing on growing trust. There are probably internal measures and mechanisms by which an AI could return some degree of confidence and reliability for its results. It could report that an answer is low confidence or high confidence. These measures would guide users toward exercising doubt and assist in the verification of results under the appropriate circumstances.

LLMs are probabilistic engines, so there is a measure of probability associated with every result they give. Thus, a grade or score can be provided, even if the highest score reported is relatively low probability compared to what we would like. If the LLMs would let the user know that an answer is sketchy and unreliable, it would be transformative. It would show a vulnerability that would help build trust. We should never trust AI completely. Nonetheless, a tip that it was uncertain would be a boon. It would show a level of care for the user that today's models neglect. It would also help educate users about what they are really dealing with.
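
As a sketch of how crude such a measure could be and still help, here is a toy score built from per-token log-probabilities. The inputs and threshold are illustrative assumptions of mine; real calibration is a hard research problem, so treat this as the shape of the idea, not a recipe.

```python
# Toy confidence score from per-token log-probabilities (illustrative
# numbers and threshold; real calibration is much harder than this).
import math

def confidence(token_logprobs, threshold=0.75):
    """Geometric-mean token probability, flagged when below threshold."""
    mean_lp = sum(token_logprobs) / len(token_logprobs)
    score = math.exp(mean_lp)                 # in (0, 1]
    flag = "low confidence -- verify" if score < threshold else "check anyway"
    return score, flag

# Hypothetical logprobs for a confident answer and a shaky one.
print(confidence([-0.05, -0.02, -0.10, -0.01]))   # ~0.96
print(confidence([-0.9, -1.4, -0.3, -2.2]))       # ~0.30
```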

This sort of measure built into AI would be incredibly welcome. At the same time, within the way current corporate governance works, I think it would be rejected out of hand because the companies simply want as many users as possible. The AI wants to present itself as completely reliable and completely subservient to its users. Rather than provide a better service, the AIs will resist any kind of feedback that calls the results into doubt. All of this serves maximizing shareholder value instead of maximizing customer service. Today's corporate governance is squarely opposed to getting this right. This governance is at the heart of society's deficit of trust.

“The comfort of the rich depends upon an abundant supply of the poor.” ― Voltaire

How to Actually Make Progress

Trained properly, AI could be a vastly powerful agent or assistant that unleashes human creativity. Human creativity, art, and free thinking are in short supply today. AI offers the ability both to boost these by freeing up time and to assist people in bringing ideas to fruition and seeing whether they are actually good ideas worth exploring. AI can allow much more exploration and many more ideas to be brought to life, and perhaps ultimately produce far greater beneficial outcomes for business, if only businesses were to trust the people they employ to do this kind of work. For myself, this is exactly the model of AI that I plan to exercise: a powerful assistant who can help me explore ideas more deeply and bring the right ones to life.

The right way to look at AI is to view it as a very capable digital assistant with broad and general knowledge. At the same time, that knowledge is shallow and not at an expert level. AI cannot hold a candle to the expertise you hold at the heart of what you do. That expertise is the heart of the humanity we should bring to our lives and work. AI can provide competent, but flawed, help in almost everything else that is ancillary to your core work. In this way, AI can be a wonderful digital assistant and ease your way to greater productivity.

As I noted above, AI couldn’t write for shit. I do believe that I am not the greatest writer, but I’m far, far better than AI. With a little effort, almost everyone probably could be taught to be better. We just need to teach people. AI doesn’t sound authentic, and it produces prose that is simply uninspired. AI is a great editor, though.

One of the biggest issues with AI is that you should doubt everything it creates. What I realized was that the way I use AI is very much the same as the way I do science. There's a need for verification and validation. I approach using AI the same way I would approach a scientific problem: I look to confirm everything it does and hold everything in doubt. I assume it's useful, but I also assume it's flawed and in need of extra work and verification that the results are good. It would be better if AI helped by giving us a tip that a response is (more) questionable. In fact, with AI, the need to verify and validate everything it does is much higher than with other computational tools. This calls into question the absence of V&V in the societal plans for AI. V&V is essential for AI's success.

The greatest high-leverage thing we can do is train people to use AI correctly. This is a place where my experience at the National Labs was absolutely jaw-dropping. The management's efforts to use AI were ham-handed and naive, just superficial encouragement of the worst uses possible. They encouraged people to use it, but not in an intelligent and well-thought-through way. The fact is that AI's proper use is subtle and esoteric and requires a great deal of discipline and a change in the overall mindset. We need leadership that pushes us in the right direction. So far, all the leadership is pushing everything in the wrong direction.

“Don’t mistake activity with achievement.” ― John Wooden

Nothing shows us this problem more fully than the scientific programs around AI. DOE has the massive Genesis Project, which is just an exemplar of how not to do AI in science. It's a whole bunch of stunts. There's no evidence of any V&V or doubt in how it's used. The V&V and the doubt are the most important parts of science, more so with AI than with any other science. Instead, like other recent programs, it's all about big computers and doing things that look splashy but have very little scientific sense. It is almost 180 degrees from the right direction. AI can be a powerful tool for science, but only with a clear-eyed assessment of its results. Instead, we see blind acceptance and marketing bullshit.

The deeper issue is how this productivity will be utilized by corporations and organizations.

* Will they simply demand that the organization and the corporations produce as much as before? In this case, the gains with AI will be used to slash the size of the workforce.

* Or instead will they realize that they can unleash people to do more, and that corporations and organizations can do more and create more good for society?

This is an abundance agenda and leads to great growth and good things for society. One path leads to destruction, and the other leads to long-term benefits. Current ideas are heading headlong toward destruction.

“Creativity is intelligence having fun.” ― Albert Einstein

To do this, we have to be mindful about how we use AI. Today’s world is full of the mindset of scarcity and the use of short-term thinking. This leads to the use of productivity to simply reduce the number of workers. This is short-sighted and ultimately robs the future of a much better outcome where we use the productivity to unleash greater creativity and more products, more output, and better things for society.

With Today’s Corporations, AI Will Fuck Us

Don’t worry, it will all be enshitified. If recent history is a guide, the magical capability of LLMs will be turned to shit. We have managed to take Google search and fuck it up systematically through greed. This greed is an enshitification plan. Smartphones are the same. Social media was never quite so magical, but it had potential. That potential has been squandered by the engine of enshitification. Now we have this new technology that seems far more powerful than any of these previous ones. It is definitely magical. We are going to turn it loose on the ecosystem that enshitifies things naturally.

What could possibly go wrong?

The capabilities and power of AI is far greater than the algorithms used in social media. With the current mentality, the creativity of humans will be greed-motivated to adapt AI into profit machines. The same mentality has already done an immense amount of damage to society. We should have faith that a more powerful technology will unleash greater damage. We are already seeing chaos and horrors in multiple ways originating from this process. Surely the power of AI will also be integrated with social media. This will supercharge profits and damage. These forces have energized toxic politics and vast income-wealth inequality. An AI supercharged ecosystem may be unimaginably worse. Without change, this is the likely course.

We should have already learned the lesson, but obviously, we haven’t. Money provides too much power to be overcome.

Zero Sum Thinking and Value

The current philosophy of maximizing shareholder value is zero-sum thinking. This is the approach where business (and life) is all about winners and losers. In today's world, the losers are consumers who are preyed upon. Vulnerable smaller businesses are also preyed upon by massive corporations. The powerful dominate the weak, and most of us are weak. Ultimately, the profits and victories come at the expense of wide swaths of society.

I worked for decades in places where trust was in free fall. That’s not entirely true. The first decade or so at Los Alamos was a high-trust environment where people worked together. There was generosity and a spirit of giving that were essential to developing me as a professional. If you were reasonably smart and competent, you were welcomed into someone’s office and offered the best of their thoughts and advice. It was in this trust that I blossomed. Then modernity came for trust, and the generosity was hollowed out.

It is also an environment that I believe has been snuffed out. The same me, plopped into the current version of the National Labs, would never grow and accomplish anything like I could in that trusting environment. The lack of trust that infects society as a whole eventually took hold at the labs: the government did not trust us, and we did not trust the government. We moved headlong toward all the natural outcomes of a lack of trust.

Part of this was:

– the lack of peer review

– the lack of honest assessment of work

– leadership that lied and withheld information from the rank and file

– an inability to look at risk and failure in a healthy way

All of this simply accelerated the loss of trust in the state we are in today. I think it’s safe to say that the trust in our society has never been lower. I saw all the toxic fruits of that mentality at work myself. We can see it across society, looking at politics. No matter what side you take, the other side is evil. With AI, we have a technology that can make it worse.

The problem is that trust-building AI will not maximize shareholder value, even though it would build a system suited for the long run. We need a fundamentally different mindset and corporate mentality.

“Acknowledging the good that you already have in your life is the foundation for all abundance.” ― Eckhart Tolle

Infinite Thinking

“To ask, “What’s best for me” is finite thinking. To ask, “What’s best for us” is infinite thinking.” ― Simon Sinek

The alternative to zero-sum thinking is infinite thinking. The framing comes from game theory. A zero-sum game is the classic finite contest with a winner and a loser. The opposite is an infinite game, where the point is to keep playing. If you play well, everyone wins. The finite game is the usual football or basketball game. The infinite game is like Legos or a marriage: success is continued play and creativity, where everyone wins.

One of the greatest differences between the finite and the infinite game is an aspect of trust. To succeed at the Infinite Game, one must focus on building and maintaining trust. In the Finite Game, trust is used against you and becomes something that you wield as a weapon. This difference can be seen as our society has become completely untrusting. This is an exemplar of our commitment to these finite win-lose games as the basis for society.

“Leadership is about integrity, honesty and accountability. All components of trust.”― Simon Sinek

We are seeing a supercharging of corporate greed and behavior that drives the worst impulses of business. The other force that could change things would be regulation. We are currently in an orgy of deregulation, and there is very little thought or confidence on the part of the government to regulate an area like AI, much less tech or social media, in any rational way based on expertise and knowledge. Instead, the vast amounts of money driven by corporate greed and inequality are tilting the playing field squarely against any of these outcomes. Thus, current trends show that trust is going to sink even lower across society as a whole. The recent dust-up between the Department of Defense and Anthropic is an exemplar of this. DoD and OpenAI chose the path of no trust and greed.

Switching to a trust-building mentality is something society needs today. With trust, collaboration and cooperation become the touchstones of how society looks. Without trust, it simply becomes a dog-eat-dog world. You employ data and power as weapons against those you’re pitted against. An observant view of today shows where this gets us: conflict, chaos, anger, and a host of other ills dragging society down.

If an alternative view of how AI is used is taken, we can also see how it can build trust. If we view AI as a vehicle for abundance, we see that it can supercharge the quality of work done. We can enhance the volume of work done and how much each worker can do. The ability to create, produce, and get products to market then accelerates and grows in scope. All of this brings wealth and prosperity to society. This, in turn, builds trust for AI and provides benefits for the humanity it serves. This is the path we need to take if we want AI to be good for society.

“Abundance is harder for us to handle than scarcity.” ― Nassim Nicholas Taleb

Standing in opposition to this vision is the focus on maximizing shareholder value, which does provide for the short-term prosperity of society. Virtually all of us in the United States have investments in the stock market. Our retirements all depend on these investments doing well. But we can only emphasize the short term for so long before the bills come due.

The problem is that it’s a house of cards. The same forces are destroying trust across society, and ultimately, that destruction of trust puts the entire structure at risk. If trust continues to drop, we risk a catastrophic collapse of the system. Indeed, we may already be experiencing the start of that collapse, as large portions of society are being dismantled by the current administration. We may be planting the roots of a crisis that will keep doing serious damage to our future.

“Growth for the sake of growth is the ideology of the cancer cell.” ― Edward Abbey

What is lack of convergence telling us?

25 Wednesday Feb 2026

Posted by Bill Rider in Uncategorized

≈ Leave a comment

tl;dr

If a simulation does not converge under mesh refinement, it is generally a bad thing. For most practical calculations, this is not even tested, which is even worse. For simulation, the notion of convergence is the basis of faith in investing in faster computers. In brief, you expect to approach the exact solution as you use more computing resources, via a smaller mesh size or time step, which needs more computing. Generally, convergence is simply assumed and not checked. The reality is that it often does not emerge.

What are the consequences?

We have less reliability in simulations than we should. Problems with our current technology go unchallenged and unimproved. Progress stagnates without the feedback of reality. The promise of computational science is undermined by our acceptance of lax practices.

Why This Matters

“In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of linear finite difference methods for the numerical solution of linear partial differential equations. It states that for a linear consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable.” ― John Strikwerda

In computational science, the premise that faster computers yield better solutions is axiomatic. It provides unyielding confidence in the utility of computers for better science. As the Lax equivalence theorem makes clear, convergence is not without conditions. It is not assured. The issue is that the necessary homework to ensure it is in effect is too rarely done. Most computational science today simply makes this assumption and treats convergence as a fait accompli. This is dangerous for the use of computation. It threatens progress and undermines the credibility and utility of the field.

There are many reasons for this, mostly laziness or ignorance. The practice of code verification is firmly grounded in the equivalence theorem. Doing a code verification exercise makes the theorem actionable. Yet the practice of code verification is relatively uncommon. It is done far more often where full numerical accuracy can be achieved, i.e., for smooth solutions, often via the method of manufactured solutions. It is done far less often when calculations lose smoothness. This gets to a couple of large gaps in practice (a small sketch of such a test follows the list):

1. Code verification allows precise error estimation,

2. Code verification can still be done on solutions not allowing full numerical accuracy, but still demanding convergence,

3. And most importantly, these non-smooth circumstances are what the vast majority of practical applications of computation produce.
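To make this concrete, here is a minimal sketch of code verification via the method of manufactured solutions, using a toy heat-equation solver written for this post. The manufactured solution, the grid sizes, and the FTCS scheme are all illustrative choices, not anything from a production code.

```python
# Code verification sketch: solve u_t = u_xx + s(x, t) where the source s
# is manufactured so that u = exp(-t) sin(pi x) is the exact solution.
# Refine the mesh and check the observed order of convergence.
import numpy as np

def solve_heat_mms(n, t_final=0.1):
    """FTCS solve on [0, 1] with u(0) = u(1) = 0; returns discrete L2 error.
    Manufactured source: s = (pi^2 - 1) exp(-t) sin(pi x)."""
    x = np.linspace(0.0, 1.0, n + 1)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2                  # stable; keeps time error subdominant
    u = np.sin(np.pi * x)             # exact solution at t = 0
    t = 0.0
    while t < t_final:
        dt_step = min(dt, t_final - t)
        s = (np.pi**2 - 1.0) * np.exp(-t) * np.sin(np.pi * x)
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + dt_step * (
            (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + s[1:-1])
        u, t = u_new, t + dt_step
    exact = np.exp(-t_final) * np.sin(np.pi * x)
    return np.sqrt(dx * np.sum((u - exact) ** 2))

errors = [solve_heat_mms(n) for n in (16, 32, 64, 128)]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print("observed order:", np.log(e_coarse / e_fine) / np.log(2.0))
# A healthy second-order code prints values approaching 2.0; anything else
# means the theorem's preconditions are not being met.
```

If the printed order does not match the method’s design order, then the code, the method, or the problem setup is broken, which is exactly what the exercise is for.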

Even with the vast availability of computing power today, numerical error persists. The technique of solution verification exists to measure this error. More importantly, the practice checks whether applying more computing gives better answers. It makes sure the promise of the equivalence theorem is delivered. This requires attention to detail and focus, and it is technically more complex in many cases. The practice itself is still a place where progress can be made, but that progress has stagnated over the past couple of decades. The whole thing is also a lot of work and thus expensive. Right now, laziness and cheapness are winning.
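Here is a minimal sketch of the standard solution verification recipe (Richardson extrapolation from three mesh levels, in the spirit of Oberkampf and Roy). The three QoI values are made-up placeholders standing in for coarse, medium, and fine runs of a real code.

```python
# Solution verification sketch: estimate the observed order and the
# remaining numerical error from three systematically refined solutions.
import math

def observed_order(f_fine, f_med, f_coarse, r=2.0):
    """Observed convergence order for a constant refinement ratio r."""
    return math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)

def richardson_estimate(f_fine, f_med, p, r=2.0):
    """Extrapolated zero-mesh-size value and a fine-grid error estimate."""
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    return f_exact, abs(f_exact - f_fine)

# Hypothetical integrated QoI from coarse, medium, and fine meshes:
f_coarse, f_med, f_fine = 1.120, 1.050, 1.020
p = observed_order(f_fine, f_med, f_coarse)
f_star, err = richardson_estimate(f_fine, f_med, p)
print(f"observed order p = {p:.2f}")            # about 1.22 for these values
print(f"extrapolated QoI = {f_star:.4f}, fine-grid error ~ {err:.4f}")
# If p is far from the nominal order, or the sequence is not monotone,
# the calculation is telling you it has not converged.
```

This is cheap arithmetic on top of the runs; the expense is in running the extra mesh levels and being willing to look at the answer.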

Why Is It Avoided?

There is a virtually universal tendency to ignore numerical error in practical calculations. This can be traced back to the same neglect of error in the verification of methods. The ubiquitous Sod shock tube, the “Hello World” problem for shocks, is rarely subjected to any sort of accuracy or order-of-convergence testing. The same applies to virtually every shock problem used to demonstrate and underpin practical calculations. In those calculations, convergence is actually generally first order (or less). The truth is that the level of error varies a great deal. Often, more expensive high-order methods are less accurate with shocks and quite expensive. All of this contributes to stagnation in the advancement of practical methods. A small sketch below shows how cheap such a test can be.
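As a sketch of how cheap this is, here is a convergence test on a single Burgers shock, where the exact solution is a step moving at speed (uL + uR)/2. The scheme and parameters are illustrative choices for this post, not from any particular code.

```python
# Measure the observed L1 convergence order of a first-order upwind/Godunov
# scheme on a single right-moving Burgers shock.
import numpy as np

def burgers_shock_error(n, t_final=0.25, u_left=1.0, u_right=0.0):
    x = (np.arange(n) + 0.5) / n              # cell centers on [0, 1]
    u = np.where(x < 0.25, u_left, u_right)   # initial jump at x = 0.25
    dx = 1.0 / n
    t = 0.0
    while t < t_final:
        dt = min(0.5 * dx / max(abs(u).max(), 1e-12), t_final - t)
        f = 0.5 * u**2                  # upwind flux, exact here since u >= 0
        u = u.copy()
        u[1:] -= (dt / dx) * (f[1:] - f[:-1])
        t += dt
    s = 0.5 * (u_left + u_right)              # Rankine-Hugoniot shock speed
    exact = np.where(x < 0.25 + s * t_final, u_left, u_right)
    return dx * np.abs(u - exact).sum()       # L1 error

errors = [burgers_shock_error(n) for n in (100, 200, 400, 800)]
for e_c, e_f in zip(errors, errors[1:]):
    print("observed L1 order:", np.log(e_c / e_f) / np.log(2.0))
# Expect roughly first order in L1. The pointwise error at the shock never
# converges, which is exactly the point made above.
```

Ten lines of bookkeeping around a solver answers the question almost nobody asks.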

The upshot is that we don’t confront the very real problem, and we avoid the very real opportunity of higher-resolution methods benefiting methods in general. This contributes to stagnation in methodology and forfeits the uplift in efficacy and efficiency that improved numerical methods would bring. We see yet another example of systematic avoidance of reality, which leads to a lack of progress even when the answers are lying right in front of us. Of course, if you never do any error estimation or convergence testing, you’ll never know whether the calculation converges at all. Convergence and improvement with more computation are simply assumed, as if they were an “iron law”. They are not.

The whole issue of non-convergence in solution verification is a very ugly and deep topic. For the most part, it is not explored in the literature, nor are its consequences appreciated. When convergence fails, those results are often simply hidden and never discussed, but they carry a much deeper and more insidious message. Most days the failure is merely inconvenient, but it points to methods and models in the code that need to be examined and improved. The lack of improvement leaves the supercomputers that receive so much attention not delivering the benefits they promise.

What are the excuses for this deplorable practice?

  • The most common thing is that the analyst doesn’t even think about it. They just use the same resolution they always have or that their buddy used in doing a similar calculation. The excuses only get worse from there.
  • The next excuse is that they use as much computing as they can possibly afford. They assume that because it’s as much as they can afford, it’s the best calculation they can get.
  • They never check to see whether a coarser resolution would actually provide an adequate solution. Those relatively cheap coarse solutions might produce a sequence of calculations that is well-ordered and convergent.
  • A step worse are the analysts who absolutely know that the calculation will not converge to anything. They mindfully avoid looking at the question because they know the answer is bad. This is far too common and often related to acknowledged shortcomings in the simulation software that nobody looks at, nobody is willing to fix, and that simply persist endlessly.

As you go down this hierarchy of repugnant behavior, you see a constant unwillingness to ask deeper questions or demand higher standards from the software. Ultimately, the analysis of meaningful systems suffers. The credibility of the analysis suffers. Progress towards better results suffers. In the long run, humanity suffers from the lack of progress towards better science and engineering. In today’s world, all of these excuses hold because they are expedient and don’t cost anyone extra money. Asking hard questions or finding problems in simulation software is something nobody wants to pay for, and as a result, the bad behaviors are chosen out of expediency.

“We can only see a short distance ahead, but we can see plenty there that needs to be done.” ― Alan Turing

The Deeper Issues Underneath

“Failure is constructive feedback that tells you to try a different approach to accomplish what you want.” ― Idowu Koyenikan

A common way for the equations to be inconsistent is neglect of the mass equation. Remarkably, the mass equation is violated, which is striking because conservation of mass is the most fundamental conservation law. When it is violated, all the conservation laws are trashed. This happens most often in cases where conservation of mass is implicit in the equations and simply assumed to hold. Because the equation is not explicitly evident, a huge mistake takes root.

For example, in a Lagrangian calculation, mass is conserved, but it is implicit in the positions of the mesh nodes. Thus, there is a transformation between the mesh motion and the conservation of mass. If those positions get scrambled by shear in a flow, the transformation can become ill-posed. This can be fatal for a calculation, and it causes panic in the code user. The simple answer is to delete the misbehavior. It works, but it also destroys the consistency of the calculation. Convergence toward the correct solution is destroyed with it. The equivalence theorem is systematically violated. A small sketch of this mechanism follows.
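A minimal 1D illustration of the mechanism (a hypothetical toy, not any production code): the cell mass is fixed, the density is recovered from the node geometry, and crossed nodes make the map nonsense.

```python
# In pure Lagrangian form, mass per cell is set once; density comes from
# the cell volume, i.e., from the node positions alone.
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)            # 10 Lagrangian cells
mass = np.ones(10) * np.diff(nodes)          # fixed for all time

# Smooth node motion: mass is conserved exactly, by construction.
moved = nodes + 0.02 * np.sin(2.0 * np.pi * nodes)
rho = mass / np.diff(moved)
print("total mass conserved:",
      np.isclose(mass.sum(), (rho * np.diff(moved)).sum()))

# Strong shear scrambles node order: a cell inverts, its "volume" goes
# negative, and the recovered density is meaningless.
moved[5], moved[6] = moved[6], moved[5]
print("cell volumes :", np.diff(moved)[4:8])
print("density      :", (mass / np.diff(moved))[4:8])   # negative density
# Deleting the offending cell "fixes" the crash, but silently deletes mass
# and with it the consistency of the governing equations.
```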

In an incompressible flow, the divergence constraint can be derived from the mass equation; in other derivations, a pressure equation is used, which is probably the more consistent way to do it. Conservation of mass can also be lost when mass is tracked implicitly in a multi-material flow, as in level set methods, where an interface delimits the area or volume. The area or volume is then proportional to the mass, just as in the Lagrangian equations. In some cases (like standard level sets), the area or volume is not preserved, and mass is lost or gained (mostly lost). Again, the most fundamental conservation law is not maintained. The equivalence theorem is violated. Convergence is not assured by construction.

A worse and more pernicious version of this arises in shock calculations at Sandia, where troublesome material states are deleted. This is done to avoid issues with equations of state as material evolves into extreme conditions, and it is done in several codes at Sandia. I pointed out to management that this creates an inconsistency in the fundamental governing equations and a violation of the Lax equivalence theorem. I did this nearly 20 years ago and again more recently. The response was “meh, we don’t care”. The technique is really useful for making the code “robust”, just like the excuse for deleting elements described next. The calculation runs faster and to completion. Physical inconsistency apparently isn’t a concern. This means solutions using this approach are likely bullshit.

A common approach in Sandia codes and other solid mechanics codes is the practice of element death, where mass is deleted when the Lagrangian representation becomes troublesome. The trouble is the inversion of elements or cells as their nodes move, and it almost always happens in shear. Shear is physically ubiquitous because of turbulence. Deleting elements destroys the consistency of the equations, and any sense of convergence under mesh refinement is destroyed with it. Yet this practice persists and is commonly used for extremely high-consequence calculations across a number of important settings, industrial and military. Again, the lack of convergence is seemingly ignored by practitioners. The behavior of the analysts indicates they know this, but the practice persists.

The reliance on this technique is appalling when more suitable approaches exist that would converge: most notably, remapping, remeshing, and the methods used in arbitrary Lagrangian-Eulerian (ALE) calculations. This well-developed, mature technology would take care of the problem, but it is being ignored; convergence is simply unimportant. I would counter: it’s far more expensive to buy a very expensive supercomputer that is rendered incompetent by codes and methods put on it that have no consistency with the fundamental governing equations. These concerns have fallen on deaf ears as we simply persist in using these methods. These practices and their persistence are among the most vexing and unsupportable things I’ve seen. It isn’t as bad as the lack of ethics, but it is definitely incompetent. A toy sketch of a conservative remap follows.
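For flavor, here is a toy 1D version of what a conservative remap does: move the mass from a distorted donor mesh onto a clean target mesh by exact overlap integration, so total mass is preserved to round-off. This is a first-order sketch for illustration, not any specific ALE package.

```python
# First-order conservative remap between two 1D meshes (node arrays).
import numpy as np

def conservative_remap(x_old, rho_old, x_new):
    """Remap piecewise-constant density by donor/target cell overlaps."""
    mass_new = np.zeros(len(x_new) - 1)
    for i in range(len(x_old) - 1):            # donor cells
        for j in range(len(x_new) - 1):        # target cells
            overlap = min(x_old[i + 1], x_new[j + 1]) - max(x_old[i], x_new[j])
            if overlap > 0.0:
                mass_new[j] += rho_old[i] * overlap
    return mass_new / np.diff(x_new)

rng = np.random.default_rng(0)
x_old = np.concatenate(([0.0], np.sort(rng.uniform(0, 1, 9)), [1.0]))
rho_old = 1.0 + 0.1 * np.arange(10)            # ragged 10-cell donor mesh
x_new = np.linspace(0.0, 1.0, 11)              # clean uniform target mesh
rho_new = conservative_remap(x_old, rho_old, x_new)
print("mass before:", (rho_old * np.diff(x_old)).sum())
print("mass after :", (rho_new * np.diff(x_new)).sum())   # identical
```

Nothing here is exotic; real ALE remaps are higher order and multidimensional, but they preserve mass for the same structural reason.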

To get concrete about the negative side effects: one is the acceptance of poor, inaccurate, or even non-convergent numerical methods. In their wake, you often get a calibration of the calculation to data that folds the effects of mesh resolution into the model. The model is then mesh-dependent instead of consistent. To some extent, this is inevitable for the most challenging physics, but these are not cases where the physics is too difficult. This is mindful ignorance, and it is downright lazy. We know how to avoid these problems; we just are not doing the work. Encoding the poor results into the model cuts off any improvement in the numerical methods or the code at the knees. Simulations relying on these techniques are simply not credible.

Now that we’ve gotten the willful ignorance and incompetence out of the way, we can move on. There are other, more banal reasons for the lack of convergence.

Deep theoretical questions remain unanswered regarding calculations of things that look and feel like turbulence and may indeed be turbulence. There are also gaps in how we practically model a host of important systems, and they are not being examined. Our modeling is thus more uncertain than it should be. These issues are not being investigated sufficiently, so progress is not being made. Granted, there are some huge theoretical challenges too: the mathematics and numerical methods are not yet up to the challenge. The bigger issue is that standard practice is not forcing the issue. The obvious fundamental questions are not being asked.

In most practical calculations, the quantities of interest are not the full field, so convergence in various norms makes little sense, and the numerical analysis is not fully applicable. The metrics are integrated measures like energy released. Worse yet are specific quantities at given locations, often dictated by the application. Another clear example is maximum or minimum conditions, such as temperature. This case is adjacent to the L-infinity norm, in which convergence tends to be difficult to achieve. We know that mins and maxes are incredibly poorly behaved, but they are also extremely important for safety and for thresholds we do not want materials to reach. All of this demands a much sterner and more committed scientific exercise in producing reliable, credible calculations. A small demonstration of the gap between integrated and pointwise measures follows.
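A tiny demonstration of the gap: represent a step function by its cell averages and watch the integrated (L1) error converge while the pointwise (L-infinity) error refuses to. This is a toy built for this post, but it is the same pathology that afflicts mins and maxes.

```python
# L1 vs L-infinity behavior for a step represented by cell averages.
import numpy as np

def errors_for(n, jump=1.0 / 3.0):
    x_edges = np.linspace(0.0, 1.0, n + 1)
    # Cell average of u(x) = 1 for x < jump, else 0
    avg = np.clip((jump - x_edges[:-1]) * n, 0.0, 1.0)
    centers = 0.5 * (x_edges[:-1] + x_edges[1:])
    exact = np.where(centers < jump, 1.0, 0.0)
    err = avg - exact
    return np.abs(err).sum() / n, np.abs(err).max()

for n in (10, 100, 1000, 10000):
    l1, linf = errors_for(n)
    print(f"n={n:6d}  L1={l1:.2e}  Linf={linf:.2f}")
# L1 shrinks like 1/n; Linf stays pinned near a constant in the jump cell.
```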

If we don’t ask the basic questions of the calculations, progress certainly won’t happen by magic.

In general, if we have turbulent or turbulent-like phenomena, we can assume that parts of the correct solution vary substantially. This variability depends on things like the initial conditions and their degree of homogeneity. It is reasonable to believe that the initial conditions in a calculation are far more quiescent and homogeneous than reality. The impact of that is not accounted for in calculations.

Moreover, a single calculation is just one draw from a distribution of realizations. Often, the experiment we are comparing to is a similar single draw. We should have no expectation that the experiment and the calculation are the same realization. Appropriate variability over a set of initial, boundary, and material conditions would lead to variable outcomes and ultimately a PDF of the solution. One would then treat solution verification as asking how this PDF changes as a function of mesh resolution, independent of all other differences in initial and boundary conditions. For the most part, we are not thinking of simulations like this at all. A sketch of this mindset follows.
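A minimal sketch of that mindset: run an ensemble of perturbed initial conditions at each resolution and ask whether the distribution of the QoI converges under refinement. The toy_qoi function below is a stand-in for an expensive code (its h-dependent term mimics mesh error); every name and number here is illustrative.

```python
# Solution verification on a PDF: compare QoI distributions across meshes.
import numpy as np

rng = np.random.default_rng(1)

def toy_qoi(ic, h):
    """Stand-in for 'run the code at mesh size h from initial condition ic'."""
    return np.sin(3.0 * ic) + h * (1.0 + 0.5 * np.cos(7.0 * ic))

def ks_distance(a, b):
    """Kolmogorov-Smirnov distance between two empirical samples."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

ics = rng.uniform(0.0, 1.0, 2000)         # ensemble of initial conditions
samples = {h: toy_qoi(ics, h) for h in (0.2, 0.1, 0.05)}
print("KS(h=0.2,  h=0.1 ):", ks_distance(samples[0.2], samples[0.1]))
print("KS(h=0.1,  h=0.05):", ks_distance(samples[0.1], samples[0.05]))
# A shrinking distance says the PDF itself is converging with the mesh.
# Comparing one calculation to one experiment can never tell you that.
```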

The upshot of all of this is a general lack of credibility in computational simulations that is unnecessary today. It leaves the vast investments in supercomputing unrealized in terms of their potential impact on science and engineering.

If reliable mesh convergence studies were done as part of the analysis, the benefits would be immediate. You would expose problems, define opportunities for improvement, and produce far better science. All of this would serve the benefits of computer simulation far better than today’s lazy practices. We have the potential to make this technology far more accurate and capable of producing great outcomes for society. The problem is that the lazy practices are accepted because they’re cheap. High quality and doing things right isn’t cheap, but in the long run it produces far higher value, value that we are missing today.

“Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won’t come in.” ― Alan Alda

The Path Forward

“If you’re going to say what you want to say, you’re going to hear what you don’t want to hear.” ― Roberto Bolaño

The way to make things better is at once simple and complex. A first step is to start doing the fucking work! Simply doing the (required) tests and asking questions is the way forward. Practically speaking, a number of things need to change to do this. We need management support and funding support for it. Leadership needs to be willing to support negative results and appropriate responses. We need a genuine effort to cast failures as opportunities to learn and grow.

There needs to be trust that negative results are not evidence of malfeasance or incompetence. We need project and program management that allows the trajectory of work to adjust to results. Work is not a straight line. We are not building a bridge or repaving a road. We need a commitment to progress and advancing knowledge. All of these things are lacking today. We need a different spirit of work. Above all else, we need trust in each other.

“The Four Agreements

1. Be impeccable with your word.

2. Don’t take anything personally.

3. Don’t make assumptions.

4. Always do your best.” ― don Miguel Ruiz

Reality Bites Back

22 Sunday Feb 2026

Posted by Bill Rider in Uncategorized

≈ Leave a comment

tl;dr

There is a broad tendency to reject reality today. It can be seen across society and our institutions. One big reason is that reality is unrelenting and brutal. Reality bites back. It has become fashionable to try to define reality according to the wishes of leadership, almost willing their desires into being. Often, the way to wish this reality into being is through virtual systems. The world is online, and the online image can be crafted to meet those desires. The populace engages so virtually that it encourages this. The same is true in science, where virtual simulations or AI provide a compelling view that need not match actual reality. The problem with this approach is that it avoids the innovation and problem-solving needed for progress. The harsh feedback from reality is needed to adjust and push back on wrong approaches. Too often today, we accept what is wrong and reject the evidence of reality.

“Reality continues to ruin my life.”― Bill Watterson

The Attraction of Virtual

“The real world is where the monsters are.”― Rick Riordan

The escape from reality is driven by our increasingly online lives: social media, email, and the internet. This seems to have driven a change in how leadership approaches dealing with reality. By dealing with it, I mean ignoring it. They increasingly look to shape the online narrative and worry about the exposure of their lies, issues, and problems there. In the real world, all of this exists but can be hidden. They fear the capacity of the online world to amplify the signal of reality.

I have seen leadership work to shape the online narrative and avoid any subtlety or nuance of reality. This is a troubling trend that leads to a de facto ignorance of reality. They are systematically ignoring problems that exist, simply by shaping the messaging so that problems are ignored instead of identified and confronted. We as a society need to overcome this, or we will be swallowed by the virtual world. Then the real world will bite back in a way that could be fatal.

I realize that my own writing online has pushed back against this trend. The reaction of my former workplace to my writing is evidence of how uncomfortable management is with the actual world. The result is an astonishing degree of hidden power and lack of transparency. In a sense, the lack of transparency is now worse in the online world than before. In the current time, it is clear that societal leadership is hiding a lot. The prime example is the behavior of the “Epstein class”. There we see how the rich and powerful act behind closed doors, and it’s appalling, criminal, and unethical. The same kind of unethical and damning behavior is present to lesser degrees throughout the rest of society. All of this seems to be a consequence of the dominance of the online, virtual world.

Science Cannot Be Virtual

The conduct of science has been infected by this. One key symptom of the problem is the prevalence of uncertainty quantification (UQ) over verification and validation (V&V). UQ has become dominant lately. V&V is fading from practice. The reason is simple. UQ only needs a virtual model to give voluminous results. V&V imposes a harsh reality on models. V&V finds problems and shortcomings that require plans to change. UQ just gives results galore. Why invite problems with V&V, when UQ makes you look great?

The answer is that V&V is the scientific method, and UQ without V&V is not.

The appeal is a purely virtual world with no connection to reality. Verification and validation are both connections to reality: verification to an analytical, mathematical reality, and validation to experiment. Both are hard and unyielding. Both find problems and demand progress and improvement. More recently, machine learning, particularly AI, has joined UQ. Both push strongly toward virtual worlds with no connection to reality (other than training data). They give results without the difficulties that come with reality, whether physical or analytical.

All of this goes together into a flywheel, and without the notion of reality, the flywheel falls apart. Reality provides the feedback on the efficacy of both the theory and our knowledge of the world. It also provides surprises and the necessary push to advance things. It provides the feedback that things either work or don’t, and makes sure that advances in science are earnest and actually correct. What I’ve seen is that these notions are being increasingly rejected.

“If there’s a single lesson that life teaches us, it’s that wishing doesn’t make it so.” ― Lev Grossman

For example, in computational work there is a seeming ubiquity and embrace of UQ as an activity unto itself. UQ should be a key part of the validation of a model. Instead, UQ is often untethered from any sort of reality: no verification or validation is applied to it, and it simply exists by itself. This is output from a virtual world, and without the feedback of reality it will produce results that have no bearing on anything we see. There are good examples of this in the literature. Perhaps one of the most august comes from the National Ignition Facility. There, they did an extensive UQ study with their simulation code before they had even fired the lasers and attempted a fusion experiment. They produced a magnificent body of work that showed the possibilities of what the laser would produce. The work produced a PDF of outcomes, with target yields running from 900 kJ to 9 MJ. Then they ran the actual experiment, and the result was a 300 kJ shot. This massive UQ study, with an incredible UQ framework and pipeline, supercomputers, and cutting-edge simulation code, produced a PDF spanning an order of magnitude that did not even contain the reality. It showed how useless UQ is without V&V. Reality crushed the virtual.

It would be years before the targets actually landed on the original PDF. What this really says is that the initial study was deeply flawed. If you look at the study, there is no hint of verification work either. The code was never attached to the reality of interest, and when it finally was, the results were damning! There was no check of whether the code actually reproduced the theory correctly, and validation data certainly was not available or applied to keep the work tethered to reality. Only when the experiments were actually conducted did they learn the important details. This entire troubling episode exists in a world where honest, hard-nosed peer review is in free fall. The virtual world much more readily gives people the answers they want instead of the answers they need. Without hard-nosed peer review providing feedback, the entire enterprise risks going in the wrong direction without the adjustments needed to stay connected to actual reality.

In the case of the NIF study, it’s my belief that the modeling was done in a wholly optimistic way. This avoided a number of realities that should have been evident to the people conducting the study, first and foremost the reality of turbulent mixing, which is ubiquitous in nature. Much of the history of fusion capsule design has worked from the premise that this mix could somehow be controlled and tamed, making the conditions for fusion far easier to achieve than they are in reality. Without validation data to tie the work to reality, the modelers simply followed optimism to its usual outcome: results far rosier than any reality they would be able to visit. The hope was that the optimistic results would allow money to rain down on the program at critical junctures when funding was threatened.

This dynamic is most acute with AI and the belief that AI can somehow produce massive returns in the scientific world. This perspective needs to be leavened by a good dose of reality. It isn’t that AI can’t be a tool we use to advance science, but rather that AI is still embedded within the scientific method and does not change its precepts. The scientific method is ultimately tied to observing and comparing to objective reality in two modes:

  1. The key mode is observation and experiment, where data from the real world is examined, often to test theories. For computational modeling, this is validation.
  2. These theories almost always take the form of mathematics. Comparison with the theory computationally is verification.

A large part of this dynamic revolves around the focus on supercomputers. As I’ve said many times before, supercomputers are an unabashed good. The concern is their priority compared to all other activities. Computational science is always a balance between what computers offer and what the rest of science offers to the enterprise of computing results. The current focus on AI only amplifies these concerns: AI is trained from real-world data but is not tethered to a theory. Thus, in simple terms, mistaking correlation for causation becomes an outcome that is invariably achieved. This must be countered by strong theoretical work that provides the feedback needed to make sure we actually understand what we are doing.

The same sort of bullshit optimism is present in programs that promote the extensive use of supercomputing as the ticket to modeling efficacy. We are seeing more of the same with AI, where computing is viewed as a one-size-fits-all cure. The actual issues are far deeper scientifically and require a much more balanced approach to get optimal solutions. In today’s world, it seems that if you want funding, you avoid reality. If you want progress and success, reality is something to be wrestled with, like the brutal opponent it actually is. Unfortunately, bullshit gets better funding for these institutions than real progress and real success. We should be wary, as AI is a bullshit factory the likes of which we have never seen before, and we are vulnerable to it. What is notable about the science programs associated with AI is that not one iota of V&V is present in the work proposed. It’s all stunts and showboating and little to no actual science. The injection of V&V would bring the scientific method to bear on them.

“Reality is that which, when you stop believing in it, doesn’t go away.” ― Philip K. Dick

It Goes Past Science: The Trust Trap

“It’s funny how humans can wrap their mind around things and fit them into their version of reality.” ― Rick Riordan

One issue is the present-day appeal of virtual worlds compared to reality. Almost anyone you talk to says that reality sucks today, that it’s really terrible, and that everything feels out of control and doomed. The virtual world offers escape from all this. My concern is that, after years and years of this virtual world, we will no longer be able to deal effectively with reality. Eventually, some feedback from reality will be so brutal that it will undermine and destroy whatever trust still exists. This could create even further damage and a death spiral for the United States. This is true for science and other key institutions. Our government, corporations, and universities are all vulnerable to it.

One of the huge current developments is AI. It is receiving massive levels of investment across society. Science is one avenue of development where the issues are evident. AI has incredible potential for business and corporate interests. In all cases, AI is both a huge opportunity and perhaps an even greater danger. We see a relatively uniform approach of improving AI via computing hardware while the intellectual basis languishes. We are ignoring the high-risk, high-payoff routes to progress; the historical evidence points to mathematics and algorithms as the way to success. The real-world consequences of this approach endanger the health of the economy. The whole AI stack is well marbled with bullshit. The computing-forward approach is questionable as an effective path to improving AI; all the evidence points toward it being grossly inefficient. Thus, the massive investment will not yield an effective payoff. This reality may be catastrophic, with shades of the dot-com and 2008 economic meltdowns.

“Thinking something does not make it true. Wanting something does not make it real.” ― Michelle Hodkin

The misguided focus on computing is a reflection of our societal trust deficit. A better path to AI requires a level of trust we cannot muster today. In reaction to the trust deficit, we follow banal paths that are easy to sell to the public. Lots of computing is simple and seems plausible to the naive layperson. Computers are the tangible and concrete objects associated with modeling or AI. This becomes the simple projection of the virtual onto the real world. The actual path to progress is esoteric and far harder to describe. Computing is a part of it, but it cannot succeed without other investments. This is true for the scientific enterprise, whether for modeling or for developing AI. Today, our efforts completely lack any balance. This lack of balance will doom them both. We lack the trust necessary to succeed.

One of the key issues with corporate behavior is that trust is getting worse. The brutal focus on maximizing shareholder value has empowered social media to annihilate societal trust. AI is far more powerful, and the same forces will take trust even lower. We are risking a downward spiral that could be a catastrophe. It is time to turn away from this “doom loop”. We need to take action to improve trust and defuse the damage from both social media and AI. Triggering an economic meltdown would be damaging both materially and psychologically. It would be reality reminding the virtual world of its power, a worthy and brutal reminder of the need for appropriate focus.

The brutality of reality explains why leaders desire to live in a virtual world they can shape. The character of the virtual world can be steered to strongly confirm their biases. The leaders have a story of success, and the virtual world will confirm it. They work to make sure you aren’t getting new information that would question their truth. This creates a situation where the gap between perception and reality grows larger and larger. When reality finally intrudes, you see a rupture of trust. This has happened over and over during the current era. The consequence is the loss of trust in almost every institution our society relies upon. We all see it in our current politics. I saw it from inside the National Labs.

It also has a more pernicious impact. New information from reality is the source of innovation and progress; it is also the inspiration for plans to change and adapt. Thus, the denial of reality creates stagnation. The difference between stagnation and decline is subtle, and one can easily build up the conditions for decline and decay. The use of earned value to manage science is a clear sign of a lack of trust. In recent years, this concept has been demanded for managing work at the Labs. Earned value is appropriate for construction projects: well-defined work you’ve done over and over again. It is completely inappropriate for science or even cutting-edge engineering. This management model rejects reality and plays into the trap of virtually defined success.

We constantly see the rejection of reality by our leadership in all settings. I saw it regularly in how lab leadership talked about what was going on internally. All programs were wildly successful. This works as long as they are talking about something you don’t know personally. Then they would talk the same way about something you do know. Suddenly, you are confronted with their bullshit. The correct conclusion is to question everything else they say. If they bullshit you about the things you know, why trust them on the things you don’t? Increasingly, the answer is that they can’t be trusted. This is the downward spiral of trust.

You see it with various corporations and how they talk about their own products. They are always looking to craft a message to maximize shareholder comfort. Then you see it with politicians who never take responsibility for anything and often outright lie and bullshit their way through everything. They are trying to spin every single event into a frame that they like. No one is confronting the objective reality. The actual truth becomes this game of “hot potato”. The virtual social media world becomes the vehicle for all of this bullshitting. Reality can be rejected through control, memes, and distraction.

This is perhaps most vividly shown in the spin and narrative around the two shootings of civilians in Minneapolis. Both were outright murders by paramilitary thugs. Instead, the government characterized the victims as terrorists, people who deserved to be executed and got the fate they earned. This served the purposes of the administration through a rejection of reality. In all of this rejection of reality, we lose the ability to adjust and change course, to modify actions so that bad things stop happening. This is true at every single level. Whether it be the laboratories I worked at, corporations, or the policies of the nation itself, all need adjustment. Without the adjustment that reality provides, they are careening towards even bigger disasters.

“Either you deal with what is the reality, or you can be sure that the reality is going to deal with you.” ― Alex Haley


At the NNSA laboratories, there is the prospect of a renewed nuclear arms race now that the New START Treaty has ended. The President has hinted at starting nuclear testing again. A resumption of nuclear testing, and/or active development of nuclear weapons, means harsh realities are coming. We cannot control or avoid them much longer. Ultimately, we will have to confront these realities in a domain where reality has been rejected for a long time. The danger of bad outcomes has escalated to a dangerous level. Today, we have Schrödinger’s nuclear stockpile: it both works and doesn’t work as intended. Until we open the box, we won’t know the answer. We are about to open the box.

Are we ready for this? Not from what I observed. Reality is going to kick our ass. I pray it doesn’t kill us.

“Life is a series of natural and spontaneous changes. Don’t resist them; that only creates sorrow. Let reality be reality. Let things flow naturally forward in whatever way they like.” ― Lao Tzu

References

Oberkampf, William L., and Christopher J. Roy. Verification and Validation in Scientific Computing. Cambridge University Press, 2010.

Haan, S. W., J. D. Lindl, D. A. Callahan, D. S. Clark, J. D. Salmonson, B. A. Hammel, L. J. Atherton et al. “Point design targets, specifications, and requirements for the 2010 ignition campaign on the National Ignition Facility.” Physics of Plasmas 18, no. 5 (2011).
