
The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent



American Science is in Free Fall

Thursday, 24 July 2025

Posted by Bill Rider in Uncategorized


tl;dr

Since the end of World War 2, the United States has led the World in Science and Technology. Today, that leadership is gone. The decline has been decades in the making, and it has now turned into a free fall. In my own work, I have witnessed our almost willful abdication of the throne. As I discovered, this trend is visible across many disciplines. Through a combination of arrogance, mismanagement, and outright incompetence, American supremacy decayed. All of this revolves around a loss of societal focus coupled with waning trust. COVID-19 proved to be a near-death blow to societal support. These trends now combine into a National suicide pact for scientific superiority under Trump. The lead that had already slipped away is now in full retreat and absolute surrender.

“Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’” ― Isaac Asimov

Who was leading a year ago?

[Photo: Met Lab alumni, 1946; Enrico Fermi first row, left; Leo Szilard second from right. This team worked with Fermi during the Second World War to achieve the first self-sustained nuclear chain reaction on December 2, 1942, at Stagg Field, University of Chicago. Credit: Digital Photo Archive, Department of Energy, courtesy AIP Emilio Segrè Visual Archives; public domain.]

China was already ahead of the USA before Trump took office. Losing the lead was decades in the making. It is well documented in a recent report by the Australian Strategic Policy Institute (https://www.aspi.org.au/report/critical-technology-tracker/). “Who is Leading in Critical Technology” shows that China leads in 37 of 44 important areas. This report confirmed signs I’d been seeing in person for years. China’s ascent was rapid and absolutely stunning to watch. In my own area, they went from laughable to world-class in a little more than a decade. When I spoke to scientists from completely different fields (one area of chemistry specifically), the story was the same.

Why did I frame this section around a year ago? A year ago, the scientific community in the USA was far healthier than it is today. The Trump administration has launched an all-out assault on Science. The core of American science is being slaughtered in plain sight. The top universities in the country are under assault. The key agencies for research funding are being butchered as NIH, NSF, and NASA budgets are cut by huge amounts. Where the money is going, the science production is highly suspect; it is mostly the defense sector, which already has huge problems.

The upshot is that American science and technology were in poor shape a year ago. The past year has seen the Nation decide to mercy kill them. If we were out of the lead a year ago, we have receded even further. Worse yet, we have no plan or way back. The slow, steady decline of science has turned into a free fall. This is a massive crisis, and American National Security and economic prosperity are in peril. This development puts American lives at risk. It assures that Americans of the future will be poorer and live shorter lives. The loss of scientific supremacy came from incompetence, but the current approach is criminal negligence. Rather than fix the underlying problems, the current trajectory is to dig the hole even deeper.

“America says it loves science, but it sure as hell doesn’t want to pay for it.” ― Hope Jahren

What Is Killing Science?

If I look back at my professional life, which spans nearly 40 years, the decline is obvious. I began to see it shortly after I arrived at Los Alamos in 1989. At first, Los Alamos was a godsend, and I felt magnificent. It was a huge upgrade from my University (third-tier New Mexico). Los Alamos would be an upgrade from almost any university, save a select few (Harvard, MIT, Caltech, …). That said, the signs of decline were almost immediately evident. Los Alamos had been in decline since roughly 1980, if not earlier.

[Photo: The Three Mile Island Nuclear Power Plant near Middletown, Pennsylvania, April 11, 1979, site of the March 28, 1979 accident. Credit: AFP Photo / US National Archives.]

To any keen observer of history, 1980 stands out, with the election of Reagan and the beginning of an assault against the government. The 1970s were more likely the origin of the decline, with a host of ills from Watergate, the end of the Vietnam War, Love Canal, Three Mile Island, … We felt a collective withdrawal of faith and trust in the Nation as represented by the Government. Reagan and the GOP turned this into a full-on attack. Trust and confidence in science were part of the decline, and new constraints were placed on science.

Part of the damage Reagan produced came from the ascendancy of the “maximizing shareholder value” philosophy. This became central to American governance, whether in the public or private sector. The use of business principles based on this scarcity approach became how the government worked, too. Thus, the business approach driving massive inequality was applied to science and technology. The other side effect of the greedy business philosophy was the decimation of industrial science. Say goodbye to the IBM and AT&T labs, and set the stage for the more recent debacle at Boeing. It is a philosophy that breeds incompetence and fuck-ups. We see the rank and file worker or scientist devalued. Managers are now the valuable ones. Gone are pensions and the value of technical experts.

“The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance” ― Carl Sagan

This was reflected in Lab leadership and governance, which declined. Reagan also took on the Soviets, and this shielded Los Alamos to some degree. The Star Wars funding also blunted the damage. When the Cold War ended in 1989, marked by the Berlin Wall coming down, the real decline stepped into overdrive. Gradually, all the funding supporting the defense against Soviet aggression was lost. It was also not so simple: the Nation’s lack of trust took root as well. Funding started to come with strings attached and micromanagement by Congress.

This micromanagement has only become worse and has become a slow strangulation of science. The management is being done by people who have little to no business telling the Labs what to do. This is complemented by a host of other dictums in safety and security. Each of these other requirements takes its own pound of flesh. None of them yields any benefit for the Lab’s mission. All of them detract. Each one is a sign of the lack of trust in the Lab. Any minor screwup or failure breeds another useless bit of bureaucracy, training, or overhead. The result is an appalling cost for the Lab’s work and diminishing effectiveness. Meanwhile, nothing is done to push in the opposite direction.

The consequence is a decline in science. Los Alamos and the other NNSA labs are still bastions of competence and accomplishment compared to much of the rest of the enterprise. NASA and the Defense Labs are far worse and have taken a bigger fall from the heights. Despite vast sums of money, the Defense Labs are terrible. They rely on crap technology from the NNSA labs and can’t produce anything better. Universities are not immune either. We see less accomplishment and increased costs everywhere. The result is American science and technology losing its lead internationally. This retreat was steady and gradual until now. This year, we handed the crown to China with a virtual abdication.

“But you see, a rich country like America can perhaps afford to be stupid.” ― Barack Obama

Who is leading now?

As noted above, China now leads, and the USA may soon fall behind Europe as well once the Trump Administration’s carnage takes hold. I watched this play out in my area of expertise, shock physics. I am an expert in solving the equations of fluid dynamics, especially with shock waves. It is a key and essential science needed in defense science and nuclear weapons. I have always paid attention to the work around the world. Europeans have always been very good. If I go back 15-20 years, the Chinese were modestly competent at best, often laughably so. Their papers weren’t very good at all. Over a decade, this changed. All of a sudden, after 2015, their work was great. It was as good or better than anything in the West.

A bunch of us from the Western nuclear weapons labs attended a meeting in 2018. It was ECCOMAS in Glasgow, Scotland. We saw the impressive Chinese work in person. We were all in agreement about how impressive it was. We saw talk after talk of world-class work, including efforts that exceeded what was going on at our Labs. Soon after, I had the opportunity to engage our federal program manager. I felt like it was essential to point this out to them. The response was utter and complete indifference. Basically, they could not give a single fuck about the newfound Chinese lead in shock physics. So the USA is fucked.

[Graphic: 63 Defense laboratories and engineering centers, with ~40,000 scientists and engineers in 22 states and the District of Columbia.]

Let’s talk a bit about just how fucked we are. There is a shock code that is used broadly across the American National security establishment. It comes from an NNSA lab, is used by another Lab, and is used extensively by the DoD. It is this DoD connection that illustrates vividly how fucked we are. Let me remind you that the DoD budget is now over a trillion dollars. Yet for this vast amount of money and copious funding for decades, the DoD can’t make a shock code worth a shit. This code they all use is an abomination. This abomination is better than anything they could make, which is nothing. It is easy to use, has lots of models, and runs fast on computers (although even that is threatened by its being a Fortran code). Worse yet, the code was written when I was graduating from high school (class of ’82! Go Eagles!). It includes 1982 technology and none of the vast improvements since. To me, this is unacceptable, but to the USA, this is just what we do. So we are fucked.

This was not a sudden event. It was years in the making. On the one hand, you had mismanagement, poor investment, and different priorities in the USA. This was countered by focused support and radical progress in China. The USA simply stopped striving and allowed the tools to dull. We focused on big computers instead of a balance of computers with codes, methods, and mathematics. We quit doing the things that brought us the lead in the first place. The Chinese did those things. The USA simply surrendered, not intentionally, but through lack of care mixed with arrogance. We lost a key area of defense science that we invented. This was done by the same indifference and lack of fucks given that I encountered.

“Scientists and inventors of the USA (especially in the so-called “blue state” that voted overwhelmingly against Trump) have to think long and hard whether they want to continue research that will help their government remain the world’s superpower. All the scientists who worked in and for Germany in the 1930s lived to regret that they directly helped a sociopath like Hitler harm millions of people. Let us not repeat the same mistakes over and over again.” ― Piero Scaruffi

As other professionals have told me, this is not limited to shock physics. I talked to a distinguished Oak Ridge chemist at a cocktail party. He told me the same story in his area: marginal competence followed by a rapid ascent to superiority. The Australian study noted at the beginning highlights this happening in area after area. These stories are not isolated; the problems are systematic. This was the result of two forces working in concert: American decline and incompetence, together with Chinese focus, investment, and endeavor. Our decline is the product of decades of malpractice. Current policy is not fixing the problems, but adding malice and outright negligence to them.

What is at stake?

“Science and technology are the engines of prosperity. Of course, one is free to ignore science and technology, but only at your peril. The world does not stand still because you are reading a religious text. If you do not master the latest in science and technology, then your competitors will.” ― Michio Kaku

The stakes for the USA and the World are huge. For most Americans, the most obvious impact is national security. This feels like the highest-leverage point for pushing back. Ever since World War 2, scientific supremacy has been essential for National defense. Scientific power with nuclear weapons replaced industrial capacity for effective war-making. As drones, robots, and AI become more central, this becomes even more compelling. We already have nuclear weapons and their science as a huge leverage point for science and technology. It is precisely the moment in history when the danger feels maximized. In this moment, American supremacy is disappearing. Future Americans will be less safe and less free.

“The progress and perfection of mathematics are linked closely with the prosperity of the state.” ― Carl Sagan

The signs are more troubling than most realize. Take our industrial base in the form of aerospace. Boeing used to be the apex of engineering in the USA. Greed and short-term focus have annihilated the company’s prowess, seeding a host of disasters. This also hints at another loss from incompetent science: our prosperity. Defense science has been the root of much of our economy today. Just take the internet, a product of a DARPA project amid the Cold War. Now it has become the central backbone of the international economy. The future “internet” and tomorrow’s economy are much more likely to come from elsewhere. Future Americans will be poorer for this. The damning fact is that this is almost entirely self-inflicted.

We are amid vast and over-the-top AI hype. Every fucking Lab is going ape shit over AI. I agree that the current moment is a big deal. The real reason the Labs are all gaga about AI is the money, lots of it. Intellectually, almost no one is meeting the moment. The idea space for AI within government is close to the empty set. All this money is going to efforts to apply AI to our mission space. That said, the strategy is abysmal.

Like efforts before this in computational science, the strategy is computer-heavy and thinking-light. It has the intellectual depth of a rain puddle in my driveway. The whole moment with LLMs is grounded in an algorithmic improvement. The next big step for AI will also be an algorithmic improvement. The level of advance coming from hardware and data has a very low ceiling, but it is easy. We are taking this easy, simple path, and we will hit the wall. This is the same shallow blueprint we used to hand over computational science to the Chinese.

This will create the next AI winter unless discoveries are made. Worse yet and more dangerously, the next breakthrough won’t likely happen in the USA. It could, but probably not. China is the likely place for it. When they make it, China will own the AI future. This may be their route to owning the economic and national security future, too. The incompetence of our leadership is paving the way for their dominance, one stupid and short-term decision at a time. The writing is on the wall: the future will be Chinese, not American. If we were paying attention instead of bullshitting ourselves about how great we are, this could be stopped. Instead, we are destroying science in the USA and creating an environment where we can’t win.

“For Fauci, science was a self-correcting compass, always pointed at the truth. For Trump, the truth was Play-doh, and he could twist it to fit the shape of his desire.” ― Lawrence Wright

The benefits of science have a massive impact on our health. The fruits of science allow us to live longer and better. This comes from medicines and therapies of all sorts. The backlash after the COVID pandemic is destroying medical advances and science in the USA. This is the apex of withdrawn trust in science and the road to suffering and death for many. Future Americans will die needlessly. They already are, as Americans reject vaccines. It is pure ignorance. It is another self-inflicted wound that will harm the Nation for the foreseeable future. All of it comes from a lack of trust and some degree of irresponsible arrogance, combined with the amoral profit motive governing American medicine. Together, this is literally toxic to the health of Americans.

“It is my hope that this short book will remind all Americans that blind faith in authority is a feature of religion and autocracy, but not of science nor democracy.” ― Robert F. Kennedy Jr.

To go one level deeper, we can examine the roots of this. At the core of our problem is a rejection of expertise. The key aspect is the common thread of American dysfunction: lack of trust. This lack of trust has become a feature of America. This stems from a belief that everyone is out for themselves. No one is committed to anyone but themselves. Left to their own devices, people will choose greed. Self-interest is the core creed of America. One should ask what any American would sacrifice for others or the good of the Nation. Can you have real patriotism when you don’t trust your fellow citizens? This ultimately is the root of our decline and may destroy the nation as it has destroyed science.

“Facts do not cease to exist because they are ignored.” ― Aldous Huxley

The Dangers of Finite Thinking

Saturday, 19 July 2025

Posted by Bill Rider in Uncategorized


tl;dr

Reading a book at the right time and place is a special gift to one’s life. I took a long weekend with a drive and listened to Simon Sinek’s “The Infinite Game.” It crystallized so much about my workplace, my country, and what is going wrong with both. Sinek writes about two mindsets: one that embraces scarcity and another that embraces abundance. He relates these to game theory. Scarcity is connected to the finite game with winners and losers. Abundance is related to the infinite game. An infinite game never ends, and everyone can win. It should come as no surprise that today’s world is dominated by the finite game’s scarcity. This mindset is the foundation of so much of what ails society. It explains a lot of terrible behavior by our “leaders.”

“To ask, “What’s best for me” is finite thinking. To ask, “What’s best for us” is infinite thinking.” ― Simon Sinek

Listening to a Great Book

On a recent long weekend, my wife and I took a road trip to Moab, Utah. It is about a six-hour drive. We decided to listen to a book on Audible, and she let me pick. I chose a book by Simon Sinek, The Infinite Game. To my surprise, my wife thought the choice was inspired. Both of us were transfixed by the narrative as the ideas poured from the “pages” of the book. We found the concepts to have immense relevance and explanatory power for today’s World. I began to see the ideas living in our politics and my work. Much of this is grounded in how powerfully finite thinking defines everything in sight.

I was familiar with Simon Sinek from multiple sources. He has a weekly podcast, which often provides compelling content. His TED talks are great. He is a phenomenal public speaker. His messages are positive and compelling. I feel like they’re what I need to hear. I’d been introduced to the topic of this book on his podcast, which spurred me to buy it. The trip felt like a great opportunity to finally read the whole thing. It’s a good way to make the driving fly by while hearing new ideas.

As expected, Simon’s ideas in the book are inspiring. He weaves the narrative and viewpoint in compelling and attractive ways. They crystallize a perspective that has profound explanatory power. Certain themes reign over the modern World we all live in. Our time is immensely troubling, and much of what is bothersome has a common root. In listening to and understanding the concepts of finite and infinite thinking, we started to see some explanations. The point in the book that hit us hardest is the description of finite thinking’s impact on ethics. He notes that finite thinking breeds ethical lapses. It destroys trust. These ethical lapses and loss of trust seem to be a common unifying thread in society.


He thoughtfully points a finger at the origin of this mindset: Milton Friedman’s ideas about business have taken over. This is the idea that the only job of a business is to maximize shareholder value. Greed ends up driving decisions. It is a core philosophy that has taken control of society. We live in a selfish, self-centered time. This idea has reshaped business and government, becoming a central organizing theme. The government has chosen business ideas to improve its performance (incorrectly, I believe). Sinek points out that this idea is all about finite thinking and is relentlessly short-term focused. Friedman’s ideas have also driven a host of related problems. To make everything worse, finite thinking is central in relationships, government and politics, and science and technology. It is absolutely toxic. Focusing on the short term has created many long-term problems.

Before digging into some details, I should explain the meaning of finite or infinite thinking or games. Both of these ideas come from game theory, a powerful mental model useful for analyzing the World. The usual game people think of is a finite game. These are games with winners and losers. There is a limit on the stakes of a game, and typically, you have a single winner. A stalemate is possible, too (although Americans don’t deal well with ties).

“I am in favor of cutting taxes under any circumstances and for any excuse, for any reason, whenever it’s possible.” ― Milton Friedman

By contrast, infinite games do not have winners or a defined end. The simplest way to think about infinite thinking is play. Play is something that goes on for an indeterminate time, and you don’t keep score. Infinite thinking is open-ended and encourages creativity. It is also an underemphasized organizing principle for business. It is definitely a more appropriate principle for government and politics. It is also far better for most of our personal relationships. Operating a relationship as a finite game is transactional and superficial. It leads to abuse and consent violations.

“Culture = Values + Behavior” ― Simon Sinek

Infinite Games

“Working hard for something we don’t care about is called stress: Working hard for something we love is called passion.” ― Simon Sinek

The first thing to take note of with an infinite game is that it is much more pleasant and inspiring. Infinite games inspire passion and boundless ambition. I’ve noted that various infinite games are child’s play. Marriage is an infinite game. Trying to win your marriage (or relationship) is the road to failure. Instead, a good relationship is built by playing off each other and losing the sense of bounds on success. Sex should be an infinite game. When it’s finite (like a focus on orgasm), the sex is usually bad (or much worse for one of the people).

In business, the infinite game takes the form of a business with a core purpose. The business is about producing something of immense value. More importantly, that thing of value is always a little out of reach, but the process of striving for it is powerful. This pursuit naturally produces profit and success. For government functions producing good for society, this should be a natural fit. In the United States, it is the sense of pursuing a “more perfect union” that is never reached. This idea powered the expansion of personal rights that characterized the century after the Civil War. For institutions like those I’ve worked for, the infinite mindset is far more beneficial. It is the pursuit of principles and aims that transcend measurement. I’ve worked in National security all my life. It is work that is never done and never good enough, but can be done very well.

The benefits of infinite thinking are immense. The key is the morale and passion of the workforce. There is a clear “true North” for the entire company (or organization). The workforce is energized and believes in the vision, working tirelessly toward it. The long-term approach is never sacrificed. The tactical short-term view never overwhelms and kills the company’s direction. There are clear and seamless priorities that guide decisions. Another less well-known benefit is better ethics. Given the dearth of ethics today, that would help a lot. Bad ethics simply erode trust and produce a downward spiral, a spiral we are in the midst of.

“two ways to influence human behavior: you can manipulate it or you can inspire it.” ― Simon Sinek

Apple Computer is an archetype of this sort of thinking, and it has led to incredible products and vast profits. There are other examples, such as Eastman Kodak, which did well while it had an infinite mindset. Then the mindset was dropped, and the company imploded. Infinite thinking guides a business through success and failure and keeps attention on the long term. Sinek calls this vision the “just cause” that centers company culture. This long-term vision can be lost at leadership changes. When finite thinking takes over, the vision is lost. The short term takes over, and the company often begins to die.

Infinite games are about the pursuit of abundance with few limits on the benefits. The only limits are imposed by creativity and the laws of physics. Rather than fight over the cut of the pie, the focus is on growing the pie. The pursuit happens with defined bounds and a narrow focus, but with no ceiling.

“A Just Cause must be: For something—affirmative and optimistic; Inclusive—open to all those who would like to contribute; Service oriented—for the primary benefit of others; Resilient—able to endure political, technological and cultural change; Idealistic—big, bold and ultimately unachievable” ― Simon Sinek

Finite Games

“There is one and only one social responsibility of business–to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud” ― Milton Friedman

Finite games are all about winners and losers. The core concept is scarcity. As noted above, the prevalence of finite thinking is driven by the philosophy of Milton Friedman. Sinek goes on to explain all the ills that this mindset breeds. The vast inequality in society today is a direct result of it. A relentless short-term focus now defines everything, from the stock market to government spending. It defines politics, too. We are run by the winners-and-losers mindset. It also powers the lack of trust in society and the questionable ethics. If the mindset were simply applied to business, this would be bad enough; instead, it is everywhere. It runs society, and does so badly.

Classical sports competitions are the exemplar of finite games. Americans tend to have problems with games that allow draws. Basketball, football, and baseball almost invariably have winners and losers in every game. Football has almost eliminated ties as a possibility. This means the outcome is binary. It also encourages cheating (not that soccer/fútbol doesn’t have some too; FIFA is corrupt to the core). We’ve had scandals in recent years from pro and college football (the Patriots, anyone?) and the Astros from Major League Baseball. In many ways, this concept is innate in the character of the nation. We fail to recognize the limits and downsides of this organizing philosophy.

When I look at the research institutions I work at or am aware of, finite thinking is everywhere. Our programs have adopted the same short-term focus as business. Everything revolves around the stupid idea of the quarterly report. We are told to apply the business concept of earned value to research. It isn’t even a reasonable idea; it is a moronic one. This short-term focus has simply brought research in the USA down. Worse yet, finite thinking corrupts leaders into hollow shells.

“Infinite-minded leaders understand that ‘best’ is not a permanent state. Instead, they strive to be “better.” “Better” suggests a journey of constant improvement and makes us feel like we are being invited to contribute our talents and energies to make progress in that journey.” ― Simon Sinek

As noted above, one of the key aspects of finite games is cheating. Taken more broadly, these are encouragements of ethical lapses. In business, this looks like price fixing, stock buybacks, and monopolistic practices. You see the excuse of maximizing shareholder value as the one-size-fits-all explanation. It works if the lens for observation is the stock market. Meanwhile, the company is destroying its customer base, trust, and employee morale. Sometimes, the drive to maximize the output of employees pushes them to do unethical things. A stark example is Wells Fargo with its fake customer accounts, with the company’s management and executives looking the other way time after time. The way I see it manifest at non-profits is different, but related.

“In weak cultures, people find safety in the rules. This is why we get bureaucrats. They believe a strict adherence to the rules provides them with job security. And in the process, they do damage to the trust inside and outside the organization. In strong cultures, people find safety in relationships. Strong relationships are the foundation of high-performing teams. And all high-performing teams start with trust.” ― Simon Sinek

How Finite Thinking Creates Terrible Leaders

“When leaders are willing to prioritize trust over performance, performance almost always follows.” ― Simon Sinek

One of the key things these differences in thinking affect is leadership. Finite thinking creates awful leaders. It can even distort people with a good capacity for leadership and ruin them (I’ve seen it a lot). Infinite thinking is necessary for great leadership. By no means does it assure it, but it is necessary for greatness. With everything adhering to finite thinking these days, leadership is in crisis. This comes from the core aspects of finite thinking: a belief in scarcity, a win-lose philosophy, a short-term focus, and degraded ethics. Conversely, infinite thinking draws on abundance, win-win, a long-term strategic perspective, and ethics that build trust. The differences should be obvious to all.

“Leadership is about integrity, honesty and accountability. All components of trust.” ― Simon Sinek

When I look at leadership today, I see little integrity or honesty. There is an absolute rejection of accountability. Anyone who points out a problem is treated as the enemy (i.e., shoot the messenger). I would offer the stark example of the Governor of Texas and the President when asked about a warning system after the recent floods. In both cases, they attacked the questioners as “losers” or “evil” rather than answer the obvious question. This is a rejection of accountability. The leader reflects back an almost pathological lack of trust in those they lead. It might be most accurate to say these leaders treat those below them with outright contempt.

This is obvious in the National political leadership, whether you look at the White House, Congress, or the Courts. It isn’t everyone there, but it is the dominant behavior. The same trend appears in local politics. At my work, it is typical behavior. It creates an awful environment. It creates the outcome of failing American science. I know many of the Lab leaders personally, and many are great people. Finite thinking crushes their potential to be great leaders (the great leaders they should be).

“The best way to drive performance in an organization is to create an environment in which information can flow freely, mistakes can be highlighted and help can be offered and received.” ― Simon Sinek

It is useful to explain how this can happen. One of the major engines of dysfunction is fighting over money. This leads to backstabbing, unethical behavior, and decision-making that murders the long term. It leads to micromanagement and control. Information is hidden or withheld. If you bring them bad news, they shoot the messenger. In the wake of this is the destruction of trust. Managers act with little or no ethics but justify it by “the rules”. In this world, information is power, and it is parceled out. Doing the right thing is never the focus of decision-making. The “right thing” is always whatever produces the best money outcome.

All of this leads to an ineffective, short-term focus. It creates a toxic and ineffective organization. The one core element is a complete lack of trust. Ethics is mere messaging, in name only. Horrible behavior is allowed because the managers write the rules so that their ethical lapses are organizationally accepted. All of this stems from the application of finite thinking to managing everything. It is ruining our competence in science. It is ruining the Nation.

“Some in management positions operate as if they are in a tree of monkeys. They make sure that everyone at the top of the tree looking down sees only smiles. But all too often, those at the bottom looking up see only asses.” ― Simon Sinek

Next time, I will discuss how this short-term thinking has destroyed the advantage the USA once had in science and technology. It is clear now that China is in the lead. We gave it up through our own incompetence. Recent actions by the government are making sure the American decline is permanent and irreversible.

The Deeper Analogy of Algorithms As Recipes

Wednesday, 9 July 2025

Posted by Bill Rider in Uncategorized


TL;DR

Algorithms are like recipes for data. Algorithms take different forms. Some are basic and utilitarian, while others transform science or society. In a similar vein, there are many types of recipes. Some are the basics of human nutrition. Others are transformational and provide an almost spiritual experience. New creative recipes can make dining a different experience. In both cases, technology plays a huge role, both driving the craft and being used by it. In computing, the form and details of computers are essential. They constrain the amount of data or the form it takes. In cooking, the instruments and implements of preparation open possibilities, as do the means of cooking, providing endless creativity. This analogy is powerful in explaining algorithms to the uninitiated. It is also a far deeper analogy than usually expressed.

“An algorithm is like a recipe.” ― Muhammad Waseem

An Imperfect, but Useful Analogy

The analogy of an algorithm as a recipe is common but useful. As I will discuss, the parallels between recipes for food and algorithms for data are broader than one might initially think. Because recipes and cooking are commonly experienced, the analogy works to explain algorithms to a broad audience. Let’s go through the basic elements, and then go beyond the usual narrative. I’ll briefly repeat the usual pieces of the analogy, then go a step or two deeper.

The deeper elements can touch on culture. For recipes, this is obvious, as food is an expression of local culture. Algorithms are an expression of scientific culture and priorities. Their importance or lack thereof says a lot about scientific culture. Food culture is intimately related to societal and ethnic influences. It finds inspiration and influence from history. Think of how Italian food is shaped by the imported tomato today. As ingredients, spices, and flavors became available, the food changed and incorporated them. Knowledge, science, and cooking techniques have transformed what we eat.

Algorithms are shaped by the scientific culture. Technology has a huge influence. The computer has influenced all of science and produced new areas of science. Information technology has become a central force in all of science. At the core of all this scientific advancement is the algorithm. The algorithm is the vehicle that turns the computational engine of the computer into a useful tool for science. By the same token, algorithms reflect the culture of science. In some areas, algorithmic advances have largely ceased. This reflects the difficulty in funding creative work that produces many failures between breakthroughs. A drought in algorithmic progress reflects deep issues with how the culture of science is working.

“I read recipes the same way I read science fiction. I get to the end and say to myself “well, that’s not going to happen” ― Rita Rudner

An algorithm runs on a computer and operates on data. The kitchen is the realm of recipes, and they operate on ingredients. In many cases, the data needs to be prepared and transformed from its original form before the algorithm operates. Recipes do the same to ingredients. Often, these transformations are key aspects of the instructions. The order of instructions matters in both cases, and parts of the solution sequence are immutable. Other parts of the instructions can be changed. They can be conducted in parallel or re-ordered to good effect. This is true of food and computing. A key part of being a chef or scientist is knowing how to make these decisions.

“In algorithms, as in life, persistence usually pays off.” ― Steven S. Skiena

Manipulating data in an array must be done in a certain order. In the same vein, slicing and dicing ingredients must come before adding them to the cooking dish. You do have options in the details and ordering in many cases. A key aspect of both cooking and computing is knowing when steps are equivalent. Which steps can be reordered, and which must be executed in lock step with the recipe? The real genius of algorithms or recipes is how data or ingredients are transformed. The magic is taking something in one form and moving it to another with a completely different character. Just as a recipe is a way of drawing out and mixing flavors, an algorithm can draw out the utility and use of data. Data can be understood differently through the transformations achieved by algorithms. New purposes can emerge just as new recipes flow from old standards.
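As a toy illustration (my own contrived example, not from any particular code), here is a tiny Python pipeline where the prep steps must happen in a fixed order while the later steps commute:

import statistics

raw = [3.0, None, 4.5, 2.5, None, 6.0]

# These two prep steps must happen in this order: you cannot scale a value
# that is still missing, just as you cannot saute an onion you have not peeled.
cleaned = [x for x in raw if x is not None]    # remove missing entries first
scaled = [x / max(cleaned) for x in cleaned]   # then normalize to the largest value

# These two steps commute: the mean and the spread can be computed in either
# order, like chopping the carrots before or after chopping the celery.
mean = statistics.mean(scaled)
spread = statistics.pstdev(scaled)
print(mean, spread)

The point is not the arithmetic; it is that knowing which steps are order-dependent and which are interchangeable is exactly the judgment a cook or a scientist exercises.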

“Algorithm is arguably the single most important concept in our modern world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is and how algorithms are connected to their use. An algorithm is a methodical set of steps that can be used to make calculations, resolve problems, and reach decisions. An algorithm isn’t a particular calculation, but the method followed when making the calculation. For example, if you want to calculate the average between two numbers, you can use a simple algorithm. The algorithm says: ‘First step: add the two numbers together. Second step: divide the sum by two.’ When you enter the numbers 4 and 8, you get 6. When you enter 117 and 231, you get 174.” ― Yuval Noah Harari

The Role of Technology

“I guess love’s kind of like a marshmallow in a microwave on high. After it explodes it’s still a marshmallow. but, you know, now it’s a complicated marshmallow.” ― Cath Crowley, Graffiti Moon

In both cases, technology plays a big role. For algorithms, computer technology of all sorts is essential. The nature of the computer really matters for which algorithms are efficient. A serial computer is much different from a massively parallel computer. The way a computer’s memory is organized makes a huge difference. Optimal algorithms on GPUs are much different from those on a traditional parallel computer. The relative differences in memory levels and speed matter greatly. How do we manage the cache? These differences determine whether an algorithm is efficient or easy to program. In the cooking analogy, the computer is like the oven and stove that produce the transformation the recipe describes. In this way, I wonder if a GPU is the microwave oven of computers. Really fast, but sort of a route to shitty food.

“My Saturday Night. My Saturday night is like a microwave burrito. Very tough to ruin something that starts out so bad to begin with.” ― Michael Chabon

Other technologies matter for algorithms. Programming languages are ways to express algorithms. C++ or Python is vastly better than assembly language. The higher-level languages make the expression of ideas far easier and open up possibilities. Before high-level languages, algorithms were extremely limited and simple. There was a hard limit to the complexity you could tolerate in programming. The related technology is the compiler, which translates the high-level language into something the computer can work with. We can see parallels to how cooking works. In many ways, programming languages are like the utensils and knives used by a chef. The programmer is like the cook, whose skill determines how well the recipe is executed. If you are cooking, your equipment is essential to how well a dish comes off. A mandoline can produce far better potato dishes than a knife. A ricer is great for creamy, consistent potatoes. In the same way, a high-level computer language can enable complex algorithms to be implemented.

“Controlling complexity is the essence of computer programming.” — Brian Kernighan

If we look at what a chef works with, you can see how important technology is. They use knives and spatulas to assist in preparing and mixing the ingredients. These tools are essential extensions of the human body, augmenting what we don’t easily do. The ovens and stoves are essential. Programming languages take our thoughts as instructions to be executed by the computer. In a real way, the languages are an extension of our minds. They are a way of structuring thinking into actions executed over and over. These are the recipes in the cookbooks of algorithms. The commands are the tools used to prep the data that the algorithms turn into action. These are the intellectual meals we create using computers. They have transformed society just as cooking is essential to human culture.

“An algorithm must be seen to be believed.” ― Donald Knuth

Types of Recipes and Algorithms

“This is my invariable advice to people: Learn how to cook- try new recipes, learn from your mistakes, be fearless, and above all have fun!”― Julia Child

A useful thing to explore is the types of recipes and how they map onto algorithms. Perhaps the opposite direction is more compelling. With recipes, you have the staples, grandma’s comfort food, fast food, haute cuisine, and the cutting edge of food. We have the basic steps of making sauces and the basic elements of recipes. These map directly onto algorithms such as sorting, hash tables, and basic data structures. We can combine algorithms to create more complex techniques and codes for general purposes. In the same way, basic elements of recipes can be combined into something unique and special. Both algorithms and recipes have immense space for creativity and adaptability. In a special moment of creative energy, you may produce something unique and wonderful from the mixture of existing knowledge. That is true for either algorithms or recipes.
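To make the “pantry staples” concrete, here is a small sketch of my own (not drawn from any particular source): a hash table and a sort, two basic elements, combined into a word-frequency report.

from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"

# Staple one: a hash table (a Python dict via Counter) tallies each word.
counts = Counter(text.split())

# Staple two: a sort orders the tallies so the most common words come first.
ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)

for word, n in ranked[:3]:
    print(word, n)

Neither ingredient is novel on its own; the combination is what turns them into something useful, which is the same move a good recipe makes.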

“Once you have mastered a technique, you barely have to look at a recipe again” ― Julia Child

Some recipes change the culinary world and become staples. The creation of the Reuben sandwich combined corned beef with Russian dressing, rye bread, sauerkraut, and cheese. The Caesar salad has become standard worldwide. If you look at the pieces of the recipe, it is clear that the combination was inspired. It was a moment of sheer genius. Today, new recipes are being created by top chefs, ready to become popular and common in the future. In the same way, algorithms are created that change the world of computing. Google’s PageRank algorithm changed how the internet works, and its ideas power social media. Today, the LLMs creating trillions in value and visions of AI are derived from the Transformer algorithm. In each case, whether recipes or algorithms, the breakthrough combines elements of simpler, common techniques in new ways. These new ideas become the foundation of the future.

“Science is magic that works.” ― Kurt Vonnegut

A Unified Theory of AI and Bullshit Jobs

Saturday, 21 June 2025

Posted by Bill Rider in Uncategorized


TL;DR

Right now, there is a lot of discussion of AI job cuts on the horizon. Computer coders are at the top of the list. So are other white-collar jobs. It is one of the dumbest things I can imagine. The reasons are legion. First, you take the greatest proponents of AI at work and make many of them angry and scared. You create Luddites. Secondly, it fails to recognize that AI is an exceptional productivity enhancement. It should make these people more valuable, not remove their jobs. Layoffs are simply scarcity at work. It is cruel and greedy. It is short-sighted in the extreme. It is looking at AI as the glass half empty. The last point gets to the bullshit jobs that many people do in full or in part. Instead, I think AI is a bullshit job detector. We can use it to get rid of them and find ways to make jobs more human, creative, and productive. This is a path to abundance and a better future.

“It’s hard to imagine a surer sign that one is dealing with an irrational economic system than the fact that the prospect of eliminating drudgery is considered to be a problem.” ― David Graeber, Bullshit Jobs: A Theory

AI as a Threat, Instead of as a Gift

Lately, the news has been full of reports that white-collar jobs are gonna be replaced by AI. I do a white-collar job. I also work with AI in research. I have first-hand knowledge of AI’s prowess in my areas of expertise. Hint: its prowess is novice and naive at best. I think I have a hell of a lot to say about this. Its ability is quite superficial. As soon as the prompt asks for anything nuanced or deep, AI falls flat on its face.

Back to the concern about vast numbers of white-collar workers. Note that computer programmers are at the top of the “hit list.” The concern is that all of this will lead to widespread unemployment of educated and talented people. At the same time, those of us who use AI professionally can see the stupidity of this. Firing all these people would be a huge mistake. All the claims and desires to cut jobs with AI make the people saying this look like idiots. These idiots have a lot of power and a vested interest in profiting from that AI. They are mostly AI managers who have lots to gain. In all likelihood, they are just as full of shit as my managers are. My own managers are constantly bullshitting their way through reality whenever they are in public. At the Labs, the comments about fusion are at the top of the bullshit parade. A great example of stupid shit that sells to nonexperts.

“Shit jobs tend to be blue collar and pay by the hour, whereas bullshit jobs tend to be white collar and salaried.” ― David Graeber, Bullshit Jobs: A Theory

AI Can Do Bullshit Jobs

I think this narrative should be more clearly connected to another concept, “Bullshit Jobs.” These are jobs that add little to society and merely make work for lots of people. They also exact an effort tax on every person and drive up the cost and time of work. Most of my exorbitant cost at work is driven by people doing bullshit jobs (my cost is more than 3 times my salary).

On top of that, they lower my productivity. What I’ve noticed is that these jobs are mostly related to a lack of trust and lots of checks on stuff. They don’t produce anything, but they make sure that I do. My day is full of these things from every corner, touching every activity. I think a hallmark of these jobs is the extent to which AI could do them. I will take this a step further: if AI can do a job, perhaps that job should not be done at all. These jobs are actually beneath the humanity of the people doing them. We need to devote effort to better jobs for people.

The real question is what to do with all the people who do these bullshit jobs. The AI elite today seem to be saying just fire everyone and reduce payroll. This is an extremely small-minded approach. It is pure greed combined with pessimism and stupidity. The far better approach would be to retool these jobs and people to be creators of value. Unleash creativity and ideas, using AI to boost productivity and success. A big part of this is to take more risks and invest in far more failed starts. Allowing more failures will allow more new successes. Among these risks and failed starts are great ideas and breakthroughs, ideas and breakthroughs that lie fallow today under the yoke of the distrust fueling the bullshit jobs. If AI is truly a boon to humanity, we should see an explosion of growth, not mass unemployment.

“We have become a civilization based on work—not even “productive work” but work as an end and meaning in itself.” ― David Graeber, Bullshit Jobs: A Theory

Why don’t we hear a narrative of AI-driven abundance? One really has to wonder if our AI masters are all that smart if their sales pitch is “fire people”. I will just come out and say that the idea of firing swaths of coders because of AI is one of the dumbest things ever. The real answer is to write more code and do more things. The real experience of coders is that AI helps, but ultimately the expert person must be “in the loop”. AI is incapable of replacing code developers. The expert developer is absolutely essential to the process, and AI just makes them more efficient. We need to embrace the productivity gains and grow the pie. Instead, we are ruled by small-minded greed instead of growth-minded visionaries.

“A human being unable to have a meaningful impact on the world ceases to exist.” ― David Graeber, Bullshit Jobs: A Theory

A Painful Lesson

To reiterate, if AI can do your job, there’s a good chance that your job is bullshit. AI is an enhancement for productivity, and it should allow you to be free of much of the bullshit. What companies and organizations should do is use AI to illuminate the work and jobs it can do. If AI can do a job entirely, the job isn’t worth doing. They should use the savings to free up productivity and enhance what is done, not cut people’s employment. We’ve seen this mentality in the attacks on government programs. This is the single greatest failing of Elon Musk and DOGE. He didn’t realize that what was really needed was to unleash people to do more creative and better work. It is not about getting rid of the work; it is about improving the work that is done.

“Efficiency’ has come to mean vesting more and more power to managers, supervisors, and presumed ‘efficiency experts,’ so that actual producers have almost zero autonomy.” ― David Graeber, Bullshit Jobs: A Theory

I’ve written about AI as producing bullshit. What if AI is a way of detecting bullshit? The real truth is that when it comes to American science, there’s far too little creativity and far too little freedom to do amazing work. Sometimes amazing work cannot be recognized until it is tried. It looks stupid or insane, worthy of ridicule, until its genius is obvious. Or it can be not worth trying, but you don’t know until you try. A lot of bureaucratic bullshit stands in the way of progress. One reason for this is the insane number of bullshit jobs. Their cost is huge, showing up in our outrageous overhead rates. In addition, they also make bullshit work for those of us trying to produce science. They get in the way of productivity in a myriad of ways with required bullshit that has no value.

What we really need to do is eliminate the bullshit and free up the mind and the creativity. We already aren’t spending enough on science, and what is spent is used very unproductively. We don’t take the risks we need for breakthroughs and don’t allow the right kinds of failure. A variety of forms of bullshit jobs lead the way. Managers obsess over meaningless, repetitive reviews. They micromanage and apply far too much accounting. All of this kills creativity and undermines breakthroughs. Managers should know what we do, but learn it by actually managing, not through contrived reporting mechanisms. They should create a productive environment. There should be much more effort to determine what would make our lives better and our work more productive.

AI helps with this in some focused ways. It can help to supercharge the abilities of creative and talented scientists. Just get the bullshit out of the way. I’ve found that AI is really good at churning out this bullshit. The best answer is to stop doing any bullshit that AI is capable of producing. It is a telltale sign that the work is worthless.

“Young people in Europe and North America in particular, but increasingly throughout the world, are being psychologically prepared for useless jobs, trained in how to pretend to work, and then by various means shepherded into jobs that almost nobody really believes serve any meaningful purpose.” ― David Graeber, Bullshit Jobs: A Theory

If your job is so mundane, so routine, and so rudimentary that AI can do it, the best option is to delete it. That is a serious question to ask about a job. Most of the bullshit jobs revolve around a lack of trust, and it’s really a broader social issue. In my life, it has become a science productivity issue. If there are reports and things that AI could just as well produce, the best option is to not produce them at all, because no one needs to read them. A very large portion of our reporting is never really read. If no one reads a piece of writing, should it even exist? We have a duty as a society to give people productive, useful work. Every job should have that undeniable spark of humanity.

The other part of this dialog is about what kind of future we want. Do we want a scarce future where technology ravages good jobs? Where corporations simply think about maximizing money for the rich and care little about employees? Do we want a future where technology like AI takes humanity away? Instead, we should want abundance and growth. Technology that enhances our humanity and reduces our drudgery. AI should be a tool to unleash our best. Any job should also require the spark of humanity to produce genuine value. It should raise our standard of living and allow more time for leisure, art, and the pursuit of pleasure. It should directly lead to a better World to live in.

“Yet for some reason, we as a society have collectively decided it’s better to have millions of human beings spending years of their lives pretending to type into spreadsheets or preparing mind maps for PR meetings than freeing them to knit sweaters, play with their dogs, start a garage band, experiment with new recipes, or sit in cafés arguing about politics, and gossiping about their friends’ complex polyamorous love affairs.” ― David Graeber, Bullshit Jobs: A Theory

Practical Application Accuracy Is Essential

Sunday, 15 June 2025

Posted by Bill Rider in Uncategorized


TL;DR

In classical computational science applications for solving partial differential equations, discretization accuracy is essential. In a rational world, this solution accuracy would rule (other things are important too). It does not! Worse yet, the manner of considering accuracy is not connected to the objective reality of the method’s use. It is time for this to end. Two things shape true accuracy in practice. One is the construction of discretization algorithms, which carries a counterproductive bias toward formal order of accuracy. The other is that, in real applications, solutions only achieve a low order of accuracy. Thus, accuracy is dominated by different considerations than those assumed. It is time to get real.

“What’s measured improves” ― Peter Drucker

The Stakes Are Big

The topic of solving hyperbolic conservation laws has advanced tremendously during my lifetime. Today, we have powerful and accurate methods at our disposal to solve important societal problems. That said, problems and habits are limiting the advances. At the head of the list is a poor measurement of solution accuracy.

Solution accuracy is expected to be measured, but in practice only on ideal problems where full accuracy can be expected. When these methods are used practically, such accuracy cannot be expected. Any method will produce first-order or lower accuracy. Fortunately, analytical problems exist allowing accuracy to be assessed. The missing ingredient is to actually do the measurement. The benefit of changing our practice would be to focus energy and attention on methods that perform better under realistic circumstances. Today, methods with relatively poor accuracy and great cost are favored. This limits the power of these advances.

I'll elaborate on the handful of issues hidden by the current practices. Our current dialog in methods is driven by high-order methods. These are studied without regard for their efficiency on real problems. Popular methods such as ENO, WENO, TENO, and discontinuous Galerkin dominate, but practical accuracy is ignored. A couple of big issues I've written about reign broadly over the field. Time-stepping methods follow the same basic pattern. Inefficient methods with formal accuracy dominate over practical concerns and efficiency. We do not have a good understanding of which aspects of high-order methods pay off in practice. Some high-order aspects appear to matter, but others do not yield practical benefit. This includes truly bounding stability conditions for nonlinear systems, computing strong rarefactions and low-Mach flows, and multiphysics integration.

The Bottom Line

I will get right to the punch line for my argument and hopefully show the importance of my perspective. Key to my argument is the observation that "real" or "practical" problems converge at a low order of accuracy. Usually, the order of accuracy is less than one, so assuming first-order accuracy is actually optimistic. The second key assumption is what efficiency means in the context of modeling and simulation. I will define it as the relative cost of getting an answer of a specified accuracy. This seems obvious and more than reasonable.

“You have to be burning with an idea, or a problem, or a wrong that you want to right. If you’re not passionate enough from the start, you’ll never stick it out.” ― Steve Jobs

To illustrate my point I'll construct a contrived, simple example. Define three different methods to get our solution that are otherwise similar. Method 1 gives an accuracy of one for a cost of one. Method 2 gives an accuracy twice as good as Method 1 for double the cost. Method 3 gives four times the accuracy of Method 1 at four times the cost. We can now compare the total cost for the same level of accuracy, looking for the efficiency of the solution. Each method converges at the (optimistic) first-order rate.

If we use Method 3 on our "standard" mesh, we get an answer with one-quarter the error for a cost of four. To get the same error with Method 2, we need to use a mesh of half the spacing (and twice the points). With Method 1 we need four times the mesh for the same accuracy. The relative cost of the equally accurate solution depends on the dimensionality of the problem. For transient fluid dynamics, we solve problems in one, two, or three dimensions plus time. Because these methods share the same time step control, the time step size is always proportional to the spatial mesh spacing.

Let's consider a one-dimensional problem, where the cost scales quadratically with mesh refinement (time plus one space dimension). Our Method 2 will cost a factor of eight to get the same accuracy as Method 3. Thus it costs twice as much. Method 1 needs two mesh refinements for a cost of 16. Thus it costs four times as much as Method 3. So in one dimension, the more accurate method pays off tremendously, and this is the proverbial tip of the iceberg. As we shall see, the efficiency gains grow in two or three dimensions.

In two dimensions the benefits grow. Now Method 2 costs 16 units, and thus Method 3 pays off by a factor of four. For Method 1 we have a cost of 64, and the payoff is a factor of 16. You can probably see where this is going. In three dimensions, Method 2 now costs 32 and the payoff is a factor of 8. For Method 1 the payoff is huge. It now costs 256 to get the same accuracy. Thus the efficiency payoff is a factor of 64, almost two orders of magnitude difference. This is meaningful and important whether you are doing science or engineering.
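For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the numbers above. The accuracies and costs are the contrived values from this example, every method is assumed to converge at first order, and the cost is taken to scale as refinement^(d+1) for a d-dimensional transient problem (space plus time).

# Minimal sketch of the cost arithmetic above; values are the contrived ones from the text.
methods = {
    "Method 1": {"error": 1.0, "cost": 1.0},
    "Method 2": {"error": 0.5, "cost": 2.0},
    "Method 3": {"error": 0.25, "cost": 4.0},
}
target_error = methods["Method 3"]["error"]  # match the most accurate method

for dim in (1, 2, 3):
    print(f"--- {dim}D (cost ~ refinement^{dim + 1}) ---")
    for name, m in methods.items():
        refine = m["error"] / target_error        # first order: refining by r cuts error by r
        cost = m["cost"] * refine ** (dim + 1)    # cost grows with space-time refinement
        ratio = cost / methods["Method 3"]["cost"]
        print(f"{name}: refine x{refine:.0f}, cost {cost:.0f}, "
              f"{ratio:.0f}x the cost of Method 3")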

Imagine how a seven-dimensional problem like full radiation transport would scale. The payoffs for accuracy could be phenomenal. This is a type of efficiency that has been largely ignored in computational physics. It is time for this to end and to focus on what really matters in computational performance. The accuracy under conditions actually faced in applications of the methods matters. This is real efficiency, and an efficiency not examined at all in practice.

“Progress isn’t made by early risers. It’s made by lazy men trying to find easier ways to do something.” ― Robert Heinlein

The Usual Approach To Accuracy

The usual approach to designing algorithms is to define a basic "mesh," prototypically in space and time. The usual mantra is that the most accurate methods are higher order: the higher the order of the method, the more accurate it is. High-order is often simply anything more than second-order accurate. Nonetheless, the assumption is that higher-order methods are always more accurate. Thus the best you can do is a spectral method, where every available degree of freedom contributes to the approximation. This belief has driven research in numerical methods forever (many decades at least). We know these methods are not practical for realistic problems.

The standard tool for designing methods is the Taylor series. This relies on several things being true. The function needs to be smooth, and the expansion needs to be in a variable that is "small" in some vanishing sense. This is a classical tool and has been phenomenally useful for centuries of work in numerical analysis. The ideal nature of when it is true is also a limitation. While the Taylor series still holds for nonlinear cases, the dynamics of nonlinearity invariably destroy the smoothness. If smoothness is retained nonlinearly, the problem is pathological. The classic mechanism for this is shocks and other discontinuities. Even smooth nonlinear structures still have issues, like the cusps seen in expansion waves. As we will discuss, accuracy is not retained in the face of this.

If your solution is analytic in the best way possible, meaning it can be differentiated infinitely, this all works. While this is ideal, it is also very infrequently (basically never) encountered in practice. The other issue is that the complexity of a method grows massively as you go to higher order. This is true for linear problems, but extra true for nonlinear problems where the error has many more terms. If it were only this simple! It is not by any stretch of the imagination.

“We must accept finite disappointment, but never lose infinite hope.” ― Martin Luther King Jr.

Stability: Linear and Nonlinear

For any integrator for partial differential equations, stability is a key property. Basically, it is the property that any "noise" in the solution decays away. The truth is that there is always a bit of noise in a computed solution. You never want it to dominate the solution. For convergent solutions, stability is one of two ingredients for convergence under mesh refinement. This is a requirement from the Lax equivalence theorem. The other requirement is the consistency of the approximation with the original differential equation. Together these yield the property of convergence, where solutions become more accurate as meshes are refined. This principle is one of the foundational aspects of the use of high-performance computing.

Von Neumann invented a classical method to investigate stability. When devising a method, doing this analysis is a wise and necessary first step. Often subtle things can threaten stability and the method is good for unveiling such issues. For real problems, this stability is only the first step in the derivation. It is necessary, but not sufficient. Most problems have a structure that requires nonlinear stability.
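As an illustration (not any specific production analysis), here is a minimal sketch of a von Neumann check for first-order upwind applied to linear advection. The scheme's amplification factor must satisfy |g| <= 1 for every wave number for the scheme to be stable.

# Von Neumann stability check for first-order upwind on u_t + a u_x = 0.
# The amplification factor is g(theta) = 1 - c*(1 - exp(-i*theta)), c = CFL number.
import numpy as np

def max_amplification(cfl, n=1000):
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    g = 1.0 - cfl * (1.0 - np.exp(-1j * theta))
    return np.abs(g).max()

for c in (0.5, 1.0, 1.1):
    print(f"CFL = {c}: max |g| = {max_amplification(c):.4f}")
# CFL <= 1 gives max |g| = 1 (stable); CFL = 1.1 gives max |g| > 1 (unstable).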

This is caused by nonlinearities in real problems or by non-differentiable features in the solution (like shocks or other discontinuities). These require mechanisms to control things like oscillations and positivity of the solution. These mechanisms are invariably nonlinear, even for linear problems. This has a huge influence on accuracy and on the sort of accuracy that is important to measure. The nonlinear stability assures results in real circumstances. It has a relatively dominant impact on solutions and lets methods get accurate solutions when things are difficult. One of the damning observations is that the accuracy impact of these measures is largely ignored under realistic circumstances. The only thing really examined is robustness and low-level compliance with design.

“We don’t want to change. Every change is a menace to stability.” ― Aldous Huxley

What Accuracy Actually Matters

In the published literature it is common to see accuracy reported for idealized conditions. These are conditions where the nonlinear stability is completely unnecessary. We do get to see whether and how the nonlinear stability impacts this ideal accuracy. This is not a bad thing at all. It goes into the pile of necessary steps for presenting a method. The problems are generally smooth and infinitely differentiable. A method of increasingly higher-order accuracy will get the full order of convergence and very small errors as the mesh is refined. It is a demonstration of the results of the stability analysis; such a study confirms the convergence and error characterization the analysis predicts. There is also a select set of problems for fully nonlinear effects (e.g., the isentropic vortex or the like).

“I have to go. I have a finite amount of life left and I don’t want to spend it arguing with you.” ― Jennifer Armintrout

There is a huge rub to this practice. The error and behavior seen under these ideal conditions are never encountered in practical problems. For practical problems shocks, contacts, and other discontinuous phenomena abound. They are inescapable. Once these are present in the solution, the convergence rate is first-order or less (theory for this exists). Now the nonlinear stability and its accuracy character take over and become completely essential. The issue with the literature is that errors are rarely reported under these circumstances, even when an exact solution is available and the error could be reported precisely. The standard is simply "the eyeball norm". This standard serves the use of these methods poorly indeed. Reporting for more realistic problems is close to purely qualitative.

One of the real effects of this difference comes down to the issue of what accuracy really matters. If the goal of computing a solution is to get a certain low level of error for the least effort, the difference is profound. The assessment of this might reasonably be called efficiency. In cases where the full order of accuracy can be achieved, the higher the order of the method, the more efficient it will be for small errors. These cases are virtually never encountered practically. The upshot is that accuracy is examined in cases that are trivial and unimportant.

Practical cases converge at first order and the theoretical order of accuracy for a method doesn’t change that. It can change the relative accuracy, but the relationship there is not one-to-one. That said, the higher order method will not always be better than a low order method. One of our gaps in analysis is understanding how the details of a method lead to practical accuracy. Right now, it is just explored empirically during testing. The issue is that the testing and reporting of said accuracy is quite uncommon in the literature. Making this a standard expectation would improve the field productively.
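To make the point concrete, here is a minimal numerical sketch, using first-order upwind advection of a square wave as a stand-in for a "practical" discontinuous solution (not any particular production code). The observed order in the L1 norm comes out well below one, consistent with the theory mentioned above.

# Upwind advection of a square wave: observed L1 convergence order is well below one.
import numpy as np

def advect_upwind(n, cfl=0.5, periods=1.0):
    x = (np.arange(n) + 0.5) / n                      # cell centers on [0,1)
    u = np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)    # square wave
    dt = cfl / n
    steps = int(round(periods / dt))
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1))             # periodic upwind update
    exact = np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)  # profile returns after one period
    return np.abs(u - exact).mean()                   # discrete L1 error

errors = {n: advect_upwind(n) for n in (100, 200, 400, 800)}
ns = sorted(errors)
for coarse, fine in zip(ns, ns[1:]):
    p = np.log(errors[coarse] / errors[fine]) / np.log(2.0)
    print(f"n={coarse:4d} -> n={fine:4d}: observed order ~ {p:.2f}")
# Observed orders land near one half, not one, consistent with theory for
# monotone schemes on discontinuous data.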

“Don’t be satisfied with stories, how things have gone with others. Unfold your own myth.” ― Rumi

References

Study of real accuracy

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Nonlinear Stability

Guermond, Jean-Luc, and Bojan Popov. “Fast estimation from above of the maximum wave speed in the Riemann problem for the Euler equations.” Journal of Computational Physics 321 (2016): 908-926.

Toro, Eleuterio F., Lucas O. Müller, and Annunziato Siviglia. “Bounds for wave speeds in the Riemann problem: direct theoretical estimates.” Computers & Fluids 209 (2020): 104640.

Li, Jiequan, and Zhifang Du. “A two-stage fourth order time-accurate discretization for Lax–Wendroff type flow solvers I. Hyperbolic conservation laws.” SIAM Journal on Scientific Computing 38, no. 5 (2016): A3046-A3069.

High Order Methods

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient implementation of weighted ENO schemes.” Journal of computational physics 126, no. 1 (1996): 202-228.

Cockburn, Bernardo, Chi-Wang Shu, Claes Johnson, and Eitan Tadmor. Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. Springer Berlin Heidelberg, 1998.

Balsara, Dinshaw S., and Chi-Wang Shu. “Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy.” Journal of Computational Physics 160, no. 2 (2000): 405-452.

Cockburn, Bernardo, and Chi-Wang Shu. “Runge–Kutta discontinuous Galerkin methods for convection-dominated problems.” Journal of scientific computing 16 (2001): 173-261.

Spiteri, Raymond J., and Steven J. Ruuth. “A new class of optimal high-order strong-stability-preserving time discretization methods.” SIAM Journal on Numerical Analysis 40, no. 2 (2002): 469-491.

Methods Advances Not Embraced Enough

Suresh, Ambady, and Hung T. Huynh. “Accurate monotonicity-preserving schemes with Runge–Kutta time stepping.” Journal of Computational Physics 136, no. 1 (1997): 83-99.

Colella, Phillip, and Michael D. Sekora. “A limiter for PPM that preserves accuracy at smooth extrema.” Journal of Computational Physics 227, no. 15 (2008): 7069-7076.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

A Great Workshop Is Inspirational

08 Sunday Jun 2025

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“Knowledge has to be improved, challenged, and increased constantly, or it vanishes.” ― Peter Drucker

Back in the day I used to write up my thoughts on conferences I went to in this blog. It was a good practice and encouraged me to sit back and get perspective on what I saw. What I learned. What I felt. The workshop I attended this week was excellent with amazing researchers. Thoughtful and wise people who shared their knowledge and wisdom. I saw a great menu of super talks and I had phenomenal conversations. Some of these were one-on-one sidebars, but also panel discussions that were engaging and thought-provoking. I am left with numerous themes to write about for the foreseeable future. A good week indeed, but it left me with mourning too.

The workshop was called “Multiphysics Algorithms for the Post Moore’s Law Era.” It was organized by Brian O’Shea from Michigan State along with a group of illustrious scientists largely from Los Alamos. It was really well done and a huge breath of fresh air. Los Alamos air is good for that too. I was there largely because I had an invited talk, which I really enjoyed giving. I had put a great deal of thought into my talk; it collected thoughts needed for this present moment. Invited talks are an honor and a good thing to accept. They look great on the resume or annual assessments. I quickly lost any sense of making the wrong decision and immediately felt grateful to attend.

I won’t and really can’t hit all the high points or talks, but will give a flavor of the meeting.

Moore’s law is the empirical observation about the growth of computing power. For about fifty-some-odd years, computer power doubled about every 18 months. Compounded over that span, this gives an advance of over a billion times (more than 2 to the 30th power). Starting around 2010, people started to see the end of the road for the law. Physics itself is getting in the way, and parallel computing or those magical GPUs that AMD and Nvidia produce aren’t enough. Plus those GPUs are a giant fucking pain in the ass to program. We now spend a vast amount of money to keep advancing computing, and we are not going to be able to keep up. This era is over and what the fuck are we going to do? The workshop was put together to answer this WTF question.

“Vulnerability is the birthplace of innovation, creativity and change.” ― Brene Brown

I will start by saying Los Alamos carries some significant meaning for me personally. I lived and worked there for almost 18 years. It shaped me as a scientist, if not made me the one I am today. It has (had) a culture of scientific achievement and open inquiry that I fully embrace and treasure. I had not spent time like this on the main town site for years. It was a stunning melange of things unchanged and radical change. I ate at new places, and old places, running into old friends with regularity. I was left with mixed feelings and deep emotions at the end. Most of all, I was left weighing whether leaving there was the right professional move for me. It probably was. The Lab I knew and loved is almost gone. It has disappeared into the maw of our dysfunctional nation’s destruction of science. It is a real example of where greatness has gone, and the MAGA folks are not doing jack shit to fix it.

More later about the Lab and its directions since I left. Now for the good part of the week, the Workshop.

“The important thing is not to stop questioning.” ― Albert Einstein

The first day of the workshop should have left me a bit cold, but it didn’t. The focus was the computing environment of the near future. It was all the stuff the high-performance computing people are doing to forestall the demise of Moore’s law. There are a bunch of ideas, and zero of them are really appealing or exciting. The biggest message of the day was a focus on missed opportunities. The decade of focus on exascale computers has meant huge opportunity cost. This would unfold brilliantly as the week went along. The greatest take-home message was the cost of keeping up and the drop-off of performance in the aggregate list of the fastest computers. We can’t do this anymore. The other big lesson is that quantum computing is no way out. It is cool and does some great shit, but it is limited. Plus it’s always attached to a regular computer, so that’s an intrinsic limit.

The second day was much more about software. We have made a bunch of amazing software to support all these leading-edge computers. This software is created on a shoestring budget and maintaining it is an increasing tax. The biggest point is that GPUs suck ass to program. We have largely wasted 10 years programming these motherfucking monstrosities. If we weren’t doing that what could we have done? Plus the GPUs have a limited future. There have been some great ideas for dealing with complexity like Sandia’s Kokkos, but there are dead ends. We are so attached to performance, why can’t we work with computers that are a joy to program? Maybe that would be a path we could all support.

At the end of each day, all the speakers formed a panel and we had a moderated conversation with the audience. The first day they asked Mike Norman to lead the conversation. Mike is a renowned astrophysicist and a leader in the history of high-performance computing. It was cool to get to meet him. During the discussions, major perspectives came clearly into focus. An example is the question above about whether we wasted 10 years on GPUs. The answer is yes. Another issue is the problems and cost of software, which isn’t well-funded or supported. I can report from my job that the maintenance cost of code can quickly swallow all your resources. This grows as the code gets old, and we make a lot of legacy codes in science. Another topic of repeated discussion every day of the meeting was the growing obsession with AI. There is a manic zeal for AI on the part of managers, and it puts all our science at serious risk. A bit more later about this.

Finally, at the end of day 2 we started in on algorithms and the science done with computing. Thank god! While I appreciate learning all about software and computing, I need some science! I was introduced to tensor trains and I’ll admit to not quite grokking how they work. It was one of several ideas for extremely compressed computing. A great thing is to leave a workshop with homework. After this, we heard about MFEM from Livermore. Lots of computing results and not nearly enough algorithms (which I know exist). They didn’t talk about results with the code, only how fucking great it runs. That said, this talk was almost an exclamation point on what GPU-based computing has destroyed.

Wednesday was my talk. I was sandwiched between two phenomenal astrophysics talks with jaw-dropping results and incredible graphics. I felt honored and challenged. Jim Stone gave the first talk and wow! Cool methods and amazing studies of important astrophysical questions. He uses methods I know well and they produce magic. My physics brain left the talk wishing for more. I could watch a week of talks like that. Even better he teed up some topics my talk would attack head-on. After my talk, Bronson Messer from Oak Ridge talked about supernovae. It was sort of a topic I have an amateur taste for. Incredible physics again like Jim’s talk and gratifying uses of computing. I want more!

I gave my talk in a state where I was both inspired and a bit gobsmacked, having to sit between these two masterpieces. I had trimmed my talk down to 30 minutes to allow 15 minutes for questions. Undaunted, I stepped into the task. My talk had three main pieces: a discussion of the power and nature of algorithms, how V&V is the scientific method, and how to use verification to embrace true computational efficiency. I sized the talk almost perfectly. I do wish I would move more during the talk and be more dynamic. I was too chained to my laptop. I also hated the hand mike (would have loved to drop it at the end, but that would be a total dick move).

Intel will deliver the Aurora supercomputer, the United States’ first exascale system, to Argonne National Laboratory in 2021. Aurora will incorporate a future Intel Xeon Scalable processor, Intel Optane DC Persistent memory, Intel’s Xe compute architecture and Intel OneAPI programming framework — all anchored to Intel’s six key pillars of innovation. (Credit: Argonne National Laboratory)

“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” ― Arthur C. Clarke

I always believe that a good talk should generate questions. My talk generated a huge reaction and question after question. Some asked about making V&V more efficient and cheaper. I have a new idea about that after answering. No, V&V should not be cheap. It is the scientific method and a truly great human endeavor. It is labor intensive because it is hard and challenging. People avoid V&V because they are lazy and want it on the cheap. It is just like thinking and AI. We still need to think when we do math, code, or write. Nothing about AI should take that away. Science is about thinking, and we need to think a lot more, not less. Computers, AI, algorithms, and code are all tools, and we need to be skilled and powerful at using them. We need to be encouraged to think more, question, and do the hard things. None of it should be done away with by these new tools. These new tools should augment productivity, making us more efficient. They should free up time to really think more.

The big lasting thought from my talk is about the power of algorithms. Algorithms fall into a set of three rough categories worth paying attention to. This taxonomy is also ordered by the power of these algorithms. I will write about this more. I have in the past, but now I have new clarity! Thanks workshop! What an amazing fucking gift!

This taxonomy has three parts:

1. Standard efficiency mapping to computers (parallel, vector, memory serving, …). This is the focus of things lately. They are the lowest rung of the ladder.

2. Algorithms that change the scaling of the method in terms of operations. The archetypical example is linear algebra, where the scaling was originally the cube of the number of equations, as in Gaussian elimination. The best is multigrid, which scales linearly with the number of equations. The difference in scaling is a true quantum leap and rivals or beats Moore’s law easily (a small scaling sketch follows this list).

3. Next are the algorithms that are the game changers. These algorithms transform a field of science or the world. The archetype of this is the PageRank algorithm that made Google what it is. Google is now a verb. These algorithms are as close to magic as computers get.
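As promised in item 2, here is a small sketch of the scaling gap, comparing raw operation counts for an O(n^3) direct solve against an O(n) multigrid-style solve. Constants are ignored; only the scaling matters for the point being made.

# Scaling comparison for item 2: O(n^3) direct solve versus O(n) multigrid-style solve.
for n in (10**3, 10**6, 10**9):
    direct = n**3       # Gaussian-elimination-like operation count
    multigrid = n       # multigrid-like operation count
    print(f"n = {n:.0e}: direct ~ {direct:.1e} ops, "
          f"multigrid ~ {multigrid:.1e} ops, ratio ~ {direct / multigrid:.1e}")
# At a million unknowns the ratio is about 10^12, far more than Moore's law
# delivered over its entire run.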

The trick is that each rung in this hierarchy of algorithms is harder, more failure-prone, and rarer than the one before. These days the last two rungs are ignored and only happen with serendipity. We could do so much more if we were intentional about what we pursue. It also requires a taste for risk and tolerance of failure.

“Any sufficiently advanced technology is indistinguishable from magic.”― Arthur C. Clarke

I wanted this to be a brief post. I have failed. The workshop was a wonderful gift to my brain. So this is a core dump and only a partial one. I even had to clip off the last two days of it (shout out to Riley and Daniel for great talks plus the rest, even more homework). Having worked at Los Alamos I have friends and valued colleagues there. To say that conversations left me troubled is an understatement. I am fairly sure that the Los Alamos I knew and loved as a staff member is dead. I’m always struck by how many of my friends are Lab Fellows, and how dismal my recognition is at Sandia. At Los Alamos, I would have been so much more at least technically. That said, I’m not sure my heart could take what was reported to me. The Lab is something else now and has lost its identity as something special.

The Lab was somewhere special and wonderful. It is a place I owe my scientific identity to. That place no longer exists. You can still make it out in the shadows and echoes of the past, but those are dimming with each passing day. You may recall that last month, Peter Lax died. A friend shared the Lab’s obituary with me. It wasn’t anything horrible or awful, but it was full of outright errors and a lack of attention to detail. Here is one of the greats of the Lab and one of the few remaining scientists from the Manhattan Project. His contributions to science via applied math define exactly what is missing today. The kind of applied math Peter did is what AI and machine learning need, and it is absent. Worse yet, the current leaders of the Lab and the nation are oblivious. They botched his obituary, and I suppose that’s a minor crime compared to the scientific malpractice.

One cool moment happened at Starbucks on Thursday morning. It was a total “only in Los Alamos” moment. I was sitting down enjoying coffee, and a man came up to me. He asked, “Are you Bill Rider?” He was a fan of this blog. I invited him to sit and talk. We had a great conversation, although it did little to calm my fears about the Lab. I can’t decide whether I should feel disgust, resignation, or deep sadness. A beacon of science in the USA and the world is flickering out. At the very least this is a tragedy. The tragedy is born of a lack of vision, trust, and stewardship. It’s not like the Lab does anything essential; it’s just nuclear weapons.

“The present changes the past. Looking back you do not find what you left behind.” ― Kiran Desai

Rather than close on this truly troubling note, I’ll send on a bit of gratitude. First, I would like to give much appreciation to Brian who did much of the operation and management of the workshop. He did an outstanding job. Chris Fryer and CNLS hosted the workshop under its auspices. It was joyful to be back in the CNLS fold once again. I have so many great memories of attending seminars there along with a few that I gave. Chris and his wife Aimee host wonderful parties at their home. They are truly epic and wonderful with a tremendous smorgasbord of culinary delights and even more stimulating conversations with a plethora of brilliant people. Always a delight to visit them and enjoy their generous hospitality.

“Every revolutionary idea seems to evoke three stages of reaction. They may be summed up by the phrases: (1) It’s completely impossible. (2) It’s possible, but it’s not worth doing. (3) I said it was a good idea all along.” ― Arthur C Clarke

When is Research Done?

01 Sunday Jun 2025

Posted by Bill Rider in Uncategorized

≈ Leave a comment

TL;DR

There is a trend I’ve noticed over my career: an increasing desire to see research as finished. The results are good enough, and the effort is moved to new endeavors. The success is then divested. Research is never done, never good enough, and is simply the foundation for the next discovery. The results of this are tragic. Unless we are continually striving for better, knowledge and capability stagnate and then decay. Competence fades and disappears with a lack of attention. In many important areas, this decay is already fully in effect. The engine of mediocrity is project management with milestones and regular progress reports. Underlying this trend is a lack of trust and short-term focus. The result is a looming stench of mediocrity where excellence should be demanded. The cost to society is boundless.

Capabilities versus Projects

“The worst enemy to creativity is self-doubt.” ― Sylvia Plath

Throughout my career, I have seen a troubling trend in funding for science. This trend has transformed into sprawling mismanagement. Once upon a time, we funded capabilities and competence in specific areas. I’ve worked at multi-program labs that apply a multitude of disciplines to execute complex programs. Nuclear weapons are the archetype of these programs. These programs require executing and weaving together a vast array of technical areas into a cohesive whole. Amongst these capabilities are a handful of overarching necessities. Even the necessities of competence for nuclear weapons are being ignored. This is a fundamental failure of our national leadership. It is getting much worse, too.

The thing that has changed is the projectization of science. We have moved toward applying project management principles to everything. The dumbest part of this is the application of construction-style project management to science. We get to plan breakthroughs (planning that makes sure they don’t happen) and apply concepts like “earned value”. The result is the destruction of science, not its execution. Make-believe success is messaged by managers, but it is empty in reality. Instead of useful work, we have constant progress reports, updates, and milestones. We have lost the ability to move forward and replaced it with the appearance of progress. Project management has simply annihilated science and destroyed productivity. Competence is a thing of the past.

“The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence.” ― Charles Bukowski

The milestones themselves are the topic of great management malpractice. These are supposed to serve as the high-level measure of success. We operate under rules where success is highly scrutinized. The milestones cannot fail, and they don’t. The reason is simple: they are engineered to be foolproof. Thus, any and all risk is avoided. Upper management has its compensation attached to them, too. No one wants to take “food” out of their boss’s mouth either (that’s the quiet part, said out loud). The end result is not excellence, but rather a headlong leap into mediocrity. Milestones are the capstone on project management’s corrosive impact on science.

Rather than great work and maintaining capability, we have the opposite, mediocrity and decay.

The Desire to Finish

“Highly organized research is guaranteed to produce nothing new.” ― Frank Herbert

One of the most insidious aspects of the project mindset is the move to terminate work at the end of the project. There is a lot of work that they want to put a bow on and say, “It is finished.” Then we move to focus on something else. The management is always interested in saying, “This work is done,” and “move to something new.” The something new is something that will bring big funding and contribute to managerial empire building (another pox). Once upon a time, the Labs were a national treasure (crown jewels). Now we are just a bunch of cheap whores looking for our next trick. This is part of the legacy of project management and our executive compensation philosophy. Much less progress and competence, much more graft and spin.

A few years ago, we focused on computing and exascale machines. Now we see artificial intelligence as the next big thing (to bring in the big bucks). Nothing is wrong with a temporary emphasis and shift of focus as opportunity knocks. Interestingly, exascale was not an opportunity, but rather a struggle against the inevitable death of Moore’s law. Moore’s law was the gift that kept on giving for project management, reliable progress like clockwork.

The project management desires explain exascale more than any technical reasons. Faster computers are worthwhile for sure; however, the current moment does not favor this as a strategy. In fact, it is time to move away from it. AI is different. It is a once in a generation technology to be harnessed, but we are fucking that up too. We seek computing power to make AI work and step away from algorithms and innovation. Brute force has its limits, and progress will soon languish. We suffer from a horrendous lack of intellectual leadership, basic common sense, and courage. We cannot see the most obvious directions to power scientific progress. The project management obsession can be tagged as a reason. If the work doesn’t fit into that approach, it can’t be funded.

Continual progress and competence are out the window. The skills to do the math, engineering, and physics are deep and difficult. The same holds for the skills to work in high-performance computing. The same again for artificial intelligence. Application knowledge is yet another deep, expansive expertise. None of this expertise easily transfers to the next hot thing. Worse yet, expertise fades and ossifies as those mental patterns lapse into hibernation. Now the projects need to finish, and the program should move to something shiny and new. The cost of this attitude is rather profound, as I explore next.

The Problems with Finishing: Loss of Competence

“The purpose of bureaucracy is to compensate for incompetence and lack of discipline.” ― Jim Collins

This all stems from the need for simplicity in a sales pitch. Simple gets the money today. Much of the explanation for this is our broken politics. Congress and the people have lost confidence and trust in science. We live in a time of extremes and an inability to live in the gray. No one can manage a scintilla of subtlety. Thus, we finish things, followed by a divestment of emphasis. That divestment ultimately ends up hollowing out the built expertise needed for achievement. Eventually, the tools developed in the course of one success decay too, once emphasis moves on. Essential capabilities cannot be maintained successfully without continual focus and support.

A story is helpful here. As part of the nuclear weapons program at the end of the Cold War, simulation tools were developed. These tools were an alternative to full-scale nuclear tests. To me, one of the more horrifying aspects of today’s world is how many of these tools from that era are still essential today. Even tools built as part of the start of stockpile stewardship after the Cold War are long in the tooth today. In virtually every case, these tools were state of the art when conceived originally. Once they were “finished” and accepted for use in applications, the tools went into stasis. In a world of state-of-the-art science, stasis is decline. The only exception is the move of these codes to new computing platforms. This is an ever-present challenge. The stasis is in the intellectual content of the tools, which matters far more than the computing platforms.

What usually does not change are the numerical methods, physics, and models in the codes. These become frozen in time. While all of these can be argued to be state of the art when the code was created, they cease to be with time. We are talking decades. This is the trap of finishing these projects and moving on; the state of the art is transitory. If you rest on success and declare victory, time will take that from you. This is the state that too much of our program is in. We have declared victory and failed to see how time eats away at our edge. Today, we have tools operated by people who don’t understand what they are using. The punch line is that research is never done, and never completed. Today’s research is the foundation of tomorrow’s discoveries and an advancing state of the art.

Some of this is the ravages of age for everything. People age and retire. Skills dull and wither from lack of use. Codes age and become dusty, no longer embodying the state of the art. The state of the art moves forward and leaves the former success as history. All of this is now influencing our programs. Over enough time, this evolves into outright incompetence. Without a change in direction and philosophy that incompetence is inevitable. In some particular corners of our capability, the incompetence is already here.

“Here’s my theory about meetings and life: the three things you can’t fake are erections, competence and creativity.” ― Douglas Coupland

A Mercy Killing of an Ill Patient

“Let’s have a toast. To the incompetence of our enemies.” ― Holly Black

The core issues at work in destroying competence are a combination of short-term thinking and lack of trust. The whole project attitude is emblematic of it. The USA has already ceded the crown of scientific and engineering supremacy to China. American leaders won’t admit this, but it’s already true. Recent actions by the Administration and DOGE will simply unilaterally surrender the lead completely and irreversibly. The corollary to all this negativity is that maintaining the edge of competence requires trust and long-term thinking. Neither is available today in the USA.

There is a sharp critique of our scientific establishment available in the recent book Abundance. There, Klein and Thomson provide commentary on what ails science in the USA. It rings true to me, having worked actively for the last 35 years at two National Labs. Risk avoidance, paralyzing bureaucracy, and misaligned priorities have sapped vitality. Too much overhead wastes money. All these ills stem from those problems of short-termism combined with a lack of trust. A good amount of largess and overconfidence conspires as well. Rather than encourage honesty, the lack of trust empowers bullshit. Our key approach to declaring success is to bullshit our masters.

Today is not the time to fix any of this. It is time to think about what a fix will look like. Recent events are the wanton destructive dismantling of the federal scientific establishment. Nothing is getting fixed or improved. It is simply being thrown into the shredder. If we get to rebuild science, we need to think about what it should look like. If we continue with short-term thinking, success won’t be found. The project management approach needs to be rejected. Trust is absolutely necessary, too. Today, trust is also in freefall. Much of the wanton destruction stems from a lack of trust. This issue is shared by both sides of the partisan divide. Their reasons are different, and the truth is in the middle. Unless the foundation for success is available, scientific success won’t return.

“The problem with doing nothing is not knowing when you are finished.” ― Nelson De Mille

What Americans don’t seem to realize is that so much of our success is science-based. During the Cold War, the connection between science and national security was obvious. Nuclear weapons made the case with overwhelming clarity. Economic security and success are no less bound to science. The effect is more subtle and longer-term. The loss of scientific power won’t be obvious for a long time. Eventually, we will suffer from the loss of scientific and engineering success. Our children and grandchildren will be poorer, less safe, and live shorter lives due to our actions today. The past four months simply drove nails into a coffin that had already been fashioned by decades of mismanagement.

“Never put off till tomorrow what may be done day after tomorrow just as well.” ― Mark Twain

A Little Verification Idea Seldom Tried

28 Wednesday May 2025

Posted by Bill Rider in Uncategorized

≈ Leave a comment

“We don’t want to change. Every change is a menace to stability.” ― Aldous Huxley

There is a problem with finishing up a blog post on a vacation day: you forget one of the nice ideas you wanted to share. So here is a brief addendum to what I wrote yesterday.

Here is a little addition to my last post here. It is another type of test I’ve tried, but also seldom seen documented. A standard threat to numerical methods is violating stability conditions. Stability is one of the most important numerical concepts. It is a prerequisite for convergence and is usually implicitly assumed. What is not usually tested is an active test of the transition of a calculation to instability. The simplest way to do this is to exceed the time step size set by the stability limit.

The tests are simple. Basically, run the code with time steps over the stability limit and observe how sharp the limits are in practice. It also does a good job of documenting what an instability actually looks like when it appears. If the limit is not sharp, it might indicate an opportunity to improve the code by sharpening a bound. One could also examine if the lack of stability inhibits convergence, too. This would just be cases where the instability is mild and not catastrophic.
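Here is a minimal sketch of what such a test might look like, using first-order upwind advection as a stand-in (its linear limit is CFL <= 1). The real value, of course, comes from doing the same thing with your own code and its own limits.

# Run the same discretization just under and just over its stability limit
# and watch what the onset of instability actually looks like.
import numpy as np

def run_advection(cfl, n=200, time=2.0):
    x = (np.arange(n) + 0.5) / n
    u = np.where(np.abs(x - 0.5) < 0.25, 1.0, 0.0)   # square wave initial data
    dx = 1.0 / n
    dt = cfl * dx
    for _ in range(int(time / dt)):
        u = u - cfl * (u - np.roll(u, 1))            # periodic upwind update
        if not np.isfinite(u).all():
            return np.inf
    return np.abs(u).max()

for c in (0.9, 0.99, 1.01, 1.1):
    print(f"CFL = {c}: max |u| at the end = {run_advection(c):.3e}")
# Below the limit the peak stays bounded at one; just above it the solution
# grows slowly, and well above it the growth is explosive. The sharpness of
# that transition is exactly what this kind of test documents.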

I did this once in my paper with Jeff Greenough, comparing a couple of methods for computing shocks. In this case, the test was the difference between linear and nonlinear stability for a Runge-Kutta integrator. The linear limit is far more generous than the nonlinear limit (by about a factor of three!). The accuracy of the method is significantly impacted at the limit of each of the two conditions. For shock problems, the difference in solutions and accuracy is much stronger. It also impacts the efficiency of the method a great deal.

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

There’s Much More to Code Verification

27 Tuesday May 2025

Posted by Bill Rider in Uncategorized

≈ Leave a comment

TL;DR

The standard narrative for code verification is demonstrating correctness and finding bugs. While this is true, it also sells verification wildly short as a practice. Code verification has a myriad of other uses, foremost the assessment of accuracy without ambiguity. It can also define features of a code, such as adherence to invariants in solutions. Perhaps most compellingly, it can define the limits of a code and its methods, and the research needed to advance capability.

“Details matter, it’s worth waiting to get it right.” ― Steve Jobs

What People Think It Does

There is a standard accepted narrative for code verification. It is a technical process of determining that a code is correct. A lack of correctness is caused by bugs in the code that implements a method. It is supported by two engineering society standards written by AIAA (aerospace engineers) and ASME (mechanical engineers). The DOE-NNSA’s computing program, ASC, has adopted the same definition. It is important for the quality of code, but it is drifting to obscurity and a lack of any priority. (Note the IEEE has a different definition for verification, leading to widespread confusion.)

The definition has several really big issues that I will discuss below. Firstly, the definition is too limited and arguably wrong in emphasis. Secondly, it means that most of the scientific community doesn’t give a shit about it. It is boring and not a priority. It plays a tiny role in research. Thirdly, it sells the entire practice short by a huge degree. Code verification can do many important things that are currently overlooked and valuable. Basically there are a bunch of reasons to give a fuck about it. We need to stop undermining the practice.

The basics of code verification are simple. A method for solving differential equations has an ideal order of accuracy. Code verification compares the solution produced by the code with an analytical solution over a sequence of meshes. If the order of accuracy observed matches the theory, the code is correct. If it does not, there is an error either in the implementation or in the construction of the method. One of the key reasons we solve differential equations with computers is the dearth of analytical solutions. For most circumstances of practical interest, there is no analytical solution, nor are there circumstances that match the design order of accuracy of the method.
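In code form, the comparison is tiny. Here is a minimal sketch; the error values below are hypothetical placeholders, and in real use they would come from running the code against an analytical solution on each mesh.

# Estimate the observed order of accuracy from successive mesh levels and
# compare it with the design order of the method.
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Order estimate from the errors on two successive mesh levels."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

design_order = 2.0
errors = [4.1e-3, 1.05e-3, 2.7e-4]   # hypothetical errors on meshes h, h/2, h/4

for e_c, e_f in zip(errors, errors[1:]):
    p = observed_order(e_c, e_f)
    verdict = "consistent" if abs(p - design_order) < 0.2 else "suspect"
    print(f"observed order {p:.2f} vs design order {design_order}: {verdict}")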

One of the answers to the dearth of analytical solutions is the practice of the method of manufactured solutions (MMS). It is a simple idea in concept: an analytical right-hand side is added to the equations to force a known, ideal solution. Using this practice, the code can be studied. The technique has several practical problems that should be acknowledged. First, the complexity of these right-hand sides is often extreme, and the source term must be added to the code. This makes the code different from the code used to solve practical problems in a key way. Secondly, the MMS problems are wildly unrealistic. Generally speaking, the solutions with MMS are dramatically unlike any realistic solution.
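A minimal sketch of the MMS mechanics, assuming a simple 1D advection-diffusion operator as the target; the manufactured solution chosen here is purely illustrative, not special in any way.

# Method of manufactured solutions for u_t + a u_x - nu u_xx = S.
# Pick a smooth u_m(x, t), push it through the operator to get the source S,
# add S to the code, and u_m becomes an exact solution to check against.
import sympy as sp

x, t, a, nu = sp.symbols("x t a nu")
u_m = sp.sin(2 * sp.pi * x) * sp.exp(-t)          # manufactured solution

source = sp.diff(u_m, t) + a * sp.diff(u_m, x) - nu * sp.diff(u_m, x, 2)
source = sp.simplify(source)
print("S(x, t) =", source)

# Turn the source into a callable for use inside a (hypothetical) solver.
S = sp.lambdify((x, t, a, nu), source, "numpy")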

MMS simply expands the distance between code verification and the code’s actual use. All this does is amplify the degree to which code users disdain code verification. The whole practice is almost constructed to destroy the importance of code verification. It’s also pretty much dull as dirt. Unless you just love math (some of us do), MMS isn’t exciting. We need to move toward practices people give a shit about. I’m going to start by naming a few.

“If you thought that science was certain – well, that is just an error on your part.” ― Richard P. Feynman

It Can Measure Accuracy

“I learned very early the difference between knowing the name of something and knowing something.” ― Richard P. Feynman

I’ve already taken a stab at this topic, noting that code verification needs a refresh:

https://williamjrider.wordpress.com/2024/09/21/code-verification-needs-a-refresh/

Here, I will just recap the first and most obvious overlooked benefit, measuring the meaningful accuracy of codes. Code verification’s standard definition concentrates on order of accuracy as the key metric. Practical solutions with a code rarely achieve the design order of accuracy. This further undermines code verification as significant. Most practical solutions give first-order accuracy (or lower). The other metric from verification is error, and with analytical solutions, you can get precise errors from the code. This connects directly to the efficiency of a code, the next thing to focus on.

A practical measure of code efficiency is accuracy per unit effort. Both of these can be measured with code verification. One can get the precise errors by solving a problem with an analytical solution. By simultaneously measuring the cost of the solution, the efficiency can be assessed. For practical use, this measurement means far more than finding bugs via standard code verification. Users simply assume codes are bug-free and discount the importance of this. They don’t actually care much because they can’t see it. Yes, this is dysfunctional, but it is the objective reality.
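A minimal sketch of that measurement, where solve(n) is a hypothetical stand-in for running the code at resolution n and returning its error against the analytical solution:

# Accuracy per unit effort: pair the error against an analytical solution
# with the cost of producing it.
import time

def efficiency_study(solve, resolutions):
    results = []
    for n in resolutions:
        start = time.perf_counter()
        error = solve(n)                 # hypothetical: run the code, return the error
        cost = time.perf_counter() - start
        results.append((n, error, cost))
        print(f"n={n}: error={error:.3e}, cost={cost:.2f}s, "
              f"error*cost={error * cost:.3e}")
    return results

# Comparing error-for-a-given-cost (or cost-for-a-given-error) across codes or
# methods is the practical efficiency argued for here.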

The measurement and study of code accuracy is the most straightforward extension of the nominally dull-as-dirt practice. There’s so much more, as we shall see.

It Can Test Symmetries

“Symmetry is what we see at a glance; based on the fact that there is no reason for any difference…” ― Blaise Pascal

One of the most important aspects of many physical laws is symmetry. Symmetries are often preserved by ideal versions of these laws (like the differential equations codes solve). Many of these symmetries are preserved only inexactly by the methods in codes. Some of these symmetries are simple, like preservation of geometric symmetry, such as cylindrical or spherical flows. This can give rise to simple measures that accompany classical analytical solutions. The symmetry measure can augment the standard verification approach with additional value. In some applications, the symmetry is of extreme importance.

There are many more problems that can be examined for symmetry without having an analytical solution. One can create all sorts of problems with symmetries built into the solution. A good example of this is a Rayleigh-Taylor instability problem with a symmetry plane where left-right symmetry is desired. The solution can be examined as a function of time. This is an instability, and the challenge to symmetry grows over time. As the problem evolves forward the lack of symmetry becomes more difficult to control. It makes the test extreme if run for very long times. Symmetry problems also tend to grow as the mesh is refined.
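A minimal sketch of such a symmetry measure; the density field here is a synthetic placeholder standing in for output from a real code.

# Mirror-symmetry check: reflect a 2D field across the symmetry plane and
# report the normalized difference. Zero means perfect left-right symmetry;
# growth of this number over time or under refinement is the signal to watch.
import numpy as np

def asymmetry(field):
    """Relative L2 difference between a field and its left-right mirror."""
    mirrored = field[:, ::-1]                # reflect across the vertical plane
    return np.linalg.norm(field - mirrored) / np.linalg.norm(field)

# Hypothetical density field, symmetric by construction.
ny, nx = 64, 64
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
rho = 1.0 + 0.5 * np.outer(y, np.cos(np.pi * x))

print("asymmetry of symmetric field:", asymmetry(rho))            # essentially zero
rho_perturbed = rho + 1e-6 * np.random.default_rng(0).standard_normal(rho.shape)
print("asymmetry after perturbation:", asymmetry(rho_perturbed))  # small but nonzero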

It is a problem I used over 30 years ago to learn how to preserve symmetry in an incompressible variable-density code. I found the symmetry could be threatened by two main parts of the code: the details of upwinding in the discretization, and the numerical linear algebra. I found that the pressure solve needed to be symmetric as well. I had to modify each part of the algorithm to get my desired result. The upwinding had to be changed to avoid any asymmetry concerning the sign of upwinding. This sort of testing and improvement is the hallmark of high-quality code and algorithms. Too little of this sort of work is taking place today.

It Can Find “Features”

“A clever person solves a problem. A wise person avoids it.” ― Albert Einstein

The usual mantra for code verification is that a lack of convergence means a bug. This is not true. It is a very naive and limiting perspective. For codes that compute solutions with shock waves (and weak solutions), correct solutions require conservation and entropy conditions. Methods and codes do not always adhere to these conditions. In those cases, a “perfect” bug-free code will produce incorrect solutions. They will converge to a solution, just the wrong one. The wrong solution is a feature of the method and code. These wrong solutions are revealed easily by extreme problems with very strong shocks.

These features are easily fixed by using different methods. The problem is that the codes with these features are the product of decades of investment and reflect deeply held cultural norms. Respect for verification is acutely not one of those norms. My experience is that users of these codes make all sorts of excuses for this feature. Mostly, this sounds like the systematic devaluing of the verification work and excuses for ignoring the problem. Usually, they start talking about how important the practical work the code does is. They fail to see how damning the failure to solve these problems is. Frankly, it is a pathetic and unethical stand. I’ve seen this over and over at multiple Labs.

Before I leave this topic, I will get to another example of a code feature. This has a similarity to the symmetry examination. Shock codes can often have a shock instability with a funky name, the carbuncle phenomenon. This is where you have a shock aligned with a grid, and the shock becomes non-aligned and unstable. This feature is a direct result of properly implemented methods. It is subtle and difficult to detect. For a large class of problems, it is a fatal flaw. Fixing the problem requires some relatively simple but detailed changes to the code. It also shows up in strong shock problems like Noh and Sedov. At the symmetry axes, the shocks can lose stability and show anomalous jetting.

This gets to the last category of code verification benefits, determining a code’s limits and a research agenda.

“What I cannot create, I do not understand.” ― Richard P. Feynman

It Can Find Your Limits and Define Research

If you are doing code verification correctly, the results will show you a couple of key things: what the limits of the code are, and where research is needed. My philosophy of code verification is to beat the shit out of a code. Find problems that break the code. The better the code is, the harder it is to break. The way you break the code is to define harder problems with more extreme conditions. One needs to do research to get to the correct convergent solutions.

Where the code breaks is a great place to focus on research. Moving the horizons of capability outward can define an excellent and useful research agenda. In a broad sense, the identification of negative features is a good practice (the previous section). Another example of this is extreme expansion waves approaching vacuum conditions. In the past, I have found that most usual shock methods cannot solve this problem well. Solutions are either non-convergent or so poorly convergent as to render the code useless.

This problem is not altogether surprising given the emphasis on methods. Computing shock waves has been the priority for decades. When a method cannot compute a shock properly, the issues are more obvious. There is a clear lack of convergence in some cases or catastrophic instability. Expansion waves are smooth, offering less challenge, but they are also dissipation-free and nonlinear. Methods focused on shocks shouldn’t necessarily solve them well (and don’t when they’re strong enough).

I’ll close with another related challenge. The use of methods that are not conservative is driven by the desire to compute adiabatic flows. For some endeavors like fusion, adiabatic mechanisms are essential. Conservative methods necessary for shocks often (or generally) cannot compute adiabatic flows well. A good research agenda might be finding methods that achieve conservation, preserve adiabatic flows, and capture strong shock waves. A wide range of challenging verification test problems is absolutely essential for success.

“The first principle is that you must not fool yourself and you are the easiest person to fool.” ― Richard P. Feynman

Does Uncertainty Quantification Replace V&V?

19 Monday May 2025

Posted by Bill Rider in Uncategorized

≈ Leave a comment

tl;dr

Uncertainty quantification (UQ) is ascending while verification & validation (V&V) is declining. UQ is largely done in silico and offers results that trivially harness modern parallel computing. UQ thus parallels the appeal of AI and easy computational results. It is also easily untethered from objective reality. V&V is a deeply technical and naturally critical practice. Verification is extremely technical. Validation is extremely difficult and time-consuming. Furthermore, V&V can be deeply procedural and regulatory. UQ has few of these difficulties, although its techniques are quite technical. Unfortunately, UQ without V&V is not viable. In fact, UQ without the grounding of V&V is tailor-made for bullshit and hallucinations. The community needs to take a different path that mixes UQ with V&V, or disaster awaits.

“Doubt is an uncomfortable condition, but certainty is a ridiculous one.” ― Voltaire

UQ is Hot

If one looks at the topics of V&V and UQ, it is easy to see that UQ is a hot topic. It is in vogue and garners great attention and support. V&V is not. Before I give my sharp critique of UQ, I need to make something clear. UQ is extremely important and valuable. It is necessary. We need better techniques, codes, and methodologies to produce estimates of uncertainty. The study of uncertainty forces us to grapple with a key question: how much don't we know? This is difficult and uncomfortable work. Our emphasis on UQ is welcome. That said, this emphasis needs to be grounded in reality. I've spoken in the past about the danger of ignoring uncertainty. When an uncertainty is ignored, it gets the default value of ZERO. In other words, ignored uncertainties are assigned the smallest possible value.

“The quest for certainty blocks the search for meaning. Uncertainty is the very condition to impel man to unfold his powers. ” ― Erich Fromm

As my last post discussed, the focus needs to be rational and balanced. When I observe the current conduct of UQ research, I see neither quality. UQ takes on the mantle of the silver bullet. It falls prey to the "free lunch" fallacy. It seems easy and tailor-made for our computationally rich environment. It produces copious results without many of the complications of V&V. The practice of V&V is organized doubt grounded in deep technical analysis. It is uncomfortable and asks hard questions, often questions that don't have easy or available answers. UQ just gives answers with ease, and it's semi-automatic and in silico. You just need lots of computing power.

“I would rather have questions that can’t be answered than answers that can’t be questioned.” ― Richard Feynman

This ease gets to the heart of the problem. UQ needs validation to connect it to objective reality. UQ needs verification to make sure the code is faithful to the underlying mathematics. Neither practice is so well established that it can be ignored. Yet, increasingly, they are being ignored. Experimental results are needed to challenge our surety. UQ has a natural appetite for computing, so our exascale computers lap it up. It is a natural way to create vast amounts of data. UQ attaches statistics naturally to modeling & simulation, filling a long-standing gap. Machine learning connects as well, being in many ways the algorithmic extension of statistics. The mindset parallels the recent euphoria for AI.

For these reasons, UQ is a hot topic, attracting funding and attention today. Being in silico, UQ becomes an easy way to get V&V-like results from ML/AI. What is missing? For the most part, UQ is done assuming V&V is done. For AI/ML this is a truly bad assumption. If you're working in simulation, you know the assumption is suspect there too. The basic methodology of V&V is largely settled, but its practice is generally haphazard and poor. My observation is that computational work has regressed concerning V&V. Advances once made in publishing and research standards have gone backward in recent years. Rather than being completed and simply applied, V&V is despised.

All of this adds up to a distinct danger for UQ. Without V&V, UQ is simply an invitation for ModSim hallucinations akin to the problem AI has. Worse yet, it is an invitation to bullshit the consumers of simulation results. Answers can be given knowing they have a tenuous connection to reality. It is a recipe for fooling ourselves with false confidence.

“The first principle is that you must not fool yourself and you are the easiest person to fool.” ― Richard P. Feynman

AI/ML Feel Like UQ and Surrogates are Dangerous

One of the big takeaways from the problems with UQ is the appeal of its in-silico nature. This paves the way to easy results. Once you have a model and a working simulation, UQ is like falling off a log. You just need lots of computing power and patience. Yes, it can be improved and made more accurate and efficient. Nonetheless, UQ asks no real questions about the results. Turn the crank and the results fall out (unless it triggers a problem with the code). You can easily get results, although doing so efficiently is a research topic. Nonetheless, being in silico removes most of the barriers. Better yet, it uses the absolute fuck out of supercomputers. You can fill a machine right up with calculations. You get results galore.
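
To illustrate just how much of a crank-turn it is, here is a minimal sketch; the "simulator" is a throwaway stand-in of my own, not any real code, and the input uncertainties are made up:

```python
import random

def simulator(drive_energy, opacity_multiplier):
    # Hypothetical cheap stand-in for an expensive physics calculation.
    return drive_energy ** 1.5 / opacity_multiplier

# Sample the uncertain inputs, run the model, collect the outputs. In practice
# each call is an expensive simulation, which is why this eats whole machines.
outputs = []
for _ in range(10_000):
    e = random.gauss(1.0, 0.05)       # assumed input uncertainty
    k = random.uniform(0.9, 1.1)      # assumed input uncertainty
    outputs.append(simulator(e, k))

outputs.sort()
n = len(outputs)
print("median:", outputs[n // 2])
print("90% interval:", outputs[int(0.05 * n)], "to", outputs[int(0.95 * n)])
```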

If you're paying attention to the computational landscape, you should be experiencing déjà vu. AI is some exciting in-silico shit that uses the fuck out of the hardware. Better yet, UQ is an exciting thing to do with AI (or machine learning, really). Even better, you can use AI/ML to make UQ more efficient. We can use computational models and results to train ML models that can cheaply evaluate uncertainty. All those super-fast computers can generate a shitload of data. These are called surrogates, and they are all the rage. Now you don't have to run the expensive model anymore; you just train the surrogate and evaluate the fuck out of it on the cheap. The catch: machine learning is generally poor at extrapolating, and in high dimensions (UQ is very high dimensional) you are always extrapolating. You had better understand what you're doing, and machine learning isn't well understood.
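
A minimal sketch of the extrapolation hazard (entirely synthetic; the "expensive model" is just an exponential): fit a surrogate on a limited input range, then query it outside that range and compare it to the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 50)                            # training inputs on [0, 1]
y_train = np.exp(3.0 * x_train) + rng.normal(0.0, 0.05, 50)    # "expensive model" outputs

# Fit a simple polynomial surrogate to the training data.
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=4)

# x = 0.5 is interpolation; 1.5 and 2.0 are extrapolation, where the error
# typically grows quickly because nothing constrains the surrogate out there.
for x in (0.5, 1.5, 2.0):
    print(f"x = {x}: surrogate = {surrogate(x):10.2f}   truth = {np.exp(3.0 * x):10.2f}")
```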

What could possibly go wrong?

If the model you trained the surrogate on has weak V&V, a lot can go wrong. You are basically evaluating bullshit squared. Validation is essential to establishing how good a model is. Validation should produce a model form error that expresses how well the computational model works. The model also has numerical errors due to the finite representation of the computation. I can honestly say that I've never seen either of these fundamental errors reported alongside a surrogate. Nonetheless, surrogates are being developed to power UQ all over the place. Surrogates aren't a bad idea, but these basic V&V steps should be an intrinsic part of building them. To me, the state of affairs says more about the rot at the heart of the field. We have lost the ability to seriously question computed results. V&V is a vehicle for asking those questions. They are apparently too uncomfortable to confront.
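
At minimum, the surrogate's answer should travel with those error estimates rather than being quoted bare. A crude sketch of what I mean (all numbers and the combination rule are illustrative, nothing more):

```python
# Hypothetical numbers: a surrogate prediction plus the errors that should
# accompany it, instead of quoting the surrogate value alone.
surrogate_prediction = 4.2    # surrogate output at a new input
model_form_error     = 0.6    # |simulation - experiment| at nearby validation points
discretization_error = 0.9    # Richardson-style estimate on the training simulations
surrogate_fit_error  = 0.3    # cross-validation error of the surrogate itself

# A crude, conservative combination; a real analysis would be more careful
# about correlations and about whether these errors are biases or spreads.
total = (model_form_error**2 + discretization_error**2 + surrogate_fit_error**2) ** 0.5
print(f"prediction = {surrogate_prediction} +/- {total:.2f}")
```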

V&V is Hard; Too Hard?

I've worked at two NNSA Labs (Los Alamos and Sandia) in the NNSA V&V program, so I know where the bodies are buried. I've been part of some of the achievements of the program and seen where it has failed. I was at Los Alamos when V&V arrived. It was like pulling teeth to make progress. I still remember the original response to validation as a focus: "Every calculation a designer does is all the validation we need!" The Los Alamos weapons designers wanted hegemonic control over any assessment of simulation quality. Code developers, computer scientists, and any non-designer were deemed incompetent to assess quality. To say it was an uphill battle is an understatement. Nonetheless, progress was made, albeit modest.

V&V fared better at Sandia. In many ways, the original composition of the program had its intellectual base at Sandia, which explains a lot of the foundational resistance from the physics labs. It also gets at much of the problem with V&V today. Its focus on the credibility of simulations makes it very process-oriented and regulatory. As such, it is eye-rollingly boring and generates hate. This character was the focus of the opposition at Los Alamos (Livermore too). V&V has too much of an "I told you so" vibe. No one likes this, and V&V starts to get ignored because it just delivers bad news. Put differently, V&V asks lots of questions but generates few answers.

Since budgets are tight and experiments are scarce, the problems grow. We start to demand that calculations agree closely with available data. Early predictions typically don't meet that standard. By and large, simulations carry a lot of numerical error even on the fastest computers. The cure is an ex post facto calibration of the model to better match experiments. The problem is that this short-circuits validation. Basically, there is little or no actual validation. Almost everything is calibrated unless the simulation is extremely easy. The calibrated model lives on a fixed grid, so there's no verification either. Verification delivers bad news with no viable near-term fix. What you can do with such a model is lots of UQ. Thus UQ becomes the deliverable for the entire V&V program.

To really see this clearly we need to look West to Livermore.

What does UQ mean without V&V?

I will say up front that I’m going to give Livermore’s V&V program a hard time, but first, they need big kudos. The practice of computational physics and science at Livermore is truly first-rate. They have eclipsed both Sandia and Los Alamos in these areas. They are exceptional code developers and use modern supercomputers with immense skill. They are a juggernaut in computing.

By almost any objective measure, Livermore's V&V program produced the most important product of the entire ASC program: common models. Livermore has a track record of telling Washington one thing and doing something a bit different. Even better, this something different is something better. Not just a little better, but a lot better. Common models are the archetypal example of this. Back in the 1990s, there was a metrics project looking at validating codes. Not a terrible idea at all. In a real sense, Livermore did do the metrics in the end, but took a different, smarter path to them.

What Livermore scientists created instead was a library of common models that combine experimental data, computational models, and auxiliary experiments. The data was accumulated across many different experiments and connected into a self-consistent set of models. It is an incredible product. It has since been repeated at Livermore, and at Los Alamos too, across the application space. I will note that Sandia hasn't created this, but that's another story of differences in lab culture. These suites of common models are utterly transformative to the program. It is a massive achievement. Good thing too, because the rest of V&V there is far less stellar.

What Livermore has created instead is lots of UQ tools and practice. The common models are great for UQ too. One of the first things you notice about Livermore's V&V is the lack of V&V. Verification leads the way in being ignored. The reasons are subtle and cultural. Key to this is an important observation about Livermore's identity as the fusion lab. Achieving fusion is the core cultural imperative. Recently, Livermore achieved a breakthrough at the National Ignition Facility (NIF): "breakeven" in terms of energy. They got more fusion energy out than laser energy in for some NIF experiments (it all depends on where you draw the control volume!).

The NIF program also supplies the archetypal example of UQ gone wrong. Early in the NIF program, there was a study of fusion capsule design and results. It examined a large span of uncertainties in the modeling of NIF and NIF capsules. It was an impressive display of the UQ tools developed by Livermore, their codes, and their computers. At the end of the study, they produced an immensely detailed and carefully studied prediction of outcomes for the upcoming experiments, presented as a probability distribution function of capsule yield. It covered an order of magnitude, from 900 kJ to 9 MJ of yield. When the experiments were conducted, the results fell below the predicted range by roughly a factor of three. The problems and dangers of UQ were laid bare.

“The mistake is thinking that there can be an antidote to the uncertainty.” ― David Levithan

If you want to achieve fusion, the key is really hot and dense matter. The way to get there is to adiabatically compress the living fuck out of the material. To do this, there is a belief that Lagrangian numerical hydrodynamics, with artificial viscosity turned off in adiabatic regions, gives good results. The methods they use are classical and oppositional to modern shock-capturing methods. Computing shocks properly needs dissipation and conservation. The ugly reality is that hydrodynamic mixing (excessively non-adiabatic, dissipative, and ubiquitous) is anathema to Lagrangian methods. Codes need to leave the Lagrangian frame of reference and remap. Conserving energy through that remap is one considerable difficulty.
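
For readers outside the field, the belief rests on the classic artificial viscosity switch. Written schematically (from memory; check a reference before quoting it), the von Neumann-Richtmyer form adds a pressure-like dissipation only where the flow is compressing,

$$
q_i \;=\;
\begin{cases}
C_q\,\rho_i\,(\Delta u_i)^2, & \Delta u_i < 0 \ \text{(compression)},\\
0, & \text{otherwise},
\end{cases}
$$

where $\Delta u_i$ is the velocity jump across cell $i$ and $C_q$ is an order-one constant. The switch is precisely what makes the approach attractive for adiabatic regions, since smooth expansion sees no added dissipation.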

Thus, the methods favored at Livermore cannot successfully pass verification tests for strong shocks, and so verification results are not shown. For simple verification problems there are techniques that produce good answers, but those techniques don't work on the applied problems. Thus, good verification results for shocks don't apply to the key cases where the codes are actually used. They know the results will be bad; they follow the fusion mantra anyway. Failing to recognize the consequences of bad code verification results is magical thinking. Nothing induces magical thinking like a cultural perspective that values one thing over all others. This is a form of the extremism discussed in the last blog post.

There are two unfortunate side effects. The first, obvious one is the failure to pursue numerical methods that simultaneously preserve adiabats and compute shocks correctly. This is a serious challenge for computational physics. It should be vigorously pursued and developed by the program. It also represents a question asked by verification without an easy answer. The second side effect is a complete commitment to UQ as the vehicle for V&V, where answers are given and questions aren't asked. At least not really hard questions.

UQ is what's left, and it becomes the focus. It is much better for producing results and giving answers. If those answers don't need to be correct, we have an easy "success."

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” ― Richard P. Feynman

A Better Path Forward

Let's be crystal clear about the UQ work at Livermore: it is very good. Their tools are incredible. In a purely in silico sense the work is absolutely world-class. The problems with the UQ results are all related to gaps in verification and validation. The numerical results are suspect and generally under-resolved. The validation of the models is lacking. This gap stems chiefly from the lack of acknowledgment of calibration. Calibration is essential for useful simulations of challenging systems; NIF capsules are one obvious example, and global climate models are another. We need to choose a better way to focus our work and use UQ properly.

“Science is what we have learned about how to keep from fooling ourselves.” ― Richard Feynman

My first key recommendation is to ground calibrated models with genuine validation. There needs to be a clear separation of what is validated and what is calibrated. One of the big drivers of the calibration is the finite mesh resolution of the models, so the calibrations mix both model form error and numerical error. All of this needs to be sorted out and clarified. In many cases, these are the dominant uncertainties in simulations. They swallow and neutralize the UQ we lavish our attention on. This is the most difficult problem we are NOT solving today. It is one of the questions raised by V&V that we need to answer.
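
One way to write down the bookkeeping this asks for (my notation, not from any program document): for an observable $y$ at conditions $x$, computed on a mesh of size $h$,

$$
y_{\mathrm{exp}}(x) - y_h(x) \;\approx\; \epsilon_{\mathrm{model}}(x) + \epsilon_h(x) + \epsilon_{\mathrm{data}}(x),
$$

with $\epsilon_{\mathrm{model}}$ the model form error, $\epsilon_h$ the numerical (discretization) error, and $\epsilon_{\mathrm{data}}$ the experimental uncertainty. Calibrating knobs at a fixed $h$ drives the left-hand side toward zero, which quietly buries $\epsilon_h$ inside the "physics" and leaves nothing that can honestly be called validation.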

“Maturity, one discovers, has everything to do with the acceptance of ‘not knowing.” ― Mark Z. Danielewski

The practice of verification is in crisis. The lack of meaningful estimates of numerical error in our most important calculations is appalling. Code verification has become a niche activity without any priority. Code verification for finding bugs is about as important as taking your receipt with you from the grocery store: nice to have, but very rarely checked by anybody, and when it is checked, it's probably because something looks a little sketchy. Code verification is key to code and model quality. It needs to expand in scope and utility. It is also the recipe for improved codes. Our codes and numerical methods need to progress; they must continue to get better. The challenge of correctly computing strong shocks and adiabats is but one example. There are many others that matter. We are still far from meeting our modeling and simulation needs.
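
The basic machinery for the missing error estimates is not exotic. A minimal sketch (with made-up numbers) of estimating numerical error from a grid triplet via Richardson extrapolation, which needs no exact solution:

```python
import math

# Three values of some scalar output from the same calculation on successively
# refined grids with constant refinement ratio r; f1 is the finest grid.
f1, f2, f3 = 1.012, 1.049, 1.162   # hypothetical values
r = 2.0

# Observed order of convergence from the grid triplet.
p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)

# Richardson-extrapolated estimate of the grid-converged value and the
# implied numerical error on the finest grid.
f_star = f1 + (f1 - f2) / (r**p - 1.0)
print(f"observed order ~ {p:.2f}, estimated error on finest grid ~ {abs(f1 - f_star):.3g}")
```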

“What I cannot create, I do not understand.” ― Richard P. Feynman

Ultimately we need to recognize that V&V is a partner to science. It asks key questions of our computational science. The goal is then to meet those questions with a genuine search for answers. V&V also provides evidence of how well each question is answered. This question-and-answer cycle is how science must work. Without the cycle, progress stalls, and the hopes of the future are put at risk.

“If you thought that science was certain – well, that is just an error on your part.” ― Richard P. Feynman
