The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

A Methods Challenge Worthy of Being Called World-Class

04 Wednesday Mar 2026

Posted by Bill Rider in Uncategorized

tl;dr

There is a sense of stagnation in progress for numerical methods for hyperbolic systems (gas dynamics). There are reasons for the relative stasis. To some extent, these methods are a victim of their own success. There are some really deep, conflicting priorities for methods in places I used to work. On the positive side, the conditions for nuclear fusion are exceedingly challenging to compute. You want to avoid shock waves. Shock waves are also impossible to avoid. With shock waves, conservation is essential unless the shock is explicitly tracked. For fusion conditions, conservation is generally disregarded because flows are desired to be adiabatic, thus shockless and smooth. Work to satisfy both priorities simultaneously is completely lacking. It would seemingly be important to pursue, but it’s not. We have simply surrendered to this as a challenge.

So let’s peel this onion.

“Knowledge has to be improved, challenged, and increased constantly, or it vanishes.” ― Peter Drucker

Conflicting Priorities

The national labs I used to work at solve a host of extremely challenging problems. These are high-energy-density problems with some really challenging conditions to consider. These involve exotic conditions and difficult processes to engineer. Canonically, explosions and shock waves are ever-present. Amongst the most difficult things to produce are the conditions for nuclear fusion. Computational tools are necessary and ubiquitous for the design and analysis of these technologies. As I’ve noted before, these labs are the origin of the technology known as computational fluid dynamics (CFD). The past still rules choices today, and the labs are a bit of a scientific cul-de-sac. The cultures resist outside ideas. Security and other conditions are increasingly isolating the labs from the World.

The two big priorities for the labs above are seemingly in conflict with one another. Not seemingly, they are in conflict! To compute shock waves, there are two approaches that have been successful. One is shock tracking. In this approach, the evolution of the shock wave is explicitly computed and updated. This was the first approach taken at the labs in computations done in World War 2. It is exceedingly complex, especially in more than one dimension. Shock capturing and artificial viscosity were invented as an alternative. Shock capturing dominates because of its generality and relative simplicity. It allows the examination of complex engineered systems.

The methods developed initially for shock capturing were in the Lagrangian frame of reference. These methods were quite successful, but have some limits. These methods are not generally conservative of energy. The internal energy equation is evolved. One mantra I have to repeat over and over is that the internal energy equation is not a conservation law. It is an evolution equation. To get a conservation law, the total energy must be conserved. In the Lagrangian frame, the conservation errors are small, being close to negligible. As soon as you leave the Lagrangian frame, these errors become large and have negative consequences. You get the wrong speed of propagation for shocks unless you are intentional.
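The distinction is easy to state in one dimension. The Euler equations in conservation form evolve the total energy as a pure divergence:

```latex
\partial_t \rho + \partial_x(\rho u) = 0, \qquad
\partial_t(\rho u) + \partial_x(\rho u^2 + p) = 0, \qquad
\partial_t E + \partial_x\!\left[u\,(E + p)\right] = 0, \quad E = \rho e + \tfrac{1}{2}\rho u^2.
```

By contrast, the internal energy evolution equation carries a pressure-work source term,

```latex
\partial_t(\rho e) + \partial_x(\rho u e) = -\,p\,\partial_x u,
```

which is not in divergence form. That source term is exactly why no discrete telescoping sum, and hence no conservation statement, follows from it.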

“Don’t handicap your children by making their lives easy.” ― Robert A. Heinlein

It has been shown that conservation is essential for computing shock waves properly. This is the Lax-Wendroff theorem. Shocks are weak solutions of the equations, and conservation is essential to getting a weak solution. There is an additional caveat: proper dissipation is needed to obtain the correct weak solution. Weak solutions are not unique, and one must select the physical one. For most of CFD, this conservation is essential. Most computations for systems with shocks are conservative. Outside the labs, developments adhere to this dictum. Inside the NNSA labs, not so much. The Lax-Wendroff theorem was developed at Los Alamos. For decades, it was just ignored there. That is part of the story here, and the challenge is to stop ignoring it.

The simple explanation of why conservation is ignored is available. In brief, the Labs are interested in a wide range of very high-Mach-number flows. If the Mach number is very high, the flow's energy is dominated by kinetic energy. If one evolves total energy, the internal energy is found via a subtraction of two large numbers, e = E_t - K, where E_t is the total energy and K = (1/2) sum_i u_i^2 is the kinetic energy. This subtraction is error-prone and produces errors that can be quite important for fusion conditions. This gets to the Labs' resistance to conservation form. For successful fusion designs, these errors are intolerable.
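A minimal numerical sketch makes the cancellation concrete. The numbers here are illustrative (nondimensional, not from any production code), but the effect is generic:

```python
# Demonstrate the catastrophic cancellation in e = E_t - K at high Mach number.
# Illustrative, nondimensional numbers only.
import numpy as np

def internal_energy_from_total(E_t, u):
    """Recover specific internal energy by subtracting kinetic energy."""
    K = 0.5 * u**2          # specific kinetic energy
    return E_t - K

mach = 1.0e4                              # very high Mach number flow
c = 1.0                                   # sound speed
u = mach * c
gamma = 5.0 / 3.0
e_true = c**2 / (gamma * (gamma - 1.0))   # internal energy consistent with c
E_t = e_true + 0.5 * u**2                 # total energy ~ 5e7, dominated by K

e64 = internal_energy_from_total(E_t, u)                        # double precision
e32 = internal_energy_from_total(np.float32(E_t), np.float32(u))  # single precision
# In single precision the subtraction of two numbers of order 5e7 wipes out
# e ~ 0.9 entirely; even in double precision roughly eight digits are lost.
```

The single-precision answer is pure noise, and the double-precision answer, while usable here, has already burned most of its significant digits on a Mach-10^4 flow.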

Fusion happens where the proper materials (isotopes of hydrogen) are put into very dense and very hot conditions. Fusion happens in very exciting stars with more than hydrogen, BTW! Getting these materials into dense, hot conditions is quite challenging. Using shock waves to do this does not work well. Density and energy growth via shocks is quite inefficient. One of the explanations is the growth of entropy with shocks. A shock wave always raises the entropy of the material. This makes it harder for each additional shock to improve the conditions for fusion. The trick is to compress the material adiabatically with no increase in entropy. These conditions are quite difficult to engineer because compressible flows shock readily. They require careful tuning and the creation of very high Mach number flows that are carefully balanced to avoid shocks.
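The inefficiency of shock compression can be made precise. The Rankine-Hugoniot conditions bound the density jump across a single shock of Mach number M:

```latex
\frac{\rho_2}{\rho_1} \;=\; \frac{(\gamma+1)\,M^2}{(\gamma-1)\,M^2 + 2}
\;\longrightarrow\; \frac{\gamma+1}{\gamma-1} \quad \text{as } M \to \infty,
```

which is only a factor of 4 for a monatomic gas (gamma = 5/3), no matter how strong the shock, while the entropy jump grows without bound. Adiabatic compression has no such ceiling: along an isentrope p proportional to rho^gamma, the density can in principle be raised arbitrarily at the minimum possible energy cost.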

The original methods derived by von Neumann (and Richtmyer) are great for computing these kinds of flows. Conservation form methods solving total energy generally suck at it. Thus, at the Labs, the original non-conservative methods are favored. At Livermore, this favor is almost pathological (definitely cultural). For the optimistic view of technology, the original methods are great. All is well in the Lagrangian frame. The issue is that mixing and turbulence demand leaving the Lagrangian frame. To deal with the pessimistic side of technology and shock waves, the original methods are problematic. Both turbulence and shocks are ubiquitous.

Something needs to give. Why can't we have both? It's been 70 fucking years, for Christ's sake.

“The mind, once stretched by a new idea, never returns to its original dimensions.” ― Ralph Waldo Emerson

Are the Priorities Incompatible?

Even today the mantra demands one to make a choice. You either use a method that computes adiabatic evolution properly, but fucks up shocks, or one that computes shocks right and fucks up adiabats. At Livermore, where fusion is king, the first choice wins all the time. Livermore has a big wake, and this choice dominates computation at the Labs. Thus, for the other labs too, shocks are hosed. Any serious code uses remap, and most problems eventually become Eulerian even if they start Lagrangian. Thus they start to lose conservation of energy. This loss of energy makes incorrect shock wave evolution inevitable. Testing confirms that this happens as a matter of course.

The fix for this is simple, but not currently taken: go to conservation form. Instead, a fix that returns the evolution equations to conservation was invented by DeBar in the 1970s. Basically, the kinetic energy deficit (or surplus) is added back to restore conservation. One of the issues is that this is not in conservation form. Thus, Lax-Wendroff does not apply as a theoretical backstop. The DeBar approach has shown itself to be effective in getting correct solutions. It is also incredibly fragile. It generally does not function reliably on complex problems. It falls prey to the basic subtraction issue mentioned above.

The main community outside the Labs facing similar problems are the astrophysical scientists. In the 1980s they used codes based on technology from Livermore (written by a group headed by Mike Norman). These codes have been supplanted by conservative methods based on the piecewise parabolic method (PPM). Thus today the astrophysical calculations are done using modern conservative methods. Most commonly PPM is the basis. The author of PPM is Paul Woodward, who worked at Livermore in the 1970s and 1980s. Paul also worked with Bram van Leer while on sabbatical, influencing his approach. Astrophysicists are also interested in adiabatic compression and fusion conditions. The difference is degree. Fusion like that in ICF is far more extreme than astrophysical flows.

Those familiar with my writing might recognize that I really favor PPM as a method and basic framework. I also believe that conservation is essential. It should not be disregarded as the Labs do. Correct shock wave propagation is too important to throw away.

“I am sufficiently proud of my knowing something to be modest about my not knowing all.” ― Vladimir Nabokov

Having Your Cake and…

This gets to the big challenge I am thinking about. Can we have methods that compute adiabats properly and have conservation? I firmly believe the answer is yes, and it is bemusing that we haven’t done this yet. The issue is that the adiabatic evolution is incredibly demanding. I would counter that the errors and faults with shock solutions are obvious too, and far larger. In dealing with the technologies the Labs are responsible for, both are essential. I see three possibilities that are most likely to succeed at both. None of these are likely to get the effort they deserve today as all eyes are on AI.

1. A robust DeBar (kinetic energy fix).

The simplest approach might be to make the kinetic energy fix in remap work. The issue is the fragility of the approach. The way forward seems to be to enforce conservation over time and carry a "bank" of the difference along until it can safely be used. That said, the method still won't match the Lax-Wendroff theorem. There is no reason to believe that results are safe for complex shocked flows. I think the lesson of this fix is that kinetic energy is the core of the problem to be solved. A big part of this is entropy conditions and consistency.
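The "bank" idea can be sketched in a few lines. Everything here is a hypothetical illustration of the concept, not any production algorithm: the function name, arguments, and the release criterion are all assumptions.

```python
# Sketch of a conservation "bank" for a DeBar-style kinetic energy fix.
# All names and the release criterion are hypothetical illustrations.

def remap_step(e, ke_resolved, ke_conserved, bank, dissipative):
    """Advance one remap step, banking the kinetic energy discrepancy.

    e            : internal energy after remap
    ke_resolved  : kinetic energy of the remapped velocity field
    ke_conserved : kinetic energy implied by conservation of total energy
    bank         : running store of unreleased discrepancy
    dissipative  : True once the flow is shocked or mixing, so the deposit
                   can be converted to heat as a positive entropy source
                   (respecting the second law) rather than immediately
    """
    bank += ke_conserved - ke_resolved   # deficit (or surplus) this step
    if dissipative and bank > 0.0:
        e += bank                        # convert banked KE to internal energy
        bank = 0.0
    return e, bank

# Usage: a deficit accumulates over smooth steps, then is released at a shock.
e, bank = 1.0, 0.0
e, bank = remap_step(e, 0.48, 0.50, bank, dissipative=False)  # bank grows
e, bank = remap_step(e, 0.45, 0.50, bank, dissipative=True)   # bank released
```

The point of the sketch is the bookkeeping: conservation is enforced over time rather than per step, and the discrepancy only becomes heat where dissipation makes that physically defensible.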

2. A really careful and well-constructed PPM scheme.

The PPM method is a cell-centered version of a method Van Leer created (scheme 5 from his 1977 paper). It is a practical version of the active flux method developed recently. I do think the active flux method could be another path forward. One of the things about adiabatic flows is their smoothness and analytical structure. The sense is that the method could produce the analytical behavior to the level of truncation error. It is possible that using even higher order methods could reduce the errors to the point of being negligible. Work in the literature shows that very careful mathematics and integrations can produce better results. PPM is an amazing conservative method. It also computes a host of complex flows very well. The sort of adiabatic compression needed for successful fusion is a bigger challenge. Perhaps some ideas from the kinetic energy fix could apply here as well.
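The kernel of PPM's accuracy on smooth flows is its fourth-order interface reconstruction. A minimal sketch of the unlimited face-value stencil from Colella and Woodward (1984), with the monotonicity limiters of the full method deliberately omitted:

```python
import numpy as np

def ppm_interface_values(abar):
    """Unlimited fourth-order PPM face values from cell averages on a
    uniform grid (the 7/12, -1/12 stencil of Colella & Woodward 1984).
    The limiters of the full method are omitted for clarity."""
    return (7.0 / 12.0) * (abar[1:-2] + abar[2:-1]) \
         - (1.0 / 12.0) * (abar[:-3] + abar[3:])

# The stencil is exact for cell averages of any cubic; check with a = x^2.
h = 0.1
xc = np.arange(10) * h + h / 2        # cell centers
abar = xc**2 + h**2 / 12.0            # exact cell averages of x^2
faces = ppm_interface_values(abar)    # face values at x = xc[1:-2] + h/2
exact = (xc[1:-2] + h / 2)**2
```

On smooth adiabatic structures this reconstruction recovers the analytical profile to truncation error, which is exactly the behavior the text argues a carefully built PPM could exploit; the hard part is keeping that accuracy once limiters and extreme compressions enter.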

3. Building on the conservative Lagrangian codes from France

The next idea is using the advances in Lagrangian methods that are conservative by construction. Do these methods work sufficiently well a priori for these adiabatic flows? It seems so. The issues with remap do not change: the Lagrangian frame must be discarded once material mixes. The basic mathematics of the subtraction of large numbers is retained. In my experience one needs to be careful and intentional to get robust answers. This applies to the spatial reconstruction step and the Riemann solution. Nonetheless, the adiabatic compression (expansion) challenge is huge. The use of the generalized Riemann problem is also potentially essential.

There may be other ideas to consider to break the impasse. These seem the best bet. In many cases the approaches can produce ideas that need to be blended together. It is pretty clear that the detailed treatment of kinetic energy is the key. There is an additional challenge I've identified that feels very related.

“Life is about accepting the challenges along the way, choosing to keep moving forward, and savoring the journey.” ― Roy T. Bennett

Other Issues to Consider

I’ve written about issues with very strong rarefaction waves before. This topic is probably adjacent to the issues with adiabatic evolutions. I have noted that classical methods of all sorts fail for very strong rarefaction problems. These are problems that approach the expansion of material into a vacuum. A rarefaction is an adiabatic structure. Thus the failure of conservative or remap methods on this class of problem may be related.

I have noted that methods based on the Generalized Riemann Problem (GRP) seem to do better. This is based on work from China and also alluded to by Maire for Kidder’s problem. The GRP approach is the epitome of being super careful in the construction of a method. It seems reasonable that combining some or all of these ideas could provide the solutions. There is a possible solution that would solve the full spectrum of key problems in a unified manner.

This would allow us to have our cake and eat it too.

I can suggest that a successful method would have certain characteristics. I believe strict conservation form is essential. The method should be high-order and maintain strict control of dissipation of all forms. The evolution equations should consider a GRP method as well. The question is how to square the canonical problems with high Mach number adiabatic flows. I might suggest that a separate internal energy equation be evolved to allow better solutions. In adiabatic evolution this equation should be used for better behavior. We may also need a separate kinetic energy equation as well. The key would be how to evolve any gain or loss and then synchronize with the conserved variables. One would want to do this in conjunction with entropy satisfaction. The second law is an important inequality to adhere to. The discrepancy should be allowed to disappear once the flow becomes dissipative via mixing or shocks.
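The auxiliary internal energy idea has a precedent in the dual-energy formalisms of astrophysical codes, which evolve both a conserved total energy and a separate internal energy and pick between them cell by cell. A sketch in that spirit, where the threshold `eta` and all the names are illustrative assumptions:

```python
# Sketch of a dual-energy-style selection, in the spirit of the formalisms
# used in astrophysical codes. The threshold eta is an illustrative knob.

def select_internal_energy(E, ke, e_aux, eta=1.0e-3):
    """Pick the internal energy for the EOS call in one cell.

    E     : conserved total energy density
    ke    : kinetic energy density, 0.5 * rho * u^2
    e_aux : internal energy from a separately evolved (non-conservative)
            internal energy equation
    eta   : below this ratio of (E - ke)/E, the subtraction is deemed
            cancellation-dominated and the auxiliary value is used
    """
    e_cons = E - ke
    if e_cons > eta * E:
        return e_cons   # subtraction well conditioned: stay conservative
    return e_aux        # high-Mach adiabatic regime: trust the aux equation

# A kinetic-energy-dominated cell falls back to the auxiliary value;
# an ordinary cell keeps the conservative one.
e_lo = select_internal_energy(E=1.0e4, ke=9.999e3, e_aux=1.05)
e_hi = select_internal_energy(E=2.0, ke=1.0, e_aux=0.98)
```

The missing piece, as the text says, is the synchronization back to the conserved variables under an entropy inequality, which the astrophysical formalisms handle only crudely.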

The question is whether anyone gives a fuck. All the attention is on AI. The reality is that AI won’t solve this problem, but it could help if used properly.

“If you want something new, you have to stop doing something old” ― Peter Drucker

References

Caramana, E. J., D. E. Burton, Mikhail J. Shashkov, and P. P. Whalen. “The construction of compatible hydrodynamics algorithms utilizing conservation of total energy.” Journal of Computational Physics 146, no. 1 (1998): 227-262.

Maire, Pierre-Henri, Rémi Abgrall, Jérôme Breil, and Jean Ovadia. “A cell-centered Lagrangian scheme for two-dimensional compressible flow problems.” SIAM Journal on Scientific Computing 29, no. 4 (2007): 1781-1824.

Maire, Pierre-Henri. Contribution to the Numerical Modeling of Inertial Confinement Fusion. Report CEA-R-6260. Université Bordeaux 1, France, 2011.

Qian, Jianzhen, Jiequan Li, and Shuanghu Wang. “The generalized Riemann problems for compressible fluid flows: Towards high order.” Journal of Computational Physics 259 (2014): 358-389.

Loubère, Raphaël, Pierre‐Henri Maire, and Pavel Váchal. “3D staggered Lagrangian hydrodynamics scheme with cell‐centered Riemann solver‐based artificial viscosity.” International Journal for Numerical Methods in Fluids 72, no. 1 (2013): 22-42.

Colella, Phillip, and Paul R. Woodward. “The piecewise parabolic method (PPM) for gas-dynamical simulations.” Journal of Computational Physics 54, no. 1 (1984): 174-201.

Woodward, Paul, and Phillip Colella. “The numerical simulation of two-dimensional fluid flow with strong shocks.” Journal of Computational Physics 54, no. 1 (1984): 115-173.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity-and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

Blondin, John M., and Eric A. Lufkin. “The piecewise-parabolic method in curvilinear coordinates.” Astrophysical Journal Supplement Series 88, no. 2 (1993): 589-594.

DeBar, Roger B. Fundamentals of the KRAKEN Code. Report UCID-17366. Lawrence Livermore Laboratory, Livermore, CA, 1974.

Burton, Donald E., Nathaniel R. Morgan, Marc Robert Joseph Charest, Mark A. Kenamond, and Jimmy Fung. “Compatible, energy conserving, bounds preserving remap of hydrodynamic fields for an extended ALE scheme.” Journal of Computational Physics 355 (2018): 492-533.

Hawley, John F., Larry L. Smarr, and James R. Wilson. “A numerical study of nonspherical black hole accretion. I. Equations and test problems.” Astrophysical Journal 277 (1984): 296-311.

Stone, James M., and Michael L. Norman. “ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I. The hydrodynamic algorithms and tests.” Astrophysical Journal Supplement Series 80, no. 2 (1992): 753-790.

Stone, James M., Thomas A. Gardiner, Peter Teuben, John F. Hawley, and Jacob B. Simon. “Athena: a new code for astrophysical MHD.” The Astrophysical Journal Supplement Series 178, no. 1 (2008): 137-177.

This Moment with AI and How to Win It

01 Sunday Mar 2026

Posted by Bill Rider in Uncategorized

tl;dr

Is AI going to replace your job and spike unemployment, or will it supercharge abundance and wealth?

We have a choice about where this goes as a society. The hype around AI is endless and over the top. The hype misses the big opportunity and stokes outlandish fears, too. Almost all the conversation misses what AI brings to the table. In a lot of cases, if the job can be eliminated by AI, much of that job probably shouldn’t be done. The real power of AI is to make people more productive. Cutting the jobs is zero-sum thinking. The key to AI is to boost productivity to do more and sell more. This is the essence of abundance. Use infinite thinking to make more and grow the economy. Zero-sum thinking is at the core of these job cuts. It will turn people against AI. If AI fucks the public, the public will fuck AI back. This is how we lose as a society. A better path is to use it to grow society’s wealth and abundance instead of just growing profits.

This topic is long overdue and needed. We need to think clearly about where all this is going. Right now, no one is. We are not seeing the real core issues around AI. Whether it is the AI companies or the government, it is all bullshit and little light. This bullshit is the hallucinations AI produces regularly. This algorithmic BS is a perfect vehicle for amplifying the lack of trust already corroding society today. This lack of trust could be amplified further and trigger a societal doom loop.

“Abundance of knowledge does not teach men to be wise.” ― Heraclitus

AI is a “Magic” Technology

“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke

One of the things to recognize is just how miraculous AI is. In the course of the internet age, there have been a handful of moments that feel almost magical when they hit you. The first such moment for me was the first time I used Google search. Before Google search happened, the internet had a set of phone book websites. I happened to use one called AltaVista. It was the way you got around and found stuff. Then this new Google search came. It had this amazingly simple interface, and you typed in the query, and suddenly you had results. It was like magic! Once I used Google search, it was like walking through a door, and I never walked back out through it again. AltaVista was gone, and I wasn’t going to ever return to it. Google was like fucking magic!

The next thing that spurred this sort of feeling was a smartphone, the Apple iPhone. I had used a Blackberry and a flip phone. The iPhone was the internet in your pocket plus a built-in iPod. It became even more, and the interface was like a mini-laptop. More magic! The Blackberry was cooked, and these devices became ubiquitous. As we discuss later, these smartphones were turned on society over time. The worst thing about Google and smartphones was the enshittification they unleashed. My stance is that enshittification is a choice driven by maximizing shareholder value. It is optional. We treat it like it is a natural law. It is not.

The next magic moment was the first time I used ChatGPT. I heard about this new site online with this thing called a Large Language Model (LLM) that you could question like a person. You simply spoke like a human being, and it talked back. I tried it out. My jaw dropped at what it could do. The potential was vast. The problems with the technology are also vast. Nonetheless, this was a magical moment where you could see the World change in a moment. Recently, with Codex, I felt the same thing (Claude Code is similar). I was able to do things with ease and simplicity that were magical. This is the dawn of agentic AI. The potential for LLMs and agentic AI is incredible. The counter to this hopeful trajectory is the societal system that enshittifies all this magical technology as the default setting.

Maximizing shareholder value typifies a broader mindset. This mindset focuses on greed instead of generosity. It is focused on the short term instead of the long term. This mindset is about zero-sum thinking, where there are winners and losers. The alternative is infinite thinking, where everyone wins. We have choices with AI and agents. We can proceed as we have with greed and short-term thinking. This will lead to societal damage and enshittification. We can also choose a different path of long-term thinking and generosity. This is the path to abundance and societal good. The choices are there. To get the good outcomes for society, we need to step away from our current defaults.

“A man is but the product of his thoughts. What he thinks, he becomes.” ― Mahatma Gandhi

We Need to Figure Out Work and AI

In the last year that I worked at Sandia, I spent a great deal of time trying out LLMs in the setting of work. I did all sorts of tests in trying to understand and map out the capabilities of this technology in the setting of doing scientific work. I examined how LLMs did at writing, how they did at research, and how they did at answering a variety of questions. This was related to genuine curiosity, but also to work that I was doing in verification and validation of scientific machine learning. Scientific machine learning (ML) is a related field that is getting a great deal of attention in the scientific community, although it is being overwhelmed by the tsunami of interest in LLMs. Doing this work required applying well-developed principles of the scientific method. The answer was then to adapt those principles to the specifics of LLMs and ML.

“I’m not upset that you lied to me, I’m upset that from now on I can’t believe you.” ― Friedrich Nietzsche

What I came to realize was that my approach to verification and validation is essential to getting good results from LLMs. To wit, the level of doubt in taking LLM results needs to be quite high. LLMs are prone to bullshit us all the time and quite often will give us an answer meant to satisfy us, which has no relation to objective facts. A large part of successfully using an LLM is to start off by asking it questions to which you already know the answer, in order to verify that the topical area is within its grasp. This by no means guarantees that, as you get deeper and deeper into a topic, the LLM will remain successful. One should always take a result from a large language model with a grain of salt, check it, and think about it deeply.

What I discovered with LLMs is that the closer you get to esoteric, expert knowledge, the worse they are at everything. Whenever I got close enough to the core of my own expertise, the LLM failed to give objectively good results. This was true over and over again. This is an important lesson to integrate into using them effectively. The role of the human expert is actually amplified by LLMs. The expert knows the point where LLM competence ends, and human judgment is necessary.

For example, I found that LLMs are terrible for writing. They’re good as an editor, but terrible at creative writing, terrible at doing anything that a human with ability can do. Writing is a deeply human activity and involves clarity of thought. The narrative elements are an essential human pursuit. At least today, AI has no capacity to write with genuine humanity. My writing is part of thinking on a topic. True for fiction or non-fiction writing. A key is to leave marks on the prose that show genuine personality and human experience. Ultimately, my use of AI in any sort of writing has been relegated to editing and research.

The same holds doubly for areas of science, where I find AI is a capable digital assistant, great for improving the scope and breadth of what I do, but not good at creating anything at an expert level. I have tested this over and over with the same result. LLMs have improved over the past three years, but it has only moved the wall it hits a little. I’ve taken various algorithms and work that I’ve done and tried to basically spoon-feed it into the AI. Even with an excessive amount of spoon-feeding, the AI fails to do even the simplest level of creativity. At the same time, I am convinced it can be a useful assistant. I use it every single day for a host of tasks.

The counter is that AI is very good at producing a large volume of work and can be utilized to improve the quality of what work has been done and the speed with which the work is completed. This was particularly true with the Codex example that I tried in the agentic work. It did a number of banal tasks with speed and effectiveness far greater than my own and basically accomplished one or two days of hard work in less than an hour. What I saw there was the capacity to free up my time to go towards creative and thinking efforts that are appropriate for humanity, and allow me to spend more of my time doing what only a human being can do.

“The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence.” ― Charles Bukowski

Humans supply thinking and creativity. AI needs to remove the bullshit instead of adding bullshit to humanity.

How Not to Make Progress: No Trust and Maximum Bullshit

One of the big things that will inhibit the ability of AI to improve the workplace is this pervasive lack of trust in society. Every bit of the current trajectory will simply destroy more trust. A lot of the work that we all do at work is complete bullshit. Training, paperwork, and various other check-box exercises are all related to that lack of trust. As AI shows, most of this work is meaningless, lacks humanity, and can be automated. Rather than eliminating this useless work, the lack of trust only accelerates and amplifies it. If we do not change course, AI will undermine trust and generate even more inhumane bullshit. Over the course of my career, the bullshit grew without bounds and swallowed most of the humanity in work.

“Whoever is careless with the truth in small matters cannot be trusted with important matters” ― Albert Einstein

One of the biggest things for AI to solve is the issue of trust in itself. The tendency to hallucinate or frankly bullshit us is toxic for AI’s future. It might be great for the near-term bottom line, but it destroys the long term. This, along with the sycophancy of the replies, is a major issue. AI needs to stop this and start being honest, focusing on growing trust. There are probably internal measures and mechanisms by which the AI can return some degree of confidence and reliability in results. There are probably measures by which the AI can report that this answer is low confidence or high confidence. These can guide the users towards exercising doubt and assist in the verification of results under the appropriate circumstances.

The fact that they are a probabilistic engine means that there is a measure of probability associated with the results that it gives. Thus, a grade and score can be provided even if the highest score that it reports is something that is relatively low probability compared to what we would like. If the LLMs would let the user know that the answer is sketchy and unreliable, it would be transformative. It would show a vulnerability that would help build trust. We should never trust AI completely. Nonetheless, a tip that it was uncertain would be a boon. It would show a level of care for the user that today’s models neglect. It would also assist in educating users about what they are really dealing with.
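One crude version of such a signal can be built from the per-token log-probabilities that LLM APIs can typically return. This sketch is purely illustrative: the thresholds are uncalibrated assumptions, and real confidence estimation is a much harder problem.

```python
import math

# Sketch: turn per-token log-probabilities into a crude, user-facing
# confidence label. Thresholds here are illustrative, not calibrated.

def confidence_label(token_logprobs, hi=0.9, lo=0.6):
    """Geometric-mean token probability, bucketed into a label."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    score = math.exp(avg_logprob)   # geometric mean probability, in (0, 1]
    if score >= hi:
        return score, "high confidence"
    if score >= lo:
        return score, "medium confidence"
    return score, "low confidence -- verify this answer"

# A confidently decoded answer versus a shaky one:
sure = confidence_label([-0.01, -0.02, -0.05])
shaky = confidence_label([-1.2, -0.8, -2.1])
```

Even a signal this crude, surfaced honestly, would give users the tip the text asks for: a hint that an answer was decoded on thin probability and deserves extra doubt.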

This sort of measure built into AI would be incredibly welcome. At the same time, I think, within the way the current corporate governance works, it would be rejected out of hand because they simply want to have as many users as possible. The AI wants to express itself as being completely reliable and completely subservient to the users. Rather than provide a better service, the AIs will resist any kind of feedback that calls into doubt the results they produce. All of this serves maximizing shareholder value instead of maximizing customer service. Today’s corporate governance is squarely opposed to getting this right. This governance is at the heart of society’s deficit of trust.

“The comfort of the rich depends upon an abundant supply of the poor.” ― Voltaire

How to Actually Make Progress

Trained properly, AI could be a vastly powerful agent or assistant that can unleash human creativity. Human creativity, art, and free thinking are in short supply today. AI offers the ability to both boost this through freeing up time, but also assist people in bringing ideas to fruition and seeing whether or not they actually are good ideas worth exploring. AI can allow much more exploration and many more ideas to be brought to life, and perhaps ultimately produce far greater beneficial outcomes for business if only the businesses were to trust the people that they employed to do this kind of work. For myself, this is exactly the model of AI that I plan to exercise. I have a powerful assistant who can help me explore ideas more deeply and bring the right ones to life.

The right way to look at AI is to view it as a very capable digital assistant with broad and general knowledge. At the same time that knowledge is shallow and not at an expert level. AI cannot hold a candle to the expertise that you hold at the heart of what you do. This is the heart of humanity we should bring to our lives and work. It can help provide competent, but flawed, help in almost everything else you do that’s ancillary to your core work. In this way, AI can be a wonderful digital assistant and provide you with ease in achieving greater productivity.

As I noted above, AI couldn’t write for shit. I do believe that I am not the greatest writer, but I’m far, far better than AI. With a little effort, almost everyone probably could be taught to be better. We just need to teach people. AI doesn’t sound authentic, and it produces prose that is simply uninspired. AI is a great editor, though.

One of the biggest issues with AI is that you should doubt everything it creates. What I realized is that the right way to use AI is very much the way I do science: there is a need for verification and validation. I approach AI the same way I would approach a scientific problem, seeking to confirm everything it does and holding everything in doubt. I assume it’s useful, but I also assume it’s flawed and in need of extra work to verify that the results are good. It would be better if AI helped by flagging when a response is (more) questionable. In fact, the need for verifying and validating everything AI does is much higher than with other computational tools. This calls into question the absence of V&V from society’s plans for AI. V&V is essential for AI’s success.

The greatest high-leverage thing we can do is train people to use AI correctly. This is a place where my experience at the National Labs has been absolutely jaw-dropping. Management’s efforts to use AI have been ham-handed and naive: just superficial encouragement of the worst uses possible. They were encouraging people to use it, but not in an intelligent and well-thought-through way. The fact is that AI’s proper use is subtle and esoteric and requires a great deal of discipline and a change in the overall mindset. We need leadership that pushes us in the right direction. So far, all the leadership is pushing everything in the wrong direction.

“Don’t mistake activity with achievement.” ― John Wooden

Nothing shows this problem more fully than the scientific programs around AI. DOE has the massive Genesis Project, which is an exemplar of how not to do AI in science. It’s a whole bunch of stunts, with no evidence of any V&V or doubt in how it’s used. V&V and doubt are the most important parts of science, and that is more true with AI than with any other science. Instead, like recent programs, it’s all about big computers and doing things that look splashy but make very little scientific sense. It is almost 180 degrees from the right direction. AI can be a powerful tool for science, but only with a clear-eyed assessment of its results. Instead, we see blind acceptance and marketing bullshit.

The deeper issue is how this productivity will be utilized by corporations and organizations.

* Will they simply demand that organizations and corporations produce as much as before? In that case, the gains from AI will be used to slash the size of the workforce.

* Or instead will they realize that they can unleash people to do more, and that corporations and organizations can do more and create more good for society?

The latter is an abundance agenda and leads to great growth and good things for society. One path leads to destruction, and the other leads to long-term benefits. Current ideas are heading headlong toward destruction.

“Creativity is intelligence having fun.” ― Albert Einstein

To do this, we have to be mindful about how we use AI. Today’s world is full of the mindset of scarcity and the use of short-term thinking. This leads to the use of productivity to simply reduce the number of workers. This is short-sighted and ultimately robs the future of a much better outcome where we use the productivity to unleash greater creativity and more products, more output, and better things for society.

With Today’s Corporations, AI Will Fuck Us

Don’t worry, it will all be enshittified. If recent history is a guide, the magical capability of LLMs will be turned to shit. We have managed to take Google search and fuck it up systematically through greed; this greed is an enshittification plan. Smartphones are the same. Social media was never quite so magical, but it had potential, and that potential has been squandered by the engine of enshittification. Now we have a new technology that seems far more powerful than any of these previous ones. It is definitely magical. We are going to turn it loose on an ecosystem that enshittifies things naturally.

What could possibly go wrong?

The capabilities and power of AI are far greater than the algorithms used in social media. With the current mentality, human creativity will be greed-motivated to adapt AI into profit machines. The same mentality has already done an immense amount of damage to society, and we can be confident that a more powerful technology will unleash greater damage. We are already seeing chaos and horrors arising from this process in multiple ways. Surely the power of AI will also be integrated with social media, supercharging both profits and damage. These forces have energized toxic politics and vast income and wealth inequality. An AI-supercharged ecosystem may be unimaginably worse. Without change, this is the likely course.

We should have already learned the lesson, but obviously, we haven’t. Money provides too much power to be overcome.

Zero Sum Thinking and Value

The current philosophy of maximizing shareholder value is zero-sum thinking. This is the approach where business (and life) is all about winners and losers. In today’s world, the losers are consumers, who are preyed upon. Vulnerable smaller businesses are also preyed upon by massive corporations. The powerful dominate the weak, and most of us are weak. Ultimately, the profits and victories come at the expense of wide swaths of society.

I worked for decades in places where trust was in free fall. That’s not entirely true: the first decade or so at Los Alamos was a high-trust environment where people worked together. There was a generosity and a spirit of giving that were essential to my development as a professional. If you were reasonably smart and competent, you were welcomed into someone’s office and offered the best of their thoughts and advice. It was in this trust that I blossomed. Then modernity came for trust, and the generosity was hollowed out.

That environment, I believe, has been snuffed out. The same me, plopped into the current version of the National Labs, would never grow and accomplish anything like I could in that trusting environment. The lack of trust that infects society as a whole eventually took hold at the labs: the government did not trust us, and we did not trust the government. We moved headlong toward all the natural outcomes of a lack of trust.

Part of this was:

– the lack of peer review

– the lack of honest assessment of work

– leadership that lied and withheld information from the rank and file

– an inability to look at risk and failure in a healthy way

All of this simply accelerated the loss of trust toward the state we are in today. I think it’s safe to say that trust in our society has never been lower. I saw the toxic fruits of that mentality at work myself, and we can see it across society when we look at politics: no matter what side you take, the other side is evil. With AI, we have a technology that can make this worse.

The problem is that trust-building AIs will not maximize shareholder value in the short term; they would, however, build a system suited for the long run. We need a fundamentally different mindset and corporate mentality.

“Acknowledging the good that you already have in your life is the foundation for all abundance.” ― Eckhart Tolle

Infinite Thinking

“To ask, “What’s best for me” is finite thinking. To ask, “What’s best for us” is infinite thinking.” ― Simon Sinek

The alternative to “zero-sum” thinking is infinite thinking. The idea is couched in game theory. A zero-sum game is the classic contest with a winner and a loser; the opposite is an infinite game, where the point is to keep playing. If you play well, everyone wins. The zero-sum game is the usual football or basketball game. The infinite game is like Legos or a marriage: success is continued play and creativity, and everyone wins.

One of the greatest differences between the finite and the infinite game is trust. To succeed at the infinite game, one must focus on building and maintaining trust. In the finite game, trust is used against you and becomes something wielded as a weapon. This difference can be seen in how completely untrusting our society has become, an exemplar of our commitment to finite, win-lose games as the basis for society.

“Leadership is about integrity, honesty and accountability. All components of trust.”― Simon Sinek

We are seeing a supercharging of corporate greed and behavior that drives the worst impulses of business. The other force that could change things would be regulation. We are currently in an orgy of deregulation, and there is very little thought or confidence on the part of the government to regulate an area like AI, much less tech or social media, in any rational way grounded in expertise and knowledge. Instead, the vast amounts of money driven by corporate greed and inequality are tilting the playing field squarely against any of these outcomes. Thus, current trends show that trust will sink even lower across society as a whole. The recent dust-up between the Department of Defense and Anthropic is an exemplar of this: DoD and OpenAI chose the path of no trust and greed.

Switching to a trust-building mentality is something society needs today. With trust, collaboration and cooperation become the touchstones of how society looks. Without trust, it simply becomes a dog-eat-dog world, where you employ data and power as weapons against those you’re pitted against. A simple and observant view of today shows you where this gets us: conflict, chaos, anger, and a host of other ills that are dragging society down.

If we take an alternative view of how AI is used, we can also see how it can build trust. If we view AI as a vehicle for abundance, it can supercharge the quality of work done. We can enhance the volume of work and how much each worker can do. The ability to create, produce, and get products to market then accelerates and grows in scope. All of this brings wealth and prosperity to society, which in turn builds trust in AI and provides benefits for the humanity it serves. This is the path we need to take if we want AI to be good for society.

“Abundance is harder for us to handle than scarcity.” ― Nassim Nicholas Taleb

Standing in opposition to this vision is the focus on maximizing shareholder value, which does serve the short-term prosperity of society. Virtually all of us in the United States have investments in the stock market, and our retirements depend on those investments doing well. But we can only emphasize the short term for so long before the bills come due.

The problem is that it’s a house of cards. The same forces are destroying trust across society, and that destruction of trust ultimately puts the entire structure at risk. If trust continues to drop, we risk a catastrophic collapse of the system. Indeed, we may already be experiencing the start of that collapse, as large portions of society are being dismantled by the current administration. We may be planting the roots of a crisis that will continue to do serious damage to our future.

“Growth for the sake of growth is the ideology of the cancer cell.” ― Edward Abbey
