The Regularized Singularity

~ The Eyes of a citizen; the voice of the silent

Monthly Archives: June 2025

A Unified Theory of AI and Bullshit Jobs

Saturday, 21 June 2025

Posted by Bill Rider in Uncategorized

TL;DR

Right now, there is a lot of discussion of AI job cuts on the horizon. Computer coders are at the top of the list, along with other white-collar jobs. It is one of the dumbest things I can imagine, for a legion of reasons. First, you take the greatest proponents of AI at work and make many of them angry and scared. You create Luddites. Second, it fails to recognize that AI is an exceptional productivity enhancement. It should make these people more valuable, not remove their jobs. Layoffs are simply scarcity at work: cruel, greedy, and short-sighted in the extreme. It is looking at AI as the glass half empty. The last point gets to the bullshit jobs that many people do in full or in part. I think AI is a bullshit job detector. We can use it to get rid of those jobs and find ways to make work more human, creative, and productive. This is a path to abundance and a better future.

“It’s hard to imagine a surer sign that one is dealing with an irrational economic system than the fact that the prospect of eliminating drudgery is considered to be a problem.” ― David Graeber, Bullshit Jobs: A Theory

AI as a Threat, Instead of as a Gift

Lately, the news has been full of reports that white-collar jobs are going to be replaced by AI. I do a white-collar job, and I also work with AI in research, so I have first-hand knowledge of how well AI performs in my areas of expertise. Hint: its prowess is novice and naive at best, and its ability is quite superficial. As soon as the prompt asks for anything nuanced or deep, AI falls flat on its face. So I have a hell of a lot to say about this.

Back to the concern about vast numbers of white-collar workers. Note that computer programmers are at the top of the “hit list.” The concern is that all of this will lead to widespread unemployment of educated and talented people. At the same time, those of us who use AI professionally can see the stupidity of this. Firing all these people would be a huge mistake. All the claims and desires to cut jobs via AI make the people saying this look like idiots. These idiots have a lot of power and a vested interest in profiting from that AI. They are mostly AI managers with lots to gain. In all likelihood, they are just as full of shit as my managers, who are constantly bullshitting their way through reality whenever they are in public. At the Labs, the comments about fusion are at the top of the bullshit parade: a great example of stupid shit that sells to nonexperts.

“Shit jobs tend to be blue collar and pay by the hour, whereas bullshit jobs tend to be white collar and salaried.” ― David Graeber, Bullshit Jobs: A Theory

AI Can Do Bullshit Jobs

I think this narrative should be more clearly connected to another concept, “Bullshit Jobs.” These are jobs that add little to society and merely make work for lots of people. They also exact an effort tax on every person and drive up the costs and time at work. Most of my exorbitant cost at work is driven by people doing bullshit jobs (my cost is more than 3 times my salary).

On top of that, they lower my productivity. What I’ve noticed is that these jobs are mostly related to a lack of trust and lots of checks on stuff. They don’t produce anything, but make sure that I do. My day is full of these things, from every corner and touching every activity. I think a hallmark of these jobs is the extent to which AI could do them. I will take this a step further: if AI can do a job, perhaps that job should not be done at all. These jobs are beneath the humanity of the people doing them. We need to devote effort to better jobs for people.

The real question is what we do with all the people who do these bullshit jobs. The AI elite today seem to be saying just fire everyone and reduce payroll. This is an extremely small-minded approach: pure greed combined with pessimism and stupidity. The far better approach would be to retool these jobs and people to be creators of value. Unleash creativity and ideas, using AI to boost productivity and success. A big part of this is taking more risks and investing in far more failed starts. Allowing more failures will allow more new successes. Among these risks and failed starts are great ideas and breakthroughs that lie fallow today under the yoke of the lack of trust fueling the bullshit jobs. If AI is truly a boon to humanity, we should see an explosion of growth, not mass unemployment.

“We have become a civilization based on work—not even “productive work” but work as an end and meaning in itself.” ― David Graeber, Bullshit Jobs: A Theory

Why don’t we hear a narrative of AI-driven abundance? One has to wonder whether our AI masters are really that smart if their sales pitch is “fire people.” I will just come out and say that the idea of firing swaths of coders because of AI is one of the dumbest things ever. The real answer is to write more code and do more things. The real experience of coders is that AI helps, but ultimately the expert person must be “in the loop.” AI is incapable of replacing code developers. The expert developer is absolutely essential to the process, and AI just makes them more efficient. We need to embrace the productivity gains and grow the pie. Instead, we are ruled by small-minded greed instead of growth-minded visionaries.

“A human being unable to have a meaningful impact on the world ceases to exist.” ― David Graeber, Bullshit Jobs: A Theory

A Painful Lesson

To reiterate, if AI can do your job, there’s a good chance that your job is bullshit. AI is a productivity enhancement, and it should free you from much of the bullshit. What companies and organizations should do is identify work and jobs that can be done by AI. If AI can do the job entirely, the job isn’t worth doing. They should use the savings to free up and enhance what is done, not cut people’s employment. We’ve seen this mentality in attacks on government programs. This is the single greatest failing of Elon Musk and DOGE. He didn’t realize that what he really needed was to unleash people to do more creative and better work. It is not about getting rid of the work; it is about improving the work that is done.

“‘Efficiency’ has come to mean vesting more and more power to managers, supervisors, and presumed ‘efficiency experts,’ so that actual producers have almost zero autonomy.” ― David Graeber, Bullshit Jobs: A Theory

I’ve written about AI as producing bullshit. What if AI is also a way of detecting bullshit? The real truth is that when it comes to American science, there’s far too little creativity and far too little freedom to do amazing work. Sometimes amazing work cannot be recognized until it is tried; it looks stupid or insane, worthy of ridicule, until its genius is obvious. Or it can be not worth trying, but you don’t know until you try. A lot of bureaucratic bullshit stands in the way of progress. One reason for this is the insane number of bullshit jobs. Their cost is huge given our outrageous overhead rates. They also make bullshit work for those of us trying to produce science, getting in the way of productivity in a myriad of ways with required bullshit that has no value.

What we really need to do is eliminate the bullshit and free up minds and creativity. We already aren’t spending enough on science, and what is spent is spent very unproductively. We don’t take the risks we need for breakthroughs and don’t allow the right kinds of failure. A variety of forms of bullshit jobs lead the way. Managers obsess over meaningless, repetitive reviews. They micromanage and apply far too much accounting. All of this kills creativity and undermines breakthroughs. Managers should know what we do, but learn it by managing, not through contrived reporting mechanisms. They should create a productive environment. There should be much more effort to determine what would make for better lives and better productivity.

AI helps this in some focused ways. It can help supercharge the abilities of creative and talented scientists. Just get the bullshit out of the way. I’ve found that AI is really good at churning out this bullshit. The best answer is to stop doing any bullshit that AI is capable of producing. It is a telltale sign that the work is worthless.

“Young people in Europe and North America in particular, but increasingly throughout the world, are being psychologically prepared for useless jobs, trained in how to pretend to work, and then by various means shepherded into jobs that almost nobody really believes serve any meaningful purpose.” ― David Graeber, Bullshit Jobs: A Theory

If your job is so mundane, so routine, and so rudimentary that AI can do it, the best option is to delete it. It is a serious question to ask about a job. Most of the bullshit jobs revolve around a lack of trust, and it’s really a broader social issue. In my life, it has become a science productivity issue. If there are reports and things that AI could just as well produce, the best option is to not produce them at all, because no one needs to read them. A very large portion of our reporting is never really read. If no one reads a piece of writing, should it even exist? We have a duty as a society to give people productive, useful work. Every job should have that undeniable spark of humanity.

The other part of this dialog is about what kind of future we want. Do we want a scarce future where technology ravages good jobs, where corporations simply think about maximizing money for the rich and care little about employees? Do we want a future where technology like AI takes humanity away? Instead, we should want abundance and growth: technology that enhances our humanity and reduces our drudgery. AI should be a tool to unleash our best. Any job should require the spark of humanity to produce genuine value. It should raise our standard of living and allow more time for leisure, art, and the pursuit of pleasure. It should directly lead to a better world to live in.

“Yet for some reason, we as a society have collectively decided it’s better to have millions of human beings spending years of their lives pretending to type into spreadsheets or preparing mind maps for PR meetings than freeing them to knit sweaters, play with their dogs, start a garage band, experiment with new recipes, or sit in cafés arguing about politics, and gossiping about their friends’ complex polyamorous love affairs.” ― David Graeber, Bullshit Jobs: A Theory

Practical Application Accuracy Is Essential

Sunday, 15 June 2025

Posted by Bill Rider in Uncategorized

TL;DR

In classical computational science applications for solving partial differential equations, discretization accuracy is essential. In a rational world, solution accuracy would rule (other things are important too). It does not! Worse yet, the manner of considering accuracy is not connected to the objective reality of how the methods are used. It is time for this to end. Two things determine true accuracy in practice: the construction of discretization algorithms, which carries counterproductive biases focused on formal order of accuracy, and the fact that in real applications solutions only achieve a low order of accuracy. Thus the accuracy is dominated by different considerations than assumed. It is time to get real.

“What’s measured improves” ― Peter Drucker

The Stakes Are Big

The topic of solving hyperbolic conservation laws has advanced tremendously during my lifetime. Today, we have powerful and accurate methods at our disposal to solve important societal problems. That said, problems and habits are limiting the advances. At the head of the list is a poor measurement of solution accuracy.

Solution accuracy is expected to be measured, but in practice only on ideal problems where full accuracy can be expected. When these methods are used practically, such accuracy cannot be expected; any method will produce first-order or lower accuracy. Fortunately, analytical problems exist that allow accuracy to be assessed under realistic conditions. The missing ingredient is to actually do the measurement. Changing our practice would focus energy and attention on methods that perform better under realistic circumstances. Today, methods with relatively poor practical accuracy and great cost are favored, and this limits the power of these advances.

I’ll elaborate on the handful of issues hidden by the current practices. Our current dialog in methods is driven by high-order methods, which are studied without regard for their efficiency on real problems. Popular methods such as ENO, WENO, TENO, and discontinuous Galerkin dominate, but practical accuracy is ignored. A couple of big issues I’ve written about reign broadly over the field. Time-stepping methods follow the same basic pattern: inefficient methods with formal accuracy dominate over practical concerns and efficiency. We do not have a good understanding of which aspects of high-order methods pay off in practice; some appear to matter, but others yield no practical benefit. The unsettled issues include truly bounding stability conditions for nonlinear systems, computing strong rarefactions and low Mach number flows, and multiphysics integration.

The Bottom Line

I will get right to the punch line of my argument and hopefully show the importance of my perspective. Key to the argument is the observation that “real” or “practical” problems converge at a low order of accuracy. Usually the order of accuracy is less than one, so assuming first-order accuracy is actually optimistic. The second key assumption is what efficiency means in the context of modeling and simulation. I will define it as the relative cost of getting an answer of a specified accuracy. This seems obvious and more than reasonable.

“You have to be burning with an idea, or a problem, or a wrong that you want to right. If you’re not passionate enough from the start, you’ll never stick it out.” ― Steve Jobs

To illustrate my point I’ll construct a contrived, simple example. Define three different methods for getting our solution that are otherwise similar. Method 1 gives an accuracy of one for a cost of one. Method 2 gives accuracy twice as good as Method 1 for double the cost. Method 3 gives accuracy four times that of Method 1 for four times the cost. Each method converges at the (optimistic) first-order rate. We can now compare the total cost for the same level of accuracy, looking for the efficiency of the solution.

If we use Method 3 on our “standard” mesh, we get an answer with one-quarter the error for a cost of four. To get the same error with Method 2, we need to use a mesh of half the spacing (and twice the points). With Method 1 we need four times the mesh resolution for the same accuracy. The relative cost of the equally accurate solution then depends on the dimensionality of the problem. For transient fluid dynamics, we solve problems in one, two, or three dimensions plus time. Because these methods operate under the same time step control, the time step size is always proportional to the spatial mesh spacing.

Let’s consider a one-dimensional problem, where the cost scales quadratically with the mesh (one space dimension plus time). Method 2 will cost a factor of eight to get the same accuracy as Method 3; thus it costs twice as much. Method 1 needs two mesh refinements (a factor of four) for a cost of 16, so it costs four times as much as Method 3. So in one dimension the more accurate method pays off tremendously, and this is the proverbial tip of the iceberg. As we shall see, the efficiency gains grow in two or three dimensions.

In two dimensions the benefits grow. Now Method 2 costs 16 units, and thus Method 3 pays off by a factor of four. Method 1 costs 64, and the payoff is a factor of 16. You can probably see where this is going. In three dimensions Method 2 now costs 32, and the payoff is a factor of 8. For Method 1 the payoff is huge: it now costs 256 to get the same accuracy, so the efficiency payoff is a factor of 64. That is almost two orders of magnitude difference. This is meaningful and important whether you are doing science or engineering.
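To make this arithmetic concrete, here is a minimal Python sketch of the same contrived example (the method names, accuracy gains, and costs are just the made-up numbers above, not anything measured). It assumes first-order convergence and a time step tied to the mesh spacing, so the cost of a refined solve grows like the refinement factor raised to the dimension plus one.

```python
def cost_to_match(accuracy_gain, base_cost, target_gain, dim):
    """Cost for a method to match the error of the most accurate method on the
    base mesh, assuming first-order convergence (error ~ h) and a time step
    proportional to the mesh spacing (cost ~ refinement**(dim + 1))."""
    refinement = target_gain / accuracy_gain
    return base_cost * refinement ** (dim + 1)

# The contrived methods from the text: (accuracy gain, cost on the base mesh)
methods = {"Method 1": (1.0, 1.0), "Method 2": (2.0, 2.0), "Method 3": (4.0, 4.0)}
target = 4.0  # match Method 3's error on the base mesh

for dim in (1, 2, 3):
    reference = cost_to_match(*methods["Method 3"], target, dim)
    print(f"--- {dim}D (plus time) ---")
    for name, (gain, cost) in methods.items():
        c = cost_to_match(gain, cost, target, dim)
        print(f"  {name}: cost {c:6.0f}  ({c / reference:.0f}x Method 3)")
```

This reproduces the factors quoted above (2x and 4x in one dimension, 4x and 16x in two, 8x and 64x in three), and the gap only widens as the dimensionality grows.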

Imagine how a seven-dimensional problem like full radiation transport would scale. The payoffs for accuracy could be phenomenal. This is a type of efficiency that has been largely ignored in computational physics. It is time for that to end and to focus on what really matters in computational performance: the accuracy under the conditions actually faced in applications of the methods. This is real efficiency, and an efficiency not examined at all in practice.

“Progress isn’t made by early risers. It’s made by lazy men trying to find easier ways to do something.” ― Robert Heinlein

The Usual Approach To Accuracy

The usual approach to designing algorithms is to define a basic “mesh,” prototypically in space and time. The usual mantra is that the most accurate methods are higher order: the higher the order of the method, the more accurate it is. High-order usually just means more than second-order accurate. Nonetheless, the assumption is that higher-order methods are always more accurate, and thus the best you can do is a spectral method, where every degree of freedom available contributes to the approximation. This belief has driven research in numerical methods forever (many decades at least). We know these methods are not practical for realistic problems.

The standard tool for designing methods is the Taylor series. This relies on several things being true: the function needs to be smooth, and the expansion needs to be in a variable that is “small” in some vanishing sense. This is a classical tool and has been phenomenally useful over centuries of work in numerical analysis. The ideal nature of when it holds is also a limitation. While the Taylor series still holds for nonlinear cases, the dynamics of nonlinearity invariably destroy the smoothness; if smoothness is retained nonlinearly, the problem is pathological. The classic mechanism for this is shocks and other discontinuities. Even smooth nonlinear structures still have issues, like the cusps seen in expansion waves. As we will discuss, accuracy is not retained in the face of this.

If your solution is analytic in the best way possible, this works. This means the solution can be differentiated infinitely. While this is ideal, it is also very infrequently (basically never) encountered in practice. The other issue is that the complexity of a method grows massively as you go to higher order. This is true for linear problems, but extra true for nonlinear problems where the error has many more terms. If only it were this simple! It is not, by any stretch of the imagination.
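As a small illustration of this Taylor series machinery (a sketch of my own, assuming nothing beyond the textbook derivation), here is one way to verify the formal order of the standard central difference for a first derivative symbolically; sympy is just a convenient bookkeeper for the expansion.

```python
import sympy as sp

x, h = sp.symbols("x h")
f = sp.Function("f")

# Taylor-expand f(x + h) and f(x - h) about h = 0
plus = sp.series(f(x + h), h, 0, 5).removeO()
minus = sp.series(f(x - h), h, 0, 5).removeO()

# Form the central difference (f(x+h) - f(x-h)) / (2h) and its truncation error
central_diff = sp.expand((plus - minus) / (2 * h)).doit()
truncation_error = sp.simplify(central_diff - sp.diff(f(x), x))

# Leading error term is (h**2 / 6) * f'''(x): formally second order,
# but only if f is smooth enough for the expansion to exist in the first place.
print(truncation_error)
```

The smoothness caveat in the comment is the whole point of the surrounding discussion: the expansion, and the order it promises, evaporates at shocks and other non-smooth features.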

“We must accept finite disappointment, but never lose infinite hope.” ― Martin Luther King Jr.

Stability: Linear and Nonlinear

For any integrator for partial differential equations, stability is a key property. Basically, it is the property that any “noise” in the solution decays away. The truth is that there is always a bit of noise in a computed solution; you never want it to dominate the solution. For convergent solutions, stability is one of two ingredients for convergence under mesh refinement. This is a requirement from the Lax equivalence theorem. The other requirement is consistency of the approximation with the original differential equation. Together these yield convergence, where solutions become more accurate as meshes are refined. This principle is one of the foundational aspects of the use of high-performance computing.

Von Neumann invented the classical method for investigating stability. When devising a method, doing this analysis is a wise and necessary first step. Subtle things can threaten stability, and the analysis is good for unveiling such issues. For real problems, this linear stability is only the first step in the derivation. It is necessary, but not sufficient. Most problems have a structure that requires nonlinear stability.
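For concreteness, here is a minimal sketch (my own, for the simplest possible case) of what a Von Neumann analysis delivers: the amplification factor of first-order upwind applied to linear advection, which is g(theta) = 1 - nu*(1 - exp(-i*theta)) with nu the CFL number.

```python
import numpy as np

def max_amplification(nu, n_theta=721):
    """Largest |g(theta)| over the resolved wave numbers for first-order upwind
    applied to u_t + a u_x = 0, where g(theta) = 1 - nu*(1 - exp(-i*theta))
    and nu = a*dt/dx is the CFL number."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    g = 1.0 - nu * (1.0 - np.exp(-1j * theta))
    return np.abs(g).max()

for nu in (0.5, 1.0, 1.1):
    g_max = max_amplification(nu)
    verdict = "stable" if g_max <= 1.0 + 1e-12 else "unstable"
    print(f"nu = {nu:.1f}: max |g| = {g_max:.3f} ({verdict})")
```

The analysis recovers the familiar CFL limit nu <= 1, and nothing more: it says nothing about oscillations at discontinuities, which is exactly where the nonlinear stability discussed next takes over.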

The need for nonlinear stability comes from nonlinearities in real problems or from non-differentiable features in the solution (like shocks or other discontinuities). These require mechanisms to control things like oscillations and positivity of the solution. The mechanisms are invariably nonlinear, even for linear problems. This has a huge influence on accuracy, and on the sort of accuracy that is important to measure. The nonlinear stability assures results in real circumstances. It has a dominant impact on solutions and lets methods get accurate answers when things are difficult. One of the damning observations is that the accuracy impact of these measures is largely ignored under realistic circumstances. The only things really examined are robustness and low-level compliance with the design.

“We don’t want to change. Every change is a menace to stability.” ― Aldous Huxley

What Accuracy Actually Matters

In the published literature it is common to see accuracy reported for idealized conditions, conditions where the nonlinear stability is completely unnecessary. We do get to see whether and how nonlinear stability impacts this ideal accuracy. This is not a bad thing at all; it goes into the pile of necessary steps for presenting a method. The problems are generally smooth and infinitely differentiable. A method of increasingly higher-order accuracy will achieve its full order of convergence and very small errors as the mesh is refined. It is a demonstration of the results of the stability analysis, which is to say that a stability analysis can provide convergence and error characterization. There is also a select set of problems for fully nonlinear effects (e.g., the isentropic vortex or the like).

“I have to go. I have a finite amount of life left and I don’t want to spend it arguing with you.” ― Jennifer Armintrout

There is a huge rub to this practice: the error and behavior demonstrated for the method are never encountered in practical problems. For practical problems, shocks, contacts, and other discontinuous phenomena abound. They are inescapable. Once these are present in the solution, the convergence rate is first-order or less (theory for this exists). Now the nonlinear stability and its accuracy character take over and become completely essential. The issue with the literature is that errors are rarely reported under these circumstances, even when the exact error can be computed. The standard is simply “the eyeball norm,” and this standard serves the use of these methods poorly indeed. Reporting on more realistic problems is close to purely qualitative, even when an exact solution is available.

One of the real effects of this difference comes down to the issue of what accuracy really matters. If the goal of computing a solution is to get a certain low level of error for the least effort, the difference is profound. The assessment of this might reasonably be called efficiency. In cases where the full order of accuracy can be achieved, the higher the order of the method, the more efficient it will be for small errors. These cases are virtually never encountered practically. The upshot is that accuracy is examined in cases that are trivial and unimportant.

Practical cases converge at first order, and the theoretical order of accuracy for a method doesn’t change that. It can change the relative accuracy, but the relationship there is not one-to-one. That said, the higher-order method will not always be better than a lower-order method. One of our gaps in analysis is understanding how the details of a method lead to practical accuracy. Right now, it is just explored empirically during testing, and the testing and reporting of such accuracy is quite uncommon in the literature. Making it a standard expectation would improve the field productively.
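To show what the measurement being asked for looks like, here is a minimal toy of my own (not drawn from the references below): advect a square wave with first-order upwind, compute the L1 error against the exact solution, and report the observed order between successive meshes. For a discontinuous profile the observed order comes out well below one, consistent with the theory noted above.

```python
import numpy as np

def upwind_square_wave(n, cfl=0.8):
    """Advect a square wave once around the periodic domain [0, 1) using
    first-order upwind for u_t + u_x = 0; return (h, L1 error)."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    u0 = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
    u, t, dt = u0.copy(), 0.0, cfl * h
    while t < 1.0:
        step = min(dt, 1.0 - t)
        u -= (step / h) * (u - np.roll(u, 1))  # upwind difference
        t += step
    # After one full period the exact solution equals the initial data
    return h, h * np.sum(np.abs(u - u0))

results = [upwind_square_wave(n) for n in (64, 128, 256, 512, 1024)]
for (h_c, e_c), (h_f, e_f) in zip(results, results[1:]):
    order = np.log(e_c / e_f) / np.log(h_c / h_f)
    print(f"h = {h_f:.5f}  L1 error = {e_f:.3e}  observed order = {order:.2f}")
```

The same few lines of bookkeeping applied to any problem with an exact solution would replace the eyeball norm with a number.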

“Don’t be satisfied with stories, how things have gone with others. Unfold your own myth.” ― Rumi

References

Study of real accuracy

Greenough, J. A., and W. J. Rider. “A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov.” Journal of Computational Physics 196, no. 1 (2004): 259-281.

Nonlinear Stability

Guermond, Jean-Luc, and Bojan Popov. “Fast estimation from above of the maximum wave speed in the Riemann problem for the Euler equations.” Journal of Computational Physics 321 (2016): 908-926.

Toro, Eleuterio F., Lucas O. Müller, and Annunziato Siviglia. “Bounds for wave speeds in the Riemann problem: direct theoretical estimates.” Computers & Fluids 209 (2020): 104640.

Li, Jiequan, and Zhifang Du. “A two-stage fourth order time-accurate discretization for Lax–Wendroff type flow solvers I. Hyperbolic conservation laws.” SIAM Journal on Scientific Computing 38, no. 5 (2016): A3046-A3069.

High Order Methods

Jiang, Guang-Shan, and Chi-Wang Shu. “Efficient implementation of weighted ENO schemes.” Journal of Computational Physics 126, no. 1 (1996): 202-228.

Cockburn, Bernardo, Claes Johnson, Chi-Wang Shu, and Eitan Tadmor. Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. Springer Berlin Heidelberg, 1998.

Balsara, Dinshaw S., and Chi-Wang Shu. “Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy.” Journal of Computational Physics 160, no. 2 (2000): 405-452.

Cockburn, Bernardo, and Chi-Wang Shu. “Runge–Kutta discontinuous Galerkin methods for convection-dominated problems.” Journal of Scientific Computing 16 (2001): 173-261.

Spiteri, Raymond J., and Steven J. Ruuth. “A new class of optimal high-order strong-stability-preserving time discretization methods.” SIAM Journal on Numerical Analysis 40, no. 2 (2002): 469-491.

Methods Advances Not Embraced Enough

Suresh, Ambady, and Hung T. Huynh. “Accurate monotonicity-preserving schemes with Runge–Kutta time stepping.” Journal of Computational Physics 136, no. 1 (1997): 83-99.

Colella, Phillip, and Michael D. Sekora. “A limiter for PPM that preserves accuracy at smooth extrema.” Journal of Computational Physics 227, no. 15 (2008): 7069-7076.

Rider, William J., Jeffrey A. Greenough, and James R. Kamm. “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations.” Journal of Computational Physics 225, no. 2 (2007): 1827-1848.

A Great Workshop Is Inspirational

Sunday, 8 June 2025

Posted by Bill Rider in Uncategorized

“Knowledge has to be improved, challenged, and increased constantly, or it vanishes.” ― Peter Drucker

Back in the day, I used to write up my thoughts on conferences I attended on this blog. It was a good practice and encouraged me to sit back and get perspective on what I saw, what I learned, and what I felt. The workshop I attended this week was excellent, with amazing researchers: thoughtful and wise people who shared their knowledge and wisdom. I saw a great menu of super talks and had phenomenal conversations, some of them one-on-one sidebars, but also panel discussions that were engaging and thought-provoking. I am left with numerous themes to write about for the foreseeable future. A good week indeed, but it left me with some mourning too.

The workshop was called “Multiphysics Algorithms for the Post Moore’s Law Era.” It was organized by Brian O’Shea from Michigan State along with a group of illustrious scientists, largely from Los Alamos. It was really well done and a huge breath of fresh air (Los Alamos air is good for that too). I was there largely because I had an invited talk, which I really enjoyed giving. I had put a great deal of thought into it: some thoughts needed for this present moment. Invited talks are an honor and a good thing to accept; they look great on the resume or annual assessments. I quickly lost any sense of having made the wrong decision and immediately felt grateful to attend.

I won’t and really can’t hit all the high points or talks, but will give a flavor of the meeting.

Moore’s law is the empirical observation about the growth of computing power: for roughly fifty years, computer power doubled about every 18 months. Compounded over such a period, that is an advance of over a billion times (2 to the 30th power is about a billion). Starting around 2010, people started to see the end of the road for the law. Physics itself is getting in the way, and parallel computing, or those magical GPUs that AMD and Nvidia produce, isn’t enough. Plus those GPUs are a giant fucking pain in the ass to program. We now spend a vast amount of money to keep advancing computing, and we are not going to be able to keep it up. This era is over, and what the fuck are we going to do? The workshop was put together to answer this WTF question.

“Vulnerability is the birthplace of innovation, creativity and change.” ― Brene Brown

I will start by saying Los Alamos carries significant meaning for me personally. I lived and worked there for almost 18 years. It shaped me as a scientist, if not made me the one I am today. It has (had) a culture of scientific achievement and open inquiry that I fully embrace and treasure. I had not spent time like this on the main town site for years. It was a stunning melange of things unchanged and radical change. I ate at new places and old places, running into old friends with regularity. I was left with mixed feelings and deep emotions at the end, most of all about whether leaving there was the right professional move for me. It was probably a good idea. The Lab I knew and loved is almost gone. It has disappeared into the maw of our dysfunctional nation’s destruction of science. It is a real example of where greatness has gone, and the MAGA folks are not doing jack shit to fix it.

More later about the Lab and its directions since I left. Now for the good part of the week, the Workshop.

“The important thing is not to stop questioning.” ― Albert Einstein

The first day of the workshop should have left me a bit cold, but it didn’t. The focus was the computing environment of the near future: all the stuff the high-performance computing people are doing to forestall the demise of Moore’s law. There are a bunch of ideas, and zero of them are really appealing or exciting. The biggest message of the day was a focus on missed opportunities; the decade of focus on exascale computers has meant a huge opportunity cost. This would unfold brilliantly as the week went along. The greatest take-home message was the cost of keeping up and the drop-off of aggregate performance in the list of the fastest computers. We can’t do this anymore. The other big lesson is that quantum computing is no way out. It is cool and does some great shit, but it is limited. Plus it is always attached to a regular computer, so that’s an intrinsic limit.

The second day was much more about software. We have made a bunch of amazing software to support all these leading-edge computers. This software is created on a shoestring budget, and maintaining it is an increasing tax. The biggest point is that GPUs suck ass to program. We have largely wasted 10 years programming these motherfucking monstrosities. If we weren’t doing that, what could we have done? Plus the GPUs have a limited future. There have been some great ideas for dealing with the complexity, like Sandia’s Kokkos, but there are dead ends too. We are so attached to performance; why can’t we work with computers that are a joy to program? Maybe that would be a path we could all support.

At the end of each day, all the speakers formed a panel, and we had a moderated conversation with the audience. The first day, they asked Mike Norman to lead the conversation. Mike is a renowned astrophysicist and a leader in the history of high-performance computing; it was cool to get to meet him. During the discussions, major perspectives came clearly into focus. An example is the question above about whether we wasted 10 years on GPUs. Yes is the answer. Another issue is the problems and cost of software, which isn’t well funded or supported. I can report from my job that the maintenance cost of code can quickly swallow all your resources. This grows as the code gets old, and we make a lot of legacy codes in science. Another topic of repeated discussion every day of the meeting was the growing obsession with AI. There is a manic zeal for AI on the part of managers, and it puts all our science at serious risk. More about this a bit later.

Finally, at the end of day 2, we started in on algorithms and the science done with computing. Thank god! While I appreciate learning all about software and computing, I need some science! I was introduced to tensor trains, and I’ll admit to not quite grokking how they work. It was one of several ideas for extremely compressed computing. A great thing is to leave a workshop with homework. After this, we heard about MFEM from Livermore. Lots of computing results and not nearly enough algorithms (which I know exist). They didn’t talk about results with the code, only how fucking great it runs. That said, this talk was almost an exclamation point on what GPU-based computing has destroyed.

Wednesday was my talk. I was sandwiched between two phenomenal astrophysics talks with jaw-dropping results and incredible graphics. I felt honored and challenged. Jim Stone gave the first talk and wow! Cool methods and amazing studies of important astrophysical questions. He uses methods I know well and they produce magic. My physics brain left the talk wishing for more. I could watch a week of talks like that. Even better he teed up some topics my talk would attack head-on. After my talk, Bronson Messer from Oak Ridge talked about supernovae. It was sort of a topic I have an amateur taste for. Incredible physics again like Jim’s talk and gratifying uses of computing. I want more!

I gave my talk in a state where I was both inspired and a bit gobsmacked, having to sit between these two masterpieces. I had trimmed my talk down to 30 minutes to allow 15 minutes for questions. Undaunted, I stepped into the task. My talk had three main pieces: a discussion of the power and nature of algorithms, how V&V is the scientific method, and how to use verification to embrace true computational efficiency. I sized the talk almost perfectly. I do wish I would move more during talks and be more dynamic; I was too chained to my laptop. I also hated the hand mike (I would have loved to drop it at the end, but that would be a total dick move).

[Image: Intel’s Aurora supercomputer, the United States’ first exascale system, at Argonne National Laboratory. (Credit: Argonne National Laboratory)]

“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” ― Arthur C. Clarke

I always believe that a good talk should generate questions. Mine generated a huge reaction and question after question. Some asked about making V&V more efficient and cheaper. I have a new idea about that after answering: no, V&V should not be cheap. It is the scientific method and a truly great human endeavor. It is labor intensive because it is hard and challenging. People don’t do V&V because they are lazy and want it on the cheap. It is just like thinking and AI. We still need to think when we do math, code, or write; nothing about AI should take that away. Science is about thinking, and we need to think a lot more, not less. Computers, AI, algorithms, and code are all tools, and we need to be skilled and powerful at using them. We need to be encouraged to think more, question more, and do the hard things. None of that should be done away with by these new tools. These new tools should augment productivity, making us more efficient and freeing time to really think more.

The big lasting thought from my talk is about the power of algorithms. Algorithms fall into three rough categories worth paying attention to, and the taxonomy is ordered by the power of the algorithms too. I will write about this more. I have in the past, but now I have new clarity! Thanks, workshop! What an amazing fucking gift!

This taxonomy has three parts:

1. Standard efficiency in mapping to computers (parallel, vector, memory serving, …). This has been the focus lately. It is the lowest rung of the ladder.

2. Algorithms that change the scaling of the method in terms of operations. The archetypical example is linear algebra, where the scaling was originally the cube of the number of equations, as with Gaussian elimination. The best is multigrid, which scales linearly with the number of equations. The difference in scaling is enormous and rivals or beats Moore’s law easily (a small sketch of this gap follows the list).

3. Next are the algorithms that are game changers. These algorithms transform a field of science or the world. The archetype is the PageRank algorithm that made Google what it is; Google is now a verb. These algorithms are as close to magic as computers get.
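Here is the small sketch promised under item 2: a back-of-the-envelope comparison of operation counts (constants deliberately ignored) for a dense direct solve scaling like n cubed versus an ideal multigrid-style solver scaling like n.

```python
# Rough operation counts for solving n equations: dense Gaussian elimination
# scales like n**3, an ideal multigrid cycle like n. Constants are ignored;
# the point is only how the gap grows with problem size.
for n in (10**3, 10**6, 10**9):
    direct, multigrid = n**3, n
    print(f"n = {n:>13,}: direct ~ {direct:.1e} ops, "
          f"multigrid ~ {multigrid:.1e} ops, ratio ~ {direct / multigrid:.0e}")
```

At a million unknowns the gap is already a factor of about 10^12, which is why a change in scaling of this kind rivals decades of Moore’s law.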

The trick is that each successive rung in this hierarchy of algorithms is harder, more failure-prone, and rarer. These days the last two rungs are ignored and only happen through serendipity. We could do so much more if we were intentional about what we pursue. It also requires a taste for risk and a tolerance of failure.

“Any sufficiently advanced technology is indistinguishable from magic.”― Arthur C. Clarke

I wanted this to be a brief post. I have failed. The workshop was a wonderful gift to my brain, so this is a core dump, and only a partial one. I even had to clip off the last two days (shout out to Riley and Daniel for great talks, plus the rest; even more homework). Having worked at Los Alamos, I have friends and valued colleagues there. To say that conversations left me troubled is an understatement. I am fairly sure that the Los Alamos I knew and loved as a staff member is dead. I’m always struck by how many of my friends are Lab Fellows, and how dismal my recognition is at Sandia. At Los Alamos, I would have been so much more, at least technically. That said, I’m not sure my heart could take what was reported to me. The Lab is something else now and has lost its identity as something special.

The Lab was somewhere special and wonderful, a place I owe my scientific identity to. That place no longer exists. You can still make it out in the shadows and echoes of the past, but those are dimming with each passing day. You may recall that last month, Peter Lax died. A friend shared the Lab’s obituary with me. It wasn’t anything horrible or awful, but it was full of outright errors and a lack of attention to detail. Here is one of the greats of the Lab and one of the few remaining scientists from the Manhattan Project, someone whose contributions to science via applied math define what is missing today. The kind of work in applied math that Peter did is exactly what AI and machine learning need, and it is absent. Worse yet, the current leaders of the Lab and the nation are oblivious. They botched his obituary, and I suppose that’s a minor crime compared to the scientific malpractice.

One cool moment happened at Starbucks on Thursday morning, a total “only in Los Alamos” moment. I was sitting down enjoying coffee, and a man came up to me and asked, “Are you Bill Rider?” He was a fan of this blog. I invited him to sit and talk. We had a great conversation, although it did little to calm my fears about the Lab. I can’t decide whether to feel disgust, resignation, or deep sadness. A beacon of science in the USA and the world is flickering out. At the very least this is a tragedy, a tragedy born of a lack of vision, trust, and stewardship. It’s not like the Lab does anything essential; it’s just nuclear weapons.

“The present changes the past. Looking back you do not find what you left behind.” ― Kiran Desai

Rather than close on this truly troubling note, I’ll end with a bit of gratitude. First, much appreciation to Brian, who did much of the operation and management of the workshop; he did an outstanding job. Chris Fryer and CNLS hosted the workshop under its auspices, and it was joyful to be back in the CNLS fold once again. I have so many great memories of attending seminars there, along with a few that I gave. Chris and his wife Aimee host wonderful parties at their home. They are truly epic, with a tremendous smorgasbord of culinary delights and even more stimulating conversations with a plethora of brilliant people. It is always a delight to visit them and enjoy their generous hospitality.

“Every revolutionary idea seems to evoke three stages of reaction. They may be summed up by the phrases: (1) It’s completely impossible. (2) It’s possible, but it’s not worth doing. (3) I said it was a good idea all along.” ― Arthur C Clarke

When is Research Done?

Sunday, 1 June 2025

Posted by Bill Rider in Uncategorized

TL;DR

There is a trend I’ve noticed over my career: an increasing desire to see research as finished. The results are deemed good enough, the effort is moved to new endeavors, and the success is divested of. But research is never done, never good enough; it is simply the foundation for the next discovery. The results of this attitude are tragic. Unless we are continually striving for better, knowledge and capability stagnate and then decay. Competence fades and disappears with a lack of attention. In many important areas, this decay is already fully in effect. The engine of mediocrity is project management with milestones and regular progress reports. Underlying this trend is a lack of trust and a short-term focus. The result is a looming stench of mediocrity where excellence should be demanded. The cost to society is boundless.

Capabilities versus Projects

“The worst enemy to creativity is self-doubt.” ― Sylvia Plath

Throughout my career, I have seen a troubling trend in the funding of science, one that has transformed into sprawling mismanagement. Once upon a time, we funded capabilities and competence in specific areas. I’ve worked at multi-program labs that apply a multitude of disciplines to execute complex programs. Nuclear weapons are the archetype of these programs, requiring a vast array of technical areas to be executed and woven together into a cohesive whole. Amongst these capabilities are a handful of overarching necessities. Even the competences necessary for nuclear weapons are being ignored. This is a fundamental failure of our national leadership, and it is getting much worse.

The thing that has changed is the projectization of science. We have moved toward applying project management principles to everything. The dumbest part of this is applying project management designed for construction to science. We get to plan breakthroughs (planning that makes sure they don’t happen) and apply concepts like “earned value.” The result is the destruction of science, not its execution. Make-believe success is messaged by managers but is empty in reality. Instead of useful work, we have constant progress reports, updates, and milestones. We have lost the ability to move forward and replaced it with the appearance of progress. Project management has simply annihilated science and destroyed productivity. Competence is a thing of the past.

“The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence.” ― Charles Bukowski

The milestones themselves are a subject of great management malpractice. They are supposed to serve as the high-level measure of success. We operate under rules where success is highly scrutinized. The milestones cannot fail, and they don’t. The reason is simple: they are engineered to be foolproof, so any and all risk is avoided. Upper management has its compensation attached to them too, and no one wants to take “food” out of their boss’s mouth (that’s the quiet part, said out loud). The end result is not excellence, but rather a headlong leap into mediocrity. Milestones are the capstone on project management’s corrosive impact on science.

Rather than great work and maintaining capability, we have the opposite, mediocrity and decay.

The Desire to Finish

“Highly organized research is guaranteed to produce nothing new.” ― Frank Herbert

One of the most insidious aspects of the project mindset is the move to terminate work at the end of the project. There is a lot of work that they want to put a bow on and say, “It is finished.” Then we move on to focus on something else. Management is always interested in saying, “This work is done,” and “move to something new.” The something new is whatever will bring big funding and contribute to managerial empire building (another pox). Once upon a time, the Labs were a national treasure (crown jewels). Now we are just a bunch of cheap whores looking for our next trick. This is part of the legacy of project management and our executive compensation philosophy. Much less progress and competence; much more graft and spin.

A few years ago, we focused on computing and exascale machines. Now we see artificial intelligence as the next big thing (to bring in the big bucks). Nothing is wrong with a temporary emphasis and shift of focus as opportunity knocks. Interestingly, exascale was not an opportunity, but rather a struggle against the inevitable death of Moore’s law. Moore’s law was the gift that kept on giving for project management, reliable progress like clockwork.

Project management desires explain exascale more than any technical reasons do. Faster computers are worthwhile, for sure; however, the current moment does not favor this as a strategy. In fact, it is time to move away from it. AI is different. It is a once-in-a-generation technology to be harnessed, but we are fucking that up too. We seek computing power to make AI work and step away from algorithms and innovation. Brute force has its limits, and progress will soon languish. We suffer from a horrendous lack of intellectual leadership, basic common sense, and courage. We cannot see the most obvious directions to power scientific progress. The project management obsession can be tagged as a reason: if the work doesn’t fit into that approach, it can’t be funded.

Continual progress and competence are out the window. The skills to do the math, engineering, and physics are deep and difficult. The same holds for the skills to work in high-performance computing. The same again for artificial intelligence. Application knowledge is yet another deep, expansive expertise. None of this expertise easily transfers to the next hot thing. Worse yet, expertise fades and ossifies as those mental patterns lapse into hibernation. Now the projects need to finish, and the program should move to something shiny and new. The cost of this attitude is rather profound, as I explore next.

The Problems with Finishing: Loss of Competence

“The purpose of bureaucracy is to compensate for incompetence and lack of discipline.” ― Jim Collins

This all stems from the need for simplicity in a sales pitch. Simple gets the money today. Much of the explanation lies in our broken politics. Congress and the people have lost confidence and trust in science. We live in a time of extremes and an inability to live in the gray; no one can manage a scintilla of subtlety. Thus, we finish things, followed by a divestment of emphasis. That divestment ultimately ends up hollowing out the built-up expertise needed for achievement. Eventually, the tools developed in the success of one project and emphasis decay too. Essential capabilities cannot be maintained without continual focus and support.

A story is helpful here. As part of the nuclear weapons program at the end of the Cold War, simulation tools were developed as an alternative to full-scale nuclear tests. To me, one of the more horrifying aspects of today’s world is how many of these tools from that era are still essential today. Even tools built at the start of stockpile stewardship after the Cold War are long in the tooth today. In virtually every case, these tools were state of the art when originally conceived. Once they were “finished” and accepted for use in applications, the tools went into stasis, and in a world of state-of-the-art science, stasis is decline. The only exception is the move of these codes to new computing platforms, an ever-present challenge. The stasis is in the intellectual content of the tools, which matters far more than the computing platforms.

What usually does not change are the numerical methods, physics, and models in the codes. These become frozen in time. While all of these could be argued to be state of the art when the code was created, they cease to be with time. We are talking decades. This is the trap of finishing these projects and moving on: the state of the art is transitory. If you rest on success and declare victory, time will take it from you. This is the state too much of our program is in. We have declared victory and failed to see how time eats away at our edge. Today, we have tools operated by people who don’t understand what they are using. The punch line is that research is never done and never completed. Today’s research is the foundation of tomorrow’s discoveries and an advancing state of the art.

Some of this is the ravages of age for everything. People age and retire. Skills dull and wither from lack of use. Codes age and become dusty, no longer embodying the state of the art. The state of the art moves forward and leaves the former success as history. All of this is now influencing our programs. Over enough time, this evolves into outright incompetence. Without a change in direction and philosophy that incompetence is inevitable. In some particular corners of our capability, the incompetence is already here.

“Here’s my theory about meetings and life: the three things you can’t fake are erections, competence and creativity.” ― Douglas Coupland

A Mercy Killing of an Ill Patient

“Let’s have a toast. To the incompetence of our enemies.” ― Holly Black

The core issues at work in destroying competence are a combination of short-term thinking and lack of trust. The whole project attitude is emblematic of it. The USA has already ceded the crown of scientific and engineering supremacy to China. American leaders won’t admit this, but it’s already true. Recent actions by the Administration and DOGE will simply unilaterally surrender the lead completely and irreversibly. The corollary to all this negativity is that maintaining the edge of competence requires trust and long-term thinking. Neither is available today in the USA.

There is a sharp critique of our scientific establishment in the recent book Abundance, where Klein and Thompson provide commentary on what ails science in the USA. It rings true to me, having worked actively for the last 35 years at two National Labs. Risk avoidance, paralyzing bureaucracy, and misaligned priorities have sapped our vitality, and too much overhead wastes money. All these ills stem from short-termism combined with a lack of trust, with a good amount of largess and overconfidence conspiring as well. Rather than encourage honesty, the lack of trust empowers bullshit. Our key approach to declaring success is to bullshit our masters.

Today is not the time to fix any of this; it is time to think about what a fix will look like. Recent events amount to the wanton, destructive dismantling of the federal scientific establishment. Nothing is getting fixed or improved. It is simply being thrown into the shredder. If we get to rebuild science, we need to think about what it should look like. If we continue with short-term thinking, success won’t be found. The project management approach needs to be rejected, and trust is absolutely necessary. Today, trust is in freefall, and much of the wanton destruction stems from its lack. This issue is shared by both sides of the partisan divide; their reasons are different, and the truth is in the middle. Unless the foundation for success is available, scientific success won’t return.

“The problem with doing nothing is not knowing when you are finished.” ― Nelson De Mille

What Americans don’t seem to realize is that so much of our success is science-based. During the Cold War, the connection between science and national security was obvious; nuclear weapons made the case with overwhelming clarity. Economic security and success are no less bound to science, though the effect is more subtle and longer-term. The loss of scientific power won’t be obvious for a long time, but eventually we will suffer from the loss of scientific and engineering success. Our children and grandchildren will be poorer, less safe, and live shorter lives due to our actions today. The past four months simply drove nails into a coffin that had already been fashioned by decades of mismanagement.

“Never put off till tomorrow what may be done day after tomorrow just as well.” ― Mark Twain
