tl;dr

AI is the single most important technology for our future. It will likely form the foundation of economic and military power for decades, if not longer. The USA leads the world in AI largely through our corporate power; government is a powerful customer but a junior partner in the technology. This is far different from previous world-changing technologies like nuclear weapons or the Internet. Our forward-looking “strategy” is to double down on computing hardware. It envisions a continuation of the current technology rather than future breakthroughs. We are one breakthrough away from losing dominance, and meanwhile the USA is structuring itself to ensure those breakthroughs happen elsewhere. Our leadership is planning our demise through a purely short-term focus. Their incompetence will have long-lasting and disastrous consequences.

“As we peer into society’s future, we — you and I, and our government — must avoid the impulse to live only for today, plundering, for our own ease and convenience, the precious resources of tomorrow.” – Dwight D. Eisenhower

Something Amazing

In 2022, I used a large language model (LLM) for the first time. The abilities of ChatGPT felt almost magical and definitely jaw-dropping. It felt almost exactly like the first time I used Google search. I was immediately struck by the feeling that the future had arrived; there was no going back. These LLMs have become synonymous with AI ever since. Furthermore, AI has become the principal engine of economic progress and national power, with huge implications for work, investment, and national security. Its importance and capability have grown and grown. Industry is investing huge amounts of money in training and running AI in vast data centers.

AI is growing more capable with each passing month. I finally bought a subscription to Claude, amazed by the LLM, Cowork, and Code. The things it can do are beyond anything imaginable a mere five years ago. It is able to do much of many white-collar jobs, and this feels most acute in programming. Treating it as a replacement is a manifestation of short-term thinking and scarcity. The real thing for Americans to tackle is to start demanding more from the job. There needs to be a more human edge and more human thought in the work. Leave the tedium for AI, or better yet eliminate it. Make AI a capable assistant that frees people up to create more, higher-quality things. This is long-term, abundance thinking.

“In order to achieve this, jobs have had to be created that are, effectively, pointless.” – David Graeber

What I found AI most impressive at is removing tedium from work, not providing better thinking. It provides breadth of information and perspective, but the depth of that thought is severely lacking. Every time I went deep into a topic where I had expertise, the AI floundered. It painted in broad brush strokes, but the refined work and understanding were poor. What is needed is for jobs to demand more depth from the work. What this really means is that there needs to be a demand for greater quality and less rote, useless work. In other words, AI should be the death of what have been called “bullshit jobs.” If AI can do a job, that job is of questionable value.

From what I observed at work as I went out the door, the opposite was happening. The amount of bullshit I was required to focus on grew year upon year, and the quality got lower and lower. Management actually left less and less room for high-quality, innovative work. In other words, the trends were all going in the opposite direction from what we need to survive AI in the workplace. The ways my managers asked me to use AI were moronic. Worse yet, friends at other labs told me the same thing. I should note this is at a top-tier research institution; I can hardly imagine what it’s like at companies. We are led by people who are clueless. The scary part is that I know many of these leaders, and they are smarter than this.

The leaders I know are the core of our nation’s intellectual leadership. They guide the very institutions we need for AI to flourish. The irony is that what science needs, and what AI needs to truly succeed, is higher-quality work and much more thinking. Our leaders are pushing us to do less, AI or not. Human thinking, not more computing power, is the key. Computing is a great tool, but thinking is the silver bullet. AI is the same: an incredible tool to augment humans. This is the secret to long-term success and victory in the worldwide quest for AI dominance: making AI a tool and collaborator for humanity rather than a replacement.

My first big point is that we need to embrace the long term and abundance if we want AI to succeed. Humanity and thinking are essential; we need more of them, not less. Short-term thinking and scarcity are a losing approach. The problem is we have already chosen the losing approach.

How the USA Will Lose the Lead

What I observed in my last few years at the lab was an almost complete and total lack of coherent strategy around science and technology. Thinking was all short-term and money-focused. The most acute version of this is around AI, where the lack of thoughtful, principled approaches and strategy has been appalling. Increasingly, lab management simply looks at how to get money, and as much of it as possible. What that money does, how it is applied, and what the long-term future looks like are immaterial. These attitudes are paralleled across society. The nation as a whole has the same disease.

This is the final festering of the habit of short-termism: looking only at quarterly progress and annual budgets, with little or no thought toward any sort of long-term coherent plan. The labs were once engines of innovation and cornerstones of American security; that is difficult to assert today. Over the long run, this erodes these institutions and turns them into mere contracting organizations. It also reflects the lack of any coherent national strategy. In the end, it will lead the United States toward being a second-rate power. Our most powerful technologies in industry and defense are based on research that is decades old, and that technology pipeline is all but annihilated today.

The first incarnation of this, and a continuing theme, is the obsession with computing hardware. I have seen this for almost my entire career. Naive, short-term leadership sees computers as an easy political sell. When one looks at computing, whether classical modeling and simulation or AI, there is a balanced set of activities that need to be cared for. Many different things contribute to the whole, and in both cases math, science, and software are more important than hardware. In the current age, all the focus, and the only strategy that can be discerned, is hardware. This obsession leaves most of the ecosystem supporting computing famished. It will produce poor performance compared to a coherent, thoughtful strategy that balances all the needs.

The case for a hardware focus in classical modeling and simulation was nuanced; it was not a complete slam dunk, and the benefits were modest. The case against a hardware focus in AI is nearly bulletproof. The scaling laws supporting increased capability from more computing in AI are incredibly weak, in fact far weaker than the corresponding scaling in modeling and simulation. Yet we see complete devotion to hardware across both AI business interests and government. It is as if everything else is simply being ignored, and this is the only thing they know how to do. The lack of coherent thought and broad, encompassing strategy staggers the imagination.
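To put a rough number on that weakness: the published neural scaling laws (Kaplan et al. 2020 and their successors) fit model loss as a shallow power law in compute. The exponent below is the commonly cited figure from that literature, used here purely as an illustration rather than a claim original to this essay:

$$ L(C) \propto C^{-\alpha}, \qquad \alpha \approx 0.05 $$

At that rate, a tenfold increase in compute buys only about an 11 percent reduction in loss (10^-0.05 ≈ 0.89). Exponentially growing spending for marginal returns is exactly the bargain a hardware-only strategy locks in.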

The labs have completely rejected their traditional role of providing scientific leadership and feedback to the national programs. What we have now are laboratories living a hand-to-mouth existence, happy for whatever money comes their way. Internally, they show a complete lack of any strategic thought that could lead to success. The entire system seems to be spiraling down the drain. It is in need of vast reform and improvement. Instead, we are doubling down on the very forces that led to the decline in the first place, now aided by a federal government that destroys rather than fixes or reforms.

“Knowledge is responsibility, which is why people resist knowledge.”― Stefan Molyneux

Personal Perspectives on AI

Part of my decision to retire revolved around these questions. I had lost confidence in the ability of both Sandia and the federal agencies to appropriately and wisely steer our future. AI was more of the same; I had already seen horrible decision making. The actions of Sandia managers, federal agencies, and our national leadership have all convinced me that no one is thinking about how to do this in a balanced, wise manner. Everything revolves around money. Nothing revolves around the scientific work needed to assure American supremacy in these areas. Quite frankly, we have no national strategy, and we will soon be lost in the wilderness. I was wasting my time working.

This is the exemplar of everything I wrote about in “The Decline of American Science.” The way science is managed today, we cannot stay on the cutting edge of anything. All of this stems from fear and a lack of trust. We are so fearful of anything that looks like a scandal that we basically cut our own throats. In AI, which is moving at light speed, this is a fatal flaw. There are other fatal flaws, and the institutions fail to acknowledge any of them. They seem powerless to affect anything for the better.

One of the things I did in my last couple of years at Sandia was start to investigate the power and proper use of AI. In the process, I came to a number of conclusions. I should note that Sandia provided a version of ChatGPT internally. I tested and used it, but I also compared it to what was available on the outside: not just ChatGPT, but also Gemini and Claude. What I determined in short order was that the internal version of ChatGPT Sandia provided was a piece of shit. It was terrible, at least compared to the free versions available externally. The free versions!

“The first principle is that you must not fool yourself — and you are the easiest person to fool.” – Richard Feynman

It hallucinated worse than the outside models. It answered every single question I asked worse than any of the other available models (again, the free versions). One thing to note is that the internal version of ChatGPT was structured not to violate security rules, so data that needed to remain internal was safe. It was cut off from the open internet, and the standards required for careful, secure use hobbled it. It was also updated less frequently and was generally behind. This is a general software issue at the labs: software processes are extremely conservative, leading to slow progress. This is one of the most damning things, but it is the way the labs operate. Security rules are deep, Byzantine, and dripping with paranoia.

Cyber security gets power and money by being as paranoid as possible. When a mistake is impossible, progress is impossible too. It is risk averse in the extreme, and stupid rules are common. For example, the approach to medical device security is insane; it would make people choose between the best medical treatment and outlandish security concerns. It is a massive “fuck you” to employees and scientists. These rules cost an enormous amount of effort and time and ensure that everything we do lags behind cutting-edge technology. LLMs for AI were no different. They reflect some of the worst forces afflicting science today. AI at the labs is slow, expensive, and behind.

How I Learned to Be Effective with AI; It’s Being Ignored

“The greater the gap between self perception and reality, the more aggression is unleashed on those who point out the discrepancy.” ― Stefan Molyneux

While I learned that internal AI efforts were verging on hopeless, I did figure some things out. One of the key things I discovered in my exploration of AI is the mindset for engagement: a verification and validation mindset. I am bootstrapping from the perspective that V&V is the scientific method. Others have seen this as well: one needs to approach AI as a collaborator, with a spirit of pushback and doubt in every interaction. There needs to be a demand for evidence from the AI about its assertions, and that evidence needs to be checked independently. This is exactly what is done in V&V and in science in general.
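To make that concrete, here is a minimal sketch of what such a V&V-style loop can look like in practice. It is only an illustration of the mindset, not any tool I built or used at Sandia; the ask_model function is a hypothetical stand-in for whatever model interface (chat window or API) one happens to have, and the checks are whatever independent tests, hand calculations, or literature lookups apply to the question at hand.

```python
# A sketch of the V&V-style engagement loop described above.
# "ask_model" is a hypothetical placeholder for whatever LLM interface you use;
# swap in a real chat or API call. Nothing here is specific to any one product.

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned reply so the sketch runs as-is."""
    return "The truncation error of central differencing is second order in the mesh spacing."

def vv_session(question: str, checks: list) -> dict:
    """Treat the model as a collaborator: demand evidence, then verify independently."""
    answer = ask_model(question)
    # Step 1: push back and demand supporting evidence for every claim.
    evidence = ask_model(
        "For the answer below, list the sources, derivations, or test cases that "
        "support each claim, and state your uncertainty about each.\n\n" + answer
    )
    # Step 2: independent verification with checks the model did not write
    # (unit tests, hand calculations, reference solutions, literature lookups).
    results = {name: check(answer) for name, check in checks}
    return {
        "answer": answer,
        "evidence": evidence,
        "checks": results,
        "accepted": all(results.values()),  # accept only if every independent check passes
    }

if __name__ == "__main__":
    report = vv_session(
        "What is the order of accuracy of a central difference approximation?",
        checks=[("matches Taylor-series hand calculation",
                 lambda ans: "second order" in ans.lower())],
    )
    print("accepted:", report["accepted"])
```

The essential point is the last step: acceptance is gated on checks the model did not write, which is precisely the posture V&V demands.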

After I retired, I continued; this is an essential technology for our future. Eventually, I paid for Claude. Before the purchase, Claude impressed me. After the purchase, it has largely been underwhelming, mostly because my expectations were so high. Nevertheless, the desktop version is amazing with Cowork and Code. It definitely improves my efficiency, and it is a huge leap forward. I am fairly sure that the capability for software creation will be incredible. I’ve also worked with friends doing amazing work with it. This is hard-core science, and its level of competence paired with a good collaborator is unbeatable today. The key is the right approach to using AI. At the labs this is hard to find, certainly from leadership. Lab leadership acts clueless about how to use it well.

“A government contract becomes virtually a substitute for intellectual curiosity.” – Dwight D. Eisenhower


As I’ve noted before, V&V is in deep decline in science, especially at the labs. The V&V mindset useful for AI is not present in science, yet AI needs V&V thinking simply to judge its results. One of the most repugnant aspects of our current approach to AI is that V&V is rejected when it should be the standard way to engage with these models for science. In the recent Genesis call, V&V is not a priority; it is weakly nodded toward, not emphasized. With AI, V&V is vastly more important than with classical computing. This is basically a call to cut the throat of progress and destroy the best way to interact with AI in a scientific enterprise. This lack of confidence in our direction validated my decision. I was wasting my time.

“For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.” ― Carl Sagan

I had a direct engagement with a Lab Director about this, and it was discouraging in the extreme. While the incompetence and lack of ethics of my direct management at Sandia was the principal reason for retiring, there was a deeper reason. I had engaged with a new lab director on a topic related to AI. Her response was so underwhelming and weak that I lost all faith. Internal search at Sandia is awful, even though the search technology itself is first-rate and modern. The reason is the information hiding that defines the internal culture. That culture short-circuits an essential technology for the information age.

I asked her about information control and its cost for AI training. Since training data is essential for LLMs, the same issue that undermines search is fatal for LLMs: security rules and culture, strictly applied, would rob AI of training data. Rather than answer the question, she attacked me and said the question I asked was “harsh.” It was a legitimate question, and it gets to the heart of the utility of this work. It confirmed to me that she was just like all the other managers: incapable of solving real problems and addressing the real needs of the institution. It should be obvious by now that I am completely fed up with ineffective managers who refuse to confront real problems. The new director would be more of the same incompetence, another leader to rubber-stamp the current decline. Every day this continues, the lab declines further and gets worse.

Moreover, my understanding was that she was a last-minute second choice over another person, someone who in all likelihood would have been vastly better. This other person is someone I respected greatly and knew personally. He was rejected for political reasons. Honestly, I don’t have a lot of confidence that the outcome would have been any better had my friend been chosen as director. There seems to be an institutional and societal barrier to addressing any problems. Managers seem completely devoted to the idea that they can simply define success, declare it, and ignore problems.

Maybe it is just the outcome of social media: our leaders are now just influencers. Moreover, paying attention to problems is a losing prospect and will simply get one fired, since the blame for those problems will be laid at their feet. This pattern is one of the main reasons for the accelerated decline in American science. The problems are obvious, but no one is addressing any of them. When managers are confronted with the truth, they reject it and shoot the messenger. If we continue down this path, American dominance in AI will be fleeting and short.

“You cannot connect with anyone except through reality.” ― Stefan Molyneux