tl;dr
Is AI going to replace your job and spike unemployment, or will it supercharge abundance and wealth?
We have a choice as a society about where this goes. The hype around AI is endless and over the top, and it both misses the big opportunity and stokes outlandish fears. Almost all the conversation misses what AI actually brings to the table. In many cases, if a job can be eliminated by AI, much of that job probably shouldn’t be done at all. The real power of AI is to make people more productive. Cutting jobs is zero-sum thinking, and it will turn people against AI. The key to AI is boosting productivity to do more and sell more. This is the essence of abundance: use infinite thinking to make more and grow the economy. If AI fucks the public, the public will fuck AI back. That is how we lose as a society. A better path is to use AI to grow society’s wealth and abundance instead of just growing profits.
This topic is long overdue and needed. We need to think clearly about where all this is going, and right now, almost no one is. We are not seeing the real core issues around AI. Whether it comes from the AI companies or the government, the discussion is all bullshit and very little light. This bullshit is of a piece with the hallucinations AI produces regularly. Algorithmic BS is a perfect vehicle for amplifying the lack of trust already corroding society today. That lack of trust could be amplified further and trigger a societal doom loop.
“Abundance of knowledge does not teach men to be wise.” ― Heraclitus

AI is a “Magic” Technology
“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke
One of the things to recognize is just how miraculous AI is. Over the course of the internet age, there have been a handful of moments that feel almost magical when they hit you. The first for me was the first time I used Google search. Before Google, the internet had a set of phone-book-style websites; I happened to use one called AltaVista. It was how you got around and found stuff. Then Google search arrived, with its amazingly simple interface: you typed in a query, and suddenly you had results. It was like magic! Once I used Google search, it was like walking through a door I would never walk back out of. AltaVista was gone, and I was never going to return to it. Google was like fucking magic!
The next thing that sparked this feeling was a smartphone, the Apple iPhone. I had used a BlackBerry and a flip phone. The iPhone was the internet in your pocket plus a built-in iPod. It became even more, and the interface was like a mini-laptop. More magic! The BlackBerry was cooked, and these devices became ubiquitous. As I discuss later, these smartphones were turned on society over time. The worst thing about Google and smartphones was the enshitification they unleashed. My stance is that enshitification is a choice driven by maximizing shareholder value. It is optional. We treat it like a natural law. It is not.
The next magic moment was the first time I used ChatGPT. I heard about this new site online with this thing called a Large Language Model (LLM) that you could question like a person. You simply spoke like a human being, and it talked back. I tried it out. My jaw dropped at what it could do. The potential was vast. The problems with the technology are also vast. Nonetheless, this was a magical moment where you could see the world change in an instant. Recently, with Codex, I felt the same thing (Claude Code is similar). I was able to do things with an ease and simplicity that were magical. This is the dawn of agentic AI. The potential for LLMs and agentic AI is incredible. The counter to this hopeful trajectory is the societal system that enshitifies all this magical technology as its default setting.


The subtext of maximizing shareholder value is the mindset it typifies: greed instead of generosity, the short term instead of the long term. This mindset is about zero-sum thinking, where there are winners and losers. The alternative is infinite thinking, where everyone wins. We have choices with AI and agents. We can proceed as we have, with greed and short-term thinking, which will lead to societal damage and enshitification. Or we can choose a different path of long-term thinking and generosity, the path to abundance and societal good. The choices are there. To get good outcomes for society, we need to step away from our current defaults.
“A man is but the product of his thoughts. What he thinks, he becomes.” ― Mahatma Gandhi
We Need to Figure Out Work and AI
In the last year that I worked at Sandia, I spent a great deal of time trying out LLMs in the setting of work. I ran all sorts of tests to understand and map out the capabilities of this technology for doing scientific work. I examined how LLMs did at writing, at research, and at answering a variety of questions. This grew out of genuine curiosity, but also out of work I was doing in verification and validation of scientific machine learning. Scientific machine learning (ML) is a related field getting a great deal of attention in the scientific community, although it is being overwhelmed by the tsunami of interest in LLMs. Doing this work required applying well-developed principles of the scientific method and adapting those principles to the specifics of LLMs and ML.
“I’m not upset that you lied to me, I’m upset that from now on I can’t believe you.” ― Friedrich Nietzsche
What I came to realize was that my approach to verification and validation is essential to getting good results from LLMs. To wit, the level of doubt in taking LLM results needs to be quite high. LLMs are prone to bullshitting us, and quite often will give an answer meant to satisfy us that has no relation to objective facts. A large part of successfully using an LLM is to start off by asking it questions to which you already know the answer, in order to verify that the topical area you are examining is within its grasp. This by no means guarantees that, as you get deeper and deeper into a topic, the LLM will remain successful. One should always take a result from a large language model with a grain of salt, check it, and think about it deeply.
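The check described above can be sketched as a tiny harness. This is purely illustrative, not any particular tool: `ask` stands in for whatever LLM client you actually use, `known_qa` holds your questions with answers you already know, and `grade` is your own correctness check.

```python
def topic_sanity_check(ask, known_qa, grade):
    """Ask questions with known answers; return the fraction the model gets right.

    ask      -- callable taking a question string, returning the model's answer
    known_qa -- dict mapping question -> expected answer
    grade    -- callable (answer, expected) -> bool deciding correctness

    A low score means the topic is outside the model's grasp, and every
    later answer in that area deserves extra doubt.
    """
    if not known_qa:
        return 0.0
    correct = sum(1 for q, expected in known_qa.items() if grade(ask(q), expected))
    return correct / len(known_qa)
```

Run something like this with a dozen questions from your own area of expertise before trusting the model on anything adjacent; a low score is a strong signal to verify everything it tells you in that area.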
What I discovered with LLMs is that the closer you get to esoteric, expert knowledge, the worse they are at everything. Whenever I got close enough to the core of my own expertise, the LLM failed to give objectively good results. This was true over and over again. This is an important lesson to integrate into using them effectively. The role of the human expert is actually amplified by LLMs. The expert knows the point where LLM competence ends, and human judgment is necessary.
For example, I found that LLMs are terrible for writing. They’re good as editors, but terrible at creative writing, terrible at anything a human with real ability can do. Writing is a deeply human activity and involves clarity of thought, and its narrative elements are an essential human pursuit. At least today, AI has no capacity to write with genuine humanity. My writing is part of my thinking on a topic; this is true for fiction and non-fiction alike. A key is to leave marks on the prose that show genuine personality and human experience. Ultimately, my use of AI in any sort of writing has been relegated to editing and research.
The same holds doubly for areas of science, where I find AI is a capable digital assistant, great for improving the scope and breadth of what I do, but not good at creating anything at an expert level. I have tested this over and over with the same result. LLMs have improved over the past three years, but that has only moved the wall they hit a little. I’ve taken various algorithms and work that I’ve done and tried to basically spoon-feed them into the AI. Even with an excessive amount of spoon-feeding, the AI fails at even the simplest level of creativity. At the same time, I am convinced it can be a useful assistant. I use it every single day for a host of tasks.
The counter is that AI is very good at producing a large volume of work, and it can be used to improve both the quality of the work and the speed with which it is completed. This was particularly true with the Codex example I tried in the agentic work. It did a number of banal tasks with speed and effectiveness far greater than my own, and basically accomplished one or two days of hard work in less than an hour. What I saw there was the capacity to free up my time for the creative and thinking efforts that are appropriate for humans, and to let me spend more of my time doing what only a human being can do.
“The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence.” ― Charles Bukowski
Humans supply thinking and creativity. AI needs to remove the bullshit instead of adding bullshit to humanity.


How Not to Make Progress: No Trust and Maximum Bullshit
One of the big things that will inhibit the ability of AI to improve the workplace is the pervasive lack of trust in society. Every bit of the current trajectory will simply destroy more trust. A lot of the work that we all do at work is complete bullshit. Training, paperwork, and the various other check-box exercises all stem from that lack of trust. As AI shows, most of this work is meaningless, lacks humanity, and can be automated. Rather than eliminating this useless work, the lack of trust only accelerates and amplifies it. If we do not change course, AI will undermine trust and generate even more inhumane bullshit. Over the course of my career, the bullshit grew without bounds and swallowed most of the humanity in work.
“Whoever is careless with the truth in small matters cannot be trusted with important matters” ― Albert Einstein
One of the biggest things for AI to solve is the issue of trust in itself. The tendency to hallucinate or, frankly, to bullshit us is toxic for AI’s future. It might be great for the near-term bottom line, but it destroys the long term. This, along with the sycophancy of the replies, is a major issue. AI needs to stop this and start being honest, focusing on growing trust. There are probably internal measures and mechanisms by which an AI can attach some degree of confidence and reliability to its results, and report whether an answer is low confidence or high confidence. These could guide users toward exercising doubt and assist in verifying results under the appropriate circumstances.
The fact that LLMs are probabilistic engines means there is a measure of probability associated with the results they give. Thus, a grade and score can be provided, even if the highest score reported corresponds to a lower probability than we would like. If LLMs would let the user know when an answer is sketchy and unreliable, it would be transformative. It would show a vulnerability that would help build trust. We should never trust AI completely. Nonetheless, a tip that it was uncertain would be a boon. It would show a level of care for the user that today’s models neglect. It would also help educate users about what they are really dealing with.
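To make this concrete, here is one simple proxy for such a score, assuming an API that exposes per-token log-probabilities (some do). This is a sketch of the idea, not any vendor’s actual confidence mechanism, and the 0.5 cutoff is an arbitrary illustration rather than a calibrated threshold.

```python
import math

def confidence_from_logprobs(token_logprobs, low_threshold=0.5):
    """Collapse per-token log-probabilities into a crude confidence score.

    The geometric mean of the token probabilities, exp(mean(logprobs)),
    is one simple proxy: values near 1.0 mean the model found the text
    "easy," while low values flag an answer worth verifying.
    """
    if not token_logprobs:
        return 0.0, "low confidence"
    avg = sum(token_logprobs) / len(token_logprobs)
    score = math.exp(avg)  # geometric mean of the token probabilities
    label = "high confidence" if score >= low_threshold else "low confidence"
    return score, label
```

Even a blunt signal like this, surfaced to the user, would be the kind of "this answer is sketchy" tip described above.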
This sort of measure built into AI would be incredibly welcome. At the same time, under the way current corporate governance works, I think it would be rejected out of hand, because the companies simply want as many users as possible. The AI wants to present itself as completely reliable and completely subservient to its users. Rather than provide a better service, the AIs will resist any kind of feedback that calls their results into doubt. All of this serves maximizing shareholder value instead of maximizing customer service. Today’s corporate governance is squarely opposed to getting this right, and this governance is at the heart of society’s deficit of trust.
“The comfort of the rich depends upon an abundant supply of the poor.” ― Voltaire
How to Actually Make Progress
Trained properly, AI could be a vastly powerful agent or assistant that unleashes human creativity. Human creativity, art, and free thinking are in short supply today. AI offers the ability to boost them by freeing up time, but also to help people bring ideas to fruition and see whether those ideas are actually worth exploring. AI can allow much more exploration, bring many more ideas to life, and perhaps ultimately produce far greater benefits for business, if only businesses were to trust the people they employ to do this kind of work. For myself, this is exactly the model of AI that I plan to exercise: a powerful assistant that helps me explore ideas more deeply and bring the right ones to life.
The right way to look at AI is as a very capable digital assistant with broad and general knowledge. At the same time, that knowledge is shallow and not at an expert level. AI cannot hold a candle to the expertise at the heart of what you do, and that expertise is the heart of the humanity we should bring to our lives and work. AI can provide competent, but flawed, help in almost everything else that is ancillary to your core work. In this way, AI can be a wonderful digital assistant and ease your way to greater productivity.
As I noted above, AI can’t write for shit. I do not believe I am the greatest writer, but I’m far, far better than AI. With a little effort, almost everyone probably could be taught to be better; we just need to teach people. AI doesn’t sound authentic, and it produces prose that is simply uninspired. AI is a great editor, though.
One of the biggest issues with AI is that you should doubt everything it creates. What I realized was that the way I treated AI was very much the same as the way I do science: there is a need for verification and validation. I approach using AI the way I would approach a scientific problem, where I look to confirm everything it does and hold everything in doubt. I assume it’s useful, but I also assume it’s flawed and in need of extra work to verify that the results are good. It would be better if AI helped by tipping us off when a response is more questionable. In fact, with AI, the need for verifying and validating everything it does is much higher than with other computational tools. This calls into question the absence of V&V in society’s plans for AI. V&V is essential for AI’s success.
The greatest high-leverage thing we can do is train people to use AI correctly. This is an area where my experience at the National Labs was absolutely jaw-dropping. Management’s efforts to use AI have been ham-handed and naive: just superficial encouragement of the worst uses possible. They were encouraging people to use it, but not in an intelligent and well-thought-through way. The fact is that AI’s proper use is subtle and esoteric and requires a great deal of discipline and a change in the overall mindset. We need leadership that pushes us in the right direction. So far, all the leadership is pushing everything in the wrong direction.
“Don’t mistake activity with achievement.” ― John Wooden
Nothing shows this problem more fully than the scientific programs around AI. DOE has the massive Genesis Project, which is an exemplar of how not to do AI in science. It’s a whole bunch of stunts. There’s no evidence of any V&V or doubt in how it’s used, and V&V and doubt are the most important parts of science, more so with AI than with anything else. Instead, like other recent programs, it’s all about big computers and doing things that look splashy but have very little scientific sense. It is almost 180 degrees from the right direction. AI can be a powerful tool for science, but only with a clear-eyed assessment of its results. Instead, we see blind acceptance and marketing bullshit.
The deeper issue is how this productivity will be utilized by corporations and organizations.
* Will they simply demand that the organization and the corporations produce as much as before? In this case, the gains with AI will be used to slash the size of the workforce.
* Or instead will they realize that they can unleash people to do more, and that corporations and organizations can do more and create more good for society?
This is an abundance agenda, and it leads to great growth and good things for society. One path leads to destruction, and the other leads to long-term benefits. Current ideas are heading headlong toward destruction.
“Creativity is intelligence having fun.” ― Albert Einstein
To do this, we have to be mindful about how we use AI. Today’s world is full of the mindset of scarcity and the use of short-term thinking. This leads to the use of productivity to simply reduce the number of workers. This is short-sighted and ultimately robs the future of a much better outcome where we use the productivity to unleash greater creativity and more products, more output, and better things for society.


With Today’s Corporations, AI Will Fuck Us
Don’t worry, it will all be enshitified. If recent history is a guide, the magical capability of LLMs will be turned to shit. We have managed to take Google search and fuck it up systematically through greed. This greed is an enshitification plan. Smartphones are the same. Social media was never quite so magical, but it had potential. That potential has been squandered by the engine of enshitification. Now we have this new technology that seems far more powerful than any of these previous ones. It is definitely magical. We are going to turn it loose on the ecosystem that enshitifies things naturally.
What could possibly go wrong?

The capabilities and power of AI are far greater than the algorithms used in social media. With the current mentality, human creativity will be greed-motivated to adapt AI into profit machines. The same mentality has already done an immense amount of damage to society, so we should have every faith that a more powerful technology will unleash greater damage. We are already seeing chaos and horrors originating from this process in multiple ways. Surely the power of AI will also be integrated with social media, supercharging both profits and damage. These forces have energized toxic politics and vast income and wealth inequality. An AI-supercharged ecosystem may be unimaginably worse. Without change, this is the likely course.
We should have already learned the lesson, but obviously, we haven’t. Money provides too much power to be overcome.
Zero Sum Thinking and Value
The current philosophy of maximizing shareholder value is zero-sum thinking. This is the approach where business (and life) is all about winners and losers. In today’s world, the losers are consumers, who are preyed upon, and vulnerable smaller businesses, preyed upon by massive corporations. The powerful dominate the weak, and most of us are weak. Ultimately, the profits and victories come at the expense of wide swaths of society.
I worked for decades in places where trust was in free fall. That’s not entirely true. The first decade or so at Los Alamos was a high-trust environment where people worked together. There was generosity and a spirit of giving that were essential to developing me as a professional. If you were reasonably smart and competent, you were welcomed into someone’s office and offered the best of their thoughts and advice. It was in this trust that I blossomed. Then modernity came for trust, and the generosity was hollowed out.
That environment, I believe, has been snuffed out. The same me plopped into the current version of the National Labs would never grow and accomplish anything like I could in that trusting environment. The lack of trust that infects society as a whole eventually took hold at the labs: the government did not trust us, and we did not trust the government. We moved headlong toward all of the natural outcomes of a lack of trust.
Part of this was:
– the lack of peer review
– the lack of honest assessment of work
– leadership that lied and withheld information from the rank and file
– an inability to look at risk and failure in a healthy way
All of this simply accelerated the loss of trust in the state we are in today. I think it’s safe to say that the trust in our society has never been lower. I saw all the toxic fruits of that mentality at work myself. We can see it across society, looking at politics. No matter what side you take, the other side is evil. With AI, we have a technology that can make it worse.

The problem is that trust-building AIs are not going to maximize shareholder value; however, they would build a system suitable for the long run. We need a fundamentally different mindset and corporate mentality.
“Acknowledging the good that you already have in your life is the foundation for all abundance.” ― Eckhart Tolle


Infinite Thinking
“To ask, ‘What’s best for me?’ is finite thinking. To ask, ‘What’s best for us?’ is infinite thinking.” ― Simon Sinek
The alternative to “zero-sum” thinking is infinite thinking. This thought process is couched in game theory. A zero-sum game is the classic contest with a winner and a loser. The opposite is an infinite game where it is all about continuing to play. If you play well, everyone wins. The zero-sum game is the usual football or basketball game. The infinite game is like Legos or a marriage. Success is continued play and creativity where everyone wins.
One of the greatest differences between the finite and the infinite game is an aspect of trust. To succeed at the Infinite Game, one must focus on building and maintaining trust. In the Finite Game, trust is used against you and becomes something that you wield as a weapon. This difference can be seen as our society has become completely untrusting. This is an exemplar of our commitment to these finite win-lose games as the basis for society.
“Leadership is about integrity, honesty and accountability. All components of trust.”― Simon Sinek
We are seeing a supercharging of corporate greed and behavior that drives the worst impulses of business. The other force that could change things is regulation. We are currently in an orgy of deregulation, and there is very little thought or confidence on the part of the government to regulate an area like AI, much less tech or social media, in any rational way based on expertise and knowledge. Instead, the vast amounts of money driven by corporate greed and inequality are tilting the playing field squarely against any of these outcomes. Thus, current trends show that trust is going to sink even lower across society as a whole. The recent dust-up between the Department of Defense and Anthropic is an exemplar of this: DoD and OpenAI chose the path of no trust and greed.


Switching to a trust-building mentality is something society needs today. With trust, collaboration and cooperation become the touchstones of how society works. Without trust, it simply becomes a dog-eat-dog world where you employ data and power as weapons against those you’re pitted against. A simple and observant view of today shows you where this gets us: conflict, chaos, anger, and a host of other ills that are dragging society down.

If we take an alternative view of how AI is used, we can also see how it can build trust. If we view AI as a vehicle for abundance, it can supercharge the quality of work done, enhance the volume of work, and expand how much each worker can do. The ability to create, produce, and get products to market then accelerates and grows in scope. All of this brings wealth and prosperity to society, which in turn builds trust in AI and provides benefits for the humanity it serves. This is the path we need to take if we want AI to be good for society.
“Abundance is harder for us to handle than scarcity.” ― Nassim Nicholas Taleb
Standing in opposition to this vision is the focus on maximizing shareholder value, which serves only the short-term prosperity of society. Virtually all of us in the United States have investments in the stock market, and our retirements depend on those investments doing well. But we can only emphasize the short term for so long before the bills come due.
The problem is that it’s a house of cards. The same forces are destroying trust across society, and ultimately that destruction of trust puts the entire structure at risk. The lower trust falls, the closer we come to a catastrophic destruction of the system. Indeed, we may already be experiencing the start of that destruction, as large portions of society are being dismantled by the current administration. We may be planting the roots of a crisis that will continue to cause serious damage to our future.
“Growth for the sake of growth is the ideology of the cancer cell.” ― Edward Abbey