“It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.” ― Harry G. Frankfurt, On Bullshit

It’s really cool when your blog post generates a lot of feedback. It’s like giving a talk and getting lots of questions: it is a sign that people give a fuck. No questions means no engagement and lots of no fucks given.
One friend sent me an article, “ChatGPT is Bullshit,” riffing on Harry Frankfurt’s amazing monograph, “On Bullshit.” To put it mildly, bullshit is much worse than a hallucination, even a hallucination produced by drugs. Hallucinations are innocent and morally neutral; bullshit is unethical. The paper makes the case that LLMs are bullshitting us, not offering innocent hallucinations. We should apply the same standard to computational modeling of the classical sort.
This is, of course, not a case of anthropomorphizing the LLM. The people responsible for designing LLMs want them to provide answers. Providing answers makes the users of LLMs happy. Happy users use the product; unhappy ones don’t. Bullshit is willful deception, deception with a purpose. We should be mindful of willful deception in classical modeling and simulation work. In a subtle way, the absence of due diligence, such as skipping verification and validation (V&V), treads close to the line. If V&V is done and then silenced or ignored, you have bullshit. I’ve seen a lot of bullshit in my career. So have you.
Bullshit is a pox. We need to recognize and eliminate bullshit. Bullshit is the enemy of truth. It is vastly worse than hallucinations and demands attention.
“The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.” ― Harry G. Frankfurt, On Bullshit
Hicks, Michael Townsen, James Humphries, and Joe Slater. “ChatGPT is bullshit.” Ethics and Information Technology 26, no. 2 (2024): 1-10.

Bill, I did not know what I was missing! Your blog is fantastic!
A fascinating point.
Talking to ChatGPT can sometimes feel disturbingly like talking to a grad student who may be BS-ing you. For example, it will say something unclear, and when you call it out, it will respond in a manner that, were you talking to a person, would make you wonder whether you actually misunderstood, or whether that person made a mistake and is now covering it up. But that is anthropomorphizing: the LLM is clearly doing neither. The first text was simply the output of its probabilistic model; asking what it “really meant” is to misunderstand how it works.
It’s worth reminding ourselves of this continually when interacting with LLMs.
The issue is how much of the BS is actually the human’s influence on the LLM. The reinforcement learning used to put guardrails on LLMs is human-imposed. Without these guardrails the LLM would be more capable and creative, but it would also spit out racist, sexist, politically incorrect outputs. These are pushed out along with most of the humor. I am fairly sure the process also encodes behavior that pleases users, like always answering even without good knowledge. This is a route to BS.