What happens when people uncritically use AI-generated content? – mindmatters.ai
At his Substack, AI analyst Gary Marcus notes the growing popularity of the term “botshit” to describe the result. Yes, it’s a vulgarity, but the frustration is understandable:
Defined as “the human use of untruthful LLM-generated content,” botshit was the subject of a recent research paper. From the abstract:
Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by predicting responses rather than knowing the meaning of their responses. In other words, chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as hallucinations. When humans uncritically use this untruthful content, it becomes what we call botshit. This article focuses on how to use chatbots for content generation work while mitigating the epistemic (i.e., the process of producing knowledge) risks associated with botshit.
Timothy R. Hannigan, Ian P. McCarthy, and André Spicer, “Beware of botshit: How to manage the epistemic risks of generative chatbots,” Business Horizons, Volume 67, Issue 5, 2024, Pages 471–486, ISSN 0007-6813, https://doi.org/10.1016/j.bushor.2024.03.001.
Marcus offers some examples, including:
Item 2: Six weeks ago I was railing about lawyers submitting briefs with hallucinated cases.
Then things got worse: …
Item 3. Of course it’s not just fake law. Yesterday Axios (pretty pro-AI on the whole) reported that much-venerated o3 hallucinated up a too-plausible looking blend of truth and bullshit in a financial report…
(The bots aren’t thinking when they hallucinate. They produce nonsense automatically when they don’t have facts.)
The term botshit may be derived from enshittification, which Gary Smith discussed here at Mind Matters News last year in “The Flea Market of the Internet: Breaking the Addiction”:

When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart. Internet-based businesses tend to follow a life cycle in which quality deteriorates over time. Writer Cory Doctorow calls the process “enshittification.”
The AI revolution is not turning out as boosters hoped. But it is probably turning out the only way it could.
Mind Matters features original news and analysis at the intersection of artificial and natural intelligence. Through articles and podcasts, it explores issues, challenges, and controversies relating to human and artificial intelligence from a perspective that values the unique capabilities of human beings. Mind Matters is published by the Walter Bradley Center for Natural and Artificial Intelligence.