The Dangers of GenAI and AI Illiteracy
People are ascribing to these tools a level of utility that compares them to the Almighty. They aren't that, and they aren't going to be anytime soon. But the need for $$$ keeps leaders from clearing that up.
A slew of books is hitting the market taking a critical (but not Luddite) view of the current raft of AI tools. That has led to plenty of reporting on these trends, and Tyler Austin Harper has a solid article in The Atlantic titled “What Happens When People Don’t Understand How AI Works” (gifted link).
Tyler begins with an analogy that dates back a bit. And a bit here means the latter half of the 19th century:
On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed “Cellarius,” it warned of an encroaching “mechanical kingdom” that would soon bring humanity to its yoke. “The machines are gaining ground upon us,” the author ranted, distressed by the breakneck pace of industrialization and technological development. “Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.” We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.
I pulled that in its entirety because it sets the table very well. The current wave of technology that is causing elation, chaos, and despair is in no way unprecedented.
Tyler then goes into a couple of recent books that are not the usual ball-fondling of the founders of these AI companies. The first is “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” by Karen Hao. Karen has been doing the podcast and media tour, and I first heard her on Better Offline, Ed Zitron’s podcast. Her appearance is the May 13, 2025 episode, and I highly recommend it.
In case you think that Karen is just a hater, this is worth reading:
To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.
The founders, who know that their trajectory relies on hoovering up prodigious amounts of funding to build out a boatload of energy- and water-hungry data centers [1], need to keep hyping the benefits of GenAI [2] as a revolution.
The latest tack is to say that AI is becoming aware, as demonstrated by an ability to emotionally connect with users.
Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.”
Here I need to remind people that the current wave of generative AI is not thinking. It doesn’t “know” anything. It doesn’t understand anything.
It is merely a database of, well, nearly everything ever written, drawn, or put on celluloid, held in a vector database, plus a tuned algorithm that uses statistics and probabilities (built by very smart mathematics PhDs) to predict what comes next.
In short, it guesses what would come next, and spits that out.
Sure, some recent models do recursive revision in the background that the AI boosters like to boast is akin to “thinking,” but it isn’t. It is more a refinement and error-correction pass.
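To make “guessing what comes next” concrete, here is a minimal toy sketch. This is entirely my own illustration; the corpus and function names are made up, and real LLMs use learned neural networks over trillions of tokens rather than a lookup table. But the loop is the same in spirit: count which word follows which, then sample a statistically likely successor.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": tally which word follows which in a
# tiny corpus, then generate text by repeatedly sampling a likely
# next word. There is no understanding anywhere in this loop, only
# counting and weighted dice rolls.
corpus = "the cat sat on the mat and the cat ate the cream".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # dead end: no observed successor
        words, counts = zip(*options.items())
        # More frequent successors are proportionally more likely
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```

Scale that idea up by many orders of magnitude and swap the count table for a trained neural network, and you have the essence of what these models do: a weighted dice roll over what tended to come next in the training data.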
Tyler continues more eloquently than my clumsy explanation above:
Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions. (emphasis mine)
This is the truth-bomb. The term AI illiterate is beyond apt.
And that gets me to another serious what-the-actual-fuck moment, namely the statements by that waste of human skin, Mark Zuckerberg, who claimed that people currently have something like three friends, but that to be healthy they need fifteen. Facebook has spent the last 17 years or so “connecting” the world to build networks of Friends, yet clearly that failed; people are lonelier than ever. And the answer?
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, “In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.” The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.
That’s right, Meta will build AI agents to be friends for people [3].
Beyond that insanity, or in spite of it, people are already turning to ChatGPT to be friends, girlfriends, spiritual advisors, and even replacements for therapists. Ryan Broderick wrote about this in his newsletter, Garbage Day:
Back in February, I started using ChatGPT for therapy. I had interviewed a handful of therapists in 2023 for a story and the consensus from most I spoke to was, “if you don’t have access to a human therapist, it’s better than nothing.” I figured I fit the criteria there, I had found myself in that very American predicament of being between health insurance plans and really needed to talk some stuff out with someone — or something, I guess.
Before you panic, I am now sorting through ChatGPT’s advice with a human therapist, which has been interesting in its own right. (It was mostly fine, but pretty shallow.) I’ll also try and spare you the extremely mortifying details about what I spent a few weeks talking to ChatGPT about, but my experience with Dr. ChatGPT did teach me a few things about what it’s actually “good” at. It also convinced me that AI therapy — and maybe AI in general — is quite possibly one of the most dangerous things to ever exist and needs to be outlawed completely. But we’ll get there in a sec.
This is flawed, as Ryan notes:
ChatGPT’s default is to agree with you. And in late April, it got even more aggressively agreeable. OpenAI’s Sam Altman admitted that the newest ChatGPT update, GPT-4o, “glazes too much,” going on to write on X, “The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes ASAP.”
It seems like most users only noticed ChatGPT’s “glazing” when the volume got turned up so high that it was impossible to ignore, but it’s been a problem for a while. I had, thankfully, stopped using it by the time the new update went out, but during my sessions with it in March, it would breathlessly cheer me on and even when I pushed it to give me critical feedback it still would only do so briefly and then immediately revert back to being a cheery idiot. I’d like to imagine that I was able to keep a level head while I was talking to it, but at a certain level of vulnerability no one is safe from the cognitohazard of the delusion machine. Which is exactly what is happening to many users right now.
Yikes. One can see how an AI-illiterate person would fail to cotton on to the bias, falling into a self-reinforcing feedback loop that could end in drastic, even tragic, results.
Furthermore, this can escalate into delusions that ChatGPT is god-like, even a savior.
Where does this lead?
You might be tempted to think that I am dead set against this GenAI wave of hype, but that isn’t true. I tend to be realistic: like using a calculator to compute the standard deviation of a population of numbers, generative AI is a tool. Sometimes it is a very good tool (I used it last week to draft a job description for a new product manager, and it was brilliant; it took about 10 minutes to polish its second try into something awesome), and sometimes it is like using a hammer to adjust the valves on a 1979 Honda XL500.
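As a point of contrast, the calculator half of that analogy is a fully deterministic job. Here is a quick sketch using Python’s built-in statistics module (the numbers are just example data of my own):

```python
from statistics import mean, pstdev

# Population standard deviation: a deterministic calculation that a
# plain tool gets exactly right every time, with no guessing involved.
data = [4, 8, 15, 16, 23, 42]

mu = mean(data)              # arithmetic mean of the population
sigma = pstdev(data, mu=mu)  # population standard deviation

print(f"mean = {mu:.2f}, std dev = {sigma:.2f}")
```

That is the standard against which “sometimes a very good tool” should be judged: the right tool for a given job, not an oracle.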
I am smart enough to know when it is good, and when it is less good or inappropriate.
But many people, including some really high-up executives, are far less critical. Hearing them say things like “our goal is to have 70% of our codebase written with AI tools” makes me question the wisdom of leadership.
But one thing I can 100% assure you is that replacing your weekly therapy session with a dialog with ChatGPT is a bad idea.
[1] In 2024, a reported $294 BILLION was spent chasing the GenAI dragon. OpenAI spends $2.35 to generate $1 of revenue (you don’t need an MBA or a PhD in Econ to know that is bad).
[2] When I mention AI, GenAI, or anything around these technologies, I am referring to the chatbots, image generators, and video generators as a class of LLMs. It is not AGI, or artificial general intelligence.
Now bwain hurtz.
Seriously, I am AI illiterate to be sure, but my instincts are good. One thing I've always maintained is that none of these aids can actually think. They can guess very quickly and mimic human-like behaviors that could prompt folks to unknowingly engage in anthropomorphism and assume they are communicating with an actual intelligence.
The idea that some folks would be unable to tell is quite frightening indeed. I've already been reading about AI tools replacing interns and lower-level white-collar positions, and how CEOs love the effect on their bottom line.
Where do you grow the next set of company leaders if AI tools are doing the very activities that provide the experience and knowledge those future leaders would need to be effective, and the door to that experience is closed by the very company they work for?
Is profit so important that one would knowingly hollow out their capacity for future prosperity in favor of more profit now? Does that even make sense? Brings to mind that old story about the goose and the golden eggs, doesn't it?
Geoff, you have once again provided much food for thought, and as a layman I thank you for it!
Psychology is partly responsible for this as well. In the process of forging the social science from philosophical questions and biological methods, assumptions and metaphors that were meant as placeholders, until better ideas and/or empirical results showed otherwise, became codified as fundamental tenets of the field. As such, it has been rare for them to be questioned, because doing so would challenge and possibly invalidate decades of research findings, and threaten many active labs.
Two interrelated ideas that lead directly to the "brain as computer" analogy are mechanism and reductionism; both were championed by Descartes.