On AI, and its implications
How it is being pushed, and how it is failing to deliver on its promises. AI is the mid tech, already at its limit, but will that reality ever penetrate corporate expectations?
I clipped this meme a few months ago and dropped it here to seed this post, but then I left it idle for quite some time.
Before I get into the “meat” of this post, I need to come clean on a few things.
The Setup
First, I work in adult education, particularly in Information Technology skills training and certification. I won’t say who I work for (if you poke around, it is not too difficult to ascertain), but I will say that we were an early mover, and we are considered a bellwether in the field of IT certifications.
AI, or Artificial Intelligence, is potentially disruptive to my world. The concept of a miracle machine that could replace human knowledge is something that may completely destroy it.
You might think that I would be horrified by that potential disruption, but I am not. Throughout my career I have been a lifelong learner, and I tend to eagerly seek out new knowledge and technologies. While I am not as much of an early adopter as I once was, my relationship with the latest in Generative AI (GenAI), a.k.a. Large Language Models (LLMs), is a bit fraught.
My company is large enough that we have “private” instances of OpenAI’s ChatGPT (and Google’s Gemini) where our queries are not saved or used to augment their training. We have also fed in our own troves of data via Retrieval Augmented Generation, or RAG, so it can be cool to ask about our technology, and it can give amazing-seeming results.
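For anyone unfamiliar with the pattern, here is a minimal sketch of what a RAG round trip looks like, assuming the standard OpenAI Python SDK. The toy keyword retriever, the sample documents, and the model name are hypothetical stand-ins; real deployments use embedding search over a vector store.

```python
# Minimal RAG sketch (illustrative only): retrieve relevant internal
# docs, stuff them into the prompt, and let the model answer from them.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical internal documents standing in for a real corpus.
INTERNAL_DOCS = [
    "Cert X-101 covers network fundamentals and is valid for 3 years.",
    "Cert X-201 renewal requires 30 continuing-education credits.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Score docs by naive keyword overlap and return the top k."""
    words = set(query.lower().split())
    scored = sorted(INTERNAL_DOCS,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(query: str) -> str:
    # Paste the retrieved context into the prompt, then ask the model.
    context = "\n".join(retrieve(query))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(rag_answer("How long is Cert X-101 valid?"))
```

The point being: the model only looks smart about our technology because the relevant internal documents get pasted into the prompt before it ever answers.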
Second, I am unsure about the overall technology. I will acknowledge that it can give pretty impressive answers to queries. The answers appear accurate, but if you dig deeper, you quickly begin to find issues.
My go-to test is to ask the models to explain Quantum Mechanics to me in three guises: as a high school student, as a university student, and as a university student who is studying physics. I ask this because I studied physics in university, and over my undergrad and grad school years I took three courses on Quantum Mechanics, one undergraduate and two semesters at the graduate level. Thus, I have more than a passing understanding of the subject.
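If you want to run the same probe yourself, a minimal sketch is below, again assuming the OpenAI Python SDK; the model name and the exact prompt wording are just approximations of what I actually type.

```python
# Rough sketch of the three-guise Quantum Mechanics test.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

AUDIENCES = [
    "a high school student",
    "a university student",
    "a university student studying physics",
]

for audience in AUDIENCES:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; swap in any chat model
        messages=[{
            "role": "user",
            "content": f"Explain quantum mechanics to me as if I were {audience}.",
        }],
    )
    print(f"--- {audience} ---")
    print(response.choices[0].message.content)
```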
Regardless of the model, I get similar results. All the models do fine with the high-school explanation. The trouble begins at the university level: the models get close on the general university student explanation, but when you ask them to explain the subject to a student who is studying physics, they really bobble.
Yet if you don’t know the subject, it sounds great. It is just wrong enough that, if you can’t tell the difference, you can be completely fooled.
And to me, this is the fatal flaw. The reality is that these LLMs do not know anything. They have a lot of data at their metaphorical fingertips, but they have no knowledge of — well — anything.
They are miracles of mathematical models, with impressive statistical algorithms that are pretty good at guessing what would answer the query (or “prompt”), but at the end of the day, they do not “think”, nor do they “know” anything.
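To make that concrete, here is a toy next-word predictor. It is a deliberately crude caricature (real LLMs use transformer networks over billions of parameters), but the core task is the same: statistically guess a plausible continuation, with no model of truth underneath.

```python
# Toy next-word predictor: count which word follows which in a tiny
# corpus, then "generate" by always picking the most frequent follower.
# No understanding, no knowledge -- just frequency statistics.
from collections import Counter, defaultdict

corpus = ("the electron is a particle the electron is a wave "
          "the photon is a particle").split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # most likely next word
    return " ".join(out)

# Prints "the electron is a particle the electron": plausible-sounding,
# but the "model" has no idea what an electron is.
print(generate("the"))
```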
Keep that in mind as I dive into the rest of this post.
What is GenAI good for anyways?
A couple of weeks ago, The NY Times ran an OpEd by Tressie McMillan Cottom titled “The Tech Fantasy That Powers A.I. Is Running on Fumes” (gifted link). It is a worthy read, not the usual hype or doomer take, but instead a look at how AI applies to real-world uses.
Have you heard the term “Mid TV”? That is the slop that dominates the streaming services: mediocre, good background noise, mindless drivel that you sorta feel guilty watching.
Yeah, she went there:
Behold the decade of mid tech!
That is what I want to say every time someone asks me, “What about A.I.?” with the breathless anticipation of a boy who thinks this is the summer he finally gets to touch a boob. I’m far from a Luddite. It is precisely because I use new technology that I know mid when I see it.
I am a huge fan of Ed Zitron1, and he talks about this a lot, but he never quite made the “Mid Tech” observation.
That said, why does she say this?
First, she’s an academic. Yeah, Library Science is a science, and if there were a field in academia that AI should be awesome for, it would be this one. Tressie goes on to posit:
Academics are rarely good stand-ins for typical workers. But the mid technology revolution is an exception. It has come for us first. Some of it has even come from us, genuinely exciting academic inventions and research science that could positively contribute to society. But what we’ve already seen in academia is that the use cases for artificial intelligence across every domain of work and life have started to get silly really fast. Most of us aren’t using A.I. to save lives faster and better. We are using A.I. to make mediocre improvements, such as emailing more. Even the most enthusiastic papers about A.I.’s power to augment white-collar work have struggled to come up with something more exciting than “A brief that once took two days to write will now take two hours!”
And as I mentioned in the opening, my experimentation with AI (chatbots in particular) shows that they can be useful, but they can’t replace expertise.
This tracks. There is a predictable trajectory to AI hype in a field. First comes the initial foray, with some impressive-looking results, which fuels a wave of speculation and experimentation. Then the warts become unavoidable, and finally AI settles in as just another tool in the box, something that does get used but is never the savior the industry luminaries predicted2. Indeed, Tressie continues:
A.I. is one of many technologies that promise transformation through iteration rather than disruption. Consumer automation once promised seamless checkout experiences that empowered customers to bag our own groceries. It turns out that checkout automation is pretty mid — cashiers are still better at managing points of sale. A.I.-based facial recognition similarly promised a smoother, faster way to verify who you are at places like the airport. But the T.S.A.’s adoption of the technology (complete with unresolved privacy concerns) hasn’t particularly revolutionized the airport experience or made security screening lines shorter. I’ll just say, it all feels pretty mid to me.
Mid indeed!
One thing I am constantly bombarded with in my day-to-day (especially from my leadership chain) is the claim that AI is going to democratize knowledge, and that if you fail to hop on the bandwagon, you will be left behind.
Hell, if you listen to senior executives at any large company, you can’t help but hear their belief that GenAI is going to let them run their businesses with far fewer people, eliminating the need for a lot of staff.
The reason they believe this is largely their own experience. Senior leaders (C-suite executives) primarily build airy stories as presentations, closer to TED talks than to substantive output, and ChatGPT is awesome for these high-level decks that tell a story.
In my world, though, I need facts, I need accuracy, and I need to be able to defend my reasoning and thought processes. So far, AI is not there for me. Now, I do use it for some things. If we need to build a training for some technical skill, I can usually craft one or two prompts to generate an outline, but 100% of the time that is just a starting point. Sure, it saves a handful of hours; it can slice a day or two off of a two-week process. Not nuthin’, but not enough to replace me or my skills.
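For what it’s worth, a typical outline prompt from me looks something like this sketch; the course topic, duration, and constraints here are hypothetical stand-ins, not an actual course we build:

```
Act as a senior curriculum developer. Draft a modular outline for a
three-day instructor-led course on BGP fundamentals for network
engineers. For each module, list the learning objectives, one
hands-on lab idea, and two assessment questions. The audience is
mid-career IT professionals preparing for a certification exam.
```

Even a decent draft from a prompt like that still needs an expert pass to fix the sequencing and correct what it gets wrong before it is usable.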
But the executives? They are rubbing their hands at the prospect of replacing a lot of mid-tier employees with AI, keeping a few entry-level people to generate tons of content, plus a much smaller number of “experts” who can tune up this AI-generated slop.
If you’re paying attention, you can see the problem: without a pipeline of talent from junior to mid-career people, soon you will not have any experts.
Tressie gets to the crux of the matter in this ‘graph:
Of course, A.I., if applied properly, can save lives. It has been useful for producing medical protocols and spotting patterns in radiology scans. But crucially, that kind of A.I. requires people who know how to use it. Speeding up interpretations of radiology scans helps only people who have a medical doctor who can act on them. More efficient analysis of experimental data increases productivity for experts who know how to use the A.I. analysis and, more important, how to verify its quality. A.I.’s most revolutionary potential is helping experts apply their expertise better and faster. But for that to work, there has to be experts. (emphasis mine)
Yes, AI has uses, but replacing expertise is not one of them. You still need experts and expertise, yet that is in tension with the executive desire to cut expensive headcount and replace it with cold machines.
Some luminaries, like Mark Cuban, are also all in:
That is the big danger of hyping mid tech. Hype isn’t held to account for being accurate, only for being compelling. Mark Cuban exemplified this in a recent post on the social media platform Bluesky. He imagined an A.I.-enabled world where a worker with “zero education” uses A.I. and a skilled worker doesn’t. The worker who gets on the A.I. train learns to ask the right questions and the numbskull of a skilled worker does not. The former will often be, in Cuban’s analysis, the more productive employee.
The problem is that asking the right questions requires the opposite of having zero education. You can’t just learn how to craft a prompt for an A.I. chatbot without first having the experience, exposure and, yes, education to know what the heck you are doing. The reality — and the science — is clear that learning is a messy, nonlinear human development process that resists efficiency. A.I. cannot replace it. (emphasis mine)
This is the payload, and the second bold passage is the truth of it. Looking at the progression of technology in education, every decade or so some new miracle tech promises to disrupt education, only to fizzle out and become just another part of it, never replacing the “messy processes” of building expertise. She continues:
I have seen this sort of technological Catch-22 in higher education before. Academia is a major institutional client for technology solutions. Schools helped Zoom beat Skype during the Covid-19 pivot to remote learning. Once upon a time, schools also helped the flagging Apple shore up its bottom line while it found a consumer market for its devices. All of the technology revolutions that are coming for America’s workplace have usually come earlier through mine.
Despite our reputation, most of the academics I know welcome anything that helps us do our jobs. We initially welcomed A.I. with open arms. Then the technology seemed to create more problems than it solved. The big one for us was cheating.
Bingo.
Furthermore, the current crop of GenAI LLMs appears to have hit a scaling wall. More training data (and the labs are running out of clean content to sweep in), more compute and GPU capacity (gigawatts of data centers), but the models aren’t getting any “smarter”. Already, OpenAI is reportedly spending about $2.35 to make $1.00 in revenue (that is, losing roughly $1.35 on every dollar it brings in), and competition from DeepSeek and others is causing a price war, threatening potential profitability even more.
But, as Tressie says:
Mid tech revolutions have another thing in common: They justify employing fewer people and ask those left behind to do more with less.
Yup, that is the goal of corporate executives. Already it is becoming more difficult to hire people with experience and expertise; you now have to justify why you can’t just use AI instead.
Alas, it is likely to bite them in the ass.
I will leave you with a final pull-quote that wraps it all up:
A.I. may be a mid technology with limited use cases to justify its financial and environmental costs. But it is a stellar tool for demoralizing workers who can, in the blink of a digital eye, be categorized as waste. Whatever A.I. has the potential to become, in this political environment it is most powerful when it is aimed at demoralizing workers.
This sort of mid tech would, in a perfect world, go the way of classroom TVs and MOOCs. It would find its niche, mildly reshape the way white-collar workers work and Americans would mostly forget about its promise to transform our lives.
But we now live in a world where political might makes right. DOGE’s monthslong infomercial for A.I. reveals the difference that power can make to a mid technology. It does not have to be transformative to change how we live and work. In the wrong hands, mid tech is an antilabor hammer.
Amen Tressie, amen.
Coda
My company sells networking gear that the AI revolution uses to feed its voracious appetite for compute and GPU resources. As far as I can tell, the hyperscalers (Amazon, Microsoft, Google) spent about $294B last year (2024) building out data centers to support the generative AI revolution.
That is a LOT of cheddar. This year, it is expected to be significantly higher, all to chase diminishing returns from the technology. Perhaps one day true anthropomorphic AI, a “thinking” machine, will emerge. But one thing I can say is that the current LLM and diffusion-driven models will not be the breakthrough to true intelligence.
OpenAI predicts they will get there, and that in the not-too-distant future they will be able to sell a PhD-level “Agentic AI” for $20K a month, which works out to $240K a year. A decade ago, I was hiring application scientists, PhDs in Chemistry and Biology, and I can assure you we paid much less than $240K a year for them.
Their economics do not compute, but investors keep raining cash on these firms. That roller coaster will end, maybe not tomorrow or next year, but the signs are already there that this path is nearing its end.
Do you have any experiences to share? Drop them in the comments!
Oh, and that meme at the top of the post? Perfection.
You should check him out at https://wheresyoured.at
Remember when Sam Altman said that in a couple of years, ChatGPT was going to solve physics? Yeah, the current incarnation of LLMs is never going to do that.