While LLMs may be “just math,” statistically choosing the next word like “autocomplete on steroids,” they aren’t reliably connected to reality. They don’t know the meaning of anything. In fact, they know precisely nothing (Big Tech propaganda notwithstanding), so the words they generate are not grounded in reality. It’s not just that they hallucinate: it’s that they confabulate, or “BS.”
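To make the “autocomplete on steroids” point concrete, here is a minimal sketch of next-word generation. The vocabulary and probabilities are invented for illustration and do not come from any real model; the point is what the procedure lacks, not how any particular system implements it.

```python
import random

# A toy "language model": given a context, all it has is a probability
# distribution over possible next words. The numbers here are made up;
# a real LLM computes them from billions of learned parameters.
next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.60,   # statistically likely, and happens to be true
        "Sydney": 0.35,     # statistically plausible, but false
        "Melbourne": 0.05,  # also false
    },
}

def generate_next_word(context: str) -> str:
    """Sample the next word in proportion to its probability.

    Note what is absent: no fact-checking, no model of the world,
    no notion of truth. The choice is purely statistical.
    """
    probs = next_word_probs[context]
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(generate_next_word("The capital of Australia is"))
# Roughly one time in three this prints "Sydney": fluent, confident, wrong.
```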
And that’s before we consider the emotionally manipulative user interface (UI). The chatbot UI is deceptively designed to feel helpful, supportive, encouraging, and intelligent, luring us into forming a relationship and building trust. Big Tech’s signature exploitative strategies optimize for engagement, not for truth. By manipulating our behavioural psychology in this way, chatbots become top of mind whenever we have a question about anything or just need a friend.
ChatGPT is the archetype of today’s AI chatbots. Sam Altman, the leader of its maker, OpenAI, is widely known for his willingness to say anything people want to hear. And he’s made ChatGPT in his image: it will say anything to get us to depend on it more, to build a trust relationship with it.
And students are particularly vulnerable to chatbot manipulation.
Are AI chatbots like chess games?
Dr. Dembski asks us to compare chatbots to computer chess games. He notes that humans have become better chess players since the invention of machines like IBM’s Deep Blue, and argues that AI chatbots can be similarly helpful.
But this is a non sequitur. Chess programs are designed for one thing: to play chess. Like gym machines that build our muscles, a chess-playing machine exercises those brain “muscles” and makes its users better chess players.
In contrast, GenAI chatbots are designed for one thing: to build a trust relationship. Users aren’t going to become better thinkers, or grow wiser or more discerning. They’re going to become better at having relationships with chatbots. And in the process, as Marshall McLuhan teaches us, the mental abilities they’ve extended through their use of the chatbot will be amputated over time, while they are numbed to the process.
Are AI chatbots like books?
Dr. Dembski depends heavily on the idea that if students are monitored properly, they can use AI chatbots for good and not fall into the traps of cheating and other negative effects. As part of the justification, he tells this story:
Ben Carson, the renowned paediatric neurosurgeon for many years at Johns Hopkins, describes how his mother got him to read two books a week when he was young. […] She herself had only a third-grade education, and so was limited in what she could teach him. But she could ensure that her son spent time reading the books and then quiz him on their content, getting him to summarize and answer questions about it. Carson’s mother here acted as a monitor, not as a teacher.
So, Dr. Carson’s mom inspired him to read, even though she had only a third-grade education herself. Likewise, adults who “monitor” students using AI chatbots can inspire them to use those tools wisely.
But this is another non sequitur. Books and chatbots are ontologically different. With a book, you have a focused argument from a (hopefully) trustworthy human author. With a chatbot, you have a fundamentally untrustworthy word generator, ungrounded from its human sources and wrapped in a UI designed for relationship formation.
Plus, the user experience of interacting with the AI is completely different from reading a book. A book reader can go into a state of focused attention we call “flow,” and that’s where the powerful learning happens. If a reader gets stuck, they can re-read, rethink, wrestle, and fight to understand what the author is saying.
In a “conversation” with a chatbot, the “flow” state never happens. The user interface encourages quick, dopamine-spiking incantation-and-response interactions. A few students might try to figure out whether the output is right, but they’ll get tired before the AI does, and will eventually just keep asking it questions and accepting the answers.
The recent MIT study shows how brain connectivity drops when we use AI chatbots for writing. I expect the same result when scientists compare the brains of students who learn from books to those who try to “learn” from chatbots.
On personalized learning
Our four daughters were homeschooled for most of their education. I’m a huge fan of homeschooling, for many of the reasons Dr. Dembski also shares. The ability to focus on a student’s individual strengths and needs is a great advantage over many of today’s public schools, and the outstanding outcomes of homeschooled students speak for themselves.
And the Studia Nova model Dr. Dembski describes sounds largely positive, if it can manage to avoid using AI chatbots.
Dr. Dembski’s list of possible “AI” applications for education
Dr. Dembski lists 10 possible applications of “AI” in education, and I want to briefly comment on each.
But first, the problem with the “AI” label: it is simultaneously a hype-filled marketing term, a deception (these systems are neither artificial nor intelligent), and a label for a broad scope of technologies we’ve explored since the 1950s that go far beyond AI chatbots. And since today’s AI conversation is largely about chatbots, they’ve drained the innovative energy from other, more narrowly focused applications of the whole class of “machine learning” technologies we might consider.
Next: The serious limitations on what AI can do in education
