Dr. Dembski made a list of possible contributions, and offering some comments on each will clarify what I am trying to say:
“Accent and Pronunciation Refinement in Language Learning”
A focused speech-recognition “AI” (not a chatbot) could help students learn languages more efficiently. That tech has been around for a while. A narrowly scoped, ethically trained product could be useful here, but it’s no replacement for human conversation.
“Creative Writing with Rhetorical Precision”
This is a dangerous proposal. Writing is thinking. AI can’t be trusted to generate coherent content, nor to have a voice that is something students should emulate. Do we really want everyone homogenized into writing like a chatbot? No, the sacred work of choosing words must have authoritative human guides. Improve the curriculum, have better books, and invite humans to teach via video. But don’t use chatbots for creative writing.
“Polyphonic Music Composition and Performance”
I see this as risky. Again, if the AI is not a general purpose chatbot, and perhaps specifically trained on good music, it could potentially be helpful. But today’s AI chatbot-driven song generators are not creating well trained musicians. They’re creating more passive consumers — by design. Chatbot users aren’t interested in learning to “think contrapuntally.” They want quick and easy dopamine hits, which is what chatbots are designed to provide.
“Advanced Sight-Reading and Aural Skills for Musicians”
I give this a maybe. Again, if it’s a very narrowly trained product, not designed to form trusted relationships, and designed to be very accurate, it could be helpful. But as a musician myself, I know that nothing replaces the practice and tutoring guided by a trusted human mentor.
“Scientific Experiment Simulation and Inquiry-Based Discovery”
AI-driven virtual reality environments for scientific inquiry? Risky. I’m reminded of Spock in the recent remake of Star Trek being trained this way. But today’s virtual reality environments are almost all tainted by Big Tech’s perverse incentives. We are changed by our use of VR into different kinds of people. Neil Postman would warn us about Amusing Ourselves to Death.
“Mastery of Mathematical Intuition and Visualization”
With the use of VR, this is so similar to #5 that it shares the same risks.
“Fine Motor and Artistic Skill Development via Gesture Feedback”
Fine motor/artistic skill development? Risky. Yes, some technical art skills might be improved. But artistry is a distinctly human gift. We’re not machines, so our artistry shouldn’t be constrained or discipled by machines. Sure, drills in technical skills might help some artists. But we risk stifling creativity and encouraging robot-like behaviour. I think Dr. Esther Meek’s Doorway to Artistry would push back strongly against this.
“Debate and Argumentation Coaching”
This is also super risky. Dr. Dembski believes that a chatbot’s suggestions for stronger evidence and its identification of fallacies could be trustworthy. But debaters can’t trust the chatbot’s answers, so they can be led astray. I recommend against this one.
“Emotional Intelligence and Empathy Training through Simulated Dialogue”
Extremely risky. I’m not outsourcing emotional intelligence to a chatbot. Students have arguably lost social skills because of their immersion in technology already. Chatbots aren’t going to help there. I think we should run quickly away from this idea.
“Advanced Memory and Visualization Techniques”
Maybe, but again, Spock’s VR-based school worked for an emotionless Vulcan in science fiction. We learn differently. Entertaining “AI tutors” would change us into people who depend on those tutors, and who become like them (Psalm 115:8). And the MIT study cited above already pushes back against the idea that memory could be improved by the use of chatbots. The opposite seems to be the case.
A scary surveillance idea
Dr. Dembski’s last idea may be the most terrifying:
The Oura Ring tracks sleep, activity, heart rate, temperature, etc. through advanced biosensors, offering detailed insights into recovery and overall well being. … A suitable low-cost unobtrusive device, however, could monitor brain states favorable and unfavorable to learning. Such a device could enable real-time adaptation of instruction, such as adjusting pacing or content to improve focus and retention.
Have we become so accustomed to surveillance capitalism that we are completely blinded to the implications? Or has Big Tech become so trustworthy that we can trust them to handle the data of student brain states with benevolent intent?
This suggestion mirrors the ultra-authoritarian Chinese state’s use of AI monitoring in schools, where students are already wearing AI-powered headbands that constantly monitor their brain activity. That data will almost certainly be part of their “social credit score,” among other nefarious things. The video in that Wall Street Journal article is not a dystopian sci-fi story; it’s today’s reality. Those are real kids wearing those devices and being shaped, controlled, and dehumanized.
This is not the educational answer we’re looking for. Let’s not turn our kids’ brainwaves and futures over to the most powerful corporations in the world.
Conclusion
I’ve been a software engineer for over 30 years. I love finding beneficial uses of technology. But I often see people using tech like a hammer looking for nails: everything must have a technical answer. Ellul wasn’t a fan of that, nor am I. Technology isn’t always the answer, and it often adds many harmful, unintended consequences.
Like Dr. Dembski, I want us to push against Sam Altman’s desire for us to merge with AI. But Dr. Dembski seems to trust Altman’s chatbot more than it deserves, and is open to more merging than I am. Dr. Dembski says his view is “humanistic” as opposed to “transhumanistic”:
The humanistic vision is natural, like promoting health through good diet, exercise, and proper rest. The other is artificial, like relying on pharmaceuticals to achieve wellness.
But to me, Dr. Dembski’s “humanistic vision” seems biased towards the unnatural, especially in its embrace of today’s AI chatbots. The most natural way we learn is by human mentoring. Every mediating technology (“the medium is the message,” as McLuhan taught us) changes us in ways we cannot see now.
AI chatbots are far too new, too hyped, and already too exploitative in their design and deceptive in their claims to trust the next generation to them.
Since it is still the “one chief project of that old deluder, Satan, to keep men from the knowledge of the Scriptures,” let’s not expose our students to an intentionally deceptive technology. Let’s take a page from our 17th-century forebears and encourage truth-seeking from reliable, truthful, human sources of knowledge and wisdom instead.
Gratitude
It was gracious of Dr. Dembski to invite me to share this critique. He practices what he preaches and wants to follow the truth wherever it leads, so he invites pushback. May I do the same? I look forward to our ongoing conversation.
