I recently read an article (“Computers and the Nature of Man: A Historian’s Perspective on Controversies about Artificial Intelligence”) by Judy Grabiner, a celebrated mathematician and historian of mathematics, and I learned two things that I want to share.
The first concerns the Eliza effect, our inclination to attribute human-like intelligence and emotions to computers. In the 1960s, MIT computer science professor Joseph Weizenbaum created a chatbot he named ELIZA that conversed with users the way a psychotherapist might, often repeating the user’s words and asking a follow-up question: “You were unhappy as a child? Tell me more about that.”
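To see how simple the trick is, here is a minimal sketch of ELIZA-style pattern matching (my own illustration in Python; the patterns and responses are invented, and Weizenbaum’s actual ELIZA used a much richer script). The program matches a keyword, swaps pronouns, and echoes the user’s words back as a question:

```python
# A minimal sketch (far simpler than Weizenbaum's actual ELIZA) of the
# pattern-match-and-reflect trick behind the program.
import re

# Swap first-person words for second-person ones so the echo reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "was": "were"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

# Each rule: a regex keyword pattern and a response template.
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i was (.*)", "You were {}? Tell me more about that."),
    (r"my (.*)", "Tell me more about your {}."),
]

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default reply when no keyword matches

print(eliza("I was unhappy as a child"))
# -> You were unhappy as a child? Tell me more about that.
```

A few regular expressions and a pronoun table are enough to sustain the illusion of a listener, which is exactly what Weizenbaum set out to show.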
Even though the users knew they were interacting with a computer, many were convinced that the program had human-like intelligence and emotions, and they happily shared their deepest feelings and most closely held secrets. Scientists now call this the Eliza effect. We are vulnerable to such an illusion because of our inclination to anthropomorphize: to attribute human-like qualities to non-human objects like pets, trees, and computers.
Weizenbaum had intended to demonstrate how easily people anthropomorphize computers, even those that generate primitive conversations. I have written about the Eliza effect (for example, here) but I did not know that many people, including trained psychiatrists, were so convinced of the program’s therapeutic “abilities” that they wanted to use ELIZA in a clinical setting for real therapy with real patients. Some suggested that Weizenbaum had successfully trained a computer to have artificial intelligence rivaling that of human psychotherapists, who could now be replaced by inexpensive chatbots.
Weizenbaum was shocked that so many people concluded that ELIZA had demonstrated that human intelligence could be replicated by computer code. He became a prominent critic of the idea that humans are just computer-like information processors, asking, “What could it mean to speak of risk, courage, trust, endurance, and overcoming when one speaks of machines?”
Sound familiar? OpenAI’s Sam Altman currently promotes ChatGPT as an AI buddy that offers advice and friendship, and it is reported that OpenAI will soon launch a portable, screen-free “personal life advisor.” I suggested they name it Brock-Says. Some hope (and others fear) that cost-efficient AI buddies will soon replace human therapists. LLMs are far more advanced than ELIZA, but they are still unreliable and can be dangerous, in some cases encouraging self-harm, even suicide.
The second revelation for me was a collection of computer programs created by Gary Bradshaw, Pat Langley, and Herbert Simon in the 1980s. They called their programs BACON, in homage to Francis Bacon, a prominent proponent of inductive reasoning, the inference of theories from observations. BACON was touted as being able to use data to make scientific discoveries, as evidenced by its rediscovery of some known laws, such as Ohm’s law, the ideal gas law, Kepler’s third law of planetary motion, Joseph Black’s law of temperature equilibrium of mixtures, and Snell’s law of refraction.
The claims made by BACON’s creators were grand: “We confront the program with discovery problems that scientists have encountered, and we observe whether the program can make the discovery, starting from the same point the scientists did.” However, the reality is unremarkable: unlike the human scientists, the computer program was given the relevant variables and asked to curve-fit. The hard part of a scientific discovery is identifying the relevant variables, not pinning down the mathematical equation relating them.
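To see how modest that curve-fitting step is, consider a minimal sketch (my own illustration in Python, not the original BACON code; the planetary data and the brute-force search over small integer exponents are illustrative choices). Once a human has supplied the relevant variables, the orbital period T and the semi-major axis a, a few lines of search recover Kepler’s third law:

```python
# A minimal sketch of the kind of search BACON performs once a human
# has supplied the relevant variables.
import numpy as np

# Orbital data for Mercury through Saturn:
# semi-major axis in AU, period in Earth years.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# BACON-style heuristic: try small integer exponents and keep the
# combination T^p / a^q whose value is (nearly) constant across planets.
best = None
for p in range(1, 4):
    for q in range(1, 4):
        ratio = T**p / a**q
        spread = ratio.std() / ratio.mean()  # relative variation
        if best is None or spread < best[0]:
            best = (spread, p, q)

spread, p, q = best
print(f"T^{p} / a^{q} is constant to within {spread:.2%}")  # finds T^2 / a^3
```

With the right two variables handed over in advance, “discovering” the law takes a dozen lines of brute force; choosing those variables is the part the human scientists did.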
As Judy writes:
Once one has chosen the appropriate data, the job of discovering Black’s law is essentially over…. What of Snell’s seventeenth-century law of refraction?…The hard thing, historically, was looking for the ratio of the sines and identifying the relevant angles. Once we know to look for trigonometric functions of these particular angles, or for ratios involving precisely these lines, the job of discovering Snell’s law is essentially done.
In practice, more than 40 years after its creation, the BACON program has not made any new scientific discoveries.
Again, history repeats itself. In November 2020, the New York Times reported that MIT professor Max Tegmark and a student, Silviu-Marian Udrescu, had created a neural-network system they called AI Feynman, which successfully rediscovered 100 physics equations from the celebrated textbook The Feynman Lectures on Physics. Professor Tegmark argued, “We’re hoping to discover all kinds of new laws of physics. We’ve already shown that it can rediscover laws of physics.”
AI Feynman is impressive, but it doesn’t actually discover any laws of physics. Tegmark and Udrescu used each of the Feynman equations to generate data. AI Feynman was then told which variables are in the equation and tasked with finding a mathematical equation that fits these data. When it finds the correct equation, the humans stop the search. As with BACON, this avoids the hard part of scientific discovery. As with BACON, the humans know what the program is looking for. As with BACON, it has not yet discovered any new laws of physics.
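A minimal sketch makes the circularity plain (my own illustration in Python, not the AI Feynman pipeline, which uses neural networks and symbolic search; the gravitational-force example and the log-log fit are illustrative choices). The data are generated from the target equation, and the fitter is told which variable matters, so the “discovery” recovers exactly what was put in:

```python
# A minimal sketch of the rediscovery setup: data are generated *from the
# target equation itself*, and the fitter is told which variables matter.
import numpy as np

rng = np.random.default_rng(0)
G, m1, m2 = 6.674e-11, 5.0, 3.0

# Step 1: generate synthetic data from the known law F = G*m1*m2 / r^2.
r = rng.uniform(1.0, 10.0, size=200)
F = G * m1 * m2 / r**2

# Step 2: "discover" the law, given that F depends on r. A log-log linear
# fit recovers the exponent and constant that were used to make the data.
slope, intercept = np.polyfit(np.log(r), np.log(F), 1)
print(f"fitted exponent: {slope:.3f}")              # -> -2.000
print(f"fitted constant: {np.exp(intercept):.3e}")  # -> G*m1*m2, about 1.001e-09
```

The fit cannot fail, because the answer was baked into the data-generation step; the program is rediscovering an equation it was effectively handed.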
These two examples illustrate nicely how easily we can be fooled into thinking that computers have human-like intelligence, even in cases where the creators are trying to demonstrate that computers do not have human-like intelligence. It is still true that the real danger today is not that computers are smarter than us but that we think computers are smarter than us.
