The world has been crazy about chatbots since ChatGPT came out in November 2022.
Wait, no… The frenzy started a decade earlier, in October 2011, when Apple announced Siri.
Wait, that’s not right either. We have to go way back. Further than most think, to the hopeful and countercultural 60s.
We haven’t changed much in sixty years of chatbots
The first chatbot, at least by any modern standard, was Joseph Weizenbaum's ELIZA, which shares its name with the now-infamous anthropomorphism effect. Weizenbaum was the first to successfully create a chatbot that could trick people into thinking there was a presence inside the machine (or rather, someone behind the screen pulling the strings). That feat got his name inked in the history of AI fifty-seven years ago.
ELIZA was shallower and more limited than modern bots, but not that different either in the impression it made on the community or in the debates it sparked among AI experts. Weizenbaum took a side, though probably not the one you'd expect: he became a fierce AI critic. To the frustration of the pro-AI camp, he couldn't be ignored or dismissed, as he was also the father of chatbots: a prominent insider. He thought talking machines could be dangerous for us, a fear reminiscent of the current discussion about whether we should build the next iterations of these systems, or pause for a while before doing so. But others with more influence disagreed, and the field's ambitions lived on.
Fast-forward sixty years, and a young startup called OpenAI decides to turn the world upside down with another chatbot. ChatGPT is both a well-deserved worldwide sensation and the embodiment of Weizenbaum's concerns, echoed by today's louder anti-hype voices.
It is also the reason chatbots are all we think about now. We ponder whether they understand and reason about the meaning behind the words, whether they are or will become sentient or conscious as an epiphenomenon of their linguistic prowess, and whether we should keep improving them until they can improve themselves.
Here, for instance, is the headline of a New York Times article on the topic: “Experts Argue Whether Computers Could Reason, and if They Should.” Alan Turing wouldn't be happy, but it still sounds to me like a valid journalistic question, one very apt for the current landscape filled with ChatGPTs, Bards, and Llamas.
Except the article is… from 1977!
Our obsession with chatbots and their mysteries is half a century old. And it seems the questions they spark haven’t changed either. In that same NYT article, Lee Dembart considers a few philosophical conundrums we’re still battling today.
How do we know what we know? What does it mean for a human to know something? What does it mean for a computer? What is creativity? How do we think? What are the limits of science? What are the limits of digital computers?
The debates about whether AI is good for humanity also remain intact, as if frozen in time.
Weizenbaum thought, as Dembart puts it, that “… the project [of AI] was fundamentally unsound and dangerous to pursue, partly … because the computers’ and humans’ ways of thought would always be alien, and because knowledge might become limited to what a computer could understand.” Herbert A. Simon, another AI pioneer, disagreed. Again, in Dembart’s words: “… computers [are] no more or less dangerous than any other machine of the industrial revolution.”
We could replace Weizenbaum with Gary Marcus and Simon with Yann LeCun, and we'd have traveled fifty years forward in time without moving an inch toward an agreement.
ChatGPT was not the first chatbot, and it's not as dumb as ELIZA (or Siri), but it still inspires the exact same warnings that we keep either buying or ignoring, prompts the exact same debates that divide us into the exact same groups, and raises the exact same riddles, riddles we're still incapable of solving.
It’s the same story all over again.
In the meantime, other things—invisible ones—are changing. Fast. We’re fixated on chatbots; it’s time we move on.