There’s a cliché that AI evangelists love to throw around whenever someone highlights the limitations of a new system: “There goes Gary Marcus, moving the goalposts again.” The reasoning goes that, as systems improve, there will always be those who say, “Doing X is not good enough, because they can’t do Y,” when today’s X was yesterday’s Y. That’s cheating, they say.
People who do this are labeled goalpost movers. I see why Marcus is often singled out, given that he’s among the most vocal AI skeptics, but in some cases he’s right: modern AI systems can’t do much reliably. At their best, they’re astonishing. At their worst, they can make anyone lose faith in the last 10 years of AI research.
Anyway, “moving the goalposts” is almost idiomatic by now. Naval Ravikant, back in 2022, was the first person I saw turn the idea on its head to give it fresh meaning:
It’s not so much people moving the goalposts on what an AI is, it’s more AI moving the goalposts on what a human is.
Cleverly stated as it was, the idea didn’t make much sense to me at the time. It was a comment Geoffrey Hinton made last year, during an MIT panel debate with Demis Hassabis and Ilya Sutskever, that made me rethink its significance.
Perhaps AI is, after all, moving goalposts that had been firmly anchored for half a century.
Perhaps AI never wanted to be like you
Hinton’s argument is interesting for two reasons. First, it’s a re-evaluation of a belief he had held sacred for 50 years, ever since he started researching AI, and has now abandoned in the face of the capabilities of systems like GPT-4. Second, it makes philosophical sense (even if it’s eventually proven false).