The Algorithmic Bridge

We're Not Ready For the Aliens to Come

And we thought octopuses were weird

Alberto Romero
Jun 07, 2024

A blog about AI that’s actually about people

There’s a cliché that AI evangelists love to throw around when someone highlights the limitations of some new system: “There goes Gary Marcus, moving the goalposts again.” The reasoning is that, as systems improve, there will always be those who say, “Doing X is not good enough, because they can’t do Y,” even when the new X was the previous Y. That’s cheating, they say.

People who do this are labeled goalpost movers. I see why Marcus is often singled out as one, given that he’s among the most vocal AI skeptics, but in some cases he’s right. Modern AI systems can’t do much reliably. At their best, they’re astonishing. At their worst, they can make anyone lose faith in the last ten years of AI research.

Anyway, “moving the goalposts” is almost idiomatic now. It was Naval Ravikant, in 2022, whom I first saw turn the idea on its head and refresh its meaning:

It’s not so much people moving the goalposts on what an AI is, it’s more AI moving the goalposts on what a human is.

Cleverly stated as it was, the idea didn’t make much sense to me. It was a comment by Geoffrey Hinton last year, during an MIT panel debate with Demis Hassabis and Ilya Sutskever, that made me rethink its significance.

Perhaps AI is, after all, moving goalposts that have been firmly shored up for half a century.

Perhaps AI never wanted to be like you

Hinton’s argument is interesting for two reasons. First, it’s a re-evaluation of a belief he has held sacred for the 50 years since he started researching AI, one he has now changed in the face of the capabilities of AI systems like GPT-4. Second, it makes philosophical sense (even if it’s eventually proven false).
