In one of his most popular essays, “What You Can’t Say,” Paul Graham explores the moral fashions of our times and how we can recognize them.
He defines moral fashions as strong beliefs we assume to be good or true that are actually bad or false. Yet we have no means to figure that out because, as fashions, they’re just as “arbitrary and invisible” as any other. You don’t realize you like your clothes because they’re trendy until the trend passes and you cringe at your old pictures.
A defining characteristic of moral fashions is that they’re peculiar to the time they exist in. To use Graham’s example, we find it absurd that pre-Copernican intellectuals thought the Earth was the center of the universe. Their belief only seems reasonable once we remember they lived at a time when science was profoundly immature. Likewise, a hypothetical visitor from the future might consider our attempts at merging quantum mechanics with relativity ridiculous, or our dreams of Dyson spheres delusional, but would understand why we try given our circumstances.
Moral fashions aren’t just about the hard sciences, like physics, whose falsehoods are (hopefully) eventually disproven. They’re also about people, and people-type fashions aren’t as “easily” disprovable and are thus even harder to fight.
The interesting thing about moral fashions (of any kind) is neither their temporality nor how disprovable they are, but rather that going against them always gets you in trouble.
You can’t say out loud something that goes against a moral fashion without risking social, moral, and perhaps even legal consequences. As Graham says, “Violating moral fashions can get you fired, ostracized, imprisoned, or even killed.” In the most extreme cases, only history provides a much-deserved forgiveness.
Unlike clothes, music, and other stylistic (and mostly inoffensive) trends, moral fashions constrain what we can think about and what we can say.
Our time traveler would get in trouble today for saying publicly things contrary to our moral fashions, like “democracy is a farce and surely not the best political system,” however reasonable she might consider that statement to be. Just like we’d be prosecuted if we went to Galileo’s era and professed that the Earth moves around the sun — eppur si muove — as he did.
Anyway, this article is a tribute to Graham’s essay: What you can’t say, AI edition.
What you can’t say about AI — in pro-AI circles
There are notable examples of this in AI. Given the leading narratives and the general sentiment inside and outside the field, it’s pretty easy to cross the line between what can and can’t be said.
Here’s an example: In the future, AI partners might be suitable — even desirable — replacements for human partners.
That sounds weird, almost disgusting. I can visualize your faces, ranging from anger to disbelief. Yet our time traveler might find it fine, sensible even: why would you want to withstand the annoyance that always comes with a human partner, one who brings her own needs and wants to the relationship, which often, somehow, conflict with your own?
(Note that something not being morally fashionable doesn’t mean it’s necessarily good or true, e.g. slavery or alchemy. This is all about moral fashions we haven’t yet outgrown.)
But I don’t want to talk about AI stuff that’s generally unacceptable. It’s the ideas, beliefs, and predictions about AI that not even AI people (or rather, especially not them) want to hear about that interest me. The reason is that moral fashions are bubble-dependent.
In pro-AI spaces (or, alternatively, AI-savvy spaces), it’s not just okay but cool to say that AI art is a valid art movement (perhaps, in some concrete cases, even superior to previous ones), that progress should always be prioritized over regulation, or that generative AI works just fine despite its moderate flaws.
Those things won’t surprise you that much either. We belong to the same bubble. We’re the ingroup. We don’t care what normies find unacceptable. The world would throw up its hands at the examples above, but you already know why you can’t say those things and are willing to say them anyway.
No. What interests me is what my ingroup thinks can't be said about AI.
What awaits you below are things that’d make you point at me and say, “You can’t say that!” Things that’d make you throw your hands up to your head.
It’s very easy to see the flaws in outgroups’ beliefs, but being in the ingroup won’t save you from a present that fools us all with its moral fashions.
The ones that blind you are the ones you should strive the hardest to see.
So here’s an example of this kind that I like a lot, one on which I have a hard time finding AI-savvy people who agree with me: AI is not that revolutionary.
Or, to put it more spicily: AGI will change the world much less than we all think.
Whoa, there. That’s too much, right? AGI is the ultimate goal. No one in their right mind would dare make such a (pessimistic? ludicrous? delusional?) prediction.
And now you tell me: You can’t say that!
Oh, well, it’s not me who said it. “[AGI] will change the world much less than we all think” is the crazy statement by which we’ll remember OpenAI CEO Sam Altman’s visit to the World Economic Forum in Davos.
In that case, if Altman says it, even if he’s wrong, then we can all say it, right?
Not so fast. Moral fashions aren’t just ubiquitous but also powerful. First, not everyone has the influence to shift narratives and move the Overton window like Altman does. Second, not even Altman can change the sentiment overnight without repercussions. People will question why he’s trying to change the vibe.
He might be famous and a prominent voice in AI (I’ll let you decide whether that’s deserved), but you just can’t say that.
That’s the power of moral fashions.
Finding them is valuable because, as Graham suggests, some of the most interesting future commonplace truths lie hidden behind the obstacles of current moral fashions.
To understand what’s going on in AI, we must epistemically challenge the dictum of what can and can’t be said. That’s where we’ll strike gold.