As I enjoy the tranquil waves of the Mediterranean, I write hesitant words on a page that may never be seen. What does it mean to write for no one? Today’s essay—one of my favorites—is a (subtly edited) repost about why it makes sense to write for the sake of writing; about why, if we take writing for granted, we kill our greatest ideas.
You know I’m not an advocate of using AI tools to write.
The first reason is that once they send you in one direction, you’re much more likely to pursue it than to counter with a different idea, often trading your original intent—if there was any—for the comfort of following instead of leading.
Let’s make the case. Here’s how ChatGPT continues the above paragraph:
This tendency to follow rather than lead can also suppress creativity. When a machine suggests a narrative or concept, it’s based on patterns it has learned from vast amounts of data. While this can be efficient, it also means that the results are often predictable and lack the unique spark that human creativity brings.
That’s fine. It’s a possibility. I could take it and you wouldn’t notice. It’s true, as ChatGPT readily accepts, that the direction chatbots entice you to follow often leads you somewhere boring—to the land of averages.
That’s not what I had in mind, though. Dullness is a valid reason not to use AI to write, but it’s not the reason I wanted to share. ChatGPT offered me that sweet, sweet idea for free, making me pay in cognitive load to develop my original intent. If I want to say my thing, I have to reject its appealing suggestion—I have to “unbias” myself.
It’d be so much easier just taking the candy.
But let’s go on, guilt-free and sugar-free. The second reason not to use AI to write is something that, funnily enough, ChatGPT would hardly ever admit: once AI has chosen a narrative path, it won’t change it.
That’s a beautiful trait we humans enjoy and chatbots lack. Autoregression (they look at the previous words to decide which one to write next) makes them unable to backtrack. Even if ChatGPT realizes it made a dumb mistake a few words back, it’s constrained by a fixed destiny.
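If it helps to see the shape of the loop, here’s a toy sketch of what I mean by autoregression. It’s nothing like a real model’s internals, and the generate and toy_next_token names are mine, purely for illustration, but it captures the gist: words only ever get appended, never revised.

```python
# A toy sketch of autoregressive decoding (an illustration, not any real
# model's code): each step picks the next word from the words already on
# the page, and a word, once written, is never taken back.

def generate(prompt, next_token, max_words=20):
    words = prompt.split()
    for _ in range(max_words):
        word = next_token(words)   # decided only by what came before
        if word is None:           # the "model" has nothing left to say
            break
        words.append(word)         # appended for good; no backtracking
    return " ".join(words)

# A stand-in "model" that just keeps continuing with a canned phrase.
_canned = iter("it is bound to a fixed destiny".split())

def toy_next_token(words):
    return next(_canned, None)

print(generate("Once AI has chosen a path,", toy_next_token))
```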
All of the above makes AI a terrible writer.
Humans are much better… right?
There’s a compelling argument in favor of AI that requires us to come down from our self-righteous pedestal. To put it bluntly: we are not as good as we think we are.
Interestingly, the “stochastic parrot” metaphor didn’t convince me of the limitations of language models. It had a different, rather ironic, effect: I started to perceive my own linguistic limitations as a human being more intensely.
I asked myself what I ask you now: Can you do it better than AI?
When I speak, there’s an almost tangible mental cost to stopping mid-sentence and saying, “Wait, let me rephrase that.” I sometimes go along with what I’ve just said and, at best, try to steer my next words so that they land well. Humans suffer from a shade of autoregression as well.
That happens in conversations, but writing is different, right?
It’s thoughtful. It’s deliberate. It’s reflective. I can re-read the words I’ve written and decide if I like them. That makes me more powerful than AI. My ability to redirect my thinking and edit my words so that only the final, polished version ends up on the page is remarkable. ChatGPT doesn’t do that.
That makes sense with words and letters—you see, I can rewrite the phrase and make it clearer: At the level of words or sentences, that’s true.
But the more I think about it, the more I feel there’s a level at which we’re not that different from AI.
I’m not talking about speech, words, or sentences. I’m talking about ideas.