You know I’m not an advocate of using AI tools to write.
One of the reasons is that once they send you in one direction, you’re much more likely to follow it than to counter with a different idea, often making you trade off your original intent — if there was any — for the comfort of following instead of leading.
Let’s make the case right here, right now. Here’s how ChatGPT continues when I input the above paragraph:
This tendency to follow rather than lead can also suppress creativity. When a machine suggests a narrative or concept, it’s based on patterns it has learned from vast amounts of data. While this can be efficient, it also means that the results are often predictable and lack the unique spark that human creativity brings.
That’s fine. It’s a possibility. I could take it and you might not notice. The direction language models prompt you to follow (pun intended) often leads somewhere boring — to the land of averages — as ChatGPT readily admits here, but I could always use my human idiosyncrasy to avoid that.
That’s not what I had in mind, though. Dullness is one reason not to use AI to write, but I wanted to say something else. Now that ChatGPT has offered me that sweet, sweet idea for free, developing my original intent entails an additional cognitive load.
I can just write my thing, true, but I have to reject ChatGPT’s appealing suggestion and “unbias” myself from it.
It’d be so much easier just taking the candy.
But let’s go on, guilt-free and sugar-free. The second reason I see not to use AI writing tools is, funnily, something ChatGPT will hardly admit: once it’s chosen a narrative path, it won’t change it.
That’s one of the beautiful traits we humans enjoy that modern language models lack. Their autoregression (to decide what word to write next, they look at the ones they’ve just written) makes them unable to backtrack. Even if ChatGPT realizes it made a dumb mistake a few words ago, it’s constrained by a fixed destiny.
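To make the autoregression point concrete, here’s a minimal toy sketch of how generation works, one word at a time, with no backtracking. The bigram table and function names are made up for illustration; real language models predict over probabilities with neural networks, but the one-way structure of the loop is the same.

```python
# Toy illustration of autoregressive generation: each next word is chosen
# by looking only at what has already been written, and emitted words are
# never revised. The bigram table is a made-up stand-in for a real model.

BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(start, n_words):
    words = [start]
    for _ in range(n_words - 1):
        nxt = BIGRAMS.get(words[-1])
        if nxt is None:
            break  # no known continuation; still no going back to fix earlier words
        words.append(nxt)   # appended, never edited
    return " ".join(words)

print(generate("the", 6))  # "the cat sat on the cat"
```

Notice there is no step where the loop reconsiders `words[0]` after seeing `words[3]`: even if an earlier choice turns out to be a dead end, the model can only keep appending.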
All of that (including ChatGPT’s idea) makes AIs terrible writers.
Humans are much better… right?
I’m not here to criticize language models just for the sake of it. I believe there’s a compelling argument to make in favor of AI that requires us to come down from our self-righteous pedestal. To put it bluntly: we are not as good as we think we are.
It makes sense to compare AI against humans, to see if our pervasive criticisms are warranted at all. So let me ask you this: Can you do it better than AI?
The stochastic parrot thing didn’t convince me of the limitations of language models (I was already aware of those). It had a different, and rather ironic, effect: I started to perceive more intensely my own linguistic limitations as a human being.
When I speak, there’s an almost tangible mental cost to stop mid-sentence and say: “wait, let me rephrase that.” I sometimes just go along with what I’ve just said and, at best, try to steer my next words so that they land well. Humans suffer from a quasi-autoregression as well.
That happens in conversations but writing is different, right?
It’s thoughtful. It’s deliberate. It’s reflective. I can read the words I’ve just written and decide whether I like them or not. That makes me much more powerful than AI. My ability to redirect my thinking and edit my words so that only the final, polished version ends up on the page is remarkable. ChatGPT doesn’t do that.
That makes sense with words and letters. You see, I can just rewrite the phrase and make it clearer: at the level of words or sentences, that’s true.
But there’s a level at which, the more I think about it, the more I feel we’re not that different from AI (even if the process that makes us behaviorally similar is not mechanistically the same).
I’m not talking about speech, words, or sentences. I’m talking about ideas.