How to Survive as a Human Creator in the AI Era
What do you think you, as a human, have that no AI can replicate?
Last week, Sinocism’s author Bill Bishop (the first Substacker) asked this:
Is the coming deluge of AI-generated content good or bad for Substack and its creators and audiences?
To which Substack CEO Chris Best replied:
As AI drives the cost of creating “content” to zero, the value of trusted relationships will skyrocket.
If you are in the business of creating interchangeable content to please an algorithm, you’re in trouble.
But if people subscribe to you to help them make sense of the world and know what to pay attention to, then the value of the work is only going to go up. So people will need you more, and at the same time the machines will give you superpowers. This will be very good for you.
People who succeed on Substack are in that second category, so I am optimistic!
Today’s On Substack piece, published by Substack co-founder Hamish McKenzie and optimistically entitled “The AI revolution is an opportunity for writers (the human kind),” is, I assume, a consequence of that conversation. (If you haven’t read it, you should go there first and then come back, because this article is my response to both the above exchange and McKenzie’s essay.)
Bishop, Best, and McKenzie opened a debate that I’ve been wanting to approach for some time: How can creators really survive the AI era? Do we really have a chance? Are writers doomed — either because of AI-driven replacement or asphyxiation by AI-generated garbage? Here’s (part of) my answer to these anxiety-inducing questions.
The intrinsic complexity of a multifaceted topic
Let me start by acknowledging that the debate on AI writing is complex.
There’s no right or wrong perspective, and different framings yield different predictions and drastically different reactions. To give an example of what I’m referring to, think of the printing press. I don’t think any of you now see it as a devilish invention but as one of the greatest technological innovations of all time. However, if you could ask 15th-century monastic scribes, they’d surely disagree with you. Writing by hand was their whole life, everything they knew — a craft carefully honed and mastered over many, many years. The printing press took it from them on a whim, without warning. They probably saw it as a curse, a trap that would make humanity forget how to write by hand. For most of us — those of us who see it from the safe distance that five hundred years of adaptation confer — it was a blessing.
The complexity of this multifaceted issue and its impossible-to-foresee ramifications, which will only become evident in hindsight a few years from now, force me to hold, at the same time, a set of seemingly conflicting views: On the one hand, I agree with Best’s and McKenzie’s optimism. Human writers may not write as fast as AI assistants, but we can imbue our words with a kind of life that can’t be explained or mimicked. AI tools will get better over time but I’m not talking about quality — I’m talking about the human idiosyncrasy that emerges out of living; out of past experiences that aren’t exactly the same as those of our neighbors or our readers but similar enough to create a limbo of sensations and feelings of familiarity. No AI, however advanced, can enter that place.
On the other hand, I can’t see how we, as writers, can avoid the same destiny as those poor 15th-century scribes who didn’t even see it coming. We might be able to infuse humanness into our writing if we learn how to (as Summer Brennan wrote, “No one is entitled to success in their writing, and notable success in writing always takes a lot of work. No exceptions.”) but in the end, we compete in a market to sell those words. Does that connection that only humans are capable of creating bestow a high enough value on our words to compensate for our slowness and our cost? I’m really not sure.
(At the same time, although technological progress might be an unstoppable power, I think how it unfolds can be steered with adequate countermeasures. Let’s not naively believe that AI companies have in mind the interests of anyone but themselves, but let’s not lose our energy, either, to the pessimism that tells us there’s nothing we can do. You can fight. If you so choose, know there are ongoing legal and ethical battles for your rights. You can join them. As Best says, “We don’t actually have the choice between backwards and forwards. But we can choose which way forward.”)
That said, I expect you to understand that those views can’t easily be reconciled with one another and combined (along with all the other nuances and details I have intentionally left out for the sake of clarity) into one neat essay. Giving each the space it requires and smoothly navigating the conflicts that arise when they clash is hard. That’s why I will focus today on the part of me that matches Substack’s founders’ optimism (I will eventually write from the opposing points of view). First, I believe that, as writers, we have real reasons to be optimistic (many of the usual reasons to be pessimistic were there before AI and would remain even if AI never succeeded). Second, my own experience here on Substack is a good example that you can survive — even thrive — as a creator in the AI era. I can’t help but believe there’s reason to be optimistic.
I’m not saying my career as a writer is representative of anything (I started from nothing and I’m not famous by any means), but it may serve those of you who are worried and frightened as a way to anchor your mind in some optimism which (even if eventually smashed by the forces of the future, as happened to those ancient manuscript writers) might provide you with enough strength to fly through these turbulent times and land where you want to. In particular, I want to approach this from my position as a writer focused on AI, which I assume many of you will think is the reason I feel partly optimistic (or confident or safe or whatever). I don’t think that’s the reason — I don’t think it’s relevant except insofar as it perhaps provides me with a clarity of sight that will help me back up my points.
Importantly, I won’t touch here on the role of AI as a “superpower giver.” That’s a key argument that both Best and McKenzie use to support their optimism, but I don’t think it’s necessary, at least for now, to leverage AI to succeed as a writer. If you don’t feel like using AI, there’s no reason to force yourself to make it work. There are paths you can follow and still come out on top of this apparently deadly wave of innovation. As a case in point, I’ve never used ChatGPT (or any other AI writing tool) to write any of my articles, and I very well could without anyone noticing. I simply think it’s an unnecessary condition. Writers can protect themselves without feeling powerless in the face of AI and eventually giving in.
I know some of you are highly skeptical that any writer at all (including very famous authors like Margaret Atwood) has any significant safeguard against AI. I will try to prove you wrong here (while proving that same part of me wrong as well). The means to survive this wave of technological innovation are within reach of established writers but, much more importantly, within reach of any human being.
Creativity is not about creating content
The origin of the problem that Best accurately identified and dissected last week in a few lines is the concept of “content creator.” I think this concept lies at the center of this debate and provides a useful distinction on which to base a more accurate perception of what’s coming.
I’ve never liked the term content creator. There’s a good chance — given that you are reading this — that you don’t either. Content creators, by definition, don’t care about what they’re creating but only about the fact that they’re creating something. If you are a creator and you care, you should never think of yourself as a “content creator” (if you do, which I doubt, you should drop the label right now). It is what content creators get in return from the creative object that matters to them (perhaps money, but often just online clout and instant gratification), not the meaning of the creation, the creative act itself, the impression it makes on their audience, or the value it provides to the world at large. That’s not to say true creatives should not care about earning a living. That’s absurd. But money shouldn’t be the sole motivation. Creativity should be an end in itself.
The generative AI era promises many things. One of them, as Best says and McKenzie repeats, is “to drive down the cost of content creation to zero” (that’s the optimistic and more humane analogue of Sam Altman’s preferred version of AI as a vehicle to reduce the cost of intelligence to zero). A simple, easily usable, and globally accessible means to create anything (not everything, though) at zero cost makes the above distinction between creatives who care and creatives who don’t absolutely critical. Some people might think it’s the cost of creativity that will be driven down to zero instead, but that immediately appears false once we’ve drawn that useful distinction between creator types: AI can’t do this because what it does has nothing to do with creativity as humans understand and enact it.
If this critical distinction wasn’t as clear before, it’s because although we all experience meaningful creations (I’m biased, but Substack is one of the best places for that), we also consume meaningless content, often without noticing. Let me show you how AI will ensure that one thrives while the other struggles — guess which.
But first, let’s get an important misunderstanding out of the way: You don’t need to know anything about AI to create the means to feel safe and build your path forward.
Knowing about AI is not what makes me feel safe
I understand the anxiety of uncertainty. It’s something I’ve felt myself. When OpenAI released ChatGPT, I wasn’t sure if it would boost my newsletter — because I write about that stuff — or hinder its growth — because you, my dear readers, are among the most AI-savvy people in the world, which also means you’d be the first to try ChatGPT and decide whether you needed anything else anymore.
Unsurprisingly (in retrospect), the post-ChatGPT period of The Algorithmic Bridge (TAB) has been the absolute best. In terms of growth, interest, influx of subscribers, upgrading from free to paid — any metric you might choose got better after ChatGPT. The reason, I believe, is not just that ChatGPT put AI into the collective awareness, making TAB arguably more attractive, but that you value the relationship I’m trying to grow and nurture here, with you, and that ChatGPT’s existence made it more urgent to double down on those we trust, as Best says; those we feel are authentic. As McKenzie says, this “fuel[s] a tremendous need for … real humans in communion with one another.”
Some people tell me that I should be more afraid, that I only feel invincible because I’m a victim of the illusion of a protective halo, a result of writing about AI. I think that makes sense because, for instance, coders, who as a group know more about AI than anyone else, are arguably also in greater danger than anyone else. Knowing about AI doesn’t make me invulnerable to the effects it could have on the creative space. That’s right. What’s wrong is the assumption that I feel safe (definitely not invincible) because I could see the next AI breakthrough coming and adapt to it. No — I can’t. No one can.
The reason I’m calm is much simpler and certainly more human: I’m working very hard to make the relationship with you as humane as I can make it. Call me delusional, but I firmly believe that, as a human being, you value that above all else. It’s not what I write — the specific words on the screen — that you value. It’s the fact that I’m the one saying it.
There are many other newsletters about AI besides TAB (especially outside Substack). Most of them are lists of curated links that completely hide the creator. But even the others, designed to create a writer-reader bond in the form of blog posts, journalistic pieces, or analysis articles, can easily become too impersonal. The problem is not that AI makes it easier to replace those creators (which is an issue, but a different one) but that they made themselves replaceable in the first place. Again, quoting Brennan: You are not “entitled to people’s attention, time, or money.” Those other writers focused on AI are not safe by default. They have to put in the work and nurture the relationship with their readers if they want to sit lower on the replaceability spectrum — whether AI exists or not, because even if it didn’t, a harder-working human would simply do the replacing.
In TAB, besides the amount of effort I put into each and every issue I send you, I’ve tried to establish some form of extra-newsletter connection with you: hosting open threads and “Ask Me Anything” sessions, answering comments and inquiries, and engaging in useful and interesting debates article after article. Other Substack creators are exploring chat, podcasts, video, and Notes. They read one another and share and cross-post and cross-recommend. This kind of ecosystem is perfect for humans to build trusted relationships — AI-generated content wouldn’t last here for long. And the same applies to all kinds of bloggers spread across the web whose success lies in who they are — their idiosyncrasy as fellow, but unique, human beings.
AIs passing the Turing Test is not a problem
Let me get philosophical for a moment and entertain the counterargument that, on the internet, a human is a human in all regards as long as it appears to be one; that anything resembling a human can pass as one. A GPT-X that successfully passed the Turing Test would be more human on the internet than any of us.
That’s also right, in a way — you don’t know me. But I don’t feel like an AI, do I? That’s the argumentative flaw: No current AI system can truly deceive humans over time (please, hear me out before you shout, “they will!”). They can’t insofar as any writer-reader relationship worth having entails some sort of direct, almost personal connection. A content farm can function with swarms of article-writing bots just fine. But you’re here instead of reading spam because you don’t want that. You came here for Substack’s unique value in allowing writer-reader relationships to grow and blossom.
I concede that these words by themselves are not that personal. What’s personal is that you can come back to ask me something in the comment section and I will try my best to answer. Or you might email me with words of support to encourage and motivate me. That’s the meta-level at which creators and audiences interact, and it’s a level AI can’t reach. That’s what breeds trust and radiates authenticity.
Let me answer your implicit complaint: “The Turing Test will eventually be passed.” That’s right (in some settings, Turing’s original prediction has already been achieved), but the truth is that any human relationship — even digital ones, even those not especially intimate, like the ones between writers and readers (I love you, though) — requires much more than the ability to deceive. That’s what the test measures: AI’s ability to deceive. Human relationships are literally about the opposite. We want, need, and should engage in a constant effort to understand one another. Our bonds require empathy and sympathy, adaptability and flexibility. We enjoy the ability to express and feel emotions and should develop the skill to understand them in others. And we have, unlike any AI — regardless of whether it might pass the Turing Test or not — a unique self and a consciousness behind the words and phrases we write or speak.
In her essay “Fail Better”, Zadie Smith writes:
A writer's personality is his manner of being in the world: his writing style is the unavoidable trace of that manner. When you understand style in these terms, you don't think of it as merely a matter of fanciful syntax, or as the flamboyant icing atop a plain literary cake, nor as the uncontrollable result of some mysterious velocity coiled within language itself. Rather, you see style as a personal necessity, as the only possible expression of a particular human consciousness. Style is a writer's way of telling the truth. Literary success or failure, by this measure, depends not only on the refinement of words on a page, but in the refinement of a consciousness.
When pessimists make arguments of the kind, “AI will do everything better than we do,” they forget that there’s one thing AI can’t, by definition, do better than us: being human.
The value of being a human creator in an impersonal world
Not all humans are equally good at being human, though (or at least at showing that they are). In the creative world, this kind of “salvation from AI” applies exclusively to creators who take seriously both their creative projects and their community. It’s those who care about the craft, about what they communicate, and about thoughtful feedback. Those who care about the creative process. Those who care about the meaning of creating something and the satisfaction of having created it for someone else.
Those who care about what their readers value. Some value the meaning of the sentences they read. Others the shape and the style — and the consciousness behind them. What we all unequivocally value, readers and writers, all of us as enjoyers of creativity, is the mind-expanding energy that’s only found in art, in any of its forms, as a means to share the things that make our minds alike and the things that make them different. AI systems can do many things but they can’t — and never will be able to — expand our minds like only a fellow human, who stands alongside us in our condition, our nature, our struggles, and our endless yearning for meaning, can. It’s the things that we have in common, as humans, that provide a robust enough overlap to care as creators and care as audiences for the things that distinguish us.
Here’s my favorite part of McKenzie’s essay:
No matter how advanced AI gets, there will be unceasing demand for human connection. We will want to show each other how we feel as people. We’ll tire of getting what we want, and instead yearn to figure out together what we should want. We will share our hearts and compare our scars. We will long for the sound of each other’s voices, and to shape our own and each other’s stories, in wild and wonderful new ways.
AI will never be able to replace the dynamic that is most central to Substack: human-to-human relationships. New robots may rise and try to claim the mantles of writers and other culture makers, but none can seriously lay claim to what is most important about these people and groups—the human connections they are built on.
Perhaps I’m being too naive, foolish even, in being so optimistic about my future as a writer, but I’m surely not alone. I’m certainly biased to believe I’ll be fine (given my still hard-to-grasp success here on Substack), but know that I’m not moved by blind faith; I’m moved by confidence in what I can offer that AI can’t. Anyway, I leave it to you, my dear reader, to decide whether that’s true (because, unlike ChatGPT, I care about your opinion). Just know, whatever shadow of doubt may still linger in your heart, that you can offer it, too.
AI might eventually do most things better than us, but I’m convinced it will never replicate our human essence. That’s why creators who nurture their relationships and remain as human as they can be are not in trouble in the AI era. However, those whose goal is empty, impersonal content creation face a dilemma — not because AI will become more like them but because they are already a lot like AI.