Why I'll Always Disclose My Use of AI Writing Tools
Transparency will be key for writers—and readers.
I’m barely a year and a half into my writing journey. You could say I’m a newbie writer, but it’s been more than enough time to fall in love with this profession. I’m more versed in AI, however. I got into it after finishing my engineering undergrad around 2017 and decided to pursue it seriously. I landed a job soon after and spent the following three years working for a startup that wanted to change the world, as most AI companies do.
The combination of my knowledge of AI and my passion for writing puts me in a singular position to witness and recognize the impending future that’s about to fall upon us, one whose first symptoms we’re already feeling. The AI language revolution took off in 2017. Just three years later, OpenAI released the popular language model GPT-3, an inflection point in a trend that shows no signs of stopping and whose limits are yet to be discovered.
Large language models (LLMs) like GPT-3 have permeated the fabric of society in ways not even experts anticipated. I thought, as many others did before me, that AI was a threat to blue-collar jobs: physical workers would be replaced by robots. With the advent of LLMs, however, it has become increasingly clear that white-collar jobs are in danger too. In particular, jobs that revolve around written language, whether creative or routine, are on the verge of being impacted in a way never seen before.
A multi-billion-dollar race to own the future of writing
Companies like Google, Microsoft, and Facebook have been pumping millions of dollars into the language branch of AI for years now. Other, lesser-known AI companies like OpenAI, AI21 Labs, and Cohere have taken these promising developments and turned them into commercial products ready to handle tasks previously reserved for humans. And organizations like EleutherAI and Hugging Face are working to democratize LLMs through open-source initiatives like GPT-NeoX and BLOOM.
News articles, emails, copywriting, team management, content marketing, poetry, songs, dialogue, essays, and entire novels are just some of the areas where LLMs have started to show genuine proficiency.
And it doesn’t matter that these systems are “dumb” when compared to a human. They don’t need to understand what they write to write it well.
Liam Porr, a Berkeley alumnus, proved this to be true when he conducted a surprising experiment with GPT-3. He set up a Substack newsletter written entirely by the AI, and in just two weeks he attracted more than 26,000 readers. He even got one article to the number one spot on Hacker News; only a handful of perceptive people noticed the trick. “It was super easy, actually, which was the scary part,” Porr told MIT Technology Review reporter Karen Hao. “One of the most obvious use cases is passing off GPT-3 content as your own… And I think this is quite doable.”
This happened two years ago.
In the span of four years, language AI has gone from a timidly blooming trend to a technological explosion without precedent.
Companies like OpenAI and AI21 Labs now offer public APIs that let anyone access these powerful LLMs, and open-source-friendly organizations like Hugging Face additionally offer the possibility of downloading trained models for further analysis and research.
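To illustrate just how low the barrier has become, here is a minimal sketch of generating text with one of these open-source models through Hugging Face’s transformers library. The small GPT-Neo checkpoint is my choice purely for illustration, so the example runs on ordinary hardware; larger models like GPT-NeoX or BLOOM expose the same interface but need far more memory.

```python
# A minimal sketch: text generation with an open-source LLM via the
# Hugging Face transformers library. The model name is illustrative;
# any causal language model on the Hub works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")

prompt = "The future of writing is"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

A handful of lines, and the model writes. That, in essence, is the entire barrier to entry today.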
There has never been a better time in history to start a writing career: not only because there are more platforms on which to grow an audience, but because people no longer need to know how to write to succeed in the field. And I’m not talking about skill here. With systems like GPT-3, you don’t need to type more than a few words to go from an idea to a finished piece.
How can we, the writers who are genuinely trying to hone our craft, compete against that? Some people will manage to reap the benefits without making the effort.
No writing skill required.
Writers, among many other professionals, are going to face an unprecedented amount of competition. GPT-3 can’t match a high-quality, coherently structured article with a polished style and a clear thesis, aimed at a specific audience. But we know quality isn’t the only variable in getting views and grabbing attention. When anyone can ask GPT-3 to write 100 articles in a few hours, the phrase “dominance by sheer numbers” takes on a whole new meaning.
GPT-3 is the most popular language model today, but it isn’t the most powerful. Newer LLMs like LaMDA and PaLM have already improved on its capabilities, and future models will overshadow anything we’ve seen to date.
Technological progress won’t slow down, and the field is too profitable for AI companies to ignore. Firing entire teams of writers and hiring AI services for a fraction of the cost will become the norm.
Transparency is the key
Everything I’ve said frames my decision to always disclose when I use AI language models to help me write, whether it’s to exemplify the power of GPT-3 with a complete article or just to refine the tone or choose better words. I’ll make sure my readers know when it’s me and when it’s an AI they’re reading.
LLMs are out there for us to augment our skills with. Not using them will put us at a huge disadvantage; we can’t compete by staying traditional. However, not revealing when we use AI will only deepen the discredit this profession will inevitably suffer.
Transparency is the mindset we should adopt as individuals.
But I’m not naive enough to think every writer will be responsible in this sense. That’s why I also support creating regulations that oblige people and companies to disclose publicly whether a writer is a human or an AI (much as OpenAI’s DALL·E 2 creations carry a watermark, but without the possibility of cropping it out).
Enforcing such laws won’t be easy, but it’s certainly a necessary first step if we want to protect human-made writing, and the jobs of many people with it.
I’d support a law requiring disclosure of LLM use, and that’s the stance I think everyone should adopt if we want valuable, reliable information to prevail.
The new normal, powered by AI
I want you as a reader, and maybe as a writer too, to become aware that this reality won’t ever disappear. You may think you’re pretty good at spotting AI-written text (or AI-generated images, for that matter), but soon not a single soul will be able to distinguish with better-than-chance accuracy whether a piece of text was written by an expert author or by a powerful language model.
In the future, people will get used to living among human-made and AI-made creations of all kinds: music, paintings, poetry, books, shows, movies… The creative landscape will become a blurred mixture of natural and artificial imagination. It’ll become normal, a new way of life. But we still have to get from here to there, and the gap we have to close isn’t an easy one.
Transparency and regulation are the first steps we need to limit those who intend to leverage the possibilities this new tech provides to the detriment of people gullible enough (soon, all of us) to mistake the fake for the real.
This is an updated version of an article previously published on OneZero.
Good and insightful article. Let me comment on the disclosure of digital writing, because disclosure of printed writing, I guess, is much more difficult to deal with.

Disclosure is certainly a personal decision, and probably not all writers will make it, regulations notwithstanding. I'm thinking of an automatic, securely encrypted disclosure of authorship embedded in the digital media itself, one that cannot be changed later. So if a piece is authored by an AI, that authorship would remain securely in the digital media. A new standard to encode and protect the metadata embedded in the digital creation would be required (perhaps using NFTs?), and it would also need to survive copy and paste, as some document and word processors already enforce.

Of course, a writer can later edit the digital text, but the authorship must remain with the AI and the editor must appear as such. On the other hand, if a writer really did write the piece, the NFT would endorse that he or she is the true author. The same could apply to all digitally created media. (It's just an idea that came to mind.) Would that work? Very probably the technology industry and regulatory institutions will come up with a clever solution. In the meantime... you are right: this matter is an ethical choice for the writer.
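To make the idea concrete, here is a minimal, hypothetical sketch of such a tamper-evident authorship record. The record format and field names are invented for illustration (no such standard exists yet); the signatures use Python's cryptography library.

```python
# Hypothetical sketch of a tamper-evident authorship record. The record
# format is invented for illustration; it is not an existing standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the signing key would belong to the AI provider or a
# trusted registry, not be generated on the fly like this.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

text = "An article generated by a large language model..."

# The record names the author (human or AI) and binds itself to the
# exact wording of the text through a hash.
record = json.dumps(
    {
        "author": "GPT-3 (AI)",
        "editor": "Jane Doe (human)",
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
    },
    sort_keys=True,
).encode()

signature = signing_key.sign(record)

# Verification: the signature shows the record wasn't altered, and
# re-hashing the text shows the record refers to this exact text.
try:
    verify_key.verify(signature, record)
    claims = json.loads(record)
    assert claims["sha256"] == hashlib.sha256(text.encode()).hexdigest()
    print("authorship record verified:", claims["author"])
except (InvalidSignature, AssertionError):
    print("record or text was tampered with")
```

The hard part, of course, isn't the cryptography; it's getting everyone to attach and preserve such records, which is exactly where regulation would come in.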
I get this, and I do think it's imperative for journalism. But fiction may be a whole other thing.
Of course, it's cool to say publicly, "I'll always disclose if AI is involved in my writing." But seriously, who gives a damn if it isn't journalistic writing? For novels and fiction, I couldn't care less what was involved in creating the story. If a magic pen wrote the thing (how much? 20%? 59%? All of it?), who cares? If it was written by your dog, the only ones interested are cheap clickbait sites that want an article about a writing dog. I care about: Does the story grab me by the balls? Does it speak to me? Is it gripping? Is it interesting? Does it swing? Do I like the tone of voice? Does it challenge my views? Is it elegant? What's the structure, where's the climax? In short: does it work? Can I fall into it, can I immerse myself in the writing? Who cares whether you used a typewriter, a language model built on sophisticated statistics, or a Faber pen. All I care about: can you put words in a sequence that speaks to me?

And I'm not done yet: disclosing the use of AI, at least at this point in history, may break the spell, and a story that would have spoken to you suddenly doesn't, because you associate it with a robot voice. But it's not a robot: behind the language model stand a billion trillion words written by people, and you, the curator of AI-chosen words, still put those words into a sequence; you still curate and arrange the output so that it works.

We are far away from a GPT-3 that can just spit out The Great Gatsby, and until we get there, it may be counterproductive for the purpose of a story to disclose AI involvement. But hey, it sounds good on Twitter, amirite?
(As I said, I do think journalism is another, uhm, story. I want to know who made the mistake, a human or a machine, and who may be to blame for inaccuracies. Journalism needs transparency; storytelling does not.)

(And yet another story is experimental, explicitly machine-made artwork. But those works succeed *because* of the knowledge of AI involvement, not *despite* it.)