What You May Have Missed #16
AI writing in journals and magazines / The clash between Big Tech and AI startups / News beyond text generation: Video / Law, security, and porn
Journals and magazines on ChatGPT
Debate about ChatGPT (and AI writing tools in general) is running hot among publishing platforms, and opinions span a whole spectrum.
CNET started using an AI system to write articles, which it disclosed only after the information was leaked. It eventually had to shut the system down amid factual errors, criticism, and reported instances of plagiarism.
In contrast, Nature has published a set of guidelines explaining that ChatGPT can't be credited as an author and that any use of it must be adequately disclosed:
“First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.
Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”
Medium, the popular writing platform, has taken a similar approach. Scott Lamb, VP of content at Medium, wrote this on Thursday:
“We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that any story created with AI assistance be clearly labeled as such.”
As you know, I follow a very similar (unofficial) disclosure policy regarding AI writing tools. I think this is the right approach: passing off a ChatGPT-written piece as the work of a person is a very shady practice.
At the other end of the spectrum, we find BuzzFeed, which has announced it'll be using ChatGPT to “help create quizzes and other content.”
Should magazines and journals take advantage of text-generation AI? It's hard to say who's right. At the very minimum, we should strive to accurately understand, and teach, the abilities and limitations of language models, which, judging by the hundreds of examples flooding social media, isn't happening. In my view, ChatGPT isn't simply a calculator for writing, as some like to think.
Amid the debate, researchers keep trying to build reliable detectors. To me, this is just another attempt at the impossible. One clear downside: false positives, where human-written text gets wrongly flagged as AI-generated.