53 Comments
Birgitte Rasine:

And you know, of course, that any time we hear someone claim they're building tech to give the gift of democracy (whether in art, politics, or knowledge) to humanity, if you squint just a little you'll see the glint of dollar signs... and the electric sizzle of power. True democracy doesn't start with a tech tool.

Chris Hurst:

Nice

Monica Mistretta:

I agree with you 100%. It is very scary, as a journalist and as someone who is trying hard to write "the book of my dreams," to accept Gen-AI as a means to produce content that steals from others' creativity. And it is also very disgusting that, for now, the new tools are being used for such repulsive BaaS. Congratulations on your enlightening post!

Chris Hurst:

Interesting perspective

Sewer Kitten:

The math checks out, at least based on my favorite definition of bullshit: https://en.wikipedia.org/wiki/On_Bullshit

Alberto Romero:

Exactly the definition I had in mind

A.J. Sutter:

I don't have any interest in reading an AI-written novel, or even an AI-assisted one. Nor do I understand the arguments that (i) people are entitled to write the book of their dreams despite a lack of effort or skill on their part, and (ii) effort in writing prompts equates to effort in writing a novel. If (ii) were true, just write the novel the old-fashioned way.

I've written non-fiction professionally, and I definitely agree with Thomas Mann's definition, "A writer is someone for whom writing is more difficult than it is for other people." I think anyone who has tried to write conscientiously will identify with it (and I think a lot of real writers will enjoy engaging with that difficulty). Those who claim that writing skill needs to be democratized seem to believe exactly the opposite of Mann's statement, and that an LLM will allow them to level the playing field with professionals, for whom writing is supposedly easier. That's a complete misunderstanding of what writing is.

Alberto Romero:

I agree with you, but I'm not sure every great writer would agree with Mann's statement. I don't think there's a perfect rule that applies to all writers. Nonetheless, that doesn't mean we should give away the gift to those who deserve it less, which is what I argue seems to be happening.

Masaru:

Deception follows mass adoption in a lot of cases. This includes AI, unfortunately. It is sad to see fake images circulating like this, as it is hurting people already.

Chris Hurst:

Anything can be used as a tool for bad

Philippe Delanghe:

"We believe ourselves to be so complex yet are actually so simple. We are the apes who understood the universe—from an invisible prison. We keep extending our arms high up in the air, reaching ever further, but our feet won’t leave the ground."

I so agree with that. I often say "the only way to become truly human is to accept that we are animals". The prison is how evolution designed our brains for survival 100,000 years ago and how difficult it is to move past it. We are in the evolutionary matrix. We are designed for scarcity (of food, relationships, incoming data to our senses) and we built a civilisation that creates the opposite.

Alberto Romero:

Agreed. Reflecting on the contrast between evolution and civilization can be extremely insightful

Kay Lew:

Maybe this is as far as we can go? Maybe it's an argument for us being a transitional form?

Stephen Moore:

Great headline!

Chris Hurst:

Interesting points

May:

Agreed. Technology can be used as a gift or otherwise; it is a choice and a matter of intention.

Johnathan Reid:

Given what you say (which I'm broadly in agreement with), why are you recommending a Substack which serves as a prompted BaaS? This popped up when I subscribed to you:

"Recommended by Alberto Romero

Write With AI By Nicolas Cole

Turn ChatGPT and other AI tools into a personal writing companion. Write With AI offers carefully chosen prompts every week to craft viral content, build an engaged audience, and rapidly expand your digital business."

Alberto Romero:

I'm not against integrating AI tools into a writing workflow. I say that much in the article.

But it seems it's impossible to have a nuanced opinion nowadays. You have to choose black or white. If you criticize people who do X, how can you dare agree (or just not disagree) with anything that's within a certain radius of X?

Johnathan Reid:

My take was only that using LLMs to 'craft viral content' at a faster rate falls very close to X.

Alberto Romero:

That's what they say, but that's just marketing copy. What they do (I know Cole from Medium) is help people learn to use ChatGPT, basically. It's nowhere near the same as having a company that literally makes content to spread on the web just to fill it with, idk, ads about stuff. If they did that, they wouldn't have my recommendation but my report.

Johnathan Reid:

I see your point. But one consumer's BS take can be another marketeer's copy. The beauty of an ad is in the eye of those beholden to a brand. Apple is an excellent example.

Michael Woudenberg:

Fantastic essay and I 100% agree. The volume of sheer bullshit and snake oil taking over LinkedIn and Facebook in groups that used to actually be interesting discussion groups on AI...

Amihai Loven:

The essence of greed... good that you wrote so directly. I invite you to dive into QLN.life

Noam Miske:

Absolute frames of reference do not exist; the individual self, with its limited intelligence and boundless ego, is merely an illusion born from emergent physical phenomena that take place within the human brain. This certainly applies to bullshit AI researchers, bloggers, and writers who publish a lot of informal bullshit in the form of plagiarized research and commentary in non-peer-reviewed media. It is very easy to throw stolen LLM-curated opinions into the for-profit echo chamber and then claim righteous authorship of circular, but certainly entertaining, discussions.

It is crucial to approach the science and philosophical aspects of AI with honesty, scientific rigor and humility, recognizing our own limitations and the immense potential for AI to benefit all humanity, and the people that risk careers and fortunes to make that happen.

Alberto Romero:

Is this a convoluted way to defend generative AI given the evidence? Mine isn't a criticism of the technology but the companies that build it with unethical practices and especially the users who misuse it.

Kay Lew:

The Tower of Babel continues to win.

Sheila Dean:

I see a contest coming between people's need to NOT use AI for conventional things like banking and commerce, and companies' "solution" of plying hyper-securitized mandates rooted in Bush-era Know Your Customer policies, which will be applied to all things that an AI can generate, aggregate, or integrate with.

There is a CFTC request for comment on things like this. The deeper problem here is integration for speculation, to see if a commerce system works. They aggregate a lot of data. It is commodified with little provision for data security and data privacy. So just because a tech business would crown itself King of a thing doesn't entitle it to plumb every crumb of my personal data in perpetuity and to use online banking as a means to coerce the markets into an identity mandate funnel.

https://www.dwt.com/blogs/financial-services-law-advisor/2024/02/cftc-ai-task-force-seeks-comments-on-uses-risks

Phil Tanny:

How about this? It can be argued that the biggest source of junk content on the Internet is not AI, but social media typed by humans. What if AI could be a partial solution to the massive human-generated social media trash pile?

Let's talk Substack for example. How many posts on both Notes and in blog comments could be substantially improved if a few quick lazy sentences by a human were replaced by an article generated by AI?

So, let's try it. So far, the above is a human generated blog comment. Let's see what ChatGPT could do....

=======================

The Case for AI-Generated Content Over Human Social Media Comments

In the ever-evolving landscape of the internet, the quality of content has become a pressing concern. Social media, a realm dominated by quick, reactive comments, often becomes a breeding ground for misinformation, negativity, and superficial discourse. However, the integration of Artificial Intelligence (AI) in generating content presents a promising alternative that could drastically improve the internet's informational and cultural ecosystem. Here's why replacing many human-generated social media comments with articles generated by AI could be a step towards a more informative, respectful, and constructive online world.

Enhanced Information Accuracy

One of the most significant benefits of AI-generated content is its potential for accuracy. AI systems can be programmed to adhere to factual information, drawing from a vast database of verified sources. Unlike human comments, which can be biased or uninformed, AI-generated articles could ensure that discussions are rooted in truth, reducing the spread of misinformation.

Constructive Engagement

Social media is notorious for its echo chambers and confrontational interactions. AI-generated content, designed to be neutral and informative, could foster more constructive engagement. By presenting balanced views, AI articles could encourage users to consider multiple perspectives, promoting a culture of understanding rather than conflict.

Reduction of Toxicity

The anonymity and immediacy of social media often lead to a toxic online environment, where harassment and bullying are rampant. AI-generated content, devoid of personal vendettas or emotional responses, could significantly reduce the level of toxicity. This change could create a safer, more inclusive online space, encouraging participation from those previously deterred by negative interactions.

Focus on Quality

Human comments on social media tend to prioritize quantity over quality, with many users posting reactions without substantial thought or relevance. AI-generated articles could shift this focus towards quality, offering well-researched and thoughtfully composed pieces. This emphasis on depth over speed could enrich the online discourse, making it more rewarding for both creators and consumers.

Educational Value

The educational potential of AI-generated content is immense. By replacing superficial comments with articles that provide context, explanations, and insights, the internet could become a vast learning resource. This shift could democratize education, making complex subjects accessible to a broader audience and inspiring a culture of lifelong learning.

Challenges and Considerations

While the benefits are clear, implementing AI-generated content as a replacement for human comments is not without challenges. Concerns about the loss of personal expression, homogenization of voices, and the ethical implications of AI in communication must be addressed. Ensuring that AI systems are transparent, unbiased, and respectful of privacy is crucial. Moreover, maintaining a balance where AI enhances rather than replaces human interaction is vital for preserving the internet's dynamic and diverse nature.

Conclusion

In conclusion, the replacement of many human-generated social media comments with AI-generated articles holds the promise of a more informed, respectful, and constructive internet. By prioritizing accuracy, engagement, and quality, AI has the potential to transform online discourse. However, this transition must be navigated carefully, ensuring that the human essence of social media is not lost but rather enhanced by technological advancement. As we stand on the brink of this transformation, it is crucial to consider the vast potential of AI to enrich our digital lives while being mindful of the challenges ahead.

Alberto Romero:

A few things I dislike about this for everyone to read:

That's terrible lol. You can't really believe that's an improvement. Also, why turn a sentence into a pile of generated text? The idea is to learn to condense and distill so that you don't waste other people's time.

It's also wrong. Many of the things ChatGPT generated are just false. You can, instead, check the references from my piece. Quality information written by humans.

Finally, did you read my arguments? This is not about what's the biggest source of junk (generative AI is rather recent, so no wonder most crap is made by us), but about the primary uses we're giving those tools. Btw, what you just did falls under the category of content farms, unintentionally proving my point.

Phil Tanny:

I'm comparing the average human contribution on social media to the average AI generated article. My argument is that, on average, generally speaking, the AI content is superior. Here's why....

Generally speaking, what motivates humans to post on social media is that we want attention and validation, or perhaps profit, for the least possible effort. And so most social media content is very brief and of limited value. AI is not distracted by the ego agendas which dominate social media. And AI is not lazy! As one example, the AI article above is correct in claiming that, to the degree human content on social media were replaced with AI content, toxicity and personality conflicts etc. would be substantially reduced, or eliminated.

I'm not comparing AI content to your articles, but to the average human generated content on social media, the world's largest source of content trash.

Yes, AI sometimes makes mistakes. As do humans. All content is reasonably questioned and challenged, whatever its source.

As you wisely suggest, let's look forward. It's likely that AI content machines like ChatGPT will continue to improve. It's not likely that human-generated content will improve in any meaningful way. So whatever the case may be right now, in coming years, generally speaking, on average, AI will increasingly outcompete humans in terms of quality content.

As far as social media goes at least, imho, we're already there.

Johnathan Reid:

I think the key problem here, Phil, is that the tool believes its own hallucination. It's as if it's been trained on a different data set to the outside world's dystopian take on AI to generate this response (as if its makers would consider such a thing...). It's impossible for it to do what it's claiming without genuine self-perceived experiences. It has no ethical or moral embedding to achieve any reasoning without reference to other external influences - good and bad [whose definitions are always tbc]. It doesn't genuinely perceive anything with which it can filter its statistical stream of collected and connected symbols. Are mindless 'thoughts' really thinking, let alone a force for positive human progress? If they are - and that's what we also do, regurgitating only what we've sucked in - then the argument doesn't matter. It's merely a further evolution of blind, purposeless inanity.

Phil Tanny:

Hi Johnathan, thanks for your reply.

I started out commenting on this blog by making the case that AI is generally a bad idea for humanity at this time, as it will generate accelerating social change faster than we can adapt. I still believe that. But AI is going to happen anyway, so....

My interest is in what kind of content AI can produce, either on its own or in partnership with humans. What I've seen so far, based on quite limited experience using ChatGPT, is that the content AI produces is often of higher quality than the content humans produce in many circumstances. Where that is the case, as is so often true in social media, it just makes sense to replace the low-quality human content with higher-quality content from AI.

I get the concern about content quality. So why don't we pull the plug on Substack Notes, Chat, and blog comments, which more often than not are mostly big piles of lazy human-generated junk? Why not erase Facebook and Twitter too while we're at it?

The simplistic premise driving many commentators seems to be:

Human content = Good

AI content = Bad

A better premise would seem to be that content quality varies considerably, both in the human realm, and in the AI realm. In some circumstances human content seems preferable, and in others AI seems to win.

Again, the larger picture is that AI content is likely to continually improve over time, whereas human content will likely remain pretty much the same as it's always been. If true, then whatever comparison we might like to make about AI vs. human content now is only good for now.

Alberto Romero:

The problem with this perspective is that you seem to judge content purely on quality. But quality is not what you get from social media or Substack comments. You get an opinion. You get ideas. You get access to other people's minds and, more broadly, to people's behavior online. Quality isn't important to me unless I'm reading an article or a book, for which AI is much worse. Why would I want to replace Twitter with bots? The existence of the platform would make no sense at all. It's weird to think in terms of what's better and what's worse in general, but in this case it's just counterproductive.

Phil Tanny:

Ok, if we're not concerned with quality then either lazy humans or careless AI use will do. :-)

Why should I be interested in other people's opinions, ideas and behavior if, as you seem to be agreeing, we typically don't get quality from social media and Substack comments? Am I supposed to value humans above all else no matter what they produce?

How many times have you read people here and elsewhere complaining that sites like Facebook and Twitter are typically little more than massive trash piles? Dissatisfaction with social media platforms seems pretty common.

Phil Tanny:

When we read humans posting on the Internet, we're engaging with a human speaking through their computer.

When we read AI content, we're still engaging with humans, this time speaking through both a computer and a gen AI system. In order for AI content to exist on the Web, somebody has to write the prompt. Somebody has to decide if the generated output serves their purpose. Somebody may edit the generated content a number of times before sharing it.

When you watch a movie you're engaging with a movie director. But the director may not have personally written the screenplay.

Alberto Romero:

Come on Phil, you know your argument doesn't apply to what we're talking about. When we read AI content we're still engaging with humans? If you stretch the meaning of that sentence until it becomes essentially meaningless then yes. But that's not what we're doing here if we want to find some deeper truth, right?
