42 Comments
Birgitte Rasine:

And you know, of course, that any time we hear someone claim they're building tech to give the gift of democracy (whether in art, politics, or knowledge) to humanity, if you squint just a little you'll see the glint of dollar signs... and the electric sizzle of power. True democracy doesn't start with a tech tool.

Monica Mistretta:

I agree with you 100%. It is very scary, as a journalist and as someone trying hard to write "the book of my dreams," to accept Gen-AI as a means of producing content that steals from others' creativity. And it is also disgusting that, for now, the new tools are being used for such repulsive BaaS. Congratulations on your enlightening post!

Chris Hurst:

Interesting perspective

Sewer Kitten:

The math checks out, at least based on my favorite definition of bullshit: https://en.wikipedia.org/wiki/On_Bullshit

Alberto Romero:

Exactly the definition I had in mind

A.J. Sutter:

I don't have any interest in reading an AI-written novel, or even an AI-assisted one. Nor do I understand the arguments that (i) people are entitled to write the book of their dreams despite a lack of effort or skill on their part, and (ii) effort in writing prompts equates to effort in writing a novel. If (ii) were true, just write the novel the old-fashioned way.

I've written non-fiction professionally, and I definitely agree with Thomas Mann's definition, "A writer is someone for whom writing is more difficult than it is for other people." I think anyone who has tried to write conscientiously will identify with it (and I think a lot of real writers will enjoy engaging with that difficulty). Those who claim that writing skill needs to be democratized seem to believe exactly the opposite of Mann's statement, and that an LLM will allow them to level the playing field with professionals, for whom writing is supposedly easier. That's a complete misunderstanding of what writing is.

Alberto Romero:

I agree with you, but I'm not sure every great writer would agree with Mann's statement. I don't think there's a perfect rule that applies to all writers. Nonetheless, that doesn't mean we should give away the gift to those who deserve it less, which is what I argue and seems to be happening.

Masaru:

Deception follows mass adoption in a lot of cases. This includes AI, unfortunately. It is sad to see fake images circulating like this, as they are already hurting people.

Chris Hurst:

Anything can be used as a tool for bad

Schroedinger's Octopus:

"We believe ourselves to be so complex yet are actually so simple. We are the apes who understood the universe—from an invisible prison. We keep extending our arms high up in the air, reaching ever further, but our feet won’t leave the ground."

I so agree with that. I often say "the only way to become truly human is to accept that we are animals". The prison is how evolution designed our brains for survival 100,000 years ago and how difficult it is to move past it. We are in the evolutionary matrix. We are designed for scarcity (of food, relationships, incoming data to our senses) and we built a civilisation that creates the opposite.

Alberto Romero:

Agreed. Reflecting on the contrast between evolution and civilization can be extremely insightful.

Kay Lew:

Maybe this is as far as we can go? Maybe it's an argument for us being a transitional form?

Stephen Moore:

Great headline!

Chris Hurst:

Interesting points

May:

Agreed, technology can be used as a gift or otherwise; it's a choice and a matter of intention.

Johnathan Reid:

Given what you say (and I'm broadly in agreement with), why are you recommending a substack which serves as a prompted BaaS? This popped up when I subscribed to you:

"Recommended by Alberto Romero

Write With AI By Nicolas Cole

Turn ChatGPT and other AI tools into a personal writing companion. Write With AI offers carefully chosen prompts every week to craft viral content, build an engaged audience, and rapidly expand your digital business."

Alberto Romero:

I'm not against integrating AI tools into a writing workflow. I say that much in the article.

But it seems it's impossible to have a nuanced opinion nowadays. You have to choose black or white. If you criticize people who do X, how can you dare agree (or just not disagree) with anything that's within a certain radius of X?

Johnathan Reid:

My take was only that using LLMs to 'craft viral content' at a faster rate falls very close to X.

Alberto Romero:

That's what they say, but that's just marketing copy. What they do (I know Cole from Medium) is help people learn to use ChatGPT, basically. It's nowhere near the same as a company that literally makes content to spread across the web just to fill it with, idk, ads about stuff. If they did that, they wouldn't have my recommendation but my report.

Johnathan Reid:

I see your point. But one consumer's BS take can be another marketeer's copy. The beauty of an ad is in the eye of those beholden to a brand. Apple is an excellent example.

Michael Woudenberg:

Fantastic essay and I 100% agree. The volume of sheer bullshit and snakeoil taking over LinkedIn and Facebook in groups that used to actually be interesting discussion groups on AI...

Amihai Loven:

The essence of greed... good that you wrote about it so directly. I invite you to dive into QLN.life

Noam Miske:

Absolute frames of reference do not exist; the individual self, with its limited intelligence and boundless ego, is merely an illusion born from emergent physical phenomena that take place within the human brain. This certainly applies to bullshit AI researchers, bloggers, and writers who publish a lot of informal bullshit in the form of plagiarized research and commentary in non-peer-reviewed media. It is very easy to throw stolen, LLM-curated opinions into the for-profit echo chamber and then claim righteous authorship of circular, but certainly entertaining, discussions.

It is crucial to approach the scientific and philosophical aspects of AI with honesty, rigor, and humility, recognizing our own limitations, the immense potential for AI to benefit all humanity, and the people who risk careers and fortunes to make that happen.

Alberto Romero:

Is this a convoluted way to defend generative AI given the evidence? Mine isn't a criticism of the technology but the companies that build it with unethical practices and especially the users who misuse it.

Kay Lew:

The Tower of Babel continues to win.

Sheila Dean:

I see a contest coming between people's need to NOT use AI for conventional things like banking & commerce and companies' "solution" of plying hyper-securitized mandates rooted in Bush-era Know Your Customer policies, which will be applied to all things an AI can generate, aggregate, or integrate with.

There is a CFTC request for comment on things like this. The deeper problem here is integration for speculation, to see if a commerce system works. They aggregate a lot of data. It is commodified with little amenity for data security and data privacy. So just because a tech business would crown itself King of a thing doesn't entitle it to plumb every crumb of my personal data in perpetuity, or to use online banking as a means to coerce the markets into an identity mandate funnel.

https://www.dwt.com/blogs/financial-services-law-advisor/2024/02/cftc-ai-task-force-seeks-comments-on-uses-risks

Dave Friedman:

I have this theory, which I have been noodling on and about which I will write soon: as AI advances and becomes more powerful, the world will become more volatile. This will negatively affect the institutionalists and favor the dynamists. But alongside the institutionalists, it will also negatively affect those who are susceptible to manipulation, deepfakes, etc. May we live in interesting times, as the saying goes.