55 Comments

And you know, of course, that any time we hear someone claim they're building tech to give the gift of democracy (whether in art, politics, or knowledge) to humanity, if you squint just a little you'll see the glint of dollar signs... and the electric sizzle of power. True democracy doesn't start with a tech tool.

Nice

I agree with you 100%. It is very scary, as a journalist and as someone who is trying hard to write "the book of my dreams," to accept Gen-AI as a means to produce content that steals from others' creativity. And it is also disgusting that, for now, the new tools are being used for such repulsive BaaS. Congratulations on your enlightening post!

Interesting perspective

The math checks out, at least based on my favorite definition of bullshit: https://en.wikipedia.org/wiki/On_Bullshit

Exactly the definition I had in mind

I don't have any interest in reading an AI-written novel, or even an AI-assisted one. Nor do I understand the arguments that (i) people are entitled to write the book of their dreams despite a lack of effort or skill on their part, and (ii) effort in writing prompts equates to effort in writing a novel. If (ii) were true, just write the novel the old-fashioned way.

I've written non-fiction professionally, and I definitely agree with Thomas Mann's definition, "A writer is someone for whom writing is more difficult than it is for other people." I think anyone who has tried to write conscientiously will identify with it (and I think a lot of real writers will enjoy engaging with that difficulty). Those who claim that writing skill needs to be democratized seem to believe exactly the opposite of Mann's statement, and that an LLM will allow them to level the playing field with professionals, for whom writing is supposedly easier. That's a complete misunderstanding of what writing is.

I agree with you, but I'm not sure every great writer would agree with Mann's statement. I don't think there's a perfect rule that applies to all writers. Nonetheless, that doesn't mean we should give away the gift to those who deserve it less, which is what I argue and seems to be happening.

Deception follows mass adoption in a lot of cases. This includes AI, unfortunately. It is sad to see fake images circulating like this, as it is hurting people already.

Anything can be used as a tool for bad

We believe ourselves to be so complex yet are actually so simple. We are the apes who understood the universe—from an invisible prison. We keep extending our arms high up in the air, reaching ever further, but our feet won’t leave the ground.

I so agree with that. I often say "the only way to become truly human is to accept that we are animals". The prison is how evolution designed our brains for survival 100,000 years ago and how difficult it is to move past it. We are in the evolutionary matrix. We are designed for scarcity (of food, relationships, incoming data to our senses) and we built a civilisation that creates the opposite.

Agreed. Reflecting on the contrast between evolution and civilization can be extremely insightful.

Maybe this is as far as we can go? Maybe it's an argument for us being a transitional form?

Great headline!

Interesting points

Agreed, technology can be used as a gift or otherwise; it is a choice and a matter of intention.

Given what you say (which I'm broadly in agreement with), why are you recommending a Substack which serves as a prompted BaaS? This popped up when I subscribed to you:

"Recommended by Alberto Romero

Write With AI by Nicolas Cole

Turn ChatGPT and other AI tools into a personal writing companion. Write With AI offers carefully chosen prompts every week to craft viral content, build an engaged audience, and rapidly expand your digital business."

I'm not against integrating AI tools into a writing workflow. I say that much in the article.

But it seems it's impossible to have a nuanced opinion nowadays. You have to choose black or white. If you criticize people who do X, how can you dare agree (or just not disagree) with anything that's within a certain radius of X?

My take was only that using LLMs to 'craft viral content' at a faster rate falls very close to X.

That's what they say, but that's just marketing copy. What they do (I know Cole from Medium) is help people learn to use ChatGPT, basically. It's nowhere near the same as having a company that literally makes content to spread on the web just to fill it with, I don't know, ads about stuff. If they did that, they wouldn't have my recommendation but my report.

I see your point. But one consumer's BS take can be another marketeer's copy. The beauty of an ad is in the eye of those beholden to a brand. Apple is an excellent example.

Fantastic essay and I 100% agree. The volume of sheer bullshit and snakeoil taking over LinkedIn and Facebook in groups that used to actually be interesting discussion groups on AI...

The essence of greed... good that you wrote so directly. I invite you to dive into QLN.life

What annoys me reading these hindsight pieces is that we DID KNOW this was going to happen.

I wrote a piece about all the sins that went into making our beloved AI a thing (https://substack.cloudbuilder.io/p/ai-and-chatgpt-the-bullshit-generator) and that was built on a piece by Dan McQuillan where he coined the phrase "Bullshit generator".

The hand-wringing now that we've supercharged our way, some might say recklessly, to where we are, and the realization that "oh no, maybe we should have thought about this a bit more," just makes me feel some type of way, like "what did you think was going to happen?"

I'm sorry, not having a go at you, but this type of thing that happens often in tech just pisses me off.

If we cared enough about who it might affect the worst, instead of who it would benefit the most (we assumed it would be just "us," right?), we might not be so surprised by any degree of the outcomes we're seeing.

I've written about this many times before. I also knew it was going to happen. You have to keep writing about it once data is available and not just predictions. It adds power to a story.

There's also a better side to all this. We shouldn't be limited to one side or we risk becoming inescapably biased. That's the problem of preconceptions. I'm all in favor of truth, as nuanced as it is, like it or not, not discourse.

The deepfake problem, for instance, couldn't be more complex. No easy story or narratively enhanced prediction could have anticipated that. Data and knowledge, and the complex realities they reveal, are paramount.

100% agree that we can have predictions but should then let the data bear out the end result. I'm also in favour of not limiting ourselves or being biased (as my post covered).

So, I agree with everything you said up until the deepfake paragraph...

Just so I'm not misunderstanding you: are you saying no one could have predicted that human beings would abuse a technology that gives them the capability of impersonating anyone in the world in a hyper-realistic way?

Did we need a prediction to tell us humans are capable of, and have carried out, the worst things, using tech?

No, I'm saying the problem is much more complex than we tend to think. Some of the references in my article are really good to understand why and how.

Absolute frames of reference do not exist; the individual self, with its limited intelligence and boundless ego, is merely an illusion born from emergent physical phenomena that take place within the human brain. This certainly applies to bullshit AI researchers, bloggers, and writers who publish a lot of informal bullshit in the form of plagiarized research and commentary in non-peer-reviewed media. It is very easy to throw stolen, LLM-curated opinions into the for-profit echo chamber and then claim righteous authorship of circular, but certainly entertaining, discussions.

It is crucial to approach the science and philosophical aspects of AI with honesty, scientific rigor and humility, recognizing our own limitations and the immense potential for AI to benefit all humanity, and the people that risk careers and fortunes to make that happen.

Is this a convoluted way to defend generative AI given the evidence? Mine isn't a criticism of the technology but the companies that build it with unethical practices and especially the users who misuse it.

The Tower of Babel continues to win.

I see a contest coming between people's need to NOT use AI for conventional things like banking and commerce, and companies' "solution" of applying hyper-securitized mandates rooted in Bush-era Know Your Customer policies, which will be extended to everything an AI can generate, aggregate, or integrate with.

There is a CFTC request for comment on things like this. The deeper problem here is integration for speculation, to see whether a commerce system works. They aggregate a lot of data. It is commodified with little regard for data security and data privacy. So just because a tech business would crown itself King of a thing doesn't entitle it to plumb every crumb of my personal data in perpetuity, or to use online banking as a means to coerce the markets into an identity mandate funnel.

https://www.dwt.com/blogs/financial-services-law-advisor/2024/02/cftc-ai-task-force-seeks-comments-on-uses-risks
