30 Comments
Dec 14, 2023 · Liked by Alberto Romero

lovely piece. I've been so enjoying your work for all the reasons you've outlined. TAB feels like a warm cabin in a snowstorm!

author

Thank you Adam 🙏🏻 That's exactly what I want it to feel like! Especially as a newsletter about AI, which I think stands out precisely because of that.

Hey Alberto, I’m wondering if you might pen a post on the future of writing instruction based on these insights. Sort of a follow up to your summer posts on education. Would love to see what you have in mind!

author

Interesting idea, Nick! I'll definitely consider it.

Great piece, Alberto. I’m really curious to know (perhaps you could write on this) how to develop the human connection while writing.

Is it something that can be learned?

author

Of course. I learned it. I think others have written about that much better than I ever could. But I can tell you that it can be learned (what you won't find is a precise manual or recipe for it - it's one of those things you only truly learn by doing). I wouldn't skip those resources, though. They can give you hints that might otherwise take you a while to figure out on your own.

Very good. I'm glad I became a paying subscriber! I was in doubt. No more.

author

Thanks, Paulo, for your trust!

Dec 19, 2023 · Liked by Alberto Romero

You make the words "just a human" a compliment for the rest of us. Keep it up!

The glaring truth is that AI tools have emerged and more advanced GPT tools will be created. They might dull our capacity for deep, creative thinking, but they will never replace our heart-to-heart conversations. They can only make our work more efficient; they won't have the emotions that we have for each other.

In June this year, I delved into what could go wrong for us as AI continues to be developed. It is worrying when we let AI replace our most precious asset - the mind. Check out the link below:

https://open.substack.com/pub/thestartupfromafrica/p/with-the-proliferation-of-artificial?r=m5mq1&utm_campaign=post&utm_medium=web

Dec 13, 2023 · Liked by Alberto Romero

Do you think the same logic applies to all other mediums and content types?

author

I'd say it applies to every medium but not to every creative profession. I'm talking about my experience as a free Substack creator; if I were a freelancer instead, things would be different. I will write about this from that perspective.

All of us with an established voice will benefit, but starting fresh will be harder.

AI lets people like us distribute our voice and persona further, such as through AI-generated voiceovers of our posts in our specific likeness: https://podcast.interconnects.ai/

author

But does AI add a significant difficulty factor over how difficult it was already to succeed as a writer? (Btw great newsletter Nathan, high value!)

Honestly, for me, I feel like AI is making me better (more interesting self-feedback, the ability to scale to multiple modalities like voice + images with a Python script, and more self-reflection about what the human element of writing is).

AI substantially adds to the noise at the bottom, so the barrier to entry is effectively higher. Not sure that's an answer.

Your points are very thoughtful and well taken, and your readers are more thoughtful and engaged than most. That said, please remember the first part of that truism: “You can fool SOME of the people, ALL of the time…” Unfortunately that fraction is now growing every day, and they will indeed be proving themselves to be fools once again. They are a dangerous lot, especially when their stupidity or ignorance is properly harnessed by convincing AI flooding the zone with passable shit. After all, we went to war in Iraq over a false story, created about false WMDs, told by an authority figure (Colin Powell) with mere depictions (drawings, actually) of labs which ended up being entirely fabricated. That fooled “most of the people, some of the time.” Yet the agenda to invade was moved forward against more reasoned positions. The result: millions of lives lost over decades and trillions of dollars wasted. Corporations making colossal shitloads of money, if not outright siphoning money through corrupt practices. Meanwhile dumb fucks thought it was all about “freedom.” I made a pretty good living off those wars as a DHL contract pilot flying the mail into Iraq and Afghanistan. An old high school classmate once thanked me for “my service.” Stupid fuck.

AI is currently restricted from calling anyone a “dumb fuck”, even when they are! Imagine an AI that can eventually get away with that. They’re going to automatically sound much more human, especially if they can make a good case for it.

People were even fooled by Eliza when it debuted in the 1960s. Those dim bulbs will always lack the nuance to appreciate human-written content and they’ll be completely taken in by processed crap. Not your readers per se, but large swaths of humanity who lack the sophistication or the desire to know any better. Some people absolutely LOVE processed foods, in spite of the fact that they degrade their health and ultimately their quality of life, and can even shorten their lives. AI-processed crap will have its own set of bad outcomes. A great many humans value lies above truth and authenticity. This will eventually be a colossal fuck’n problem.

Alberto writes, "AI might eventually do most things better than us, but I’m convinced they will never replicate our human essence."

Not replicate. Imitate. And imitate in an ever more credible manner over time.

Alberto writes...

"The reason I’m calm is much simpler and certainly more human: I’m working very hard to make the relationship with you as humane as I can make it. Call me delusional, but I firmly believe that, as a human being, you value that above all else."

Ok, if you insist, you're delusional. :-)

Human beings don't value each other just because we're human. We value what we get from each other. Any system which can provide the value we seek in a manner which is cheaper, easier, more convenient, more customizable etc will present a threat to human to human relationships.

As we've discussed before, we're doing a version of that right now, right here, on this blog. Neither of us insist on meeting the other in person.

author

"Any system which can provide the value we seek." That's where we disagree. No system other than a human can provide the value you seek. At least regarding writing.

Perhaps you're referring to now, and I'm referring to what is coming? And, it seems to depend on what value one is seeking.

Today, AI can provide basic information about topics which aren't crucial. Here's an example: https://hippytoons.com/s/bands. But, it wouldn't be wise to use today's AI for things like medical or legal advice, topics where factual accuracy is crucial.

The situation today seems similar in fiction. Children's stories yes, award winning literature, no. https://hippytoons.com/s/fiction

Is there some hard limit which AI will never be able to surpass? Or will AI continue to get better and better until it can equal and/or surpass the quality of human writing?

My guess: Some humans love to write, and AI can't stop them from doing so. So human writing will continue. Making a living as a writer may be a different story, at some unknown point in the future.

author

You hit the nail on the head, Phil: it depends on what value one is seeking. But I defined the terms of the debate in my article! If you're *just* seeking info, then you're not here; you're probably on Google or even ChatGPT. If you are seeking a connection with a human writer, an AI writing tool can't provide that. My argument is that humans want the kind of value that only humans can provide.

But I want to qualify that assertion by saying that I'm talking from my experience as a newsletter writer. If I were a freelancer, that would be a different story. But does AI really add a degree of difficulty to succeeding as a freelancer that's significant compared with how hard it was in the first place? Perhaps. I don't know.

Dec 15, 2023 · Liked by Alberto Romero

We don't disagree that much. In the current AI situation, what you say makes sense. Few of us are going to spend the day chatting with ChatGPT, because it's not human enough, and we're still obsessed with the AI vs. human comparison, because the tech is so new.

The farther ahead in time we look, the less we may agree. Assuming, of course, that AI continues to improve, which is currently unknown and could be debatable. Consider this hypothetical, perhaps:

1) Real Alberto - super knowledgeable about a very popular topic.

2) AI "Alberto" - super knowledgeable about nearly every topic.

1) Real Alberto - will chat with me about AI a few times on each article

2) AI "Alberto" - will chat with me all day long every day on any subject

If we assume that AI will continue to improve, sooner or later it will be able to do everything you can do, and a lot more too.

If we were next-door neighbors who met regularly in person, it's hard to imagine AI being able to replicate that. But on the Internet, we're already halfway there.

One difference between me, and you and the Substack leadership, is that you guys have a big stake in the status quo, and I do not. I can't calculate the impact of that, but it's a factor that can be considered.

author

Important response, thanks Phil!

I agree - we don't disagree that much, because I think that as AI gets better, what I just argued won't apply, as you say. I think, however, we're further from that kind of human level than it might appear. For now, though, the way AI Alberto can chat with you differs radically from the way I can.

The part that interests me more is your last sentences. I agree I'm biased to believe I'm safe because I'm fine with how things are and wouldn't be if people began to prefer AIs to real humans (especially online). The key, I think, is how far away that kind of human-level AI really is. Substack's piece and mine both talk about AI as it is now, not any science fiction scenario we might imagine. Because predicting the future is extremely hard, and we don't really know what it takes to build an almost-human AI.

For now, I know I'm beyond the problems AI presents to writers because of the specifics of my job (other writers, like freelancers, aren't). Will I *always* be? No way.

But AI can't simply end my job the way the printing press immediately made scribes unnecessary. That's the difference. How the printing press would be used and what it could do was obvious. Scribes were done. But with AI, things are different. We are making wild extrapolations, believing what companies say about future progress in the field and taking their hopes to be true. The truth is that we don't know. For now, nothing suggests AI will be able to replace (not even online) everything a human can do to relate to other humans.

We can extrapolate the progress we've recently seen in AI and imagine it will, sometime in the future, be able to replace everything I can do as a human writer. I argue that's mostly an illusion born of our tendency to project the current state of the world into the future (most of the time rather wrongly). You always say thought is the seed of our problems as humans - this is another of the problems that come attached to the ability to think: the inability to realize we're making wrong predictions, or to realize we're wildly overestimating our ability to see what's next (this applies to you and to me).

That kind of superhuman AI will come if we try hard enough, but hopefully not soon enough for me to have to worry that much.

Yes, by my own behavior I'm of course agreeing that human Alberto is currently superior to AI Alberto. And yes, we don't know what is coming in AI development, or when it might come, or even if big improvements will come.

As to writers worrying, age seems quite relevant here. I'm 71, and so not worried at all. Are you in your 30s? If yes, you have to worry more than me, but not as much as those decades younger.

A speculative question you may wish to address in some future article could be, is there any hard limit to future AI development? If yes, that may settle these questions.

If no, then I think we can predict the path of future AI development in a very general manner by focusing on the question, what do humans want? If enough people want XYZ feature, then there will be money to be made by providing that feature, and that will steer development in the XYZ direction.

Alberto writes....

"I’m talking about the human idiosyncrasy that emerges out of living; out of the past experiences that aren’t exactly the same as those of our neighbors or our readers but sufficiently similar as to create a limbo of sensations and feelings of familiarity. No AI, however advanced, can enter that place."

AI doesn't have to "enter" that place. It will just mimic it.

author

It can't without living like a human, being like a human. That's what I'm saying.

I write an article about my experience of Covid. What stops some version of AI from successfully imitating that article, with "success" defined as most readers not realizing it's AI generated?

author

I don't define success like that - that's deception. It might be easy to make people think an article was written by you when it's AI-generated instead (although I'd say that doesn't say anything good about your writing!). What I'm saying is that it's still easier to feel like there's a person on the other side of the screen in general - not just in the articles you write, but also in these kinds of exchanges.

But even if I accepted your premises, I'd say: try. Try to make an article with ChatGPT feel like one written by a person. You will have a hard time because these systems, the ones that exist today, tend to linger around the center of the distribution of what's possible to say. That's why the writing feels bland, dull, and full of generalities. Humans, in contrast, write about what's unlikely and implausible. We behave very differently.

That's the issue with your thought experiment: you make the jump from "I wrote X article" to "what stops an AI from writing X article," but in practice, you find that there are actual things that stop it from doing so - not because no one will be fooled, but because you can access that place I refer to and AI can't.
