Alberto Romero:

Important response, thanks Phil!

I agree - we don't disagree that much, because as AI gets better, what I just argued won't apply, as you say. I think, however, that we're further from that kind of human-level AI than it might appear. For now, the way AI Alberto can chat with you differs radically from the way I can.

The part that interests me most is your last few sentences. I agree I'm biased to believe I'm safe because I'm fine with how things are and wouldn't be if people began to prefer AIs to real humans (especially online). The key, I think, is how far away that kind of human-level AI really is. Substack's piece and mine both talk about AI as it is now, not some science fiction scenario we might imagine, because predicting the future is extremely hard and we don't really know what it takes to build an almost-human AI.

For now, I know I'm insulated from the problems AI poses for writers because of the specifics of my job (other writers, like freelancers, aren't). Will I *always* be? No way.

But AI can't simply end my job the way the printing press immediately made scribes unnecessary. That's the difference. How the printing press could be used and what it could do was obvious from the start; scribes were done. With AI, things are different. We are making wild extrapolations, believing what companies say about future progress in the field and taking their hopes for truth. The truth is that we don't know. For now, nothing suggests AI will be able to replace everything a human can do to relate to other humans, not even online.

We can extrapolate the progress we've recently seen in AI and imagine it will, sometime in the future, be able to replace everything I can do as a human writer. I argue that's mostly an illusion born of our tendency to project the current state of the world into the future (most of the time rather wrongly). You always say thought is the seed of our problems as humans - this is another problem that comes attached to the ability to think: the inability to realize we're making wrong predictions, or that we're wildly overestimating our ability to see what's next (this applies to you and to me).

That kind of superhuman AI will come if we try hard enough, but hopefully not soon enough for me to have to worry that much.

Phil Tanny:

Yes, by my own behavior I'm of course agreeing that human Alberto is currently superior to AI Alberto. And yes, we don't know what's coming in AI development, when it might come, or even whether big improvements will come at all.

As to writers worrying, age seems quite relevant here. I'm 71, so I'm not worried at all. Are you in your 30s? If so, you have to worry more than I do, but not as much as those decades younger.

A speculative question you may wish to address in a future article: is there any hard limit to future AI development? If there is, that may settle these questions.

If there isn't, then I think we can predict the path of future AI development in a very general way by focusing on one question: what do humans want? If enough people want feature XYZ, there will be money to be made by providing it, and that will steer development in the XYZ direction.

Alberto Romero:

I'm 30, actually - good eye! I agree, it will depend on what humans want; that's what I mean by industrial AI (I've talked about that in a few recent articles). People don't care about AGI, and that's an important handicap.

Phil Tanny:

Someday grandpa will tell a story about what he was doing when he was your age. I promise, you'll come out looking VERY GOOD in that story! :-)
