16 Comments
Ricardo Acuna

This post reminded me of the dot-com bubble, when I was working at a startup in the year 2000. A lot of money was pouring into startups, and early internet-based products and solutions were overhyped. And suddenly, everything collapsed. Investors and people lost TRUST in the internet industry for a while. Over time, technology advanced, and now we're using many products and solutions that were envisioned back then and are real and useful today. I think something similar could be happening now. AGI isn't feasible, at least not in the short term; it's maybe 1-5 years away at the earliest. Could an AI industry bubble happen again? Who knows; we should learn a little from history.

Alberto Romero

Yes! And the point about usefulness is also very true. AI is very useful; there's no denying that, not that I'd want to!

A Horseman in Shangri-La

Hey Alberto,

Thank you for the insight you share here, amidst all the crazy hyped-up nonsense about Abolition Imagined. I'm a researcher on the business and IT risks related to AI, and all of it correlates exactly with what you're saying here.

Sincerely,

Nicolai Moiseyev

William Meller

The part where you talk about addiction made me think of Shoshana Zuboff's warnings in The Age of Surveillance Capitalism. She argues that the real product is not the tool, but the behavioral modification it achieves.

AI is the latest and most efficient way to do that at scale.

AI companies are monetizing prediction and shaping user behavior, so they can create dependence while claiming to "empower" users. It is right to call out the moral cost of this trade.

Great read!

Alicia Bankhofer

Sadly, every word is true. It's the volatility and lack of moral compass that irk me the most.

Alberto Romero

Personally, I find their lack of moral compass unacceptable. That's the one thing I genuinely despise them for; we know how much harm it causes...

Mohamed Shamekh

I think point V is the biggest problem surrounding AI right now. It's the public messaging and the perception. The culprits are both the people running these AI companies and, mostly, the way the media covers them. The lack of a middle ground has always been a hallmark of how the media treats any story, because middle ground doesn't get clicks. That's why we come to Substack for these things.

Alberto Romero

Yes, the incentives are not the best (I include myself here, because I try to show common sense but sometimes the reward of a slightly exaggerated story can be big), but I still blame the AI companies more than the media. Their words are all over social media; we can hear them say one thing one day and another the next.

Kevin Beck

LLMs and autonomous agents are overhyped. Meanwhile, I think there is too little reporting on robotics and automation. Amazon alone employs somewhere over a million people in its warehouses. If they succeed in replacing even a fraction of those people with robots, that's a lot of unemployment. Waymo might not have achieved 100% autonomous driving yet, but it doesn't need to for lots of human drivers to be displaced. The same applies to long-haul trucking. Imagine a world where trucks drive autonomously on easy-to-navigate freeway routes and humans take over for high-traffic city driving. Chatbots don't quite seem to be replacing white-collar workers yet, but I think this is a matter of people not yet having learned to use them effectively, and perhaps a lack of integration with other tools.

I consider AGI a distraction. If no new frontier models were produced for the next few years, there would still be a lot of societal disruption already in the pipeline as we learn how to use the existing models.

Alberto Romero

Agreed on both points, re automation/robotics and AGI as a distraction. Societal change will come, just not as utopia or apocalypse.

Paul Topping

I agree with all of this. I always imagine that the discussion inside these AI companies is more about making LLMs earn their keep than about reaching AGI. They don't have a visible path to AGI, at least not to the researchers with their feet on the ground. There will always be the few who sit waiting for AGI to emerge, if only they could find the right spell.

As for the discussion outside the AI companies, that is marketing- and investment-speak. They are only too happy to hint that AGI is just around the corner. They are doing real damage by creating hype and false hope, and by diverting investment away from actual AGI research.

Alberto Romero

I think they genuinely believe in AGI, just not that it's as close as they claim. It's a weird mix between "they're standard grifters" and "they're standard prophets".

Jurgen Appelo

Well, anyone who had trust in the AI industry was obviously naive. 🙂 People's beliefs depend on where their money comes from. Everything else follows from that.

Alberto Romero

Agreed. It was an important write-up nevertheless. It needed to be said.

Stefano

As a non-techie who enjoys reading about AI, but who doesn't use it (because I don't actually need to), I'll throw in my 2¢.

I'll approach the question differently, and hopefully someone else can refine the argument. In short, I think all the issues you're describing are "a feature, not a bug". It's just that the monster is so ugly, and each one of us is so much part and parcel of it, that being honest would require a completely different approach to life.

By way of analogy, I'll set the stage: years ago, as an expat returning to Italy, I was incredibly frustrated by all manner of things in this country. Everything from too much bureaucracy, the costs of freelance work and of creating a startup, and generally ignorant people (for example, only 14% of university starters would finish before 2010, when they introduced a 3-year undergraduate degree to standardize with the rest of Europe) versus pretentious pseudo-intellectuals. Over the years I've lost count of how many Italians have told me Italy holds 90% of the world's culture. I'm not kidding, it's that bad, and I could go on and on. My point is, under the hood this is a very complicated and complex country. Tourists come and go and think this country is great, while very few businesses invest.

At a certain point my quality of life improved substantially when I incorporated two axioms into my mindset: (a) Italy is a third-world country masquerading as an imaginary first-world country, and (b) all these weird contradictions, like undergrads who would never finish, implying a broken higher-education system, were intentionally designed to be so.

Yesterday, replying to an author in the woo sci-fi universe who linked to a post by the author of Primer (a sci-fi novel about AI), it became clear to me what the problem with AI (LLMs) is if we contemplate what they could be: value-aligned, tailored for age and growth, with inbuilt friction, and especially with purpose. Another good example from sci-fi is Iron Man's Jarvis. This is what our imagination tells us AI could be, if we designed AI with vision and purpose (and the Primer novel is a kind of antihero thing).

So when I read about the AI industry trying to align values, I laugh hysterically. They're trying to avoid lawsuits; they're not interested in aligning values, or else we wouldn't even be reading about how LLMs are ruining learning and education, or about people getting addicted and becoming unable to do things they could do without AI a few years ago. I don't think it's because the industry has set the bar too high (as you wrote), but because they don't even know what virtuous values to align with, and even if they did, those would be secondary to profit. Their ecosystem, our socioeconomic space, has been intentionally designed to be this way.

That's the sad but true reality we live in.

So I read your article, and I've been enjoying your musings on the subject for a while now (as a free sub, apologies for being a freeloader). Everything I write could be (and has been) developed further, but the gist of the criticism should concern what's under the hood of the Silicon Valley business models. These features, which aren't bugs, have destroyed and are ruining the lives of billions of people. Again, sad but true (ex: social media). And the whole throwing money at new tech and then demanding a return is a fancy way of saying money corrupts.

You briefly touch on this subject with the whole libertarian-vs-venture-capitalist thing, but honestly, we haven't been living in a capitalist world for a while now, if ever. Today it's sliding more towards feudal, oligarchic rent-seeking, but it wasn't all that great in the 90s or the 70s either (incidentally, here in Italy it's always been kleptocratic: we invented corporatism when we went all in on fascism).

My point is, all these people and companies can only be as good as the system allows, if money is part of the equation. So dopamine-addictive loops are a feature. And hundreds of books have been written about the perpetual destruction of the MSMI strata (micro and small businesses), the backbone of the middle class, which represents about two-thirds of employment in every economy across the developed and "developing" world (so, Amazon).

These are all features of the system. They're just as much a feature as industrialized processed food ruining health, pollution ruining air and water, or manipulating interest rates and printing credit to boost consumption while undermining the foundations of our societies. And I'm an optimist; I'm not writing this to forecast impending doom.

So, AGI. AGI is like going to Mars: it's a vision. In the 50s it was flying cars. It's not actually going to happen like in the dream, and when it does, it'll look a lot different, like the flying cars and drones rolling out today: they're ugly and impractical. We'll be lucky if we get self-driving cars done by 2030.

So LLMs today are glorified chatbots using search. When they hallucinate, the ontological woo explanation is "that's some demonic shit right there, bro", while the materialist grit explanation is that there's an error somewhere in the specs or the code, and realistically we're not able to achieve the level of perfection demanded to remove all errors. And we can't even all agree on what intelligence or reasoning is, so "AGI" or "AI" sounds pretentious. But if we called them chatbots with search, that wouldn't be a sexy enough word choice to justify a $600bn outlay, would it?

We'll get AI agents, and when they hallucinate or go off script there'll be a lot more than just burnt toast. I haven't watched the video, but Karpathy's "fallible agents" sounds about right; then again, why not ask hedge funds and the like about their algos running low-latency trading and whatnot. They've probably been using agents for a while without calling them that (and their algos hallucinate too: research micro market crashes; their frequency is actually quite scary).

But if we go back to what's necessary because it's a feature and not a bug, we won't get AI agents. They'll be called AI agents, but actually we'll be their assistants. If we extrapolate from what's happening now, between dopamine-addictive loops and early heavy users of LLMs realizing they've forgotten how to do without them, the future looks bright for everyone who doesn't use this tech. That's a scary vision right there.

Alberto Romero

Haha, I think we're saying more or less the same thing. I'm just doing it in a more compact way because I have to! I can't say "oh no, the world is so bad" and publish that, but I agree haha
