Discussion about this post

Paul Topping

I agree with all of this. I always imagine that the discussion inside these AI companies is more about making LLMs earn their keep than about reaching AGI. They don't have a visible path to AGI, at least not among the researchers with their feet on the ground. There will always be a few who sit waiting for AGI to emerge, if only they could find the right spell.

As for the discussion outside the AI companies, that is marketing- and investment-speak. They are only too happy to hint that AGI is just around the corner. They are doing real damage by creating hype and false hope, and by diverting investment away from actual AGI research.

Stefano

As a non-techie who enjoys reading about AI, but who doesn't use it (because I don't actually need to), I'll throw in my 2¢.

I'll approach the question differently, and hopefully someone else can refine the argument. In short, I think all the issues you're describing are 'a feature, not a bug'. It's just that the monster is so ugly, and each one of us is so much a part and parcel of it, that being honest about it would require a completely different approach to life.

By way of analogy, let me set the stage: years ago, as an expat returning to Italy, I was incredibly frustrated by all manner of things in this country. Everything from too much bureaucracy, the costs of freelance work and of creating a startup, and general ignorance (for example, before 2010, when the 3-year undergraduate degree was introduced to standardize with the rest of Europe, only 14% of university starters would finish), to pretentious pseudo-intellectuals. Over the years I've lost count of how many Italians have told me that Italy holds 90% of the world's culture. I'm not kidding, it's that bad, and I could go on and on. My point is that, under the hood, this is a very complicated and complex country. Tourists come and go and think it's great, while very few businesses invest.

At a certain point my quality of life improved substantially when I incorporated two axioms into my mindset: (a) Italy is a third-world country masquerading as an imaginary first-world country, and (b) all these weird contradictions, like undergrads who would never finish, implying a broken higher-education system, were intentionally designed to be so.

Yesterday, replying to an author in the woo sci-fi universe, I linked to a post by the author of Primer (a sci-fi novel about AI). It becomes clear what the problem with AI (LLMs) is if we contemplate what they could be: values-aligned, tailored for age and growth, with built-in friction, and above all with purpose. Another good example from sci-fi is Iron Man's Jarvis. This is what our imagination tells us AI could be if we designed it with vision and purpose (and the Primer novel is a kind of antihero take on this).

So when I read about the AI industry trying to align values, I laugh hysterically. They're trying to avoid lawsuits; they're not interested in aligning values, or else we wouldn't even be reading about how LLMs are ruining learning and education, or about people getting addicted and becoming unable to do what they could do without AI a few years ago. I don't think it's because the industry has set the bar too high (as you wrote), but because they don't even know what virtuous values to align with, and even if they did, those values would be secondary to profit. Their ecosystem, our socioeconomic space, has been intentionally designed to be this way.

That's the sad but true reality we live in.

So I read your article, and I've been enjoying your musings on the subject for a while now (as a free sub, apologies for being a freeloader). Everything I write here could be (and has been) developed further, but the gist of the criticism should concern what's under the hood of the Silicon Valley business models. These features, which aren't bugs, have destroyed and are ruining the lives of billions of people. Again, sad but true (e.g. social media).

And the whole practice of throwing money at new tech and then demanding a return is a fancy way of saying money corrupts. You briefly touch on this with the libertarian-versus-venture-capitalist thing, but honestly, we haven't been living in a capitalist world for a while now, if ever. Today it's sliding toward feudal, oligarchic rent-seeking, but it wasn't all that great in the 90s or the 70s either (incidentally, here in Italy it's always been kleptocratic: we invented corporatism when we went all in on fascism).

My point is that all these people and companies can only be as good as the system allows them to be, if money is part of the equation. So dopamine-driven addictive loops are a feature. And hundreds of books have been written about the perpetual destruction of the stratum of micro and small businesses, the backbone of the middle class, as it represents about two-thirds of employment in every economy across the developed and "developing" world (see Amazon).

These are all features of the system. They're just as much a feature as industrialized processed food ruining health, pollution ruining air and water, or manipulated interest rates and printed credit boosting consumption while undermining the foundations of our societies. And I'm an optimist; I'm not writing this to forecast impending doom.

So, AGI. AGI is like going to Mars: it's a vision. In the 50s it was flying cars. It's not actually going to happen like in the dream, and when it does, it'll look a lot different, like the flying cars and drones rolling out today: they're ugly and impractical. We'll be lucky if we get self-driving cars done by 2030.

So LLMs today are glorified chatbots with search. When they hallucinate, the ontological-woo explanation is "that's some demonic shit right there, bro", while the materialist-grit explanation is that there's an error somewhere in the specs or the code, and realistically we're not able to achieve the level of perfection needed to remove all errors. And we're not even all able to agree on what intelligence or reasoning is, so "AGI" or "AI" sounds pretentious. But if we called them chatbots with search, that wouldn't be a sexy enough word choice to justify a $600bn outlay, would it?

We'll get AI agents, and when they hallucinate or go off script, there'll be a lot more at stake than burnt toast. I haven't watched the video, but Karpathy's "fallible agents" sounds about right. Then again, why not ask hedge funds and the like about their algos running low-latency trading and whatnot? They've probably been using agents for a while without calling them that (and their algos hallucinate too: look up micro market crashes, their frequency is actually quite scary).

But if we go back to what's necessary because it's a feature and not a bug, we won't get AI agents. They'll be called AI agents, but in practice we'll be their assistants. If we extrapolate from what's happening now, between the dopamine-driven addictive loops and early heavy users of LLMs realizing they've forgotten how to do things without them, the future looks bright for everyone who doesn't use this tech. That's a scary vision right there.
