How to Draft a Low-Cost AGI-Proof Plan
Super powerful AI may never happen, but what's the harm in having a plan?
Don’t freak out. This post isn’t about some big release the public doesn’t know about. I could’ve written this two years ago; it’s just the rational, albeit atypical, reaction to what I expect to be the natural continuation of the last decade of AI development.
My decision to publish this now is arbitrary except for one reason: I’ve seen people who know more about AI than I do draft their plans for an AGI world, which is a pretty good hint you should do the same thing. Especially if the cost is low and the upside is high. As it is.
Generative AI is a phase (as are all concerns that stem directly and exclusively from it, e.g. hallucination or deepfake panic). The next phase is autonomous agents capable of reasoning and unbounded action, and then, sometime down the line, if no significant obstacle blocks the path, general AI (or human-level AI).
The implications of this trend are huge, but we don’t know what they’ll be, when they’ll arrive, or just how huge. We only know the path we’re walking leads there.
So, don’t freak out. Yet.
This post isn’t a prescription or invitation to change your life. You can read it and, as most of you will do, go on with your life as usual. Respectable but, allow me the offense, wasteful. You have the privilege of knowledge that very few in each generation have. Playing deaf is a waste of that knowledge—especially when making a quick plan or a to-do list can have a tremendous upside for little cost.
Admittedly, it’s a terrible idea to change everything, take all the risks, and go crazy in response to an uncertain prediction. It’s smart, however—and the highest possible return on investment if successful—to prepare for that uncertain prediction that becomes true eventually. That’s what I’m doing today and what I urge you to do—I want to inspire your prepper identity (we all have one, don’t hide it!).
I focus on AGI because this is an AI blog but the attitude and even the specifics of the plan can be translated to events under the category of “the unprecedented never happens until it happens.” Think climate change-induced mass migration or nuclear war between NATO and Russia (both of which could very well happen before human-level AI and be more destructive). This is a post about how to plan for the unplannable.
Making the right call is key, though. That’s why I’m drafting a plan, not an action.exe file. The important part is having the plan written down in case you have to implement it. If I made an incorrect prediction and acted mindlessly on it, I’d face a damned future (e.g. don’t cash out all your savings). Preparation isn’t execution. That’s a call for you to make.
Don’t think that because you’ve never had to make such a call you won’t have to in the future. I’m personally more worried about nuclear war than anything else right now, but I won’t dismiss the hints once they arrive, whatever the threat. We laugh at preppers, but they’re the ones who survive apocalypses. And, you know, if those with more than enough money are spending it on bunkers, perhaps I should buy the low-cost equivalent (a hut in the forest?).
Anyway, all this preamble is to say that powerful AI systems are likely coming (no natural law we know of prevents the existence of silicon-based intelligence; how to build it, or how hard it will be, is another question we don’t yet know how to answer). So if we can be ready, then we should.
I’m not alone in thinking this: among others, Tyler Cowen, Daniel Gross, Nick Cammarata, Henrik Karlsson, and Nabeel S. Qureshi have published some form of the same exercise, with varying degrees and shapes of “planning” and “future.” My ideas may not make sense to you, but perhaps they’ll encourage you to draft your own.
Again, most of you won’t do it. That’s fine. It’s more than fine, actually, because that’s a key reason why making a plan is so valuable for the rest. If we were all equally prepared, we’d end up stuck at some undesirable local equilibrium. Most of the things on my list aren’t zero-sum—everyone having them is a win-win. But you can imagine some could be (e.g. buying all the toilet paper from the nearest supermarket). For those, it matters that we’re better prepared than everyone else.
Enough ado. Let’s get into it. Here are eight things I plan to do before (and while and after) AI becomes super powerful.