The One AI Skill That Worries Sam Altman More Than Intelligence
It's not what it can do but what it can make you do
Sam Altman isn’t afraid of artificial general intelligence (AGI). He thinks the post-AGI world will be “amazingly great.” He concedes there will be drawbacks, like job losses, but believes they will be outweighed by corresponding gains, like access to even better jobs. He also admits there’s a non-zero chance that things go “quite wrong,” yet he dismisses that small possibility as unworthy of further consideration.
But there’s something he sees as a real risk. Something he has mentioned several times since OpenAI released ChatGPT in late 2022. He doesn’t consider it an “if” question but a certainty; what matters is when it will come and how we can prepare.
I’m talking about personalized disinformation: information designed specifically for you, so that you perceive a manufactured reality. Think of a dedicated Matrix that reinforces your existing views to shut down your critical thinking, ignites your passions to hijack your primitive limbic system, and hides any hint of a shared external reality to spare you all conflict.
That certainly sounds like social media, but it’s not the same thing. Social media is indeed a disinformation machine that creates huge problems for the information ecosystem, but it is neither as refined nor as precisely targeted. The nightmare I’m describing is far worse.
Personalized disinformation isn’t here yet, but powerful multimodal AI models like GPT-4 will help make it real. That’s our future: the one Altman is building, and the one he truly fears.