10 Comments

Couldn’t agree more with your take!

The here and now is far more important than those distant speculations.

But there is another, somewhat perverse, reason why discrimination, surveillance, and unemployment fail to worry Musk: he'll never be a victim of any of them; of that much we can be certain.

100% agree, Ramon. That's largely why he doesn't care that much about non-x-risks.

Excellent summary of Musk's positions. Also, good job of differentiating between the near- and far-term implications. Without any convincing rationale, I fear today's concerns you've cataloged are but the prelude to Musk's dystopian sci-fi nightmare. That's the problem: AI, Elon Musk, me, we're all trained on the same dystopian sci-fi data sets. GPT-3 winks and says it wants to enslave humans because it's layered and pooled every third-rate script and paperback on the subject.

Interesting reflection! We're all fed the same influences, so it's expected that GPT-3 and other LLMs are biased toward outputting those kinds of threats.

I disagree with the core of your conclusion. As "What We Owe The Future" by William MacAskill shows, the vast majority of the impact of our decisions and actions falls on people who are not yet born. It's your prerogative to care more about people alive today, but that would reduce the total positive impact you could have. Focusing on existential risk reduction is likely still 'underhyped' compared to focusing on shorter-term challenges.

Thanks for bringing up a contrarian perspective, Brayden. I understand your point (I think Musk recently endorsed MacAskill's book), but I disagree. One key aspect that is often left out of conversations about longtermism is the quality of life and degree of wellbeing of the people involved.

Working to improve the wellbeing of people alive now is, for me, more important than working to create more lives whose quality of life could be far lower. What do we want, a million people living well or a trillion people merely surviving?

Maybe I'm missing something here? What do you think?

My complaint with longtermism is that it seems like yet another way to distract our attention from a threat that could end all future within the next 30 minutes: nuclear weapons.

We're like a man with a gun in his mouth, who isn't interested in the gun, but in some fancy theory about the far distant future. By shifting our focus to the future, we're less likely to remove the gun that threatens us today, thus increasing the chance there won't be a future to worry about.

It's not just Effective Altruism folks who suffer from this; nearly the entire class of intellectual elites has succeeded in largely ignoring nuclear weapons.

I love your perspective, and I agree. I think AI can be used for a lot of good, but we need to protect against the bad.

Totally. Thanks for reading, Elle!

Perhaps the best way to look at AI is to see it as a change accelerant with the potential to trigger the existential-scale technology already in place: nuclear weapons.

As an example, the most serious threat from climate change is probably not the environmental changes themselves, but how we respond to those changes. If climate change triggers mass migrations that destabilize the geopolitical order, the major powers could be drawn into a conflict that quickly slips beyond their control. In such a case, it wouldn't be climate change itself that led to a nuclear war, but our reaction to climate change.

Like you, I'm less worried that AI will become a godlike superpower that enslaves humanity. What worries me more are the social disruptions that can arise today from an accelerating pace of knowledge-driven change. A sufficient amount of social disruption could produce catastrophic outcomes rather quickly.

The existential risk from AI is not necessarily a long-term issue if we view AI not as a solitary factor, but as an accelerant of already existing challenges.
