The here and now is far more important than those distant speculations.
But there is another, somewhat perverse reason why discrimination, surveillance, and unemployment fail to worry Musk: he will never be a victim of any of them; of that much we can be certain.
100% agree, Ramon. That's largely why he doesn't care that much about non x-risks
Excellent summary of Musk's positions. Also, good job of differentiating between the near- and far-term implications. Without any convincing rationale, I fear today's concerns you've cataloged are but the prelude to Musk's dystopian sci-fi nightmare. That's the problem: AI, Elon Musk, me, we're all trained on the same dystopian sci-fi data sets. GPT-3 winks and says it wants to enslave humans because it's layered and pooled every third-rate script and paperback on the subject.
Interesting reflection! We're all fed the same influences, so it's no surprise that GPT-3 and other LLMs are biased toward outputting those kinds of threats.
I disagree with the core of your conclusion. As "What We Owe The Future" by William MacAskill shows, the vast majority of the impact our decisions and actions will have falls on people who are not yet born. It's your prerogative to care more about people alive today, but this would decrease how much positive impact you could bring in total. Focusing on existential risk reduction is likely still 'underhyped' compared with focusing on shorter-term challenges.
Thanks for bringing up a contrarian perspective, Brayden. I understand your point (I think Musk recently endorsed MacAskill's book), but I disagree. One key aspect that is often left out of conversations about longtermism is the quality of life and degree of wellbeing of people.
Working to improve the wellbeing of people alive now is, for me, more important than working to create more lives whose quality of life could be far lower. What do we want: a million people living well, or a trillion people merely surviving?
Maybe I'm missing something here? What do you think?
I love your perspective, and I agree. I think AI can be used for a lot of good, but we need to protect against the bad.
Totally! Thanks for reading, Elle!