Bill Gates, Microsoft co-founder, respected for the philanthropic work he’s done since stepping down as CEO, recently published a must-read blog post on the present and future of AI entitled “The Age of AI has begun.”
Gates, who can hardly be accused of being a techno-pessimist or anti-technology—much less anticapitalist—concluded with a set of principles that “should guide” the public conversation on AI. Here's the second one:
“[M]arket forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity.”
Gates advocating for regulations on AI is the help we didn’t know we needed. And his attack on the flaws of free-market forces is a big blow to OpenAI’s grand purpose: left to the market, AI can’t benefit all of humanity. Let’s see why.
AI to benefit all of humanity?
As we know very well because OpenAI’s PR department has ensured we do, the company’s ultimate purpose—and the reason they’ve built ChatGPT and GPT-4—is to create AGI to “benefit all of humanity.” Despite my critical stance against them, I believe they’re honest about this. They want to fulfill this promise.
Yet their definitions of “benefit” and “all of humanity” don’t necessarily match yours or mine. There’s some hidden irony here: If not all of humanity agrees with OpenAI’s definition of “all of humanity,” shouldn’t they change it? What they call AI alignment is only alignment with their values—not “human values” (whatever those are).
Not everyone thinks that keeping closed—or even creating—powerful AI models is the best way to benefit all. OpenAI is in favor of open-source AI, though, as long as there’s no competition. I agree that open-source by default may not be wise, as OpenAI’s Chief Scientist Ilya Sutskever told The Verge after the anticipated release of GPT-4, but preventing interested researchers from studying the model doesn’t sound very collaborative.
They’re also in favor of regulation (advocating for it since the very beginning), but just for others (or for them too, but not now). As Futurism’s Maggie Harrison writes, “while Altman has and is continuing to advocate for regulation, he and OpenAI are still operating without it. As of now, it's up to OpenAI to define what ethics and safety mean and should be—and by keeping its models closed, the company is itself asking the public to do a serious trust fall.”
And they’re in favor of fairness, but their actions scream “the end always justifies the means.” To benefit us, they’re willing to accept sacrifices in the form of psychological harm to workers and to perpetuate the inequality of life between wealthy and poor countries.
All this somehow reminds me of the ways of enlightened despotism: “Everything for the people, nothing by the people.” OpenAI is acting for all of humanity’s benefit, but only they can do it well, and their practices aren’t to be scrutinized by anyone.
So while their goals are worth pursuing and their vision is sincere (or so I believe), the particular circumstances with which they approach their mission—for-profit company, closed-source AI, unregulated space, SF-based rich founders, mainly white men, mainly tech backgrounds, etc.—distance it from its presumed objectivity and unquestionable altruistic nature.
Emerging technologies need regulatory oversight
It’s not OpenAI’s fault. They happen to be well-positioned to be the leaders of this AI arms race that has sucked in tech giants like Google and Microsoft. The decisions OpenAI execs have made to get here are perfectly rational under their business-tinted lens. I could defend them one by one convincingly if I shared their beliefs.
But nothing exists in isolation. And in the broader context of a shared world, a rational explanation isn’t enough to relieve OpenAI of responsibility. There must be an adequate legal framework to judge those “perfectly rational” decisions and assess whether they’re also perfectly ethical and socially welcome. That’s what Gates advocates for.
There are two historically consistent reasons why he’s right that we need AI regulation (and not just for the problems that might come but, most importantly, for those that exist here and now).
New tech disrupts the poor
Altman said it felt “very cool” to read “the first part” of Gates’ blog post. Yet there are important truths in the second half that he probably didn’t like as much: “[M]arket forces won’t naturally produce AI products and services that help the poorest.” No wonder Altman, already a millionaire when he founded OpenAI, didn’t mention this bit.
But Altman isn’t completely mistaken in his vision. One strong argument in favor of AI—and technology in general—that’s not only hard to refute but widely supported by evidence is that, given enough time, technological progress improves humanity’s quality of life and life expectancy (we’re not necessarily happier, though). There’s no denying that. We live much better than our ancestors—safer, with better resources within reach and access to improved education and healthcare—and, on average, we live longer.
That sounds like a W for tech, but there’s nuance to this argument—and we don’t need to romanticize the past (i.e., “they lived better back then”) to see it. As the fiction writer William Gibson said 30 years ago, “The future has arrived—it’s just not evenly distributed yet.” In some sense, this has always happened: Since when have emerging technologies benefited all of humanity from the outset? Tech improves societies, yes, but it first disrupts them. And who pays the cost? The poor (understood as everyone who, for one reason or another, doesn’t belong to the social, economic, or political elites).
In their article “Generative AI: autocomplete for everything,” Noah Smith and roon argue that new technologies don’t take over jobs but tasks (AI won’t take our jobs—a human using AI will) and that, in the end, they create more things to do than they take away. But the important details are the timing (how deep does the disruption run before the benefits begin to appear, and therefore how much suffering is created in between?) and the localization (who are the winners and who are the losers?).
The unending opportunity gap between rich and poor countries (and between rich and poor people within a given country) will tip the balance in favor, once again, of those who are already better off.
New tech widens the gap between rich and poor
Claims of the form “you live better than any 19th-century king,” although true (at least in most cases), only distract from the real issue. With time, newer generations reap the rewards of the technological seeds their ancestors sowed, which elevates everyone’s quality of life to some degree. However, it isn’t an equal process: the rich improve their quality of life more than the poor. There’s a tech-driven (and AI-driven) well-being gap that increases, not decreases, with technological progress.
What we have, then, is that technology first disrupts the poor more, and then, even if it eventually benefits everyone, it benefits the rich more.
Technically, even under no regulation—like today’s AI landscape—technologies benefit everyone after enough time. And that’s good. So if we kept to the current path, generalized benefits would arrive eventually. But the way these technologies are created and deployed—and the rules and norms that govern the processes by which it happens—make the transition far more pleasant for a very specific subset of people.
If we don’t have the adequate means to make the transition benefit everyone, it’s hard to defend the idea that the outcome will, without entering into some kind of contradiction.
When OpenAI says they want to benefit all of humanity, there’s an implicit promise of doing so equally across social strata—even if they never truly intended it that way—which makes it extremely incoherent that they then go and release a closed GPT-4 and outsource work to Kenyan laborers under miserable conditions.
An imperfect, imperfect world
This analysis would be lacking if I didn’t consider the framework under which OpenAI operates as a company, the framework that gives birth to such a wide array of new technologies that suffer from what I’ve described above. You guessed it, capitalism.
As it happens, we, as a society, face a considerable obstacle to making this transition (from new tech being disruptive to being beneficial) flawlessly—and to avoiding, in doing so, disrupting some while benefiting others. Saying capitalism = bad is extremely simplistic. But it’s undeniable that capitalism isn’t fond of the kind of regulations Gates alludes to (and, despite what OpenAI likes to think, that’s not going to change after AGI).
Capitalism defines the dynamics by which companies, users, AI systems, etc. interact. Even under a layer of protective laws, this highly imperfect system (not to say it’s the most imperfect or that the alternatives are perfect) makes claims like “benefit all of humanity” or “align with human values” sound idealistic, if not outright deceitful.
Does OpenAI expect to overcome the flaws of capitalist societies? Do they think AGI will be so incredibly powerful that it’ll singlehandedly end all the problems that derive from capitalism’s inherent inequalities? The post-scarcity utopia sounds appealing, but it takes on a bittersweet flavor if we consider the rules under which we play: Even the infinite resources of a post-scarcity world would be unevenly distributed under an imperfect system. Anyone who thinks otherwise is either very naive or dishonest.
The section’s heading says “imperfect” twice. The reason is that capitalism isn’t the only obstacle to creating AI that benefits all. AI’s imperfections are just as problematic. Not the systems themselves—although there’s some of that, too—but the ways companies train, create, deploy, and commercialize them. Solely blaming capitalism for the practices companies like OpenAI use to achieve their goals removes all accountability from them. They’re constrained by the system but could easily do things differently while playing by its flawed rules.
AI’s imperfections in an imperfect world mean there’s twice the imperfection falling onto the already burdened lives of the poor, discriminated-against minorities, and, more generally, everyone at the lower ends of the social hierarchy.
Both things are intertwined: capitalism and the practices for-profit companies engage in. DAIR’s statement in response to the FLI open letter points to “transparency, accountability and preventing exploitative labor practices” as the aspects regulation should focus on—that’s not only about AI but also about the system in which AI happens to exist. OpenAI isn’t changing that.
Sam Altman is wrong. Bill Gates is right
Even if Sam Altman honestly wants OpenAI to benefit all of humanity (I don’t doubt he does), he’d need to change a lot more than he can to fulfill that promise. The question we should ask is this: How does Altman plan to achieve such a grandiose goal in a world that incentivizes him to do the opposite?
There’s some twisted irony in OpenAI paying Kenyan workers $2/hour to improve the datasets used to train the very systems that would supposedly help them out of their misery. And Altman knows it. To those who say “but that’s a good wage in Kenya” as an argument to absolve OpenAI of its incoherencies and dismiss the evident banality of its supposed goals, I have nothing else to add.
Bill Gates is right. AI won’t, by default, help those who need it most. Adequate regulations and policies are needed. There’s a complete lack of agreement on what “adequate” means here (it’s already becoming evident that this, too, is a problem), but first we have to agree—all of us: not just AI experts, AI ethics people, or policymakers, but also AI companies, investors, and users—that we have to set constraints if a better world and everyone’s benefit is truly what we’re after.
"But Altman isn’t completely mistaken in his vision. One strong argument in favor of AI—and technology in general—that’s not only hard to refute but widely supported by evidence is that, given enough time, technological progress improves humanity’s quality of life and life expectancy "
What is that evidence? After creating the risk of nuclear war in the 20th century (which has not diminished since), we also added the existential risks of climate change and biodeversity loss. Isnt all the evidence pointing towards the conclusion that we increased the risk of extinction over the last 100 years?
Kant’s Categorical Imperative: “The end always justifies the means.” ... maybe I missed the irony but Kant‘s ethics is the exact opposite of the quote in question.