Apr 4, 2023 · Liked by Alberto Romero

"But Altman isn’t completely mistaken in his vision. One strong argument in favor of AI—and technology in general—that’s not only hard to refute but widely supported by evidence is that, given enough time, technological progress improves humanity’s quality of life and life expectancy "

What is that evidence? After creating the risk of nuclear war in the 20th century (a risk that has not diminished since), we also added the existential risks of climate change and biodiversity loss. Isn't all the evidence pointing toward the conclusion that we have increased the risk of extinction over the last 100 years?


Kant’s Categorical Imperative: “The end always justifies the means.” ... Maybe I missed the irony, but Kant’s ethics is the exact opposite of the quote in question.

Apr 4, 2023 · Liked by Alberto Romero

"I agree that open-source by default may not be wise, as OpenAI’s Chief Scientist Ilya Sutskever told The Verge after the anticipated release of GPT-4, but preventing interested researchers from studying the model doesn’t sound very collaborative."

If AI is too dangerous to be open-sourced, isn't it then also too dangerous to be produced in the first place? What are the scenarios in which not open-sourcing the AI will prevent it from being abused? Or isn't it more likely that not open-sourcing AI will ensure that it only gets abused by certain people?

Apr 4, 2023 · Liked by Alberto Romero

"You guessed it, capitalism."

Since its start with the East India Company in 1600, capitalism has always been as much about extraction as about innovation. One exception was the Golden Age of Capitalism, roughly 1950 to 1980 in the West, when innovation was king and the gains from rising productivity were distributed more or less evenly across income levels.

That has since stopped, and I believe there is evidence that capitalism has returned to its extractive roots (colonization turned inward). For example, to use a phrase coined by Jaron Lanier, the "siren servers" extract information from users and then exploit them by colluding with the government to build a surveillance state. What will be the role of AI in this? Is there anybody who can point to trends suggesting that AI will lead to a redistribution of power from governments and multinational corporations to ordinary citizens?

Finally, let us not forget that AI may well make our civilization more brittle, not more robust. For example, the progress of AI is based on increasing energy consumption, not on lowering it.

Apr 4, 2023 · Liked by Alberto Romero

You write, "Gates advocating for regulations on AI is the help we didn't know we needed."

If we can't regulate tailgating on the highway, and a thousand other things, why do we think we can successfully regulate AI?

You write, "As we know very well because OpenAI’s PR department has ensured we do, the company’s ultimate purpose—and the reason they’ve built ChatGPT and GPT-4—is to create AGI to “benefit all of humanity.”

Mid-level people probably sincerely believe their mission is to benefit humanity. The mission of those running the show is to chase money and power. That's how those on top got to the top: by chasing money and power. What we're witnessing is Mark Zuckerberg syndrome, thirty-something nerd boys who want to become billionaires. The "future of humanity" is a catchy slogan from the marketing department.

You write, "Technically, even under no regulation—like today’s AI landscape—technologies benefit everyone after enough time. And that’s good."

Are readers aware that we've come within minutes of global nuclear war BY MISTAKE on multiple occasions? After enough time, we'll run out of time.

Meaning no offense to anyone, but speculation about the future of AI that doesn't include the realization that violent men with existential-scale technology can bring the AI future crashing down at any moment is just more evidence that we aren't ready for AI.

Apr 5, 2023 · edited Apr 5, 2023 · Liked by Alberto Romero

Excellent writing, thoughtful and intelligent analysis, as usual. Alberto, the world will never be equitable to all people. I would not place the burden of this on a handful of Silicon Valley entrepreneurs. Even the Bible says, "For the poor will never cease from the land; therefore I command you, saying, ‘You shall open your hand wide to your brother, to your poor and your needy, in your land.'" - Deuteronomy 15:11. I believe that with your articles you are acting the part of that good Samaritan, opening his hand in charity. But is there an answer (never mind a solution)? I don't know that anyone can say. In the meantime we can help those in need by providing encouragement, offering our time, and donating money and resources where and when we can, without overextending ourselves. Looking after your own house benefits society by not placing further strain on its welfare state.

P.S. I just read Gates's article. He pays lip service to regulating and developing AI so that it mitigates inequity rather than perpetuates it. I think his article is over-optimistic and paints a rosy picture of all the great things AI will do to alleviate human suffering, but he devotes zero space to addressing automation-driven job displacement (except a perfunctory comment: "Governments need to help workers transition into other roles"). This will be catastrophic for humanity, and certainly the poorest and least educated will continue to pay the highest price. Asking AI to level the playing field for the weakest and most vulnerable is like asking Wall Street to feed the poor. It sounds great on paper, but reality is light-years away from it.

Apr 5, 2023 · Liked by Alberto Romero

Off-topic suggestion for a future article: what happens when chatbots are married to 3D human faces, so that we're not talking to a wall of text but to an apparent "person"? Is anything like this under development?

Apr 5, 2023 · Liked by Alberto Romero

You write, "What they call AI alignment is only alignment with their values—not “human values” (whatever that is)."

Yes, the concept of "aligning with human values" is a tricky one indeed. There are the lofty values we strive for, and the reality of how we often act. How we act in a particular circumstance would seem to best represent our real values. So should AI be aligned with the real world reality of our values, or what we wish our values were? And of course there are countless situations where the lofty moral vision and the less wonderful moral reality are combined in all kinds of complex ways.

I've come to believe that discussion of alignment and governance in various emerging technologies is mostly just a way to pacify the public while these technologies are pushed past the point of no return. It gives the money chasers something to say when confronted with inconvenient questions.

Apr 5, 2023 · Liked by Alberto Romero

Having worked for many years in the international tax area and observed the digital taxation issue, I must say that the chances of coordinated regulations to redistribute value created by AI (for example, by taxing AI companies) are VERY grim. It will take years to reach agreement. I'm just saying that taxation will not help; we need to look elsewhere.


"AI’s imperfections are just as problematic. Not the systems themselves—although there’s some of that, too" I would write: there is a *lot* of that too. It flounders and is still wildly inaccurate on basic language or programming questions. Fully agree with the need for regulation and society to channel this in better directions. I am currently underwhelmed after playing around with the system some more and seriously wonder what this very imperfect system really will amount to in the end. So much work goes in correcting the system's mistakes, the ultimate payoff may not warrant the use in many cases. I am not sure these imperfections really can be addressed adequately. Fingers crossed on that account, but it seems deeply inherent in this approach. I use ChatGPTPlus. Not sure it equates with GPT4.


We can choose to see the bright side:

They help the poor by having a free version available to the public.

(If not for them, we might not have had access to this tech at all!)

Not open-sourcing the code: protecting humanity from rogue AI.

We get to experience all of the perks of AGI while they remain in control.
