20 Comments
Alexander Kurz:

"But Altman isn’t completely mistaken in his vision. One strong argument in favor of AI—and technology in general—that’s not only hard to refute but widely supported by evidence is that, given enough time, technological progress improves humanity’s quality of life and life expectancy "

What is that evidence? After creating the risk of nuclear war in the 20th century (which has not diminished since), we also added the existential risks of climate change and biodiversity loss. Isn't all the evidence pointing towards the conclusion that we have increased the risk of extinction over the last 100 years?

Alberto Romero:

Yes, but I wouldn't say that's because of technology itself (e.g., the technology that created nuclear bombs also helped with nuclear power plants), but because of how the system works. Capitalism is much more at fault than technology here. Also, "given enough time" is doing a lot of work there, because if we were living in the 15th century, you could argue the same about the printing press leading to European religious conflicts.

Alexander Kurz:

I agree with you here. As far as I know (please correct me), we do not have a good theory of power. The best I know is the one based on information asymmetries. Information asymmetries, like the ones we have on the internet (with citizens giving away their data for free and corporations/governments collecting it to build a surveillance state), can be used as a proxy for power asymmetries. My question now is: will AI increase or decrease these information asymmetries?

Michel Schellekens:

Infant mortality has drastically decreased worldwide, and some diseases have been eradicated. Greed and shortsightedness (related to one another) are a big risk. The fight is for societies to improve, and the global trend has been upward on that front. The risks are always great, but the more we know, the more we can counterbalance. The problem is not with what we know or discover.

Alexander Kurz:

I agree with you, but I don't see how your reply addresses my point. As far as I can see, you are not arguing against my observation that we have increased the risk of extinction over the last 100 years. Unless you can give me some details of what you think we should do to counterbalance ...

Lukas N.P. Egger:

Kant’s Categorical Imperative: “The end always justifies the means.” ... Maybe I missed the irony, but Kant’s ethics is the exact opposite of the quote in question.

Alberto Romero:

You're right! I wanted to say the opposite. Fixed.

Alexander Kurz:

"I agree that open-source by default may not be wise, as OpenAI’s Chief Scientist Ilya Sutskever told The Verge after the anticipated release of GPT-4, but preventing interested researchers from studying the model doesn’t sound very collaborative."

If AI is too dangerous to be open-sourced, isn't it then also too dangerous to be produced in the first place? What are the scenarios in which not open-sourcing the AI will prevent it from getting abused? Or isn't it rather more likely that not open-sourcing AI will make sure that AI only gets abused by certain people?

Alberto Romero:

Open-source AI might be referring to research or development, not necessarily production (although since ChatGPT, AI companies, especially OpenAI and Microsoft, seem to have focused on the latter without paying much attention to the former). It's possible to conduct research that's valuable and shouldn't be open-sourced (e.g., finding a new powerful drug and the process for its manufacturing).

I agree with your concerns and don't have an answer. We can debate who's worthy of our trust and who isn't. But the truth is that OpenAI has incentives, governments have incentives, etc., and those are often not aligned with the best interests of people. Anyway, even if it's hard to know whose incentives are best aligned with those of the majority, we can agree that some groups of people have incentives opposite to what most want (e.g., terrorists, propagandists, disinformation peddlers, spammers, scammers, etc.), and that might be a strong enough argument to support non-open-source in some cases.

Alexander Kurz:

I am glad you talk about alignment. But aligning AI to human values will not change much. Technology changes society by lowering the transaction costs of already existing processes. The biggest impact of AI will come from lowering transaction costs in the real world. In a market economy, no ethical considerations can possibly stop this. Therefore, what we should concentrate on is not aligning AI but aligning the economy.

Alexander Kurz:

"You guessed it, capitalism."

Since its start with the East India Company in 1600, capitalism has always been as much about extraction as about innovation. An exception was the Golden Age of Capitalism, from roughly 1950 to 1980 in the West, when innovation was king and productivity gains were distributed more or less evenly across income levels.

That has since stopped, and I believe there is evidence that capitalism has returned to its extractive roots (colonization turned inward). For example, to use a phrase coined by Jaron Lanier, the "siren servers" extract information from users and then exploit them by colluding with the government to build a surveillance state. What will be the role of AI in this? Is there anybody who can point to trends suggesting that AI will lead to a redistribution of power from governments and multinational corporations to ordinary citizens?

Finally, let us not forget that AI may well make our civilization more brittle, not more robust. For example, the progress of AI is based on increasing energy consumption, not on lowering it.

Borys:

Having worked for many years in international taxation and observed the digital taxation issue, I must say that the chances of coordinated regulations to redistribute the value created by AI (for example, by taxing AI companies) are VERY grim. It will take years to reach agreement. I'm just saying that taxation will not help; we need to look elsewhere.

Alberto Romero:

Thanks for the input, Borys!

Michel Schellekens:

"AI’s imperfections are just as problematic. Not the systems themselves—although there’s some of that, too" I would write: there is a *lot* of that too. It flounders and is still wildly inaccurate on basic language or programming questions. Fully agree with the need for regulation and society to channel this in better directions. I am currently underwhelmed after playing around with the system some more and seriously wonder what this very imperfect system really will amount to in the end. So much work goes in correcting the system's mistakes, the ultimate payoff may not warrant the use in many cases. I am not sure these imperfections really can be addressed adequately. Fingers crossed on that account, but it seems deeply inherent in this approach. I use ChatGPTPlus. Not sure it equates with GPT4.

Alberto Romero:

"I am not sure these imperfections really can be addressed adequately." Definitely agree with you on this one Michel!

Kevin:

We can choose to see the bright side:

They help the poor by making a free version available to the public.

(If not for them, we might not have had access to this tech at all!)

Not open-sourcing the code protects humanity from rogue AI.

We get to experience all of the perks of AGI while they remain in control.

Alberto Romero:

I'd say this take is naive. We can't count on the integrity of OpenAI to do what's right, whatever that is. What if they have misaligned interests? What if they're wrong? Independent auditing of what they do, given its importance and potential impact, is critical.

[Comment deleted, Apr 5, 2023 (edited)]

Alberto Romero:

Thanks, Luis! I agree; that's why I differentiated between the responsibility that's on OpenAI's shoulders and that which is intrinsic to the system it develops in. Yet we must ascribe responsibility where it's due, regardless of whether or not there is a solution to the problems I've highlighted.

[Comment deleted, Apr 5, 2023]

Alberto Romero:

Do you mean a physical 3D human face (e.g., Ameca)? Or a digital/VR/AR one (e.g., Replika avatars)? I'd say we're already in the second scenario (Replika uses a variant of GPT-3). The first may still take some time.

[Comment deleted, Apr 4, 2023]

Alberto Romero:

"Middle level people probably sincerely believe their mission is to benefit humanity."

I'd go further here. I think Sam Altman believes he will benefit all of us. I think he's sincere in that belief. The problem is the flaws in his reasoning, his inherent bias, and the power he is amassing. I don't believe for a moment that Sam Altman is chasing money (or power in the more prosaic sense of the word). I'm writing an article on this because I've seen many journalists and analysts make this mistake. You have to understand his influences to understand his motivations, his character, and his psychology. He's not your typical money-seeking SV techie. I might be wrong, but I have evidence and good arguments to support my point (coming soon...)
