26 Comments
Apr 4, 2023 · Liked by Alberto Romero

"But Altman isn’t completely mistaken in his vision. One strong argument in favor of AI—and technology in general—that’s not only hard to refute but widely supported by evidence is that, given enough time, technological progress improves humanity’s quality of life and life expectancy "

What is that evidence? After creating the risk of nuclear war in the 20th century (which has not diminished since), we also added the existential risks of climate change and biodiversity loss. Isn't all the evidence pointing toward the conclusion that we increased the risk of extinction over the last 100 years?

author
Apr 6, 2023 · edited Apr 6, 2023

Yes, but I wouldn't say that's because of technology itself (e.g., the technology that created nuclear bombs also helped with nuclear power plants), but because of how the system works. Capitalism is much more at fault than technology here. Also, "given enough time" is doing a lot of work there, because if we were living in the 15th century, you could argue the same about the printing press leading to European religious conflicts.

I agree with you here. Afaik, please correct me, we do not have a good theory of power. The best I know is the one of information asymmetries. Information asymmetries, like the ones we have in the internet (with citizens giving away their data for free and corporations/government collecting it to build a surveillance state) can be used as a proxy for power asymmetries. My question now is: Will AI increase or decrease these information asymmetries?

Infant mortality has drastically decreased worldwide, and some diseases have been eradicated. Greed and shortsightedness (related to one another) are a big risk. The fight is for societies to improve, and the global trend has been upward on that front. The risks are always great. The more we know, the more we can counterbalance. The problem is not with what we know or discover.

I agree with you. But I don't see how your reply addresses my point. As far as I can see, you are not arguing against my observation that we increased the risk of extinction over the last 100 years. Unless you can give me some details of what you think we should do to counterbalance ...

Kant’s Categorical Imperative: “The end always justifies the means.” ... Maybe I missed the irony, but Kant’s ethics is the exact opposite of the quote in question.

author

You're right! I wanted to say the opposite. Fixed.

Apr 4, 2023 · Liked by Alberto Romero

"I agree that open-source by default may not be wise, as OpenAI’s Chief Scientist Ilya Sutskever told The Verge after the anticipated release of GPT-4, but preventing interested researchers from studying the model doesn’t sound very collaborative."

If AI is too dangerous to be open-sourced, isn't it then also too dangerous to be produced in the first place? What are the scenarios in which not open-sourcing the AI will prevent it from getting abused? Or isn't it rather more likely that not open-sourcing AI will ensure it only gets abused by certain people?

author

Open source AI might be referring to research or development, not necessarily production (although since ChatGPT, AI companies--especially OpenAI and Microsoft--seem to have focused on the latter without paying much attention to the former). It's possible to conduct research that's valuable and shouldn't be open-sourced (e.g., finding a new powerful drug and the process for its manufacturing).

I agree with your concerns and don't have an answer. We can debate about who's worthy of our trust and who isn't. But the truth is that OpenAI has incentives, governments have incentives, etc., and those are often not aligned with the best interests of people. Anyway, even if it's hard to know whose incentives are best aligned with those of the majority, we can agree that some groups of people have incentives opposite to what most want (e.g., terrorists, propagandists, disinformation peddlers, spammers, scammers, etc.), and that might be a strong-enough argument to support non-open-source in some cases.

I am glad you talk about alignment. But aligning AI to human values will not change much. Technology changes society by lowering the transaction costs of already existing processes. The biggest impact of AI will be via lowering transaction costs in the real world. In a market economy, no ethical considerations can possibly stop this. Therefore, what we should concentrate on is not aligning AI but aligning the economy.

Applauding this: "If AI is too dangerous to be open-sourced, isn't it then also too dangerous to be produced in the first place?"

Apr 4, 2023 · Liked by Alberto Romero

"You guessed it, capitalism."

Since its start with the East India Company in 1600, capitalism has always been as much about extraction as about innovation. An exception was the Golden Age of Capitalism, from approximately 1950 to 1980 in the West, when innovation was king and the gains from increasing productivity were more or less evenly distributed across incomes.

That has since stopped, and I believe there is evidence that capitalism has returned to its extractive roots (colonisation turned inward). For example, to use a phrase coined by Jaron Lanier, the "siren servers" extract information from users and then exploit them by colluding with the government to build a surveillance state. What will be the role of AI in this? Can anybody point to trends suggesting AI will lead to a redistribution of power from governments and multinational corporations to ordinary citizens?

Finally, let us not forget that AI may well make our civilization more brittle, not more robust. For example, the progress of AI is based on increasing energy consumption, not on lowering it.

Apr 4, 2023 · Liked by Alberto Romero

You write, "Gates advocating for regulations on AI is the help we didn't know we needed."

If we can't regulate tailgating on the highway, and a thousand other things, why do we think we can successfully regulate AI?

You write, "As we know very well because OpenAI’s PR department has ensured we do, the company’s ultimate purpose—and the reason they’ve built ChatGPT and GPT-4—is to create AGI to ‘benefit all of humanity.’"

Middle level people probably sincerely believe their mission is to benefit humanity. The mission of those running the show is to chase money and power. That's how those on top got on top, by chasing money and power. What we're witnessing is Mark Zuckerberg syndrome, thirty-something nerd boys who want to become billionaires. The "future of humanity" is a catchy slogan from the marketing department.

You write, "Technically, even under no regulation—like today’s AI landscape—technologies benefit everyone after enough time. And that’s good."

Are readers aware that we've come within minutes of global nuclear war BY MISTAKE on multiple occasions? After enough time, we'll run out of time.

Meaning no offense to anyone, but speculation about the future of AI that doesn't include the realization that violent men with existential scale technology can bring the AI future crashing down at any moment is just more evidence that we aren't ready for AI.

author

"Middle level people probably sincerely believe their mission is to benefit humanity."

I'd go further here. I think Sam Altman believes he will benefit all of us. I think he's sincere in that belief. The problem is the flaws in his reasoning, his inherent bias, and the power he is amassing. I don't believe for a moment that Sam Altman is chasing money (or power in the more prosaic sense of the word). I'm writing an article on this because I've seen many journalists and analysts make this mistake. You have to understand his influences to understand his motivations, his character, and his psychology. He's not your typical money-seeking SV techie. I might be wrong, but I have evidence and good arguments to support my point (coming soon...)

Ok, great, make the case, educate me. Good plan.

I really don't mean to demonize any particular individual. Perhaps I'm trying to say that when things get to a certain level, the money people tend to take over the show.

Apr 5, 2023 · Liked by Alberto Romero

Off-topic suggestion for a future article: What happens when chatbots are married to 3D human faces, so that we're not talking to a wall of text but to an apparent "person"? Is anything like this under development?

author

Do you mean a physical 3D human face (e.g., Ameca) or a digital/VR/AR one (e.g., Replika avatars)? I'd say we're already in the second scenario (Replika uses a variant of GPT-3). The first may still take some time.

The 3D part is probably less important. Let's simplify to a realistic 2D human face image on a computer monitor replacing the current text based interface for chatbots. This article explores it a bit...

https://www.lifewire.com/conversational-ai-like-chatgpt-may-soon-have-a-face-that-looks-human-7167616

This company may be moving in the right direction, though they don't seem to be there yet.

https://www.forbes.com/sites/gilpress/2023/03/01/israeli-startup-d-id-puts-a-face-on-generative-ai-chatbots/?sh=e321a1e3c881

I'm guessing that when the chatbot interface shifts from text to video-and-sound interaction with a human face image, interest in chatbots will further explode beyond the nerd class into a much broader slice of the public.

I already have old, discontinued software called CrazyTalk that allows me to animate any face image with text or audio input. It's not interactive, though; I can only tell the image what to say. There are various versions of this technology online, as I suspect you've long known.

Apr 5, 2023 · Liked by Alberto Romero

You write, "What they call AI alignment is only alignment with their values—not “human values” (whatever that is)."

Yes, the concept of "aligning with human values" is a tricky one indeed. There are the lofty values we strive for, and there is the reality of how we often act. How we act in a particular circumstance would seem to best represent our real values. So should AI be aligned with the real-world reality of our values, or with what we wish our values were? And of course, there are countless situations where the lofty moral vision and the less wonderful moral reality are combined in all kinds of complex ways.

I've come to believe that discussion of alignment and governance in various emerging technologies is mostly just a way to pacify the public while these technologies are pushed past the point of no return. It gives the money chasers something to say when confronted with inconvenient questions.

Apr 5, 2023 · Liked by Alberto Romero

Having worked for many years in the international tax area and observed the digital taxation issue, I must say that the chances of coordinated regulations to redistribute value created by AI (for example, by taxing AI companies) are VERY grim. It will take years to agree. Just saying: taxation will not help; we need to look elsewhere.

author

Thanks for the input Borys!

"AI’s imperfections are just as problematic. Not the systems themselves—although there’s some of that, too." I would write: there is a *lot* of that, too. It flounders and is still wildly inaccurate on basic language or programming questions. I fully agree with the need for regulation and for society to channel this in better directions. I am currently underwhelmed after playing around with the system some more, and I seriously wonder what this very imperfect system will really amount to in the end. So much work goes into correcting the system's mistakes that the ultimate payoff may not warrant its use in many cases. I am not sure these imperfections really can be addressed adequately. Fingers crossed on that account, but it seems deeply inherent in this approach. I use ChatGPT Plus; not sure it equates with GPT-4.

author

"I am not sure these imperfections really can be addressed adequately." Definitely agree with you on this one Michel!

We can choose to see the bright side:

They help the poor by making a free version available to the public.

(If not for them, we might not have had access to this tech at all!)

Not open-sourcing the code: protecting humanity from rogue AI.

We get to experience all the perks of AGI while they remain in control.

author

I'd say this take is naive. We can't count on OpenAI's integrity to do what's right, whatever that is. What if they have misaligned interests? What if they're wrong? Independent auditing of what they do, given its importance and potential impact, is critical.

Apr 5, 2023 · edited Apr 5, 2023 · Liked by Alberto Romero
Comment deleted
author

Thanks Luis! I agree; that's why I differentiated between the responsibility that's on OpenAI's shoulders and that which is intrinsic to the system it develops in. Still, we must ascribe responsibility where it's due, regardless of whether or not there is a solution to the problems I've highlighted.
