29 Comments
Jun 5, 2023 · Liked by Alberto Romero

Hi. I’m not an AI expert and haven’t done much hands-on ML. I’m just wondering whether, in your opinion, the first-party data that Google et al. hold actually confers a significant advantage on them. If so, how significant?

author

I don't know how significant, but I'd say that if we were to list the different moats they have, access to high-quality first-party data would be near the top.

Jun 1, 2023Liked by Alberto Romero

I liked the article and it's really well-written. I just wish I liked the conclusion it reaches!

author

Me too, not gonna lie!


I'm not an AI expert, but I have been experimenting with Vicuna on a local PC. I respect the author, and follow him, but I believe in this case he is incorrect: "there always is a way," and it will be found. Things are still evolving, and never underestimate the open source community. How about a distributed AI? Shared and secure, of course, with a well-developed protocol. The result might be a little slow, but the answers might be just as powerful. Sort of a group-mind AI: each local machine refined by its user, adding to the whole. Kind of wild, but hey, I'm a futurist and sci-fi lover. Maybe not so crazy given a decade or less.
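
To make that less hand-wavy, here is a minimal sketch of the "group mind" idea, assuming a federated-averaging scheme (everything here, including the toy regression task, is a hypothetical illustration, not an existing protocol):

```python
# Toy federated averaging: each volunteer machine refines a shared model
# on its own private data; the "group mind" is the average of the results.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """One volunteer's machine: a few gradient steps on local data."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Three volunteers, each holding a private local dataset.
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    datasets.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(3)
for _ in range(20):  # slow, but shared: average everyone's refinements
    w = np.mean([local_update(w, X, y) for X, y in datasets], axis=0)

print(w)  # converges toward true_w as rounds accumulate
```

Real distributed-training efforts would also need secure aggregation and would have to cope with consumer-grade bandwidth, which is where most of the difficulty lies.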

Jun 1, 2023 · Liked by Alberto Romero

But what about the fact that Python and other OS tooling is relied on by the ‘Incumbents’? Doesn’t that count for a lot?

author

A lot of the software that supports the world is OS, but that's because OS eventually gets everywhere. That often doesn't apply to things at the frontier of research, or where the competition is over emergent tech (unless one player can use OS to kill its competition or avoid antitrust problems). Generative AI simply requires too many resources.

Jun 1, 2023 · edited Jun 1, 2023 · Liked by Alberto Romero

AI models MUST NOT be "open-sourced" (free as in beer),

but training methods, error rates, data cleansing, filters, and model specs should be.

AI models MUST be shared with a public control agency (in the EU, as an EU agency), and freely licensed for public utility.

author

Agreed. And that's exactly the part of regulation that Altman rejects. Regulating AGI is fine, but reporting data provenance and training methods? No way.

Jun 1, 2023 · Liked by Alberto Romero

These points make sense to me. The only innovation coming to mind that may change your thoughts around “The limits of on-device inference for LMs” and the supposed moat from big tech’s compute resources is decentralized cloud computing.

Storj is crushing it by competing with AWS’ Simple Storage Service and Microsoft Azure’s Blob storage on cost, scalability, security, fault tolerance, and even sustainability: https://www.storj.io/solutions/big-data. Then there are new entrants like Together getting funding to build a decentralized cloud for artificial intelligence: https://www.together.xyz/blog/seed-funding. These services allow the open source community to scale LLM training. Imagine what would happen if we all donated our spare compute to help train and operate an open source LLM competitor? I’d have my company contribute a section of its data center to such an initiative.
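
For a rough sense of scale on the donated-compute idea, here's some back-of-envelope arithmetic; every figure is a loose assumption (the ~1M GPU-hours is roughly the order Meta reported for training LLaMA-65B):

```python
# Back-of-envelope: could donated spare compute train a big open LLM?
# All numbers are illustrative assumptions, not measurements.
gpu_hours_needed = 1_000_000   # ~order of LLaMA-65B's reported A100-hours
volunteers = 100_000           # hypothetical donor machines
hours_per_day_each = 8         # spare hours donated per machine per day
consumer_vs_a100 = 0.15        # assume a consumer GPU does ~15% of an A100's work

# A100-hour-equivalents contributed per day by the whole pool
effective_per_day = volunteers * hours_per_day_each * consumer_vs_a100
print(f"~{gpu_hours_needed / effective_per_day:.0f} days")  # ~8 days, ignoring networking
```

The "ignoring networking" caveat is doing a lot of work: synchronizing training over consumer internet links is precisely the problem decentralized-compute projects are trying to crack.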

Regardless, I feel this competition conversation along with all the doomsday and AI governance rhetoric is just a distraction from the most pressing issue: wealth and power distribution. While large incumbent enterprises and countries compete, working class people have no control over their lives. AI’s gift is increasing our productivity and saving us our most valuable asset: time. Unfortunately, these benefits and profits from AI will not be distributed equitably. The wealth gap will only widen and I don’t see anyone doing anything about it.

For example, my argument may break down if a big tech company buys those competing decentralized tech companies, like they did with OpenAI and DeepMind. Power corrupts.

author

"Regardless, I feel this competition conversation along with all the doomsday and AI governance rhetoric is just a distraction from the most pressing issue: wealth and power distribution"

I plan for ~this to be the closing topic of this series, actually! I think it's the only perspective that can explain everything going on in AI today, and the one that can best predict the near future.

author

Also thanks for the resources on decentralized cloud, Paul!


Absolutely! Thank you for the wonderful essays. I look forward to what’s ahead.


Awesome. That’s so validating. I think about the challenges of power and wealth distribution almost every day. I built a not-for-profit consulting collective and product incubator to help do my part. It’s such a hard problem though. I feel like I’m in the Hunger Games these days.

Jun 1, 2023 · Liked by Alberto Romero

I think you're correct, but I also believe that OpenAI, Google, and Microsoft are concerned about the rise of open source models. I believe that Sam Altman's recent trip to Congress, where he begged them to create a regulatory agency that could issue licenses for LLMs above a certain number of parameters, was a calculated attempt to discourage new entrants into the field, because these licenses would presumably be very expensive to obtain and require a lot of bureaucratic steps to keep. I could be wrong about that, but at present I believe this desire for regulation is more about creating a government-enforced moat than about any concern for what AI might do to society.

author

I don't know what to believe but I see another reading here. Altman was so eager to accept regulation and even proactive about it because he knew it was a matter of time before the government and policymakers stepped in. He simply chose the less bad outcome. This way, he won over the Senate and will enjoy a privileged position to decide which regulation matters (AI existential risks) and which doesn't (data and training transparency).

Now, I do believe his concerns are sincere even if delusional. He was writing about this before OpenAI existed. He was already a millionaire. I'm starting to strongly believe he wants to be the next Oppenheimer: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb." The main difference is that Altman is trying to warn about this but doesn't intend to stop anyway. I don't know what's worse, Oppenheimer's naivety or Altman's hypocrisy. Up to you to decide.

author

Back to the topic of open source, I don't think Altman is afraid of them in any way. He has to protect OpenAI's business, though.


Alberto, your proposition, while fascinating, appears questionable from a business perspective. Reflect on the commanding authority OpenAI enjoys on the subject—virtually uncontested. The supposedly menacing regulators are unlikely to even have a conceptual grasp of the subject. In addition, OpenAI enjoys the backing of a benefactor exerting considerable influence in global health and the highest echelons of American society. His status ensures the government fears him, not vice versa. After all, the government's role is to serve capital, bending to its will as directed.

Consider a matter of scale and urgency: Recent times have seen more reported injuries and fatalities due to Covid vaccines than all other vaccines combined over the past thirty years. Even more astonishingly, excess mortality rates have outpaced the supposed threat of Covid itself. Yet, the government remains unmoved, with no discourse on this pressing matter.

It's thus perplexing to envisage such a government taking decisive action against a hypothetical risk. The notion that a prominent AI company's CEO was so alarmed by this looming threat that he admitted guilt and proposed a plea bargain before a trial or charges were announced is even more baffling.

It appears as though he was eager to acquire a criminal record. What else could prompt him to face the harshest possible outcome before deploying the vast resources at his disposal, generously provided by his billionaire benefactor? There was no attempt to influence the media, no disinformation campaigns, no smear tactics against rivals, as is the usual custom. Just immediate capitulation.

Alberto, I gather that you're a man of principle and integrity. Those of good character often struggle to comprehend malevolent intentions. I contend that the virtuous actions you're suggesting are more likely to appear in the pages of a thriller than to occur within the realities of American business and politics.

John's perspective is resoundingly accurate. The imminent, pressing threat isn't government regulation, but the inexorable advance of innovative open-source alternatives. It is this danger that is hurtling violently toward incumbent entities whose staff confess to a lack of defensive strategies.

Discretionary regulations typically burden the small—a corrupt reality that the OpenAI executive is eager to exploit as he is now among the Big Men.

author

Thanks for your comment Edmund! Let me go point by point.

"The supposedly menacing regulators are unlikely to even have a conceptual grasp of the subject." I wouldn't say this is true, but even if it were, there's no need to know every technical detail about a technology to find reasonable ways to regulate it. Also, knowing a lot about the inner mechanisms doesn't guarantee you're well-suited to propose the best regulations. This is an interdisciplinary matter. Regulators should seek advice from experts but not just people like Altman, also people who know a lot about how things interact with the world; sociologists, social psychologists, anthropologists, philosophers, and others.

"His status ensures the government fears him, not vice versa. After all, the government's role is to serve capital, bending to its will as directed." I don't think this is true. Altman is becoming well-known, but he's nowhere near influential enough to scare policymakers. Not even Pichai, Zuck, or Musk are. I don't think the government depends that much on Silicon Valley's money.

"I contend that the virtuous actions you're suggesting are more likely to appear in the pages of a thriller than to occur within the realities of American business and politics." I don't think I'm ascribing much virtue to Altman here, actually. I understand the typical "malevolent" kind of person you describe, and also the money-seeking type, as many categorize Altman. But some people, scientists and technologists especially, tend to have weirder motives for their ambitions, not just money or fame. Altman is that kind of person (he needs money, though). If he were really afraid of open source AI, he would definitely try to stop it. I'm not saying he isn't doing so because he's good, but because he's not really afraid. My reading only makes sense if we understand that Altman is an unusual kind of millionaire.

"the inexorable advance of innovative open-source alternatives. It is this danger that is hurtling violently toward incumbent entities whose staff confess to a lack of defensive strategies." Two things here: there's a lot of hype around OS AI but the advances aren't as great as they seem. As always, there's a lot of emotionality involved. If you check the resources I referenced you can tell. The community is thriving but not threatening, as I wrote. Also, the "staff" was just one Google engineer—I doubt most others would agree with his take (he has some good points, though).

What I want to say with all this is that Altman's behavior doesn't really fit the "regulatory capture" idea well (I hinted at that a few weeks ago in my essay about the Senate hearing, but I already qualified my take because I don't think that's the whole story—or even the most important part). Not because he wouldn't gladly regulate to stop competitors—he definitely would—but because he believes he doesn't need to in order to achieve his goals.

But in no way is he a "good person" in the strict sense of the word. He loves capitalism and uses it for his goals, whatever those are and whatever problems he causes along the way (he weighs them against the benefits, and if the latter win, there's no need to think further). He thinks he's risking the world with this tech, but he doesn't care, because he's an apocalypse prepper and also because he has a high sense of self-importance. And so on.

If he had to, he'd do just as you describe. But his actions and behaviors don't fit your description. His "malevolence," to use your word, simply lies in a less prosaic place than seeking money, fame, and mundane business success. He needs to achieve those too, as instrumental goals for his ultimate purpose, but that's not the end of his intended journey. He wants to be in the history books. He wants to be the architect of the new world. Maybe that's even more problematic than what you describe. I believe it is.

(Sorry for the long comment!)


A discussion on a matter of importance and mutual interest could easily keep us absorbed in this conversation all day long; I admit to being both a purveyor and happy recipient of extensive comments. However, it seems the time has come for me to conclude this debate—albeit with a touch of reluctance! 😅

Remember the dainty Dall-e, then all the rage, exclusive to a chosen few granted the privilege to partake in its text-to-image sorcery, while the rest of us could only covet in thwarted desire.

Then along came Stable Diffusion. Its open-source model drew a substantial influx of creative and developmental energy, propelling it from glory to glory, managing in the process to relegate the self-important Dall-e to history.

It's apparent that the architects behind Dall-e, however good or kind they may be, would not have appreciated this turn of events. It was Altman, wasn't it? I find it challenging to believe that he would champion open-source, eagerly anticipating another disruptor to do to ChatGPT what befell his cherished Dall-e.

Given the severe blow he received at the hands of open-source, his comments can hardly be deemed objective or innocent. It's improbable that they are. While we must always consider all possibilities, it's not just that this argument does not hold water, but that the base of the bucket has effectively been sawn off. If anyone has a reason to fear, loathe, or resent open-source, it would be Altman, as he has suffered its harshest blow.

Jun 1, 2023 · edited Jun 1, 2023 · Liked by Alberto Romero

All of your points and interpretations are certainly valid. All of us are second-guessing his motivations. I completely agree with you about him being a more careless version of Oppenheimer. That interpretation probably explains the persona described in this article: https://www.businessinsider.com/sam-altman-chatgpt-openai-ceo-career-net-worth-ycombinator-prepper-2023-1

That article contains this quote: The man behind ChatGPT is also serious about survival prepping, once telling The New Yorker: "I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to."

author

Yes, that's another bit of evidence that reveals he's not just the money-seeking Silicon Valley CEO we all understand super well, but something much weirder, and more unpredictable, than that.

I actually have an almost-finished draft on Altman's character that includes these kinds of things, so that we can all build a better characterization of a person who has a lot of power right now. I wrote it more than a month ago but am hesitant to publish it. Maybe in the future.


That’s very illuminating. Thanks. So the war is over? Or there never really was a war? Or there’s no prospect of any future war? These monopolies will simply be our principal drivers?

author

Good questions Paul.

There was never a war over the leadership in generative AI between incumbents and open source, no. There's one between Google and Microsoft, between OpenAI and Anthropic (and others), and in the future, there will be one between the US and China probably (maybe not on the generative part).

I don't think open-source AI can win against private big tech (in other areas of tech, the forces at play are different and the balance may favor OSS). I know that sounds in conflict with the evidence, given how much software is built on top of open source.

The truth is that, from time to time, depending on the circumstances, incumbents may decide to use open source to get some benefit, like a good public image, avoiding antitrust problems, or destroying a competitor.

It's happened a lot in the past, but that doesn't mean open source can win as a standalone force. I don't think that can or will happen. (I recommend this article by Gwern on how companies in the past have used OSS against competitors: https://gwern.net/complement)


Thanks. Yes, I can see there are layers of complexity and some Machiavellian aspects thrown in. In relation to monopolies I feel 300% ambivalent. For instance, I use Microsoft to great effect and with fantastic utility, yet there are few software companies that I hate more, ever since Steve Ballmer foisted Vista upon the world and made us all his beta testers. That piggish OS completely mired my machine in computational mud, so much so that I had Vista removed and reverted to Windows 95. So now I’m using 365, but I would still rejoice if Steve Ballmer choked on a chicken bone. Otherwise, monopolies tend to make me feel like they’re led by Colonel Sanders, and we are the chickens. Behold: they have our best interests at heart.


Interesting thoughts, thanks. I was, however, missing the actual explanation for why OS couldn't compete. I mean, I get the points, but I'm not sure they are as ironclad as you seem to assume.

Obviously in the short term OpenAI is way ahead, but staying ahead requires that there aren't diminishing returns on new language models, and I think it's unclear at this point whether there will be.

Also, the need for chips might actually put a natural damper on OpenAI's ability to deploy their models, and it might even be an advantage for models trying to do more with less, regardless of them being inferior in quality.

However, I admit that's also pure speculation. :)


Thanks for writing this! I agree that open source AI models are not as good as proprietary models right now. However, I believe that open source models will continue to improve over time, especially considering the last 12 months. Additionally, the gaps in factuality may not matter for some use cases. Ultimately, open source AI models will be everywhere in some form, and they will not need to look or perform the same.

I'm not nearly as bearish as you on the promise of open source AI.


The revelations brought forth in this article are eye-opening and illustrate the true nature of generative AI and where it is heading. In particular, the emergence of OpenAI made big tech companies like Google, Microsoft, Meta, and Nvidia reassess their products, because AI was inevitably becoming an irresistible disruptor that could not be underestimated. It is evident that generative AI tends to cement the market leaders in different industries, as it already backs existing products. However, as you mentioned, the first creator and distributor of generative AI would control the market. Microsoft, as OpenAI's largest shareholder, has effectively taken it over and gained exclusive rights to the advanced technology. Google has responded with Bard, and it seems the battle for the AI market will not end soon, given that the market is still new and gaining mainstream attention.

Meanwhile, check out my latest article on AI, where I explore its strengths and weaknesses through a one-on-one interaction:

https://thestartupglobal.substack.com/p/my-encounter-with-ai-assisted-chatbot?sd=pf


For now it is hard to imagine that LLM development won’t be an arms race with unfathomable resource expenditures in the next couple of years.

That being said, it is equally hard to guess how many applications and use cases will really need the latest and greatest models.

Think iPhones: WEIRD (Western, educated, industrialized, rich, democratic) people tend to have them, but the majority of the planet, in terms of numbers, uses different phones. It is plausible that in the future most LLMs won’t be cutting edge but specialized agents.

Also, bar patents or regulatory constraints, there isn’t anything yet(!) that in principle can’t be done by a group of motivated people (side note: this is likely the reason why synthetic biology is having a hard time advancing as fast as people had hoped).
