46 Comments

Altman was intransigent about the risks to privacy and the need for ethical oversight. It's as if Oppenheimer had been fired. Except there are no AI Axis powers to outrun in developing the tech. He'd been in interviews this year, seemingly there to assure all creators their IP would be safe. Then George R.R. Martin says he's been ripped off and joins a class-action suit over fair use. Clearly he was riding the stallion of this potent tech roughshod over anyone with the temerity to ask for a more honorable pace.


This is, of course, what a competent board of directors is there to sort out -- the strategic plan and the tactical plan for a company.

This is why companies need a "gray-haired eminence" to inject a bit of common sense.

This industry didn't just now discover the conflict between the Accelerators and the Doomers.

"Tastes great, less filling"


Well said


This is a great summary, thanks for sharing. It was pretty clear from Greg Brockman’s statement that Ilya Sutskever was the instigator of these changes and good to see that corroborated by Kara Swisher.

So AI safety won out over AI progress/profit.

author

We will see. I'm sure Altman and Brockman are planning a move. Probably will create a new company or do something with an existing one. Not sure yet.


Moving fast and breaking things, they will. Do we need more of that, or is Meta willing to settle for its more-than-enough mayhem? It will be interesting to see if he can drag along Zuckerberg's wallet to the next Altman Adventure.


I wonder if there are deeper roots here than safety versus non-safety.

author

I also don't think it's that simple. I believe both are pro-safety and both are AI optimists. I'm writing about this and what I believe is the true conflict, which I think is much subtler than it appears to outsiders.


Yes, I also thought about a schism. You summarized it perfectly in your recent post.

author

Thank you Michel!


Not sure what could be deeper. This is really the debate at the center of AI. From a business standpoint, AI safety is largely at odds with the profit motive. If investors want you to grow 10x, they will push you out if you're sitting on a golden egg saying, "No, wait..." while Google and Anthropic and others are storming ahead.


The recent NY Times article sheds some light on other mechanics at work. https://www.nytimes.com/2023/11/18/technology/open-ai-sam-altman-what-happened.html


I think I skimmed that one before it got paywalled, but it's paywalled now.

My understanding is this remains a safety vs. progress issue, but where it may actually get deeper is who is in which camp.


Yes, so it seems. By deeper I mean that I am puzzled by what is driving the safety-concern camp. There are immediate safety concerns I consider realistic, and long-range ones I don't worry about (extinction is not on the cards with current tech, which is very one-dimensional in its reliance on deep learning). If the claim is that long-range safety caused this outcome, then I find it hard to believe, unless the people who develop the tech really believe that AI will take over soon. I have no idea why they would, especially to the extent of getting rid of someone key to the company. If this is the deeper reason, it is more of a schism (or philosophical differences, if you like), and I think the philosophy camp will lose the fight. If, on the other hand, it is short-term dangers to society that drove the outcome, then it could make sense. But this does not seem to be the case.


I think fear of litigation would be one. Until AI has hooks into the real world it isn't dangerous per se. Right now it is just speech. I don't understand the fear of AI itself. It is the people who choose to deploy AI into areas that could be dangerous that we need to worry about: generals in the armed forces, power grids, and situations where AI can take control or make decisions without human overrides.

To my thinking, the danger is becoming too overconfident in our ability to shut down AI. But right now it is mostly in sandboxes. As long as it stays in sandboxes and makes text, music, and pictures, then yes, there are issues to deal with, like misinformation and IP, but I don't see the dangers of advancing rapidly within the sandboxes.

Also, if we don't, someone else will. And when they beat us, it's game over. We need to have the world's best AI to defend our side if needed. Particularly in the computer industry.


I don't see how AI can become self-aware or be tricky to shut down. Deep learning is rooted in numerical analysis, i.e., function approximation techniques. No one can predict the future, but currently I have seen nothing to warrant a worry about AI leading its own life. It is missing crucial aspects needed to truly deliver, even though I love the engineering application of modelling similarities. Progress is fast and maybe this is closer on the horizon than I think, but I see a lot of smoke and mirrors these days and little real science to back up the overstated claims.
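
To make the "function approximation" point concrete, here is a minimal sketch (a toy numpy example written for illustration, not anyone's actual system): a one-hidden-layer network fit to sin(x) by plain gradient descent. Every step is ordinary numerical optimization of a parameterized function, which is the sense in which deep learning is function approximation.

import numpy as np

# Toy illustration: fit f(x) = sin(x) with a one-hidden-layer tanh network.
# All sizes and constants are arbitrary choices for the sketch.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))  # sample inputs
y = np.sin(x)                                  # target function values

W1, b1 = rng.normal(0, 1.0, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 1)), np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y                    # residual against the target
    # Backpropagation: chain rule through the two layers
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)    # tanh derivative
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("mean squared error:", float((err**2).mean()))  # shrinks toward zero

Trained this way, the network does exactly one thing: map x to an estimate of sin(x). Scaling it up changes which function is being approximated, not the nature of the procedure.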

Nov 18, 2023

Thanks for the clarity! Wow. As if the AI space was not interesting enough, now there is high drama and scandal!

author

Ikr!


They have graduated to the big leagues. *grin*


Maybe if Altman learned to use punctuation like an educated adult?


Cargo shorts to board meetings?


> Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move

This is the crux of the matter right here. I'd be surprised if it's really anything else.

author

Yeah, I agree.


More likely is that the board felt they had reached AGI and the members who left felt they had not. Because if OpenAI reached AGI, the commercial IP reverts to the non-profit, which also translates to MS's shares becoming a very expensive paperweight.

author

That's a possibility. I'm not sure it's a more likely one, though.


Look at the past comments of all the board members. It's very apparent that is exactly what happened. If the prediction holds, expect a legal move from MS next week concerning this firing.

author

Very apparent? Where do they say they think they've achieved AGI?


Goes to this term of the non-profit charter:

> Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

Sam, in his last presentation, implied that no further breakthroughs were needed to reach AGI... a hint that the board had already concluded that AGI had been reached.

Now for the twist... several AI experts are claiming that ChatGPT is an off-ramp and that AGI will not be reached that way. If the CTO and the remaining board believed that, there would not have been a firing. That implies the dispute over whether AGI had been reached was not about the research itself... and we have some confirmation thus far in a few researchers already leaving.

In simple terms: too many resources spent on big language models to reach even modest improvements.

author

Actually, it's the opposite: Altman has said this week that AGI will require more breakthroughs (https://twitter.com/burny_tech/status/1725233117055553938)


But the breakthrough is not in ChatGPT, it's on the hardware side... a GPU SoC has a limited amount of memory bandwidth, since the memory goes with the SoC... same as the current CPU SoCs. So if we do not move past that for, say, ten years... is it worth waiting that long for a breakthrough, or forging another one?
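
For a rough sense of why memory bandwidth is the ceiling here, there is a common back-of-envelope: when decoding with a large language model, each generated token has to stream essentially all of the weights through memory, so tokens per second is bounded by bandwidth divided by model size. A toy calculation (all numbers are illustrative assumptions, not any specific product's specs):

# Back-of-envelope bound on LLM decoding speed when memory-bandwidth limited.
# Every number below is an assumption made up for illustration.
bandwidth_bytes_s = 1000e9     # assumed memory bandwidth: 1000 GB/s
params = 70e9                  # assumed model size: 70B parameters
bytes_per_param = 2            # fp16 weights

model_bytes = params * bytes_per_param
# Each decoded token reads (roughly) every weight once:
tokens_per_s = bandwidth_bytes_s / model_bytes
print(f"upper bound: ~{tokens_per_s:.1f} tokens/s")  # ~7.1 tokens/s here

Faster compute does not lift that bound; only more memory bandwidth does.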


This is, at the end of the day, a complete and utter failure of corporate governance.

First, OpenAI, Inc. (the non-profit) and OpenAI Global, LLC (the for-profit controlled by the non-profit) have an odd board arrangement wherein the board members cannot own a stake in the company.

Second, Microsoft funds $13B and has no board seat or, apparently, information rights or rights to hire/fire/approve the CEO. MS goes to the woodshed for this dumb move.

Third, the board apparently does not have a CEO employment agreement that provides a way to protect the company from competition if the CEO is shown the door.

Any competent board insists on an employment agreement that spells out performance appraisal and termination: for cause (which is what they contend happened here), not for cause, and at the initiative of either the board or the CEO. It also includes a prohibition on competition.

Fourth, there is no "gray-haired eminence" on that board: the person who can calm down the situation, the person who can reason with the genius/impulsive CEO, and the person who can referee "tastes great, less filling" fights.

Jeez, come on -- "go fast, break stuff" v. "safety first" has been going on for centuries.

This will end poorly -- OpenAI Global, LLC is dead meat and Sam Altman, et al, will be snuggled into the MS bosom with an unlimited checkbook by cocktails tomorrow.

Totally unnecessary.

Whoever is advising Sam Altman is doing a great job. He will steal every employee he wants because of his classy demeanor.

There is a good writeup at www.themusingsofthebigredcar.com


What a great post; well done for teasing out the timeline and outcomes.

So Ilya - the Chief Scientist - started the ball rolling... imagine the sneakiness unfolding... Don't tell Greg or Sam 👀 - let's drop the bomb on them 🤣👍 & publish immediately (what excuse? Communication misfires, of course... it can't be questioned 😃)

Sounds incredibly childish to me!

Nov 18, 2023 · edited Nov 19, 2023

As summarized by ChatGPT:

Sam Altman, the CEO of OpenAI, was fired by the board due to concerns about his communication and leadership style. The board cited Altman's lack of consistent candor, hindering their ability to fulfill responsibilities, and a perceived clash over the company's direction between its for-profit and nonprofit aspects. The move was orchestrated by Chief Scientist Ilya Sutskever, causing surprise among executives and employees and leading to subsequent resignations, indicating significant internal tensions at OpenAI.


Haha, too funny. You spelled Judas' name wrong.


ChatGPT misspelled it! Things already going downhill following Sam's departure? 😂


Haha. Probably true. Very clever.


All this analytic insight is good stuff, but it amounts to no more than rearranging the Titanic's deck chairs. We've long since had enough intelligence to be toxic. AI is merely an accelerant, putting intelligence on steroids. Chaos theory combined with exponential increases in complexity seals our fate. There's a good reason why SETI has failed. As I said in my article (https://secularhumanism.org/2021/04/is-intelligence-toxic/), Shakespeare was prescient: “A poor player that struts and frets his hour upon the stage, and then is heard no more. It is a tale told by an idiot, full of sound and fury, signifying nothing.”


It sounds like fear of litigation won out over advancement. It seems to me that the big danger from AI is placing it in control of things, not having it “chat” with people. Also, right now it is flawed and needs to advance rapidly so it doesn't give wrong answers and steer people astray.

If they were champing at the bit to put AI in power grids or in charge of tanks I could see the concern. But until AI has hooks into the real world I don’t see the risk.


This event signifies a potential shift in the direction and strategies of OpenAI, which could have far-reaching consequences.


Altman has looked extremely uneasy for months now. He was not confident in the direction of the company and stakeholders could smell it. You can't tell me that in all the podcast interviews that he's done in the past four months that he hasn't seemed either nervous at best or completely disturbed at worst. It is the AI safety issue we are dealing with.


Doesn't Sam own a stake in the OpenAI company? What happens to that?

author

I don't think he does, no. Weird, I know, but he's a weird guy after all.


It means the two can go their own way and create or join an existing open-source AI effort, and there's nothing OpenAI can do about it. OpenAI just lost a multitude of people, knowledge, and skills. Once the two create or join a new team, maybe X.ai???, then we could see where OpenAI failed as people move away from a project that is slow and woke. Remember GoWokeGoBroke.
