Altman was intransigent about the risks to privacy and the need for ethical oversight. It’s as if Oppenheimer had been fired. Except there are no AI Axis powers to outrun with the tech development. He’d been in interviews this year, seemingly there to assure all creators their IP would be safe. Then George RR Martin says he’s been ripped off and joins a class action suit over fair use. Clearly he was riding this potent tech roughshod over anyone with the temerity to ask for a more honorable pace.

This is a great summary, thanks for sharing. It was pretty clear from Greg Brockman’s statement that Ilya Sutskever was the instigator of these changes, and it’s good to see that corroborated by Kara Swisher.

So AI safety won over AI progression/profit.

I wonder about deeper roots than safety versus non-safety.

Thanks for the clarity! Wow. As if the AI space was not interesting enough, now there is high drama and scandal!

Maybe if Altman learned to use punctuation like an educated adult?

> Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move

This is the crux of the matter right here. I'd be surprised if it's really anything else.

More likely, the board felt they had reached AGI and the rest of the board that left felt they had not. Because if OpenAI has reached AGI, the commercial IP reverts to the non-profit, which also translates to MS's shares becoming a very expensive paperweight.

All this analytic insight is good stuff, but it amounts to no more than rearranging the Titanic's deck chairs. We've long since had enough intelligence to be toxic. AI is merely an accelerant, putting intelligence on steroids. Chaos theory combined with exponential increases in complexity seals our fate. There's a good reason why SETI has failed. As I said in my article (https://secularhumanism.org/2021/04/is-intelligence-toxic/), Shakespeare was prescient: “A poor player that struts and frets his hour upon the stage, and then is heard no more. It is a tale told by an idiot, full of sound and fury, signifying nothing.”

This is, at the end of the day, a complete and utter failure of corporate governance.

First, OpenAI, Inc. (the non-profit) and OpenAI Global, LLC (the for-profit controlled by the non-profit) have an odd board arrangement wherein the board members cannot own a stake in the company.

Second, Microsoft put in $13B and has no board seat or, apparently, information rights or rights to hire/fire/approve the CEO. MS goes to the woodshed for this dumb move.

Third, the board apparently does not have a CEO employment agreement that provides a way to protect the company from competition if the CEO is shown the door.

Any competent board insists on an employment agreement that spells out performance appraisal and termination for cause (which is what they contend happened here), not for cause, and by both the board and the CEO. It also includes a prohibition on competition.

Fourth, there is no "gray-haired eminence" on that board: the person who can calm down the situation, reason with the genius/impulsive CEO, and referee "tastes better v. less filling" fights.

Jeez, come on -- "go fast, break stuff" v. "safety first" has been going on for centuries.

This will end poorly -- OpenAI Global, LLC is dead meat, and Sam Altman et al. will be snuggled into the MS bosom with an unlimited checkbook by cocktails tomorrow.

Totally unnecessary.

Whoever is advising Sam Altman is doing a great job. He will steal every employee he wants because of his classy demeanor.

There is a good writeup at www.themusingsofthebigredcar.com

What a great post; well done for teasing out the timeline and outcomes.

So Ilya - the CSI - started the ball rolling ... imagine the sneakiness unfolding ... Don’t tell Greg or Sam 👀 - let’s drop the bomb on them 🤣👍 and publish immediately (what excuse? Communication misfires, of course ... it can’t be questioned 😃)

Sounds incredibly childish to me!

As summarized by ChatGPT:

Sam Altman, the CEO of OpenAI, was fired by the board due to concerns about his communication and leadership style. The board cited Altman's lack of consistent candor, hindering their ability to fulfill responsibilities, and a perceived clash over the company's direction between its for-profit and nonprofit aspects. The move was orchestrated by Chief Scientist Ilya Sutskever, causing surprise among executives and employees and leading to subsequent resignations, indicating significant internal tensions at OpenAI.

It sounds like fear of litigation won out over advancement. It seems to me that the big danger from AI is placing it in control of things, not having it “chat” with people. Also, right now it is flawed and needs to advance rapidly so it doesn’t give wrong answers and steer people wrong.

If they were champing at the bit to put AI in power grids or in charge of tanks, I could see the concern. But until AI has hooks into the real world, I don’t see the risk.

This event signifies a potential shift in the direction and strategies of OpenAI, which could have far-reaching consequences.

Altman has looked extremely uneasy for months now. He was not confident in the direction of the company, and stakeholders could smell it. You can't tell me that in all the podcast interviews he's done in the past four months he hasn't seemed either nervous at best or completely disturbed at worst. It is the AI safety issue we are dealing with.

Doesn’t Sam own a stake in the OpenAI company? What happens to that?

It means the two can go their own way and create or join an existing open-source AI effort, and there’s nothing OpenAI can do about it. OpenAI just lost a multitude of people, knowledge, and skills. Once the two create or join a new team, maybe X.ai???, then we could see where OpenAI failed as people move away from a project that is slow and woke. Remember: GoWokeGoBroke.
