4 Comments
Vincent:

I'm not sure what could be deeper. This is really the debate at the center of AI. From a business standpoint, AI safety is largely at odds with the profit motive. If investors want you to grow 10x, they will push you out if you're sitting on a golden egg saying, "No, wait..." while Google, Anthropic, and others storm ahead.

Michel Schellekens:

The recent NY Times article sheds some light on other mechanics at work: https://www.nytimes.com/2023/11/18/technology/open-ai-sam-altman-what-happened.html

Vincent:

I think I skimmed that one before it went behind the paywall, but it's paywalled now.

My understanding is that this remains a safety-versus-progress issue, but where it may actually get deeper is in who is in which camp.

Michel Schellekens:

Yes, so it seems. By "deeper" I mean that I am puzzled by what is driving the safety-concern camp. There are immediate safety concerns I consider realistic, and long-range ones I don't worry about (extinction is not on the cards with current tech, which is very one-dimensional in its reliance on deep learning). If the claim is that long-range safety caused this outcome, then I find it hard to believe, unless the people who develop the tech really do believe that AI will take over soon. I have no idea why they would, especially to the extent of getting rid of someone key to the company. If this is the deeper reason, it is more of a schism (or philosophical differences, if you like), and I think the philosophy camp will lose the fight. If, on the other hand, it is short-term dangers to society that drove the outcome, then it could make sense. But that does not seem to be the case.
