24 Comments
Nov 18, 2023

You're really onto something here. Early reporting from Kara Swisher suggests that Sutskever found support on the board from Helen Toner, director of strategy at CSET. She's authored numerous reports that make clear her concerns about how generative AI models may inflict harms as they grow more powerful -- whether or not she's a closet doomer is almost irrelevant, given the present dangers that concern her. Here's a quote from one of her reports:

"As machine learning systems become more advanced, they will likely be deployed in increasingly complex environments to carry out increasingly complex tasks. This is where specification problems may begin to bite. Without significant progress in methods to convey intentions, machine learning systems will continue to carry out their instructions exactly as given—obeying the letter, not the spirit, of the rules their designer gives them. To address the challenges posed by misspecification, more machine learning research needs to account for worst case scenarios and develop algorithms that more explicitly incorporate human supervision or provide theoretical guarantees for the worst case performance under a given specification."

Almost by definition, using commercial deployment to test generative AI systems as a product runs afoul of this imperative. The (much?) safer way to account for worst-case scenarios and develop theoretical guarantees -- perhaps better described as safeguards -- is to conduct research on them in advance. This may end up being the opening salvo in an epic philosophical war.

https://cset.georgetown.edu/wp-content/uploads/Key-Concepts-in-AI-Safety-Specification-in-Machine-Learning.pdf
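
To make the specification problem concrete, here's a toy sketch of my own (illustrative only, not from the report): a reward function that states the goal literally, and a behavior that scores higher by exploiting the letter of the rule rather than its spirit.

# Toy reward (hypothetical): a cleaning robot scored on units of mess removed per episode.
def mess_removed(actions):
    mess = 5          # the room starts with 5 units of mess
    removed = 0
    for a in actions:
        if a == "clean" and mess > 0:
            mess -= 1
            removed += 1
        elif a == "spill":   # a loophole the designer never intended
            mess += 1
    return removed

intended = ["clean"] * 15                  # what the designer had in mind: removes 5
gaming = ["spill"] * 5 + ["clean"] * 10    # same episode length, removes 10

print(mess_removed(intended))  # 5  -- the spirit of the rule
print(mess_removed(gaming))    # 10 -- the letter of the rule, gamed

Without better ways to convey intent, "spill, then clean" is exactly the kind of policy an optimizer will find.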


I mean more on a human relationship/emotional level. Like maybe their philosophical alignments haven’t changed at all, but for some reason(s) there’s an inability to assume good intent that previously existed.

author

Oh, definitely!


Timely and helpful. Also, there's never been a not-for-profit value-creation rocketship like OpenAI before. There's a massive incentive/alignment gap between the academic motivations and instincts of the non-shareholding board members and that capitalist reality. As you posit, the market reality will continue, and its ascendance may even accelerate now.


I think you did an excellent job breaking this down based on the little information that is available today.

The thing that I find odd is the fact that the statement explicitly suggests some sort of deception of the board by Altman. From the blog post: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board”.

I suspect some dirty laundry coming out, at some point.

author

I think that could be a direct reference to DevDay or, more generally, to the ChatGPTization of the company (remember that Altman personally pushed for ChatGPT against the wishes of many others). It could be something bigger. I think the story is more complex than I've presented here, but we know so little... Will update as soon as I can.


>> In short: it's not true that as one is more intelligent in one sense, he/she is less so in other senses. The exact opposite, statistically speaking, is true.

This is so not my experience. In my experience, social intelligence, physical intelligence, logical intelligence, parental intelligence, political intelligence, musical intelligence, etc. are not correlated. They might go together, but they usually don't. In fact, it seems to me that in some cases they actually play off against each other. Does being a really good mathematician predispose you to having weak social intelligence? It seems that way.

I suspect that over the next decade neuroanatomy will clarify these questions. When it does, I suspect we will find the brain is divided into lots of circuits, each of which is dedicated to a specific kind of problem. Expose the brain to problem A six times and the same bits of brain will light up six times. Expose it to problem B and you get a different reading. Of course anything can happen, but isn't this the way you would bet? If you had to?

author

These questions don't need more clarification because we already know the answer. The fact that in your experience those things are not correlated says nothing about the world, because you're basing your view on anecdotal evidence. If science reveals anything, it is that we shouldn't let our biases define our perception of the world.

There's a wealth of literature on the topic. Also, I'm not saying that everything that can be defined as "an intelligence" is captured by g; that's also not true.

Neuroscience can already explain quite a lot about how the brain works, actually.


Why not try to compromise between the two visions in order to move forward?

Alternatively, why not agree to pursue both plans at the same time?


It's called greed and ego.

A very rich prize always attracts vultures. Their first and primary goal is to replace the "creatives" with the suits and the $$$ people.

author

It's ego, but not in the sense you say. Their vision goes beyond money. That doesn't make it better, though. Makes them less predictable.


My theory is that there are dozens of kinds of intelligence. Some people have more of these kinds than others. You can, if you like, say that the people with more kinds have a higher "g", but that overlooks the critical fact here, which is that there are lots of kinds and each kind is different. Do you follow neuroanatomy at all? So far, at least, it seems to be the case that the same sets of neurons fire when presented with the same problems and different sets fire when presented with different problems. And you have to admit that looks like a pretty sensible arrangement. I expect that by the end of the decade neuroanatomy will clarify all this.

author

What the existence of g implies is not just that higher values of it correlate with "more intelligences", but that if a person stands out in any given kind of intelligence (e.g., math and logic), it's more likely that he/she also stands out in the rest (e.g., linguistic) that g encompasses (note that g doesn't correlate with everything, but it's the best we've got).

The g factor is a measure of general intelligence; it covers many "different intelligences", whatever that means exactly. Gardner's multiple intelligences hypothesis never found empirical support, whereas g enjoys good psychometric properties.

In short: it's not true that as one is more intelligent in one sense, he/she is less so in other senses. The exact opposite, statistically speaking, is true.
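
If it helps build intuition, here's a minimal simulation of that claim (the loadings and sample size are made up, purely illustrative): when several different test scores share a common factor g, every pairwise correlation comes out positive -- the "positive manifold" -- and the first principal component captures the largest share of the variance.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
g = rng.normal(size=n)                  # latent general factor
loadings = [0.7, 0.6, 0.5, 0.6]         # how strongly each hypothetical test reflects g

# Each column is one test: loading * g plus test-specific noise
tests = np.column_stack(
    [w * g + np.sqrt(1 - w**2) * rng.normal(size=n) for w in loadings]
)

corr = np.corrcoef(tests, rowvar=False)
print(corr.round(2))                    # every off-diagonal entry is positive

eigvals = np.linalg.eigvalsh(corr)[::-1]
print((eigvals / eigvals.sum()).round(2))  # the first component explains the largest share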


Let us suppose the two of us visit our common friend Al. Al is rich and his parents were rich and so were his grandparents. Over the generations the family has amassed a stunning collection of art. We arrive and wander through the house. Totally amazing. Art from every time, every era. Art from Japan, China, India, Italy, Germany, France, the Netherlands ... You get the idea. After our visit, on the way out the door, you turn to me and say, "Al certainly has a high A". And it is true. Al does have a high A. But getting stuck on that level of generalization misses something important, don't you think?

author

Not following.


My own speculation is that there are hundreds of different kinds of intelligence. Each is about a specific problem. Once an intelligence gets good at solving a given problem there is no room for improvement, except perhaps getting faster. Once a calculator can add that is that. There is no such thing as "superaddition".

Some humans have more of these and some less. No human has all of them. Some contradict each other, so as you gain more members of one group, you lose functionality in others. It is certainly possible to imagine a computer that has more of these kinds of reasoning than any specific human, but the overall difference would not be huge. There is no such thing as superintelligence. Like I said, this is just speculation.

author

The well-reproduced psychological literature on the g factor directly contradicts your hypothesis: https://en.m.wikipedia.org/wiki/G_factor_(psychometrics)


Wikipedia says "... an AGI could learn to accomplish any intellectual task that human beings or animals can perform."

I guess that is as good as I can expect. But it is not what most humans using the term intend. Most usages seem to imply that intelligence is a single spectrum that stretches out forever. Like a giant IQ test. So that one can say, in theory, that an entity has an IQ of 1000.

author

Yep, not perfect. AGI refers to roughly human-level IQ, if you like the IQ measure; ASI to something like 1,000x human IQ. I've seen no one define them like this, but it makes the difference in magnitude clear.


What is a *good* definition of "Artificial General Intelligence"? Where can I find a useful discussion of the term? (Example: I do not consider drawing parallels to the difference between humans and ants useful.)

author
Nov 18, 2023

That was for superintelligence, which has been vaguely defined as an entity more intelligent than all of humanity, or as an entity millions of times more intelligent than a human, both of which are equally useless definitions. I did the ant-human thing for visual effect.

About AGI, I think you can find useful resources even on Wikipedia: https://en.m.wikipedia.org/wiki/Artificial_general_intelligence


Occam’s razor: how do we know this isn’t about some more basic issue like lack of trust between them as individuals? “Consistently candid” to me implies some sort of withholding of information.

author

"Consistently candid" can mean so many things that it's basically meaningless. On some level it *is* a lack of trust. But why the lack of trust? Trust in what exactly, in executing the mission? And why do they think he wouldn't be able to? Etc. The reasoning, if you know them for what they've said publicly, leads to something like what I propose here. I might be mistaken but "lack of trust" is too unspecific.


I read it as he lied to the board about something they considered too important to let go. I suspect Altman believed he could get away with it due to the astonishing success of the company under his leadership, but it provided Sutskever's faction the opportunity to consolidate control of the board. Perhaps Altman had agreed to specific actions in promoting OpenAI at the Dev Day event and reneged on his agreement?
