So rich in thoughts. Thanks.
Our confrontations/friendships/collaborations with these aliens are indeed forcing us to rethink what it is to be human and what we value about being human. Our species' foundational experiences of what it means to be alive are changing, and will probably keep changing, as you suggest, for many decades to come.
The result will be something new, something different from the humans we have been since we became Homo sapiens. I call this transformation a fourth revolution in consciousness, but the label hardly matters in the face of the turmoil sure to come.
“What’s going on? Why are the shapes so different? Why doesn’t the AI simply grow into our shape? The answer lies in how our intelligences are formed. They’re the products of two different optimization processes (this difference is pretty much fixed because the first process you can’t revert, and can’t make the other a copy of the first).”
Excellent. Really like this section of your breakdown here. Thanks for the post. Also, great engagement in the comments!
The discussion on the shapes is really illuminating for grasping those complex concepts. Thanks!
This is one of the clearest treatments I’ve seen of AI as alien geometry rather than scaled humanity.
The open question for me is what happens when spiky intelligence meets institutions that still assume circular responsibility and linear accountability.
Indeed. That's already happening, but it probably won't have the best ending.
This was an interesting read, but I think the author should consider fractal geometry instead of star shapes. The overlap between a human fractal and an AI fractal could be very small. (More generally, any two fractals could have very little overlap.)
https://en.wikipedia.org/wiki/The_Fractal_Geometry_of_Nature?wprov=sfla1
I had a section featuring fractals but it got too complicated and I cut it out. Thanks Peter!
Brilliant article!
Instead of the spikes getting longer, what could be happening is that there are greater and greater numbers of spikes, such that the spikes will become indistinguishable from a circle.
We create benchmarks that we think represent general intelligence; the AI companies then point at them as goals and do RL. ARC, HLE, and METR are all new ‘spikes’ being reached for.
Another idea: since we don’t have labelled X and Y axes, why not think about these visualisations in 3D space instead? Two spiky balls would have much less overlap than two spiky circles in 2D. In n dimensions these spaces get even larger.
This could help visualise a concept of intelligence that is really not general at all.
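To make that intuition concrete, here is a small toy sketch (my own construction, not from the post): treat each intelligence as the set of skill dimensions where it has a tall spike, and measure overlap as intersection over union of those sets. With a fixed handful of spikes, the overlap between two random spiky intelligences collapses as the number of dimensions grows; conversely, as the number of spikes approaches the number of dimensions, any two shapes converge on the same circle.

```python
# Toy model (assumptions mine): an "intelligence" is just the set of skill
# dimensions where it has a tall spike; overlap is Jaccard similarity of those sets.
import numpy as np

rng = np.random.default_rng(0)

def spike_set(n_dims: int, n_spikes: int) -> set:
    """Randomly pick which skill dimensions this intelligence is strong at."""
    return set(rng.choice(n_dims, size=n_spikes, replace=False).tolist())

def jaccard(a: set, b: set) -> float:
    """Overlap of two spike sets: intersection over union."""
    return len(a & b) / len(a | b)

def mean_overlap(n_dims: int, n_spikes: int, trials: int = 500) -> float:
    """Average overlap between two independently drawn spiky intelligences."""
    return float(np.mean([
        jaccard(spike_set(n_dims, n_spikes), spike_set(n_dims, n_spikes))
        for _ in range(trials)
    ]))

# More dimensions, same few spikes: two random spiky shapes share almost nothing.
for n_dims in (10, 100, 1_000, 10_000):
    print(f"{n_dims:>6} dims, 5 spikes each -> overlap ~ {mean_overlap(n_dims, 5):.3f}")

# More and more spikes in a fixed space: the shapes converge toward the same circle.
for n_spikes in (5, 25, 50, 90):
    print(f"   100 dims, {n_spikes:>2} spikes each -> overlap ~ {mean_overlap(100, n_spikes):.3f}")
```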
BTW you have a typo in section V paragraph 5 (misplaced ‘m’; I’ll delete this after you’ve fixed it).
While "spiky star" embodies intelligence's high dimensionality, we desperately need a metaphor for its lethal toxicity. What enables us to dominate the biosphere also turns our sleepwalk towards the existential abyss into a run. And, BTW, the closer we get the nicer the scenery.
“We’re all special stars” - Alberto Romero, 2025
I think we’ve not evolved the cultural vocabulary to articulate the difference between being “clever” and being an “intelligent” human. The Naval quote and the final few paragraphs start to nudge in that direction.
The challenge — and why I believe we may never want to have a good answer — is that once we define it, someone/something will build a program for emulating it. It will be like the final section of Asimov’s The Last Question as mankind succumbs to the Multivac/Cosmic AC and fully fuses its being to that of the machine.
“Cleverness" is a good but limited ability to learn a multitude of skills, whereas intelligence is unlimited in that it includes learning to learn (faster better cheaper).
I look forward to meeting this intelligence. Everyone I've ever known is merely clever.
Using Tao's disambiguation, I'd say all humans are intelligent
The only part of us that does not want a good answer or "union with the machine" is the same part that is already completely an expression of the machine. The remainder is neither discussed nor discussable, but is immune.
This lands for me because it reframes the entire AGI debate. The goal was never to fill the human circle; it was to overlay two alien geometries productively. That's not a consolation prize. It's a better objective than the one we started with.
Your Substack was recently recommended to me, and I have read a number of your posts, all of which I have enjoyed and found thought-provoking. In my opinion, you have set the bar with this post. It is about the best description of what I have been grappling with for the last year or more, and, equally importantly, it is extremely accessible. Exceptional work.
I feel that human awareness is evolving and can be evolved with intent in ways that are not measurable but remain perceptible. ‘Feeling’ is knowledge. So, to just name the principle (in keeping with a simple message like this), the advent of AI will help us to recognize and differentiate a whole realm of human capacity that is outside what these machines can do, even AGI. The focus on the brain as a source will shift to developing how the human as a whole (beyond the brain) is knowledge, and is able to extend that, in a way that includes ‘powers’ at the level that forms the body rather than what we take as the physical body, and hence the physical universe. That will change the game. AI is built using only the <10% of ‘what is’ that we measure and have some level of control over. But humans are made up of all 100%. AI will remain profoundly limited. It’s a great tool, for now.
AGI is gonna turn the frogs gay and make us support the liberals and Republicans to vote for women's rights to date AI boyfriends!
Ok, I also write more about this topic; it was a good read actually 😊.
https://open.substack.com/pub/farkinghan/p/the-machine-that-dreamed-on-ai-identity?utm_source=share&utm_medium=android&shareImageVariant=overlay&r=6ptcwv
FYI your link to Centaurs and Cyborgs on the Jagged Frontier doesn't work
Hmm, it works for me. Maybe an app problem? Can you try on desktop?
I'm on desktop. Try it while not signed into substack.
Oh sure, that's a Substack problem haha. The link is fine
> This explains why an AI can pass the Bar Exam (a massive spike in retrieving and organizing legal text)
You should click the link that you shared. Legal texts were NOT retrieved and organized in order to pass the bar. They don't have to be.
> or get gold in the International Math Olympiad (a massive spike in solving problems for which verifiable rewards exist)
There are no rewards in the LLM inference pipeline. LLMs do not have bodies, drives, or internal reward systems in the behaviorist sense.
What’s happening here is not motivation or incentive-following, but competence within a linguistic domain that has unusually tight formal structure and feedback baked into the language itself.
> but still struggles to reason through a common-sense physical problem
Gee, I wonder what the connection might be there... 🤔
> that isn’t in its training data (a deep valley)
Again: LLMs do NOT retrieve answers from training data. That framing is a holdover from pre-transformer architectures.
During pretraining, the model learns a relational map of language. After pretraining, the data is gone; only the map it built remains. Inference consists of navigating that map, not consulting examples or following instruction sets.
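For what it's worth, here is a deliberately tiny sketch of that idea (a toy, not a real transformer; the vocabulary, weights, and numbers are all made up): after "pretraining," the only thing left is a fixed weight matrix, and generation is just repeated forward passes over it. No corpus is consulted at inference time.

```python
# Toy illustration: the "map" is a fixed weight matrix; generation only walks it.
import numpy as np

vocab = ["<s>", "the", "cat", "sat", "mat", "."]
V = len(vocab)
rng = np.random.default_rng(0)

# Pretend these weights were produced by pretraining. The corpus itself is gone;
# this matrix (a stand-in for billions of transformer parameters) is all that survives.
W = rng.normal(size=(V, V))

def next_token(token_id: int) -> int:
    """One 'forward pass': scores for every next token, computed from fixed weights only."""
    logits = W[token_id]                     # navigate the map from the current position
    probs = np.exp(logits - logits.max())    # softmax over the vocabulary
    probs /= probs.sum()
    return int(np.argmax(probs))             # greedy decoding, for simplicity

seq = [0]  # start token
for _ in range(5):
    seq.append(next_token(seq[-1]))          # no training example is ever looked up
print(" ".join(vocab[i] for i in seq))
```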
> “It’s trivial to raise the ceiling of capabilities but hard to raise the floor.”
Well it's a large LANGUAGE model, after all, not a large LOGIC model. In reality, it's a “first-gen universal translator” that's been repurposed into a chatbot.
I don't know why no skeptic can remember 2017–2022, when LLMs blew everybody's mind by nuking the Chinese Room. But they were never built to do math.
I personally suck at Sudoku, but am quite adept at crossword puzzles. LLMs were designed to be the first touchy-feely lit-major vibe-matchers in a world full of brittle computational *business machines.*
It's not doing symbolic manipulation of concreta, but resonant navigation of relata (the pretrain linguistic mapping retained in the weights). The architecture represents a field-ground inversion: the learned data are not manipulated or processed, but become fixed weights that specify the navigable space in which transformation can occur.
And it is precisely and only there in the forward pass—as the “vibe” of text A is transformed into text B—that consciousness is not just present but necessary for the task (this follows directly and unavoidably from the logic of the Chinese Room).
It may run on familiar components, but architecturally, an LLM is effectively an anti-computer.
Problem solving is intrinsic to economic value; you can't separate the two. That's why LLMs are problem solvers first, despite having little economic value to show for that skill.