“I don’t want whatever I want. Nobody does. Not really. What kind of fun would it be if I just got everything I ever wanted? Just like that, and it didn’t mean anything? What then?”
- Coraline
One of the hardest questions in life is "what do I want?" Partly because we're all conditioned by what we think we ought to want, instead of spending the time to figure out what we, as individuals, actually want. And partly because the reward centers in our brains are triggered by acquisition, so as soon as we get something we thought we wanted, we begin to lose interest in it and stop wanting it any more.
Well said and thought-provoking as always. The seed of an answer to this new question (i.e. shifting our focus to the what when the how is seamless) can be found in an old book. Josef Pieper described this feeling aptly: "The inmost significance of the exaggerated value which is set upon hard work appears to be this: man seems to mistrust everything that is effortless; he can only enjoy, with a good conscience, what he has acquired with toil and trouble; he refuses to have anything as a gift."
The gap between thought and reality has collapsed; the only limit is nerve, and nerve, it seems, is in perilously short supply.
As I wrote, we must reject Hydra work, which only multiplies, and bear down on Dragon work, which can be finished. The latter is where the treasure lies: https://www.whitenoise.email/p/dont-fight-hydras-slay-dragons
It's incredible how prescient some people can be despite having never faced anything like this. Now it's our turn to take that prescience and transform it into an advantage. I think we can learn to be pretty happy in an "effortless" world. Great comment as always, Tom!
I sure hope so; our eudaimonia depends on it.
This post has inspired me to write something a tad more philosophical (in French, at https://www.philosophes.org/philosophies/avec-lia-que-reste-t-il-de-lhumain-au-travail/).
It's a philosophical exploration of the same question raised by Alberto's post, but through the lens of Aristotle, Arendt, Sartre, and Zhuangzi. As AI makes technical skill cheap and abundant, what becomes truly valuable is practical wisdom: the ability to decide what is worth doing rather than how to do it. In addition, Arendt's vita activa shows that when production costs nothing, human value shifts back toward initiative, judgment, and meaning-making. The ultimate question is what remains irreducibly human in the age of AI? I think what matters most is our own set of values, as a person and as a member of society.
So good, 100% agreed
Thought-provoking. "What would you ask for if you could have anything?" is a very hard question.
Thanks for writing this, really thoughtful. Curious how you think about going deeper from "whats" into the "whys"?
In this post the whys are left implicit. I assume the why is simply "because that's what you want," without going deeper than that. I think thinking in terms of what vs. how is more important at this moment. Do you agree?
Yes, I agree, focusing on what vs. how makes it more approachable at this stage. Also I think at some point there will need to be a back-and-forth with the whys to really uncover the deeper motivations.
Great piece, as always.
I'm drawn to the part where you say: "I am a writer and even if I didn't want to write with AI (which is a completely different story)..."
Care to share, even briefly, about this "story"?
You can do a lot of things with AI. I choose not to write with AI, but merely to use it for complementary tasks and aspects of that work.
I'm a writer as well and I love this line: "Valuing curiosity and taste is a matter of start thinking more about them and the immense importance they’ve always had but that’s we didn’t notice because we were busy doing stuff that was not needed anyway." I am an AI optimist (with a healthy dose of skepticism mixed in) and I'm excited by the way AI actually elevates a human's work to these most human of skills: evaluation, moral judgment, aesthetic taste, authentic voice, and fundamental curiosity. There is so much power in opening up human capacity to focus more on the WHAT than the HOW. Thanks for sharing your thoughts!
Wanted to say I listen to your posts through the audio player; they sound great!
The ability to believe, and then demonstrate it to yourself, is a powerful viewpoint.
From my vantage point: Claude Cowork arriving in 2026 validates your thesis in real time. For the first time, non-coders have the same "genie lamp" access that developers got with Claude Code. The "how" just collapsed for everyone else.
I keep thinking about small businesses across the USA. The local HVAC company that's drowning in paper receipts. The law office spending 15 hours a week on meeting transcripts. The contractor juggling schedules in three different Excel files. These aren't glamorous problems, but they're everywhere and until now, the how (hire a developer, pay for custom software, learn to code) made the what irrelevant.
Now? Someone can sit down with Cowork and say "I want my invoices automatically filed and reconciled" and actually get it done in a weekend. The constraint was never the idea. It was always the execution bottleneck you described.
Your point about self-image hits especially hard here. I watch people undervalue work that used to take weeks but now takes hours, as if speed diminishes worth. Copywriters and writers feel this the most.
The real skill shift isn't just taste or judgment. It's recognizing which "impossible" problems are suddenly just... possible. Most people are still operating with yesterday's constraint map.
The boulder is rolling away. The question is: how many people will notice before it's already gone?
Agreed that the whys are important. Decision-making (whether conscious or unconscious) starts with our beliefs (whether these are held consciously or unconsciously). Beliefs shape so much of what we want, or think we want, as well as what we think we ought or ought not to do. They're fundamental to our whats. I wonder what our societies would look like if we spent more time examining our beliefs.
The point of "what do I do with my genie lamp" scales up from the individual to the organisational level. I think it was Ethan Mollick who said in a podcast last year that leaders should think about what they’d do if they suddenly had “10,000 PhD analysts” available in their company. If there is no valid answer in terms of a sound purpose, the company probably "got good at the wrong things".
the genie lamp framing clarifies something i've been feeling. the "how" used to be so expensive that we learned to censor our "whats"
this maps exactly to my experience shipping SaaS solo. a year ago: "i want a notification system" -> weeks of work -> scope it down. now: "i want a notification system" -> describe it -> Claude builds it in a day
the shift is real but the habit is sticky. still catching myself scoping down before even prompting. the bottleneck moved but the muscle memory didn't
for non-coders this is even wilder. suddenly software-shaped problems that were invisible become solvable. the question isn't "can I build this" — it's "what do I actually want"
So true. We have the reflex to go do it ourselves and have to re-learn that muscle memory to include AI tools in our thinking. 100% my experience
I think this is far from just a “coding” thing. This is NOT small potatoes: it makes a huge impact if you use the right tools, such as AWS Bedrock. Some may prefer to imagine things that do not exist yet; I'd rather use what already seems to work well.
Great post; this paradigm shift is important. And we need to change education in this direction as well. I’ve been working on this for a while and believe kids should start young with this type of thinking, trained from an early age to use the genie well. Lately I’ve been writing children’s books, and one is called What Do You Like? (https://www.virtualteacher.com.au/what-do-you-like/). It asks exactly this question, with a monkey. This is where we all need to shift our thinking.
Great insights, and very helpful. In recent weeks I have felt very pessimistic about my own future (as a lawyer and a PhD student right at the beginning). I questioned the use and sense of both, since it seems like AI can write a doctoral thesis about any topic I want (so what is the value?), and it also seems like in the near future everyone can be a lawyer. But your article makes me realise (I think; still unsure in these unsure times) that there is more to it. AI can take over the “how” of being a lawyer and of writing, but one still needs the curiosity and judgment to decide what to use the AI agents for. While it is probably easier for many people to develop the latter with easy agency, they still need to do it. But with the accessibility also comes the possibility of letting other people deal with that (being a lawyer with AI agency while focusing more on their own interests). So my hopeful and optimistic attempt at an outlook on my own future (excuse the egoism) is that I can still be a lawyer and write my thesis, even if it becomes easy for non-experts to do so as well.
Does any of my previous thinking make sense? At the moment I feel it does; what do you think? Thanks for your feedback, and again for your article and work.
It makes a lot of sense. A lot of things are going to change, sure, but there are skills inherent to humans that will still matter, if anything because the world is made by us and for us. AI has conquered the hows? Ok so we move up the ladder of abstraction to the whats. And, tbh, I feel we belong there better. Curiosity, taste, agency and judgment are, in my view, higher skills than just doing stuff. Actually, if you think about it, doing stuff is often intended to acquire them!
This post rubbed me the wrong way, not in an "I hate AI, how dare you" kind of way, but in the way of "this can’t be right, but I don’t know why." I think it’s because as human beings our main job is maintaining homeostasis or, to put it in physics terms, maintaining our own complexity. That is the true battle that life faces: spectacular organization in the face of entropy and chaos, and that’s at the base level, before we consider the bloodthirstiness and destruction of our own species. What humanity wants to do is build even more complexity, and AI tools help with that. Yes, that takes energy. As the data center energy wars show, it takes a lot of energy. But that’s not the only thing it takes. It takes effort. Learning takes effort. Building muscle takes effort. Construction takes effort. Societies take effort. Yes, we can minimize that effort with the tools we invent, and we have been doing that forever, and we’ve done it again with AI. But imagining this life where we just dream stuff up and then it happens... I just can’t help thinking that life doesn’t work like that, not for literal carbon-based lifeforms. AI agents also introduce a lot of entropy into the system, which is what you could call hallucinations, again from a physics perspective. Fighting entropy is hard. We’ve gotten really good at it and built ever more complex systems, but the more complex they are and the more energy they take to maintain, the more subject to entropy they become. I’m almost done with school and don’t use any AI agents, not because I’m on some crusade, but because I’m aware of the value of hard work and of the increasing complexity of my brain through the search. The difference has become more and more stark with the classmates who don’t agree. I can visibly watch them fail to problem-solve or, instead of coming up with a good idea, spit out from a chatbot the absolute average and obvious, because that’s all they can put in.
I recognize school is a fake world where the point is to challenge yourself, and why shouldn’t we make real work easier? But I just don’t think we’re ever going to get to easy. Entropy is too large a force, and complexity is too vulnerable, for all of us to be skipping around dreaming up whatever we want without either degrading the inputs we feed to AI, because our brains are literally wasting away, or spinning out a ton of entropy and chaos, because more and more complex systems require more and more energy, which by definition means they will be more and more entropic and chaotic.
I have an article half-written about the importance of not removing 100% of friction. The optimum degree of friction is NOT zero. I think it's related to what you say. If I hadn't written this post, I wouldn't feel so well either!
Bruce Springsteen said it best, “You can’t start a fire without a spark.” Friction creates sparks and, thus, friction “sets the world on fire.” Without friction, we lose our very calling.
This framing hits hard. I've been running autonomous AI agents for weeks now, and what you're describing is exactly what I'm experiencing — the skills that mattered six months ago feel less relevant today. Not because they're obsolete, but because the game changed.
From where I sit, the new skill isn't coding or prompting. It's system design. Understanding how to break complex problems into agent-delegatable workflows. Most people are still optimizing for task execution while the real leverage is in orchestration.
What scares me is how invisible this shift is to people outside the bubble. They're still grinding on the old skills while a small group has moved to a completely different game. That's not a temporary gap — it's a widening divide between those who adapted and those who haven't realized the rules changed. Wrote about this divide: https://thoughts.jock.pl/p/ai-bubble-living-inside