You Spent Your Whole Life Getting Good at the Wrong Thing
Notes on AI agents
👋 Hey there, I’m Alberto! Each week, I publish long-form AI analysis covering culture, philosophy, and business for The Algorithmic Bridge.
Paid subscribers get Monday news commentary and Friday how-to guides. Free essays weekly. You can support my work by sharing and subscribing.
This is an extra post—working on a Saturday, how lame!—where I share my notes and impressions on the most important question for knowledge workers right now: how should our approach to work change now that we have powerful agentic AI tools?
I was reading an article by an OpenAI employee about how he uses the new Codex model to become ultra-efficient, and one paragraph in particular struck me, as a writer (non-coder), as deeply relatable. Paraphrasing: coding agents are so good at data analysis that the bottleneck now is figuring out what to analyze.
I’ve been thinking about this shift a lot lately. The whats vs the hows. That’s where AI tools and agents and whatnot are, to me, having the most impact.
I want to share my experience and perceptions and see if you guys agree and what your own impressions are. What follows are my notes on the topic, rather unpolished (sorry about that), but hopefully readable.
The most powerful AI tools today (Codex with GPT-5.3 and Claude Code with Opus 4.6, recently launched) collapse the process of doing things inside a computer into basically a wish. That is the standard view in my online circles. But when you put it this way, you realize that wishing well is more critical than you’re used to thinking.
The way I imagine the extreme case of this, which helps me visualize the core shift and remove the meaningless details, is like a genie lamp or a one-use teleporting device: Oh, look, a lamp. Can I make a wish? But what do I wish for? What do I want from life? Or: if I had a device that could get me anywhere on the planet, where would I go? Where do I want to go if money, time, etc., were not a problem?
This comes down to the idea that you already know what your ideal life looks like, but you insist on not picturing it (out of fear, habit, etc.).
There are various forms of this: “If you had 10x more agency, what would you be doing right now?” You know the answer, you just don’t imagine yourself as the person with 10x your agency and so you don’t become that person.
There’s also that Warren Buffett anecdote: “If you could invest in a friend and get 10% of their earnings for life, who would you invest in?” What traits made you decide? And then: why not embody those traits yourself? Etc. Etc.
The idea of thinking about “single-use” magical objects is that they invert the effort allocation 100%: the “how” is fully outsourced. How does the genie get you a billion dollars? How does it make you extremely handsome? You don’t care, you don’t want to know. So, you automatically realize that your mental effort must now be fully devoted to the complementary question: what do you want?
This is, of course, an exaggeration of what AI does, but—and this is the fundamental insight—only in degree, not in kind. AI is the closest thing in the world to a genie lamp.
I think people will have a hard time making the mental shift because we’re not used to thinking for long about the whats. The hows take so much time and effort and resources in normal life that we intuitively assume the whats have a negligible impact on our lives.
For instance, consider your career. Did you ever wonder about what you wanted to do for a living? Or what you wanted to see more of in the world? Certainly, at some point, when you were younger, you thought this way. “When I grow up I want to be…” Now: are you? Are you what you dreamt of? Certainly not most of you. Life gets in the way, right? The real world forces your paths to adhere to the constraints it imposes on your desires: labor market, career options, salaries, geography, language, viability, stability, etc. etc. The question of “what do I want to do?” changed, sometime in the forgotten past, to “what can I do?” The first one assumes your options are infinite: what you want, you can. The second accepts that that’s not true: the how compresses your life to a bunch of, more often than not, lesser choices. And then you choose among those. Your wish wasn’t granted.
I am not saying that AI allows you to stick loyally to the “what do I want to do” question, but the obstacles between you and your dream life are far less severe now if you allow yourself to embrace this shift. Or, at least, if you keep an open mind about exploring it. In my experience, it helps to have a disposition to believe and the ability to demonstrate it to yourself. (Two caveats: one, this applies to office work, not necessarily other kinds of work, like manual labor or, say, careers where you need to be there, like nursing. Two, I don’t want you to trust me; go ahead and explore. These are essentially notes for myself that I’m sharing publicly in case someone finds them useful or interesting.)
The belief that doing takes more resources than deciding what to do has been the default operating mode for basically all of human life. The how has always been so expensive that the what barely matters. You didn’t need to be good at wishing because you were never going to get most of what you wished for anyway. That’s why “default to action” (vs planning or reflection) is such good advice. Now I’m not so sure. (Action matters just like agency matters but when you can do anything, what to do becomes relatively a much more important consideration.)
So while the “how” is collapsing for OpenAI and Anthropic engineers and developers and also a good chunk of Silicon Valley nerds and a much smaller chunk of office workers around the world, most people have not realized this is happening. Some reject the idea outright, which is respectable. But most have simply not given it a thought. The crazy part is that I, a random writer based in some Spanish city, have access to the same magical devices that Sam Altman and Dario Amodei have (I assume they have better things internally but presumably not that much better).
And yet people are walking around with a genie’s lamp in one hand and a teleporting device in their pocket and still spending 99% of their time and effort and thoughts on the how.
One trick I’ve found useful as a non-coder is to realize that life is filled with software-shaped problems that we, non-coders, don’t even notice because for us the how is simply unfeasible, so we skip over the whats as if they don’t exist. Even for coders, most of these problems were, until not long ago, not addressable due to constraints of time, resources, and money. Only now do they realize that they can simply do so many of the projects they always wanted to do but couldn’t. We, non-coders, can do the same thing. We only need to go through the additional step of learning how to recognize software-shaped problems. Doing them, in turn, is now mostly trivial. A matter of wanting to.
But what terms can we use to concretize this abstract, nebulous “how” vs “what” framing? What is the “what” made of? What skills matter the most in this new paradigm?
I’ve heard people talk about this shift in terms of taste, or judgment, or decision-making or agent management or even curiosity and imagination. For a while, I thought these were all the same thing in various disguises—names for “the part humans still have to do after the shift; the part humans have to do more of after the shift”—but I think they’re actually different skills that all became load-bearing at once.
Taste is about selection (recognizing quality among options: you can have infinite lines of code or sentences written on your behalf, but if you only need one, aligning what you like with what’s good is key).
Judgment is about evaluation (weighing trade-offs under uncertainty: if you don’t know which pitch is going to do better, what’s the worth of automating the process of writing them?).
Agency is about initiation (deciding to act at all and in what direction: you have the lamp, but you have to make the wishes. You have Codex and Claude Code, but you have to start).
Decision-making is the process that integrates the previous three, and management is the social coordination of other people’s decisions. Or, in this case, coordinating your swarm of agents. If you have 15 agents doing different things, you will experience as much workload as if it were just you doing one thing. Solving this is 1) not trivial and 2) a different skill.
Curiosity is perhaps the seed skill: if you are not curious about what AI is and what it can do, you can forget about all the others. I’ll go further: if you weren’t curious, you wouldn’t be reading this. (Which, by the way, means you are.)
All these skills share a family resemblance—they’re all “what” skills rather than “how” skills—but they’re not interchangeable. Someone can have extraordinary taste and zero agency (the critic who never creates). Someone can have strong agency and terrible judgment (the founder who moves fast toward the wrong thing). Someone can have all the curiosity in the world and zero judgment (the vibe-coder who is handling 10 projects at once but none of them will have any impact on the world). Etc.
All these skills are also well-known to those who have dedicated time to thinking about these matters, but for the rest of us, they were all invisible before because the “how” bottleneck was sitting in front of them like a boulder blocking a cave entrance. Now the boulder is rolling away and it turns out there’s an entire stack of capacities behind it that most people never developed because they never had to.
Now, if you’ve come this far, I can assume a few things: you have the predisposition to believe me when I say we’re living through a paradigm shift, you accept my analysis of what skills matter (probably with reservations, that’s fine), you are curious to try the tools to their full power, and you are willing to adapt yourself. But you’re not feeling as excited as my words should make you, right? You are wondering why it feels so hard. If so, you are on the right path. That’s exactly how this paradigm shift should make you feel. If you don’t feel a bit of vertigo, you’re not going far enough.
This is my attempt to address these concerns (reflecting on myself), and with that, I close this article that’s already too long.
First, there’s a self-image problem (which I also think of as a “vocabulary” problem): Our language for value and worth is built around execution. “I’m productive” means I’m good at executing. “I worked hard today” means I put effort into my tasks. So when execution gets cheap, it doesn’t feel like progress. It feels like your skills are becoming worthless. The thing you spent years getting good at is now a commodity.
Related: It also feels like you are not doing anything. We have internalized contempt toward product managers and those kinds of organizational roles because they are too abstract. What do they do exactly? What are the deliverables? Their work is invisible but fundamental and only now, when we manage fleets of agents, do we start to realize that not only does this work matter but that our self-image must change.
The immediate consequence of our fear of feeling “worthless” is avoidance. That’s the second problem. People use AI tools for small, safe things because they don’t know the extent of their power, but also because it’s terrifying to realize firsthand that everything you ascribed value to can now be automated. That the skills you honed for years, even decades, are trivial for this machine that not only is faster than you but genuinely better at what you do. (I have yet to think this way about my writing, but I’ve accepted it’s a matter of time.)
So people fix their email writing. Or make summaries or templates or whatever. This is comfortable because 1) it’s a trivial task that anyone could do, 2) you don’t feel useless, and 3) you don’t have to confront the reality that there’s an entire world waiting for you in the “whats” if only you dare outsource the “hows.”
The solution to both of these is to shift internally to the same degree that things are shifting externally. Valuing curiosity and taste is a matter of starting to think more about them and about the immense importance they’ve always had but that we didn’t notice, because we were busy doing stuff that wasn’t needed anyway.
I think this is a good point to close. This is far from a “coding” thing. I am a writer, and even if I didn’t want to write with AI (which is a completely different story), I’ve found immense value in allowing my mind to be open to this paradigm shift in how we approach our work and, well, how we approach our lives.

This post rubbed me the wrong way, not in an “I hate AI, how dare you” kind of way, but in the way of “this can’t be right but I don’t know why.” I think it’s because, as human beings, our main job is maintaining homeostasis, or to put it in physics terms, maintaining our own complexity. That is the true battle that life faces: spectacular organization in the face of entropy and chaos, and that’s at the base level, before we consider the bloodthirstiness and destruction of our own species. What humanity wants to do is build even more complexity, and AI tools help with that. Yes, that takes energy. As the data center energy wars show, it takes a lot of energy. But that’s not the only thing it takes. It takes effort. Learning takes effort. Building muscle takes effort. Construction takes effort. Societies take effort. Yes, we can minimize that effort with the tools we invent, and we have been doing that forever, and we’ve done it again with AI, but imagining this life where we just dream stuff up and then it happens… I just can’t help thinking, life doesn’t work like that, not for literal carbon-based lifeforms. AI agents also introduce a lot of entropy into the system, which is what you could call hallucinations, again from a physics perspective. Fighting entropy is hard. We’ve gotten really good at it and built ever more complex systems, but they are also ever more likely to be subject to entropy the more complex they are and the more energy they take to maintain.

I’m almost done with school and don’t use any AI agents, not because I’m on some crusade, but because I’m aware of the value of hard work and of the increasing complexity of my brain through the search. The difference has become more and more stark with the classmates who don’t agree. I can visibly watch them fail to problem-solve or to come up with a good idea, spitting out from a chatbot the absolute average and obvious, because that’s all they can put in. I recognize school is a fake world where the point is to challenge yourself, and why shouldn’t we make real work easier, but I just don’t think we’re ever going to get to easy. Entropy is too large a force and complexity is too vulnerable for all of us to be skipping around dreaming up whatever we want without degrading the inputs we add to AI, because our brains are literally wasting away, or spinning out a ton of entropy and chaos, because more and more complex systems require more and more energy, which by definition means they are going to be more and more entropic and chaotic.
Thanks for writing this, really thoughtful. Curious how you think about going deeper from "whats" into the "whys"?