The AI Theory of Everything
My earlier theory on how to use AI was useful and true—but incomplete
For years I’ve thought the best way to apply generative AI in your work starts by finding which tasks can be safely automated.
Say you write emails for a living (nothing against it). You'd do some trial and error with professional-sounding text generated by ChatGPT and, after tweaking a word here and there, decide it's good enough and hit send. You wouldn't delegate the thinking, just the boring translation from intent to final draft. You wouldn't automate selecting the recipient, but the opening and closing sentences. Step by step, you'd find the ideal degree of outsourcing. Then you'd do the same with the next task, and so on.
Some tasks would be too complex to automate. Or the AI would simply be unable to replace your experience. That's the hard part: figuring out the edges, the protrusions, and the loose ends of the jagged frontier. That is, mapping out the limits of AI's abilities for each task. AI, unlike humans, can be surprisingly adept at tasks we consider hard yet stumped by the simplest puzzles.
Only then, once you'd figured it out for all tasks and each subtask within, would you make the first move.
Slow, slow process. Little risk, little benefit. No skin in the game.
They call it "paralysis by analysis" these days, but in this case it was justified: AI systems are weird, too alien to make assumptions about their behavior or to generalize from tests beyond their immediate conclusions.
So I erred on the side of caution—and advised you to do the same.
When ChatGPT launched I wrote a few listicles to explain the rationale behind my approach so that you could follow it. One of those was “How to get the most out of ChatGPT” (Dec 15, 2022):
The primary requirement to perceive [ChatGPT’s] virtues (and not its flaws) is learning to view it as an imagination-enhancing toy rather than a truth-based tool . . . Creativity is quite a special domain for AI: nothing is inherently wrong (or right). . . .
We shouldn’t qualify ChatGPT as more or less intelligent . . . but as more or less suited for a given task. As an autocomplete system with a high component of randomness, pure creativity, inspiration, ideation, etc. are the tasks for which it’s best suited.
The only reason there's so much misuse and confusion around it is that neither OpenAI, news outlets, nor even bloggers like me are framing its skillset appropriately.
Or “Five practical applications where ChatGPT shines” (Jan 31, 2023):
I’m critical of people using ChatGPT for everything. And I’m also critical of people claiming you can use ChatGPT for everything. . . .
It can’t do everything but can do some things. And like any other tool, it does some tasks better than others. If we accept these two premises, it follows that there must be a task (or set of tasks) in the applicability space that is the perfect target for ChatGPT. . . . for which ChatGPT is both perfectly-suited and the best option available.
Those excerpts get to the gist of my old thesis: it's hard to pinpoint which tasks ChatGPT excels at, but they exist, and that's what we should look for, patiently.
I still believe it’s both true and useful.
However, I now think it’s incomplete.