AI's Hidden Unpayable Intellectual Debt
Technology is not at our mercy; we are at the mercy of technology
A terrible deficiency afflicts the AI systems of the world. The recent CrowdStrike outage can help us understand why.
The breakdown affected 8.5 million devices, exposed the danger of concentrating control over critical technology in a few hands, and revealed how brittle the foundations of our interconnected digital world are.
It also showed us the path forward: We must become more resilient to potentially catastrophic systemic failures.
For that, we need two things. First, a better grasp of the great gift of technology. We keep building, but our ability to make sense of what we build remains constant. The world’s complexity grows. Our intelligence doesn’t. Second, we need to handle this gift better: Tech exists subject to financial incentives, often misaligned with safety and security. We’re better at uncovering the wonders nature tried to conceal through the forces of entropy than at protecting them from greedy corruption.
Understandably, the CrowdStrike failure prompted the AI community to address this concern from within its sphere.
The relevant warning isn’t new, but it often falls on deaf ears: do we want to integrate AI into critical infrastructure or high-stakes services when no one knows how it works? The question gains unexpected urgency after we’ve experienced firsthand the profound impact of a global collapse precipitated by risks everyone prefers to ignore.
The obvious solution is to do what science does: fail in order to learn.
But can we decide by trial and error what we should and shouldn’t do with complex AI systems, as the Wright Brothers did? In a sense, yes. We design spam filters and they work fine. You try, you learn. Eventually, we may also discover that building mechanical arms to play chess is a bad idea. So we don’t do it anymore. You fail, you also learn.
The bad news is that’s not always the case with AI. The key difference between what happened at CrowdStrike (or what the Wright Brothers achieved for the miracle of aviation) and what could happen with AI is this: you try, you fail, and you learn pretty much nothing.
The outage occurred despite engineers knowing well how the software works. It wasn’t a trial (they weren’t trying to learn anything), just a big error and a bigger lesson. Fine. It was unintentional, but the feedback loop was intact, so they quickly caught the mistake and can avoid repeating it.
With modern AI it goes rather differently: we don’t know how it works. Bad. We don’t know what we don’t know. Worse.
What this epistemic puzzle means in practical terms is that errors in AI systems provide a weak signal because the feedback loop is short-circuited: we don’t learn much since we don’t know what’s going on inside in the first place. You can’t debug a neural network.
That chess robot arm I mentioned above broke a 7-year-old’s finger. We don’t know how it concluded that was the right decision (“the kid moved too fast” is a PR excuse). We don’t know what to do to prevent it from happening again (except dismantling the thing). Extrapolate that to the real world: instead of broken fingers, we have autonomous weaponry that selects its targets on its own, without human control or supervision.
With AI, they’re doing lethal trial and error blindfolded.
There are precedents for this mindless approach to complex technology (you know, break things if it makes you money). They reveal the kind of bottomless pit Silicon Valley is getting us into by treating AI systems as if it understood them.
In 2019, Jonathan Zittrain wrote about those precedents in an illuminating New Yorker essay. He concluded with a prescient remark about AI:
Much of the timely criticism of artificial intelligence has rightly focussed on the ways in which it can go wrong: it can create or replicate bias; it can make mistakes; it can be put to evil ends. We should also worry, though, about what will happen when A.I. gets it right.
He wasn’t talking about job losses or superintelligence ruin. He was referring to the “intellectual debt” we incur when we create and deploy black-box software everywhere. He was alerting us that we’re playing with a stolen gift from the gods without permission:
This approach to discovery—answers first, explanations later—accrues what I call intellectual debt. It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later. In some cases, we pay off this intellectual debt quickly. But, in others, we let it compound, relying, for decades, on knowledge that’s not fully known.
This backward approach is common in drug discovery and medicine (think anesthesia or deep brain stimulation). It’s “built into the process,” Zittrain says. Those are welcome precedents. However, it’s unusual that AI—an invention we design and build, not a discovery—falls under this category. Not everything is equally bad with AI, though:
If an A.I. produces new pizza recipes, it may make sense to shut up and enjoy the pizza [perhaps not]; by contrast, when we begin using A.I. to make health predictions and recommendations, we’ll want to be fully informed.
Not all technological problems incur serious intellectual debt, but all unpaid intellectual debt eventually becomes a profound social problem.
If the AI industry were wise, they’d look for ways to pay their most serious dues. But dumb money rules. It leads them to short-term selfish gains at the expense of long-term collective benefits. Sometimes we’re lucky, those two are aligned, and the Wright Brothers happen. Sometimes we aren’t, and CrowdStrike happens.
Sometimes it’s something else.