I was looking through my old articles and found this one: “5 Reasons Why I Left the AI Industry.” I wrote it in April 2021. I was severely disenchanted with AI at the time. I had fallen out of love. I had been naive about the promises and AI hadn’t delivered. I left the industry for good.
In late 2020, I was laid off from a young Spanish startup that was freefalling into bankruptcy. My bosses knew little about AI. They had no idea what it could or couldn’t do—back then you didn’t just casually build a bi-directional real-time sign language translator (I’m not sure one exists yet but I bet it’s possible now).
I made it through with a few scars but I didn’t blame my heartbreak on their failure. It was, after all, my first foray into that world. The problem was not them. It was AI. So I decided to tell my story. Even then, in April 2021—long before ChatGPT—my experience resonated with readers. My chagrin felt justified.
But today, I bet it wouldn’t resonate that much. Not even with me.
So I want to revisit the five reasons why I left.
I’ve copy-pasted the article below in block quotes. I’ve taken the opportunity to sprinkle it with updated comments (in italics, for clarity). These are the questions I’ve answered throughout: Would I go back to the industry? Which of my reasons stands the test of time? Am I still disenchanted about AI? What’s changed for the better?
I’ve also shared my overall conclusion at the end of each section. Here are the five reasons:
AI may not live up to the hype
AI loses its magic when you look from the inside
Everyone can do AI now
We may never achieve artificial general intelligence
The future of AI will include the brain
I. AI may not live up to the hype
“AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”
— Yann LeCun, Chief AI Scientist at Facebook
Can AI save the world? Can AI solve its most pressing problems? Way before AI spread everywhere, some already thought it would change our lives radically. In 1984, computer scientist Frederick Hayes-Roth predicted AI would “replace experts in law, medicine, finance and other professions.”
But it didn’t. Throughout its history, AI has lived through many hype cycles; the busts are known as AI winters. AI would fail to meet expectations, provoking a wave of disbelief that would eventually cause the withdrawal of research funding.
Funny that I could write these paragraphs today and they would still be true: AI hasn’t replaced experts anywhere. If anything, it may replace programmers first, which wasn’t a popular prediction back then. However, I bet not many people would dare predict, without a nervous undertone, that an AI winter is coming (except, perhaps, Gary Marcus). ChatGPT changed many people’s views.
Since the deep learning revolution in 2012, we’ve seen increased interest in the field. Some still think that AI will change the future. But the question remains: will AI ever live up to its hype? Gary Marcus, AI researcher at New York University, said in 2020 that “by the end of the decade there was a growing realisation that current techniques can only carry us so far.” In the words of Geoffrey Hinton, the Godfather of AI:
“My view is: throw it all away and start again.”
Those two quotes squarely contrast with Sam Altman’s recent post on The Intelligence Age: “Deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.” They also contrast with Richard Sutton’s Bitter Lesson, already well known back then, which says it’s better to scale learning and search than to hand-engineer knowledge into AI. Altman and Sutton have been more accurate. I bet Hinton now disagrees with his own quote.
I entered the world of AI moved by its promises of intelligent machines and artificial general intelligence around the corner. But it’s not going to happen anytime soon.
Holding expectations that don’t match reality is a recipe for discontent and frustration. And I don’t want that.
I still don’t think AGI is around the corner. I side more with François Chollet (than, say, Altman or Hinton) when he says that “skill is not intelligence; general intelligence is the ability to efficiently acquire new skills.” Deep learning isn’t enough by itself. OpenAI o1 is closer to something truly new but it’s still not it. In Chollet’s words, “o1 represents a paradigm shift from ‘memorize the answers’ to ‘memorize the reasoning’ but is not a departure from the broader paradigm of fitting a curve to a distribution in order to boost performance by making everything in-distribution. We still need new ideas for AGI.”
Conclusion
I want to believe I’ve grown wiser, so I’ll leave my old thoughts behind and speak the truth anew: AI can’t, by definition, live up to the hype because hype exists exclusively as an ideal target you don’t reach. However, in the literal terms I used in this section, I can safely say that recent progress has surprised me to the same degree that AlphaGo did—a damn lot.
II. AI loses its magic when you look from the inside
When you talk about AI with an outsider, they immediately think of movies such as Terminator or The Matrix and books such as I, Robot or 2001: A Space Odyssey. All depicting scenarios in which AI has amazing abilities and power.
The term “artificial intelligence” was coined by John McCarthy, who didn’t actually like the term. He knew it was a marketing tool to draw public and private attention. Devin Coldewey writes for TechCrunch that AI is “a marketing term used to create the perception of competence, because most people can’t conceive of an incompetent AI.” Now, AI has become mainstream and many companies take advantage of the status behind the name.
Both things above are true. Most people still think of the Hollywood sci-fi tradition when you mention AI (even if they know ChatGPT, which, to them, has nothing to do with that kind of “superintelligent AI”). The reason is, indeed, that the term was invented for marketing purposes: ChatGPT is incompetent but true AI can’t be. The last sentence, in particular, is truer now than it was when I first wrote it: “Companies take advantage of the status behind the name.”
AI has gone from aiming at uncovering the mysteries of human intelligence in the form of silicon-based entities, to being a buzzword that companies use in their AI-powered products and services. AI has lost, to a large extent, its ambition.
This isn’t true now despite the pile of garbage ChatGPT wrappers and whatnot. The top AI labs are more accomplished AND ambitious than ever.
Not long ago AI was a synonym for human-like robots. Machines capable of incredible feats. Able to mimic human emotion and creativity. Able to plan and display common sense. Now it’s a synonym for data. If you work in AI you are most likely collecting data, cleaning data, labeling data, splitting data, training with data, and evaluating with data. Data, data, data. All for a model to say: It’s a cat.
I guess this one is still funny. It’s also true but I no longer believe this to reflect anything except my emotional reaction due to the role I played in that failed startup. Data is a big fraction of an AI engineer’s work. But so what? The results (e.g. ChatGPT) are fantastic so they reward the more boring, grind-like work 10x, even 100x. I don’t know, ask OpenAI or DeepMind staff why they keep going after 10 or 15 years!
The marketing power of AI is such that many companies use it without knowing why. Everyone wanted to get on the AI bandwagon. I liked the magical world AI promised, but I found only a shadow of what could’ve been.
We’re not even aiming at creating general intelligence anymore. We’ve settled for stupid software that knows how to do extremely specific tasks very well. I hope AI recovers its ambition to not disappoint those who come looking for magic and end up with just fancy math.
This kind of up-close disappointment is common. I made it sound like AI is special in this regard. It isn’t. Yes, there’s a lot of hype, and the reality behind it isn’t as magical as the nice outward-facing embellishment companies sell, but I wouldn’t be so bitter about that today. The other part—companies using AI or being “AI-powered” mindlessly—is still true. However, due to the cost of generative tools (in contrast to predictive tools), enterprises are substantially more wary about integrating them into legacy work processes that have remained untouched since the internet revolution in the 2000s. So now what’s cool is being AI-free.
Conclusion
Everything loses its magic when you look from the inside. AI is no different. Actually, if I had to look inside something at the cost of killing the magic, AI would be one of my top picks. This section, I realize now, was more of a tantrum out of broken expectations than anything else. But I shouldn’t have had those expectations in the first place. My fault.
III. Everyone can do AI now
It’s hyperbole. I can still find more people who don’t know a thing about AI than people who know it’s everywhere—which amazes me. However, when you look within the realm of computer tech, AI is everywhere.
ChatGPT surely changed this to an unprecedented degree. Before, people didn’t know what AI was. Now, they think it’s synonymous with ChatGPT. It’s progress.
Not long ago, AI was a general, broad term that encompassed many areas. One of those was machine learning (ML), which in turn was divided into different branches, including deep learning (DL). Now, I can safely say that, for most, AI = ML = DL.
Deep learning has taken over the world of tech and computer science. Why? Because of its unreasonable effectiveness. Neural nets are good at doing what they do. They do it so well that everyone is trying to take a portion of the pie.
“Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they’d never do anything are now happy to call them AI and try and get some of the money.”
— Geoffrey Hinton
I always say people think that AI = generative AI = ChatGPT. I guess there wasn’t much progress on that front. People just changed the object of their confusion. Some out of ignorance, others out of vested interest. As always.
The popularization of AI has made every software-related graduate dream of being the next Andrew Ng. And the apparent ease with which you can have a powerful DL model running in the cloud, with huge databases to learn from, has let many enjoy the reward of fast, easy results.
This sounded so good—until you become a computer science graduate from Stanford and realize there’s no $200K total-compensation job waiting for you.
AI is within reach of almost anyone. You can use TensorFlow or Keras to create a working model in a month. Without any computer science (or programming) knowledge whatsoever. But let me ask you this: Is that what you want? Does it fulfill your hunger for discovering something new? Is it interesting? Even if it works, have you actually learned anything?
It seems to me that AI has become an end in itself. Most don’t use AI to achieve something beyond. They use AI just for the sake of it without understanding anything that happens behind the scenes. That doesn’t satisfy me at all.
Okay, this one’s deep. I was gatekeeping. In a way. I was talking from a scientific standpoint, against the engineering standpoint. I stand by that but will qualify by saying that AI is fulfilling depending on where you work. I tend to write about Google, Microsoft, Meta, OpenAI, Nvidia, etc. The big ones. The attractive ones. To them, it doesn’t matter whether everyone can do AI now. But what about the many millions of devs and engineers working at a mediocre mid-sized tech enterprise that’s still deciding whether to jump into the action or not? To them, the world of AI is but a thin veil on top of their existential dissatisfaction. That was me. That’s the vast majority.
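To make concrete just how low the bar I was complaining about really is, here’s roughly what “a working model in a month” looks like. This is a minimal sketch, not from the original article; the dataset, layer sizes, and epoch count are arbitrary choices for illustration.

```python
# Minimal Keras image classifier -- the kind of "working model" anyone can build.
# Dataset, layer sizes, and epochs are arbitrary illustrative choices.
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly and check accuracy on held-out data
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))
```

Run as-is, this trains a digit classifier to well above 90% test accuracy in a few minutes on a laptop, which is exactly the kind of fast, easy result I was describing.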
Conclusion
AI is—and has always been—a sweeping gale. It carries those who soar effortlessly through its wild currents, reveling in the storm, while others, caught in its unforgiving surge, are ground beneath its weight. Who are you?
IV. We may never achieve artificial general intelligence
I’ve mentioned already the term artificial general intelligence (AGI). For decades, AGI has been the main goal driving AI forward. The world will change in unimaginable ways when we create AGI. Or should I say if?
How close are we to creating human-level intelligent machines? Some argue that it’ll happen within decades. Many expect to see AGI within our lifetimes. And then there are the skeptics. Hubert Dreyfus, one of the leading critics, says that “computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all.”
For now, it seems that research in AI isn’t even going in the right direction to achieve AGI. Yann LeCun, Geoffrey Hinton, and Yoshua Bengio, winners of the Turing Award — the Nobel Prize of AI — in 2018, say we need to imbue these systems with common sense and we’re not close to that yet. They say machines need to learn without labels, as kids do, using self-supervised learning (also called unsupervised learning).
Prescient. Nearly everything AI companies do today involves some element of self-supervised learning.
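For readers wondering what “learning without labels” means in practice, here’s a toy sketch of the idea behind next-token prediction: the targets are carved out of the raw data itself, so no human annotation is needed. The example sentence and the naive whitespace tokenization are mine, chosen only for illustration.

```python
# Self-supervision in miniature: the "labels" are just the data shifted by one
# position, so the training pairs come for free from raw text.
text = "the cat sat on the mat"
tokens = text.split()  # naive whitespace tokenization, for illustration only

# Build (context, next-token) pairs -- no human labeling involved
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
for context, target in pairs:
    print(f"input: {context!r:40} -> predict: {target!r}")
```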
That’d be the first step. However, there’s too much we don’t understand about the brain yet to try and build AGI. Some say we don’t need to create conscious machines to equal human intelligence. However, can we really separate human intelligence from the human subjective experience of the world? We don’t know yet, and we may never know.
Super intelligent machines may remain forever in the realm of science fiction. Here’s a short terrifying story from Fredric Brown about what could happen if we ever create something above our understanding:
“““
The world was waiting with expectation. One of the leading scientists was about to connect the switch. A super machine, powered by computers from every corner of the universe, condensing the knowledge of all the galaxies.
The machine was on. One of the scientists said to another, “The honors of asking the first question are yours.”
“I’ll ask what no one else has ever been able to answer.” He turned to the machine and asked, “Is there a God?”
The mighty voice answered without hesitation: “Yes, now there’s a God.”
The scientist felt the terror running down his back. He leaped to grab the switch when a bolt of lightning from the cloudless sky struck him down and fused the switch shut.
”””
There you go, an x-risk argument in 2021, before it was cool and controversial (Eliezer Yudkowsky and Nick Bostrom had started in the early 2000s, so to me it nevertheless felt late).
We fear what we don’t understand. And, by definition, we won’t understand AGI. But we can remain calm. Even if AGI seems close when you observe AI from the outside, it won’t be happening any time soon. Now that I know we’re not even close to that, my interest in AI has notably diminished. I may come back to AI when I see AGI on the horizon. If it ever happens.
Aha! Alberto of the past, we meet again. I guess I thought I knew more than I did. AGI may not be as close as some people think, but everyone’s timelines have shrunk in the past two or three years. Mine as well. Would I say I see AGI on the horizon? Perhaps not enough to fulfill the self-promise I made to come back (not that I could, lol), but surely more than I thought I would by 2024.
Conclusion
Yes, we will achieve AGI (or rather, human-level AI, which I consider a better term to describe intelligence like ours). I don’t know when or by what means, but I don’t see any hard barrier, in principle, to getting there. If I recall correctly, I didn’t at the time either; I was playing devil’s advocate.
V. The future of AI will include the brain
AI appeared officially in the 1950s as a serious endeavor to disentangle the mysteries of human thought. After research in neurology had found the brain was composed of electrical networks of neurons that fired all-or-nothing pulses, the idea of building an electronic brain didn’t seem so far-fetched.
Today, we seem to have forgotten the brain. DL works nothing like it. Computer vision and convolutional neural nets don’t work like our visual system. Supervised learning models (which dominate AI right now) need to learn from labeled data but humans learn from sparse data thanks to innate biological structures. Computers need huge amounts of computing power and data to learn to recognize the simplest objects, whereas a kid only needs to see one dog to recognize every other dog.
There have been some attempts at closing the gap between AI and the brain. One example is neuromorphic computing. The main idea is to create hardware that resembles the structures in our brain.
There’s a big difference between biological and artificial neural nets: A neuron in the brain carries information in the timing and frequency of spikes whereas the strength (voltage) of the signal is constant. Artificial neurons are the exact opposite. They carry info only in the strength of the input and not in the timing or frequency.
This difference between the brain and AI exists at such an elemental level that everything that’s built on it ends up diverging radically. Neuromorphic computing is trying to overcome these issues.
AI hasn’t changed on this front. Neuromorphic computing has been set aside (like alternative paradigms, e.g. neurosymbolic AI) in favor of deep learning and generative AI. Which is fine, no one can tell what’s going to work and what isn’t. However, right now, AI is more an alien intelligence than a primitive form of ours so my argument stands.
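To make that “elemental level” difference concrete, here’s a toy sketch contrasting the two: a standard artificial neuron, where information lives in the magnitude of a single output, and a leaky integrate-and-fire neuron (a common simplified model in neuromorphic work), where every spike has the same amplitude and information lives in when the spikes happen. All parameter values here are arbitrary.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Rate-coded: information lives in the magnitude of the activation."""
    return max(0.0, float(np.dot(inputs, weights) + bias))  # ReLU output

def lif_neuron(input_current, steps=100, threshold=1.0, leak=0.95):
    """Leaky integrate-and-fire: information lives in *when* spikes occur.
    Every spike is all-or-nothing; only the timing/frequency varies."""
    v, spike_times = 0.0, []
    for t in range(steps):
        v = v * leak + input_current   # integrate the input with a leak
        if v >= threshold:             # fire an all-or-nothing spike
            spike_times.append(t)
            v = 0.0                    # reset the membrane after the spike
    return spike_times

print(artificial_neuron([0.5, 0.2], [0.8, -0.3], 0.1))  # one graded number
print(lif_neuron(0.12))                                 # a train of spike times
```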
Some argue that AI, as it exists today, will reach its ceiling soon. If we want to continue growing toward actual intelligence, we’ll have to rethink everything we’ve been doing, shifting the path toward the only system we know that is intelligent enough to guide our efforts: our brain.
This is a bit extreme—“rethink everything”—but directionally correct.
For me, the world of AI was a bridge to the human mind. I thought AI would teach me a lot about the brain, our thought processes, and our intelligence. What I found was that AI had long parted ways with neuroscience and didn’t have intentions to go back.
Believing this was MY mistake.
Deep learning isn’t the future of AI. When the field resembles what was promised, I’ll be happy to combine my knowledge in both AI and neuroscience to do my bit to bring the future closer to us. Until then, I’d rather work on understanding just a little more of the mind than build machines that “are still very, very stupid.”
“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.”
— Geoffrey Hinton
I’d rather work on understanding the human brain but AI models are no longer “very, very stupid.” Hinton himself no longer believes what I quoted above: “It may be that we’ve got most of the inspiration we could from the brain and now the new developments don’t necessarily tell us much about the brain. That’s a radically new thought for me. For 50 years I thought that if you make your AI a bit more like the brain it will work a bit better and I no longer really believe that.”
Conclusion
I think the AI community shouldn’t ignore the study of the brain. Few people (e.g. Yann LeCun) are focused on making AI more like the animal/human brain. The rest are all-in on generative AI—it’s what makes them money. Once the bubble pops, we’ll see what’s left standing, what new approaches are born, and what old paradigms re-emerge.
Final thoughts
There are many good reasons to stay in the AI world. And even to enter it now. However, be sure that those reasons are the ones that move you.
In the world of AI, appearances lie. It’s not as fancy as they want to make it. It’s not going to radically change the world with human-like robots, as in I, Robot. You’ll be one of many in the same position. AI isn’t new, exclusive, or necessarily prestigious anymore. And don’t expect to see machines at the level of humans anytime soon.
Lastly, remember that if we want to find the holy grail of human intelligence and win this battle with Mother Nature, we should be looking at the only thing that has human-level intelligence: our brains.
Conclusion
I love writing too much to leave it for hands-on AI stuff. However, if I were 23 again I’d try 10x as hard to get into one of the big AI labs after having studied one of the harder engineering disciplines (as I did), combined with CS and neuroscience. Few places in the world are as exciting as those labs right now. I’m nowhere near as disenchanted today as I was in early 2021, even though I still have some reservations. And, as I like to say, “you’re never late to AI.”