The Algorithmic Bridge in 2022
A review of the first (half) year at TAB: Statistics, summaries, and a warm thank you!
2022 has been an amazing year for me and I wholeheartedly hope it was for you, too.
I started TAB as a side project to complement my writing on Medium, and month after month I dedicated more time and resources to it. Now it’s my principal focus and will continue to be in 2023.
If this is working, it’s because of your interest, support, and engagement. Thank you :)
For this last long-form article of the year I want to do a review of TAB’s 2022. I know some of you also write on Substack—and others may be interested in starting a newsletter—so here’s a quick overview of TAB’s statistics (feel free to ask me any questions on this).
Talking numbers
I started the newsletter in June, so it’s been 6 months. The growth has been insane—much faster than I’d ever have expected. I’m at 6,000+ free subs (starting from ~700 that I imported from Medium) and 200+ paid. My compilation of GPT-4 rumors attracted the most new subscribers, at a staggering 800. It’s clear that you guys want OpenAI to release it!
In total, I’ve published 52 long-form pieces, 12 issues of the “What You May Have Missed” column, and 6 open threads (3 of them AMAs). That’s 150K+ words or, following Gergely Orosz's example (his review article gave me the idea for this one), a book and a half (non-fiction books average 80K-100K words).
Out of all those essays, the one you liked the most was “Stable Diffusion Is the Most Important AI Art Model Ever.” That bold headline has aged very well, given the huge user base Stability.ai has amassed in barely half a year. It got the most views of any single TAB article (57K).
It’s also the reason why September was the best month overall (162K views). I’ve averaged 75K-100K views per month for a total of ~500K views since I began in June.
Here’s a view of the growth curve. Quite a nice upward shape!
TAB stories: One-paragraph summaries
Here’s a brief review for each of the 52 long-form articles I’ve published.
If you’ve read them, there’s nothing new here—just a nice summary to come back to and recall the topics I’ve covered over the months.
June
TAB began with LaMDA, Blake Lemoine, and the question of AI sentience—arguably the biggest AI story of 2022 (DALL-E, Stable Diffusion, and ChatGPT are all up there, too). In contrast to most people, I didn’t take a side explicitly. I reconsidered the validity of the premise of the debate: “Why 'Is LaMDA Sentient?' Is an Empty Question.”
Around that time I was experimenting with GPT-3 and began to realize just how dangerous it’d be if people viewed an autocomplete system as a trustworthy source of information. I wrote this piece to illustrate the issue in a way you could feel it (show, don’t tell!).
Another news story hit the field (this one received much less exposure): ML researcher Yannic Kilcher trained and released GPT-4chan, which led the AI community—mainly ethicists—to reconsider the limits of open-source AI. At that point, I realized that open-source initiatives have lights but also shadows.
It was June, so sooner or later I had to do a write-up on DALL-E, the biggest news at the time in text-to-image generative AI. I had already published a “DALL-E explained” type of article on Towards Data Science, so I decided to take a different perspective. I compared AI art to photography and realized that, despite people’s perceptions, those technologies “threaten(ed)” artists quite differently.
The last article I wrote in June was the first that went viral (with a total of 80K views across platforms): “BLOOM Is the Most Important AI Model of the Decade.” The bold headline—combined with the fact that almost no one had heard about BLOOM at the time—made for a great scoop. BLOOM turned out not to be that impressive performance-wise, but I chose the headline from a sociopolitical perspective—I still think it’s correct in that sense.
July
I featured AIVA, the company making music with AI, and its clash with human composers—they didn’t like that an AI composer could threaten their livelihoods. I wondered what would happen if AI surpassed us in those domains we have so jealously guarded since the dawn of time (art, music, creativity, etc.).
I turned to social media and the way algorithms are designed to keep us engaged regardless of our well-being. AI research is interesting, but there’s a sector where AI is having a real-world negative impact: social media. Do algorithms need to be this way?
Back to AI art, DALL-E mini gave OpenAI a hard lesson: people want to play with generative AI, not watch it from the sidelines. DALL-E mini, despite its lower quality, “viralized” text-to-image models and showed the world what was possible with state-of-the-art AI, overshadowing the original DALL-E in the process.
Next, I touched on (passive) consumer AI with a critical essay: “AI Emotion Recognition Is a Pseudoscientific Multi-Billion Dollar Industry.” Like recommender algorithms (that power social media), classification systems aren’t free of flaws. AI services built on top of “shaky scientific ground” are problematic. Thankfully, most companies have shut down their emotion recognition tools.
I kept thinking about social media and its addictive features. It led me to write an analysis of TikTok’s For You page. Once the era of social media is over, we’ll look back and recall, horrified, how we let these opaque pieces of software dictate our digital lives by exploiting our psychological vulnerabilities.
AI writing tools started to get attention and I wrote a manifesto: “Why I'll Always Disclose My Use of AI Writing Tools.” I’ve kept my word and will continue to do so.
OpenAI finally conceded victory to DALL-E mini and the open-source community. They opened up DALL-E after ensuring the safety measures they had set up were good enough. The race for the throne of AI art was on.
People kept exploring GPT-3’s use cases. A neurobiology student decided to use it to write a paper and publish it. Unsurprisingly, the paper’s quality was insufficient for acceptance in any peer-reviewed journal—except as evidence of the model’s limitations.
I ended July with a tribute to generative AI tools. In a collaborative effort with GPT-3 and Midjourney, I reimagined 10 famous landscape paintings (some were just meh, but others really surprised me).
August
In the middle of summer I wrote about Joshua and Jessica’s love story. A sad story with a happy ending about love, death, and the power of AI to help people mourn and heal. It could easily be the plot of a Black Mirror episode.
Algorithms—and especially those that power social media platforms—are very dangerous. This piece is full of interesting references and insights. The main takeaway (paraphrasing bestselling author Yuval Noah Harari): future algorithms could be able to hack humans en masse.
As AI art models reached notoriety, a few artists voiced concerns about the lack of regulation, unethical data acquisition mechanisms, and the harmful consequences for the creative workforce. In this essay, I debunked the idea that what AI models do is the same as human artists drawing inspiration from one another. I advocated for constructive discussion: we have to “bridge the gap between the current chaotic state of affairs and a calmer future at the intersection of freedom for AI advocates and justice for affected artists.” It hasn’t happened.
The next one was a fully philosophical, exploratory, and speculative piece where I tried to illustrate a future we may face someday: Could we coexist with an AI-based species? Would we need to imbue those hypothetical entities with emotions and motivations?
Then, Stable Diffusion appeared and instantly overshadowed DALL-E and Midjourney. Quality-wise they were similar, but SD was the first state-of-the-art open-source model in generative AI. A milestone and a breakthrough.
I ended August with an early exploration of the shift from research to production in generative AI. It was already quite clear that we were barely a few months away from an explosion at the consumer level (as is happening now). I predicted: “this shift from research to production will entail a third technological revolution this century [I meant during the first quarter!] It will complete a trinity formed by smartphones, social media, and … AI models.”
September
Google promised they’d make LaMDA available (through the AI Test Kitchen). I covered the news, which went under the radar (Google didn’t give it much publicity).
Next was one of my favorite articles I’ve written for TAB: “While AI Artists Thrive, Detractors Worry—But Both Miss What's Coming.” I’ve re-read it a few times since. It comprises a nice set of arguments that debunk the ideas that “AI art isn't art” and “there's no skill needed to create art with AI.” I liked this one because it forced me to rethink my beliefs.
Another nice write-up was my speculative exploration of the truth behind the “Dead Internet Theory.” A piece on AI-powered misinformation, the impossibility of finding reliable truths, and a silver lining for all of us.
Then, I turned on my critical inner self with a piece on AI as a celebrity tech. I focused on hype, attention, interest, funding, and the features that make this scientific field unique: it pursues highly ambitious goals, seeks answers to deep questions about ourselves, and seems to work surprisingly well.
I wrote my first article on robotics well into the fourth month of TAB. I featured Google’s PaLM-SayCan, a robot that combines the best language model (PaLM) with the abilities of a physical robot. Google’s Fei Xia, co-author of the paper, read my piece and told me: “I have read the article, it is technically accurate and communicates our insights pretty well.” Nice!
I doubled down on action-focused AI with Adept’s ACT-1: the first action transformer. Their goal? Building LLMs to interact with your computer and the internet, breaking the barrier that limits GPT-3 and LaMDA. ACT-like AI is the future of human-computer interaction (through a language interface).
OpenAI released and open-sourced Whisper. I took the opportunity to talk about the long-awaited GPT-4 for the first time at TAB. Ethan Caballero suggested that Whisper was the key to training GPT-4.
With Tesla’s event on its autonomous humanoid robot approaching, I decided to write an article on Elon Musk’s beliefs about AI risks—where we agree and where we don’t. I don’t think his fears of AI as an x-risk are wrong, just not urgent enough to keep them at the forefront of our efforts.
October
Tesla’s Optimus robot turned out to be much better than most skeptics thought (including me). It wasn’t anywhere near ready to solve the most challenging problems that a physical entity would face (e.g. flexible planning or improvised decision making), but for a year of progress it was surprisingly good.
As generative AI picked up speed, I noticed a generalized sentiment: AI was going faster than ever. So fast indeed that it was (and is) difficult to keep up even for those who, like me, are paying attention every day. I captured this mixed feeling of overwhelming amounts of info and FOMO in this piece.
My first piece on AI + work/automation was a half fact-based, half speculative essay entitled “Life After Work.” AI will take some (not all) of our jobs. Proposed solutions only solve half the problem—the monetary aspect. But work fills our lives with meaning. How can we solve that?
After DALL-E mini overshadowed DALL-E, OpenAI disappeared from the public eye for a few months. They were publishing only minor updates on the blog. I captured this new situation in an essay where I argued that open source could—ironically—be OpenAI’s downfall. Of course, OpenAI later proved that’s not going to happen anytime soon (ahem, ChatGPT).
Around that time I decided to focus more on generalized tendencies that approached the events in the field from a holistic, bird’s-eye view rather than covering one-off news events. For my next essay, I reasoned about why AI companies have stopped building ever-larger AI models. After Google released PaLM (540B) in April, no company had built anything bigger.
The AI writing tool Lex went viral with a clever marketing campaign on Twitter and suddenly everyone was talking about it. I took the opportunity to write an essay arguing why humans are irreplaceable by language models like GPT-3. It all comes down to intent (this also applies to visual artists). (Lex’s founder commented on Twitter saying he agreed with pretty much everything I said.)
I kept exploring these tools and co-wrote a piece with Lex. It was a fun experiment that further reinforced my thesis that these AIs are great for ideation and inspiration but not necessarily as co-writers or decision makers (how to direct an essay or which arguments to defend). Their best use case, in my opinion? Self-exploration.
As part engineer, part cognitive scientist, I realized I had never directly covered AI’s relationship with neuroscience. A new paper on NeuroAI gave me the perfect excuse to approach this super interesting topic. The story: “Why AI Is Doomed Without Neuroscience.” My takeaway: if we’re interested in AGI, AI should take more inspiration from neuroscience, because it’s not yet ready to part ways.
At the end of October I read Gwern Branwen’s clever arguments on why generative AI models’ output won’t pollute the training data of future models. I decided to counter his points by taking the argument to the next level: “Generative AI Could Pollute the Internet to Death.” This came at a time when most tech people were well aware of GPT-3, Stable Diffusion, etc., and were using these systems on a daily basis.
November
Writer Tiernan Ray published an essay arguing that “the industrialization of AI is shifting the focus from intelligence to achievement.” I took his argument further and defended the idea that maybe intelligence wasn’t AI’s goal after all—and maybe we should let it go for good.
I came across a paper where Google tested a writing tool—Wordcraft—built on top of the infamous LaMDA. Unlike Lex, Jasper, and others, Wordcraft is focused on creative writing. A group of professional authors explored its virtues and limitations. Their conclusion: a great tool for inspiration but nowhere near the level needed to replace human writers. Agreed.
Around that time, Elon Musk decided to take over Twitter. I knew it could be a good story so I went for it: “Twitter, Elon Musk, and the Information Crisis” was my attempt at mixing the new face of Twitter, Musk’s presumed goal of killing all bad bots, and the key role of AI in the future of misinformation.
My biggest story in terms of subscriber growth (~800) was a compilation of rumors about GPT-4. The absurd amount of interest this piece got made it even clearer to me that people are truly waiting for the moment OpenAI releases GPT-4. Whatever else happens next year, I’m quite sure this will be the AI news of 2023.
After that success, I took the opportunity to explore a topic I had wanted to write about for some time—the possibility that we’ll soon face the first AI empathy crisis. Sentient/conscious AI systems aren’t coming, but people will believe they are regardless.
Soon after, Facebook released the Galactica demo. It received widespread backlash from the AI scientist Twittersphere. I covered the news and reactions and explored the importance of good communication tactics when it comes to explaining how these systems work and what they’re intended for.
Another fun experiment: I made GPT-3 talk to J1-Jumbo (a similar language model built by AI21 Labs). I complemented this interesting exchange with a bit of commentary on the significance of their words and how they may influence our perception.
Stability.ai released Stable Diffusion 2. They made it more open and more responsible toward artists. Its user base didn’t like the changes and argued the new version was virtually useless for anything worthwhile.
My last piece for November was an argument extrapolated from the issues Galactica made evident. How and why does AI knowledge become so biased by the time it gets to us? Four main filters force us to keep our critical thinking sharp.
December
Then came the final great AI news of the year: ChatGPT. It became a global sensation almost instantly—1 million users in 5 days, the most successful launch ever for a technological product that, funnily enough, wasn’t even a full product but a research preview. I covered the news (the good and the bad) here: “ChatGPT Is the World’s Best Chatbot.”
In another philosophical, exploratory essay I argued that AI’s imperfection isn’t its main deficiency, but the fact that it’s very similar to us and yet so different in the way it fails. Almost as if it were an “intelligent” alien.
ChatGPT’s success kept growing and people began to acknowledge the second-order consequences: How could we recognize human-made text anymore? I found that OpenAI was working on a prototype watermarking scheme to identify ChatGPT’s outputs.
I decided to turn off my critical side for a moment and wrote a how-to piece on the potential uses for ChatGPT with some tricks and tips for those of you who don’t know how to get the most out of these useful tools.
Noah Smith and roon wrote an essay arguing that generative AI won’t replace us because it takes over tasks, not jobs. I responded with an essay explaining why AI as an enhancer for some people would inevitably be a replacer for others.
Finally, as an end to an amazing year full of AI news and events, I wrote a listicle with my top 7 predictions on AI for 2023—from GPT-4’s release to open source’s growth and AI regulation—plus 7 things that will definitely not happen in 2023.
See you in 2023!
If you think 2022 has been a crazy year for AI, just imagine how 2023 will be. I’ll be here week after week covering, curating, and highlighting everything that happens, with the level-headed takes that characterize TAB.
Before I go, I want to thank you again: all of you who trust TAB as a source of information, those of you who support me directly and keep this alive for all of us, and those of you who engage in the comments article after article—you’re the glue of this community.
Again, 2022 has been a great year. Let’s hope 2023 is even better! I’d love to see you around next year. I’ll be back very soon with more news, more articles, and renewed energy!
Happy holidays everyone!!
Alberto
Hi Alberto, congrats on your impressive success with your substack. If it fits within your vision for this blog, I'd enjoy reading more about how you are building your audience. For myself, I'm all the way up to one free subscriber. :-)
I'd also hope to read your additional thoughts on this:
"If you think 2022 has been a crazy year for AI, just imagine how 2023 will be."
As you know by now, I'm very interested in the role AI will play in further accelerating the knowledge explosion. What emerges from an AI-accelerated knowledge explosion may prove to have more impact than AI itself?
If each year gets a little crazier than the last, what are the implications of such a trend?
Great summary! I found some articles that I missed and loved them.
I found this substack after getting utterly obsessed about AI after stable diffusion.
Sadly, the mainstream media, or even tech-focused substacks, produce too little content on AI. The ones that do focus on AI are often really just Twitter news aggregators, with no analysis or insight. Finally I found this substack, which is properly focused on AI and clearly has significant effort put into every post, so this is my first paid sub!
I think this sub has plenty of room to grow. Generative AI moves so blisteringly fast and is so widely impactful that it can sustain a specialized publication just focusing exclusively on this topic. This sub could really be the prestige publication for generative AI.