The geopolitical risk discourse (democracy vs authoritarianism) will overshadow the existential risk discourse (humans vs AI)
OpenAI will prove that test-time compute scaling laws work but will realize they’re constrained to formalizable areas like math, coding, and science (see the sketch after this list for what test-time compute means in practice)
Someone will make $1 million using an o-series OpenAI model (o1/o3, perhaps o4), just by following a strategy (vs creating a business on top of the model)
OpenAI will begin to resemble Google more than DeepMind (search, browser, devices, hardware, robotics; not just AI software like GPTs and o1/o3)
The US government will create the first AGI-centered national security project to face China’s alleged threat, collaborating with leading AI companies (including Big Tech). BRICS will follow suit (perhaps not in 2025)
At least one AI product will cost upward of $2000/month
There won’t be an AI winter (defined, using historical figures from previous winter periods, as a drop in total investment in the generative AI industry to below 20% of the previous year’s level)
Leading research will move on from pure pre-trained large language models into overlooked avenues like search-based intelligence, test-time training, test-time thinking, and, to a lesser degree, non-transformer architectures
ARC-AGI (v1 and v2) and FrontierMath benchmarks won’t be solved 100% regardless of the time/money spent
There won’t be mass unemployment due to AI (employment rates will remain stable throughout 2025, at least regarding AI progress)
The ceiling of AI capabilities will rise beyond the superhuman level in coding/math/science (i.e. at least Terence Tao level in math)
The floor of capabilities will remain mostly as it is: dumb failures won’t be solved unless specifically patched, and people will still find tricky questions that trip up the models
One or more of the big AI labs will put ads into their products
OpenAI’s revenue, currently dominated by individual users (>60%), will shift toward businesses (B2B), which are better positioned to afford premium offerings like ChatGPT Pro ($200/month); such offerings will become increasingly common as we approach AGI
Google will either acquire, acqui-hire, or kill Magic (long context windows) and/or Perplexity (AI search)
Google and DeepMind will be broadly considered the leaders of the AI race toward AGI over OpenAI, Anthropic, and Meta by the end of 2025
An AI-generated movie (at least featurette-length) will win an award
Hallucinations won’t be solved; the most advanced model created in 2025 will still make factual mistakes that no human would make
The generative AI financial bubble will pop (reflected in stock graphs and bankruptcy filings), elevating a few winners (e.g. OpenAI, Google) into an unbreakable oligopoly
At least one major news outlet (e.g. NYT) will claim China is on par with (or ahead of) the US on most fronts: research, development, manufacturing, and productization—with the possible exception of innovation
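To make the test-time compute idea above concrete, here’s a minimal sketch of one common form of it: self-consistency, i.e. sampling many answers at inference time and taking the majority vote. Accuracy tends to improve as more compute (samples) is spent, but only in domains where answers can be compared or checked, which is why the prediction limits it to formalizable areas like math and coding. The `sample_answer` stub below is a hypothetical stand-in for a real model call, not any lab’s actual API.

```python
# Minimal sketch of test-time compute scaling via self-consistency
# (majority voting over N sampled answers). `sample_answer` is a toy,
# hypothetical model: it is NOT OpenAI's or any other lab's API.

import random
from collections import Counter


def sample_answer(question: str, correct: str = "42", p_correct: float = 0.4) -> str:
    """Toy model: returns the correct answer with probability p_correct,
    otherwise one of a few wrong answers."""
    if random.random() < p_correct:
        return correct
    return random.choice(["17", "23", "99"])


def answer_with_test_time_compute(question: str, n_samples: int) -> str:
    """Spend more inference compute: draw n_samples answers and return
    the majority vote."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    random.seed(0)
    trials = 500
    for n in (1, 8, 64):
        hits = sum(
            answer_with_test_time_compute("toy question", n) == "42"
            for _ in range(trials)
        )
        # Accuracy climbs as the sample budget (test-time compute) grows.
        print(f"N={n:>2} samples -> accuracy {hits / trials:.2f}")
```

Under these toy assumptions the single-sample accuracy is about 0.4, while the 64-sample majority vote is close to 1.0; that monotonic improvement with inference budget is the behavior the “test-time compute scaling” prediction refers to.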