Weekly Top Picks #98
Anthropic's "first" reasoning model / GPT-4.5 and GPT-5 / The end of AI safety / Jeff Dean and Noam Shazeer / DeepSeek and the power of antipathy
The week in AI at a glance
Anthropic will soon release its “first” reasoning model: Anthropic has fused reasoning directly into its models (a new one is coming soon), challenging OpenAI’s split approach; now OpenAI is following suit.
OpenAI announces GPT-4.5 (weeks away) and GPT-5 (months away): Feeling the pressure from competitors, Sam Altman laid out OpenAI’s roadmap: a unified GPT model, streamlined offerings, and the upcoming GPT-4.5 and GPT-5.
2025 marks the beginning of the end of AI safety: The AI safety era is giving way to a full-blown capabilities race, as seen in the shift from "AI Safety Summit" to "AI Action Summit"—something I think US AI companies saw coming all along.
Jeff Dean and Noam Shazeer on the Dwarkesh Podcast: Listen to two heavyweights in AI and long-time veterans at Google talk about all things AI.
Antipathy has an intense flame but a short wick: DeepSeek rose in popularity not because people felt pro-DeepSeek but because they felt anti-OpenAI, anti-AI, and anti-US.
The week in The Algorithmic Bridge
(PAID) Weekly Top Picks #97: Google is back / Coding and sales agents / Altman on AI's economy / Stop asking people / Everything is ChatGPTable / Deep Research makes money / Funniest headline in AI / Super Bowl ads
(FREE) The True Power of AI Deepfakes Is Not What You Think: Deepfakes aren’t dangerous because they deceive, but because they reinforce what people already want to believe—expressing counter-truths rather than hiding reality. They often function more like art or satire than crime. Their impact doesn’t come from realism but from emotional resonance, spreading when they land between “this could be real” and “I want this to be real.” The real threat isn’t to the gullible but to the stubborn.