Weekly Top Picks #97
Google is back / Coding and sales agents / Altman on AI's economy / Stop asking people / Everything is ChatGPTable / Deep Research makes money / Funniest headline in AI / Super Bowl ads
The week in AI at a glance
Google does have the mandate of heaven: Google’s recent releases have gone largely overlooked, yet it now has the best and most cost-efficient models.
OpenAI is preparing two more agents: Coding and sales: After Operator and Deep Research, they’re readying two more AI agents, one to help senior software engineers and another for sales managers.
Sam Altman’s three observations on the economics of AI: “3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.” Not just “exponential” anymore.
Why you should stop asking people questions: People get tired or annoyed, may not know the answer, and aren’t always available. Do you know who is? Your trusted chatbot.
Learn to see that everything is ChatGPTable: This is the modern-day version of “learn to see that everything is connected to everything else.” More things are ChatGPTable than you’re willing to consider.
The biggest endorsement of OpenAI’s Deep Research: Mckay Wrigley says the agent has helped him launch an ad campaign making $600/day. Is this the sales agent OpenAI was talking about?
The funniest headline in the AI world this week: “AI Company Asks Job Applicants Not to Use AI in Job Applications” by 404 Media.
Super Bowl: OpenAI ad vs Google ad: Both companies showed the world their marketing abilities during the Super Bowl. Who do you think did it better?
The week in The Algorithmic Bridge
(PAID): Weekly Top Picks #96: OpenAI's Operator and Deep Research / o3-mini for free / Fully automated companies / Tell your AI to wait / A shield for universal jailbreaks / Dylan and Nathan on Lex Fridman
(FREE): AGI Is Already Here—It’s Just Not Evenly Distributed: AI models like OpenAI’s Deep Research can perform at near-AGI levels—if users know how to prompt them properly. The relationship between model performance and prompt quality follows an S-curve: better models improve default output, but true mastery comes from skillful prompting. As AI advances, the real bottleneck will no longer be the model itself but the user’s ability to ask complex questions and ask them well (at least until models are so smart they can infer intention from badly worded queries).
(FREE): Perhaps You Shouldn't Read This: Reading and AI tools both shape the mind, but over-reliance on either can dull original thought—every idea consumed is one less discovered. Just as past scholars feared writing would weaken memory, today’s AI skeptics warn of cognitive decay. Balance, moderation, and restraint—not abstinence—are the answer. The real danger isn’t AI itself but refusing to adapt, clinging to purity while the world moves forward.