Discussion about this post

Mango Chili

After using o1-preview since its release and steadily adding it to my arsenal of LLM solutions, I think the future looks bright for OpenAI.

o1 isn't a replacement for GPT-4; it's a complement. o1's ability to reason and solve problems outperforms even Claude's latest model. However, it is time- and resource-intensive, and it returns so much content that iterating on results with it is difficult. So my workflow is to start an idea with o1, take elements of the answer and pass them to GPT-4 for iteration, formulate and refine the full answer, then send the whole thing back to o1 for re-evaluation. The results are far superior to any other LLM I have tried. Unfortunately, it's not simple to use. But learning how to use it better has improved my efficiency and output, particularly in coding, problem-solving, and content generation.

Sahar Mor

The fixation on benchmark improvements between model generations misses a crucial point: we've barely scratched the surface of what's possible with existing models. The shift toward test-time computation and reasoning described in the article points to a broader truth: perhaps the next breakthroughs won't come from raw model size, but from smarter deployment strategies, better interfaces, and more efficient architectures that prioritize real-world utility over benchmark scores.

31 more comments...