A conspiracy theory that I find amusing is that OpenAI keeps releasing so many groundbreaking products because they have already built GPT-5 and it's doing all of the coding for them!
But I think there is something about having foundation models and being able to layer on top of them that accelerates the pace of progress. Do you have any insights into why OpenAI is able to ship so much, so fast?
I'd say the reality is that OpenAI has had GPT-4 ready for a long time now. They finished training in August and were able to develop a lot of things that didn't ship for months. They might already be finishing training GPT-5, who knows. (I'm not sure about it coding the next GPTs. It's probably helping, though; it'd be quite telling if OpenAI devs weren't using their own product.)
Yup, or at least they are quite far along in training GPT-5 right now. It's interesting to listen to Sam Altman on the Lex Fridman podcast. There's an interval of closed testing with partners and safety/fine-tuning after the pre-training. GPT-4 seems to have a different architecture than their GPT-3.x branch. At one point Sam starts to say something about the architecture and cuts himself off mid-word. It's an interesting question whether they are using their own product for coding. Would it make a difference?