A new Wall Street Journal report says OpenAI's efforts to develop its next major model, GPT-5, are behind schedule and the results don't yet justify the hefty cost.
This echoes an earlier report in The Information suggesting that OpenAI is exploring new strategies because GPT-5 may not represent as large a leap over its predecessors as previous models did. The WSJ article, however, adds details about the 18-month development of GPT-5, codenamed Orion.
OpenAI has reportedly completed at least two large-scale training runs aimed at improving models by training them on vast amounts of data. Initial training runs were slower than expected, suggesting that running at scale would be both time-consuming and costly. Additionally, while GPT-5 is reported to have improved performance over previous versions, it is not yet advanced enough to justify the cost of keeping the model running.
The Journal also reported that, in addition to relying on publicly available data and licensing agreements, OpenAI hired people to write code and solve math problems in order to create new training data. The company also reportedly uses synthetic data generated by another of its models, o1.
OpenAI did not immediately respond to a request for comment. The company previously said that the model would not be released this year.