OpenAI’s efforts to develop its next major model, GPT-5, are behind schedule and results do not yet justify the massive costs, according to a new report from The Wall Street Journal.
This echoes a previous report from The Information, which suggested that OpenAI is exploring a new strategy because GPT-5 may not represent as big a leap forward as its predecessors did. The WSJ article, however, adds details about the 18-month development of GPT-5, codenamed Orion.
OpenAI has reportedly completed at least two large-scale training runs aimed at improving its models by training on massive amounts of data. Initial runs proceeded more slowly than expected, suggesting that larger runs would be both time-consuming and costly. GPT-5 is said to perform better than its predecessors, but not yet well enough to justify the cost of keeping the model running.
The WSJ also reported that OpenAI doesn't rely solely on publicly available data and licensing agreements; it also hires people to generate new data by writing code or solving math problems. The company is reportedly using synthetic data generated by another of its models, o1, as well.
OpenAI did not immediately respond to a request for comment. The company previously announced that it would not release the model codenamed Orion this year.