Last year, OpenAI held a splashy press conference in San Francisco during which it unveiled a slew of new products and tools, including the ill-fated App Store-like GPT Store.
But this year's event is set to be quieter. On Monday, OpenAI announced it would change the format of its DevDay conference from a main event to a series of on-the-road developer engagement sessions. The company also confirmed that it would not release its next flagship model during DevDay, instead focusing on updates to its APIs and developer services.
“We are not planning on announcing our next models at DevDay,” an OpenAI spokesperson told TechCrunch, “but we will be focused on educating developers about what's available and highlighting stories from our developer community.”
This year's OpenAI DevDay events will take place in San Francisco on October 1, London on October 30, and Singapore on November 1. All events will feature workshops, breakout sessions, OpenAI product demos, and engineering and developer spotlights. Registration is $450 and applications close in August.
OpenAI has taken incremental steps rather than giant leaps in generative AI in recent months, opting to hone and fine-tune its tools as it trains successors to its current flagship models, GPT-4o and GPT-4o mini. The company has developed techniques to improve its models' overall performance and to keep them from going off the rails as often as they once did, but by some benchmarks, OpenAI has lost its technological lead in the generative AI race.
One reason for this may be that high-quality training data is becoming increasingly difficult to find.
OpenAI's models, like most generative AI models, are trained on vast collections of web data. Many creators now block crawlers from collecting their data, fearing that their work will be plagiarized or that they won't be compensated for its use. More than 35% of the world's top 1,000 websites currently block OpenAI's web crawlers, according to data from Originality.AI. And research from MIT's Data Provenance Initiative found that roughly 25% of data from “high-quality” sources has been restricted from the primary data sets used to train AI models.
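Sites typically do this blocking through their robots.txt file, which tells crawlers which parts of a site they may access. A minimal sketch of what such an opt-out might look like, assuming a publisher wants to refuse OpenAI's documented GPTBot crawler while leaving other crawlers unaffected:

```
# robots.txt — asks OpenAI's GPTBot crawler to stay off the entire site
User-agent: GPTBot
Disallow: /

# All other crawlers may continue to access everything
User-agent: *
Allow: /
```

Compliance is voluntary on the crawler's part, but OpenAI says GPTBot respects these directives, which is how the blocking figures above are measured.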
If current trends in access blocking continue, research group Epoch AI predicts that developers will run out of data to train generative AI models between 2026 and 2032.
OpenAI is said to have developed inference techniques that improve its models' responses to certain questions, particularly mathematical problems, and the company's CTO, Mira Murati, has promised that future models will have “PhD-level” intelligence. That's a big promise, and the pressure is on to deliver: OpenAI is reportedly spending billions of dollars training its models and hiring highly paid staff.