OpenAI tempers expectations with less bombastic, GPT-5-less DevDay this fall


Last year, OpenAI held a splashy press event in San Francisco during which the company announced a bevy of new products and tools, including the ill-fated App Store-like GPT Store.

This year will be a quieter affair, however. On Monday, OpenAI said that it’s changing the format of its DevDay conference from a tentpole event into a series of on-the-road developer engagement sessions. The company also confirmed that it wouldn’t release its next major flagship model during DevDay, instead choosing to focus on updates to its API and dev tools.

“We’re not planning to announce our next model at DevDay,” an OpenAI spokesperson told TechCrunch. “We’ll be focused more on educating developers about what’s available and showcasing dev community stories.”

OpenAI’s DevDay events will take place in San Francisco on October 1, London on October 30, and Singapore on November 1.

OpenAI has in recent months taken more incremental steps than monumental leaps in generative AI, opting to hone and fine-tune its tools as it trains the successor to its current leading models, GPT-4o and GPT-4o mini. The company has developed techniques to improve the overall performance of its models and to prevent them from going off the rails as often as they previously did, but according to some benchmarks, OpenAI has lost its technical lead in the generative AI race.

One of the reasons could be the increasing challenge of finding high-quality training data.

OpenAI’s models, like most generative AI models, are trained on massive collections of web data — web data that many creators are choosing to gate over fear that it’ll be plagiarized or they won’t receive compensation. More than 35% of the world’s top 1,000 websites now block OpenAI’s web crawler, according to data from Originality.AI. And around 25% of data from “high-quality” sources has been restricted from the major data sets used to train AI models, a study by MIT’s Data Provenance Initiative found.

Should the current access-blocking trend continue, the research group Epoch AI predicts that developers will run out of data to train generative AI models sometime between 2026 and 2032.

OpenAI is said to be developing a reasoning technique that could substantially improve its models’ responses on certain questions, particularly math questions, and the company’s CTO, Mira Murati, has promised that a future model will have “Ph.D.-level” intelligence. That’s promising a lot, and there’s high pressure to deliver; OpenAI is said to be hemorrhaging billions of dollars training its models and hiring top-paid staff.
