Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
OpenAI is making gains at the expense of its chief rivals.
On Tuesday, the company announced the Stargate Project, a new joint venture involving Japanese conglomerate SoftBank, Oracle, and others to build AI infrastructure for OpenAI in the U.S. Stargate could attract up to $500 billion in funding for AI data centers over the next four years, should all proceed according to plan.
The news surely came to the chagrin of OpenAI competitors like Anthropic and Elon Musk’s xAI, neither of which has a comparably enormous infrastructure investment in the works.
xAI intends to expand its data center in Memphis to 1 million GPUs, while Anthropic recently signed a deal with Amazon Web Services (AWS), Amazon’s cloud computing division, to use and help refine the company’s custom AI chips. But it’s difficult to imagine either company outpacing Stargate, even with Amazon’s vast resources behind Anthropic.
Granted, Stargate may not deliver on its promises. Other tech infrastructure projects in the U.S. haven’t. Recall that, in 2017, Taiwanese manufacturer Foxconn pledged $10 billion for a plant near Milwaukee and subsequently failed to deliver on the investment.
But Stargate has more backers behind it, and, at least at this juncture, more momentum. The first data center to be funded by the effort has already broken ground in Abilene, Texas. And the companies participating in Stargate have promised to invest $100 billion at the outset.
Indeed, Stargate seems poised to cement OpenAI’s incumbency in the exploding AI sector. OpenAI has more active users — 300 million weekly — than any other AI venture. And it has more customers. Over 1 million businesses are paying for OpenAI’s services.
OpenAI had first-mover advantage. Now it could have infrastructure supremacy. Rivals will have to be smart if they hope to compete. Brute force won’t be a viable option.
News
Microsoft exclusivity no more: Microsoft was once the exclusive provider of data center infrastructure for OpenAI to train and run its AI models. No longer. Now the company only has a “right of first refusal.”
Perplexity launches an API: AI-powered search engine Perplexity has launched an API service called Sonar, allowing enterprises and developers to build the startup’s generative AI search tools into their own applications.
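For a sense of what building on Sonar might look like, here’s a minimal sketch in Python. The endpoint, model name, and response fields below are assumptions modeled on Perplexity’s OpenAI-style chat interface, not confirmed details of the Sonar API.

```python
import os
import requests

# Hypothetical call to Perplexity's Sonar API; the endpoint, model name, and
# response shape are assumptions modeled on an OpenAI-style chat API.
resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [
            {"role": "user", "content": "Summarize this week's AI infrastructure news."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

# A search-grounded answer, plus whatever source list the API returns
# (the "citations" field name is an assumption).
print(data["choices"][0]["message"]["content"])
print(data.get("citations", []))
```

In practice, you’d check Perplexity’s documentation for the exact model IDs and the format in which sources are returned.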
AI speeding the “kill chain”: My colleague Max interviewed the Pentagon’s chief digital and AI officer, Radha Plumb. Plumb said that the Department of Defense is using AI to gain a “significant advantage” in identifying, tracking, and assessing threats.
Benchmarks in question: An organization developing math benchmarks for AI didn’t disclose that it had received funding from OpenAI until relatively recently, drawing allegations of impropriety from some in the AI community.
DeepSeek’s new model: Chinese AI lab DeepSeek has released an open version of DeepSeek-R1, its so-called reasoning model, that it claims performs as well as OpenAI’s o1 on certain AI benchmarks.
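Because DeepSeek also serves R1 through a hosted, OpenAI-compatible API (the full open weights are far too large for most local machines), trying the model can take just a few lines of Python. The base URL and model ID below are assumptions and may differ from DeepSeek’s actual documentation.

```python
from openai import OpenAI

# DeepSeek's hosted API follows the OpenAI chat-completions format; the base
# URL and model name here are assumptions, not confirmed details.
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed ID for the R1 reasoning model
    messages=[
        {"role": "user", "content": "What is the 10th Fibonacci number? Show your reasoning."}
    ],
)

print(response.choices[0].message.content)
```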
Research paper of the week
Last week, Microsoft spotlighted a pair of AI-powered tools, MatterGen and MatterSim, which it claims could help design advanced materials.
MatterGen predicts potential materials with unique properties, grounded in scientific principles. As described in a paper published in the journal Nature, MatterGen generates thousands of candidates with “user-defined constraints” — proposing new materials that meet highly specific needs.
As for MatterSim, it predicts which of MatterGen’s proposed materials are stable and viable.
Microsoft says that a team at the Shenzhen Institute of Advanced Technology was able to use MatterGen to synthesize a new material. The material wasn’t flawless. But Microsoft has released the source code of MatterGen, and the company says it plans to work with other outside collaborators to further develop the tech.
Model of the week
Google has released a new version of its experimental “reasoning” model, Gemini 2.0 Flash Thinking Experimental. The company claims it performs better than the original on math, science, and multimodal reasoning benchmarks.
Reasoning models like Gemini 2.0 Flash Thinking Experimental effectively fact-check themselves, which helps them to avoid some of the pitfalls that normally trip up models. As a consequence, reasoning models take a little longer — usually seconds to minutes longer — to arrive at solutions compared to a typical “non-reasoning” model.
The new Gemini 2.0 Flash Thinking also has a 1 million token context window, meaning it can analyze long documents such as research studies and policy papers. One million tokens is equivalent to about 750,000 words, or 10 average-length books.
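To make the long-context claim concrete, here’s a minimal sketch using the google-generativeai Python SDK to feed a lengthy document to the model in a single prompt. The experimental model ID is an assumption and may not match the name Google currently exposes.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

# The experimental model ID is an assumption; check Google's model list for
# the current name of Gemini 2.0 Flash Thinking.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# A 1 million token window holds roughly 750,000 words, so a long report or
# a stack of research papers can fit in one prompt.
with open("long_policy_paper.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    f"Summarize the key findings and open questions in this document:\n\n{document}"
)
print(response.text)
```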
Grab bag
An AI project called GameFactory shows that it’s possible to “generate” interactive simulations by training a model on Minecraft videos and then extending that model to different domains.
The researchers behind GameFactory, most of whom hail from the University of Hong Kong and Kuaishou, a Chinese company that’s partially state-owned, published a few examples of the simulations on the project’s website. They leave something to be desired, but the concept is still an interesting one: a model that can generate worlds in endless styles and themes.