This Week in AI: Will Biden’s AI actions survive the Trump era?

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

This week was something of a swan song for the Biden administration.

On Monday, the White House announced sweeping new restrictions on exporting AI chips — restrictions that tech giants, including Nvidia, loudly criticized. (Nvidia’s business would be seriously affected by the restrictions, should they go into effect as proposed.) Then on Tuesday, the administration issued an executive order that opened up federal land to AI data centers.

But the obvious question is, will the moves have a lasting impact? Will Trump, who takes office on January 20, simply roll them back? So far, Trump hasn’t signaled his intentions either way. But he certainly has the power to undo Biden’s last AI acts.

Biden’s export rules are scheduled to take effect after a 120-day comment period. The Trump administration will have broad leeway over how the measures are implemented — and whether to change them in any way.

As for the executive order pertaining to federal land use, Trump could repeal it. Former PayPal COO David Sacks, Trump’s AI and crypto “czar,” recently committed to revoking another AI-related Biden executive order that set standards for AI safety and security.

However, there’s reason to believe the incoming administration may not rock the boat too much.

Along the lines of Biden’s move to free up federal resources for data centers, Trump recently promised expedited permits for companies that invest at least $1 billion in the U.S. He also picked Lee Zeldin, who has vowed to cut regulations he sees as burdensome to businesses, to lead the EPA.

Aspects of Biden’s export rules could stand as well. Some of the regulations target China, and Trump has made no secret that he sees China as the U.S.’ biggest rival in AI.

One piece that could change, however, is Israel’s inclusion on the list of countries subject to AI hardware trade caps. As recently as October, Trump described himself as a “protector” of Israel, and he has signaled that he’s likely to be more permissive toward Israel’s military actions in the region.

In any event, we’ll have a clearer picture within the week.

News

Image Credits: Bryce Durbin / TechCrunch

ChatGPT, remind me to: Paying users of OpenAI’s ChatGPT can now ask the AI assistant to schedule reminders or recurring requests. The new beta feature, called Tasks, will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

Meta versus OpenAI: Executives and researchers leading Meta’s AI efforts obsessed over beating OpenAI’s GPT-4 model while developing Meta’s own Llama 3 family of models, according to messages unsealed by a court on Tuesday.

OpenAI’s board grows: OpenAI has appointed Adebayo “Bayo” Ogunlesi, an executive at investment firm BlackRock, to its board of directors. The company’s current board bears little resemblance to OpenAI’s board in late 2023, the members of which fired CEO Sam Altman only to reinstate him days later.

Blaize goes public: Blaize is set to become the first AI chip startup to go public in 2025. Founded by former Intel engineers in 2011, the company has raised $335 million from investors, including Samsung, for its chips, which power cameras, drones, and other edge devices.

A “reasoning” model that thinks in Chinese: OpenAI’s o1 AI reasoning model sometimes “thinks” in languages such as Chinese, French, Hindi, and Thai, even when asked a question in English, and no one really knows why.

Research paper of the week

A recent study co-authored by Dan Hendrycks, an adviser to billionaire Elon Musk’s AI company, xAI, suggests that many safety benchmarks for AI correlate with AI systems’ capabilities. That is to say, as a system’s general performance improves, it “scores better” on benchmarks — making it appear as though the model is “safer.”

“Our analysis reveals that many AI safety benchmarks — around half — often inadvertently capture latent factors closely tied to general capabilities and raw training compute,” the researchers behind the study write. “Overall, it is hard to avoid measuring upstream model capabilities in AI safety benchmarks.”

In the study, the researchers propose what they describe as an empirical foundation for developing “more meaningful” safety metrics, which they hope will “[advance] the science” of safety evaluations in AI.
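
To make the confound concrete, here’s a toy version (with made-up scores) of the kind of correlation check the study describes: if a “safety” benchmark rises in lockstep with a general-capability benchmark across models, the safety benchmark may mostly be measuring capability.

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up scores for five hypothetical models, ordered by general capability.
capability = np.array([40.0, 55.0, 63.0, 71.0, 82.0])  # e.g., a broad knowledge benchmark
safety = np.array([35.0, 52.0, 60.0, 69.0, 80.0])      # a "safety" benchmark being audited

# A rank correlation near 1.0 suggests the safety benchmark is entangled
# with general capability rather than measuring safety independently.
rho, p_value = spearmanr(capability, safety)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```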

Model of the week

Sakana AI compares its new AI method to an octopus in its adaptability. Image Credits: Sakana AI

In a technical paper published Tuesday, Japanese AI company Sakana AI detailed Transformer² (“Transformer-squared”), an AI system that dynamically adjusts to new tasks.

Transformer² first analyzes a task — for example, writing code — to understand its requirements. Then it applies “task-specific adaptations” and optimizations to tune itself to the task.

Sakana says the methods behind Transformer² can be applied to open models such as Meta’s Llama and that they offer “a glimpse into a future where AI models are no longer static.”
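
As a rough sketch of what this kind of two-pass adaptation could look like (a hypothetical illustration, not Sakana’s implementation), one approach is to rescale a frozen weight matrix’s singular values with a learned, per-task vector:

```python
import torch

# Hypothetical sketch: adapt a frozen weight matrix to a task by rescaling
# its singular values with a task-specific vector. Illustrative only; this
# is not Sakana's code.
W = torch.randn(512, 512)        # stand-in for a frozen pretrained weight
U, S, Vh = torch.linalg.svd(W)   # decompose once, offline

# Pass 1: identify the task. A real system would classify the prompt;
# here we hard-code it, with random stand-ins for learned expert vectors.
expert_vectors = {"code": torch.rand(512), "math": torch.rand(512)}
task = "code"

# Pass 2: rebuild the weight with singular values scaled by the task vector.
W_task = U @ torch.diag(S * expert_vectors[task]) @ Vh
```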

Grab bag

A flow chart showing PrAIvateSearch’s architecture. Image Credits: PrAIvateSearch

A small team of developers has released an open alternative to AI-powered search engines like Perplexity and OpenAI’s SearchGPT.

Called PrAIvateSearch, the project is available on GitHub under an MIT license, meaning that it can be used largely without restriction. It’s powered by openly available AI models and services, including Alibaba’s Qwen family of models and the search engine DuckDuckGo.

The PrAIvateSearch team says that its goal is to “implement similar features to SearchGPT,” but in an “open source, local, and private way.” For tips to get it up and running, check out the team’s latest blog post.
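
PrAIvateSearch’s own stack is more involved than this, but the core pattern of pairing DuckDuckGo results with a local open model is easy to sketch. The model name and prompt below are illustrative assumptions, not the project’s actual code:

```python
from duckduckgo_search import DDGS
from transformers import pipeline

# Fetch a few web results locally; each result dict has title/href/body fields.
query = "what is an open-source AI search engine"
results = DDGS().text(query, max_results=3)
context = "\n".join(r["body"] for r in results)

# Run a small open Qwen model locally and answer over the fetched context.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
prompt = f"Using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```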
