AI's Future Hangs in the Balance With California Law


A California bill that attempts to regulate large frontier AI models is creating a dramatic standoff over the future of AI. For years, the AI world has been divided into “accel” and “decel” camps. The accels want AI to progress rapidly (move fast and break things), while the decels want AI development to slow down for the sake of humanity. The battle veered into the national spotlight when OpenAI’s board briefly ousted Sam Altman; many of the people involved have since left the startup in the name of AI safety. Now a California bill is making this fight political.


What Is SB 1047?

SB 1047 is a California state bill that makes AI model providers liable for any “critical harms,” specifically calling out their role in creating “mass casualty events.” That may sound outlandish, but it’s a big deal because Silicon Valley has historically evaded most responsibility for its harms. The bill empowers California’s Attorney General to take legal action against these companies if one of their AI models causes severe harm to Californians.

The bill, authored by State Senator Scott Wiener, passed through California’s Senate in May, and cleared another major hurdle toward becoming law this week.

Why Should I Care?

Well, it could become the first real AI regulation in the U.S. with any teeth, and it’s happening in California, where most of the major AI companies are based.

Wiener describes the bill as setting “clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.” Not everyone sees it that way, though. Many in Silicon Valley are raising alarm bells that this law will kill the AI era before it starts.

What Does SB 1047 Actually Do?

As covered above, SB 1047 makes AI model providers liable for “critical harms” such as mass casualty events. That’s big because Silicon Valley has historically evaded most responsibility for its harms, and the bill empowers California’s Attorney General to take legal action against these companies if one of their AI models causes severe harm to Californians.

SB 1047 also includes a “shutdown” provision, which effectively requires AI companies to build a kill switch that can shut down an AI model in an emergency.

The bill also creates the “Frontier Model Division” within California’s Government Operations Agency. That group would “provide guidance” to frontier AI model providers on safety standards that each company would have to comply with. If companies don’t follow the Division’s recommendations, they could be sued and face civil penalties.

Who Supports This Bill?

Besides Senator Wiener, two prominent AI researchers who are sometimes called the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio, put their names on this bill. Both have been prominent voices warning about AI’s dangers.

More broadly, this bill falls in line with the decel perspective, which holds that AI has a relatively high probability of ending humanity and should be regulated as such. Most of these people are AI researchers and aren’t actively trying to commercialize an AI product since, you know, they think it might end humanity.

The bill is sponsored by the Center for AI Safety, which is led by Dan Hendrycks. His group published an open letter in May 2023 saying AI’s risk of human extinction should be taken as seriously as nuclear war or pandemics. It was signed by Sam Altman, Bill Gates, Grimes, and plenty of other influential tech figures. They’re an influential group and a key player in promoting this bill.

In March 2023, decels called for a “pause” on all AI development to implement safety infrastructure. Though it sounds extreme, there are a lot of smart people in the AI community who truly believe AI could end humanity. Their idea is that if there’s any probability of AI ending humanity, we should probably regulate it strictly, just in case.

That Makes Sense. So Who’s Against SB 1047?

If you’re on X, it feels like everyone in Silicon Valley is against SB 1047. Venture capitalists, startup founders, AI researchers, and leaders of the open-source AI community hate this bill. I’d generally categorize these folks as accels, or at least, that’s where they land on this issue. Many of them are in the business of AI, but some are researchers as well.

The general sentiment is that SB 1047 could force AI model providers such as Meta and Mistral to scale back, or completely stop, their open-source efforts. The bill makes them responsible for bad actors who use their AI models, and these companies may not want to take on that responsibility, given the difficulty of putting restrictions on generative AI and the open nature of these products.

“It will completely kill, crush, and slow down the open-source startup ecosystem,” said Anjney Midha, A16Z General Partner and Mistral Board Director, in an interview with Gizmodo. “This bill is akin to trying to clamp down progress on the printing press, as opposed to focusing on where it should be, which is the uses of the printing press.”

“Open source is our best hope to stay ahead by bringing together transparent safety tests for emerging models, rather than letting a few powerful companies control AI in secrecy,” said Ion Stoica, Berkeley Professor of Computer Science and executive chairman of Databricks, in an interview.

Midha and Stoica are not the only ones who view AI regulation as existential for the industry. Open-source AI has powered the most thriving Silicon Valley startup scene in years. Opponents of SB 1047 say the bill will benefit Big Tech’s closed-off incumbents instead of that thriving, open ecosystem.

“I really see this as a way to bottleneck open source AI development, as part of a broader strategy to slow down AI,” said Jeremy Nixon, creator of the AGI House, which serves as a hub for Silicon Valley’s open source AI hackathons. “The bill stems from a community that’s very interested in pausing AI in general.”

This Sounds Really Technical. Can Lawmakers Get All This Right?

It absolutely is technical, and that’s created some issues. SB 1047 only applies to “large” frontier models, but how big is large? The bill defines it as AI models trained with more than 10^26 floating-point operations and costing more than $100 million to train, a specific and very large amount of computing power by today’s standards. The problem is that AI is growing very fast, and the state-of-the-art models from 2023 look tiny by 2024’s standards. Drawing a fixed line in the sand doesn’t work well for a field moving this quickly.
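To make that threshold concrete, here’s a minimal sketch, in Python, of how the “covered model” test would work using the numbers described above. The function name and inputs are purely illustrative, not anything from the bill’s text.

```python
# Illustrative only: thresholds as described in this article, not legal guidance.
COMPUTE_THRESHOLD_OPS = 1e26        # total training compute, in floating-point operations
COST_THRESHOLD_USD = 100_000_000    # training cost, in dollars

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model would count as a 'covered' frontier model
    under the thresholds this article attributes to SB 1047."""
    return training_ops > COMPUTE_THRESHOLD_OPS and training_cost_usd > COST_THRESHOLD_USD

# Example: a hypothetical 2024-scale training run clears both bars.
print(is_covered_model(training_ops=3e26, training_cost_usd=250_000_000))  # True
```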

It’s also not clear if it’s even possible to fully prevent AI systems from misbehaving. The truth is, we don’t know a lot about how LLMs work, and today’s leading AI models from OpenAI, Anthropic, and Google are jailbroken all the time. That’s why some researchers are saying regulators should focus on the bad actors, not the model providers.

“With AI, you need to regulate the use case, the action, and not the models themself,” said Ravid Shwartz Ziv, an Assistant Professor studying AI at NYU alongside Yann LeCun, in an interview. “The best researchers in the world can spend infinite amounts of time on an AI model, and people are still able to jailbreak it.”

Another technical piece of this bill relates to open-source AI models. If a startup takes Meta’s Llama 3, one of the most popular open-source AI models, and fine-tunes it to be something dangerous, is Meta still responsible for that AI model?

For now, Meta’s Llama doesn’t meet the threshold for a “covered model,” but it likely will in the future. Under this bill, it seems that Meta certainly could be held responsible. There’s a caveat: if a developer spends more than 25% of the cost to train Llama 3 on fine-tuning, that developer becomes responsible instead. That said, opponents of the bill still find this unfair and not the right approach.
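For illustration, here’s another hypothetical sketch of how that 25% carve-out would assign responsibility, based only on the description above. The function and its inputs are made up for this example.

```python
# Illustrative only: who bears responsibility for a fine-tuned open model,
# per this article's description of SB 1047's fine-tuning carve-out.
def responsible_party(base_training_cost_usd: float, fine_tuning_cost_usd: float) -> str:
    """If a downstream developer spends more than 25% of the original training
    cost on fine-tuning, responsibility shifts to that developer."""
    if fine_tuning_cost_usd > 0.25 * base_training_cost_usd:
        return "fine-tuning developer"
    return "original model provider"

# A startup that lightly fine-tunes a large open model stays below the 25% line,
# so the original provider would remain responsible.
print(responsible_party(base_training_cost_usd=100_000_000,
                        fine_tuning_cost_usd=2_000_000))  # original model provider
```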

Quick Question: Is AI Actually Free Speech?

Unclear. Many in the AI community see open-source AI as a sort of free speech (that’s why Midha referred to it as a printing press). The premise is that the code underlying an AI model is a form of expression, and the model outputs are expressions as well. Code has historically fallen under the First Amendment in several instances.

Three law professors argued in a Lawfare article that AI models are not exactly free speech. For one, they say the weights that make up an AI model are not written by humans but created through vast machine learning operations. Humans can barely even read them.

As for the outputs of frontier AI models, these systems are a little different from social media algorithms, which have been considered to fall under the First Amendment in the past. AI models don’t exactly take a point of view; they say lots of things. For that reason, these law professors say SB 1047 may not impinge on the First Amendment.

So, What’s Next?

The bill is racing toward a fast-approaching August vote that would send it to Governor Gavin Newsom’s desk. It has to clear a few more key hurdles to get there, and even then, Newsom may not sign it due to pressure from Silicon Valley. A big tech trade group just sent Newsom a letter urging him not to sign SB 1047.

However, Newsom may want to set a precedent for the nation on AI. If SB 1047 goes into effect, it could radically change the AI landscape in America.

Correction, June 25: A previous version of this article did not define what “critical harms” are. It also stated Meta’s Llama 3 could be affected, but the AI model is not large enough at this time. It likely will be affected in the future. Lastly, the Frontier Model Division was moved to California’s Government Operations Agency, not the Department of Technology. That group has no enforcement power at this time.
