California’s bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding several amendments suggested by AI firm Anthropic and other opponents.
On Thursday the bill passed through California’s Appropriations Committee, a major step towards becoming law, with several key changes, Senator Wiener’s office tells TechCrunch.
SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California’s government less power to hold AI labs to account.
What does SB 1047 do now?
Most notably, the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic.
Instead, California’s attorney general can seek injunctive relief, requesting that a company cease an operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.
Further, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency formerly included in the bill. However, the bill still creates the Board of Frontier Models – the core of the FMD – and places it inside the existing Government Operations Agency. The board has also grown, from five members to nine. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors.
Senator Wiener also amended SB 1047 so that AI labs no longer need to submit certifications of safety test results “under penalty of perjury.” Instead, these AI labs are simply required to submit public “statements” outlining their safety practices, and the bill no longer imposes any criminal liability.
SB 1047 also now includes more lenient language around how developers ensure AI models are safe. The bill now requires developers to exercise “reasonable care” to ensure AI models do not pose a significant risk of causing catastrophe, replacing the “reasonable assurance” standard the bill imposed before.
Further, lawmakers added a protection for open source fine-tuned models. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer under SB 1047. The responsibility will still fall on the original, larger developer of the model.
Why all the changes now?
While SB 1047 has faced significant opposition from U.S. congressmen, renowned AI researchers, Big Tech, and venture capitalists, it has flown through California’s legislature with relative ease. These amendments appear designed to appease SB 1047’s opponents and present Governor Newsom with a less controversial bill he can sign into law without losing support from the AI industry.
While Newsom has not publicly commented on SB 1047, he’s previously indicated his commitment to California’s AI innovation.
That said, these changes are unlikely to appease staunch critics of SB 1047. While the bill is notably weaker than it was before these amendments, SB 1047 still holds developers liable for the dangers of their AI models. That core provision is not universally supported, and these amendments do little to address it.
What’s next?
SB 1047 is now headed to California’s Assembly floor for a final vote. If it passes there, it will need to be referred back to California’s Senate for a vote due to these latest amendments. If it passes both, it will head to Governor Newsom’s desk, where it could be vetoed or signed into law.