Helen Toner, a former OpenAI board member and the director of strategy at Georgetown’s Center for Security and Emerging Technology, is worried Congress might react in a “knee-jerk” way where it concerns AI policymaking, should the status quo not change.
“Congress right now — I don’t know if anyone’s noticed — is not super functional, not super good at passing laws, unless there’s a massive crisis,” Toner said at TechCrunch’s StrictlyVC event in Washington, D.C. on Tuesday. “AI is going to be a big, powerful technology — something will go wrong at some point. And if the only laws that we’re getting are being made in a knee-jerk way, in reaction to a big crisis, is that going to be productive?”
Toner’s comments, which come ahead of a White House-sponsored summit Thursday on the ways in which AI is being used to support American innovation, highlight the longstanding gridlock in U.S. AI policy.
In 2023, President Joe Biden signed an executive order that implemented certain consumer protections regarding AI and required that developers of AI systems share safety test results with relevant government agencies. Earlier that same year, the National Institute of Standards and Technology, which establishes federal technology standards, published a roadmap for identifying and mitigating the emerging risks of AI.
But Congress has yet to pass legislation on AI — or even to propose a law anywhere near as comprehensive as the EU’s recently enacted AI Act. And with 2024 a major election year, it’s unlikely that will change any time soon.
As a report from the Brookings Institution notes, the vacuum in federal rulemaking has prompted state and local governments to rush in to fill the gap. In 2023, state legislators introduced over 440% more AI-related bills than in 2022; close to 400 new state-level AI laws have been proposed in recent months, according to the lobbying group TechNet.
Lawmakers in California last month advanced roughly 30 new bills on AI aimed at protecting consumers and jobs. Colorado recently approved a measure that requires AI companies to use “reasonable care” while developing the tech to avoid discrimination. And in March, Tennessee governor Bill Lee signed into law the ELVIS Act, which prohibits AI cloning of musicians’ voices or likenesses without their explicit consent.
The patchwork of rules threatens to foster uncertainty for industry and consumers alike.
Consider this example: in many state laws regulating AI, “automated decision making” — a term broadly referring to AI algorithms making some sort of decision, like whether a business receives a loan — is defined differently. Some laws don’t consider a decision “automated” so long as it’s made with some level of human involvement. Others are stricter.
Toner thinks that even a high-level federal mandate would be preferable to the current state of affairs.
“Some of the smarter and more thoughtful actors that I’ve seen in this space are trying to say, OK, what are the pretty light-touch — pretty common-sense — guardrails we can put in place now to make future crises — future big problems — likely less severe, and basically make it less likely that you end up with the need for some kind of rapid and poorly-thought-through response later,” she said.