California lawmakers are moving forward with a bill to regulate powerful artificial intelligence systems.
SACRAMENTO, Calif. — The California Assembly voted Tuesday to pass a bill that would require artificial intelligence (AI) companies to test their systems and add safeguards to prevent them from being manipulated to take down the state's power grid or help build chemical weapons, scenarios experts say could become possible as the technology evolves at lightning speed.
The bill, the first of its kind, aims to mitigate the risks posed by AI. Venture capitalists and technology companies, including Meta, the parent company of Facebook and Instagram, and Google, have vigorously opposed it, arguing that the regulations take aim at developers and should instead focus on those who use AI systems to cause harm.
Democratic state Sen. Scott Wiener, who authored the bill, said the proposal would provide a reasonable standard of safety by preventing “catastrophic harm” from extremely powerful AI models that might be created in the future.
The requirements would apply only to systems that cost more than $100 million in computing power to train; as of July, no AI model had met that threshold.
At a legislative hearing Tuesday, Wiener blasted opponents of the bill for spreading inaccurate information about it. He said the bill would not expose AI developers to criminal prosecution if they test their systems and take steps to mitigate the risks, even if their models are later misused to cause societal harm.
“This bill is not about putting AI developers in jail,” Wiener said. “I would urge people to stop making that claim.”
Under the bill, only the state attorney general may take legal action in the event of a violation.
Democratic Governor Gavin Newsom has touted California as both an early adopter and regulator of AI, saying the state could quickly deploy generative AI tools to ease highway congestion, improve road safety and provide tax guidance. At the same time, his administration is considering new rules to ban AI discrimination in hiring practices. The governor declined to comment on the bill but warned that overregulation could put the state in a “dangerous position.”
A growing coalition of technology companies argues that the requirements would discourage developers from building large-scale AI systems or keeping their technology open source.
“This bill would make the AI ecosystem less secure, jeopardize the open source model that startups and small businesses rely on, create a reliance on standards that don't exist, and lead to regulatory fragmentation,” Rob Sherman, Meta's vice president and deputy chief privacy officer, wrote in a letter to lawmakers.
Opponents want to wait for more guidance from the federal government. Supporters of the bill say California cannot afford to wait, pointing to the hard lessons the state learned by failing to rein in social media companies early enough.
The proposal, which has the backing of some of AI's most prominent researchers, would also create a new state agency to oversee developers and provide best practices.
State lawmakers on Tuesday also considered two ambitious measures aimed at further protecting Californians from the potential harms of AI: one would combat automated discrimination when companies use AI models to screen job resumes or rental housing applications, and another would bar social media companies from collecting or selling data on people under the age of 18 without consent from them or their guardians.