California's controversial AI disaster prevention bill, SB 1047, has passed its final vote in the state Senate and is now headed to Governor Gavin Newsom's desk. He must weigh the most extreme theoretical risks of AI systems, including the possibility that they contribute to human deaths, against the risk of stifling California's AI boom. Newsom has until September 30 to sign SB 1047 into law or veto it outright.
SB 1047, introduced by state Sen. Scott Wiener, aims to prevent very large AI models from causing catastrophic events, such as cyberattacks that could result in loss of life or damages exceeding $500 million.
To be clear, very few AI models large enough to be covered by this bill exist today, and AI has never been used in a cyberattack at this scale. But the bill is aimed at the future of AI models, not the problems that exist today.
SB 1047 would hold AI model developers liable for damages caused by their models, much as if gun manufacturers were held responsible for mass shootings, and would give the California Attorney General the power to sue AI companies and impose heavy fines if their technology is used in a catastrophic incident. Courts could order companies to stop operating if they are found to be acting recklessly, and models would also have to include a “kill switch” that can shut them down if they are deemed dangerous.
If signed into law, the bill could be a game changer for the American AI industry. Here is how SB 1047's future could play out.
Why Newsom might sign it
Wiener argues that Silicon Valley needs more accountability, having previously told TechCrunch that the U.S. must learn from its past mistakes in regulating technology. Newsom may be motivated to act decisively on AI regulation and hold Big Tech to account.
Several AI executives have expressed cautious optimism about SB 1047, including Elon Musk.
Another cautious optimist about SB 1047 is Sophia Velastegui, Microsoft's former chief AI officer. “SB 1047 is a good compromise,” she told TechCrunch, while acknowledging the bill isn’t perfect. “I think the U.S., or any country that’s working on AI, needs to have an office of responsible AI. It can’t just be Microsoft’s,” Velastegui said.
Anthropic has also expressed cautious support for SB 1047, though it has not formally endorsed the bill. Several of the startup's proposed changes were incorporated into SB 1047, and CEO Dario Amodei told California's governor in a letter that the bill's “benefits likely outweigh the costs.” Thanks to Anthropic's amendments, AI companies could only be sued after their AI models cause catastrophic harm, not beforehand, as an earlier version of SB 1047 would have allowed.
Why Newsom might veto the bill
Given the industry's fierce opposition to the bill, it would not be surprising if Governor Newsom vetoes it. By signing it, he would be staking his reputation on SB 1047; by vetoing it, he could kick the issue down the road for another year or leave it to lawmakers to resolve.
“This [SB 1047] changes precedent from 30 years of dealing with software policy,” Martin Casado, general partner at Andreessen Horowitz, argued in an interview with TechCrunch. “It shifts the responsibility from the application to the infrastructure, which is something we've never done before.”
The tech industry has responded to SB 1047 with outrage. In addition to a16z, former House Speaker Nancy Pelosi, OpenAI, major tech industry trade groups, and prominent AI researchers have urged Governor Newsom not to sign the bill, fearing that this paradigm shift in liability would have a chilling effect on AI innovation in California.
The last thing anyone wants is a slowdown in the startup economy. The AI boom has been a major stimulant for the U.S. economy, and Newsom is under pressure not to squander it. Even the U.S. Chamber of Commerce has called on Newsom to veto the bill, writing in a letter to him that “AI is fundamental to America's economic growth.”
If SB 1047 becomes law
A source involved in the drafting of SB 1047 told TechCrunch that even if Governor Newsom signs the bill, nothing will happen on day one.
Tech companies will be required to produce safety reports for their AI models by January 1, 2025. From that point, the California Attorney General can seek an injunction ordering an AI company to stop training or operating its models if a court finds them unsafe.
More of the bill's provisions kick in during 2026, when a Board of Frontier Models will be created to begin collecting safety reports from tech companies. The nine-person board, appointed by the governor and state legislature, will make recommendations to the California Attorney General about which companies are and are not in compliance.
That same year, SB 1047 will also require AI model developers to hire auditors to assess their safety practices, effectively creating a new industry for AI safety compliance, and will give the California Attorney General the ability to sue AI model developers if their tools are used in a catastrophic event.
By 2027, the Board of Frontier Models could begin issuing guidance to AI model developers on how to train and operate AI models safely and securely.
If SB 1047 is rejected
If Governor Newsom vetoes SB 1047, OpenAI would get its wish, and federal regulators would eventually take the lead on regulating AI models.
On Thursday, OpenAI and Anthropic laid the groundwork for that kind of federal AI regulation. According to a press release, the two companies agreed to give the AI Safety Institute, a federal body, early access to their advanced AI models. OpenAI has also endorsed a bill that would let the AI Safety Institute set standards for AI models.
“We think it's important that this happens at the national level for a number of reasons,” OpenAI CEO Sam Altman wrote in a tweet on Thursday.
Reading between the lines, federal agencies have generally developed less onerous tech regulations than California, and have taken significantly longer to do so. But beyond that, Silicon Valley has historically been a key tactical and business partner to the U.S. government.
“We actually have a long history of working with the federal government to develop cutting-edge computer systems,” Casado said. “When I worked at the national labs, every time a new supercomputer was announced, the first version went to the government so that the government had the capability. I think that's a better reason than safety testing.”