Update: California's Appropriations Committee passed SB 1047 on Thursday, August 15, with significant amendments that change the bill.
Outside of science fiction movies, there is no precedent for AI systems being used to kill people or launch large-scale cyberattacks. But some lawmakers want to put in place safeguards before bad actors make this dystopian future a reality. California's bill, SB 1047, aims to prevent real-world disasters caused by AI systems. The bill passed the state Senate in August and now awaits approval or veto from California Governor Gavin Newsom.
While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players big and small, including venture capitalists, major tech trade associations, researchers, and startup founders. There are many AI bills flying around the country right now, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here's why:
What does SB 1047 do?
SB 1047 aims to prevent large AI models from being used to cause “critical harms” to humanity.
An example of a “critical harm” in the bill would be a bad actor using an AI model to create a weapon that results in mass casualties, or directing one to orchestrate a cyberattack that causes more than $500 million in damages. (For comparison, the CrowdStrike outage was estimated to have caused more than $5 billion in damages.) The bill puts the onus on developers, the companies that build the models, to implement safety protocols sufficient to prevent such outcomes.
What models and companies are subject to these rules?
SB 1047's rules only apply to the world's largest AI models: those that cost at least $100 million and use 10^26 FLOPS during training. That's a huge amount of computation, but OpenAI CEO Sam Altman said that's how expensive it was to train GPT-4. These thresholds could be raised if necessary.
Few companies are currently developing public AI products large enough to meet these requirements, but tech giants like OpenAI, Google, and Microsoft are likely to do so soon. AI models (essentially large statistical engines that identify patterns in data and make predictions) generally get more accurate as they get bigger, and many expect this trend to continue. Mark Zuckerberg recently said that the next generation of Meta's Llama will require 10 times the computing power, which would put it under SB 1047's rules.
When it comes to open source models and their derivatives, the bill states that the original developer is liable unless another developer spends an additional $10 million to create a derivative of the original model.
The bill also mandates safety protocols to prevent misuse of covered AI products, including an “emergency stop” button to shut down the entire AI model. Developers would also have to create testing procedures to address risks posed by their AI models and would have to hire third-party auditors annually to evaluate their AI safeguards.
The result must be “reasonable assurance” that following these protocols will prevent critical harms, not absolute certainty, which is of course impossible to provide.
Who will enforce it and how?
A new California agency, the Board of Frontier Models, will oversee the rules. Every new public AI model that meets SB 1047's thresholds will have to be individually certified with a written copy of its safety protocol.
The Board of Frontier Models will be made up of nine members appointed by California's governor and legislature, including representatives from the AI industry, the open source community, and academia. The board will advise the California Attorney General on potential violations of SB 1047 and issue guidance to AI model developers on safety practices.
A developer's chief technology officer must submit an annual certification to the board assessing the AI model's potential risks, the effectiveness of its safety protocols, and how the company complies with SB 1047. Similar to a breach notification, if an “AI safety incident” occurs, the developer must report it to the board within 72 hours of learning of the incident.
If a developer's safeguards are deemed inadequate, SB 1047 would allow the California Attorney General to seek an injunction against the developer, which could force the company to stop operating or training the model.
If it turns out that an AI model was actually used in a catastrophe, the California Attorney General could sue the company. For a model that cost $100 million to train, the fines could reach up to $10 million for a first violation and $30 million for subsequent violations. This penalty rate increases as the AI model becomes more expensive.
Finally, the bill also includes whistleblower protections in the event that an employee seeks to disclose information about an unsafe AI model to the California Attorney General.
What are advocates saying?
California Sen. Scott Wiener, who authored the bill and represents San Francisco, told TechCrunch that SB 1047 is an attempt to learn from past policy failures on social media and data privacy and protect the public before it's too late.
“In the past, when it comes to technology, we've waited until something bad happened and then sat back,” Wiener said. “Instead of waiting for something bad to happen, let's be proactive and do something about it.”
Even if a company trains its $100 million model in Texas or even France, as long as it operates in California, it would be covered by SB 1047. Wiener said the Legislature has “done very little technology legislation in the last 25 years,” and he thinks it's up to California to set a precedent here.
When asked if he had met with OpenAI or Meta about SB 1047, Wiener said he had “met with all the large labs.”
The bill is backed by two AI researchers sometimes called the “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. The two belong to a faction of the AI community concerned about dangerous doomsday scenarios caused by AI technology. These “AI doomsayers” have existed in research for some time, and SB 1047 could enshrine some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 calling on the world to treat “mitigating the risk of extinction from AI” as seriously as pandemics or nuclear war.
“This is in the long-term interest of the industry in California, and across the U.S., because a serious safety incident is likely to be the biggest obstacle to further progress,” Dan Hendrycks, director of the Center for AI Safety, said in an email to TechCrunch.
Recently, Hendrycks' own motives have come into question. In July, he publicly launched Gray Swan, a startup that develops “tools to help companies assess the risk of their AI systems,” according to a press release. Following criticism that Hendrycks' startup could benefit from being one of the auditors that SB 1047 would require developers to hire if the bill passes, Hendrycks sold his stake in Gray Swan.
“We are withdrawing our investment to send a clear message,” Hendrycks said in an email to TechCrunch. “If the billionaire VCs who oppose common sense AI safety want to show that their motives are pure, let them follow suit.”
After some of Anthropic's proposed amendments were added to SB 1047, CEO Dario Amodei wrote a letter saying the bill's “benefits likely outweigh the costs.” It wasn't an endorsement, but it was a muted signal of support. Shortly after, Elon Musk also voiced his support for the bill.
What are the opponents saying?
Opposition to SB 1047 is growing among Silicon Valley companies.
The “billionaire VCs” Hendrycks refers to likely include a16z, the venture capital firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener arguing that the bill “burdens startups with its arbitrary and variable standards” and will have a chilling effect on the AI ecosystem. As AI technology advances, training costs will rise, so more startups will cross the $100 million threshold and be covered by SB 1047. According to a16z, several of its startups already spend that much training their models.
Fei-Fei Li, often referred to as the “Godmother of AI,” broke her silence on SB 1047 in early August, writing in a Fortune column that the bill would “harm our budding AI ecosystem.” Li is a highly regarded AI research pioneer at Stanford, but she also reportedly founded World Labs in April, an a16z-backed AI startup valued at $1 billion.
She echoes influential AI scholars like fellow Stanford researcher Andrew Ng, who called the bill an “attack on open source” when speaking at a Y Combinator event in July. Open source models could pose extra risk to their creators because, like any open software, they are easier to modify and deploy for arbitrary and potentially malicious purposes.
Yann LeCun, Meta's chief AI scientist, said in a post on X that SB 1047 will harm research efforts and is based on “a fantasy of 'existential risk' pushed by a few delusional think tanks.” Meta's Llama is one of the most prominent examples of an open source LLM.
Startups are also unhappy with the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a San Francisco hub for AI startups, worries that SB 1047 will destroy his ecosystem. He argues that bad actors who cause significant harm should be punished, not AI labs that openly develop and distribute their technology.
“At the heart of this bill is a deep confusion: the idea that LLMs could somehow differ in their level of riskiness,” Nixon said. “I think it's very likely that all models are risky as defined in the bill.”
OpenAI opposed SB 1047 in late August, arguing that national security measures related to AI models should be regulated at the federal level, and the company supports federal legislation that would do so.
But Big Tech, which the bill directly targets, is also upset by SB 1047. The Chamber of Progress, an industry group that represents Big Tech giants like Google, Apple, and Amazon, published an open letter in opposition to the bill, saying it would restrict free speech and “drive innovation out of California.” Last year, Google CEO Sundar Pichai and other tech executives supported the idea of federal AI regulation.
U.S. Representative Ro Khanna, who represents Silicon Valley, issued a statement in August opposing SB 1047. He expressed concern that the bill was “ineffective, punishes entrepreneurs and small businesses, and undermines California's innovative spirit.” Former House Speaker Nancy Pelosi and the U.S. Chamber of Commerce subsequently joined him, saying the bill would undermine innovation.
Silicon Valley has traditionally not liked California enacting such sweeping tech regulations. Big tech companies played a similar card in 2019 when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley lobbied against that bill, and a few months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.
What happens next?
SB 1047 now sits on the desk of California Governor Gavin Newsom, who must decide whether to sign the bill into law by the end of September. Wiener said he hasn't spoken to Newsom about the bill and doesn't know his position.
The bill would not go into effect immediately, as the Board of Frontier Models is not set to be formed until 2026. Moreover, even if the bill is signed, it will very likely face legal challenges before then, possibly from some of the same groups now speaking out against it.
Correction: This article originally referenced language from an earlier draft of SB 1047 about who is responsible for fine-tuned models. Currently, SB 1047 says the developer of a derived model is only responsible for that model if they spend three times what the original model developer spent on training it.