Outside of science fiction movies, there's no precedent for AI systems being used to kill people or launch cyberattacks, but some lawmakers want to put in place safeguards before bad actors can make that dystopian future a reality. California's bill, SB 1047, aims to prevent real-world disasters caused by AI systems and is due for a final vote in the state senate in late August.
While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players big and small, including venture capitalists, major tech industry associations, researchers, and startup founders. A number of AI bills are currently making their way through legislatures around the country, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here's why, and who's saying so.
What does SB 1047 do?
SB 1047 aims to prevent large-scale AI models from being used to cause “significant harm” to humanity.
The bill gives examples of “significant harm” such as bad actors using AI models to create weapons that cause mass casualties, or directing them to orchestrate a cyberattack causing more than $500 million in damages. (For comparison, the CrowdStrike outage was estimated to have caused more than $5 billion in damages.) The bill makes developers (i.e. the companies developing the models) responsible for implementing sufficient safety protocols to prevent such outcomes.
What models and companies are subject to these rules?
SB 1047's rules apply only to the world's largest AI models: those that cost at least $100 million to train and use at least 10^26 FLOPS (floating-point operations) during training. That's a huge amount of compute, though OpenAI CEO Sam Altman has said GPT-4 cost roughly that much to train. These thresholds could be raised as needed.
Few companies are currently developing public AI products large enough to meet these requirements, but tech giants like OpenAI, Google, and Microsoft are likely to do so soon. AI models (essentially large statistical engines that identify patterns in data and make predictions) generally get more accurate as they get bigger, and many expect this trend to continue. Mark Zuckerberg recently said the next generation of Meta's Llama will require 10 times the computing power, which would put it under SB 1047's purview.
As for open source models and their derivatives, the bill states that if another party spends $25 million developing or fine-tuning a model, that party becomes responsible for the derivative rather than the original developer.
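To make the coverage rules above concrete, here is a minimal Python sketch of how the thresholds compose. The function names, the simple AND of cost and compute, and the boolean framing are illustrative assumptions, not language from the bill.

```python
# Illustrative sketch of the coverage thresholds described above.
# Names and structure are hypothetical; the bill's legal tests are more nuanced.

COVERED_COST_USD = 100_000_000      # at least $100 million in training cost
COVERED_TRAINING_FLOPS = 1e26       # at least 10^26 FLOPS during training
DERIVATIVE_SPEND_USD = 25_000_000   # derivative threshold cited in this article

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """A model falls under SB 1047 only if it crosses both thresholds."""
    return (training_cost_usd >= COVERED_COST_USD
            and training_flops >= COVERED_TRAINING_FLOPS)

def responsible_party(derivative_spend_usd: float) -> str:
    """Responsibility shifts to whoever spends past the derivative threshold."""
    if derivative_spend_usd >= DERIVATIVE_SPEND_USD:
        return "derivative developer"
    return "original developer"

# Hypothetical frontier model and a fine-tuned derivative of it.
print(is_covered_model(150_000_000, 2e26))  # True
print(responsible_party(30_000_000))        # derivative developer
```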
The bill also mandates safety protocols to prevent misuse of covered AI products, including an “emergency stop” button to shut down the entire AI model. Developers would also have to create testing procedures to address risks posed by their AI models and would have to hire third-party auditors annually to evaluate their AI safeguards.
The result must be “reasonable assurance” that following these protocols will prevent serious harm — not absolute certainty, which would be impossible to provide.
Who will enforce it and how?
A new California agency, the Frontier Model Division (FMD), will oversee the rules, and every new public AI model that meets SB 1047's thresholds will have to be individually certified with a written copy of its safety protocol.
The FMD will be governed by a five-member committee, appointed by California's governor and legislature, that includes representatives from the AI industry, the open source community, and academia. The committee will advise the California Attorney General on potential violations of SB 1047 and issue guidance to AI model developers on safeguards.
The developer's chief technology officer must submit an annual attestation to the FMD assessing the potential risks of the AI model, the effectiveness of its safety protocols, and how the company complies with SB 1047. If an “AI safety incident” occurs, the developer must report it to the FMD within 72 hours of learning of it, much like a data breach notification.
If a developer fails to comply with any of these provisions, SB 1047 allows the California Attorney General to bring a civil action against the developer. For a model that costs $100 million to train, fines could reach $10 million for a first violation and $30 million for subsequent violations; the maximum penalty scales up as models become more expensive to train.
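Judging from those figures ($10 million and $30 million against a $100 million training run), the maximum penalties appear to track a fixed share of the model's training cost. The 10% and 30% rates in the sketch below are inferred from that example, not quoted from the statute.

```python
# Hedged sketch: penalty ceilings inferred from the example figures above.
# The statute defines penalties in legal terms, not with this exact formula.

FIRST_VIOLATION_RATE = 0.10       # $10M on a $100M model implies roughly 10%
SUBSEQUENT_VIOLATION_RATE = 0.30  # $30M on a $100M model implies roughly 30%

def max_penalty_usd(training_cost_usd: float, first_violation: bool) -> float:
    """Upper bound on a civil penalty, scaling with the model's training cost."""
    rate = FIRST_VIOLATION_RATE if first_violation else SUBSEQUENT_VIOLATION_RATE
    return training_cost_usd * rate

print(max_penalty_usd(100_000_000, first_violation=True))   # 10000000.0
print(max_penalty_usd(100_000_000, first_violation=False))  # 30000000.0
```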
Finally, the bill also includes whistleblower protections in the event that an employee seeks to disclose information about an unsafe AI model to the California Attorney General.
What are advocates saying?
California Sen. Scott Wiener, who authored the bill and represents San Francisco, told TechCrunch that SB 1047 is an attempt to learn from past policy failures on social media and data privacy and protect the public before it's too late.
“In the past, when it comes to technology, we've waited until something bad happened and then sat back,” Wiener said. “Instead of waiting for bad things to happen, let's be proactive.”
Even if a company trains its $100 million model in Texas, or for that matter France, it's covered by SB 1047 as long as it does business in California. Wiener believes it's up to California to set a precedent here because Congress has “done very little technology legislation in the last 25 years.”
When asked if he had met with OpenAI or Meta about SB 1047, Wiener said he had “met with all the large labs.”
The bill is backed by two AI researchers sometimes called the “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. The two belong to a faction of the AI community concerned about dangerous doomsday scenarios caused by AI technology. These “AI doomsayers” have existed in the research world for some time, and SB 1047 could enshrine some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 calling on the world to prioritize “mitigating the risk of extinction from AI” as seriously as pandemics or nuclear war.
“This is in the long-term interest of the industry in California, and across the U.S., because a serious safety incident is likely to be the biggest obstacle to further progress,” Dan Hendrycks, director of the Center for AI Safety, said in an email to TechCrunch.
Recently, Hendrycks' own motives have come into question. In July, he publicly launched Gray Swan, a startup that develops “tools to help companies assess the risk of their AI systems,” according to a press release. Following criticism that his startup could benefit from being one of the auditors SB 1047 would require developers to hire if the bill passes, Hendrycks sold his shares in Gray Swan.
“We are withdrawing our investment to send a clear message,” Hendrycks said in an email to TechCrunch. “If the billionaire VCs who oppose common sense AI safety want to show that their motives are pure, let them follow suit.”
What are the opponents saying?
Opposition to SB 1047 is growing among Silicon Valley companies.
Hendrycks' mention of “billionaire VCs” is likely a reference to A16Z, the venture capital firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener arguing that the bill “burdens startups with arbitrary and variable thresholds” and will have a chilling effect on the AI ecosystem. As AI technology advances, costs will rise, and more startups will cross the $100 million threshold and become subject to SB 1047; A16Z says some of its startups already receive that much to train their models.
Fei-Fei Li, often referred to as the godmother of AI, broke her silence on SB 1047 in early August, writing in a Fortune column that the bill would “harm our budding AI ecosystem.” Li is a highly regarded AI research pioneer at Stanford, but in April she also reportedly founded World Labs, an AI startup backed by A16Z and valued at $1 billion.
She joins influential AI scholars like Stanford researcher Andrew Ng, who called the bill an “attack on open source” when speaking at a Y Combinator event in July. Open source models can create additional risk for their creators because, like any open software, they are easily modified and deployed for arbitrary and potentially malicious purposes.
Yann LeCun, Meta's chief AI scientist, said in a post on X that SB 1047 would harm research efforts and is “based on a fantasy of ‘existential risk’ pushed by a few delusional think tanks.” Meta's Llama is one of the most prominent examples of an open source LLM.
Startups are also unhappy with the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a San Francisco hub for AI startups, worries that SB 1047 will destroy his ecosystem. He argues that bad actors who cause significant harm should be punished, not AI labs that openly develop and distribute their technology.
“At the heart of this bill is a deep confusion that the level of riskiness of LLMs could differ in any way,” Nixon said. “I think it's very likely that all models are risky as defined in the bill.”
But Big Tech, which the bill directly targets, is also upset by SB 1047. The Chamber of Progress, an industry group that represents Big Tech giants like Google, Apple, and Amazon, published an open letter in opposition to the bill, saying it would restrict free speech and “drive innovation out of California.” Last year, Google CEO Sundar Pichai and other tech executives supported the idea of federal AI regulation.
Silicon Valley has traditionally not liked California enacting such sweeping tech regulations. Big tech companies played a similar card in 2019 when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley lobbied against that bill, and a few months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.
What happens next?
On August 15, SB 1047, along with any approved amendments, will head to the California State Senate, where the bill will “live or die,” according to Wiener. Given the overwhelming support from lawmakers so far, it's likely to pass.
Anthropic submitted several proposed amendments to SB 1047 in late July, which Wiener and the Senate's policy committees say they are actively considering. Anthropic is the first frontier AI model developer to publicly signal a willingness to work with Wiener on SB 1047; while it does not currently support the bill, that engagement is widely seen as a win for the legislation.
Among the changes Anthropic proposes are eliminating the FMD, curtailing the Attorney General's power to sue AI developers before harm occurs, and eliminating the whistleblower protection provisions in SB 1047. Wiener said he is generally positive about the proposed amendments, but they need to be approved by several Senate policy committees before they can be added to the bill.
If SB 1047 passes the Senate, it would head to the desk of California Gov. Gavin Newsom, who would ultimately decide whether to sign the bill into law by the end of August. Wiener said he hasn't spoken to Newsom about the bill and doesn't know his position.
The bill would not come into force immediately, as the FMD is scheduled to be established in 2026. Moreover, even if the bill passes, it will most likely face legal challenges by then, possibly from some of the same groups that are currently speaking out against it.