California's AI bill, SB 1047, aims to prevent an AI disaster, but Silicon Valley warns it will create one.

By TechBrunch | August 15, 2024 | 11 min read


Update: California's Appropriations Committee passed SB 1047 on Thursday, August 15, with significant amendments that change the bill. Read more here.

Outside of science fiction movies, there's no precedent for AI systems being used to kill people or launch large-scale cyberattacks, but some lawmakers want to put in place safeguards before bad actors make that dystopian future a reality. California's bill, SB 1047, aims to prevent real-world disasters caused by AI systems, and is due for a final vote in the state legislature in late August.

While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players big and small, including venture capitalists, major tech trade associations, researchers, and startup founders. There are many AI bills flying around the country right now, but California's Safe and Secure Innovation for Cutting-Edge Artificial Intelligence Models Act has become one of the most controversial. Here's why:

What does SB 1047 do?

SB 1047 aims to prevent large-scale AI models from being used to cause “significant harm” to humanity.

An example of “significant harm” in the bill would be a bad actor using an AI model to create a mass casualty weapon or directing it to orchestrate a cyberattack that causes more than $500 million in damages. (For comparison, the CrowdStrike outage was estimated to have caused more than $5 billion in damages.) The bill puts the onus on developers — the companies that develop the models — to implement sufficient safety protocols to prevent such outcomes.

What models and companies are subject to these rules?

SB 1047's rules apply only to the world's largest AI models: those that cost at least $100 million to train and use 10^26 FLOPs of compute during training. That is an enormous amount of computation, but OpenAI CEO Sam Altman has said GPT-4 cost roughly that much to train. These thresholds could be raised if necessary.
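
To make those thresholds concrete, here is a back-of-the-envelope sketch in Python (not from the bill) that estimates training compute with the widely used 6 × parameters × tokens approximation and checks both criteria. The model figures in the example are illustrative assumptions, not reported numbers for any real system.

```python
# Rough check of SB 1047's coverage thresholds (illustrative only).
# Assumes the common heuristic: training FLOPs ~= 6 * params * tokens.

FLOP_THRESHOLD = 1e26          # 10^26 FLOPs of training compute, per the bill
COST_THRESHOLD = 100_000_000   # $100 million in training cost, per the bill

def estimated_training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute with the 6*N*D rule of thumb."""
    return 6 * params * tokens

def covered_by_sb1047(params: float, tokens: float, cost_usd: float) -> bool:
    """A model is covered only if it crosses BOTH thresholds."""
    return (estimated_training_flops(params, tokens) >= FLOP_THRESHOLD
            and cost_usd >= COST_THRESHOLD)

# Hypothetical frontier-scale run: 1.8e12 params, 1.5e13 tokens, $150M cost.
print(covered_by_sb1047(1.8e12, 1.5e13, 150e6))  # True (1.62e26 FLOPs, $150M)
```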

Few companies currently develop public AI products large enough to meet those thresholds, but tech giants like OpenAI, Google, and Microsoft are likely to soon. AI models (essentially large statistical engines that identify patterns in data and make predictions) generally become more capable as they get bigger, and many expect that trend to continue. Mark Zuckerberg recently said the next generation of Meta's Llama will require 10 times the computing power, which would put it under SB 1047's rules.

When it comes to open source models and their derivatives, the bill holds the original developer liable unless another developer spends three times as much to create a derivative of the original model.
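
As a minimal sketch of that liability rule as the article describes it (the function and its threshold test are a paraphrase of the bill's "three times" rule, not statute text):

```python
def responsible_developer(original_training_cost: float,
                          derivative_training_cost: float) -> str:
    """Return who SB 1047 would hold responsible for a derivative model,
    per the article: liability shifts to the derivative's developer only
    if they spend at least three times what the original developer spent."""
    if derivative_training_cost >= 3 * original_training_cost:
        return "derivative developer"
    return "original developer"

print(responsible_developer(100e6, 10e6))   # original developer (cheap fine-tune)
print(responsible_developer(100e6, 350e6))  # derivative developer (3x+ spend)
```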

The bill also mandates safety protocols to prevent misuse of covered AI products, including an “emergency stop” button to shut down the entire AI model. Developers would also have to create testing procedures to address risks posed by their AI models and would have to hire third-party auditors annually to evaluate their AI safeguards.

The result must be “reasonable assurance” that following these protocols will prevent serious harm, not absolute certainty, which is impossible to provide.

Who will enforce it and how?

A new California agency, the Frontier Model Division (FMD), will oversee the rules, and every new public AI model that meets SB 1047's thresholds will have to be individually certified, with a written copy of its safety protocol.

The FMD will be governed by a five-member committee appointed by California's governor and legislature, with representatives from the AI industry, the open source community, and academia. The committee will advise the California Attorney General on potential violations of SB 1047 and issue guidance to AI model developers on safety practices.

A developer's chief technology officer must submit an annual attestation to the FMD assessing the AI model's potential risks, the effectiveness of its safety protocols, and how the company complies with SB 1047. Similar to breach notification rules, if an “AI safety incident” occurs, the developer must report it to the FMD within 72 hours of learning of the incident.

If a developer fails to comply with any of these provisions, SB 1047 allows the California Attorney General to bring a civil action against them. For a model that cost $100 million to train, fines could reach $10 million for a first violation and $30 million for each subsequent violation. The penalty scales up as AI models become more expensive to train.
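
Those figures imply the penalty is proportional to training cost: $10 million is 10% of a $100 million run, and $30 million is 30%. A hedged sketch of that schedule (the percentage reading is inferred from the article's numbers, not quoted statute text):

```python
def max_fine(training_cost_usd: float, prior_violations: int) -> float:
    """Maximum civil penalty implied by the article's figures:
    10% of training cost for a first violation, 30% thereafter."""
    rate = 0.10 if prior_violations == 0 else 0.30
    return rate * training_cost_usd

print(max_fine(100e6, 0))  # 10000000.0 -> $10M for a first violation
print(max_fine(100e6, 1))  # 30000000.0 -> $30M for subsequent violations
```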

Finally, the bill also includes whistleblower protections in the event that an employee seeks to disclose information about an unsafe AI model to the California Attorney General.

What are advocates saying?

California Sen. Scott Wiener, who authored the bill and represents San Francisco, told TechCrunch that SB 1047 is an attempt to learn from past policy failures on social media and data privacy and protect the public before it's too late.

“In the past, when it comes to technology, we've waited until something bad happened and then sat back,” Wiener said. “Instead of waiting for bad things to happen, let's be proactive.”

Even if a company trains a $100 million model in Texas, or for that matter France, SB 1047 would cover it so long as the company does business in California. Wiener said the Legislature has “done very little technology legislation in the last 25 years,” and he thinks it's up to California to set a precedent here.

When asked if he had met with OpenAI or Meta about SB 1047, Wiener said he had “met with all the large labs.”

The bill is backed by two AI researchers sometimes called the “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. The two belong to a faction of the AI community concerned about dangerous doomsday scenarios caused by AI technology. These “AI doomsayers” have existed in research circles for some time, and SB 1047 could enshrine some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 calling on the world to prioritize “mitigating the risk of extinction from AI” as seriously as pandemics or nuclear war.

“This is in the long-term interest of the industry in California, and across the U.S., because a serious safety incident is likely to be the biggest obstacle to further progress,” Dan Hendrycks, director of the Center for AI Safety, said in an email to TechCrunch.

Recently, Hendrycks' own motives have come into question. In July, he publicly launched Gray Swan, a startup that develops “tools to help companies assess the risk of their AI systems,” according to a press release. Following criticism that his startup could stand to gain from SB 1047, as one of the auditors the bill would require developers to hire, Hendrycks divested his equity stake in Gray Swan.

“We are withdrawing our investment to send a clear message,” Hendrycks said in an email to TechCrunch. “If the billionaire VCs who oppose common sense AI safety want to show that their motives are pure, let them follow suit.”

What are the opponents saying?

Opposition to SB 1047 is growing among Silicon Valley companies.

Hendrycks' remark about “billionaire VCs” is likely a reference to a16z, the venture capital firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener arguing that the bill “burdens startups with its arbitrary and variable standards” and will have a chilling effect on the AI ecosystem. As AI technology advances, training costs will rise, so more startups will cross the $100 million threshold and fall under SB 1047; according to a16z, some of its startups already spend that much training their models.

Fei-Fei Li, often referred to as the godmother of AI, broke her silence on SB 1047 in early August, writing in a Fortune column that the bill would “harm our budding AI ecosystem.” Li is a highly regarded AI research pioneer at Stanford, and she also reportedly founded World Labs in April, an a16z-backed AI startup valued at $1 billion.

She echoes influential AI scholars like fellow Stanford researcher Andrew Ng, who called the bill an “attack on open source” while speaking at a Y Combinator event in July. Open source models can create additional risk for their creators because, like any open software, they can easily be modified and deployed for arbitrary, potentially malicious purposes.

Yann LeCun, Meta's chief AI scientist, said in a post on X that SB 1047 will harm research efforts and is based on “a fantasy of ‘existential risk’ pushed by a few delusional think tanks.” Meta's Llama LLM is one of the foremost examples of an open source LLM.

Startups are also unhappy with the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a San Francisco hub for AI startups, worries that SB 1047 will destroy his ecosystem. He argues that bad actors who cause significant harm should be punished, not AI labs that openly develop and distribute their technology.

“At the heart of this bill is a deep confusion: that LLMs could somehow differ in their levels of riskiness,” Nixon said. “I think it's very likely that all models are risky, as defined by the bill.”

But Big Tech, which the bill directly targets, is also upset by SB 1047. The Chamber of Progress, an industry group that represents Big Tech giants like Google, Apple, and Amazon, published an open letter in opposition to the bill, saying it would restrict free speech and “drive innovation out of California.” Last year, Google CEO Sundar Pichai and other tech executives supported the idea of federal AI regulation.

U.S. Rep. Ro Khanna, who represents Silicon Valley, issued a statement on Tuesday in opposition to SB 1047. He expressed concern that the bill is “ineffective, punishes entrepreneurs and small businesses, and undermines California's innovative spirit.”

Silicon Valley has traditionally not liked California enacting such sweeping tech regulations. Big tech companies played a similar card in 2019 when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley lobbied against that bill, and a few months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.

What happens next?

On August 15, SB 1047 will head to the California Assembly floor with whatever amendments are approved, and there, according to Wiener, the bill will “live or die.” Given the overwhelming support from lawmakers so far, it is expected to pass.

Anthropic proposed several amendments to SB 1047 in late July, which Wiener and the California Senate's policy committees say they are actively considering. Anthropic is the first developer of a cutting-edge AI model to publicly signal a willingness to work with Wiener on SB 1047, and although it does not currently support the bill, its engagement is widely seen as a win for the bill.

Among the changes Anthropic proposes are eliminating the FMD, curtailing the Attorney General's power to sue AI developers before harm occurs, and removing SB 1047's whistleblower protection provisions. Wiener said he is generally positive about the proposed amendments, but they would need approval from several Senate policy committees before they could be added to the bill.

If SB 1047 passes the Legislature, it would head to the desk of California Gov. Gavin Newsom, who would ultimately decide whether to sign it into law by the end of August. Wiener said he hasn't spoken to Newsom about the bill and doesn't know his position.

The bill would not come into force immediately, as the FMD is not scheduled to be established until 2026. Moreover, even if the bill passes, it will most likely face legal challenges before then, possibly from some of the same groups that are speaking out against it now.

Correction: This article originally referenced language from an earlier draft of SB 1047 about who is responsible for fine-tuned models. Currently, SB 1047 says the developer of a derivative model is responsible for that model only if they spend three times what the original model developer spent on training.


