California's AI bill SB 1047 aims to prevent AI disasters, but Silicon Valley warns it will cause one

By TechBrunch | August 13, 2024 | 10 min read

Outside of science fiction movies, there's no precedent for AI systems being used to kill people or launch cyberattacks, but some lawmakers want to put safeguards in place before bad actors can make that dystopian future a reality. California's bill, SB 1047, aims to prevent real-world disasters caused by AI systems and is due for a final vote in the state Assembly in late August.

While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players big and small, including venture capitalists, major tech industry associations, researchers, and startup founders. There are a number of AI bills currently flying around the country, but California's Safe and Secure Innovation for Cutting-Edge Artificial Intelligence Models Act has become one of the most controversial. Here's why, and who's saying so.

What does SB 1047 do?

SB 1047 aims to prevent large-scale AI models from being used to cause “significant harm” to humanity.

The bill gives examples of “significant harm”: bad actors using AI models to create weapons that cause mass casualties, or directing them to orchestrate a cyberattack causing more than $500 million in damages. (For comparison, the CrowdStrike outage is estimated to have caused more than $5 billion in damages.) The bill makes developers (i.e., the companies building the models) responsible for implementing safety protocols sufficient to prevent such outcomes.

What models and companies are subject to these rules?

SB 1047's rules apply only to the world's largest AI models: those that cost at least $100 million to train and use 10^26 floating-point operations (FLOPs) during training. That's an enormous amount of compute, but OpenAI CEO Sam Altman has said that training GPT-4 cost about that much. These thresholds could be raised if necessary.
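
For intuition on the 10^26 figure, the scaling-law literature's common back-of-the-envelope estimate puts dense-transformer training compute at roughly 6 × parameters × training tokens. The sketch below applies that heuristic to purely illustrative model sizes (assumptions, not disclosed figures from any real training run) to show what kind of run would cross the threshold.

```python
# Back-of-the-envelope check against SB 1047's 10^26-FLOP threshold,
# using the common ~6 * N * D estimate for dense transformer training
# compute (N = parameter count, D = training tokens). The heuristic and
# the example model sizes are illustrative assumptions, not figures
# from the bill or from any disclosed training run.

SB_1047_THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

examples = [
    ("70B params, 15T tokens", 70e9, 15e12),
    ("400B params, 40T tokens", 400e9, 40e12),
]

for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    print(f"{name}: {flops:.1e} FLOPs -> covered? {flops >= SB_1047_THRESHOLD_FLOPS}")
```

Even the larger hypothetical run here lands just under 10^26 FLOPs, which underscores how few training runs the bill would cover today.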

Few companies are currently developing public AI products large enough to meet these requirements, but tech giants like OpenAI, Google, and Microsoft are likely to do so soon. AI models (essentially large statistical engines that identify patterns in data and make predictions) generally get more accurate as they scale up, and many expect this trend to continue. Mark Zuckerberg recently said that the next generation of Meta's Llama will require 10 times the computing power, which would bring it under SB 1047's rules.

When it comes to open source models and their derivatives, the bill states that if another party spends $25 million developing or fine-tuning a model, that party becomes responsible for the derivative rather than the original developer.

The bill also mandates safety protocols to prevent misuse of covered AI products, including an “emergency stop” button to shut down the entire AI model. Developers would also have to create testing procedures to address risks posed by their AI models and would have to hire third-party auditors annually to evaluate their AI safeguards.

Developers must provide “reasonable assurance” that following these protocols will prevent significant harms, not absolute certainty, which is of course impossible to provide.

Who will enforce it and how?

A new California agency, the Frontier Model Division (FMD), will oversee the rules, and every new public AI model that meets SB 1047's thresholds will have to be individually certified, with a written copy of its safety protocol.

The FMD will be governed by a five-member committee appointed by the California Governor and state legislature that includes representatives from the AI industry, open source community, and academia. The committee will advise the California Attorney General on potential violations of SB 1047 and issue guidance to AI model developers on safeguards.

The developer's chief technology officer must submit an annual attestation to the FMD assessing the AI model's potential risks, the effectiveness of its safety protocols, and a description of how the company is complying with SB 1047. Similar to breach notification rules, if an “AI safety incident” occurs, the developer must report it to the FMD within 72 hours of learning of the incident.

If a developer fails to comply with any of these provisions, SB 1047 allows the California Attorney General to bring a civil action against the developer. For a model that costs $100 million to train, fines could reach up to $10 million for a first violation and $30 million for subsequent violations, and the penalty scales up as models become more costly to train.
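
The figures above are consistent with penalties computed as a share of a model's training cost: 10% for a first violation and 30% thereafter, which would explain why the caps grow with model cost. The sketch below encodes that 10%/30% reading as an assumption inferred from the article's numbers, not as the bill's exact text.

```python
# Illustrative penalty math consistent with the figures above: $10M
# (first violation) and $30M (subsequent) on a $100M training run match
# a 10% / 30% share of training cost. The 10%/30% schedule is an
# assumption inferred from the article's numbers, not the bill's text.

def max_penalty(training_cost_usd: float, first_violation: bool) -> float:
    """Cap on civil penalties under the assumed cost-scaled schedule."""
    rate = 0.10 if first_violation else 0.30
    return rate * training_cost_usd

cost = 100e6  # $100M, the bill's coverage floor
print(f"First violation cap:      ${max_penalty(cost, True):,.0f}")
print(f"Subsequent violation cap: ${max_penalty(cost, False):,.0f}")
```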

Finally, the bill also includes whistleblower protections in the event that an employee seeks to disclose information about an unsafe AI model to the California Attorney General.

What are advocates saying?

California Sen. Scott Wiener, who authored the bill and represents San Francisco, told TechCrunch that SB 1047 is an attempt to learn from past policy failures on social media and data privacy and protect the public before it's too late.

“In the past, when it comes to technology, we've waited until something bad happened and then sat back,” Wiener said. “Instead of waiting for bad things to happen, let's be proactive.”

Even if a company trains its $100 million model in Texas, or for that matter France, SB 1047 will cover it as long as it does business in California. Wiener believes it falls to California to set a precedent here because Congress has “done very little technology legislation in the last 25 years.”

When asked if he had met with OpenAI or Meta about SB 1047, Wiener said he had “met with all the large labs.”

The bill is backed by two AI researchers sometimes called the “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. The two belong to a faction of the AI community concerned about dangerous doomsday scenarios caused by AI technology. These “AI doomers” have existed in the research world for some time, and SB 1047 could enshrine some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 calling on the world to prioritize “mitigating the risk of extinction from AI” as seriously as pandemics or nuclear war.

“This is in the long-term interest of the industry in California, and across the U.S., because a serious safety incident is likely to be the biggest obstacle to further progress,” Dan Hendrycks, director of the Center for AI Safety, said in an email to TechCrunch.

Recently, Hendrycks' own motives have come into question. In July, he publicly launched Gray Swan, a startup that builds “tools to help companies assess the risk of their AI systems,” according to a press release. Following criticism that his startup could benefit if the bill passes, as one of the auditors SB 1047 would require developers to hire, Hendrycks divested his equity stake in Gray Swan.

“I divested in order to send a clear message,” Hendrycks said in an email to TechCrunch. “If the billionaire VCs who oppose common-sense AI safety want to show that their motives are pure, let them follow suit.”

What are the opponents saying?

Opposition to SB 1047 is growing among Silicon Valley companies.

Hendrycks' “billionaire VCs” jab is likely a reference to A16Z, the venture capital firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener arguing that the bill “burdens startups with arbitrary and variable thresholds” and will have a chilling effect on the AI ecosystem. As AI technology advances, costs will rise, more startups will cross the $100 million threshold, and they will become subject to SB 1047; A16Z says some of its startups already spend that much training their models.

Fei-Fei Li, often referred to as the “godmother of AI,” broke her silence on SB 1047 in early August, writing in a Fortune column that the bill would “harm our budding AI ecosystem.” Li is a highly regarded AI research pioneer out of Stanford, but she also reportedly founded World Labs in April, an AI startup backed by A16Z and valued at $1 billion.

She joins influential AI scholars like Stanford researcher Andrew Ng, who called the bill “an attack on open source” at a Y Combinator event in July. Open source models can create additional risk for their creators because, like any open software, they are easily modified and deployed for arbitrary, potentially malicious, purposes.

Yann LeCun, chief AI scientist at Meta, said in a post on X that SB 1047 will harm research efforts and is “based on a fantasy of ‘existential risk’ pushed by a few delusional think tanks.” Meta's Llama LLM is one of the most prominent examples of an open source LLM.

Startups are also unhappy with the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a San Francisco hub for AI startups, worries that SB 1047 will destroy his ecosystem. He argues that bad actors who cause significant harm should be punished, not AI labs that openly develop and distribute their technology.

“At the heart of this bill is a deep confusion over whether the riskiness of LLMs can differ at all,” Nixon said. “I think it's very likely that all models are risky as defined in the bill.”

But Big Tech, which the bill directly targets, is also upset by SB 1047. The Chamber of Progress, an industry group that represents Big Tech giants like Google, Apple, and Amazon, published an open letter in opposition to the bill, saying it would restrict free speech and “drive innovation out of California.” Last year, Google CEO Sundar Pichai and other tech executives supported the idea of federal AI regulation.

Silicon Valley has traditionally not liked California enacting such sweeping tech regulations. Big tech companies played a similar card in 2019 when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley lobbied against that bill, and a few months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.

What happens next?

On August 15, SB 1047, along with any approved amendments, will head to the California Assembly floor, where the bill will “live or die,” according to Wiener. Given the overwhelming support from lawmakers so far, it's likely to pass.

Anthropic submitted several proposed amendments to SB 1047 in late July, which Wiener and the Senate's policy committees say they are actively considering. Anthropic is the first developer of cutting-edge AI models to publicly signal a willingness to work with Wiener on SB 1047; even though it does not currently support the bill, that engagement was widely seen as a win for the bill.

Among the changes Anthropic proposes are eliminating the FMD, curtailing the Attorney General's power to sue AI developers before harm occurs, and scrapping SB 1047's whistleblower protection provisions. Wiener said he is generally positive about the proposed amendments, but they need approval from several Senate policy committees before they can be added to the bill.

If SB 1047 passes the Assembly, it would head to the desk of California Gov. Gavin Newsom, who will ultimately decide whether to sign it into law before the end of August. Wiener said he has not spoken to Newsom about the bill and does not know his position.

The bill would not take effect immediately, as the FMD is scheduled to be formed in 2026. Moreover, even if it passes, SB 1047 will very likely face legal challenges before then, perhaps from some of the same groups speaking out against it now.


