For several years, engineers have been sounding the alarm about the potential for advanced AI systems to wreak havoc on humanity.
But in 2024, those voices of alarm have been drowned out by the tech industry’s push for a practical and prosperous vision of generative AI – one that also benefits their wallets.
Those who warn of the catastrophic risks of AI are often referred to as “AI doomers,” but that is not the name they prefer. They fear that AI systems could make decisions to kill people, be used by those in power to oppress the masses, or in some way contribute to the collapse of society.
2023 seemed to mark the start of a renaissance era for technology regulation. AI doom and AI safety – a broad subject spanning hallucinations, poor content moderation, and other ways AI can harm society – went from a niche topic discussed in San Francisco coffee shops to a conversation appearing on MSNBC, CNN, and the front pages of the New York Times.
To sum up the warnings issued in 2023: Elon Musk and more than 1,000 technologists and scientists called for a pause on AI development and asked the world to prepare for the technology's grave risks. Shortly after, top scientists at OpenAI, Google, and other labs signed an open letter saying the risk of AI causing human extinction deserved to be taken more seriously. A few months later, President Biden signed an AI executive order with the general goal of protecting Americans from AI systems. In November 2023, the nonprofit board behind OpenAI, one of the world's leading AI developers, fired Sam Altman, claiming its CEO had a reputation for lying and could not be trusted with a technology as important as artificial general intelligence (AGI) – the imagined endpoint of AI, meaning a system that actually exhibits self-awareness. (Although that definition is now shifting to meet the business needs of those talking about it.)
For a moment, it seemed like the dreams of Silicon Valley entrepreneurs were taking a backseat to the health of society as a whole.
But to those entrepreneurs, the narrative around AI doom was more concerning than the AI models themselves.
In response, a16z co-founder Marc Andreessen published “Why AI will save the world” in June 2023, a 7,000-word essay dismantling the AI doomers' agenda and presenting a more optimistic vision of how the technology will play out.
Entrepreneur Marc Andreessen speaks onstage at TechCrunch Disrupt SF 2016 in San Francisco, California. Image credit: Steve Jennings / Getty Images
“The era of artificial intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it,” Andreessen wrote in the essay.
Andreessen concludes with a convenient solution to our AI fears: move fast and break things – essentially the same ideology that has defined every other 21st-century technology (and its attendant problems). He argued that Big Tech companies and startups should be allowed to build AI as quickly and aggressively as possible, with few to no regulatory barriers. This would ensure AI does not fall into the hands of a few powerful companies or governments, and would allow the United States to compete effectively with China, he said.
Of course, this would also allow a16z's many AI startups to make more money, and some found his techno-optimism unseemly in an era of extreme income inequality, a pandemic, and a housing crisis.
While Andreessen doesn't always agree with Big Tech, making money is one area the entire industry can agree on. This year, a16z's co-founders joined Microsoft CEO Satya Nadella in a letter essentially asking the government not to regulate the AI industry at all.
Meanwhile, despite waving warning flags in 2023, Musk and other technologists did not slow down to focus on safety in 2024. Quite the opposite: AI investment in 2024 outpaced anything we had seen before. Altman quickly returned to the helm of OpenAI, and a wave of safety researchers left the organization in 2024 while warning of its dwindling safety culture.
Biden's safety-focused AI executive order largely fell out of favor in Washington, D.C., this year. President-elect Donald Trump announced plans to repeal Biden's order, arguing that it hinders AI innovation. Andreessen says he has been advising Trump on AI and technology in recent months, and Sriram Krishnan, a longtime venture capitalist at a16z, is now Trump's official senior adviser on AI.
Republicans in Washington have several AI-related priorities that outrank AI doom today, according to Dean Ball, an AI-focused research fellow at George Mason University's Mercatus Center. Those include building out data centers to power AI, using AI in government and the military, competing with China, limiting content moderation from center-left tech companies, and protecting children from AI chatbots.
“I think [the movement to prevent catastrophic AI risk] has lost ground at the federal level. At the state and local level, they also lost the one major fight they had,” Ball said in an interview with TechCrunch. Of course, he's referring to California's controversial AI safety bill, SB 1047.
Part of the reason AI doom fell out of favor in 2024 is simply that, as AI models became more popular, we also saw how unintelligent they can be. It's hard to imagine Google Gemini becoming Skynet when it just told you to put glue on your pizza.
But at the same time, 2024 was also a year in which many AI products seemed to bring concepts from science fiction to life. For the first time this year, OpenAI showed how we could talk with our phones rather than to them, and Meta unveiled smart glasses with real-time visual understanding. The ideas underlying catastrophic AI risk largely stem from sci-fi films, and while there's obviously a limit, the AI era is proving that some ideas from science fiction may not be fictional forever.
The biggest AI doom fight of 2024: SB 1047
California Democratic State Sen. Scott Wiener at the Bloomberg BNEF Summit in San Francisco, California, on January 31, 2024. Image credit: David Paul Morris/Bloomberg via Getty Images
The AI safety battle of 2024 culminated with SB 1047, a bill backed by two highly regarded AI researchers, Geoffrey Hinton and Yoshua Bengio. The bill tried to prevent advanced AI systems from causing mass human extinction events and cyberattacks that could do more damage than the 2024 CrowdStrike outage.
SB 1047 passed through California's Legislature and made it all the way to the desk of Governor Gavin Newsom, who called it a bill with “outsized impact.” The bill tried to prevent the kinds of things Musk, Altman, and many other Silicon Valley leaders warned about in 2023 when they signed those open letters on AI.
However, Newsom vetoed SB 1047. In the days before his decision, he talked about AI regulation on stage in downtown San Francisco, saying: “I can't solve for everything. What can we solve for?”
That pretty clearly sums up how many policymakers think about catastrophic AI risk today: it's just not a problem with a practical solution.
Even so, SB 1047 had flaws beyond its focus on catastrophic AI risk. The bill regulated AI models based on size, in an attempt to regulate only the largest players. However, that didn't account for new techniques such as test-time compute or the rise of small AI models, which leading AI labs are already pivoting to. Furthermore, the bill was widely considered an assault on open-source AI – and, by proxy, the research world – because it would have limited firms like Meta and Mistral from releasing highly customizable frontier AI models.
But according to the bill's author, state Sen. Scott Wiener, Silicon Valley played dirty to sway public opinion on SB 1047. He previously told TechCrunch that venture capitalists from Y Combinator and a16z engaged in a propaganda campaign against the bill.
Specifically, these groups spread a claim that SB 1047 would send software developers to jail for perjury. Y Combinator asked young founders to sign a letter saying as much in June 2024. Around the same time, Anjney Midha, a general partner at Andreessen Horowitz, made a similar claim on a podcast.
The Brookings Institution labeled this as one of many misrepresentations of the bill. SB 1047 did say that tech executives would need to submit reports identifying shortcomings of their AI models, and the bill noted that lying on a government document is perjury. But the venture capitalists who spread these fears failed to mention that people are rarely charged with perjury, and even more rarely convicted.
YC rejected the idea that it spread misinformation, previously telling TechCrunch that SB 1047 was vague and not as concrete as Sen. Wiener claimed.
More broadly, during the SB 1047 fight there was a growing sentiment that AI doomers were not just anti-technology, but also delusional. Famed investor Vinod Khosla said in October that Wiener was clueless about the real dangers of AI.
Yann LeCun, chief AI scientist at Meta, has long opposed the ideas underlying AI doom, but this year he has become more outspoken.
“The idea that somehow [intelligent] systems will come up with their own goals and take over humanity is just preposterous, it's ridiculous,” LeCun said at Davos in 2024, noting that we are still very far from developing superintelligent AI systems. “There are lots and lots of ways to build [any technology] in ways that will be dangerous, wrong, kill people… but as long as there is one way to do it right, that's all we need.”
Meanwhile, policymakers are turning their attention to new AI safety issues.
The fight ahead in 2025
The policymakers behind SB 1047 have hinted that it could be revived in 2025 with amended legislation to address long-term AI risks. Encode, one of the bill's sponsors, said SB 1047's national attention was a positive signal.
“Despite the veto of SB 1047, the AI safety movement has made very encouraging progress in 2024,” Sunny Gandhi, vice president of political affairs at Encode, said in an email to TechCrunch. “We are optimistic that there is growing public awareness of long-term AI risks and a growing appetite among policymakers to tackle these complex challenges.”
Gandhi said Encode expects “significant efforts” in 2025 to regulate AI-assisted catastrophic risk, though he did not disclose any specific ones.
On the opposite side, a16z general partner Martin Casado is one of the people leading the fight against regulating catastrophic AI risk. In a December op-ed on AI policy, Casado argued that we need more reasonable AI policy moving forward, declaring that “AI appears to be tremendously safe.”
“The first wave of dumb AI policy efforts is largely behind us,” Casado said in a December tweet. “Hopefully we can be smarter going forward.”
But calling AI “tremendously safe” and efforts to regulate it “dumb” is something of an oversimplification. For example, Character.AI – a startup a16z has invested in – is currently being sued and investigated over child safety concerns. In one active lawsuit, a 14-year-old boy in Florida killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual chats with. This case shows how our society has to prepare for new types of risks around AI that may have sounded ridiculous just a few years ago.
More bills addressing long-term AI risk are emerging, including one just introduced at the federal level by Sen. Mitt Romney. But now it looks like AI doomers will be fighting an uphill battle in 2025.