A grueling election cycle has come to an end. With Donald Trump becoming the 47th president of the United States and Republicans controlling the Senate and possibly the House of Representatives, his allies are poised to bring about major changes at the highest levels of government.
The impact will be felt acutely in the AI industry, which has largely opposed federal policymaking. Trump has repeatedly said he intends to dismantle Biden's AI policy framework on “day one,” aligning himself with kingmakers who have harshly criticized all but the lightest-touch regulation.
Biden's approach
Biden's AI policy took effect through an executive order, the AI Executive Order, signed in October 2023. Congress's inaction on regulation prompted the order, whose provisions are voluntary rather than mandatory.
The AI EO addresses everything from advancing AI in healthcare to developing guidance designed to reduce the risk of intellectual property theft. But two of its more consequential provisions, which have drawn the ire of some Republicans, concern AI's security risks and real-world safety impacts.
One provision directs companies developing powerful AI models to report to the government how they train and secure those models, and to provide the results of tests designed to probe for model vulnerabilities. Another directs the Commerce Department's National Institute of Standards and Technology (NIST) to develop guidance to help companies identify and correct flaws in models, including bias.
The AI EO has accomplished a great deal. In the past year, the Commerce Department established the U.S. AI Safety Institute (AISI), a body to study risks in AI systems, including systems with defense applications. It also released new software to help improve AI trustworthiness and tested major new AI models through agreements with OpenAI and Anthropic.
Critics allied with Trump argue that the EO's reporting requirements are onerous and effectively force companies to disclose trade secrets. At a House hearing in March, Rep. Nancy Mace (R-S.C.) said they “could scare away would-be innovators and hinder further ChatGPT-type progress.”
The requirements invoke the Defense Production Act, a 1950s law enacted to support national defense, and some Republicans in Congress have labeled them an example of executive overreach.
At a Senate hearing in July, Trump's running mate, J.D. Vance, expressed concern that “an attempt at preemptive overregulation” would “entrench the incumbents in the technology industry that we already have.” Vance has also been supportive of antitrust action, including FTC Chair Lina Khan's efforts to investigate big tech companies' acquisitions of AI startups.
Some Republicans have equated NIST's AI work with censorship of conservative speech, accusing the Biden administration of trying to steer AI development with liberal notions of disinformation and bias. Sen. Ted Cruz (R-Texas) recently criticized NIST's “woke AI 'safety' standards” as a “speech control scheme” premised on “amorphous” social harms.
“If I am re-elected, I will rescind President Biden's artificial intelligence executive order and ban the use of AI to censor the speech of the American people from day one,” Trump said at a rally in Cedar Rapids, Iowa, in December.
Replacing the AI EO
So what will replace Biden's AI EO?
Little can be gleaned from the AI executive orders Trump signed during his first term, which established national AI research institutes and directed federal agencies to prioritize AI research and development. Those orders mandated that agencies “protect civil liberties, privacy, and American values” in applying AI, help workers acquire AI-related skills, and promote the use of “trustworthy” technologies.
During his campaign, Trump promised policies that would “support the development of AI that is rooted in free speech and human flourishing,” but he did not elaborate.
Some Republicans have said they want NIST to focus on AI's physical safety risks, including its potential to help adversaries build biological weapons (something Biden's EO also addresses). But they have also shied away from endorsing new restrictions on AI, which could put portions of NIST's guidance at risk.
Indeed, the fate of AISI, which is housed within NIST, is uncertain. While it has a budget, a director, and partnerships with AI research institutes around the world, AISI could be wound down with a simple repeal of Biden's EO.
In an open letter in October, a coalition of businesses, nonprofits, and universities called on Congress to enact legislation codifying AISI by the end of the year.
Trump has acknowledged that AI is “very dangerous” and that it will require enormous amounts of power to develop and run, signaling a willingness to engage with AI's growing risks.
With that in mind, Sarah Kreps, a political scientist who focuses on U.S. defense policy, doesn't expect major AI regulation to emerge from the White House over the next four years. “I don't know that Trump's views on AI regulation will rise to the level of antipathy that would lead him to repeal the Biden AI EO,” she told TechCrunch.
Trade and state rulemaking
Dean Ball, a researcher at George Mason University, agrees that Trump's victory likely portends a light-touch regulatory regime, one that relies on applying existing law rather than creating new laws. But Ball predicts this may embolden state governments, particularly in Democratic strongholds like California, to try to fill the void.
State-led efforts are already well underway. In March, Tennessee passed a law protecting voice actors from AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI deployments. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, some of which require companies to disclose details of their AI training.
State policymakers have introduced nearly 700 AI bills this year alone.
“It's unclear how the federal government will respond to these challenges,” Ball said.
Hamid Ekbia, a public affairs professor at Syracuse University, believes Trump's protectionist policies could have implications for AI regulation. He expects the Trump administration to impose tighter export controls on China, including, for example, controls on the technologies needed to develop AI.
The Biden administration has already banned the export of many AI chips and models. However, some Chinese companies are reportedly taking advantage of loopholes to access the tools through cloud services.
“Global regulation of AI will be undermined [as a result of new controls], despite the need for greater global cooperation,” Ekbia said. “The political and geopolitical implications of this could be significant, potentially enabling more authoritarian and repressive uses of AI around the world.”
Matt Mittelsteadt, another researcher at George Mason University, points out that if Trump imposes tariffs on the technology needed to build AI, it could also squeeze the capital needed to fund AI research and development. During his campaign, Trump proposed a 10% tariff on all U.S. imports and a 60% tariff on products made in China.
“Probably the biggest impact will come from trade policy,” Mittelsteadt said. “Expect potential tariffs to have a significant economic impact on the AI sector.”
Of course, it's still early days. And while Trump largely avoided mentioning AI on the campaign trail, many of his policies, including restricting H-1B visas and embracing oil and gas, could have downstream effects on the AI industry.
Sandra Wachter, professor of data ethics at the Oxford Internet Institute, urged regulators, regardless of political affiliation, not to lose sight of the dangers and opportunities of AI.
“These risks exist regardless of political affiliation,” she said. “These harms don't believe in geography and don't care about party lines. We can only hope that AI governance isn't reduced to a partisan issue. This is an issue that affects all of us, no matter where we are, and we all need to work together to find good global solutions.”