Can the US meaningfully regulate AI? It's not at all clear yet. While policymakers have made progress in recent months, they have also experienced setbacks, illustrating the difficult nature of laws that impose guardrails on technology.
In March, Tennessee became the first state to protect voice actors from unauthorized AI cloning. This summer, Colorado adopted a phased, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, some of which require companies to disclose details of their AI training.
However, the United States does not yet have a federal AI policy comparable to the EU's AI Act. Regulation continues to face significant obstacles at the state level as well.
After a long battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed extensive safety and transparency requirements on companies developing AI. Another California bill targeting distributors of AI deepfakes on social media was put on hold this fall pending the outcome of a lawsuit.
But there is reason for optimism, says Jessica Newman, co-director of the AI Policy Hub at the University of California, Berkeley. During a panel discussion on AI governance at TechCrunch Disrupt 2024, Newman pointed out that many federal laws, such as anti-discrimination and consumer protection statutes, may not have been written with AI in mind but still apply to it.
“You often hear that the US is this kind of 'wild west' compared to what's happening in the EU, but I think that's overstated. The reality is more nuanced than that,” Newman said.
Newman points out that the Federal Trade Commission has forced companies that covertly collect data to delete their AI models, and is investigating whether sales of AI startups to big tech companies violate antitrust law. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal and has proposed rules requiring the disclosure of AI-generated content in political ads.
President Joe Biden is also trying to put certain AI rules on the books. About a year ago, Biden signed an AI executive order that encouraged voluntary reporting and benchmarking practices that many AI companies had already chosen to implement.
One outcome of this executive order was the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems. AISI, which operates within the National Institute of Standards and Technology, has research partnerships with leading AI labs such as OpenAI and Anthropic.
However, AISI could be wound down by a simple repeal of Biden's executive order. In October, a coalition of more than 60 organizations called on Congress to enact legislation codifying AISI before the end of the year.
“I think we all share an interest as Americans in making sure we mitigate the potential downsides of technology,” said AISI Director Elizabeth Kelly, who also participated in the panel discussion.
So is there hope for comprehensive AI regulation in the U.S.? The failure of SB 1047, which Newman described as a “light-touch” bill shaped by industry input, is not necessarily encouraging. SB 1047, written by California state Sen. Scott Wiener, was opposed by many in Silicon Valley, including prominent technologists like Yann LeCun, chief AI scientist at Meta.
That being the case, Wiener, another Disrupt panelist, said he would not have drafted the bill any differently, and he is confident that broad AI regulation will eventually prevail.
“I think this set the stage for future efforts,” he said. “Hopefully we can do something that brings more people together, because the reality, as all the large labs have already acknowledged, is that the risks [of AI] are real and we want to test for them.”
In fact, Anthropic last week warned of an AI catastrophe if governments don't implement regulations within the next 18 months.
Opponents have only doubled down on their rhetoric. Last Monday, Vinod Khosla, founder of Khosla Ventures, called Wiener “totally ignorant” and “unqualified” to regulate the real dangers of AI. And Microsoft and Andreessen Horowitz issued statements opposing AI regulations that could affect their financial interests.
But Newman argues that pressure to unify the growing patchwork of state-by-state AI rules will ultimately yield a stronger legal solution. In the absence of consensus on a regulatory model, state policymakers have introduced nearly 700 AI bills this year alone.
“My sense is that businesses don't want a patchwork regulatory environment where every state is different,” she said, adding that she expects growing pressure for federal action to reduce that uncertainty.