Helen Toner, a former OpenAI board member and director of strategy at Georgetown University's Center for Security and Emerging Technology, worries that if the status quo holds, Congress's eventual AI policymaking will amount to a “knee-jerk” reaction.
“The current Congress, maybe people haven't noticed, is not very functional and not very good at passing laws unless there's a major crisis,” Toner said at TechCrunch's StrictlyVC event in Washington, D.C. on Tuesday. “AI is going to be a big, powerful technology, but at some point something's going to go wrong. And if the only laws we get are ones that are made impulsively as a reaction to a major crisis, is that productive?”
Toner's comments, coming ahead of a White House summit on Thursday to discuss how AI is being used to support American innovation, highlight a long-standing impasse in U.S. AI policy.
In 2023, President Joe Biden signed an executive order establishing certain consumer protections around AI and requiring developers of powerful AI systems to share the results of safety testing with the relevant government agencies. Earlier that year, the National Institute of Standards and Technology, which develops federal technology standards, released a roadmap for identifying and mitigating emerging AI risks.
However, Congress has yet to pass AI legislation of its own, let alone advance comprehensive regulation on the order of the EU's recently enacted AI Act, and with 2024 being a key election year, that's unlikely to change anytime soon.
As a Brookings Institution report points out, the gap in federal regulation has sent state and local governments scrambling to fill the void: state lawmakers introduced 440% more AI bills in 2023 than in 2022, according to the lobbying group TechNet, and nearly 400 new state-level AI bills have been proposed in recent months.
The California Legislature passed about 30 new AI bills last month aimed at protecting consumers and jobs. Colorado recently approved a bill requiring AI companies to use “reasonable care” to avoid discrimination while developing the technology. And Tennessee Governor Bill Lee signed the ELVIS Act in March, banning AI replication of musicians' voices or likenesses without their explicit consent.
The patchwork of regulations risks creating uncertainty for both industry and consumers.
Consider one example: state laws regulating AI often define “automated decision-making” (a term that broadly refers to any decision made by an algorithm, such as whether a business receives a loan) in conflicting ways. Some laws don't consider a decision “automated” so long as there is some degree of human involvement; others are stricter.
Toner believes that even a high-level federal mandate would be preferable to the status quo.
“Some of the smarter and more thoughtful players that I've seen in this space are trying to say, what are some fairly light-touch, fairly common-sense guardrails that we can put in place now to make future crises, future big problems less severe and basically make it less likely that we'll need some kind of quick, ill-considered response later on,” she said.