One of the only U.S. government agencies dedicated to evaluating the safety of AI is at risk of being dismantled unless Congress chooses to authorize it.
The U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems, was established in November 2023 as part of President Joe Biden's AI Executive Order. AISI operates within NIST, an agency of the Department of Commerce that develops guidance for the deployment of various categories of technology.
While AISI has a budget, a director, and a research partnership with its U.K. counterpart, the U.K. AI Safety Institute, it could be abolished by a simple repeal of Biden's executive order.
“If another president comes into office and repeals the AI executive order, AISI will be dismantled,” Chris McKenzie, senior director of communications at Americans for Responsible Innovation, an AI lobbying group, told TechCrunch. “And [Donald] Trump has promised to repeal the AI Executive Order. So if Congress formally authorizes the AI Safety Institute, it would ensure its continued existence regardless of who is in the White House.”
In addition to securing AISI's future, authorizing the office could also lead to more stable long-term funding from Congress for the initiative. AISI's current budget is approximately $10 million, a relatively small amount considering the concentration of major AI research institutes in Silicon Valley.
“Congressional appropriators tend to give higher budget priority to entities formally chartered by Congress,” McKenzie said, adding that such entities come with broad buy-in and are understood to be long-term priorities rather than one-off initiatives of a single administration.
In a letter sent today, a coalition of more than 60 businesses, nonprofits, and universities called on Congress to enact legislation codifying AISI before the end of the year. The signatories include OpenAI and Anthropic, both of which have signed agreements with AISI to collaborate on AI research, testing, and evaluation.
The House and Senate have each introduced bipartisan bills to authorize AISI's operations. But the proposals face opposition from some conservative lawmakers, including Sen. Ted Cruz (R-Texas), who has called for the Senate version of the AISI bill to roll back its diversity programs.
Admittedly, AISI is a relatively weak organization from an enforcement standpoint; its standards are voluntary. But tech giants such as Microsoft, Google, Amazon, and IBM, all of which signed the aforementioned letter, along with think tanks and industry coalitions, see AISI as the most promising avenue for AI benchmarking that could form the basis of future policy.
Some interest groups are also concerned that allowing AISI to dissolve would risk ceding AI leadership to foreign countries. At the AI Summit in Seoul in May 2024, international leaders agreed to form a network of AI safety institutes comprising bodies from the U.S. and the U.K., as well as Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union.
“As other governments race ahead, members of Congress can ensure the U.S. doesn't fall behind in the AI race by permanently authorizing the AI Safety Institute and cementing America's role in advancing AI innovation and adoption,” Jason Oxman, president and CEO of the Information Technology Industry Council, an IT industry trade group, said in a statement. “We urge Congress to heed today's call to action from industry, civil society, and academia and pass the necessary bipartisan legislation before the end of the year.”