OpenAI CEO Sam Altman said the company is working with the U.S. AI Safety Institute, a federal body whose mission is to assess and address risks in AI platforms, and has agreed to provide early access to its next major generative AI model for safety testing.
The announcement, which Altman posted to X late Thursday night, was light on detail. But it, together with a similar agreement struck with the U.K.'s AI Safety Institute in June, appears intended to counter the perception that OpenAI has deprioritized AI safety in its pursuit of more capable and powerful generative AI technologies.
In May, OpenAI effectively disbanded a unit that had been working on developing controls to prevent "superintelligent" AI systems from going rogue. According to reports (including ours), OpenAI deprioritized the team's safety research in favor of launching new products, ultimately prompting the resignations of the team's two co-leads: Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who went on to found the safety-focused AI company Safe Superintelligence Inc.).
Amid growing criticism, OpenAI removed restrictive non-disparagement clauses that implicitly discouraged whistleblowing, created a safety committee, and pledged to dedicate 20% of its compute to safety research. (The since-disbanded safety team had been promised 20% of OpenAI's compute for its work but never received it.) Altman recommitted to the 20% pledge and reaffirmed that OpenAI had voided the non-disparagement clauses for new and existing staff in May.
But the moves did little to appease some observers, particularly after OpenAI staffed its safety committee entirely with company insiders, including Altman, and more recently reassigned its top AI safety executive to another part of the organization.
Five senators, including Brian Schatz, a Democrat from Hawaii, recently questioned OpenAI's policies in a letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded today, writing that OpenAI "[is] committed to implementing strict safety protocols at every step of the process."
The timing of OpenAI's agreement with the U.S. AI Safety Institute seems a bit suspect in light of the company's endorsement earlier this week of the Future of AI Innovation Act, a Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The combined moves could be seen as an attempt at regulatory capture, or at least as OpenAI exerting influence over AI policymaking at the federal level.
That's not surprising, since Altman sits on the Department of Homeland Security's Artificial Intelligence Safety and Security Board, which makes recommendations on the "safe and secure development and deployment of AI" across America's critical infrastructure. OpenAI has also sharply increased its federal lobbying spend this year, laying out $800,000 in the first six months of 2024 compared to $260,000 in all of 2023.
The U.S. AI Safety Institute, housed within the Commerce Department's National Institute of Standards and Technology, consults with a coalition of companies that includes Anthropic as well as major tech firms such as Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden's October executive order on AI, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.