OpenAI CEO Sam Altman is leaving an internal committee that OpenAI created in May to oversee “significant” safety decisions related to the company's projects and operations.
OpenAI said in a blog post today that the safety and security committee will become an “independent” board oversight group chaired by Carnegie Mellon University professor Zico Kolter and including Quora CEO Adam D'Angelo, retired U.S. Army Gen. Paul Nakasone, and former Sony executive Nicole Seligman, all of whom are current members of OpenAI's board of directors.
OpenAI said in the post that the committee conducted a safety review of o1, the company's most recent AI model, while Altman was still its chair. The group will continue to receive “regular briefings” from OpenAI's safety and security teams and will retain the authority to delay releases until safety concerns are addressed, the company said.
“As part of its work, the Safety and Security Committee will continue to receive periodic reports on the technical evaluation of current and future models, as well as on ongoing post-release monitoring,” OpenAI said in the post. “[W]e are building on our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches.”
Altman's departure from the safety and security committee comes after five U.S. senators sent him a letter this summer raising questions about OpenAI's policies. Nearly half of the OpenAI staffers who once focused on AI's long-term risks have left the company, and former OpenAI researchers have accused Altman of opposing “real” AI regulation in favor of policies that advance OpenAI's corporate goals.
They're right: OpenAI has significantly increased its spending on federal lobbying, budgeting $800,000 for the first six months of 2024 compared with $260,000 for all of last year. Altman also joined the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board this spring, which makes recommendations on the development and deployment of AI across America's critical infrastructure.
Even with Altman gone, it's unlikely the safety and security committee will make tough decisions that meaningfully affect OpenAI's commercial roadmap. OpenAI said in May that it intended to address “legitimate criticism” of its work through the committee. Of course, “legitimate criticism” is in the eye of the beholder.
In a May op-ed for The Economist, former OpenAI board members Helen Toner and Tasha McCauley wrote that they doubted today's OpenAI could be trusted to hold itself accountable. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote.
And OpenAI's profit incentives are growing.
The company is reportedly in the process of raising more than $6.5 billion in a funding round that would value it at over $150 billion. To close the deal, OpenAI may abandon its hybrid nonprofit corporate structure, which sought to cap investor returns in part to ensure the company stayed aligned with its founding mission: developing artificial general intelligence “for the benefit of all humanity.”