Following criticism of its approach to AI safety, OpenAI has formed a new committee to oversee “significant” safety and security decisions related to the company's projects and operations. But in a move sure to anger ethicists, OpenAI opted to populate the committee with only company insiders, including the company's CEO, Sam Altman, rather than outside observers.
According to the company's official blog post, Altman and the other members of the Safety and Security Committee — OpenAI board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, Chief Scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI's “preparedness” team), Lilian Weng (head of safety systems), Matt Knight (head of security), and John Schulman (head of “alignment science”) — will be charged with evaluating OpenAI's processes and safety measures over the next 90 days. The committee will then share its findings and recommendations with OpenAI's full board of directors for consideration. At that point, the company will provide a public update on any adopted recommendations “in a manner consistent with safety and security.”
“OpenAI has recently begun training our next frontier models, and we anticipate the resulting systems will bring us to the next level of capability on our path to [artificial general intelligence],” OpenAI wrote. “We are proud to have built and released a model that leads the industry in both capability and safety, but we welcome robust discussion at this critical moment.”
OpenAI has seen several high-profile departures from its safety division in the past few months, with some of those former staffers voicing concerns about what they see as a willful disregard for AI safety.
Daniel Kokotajlo, who worked on OpenAI's governance team, resigned in April after losing confidence that OpenAI would “act responsibly” when it came to releasing increasingly powerful AI, as he put it in a personal blog post. OpenAI co-founder and former chief scientist Ilya Sutskever resigned in May after a long-running dispute with Altman and his allies, reportedly in part over Altman's rush to release AI-powered products at the expense of safety practices.
More recently, Jan Leike, a former DeepMind researcher who helped develop ChatGPT and its predecessor InstructGPT while at OpenAI, stepped down from his safety research position and said in a series of posts on X that he believes OpenAI is “not on track” to “properly” solve AI security and safety problems. Gretchen Krueger, an AI policy researcher who left OpenAI last week, echoed Leike's comments, calling on the company to improve its accountability, its transparency, and “the care with which [it uses its] own technology.”
More needs to be done to improve fundamentals such as decision-making processes, accountability, transparency, documentation, policy enforcement, the care with which the technology is used, and mitigation of impacts on inequality, rights, and the environment.
— Gretchen Krueger (@GretchenMarina) May 22, 2024
Quartz notes that beyond Sutskever, Kokotajlo, Leike, and Krueger, at least five of OpenAI's most safety-conscious employees have resigned or been pushed out since late last year, including former OpenAI board members Helen Toner and Tasha McCauley. In an op-ed for The Economist published Sunday, Toner and McCauley wrote that they couldn't expect OpenAI to hold itself accountable as long as Altman was at the helm.
“[B]ased on our experience, we believe that self-governance cannot reliably withstand profit-driven pressures,” Toner and McCauley wrote.
Lending weight to Toner and McCauley's allegations, TechCrunch reported earlier this month that OpenAI's Superalignment team, charged with developing ways to manage and steer “superintelligent” AI systems, had been promised 20% of the company's computing resources but received only a fraction of that. The Superalignment team has since been disbanded, with much of its work falling under the purview of Schulman and a safety advisory group that OpenAI formed in December.
OpenAI advocates for AI regulation while simultaneously working to shape it, hiring in-house lobbyists and a growing number of law firm lobbyists and spending hundreds of thousands of dollars on lobbying efforts in the U.S. in the fourth quarter of 2023 alone. Recently, the U.S. Department of Homeland Security announced that Altman will join its newly formed Artificial Intelligence Safety and Security Board, which will make recommendations on the “safe and secure development and deployment of AI” across U.S. critical infrastructure.
To head off criticism of an executive-dominated safety and security committee, OpenAI has pledged to retain third-party “safety, security, and technical” experts to support the committee's work, including cybersecurity veteran Rob Joyce and former U.S. Department of Justice official John Carlin. But beyond Joyce and Carlin, the company has not provided any details about the size or composition of this group of outside experts, nor has it disclosed the limits of the group's power and influence over the committee.
Bloomberg columnist Parmy Olson noted in a post on X that corporate oversight boards like the Safety and Security Committee, much like Google's AI oversight board, the Advanced Technology External Advisory Council, “[do] essentially nothing in the way of real oversight.” OpenAI says it seeks to address “legitimate criticism” of its work through the committee — though of course “legitimate criticism” is in the eye of the beholder.
🙏🏼 Did you know OpenAI has indicated that they will “address legitimate criticism of their work”? They'll probably decide what “legitimate criticism” means as well. 🤬 https://t.co/S2pq4MRYx9
— Neil Turkewitz (@neilturkewitz) May 28, 2024
Altman previously promised that outsiders would play a key role in OpenAI's governance. In a 2016 New Yorker article, he said OpenAI would “[plan] a way to allow a broad range of people around the world to elect representatives to a governance board.” That has never happened, and at this point it seems unlikely it ever will.