OpenAI's Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company's computing resources, according to a person on the team. But requests for a fraction of that compute were often denied, blocking the team from doing its work.
That issue, among others, pushed several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who, during his time at OpenAI, was involved with the development of ChatGPT, GPT-4, and ChatGPT's predecessor, InstructGPT.
Leike made public the reasons for his resignation on Friday morning. “I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” Leike wrote in a series of posts on X. “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there.”
Building smarter-than-human machines is an inherently dangerous endeavor.
OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
— Jan Leike (@janleike) May 17, 2024
OpenAI did not immediately respond to a request for comment regarding the resources promised and allocated to its team.
OpenAI formed the Superalignment team last July, led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned from the company this week. The team had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within four years. Staffed with scientists and engineers from OpenAI's previous alignment division, along with researchers from other organizations across the company, the team was to contribute research informing the safety of both in-house and non-OpenAI models and, through initiatives including a research grant program, to solicit work from and share it with the broader AI industry.
The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But as product launches began to take up an increasing amount of OpenAI leadership's bandwidth, the Superalignment team found itself having to fight for more upfront investment, investment it believed was critical to the company's stated mission of developing superintelligent AI for the benefit of all humanity.
“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike continued. “But over the past years, safety culture and processes have taken a backseat to shiny products.”
Sutskever's dispute with OpenAI CEO Sam Altman served as an added distraction.
Sutskever, along with OpenAI's old board of directors, abruptly moved to fire Altman late last year over concerns that he had not been “consistently candid” with board members. Under pressure from OpenAI's investors, including Microsoft, and many of the company's own employees, Altman was eventually reinstated, much of the board resigned, and Sutskever reportedly never returned to work.
Sutskever was instrumental to the Superalignment team, not only contributing research but acting as a bridge to other divisions within OpenAI, according to people familiar with the team. He also served as an ambassador of sorts, impressing the importance of the team's work on key OpenAI decision makers.
After Leike's departure, Altman wrote in a post on X that he agreed “there's a lot more to do” and that the company is “committed to doing it,” pointing to a longer explanation that co-founder Greg Brockman published Saturday morning.
We really appreciate everything Jan has done for OpenAI, and we know he will continue to contribute to this mission externally. In light of the questions raised by his resignation, I wanted to explain a little bit about how we are thinking about our overall strategy.
First… https://t.co/djlcqEiLLN
— Greg Brockman (@gdb) May 18, 2024
Brockman's response was light on specifics in terms of policies or commitments, but he did write that “we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities.”
Following Leike's and Sutskever's departures, another OpenAI co-founder, John Schulman, has moved to head up the type of work the Superalignment team was doing. But there will no longer be a dedicated team; instead, the work will fall to a loosely associated group of researchers embedded in divisions throughout the company. An OpenAI spokesperson described this as “integrating [the team] more deeply.”
The fear is that, as a result, OpenAI's AI development won't be as safety-focused as it could have been.