Lilian Weng, another of OpenAI's lead safety researchers, announced Friday that she is leaving the company. Weng has been vice president of research and safety since August, and previously served as head of OpenAI's safety systems team.
In a post on X, Weng said, “After 7 years at OpenAI, I feel ready to reset and explore something new.” Weng said Nov. 15 would be her last day, but did not say where she would go next.
“I have made the very difficult decision to leave OpenAI,” Weng said in the post. “Given what we have accomplished to date, I am extremely proud of everyone on the Safety Systems team and have great confidence in our team's continued growth.”
Weng's departure is the latest in a string of AI safety researchers, policy researchers and other executives to leave the company in the last year, with some accusing OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike, the leaders of OpenAI's now-disbanded Superalignment team, which sought to develop ways to control superintelligent AI systems; both also left the startup this year to work on AI safety elsewhere.
According to LinkedIn, Weng first joined OpenAI in 2018, working on the startup's robotics team and ultimately developing a robotic hand that could solve a Rubik's cube, a task that took two years to accomplish, per her post.
As OpenAI started focusing more on the GPT paradigm, so did Weng. The researcher moved over in 2021 to help build the startup's applied AI research team. After the launch of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI's safety systems unit includes more than 80 scientists, researchers and policy experts, according to Weng's post.
Many AI safety experts have raised concerns about OpenAI's commitment to safety as it seeks to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was disbanding the AGI readiness team he had advised. The same day, The New York Times profiled former OpenAI researcher Suchir Balaji, who said he left OpenAI because he believed the startup's technology would do more harm than good to society.
OpenAI told TechCrunch that executives and safety researchers are working on a transition to replace Weng.
“We deeply appreciate Lilian's contributions to groundbreaking safety research and building rigorous technical safeguards,” an OpenAI spokesperson said in an emailed statement. “We are confident the Safety Systems team will continue to play a critical role in serving hundreds of millions of people around the world and ensuring our systems are safe and reliable.”
Other executives who have left OpenAI in recent months include CTO Mira Murati, chief research officer Bob McGrew, and VP of research Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they were exiting the startup. Some of these people, including Leike and Schulman, left to join OpenAI competitor Anthropic, while others started their own ventures.