Lilian Weng, another senior safety researcher at OpenAI, announced on Friday that she would be leaving the startup. Weng had been vice president of research and safety since August, and before that led OpenAI’s safety systems team.
In a post on X, Weng said, “After seven years at OpenAI, I feel ready to reset and explore new things.” Weng said her last day would be Nov. 15, but did not specify where she would go next.
“I have made the very difficult decision to leave OpenAI,” Weng said in the post. “Looking at what we have achieved, I am very proud of every member of the Safety Systems team and have great confidence that the team will continue to thrive.”
Weng’s departure is the latest in a string of AI safety researchers, policy researchers and other executives who have left the company in the past year, with some accusing OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike, leaders of OpenAI’s now-disbanded Superalignment team, which worked on developing ways to steer superintelligent AI systems; both left the startup this year to work on AI safety elsewhere.
According to LinkedIn, Weng first joined OpenAI in 2018 and worked on the startup’s robotics team, where she worked on a robotic hand that could solve a Rubik’s Cube. According to her post, that feat took two years to accomplish.
As OpenAI began to focus more on the GPT paradigm, so did Weng. She transitioned in 2021 to help build the startup’s applied AI research team. After the release of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI’s Safety Systems unit has more than 80 scientists, researchers and policy experts, according to Weng’s post.
Weng is one of many AI safety experts who have raised concerns about OpenAI’s focus on safety as it attempts to build increasingly capable AI systems. Longtime policy researcher Miles Brundage announced in October that he was leaving OpenAI, and the AGI readiness team he advised was disbanded. The same day, the New York Times profiled Suchir Balaji, a former OpenAI researcher, who said he left OpenAI because he thought the startup’s technology would do more harm than good to society.
OpenAI told TechCrunch that its executives and safety researchers are working on a transition to replace Weng.
“We deeply appreciate Lilian’s contributions to groundbreaking safety research and building rigorous technical safeguards,” an OpenAI spokesperson said in an emailed statement. “We are confident that the Safety Systems team will continue to play a key role in ensuring the safety and reliability of our systems, serving hundreds of millions of people around the world.”
Other executives who have left OpenAI in recent months include CTO Mira Murati, Chief Research Officer Bob McGrew, and Vice President of Research Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced that they would be leaving the startup. Some of them, including Leike and Schulman, left to join OpenAI competitor Anthropic, while others went on to start their own ventures.