A former OpenAI researcher has opened up about how he “caused a stir” by writing and sharing an internal safety document that ultimately got him fired.
Leopold Aschenbrenner graduated from Columbia University at age 19 and, according to his LinkedIn profile, worked on OpenAI’s Superalignment team until he was fired in April for allegedly leaking information. He spoke about the experience in an interview with podcaster Dwarkesh Patel released Tuesday.
Aschenbrenner said he wrote the memo after a “major security incident” and shared it with several OpenAI executives, though he did not go into specifics in the interview. In the memo, he wrote that the company’s security was “grossly inadequate” to prevent the theft of “critical algorithmic secrets by foreign actors.” He had previously shared the memo with other OpenAI employees, and “most of them have said they found it helpful,” he added.
HR later warned Aschenbrenner that worrying about espionage by the Chinese Communist Party was “racist” and “unconstructive,” he said. OpenAI lawyers then questioned him about his views on AI and AGI and whether he and the Superalignment team were “loyal to the company.”
Aschenbrenner alleged that the company then went through his digital work at OpenAI.
He was fired soon after. The company claimed he had leaked confidential information, failed to cooperate with an investigation, and had already received a warning from human resources after sharing the memo with executives.
Aschenbrenner said the leak in question was a “brainstorming document on readiness, safety, and security measures” needed for artificial general intelligence (AGI), which he shared with three outside researchers for feedback. He said the document was reviewed for sensitive information before being shared, and that sharing this type of material for feedback was “totally normal” at the company.
Aschenbrenner said the line OpenAI deemed confidential concerned planning for AGI by 2027-2028 without setting a timeline for readiness. He said he wrote the document a few months after the Superalignment team was announced with a four-year planning horizon.
In a July 2023 post announcing the Superalignment team, OpenAI said its goal was to “solve the core technical challenges of superintelligence alignment in four years.”
“I didn’t think the planning period was a sensitive issue,” Aschenbrenner said in the interview. “It’s something that Sam has always said publicly,” he added, referring to CEO Sam Altman.
An OpenAI spokesperson told Business Insider that the concerns Aschenbrenner raised internally and with the company’s board did not lead to his dismissal.
“While we share his enthusiasm for building safe AGI, we disagree with many of the points he has made about our work since then,” the spokesperson said.
Aschenbrenner is one of several former employees who have recently spoken out about safety concerns at OpenAI, after a group of nine current and former OpenAI employees signed a letter calling for greater transparency from AI companies and protections for people who raise concerns about the technology.