OpenAI’s former head of safety is revealing all.
On Tuesday night, Jan Leike, a leader of OpenAI’s Superalignment team, announced he was resigning in a candid post to X: “I resigned.”
Now, three days later, Leike has elaborated on his departure, saying OpenAI doesn’t take safety seriously enough.
In his post, Leike said he joined OpenAI because he thought it would be the best place in the world to research how to “steer and control” artificial general intelligence (AGI), AI that can think faster than humans.
“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote.
The former OpenAI executive said the company should be devoting far more of its attention to “security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”
But Leike said his team, which worked on aligning AI systems with what’s best for humanity, was “sailing against the wind” at OpenAI.
“We are long overdue in getting incredibly serious about the implications of AGI,” he wrote, adding, “OpenAI must become a safety-first AGI company.”
Leike concluded the thread with a note to OpenAI employees, encouraging them to change the company’s safety culture.
“I’m counting on you. The world is counting on you,” he said.
Resignations at OpenAI
Leike and Ilya Sutskever, the other Superalignment team leader, left OpenAI within hours of each other on Tuesday.
In a statement on X, Altman praised Sutskever as “easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”
“OpenAI would not be what it is without him,” Altman wrote. “Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together.”
Altman did not comment on Leike’s resignation.
On Friday, Wired reported that OpenAI has disbanded the two researchers’ AI risk team. According to Wired, the researchers investigating the dangers of AI going rogue will now be absorbed into other parts of the company.
OpenAI did not respond to a request for comment from Business Insider.
The AI company, which recently debuted its new large language model GPT-4o, has been rocked by high-profile shake-ups in recent weeks.
In addition to Leike’s and Sutskever’s resignations, Vice President of People Diane Yoon and Head of Nonprofit and Strategic Initiatives Chris Clark are also leaving the company, according to The Information. And last week, BI reported that two other safety researchers had left the company.
One of those researchers later wrote that he had lost confidence that OpenAI would “behave responsibly around the time of AGI.”