OpenAI said on Tuesday it had established a safety and security committee led by senior executives after dissolving its previous oversight committee in mid-May.
According to the company, the new committee will be responsible for making recommendations to OpenAI’s board of directors on “significant safety and security decisions regarding OpenAI’s projects and operations.”
The move comes after the maker of the ChatGPT virtual assistant announced that it had begun training its “next generation model.”
The company said in a blog post that it anticipates the resulting system will bring it to the next level of capabilities on its path to AGI (artificial general intelligence), AI with intelligence equal to or greater than a human’s.
The formation of the new oversight committee comes after OpenAI disbanded a previous team focused on the long-term risks of AI, after its two leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced they were leaving the Microsoft-backed startup.
As the massive models underpinning applications like ChatGPT grow more capable, AI safety has moved to the forefront of a larger debate, with AI developers themselves asking when AGI will arrive and what risks it will carry.
OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman serve on the new safety committee alongside CEO Sam Altman.
Leike wrote this month that OpenAI’s “safety culture and processes have taken a backseat to shiny products.” Following Leike’s resignation, Altman said on the social media platform X that he was sad to see him go, adding that OpenAI “still has a lot of work to do.”
Over the next 90 days, the new committee will evaluate OpenAI’s processes and safeguards and share its recommendations with the company’s board of directors. OpenAI will provide a public update on any recommendations it adopts at a later date.
–CNBC’s Hayden Field contributed to this report.
"Elevate Your Brand with an Exclusive Feature Interview!"