A group of current and former employees of major AI companies, including OpenAI, Google DeepMind, and Anthropic, has published an open letter calling for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. “Absent effective government oversight of these companies, current and former employees are among the few people who can hold them accountable to the public,” the letter, made public on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
The letter follows a Vox investigation that revealed OpenAI had attempted to silence recently departed employees by forcing them to choose between signing heavy-handed non-disparagement agreements and losing their vested equity in the company. In response to the report, OpenAI CEO Sam Altman said he was “genuinely embarrassed” by the provision and claimed it had been removed from recent departure documents, though it was unclear whether it remained in effect for some employees. An OpenAI spokesperson told Engadget that the company has since removed non-disparagement clauses from its standard departure paperwork.
The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders, and Daniel Kokotajlo. Kokotajlo said he resigned after losing confidence that the company would responsibly build artificial general intelligence (AGI), an AI system with intelligence equal to or greater than that of humans. The letter, endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, raises serious concerns about the lack of effective government oversight of AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, worsening inequality, and even a loss of human control over autonomous systems, potentially resulting in human extinction.
“There is still a lot we don’t know about how these systems will work and whether they will remain aligned with human interests even as they get smarter and potentially exceed human-level intelligence in all domains,” Kokotajlo wrote on X. “Meanwhile, there is little to no oversight of this technology. Instead, we rely on the self-policing of the companies developing it, while profit motives and tech mania pressure them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”
“We are proud of our track record of delivering the most capable and safest AI systems, and we believe in our scientific approach to addressing risk. Given the significance of this technology, we agree that rigorous debate is crucial, and we will continue to engage with governments, civil society, and other communities around the world,” an OpenAI spokesperson said in a statement shared with Engadget. “This is also why we have avenues for employees to express their concerns, including an anonymous integrity hotline. We have also established a Safety and Security Committee, led by members of our board and our safety leaders.”
Google and Anthropic did not respond to Engadget’s requests for comment. OpenAI had shared a similar statement with Bloomberg, likewise pointing to its track record of delivering capable and safe AI systems and its commitment to rigorous debate and engagement with governments and civil society.
The signatories call on AI companies to adhere to four key principles:
- Refrain from retaliating against employees who raise safety concerns
- Support an anonymous system for whistleblowers to alert the public and regulators about risks
- Allow a culture of open criticism
- Avoid non-disparagement and non-disclosure agreements that restrict what employees can say
The letter comes amid growing scrutiny of OpenAI’s practices, including the dissolution of its “superalignment” safety team and the departure of key figures such as co-founder Ilya Sutskever and Jan Leike, who criticized the company for prioritizing “shiny products” over safety.
Update, June 5, 2024 at 11:51am ET: This story has been updated to include a statement from OpenAI.