A tough week has turned into a tough month for OpenAI, and it doesn’t look like it will be an easy problem for the company’s accomplished CEO, Sam Altman, to solve.
In the latest twist in the OpenAI disaster story, a group of current and former OpenAI employees has gone public with concerns about the company's financial motives and commitment to responsible AI. According to a report in The New York Times published Tuesday, they describe a culture of false promises about safety.
"The world is not ready, and neither are we," former OpenAI researcher Daniel Kokotajlo wrote in an email announcing his resignation, as reported by The New York Times. "I worry that we are forging ahead anyway, justifying our actions."
Also on Tuesday, the whistleblowers joined other AI industry stakeholders in publishing an open letter calling for change in the industry. The group urges AI companies to uphold a culture of open criticism and promise not to retaliate against people who raise concerns.
While the letter wasn't directed at OpenAI by name, it was a pretty clear subtweet, and it marks another damaging development for a company that has taken more than enough beatings in recent weeks.
In a statement to Business Insider, an OpenAI spokesperson reiterated the company’s commitment to safety, highlighting an “anonymous integrity hotline” where employees can voice concerns, as well as the company’s Safety and Security Committee.
“We are proud of our track record of delivering the most capable and safe AI systems, and believe in a science-based approach to addressing risks,” they said in an email. “We agree that rigorous discussion is essential given the importance of this technology, and we will continue to engage with governments, civil society, and other communities around the world.”
Safety is a Second (or Third) Priority
A common thread among the complaints is that safety isn’t the top priority at OpenAI, but rather growth and profits.
In 2019, the company transformed from a nonprofit focused on safe AI development into a "capped-profit" organization now valued at $86 billion, and Altman is reportedly considering turning it into a fully for-profit company.
This led to safety becoming a lower priority, according to former directors and employees.
"Our experience leads us to believe that self-governance cannot reliably withstand profit-driven pressures," former directors Helen Toner and Tasha McCauley wrote in an Economist op-ed last month calling for external oversight of AI companies. Toner and McCauley voted to fire Altman last year. (In a response op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.)
These profit incentives have made growth the priority, some of the people said, pushing OpenAI to race other artificial intelligence companies to develop more advanced technology and release products before some believe they are ready.
According to an interview Toner gave last week, Altman routinely lied to and withheld information from the board, including about safety processes. Toner said the board wasn't even told in advance about ChatGPT's November 2022 release, only finding out about it on Twitter. (The company didn't explicitly deny this, but said in a statement that it was "disappointed that Ms. Toner continues to revisit these issues.")
Former researcher Kokotajlo told The New York Times that Microsoft began testing Bing with an unreleased version of GPT-4 that had not been approved by OpenAI's safety board (a charge Microsoft denied, according to the Times).
These concerns echo those of Jan Leike, who recently left the company. Leike co-led the company's superalignment team, a group dedicated to studying the risks AI superintelligence poses to humanity, alongside chief scientist Ilya Sutskever, who also departed recently; the company has seen a string of exits in recent months. The team was dissolved when its leaders left, though the company has since set up a new safety committee.
"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike said in a series of social media posts announcing his departure. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
These concerns are mounting as the company moves closer to artificial general intelligence, or technology capable of performing any task a human can. Many experts say AGI raises p(doom), a nerdy and depressing term for the probability that AI destroys humanity.
Put bluntly, as leading AI researcher Stuart Russell told BI last month, “Even the people developing the technology are saying it could lead to the extinction of humanity. What right do they have to play Russian roulette with everyone’s children?”
NDAs and an A-list actress
The 2024 bingo card probably didn’t say Black Widow would be taking on Silicon Valley giants, but that’s what happened.
Over the past few weeks, the company has encountered some unexpected foes with concerns that go beyond just safety, including Scarlett Johansson.
Last month, the actress lawyered up and issued a scathing statement against OpenAI after the company unveiled a new AI voice that sounds eerily similar to hers. Though the company claims it wasn't trying to imitate Johansson, the similarities are hard to deny, especially considering Altman tweeted "her" around the time of the announcement, a likely reference to the 2013 film in which Johansson voiced an AI virtual assistant. (Spoiler alert: The film doesn't make the tech look too good.)
"I was shocked, outraged and in disbelief that Altman was pursuing a voice that was so eerily similar," Johansson said of the model, adding that Altman had repeatedly asked her to lend her voice to OpenAI, but she turned him down.
The company’s defense was, more or less, that management didn’t communicate properly and handled the problem clumsily, but that’s not much comfort given that the company deals with some of the most powerful technology in the world.
The situation was exacerbated by a damaging report on the company's culture of silencing criticism through restrictive and unusual exit paperwork: Former employees who left without signing nondisparagement agreements could lose vested equity worth millions of dollars. Such agreements were essentially unheard of in the tech industry.
“This is my fault and one of the few times I’ve been truly embarrassed while running OpenAI. I had no idea this was happening and should have known,” Altman responded to the allegations in a tweet.
But a few days later, Altman was left red-faced when a report surfaced suggesting he had known about the provisions all along.
As Altman learned, when it rains, it pours.
No more white knights
And those May showers did not bring June flowers.
Like so many tech companies before it, OpenAI is synonymous with its co-founder and CEO, Sam Altman, who until recently was seen as a benevolent genius with a vision for a better world.
But as the company’s reputation continues to deteriorate, so does its leader’s reputation.
Earlier this year, the venture capital elite began turning their backs on Altman, and now the general public may follow suit.
The Scarlett Johansson incident made him seem incompetent, the NDA fiasco made him seem like a bit of a snake, and the safety concerns made him seem like an evil genius.
On Monday, The Wall Street Journal reported on several questionable business dealings by Altman.
While he doesn’t make any money directly from OpenAI (he doesn’t own any stock in the company and his reported $65,000 salary is a tiny fraction of his $1 billion net worth), he does have many conflicts of interest: He has personal investments in several companies that OpenAI does business with, The Wall Street Journal reported.
For example, he owns shares in Reddit, which recently struck a deal with OpenAI, and the first customer of Helion, a nuclear energy startup in which Altman is a lead investor, was Microsoft, OpenAI’s largest partner. (Altman and OpenAI have said he has recused himself from these deals.)
Amid a flurry of damaging media coverage, the company and its leaders have been working on damage control: Altman announced he had signed the Giving Pledge, pledging to donate most of his fortune, and the company has reportedly signed a major deal with Apple.
But some good news isn’t enough to clean up the mess Altman finds himself in. It’s time to grab a bucket and a mop and get to work.