I thought OpenAI’s GPT-4o demos were incredible, as the new model gives ChatGPT a massive upgrade over competitors. Google demoed similar multimodal assistant features for Gemini, but that demo was not live, and the features won’t be available until later this year. The GPT-4o upgrade rolled out almost immediately after the short press event.
Reports that Apple and OpenAI were working to integrate ChatGPT into the iPhone’s upcoming iOS 18 update made so much sense.
You’d think that nothing could ruin the momentum of that demo, but then we witnessed three troubling developments that made me question my trust in the company. OpenAI isn’t as old as the Big Tech companies that many people hate. But it’s not a startup, either. And from the looks of it, it’s already starting to behave like a Big Tech player.
Ilya Sutskever’s departure
In chronological order, the first thing that happened after the GPT-4o launch was Ilya Sutskever’s departure from OpenAI. Anyone following the inner workings of companies like OpenAI wouldn’t necessarily be surprised to hear that one of the brilliant minds behind ChatGPT’s underlying tech is leaving.
Sutskever reportedly played a key role in Sam Altman’s brief ouster as CEO last fall. He also supported Altman’s return. We still don’t know what really happened there, but Sutskever practically disappeared from public view afterward. After the board drama in November, his departure seemed imminent — and worrisome.
Many people saw Sutskever as the main force keeping OpenAI in check — that is, ensuring that OpenAI develops safe AGI: artificial general intelligence that’s aligned with our interests and won’t lead to the destruction of the human race.
OpenAI’s superalignment team crumbles
It got even worse after Sutskever’s departure. Jan Leike, co-leader of OpenAI’s superalignment team, left the company hours after Sutskever announced his exit. In a series of tweets, Leike said that his team had been struggling to get enough resources to ensure the safe development of ChatGPT.
He said he had come to disagree with the OpenAI leadership “about the company’s core priorities” until finally reaching a “breaking point.”
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike said. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
Leike’s was only the latest departure from the team in recent months.
In a pair of reports, Vox detailed the superalignment team’s departures, as well as the strict NDAs that might have prevented OpenAI employees from speaking to the press after leaving the company.
Does this mean OpenAI is on the verge of unleashing AGI onto the world without the proper guardrails? Probably not. But it sure suggests the company might be moving a lot faster, and less safely, than the likes of Sutskever and Leike would like.
The Scarlett Johansson mess
The final unexpected development that hurt OpenAI is the Scarlett Johansson scandal. Sam Altman & Co. tried to get the actress to lend her voice to GPT-4o, as she voiced an AI character in Her, a movie that feels more relevant than ever. Johansson said no.
OpenAI went on to ship a voice for GPT-4o that sounded a lot like Johansson anyway. Altman reportedly asked the actress to reconsider shortly before the big demo. She didn’t.
OpenAI proceeded to show off the Sky voice that sounded a lot like her, and Johansson contacted her lawyers and put out a public statement. OpenAI withdrew the Sky voice, saying it had used a different actor. But the damage was done.
First, you don’t do this to anyone, let alone an actress who plays a beloved Marvel hero and who publicly took Disney to court over streaming pay, winning a settlement in the process.
Second, you don’t take what isn’t yours to improve your AI, considering all the criticism OpenAI has already received over copyright infringement with ChatGPT and its other products.
I don’t want OpenAI to become the new Google
Google is doing many things right with its products nowadays, but it’s also doing many things wrong in the pursuit of profit. It’s a giant corporation that has weathered plenty of scandals over the years, especially the privacy-infringing kind. That sort of behavior eroded my trust in Google to the point where I’ve almost eliminated its core products from my computing experience.
It’s also why I prefer ChatGPT over Gemini, even though the latter has many strengths that OpenAI will struggle to match.
But I certainly don’t appreciate the direction OpenAI is taking. If chasing profits at all costs, at the expense of AI safety and common sense, is OpenAI’s main priority, I’ll rethink my relationship with ChatGPT. I can always switch to a different provider; there are plenty of alternatives in town.
That said, there’s still time for OpenAI to steer the ship in the right direction before it has to, say, come up with a motto like “don’t be evil” to improve its image.