Ilya Sutskever is easily the first name that comes to mind when you talk about the early days of ChatGPT, even though he no longer works at OpenAI. A co-founder of the company, Sutskever was one of the brilliant minds behind ChatGPT and other AI initiatives there before leaving to form his own startup.
Sutskever was involved in the boardroom coup that briefly ousted Sam Altman in late 2023, though he later expressed regret for taking part in it. He resigned from OpenAI in May 2024 to found Safe Superintelligence (SSI), a company whose aim is right there in its name.
Superintelligence refers to AI that’s superior to anything humans can do. It’s two steps ahead of where we are now: companies like OpenAI are currently working on AGI, or artificial general intelligence, meaning AI that can handle tasks with the creativity of a human. AGI would then lead to superintelligence, or at least that’s the idea.
These terms aren’t exactly objective, and the goalposts move depending on the commercial interests of the players involved in the race to AGI and superintelligence. That’s where Sutskever’s approach appears to differ from everyone else’s. And it turns out that whatever he and his small team are working on, it’s good enough to convince investors to give him billions to continue his research.
Now, there’s a claim that Sutskever might have discovered a different way to train AI than everyone else, and that this new approach is showing promising results.
If you follow AI news, you might have come across these two paragraphs about Sutskever’s work on Reddit or X:
Sutskever has told associates he isn’t developing advanced AI using the same methods he and colleagues used at OpenAI. He has said he has instead identified a “different mountain to climb” that is showing early signs of promise, according to people close to the company.
“Everyone is curious about exactly what he’s pushing and exactly what the insight is,” said James Cham, a partner at venture firm Bloomberg Beta, which hasn’t invested in SSI. “It’s super-high risk, and if it works out, maybe you have the potential to be part of someone who is changing the world.”
They’re from a Wall Street Journal article that covered SSI’s recent financing round. The company just raised $2 billion at a valuation of $30 billion. SSI was valued at $5 billion just this past September.
That sort of growth is impressive for an AI startup but not exactly surprising. Sutskever is one of the most prominent names in AI, especially considering his involvement in developing ChatGPT and his interest in safe superintelligence.
What’s surprising is that investors are putting up all that money without getting anything in return anytime soon. Sutskever & Co. will not release any commercial products while they research superintelligence. And it’s not certain that SSI will ever get there, or get there before its competitors.
Still, Sutskever and his colleagues must have some AI demos ready to woo investors. That’s what makes The Journal’s paragraphs above so exciting. Sutskever finding a “different mountain to climb” toward superintelligence, one nobody else is climbing, sounds like some sort of breakthrough discovery in AI.
Does that mean Sutskever has abandoned the methods he pioneered while at OpenAI, which involved training smarter AI on vast amounts of data? We can only speculate.
The report goes on to say that Sutskever is running a tight ship. The team has only about 20 employees, operating from offices in Silicon Valley and Tel Aviv. Employees are discouraged from mentioning SSI on platforms like LinkedIn, and job candidates are told to leave their phones in Faraday cages, which block cellular and Wi-Fi signals, before entering SSI’s offices.
The WSJ also says that the SSI team doesn’t have well-known names from Silicon Valley. Instead, Sutskever is looking for promising new hires he can mentor rather than experienced people who might jump ship down the road.
While Sutskever won’t share specific details about his AI work with the world, he did appear at the NeurIPS AI conference in December, where he teased the kind of superintelligence he’s trying to develop. Per The Journal, he said future superintelligent AIs could be “unpredictable, self-aware and may even want rights for themselves.”
“It’s not a bad end result if you have AIs and all they want is to coexist with us,” Sutskever said. This sounds a lot like something he said while still employed at OpenAI. “Our goal is to make a mankind-loving AGI,” the AI engineer said at the company’s 2022 holiday party.