After the OpenAI CEO drama concluded last week, a report dropped a bombshell about the company’s most recent work. OpenAI had supposedly stumbled upon an AI breakthrough called Q* (Q-Star) that could threaten humanity. The detail comes from a letter that unnamed OpenAI researchers sent to the board before Sam Altman’s firing, and the Q* algorithm might have been one of the key developments that led to his ouster.
Now that Altman is back as CEO of OpenAI, he fielded questions in an interview about those five days between his firing and rehiring. He avoided discussing the reasons why the board fired him, but OpenAI CTO Mira Murati said the events that just unfolded had nothing to do with AI safety.
The Q* question inevitably surfaced, with Altman somewhat acknowledging the letter that mentioned it. He labeled the whole thing “an unfortunate leak,” which seems to confirm that the letter, and the concerns raised in it, are real.
The Q* letter
Reuters first reported on the Q* algorithm just as OpenAI rehired Sam Altman. While Reuters had not seen the letter, it reported that Q* achieved something generative AI can’t yet do reliably: solving math problems.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
This might be a big milestone toward developing smarter AI, perhaps even one of the building blocks for the AGI that OpenAI is working on. That’s all speculation, however, as the company has yet to address such developments.
OpenAI did not comment on the matter at the time. But Mira Murati reportedly alerted the staff last week about the Q* stories that would soon emerge.
What Sam Altman says
In an interview with The Verge, Sam Altman answered a question about Q*, choosing to largely pivot toward AI safety and the benefits AI can bring to the world. That, after all, is what OpenAI is working toward and why Altman is at the company. Here’s his full comment:
No particular comment on that unfortunate leak. But what we have been saying — two weeks ago, what we are saying today, what we’ve been saying a year ago, what we were saying earlier on — is that we expect progress in this technology to continue to be rapid, and also that we expect to continue to work very hard to figure out how to make it safe and beneficial. That’s why we got up every day before. That’s why we will get up every day in the future. I think we have been extraordinarily consistent on that.
Without commenting on any specific thing or project or whatever, we believe that progress is research. You can always hit a wall, but we expect that progress will continue to be significant. And we want to engage with the world about that and figure out how to make this as good as we possibly can.
I’ll also point out a different comment Sam Altman made at the Asia-Pacific Economic Cooperation summit a day before he was fired. That comment teased a big AI breakthrough, though he didn’t detail it:
Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime.
Was that about Q*? We don’t have a definitive answer. But it sure seems like OpenAI has made some big breakthroughs recently. If they’re safe, that’s certainly exciting news for the future of ChatGPT. And if they’re not, well, we might not know AGI has been reached until it’s too late.