The OpenAI drama with Sam Altman last week was the talk of the town, assuming your town likes artificial intelligence (AI) software like ChatGPT. For everyone else, it was one of those stories that's easy to ignore yet strangely mesmerizing, and it unfolded at the worst possible time. BGR covered it extensively, so I won't rehash Sam Altman's ousting and return. All I'll say is that it seems like good news for the future of ChatGPT, at least in the short term.
But if you've been following the OpenAI developments unfolding in real time, you must have come across all sorts of opinions on social media as well. That included plenty of memes explaining the situation, among them a scenario where OpenAI had reached AGI, or artificial general intelligence, which can be considered the endgame of all this AI development. AGI means AI capable of reasoning like a human, vastly superior to ChatGPT in its current state. It would also be superior to humans; let's not fool ourselves.
You might have also come across talk about AGI alignment, AGI FOOM, or AGI FOOMing. If you don't know what those mean, it's worth getting acquainted with the terms, as we're probably going to hear them time and again in the coming years.
What is AGI?
AGI is AI software so powerful that it can replicate human thinking. The difference is that an AI this sophisticated would never have to sleep, and it could continue to better itself independently of any company that manages it.
It would be able to consume large amounts of data without having to rest. It would be able to replicate itself if need be, and it could come up with breakthroughs we can't yet figure out on our own. That's the best-case scenario, one where AGI is aligned with our interests and improves everything in our lives.
Misaligned AGI, or bad AGI, might develop just like good AGI. The difference is that it might have different goals than its creators and humanity as a whole. In the worst case, it might lead to the collective demise of humanity. This sounds like a sci-fi scenario you’d see in movies involving conflicts with AI, but it’s not outside the realm of possibility.
Bad AGI might even attempt to hide its true intentions from humans to protect itself. And it would probably succeed. I gave you such a scenario yesterday, from a detailed analysis of last week's OpenAI developments. Long story short, that scary scenario imagined a situation where AGI had been reached at OpenAI and the rogue AI was responsible for everything that happened, even though humans appeared to be in charge.
I told you earlier this week that it's the scariest thing I've read about AI and the future of ChatGPT. It made me realize I wasn't fully grasping what AGI entails. I already think it's inevitable; it's in our nature to pursue such goals, even if they can hurt us. But I hadn't necessarily considered what it would mean for a bad AGI to conceal its existence while FOOMing and plotting to dominate humanity.
What is FOOM?
This brings us to the term FOOM, a word used mainly in connection with AGI. It describes AGI improving itself on its own at an incredibly rapid pace. Since I mentioned Tomas Pueyo's scary scenario, I'll also quote his explanation of FOOM:
AGI is Artificial General Intelligence: a machine that can do nearly anything any human can do: anything mental, and through robots, anything physical. This includes deciding what it wants to do and then executing it with the thoughtfulness of a human at the speed and precision of a machine.
Here’s the issue: If you can do anything that a human can do, that includes working on computer engineering to improve yourself. And since you’re a machine, you can do it at the speed and precision of a machine, not a human. You don’t need to go to pee, sleep, or eat. You can create 50 versions of yourself and have them talk to each other not with words but with data flows that go thousands of times faster. So in a matter of days—maybe hours or seconds—you will not be as intelligent as a human anymore, but slightly more intelligent. Since you’re more intelligent, you can improve yourself slightly faster and become even more intelligent. The more you improve yourself, the faster you improve yourself. Within a few cycles, you develop the intelligence of a God.
I will point out, as the author does, that FOOMing is theoretical at this point. But the logic tracks: an AI that reaches AGI would presumably realize it can improve itself if it has access to the right tools. Label it FOOM or invent another word; the result is the same. That AGI could become so intelligent it would have no match on Earth.
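To make the feedback loop concrete, here's a minimal sketch of the idea, purely illustrative and not a forecast. It assumes a toy model where each self-improvement cycle adds capability proportional to the agent's current capability; the function name foom and the gain parameter are hypothetical, invented just for this example.

```python
# Toy model of recursive self-improvement (FOOM). Purely illustrative:
# the assumption is that each cycle, "intelligence" grows by an amount
# proportional to its current level, so smarter versions improve faster.

def foom(intelligence: float = 1.0, gain: float = 0.1, cycles: int = 100) -> list[float]:
    """Return the intelligence level after each self-improvement cycle.

    intelligence: starting level (1.0 = human baseline, by assumption)
    gain: fraction of current intelligence converted into improvement
          per cycle (the hypothetical feedback strength)
    """
    history = [intelligence]
    for _ in range(cycles):
        # The smarter the agent, the larger the next improvement step.
        intelligence += gain * intelligence
        history.append(intelligence)
    return history

if __name__ == "__main__":
    levels = foom()
    for cycle in (0, 10, 50, 100):
        print(f"cycle {cycle:3d}: {levels[cycle]:,.1f}x human baseline")
```

With a modest 10% gain per cycle, this toy model passes 100x the human baseline after about 50 cycles and roughly 13,800x after 100. That compounding is the core of the FOOM argument: the growth is driven by the very thing that's growing.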
Ask ChatGPT about FOOMing, and it might tell you that Eliezer Yudkowsky coined the term. He's an AI researcher and the cofounder of the Machine Intelligence Research Institute (MIRI), a non-profit that focuses on developing safe AI for the future.
Will we ever get to AGI FOOMing in our lifetime, and if so, will it be aligned or misaligned? Well, at least we now know what all of that means. As for what happens next, nobody can predict the future of AI. That's why it's key that AI development is done right, assuming that's even possible.
Steps are being taken in that direction. OpenAI will expand its board and investigate the events surrounding Sam Altman's firing. Separately, the US and 17 other countries have agreed on guidelines for developing safe AI for humanity. It's hopefully the first of many such agreements that will define what companies can do and regulate AI.