
Microsoft spent ‘several hundred million dollars’ to build a ChatGPT supercomputer

Published Mar 13th, 2023 4:45PM EDT
Image: OpenAI


Microsoft spent a ton of money to make ChatGPT possible.

As reported by Bloomberg, when Microsoft invested $1 billion into OpenAI, part of that money went toward building a supercomputer to power what we now know as ChatGPT. According to the report, the company linked together tens of thousands of Nvidia A100 GPUs to deliver the processing power ChatGPT needed, a move that cost somewhere in the range of “several hundred million dollars.”

Nidhi Chappell, Microsoft general manager of Azure AI infrastructure, said that ChatGPT is just the beginning and that many more models will come out of the project:

“We built a system architecture that could operate and be reliable at a very large scale. That’s what resulted in ChatGPT being possible. That’s one model that came out of it. There’s going to be many, many others.”

Scott Guthrie, the Microsoft executive vice president responsible for cloud and AI, says that while ChatGPT is the supercomputer’s most prominent use case so far, the system was built to be adapted to many others:

“We didn’t build them a custom thing — it started off as a custom thing, but we always built it in a way to generalize it so that anyone that wants to train a large language model can leverage the same improvements. That’s really helped us become a better cloud for AI broadly.”

Guthrie also teased that the model everyone is working with right now was made possible by a supercomputer that is already a couple of years old. Microsoft is now at work on its next-generation supercomputer, which the executive says “is much bigger and will enable even more sophistication.”

Microsoft is hosting another AI event on March 16th, where it and OpenAI are expected to unveil GPT-4, the next generation of the technology that powers ChatGPT. According to a recent report, GPT-4 will accept not only text but also images, audio, and video as input, unlocking even more capability for AI.

Joe Wituschek Tech News Contributor

Joe Wituschek is a Tech News Contributor for BGR.

With more than 10 years of experience in tech, Joe covers breaking news across the technology industry and writes opinion pieces and reviews.