A few days before Google kicked off I/O 2025, we saw evidence that the company was working on an exciting feature for one of its most interesting AI products. NotebookLM would get Video Overviews on top of the existing Audio Overviews feature that lets you turn AI reports into podcasts, including interactive ones where you can talk to the AI.
Video Overviews sound even cooler. As I explained at the time, I’m already dreaming of a future where I can ask the AI to create a visual illustration of a concept it’s explaining to me. I’d prefer a video instead of an image, but that isn’t possible yet.
Then the official I/O 2025 keynote arrived, packed with AI announcements. Gemini and AI were all Google could talk about, leaving no room for anything else, which we already knew would be the case, since Google held the Android 16 portion of the presentation a week earlier.
Google did not mention Video Overviews during the main keynote, but it turns out the feature is coming to NotebookLM, and the first samples are out now.
Google announced in a blog post that it’s infusing LearnLM into Gemini 2.5, something we also heard during the main keynote. That same blog post is where Google dropped the Video Overviews announcement.
First, Google announced that NotebookLM users will be able to customize the length of Audio Overview podcasts. You’ll be able to choose between shorter and longer versions of the AI-generated audio summaries you get when feeding sources into NotebookLM.
I’ve explained more than once that I wish ChatGPT Deep Research supported a similar feature, so I could turn those large reports into audio experiences to listen to on my runs. The ability to tweak the length of the audio summary should come in handy.
Google also confirmed that Video Overviews are happening, saying it “heard from users that they’d like more visual clues during the overviews.” Video Overviews will not be available immediately to users. They’re coming soon, but it’s unclear how long it’ll take.
Google provided a few samples, and my first reaction is that you should temper your expectations. Google isn’t using the brand-new Veo 3 tech to create stunning video clips that explain the contents of your NotebookLM reports. Instead, the feature creates slides and pulls images from the source material to turn those summaries into video content that’s easier to digest.
For example, the following Video Overview discusses tectonic plates after a field trip. The clip clearly targets young students and recaps what they learned during their trip. It’s just a minute long, but it’s a great way to use the feature to explain concepts.
The video features text slides but also a few images to help explain the concepts the kids learned. It’s unclear whether those images were AI-generated or if they’re part of the materials the teacher would have uploaded to NotebookLM.
Video Overviews can be longer, as Google also shared two additional videos for a different audience: people interested in Gemini news, whether regular AI users or developers.
Google used the feature to create video presentations that run about 10 minutes each. As you can see below, they do a great job summarizing Google’s I/O 2025 announcements.
The Video Overviews include imagery from the blogs and videos Google used to announce the new AI features, and they’re easy to digest. An AI-generated host talks you through the various topics, explaining things along the way.
Again, it’s nothing too sophisticated, and some people might not appreciate Video Overviews. But it’s a feature with great potential. Hopefully, we’ll see it soon in NotebookLM and the Gemini app.
As a longtime ChatGPT user, I’d love similar overviews in ChatGPT. On that note, now that NotebookLM is available as an Android and iPhone app, you can turn your ChatGPT Deep Research reports into PDFs and then feed them into NotebookLM for Audio Overviews. Once Video Overviews are available, you might want to do the same with those.