I’ve made no secret of my interest in Apple’s Vision Pro. I can’t wait to try the spatial computer, and I’ll hopefully be able to integrate it into my daily life, whether it’s for work or entertainment. The device won’t be available in the US until early next year. It’ll then roll out in Europe and other markets after that. That’s a long wait, but it should be worth it.
Word on the street is that Apple plans to add ChatGPT-like AI features to iOS 18 next year. "Apple GPT," an unofficial name used by Apple fans, could debut by next fall on the iPhone 16 and any other iPhone models that support iOS 18.
But if Apple is indeed preparing a so-called Apple GPT AI solution for a wide release next year, it’ll likely make a big deal about it during next year’s WWDC event. The first Apple generative AI features should then be available in iOS 18 beta releases preceding the official iOS 18 release.
If iOS 18 gets a native AI experience similar to ChatGPT, then I’m certain all other Apple products running next-gen operating systems will also get access to this early version of Apple GPT. And I hope the Vision Pro will be a part of that.
I've said the Vision Pro needs its own ChatGPT-like product since the moment Apple unveiled the device. As a reminder, Vision Pro computing should be even faster than we're used to. Smartphones and laptops are already incredibly fast, but we interact with them via touch or peripherals. With the Vision Pro, we're going to get a speed boost, as the OS will track our eyes and hands.
This will introduce a new era of computing because we’ll be navigating the operating system at the speed of thought.
All we need for the experience to be even faster is built-in generative AI. We need Apple’s equivalent of Copilot, which is Microsoft’s new generative AI software for Windows 11.
With Apple GPT on board, the Vision Pro experience could get even faster. Not only will the headset react to the movement of our eyes and hands, but we'll probably be able to issue voice commands as well. That's hopefully what we're going to get from Apple.
As I explained recently, we’re in the early days of personal AI. Copilot is one such example. Another example is what Google is doing by adding Bard support to its apps. Apple can’t afford to stay out of this AI race.
Since the company likes to bring as many new features as possible to new devices, I'm convinced the Vision Pro will be part of the first wave of Apple devices that support Apple GPT. The device packs Apple's M2 processor and will run visionOS, an operating system based on iOS. If iOS 18 gets Apple GPT features, visionOS will likely run them too.
Considering the rumored launch timeframe for Apple GPT and the still-unclear Vision Pro release date, I’m cautiously optimistic about testing Apple’s generative AI product on the spatial computer.
I'll also point out one interesting thing from the Apple GPT report. Apple is supposedly building large cloud infrastructure to handle AI features, but the expectation is that it will favor an "edge AI" type of experience. That would mean more of the processing happens on-device, which fits with Apple's commitment to privacy.
As I said, privacy should be a core focus when developing personal AI, and Apple is best positioned to offer that. The downside might be that older hardware won't be able to support Apple GPT features, though that's speculation. Even if that's the case, the Vision Pro should still support on-device Apple GPT, since the spatial computer runs on Apple's M2 chip.