Without question, the biggest selling point of Apple's iPhone 11 lineup is its advanced new camera system. The main attraction is Night mode, a feature that lets users capture strikingly crisp photos even in low-light environments.
While early reviews of the iPhone 11's camera performance were overwhelmingly positive, one area where reviewers found room for improvement was photos taken in medium-light environments. Not to worry: Apple has a new image processing framework dubbed Deep Fusion designed to fix that. Even better, Apple just released a new iOS beta that includes Deep Fusion. So if you want even more impressive camera performance, along with the usual assortment of security and bug fixes, you'll want to download the new update right away.
Notably, Apple’s Deep Fusion technology isn’t a feature that can be toggled on and off. Instead, the iPhone actively detects the lighting conditions in a given environment and puts Deep Fusion to work automatically when conditions call for it.
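For illustration only, that behind-the-scenes decision amounts to something like the following Swift sketch. Apple hasn't published how the camera picks a mode, so the mode names and lux thresholds here are entirely invented:

```swift
// Hypothetical sketch only: Apple hasn't documented how the camera decides,
// and Deep Fusion isn't exposed in any public API. The thresholds are invented.
enum CaptureMode {
    case smartHDR    // bright scenes
    case deepFusion  // medium light
    case nightMode   // dark scenes
}

func captureMode(forSceneLux lux: Double) -> CaptureMode {
    switch lux {
    case ..<10:  return .nightMode   // invented cutoff
    case ..<600: return .deepFusion  // invented cutoff
    default:     return .smartHDR
    }
}
```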
Deep Fusion, Apple notes, “is a new image processing system enabled by the Neural Engine of A13 Bionic. Deep Fusion uses advanced machine learning to do pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo.”
As to how it all works, The Verge explains (a rough code sketch follows the list):
- By the time you press the shutter button, the camera has already grabbed three frames at a fast shutter speed to freeze motion in the shot. When you press the shutter, it takes three additional shots and then one longer-exposure shot to capture detail.
- Those three regular shots and long-exposure shot are merged into what Apple calls a “synthetic long.” This is a major difference from Smart HDR.
- Deep Fusion picks the short-exposure image with the most detail and merges it with the synthetic long exposure. Unlike Smart HDR, Deep Fusion only merges these two frames, not more. These two images are also processed for noise differently than Smart HDR, in a way that’s better for Deep Fusion.
- The images are run through four detail processing steps, pixel by pixel, each tailored to increasing amounts of detail — the sky and walls are in the lowest band, while skin, hair, fabrics, and so on are the highest level. This generates a series of weightings for how to blend the two images — taking detail from one and tone, color, and luminance from the other.
- The final image is generated.
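To make those steps more concrete, here's a rough Swift sketch of the pipeline as The Verge describes it. This is a toy approximation under broad assumptions: the `Frame` type, the averaging used for the "synthetic long," the contrast-based sharpness score, and the weight array are all invented stand-ins, since Apple's actual per-pixel machine learning model runs on the Neural Engine and isn't public.

```swift
// Illustrative sketch of the pipeline above. Every type and helper here is
// a hypothetical stand-in for Apple's real (undocumented) implementation.

struct Frame {
    var pixels: [Float]   // simplified single-channel image
    let exposure: Double  // exposure time in seconds
}

// Step 2: merge the three regular shots and the long exposure into a
// "synthetic long" (modeled here as a simple per-pixel average).
func syntheticLong(regular: [Frame], longExposure: Frame) -> Frame {
    let all = regular + [longExposure]
    var fused = longExposure
    for i in fused.pixels.indices {
        fused.pixels[i] = all.map { $0.pixels[i] }.reduce(0, +) / Float(all.count)
    }
    return fused
}

// Step 3: pick the short-exposure frame with the most detail, using summed
// local contrast as a crude stand-in for a real sharpness measure.
func sharpest(of shorts: [Frame]) -> Frame {
    func score(_ f: Frame) -> Float {
        zip(f.pixels, f.pixels.dropFirst()).map { (a, b) in abs(a - b) }.reduce(0, +)
    }
    return shorts.max { score($0) < score($1) }!  // assumes a non-empty burst
}

// Step 4: blend the two frames pixel by pixel. In the real system the
// weights come from per-pixel detail classification (sky and walls low;
// skin, hair, and fabric high); here they're just an input array.
func blend(detail: Frame, tone: Frame, weights: [Float]) -> Frame {
    var out = detail
    for i in out.pixels.indices {
        // Take detail from the sharp short frame, and tone/color/luminance
        // from the synthetic long, in proportion to the weight.
        out.pixels[i] = weights[i] * detail.pixels[i] + (1 - weights[i]) * tone.pixels[i]
    }
    return out
}

// Steps 1 and 5, demonstrated on tiny 4-pixel frames with arbitrary values.
let shortShots   = (0..<3).map { _ in Frame(pixels: [0.2, 0.8, 0.5, 0.9], exposure: 1 / 240) }
let regularShots = (0..<3).map { _ in Frame(pixels: [0.3, 0.7, 0.6, 0.8], exposure: 1 / 60) }
let longShot     = Frame(pixels: [0.4, 0.6, 0.7, 0.7], exposure: 1 / 4)

let output = blend(detail: sharpest(of: shortShots),
                   tone: syntheticLong(regular: regularShots, longExposure: longShot),
                   weights: [0.9, 0.9, 0.2, 0.2])  // high weight = high-detail band
print(output.pixels)
```

Note how only two frames ever enter the final blend, which mirrors The Verge's point about Deep Fusion diverging from Smart HDR's multi-frame merge.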
To download the latest beta, head over to Settings > General > Software Update on your iOS device (assuming you've already enrolled in Apple's beta program and installed the beta profile) and tap "Download and Install" toward the bottom of the screen. As usual, you'll want to make sure your device is fully backed up before installing a new iOS update of any kind.