Apple is also showing that computational photography rests on some truly hefty code.
Apple has just added Deep Fusion, the most anticipated photography feature on the iPhone 11 and iPhone 11 Pro, to the latest iOS 13 developer beta. Apple's Phil Schiller once stood on stage and called it "computational photography mad science."
Deep Fusion did not ship with the iPhone 11 and iPhone 11 Pro at launch; Apple promised to deliver it in a software update as soon as possible. Now, developers can finally try this impressive photography feature.
A photo taken with Deep Fusion on iPhone 11 Pro.
With Deep Fusion on board, the camera system on the iPhone 11 and iPhone 11 Pro automatically chooses among three modes, based on light level and which camera is in use:
– The main (wide) camera uses Apple's improved Smart HDR for medium light and above, Deep Fusion for medium to low light, and Night mode for low light.
– The telephoto camera uses Deep Fusion most of the time; Smart HDR and Night mode take over only for very bright or very dark scenes.
– The ultra-wide camera always uses Smart HDR, since it supports neither Deep Fusion nor Night mode.
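The selection rules above can be sketched as a simple decision function. This is purely illustrative: the camera and light-level names are invented here, and Apple's real thresholds and heuristics are not public.

```python
def choose_mode(camera: str, light: str) -> str:
    """Pick a capture pipeline from camera type and light level.

    Hypothetical sketch of the rules described above; Apple's
    actual decision logic is not public.
    """
    if camera == "main":
        # Main (wide) camera: Smart HDR in good light, Deep Fusion
        # in medium-to-low light, Night mode in the dark.
        return {"bright": "Smart HDR",
                "medium": "Deep Fusion",
                "low": "Night mode"}[light]
    if camera == "telephoto":
        # Telephoto: Deep Fusion by default; the other two modes
        # only at the bright and dark extremes.
        if light == "bright":
            return "Smart HDR"
        if light == "low":
            return "Night mode"
        return "Deep Fusion"
    # Ultra-wide: no Deep Fusion or Night mode support.
    return "Smart HDR"
```

For example, `choose_mode("telephoto", "medium")` returns `"Deep Fusion"`, while the ultra-wide camera returns `"Smart HDR"` regardless of light.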
Unlike Night mode, which has an on-screen indicator and can be turned off if desired, Deep Fusion activates without any notification, so you won't know whether it is working. Apple says the goal is to keep users from worrying about it: just press the shutter and the iPhone handles the rest automatically.
A Deep Fusion image combines sharp detail with impressive color and contrast.
Under the hood, Deep Fusion does far more work than Smart HDR. Here are its basic steps:
– Before you even press the shutter, the camera has already buffered four short-exposure frames (to freeze motion) and four standard frames. When you press the shutter, it captures one frame with a longer exposure.
– Then the four standard frames and the long-exposure frame are merged into a single image, which Apple calls a "synthetic long." This is a major difference from Smart HDR.
– Deep Fusion then picks the most detailed of the four short-exposure frames and combines it with the synthetic long.
– The system processes the merged image pixel by pixel through four passes, each pass adding finer detail. The final image takes its sharp detail from the short-exposure frame and its color and contrast from the synthetic long.
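The steps above can be sketched as a toy pipeline. Everything here is illustrative: frames are just flat lists of brightness values, and the merge rules and blend weights are invented for the sketch, not Apple's actual algorithm.

```python
# Toy sketch of the Deep Fusion pipeline described above.
# All weights and heuristics are invented for illustration;
# Apple's real implementation is not public.

def synthetic_long(standard_frames, long_exposure):
    """Merge four standard frames and one long exposure into one frame
    (here, a plain per-pixel average)."""
    frames = standard_frames + [long_exposure]
    return [sum(px) / len(frames) for px in zip(*frames)]

def sharpest(short_frames):
    """Pick the short-exposure frame with the most local contrast,
    a crude stand-in for 'most detailed'."""
    def detail(frame):
        return sum(abs(a - b) for a, b in zip(frame, frame[1:]))
    return max(short_frames, key=detail)

def fuse(detail_frame, tone_frame, passes=4):
    """Blend pixel by pixel over four passes, each pass pulling a bit
    more fine detail from the short-exposure frame while keeping the
    tone (color/contrast) of the synthetic long."""
    result = tone_frame[:]
    for step in range(passes):
        w = (step + 1) / (passes * 2)  # invented weight schedule
        result = [(1 - w) * t + w * d
                  for t, d in zip(result, detail_frame)]
    return result
```

A run over tiny 4-pixel "frames" shows the flow: `fuse(sharpest(shorts), synthetic_long(standards, long))` yields values between the tone frame and the detail frame, detail increasing with each pass.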
Shooting and processing with Deep Fusion takes roughly twice as long as with Smart HDR, so if you open a photo immediately after taking it, you may see it still being processed.
With Deep Fusion, Apple has officially entered the field of code-driven computational photography, an area where Google has excelled with its Pixel smartphones.