On October 4th Google announced the latest iteration of its flagship smartphones, the Pixel 2 and the Pixel 2 XL. As the name suggests, the XL model is a larger form factor with a bigger battery and a larger, higher-resolution screen. Beyond display, dimensions, and battery specifications, however, Google has said the user experience is essentially identical: the two models share the same software and hardware features. The core semiconductor content within the phones is built around the Qualcomm Snapdragon 835 platform, like many other Android flagships released in 2017. But Google has at least one hardware trick up its sleeve to help provide an enhanced smartphone photography experience: its Image Processing Unit (IPU), the Pixel Visual Core.
At first glance, Google’s camera may appear uninspiring: a single 12.2-megapixel rear-facing camera whose characteristics are comparable to flagship phones released earlier this year by OEMs such as LG and Samsung. Yet by some accounts the Pixel 2 is already rated the best mobile photography experience available, and it could separate itself further from the competition once the Visual Core, currently sitting dormant in the device, is activated by a future software update.
Novel Portrait Mode Despite Only One Camera
The Pixel 2 and 2 XL offer a portrait mode that enhances a photo of a subject by sharpening focus on the subject while blurring the background, and potentially the foreground as well. The feature is all the more impressive because most devices that provide it use two rear-facing cameras, while Google’s smartphone uses only one. Dual-camera designs create portrait effects by matching patches of an image taken by one camera to the corresponding patches taken by the other, using the offset between them to determine the relative depth of the background and the subject. The Pixel 2 and 2 XL instead combine information from the dual pixels within the single camera module with on-device computational routines in the IPU to accomplish the same effect with one set of optics.
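The patch-matching idea behind stereo depth can be sketched in a few lines. This is a hypothetical toy illustration, not Google's implementation: a patch from the "left" view is slid across the "right" view, and the shift with the lowest sum-of-squared-differences is the disparity, which is larger for nearer objects.

```python
import numpy as np

def patch_disparity(left, right, center, half=2, max_shift=5):
    """Best horizontal shift of a patch around `center` (1-D toy case).

    Slides a (2*half+1)-sample patch from `left` across `right` and
    returns the shift with the lowest sum of squared differences.
    """
    patch = left[center - half:center + half + 1]
    best_shift, best_cost = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        lo, hi = center - half + shift, center + half + 1 + shift
        if lo < 0 or hi > len(right):
            continue  # candidate window falls outside the image
        cost = np.sum((patch - right[lo:hi]) ** 2)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

# Toy scene: a bright "subject" around index 20 in the left view
# appears shifted by 3 samples in the right view.
left = np.zeros(40)
left[18:23] = 1.0
right = np.roll(left, 3)

print(patch_disparity(left, right, 20))  # → 3
```

A real depth map repeats this search for every pixel in two dimensions and regularizes the result, but the cost-minimizing shift is the core of the technique.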
Prior to the Pixel 2, most phones that offered a depth effect without a second rear-facing camera separated the image into two layers and blurred the background layer. This method has an inherent limitation: pixels behind the subject can be blurred, but objects in front of it cannot. The front-facing camera on the Pixel 2 uses this layer-based method to create its synthetic depth-of-field effect, while the rear camera uses both of the methods described above.
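The limitation of the layer-based method is easy to see in a sketch. This is a toy illustration with a hand-made mask (the real camera derives its mask with machine learning): every pixel outside the subject mask receives the same uniform treatment, here crudely approximated by replacing it with the background mean, so foreground objects cannot be treated any differently from the background.

```python
import numpy as np

# Toy 6x6 "image" and a hand-made subject mask.
image = np.arange(36, dtype=float).reshape(6, 6)
subject = np.zeros((6, 6), dtype=bool)
subject[2:4, 2:4] = True            # the "subject" region

# Uniform "blur": every non-subject pixel is flattened toward the
# background mean, regardless of whether it sits in front of or
# behind the subject -- the method has no notion of depth.
background_mean = image[~subject].mean()
portrait = np.where(subject, image, background_mean)
```

Because the only information used is the binary mask, depth-dependent blur (stronger for farther pixels, none for a foreground object) is impossible with this method alone.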
To create this effect, the Pixel 2 relies on a series of steps. The phone first creates an HDR+ image by capturing a burst of shots and averaging them to produce the crispest image possible. Next, a complex process of filtering and re-filtering with machine learning creates a mask of the subject or person in the photograph so the background can be blurred. The phone then uses its dual-pixel autofocus, which provides slightly offset left and right views of the same shot, to build a stereo depth map and further refine the blurring effect. The entire process is done in post (after the user has pressed the camera shutter) and takes about four seconds to render, so users get an experience similar to other smartphones with portrait mode minus the instant preview of the background blur.
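The steps above can be sketched end to end. This is a toy illustration under assumed data; Google's actual HDR+ merge, learned segmentation, and stereo solver are far more sophisticated, and the "blur" here is a crude blend toward the image mean rather than a real spatially varying kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 -- "HDR+"-style burst averaging: several noisy captures of
# the same scene are averaged, cutting noise roughly by sqrt(N).
scene = np.zeros((8, 8))
scene[3:5, 3:5] = 1.0                               # toy scene
burst = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(8)]
merged = np.mean(burst, axis=0)

# Step 2 -- a subject mask (stand-in for the learned segmentation)
# and a depth map (stand-in for the dual-pixel stereo estimate).
mask = scene > 0.5                   # True on the subject
depth = np.where(mask, 1.0, 5.0)     # subject near, background far

# Step 3 -- blur strength scales with each pixel's depth distance
# from the subject; subject pixels are left untouched.
weight = np.clip(np.abs(depth - 1.0) / 4.0, 0.0, 1.0)
blurred = (1 - weight) * merged + weight * merged.mean()
portrait = np.where(mask, merged, blurred)
```

The depth-weighted step is what the dual-pixel stereo map enables: unlike the mask-only method, pixels can be blurred more or less depending on how far they sit from the subject, in front of it as well as behind.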
Is This the Beginning of a New Wave of Smartphone ASICs?
Though the underlying computational photography is impressive on its own, Google’s use of a custom SoC to process images will provide further benefits once it is activated through a future software update. Google’s development of the Visual Core could have wide-ranging implications for the future of smartphone photography and core semiconductor content. The dedicated eight-core image processing unit, physically located close to the rear camera it supports, can process image-related tasks five times faster while consuming one-tenth the power compared to using other components for image processing. The custom IPU is a relatively large piece of silicon, though not as large as the nearby Snapdragon 835.
Google’s custom ASIC strategy with the Pixel 2 will likely extend into other new product categories, such as the recently announced Google Clips, a new type of always-on consumer camera meant to be intelligent enough to capture the moments that matter rather than simply record and store. The Pixel Visual Core was developed in collaboration with Movidius, an Intel company that specializes in machine vision.
Though custom SoCs are utilized by many OEMs in the smartphone market today, the inclusion of a dedicated image processing SoC is unique. Most OEMs lean on peripheral processors such as Graphics Processing Units (GPUs) and Digital Signal Processors (DSPs) included within a device’s main SoC. However, as machine learning and artificial intelligence become increasingly common, devices will likely need dedicated hardware to implement these features efficiently. Once enabled, the Visual Core should provide faster, more efficient HDR+ image processing as well as image-related machine-learning workloads. With smartphone market leaders such as Apple and Huawei developing neural processing features as part of their platforms, and others such as Xiaomi developing SoCs in-house, this could be the beginning of the next wave of product differentiation in the smartphone market, driven by custom IC development.