IMX378 Overview

We reached out to Sony to try to learn a bit more about the IMX378 sensor used by the upcoming Google Pixel and Pixel XL phones, as well as by the Xiaomi Mi 5S. Unfortunately, Sony was not able to distribute the datasheet for the Exmor RS IMX378 just yet, but they were extremely helpful and provided us with some previously unreleased information about the sensor.

First up, even the reported name was wrong. Despite rumors stating that it would be part of the Exmor R line of Backside Illuminated (BSI) CMOS sensors, like the IMX377 before it that was used in the Nexus 5X and Nexus 6P, our contact at Sony informed us that the IMX378 is instead considered part of Sony's Exmor RS line of Stacked BSI CMOS sensors.

While many things have remained the same from the IMX377 to the IMX378, including the pixel size (1.55 μm) and sensor size (7.81 mm diagonal), a couple of key features have been added. Namely, it is now a stacked BSI CMOS design, it has PDAF, it adds Sony's SME-HDR technology, and it has better support for high frame rate (slow motion) video.

Stacked BSI CMOS

Backside illumination by itself is an extremely useful feature that has become almost standard in flagship smartphones over the last few years, starting with the HTC Evo 4G in 2010. It allows the camera to capture substantially more light (at the cost of more noise) by moving some of the structure that traditionally sat in front of the photodiode on front-illuminated sensors behind it instead.

Backside Illumination CMOS Sensor Design

Surprisingly, unlike most camera technology, backside illumination originally started appearing in phones before DSLRs, thanks in large part to the difficulties of creating larger BSI sensors. The first BSI APS-C sensor was the Samsung S5KVB2, found in their NX1 camera from 2014, and the first full-frame sensor was the Sony Exmor R IMX251, found in the Sony α7R II from last year.

Stacked BSI CMOS technology takes this one step further by moving more of the circuitry from the front layer onto the supporting substrate behind the photodiodes. This not only allows Sony to substantially shrink the image sensor's footprint (or fit a larger sensor in the same footprint), but also allows Sony to print the pixels and circuits separately (even on different manufacturing processes), reducing the risk of defects, improving yields, and allowing for more specialization between the photodiodes and the supporting circuitry.

Sony Exmor R vs Exmor RS BSI vs Stacked BSI CMOS image sensor

PDAF

Phase Detection Autofocus PDAF Example by cmglee

The IMX378 adds Phase Detection Autofocus, which last year's Nexus phones and the IMX377 did not support. It allows the camera to use the differences in the light arriving at different points on the sensor to identify whether the object that the camera is trying to focus on is in front of or behind the focus point, and to adjust the lens accordingly. This is a huge improvement, both in speed and in accuracy, over the traditional contrast-based autofocus that we've seen on many cameras in the past. As a result, we've seen an absolute explosion of phones using PDAF, and it has become a huge marketing buzzword held up as a centerpiece of camera marketing across the industry.
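
To make the idea concrete, here is a minimal sketch (in Python with NumPy, entirely our own illustration rather than anything Sony shared): the signals from two groups of phase-detection pixels, which see the scene through opposite sides of the lens, are correlated against each other, and the offset that best lines them up tells the camera which direction to move the lens and roughly how far.

```python
import numpy as np

def estimate_defocus(left_signal, right_signal, max_shift=32):
    """Estimate the phase offset between the 'left' and 'right' PDAF pixel
    signals. The sign says whether focus is in front of or behind the
    subject; the magnitude says roughly how far to move the lens."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Correlate the left signal against a shifted copy of the right one.
        score = np.dot(left_signal, np.roll(right_signal, shift))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Toy example: the same edge profile seen by the two pixel groups,
# displaced by 5 pixels because the scene is out of focus.
x = np.linspace(0, 1, 256)
edge = 1 / (1 + np.exp(-40 * (x - 0.5)))  # a smooth step edge
print(estimate_defocus(edge, np.roll(edge, 5)))  # -> -5
```

Contrast-based autofocus, by comparison, has to hunt back and forth until sharpness peaks; phase detection gets both the direction and an estimate of the distance from a single measurement, which is where the speed advantage comes from.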

While not quite as quick to focus as the Dual Photodiode PDAF of the Samsung Galaxy S7 (also known as "Dual Pixel PDAF" and "Duo Pixel Autofocus"), which puts two photodiodes in every pixel so that every single pixel can be used for phase detection, the pairing of PDAF and laser autofocus should still be potent.

High Frame Rate

There’s been a lot of talk lately about high frame rate cameras (both for consumer applications and for professional filmmaking). Being able to shoot at higher frame rates can be used both to create incredibly smooth videos at regular speed (which can be fantastic for sports and other high-speed scenarios) and to create some really interesting videos when you slow everything down.

Slow Motion Pineapple Falling Into Water

Unfortunately, it is extremely difficult to shoot video at higher frame rates, and even when the camera sensor can, it can be difficult for the phone's image signal processor to keep up. That is why, while the IMX377 used in the Nexus 5X and 6P could shoot 720p video at 300 Hz and 1080p video at 120 Hz, we only saw 120 Hz 720p from the Nexus 5X and 240 Hz 720p from the 6P. The IMX377 was also capable of 60 Hz 4k video, despite the Nexus devices being limited to 30 Hz.
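
Some back-of-the-envelope arithmetic shows why the ISP, rather than the sensor, tends to be the bottleneck. The sketch below is our own illustration; the 10-bit raw depth is an assumption, and a real pipeline involves far more than raw bandwidth:

```python
# Rough raw data rates for the video modes mentioned above, assuming
# 10 bits per pixel of raw Bayer data straight off the sensor.
modes = {
    "720p @ 300 Hz":  (1280, 720, 300),
    "1080p @ 240 Hz": (1920, 1080, 240),
    "4k @ 60 Hz":     (3840, 2160, 60),
}
BITS_PER_PIXEL = 10

for name, (width, height, fps) in modes.items():
    gbps = width * height * fps * BITS_PER_PIXEL / 1e9
    print(f"{name}: {gbps:.2f} Gbit/s of raw data")
```

That works out to nearly 5 Gbit/s of raw data for the fastest modes, all of which has to be ingested, processed, and encoded in real time before any HDR or stabilization work even begins.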

The Pixel phones bring this up to 120 Hz 1080p video and 240 Hz 720p video, thanks in part to the IMX378, which itself can now capture up to 240 Hz at 1080p.

The sensor is also able to shoot full resolution burst shots faster, stepping up to 60 Hz at 10-bit output and 40 Hz at 12-bit output (up from 40 Hz and 35 Hz, respectively), which should help reduce the amount of motion blur and camera shake when using HDR+.

SME-HDR

Traditionally, HDR for video has been a trade-off: you either had to cut the frame rate in half, or you had to cut the resolution in half. As a result, many OEMs haven't even bothered with it, with Samsung and Sony being among the few that do implement it. Even the Samsung Galaxy Note 7 is limited to 1080p 30 Hz recording when shooting HDR video, due in part to its heavy computational cost.

RED HDRx Demonstration

The first of the two main traditional methods for HDR video, which Red Digital Cinema Camera Company calls HDRx and which Sony calls Digital Overlap HDR (DOL-HDR), works by taking two consecutive images, one exposed darker and one exposed lighter, and merging them together to create a single video frame. While this allows you to keep the full resolution of the camera (and set different shutter speeds for the two separate frames), it can often result in issues due to the time gap between the two frames (especially with fast-moving objects). Additionally, it can be very difficult for the processor to keep up, since with DOL-HDR it is the phone's ISP that handles merging the separate frames together.
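
As an illustration of the merging step (our own naive sketch, not Sony's or RED's actual algorithm), the core of a two-exposure merge can be as simple as trusting the brighter frame everywhere except where it has clipped:

```python
import numpy as np

def merge_two_exposures(short_exp, long_exp, exposure_ratio=4.0):
    """Naive merge of a short (dark) and long (bright) exposure into one
    HDR frame. Both inputs are linear-light images scaled to [0, 1], and
    exposure_ratio is how much longer the long exposure was."""
    # Bring the short exposure onto the long exposure's brightness scale.
    short_scaled = short_exp * exposure_ratio
    # Trust the long exposure except where it has clipped toward white;
    # blend smoothly near the clipping point to avoid hard seams.
    weight = np.clip((long_exp - 0.8) / 0.2, 0.0, 1.0)
    return (1.0 - weight) * long_exp + weight * short_scaled
```

The time gap problem is visible right in this structure: short_exp and long_exp are captured at different moments, so a fast-moving subject sits at two different positions in the two inputs, and the blend produces ghosting.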

The other traditional method, which Sony calls Binning Multiplexed Exposure HDR (BME-HDR), sets a different exposure for each alternating pair of pixel lines in the sensor, creating two half resolution images at the same time, which are then merged together into one HDR frame for the video. While this method avoids HDRx's frame rate penalty, it has its own issues, specifically the reduction in resolution and the limits on how far apart the exposures of the two sets of lines can be set.

BME HDR RGBG Bayer Filter Example Image
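
The resolution cost of BME-HDR falls straight out of that layout. Here is a rough sketch of the split, assuming row pairs that alternate between a long and a short exposure (pairs, so that each exposure keeps complete 2x2 Bayer blocks; the exact arrangement here is our assumption):

```python
import numpy as np

def split_bme_exposures(raw):
    """Split a raw Bayer frame whose rows alternate exposure in pairs
    (long, long, short, short, long, long, ...). Pairs are used so each
    exposure keeps complete 2x2 Bayer blocks. Returns two half-height
    images that a BME-HDR pipeline would then merge into one HDR frame."""
    height, width = raw.shape
    height -= height % 4  # keep a whole number of 4-row groups
    groups = raw[:height].reshape(-1, 4, width)       # (n_groups, 4 rows, width)
    long_img = groups[:, 0:2, :].reshape(-1, width)   # rows 0-1 of each group
    short_img = groups[:, 2:4, :].reshape(-1, width)  # rows 2-3 of each group
    return long_img, short_img

# A hypothetical 12 MP 10-bit raw frame: each output is half the height
# of the input, which is exactly the resolution cost of BME-HDR.
frame = np.random.randint(0, 1024, size=(3040, 4056))
long_img, short_img = split_bme_exposures(frame)
print(long_img.shape, short_img.shape)  # (1520, 4056) (1520, 4056)
```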

Spatially Multiplexed Exposure (SME-HDR) is a new method that Sony is using to shoot HDR at the full resolution and the full frame rate that the sensor is capable of. It is a variant of Spatially Varying Exposure that uses proprietary algorithms to capture the information from the dark and light pixels, which are arranged in a checkerboard-style pattern, and to infer full resolution images for both the dark and the light exposures.

Unfortunately, Sony was not able to give us a more detailed explanation of the exact pattern, and they may never be able to disclose it. Companies tend to play their cards very close to their chest when it comes to cutting-edge technology like this, with even Google keeping its own proprietary algorithm for HDR photos, known as HDR+. There is still some publicly available information that we can use to piece together how it may be accomplished, though. Shree K. Nayar of Columbia University has published a couple of papers (one in collaboration with Tomoo Mitsunaga of Sony) describing different ways to use Spatially Varying Exposure and different layouts that can achieve it. Below is an example of a layout with four levels of exposure on an RGBG image sensor. The paper claims this layout can achieve single-capture full resolution HDR images with only around a 20% loss in spatial resolution, depending on the scenario (the same accomplishment that Sony claims for SME-HDR).

Spatially Varying Exposure SME HDR RGBG Example
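
Based on that publicly described Spatially Varying Exposure idea, here is a toy reconstruction for a simple two-level checkerboard. This is our own sketch: Sony's actual SME-HDR algorithm is proprietary and certainly more sophisticated, and we are ignoring the Bayer color pattern entirely for simplicity.

```python
import numpy as np

def reconstruct_sve(raw, exposure_ratio=4.0):
    """Toy Spatially Varying Exposure reconstruction. 'raw' interleaves
    long- and short-exposure pixels in a checkerboard; we rebuild a
    full-resolution image for each exposure by averaging the 4 measured
    neighbors at each missing site."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    long_mask = (yy + xx) % 2 == 0  # checkerboard: True = long-exposure pixel

    def fill(mask):
        img = np.where(mask, raw, 0.0)
        count = mask.astype(float)
        # Sum the 4-neighborhood, with zero padding at the borders.
        p_img = np.pad(img, 1)
        p_cnt = np.pad(count, 1)
        nb_sum = (p_img[:-2, 1:-1] + p_img[2:, 1:-1] +
                  p_img[1:-1, :-2] + p_img[1:-1, 2:])
        nb_cnt = (p_cnt[:-2, 1:-1] + p_cnt[2:, 1:-1] +
                  p_cnt[1:-1, :-2] + p_cnt[1:-1, 2:])
        est = nb_sum / np.maximum(nb_cnt, 1)  # neighbor average
        return np.where(mask, raw, est)  # keep measured pixels, fill the rest

    long_img = fill(long_mask)
    short_img = fill(~long_mask) * exposure_ratio  # rescale to common units
    return long_img, short_img
```

Every missing pixel is interpolated from its measured neighbors, which is exactly the kind of step that costs some effective spatial resolution; the cleverness of the real algorithms lies in keeping that loss as small as the papers claim.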

Sony has already used SME-HDR in a couple of image sensors, including the IMX214 that has seen a lot of popularity lately (being used in the Asus Zenfone 3 Laser, the Moto Z, and the Xperia X Performance), but it is a new addition to the IMX378 compared to last year's IMX377. It allows the camera sensor to output both full resolution 10-bit images and 4k video at 60 Hz with SME-HDR active. While a bottleneck elsewhere in the pipeline will likely result in a lower limit in practice, this is a fantastic improvement over what the IMX377 was capable of, and a sign of good things to come.

One of the big improvements of the IMX378 over the IMX377 is that it can handle more of the image processing on-chip, reducing the workload of the ISP (although the ISP can still request the RAW image data, depending on how the OEM decides to use the sensor). It can handle many small things locally, like defect correction and mirroring, but more importantly, it can also handle BME-HDR or SME-HDR without having to involve the ISP at all. That could be a major difference going forward, freeing up some overhead for the ISP on future phones.
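
For a sense of what "defect correction" means in practice, here is a minimal software analogue (our own sketch; the on-chip version presumably works in hardware from a factory defect map): each known-bad pixel is replaced by the median of its nearest same-color Bayer neighbors, which sit two pixels away in each direction.

```python
import numpy as np

def correct_defects(raw, defect_coords):
    """Replace each known-bad pixel (from a defect map) with the median
    of its same-color Bayer neighbors, which are 2 pixels away in each
    direction because the Bayer pattern repeats every 2 pixels."""
    out = raw.astype(float).copy()
    h, w = raw.shape
    for y, x in defect_coords:
        neighbors = [raw[ny, nx]
                     for ny, nx in ((y - 2, x), (y + 2, x), (y, x - 2), (y, x + 2))
                     if 0 <= ny < h and 0 <= nx < w]
        out[y, x] = np.median(neighbors)
    return out
```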

Sony Full Frame APS-C and 1/2.3" Exmor CMOS Sensors

We would like to thank Sony once again for all the help with creating this article. We really appreciate the lengths that Sony went to in helping ensure the accuracy and depth of this feature, especially in allowing us to uncover some previously-unreleased information about the IMX378.

That being said, it really is a shame that some of this information, even basic product information, is so hard to access. When companies do put information on their websites, it is often rather inaccessible and incomplete, in large part because it is treated as a secondary concern by employees who are focused on their main work. One dedicated person handling public relations can make a huge difference in making this type of information available and accessible to the general public, and we're seeing some people try to do just that in their free time.

Take the Sony Exmor Wikipedia article itself: over the course of a couple of months, a single person working in their spare time laid most of the foundation to take it from a nearly useless 1,715 byte article that had sat mostly unchanged for years to the roughly 50,000 byte article, shaped by 185 distinct editors, that we see there today. It is arguably the best repository of information about the Sony Exmor sensor line available online, and we can see a very similar pattern on other articles. A single dedicated writer can make a substantial difference in how easily customers can compare different products and in how educated interested consumers are about the subject, which can have far-reaching effects. But that's a topic for another time.

As always, we're left wondering how these hardware changes will affect the devices themselves. We quite clearly will not be getting 4k 60 Hz HDR video (and may not be getting HDR video at all, as Google has not mentioned it yet), but the faster full resolution shooting will likely help substantially with HDR+, and we should see the newer sensor's improvements trickle into the phones in other small but meaningful ways as well.

Google Pixel Phones EIS On and Off Comparison

While DXOMark lists the Pixel phones as performing slightly better than the Samsung Galaxy S7 and HTC 10, many of the things that gave the Pixel phones that small lead were major software improvements like HDR+ (which produces absolutely fantastic results, and which DXOMark dedicated an entire section of their review to) and Google’s special EIS system (which can work in tandem with OIS) that samples the gyroscope 200 times a second to provide some of the best Electronic Image Stabilization we have ever seen. Yes, the Pixel phones have a great camera, but could they have been even better with OIS and Dual Pixel PDAF added in? Absolutely.
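
To give a flavor of gyroscope-based EIS (a toy sketch of our own; Google's actual system is proprietary and also handles things like rolling-shutter correction and per-scanline warps), the essential loop is: integrate the 200 Hz angular velocity samples into a camera angle, separate intentional panning from shake by smoothing, and convert the residual shake into a per-frame pixel shift to cancel out.

```python
import numpy as np

GYRO_RATE_HZ = 200   # gyroscope sampling rate cited above
FRAME_RATE_HZ = 30
SAMPLES_PER_FRAME = GYRO_RATE_HZ // FRAME_RATE_HZ  # ~6 gyro samples per frame

def stabilization_offsets(gyro_yaw_rate, focal_length_px):
    """Toy EIS: gyro_yaw_rate is yaw angular velocity (rad/s) sampled at
    200 Hz. Returns a horizontal pixel shift per frame that cancels the
    high-frequency shake while leaving intentional panning alone."""
    angles = np.cumsum(gyro_yaw_rate) / GYRO_RATE_HZ   # camera angle per sample
    per_frame = angles[::SAMPLES_PER_FRAME]            # angle at each frame
    # Smooth over a few frames to estimate the intended camera path.
    smoothed = np.convolve(per_frame, np.ones(5) / 5, mode="same")
    shake = per_frame - smoothed                       # high-frequency residual
    # Small-angle approximation: pixel shift = angle * focal length in pixels.
    return -shake * focal_length_px
```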

Don’t get me wrong, as I said, the Pixel phones have an absolutely stunning camera, but you can’t really blame me for wanting more, especially when the path to those improvements is so clear (and when the phones are priced at full flagship pricing, where you expect the best of the best). There’s always going to be a part of me that wants more, that wants better battery life, faster processors, better battery life, brighter and more vivid screens, louder speakers, better cameras, more storage, better battery life, and most importantly, better battery life (again). That being said, the Pixel phones have many small fantastic features that could come together to create a truly promising device, which I am excited to see.