Google Explains the Fused Video Stabilization Technique Used in the Pixel 2

Google may be having some display quality issues with the Pixel 2 XL, but the cameras on its two new flagship smartphones are top-notch. The company opted for EIS alone last year with the original Pixel and Pixel XL, while this year it is using a combination of EIS and OIS hardware. To leverage both technologies, Google uses what it calls Fused Video Stabilization, and today the company explained what it is and how it works.

When Google was researching what to improve over the original Pixel phones, it looked at the main camera issues that many smartphone owners encounter. These include camera shake, since most of us hold the phone in our hand(s) and it is difficult to keep it completely still. Motion blur can also be an issue if the camera or the subject moves during exposure. Another common problem is rolling shutter distortion, which happens because of the way CMOS image sensors read out each frame. The last one Google mentions is known as focus breathing, where the angle of view changes significantly as objects "jump" in and out of the foreground.

OIS is a popular solution because it eliminates some of these issues, but it is limited in what it can correct. The same can be said of EIS: its effectiveness depends on the algorithm used, and it typically comes with trade-offs such as a reduced field of view or resolution.

So Google combined both technologies, enabling them together during video recording, which lets it address all of the issues mentioned above. The solution involves three processing stages. First, motion analysis uses the phone's high-speed gyroscope to estimate the rotational component of the hand motion (roll, pitch, and yaw). Next comes the motion filtering stage, where machine learning and signal processing are combined to predict the photographer's intended camera motion. Lastly, the frame synthesis stage models and removes the rolling shutter and focus breathing distortion.
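To make the first two stages more concrete, here is a minimal, illustrative sketch in Python. It is not Google's actual pipeline: the exponential moving average stands in for the (far more sophisticated) machine-learning-based motion filter, and all function names and parameters are hypothetical. It shows the general idea of integrating gyroscope samples into an orientation path, smoothing that path to approximate the intended motion, and deriving a per-frame correction.

```python
def integrate_gyro(samples, dt):
    """Integrate angular-velocity samples (roll, pitch, yaw in rad/s)
    into a cumulative orientation path, one entry per sample."""
    path = []
    angle = [0.0, 0.0, 0.0]
    for s in samples:
        angle = [a + w * dt for a, w in zip(angle, s)]
        path.append(tuple(angle))
    return path

def smooth_path(path, alpha=0.1):
    """Exponential moving average as a stand-in for the motion filtering
    stage: the smoothed path approximates the intended camera motion,
    with hand shake attenuated."""
    smoothed = []
    state = list(path[0])
    for p in path:
        state = [alpha * x + (1 - alpha) * s for x, s in zip(p, state)]
        smoothed.append(tuple(state))
    return smoothed

def corrections(path, smoothed):
    """Per-frame rotation needed to warp each frame from its actual
    orientation onto the smoothed (intended) path."""
    return [tuple(s - a for a, s in zip(act, sm))
            for act, sm in zip(path, smoothed)]
```

In a real stabilizer, each correction would be turned into a homography (or a per-scanline warp, to also undo rolling shutter) and applied during frame synthesis.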

Thanks to these techniques, the videos produced on the Pixel 2 and Pixel 2 XL have less motion blur and look more natural.

Head on over to Google’s Research Blog post for more details!

Source: Google Research Blog