The code that Google used on its Pixel 2 smartphones to determine what to blur and what to focus on in Portrait Mode pictures is now free for developers to use in their own apps.

Google Research has detailed and open-sourced its machine-learning semantic image segmentation model, DeepLab-v3+. Built on Google's TensorFlow framework and benchmarked against datasets such as PASCAL VOC 2012 and Cityscapes, the model assigns a semantic label, such as person, dog, or sky, to every pixel in an image, outlining exactly where each object begins and ends. From there, the model can work out what counts as background and what the key subjects are.
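To give a sense of how the released model is used, here is a minimal sketch of running inference with a frozen DeepLab-v3+ graph in Python. The file name deeplabv3_pascal.pb is a placeholder for whichever pretrained export you download; the tensor names and the 513-pixel input size follow the demo notebook Google ships with the code, but verify them against your checkpoint.

```python
# A minimal sketch of running the open-sourced DeepLab-v3+ model
# (TensorFlow 1.x style, matching the era of the release). The file name
# "deeplabv3_pascal.pb" is a placeholder for a downloaded frozen graph;
# the tensor names and 513 px input size follow Google's demo notebook.
import numpy as np
import tensorflow as tf
from PIL import Image

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("deeplabv3_pascal.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Scale the photo so its longest side is 513 px, the demo's input size.
image = Image.open("portrait.jpg").convert("RGB")
scale = 513.0 / max(image.size)
resized = image.resize(
    (int(scale * image.size[0]), int(scale * image.size[1])))

with tf.Session(graph=graph) as sess:
    # The output is one integer class label per pixel; in the PASCAL VOC
    # label map, 0 is background and 15 is "person".
    seg_map = sess.run(
        "SemanticPredictions:0",
        feed_dict={"ImageTensor:0": [np.asarray(resized)]})[0]

person_mask = seg_map == 15  # True wherever a pixel was labeled "person"
```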

On the Pixel 2 and Pixel 2 XL, this data feeds Portrait Mode, where the background is blurred and the subjects are kept in focus. Google hopes that "other groups in academia and industry" will reproduce its results and build on the DeepLab model for their own purposes.
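Continuing the sketch above, a few more lines approximate that Portrait Mode effect: blur a copy of the photo, then composite the sharp original back in wherever the model labeled a pixel "person". This is only an illustration; the Pixel's actual pipeline also folds in depth information rather than relying on the segmentation mask alone.

```python
# Continues the previous sketch (reuses resized, person_mask, and np).
from PIL import Image, ImageFilter

# Blur everything, then restore the sharp original where the mask
# says "person" (255 in the mask selects the first image).
blurred = resized.filter(ImageFilter.GaussianBlur(radius=8))
mask = Image.fromarray((person_mask * 255).astype(np.uint8))
portrait = Image.composite(resized, blurred, mask)
portrait.save("portrait_mode.jpg")
```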

Jules Wang is News Editor for Pocketnow and one of the hosts of the Pocketnow Weekly Podcast. He came onto the team in 2014 as an intern, editing and producing videos and the podcast while studying journalism at Emerson College. He graduated the following year and moved into his current full-time position at Pocketnow.
