The code that Google used on its Pixel 2 smartphones to determine what to blur and what to focus on in Portrait Mode pictures is now free for developers to use in their own apps.

Google Research has detailed and open-sourced what it calls its machine-learning semantic image segmentation model, DeepLab-v3+. The model, which is implemented in Google’s TensorFlow framework and benchmarked on datasets such as PASCAL VOC 2012 and Cityscapes, assigns each pixel in an image to a category such as human, animal or food, and defines the precise outlines of those items. From there, the model can determine what counts as background and what the key subjects are.
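The release lives in the tensorflow/models research repository and ships pretrained checkpoints exported as frozen TensorFlow graphs. As a minimal sketch of what inference looks like, assuming a downloaded PASCAL-trained frozen graph and the input/output tensor names used in the repo’s demo notebook (treat the path and tensor names as assumptions, not a documented API):

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API, matching the release
from PIL import Image

# Hypothetical local path to a pretrained frozen graph from the
# DeepLab release; the tensor names below follow the repo's demo
# notebook and are assumptions here.
GRAPH_PATH = "deeplabv3_pascal_trainval/frozen_inference_graph.pb"
INPUT_SIZE = 513  # the demo feeds images no larger than 513 px on a side

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

image = Image.open("photo.jpg").convert("RGB")
scale = INPUT_SIZE / max(image.size)
resized = image.resize(
    (int(scale * image.size[0]), int(scale * image.size[1])),
    Image.LANCZOS)
batch = np.expand_dims(np.asarray(resized), axis=0)  # uint8 RGB batch of 1

with tf.Session(graph=graph) as sess:
    # Each output pixel is a class index; in the PASCAL VOC 2012
    # label map, index 15 is "person".
    seg_map = sess.run("SemanticPredictions:0",
                       feed_dict={"ImageTensor:0": batch})[0]

person_mask = (seg_map == 15)  # True wherever a person was detected
```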

On the Pixel 2 and Pixel 2 XL, this data drives Portrait Mode, where the background is blurred and the subjects are kept in focus. Google hopes that “other groups in academia and industry” will reproduce its results for their own purposes and even build on DeepLab’s model to refine it further.
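As a rough illustration of that last step, the person mask from the sketch above can drive a simple background blur with Pillow. This is a toy approximation only, not the Pixel’s actual pipeline, which also draws on depth information from the camera’s dual-pixel sensor; the file names are placeholders.

```python
from PIL import Image, ImageFilter
import numpy as np

image = Image.open("photo.jpg").convert("RGB")
blurred = image.filter(ImageFilter.GaussianBlur(radius=12))

# Upscale the low-resolution person mask back to the full photo size,
# then keep the original pixels where the mask is set and the blurred
# pixels everywhere else.
mask = Image.fromarray((person_mask * 255).astype(np.uint8))
mask = mask.resize(image.size)
result = Image.composite(image, blurred, mask)
result.save("portrait.jpg")
```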

