The code that Google used on its Pixel 2 smartphones to determine what to blur and what to focus on in Portrait Mode pictures is now free for developers to use in their own apps.

Google Research has detailed and open-sourced its machine-learning semantic image segmentation model, DeepLab-v3+. The model, implemented in Google's TensorFlow framework and trained on benchmark datasets such as PASCAL VOC and Cityscapes, assigns each pixel in an image a semantic label (human, animal, food, or another category) and outlines the boundaries of each labeled object. From there, the model can distinguish the background from the key subjects.
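To illustrate the idea, here is a minimal sketch of what a semantic segmentation model's output looks like. The per-pixel scores and class names below are made up for a tiny 2x2 image; a real model like DeepLab-v3+ produces an equivalent (height x width x classes) tensor for full-size photos.

```python
# Hypothetical per-pixel class scores for a tiny 2x2 image over three
# classes: 0 = background, 1 = person, 2 = dog. A real segmentation
# model outputs a tensor of this shape for every pixel in the photo.
logits = [
    [[5.0, 1.0, 0.5], [0.2, 4.0, 1.0]],
    [[0.1, 3.5, 0.3], [4.2, 0.4, 0.2]],
]

# Semantic segmentation: label each pixel with its highest-scoring class.
label_map = [
    [max(range(len(scores)), key=scores.__getitem__) for scores in row]
    for row in logits
]
print(label_map)  # [[0, 1], [1, 0]]

# A foreground mask (any non-background pixel) is the kind of output a
# Portrait Mode effect can use to keep subjects sharp and blur the rest.
foreground = [[label != 0 for label in row] for row in label_map]
print(foreground)  # [[False, True], [True, False]]
```

The per-pixel label map is what separates semantic segmentation from plain image classification, which only tags the picture as a whole.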

On the Pixel 2 and Pixel 2 XL, this data powers Portrait Mode, which blurs the background while keeping the subjects in focus. Google hopes that "other groups in academia and industry" will reproduce its results for their own purposes and even build on DeepLab's model to refine it further.
