The code that Google used on its Pixel 2 smartphones to determine what to blur and what to focus on in Portrait Mode pictures is now free for developers to use in their own apps.

Google Research has detailed and open-sourced its machine-learning semantic image segmentation model, DeepLab-v3+. The model, built on Google’s TensorFlow framework and trained on benchmark datasets such as PASCAL VOC and Cityscapes, assigns a semantic label to every pixel in an image, classifying it as belonging to a person, an animal, food or some other category, and traces the precise boundaries of each of those objects. From there, the model can determine what is background and what are key subjects.
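To make the idea concrete, here is a minimal sketch of running a frozen segmentation graph such as DeepLab-v3+ with TensorFlow 1.x to get a per-pixel label map. The file name, tensor names, and usage below are illustrative assumptions, not the official DeepLab API.

```python
# Sketch: per-pixel semantic segmentation with a frozen TensorFlow graph.
# GRAPH_PATH, INPUT_TENSOR and OUTPUT_TENSOR are assumed names for illustration.
import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PATH = "deeplabv3_frozen_inference_graph.pb"  # assumed export file
INPUT_TENSOR = "ImageTensor:0"                       # assumed tensor names
OUTPUT_TENSOR = "SemanticPredictions:0"

# Load the frozen graph once.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def segment(image_path):
    """Return a 2-D array of integer class IDs, one per pixel."""
    image = Image.open(image_path).convert("RGB")
    with tf.Session(graph=graph) as sess:
        label_map = sess.run(
            OUTPUT_TENSOR,
            feed_dict={INPUT_TENSOR: [np.asarray(image)]})
    return label_map[0]
```

The output is a label map the same size as the input image, which is what downstream code (such as a background-blur step) consumes.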

On the Pixel 2 and Pixel 2 XL, this data is used for Portrait Mode, where the background is blurred and the subjects are kept in focus. Google hopes that “other groups in academia and industry” will reproduce its results for their own purposes and even expand upon the DeepLab model for further refinement.
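As a rough sketch of that idea, and not Google’s actual Portrait Mode pipeline, a segmentation mask can be used to keep the subject sharp while blurring everything else. The person class ID below is an assumption based on the PASCAL VOC label set.

```python
# Sketch: portrait-style blur driven by a segmentation label map.
import numpy as np
from PIL import Image, ImageFilter

PERSON_CLASS_ID = 15  # assumed PASCAL VOC label index for "person"

def portrait_blur(image_path, label_map, blur_radius=12):
    image = Image.open(image_path).convert("RGB")
    blurred = image.filter(ImageFilter.GaussianBlur(blur_radius))

    # Binary mask: 255 where the subject is, 0 for background.
    mask = (label_map == PERSON_CLASS_ID).astype(np.uint8) * 255
    mask = Image.fromarray(mask).resize(image.size)

    # Composite: subject pixels from the original, background from the blur.
    return Image.composite(image, blurred, mask)
```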