The good news for developers who wish to take advantage of the Pixel 2’s Portrait Mode is that Google has released (via 9to5Google) the deep learning model behind the technology. This means that developers will be able to get a peek under the hood and potentially apply Google’s technology to their own creations.
In a post on Google’s blog, the company explains a bit about how the Pixel 2’s Portrait Mode works. The feature relies on “pinpointing the outline of objects”, which in turn “imposes much stricter localization accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.”
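To make that distinction concrete, here is a minimal sketch (in Python, using NumPy and SciPy, and not Google’s actual code) of what a per-pixel segmentation mask enables: unlike a single class label or a bounding box, a mask lets every pixel be treated as subject or background, which is exactly what a portrait-style blur needs. The portrait_effect function and the dummy mask below are illustrative assumptions, standing in for the output of a real segmentation network.

    # Illustrative sketch only: the "person_mask" here is a dummy
    # stand-in for the per-pixel output of a segmentation model.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def portrait_effect(image, person_mask, blur_sigma=8.0):
        """image: HxWx3 float array; person_mask: HxW array in [0, 1]."""
        # Blur every channel of the whole frame.
        blurred = np.stack(
            [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(3)],
            axis=-1,
        )
        # Soften the mask edge so the subject blends into the background.
        soft_mask = gaussian_filter(person_mask.astype(np.float32), sigma=2.0)
        soft_mask = np.clip(soft_mask, 0.0, 1.0)[..., None]
        # Per-pixel composite: mask=1 keeps the sharp subject,
        # mask=0 uses the blurred background.
        return soft_mask * image + (1.0 - soft_mask) * blurred

    # Dummy data standing in for a real photo and a real mask.
    h, w = 240, 320
    image = np.random.rand(h, w, 3)
    mask = np.zeros((h, w))
    mask[60:200, 120:220] = 1.0  # pretend the subject occupies this region
    result = portrait_effect(image, mask)
    print(result.shape)  # (240, 320, 3)

A classifier would only tell you a person is somewhere in the frame, and a detector would only give you a box around them; neither is precise enough to decide, pixel by pixel, what to keep sharp and what to blur.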
Google also points out that this kind of technology would have been hard to imagine five years ago, noting that, “We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.”