Google’s Nexus handsets used to be known for their mediocre cameras back in the day, with early Nexus models lagging well behind the competition in picture quality. Fast forward to today, however, and Google’s Pixel phones have pretty impressive cameras and come with features that Android users are porting to non-Pixel handsets.
Now in a post on their AI blog, Google is sharing details on how they created the portrait mode effect on the Pixel 3, and the methods and prototypes they used to achieve it. Google notes that they are still using AI to help achieve the depth effect, but with the Pixel 3 they have improved upon it, resulting in better depth estimation.
According to Google, training their new AI required a lot more PDAF images, which were captured using the rig pictured above. “To accomplish this, we built our own custom “Frankenphone” rig that contains five Pixel 3 phones, along with a Wi-Fi-based solution that allowed us to simultaneously capture pictures from all of the phones (within a tolerance of ~2 milliseconds). With this rig, we computed high-quality depth from photos by using structure from motion and multi-view stereo.”
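The multi-view stereo approach Google mentions ultimately rests on a simple geometric relationship: the farther apart the same point appears in two camera views (its disparity), the closer it is to the cameras. Below is a minimal sketch of that pinhole-stereo relation; the focal length and baseline values are purely illustrative and are not parameters of Google’s actual rig.

```python
# Hedged sketch: the basic depth-from-disparity relation underlying
# multi-view stereo. Values below are illustrative assumptions, not
# measurements from Google's "Frankenphone" rig.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic pinhole stereo relation: depth = f * B / d.

    focal_px     -- camera focal length in pixels
    baseline_m   -- distance between the two camera centers in meters
    disparity_px -- pixel offset of the same point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 10 cm baseline, 20 px disparity -> 5 m
print(depth_from_disparity(1000.0, 0.10, 20.0))  # → 5.0
```

In practice, pipelines like the one Google describes estimate disparity densely across the whole image (and across more than two views), then refine the result, but each depth value still comes from this same triangulation idea.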
There is no doubt that all of Google’s hard work and research has paid off, as the Pixel 3 has one of the better cameras around. Other companies such as Huawei are certainly giving Google a run for their money, but at the very least, no one is disputing the Pixel’s image quality anymore.