The tech media has been operating under the assumption that the Pixel 2 and Pixel 2 XL were using their dedicated vision co-processor, the Pixel Visual Core co-developed with Intel, when taking pictures.

I got a fun correction from Google today: The Google Camera app does not use the Pixel Visual Core. Google’s camera app doesn’t use Google’s camera chip. Facebook and Snapchat are the first ever uses of it.

— Ron Amadeo (@RonAmadeo) February 7, 2018

FoneArena put a question on this very topic to Brian Rakowski, vice president of product management at Google.

The Visual Core which we will be turning on in the coming apps will primarily be for 3rd party apps. The cool thing about it is that it gives pretty good performance in a default capture scenario. So when 3rd parties use the camera APIs they’ll be able to get those high quality HDR processed images.

[…]

Turns out we do pretty sophisticated processing, optimising and tuning in the camera app itself to get the maximum performance possible. We do [zero shutter lag] and fast buffering to get fast HDR capture. So we don’t take advantage of the Pixel Visual Core, we don’t need to take advantage of it. So you won’t see changes in the pictures captured from the default camera app in the coming weeks.

The need for better picture quality is most apparent in third-party apps that hook into the camera's APIs, so it makes sense to turn the Visual Core on for them first. And if the Snapdragon's default digital signal processing is all that's behind the results the stock camera gets on the Pixel 2 and Pixel 2 XL, credit to Google for pulling that off in software.
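To put the third-party angle in concrete terms, here's a minimal sketch of what such an app actually does: it talks to the standard android.hardware.camera2 API and nothing Pixel-specific. The helper name, the fixed JPEG size, and the assumption that the CAMERA permission is already granted are mine for illustration; per Google's statements, any HDR+ processing the Visual Core provides would be applied by the platform, invisibly to code like this.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.media.ImageReader
import android.os.Handler

// Hypothetical helper: capture a single JPEG through the plain Camera2 API.
// Assumes the CAMERA permission has already been granted.
@SuppressLint("MissingPermission")
fun captureStillJpeg(context: Context, handler: Handler) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val cameraId = manager.cameraIdList.first() // assume the rear camera

    // Receives the finished JPEG; on a Pixel 2 any HDR processing (and,
    // once Google flips the switch, the Pixel Visual Core) happens before
    // the bytes ever reach the app.
    val reader = ImageReader.newInstance(4032, 3024, ImageFormat.JPEG, 1)
    reader.setOnImageAvailableListener({ r ->
        r.acquireLatestImage()?.use { image ->
            val jpegBuffer = image.planes[0].buffer // processed JPEG payload
        }
    }, handler)

    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) {
            camera.createCaptureSession(listOf(reader.surface),
                object : CameraCaptureSession.StateCallback() {
                    override fun onConfigured(session: CameraCaptureSession) {
                        val request = camera.createCaptureRequest(
                            CameraDevice.TEMPLATE_STILL_CAPTURE)
                        request.addTarget(reader.surface)
                        session.capture(request.build(), null, handler)
                    }
                    override fun onConfigureFailed(session: CameraCaptureSession) {
                        camera.close()
                    }
                }, handler)
        }
        override fun onDisconnected(camera: CameraDevice) = camera.close()
        override fun onError(camera: CameraDevice, error: Int) = camera.close()
    }, handler)
}
```

The point of the sketch is what isn't in it: no Visual Core references, no HDR toggles. If the co-processor does its job, apps like Facebook and Snapchat get better captures without changing a line.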

That said, we're all on notice now about what we say about what Google says about its phones.