Smartphones aren’t just getting better at taking pictures, drawing users away from point-and-shoots and toward their phones for capturing important memories; phone hardware is also getting a lot smarter about processing what it sees. With the first commercial Project Tango handset just a few months away, we’re about to turn a new page in the story of phones learning to see and interpret the world around them. Today we’re hearing about one way the phones of tomorrow will use their cameras to gather data about their environments: Google is partnering with machine-learning company Movidius in an agreement that will see Google deploy the company’s vision processors in new hardware.
Google’s already worked with Movidius on Project Tango, but this new partnership sounds like a distinct effort, one that will let Google pair the company’s MA2450 chip with its own image-processing algorithms in the hopes of giving phones the tools to autonomously recognize their surroundings. Possible use cases include phone security, using the tech to recognize individual users, or even automatic translation, detecting the presence of signs and running them through Google’s services.
Really, though, the partnership is a bit open-ended; there’s no commitment here to get Movidius chips into commercial devices, so Google has time to experiment with the tech and see what sort of uses might be possible.
Maybe we’ll ultimately see something that gets wrapped up in Project Tango, or emerges as its own similarly ambitious project. Heck, things could even take a swift turn for the mainstream if Google works with one of its OEM partners to get some of this Movidius tech into a new Nexus phone — though for the moment, that’s all just speculation.