Google’s ATAP team posted a new video highlighting use cases for its Soli sensor technology. The video shows how Google is attempting to make devices more socially intelligent by having them act and display information only when a person is looking at or facing the device.

The team was also responsible for Project Soli, the Soli sensor that was built into the Google Pixel 4 series and Nest smart devices. The Soli sensor allowed the smartphone to wake the display and recognize the user’s face, providing a secure unlock mechanism. The sensor was also used for various hand gestures – which never quite worked properly on the Pixel 4 series – and could recognize when a user was reaching for the device or trying to skip a track.

What is Google ATAP?

Google ATAP (short for “Advanced Technology and Projects”) is a hardware invention studio that belongs to Google. It’s a group of engineers, scientists, analysts, artists, and designers who work together on new ideas that could one day make life easier. The team focuses on new inventions that it tries to turn into finished products for use in devices, such as the Soli sensor in the Google Pixel 4 series and in Nest devices.

In the new video today, Google ATAP highlighted its main goal “to create ambient, socially intelligent devices that are controlled by the wave of a hand or turn of the head” (via 9to5Google).

“As humans, we understand each other intuitively — without saying a single word. We pick up on social cues, subtle gestures that we innately understand and react to. What if computers understood us this way?”

In the video, the team shows off a few different devices that understand the “social context” of the environment around a person, interacting and displaying information when that person approaches the product. The video also highlights that the technology could allow a device to recognize when a user turns toward it to read messages or check the weather, and to display other content when it isn’t being actively looked at.
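To make the idea concrete, here is a minimal sketch of what attention-gated display logic could look like. This is purely illustrative and is not Google’s implementation; the sensor readings, attention states, and content names below are all hypothetical placeholders.

```python
# Hypothetical sketch of attention-gated display logic (not Google's Soli API).
from dataclasses import dataclass
from enum import Enum, auto


class Attention(Enum):
    AWAY = auto()      # nobody nearby
    NEARBY = auto()    # person present but not looking at the device
    ENGAGED = auto()   # person facing / looking at the device


@dataclass
class SensorReading:
    person_present: bool   # e.g. derived from radar presence detection
    facing_device: bool    # e.g. derived from head-orientation estimation


def classify_attention(reading: SensorReading) -> Attention:
    """Map a raw sensor reading to a coarse attention state."""
    if not reading.person_present:
        return Attention.AWAY
    if reading.facing_device:
        return Attention.ENGAGED
    return Attention.NEARBY


def choose_display_content(state: Attention) -> str:
    """Pick what the (hypothetical) display should show for a given state."""
    if state == Attention.ENGAGED:
        return "messages_and_weather"   # detailed content while actively viewed
    if state == Attention.NEARBY:
        return "ambient_glance_info"    # minimal info when someone is simply around
    return "screen_off"                 # save power when nobody is present


if __name__ == "__main__":
    # Example: a person walks up and then turns toward the device.
    readings = [
        SensorReading(person_present=False, facing_device=False),
        SensorReading(person_present=True, facing_device=False),
        SensorReading(person_present=True, facing_device=True),
    ]
    for r in readings:
        state = classify_attention(r)
        print(state.name, "->", choose_display_content(state))
```

The point of the sketch is simply that the device switches what it shows based on an inferred attention state, rather than always displaying the same content.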

The device in the video appears to be a mockup, and it’s unlikely that we’ll see a similar smart home device anytime soon, but it shows what may be possible in the near future once the team figures out how to handle such challenging recognition tasks.