TapSense Could Make Touchscreen Input More Intelligent
The widespread deployment of capacitive displays has transformed how we interact with our devices, and multitouch tracking has made gesture recognition a reality; we’re a long way from the low-resolution sensors used in decades past. A group out of Carnegie Mellon University has some ideas about how to expand the usefulness of touchscreens even further, by enabling them to distinguish between different kinds of touches.
Imagine a smartphone where tapping an icon behaves differently depending on whether you touch the screen with your fingernail, the pad of your fingertip, or a knuckle. One touch could launch an app, while another might bring up a context-sensitive menu.
Called TapSense, the system doesn’t require any changes to existing touchscreens; instead, it uses a microphone to analyze the sound made as you touch the screen. With about 95% accuracy, it can distinguish one type of finger input from another. Accuracy is even better when telling a finger apart from a stylus, approaching 100%.
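The published details here don’t say exactly which acoustic features TapSense uses, but the general idea can be sketched: different parts of the finger produce impact sounds with different frequency content, so a classifier can work from simple spectral features of the recorded tap. The snippet below is a minimal illustration of that principle, not the CMU team’s actual method: it computes a power-weighted spectral centroid and applies a hypothetical threshold (the function names, the 3 kHz cutoff, and the synthetic tap signals are all assumptions for demonstration).

```python
import numpy as np

def spectral_centroid(snippet, sample_rate=44100):
    """Power-weighted average frequency of a short audio snippet."""
    power = np.abs(np.fft.rfft(snippet)) ** 2
    freqs = np.fft.rfftfreq(len(snippet), d=1.0 / sample_rate)
    return float(np.sum(freqs * power) / np.sum(power))

def classify_tap(snippet, sample_rate=44100, threshold_hz=3000.0):
    # Hypothetical rule: a hard fingernail tap concentrates energy at
    # higher frequencies than a soft fingertip-pad tap. A real system
    # would train a classifier on many features, not one threshold.
    centroid = spectral_centroid(snippet, sample_rate)
    return "nail" if centroid > threshold_hz else "pad"

# Synthetic stand-ins for recorded taps: a high-pitched click (nail)
# versus a low-frequency thud (fingertip pad), each a 10 ms decaying tone.
t = np.linspace(0, 0.01, 441, endpoint=False)
nail_tap = np.sin(2 * np.pi * 6000 * t) * np.exp(-t * 500)
pad_tap = np.sin(2 * np.pi * 300 * t) * np.exp(-t * 500)

print(classify_tap(nail_tap))  # "nail"
print(classify_tap(pad_tap))   # "pad"
```

In practice a production system would feed a richer feature set into a trained machine-learning classifier, but the toy example shows why a microphone alone carries enough information to separate touch types.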
The biggest catch is that the team behind TapSense relied on an external microphone rather than a smartphone’s own mic, because phone mics are designed for voice and poorly suited to picking up this type of contact sound. It would be trivial for a manufacturer to add a second mic to a future phone, tuned specifically to capture the sounds of your interaction with the screen, but convincing one to do so is another matter. If it happened, though, TapSense could be hugely useful, if only as an alternative to tap-and-hold.