One of the tricky things about designing smartphone user interfaces is letting users specify what action should be taken when they tap an onscreen object. On a PC, we regularly change how software interprets our mouse input by choosing which mouse button we press, along with any modifier keys we might hold down on the keyboard. While there's some effort toward adding this kind of flexibility to smartphone hardware (look no further than the Samsung Galaxy Note and the side button on its S Pen stylus), we're largely stuck with long-pressing and then choosing an action from a menu. Google may have a better idea, as detailed in a recently published patent application.
Google's idea is pretty innovative, from the sounds of it. The principle is that you'd specify what sort of action you want your phone to take on a given piece of content by tracing a code letter and then circling the content in question, all in one continuous motion. For instance, if you got an email about a friend's trip to the zoo and wanted to learn more about the aye-aye, you'd trace a "W" on the phone's display (W for Wikipedia) and then circle the relevant text.
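To make the idea concrete, here's a minimal sketch of how such a gesture might be dispatched once the two strokes have been recognized. This is purely our illustration, not anything from the patent filing: the recognizer that turns raw touch input into a letter and a lassoed text selection is assumed to exist, and the `ACTIONS` table and `dispatch_gesture` function are hypothetical names.

```python
# Hypothetical dispatch step for a letter-then-circle gesture:
# the traced code letter selects an action, and the circled text
# becomes that action's input. Recognition of the strokes themselves
# is assumed to have already happened upstream.

from urllib.parse import quote

# Hypothetical mapping from code letters to actions ("W" for Wikipedia,
# as in the zoo example above; "G" for a web search).
ACTIONS = {
    "W": lambda text: f"https://en.wikipedia.org/wiki/{quote(text)}",
    "G": lambda text: f"https://www.google.com/search?q={quote(text)}",
}

def dispatch_gesture(code_letter: str, circled_text: str) -> str:
    """Map a recognized code letter plus the circled content to an action URL."""
    try:
        action = ACTIONS[code_letter.upper()]
    except KeyError:
        raise ValueError(f"no action bound to letter {code_letter!r}")
    return action(circled_text.strip())
```

Tracing a "W" over the words "aye-aye" would then resolve to a Wikipedia lookup: `dispatch_gesture("W", "aye-aye")` returns `"https://en.wikipedia.org/wiki/aye-aye"`.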
At first, it sounds like a really neat idea, but we have our concerns about the implementation. For instance, how's the phone supposed to recognize in the first place that you're starting to enter a gesture command, say one using a capital "I" as its code letter? Wouldn't the beginning of that input look the same to the phone as a swipe to scroll the screen? We'll have to wait and see whether Google turns this idea into an actual product, and if so, how it handles situations like these.
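One plausible way around the ambiguity, and this is purely our speculation rather than anything described in the patent application, is to exploit the fact that a scroll swipe travels mostly in one direction, while tracing a letter tends to reverse direction several times. A crude version of that heuristic, with the hypothetical name `looks_like_letter`, might look like this:

```python
# Speculative heuristic for telling a letter gesture apart from a scroll:
# count how many times the stroke reverses vertical direction. A plain
# scroll rarely reverses at all; tracing a letter like "W" reverses twice.

def looks_like_letter(points: list) -> bool:
    """points is a list of (x, y) touch samples in stroke order."""
    reversals = 0
    prev_dy = 0.0
    for (_, y0), (_, y1) in zip(points, points[1:]):
        dy = y1 - y0
        if dy and prev_dy and (dy > 0) != (prev_dy > 0):
            reversals += 1  # vertical direction of travel flipped
        if dy:
            prev_dy = dy
    return reversals >= 2

# A straight downward swipe stays a scroll; a zigzag "W" trace does not.
scroll = [(0, 0), (0, 10), (0, 20), (0, 30)]
letter_w = [(0, 0), (1, 10), (2, 0), (3, 10)]
```

A real system would presumably be far more sophisticated (weighing stroke speed, bounding box, and context), but something along these lines could let the phone defer the scroll-or-gesture decision until a few samples in.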