By Evan Blass | June 12, 2012 5:19 PM
One of the first items of note during the mobile portion of yesterday’s WWDC keynote was the upgraded performance users can expect from Siri. Performance probably isn’t the right word; Siri has simply been programmed to answer a broader range of questions, but will still perform the same as the day you brought her home. Which raises the question: does Siri do a good enough job to be considered a value add? The people I know who have turned the feature off altogether would probably put it in the gimmick column.
The biggest problem with voice control, and one that may never be fully solved, is that people speak in so many different ways. Even if you account for accents and regional dialects, there are still countless individual takes on particular words and phrases, making it nearly impossible to deploy one-size-fits-all software. And even if the program is capable of understanding you perfectly, there are still environmental variables like wind noise, traffic, construction, and competing voices. The technology has a long way to go before it can compensate for the many kinds of real-world interference.
Then there’s the question of desirability. Voice control is a very cool concept that sounds awesome at first, until you realize that you look a bit awkward out in public questioning and commanding your device, especially when you have to repeat yourself. I tried out the Microsoft VoiceCommand software back in the Windows Mobile days and spent days changing music tracks and asking it for the weather. Then I realized that it failed completely in real-world conditions, like driving in a car, exactly when you need it most.
There’s no doubt that some of the benefits offered by Siri, and other products like Samsung’s S-Voice, are very real. If you’re sitting in a quiet office and need to look up a quick bit of info, voice control can definitely be your friend. But until it works for all people under all conditions, until it’s a polished, nearly error-free experience, it seems to be more of a novelty than an actual technological breakthrough.
If voice control isn’t the answer to the input conundrum, then what is? Some people will tell you that mind control is a feasible way of controlling equipment: the technology to perform simple tasks is well past the infancy stage, approaching the point of commercialization. Motion control also seems like a good way of manipulating small devices; we’re already seeing this to a limited degree with the use of accelerometers and gyroscopes.
So, is voice control really all that it’s cracked up to be? Will it ever obviate the need for touch input? If voice isn’t ideal, what are some other ways to quickly and safely control a smartphone or tablet?