By Taylor Martin | August 6, 2013 1:51 PM
Talking to a phone – or any piece of technology, for that matter – is gauche, awkward, and often alienating. There is very little that feels normal about asking your smartphone a question or telling your car stereo to call someone.
However, the last two years have seen a sort of resurgence of voice input on mobile platforms. Apple acquired Siri and integrated it into the core iOS experience; Google introduced Google Now and offline dictation software; Samsung introduced S Voice with TouchWiz Nature UX; and LG introduced what was virtually a direct copy of S Voice.
It was only a matter of time before dictation and voice input were put to the true test.
As such, one year ago today, Michael and I were both unknowingly wrapping up the exact same week-long challenge. Without any collaboration or cross-communication, we each (though Michael was writing for a different publication at the time) committed to one week without using our fleshy digits to type a single word on our smartphones – we could only use dictation for text input … for an entire week. The crazy part? We started at the exact same time. (If you haven’t followed along, this was a common occurrence for Michael and me last year.)
Great minds, I guess.
After a week, though, we both came to a similar conclusion. Voice input is … impressive. It’s fairly accurate, and after a week of using it non-stop, it becomes a lot less awkward to use, especially in the midst of strangers and friends alike.
However, dictation and voice input still have a long way to go. They’re exceptionally inaccurate in certain situations; any sense of privacy is thrown by the wayside, which can be troublesome with personal conversations; and context is everything. How can a computer ever know when you mean to spell out the word “period” instead of bringing a sentence to a full stop? It can make a fairly accurate guess, based on context and nearby words, but it can never know for sure. And as Michael so eloquently put it … using dictation:
“It’s interesting to see how differently the brain treats verbal communication. Compared to typing things, I mean. It’s very difficult to compose a message properly by speaking. As opposed to typing it. Just on a fundamental brain power level.”
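To make the “period” ambiguity concrete, here is a toy sketch of the kind of context check a dictation engine might make. Everything here is invented for illustration – real engines lean on statistical language models, not hand-written rules like these – but it shows why nearby words can only ever yield a good guess, never certainty:

```python
def interpret_period(tokens, i):
    """Decide whether the spoken word 'period' at index i is meant as
    punctuation ('.') or as the literal word. A toy heuristic only."""
    prev = tokens[i - 1].lower() if i > 0 else None
    nxt = tokens[i + 1] if i + 1 < len(tokens) else None
    # A determiner or modifier right before it suggests the noun sense:
    # "a period", "the period", "grace period", "trial period".
    if prev in {"a", "the", "grace", "trial", "time", "this", "that"}:
        return "period"
    # At the very end of an utterance, punctuation is the likelier reading.
    if nxt is None:
        return "."
    # A capitalized word after it hints that a new sentence has begun.
    if nxt[0].isupper():
        return "."
    return "period"

print(interpret_period("See you tomorrow period".split(), 3))       # "."
print(interpret_period("The trial period ends Friday".split(), 2))  # "period"
```

Swap “tomorrow” for “trial” and the same word flips meaning – which is exactly the guessing game a dictation engine plays on every utterance.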
The technology definitely has quite a few hurdles to overcome. But the root of the problem isn’t necessarily the technology itself as much as it is the users. We also have to overcome the strange feeling we get when speaking to our phones.
Since the Moto X announcement last week, I’ve written about Motorola’s phone twice – once to explain why the Moto X and Motorola are deserving of a little praise, despite a mostly negative reception, and another to explain why I, personally, want the phone. And the week before, I detailed three features the Droid Ultra and its siblings have that every phone should have. Unsurprisingly, those three editorials homed in on a single point: voice input is more important than ever.
Since Motorola’s X8 Computing System was unveiled, the touchless control feature has been widely dismissed as a gimmick. But a dedicated low-power core for natural language processing means Motorola’s phones are always on and always listening for your voice, ready to cue a Google voice search, place reminders, set alarms, make calls, and send emails and text messages at any time, without the user ever having to touch or pick up the phone.
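The flow described above – listen continuously, act only when the trigger phrase is heard, then route the rest of the utterance to the right action – can be sketched in a few lines. The trigger phrase and the handler names below are assumptions for illustration; this stands in for the phone’s real pipeline, which does the listening on dedicated silicon:

```python
TRIGGER = "ok google now"   # the Moto X launch phrase (used here for illustration)

# Hypothetical handlers standing in for the phone's real actions.
HANDLERS = {
    "remind": lambda text: f"reminder set: {text}",
    "alarm":  lambda text: f"alarm set: {text}",
    "call":   lambda text: f"calling: {text}",
}

def dispatch(utterance):
    """Route an already-transcribed utterance. Anything that doesn't start
    with the trigger phrase is ignored outright, which is what makes it
    safe for the phone to listen all day."""
    u = utterance.lower().strip()
    if not u.startswith(TRIGGER):
        return None                      # not addressed to the phone
    command = u[len(TRIGGER):].strip()
    for keyword, handler in HANDLERS.items():
        if command.startswith(keyword):
            return handler(command[len(keyword):].strip())
    return f"searching: {command}"       # everything else falls back to voice search

print(dispatch("OK Google Now remind me to buy milk"))  # reminder set: me to buy milk
print(dispatch("just talking to a friend"))             # None
```

The key design point is the fall-through at the end: commands the phone recognizes get dedicated actions, and everything else still does something useful by becoming a search.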
If that’s not impressive or intriguing, I don’t know what is.
The fact of the matter is, ever since the launch of Google Now, I’ve come to use the service almost every single day. I open Google Search to see what it has to offer – local events, traffic on the way home from the office, weather, stocks, etc. But I also open the app several times every day to perform voice searches. I use it for simple math and quick unit conversions, as well.
I haven’t yet gotten into the habit of creating reminders, calendar events, or setting alarms with Google Now, but it’s something I feel I should have been doing all along.
In case you are unaware, there are quite a few voice commands you can speak in Google Now:
The ability to do those things without ever touching my phone would almost certainly increase how many voice commands I use and how frequently I use them. I’ve looked at the above list countless times over the last year, and I can never seem to remember half of them, most of which would be very useful. Maybe a phone that encouraged me to use voice commands would help me make the most of Google Now.
I’m interested, though. How often do you ladies and gents use voice commands on your smartphone? Daily? Once per month? Never? Sound off in the comments below, and join the discussion. Do you think a hands-free device that is always listening would change how often you use voice commands?