The NYTimes is reporting that Google researchers have added sophisticated voice recognition technology to the company’s search software for the Apple iPhone. The new feature will be available as early as today.
It works like this: You ask a question, the sound is converted to a digital file and sent to Google’s servers, which try to determine the words spoken and pass them along to the Google search engine. The concept of a spoken-word interface with Google is not new. A service called Google 411 has been around for a while and can be used with any phone. (And other spoken-word or voice-recognition services exist, including one of my favorites, Jott, which converts a 15-second message into an e-mail or other text document.)
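For the curious, the round trip described above can be sketched in a few lines. This is purely illustrative: the function names and the fake transcript are my own placeholders, since Google hasn't published the actual interface its iPhone app talks to.

```python
# Toy sketch of the voice-search round trip: capture audio on the phone,
# ship it to a server for recognition, feed the transcript to search.
# All names and data here are hypothetical placeholders, not Google's API.

def record_question() -> bytes:
    """Stand-in for the phone capturing the spoken question as audio."""
    return b"\x00\x01\x02"  # pretend this is compressed audio


def recognize_speech(audio: bytes) -> str:
    """Stand-in for the server-side recognizer that turns sound into words."""
    # A real recognizer would decode the audio; we fake a transcript.
    return "pizza near union square"


def search(query: str) -> list[str]:
    """Stand-in for handing the recognized words to the search engine."""
    return [f"result for: {query}"]


def voice_search() -> list[str]:
    audio = record_question()             # 1. capture the question
    transcript = recognize_speech(audio)  # 2. server converts sound to text
    return search(transcript)            # 3. text goes to the search engine
```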
However, here’s the new, new thing today, as described in the Times:
“An intriguing part of the overall design of the service was contributed by a Google researcher in London, who found a way to use the iPhone accelerometer — the device that senses how the phone is held — to set the software to ‘listen’ mode when the phone is raised to the user’s ear.”
So, here’s my first question:
“Why would Google, which is in the midst of supporting the launch of its own mobile phone platform appearing first on the G-1 from T-Mobile, release an awesome feature that provides a marketing advantage to a competitor of the G-1?”
The T-Mobile G-1 includes an accelerometer (with noted limitations). Unless the same feature is released for the G-1 simultaneously, I think there are some marketing folks at T-Mobile who have a right to be fuming this morning.
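To give a feel for the raise-to-ear trick the Times describes, here is a toy version of the detection logic. The thresholds and axis conventions are entirely my own guesses for illustration; the article says nothing about how Google's researcher actually implemented it.

```python
# Toy "raise to ear" detector. An accelerometer reports gravity in g's:
# face-up on a table reads roughly (0, 0, -1); held vertically against
# the ear, gravity shifts onto the y axis. Thresholds below are invented
# for illustration, not taken from Google's (undisclosed) implementation.

def is_raised_to_ear(ax: float, ay: float, az: float) -> bool:
    """True when a single reading looks like the phone is held at the ear."""
    return abs(ay) > 0.7 and abs(az) < 0.5


def listen_mode(samples) -> bool:
    """Enter listen mode once any sample suggests the phone is at the ear."""
    return any(is_raised_to_ear(ax, ay, az) for ax, ay, az in samples)
```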