One of the more interesting stories I've been reading about is how developers/hackers have been working with the iPhone 4S; mainly working on porting Siri to other Apple devices: the iPad and previous versions of the iPhone. Which is what Apple should've done in the first place.
But now we're starting to see customers recognize that using services like Siri (voice recognition) consumes more data; probably more than they're comfortable with. And voice recognition is not a forgiving application. You know, if a phrase you spoke comes back as something other than what you said, or comes back incorrect, you'd think you shouldn't be charged for the bandwidth until the phrase is properly recognized; that's not the case.
Using voice recognition on a mobile device, period, uses bandwidth; good recognition or bad recognition, the bandwidth is used and there's no getting it back.
The initial articles coming out trying to bring this topic to light, to get more people to understand the ramifications of using voice recognition, are funny, because voice recognition in general wasn't popular until the past year, when Siri was introduced with the iPhone 4S. Now that the iPhone has voice recognition as a boilerplate feature of the phone itself, Android is actively working on a Siri competitor…
Android has had voice recognition for a long time, but it hasn't been as widely used as what the iPhone has now. Apple is clearly pushing Siri as an application to drive sales. It's a novel application, and it makes customers very curious about the abilities of a device like this.
It's something that I've mentioned before: voice recognition is not done on the mobile device. The mobile device doesn't do anything but act as a microphone, and pretty much any version of the mobile device can function as a microphone. The sound from someone speaking into the phone is captured and then sent to [in the case of Apple] the voice-recognition data center [Nuance, the makers of Dragon NaturallySpeaking] for processing, and then the recognized text is sent back to the smartphone. The shorter the phrase, the less the data; the longer the sentences or phrases, the more data is sent to the data center for processing…
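That round trip can be sketched in a few lines. This is a minimal back-of-the-envelope model, assuming a compressed speech codec at an illustrative bitrate; Apple hasn't published Siri's actual codec or rates, so the numbers here are assumptions, not measurements.

```python
# Rough model of the flow described above: the phone only captures
# audio and ships it off; all recognition happens server-side, and
# the payload grows with how long you talk.
# The 24 kbps bitrate is an assumed figure for a compressed speech
# codec, not Apple's published number.

def upload_bytes(duration_seconds, bitrate_kbps=24):
    """Estimate the compressed audio payload sent to the
    recognition data center for a phrase of the given length."""
    return int(duration_seconds * bitrate_kbps * 1000 / 8)

# A 2-second command vs. a 10-second dictated sentence:
short_phrase = upload_bytes(2)    # 6000 bytes (~6 KB)
long_phrase = upload_bytes(10)    # 30000 bytes (~30 KB)
```

The text coming back is tiny by comparison; it's the outbound audio that does the damage, which is why longer dictation costs more.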
If you're in one of those situations where you don't have a large data plan, this is where you're going to recognize how fast your data plan can get eaten up. This is where the phone carriers have a huge grin on their faces, because they know this time has been coming and they're about to cash in on all the extra data that people will be transferring back and forth using their mobile devices.
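To put a number on how fast that happens, here's a quick worked estimate. The per-query payload and the plan size are assumptions chosen for illustration (a short query's audio upload plus the text response back), not carrier or Apple figures.

```python
# Back-of-the-envelope: how many voice queries would burn through a
# small capped data plan, assuming ~70 KB per query round trip.
# Both numbers below are illustrative assumptions.

PLAN_MB = 200                # a small capped monthly data plan
BYTES_PER_QUERY = 70_000     # assumed audio up + text back per query

queries_until_cap = (PLAN_MB * 1_000_000) // BYTES_PER_QUERY
per_day_for_a_month = queries_until_cap // 30

print(queries_until_cap)     # 2857 queries to exhaust the plan
print(per_day_for_a_month)   # ~95 queries/day, every day, on voice alone
```

Voice queries alone won't instantly kill a plan, but they stack on top of everything else the phone is already doing, and a heavy dictation user eats data far faster than a keyboard user.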
As hackers/developers continue to create ancillary applications that make Siri work on other Apple devices, and possibly Android devices later on, I think you're going to see the level of data usage for smartphones skyrocket as voice recognition becomes mainstream and the voices and accents that people have are better recognized.
It's not going to be limited to smartphones, though. A good example of this would be what Microsoft's Xbox may be doing in the future: allowing customers to search for channels and content by voice commands, having the data sent off to a data processing center, and then having the results appear on the Xbox. The same could be said for the next version of Google TV…
Voice-recognition is a great thing to play with if you have the bandwidth…
Larry Henry Jr.
…via Dragon NaturallySpeaking 11