One of the biggest, most aggressive pushes for voice recognition is going on RIGHT NOW. The big boys like Nuance, Google and Microsoft are on the verge of making this world something out of Star Trek.
I believe everyone dreams of being able to ask a computer a question and have it give you the answer… well, that's exactly what the iPhone is doing with Siri and what Nuance has been doing for years with Dragon NaturallySpeaking on the PC. They've been training computers to understand common language and common questions.
The most brilliant example of this was Watson on the show Jeopardy. Most of the time, questions, requests and phrases have to be programmed in: the computer first recognizes the question, then deciphers the information, and then replies using text-to-speech to speak back.
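The three stages described above can be sketched in a few lines of toy Python. Every function here is a stub I made up for illustration; the real systems behind Watson or Siri are vastly more sophisticated:

```python
# Rough sketch of the pipeline the paragraph describes:
# 1) recognize the question, 2) decipher what's being asked,
# 3) reply via text-to-speech. All names here are hypothetical.

def recognize(audio):
    """Stage 1: speech -> text (stubbed as a ready-made transcript)."""
    return audio["transcript"]

def decipher(text):
    """Stage 2: match the text against programmed-in questions."""
    known = {"who wrote hamlet": "William Shakespeare"}
    return known.get(text.lower(), "unknown")

def text_to_speech(answer):
    """Stage 3: 'speak' the answer back (stubbed as a string)."""
    return f"[speaking] {answer}"

reply = text_to_speech(decipher(recognize({"transcript": "Who wrote Hamlet"})))
```

The point of the sketch is the shape of the flow, not the stubs: each stage is a hard problem on its own, and the ones in the middle only work for phrases that were programmed in first.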
But I suspect that with all the data that Google, Nuance and Microsoft are able to get their hands on with the voice recognition they have on their respective Smartphones/platforms, this research and data is growing exponentially right now.
Every time someone uses their Smartphone for voice recognition, that data has to be sent back to a server to be analyzed and decoded, and the result sent back; normally the user has to confirm it, and there are usually options for other things you might have said. When you select the correct option, you're reporting plus-and-minus data back to the respective provider. That data is then collected, applied to the database and used the next time.
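That plus-and-minus feedback loop could look something like this minimal sketch. The function and database here are purely hypothetical, standing in for whatever the real providers actually run:

```python
# Toy version of the correction loop: the server returns several
# candidate transcriptions, the user picks the right one, and that
# choice is logged as plus (confirmed) / minus (rejected) data.

from collections import defaultdict

# Hypothetical shared database: phrase -> feedback counts.
feedback_db = defaultdict(lambda: {"plus": 0, "minus": 0})

def record_user_choice(candidates, chosen):
    """Log the confirmed candidate as plus data and every rejected
    alternative as minus data, as described in the post."""
    for phrase in candidates:
        key = "plus" if phrase == chosen else "minus"
        feedback_db[phrase][key] += 1

candidates = ["what's the weather like today",
              "what's the whether like today"]
record_user_choice(candidates, candidates[0])
```

Aggregated over millions of users, counts like these are exactly the kind of data that would let a recognizer rank the right transcription first next time.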
Here’s a video from the Jeopardy show…
Keep in mind this is all happening almost instantly over the wireless network… and building that database and that level of recognition is awesome, because once the database is there to reference, you can start to apply this to just about anything; all you need is a data connection.
This brings me to home devices… in a couple of years, I see the death of the TV remote as we know it. Services like Comcast VOD and Google TV may become standard. TVs are going to be Internet enabled, and since Comcast is also an ISP, it's logical to think that a remote that understands what you say to it isn't that far off.
Imagine having a station in your house in 5-10 years that's Internet enabled and has the ability to understand questions, learn, be updated and continuously evolve to help you find the things you need to know. I see it coming. For example, if you had something like this and it didn't know how to check the weather for you, you could train it with a standard question, 'what's the weather like today', and the unit could be trained to respond with results from a webpage and read them to you… it's actually simple. The training and response would then be stored in a central DB and applied to what others might be attempting to do as well… collectively making everyone's system better… evolving.
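A bare-bones sketch of that trainable station might look like the following. The "central DB" is just a dictionary here, and `weather.example.com` is a placeholder URL, not a real service:

```python
# Hypothetical sketch of the trainable home station described above:
# teach it a question once, and the trained response lands in a
# central database shared by every unit.

central_db = {}  # question -> response (stands in for the shared DB)

def train(question, response):
    """Store a new question/response pair in the central database."""
    central_db[question.lower()] = response

def ask(question):
    """Answer from the shared database, or admit it needs training."""
    return central_db.get(question.lower(),
                          "I don't know that yet. Teach me?")

# One user trains their unit once...
train("what's the weather like today",
      "Checking weather.example.com: sunny and 72 degrees")

# ...and every unit reading the same central DB can now answer.
print(ask("What's the weather like today"))
```

The interesting part is the last comment: because the training lives in the shared database rather than on one device, a question taught by one household is immediately answerable by all of them.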
You can apply this to a range of applications… if computers could understand just about anything you ask them and answer almost immediately, that's something. Just as with Watson…
There's a ton of data being collected on voice recognition right now, and it's going to be interesting to see how that data is applied.
Are you using voice recognition on your Smartphone?
Larry Henry Jr.