If you’re moving from Windows 7 to Windows 10, you’ll notice a slight improvement in the speech recognition interface. In Windows 7, speech recognition didn’t work at all with applications unless they had full text control. Nuance took the same approach with Dragon NaturallySpeaking, and its customer base didn’t appreciate it much either. But the change Microsoft made is actually an improvement over what was available.
Microsoft’s speech recognition engine is not that bad. What’s horrendous about Microsoft speech recognition is the user interface: it lacks serious attention to usability detail and has no intuitiveness to it.
So the good news is that Windows 10 includes speech recognition, and you can now dictate into applications that weren’t supported in Windows 7. The adjustment Microsoft made in Windows 10 is this: when you dictate into an application that doesn’t natively support full text control, a small dictation box pops up so you can enter the text you want and then say the word “insert”.
In my testing with Windows 10, using my webcam as a microphone, speech recognition was only about 30% accurate. So if you’re sitting at your desktop with the webcam directly in front of you, don’t expect Windows 10 to recognize your speech accurately.
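For the curious, accuracy figures like these are usually quantified in speech work as word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the recognized text into what was actually said, divided by the number of words spoken. The function below is my own minimal sketch of that calculation, not anything Microsoft or Nuance ships:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance between word lists, divided by
    the number of words in the reference (what was actually said)."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic Levenshtein dynamic-programming table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of two words spoken:
print(wer("recognition works", "recognition words"))  # 0.5
```

A recognizer that is “30% accurate” in this sense is getting roughly seven out of every ten words wrong, which matches my experience with the webcam.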
But if you’re using a headset, you can expect recognition to jump up to 80 to 90%. I’m not sure which recognition engine Microsoft uses, but it doesn’t use the same word predictive dictionary that Nuance does.
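A word predictive dictionary is essentially a language model: it scores which word is likely to come next, so the engine can pick the plausible candidate when the audio is ambiguous. This toy bigram predictor is purely my own illustration of the idea (the class name and training text are made up), nothing like the real engines:

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy word predictor: counts which word follows which,
    then suggests the most frequent follower."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text: str) -> None:
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            self.counts[current][following] += 1

    def predict(self, word: str):
        followers = self.counts[word.lower()]
        if not followers:
            return None  # never seen this word during training
        return followers.most_common(1)[0][0]

model = BigramPredictor()
model.train("speech recognition works when speech recognition is trained well")
print(model.predict("speech"))  # recognition
```

A real engine works with acoustic probabilities and far larger models, but the principle is the same: good word prediction is what separates an engine that guesses plausibly from one that forces you into constant corrections.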
To be honest, I started this article intending to use Microsoft’s speech recognition, but it was so inaccurate, and forced so much back-and-forth editing, that I fell back to Dragon NaturallySpeaking [v14].
In a work environment, where employees probably aren’t allowed to install a copy of Dragon NaturallySpeaking, I think Microsoft’s speech recognition can still be highly productive. It can be adjusted, and custom words can be added to the user library for pronunciation and proper formatting.
I think Microsoft would do better to offload speech recognition processing to a data center instead of having the PC do the processing locally, or at least offer the option.
Speech recognition in operating systems has become very competitive. Apple’s Siri uses the speech recognition engine from Nuance’s Dragon NaturallySpeaking. Google’s speech recognition does outstandingly well, but it isn’t available at the operating system level; you have to use a web browser. Recently, Google started allowing users to dictate directly into Google Docs.
Microsoft’s Cortana is one of the new flagship features of Windows 10, but if the speech recognition engine that they’re using doesn’t recognize the words properly, the features/commands cannot be executed correctly. With any luck, Microsoft is using the boost in Cortana’s usage to refine their speech engine.
I would still like to see all the companies that provide speech recognition offer a solution that uses the Internet and data centers for processing instead of loading bulky software on local machines. Obviously there would be some privacy concerns, but users could be given the choice between local and Internet-based processing.
Overall, Microsoft’s speech recognition works well enough to be useful, but expect some heavy editing and lots of refinements/training in the speech dictionary. The user interface for Windows 10 speech recognition is not ideal, but it is functional; it’s better than typing. Having been a user of Dragon NaturallySpeaking for years, I think that Microsoft’s speech recognition is about as accurate as version 8 of Dragon NaturallySpeaking and probably just as responsive.
Do you use speech recognition? Let me know your thoughts.