Voice recognition is a technology that has been bubbling below the surface for several decades. To many, the inclusion of the Siri voice recognition system in the iPhone 4S signals that the time has come for voice recognition to be adopted on a wide scale. Physicians have been using voice recognition for some time now, but many have seen it as the least-worst option for docs who have never learned to type and don't have the time or inclination to do so. As voice recognition improves, however, there may come a point where even doctors who are fluent typists switch over to dictation using voice recognition software.
One company that I talked to at HIMSS has taken advantage of a cloud-based API from Nuance (the makers of the Dragon suite of voice-recognition software) to come up with a new iPad-based information system for use in emergency departments called SparrowEDIS. Designed by emergency medicine physician Dr Brian Phelps and his team at Montrue Technologies, the system has just won the Nuance 2012 Mobile Clinician Voice Challenge.
The software is, at heart, a unique iPad-based user interface that lets doctors enter narrative data into the hospital's EHR. The app first sends the dictation to Nuance's Healthcare 360 cloud-computing system, which translates it into text, and then passes that text into the EHR through the hospital's interface engine. On top of this, users can also check prescription interactions, order new prescriptions and share discharge instructions. The team has put together a video that shows off the system and the thinking behind it:
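To make that two-hop flow a little more concrete, here is a minimal sketch of how an iPad app might hand dictated audio to a cloud speech-to-text service and then forward the resulting note to the EHR via an interface engine. The URLs, field names and bearer token below are hypothetical placeholders for illustration, not Nuance's or Montrue's actual API:

```swift
import Foundation

// Rough sketch of the dictation-to-EHR flow described above.
// Endpoints, field names and the auth token are hypothetical.
struct DictationBridge {
    let speechServiceURL = URL(string: "https://speech.example.com/v1/transcribe")!
    let ehrInterfaceURL  = URL(string: "https://interface-engine.example.com/notes")!
    let apiToken: String

    /// Hop 1: send recorded audio to a cloud speech-to-text service
    /// and return the transcribed narrative text.
    func transcribe(audio: Data) async throws -> String {
        var request = URLRequest(url: speechServiceURL)
        request.httpMethod = "POST"
        request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")
        request.setValue("Bearer \(apiToken)", forHTTPHeaderField: "Authorization")
        request.httpBody = audio

        let (data, _) = try await URLSession.shared.data(for: request)
        // Assume the service returns plain UTF-8 text for simplicity.
        return String(decoding: data, as: UTF8.self)
    }

    /// Hop 2: forward the transcribed note to the hospital EHR via
    /// its interface engine (shown here as a simple JSON POST).
    func sendToEHR(noteText: String, patientID: String) async throws {
        var request = URLRequest(url: ehrInterfaceURL)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONSerialization.data(withJSONObject: [
            "patientId": patientID,
            "narrative": noteText
        ])
        _ = try await URLSession.shared.data(for: request)
    }
}
```

In a real deployment the second hop would more likely be an HL7 message handed off to the interface engine rather than a bare JSON POST, but the overall shape of the flow is the same.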
I can see lots of doctors being attracted to this system. Doctors who can't type quickly and work in environments where they are required to be on the move are obviously the most likely to benefit. However, even doctors who can type may start to see the benefits of voice recognition in terms of maintaining eye contact and rapport with patients while taking a history and performing an examination. Hopefully, Dr Phelps and his team will publish results from their pilot and future trials, so we can see whether this type of application marks "the end of typing" or whether the younger generation of doctors, who can already type quickly, prefer typing to talking.
What do you think? Is mobile voice recognition the future of medical narrative data entry, or is it just a transitional tool for doctors who can't type and don't want to learn?