Speech recognition demonstrates fewer errors than transcribed reports
One of the objections to the implementation of speech recognition software in a radiology practice is that it generates more errors than traditional transcription methods. According to a scientific presentation at the 93rd Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA), the opposite is true: transcribed reports show higher error rates than automated speech recognition applications.
"Error rates for speech recognition are not statistically different from, and may be better than, those for traditional transcription in our practice," said John Floyd, MD.
Floyd, a partner in the 24-member Radiology Consultants of Iowa (RCI), a non-academic radiology group in Cedar Rapids, Iowa, said his practice will generate about 400,000 dictations this year using speech recognition software (SpeechQ by MedQuist) for two large acute-care hospitals, seven rural hospitals, and an imaging center.
The study at RCI compiled error rates for 498 reports created with the SpeechQ software and compared them with error rates for the same reports transcribed in the traditional manner. The report cohort consisted of 20 to 25 studies involving CR, MR, and general radiographic procedures from each of the 24 radiologists in the practice.
Floyd reported that automated speech recognition was more accurate than traditional transcription. The traditionally transcribed reports included at least one error in 13 percent of the total, while the speech recognition reports contained at least one error in only 9 percent of the total studies.
"The rate for significant errors, requiring the preparation of an addendum, was 0.6 percent for speech recognition and 2 percent for traditional transcription," Floyd said.
The automated approach also contributed to an impressive improvement in report turnaround time compared with manual report preparation, he noted.
"Separate data for this practice indicated that average turnaround time for traditional transcription was greater than 24 hours while that for speech recognition was less than 1 hour (excluding screening mammograms), and that there was stable or increasing productivity for each radiologist in terms of RVUs (relative value units) per hour produced before and after speech recognition implementation," Floyd observed.
More than 97 percent of the practice's reports are edited and signed by radiologists on completion of their dictation, although they have the option to send their dictation to a correctionist at any time, Floyd said.
He noted that with traditional transcription, RCI finalized from 4 percent to 8 percent of its reports in less than 60 minutes in the two years prior to its adoption of SpeechQ in January 2007. In the first three months of speech recognition adoption, RCI was able to put out 66 percent of its reports in less than an hour. During the second three months of speech recognition deployment, the practice improved its delivery of reports to 82 percent in less than one hour.
He noted that the accuracy rate for speech recognition reported by RCI was confirmed by an independent analysis conducted at one of the two acute-care hospitals the group services. The facility's evaluation of 514 reports, conducted in September, produced an overall transcription error rate of 9.7 percent for both automated and traditional report generation, with an addendum required for 0.6 percent of the reports.
"Speech recognition can be used to dramatically improve radiology report turnaround time without degrading report accuracy or diminishing radiologists' productivity," Floyd said.