If all you do is use better tools, and you don't measure all the benefits those tools provide, of course you won't see improvements. I just finished reading through the Archives of Internal Medicine report on the use of EHRs and Clinical Decision Support to improve quality.
There are some problems with the way this particular study was designed. The intervention, "use of an EMR with Clinical Decision Support," is applied to 20 specific quality of care metrics. However, the intervention as defined is not designed to address any one of those metrics specifically, let alone all of them. In fact, provider use of EHRs may be motivated by several factors: improved use of clinician time, better capture of information used for reimbursement, avoidance of medication errors, use of ePrescribing or electronic billing, as well as support for higher quality of care for specific conditions. The study addresses only the last of these as a motivator for use.
In order to obtain quality improvements for the specific measures cited in the report, you need to plan for it when deploying an EMR in your practice. Just because an EMR supports clinical decision support doesn't mean that it provides ALL CDS possibilities all of the time. Different practices have different workflows for which the CDS capabilities of products may need to be configured. For example, some products provide documentation templates that support certain clinical guidelines and have CDS rules enabled. But these guidelines and the templates implementing them must be customized to specific care settings based on population age (e.g., pediatric/geriatric), gender (e.g., ob/gyn), formulary, et cetera. The most commonly enabled CDS feature in EHRs is med/med and med/allergy interaction checking. Yet not a single quality measure in the study addresses that issue, even though it is the one most commonly cited to me alongside the IOM report "To Err is Human".
The conclusions of the report don't surprise me given its design. Nor should they surprise anyone else who reads it. If the treatment (use of EHR + CDS) is non-specific to the problem (the 20 specific quality measures), the expectation that it would have any effect is a marginal hypothesis to begin with. A better study would examine the effects of EHR use with CDS where the implementation was designed to improve a specific quality measure. In that case, the intervention would be targeted at the specific issue, and the results would be much more relevant.
The data gathered in the NAMCS surveys is insufficient to conclude that decision support doesn't help, but right now, that's all the data the study authors had to work with. I suspect that future studies, which will include EHRs that have had CDS rules enabled to support high-priority quality measures, will have better results. To better explore those results, it might be useful to start gathering metrics on which quality measures healthcare providers have enabled CDS to support.
Updated at 1/25 3:39 pm
A good analogy to this study would be a clinical trial on the use of medications to treat a specific set of conditions, where use of any medication at all was enough to count in the numerator, and then measuring the impact of medication use on treatment of those conditions.
Thanks Keith, your analysis is prescient as always. For a more detailed analysis of the limitations of the research methodology used, may I refer your readers to my blog? See:
http://informaticsprofessor.blogspot.com/2011/01/electronic-health-records-do-not-impact.html
Bill Hersh
The saying when I worked in manufacturing was: "What gets measured gets done. What gets rewarded gets done well." I've stopped being amazed at how the lessons I learned in manufacturing have not been adopted by healthcare. Deming wrote about applying his principles to healthcare, but few appear to have read him.
Another great analysis. I thought this study was skewed from the beginning.