A lot of the work I've been doing is focused on taking costs out of healthcare. One of the principles I try to apply rigorously is to ensure that the largest cost burdens are borne by the systems at the center. Imagine that you have 100 systems connecting to one central hub. Imagine further that some complex processing needs to take place during communications between those systems and the hub. Where do you put the expensive node? Why, at the center of course. Similarly, you avoid trying to change workflows at the edges, because those changes also incur costs.
Yet when we talk about quality reporting, most quality reporting initiatives put the burden at the edge, and everyone reports nicely computed measures to the center. Instead of incurring costs at a few centralized hubs, providers at the edge are incurring pretty substantial costs (see "Cost to Primary Care Practices of Responding to Payer Requests for Quality and Performance Data" in the Annals of Family Medicine).
What if, instead of reporting the measures, we reported the values that went into the measurement using existing workflows? What if the centralized hubs were responsible for computing the measures based on the "raw data" received? Yes, the centralized hubs would need to do a lot more work, BUT even if that work were two or three orders of magnitude larger than it is today, the number of edge systems is 5 to 6 orders of magnitude larger than the number of central hubs. If you have 100,000 systems communicating with you, it's certainly in your best interest to make "your job" simpler and easier and reduce your costs. But if you are a centralized system, and "your job" also includes paying for 60% of healthcare costs, then you have a different economy to consider. The costs incurred at the edge don't impact you today, but they will indirectly impact your bottom line tomorrow.
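To make the orders-of-magnitude argument concrete, here is a back-of-the-envelope sketch in Python. All of the numbers are illustrative assumptions (not real healthcare figures): one hub, 100,000 edge systems, and a hub whose job is three orders of magnitude harder per system than an edge system's.

```python
# Hypothetical cost model: where should measure computation live?
# All numbers are illustrative assumptions, not real healthcare figures.

EDGE_SYSTEMS = 100_000  # provider systems at the edge
CENTRAL_HUBS = 1        # quality-reporting hub

edge_unit_cost = 1.0                  # assumed cost per edge system (arbitrary units)
hub_unit_cost = edge_unit_cost * 1_000  # assume the hub's job is 1000x harder

cost_compute_at_edge = EDGE_SYSTEMS * edge_unit_cost  # every edge system pays
cost_compute_at_hub = CENTRAL_HUBS * hub_unit_cost    # only the hub pays

print(cost_compute_at_edge)  # 100000.0
print(cost_compute_at_hub)   # 1000.0
```

Even with the hub doing work three orders of magnitude larger, the total system cost of centralizing the computation is a hundredth of the cost of pushing it to every edge system.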
The HL7 QRDA specification goes a long way towards relating the data used to compute quality measures back to the data used in Electronic Medical Record systems. However, it still requires more effort at the edge than some other approaches, as it still requires computation at the edge. It also needs to be built upon a foundation that is designed for quality reporting rather than clinical documentation.
The HL7 eQMF specification strikes at the problem from a different angle and takes a slightly different approach. This specification should be able to:
a) Define the raw data needed to compute measures,
b) Specify how the measures themselves are computed.
If it performs both of these functions, then electronic medical record systems should be able to report the "data they have" to systems that can compute quality measures. This should result in a far lower implementation burden than trying to get thousands of different organizations to implement and report on these computations, and it will also help to stabilize the measures. The measures will all be computed the same way based on the raw data. Variations in how the measure is interpreted are eliminated or dramatically reduced. This should result in even better (or at least more consistent) measures.
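The "compute centrally from raw data" idea can be sketched in a few lines of Python. The measure definition, field names, and threshold below are all hypothetical stand-ins (not taken from eQMF or any real measure); the point is that the edge only reports observations, and the hub applies one shared definition to everyone's data.

```python
# A minimal sketch of central measure computation from edge-reported raw data.
# The measure, field names, and 8.0 threshold are hypothetical illustrations.

# Raw data as reported from the edge: one record per patient.
raw_records = [
    {"patient": "p1", "diabetic": True,  "hba1c": 6.8},
    {"patient": "p2", "diabetic": True,  "hba1c": 9.4},
    {"patient": "p3", "diabetic": False, "hba1c": None},
]

def compute_measure(records):
    """Apply one shared measure definition at the hub: the fraction of
    diabetic patients whose HbA1c result is under 8.0."""
    denominator = [r for r in records if r["diabetic"]]
    numerator = [r for r in denominator
                 if r["hba1c"] is not None and r["hba1c"] < 8.0]
    return len(numerator) / len(denominator) if denominator else 0.0

print(compute_measure(raw_records))  # 0.5
```

Because `compute_measure` lives only at the hub, changing or reinterpreting the measure touches one system instead of 100,000 — which is exactly the stabilization argument above.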
IHE has developed a profile for Care Management that could readily support the reporting of the raw data (ok, so it is HL7 Version 3, SOAP and Web Services based, but that IS another discussion). The missing specification in that profile is the one that tells it what data needs to be reported. That could easily be eQMF. I live in hope.