This will be, in many ways, a challenging certification requirement to implement. As currently worded below, it appears as if you can only test it after the fact.
§ 170.212 Performance standards for health information technology.

The Secretary adopts the following performance standards for health information technology:

(a) EHR technology must successfully electronically process documents validly formatted in accordance with the standard specified in § 170.205(a)(4) no less than 95% of the time.

How would you measure or certify for this?
There are a couple of challenges. The first is the definition of "success". What does it mean to successfully process a CDA document? There are several key processes specified in Meaningful Use. One of them is to reconcile and incorporate the content of the document for problems, medications, and medication allergies. But there is also a need to incorporate other material, such as laboratory results or narrative sections, into the EHR. A more careful definition of success is certainly needed here; one possible starting point is sketched below.
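As a minimal sketch of how "success" might be operationalized, consider the Python below. It is a hypothetical starting point, not anything the rule specifies: it counts a document as successfully processed only if the three reconciliation sections can be located by their standard LOINC section codes, and each carries at least one structured entry.

```python
# A minimal sketch of one possible operational definition of "success":
# the receiver can locate the three sections Meaningful Use asks it to
# reconcile. Real criteria would need to go much further (entry-level
# incorporation, narrative sections, laboratory results, and so on).
from lxml import etree

NS = {"cda": "urn:hl7-org:v3"}

# Standard LOINC codes for the C-CDA sections involved in reconciliation.
RECONCILIATION_SECTIONS = {
    "11450-4": "Problem List",
    "10160-0": "Medications",
    "48765-2": "Allergies",
}

def processed_successfully(path: str) -> bool:
    """One candidate test: each reconciliation section is present and
    contains at least one structured entry."""
    try:
        doc = etree.parse(path)
    except etree.XMLSyntaxError:
        return False  # not even well-formed XML
    for loinc in RECONCILIATION_SECTIONS:
        sections = doc.xpath(
            f"//cda:section[cda:code/@code='{loinc}']", namespaces=NS)
        if not sections:
            return False  # required section absent
        # <entry> elements carry the machine-readable content.
        if not sections[0].xpath("cda:entry", namespaces=NS):
            return False  # narrative only, nothing to incorporate
    return True
```

Even this trivial test embeds debatable choices (does a narrative-only section count as processed?), which is exactly the problem.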
Another key issue is the measure of validity. How would validity be determined? I've often mentioned that for Meaningful Use, the current TTT validator requires conformance to all required capabilities of Meaningful Use. That is to say, it verifies the maximal CDA that a product must be capable of sending. But other provisions of Meaningful Use require the product to offer the physician the ability to customize what is sent, so as to include only relevant data in a summary. Those provisions enable a physician to create a valid CCDA that will not pass validator testing because it doesn't demonstrate all required fields (e.g., there may not be any procedures recorded if none were done).

We must further distinguish cases where a medication is reported but no code is known, or a disease is reported but no code is present in the specified coding system. For example, suppose that Influenza Type A subtype H7N2 (a bird flu) becomes capable of infecting humans. Try to find a SNOMED CT code for it; you won't be able to. You'd have to live with a supertype (Influenza Type A subtype H7). And what about the case where a medication can be identified, but the dose and frequency aren't available? All of these variants are possible, and we'd have to define what it means to successfully import each of them; a sketch of these variants follows.
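To make that concrete, here is a sketch (my own enumeration, using the standard CDA data-type conventions of nullFlavor and originalText) of the coding variants a test definition would have to account for:

```python
# Classify a CDA <code> element by how completely it is coded. The
# variant names are illustrative; the nullFlavor/originalText handling
# follows the CDA data-type conventions.
from enum import Enum
from lxml import etree

NS = {"cda": "urn:hl7-org:v3"}
SNOMED_CT = "2.16.840.1.113883.6.96"  # SNOMED CT code system OID

class CodingVariant(Enum):
    FULLY_CODED = "code present in the required code system"
    OTHER_SYSTEM = "coded, but not in the required code system"
    TEXT_ONLY = "nullFlavor=OTH with only originalText available"
    UNKNOWN = "nullFlavor=UNK or no usable content"

def classify(code_el, required_system=SNOMED_CT):
    if code_el.get("code") is not None:
        if code_el.get("codeSystem") == required_system:
            return CodingVariant.FULLY_CODED
        return CodingVariant.OTHER_SYSTEM
    if (code_el.get("nullFlavor") == "OTH"
            and code_el.find("cda:originalText", NS) is not None):
        return CodingVariant.TEXT_ONLY
    return CodingVariant.UNKNOWN
```

A certification test would have to say, for each variant, what "successfully import" means: store the text? map the supertype? flag it for human review?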
There are two ways to approach this offhand: statistically, or through detailed analysis. The former simply says "get a large number of varying but valid samples" and requires the EHR to successfully demonstrate incorporation of each. It's easy, and it gets you to some degree of reliability quickly, but it will certainly miss some non-obvious cases that someone, somewhere, will eventually program into a system. The second method is detailed analysis. This is more difficult, because it requires someone experienced enough with the standard to determine the legal range of possible variation, yet not so entrenched in it as to miss the sources of variation that a practitioner well trained in best practices would simply ignore, because they've had that variation trained out of them.
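For the statistical route, even the arithmetic of "no less than 95%" needs care, because a small run of passing samples proves little. A minimal sketch (my framing, not anything in the rule) of how a certifier might decide whether a test run actually demonstrates the 95% floor:

```python
# Decide whether observing `successes` out of `n` samples demonstrates
# a true success rate of at least `floor`, using the lower bound of a
# two-sided 95% Wilson score interval (z = 1.96).
from math import sqrt

def demonstrates_rate(successes, n, floor=0.95, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom >= floor

# For example, 197 of 200 (98.5% observed) clears the bound, while
# 96 of 100 (96% observed) does not: a sample that small can't
# reliably distinguish a 96% rate from a 94% one.
```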
Based on Meaningful Use statistics, you could probably identify a small number of products producing CCDAs that represent 95% of the physicians already using a CEHRT, and build a test set from those vendors. But who's going to build that set, and what's in it for the vendors to contribute to that effort? Many vendors are hesitant to give out examples of what their product can do without assurances that the material won't be used in ways that allow other vendors to compete. This is the reality of being in business.
One of the biggest values of IHE is that it provides an environment where vendors can do that sort of testing and gather that sort of test data on profiles they care about, in a way that ensures those samples are protected by non-disclosure agreements. What happens between vendors at Connectathon stays there; it's part of the agreement we all sign when we show up, and that includes how we deal with the testing data.
But certification testing isn't done in that sort of environment. So the data either has to be gathered ahead of time or developed in a painstaking manner.
I'm not against the idea, but the mechanism by which it is implemented needs quite a bit more description before I'd be willing to say that this regulation is ready for prime time.
Some of the SHARP work, specifically the SMART C-CDA Scorecard, could be of assistance here. I'm still ruminating on this one.