"Per ONC guidance, the requirement for displaying structured data and vocabulary coded values in human readable form requires that the received XML (CCD or CCR) be rendered in some way which does not display the raw XML to the user. In addition, the standardized text associated with the vocabulary coded values must be displayed to the user. There is no requirement that the actual coded values be displayed to the user, however, the Vendor may choose to do so. The Vendor may also choose to display locally defined text descriptions of the vocabulary codes, however, the standardized text must always be displayed."The key problem with this paragraph is that it misunderstands the use of controlled vocabularies like SNOMED CT, ICD-9-CM, RxNORM and LOINC. There is no “standardized text” associated with these codes when used in clinical practice, or perhaps it might be better to say that there are a lot of different choices for which text you could use. What is standardized is the code, and the text helps to supply the meaning for the code. But for any given code, there isn’t only one text that could generate the specified code.
In addition, many provider organizations have developed interface vocabularies that map into these coding systems. By requiring specified text in the narrative, the regulation would be changing not only how providers exchange information, but also the words they use to provide care, and therefore the way they perform it. In my experience, that is the surest way to get providers to reject an implementation. The text they use has been developed over many years of experience providing care, and is what they find best suited for that purpose.
The point of codes is that they allow computers to talk to each other in ways computers understand, without dealing with all the variations that can appear in narrative. The problem ONC sees is that providing the codes as "level 3" coded entries in a CCD is thought to be too complex for some to implement. After four years of testing this in HITSP and IHE, and seeing many commercial and open source implementations that support "level 3" CCDs, I have to argue that it isn't that complicated, and that if you are going to require the "codes" or specific text values in the CCD, it's better to do it right.
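For readers who haven't seen one, a "level 3" entry looks roughly like this (simplified: required templateIds, ids, and statusCode are omitted, so treat it as a sketch rather than a conformant fragment). The narrative the clinician reads and the coded entry travel together, linked by a reference:

  <section>
    <title>Problems</title>
    <!-- Level 2: the human-readable narrative -->
    <text>
      <content ID="problem-1">Pneumonia</content>
    </text>
    <!-- Level 3: the coded entry, pointing back at the narrative -->
    <entry>
      <observation classCode="OBS" moodCode="EVN">
        <code code="55607006" codeSystem="2.16.840.1.113883.6.96"
              displayName="Problem"/>
        <value xsi:type="CD" code="233604007"
               codeSystem="2.16.840.1.113883.6.96"
               displayName="Pneumonia">
          <originalText>
            <reference value="#problem-1"/>
          </originalText>
        </value>
      </observation>
    </entry>
  </section>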
The notion that the "code" or a controlled term needs to appear in the text comes from a different use case: claims attachments. In attachments, the codes are used to adjudicate payment decisions, not to guide treatment. Providers can probably tolerate that sort of requirement when it comes to getting paid, but not when it affects how they provide care. Frankly, I think the claims attachment work needs to be refreshed to reflect what we've learned about communicating patient summaries and clinical documentation over the last three years. It would be better if the claims process simply reused what we are requiring for clinical care. The idea that we pay for the care provided, and not for some code designed specifically for payment, is one I've discussed before on this blog.
Even so, if ONC doesn't want to require the use of "level 3" CCDs, I have to wonder what they will do with CCR, which doesn't have a standardized text representation of the data. Using that standard, you would essentially be providing the code anyway.
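As a rough sketch (from memory of the ASTM CCR schema, so check element names against the standard), a CCR problem carries its text and its code side by side in the same Description; there is no separate "standardized" narrative at all:

  <Problem>
    <Description>
      <Text>Pneumonia</Text>
      <Code>
        <Value>233604007</Value>
        <CodingSystem>SNOMED CT</CodingSystem>
      </Code>
    </Description>
  </Problem>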
The suggestion from Structured Documents was to build on other testing that NIST is already conducting. To show that the narrative in a "level 2" CCD is supported by the appropriate vocabulary, you need to show A) that the text produced in the CCD comes from the appropriate list in the EHR system, and B) that the EHR uses one of the coded vocabularies required by the IFR to maintain that list.
It was a very productive meeting, and I have to give NIST a great deal of credit for reaching out to the SDOs to answer some of these implementation questions. I would not want to be stuck between the rock and the hard place that they are in. They have regulatory text from three different places to deal with, and aren't in a position to change any of it to make sense. They just have to make it all work. I continue to be impressed with the work NIST has done, and with their willingness to reach out, ask tough questions, and keep working until they get solutions.
I have to agree with you on this, Keith (though then again, I rarely find myself in disagreement with you). NIST has done a great job so far in reaching out, and has made a good effort at making this all work the best possible way. Given what they have been handed, making it all work is no easy task. My hat is off to the folks at NIST working on this. I am glad to see the openness and the interest in getting this done right, even though what they have been handed is less than perfect. I hope to provide further assistance myself.
Corey Spears
Well, I do disagree with Keith (usually for sport), but I do have to commend him on this point. In particular, once we get people up to speed on using SNOMED CT compositional grammar and sending proper post-coordinated expressions, there isn't going to be anything close to the single "standardized text" NIST is talking about. It will, of course, always be possible to generate various "human readable" formulations, some of which will hide some of the transmitted detail, since humans can usually infer it from context (e.g., under the family history section, if you say "adenocarcinoma of the sigmoid colon, paternal uncle", most of us can "get it").
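For instance, that family history concept might travel as a post-coordinated expression along these lines (codes quoted from memory, so verify them against a current SNOMED CT release; the paternal uncle relationship would actually ride on the CDA subject participant, which I've left out):

  <!-- Roughly: disease, with associated morphology adenocarcinoma
       and finding site sigmoid colon -->
  <value xsi:type="CD" code="64572001"
         codeSystem="2.16.840.1.113883.6.96" displayName="Disease">
    <qualifier>
      <name code="116676008" codeSystem="2.16.840.1.113883.6.96"
            displayName="Associated morphology"/>
      <value code="35917007" codeSystem="2.16.840.1.113883.6.96"
             displayName="Adenocarcinoma"/>
    </qualifier>
    <qualifier>
      <name code="363698007" codeSystem="2.16.840.1.113883.6.96"
            displayName="Finding site"/>
      <value code="60184004" codeSystem="2.16.840.1.113883.6.96"
             displayName="Sigmoid colon structure"/>
    </qualifier>
  </value>

No single "standardized text" falls out of an expression like that; any number of renderings are faithful to it.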
This is the job of a terminology server, NOT something that belongs in a conformance criterion. The meaning of the code is the code, not the human-readable text (which is potentially confusing, e.g., "dilation of the cervix" can be a procedure, a finding, an observation type, a measurement, etc.).
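To put the ambiguity concretely: the same display string can front entirely different concepts, and only the code and its context disambiguate. The codes below are hypothetical placeholders, not real SNOMED CT identifiers:

  <!-- Identical text, different meanings; codes are placeholders -->
  <procedure classCode="PROC" moodCode="EVN">
    <code code="PROC-0001" displayName="Dilation of cervix"/>
  </procedure>
  <observation classCode="OBS" moodCode="EVN">
    <code code="FIND-0002" displayName="Dilation of cervix"/>
  </observation>

Resolving which one is meant is terminology-server work, not something a display-text conformance test can accomplish.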