Tuesday, June 7, 2022

Digital Transformation is not computerizing paper forms

This is very related to what we are trying to accomplish with CREDS, and so is sort of a follow-up to my previous post on CREDS, but it is more broadly scoped.

It starts here with this tweet, but not really.  It actually starts with all of the work that people have had to do to mediate between the data in the EHR and measurement of what that data actually means.

It boils down simply to how one asks questions in order to perform a measurement.

In the early days of quality measurement and/or registry reporting, it was not uncommon to see questions like this:

  • Does this patient qualify for XYZ treatment?
  • Has the patient been prescribed/given XYZ treatment?
  • Is XYZ treatment contraindicated and/or refused?
And the reporting formats would be structured in Yes/No form in some sort of XML or other format.

It's gotten to the point that some of these questions and their possible answers have been encoded in LOINC.  Now, for a survey or assessment instrument, that use is fine.

But for most registry reporting or quality measurement activities, this should really be handled in a different fashion.  Here's a start, which can be written in Clinical Quality Language, or even more simply as a FHIRPath expression:

  • Does this patient have any of the [XYZ Indications ValueSet]?
  • Has this patient been prescribed/given [XYZ Treatment ValueSet]?
  • Does this patient have [XYZ Treatment Contraindications ValueSet]?
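The restructured questions above amount to set-membership tests over codes already recorded in the patient's chart, rather than yes/no attestations.  A minimal sketch of that idea, in Python rather than CQL or FHIRPath, where the value set contents, code systems, and patient codes are all made-up placeholders:

```python
# Hypothetical value sets: each is just a set of (system, code) pairs.
# Real value sets would be drawn from published code systems and maintained
# in a terminology service; these codes are illustrative only.
XYZ_INDICATIONS = {("http://snomed.info/sct", "111111")}
XYZ_TREATMENTS = {("http://www.nlm.nih.gov/research/umls/rxnorm", "222222")}
XYZ_CONTRAINDICATIONS = {("http://snomed.info/sct", "333333")}

def has_any(recorded_codes, value_set):
    """Does any recorded (system, code) pair fall in the value set?"""
    return any(code in value_set for code in recorded_codes)

# Hypothetical patient record: codes pulled from the patient's conditions
# and medication orders, not from a checkbox on a form.
patient_conditions = {("http://snomed.info/sct", "111111")}
patient_medications = set()

qualifies = has_any(patient_conditions, XYZ_INDICATIONS)
treated = has_any(patient_medications, XYZ_TREATMENTS)
contraindicated = has_any(patient_conditions, XYZ_CONTRAINDICATIONS)
```

The point of the sketch is that the answers fall out of the recorded evidence plus shared value sets, so any system running the same logic over the same data gets the same answers.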

The theme in this restructuring is: ask the provider what they know and did, and define the logic to compute it.  But even better is to ask the provider what they know and did (and recorded), and have the quality measure reviewer actually do the computation for the quality measure.

Apply normal workflows to keep track of what is learned and done; these shouldn't be interrupted.  I can recall a case where a checkbox was added to an EHR screen just to get a clinician to acknowledge that they had reviewed and/or reconciled the medication list.

Building these value sets and the logic to evaluate them is hard.  Doing it so that it is interoperable across systems is also hard.  But honestly, the cost per provider is so much less to do it once and do it well than to have hundreds or thousands of systems each do it themselves, which is far more costly, likely to introduce differences in the computation, and likely to produce variability in the reported values.

Stop asking for the answers, start asking for the existing evidence to get the answers you need consistently.  And if you cannot get the existing evidence, ask yourself why before asking that it be added to the normal workflow.

   Keith
