Tuesday, October 27, 2020

Models of Clinical Decision Support

This is mostly a thinking-out-loud piece for me to wrap my head around some thoughts related to the work that I've done on Clinical Decision Support, how that relates to work done recently for The SANER Project, and how to connect all of that to eCR and other public health reporting efforts. My first step in this journey is to review what I've already written, to see how my approach to CDS (and that of standards) has evolved over time.

Some of the more interesting articles I've written on this topic include:

Most relevant to this discussion is the three-legged stool of instance data, world knowledge, and computational algorithms from my first article.
The biggest difference among implementations of clinical decision support is where the algorithm gets executed, and quite a bit of effort has been expended in this arena.  I originally described this by referencing the "curly brace" problem of Arden Syntax, which captures the challenge of integrating the algorithm that computes a response with a way of accessing instance data.

Here are the key principles:
  1. Separate data collection from computation. (Instance Data from Algorithms)
  2. Use declarative forms that can be turned into efficient computation (Algorithms).
  3. Separate inputs from outputs (Instance Data, Actions and Alerts).
The tricky bit, for which I don't HAVE a principle, is how to identify the essential instance data.  Honestly, that is largely driven by domain knowledge, and this is where MUCH of the nuance of implementing clinical decision support comes into play.
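
To make those principles a little more concrete, here's a rough Python sketch; everything in it (the data elements, the rule, and the alert) is invented purely for illustration, not taken from any standard:

    from dataclasses import dataclass
    from typing import Callable, List

    # Instance data: collected by the host system, independent of any rule.
    @dataclass
    class Observations:
        temperature_c: float
        heart_rate: int

    # Alert/action: the output, kept separate from both the data and the rule.
    @dataclass
    class Alert:
        message: str

    # Algorithm: a declarative-ish rule expressed as data (criterion + message),
    # so it can be managed and updated without touching the host system.
    @dataclass
    class Rule:
        criterion: Callable[[Observations], bool]
        message: str

    FEVER_AND_TACHYCARDIA = Rule(
        criterion=lambda obs: obs.temperature_c >= 38.0 and obs.heart_rate > 100,
        message="Possible infection risk; consider further evaluation.",
    )

    def evaluate(rules: List[Rule], obs: Observations) -> List[Alert]:
        # Compute step: instance data in, alerts out.
        return [Alert(r.message) for r in rules if r.criterion(obs)]

    # The host system only needs to know how to collect Observations and
    # display Alerts; the rules can live (and change) somewhere else.
    print(evaluate([FEVER_AND_TACHYCARDIA], Observations(38.6, 112)))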

There are two main approaches to clinical decision support: Integrating it "inside" an information system that has access to the essential data, or moving the data to an information system that can efficiently compute a result.

The former operates on the assumption that if you have efficient access to the data, you can compute locally (where the data resides), and thus skip the need to separate instance data from the algorithms that implement knowledge.  The latter requires separating instance data from the algorithm to facilitate data movement.
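
A crude sketch of the two options, with an invented service URL and payload shape, just to show where the separation matters:

    import json
    import urllib.request

    # Option 1: compute locally, where the data resides.  The rule logic is
    # embedded in (or loaded by) the system that already holds the record.
    def evaluate_locally(record: dict) -> list:
        alerts = []
        if record.get("temperature_c", 0) >= 38.0:
            alerts.append("Fever noted")
        return alerts

    # Option 2: move the data to a decision support service that holds the
    # rules.  This only works if the instance data has been separated from
    # the algorithm so it can be packaged up and sent.
    def evaluate_remotely(record: dict, service_url: str) -> list:
        req = urllib.request.Request(
            service_url,
            data=json.dumps(record).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())  # service returns a list of alerts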

A large distinction between what SANER and Clinical Quality Measurement do and what the rest of Clinical Decision Support does comes down to the difference between systems supporting decisions based on population data (data in bulk) and systems making decisions at the level of an individual.

It largely boils down to a question of how to access data efficiently. Different approaches to clinical decision support each handle this in a slightly different way.
  • Quality Reporting Document Architecture (QRDA) defines a format to move data needed for quality measurement to a service that can evaluate measures.
  • Query Health used Health Quality Measure Format (HQMF) to move a query described in declarative form to a data source for local execution, and then move the results back to a service that can aggregate them across multiple sources.
  • HQMF itself has evolved from an HL7 Version 3 declarative form to one that is now largely based on the Clinical Quality Language (CQL), which is also a declarative language (and a lot easier to read).
  • Electronic Case Reporting (eCR) uses a trigger condition defined using the Reportable Condition Mapping Table (RCMT) value set to move a defined collection of data (as described in the eICR) from a data source to the Reportable Conditions Knowledge Management System (RCKMS), which can provide a reportability response including alerts, actions and information.  RCKMS is a clinical decision support service.
  • CDS Hooks defines hooks that can be triggered by an EHR to move prefetch data to a decision support service using SMART on FHIR, which can then report back alerts, actions and other information as FHIR Resources.
  • SANER defines an example measure in a form that is represented by an initial query, followed by filtering of the results using FHIRPath, which may in turn result in subsequent queries and filtering (a rough sketch of that pattern follows this list).
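
Here's roughly what that last bullet looks like in practice.  The endpoint, the search, and the measure criteria are all invented for illustration, and I've approximated the FHIRPath filter in plain Python rather than handing the expression to a real FHIRPath engine:

    import requests

    FHIR_BASE = "https://hospital.example.org/fhir"   # hypothetical endpoint

    # Initial query: pull candidate resources from the data source.
    bundle = requests.get(
        f"{FHIR_BASE}/Encounter",
        params={"status": "in-progress", "_count": 100},
        headers={"Accept": "application/fhir+json"},
    ).json()

    # Filtering: a measure would express this as a FHIRPath expression, e.g.
    #   Encounter.where(class.code = 'IMP' or class.code = 'ACUTE')
    # Here it's approximated in plain Python for illustration.
    def is_inpatient(encounter: dict) -> bool:
        return encounter.get("class", {}).get("code") in ("IMP", "ACUTE")

    encounters = [e["resource"] for e in bundle.get("entry", [])]
    inpatients = [e for e in encounters if is_inpatient(e)]

    # Subsequent queries and filtering might then look at, say, ventilator
    # use among those encounters to fill in another measure population.
    print(len(inpatients))
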
One of the patterns that appears in many CDS specifications is about optimization of data flow.  There's an initial signal evaluated locally, which is used to selectively identify the need for CDS computation.  That signal is represented by a trigger event or condition, driven by either workflow, or a combination of workflow and instance data.  One example of a trigger event is the creation of a resource (row, database record, chart entry, FHIR resource, et cetera) matching a coded criterion (e.g., as in RCMT used with RCKMS for eCR).
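
That kind of trigger can be a very cheap local check.  Something like the sketch below (the value set contents and the reporting function are placeholders, not anything pulled from RCMT or RCKMS) can run wherever resources get created, and only kicks off the more expensive collect/compute steps on a match:

    # A tiny stand-in for an RCMT-style value set of reportable condition codes.
    REPORTABLE_CONDITION_CODES = {
        "840539006",   # SNOMED CT code for COVID-19 (example entry)
        # ... the real value set contains many more codes ...
    }

    def collect_and_submit_case_report(resource: dict) -> None:
        # Placeholder for assembling the eICR data set and sending it to the
        # reportability service; the details depend on the local integration.
        print("would report:", resource.get("id"))

    def on_resource_created(resource: dict) -> None:
        # Trigger: fires whenever a Condition resource is written.
        if resource.get("resourceType") != "Condition":
            return
        codes = {
            c.get("code")
            for c in resource.get("code", {}).get("coding", [])
        }
        if codes & REPORTABLE_CONDITION_CODES:
            # Only now do we gather data and call the decision support service.
            collect_and_submit_case_report(resource)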

The trigger/collect/compute pattern is pervasive not just in clinical decision support, but in other (non-clinical) decision support problems that deal with complex domain knowledge.  It has uses in natural language processing software, where it has been used for grammar correction, e.g., to detect a linguistic pattern, evaluate it against domain- (and language-) specific rules, and then suggest alternatives (or verify correctness).  The goals of this approach are two-fold: optimization of integration and data flow, and separation of CDS logic (and management thereof) from system implementation.

Population-based clinical decision support is often expensive because it may require evaluation of thousands (or hundreds of thousands) of data records, and the more that can be done to reduce the number of records that need to be moved, the more efficiently and quickly such evaluations can be performed.  FHIR Bulk Data Access (a.k.a. Flat FHIR) is an approach to moving large quantities of data to support population health management activities.  It further accentuates the need for optimization of data movement to support population management.
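
For a sense of what that data movement looks like, here's a rough Python sketch of a Bulk Data export.  The server and patient group are hypothetical; the kick-off/poll pattern and headers come from the Bulk Data Access specification, but I haven't tried to handle errors or authorization:

    import time
    import requests

    FHIR_BASE = "https://hospital.example.org/fhir"   # hypothetical server

    # Kick off an asynchronous export for a (hypothetical) group of patients.
    kickoff = requests.get(
        f"{FHIR_BASE}/Group/covid-inpatients/$export",
        params={"_type": "Patient,Encounter,Observation",
                "_since": "2020-10-01T00:00:00Z"},
        headers={"Accept": "application/fhir+json",
                 "Prefer": "respond-async"},
    )
    status_url = kickoff.headers["Content-Location"]   # returned with 202 Accepted

    # Poll until the server finishes preparing the NDJSON files.
    while True:
        status = requests.get(status_url, headers={"Accept": "application/json"})
        if status.status_code == 200:
            manifest = status.json()
            break
        time.sleep(int(status.headers.get("Retry-After", 30)))

    # manifest["output"] lists NDJSON file URLs, one entry per resource type;
    # every record moved here is a record somebody has to store and process.
    for entry in manifest["output"]:
        print(entry["type"], entry["url"])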

As I think again through all of what has gone before, one of the things missing from my three-legged model is the notion of "triggers", and I think these deserve further exploration.  What is a trigger event?  In standards this is nominally a workflow state.  From a CDS perspective, it's the combination of a workflow state, associated with a resource matching specific criteria.  The criteria are generally pretty straightforward: this kind of thing, with that kind of value, having a measurement in this range, in this time frame.  And in fact, the workflow state is almost irrelevant -- but it is usually essential for determining the best time to evaluate a trigger event.  Consider eCR, for example: you probably don't want to trigger a reportability request until after the clinician has entered all the essential data that you might want to compute with; at the same time, you don't want to wait until after the visit is over to provide a response.  Commonly this sort of thing might be triggered "prior to signing the chart", given that you want to make sure that the data is complete.  However, given that the results may influence the course of treatment or management, a more ideal time might be just before creation of the plan of care for the patient.
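
Spelling that out a little, the shape of a trigger criterion is something like the sketch below; none of these field names come from any particular standard, and the LOINC code is just an example:

    from dataclasses import dataclass
    from datetime import timedelta
    from typing import Optional, Tuple

    @dataclass
    class TriggerCriterion:
        # "This kind of thing, with that kind of value, in this range, in this time frame."
        resource_type: str                          # this kind of thing
        code: str                                   # with that kind of value
        value_range: Optional[Tuple[float, float]]  # having a measurement in this range
        look_back: timedelta                        # in this time frame

    @dataclass
    class TriggerEvent:
        # The workflow state mostly tells us *when* to evaluate the criterion.
        workflow_state: str          # e.g. "prior-to-signing" or "before-plan-of-care"
        criterion: TriggerCriterion

    FEVER_TRIGGER = TriggerEvent(
        workflow_state="before-plan-of-care",
        criterion=TriggerCriterion(
            resource_type="Observation",
            code="8310-5",                 # LOINC code for body temperature
            value_range=(38.0, 43.0),      # measurement between 38 and 43 degrees C
            look_back=timedelta(hours=24),
        ),
    )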

A few years back I worked on a project demonstrating the use of "Public Health Alerts" using the Infobutton profile and a web service created by Johns Hopkins APL that integrated with an EHR system developed by my then-employer.  We actually used two different trigger events, the first one being after "Reason for Visit" was known, and the second one just before the physical exam, after all symptoms and vital signs had been recorded (if I remember correctly).  This was helpful, because the first query was relatively thin on data, but could guide data collection efforts if there was a positive hit, and the second one could pick up with a better data set to capture anything that the first might have missed.

I'm not done thinking all this through, but at least I've got a first start.  I'm sure I'll write more on this later.

Monday, October 5, 2020

HL7 FHIR SANER Ballot Signup Closing October 19

I sent the following e-mail out to a subset of the SANER IG distribution list we maintain internally for folks who have been involved in development of The SANER Project.  I didn't bother to send it to those who work for organizations that had already signed up to participate in the ballot.  For those of you who have been following from afar, this is an opportunity for you to look more closely at what we've been doing for the past 8 months, and contribute your input!


     Keith

As someone who has expressed interest in, or participated in, the development of the HL7 FHIR Situational Awareness for Novel Epidemic Response Implementation Guide, we are letting you know that this document will soon be published for ballot.

You will need to sign up BEFORE October 19th, 2020 to be included in the voting pool, should you have an interest in voting on this implementation guide in the next ballot cycle.

To sign up to participate, go to http://www.hl7.org/ctl.cfm?action=ballots.home.  If you are an HL7 Voting Member for your organization, you will need to log in to see the ballots that you can vote on.  

If you are not a member, you can participate in an HL7 Ballot pool by creating an HL7 Profile and paying applicable administration fees (see http://www.hl7.org/documentcenter/public/ballots/2021JAN/Announcements/NonMember%20Participation%20in%20HL7%20Ballots%20Instructions.pdf for details).

Thank you all for your contributions; we have accomplished a tremendous amount of work over the last 8 months, and we hope to see your comments on this implementation guide.  Feel free to pass this information along to others you think should participate in voting on this implementation guide.

Keith W. Boone