Wednesday, February 22, 2012

Query Health and Quality Measurement

Query Health has a great deal of promise to improve the ability of public health and clinical researchers to access clinical data.  And of course, being based on HQMF, much of its promotion centers on what it will do for quality measures.  As Dr. Michael Buck and Rich Elmore pen in their announcement (see the bottom half) with regard to quality measures:
 "... the cycle time [for quality measure development] could go from years to days"
Whoa, Nelly.  Days? 

From a technical perspective, I'd say months, but it is still a dramatic improvement.  What Query Health will do is enable providers to access their data to measure performance quite easily.  But delivering a quality measure is much more than being able to access the data and computing a score.  Query Health may deliver vast improvements there, but it still won't address the non-IT issues.

One of the major challenges in Quality Measurement is how to deal with "exceptions" and "exclusions".  Exclusions are cases where the measure doesn't apply even though it might otherwise seem relevant (e.g., patients with certain types of conditions that don't really fit the measure criteria).  Exceptions are cases where, even though everything else fits, there is a good reason not to count that situation against quality (e.g., the patient refused treatment) because it is not something that the provider can control.

The collection of situations which may need to be excluded varies depending on the measure.  And applying these to the measure means that providers will likely need to change their workflow to capture the reason why the measure may not apply.  To compute the measure, the data has to be there.

You can still compute the measure without exception or exclusion data.  It just results in a different value.  Failures to accurately capture the necessary data won't necessarily be apparent in the computed result.  This isn't really a technical problem.  Give the computer the data, and it can compute the result.  Give it inaccurate data, and it will compute an inaccurate result.  Query Health doesn't have any special protections against GIGO (garbage in, garbage out).
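To make that concrete, here is a small sketch of a proportion-style measure.  The patient records, field names, and the measure itself are made up for illustration (this is not Query Health's or HQMF's actual data model), but it shows how the same data yields different scores depending on whether exclusion and exception flags were captured:

```python
# Sketch of a proportion measure: numerator / denominator.
# Records, field names, and flag semantics are hypothetical; real
# measures define these populations formally (e.g., in HQMF).

def measure_score(patients, honor_flags=True):
    """Fraction of applicable patients for whom the measure was met."""
    numerator = 0
    denominator = 0
    for p in patients:
        if not p["eligible"]:
            continue
        if honor_flags and p.get("excluded"):
            continue  # measure doesn't apply (e.g., contraindicating condition)
        if p["met"]:
            numerator += 1
            denominator += 1
        elif honor_flags and p.get("exception"):
            continue  # valid reason not to count this against the provider
        else:
            denominator += 1
    return numerator / denominator

patients = [
    {"eligible": True,  "met": True},
    {"eligible": True,  "met": False, "exception": True},  # patient refused
    {"eligible": True,  "met": False, "excluded": True},   # doesn't apply
    {"eligible": True,  "met": False},
    {"eligible": False, "met": False},
]

print(measure_score(patients))                     # 0.5
print(measure_score(patients, honor_flags=False))  # 0.25
```

Same patients, same arithmetic, very different scores: if the refusal and the exclusion were never recorded, the provider looks half as good, and nothing in the output hints that data is missing.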

When you implement a measure, you need to avoid GIGO.  Doing so may require changes in workflow to capture the necessary information.  Changes in workflow don't happen in days, and that is why I say months.

The Clinical Quality Measure workgroup is looking at improving the overall process for measure development.  Exceptions and exclusions are a critical part of that process because of the challenges they pose for implementation.  I don't think the solution here is a technical one.  Query Health will help, but it isn't a silver bullet.
  -- Keith

1 comment:

  1. I have just begun to evaluate Query Health, so my comments may not be accurate. However, I have taken a look at hQuery, which is, as I understand it, the current distributed query mechanism underlying Query Health.

    Computer science has been down the distributed query road before, most notably with Mike Stonebraker's Mariposa project at Berkeley in the 1990s. Mariposa was the successor to Postgres and its principal objective was to solve the distributed query problem through the entire data management stack--query parser, query optimizer, access method manager, and storage manager. The map/reduce approach taken by hQuery only addresses part of the stack--access method and storage. While the map/reduce approach has outstanding scaling properties, without a rich query language AND query optimization I don't see how this approach will work for what Query Health is trying to achieve. There is another issue with distributed data management a la Mariposa. The distributed query approach does not adequately support real-time decision support. This would seem particularly important for public health and bio-surveillance. I think the approach to look at is not distributed query but rather a real-time distributed data/compute grid married in an asynchronous pub/sub way to any subscribing columnar database. By the way, this is the dominant approach used by global financial trading operations that need real-time risk analytics and longer term research databases. Nobody is using distributed queries.
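    For readers unfamiliar with the pattern the comment describes, here is a minimal sketch of map/reduce-style distributed counting (the record layout and sites are made up; this is not hQuery's actual API). Each site maps its own records to aggregate counts, and only those counts, never the raw records, are merged centrally:

    ```python
    # Illustrative map/reduce counting: partial counts per site,
    # merged by a reduce step. Fields and sites are hypothetical.
    from collections import Counter

    def map_records(records):
        """Runs at each data-holding site; emits only aggregates."""
        counts = Counter()
        for r in records:
            if r["diagnosis"] == "diabetes":
                counts["diabetic"] += 1
            counts["total"] += 1
        return counts

    def reduce_counts(partials):
        """Runs centrally; merges partial counts from every site."""
        merged = Counter()
        for c in partials:
            merged.update(c)
        return merged

    site_a = [{"diagnosis": "diabetes"}, {"diagnosis": "asthma"}]
    site_b = [{"diagnosis": "diabetes"}, {"diagnosis": "diabetes"}]

    result = reduce_counts([map_records(site_a), map_records(site_b)])
    print(result["diabetic"], result["total"])  # 3 4
    ```

    The commenter's point is that this pattern handles aggregation well, but provides no query language or optimizer above it.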
