Pages

Monday, October 31, 2011

Declarative vs. Procedural and QueryHealth

A couple of weeks ago at the S&I Framework Face to Face, the Query Health Technical workgroup was asked to pick an implementation to focus on.  I liked hQuery, but one of its challenges is that its queries are defined procedurally (in JavaScript).  In chatting with Marc Hadley, one of the developers, it became pretty clear that they are using "template" programming to generate the procedural code.
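The template trick might look something like this (a hypothetical sketch of the idea, not the actual hQuery internals): a declarative description of the criterion is fed through a string template that emits the procedural JavaScript.

```javascript
// Hypothetical sketch: a declarative criterion object, run through a
// string template to produce the procedural code hQuery would execute.
// The object shape and generatePredicate are illustrative assumptions.
const criterion = { property: "age", operator: ">=", value: 64 };

function generatePredicate(c) {
  // "Template" programming: the JavaScript is assembled from the
  // declarative description rather than written by hand.
  return `function population(patient) {\n` +
         `  return (patient.${c.property}(start) ${c.operator} ${c.value});\n` +
         `}`;
}

console.log(generatePredicate(criterion));
```

The point is that the JavaScript is an output of the process; the declarative object is what actually carries the meaning of the query.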

In my update on Query Health from the Face to Face, Rich Elmore asks:
Would be interested in your thoughts on the differences between procedural and declarative approaches as outlined in the hQuery Summer Concert presentation (see charts 15 - 16 http://wiki.siframework.org/file/view/hQuery+Summer+Concert+Presentation.pdf )
One of the tasks I took on last week was to start looking at how to implement the query declaratively, using the HQMF format.  The slides that Rich is referring to compare this JavaScript:

function population(patient) {
  return (patient.age(start)>=64);
}

To this chunk of (reported to be) HQMF:

<entry typeCode="DRIV">
  <observation classCode="OBS" moodCode="EVN.CRT"
    isCriterionInd="true">

    <id root="4AAEF95D-DCC6-459C-839C-C820DF310D60"/>
    <code code="ASSERTION" codeSystem="2.16.840.1.113883.5.4"/>
    <value xsi:type="CD" code="IPP"
      codeSystem="2.16.840.1.113883.5.1063"
      codeSystemName="HL7 Observation Value"

      displayName="Initial Patient Population"/>
    <sourceOf typeCode="PRCN">
      <conjunctionCode code="AND"/>
        <act classCode="ACT" moodCode="EVN"
          isCriterionInd="true">

          <templateId root="2.16.840.1.113883.3.560.1.25"/>
          <id root="52A541D7-9C22-4633-8AEC-389611894672"/>
          <code code="45970-1" displayName="Demographics"
            codeSystem="2.16.840.1.113883.6.1"/>
          <sourceOf typeCode="COMP">
            <observation classCode="OBS"
              moodCode="EVN" isCriterionInd="true">

              <code code="2.16.840.1.113883.3.464.0001.14"
                displayName="birth date HL7 Code List"/>
              <title>Patient characteristic: birth date</title>
              <sourceOf typeCode="SBS">
                <pauseQuantity xsi:type="IVL_PQ">
                  <low value="64" unit="a" inclusive="true"/>
                </pauseQuantity>
                <observation classCode="OBS" moodCode="EVN">
                  <id 
                   root="F8D5AD22-F49E-4181-B886-E5B12BEA8966"/>
                  <title>Measurement period</title>
                </observation>
              </sourceOf>
            </observation>
          </sourceOf>
        </act>
      </sourceOf>
  </observation>
</entry>


Admittedly, the XML is hideously ugly.  I took my own crack at generating it because frankly, I could barely understand what the above said.  And for good reason, because it doesn't actually follow HQMF.  For one, the data criteria go into the "Data Criteria Section".  


This is an example of the data criteria that the patient is 64 years old or older:
<observation classCode="OBS" moodCode="EVN.CRT">
  <id root="42e2aef0-73c4-11de-8a39-0800200c9a66"/>
  <code code="424144002" codeSystem="2.16.840.1.113883.6.96" 
    displayName="Age"/>
  <value xsi:type="IVL_PQ">
    <low value="64" unit="a" inclusive="true"/>
  </value>
  <participant typeCode="SBJ">
    <role classCode="PAT"/>
  </participant> 
</observation>

That's actually pretty easy (for me) to interpret, but it could be a little easier (or perhaps the right word is "greener").  I borrowed some of the Green C32 schema for results and recast it with a slightly different set of assumptions:


<resultCriteria>
  <resultID root="42e2aef0-73c4-11de-8a39-0800200c9a66"/>
  <resultType code="424144002" codeSystem="2.16.840.1.113883.6.96" displayName="Age"/>
  <resultValue>
    <physicalQuantityInterval unit="a" low="64" />
  </resultValue>
</resultCriteria>


Hmm, arguably not much better, and I don't see the patient referent.

Another thing that bugs me in the XML is that the identifiers for the data criteria are useless for content creators (they use OIDs or GUIDs, which are impossible to remember).  If you create a criterion for patients over 64 years of age in one place, and want to reference it from somewhere else WITHIN the same XML, ideally, you'd use ID and IDREF with meaningful names.

In the above example, it might appear as:

<resultCriteria ID='patientsOlderThan64'>
  <resultType code="424144002"
codeSystem="2.16.840.1.113883.6.96" displayName="Age"/>
  <resultValue>
    <physicalQuantityInterval unit="a" low="64" />
  </resultValue>
</resultCriteria>


I also looked at the same set using the S&I Framework Clinical Information Model and SQL.  Here is a sample query that selects all the PatientInformation rows that qualify:

SELECT * FROM PatientInformation WHERE DATEADD(Year, 64, DateOfBirth) <= GETDATE()

OK, so I cheated a little. I used the date of birth and computed the age. This is what most systems would probably need to do anyway. [Note: One alternative for this one would be for the system to pre-compute a "virtual" observation of the age based on the patient's birth date.]
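That "virtual" observation alternative can be sketched like this (an illustration only; the `{ dateOfBirth }` record shape is an assumption, not a defined model):

```javascript
// Sketch: derive an "age" observation from a stored birth date, so the
// age >= 64 criterion can be evaluated directly.
function ageInYears(dateOfBirth, asOf) {
  let age = asOf.getFullYear() - dateOfBirth.getFullYear();
  // Back off a year if the birthday hasn't occurred yet this year.
  const hadBirthday =
    asOf.getMonth() > dateOfBirth.getMonth() ||
    (asOf.getMonth() === dateOfBirth.getMonth() &&
     asOf.getDate() >= dateOfBirth.getDate());
  return hadBirthday ? age : age - 1;
}

// The criterion then works against the derived value.
const patient = { dateOfBirth: new Date(1945, 5, 15) }; // June 15, 1945
const qualifies = ageInYears(patient.dateOfBirth, new Date(2011, 9, 31)) >= 64;
```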

I'm still not satisfied with any of these declarative formats.  The SQL isn't quite right.  It selects from a table of patient Information, when what I really want is a table of patient identifiers.  The HQMF is still a little too dense.

And the procedural stuff won't fly in the real world, simply because it requires a particular implementation technology (JavaScript) that won't be readily applicable to all systems.  I know that there are JavaScript implementations for just about everything, but procedural is just the wrong way to go here.  It doesn't optimize for the ways that different systems want to work unless they all happen to use map/reduce, which isn't the case.

I'll have to spend more time on this tomorrow looking at it from a different perspective.

Sustainable HIE in a Changing Landscape (Free Webinar)

This looks like it might be worth listening to...
- Free Webinar -
Release of the 2011 Report on HIE: Sustainable HIE in a Changing Landscape
Date: Thursday, November 10, 2011 from 3:00 - 4:00 PM ET
eHealth Initiative will be presenting a webinar featuring results from their annual 2011 Health Information Exchange (HIE) Survey that focuses on the issue of sustainability. This 60 minute session will provide data on a critical issue in the implementation and maintenance of HIEs across the United States. This webinar, led by Jason Goldwater, Vice President of Programs and Research, will focus on the following:
  • How is sustainability defined?
  • What are the characteristics of a sustainable HIE?
  • What are some of the sustainability models that HIEs are employing?
  • What functionalities are HIEs offering their stakeholders to remain sustainable?
  • Examples of sustainable HIEs that responded to the 2011 HIE Survey
Join to learn why the issue of sustainability continues to be prevalent with the HIE community and what best practices and lessons learned can be gained from those who have managed to create a successful model for their participants and stakeholders.
Register Now!
eHealth Initiative | 818 Connecticut Avenue | Suite 500 | Washington, DC 20006 | United States

Friday, October 28, 2011

Half of ACO Quality Measures are in Meaningful Use

I was doing a review of the ACO Quality Measures yesterday.  One thing I noted was that at least half of them are also in Meaningful Use.  The cool thing about that is that this team has already reported that most MU measures can be computed from the HITSP C32 with the addition of a procedures section, and that nearly all can be addressed by adding smoking status and vitals (see pages 4-6 of their paper).

What about the other half?  Well, six of them come from what will now be a CMS supported survey, three of them come from claims data, and another one comes from Meaningful Use attestation reports.  So, of the remainder, what isn't in Meaningful Use Stage 1, and can it be computed from the HITSP C32?

These are the remaining measures:

  1. Medication Reconciliation at Discharge (NQF #97)
  2. Fall Risk Screening (NQF #101)
  3. Depression Screening (NQF #418)
  4. Proportion of Adults 18+ who had their Blood Pressure Measured within the preceding 2 years (CMS)
  5. Diabetes Measures (NQF #729)
  6. Ischemic Vascular Disease (IVD): Complete Lipid Profile and LDL Control (NQF #75)
  7. Treatment for CAD w/ Diabetes/LVSD (NQF #66)
Looking at each of these:
Fall Risk (#2) and Depression (#3) screening assessment scores can be recorded in the HITSP C32 Results section.  So, if you find an assessment result, you can tell that these have been done.

The diabetes measures (#5) are from the MN Community Measurement instrument.  These account for five separate measures in the ACO rule and are very simple to compute from CCD medications, results, vital signs or social history (for tobacco use) sections:
  1. HbA1c < 8
  2. LDL < 100
  3. BP < 140/90
  4. Tobacco Non-Use
  5. Aspirin Use
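The computation really is simple. As a hedged sketch (the flattened field names here are illustrative assumptions, not a defined CCD mapping):

```javascript
// Sketch: evaluating the five MN Community Measurement criteria against
// a flattened patient record. Field names are hypothetical; in practice
// each value would be pulled from the corresponding CCD section.
function meetsDiabetesMeasures(p) {
  return {
    hba1c:   p.hba1c < 8,                           // from Results
    ldl:     p.ldl < 100,                           // from Results
    bp:      p.systolic < 140 && p.diastolic < 90,  // from Vital Signs
    tobacco: !p.tobaccoUse,                         // from Social History
    aspirin: p.medications.includes("aspirin")      // from Medications
  };
}

const result = meetsDiabetesMeasures({
  hba1c: 7.2, ldl: 95, systolic: 130, diastolic: 85,
  tobaccoUse: false, medications: ["aspirin", "metformin"]
});
```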
The IVD (#6) and CAD (#7) measures can also be readily computed, as they are very similar to other computable results in Meaningful Use stage 1.

That just leaves #1, Medication Reconciliation on discharge.  Because the CCD is not a discharge summary, you'd really want to look at a different document to compute that measure.  But if you did open up a discharge summary, and found one of the required entries of the IHE RECON profile, you could be assured that medication reconciliation was performed at discharge.

So, half of ACO measures are meaningful use measures, and you should be able to compute nearly all of the other half from the standards already required for Meaningful Use stage 1.  And it could even get better under stage 2 when we have other documents supported for Transitions of Care.


Thursday, October 27, 2011

CDA Consolidation: It's a Library, not just a Book

We've had several discussions on the CDA Consolidation guide project on one of my negatives.  As originally stated, I was proposing that we remove the required LOINC code for CCD 1.1 (yes, there will be a new version of the CCD when the CDA Consolidation guide is done).  The rationale for this suggestion was that we want there to be a way to say: "It's a CCD", without fixing the document type.

As the discussion went on, it evolved into there existing a template identifier that indicates "This document conforms to all applicable sections and entries (and documents), found in the CDA Consolidation guide".

This feature was already present in the HITSP C83 specification, and is reproduced below.  As such, it is certainly within the scope of the HL7 project to ensure that this feature is reconciled along with the IHE and HL7 guides.

Please note that the following constraints have been added to support the creation of structured documents using the templates defined in this section. The last two constraints are to ensure that a CDA document conforms to the HITSP defined sections and entities where these entries are available. If HITSP has not created an appropriate entry or section, the CDA document MAY include those. 
C83-[CDA-10] CDA document instances that adhere to the specifications for the sections and entries defined within this specification MAY declare their conformance to these constraints by including <templateId> element with a value of 2.16.840.1.113883.3.88.11.83.1 in the root attribute and no extension.
C83-[CDA-11] Conforming CDA document instances SHALL conform to the HITSP defined sections* where available
C83-[CDA-12] Conforming CDA document instances SHALL conform to the HITSP defined entries** for clinical statements where available
* If the intent of the section is to capture information that could be captured in an existing HITSP section, the section shall conform to the HITSP defined section (although it may be further constrained).
**If the intent of the entry (clinical statement) is to capture information that could be captured in an existing HITSP entry, the entry shall conform to the HITSP defined entry (although it may be further constrained).

There are a couple of key questions around this proposal:

What is the point of it?
The main point is that the CDA Consolidation guide, like the CCD, IHE PCC Technical Framework and HITSP C83 specifications that it harmonizes, is a library of documents, sections and entries.  In IHE we've found that the IHE PCC Technical Framework is more widely used as a library of sections and entries than the individual document templates specified within it.  The IHE section and entry templates are presently in use in national projects in the US, Europe, Japan and China.  The same is true for CCD template (there are over 20 implementation guides using CCD templates).  We can certainly expect, based on that experience that the same will also be true for the CDA Consolidation guide.  The HITSP guide not only anticipated this behavior based on prior experience, but also promoted it through the above conformance rules.

What are the benefits of asserting this template identifier?
Suppose that you have developed an application that understands the CDA Consolidation guide, and gathers information from appropriate sections on problems, medications, and allergies.  Now, you want to use this application, but someone has determined that they need a new document type not covered by the guide.  As an example, let's look at an assessment of health and functional status for long term care use.  This clearly isn't covered by the current set of documents in the CDA Consolidation guide.

But, if someone were to create a new implementation guide, and use the CDA Consolidation guide sections and entries, my application could take advantage of that fact.  There is a major concern here, which is a potential impact on patient safety.  Suppose my application uses the information in this document, specifically on problems, medications, and allergies, to provide some clinical decision support for the patient assessed in the document.  Suppose also that the creators of that assessment instrument (a CDA implementation guide), decided that they didn't like the way that CDA Consolidation dealt with allergies, and so they used an incompatible model.  What could happen?  Well, if you didn't know that they used a different model for allergies, when you tried to extract the allergy information, your application would either not find it, or not find all the details recorded in the same way as they had been recorded according to the consolidation guide.  As an end result, you wouldn't have the allergy information, and could produce incorrect, perhaps even dangerous results.

Now, how can you tell that they've used templates you understand?  In this particular example, you'd make a list of the templates you need, and then go check to make sure that the document incorporated them.  This particular example is deliberately simplified, so maybe you only need the handful of sections and entries related to problems, meds and allergies (there's at least a dozen of these).  Even having checked that, you really don't have any assurance that they haven't added another section that does things differently in a way that you don't understand.

What the CDA Consolidation template identifier would do is tell you that the document has done things the CDA Consolidation guide way for everything that overlaps in any way with what already exists.  That means that it doesn't record information that is present and described somewhere in the guide in a way that is NOT compatible with the guide.

This gives the receiving system a statement of assurance that says, if CDA Consolidation guide did it this way, this document does it this way as well.  That means that the receiving system has less to worry about with respect to information being recorded differently from what it already understands.

This is exactly the rationale that HITSP used when adding these conformance statements to the C83 specification in support of the Clinical Note Details use case.  And in fact, someone did create a guide just like the one I just described that conformed to the HITSP template for C83.  It was the CARE-SET implementation guide used in the C-HIEP demonstration project.  And guess what, that project also used MDHT.

Granted, just because someone asserts the template identifier, it doesn't mean that you can simply assume that what they've done is correct.  You also have to be able to test conformance to it.

How can you test this?
Some of the opposition seems to come from "we don't know how to automate the test."  There are assertions in the CDA Consolidation guide that have implications, and there are some unstated assumptions that may need to be made more explicit.  If the section template id is X, then the LOINC code for the section must be Y.  In most cases for these documents, there should be only one section of that type, so:

  1. X -> Y (template id X implies LOINC code Y)
  2. |Y| = 1 (there shall be only one occurrence of LOINC code Y)
  3. If Y did not imply X, there could be more than one section with LOINC code Y, contradicting 2.
  4. Thus, Y -> X (so, in a conforming instance, if the LOINC code is Y, the template ID must be X)

So, you can detect non-conforming sections and entries where these kinds of assertions (and assumptions) are present.  You can also detect non-conforming entries with similar constraints on code (or other attributes) or detect  missing entries from sections.  You can also identify specific entry classes that do not conform to template identifiers in the guide for inspection to see whether or not they overlap.
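That kind of detection is mechanical. A minimal sketch (the `{ templateId, code }` section shape and the single X -> Y pair are assumptions for illustration):

```javascript
// Sketch: if template id X implies LOINC code Y, and a conforming
// document has exactly one section with code Y, then a section carrying
// code Y without template X (or a duplicate) is detectably non-conforming.
const impliedCode = { "2.16.840.1.113883.3.560.1.25": "45970-1" }; // X -> Y

function findViolations(sections) {
  const codeToTemplate = {};
  for (const [x, y] of Object.entries(impliedCode)) codeToTemplate[y] = x;

  const violations = [];
  const seen = {};
  for (const s of sections) {
    const x = codeToTemplate[s.code];
    if (x) {
      if (s.templateId !== x)            // code Y without template X
        violations.push(`code ${s.code} without template ${x}`);
      if (seen[s.code])                  // |Y| > 1
        violations.push(`duplicate section for code ${s.code}`);
      seen[s.code] = true;
    }
  }
  return violations;
}
```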

One of the challenges here is figuring out how to create the appropriate tests based on the data that is used to generate the guide.  This may be a technical challenge, but that alone shouldn't be an impediment to implementing the idea.  It's certainly feasible.  The MDHT project managed it (see slide 8) for C-HIEP.

How does this benefit implementation guide and standards developers?
A guide that requires the assertion of the Consolidation template identifier in instances is itself asserting that it doesn't conflict with the CDA Consolidation guide.  That means that it, and any additional templates it creates, would be more easily considered for inclusion in the next edition of the consolidation guide.  A guide that doesn't do this requires a deeper look for gaps and overlaps.  It also means that when such a guide is reviewed and approved, reviewers can point out inconsistencies not just as potential problems, but as implementation guide errors.


How does this simplify creation of other guides?
With one assertion, I can adopt all the rules in the CDA Consolidation guide.  Essentially, I've declared:

import org.hl7.sdwg.CDAConsolidation.*; (for Java developers)
using org.hl7.sdwg.CDAConsolidation; (for C# developers)


All of the rules, all the logic, all the consistency implied in the guide is asserted in one little statement.  If, as a CDA implementation guide creator, I am ready to trust in the CDA Consolidation guide, then I can turn my attention to other things that guide doesn't support.  This is extremely powerful.  You can adopt hundreds of pages of specification, and avoid countless hours of struggle, and there's even a way to say you did just that.  If we want to make it easy to create implementation guides (e.g., using tools like MDHT) to meet other use cases, this is the kind of thing that we need to consider.

I've been through this process not once, not twice, but many times.  In IHE adopting from HL7 CCD, in HL7 adopting from IHE PCC, in HITSP adopting both, and back again in HL7 with the consolidation guide.  If we want "one ring to bind them", it needs to have a name, and I need to be able to use it meaningfully. This would be extremely important for IHE adoption, because we want to ensure consistency.  IHE already has document content that isn't included in the CDA Consolidation guide.  How would we be able to assert consistency with it?

What does optional or recommended really mean?
A guide that asserts the Consolidation template identifier is stating that it is doing things consistently with what the guide indicates.  If the guide says that severity is recorded in a particular way, and the instance records severity differently, and the severity assessment is optional, has the instance conformed?  While the instance might conform to the allowed model, it really has failed to follow the intent of the guide.  The guide says to record severity this way, and the instance didn't.  This is an incompatibility that would later need to be resolved if the new guide were "brought in" to the CDA Consolidation guide.

It's a brand, not just a guide!
This is my last rationale for inclusion of this, and probably the weakest from a technical perspective, but strongest from an implementation perspective.  As HL7 learned from developing the CCD, everybody wanted to be able to call everything a CCD, but not everything was.  This gives HL7 something that allows everybody to use CDA Consolidation, even when it doesn't have the exact document they need, and still claim conformance to the guide.

Be Afraid of EHR

Be Afraid.

EHRs are going to be more widely used.  Therefore:


Never mind that:

Because, after all, change in the way medicine is practiced is always risky.  Therefore it should be avoided.


So, remember and be afraid.

Wednesday, October 26, 2011

Killing Eterna-threads: REST vs. SOAP

I've heard it said that "Any field that still must use the word Science in its name is still more art than science".  I went to school majoring in "Computer Science", as did many of the readers of this blog.  I think most of us get it that it really is as much art as science still.  Evaluating art relies on subjective judgment.  Evaluating science relies on objective judgment.  These judgments are usually made by experts.  I love the origin of the word "Expert", described as "a person wise through experience", because it is absolutely applicable to this discussion.

In software design, especially the design of information systems, there is still no one true way.  Anyone with experience in the "Art" of software design understands this.  There are lots of different ways to do things, and some are better and some are worse, and which one is better often depends upon the environment, use case, and requirements of the situation.

Often when I'm asked a question about "Should I do X or Y?" or "How do I do X?", my first response is "What are you trying to do?"   I'm trying to understand the environment, use case, and requirements that will let me suggest one solution or another based on my expertise.

Just last week I was in a room full of W3C Standards and IT Experts.  These included authors/editors of W3C standards, committee chairs, book authors, et cetera.  The debate over REST and SOAP which we've frankly stopped discussing for a while in Healthcare IT is still going strong over there.  One thing that was truly obvious to the experts in THAT room is that the relevant points in the REST vs. SOAP debate depend upon the environment.  In "the web", REST dominates with loose binding, dynamic, resource-oriented approaches, and ad-hoc mash-ups.  In "the enterprise", SOAP dominates with strong contracts, definable composition approaches, strong security models, and expected and pre-agreed upon behaviors.

I've seen eterna-threads before ... you know, those debates that never die, and never get solved either.  I build rules for them in my e-mail client so that they get appropriately routed.  The challenge here is not one of science.  These debates cannot be addressed in an OSI 7-layer model.  Instead, you have to jump to OSI-layer 8 or 9 (religion/philosophy and politics) to understand them.  The two camps have even been identified as "Cats" and "Dogs".  I'm stuck in the middle.

To give an example, the author of the Web Application Description Language (WADL) was present at the meeting I attended.  He reports how he was resoundingly "spanked" for not understanding the RESTful "philosophy" regarding interface contracts.  Arguably, that author understands RESTful approaches as well as any practitioner of the art.  In fact, what he did was bring in a requirement "outside" the typical environment where RESTful approaches work, and attempted to integrate that requirement into a RESTful framework. Since the REST crowd didn't see that as a requirement of their "typical" environment, they rejected it as being antithetical to the RESTful approach.

Fielding's paper describes REST as an architectural style and demonstrates its appropriateness for the web.  What others are wanting is a demonstration of this particular style's appropriateness for other environments, such as the enterprise.  We would like to see how features of the current "SOAP" style (e.g., contracts, definable composition, pre-agreed upon or well-defined behaviors, and strong security models) could be addressed using a RESTful approach.

One of the outcomes of the meeting was a suggestion to document best practices in RESTful approaches, and to describe how some of these other requirements: Strong contracts, well-defined behaviors, and strong security models can be applied to REST.  As several "RESTful" experts acknowledge, these aren't antithetical to the approach, they simply haven't been well documented and understood by that community.

It is my sincere hope that a document describing how REST can work in the enterprise environment is produced, and that it might help the rest of us understand.  If it gets done well, perhaps we can put this eterna-thread to REST.

Tuesday, October 25, 2011

SIFramework Face to Face update on QueryHealth

I spent some time in the Query Health Clinical and Implementation workgroups at last week's S&I Framework face to face meetings.  There were a number of interesting discussions, and David Tao reports on some of them in his blog post today.  I'm a bit behind on my own report out, so here it is, with a few [side rambles] included.

One of the discussion points in the Clinical Workgroup was on the user stories.  We were looking at the various user stories and expanding upon what they do.  We quickly got bogged down into discussions on quality metrics.  But the key point in Query Health is not just quality reporting.  There are a lot of different things that it can report on:

  • Quality Measures (an obvious use case)
  • Population Stratification
    What does your population look like when stratified against a particular disease risk?
    e.g., Who should get a limited supply of a particular immunization?
  • What medications and adverse events are correlated (e.g., Heart Attack and Vioxx)?
  • What is the trend with respect to some particular condition over time?
So, we broke that log-jam and got focused on some other stories that might look a bit different.  One common feature of many of these user stories is the focus on "counting", so we are now talking more about counters, rather than numerators and denominators (which are just specialized counters).
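A rough sketch of the counter idea (the shapes here are my own illustration, not anything the workgroup has specified): a counter is just a named predicate with a tally, and a quality measure's numerator and denominator are two such counters.

```javascript
// Sketch: a "counter" counts patients matching a predicate; a measure's
// numerator and denominator are simply two counters over the same population.
function counter(name, predicate) {
  return {
    name, predicate, count: 0,
    tally(patient) { if (this.predicate(patient)) this.count++; }
  };
}

// Hypothetical measure: diabetics with HbA1c under control.
const denominator = counter("diabetics", p => p.hasDiabetes);
const numerator   = counter("diabetics with HbA1c < 8",
                            p => p.hasDiabetes && p.hba1c < 8);

const patients = [
  { hasDiabetes: true,  hba1c: 7.1 },
  { hasDiabetes: true,  hba1c: 9.0 },
  { hasDiabetes: false, hba1c: 5.5 }
];
for (const p of patients) { denominator.tally(p); numerator.tally(p); }
```

Other user stories (stratification, trending) would just define different sets of counters over the same population.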

Another interesting discussion in the Implementation workgroup was on how we should proceed.  Which of the technologies we've seen thus far seem promising?  This was, as one person put it (not me, although it was attributed to me on the Query Health review call today because I said it again later in the meeting), a "Beauty Contest".  Three different systems stood out, but by far, the hQuery proposal backed by MITRE seemed to get the most support.  I had a hard time choosing because I've been behind on my summer concert listening (ONC recorded a dozen 90-minute presentations from invited guests who might have a solution to the problem).  But I did have some idea about what hQuery was, and thought it might be a good place to start.

One of the things that I don't like about hQuery right now is that it seems to be tied too much to JavaScript, and to a Map/Reduce based algorithm. While JavaScript is extremely available, and Map/Reduce is very scalable, there are definite challenges in adapting it to a wide variety of implementations.  I'd much prefer a more declarative approach, using something like the Transitions of Care Clinical Information Model (CIM) as the basis for the declarations.

The TOC CIM is based on the HITSP C154/C83/C80 work (data elements, information models, and vocabulary) that goes into all HITSP Document specifications, including the C32.  One concern of the Implementation workgroup is the weak modeling of Encounters in the TOC CIM.  According to one participant, we needed to modify the TOC CIM to create our own model.  I like to think about that differently. We (the Query Health project) need to modify the S&I Framework CIM to meet our requirements.  Note how I changed the ownership of the work product from TOC to S&I Framework.  If S&I Framework is to survive, it needs to stop organizing itself solely in a project centric fashion.  

[One suggestion I made for the next S&I Face to Face meeting was to develop tracks:  Clinical, Implementation/Technical, and Operational/Business.  In that way, folks who have deep knowledge in one of those areas can contribute to more than one project.  Right now, we don't have that structure, and it means that I, and others like me, spend a lot of time communicating across and trying to follow different projects and workgroups within projects.  It's very difficult to follow more than one activity when everything is going on all at the same time.  If S&I Framework isn't careful, they'll have as many workgroups as HL7.]

I was in MITRE offices later in the week for a W3C Workshop on Data and Services Integration.  After the workshop, I was given a brief overview of hQuery by Marc Hadley, an organizer of that workshop and one of the designers of hQuery.  He showed me their "Query Builder", which describes what results a clinician might be looking for.  The JavaScript on the back end is actually produced from the content specified in the Query Builder.  I think there are some opportunities here to use some of the HL7 HQMF modeling (eMeasures) work to help declaratively define the results, along with the CIM model (based on C32 and other HL7 CDA documents), and the general framework of hQuery for distribution and reporting.

[One of the complaints about eMeasures is that it "is new", but the fact of the matter is that a lot of Query Health is "new".  Sure, there is some existing work, but we are trying to put a lot of that together in ways that can be supported across the spectrum of Health IT, and that hasn't been done before.  At the moment, it really is the only standard that declaratively specifies the structure of some sort of counter.]

Among the benefits of using a declarative structure based on a reference model are:
  1. Queries can be defined based on what the user wants to see.
  2. EHR and Health IT systems can map the query to whatever framework they want to use.  It could be SQL, Map/Reduce, Extract/Transform/Load or any other approach.  It isn't tied to any specific architectural model.
The hQuery work already shows that queries against a "Green" C32 model can be transformed into a Map/Reduce architecture using JavaScript.  Those details are all necessary to produce a working implementation.  But they aren't all necessary to produce the set of standards that defines how Query Health works.
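Benefit 2 above can be sketched concretely (field and table names here are illustrative assumptions, not part of any specification): the same declarative criterion can be rendered as SQL for one system and as a JavaScript filter for another.

```javascript
// Sketch: one declarative criterion, two hypothetical back ends.
const criterion = { field: "hba1c", op: "<", value: 8 };

// One system might render it as SQL...
function toSql(c, table) {
  return `SELECT * FROM ${table} WHERE ${c.field} ${c.op} ${c.value}`;
}

// ...another as a JavaScript predicate for a map/reduce pipeline.
function toPredicate(c) {
  const ops = { "<": (a, b) => a < b, ">=": (a, b) => a >= b };
  return record => ops[c.op](record[c.field], c.value);
}
```

Neither rendering is part of the standard; only the declarative criterion would need to be.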

An interesting side-note here is in a recent paper presented at AMIA by John D’Amore, Dean Sittig and Adam Wright titled: The Promise of the CCD: Challenges and Opportunity for Quality Improvement and Population Health.  In it the authors note that 12 of the 44 (a little more than one fourth) Meaningful Use Stage 1 Ambulatory quality measures could be extracted from the example CCDs containing problems, medications and allergies, and that 35 of the 44 (about 80%) could be extracted by adding procedures.  With the addition of other data, the authors note that they could extract all the ambulatory measures.

This is a pretty significant result for Query Health: for at least one of the pilots, where most of the data comes from documents using C32-specified content, there is evidence that the approach can succeed.

Query Health is an interesting project.  I'm hoping that it will produce something useful without being too restrictive about the technology that is used to implement it. So far, it seems to be headed in the right direction.

 --  Keith

Monday, October 24, 2011

Got ACO? You will still need EHR and HealthIT

OK, so I've just finished plowing through nearly 700 pages of the ACO Final Rule.  I also bookmarked the Preview copy, so you can download it from Google Docs for your own reading enjoyment ;-)

As a patient, my main focus in reviewing the rule is what it does for me.  There are few changes from my original review that really impact patients here, except in:

  1. How ACO providers are measured on quality (the final rule uses 33 measures phased in with respect to performance, whereas the proposed rule had 65);
  2. The requirement that a certain percentage of ACO providers be meaningful users of Health IT; and
  3. That Federally Qualified Health Centers and Rural Health Centers can now form (by themselves) and better participate in ACOs (which is great if you happen to live in a rural or underserved area).
In general, the requirements for patient engagement, and patient participation in ACO governance are still present.

From the Health IT side, I was interested in how the changes impacted the Health IT industry.  They removed a specific requirement on the number of ACO Professionals who are Meaningful Users, but that doesn't really worry me.  Measure 11 has double weight, and is "% of providers who are meaningful users", so that will still have an ACO program impact.

My 140 character summary of the ACO rule from a Health IT perspective?  "Got ACO? You'll still need EHR and HealthIT, even if the regs don't require it."  

The reason for that is the amount of coordination that will be needed by members of the ACO.  The point of shared savings is that the providers actually get a benefit for not duplicating work.  So, if the lab was already done, and provider A has it, then provider B (and everyone else) can potentially benefit from the savings when provider B uses the existing result.  Recently my wife had knee surgery.  The surgeon wanted a recent EKG.  He would have been incented (under the savings model) to reuse the one my wife already had.  Perhaps not as much as he might have earned from doing it over, but the world is not perfect.

There are several places where Health IT and EHRs will really matter to ACOs:
  1. Assessing the health needs of the patient population;
  2. Identifying high-risk individuals and supporting individualized care planning;
  3. Supporting the use of evidence-based medicine (e.g., through clinical decision support);
  4. Reporting on quality measures;
  5. Managing care through an episode, including transitions between providers; and
  6. Dealing with monthly claims data and quarterly aggregates.
Page 178 of the Final Rule points out that coordination of care between ACO participants and non-participants is one way to accomplish ACO goals.  Amusingly enough, some commenters were looking for CMS to fund some of the IT Investments needed.  CMS points these commenters to the Meaningful Use program for Health IT and EHR incentives.
  

For those of you who are interested in how ACOs will be measured, Table 1 from the final rule shows the quality measures (You can also find this on page 324 of the final rule text in the Federal Register Preview).

 
Table 1 Measures for Use in Establishing Quality Performance Standards that ACOs Must Meet for Shared Savings

# | Domain | Measure Title | NQF Measure # / Measure Steward | Method of Data Submission | Year 1 | Year 2 | Year 3
(Pay for performance: R = Reporting, P = Performance)
AIM: Better Care for Individuals
1 Patient/Caregiver Experience CAHPS: Getting Timely Care, Appointments, and Information NQF #5, AHRQ Survey R P P
2 Patient/Caregiver Experience CAHPS: How Well Your Doctors Communicate NQF #5 AHRQ Survey R P P
3 Patient/Caregiver Experience CAHPS: Patients' Rating of Doctor NQF #5 AHRQ Survey R P P
4 Patient/Caregiver Experience CAHPS: Access to Specialists NQF #5 AHRQ Survey R P P
5 Patient/Caregiver Experience CAHPS: Health Promotion and Education NQF #5 AHRQ Survey R P P
6 Patient/Caregiver Experience CAHPS: Shared Decision Making NQF #5 AHRQ Survey R P P
7 Patient/Caregiver Experience CAHPS: Health Status/Functional Status NQF #6 AHRQ Survey R R R
8 Care Coordination/ Patient Safety Risk-Standardized, All Condition Readmission* NQF #TBD CMS Claims R R P
9 Care Coordination/ Patient Safety Ambulatory Sensitive Conditions Admissions: Chronic Obstructive Pulmonary Disease (AHRQ Prevention Quality Indicator (PQI) #5) NQF #275 AHRQ Claims R P P
10 Care Coordination/ Patient Safety Ambulatory Sensitive Conditions Admissions: Congestive Heart Failure (AHRQ Prevention Quality Indicator (PQI) #8 ) NQF #277 AHRQ Claims R P P
11 Care Coordination/ Patient Safety Percent of PCPs who Successfully Qualify for an EHR Incentive Program Payment CMS EHR Incentive Program Reporting R P P
12 Care Coordination/ Patient Safety Medication Reconciliation: Reconciliation After Discharge from an Inpatient Facility NQF #97 AMA-PCPI/NCQA GPRO Web Interface R P P
13 Care Coordination/ Patient Safety Falls: Screening for Fall Risk NQF #101 NCQA GPRO Web Interface R P P
AIM: Better Health for Populations
14 Preventive Health Influenza Immunization NQF #41 AMA-PCPI GPRO Web Interface R P P
15 Preventive Health Pneumococcal Vaccination NQF #43 NCQA GPRO Web Interface R P P
16 Preventive Health Adult Weight Screening and Follow-up NQF #421 CMS GPRO Web Interface R P P
17 Preventive Health Tobacco Use Assessment and Tobacco Cessation Intervention NQF #28 AMA-PCPI GPRO Web Interface R P P
18 Preventive Health Depression Screening NQF #418 CMS GPRO Web Interface R P P
19 Preventive Health Colorectal Cancer Screening NQF #34 NCQA GPRO Web Interface R R P
20 Preventive Health Mammography Screening NQF #31 NCQA GPRO Web Interface R R P
21 Preventive Health Proportion of Adults 18+ who had their Blood Pressure Measured within the preceding 2 years CMS GPRO Web Interface R R P
22 At Risk Population - Diabetes Diabetes Composite (All or Nothing Scoring): Hemoglobin A1c Control (<8 percent) NQF #0729 MN Community Measurement GPRO Web Interface R P P
23 At Risk Population - Diabetes Diabetes Composite (All or Nothing Scoring): Low Density Lipoprotein (<100) NQF #0729 MN Community Measurement GPRO Web Interface R P P
24 At Risk Population - Diabetes Diabetes Composite (All or Nothing Scoring): Blood Pressure <140/90 NQF #0729 MN Community Measurement GPRO Web Interface R P P
25 At Risk Population - Diabetes Diabetes Composite (All or Nothing Scoring): Tobacco Non Use NQF #0729 MN Community Measurement GPRO Web Interface R P P
26 At Risk Population - Diabetes Diabetes Composite (All or Nothing Scoring): Aspirin Use NQF #0729 MN Community Measurement GPRO Web Interface R P P
27 At Risk Population - Diabetes Diabetes Mellitus: Hemoglobin A1c Poor Control (>9 percent) NQF #59 NCQA GPRO Web Interface R P P
28 At Risk Population - Hypertension Hypertension (HTN): Blood Pressure Control NQF #18 NCQA GPRO Web Interface R P P
29 At Risk Population – Ischemic Vascular Disease Ischemic Vascular Disease (IVD): Complete Lipid Profile and LDL Control <100 mg/dl NQF #75 NCQA GPRO Web Interface R P P
30 At Risk Population – Ischemic Vascular Disease Ischemic Vascular Disease (IVD): Use of Aspirin or Another Antithrombotic NQF #68 NCQA GPRO Web Interface R P P
31 At Risk Population - Heart Failure Heart Failure: Beta-Blocker Therapy for Left Ventricular Systolic Dysfunction (LVSD) NQF #83 AMA-PCPI GPRO Web Interface R R P
32 At Risk Population – Coronary Artery Disease Coronary Artery Disease (CAD) Composite: All or Nothing Scoring: Drug Therapy for Lowering LDL-Cholesterol NQF #74 CMS (composite) / AMA-PCPI (individual component) GPRO Web Interface R R P
33 At Risk Population – Coronary Artery Disease Coronary Artery Disease (CAD) Composite: All or Nothing Scoring: Angiotensin-Converting Enzyme (ACE) Inhibitor or Angiotensin Receptor Blocker (ARB) Therapy for Patients with CAD and Diabetes and/or Left Ventricular Systolic Dysfunction (LVSD) NQF # 66 CMS (composite) / AMA-PCPI (individual component) GPRO Web Interface R R P

*We note that this measure has been under development and that finalization of this measure is contingent upon the availability of measures specifications before the establishment of the Shared Savings Program on January 1, 2012.


In case you want a quick summary of what's changed financially, Table 5 from the final rule summarizes the changes between the Proposed rule and the Final Rule.  You can also find this on page 396 of the Final Rule text  (in the Federal Register Preview).

Table 5: Shared Savings Program Overview

For each issue, the table lists the proposed and final provisions under the One-Sided and Two-Sided Models.

Transition to Two-Sided Model
  One-Sided (Proposed): Transition in third year of first agreement period
  One-Sided (Final): First agreement period under one-sided model; subsequent agreement periods under two-sided model
  Two-Sided (Proposed): Not applicable
  Two-Sided (Final): Not applicable

Benchmark
  One-Sided (Proposed): Option 1 reset at the start of each agreement period
  One-Sided (Final): Finalizing proposal
  Two-Sided (Proposed): Option 1 reset at the start of each agreement period
  Two-Sided (Final): Finalizing proposal

Adjustments for health status and demographic changes
  One-Sided (Proposed): Benchmark expenditures adjusted based on CMS-HCC model
  One-Sided (Final): Historical benchmark expenditures adjusted based on CMS-HCC model.  Performance year: newly assigned beneficiaries adjusted using CMS-HCC model; continuously assigned beneficiaries using demographic factors alone unless CMS-HCC risk scores result in a lower risk score.  Updated benchmark adjusted relative to the risk profile of the performance year.
  Two-Sided (Proposed): Benchmark expenditures adjusted based on CMS-HCC model
  Two-Sided (Final): Historical benchmark expenditures adjusted based on CMS-HCC model.  Performance year: newly assigned beneficiaries adjusted using CMS-HCC model; continuously assigned beneficiaries using demographic factors alone unless CMS-HCC risk scores result in a lower risk score.  Updated benchmark adjusted relative to the risk profile of the performance year.

Adjustments for IME and DSH
  One-Sided (Proposed): Include IME and DSH payments
  One-Sided (Final): IME and DSH excluded from benchmark and performance expenditures
  Two-Sided (Proposed): Include IME and DSH payments
  Two-Sided (Final): IME and DSH excluded from benchmark and performance expenditures

Payments outside Part A and B claims excluded from benchmark and performance year expenditures
  One-Sided (Proposed): Exclude GME, PQRS, eRx, and EHR incentive payments for eligible professionals, and EHR incentive payments for hospitals
  One-Sided (Final): Finalize proposal
  Two-Sided (Proposed): Exclude GME, PQRS, eRx, and EHR incentive payments for eligible professionals, and EHR incentive payments for hospitals
  Two-Sided (Final): Finalize proposal

Other adjustments
  One-Sided (Proposed): Include other adjustments based in Part A and B claims such as geographic payment adjustments and HVBP payments
  One-Sided (Final): Finalize proposal
  Two-Sided (Proposed): Include other adjustments based in Part A and B claims such as geographic payment adjustments and HVBP payments
  Two-Sided (Final): Finalize proposal

Maximum Sharing Rate
  One-Sided (Proposed): Up to 52.5 percent based on the maximum quality score plus incentives for FQHC/RHC participation
  One-Sided (Final): Up to 50 percent based on the maximum quality score
  Two-Sided (Proposed): Up to 65 percent based on the maximum quality score plus incentives for FQHC/RHC participation
  Two-Sided (Final): Up to 60 percent based on the maximum quality score

Quality Sharing Rate
  One-Sided (Proposed): Up to 50 percent based on quality performance
  One-Sided (Final): Finalizing proposal
  Two-Sided (Proposed): Up to 60 percent based on quality performance
  Two-Sided (Final): Finalizing proposal

Participation Incentives
  One-Sided (Proposed): Up to 2.5 percentage points for inclusion of FQHCs and RHCs
  One-Sided (Final): No additional incentives
  Two-Sided (Proposed): Up to 5 percentage points for inclusion of FQHCs and RHCs
  Two-Sided (Final): No additional incentives

Minimum Savings Rate
  One-Sided (Proposed): 2.0 percent to 3.9 percent depending on number of assigned beneficiaries
  One-Sided (Final): Finalizing proposal based on number of assigned beneficiaries
  Two-Sided (Proposed): Flat 2 percent
  Two-Sided (Final): Finalizing proposal: flat 2 percent

Minimum Loss Rate
  One-Sided (Proposed): 2.0 percent
  One-Sided (Final): Shared losses removed from Track 1
  Two-Sided (Proposed): 2.0 percent
  Two-Sided (Final): Finalizing proposal

Performance Payment Limit
  One-Sided (Proposed): 7.5 percent
  One-Sided (Final): 10 percent
  Two-Sided (Proposed): 10 percent
  Two-Sided (Final): 15 percent

Performance Payment Withhold
  One-Sided (Proposed): 25 percent
  One-Sided (Final): No withhold
  Two-Sided (Proposed): 25 percent
  Two-Sided (Final): No withhold

Shared Savings
  One-Sided (Proposed): Sharing above 2 percent threshold once MSR is exceeded
  One-Sided (Final): First dollar sharing once MSR is met or exceeded
  Two-Sided (Proposed): First dollar sharing once MSR is exceeded
  Two-Sided (Final): First dollar sharing once MSR is met or exceeded

Shared Loss Rate
  One-Sided (Proposed): One minus final sharing rate
  One-Sided (Final): Shared losses removed from Track 1
  Two-Sided (Proposed): One minus final sharing rate
  Two-Sided (Final): One minus final sharing rate applied to first dollar losses once minimum loss rate is met or exceeded; shared loss rate not to exceed 60 percent

Loss Sharing Limit
  One-Sided (Proposed): 5 percent in first risk-bearing year (year 3)
  One-Sided (Final): Shared losses removed from Track 1
  Two-Sided (Proposed): Limit on the amount of losses to be shared phased in over 3 years, starting at 5 percent in year 1, 7.5 percent in year 2, and 10 percent in year 3; losses in excess of the annual limit would not be shared
  Two-Sided (Final): Finalizing proposal
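To make the shared savings mechanics concrete, here is a worked example of the one-sided model under the final rule: first dollar sharing once the minimum savings rate (MSR) is met or exceeded, with a sharing rate of up to 50 percent based on the quality score.  All of the dollar figures are invented for illustration, and the assumption of a perfect quality score is mine.

```javascript
// Hypothetical ACO numbers (not from the rule):
var benchmark = 10000000;   // benchmark expenditures ($)
var actual    = 9600000;    // performance-year expenditures ($)
var msr       = 0.03;       // minimum savings rate (final rule: 2.0-3.9%)
var sharingRate = 0.50;     // final-rule maximum, assuming a perfect quality score

var savingsRate = (benchmark - actual) / benchmark;    // 0.04, i.e. 4% savings
var sharedSavings = 0;
if (savingsRate >= msr) {
  // First dollar sharing: the ACO shares in ALL savings once the MSR is met,
  // not just the portion above a threshold (the proposed-rule approach).
  sharedSavings = (benchmark - actual) * sharingRate;  // 400,000 * 0.50 = 200,000
}
```

With these numbers the ACO saves 4 percent against its benchmark, clears the 3 percent MSR, and takes home $200,000 of the $400,000 saved.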