Thursday, January 31, 2013

The men and women of Interoperability

Healthcare IT News recently announced its annual Men and Women of Health IT Awards.  I'll probably never make that list, as I don't work in a healthcare system, nor do many of my colleagues in interoperability.  I thought I'd make a list of my top ten men and women of Interoperability.  The rules are very much like those for the Ad Hoc Harleys: I make them up as I go along.  There's no nomination period, and the qualifications are that you have to somehow come to my attention.

Without further ado, here's my top ten in alphabetical order, because I couldn't rank them any other way.

Bill Majurski from NIST: He's with the government, but he really is here to help.  Past co-chair of IHE IT Infrastructure, currently leading MU Stage 2 validation tool efforts at NIST, and an all-around nice guy.

Bob Yencha: Former HITSP-ite, HL7, CDA, IHE and S&I Framework geek.  Bob's one of those all-around guys who can manage a project, facilitate a meeting, or technically lead a project.

Corey Spears of Medicity: CDA rock star formerly from McKesson, Corey now leads interoperability activities for Medicity.  I like to think that Corey is my "secret agent" on the "dark side" ;-) of healthcare, the payers.

George Cole of Allscripts: Another CDA Rock Star.  George participates in IHE IT Infrastructure, Patient Care Coordination, and Quality, Public Health and Research, as well as other interoperability initiatives here in the US.

Gila Pyke: Past recipient of an Ad Hoc Harley, reviewer of The CDA Book, and a strong leader in interoperability in Canada.  She really gets it, and even when she doesn't have it yet, she will tomorrow.

John Moehrke of GE Healthcare: My go-to guy for all things related to security and privacy.

Laura Bright: Another member of the Canadian posse, currently technical committee co-chair in IHE PCC.  She doesn't just understand the technology, but also the business rationales for interoperability.

Lori Fourquet: She wears/has worn lots of hats, secretariat/lead for ISO TC-215 and ASTM security workgroups, a strong coordinator of public health activities in IHE and HL7, and experienced as interim CEO and CTO for a state Health Information Exchange.

Tone Southerland of Greenway: Rock star of IHE PCC (and past TC co-chair) and CDA, recently asked to give testimony to the Health IT Standards and Policy committees (last Tuesday).  I wish I had started in Healthcare IT interoperability when he did.  No longer just an up-and-comer, truly an expert.

Vassil Peychev of Epic: An HL7 V3 rock star (including CDA), who also spends a good deal of time in the IHE IT Infrastructure technical committee.  Vassil is well-known in both HL7 and IHE circles.

These are my ten picks for the men and women of interoperability.  And the one thing that they all have in common besides being exceptional leaders in Health IT, is that they are all here at Connectathon, working to ensure that systems truly do interoperate.

Tuesday, January 29, 2013

Would you rather be remembered, or forgotten?

I'm certainly living my life the way I'd like to be remembered, and having a ton of fun while I do it.

"You could write that on your gravestone," somebody said to me tonight, in response to some brag I made about something I've done.  In that one vaguely morbid comment, he quite neatly summed up why I do what I do.  I'm extremely proud of some of my accomplishments (though there is certainly plenty left to do, especially if the powers that be continue in the way that they have).

Someone else pointed out earlier today that we (pointing to the Connectathon floor) could be making a lot more money working in IT in the finance industry, but that people here prefer doing what they do (working in Health IT) because of the impact it has on lives.  This week, I get to hang out with nearly 600 people who love their jobs, many of whom will never work in another industry because they feel that what they do in healthcare is so important.  It's intense.  It's exhausting.  And it is extremely rewarding.  To think that something you wrote, or created, or designed, or managed, will save a life is a fantastic thing to be able to say about yourself.

But that same speaker commented on something else I'd talked about having worked on (spelling correction), and pointed out the value of being forgotten.  I don't recall exactly what he said, but I'll sum it up this way:

When what you have worked on becomes so commonplace that nobody remembers what it was like before we had it, then you have truly had an impact.  My new goal in life is not to be remembered, but to be forgotten.  And when I get there, there will always be something else to do.

-- Keith

Monday, January 28, 2013

The Model is the Standard

In HL7 Version 3, with one significant exception, the information model derived from the RIM is the normative expression of the standard.  The significant exception is of course CDA, which has the normative requirement that a CDA instance must validate according to the XML Schema produced from the model (once all extension elements and attributes are removed).

In HeD and HQMF, we want the overlapping concepts (the data to be used, and the operations that can be performed over them) to be drawn from the same model.  The model thus provides consistent semantics, and we can trace from clinical guidance to both the measures of implementation and the clinical decision support that implements it.

I had a long discussion today with members of the HeD development team to figure out how to move forward.  As a result, we have a proposal that stems from the fact that the V3 schemas are not normative.  Here is a quick summary of the proposal from my perspective:

  1. The current HeD guide is treated as an implementation guide of a to-be-developed standard model, and is altered somewhat, but not substantially, to be published by HL7 as an informative document or DSTU.  The alterations to be made stem from current ballot comments, and include harmonization with the HQMF header structure, metadata, and outline.  This does not include changes to its use of VMR, representations of logic, or expressions.
  2. A new HeD standard model will be produced subsequently that is compatible with this implementation guide.  It will use the same model elements as HQMF where there is overlap.  The VMR/QDM data access layers and the logic/expression evaluation will be separate components.
  3. In that standard, the necessary logic and expression evaluation capabilities will be described functionally based on the capabilities of the existing HeD expression language.  These will be aligned with the simple expression language (based on simple math) that the SD and CDS workgroups agreed to in our joint session as being an implementation feature of the QDM-based HQMF guides.  Thus, you could map from something like simple math to the HeD XML expression representation if you wanted.  The capabilities would be the same (or very nearly so).
  4. Adjuncts can be produced which transform from the HL7 V3 schemas generated automatically by our existing tooling, into the language specified in the HeD implementation guide.
  5. We can also look at alternative ITS representations that allow the separation of models (e.g., VMR/QDM) from schema representations, to better support implementation flexibility.
This proposal keeps existing HeD pilots on their existing timelines as if my major ballot comments had not been made, aligns the current guide with the HQMF header and outline, allows HQMF to move forward on roughly the timelines we expect, and still harmonizes the efforts of both groups.
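To make the mapping in item 3 concrete, here's a minimal, hypothetical sketch of turning a "simple math" expression into a nested XML operator tree.  The element names here are invented for illustration; the actual HeD expression representation is richer and differs in detail.

```python
import ast
import xml.etree.ElementTree as ET

# Hypothetical operator element names -- NOT the real HeD vocabulary.
_OPS = {ast.Add: "add", ast.Sub: "subtract", ast.Mult: "multiply",
        ast.Div: "divide", ast.Gt: "greaterThan", ast.Lt: "lessThan"}

def to_expression_xml(node):
    """Convert a parsed 'simple math' expression into a nested XML operator tree."""
    if isinstance(node, ast.Expression):
        return to_expression_xml(node.body)
    if isinstance(node, ast.BinOp):          # e.g., age + 1
        el = ET.Element(_OPS[type(node.op)])
        el.append(to_expression_xml(node.left))
        el.append(to_expression_xml(node.right))
        return el
    if isinstance(node, ast.Compare):        # e.g., age > 18 (single comparison only)
        el = ET.Element(_OPS[type(node.ops[0])])
        el.append(to_expression_xml(node.left))
        el.append(to_expression_xml(node.comparators[0]))
        return el
    if isinstance(node, ast.Constant):
        return ET.Element("literal", value=str(node.value))
    if isinstance(node, ast.Name):
        return ET.Element("operand", name=node.id)
    raise ValueError("construct not in the simple expression language")

print(ET.tostring(to_expression_xml(ast.parse("age + 1", mode="eval")),
                  encoding="unicode"))
# → <add><operand name="age" /><literal value="1" /></add>
```

The point is only that the capabilities line up: anything expressible in the simple language has a mechanical XML rendering, so the two representations can be kept equivalent.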

This was an exercise in making things possible, not in making them perfect, but I think we have a perfectly possible solution.

Thursday, January 24, 2013

GMDD: The new standard for patient engagement! Tweetup and Pitch at HIMSS13

It started with a tweet from @interopshowcase requesting volunteers for the Interoperability Showcase.  Then @ePatientDave asked me if I knew a way he could get into HIMSS.  I made the connection, and then suggested to the showcase sponsors that they should get him to speak as well as volunteer.

It's now gonna happen.  Dave will be speaking Monday, March 4th, at 4pm at the Interoperability Showcase.  His presentation doesn't have a title yet; I suggested the same as the title of this post (up to the !), but we'll see what he comes up with.  I'll also be hosting a tweetup in the same location (it really gets going when Dave's talk is done).  After he talks, I'll help walk interested folks through a demonstration that shows what IHE is doing to make it possible to get our damn data.

This is gonna be fun.

If you plan on going to the event, please RSVP here, so we know how many chairs we should have!  Registration is open to all attendees of HIMSS 2013.

Wednesday, January 23, 2013

The Universal Integration Component

Most readers of this blog have some experience with designing applications.  Applications are easy. You just have to account for the different ways people are going to use them.

Designing standards is a little bit different.  When you design a standard, you have to account for the uses to which many different applications will put the standard.  It requires a lot more flexibility, yet also some rigidity (because standards shouldn't be too flexible).  You have to allow for cases where, for example, race and ethnicity are strongly encouraged data elements for capture (e.g., in the US), and also for cases where it is illegal to capture and report them (e.g., in some countries in the EU).

When you go a step further, as IHE does, in developing profiles which use multiple standards, it gets even more complex.  Here, you have to work out how to solve challenging impedance matching problems between different standards that take on different roles in the environment, sometimes bending or warping things a bit as you go.

After building several profiles, you start to think like the toy maker Lego, or perhaps the A.C. Gilbert company.  You'll probably have some of your favorite components that you wind up using (I prefer Lego Technic for building toys), but others have their favorites too, and eventually, you are going to have to hook them together.

I just wish there was something like this universal adapter brick for IT standards:

Or better yet, the IT standards equivalent of a 3D printer through which I could build such an adapter.

Yes, we have interface engines.  They are trying to take on that role.  But I could also have just put the period after the third word of that last sentence.

Tuesday, January 22, 2013

Recovering from the HL7WGM, and back to ABBI

It may not be apparent, but I'm an introvert by nature.  I prefer to spend my time thinking, writing, and reading, rather than interacting with a lot of people directly at the same time.  Last week was one of the busiest HL7 Working Group meetings I've attended in the last few years.  Every quarter was filled with something that either I wanted to be present for, or for which my presence was requested (due to how I voted on a committee's ballot).  Add to that the all-day board meeting, a day full of teaching, and a day for the FHIR connectathon, and you have a completely exhausting week.  The meetings, connectathon, and teaching all went well, which made it even more exhausting.  I took the day off yesterday (Martin Luther King/Inauguration Day) and spent some time recovering with my family.

My brain is still burning with all the discussions from the last week.  I thought I'd spend this morning knocking one off my list, which is the ABBI PULL security model using OAuth 2.0.  I spent some time with John Moehrke over lunch and breaks talking about it, and we think we've worked out the key details.

Software Deployment Models

There are four different deployment models for software using the ABBI PULL API that I can imagine, each with their own security concerns:
  1. Running in a Web Server
  2. Embedded in a Device
  3. Installed Natively on a Computer or Device
  4. Running in a Web Browser
I'm going to look at each of these separately, and the same provisions of my last post apply:  Slap me if I get it wrong.

Running in a Web Server

The first two are probably the easiest to address.  With the first, you have a limited number of instances of the software, (typically) controlled by the software developer.  For these applications, the website running the software can be secured from inspection by the end user, as can its secrets.  These applications can use dynamic registration pretty easily, and can have their own identity associated with them, secured by the software manufacturer (or in some cases, by the organization deploying it) to ensure that the installed software instance can be identified.  Protecting those identity credentials is up to the developer/manufacturer/deployer of the software.

Embedded in a Device

In the second case, embedded in a device, I'm talking about software with no user-accessible memory.  So, things like activity meters, glucometers, blood pressure cuffs, et cetera, could be included (depending on how they are built; I'm not talking about something you attach to your smart phone or tablet, but rather a wholly contained device).  There will likely be several orders of magnitude more of these devices and their installed software than there would be in the first case.  In these cases, we count on the physical security of the device to ensure that someone cannot access the internals of the software.  FWIW: It is possible to crack open the device and read out the software if standard components are used.  It's a risk, but there are some precautions that device manufacturers can take to protect the software here, or they can just choose to take that risk.  The risk is somewhat mitigated by the fact that only a few software installations would be thus inspected.  So if each device is provisioned with something like a device-specific certificate signed by the manufacturer, what is at risk is the software identity associated with a single device (for each device thus inspected).  This assumes that the device manufacturer would deploy with something as secure as a signed certificate.  If they just use a device serial number, it would be relatively easy to impersonate multiple devices by just using made-up serial numbers.

Running Natively

In the case where software is installed and running natively on a computer (e.g., laptop or desktop) or device (e.g., smartphone or tablet), there are several challenges, as I've previously mentioned. It is technically feasible to separately provision each downloaded instance of the software installed on a device with its own identity that can be stored in secure storage, but that may not be feasible for different software distribution models (on media, downloaded through an app-store controlled by outside organizations).  It is also technically feasible to require a separate software registration to provision the software (as is done with many software applications), through which the software can be identified.  

The approach that I've settled on here is to delegate the risks associated with the identity of the installed software instance to the software manufacturer.  The manufacturer would need to either provision the installed and running application with its own secure identity, or use a shared identity among all identical instances of the same package.  Since this is a choice the software manufacturer makes, they have some ability to control what happens should someone try to impersonate their software, and can determine what is an acceptable level of risk to them and their customers.  Applications which help a patient keep track of non-life-threatening health data (e.g., diet) might require less stringent security measures than those which ensure that patients are taking their medication appropriately, for example.

The choices made by the manufacturer can be described to the authorizer, so that data holders and authorizers can also make decisions about whether to allow the application access and how to alert users based on the type of application identity (shared or separately provisioned).
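As a sketch of how an authorizer might act on that description: the identity types and policy table below are entirely hypothetical (ABBI hasn't specified any of this); they just illustrate branching an access decision on how the application's identity was provisioned.

```python
# Hypothetical policy: how an authorizer might treat OAuth 2.0 clients based on
# how the software manufacturer chose to provision application identities.
# The identity-type names and the decisions are invented for illustration.
POLICY = {
    "per_instance": {"allow": True,  "alert_user": False},  # each install has its own credential
    "shared":       {"allow": True,  "alert_user": True},   # one credential across all installs
    "unattested":   {"allow": False, "alert_user": True},   # no manufacturer attestation at all
}

def access_decision(identity_type: str) -> dict:
    """Return the authorizer's decision for a client with the given identity provisioning.
    Unknown identity types are treated as unattested (the most restrictive case)."""
    return POLICY.get(identity_type, POLICY["unattested"])

print(access_decision("shared"))  # {'allow': True, 'alert_user': True}
```

The design point is that the data holder doesn't need to know *how* the manufacturer secures its credentials, only which class of provisioning was used, and can tune user alerts accordingly.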

Web Browser Based

Finally, for web-browser-based applications, we would need to delegate some of the risk of using the application to the patient.  It's hard for me to imagine an application in the commercial world that is completely browser-based (i.e., doesn't have some web-server component associated with it) that could facilitate securing the application identity, but it is technically possible.  I think there are two cases here as well: the software developer could choose to deliver a separate identity to each "download" of the browser-based application, or they could share an identity across all such implementations.  Given the ease with which a web-browser-based application's identity can be obtained, it doesn't seem all that useful to deliver separate identities to each instance of the application running in a different web browser.  There are some ideas I'm playing with here (e.g., tying the application identity to a user authentication), but this requires more thought.

  -- Keith

P.S.  That's one down (partially), and at least five more to go...

Friday, January 18, 2013

Template Versioning Redux

I have previously talked about how I think templates should be versioned.  I spent this morning in the joint meeting between HL7's Structured Documents, Templates, and other workgroups.

The challenge that I have presently is that IHE wants to adapt or adopt a number of C-CDA templates to replace the CCD 1.0 templates we are currently using.  Our implementers are much more concerned about how an assertion of conformance to a template must be represented in an instance than they are about how the template metadata represents a template identifier and its versions.

Right now, if you want to declare conformance at the document level to the Discharge Summary in the IHE XDS-MS profile, you would write this:

<templateId id=""/>

We'll need to update that profile to adopt as much as possible of the C-CDA discharge summary (removing US Realm constraints).  But we cannot use the C-CDA discharge summary directly, because there will be some adaptations made.  My current plan is to declare conformance to the new version thus:

<templateId id="" extension="20130615"/>

That means that the instance is declaring conformance to the version of the template published on June 15th of 2013.

This template would be compatible with the C-CDA discharge summary, in that its constraints do not violate the rules of the C-CDA Discharge summary (with entries required), so in the US, you would declare your instance conformant to both the IHE PCC and the C-CDA templates:

<templateId id="" extension="20130615"/>
<templateId id="2.16.840.1.113883." />

This is a mix of new- and old-style template assertions.  I think that the updated Templates specification should indicate a mechanism by which a versioned template ID is represented in an instance.  That representation should provide a backwards-compatible mechanism to represent unversioned templates for specifications that were in place before we got to the versioning mechanism.
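On the receiving side, handling both styles is straightforward; here's a minimal sketch in Python's stdlib that collects versioned and unversioned template assertions from an instance.  The OIDs are placeholders, since the real template roots are elided above.

```python
import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}  # the CDA R2 namespace

def asserted_templates(document_xml):
    """Return (root, extension) for each document-level templateId assertion.
    extension is None for old-style, unversioned assertions."""
    doc = ET.fromstring(document_xml)
    return [(t.get("root"), t.get("extension"))
            for t in doc.findall("cda:templateId", CDA_NS)]

# Placeholder OIDs; a real instance would carry the IHE PCC and C-CDA template roots.
doc = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <templateId root="1.2.3.4" extension="20130615"/>
  <templateId root="5.6.7.8"/>
</ClinicalDocument>"""

print(asserted_templates(doc))  # [('1.2.3.4', '20130615'), ('5.6.7.8', None)]
```

A validator can then treat a missing extension as "whatever the unversioned specification said," which is exactly the backwards compatibility the updated specification needs to preserve.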

IHE PCC will be permitted to add constraints to its template document that may not have been stated as strongly in the C-CDA.  We could state that the LOINC code is fixed to 18842-5 for the document (and probably will), rather than having a list of possible values (as is present in C-CDA).

In the new world, most vocabulary constraints will be written in terms of vocabulary domains, and those domains will be bound to a value set through the realm assertion.  Each document will contain a <realmCode> element that declares the realm bindings that apply (and there will usually be only one).

When <realmCode code="US"/> is present, the IHE USA national extensions to the profile will identify the set of vocabulary bindings and data type flavors that must be used with the document in the US.  It will be the responsibility of IHE USA to declare what those are (and they will use the C-CDA and MU regulations to select appropriate values).  When <realmCode code="FR"/> is present, it will be up to IHE France, and so on.
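Conceptually, that binding step is just a lookup keyed by the document's realmCode.  A toy sketch, with invented OIDs and domain names (the real tables would be published by each national deployment committee):

```python
# Toy binding tables; all OIDs and domain names below are invented for illustration.
REALM_BINDINGS = {
    "US": {"document-code": "1.2.3.4.5"},
    "FR": {"document-code": "6.7.8.9.10"},
}

def bound_value_set(realm_code, domain):
    """Resolve a vocabulary domain to a value set using the document's realmCode."""
    try:
        return REALM_BINDINGS[realm_code][domain]
    except KeyError:
        raise ValueError(f"no binding for domain {domain!r} in realm {realm_code!r}")

print(bound_value_set("US", "document-code"))  # 1.2.3.4.5
```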

We would also (through IHE USA) likely declare that if you declare conformance to this template and also specify the US realm, you must also declare conformance to the appropriate C-CDA template (to ensure conformance to US requirements for the document).

This will allow various countries to start with the IHE specifications, and modify them (including changing them according to localization rules which are quite similar in IHE and HL7) as needed to meet their requirements.

It may be appropriate to ballot these specifications in both IHE and HL7, and in fact, this is a suggestion I've been considering.  The challenge here is that IHE and HL7 are piloting the idea of balloting IHE specifications in HL7, and this is a very big profile to run through this process for the first time.  We may want to focus the ballot on ONE document type (e.g., discharge summary) for that pilot, and follow up with the whole set over time as we understand the impacts of balloting IHE material in HL7.

Our first ballot could be either for comment only and/or DSTU, and should occur at the same time IHE goes through its public comment process (that would be out of cycle for HL7).

Run over by the HIPAA bus?

Were you run over by the HIPAA bus yesterday?  The Omnibus final rule finally landed with a crunch last night.  If you check out #HIPAAbus, you'll see my notes from my blaze through with page numbers.  My notes are below.  I haven't actually read the rule, yet, just the commentary up through the start of the financial impact assessment (which I nearly always skip).  If you find federal regulation boring, skip to the fun stuff at the end of this post.

The new rule modifies the HIPAA Privacy & Security Rules to implement HITECH, strengthens privacy protections under GINA, makes other changes to simplify things for regulated entities, and modifies the Breach Notification Rule to address public comments.

The final rule is effective March 26, 2013; affected parties (covered entities and their business associates) must comply by September 22, 2013.  Existing BA contracts can remain in force until September 22, 2014 with certain provisions.  If modified sooner, those contracts must comply with new rules.  A 180-day period for compliance will become the norm for similar future regulation (unless exceptions are necessary).

Privacy and Security

Business Associates

  • Business associates now include patient safety organizations.
  • Health Information Organizations, e-Prescribing gateways, and PHR providers must be business associates.  A PHR provider is only considered to be a BA with respect to covered entities on whose behalf they are providing services.  While the requirements of a BA are contagious to other associates of a BA with respect to HIPAA, a covered entity need not have an agreement with those associates.
  • The rule delegates responsibility to the BA for providing assurance with respect to HIPAA for its other associates.
  • BA's include entities that create, receive, maintain or transmit PHI on behalf of a covered entity.
  • Business associates are subject to direct civil penalties with respect to enforcement.

Making life easier for Patients and Family

  • Covered entities may disclose immunization status to schools with documented agreement by a parent, without any signature being required.
  • Family members & caregivers are permitted access to a dead person's records unless that person's prior expression to the contrary is known.
  • A patient can restrict information about care the patient paid for from being shared with a payer or its associates, without any exception.  This is a right, not a request that can be denied.

ABBI Rules

There was a chunk of stuff starting around page 263 and ending around 277 that I found to be very enabling for the ABBI project.
  • When an EHR is available, the individual has a right to have an electronic copy transmitted to the individual's designee.
  • If the patient is notified about the risks of receiving PHI via unencrypted e-mail, and still wants e-mail, they have a right to it.
  • The individual has a right to choose to designate a third-party receiver (person or entity) to whom PHI will be transmitted.


A bunch of stuff popped out as being of some interest:

  • Copiers and fax machines that transmit may also store PHI, so the storage on them must be treated like any other PHI storage.
  • If a covered entity is paid for marketing activities, it must a) have patient authorization, b) tell the patient that it is getting paid for the activity, and c) let the patient opt out, even if the activity is with respect to treatment or operations.
  • Persons who have been dead for more than 50 years do not have protected health information; it's just health information.
  • More flexibility is given for how research authorizations can be combined with treatment authorizations.  Research authorizations need not be study-specific anymore, but must adequately describe the purpose of use.
  • There are many changes necessary to the Notice of Privacy Practices. Health plans can post these on their web site and include them in their next yearly mailing. Providers must prominently display and make copies available to patients onsite.

Into the Breach (Rule)

  • The phrase "low probability of breach" replaces "no significant risk of harm"; keep practicing those risk assessments.
  • "We didn't know" isn't an excuse if you should have known.
  • If you want to look at how other covered entities and BA's handled a breach, Mickey Tripathi wrote a great post.  You'd think HHS could have put that link in themselves.
  • Waiting until the last minute after a breach to notify patients may be considered an unreasonable delay.  60 days is the upper limit.
  • "We retain § ____ in this final rule without modification" seems to be the preferred response on the Breach rule.

Genetic Information Nondiscrimination Act (GINA) Rules

  • The use or disclosure of genetic data for underwriting purposes is prohibited for health plans covered under HIPAA, except for long-term care plans, for which the jury is still out, so it is still allowed.
  • GINA defines family member as dependent or 1st through 4th degree relatives (including by marriage or adoption). Do you know any of your 4th-degree relatives?  One of mine is famous.
  • Data about disease manifestation in family members is genetic data.  If you have a disease, your children cannot be dinged for it.
  • Underwriting means things like rules for eligibility or determination of benefits, premiums, cost sharing, exclusions for pre-existing conditions, and contract creation or renewal.

The fun Bits

  • (Page 43) The US flag needs 6 more stars ;-) as DC, PR, Guam, Virgin Islands, American Samoa & Northern Mariana Islands are defined by the rule as states.
  • (Page 44) My favorite change is the paragraph that describes the removal of a comma (cf. this tweet).
Overall, I don't find anything objectionable about what they did.  It's really about what they didn't manage to do - but I'll leave the missed opportunities for John Moehrke to write about.

Thursday, January 17, 2013

Reverse Engineering the Quality Measurement Process

This week has been very productive thus far.  Yesterday, I described in a bit more detail one of the reasons why I felt that HeD needed to be aligned with HQMF.  If you look at an event condition action (ECA) rule, the basic structure is:

    on (event) if (condition)
    then { action }

Let's transform the structure a bit, treating condition and action as predicates over a patient p:

    action(p) and condition(p)

It should be obvious that this is a transformation that can be automated.

In this transformation, the numerator is the set of patients for which the predicate action is true and for which the predicate condition is true.  The denominator is the set of patients for which the predicate condition is true.  This is the definition of a measure.  It just so happens that I have a perfect example to play with: an ECA rule developed from NQF Measure 68 that appeared in the HeD ballot.

I restructured that rule to use an HQMF-like declaration.  Now I want to build a transformation that reverses the process: instead of creating a rule from a measure, it creates a measure from a rule.
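The core of the numerator/denominator relationship is small enough to sketch in Python.  The predicates and data below are toys; the real transformation generates HQMF criteria rather than counting patients:

```python
def measure_rate(patients, condition, action):
    """Numerator: patients where both condition and action hold.
    Denominator: patients where condition holds (the ECA rule's 'if' part)."""
    denominator = [p for p in patients if condition(p)]
    numerator = [p for p in denominator if action(p)]
    return len(numerator), len(denominator)

# Toy population: (age, antithrombotic_prescribed) -- invented for illustration.
patients = [(72, True), (65, False), (40, True), (17, False)]
condition = lambda p: p[0] >= 18   # e.g., in the eligible population
action = lambda p: p[1]            # e.g., antithrombotic was prescribed

print(measure_rate(patients, condition, action))  # (2, 3)
```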

Data Criteria

In the data criteria section, there are:

  • an observation criterion of AMI_Diagnosis
        <id root="0" extension="AMI_Diagnosis"/>
        <statusCode code="completed"/>
        <value xsi:type="CD" valueSet="2.16.840.1.113883.3.464.1003.104.12.1001"/>
  • an observation criterion of IVD_Diagnosis that looks very much like the above, save that the id is IVD_Diagnosis and the value set is different.
  • a procedure criterion for CABG_Procedures
        <id root="0" extension="CABG_Procedures"/>
        <title>Coronary artery bypass graft procedures</title>
        <code valueSet="2.16.840.1.113883.3.464.1003.104.12.1002"/>
  • a procedure criterion for PCI_Procedures that looks very much like the above, save that it is related to percutaneous coronary interventions, rather than CABG.
  • an observation criterion for age18AndOlder
        <id root="0" extension="age18AndOlder"/>
        <code code="424144002" codeSystem="2.16.840.1.113883.6.96" displayName="Age"/>
        <value xsi:type="IVL_PQ"><low value="18" unit="a"/></value>
  • a substance administration criterion representing a prescription for an antithrombotic in the past year.
        <substanceAdministrationCriteria>
          <id root="0" extension="onAntiThrombotic"/>
          <title>Prescribed antithrombotic w/in the past year</title>
          <effectiveTime>
            <low><expression>date(now,-1,"a")</expression></low>
          </effectiveTime>
          <participant typeCode="CSM">
            <roleParticipant classCode="THER">
              <code valueSet="2.16.840.1.113883.3.464.1003.196.12.1211"/>
            </roleParticipant>
          </participant>
          <definition>...</definition>
        </substanceAdministrationCriteria>
  • an observation criterion representing a reason not to prescribe an antithrombotic.
        <observationCriteria>
          <id root="0" extension="antithromboticNotPrescribedForDocumentedReason"/>
          <title>Patient or other Reason for not prescribing an antithrombotic</title>
          <code code="G8697" codeSystem="2.16.840.1.113883.6.12" codeSystemName="CPT-4"/>
          <definition>...</definition>
        </observationCriteria>

These will not be transformed, because they are the same for both.

Condition Action Section

In the condition action section, there are condition criteria that are ANDed together:

The patient must be 18 or older.
    <id root="0" extension="age18AndOlder"/>
The patient must not be on an antithrombotic, nor have a documented reason for not being on one.
    <id root="0" extension="onAntiThrombotic"/>
    <id root="0" extension="antithromboticNotPrescribedForDocumentedReason"/>
The patient must have a diagnosis of AMI or IVD, or be scheduled for CABG or PCI.
    <id root="0" extension="AMI_Diagnosis"/>
    <id root="0" extension="IVD_Diagnosis"/>
    <id root="0" extension="CABG_Procedures"/>
    <id root="0" extension="PCI_Procedures"/>

This represents the condition in the ECA rule.  In the measure, I'd like to ensure that the denominator matches all of these preconditions.  I can simply move all the preconditions to be inside the denominatorCriteria in HQMF.  Here's a snippet of XSLT which would do that:

    <xsl:template match="hed:conditionCriteria">
      <denominatorCriteria>
        <id root="c75181d0-73eb-11de-8a39-0800200c9a66" extension="DENOM"/>
        <xsl:copy-of select="hed:precondition"/>
      </denominatorCriteria>
    </xsl:template>

    There are three choices presented to the provider for things to do in the ECA rule in a Choose One Action:

      <title>Treatment and documentation options</title>
      <text>Treatment or documentation a clinician may order or 
        perform for an IVD patient with no prescribed antithrombotic
        in the patient record</text>
    The action itself needs to be turned into a numerator criterion.  Note: there's also a Choose At Least One Action that I proposed.  The Choose One Action would map to the OnlyOneTrue construction in HQMF, and the Choose At Least One Action would map to the AtLeastOneTrue construction.  Here's an XSLT snippet that would perform that transformation:
    <xsl:template match="ChooseOneAction">
        <xsl:copy-of select="hed:title|hed:text"/>
        <xsl:apply-templates select="hed:option/*" mode="numerator"/>
    </xsl:template>

    Each of the options appears as follows and needs transformation as well:
    1. Prescribe an antithrombotic

        <participant typeCode="CSM">
          <roleParticipant classCode="THER">
            <code valueSet="2.16.840.1.113883.3.464.1003.196.12.1211"/>
          </roleParticipant>
        </participant>
    2. Document an antithrombotic that already existed.

        <title>Document antithrombotic prescription in the patient's active med list</title>
        <participant typeCode="CSM">
          <roleParticipant classCode="THER">
            <code valueSet="2.16.840.1.113883.3.464.1003.196.12.1211"/>
          </roleParticipant>
        </participant>
    3. Document the reason for not prescribing an antithrombotic

        <title>Document reason for not prescribing aspirin or other antithrombotic</title>
        <code code="G8697" codeSystem="2.16.840.1.113883.6.12" codeSystemName="CPT-4"/>

    Proposals don't exist in HQMF since it looks back at what was done, so we need to look for orders or events that happened.  These need to be turned into criteria that show up in the data criteria section.  The substance administration proposal describes what needs to be done.  We just change the name from substanceAdministrationProposal to substanceAdministrationCriteria and we obtain a criterion that matches the proposal.  Changing the name of an element is trivial in XSLT.  The pattern in the template looks like this:

      <xsl:template match="hed:substanceAdministrationProposal">
        <substanceAdministrationCriteria>
          <xsl:copy-of select='node()'/>
        </substanceAdministrationCriteria>
      </xsl:template>
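
    Outside of XSLT, the rename described above is equally trivial.  Here's a hedged Python sketch (the sample proposal XML is invented for illustration) that keeps attributes and children and changes only the element name:

```python
import xml.etree.ElementTree as ET

# A minimal, invented stand-in for a substanceAdministrationProposal.
proposal_xml = """<substanceAdministrationProposal>
  <id root="0" extension="onAntiThrombotic"/>
  <title>Prescribe an antithrombotic</title>
</substanceAdministrationProposal>"""

def proposal_to_criteria(proposal):
    # Keep attributes and children as-is; change only the element name.
    criteria = ET.Element("substanceAdministrationCriteria", proposal.attrib)
    criteria.extend(list(proposal))
    return criteria

criteria = proposal_to_criteria(ET.fromstring(proposal_xml))
```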

    At this point, we now have a measure, but we need to specify a measure period.  We'll leave that choice up to the measure developer.

    There are a couple of things this measure doesn't have:

    1. It doesn't distinguish between initial patient population and denominator criteria.  We could add a marker in the rule to help distinguish those.
    2. It doesn't distinguish criteria that are present because they represent exceptions or exclusions. Again, we could add markers in the rule to help distinguish those.
    3. It doesn't address dynamically created proposals.  A rule might propose a specific medication, but also a dose that is related to the patient's weight or age.  Having dynamic criteria is a subject for another post, but this is also related to the next issue.
    4. A proposal can be overridden, changed, or customized by the provider.  The rule will often provide suggested values.  There are essential (with respect to measurement) components (e.g., the medication), and non-essential components (e.g., a text description of the reason why a medication wasn't given).  If we just turn the proposal into a criterion, we haven't addressed the essential vs. non-essential distinctions in the proposal.
    This last case can be addressed in the rule, using a structure similar to how I dealt with offering choices for different medication dosing regimens.  In this case, the first part of the proposal offered the essential detail (the medication), and the options within the proposal offered suggestions for non-essential details that can be altered by the provider acting on the proposals.

    Over the next week, I'll be writing this transformation, and addressing some of these issues.  What I've also demonstrated here is a very strong reason for keeping HeD and HQMF aligned as we develop both of these ballots.  HeD was asked to align itself with HQMF in my ballot comment, but at the same time, HQMF must also align itself with HeD.  I'm hoping that there is a ballot comment we can hang that on in HQMF, but if necessary, I'll take it through channels to get what we need in HL7 to keep these two ballots aligned.

    -- Keith

    Wednesday, January 16, 2013

    Slap me

    While I've been at the HL7 WGM, I've also been spending time thinking about ABBI and OAuth 2.0 and dynamic client registration.  I still don't have the details and capabilities of dynamic client registration fixed firmly in my head.  Part of the reason I'm working through this so hard is a threat to the ABBI ecosystem that doesn't yet seem to be mitigated.  Let me start with some assumptions:

    ABBI PULL will use OAuth 2.0 for authentication.  It will need dynamic client registration to support authentication of client applications.  The reason that authentication of clients is necessary is that we want to establish trust in the ABBI ecosystem.  That means that applications need to be trustable, or not trustable, and that trust once granted could subsequently be revoked.
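
    To make the registration step concrete, here's a hedged sketch of the kind of request a client might send to an authorization server's registration endpoint.  The field names follow the (then-draft) IETF OAuth 2.0 Dynamic Client Registration specification and are purely illustrative; ABBI has not settled on a profile, and the client name and URI below are invented:

```python
import json

# Hypothetical dynamic client registration request body. Field names are
# from the draft OAuth 2.0 Dynamic Client Registration spec; the values
# are invented for illustration.
registration_request = {
    "client_name": "Example PHR Viewer",
    "redirect_uris": ["https://phr.example.org/oauth/callback"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "client_secret_basic",
}

body = json.dumps(registration_request)

# The registration endpoint would respond with minted credentials for
# this registration, e.g. a client_id (and possibly a client_secret).
```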

    To identify a client application installed on a user's system, it must be provisioned with credentials so that an authorizer can identify that installation, and grant trust (or not) based on both the user authorization and the client application identity.  The client application identity could be specific to an installation, or cover all installations of the same client, or an instance's identity could be linked to an "application class".  Revoking trust on an instance-by-instance basis is tedious; it means that having seen one threat, you address it, but that doesn't help you with the next one.  In the linked instance-to-class model, trust could be revoked from a specific instance, or from all instances of the same client, depending on what happened in the ecosystem.
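
    One way to picture the linked instance-to-class model is a registry that can revoke trust at either granularity.  This is a minimal, hypothetical sketch (all names invented), not a proposed ABBI design:

```python
# Hypothetical trust registry linking client instances to an application
# class, so trust can be revoked per instance or for the whole class.
class TrustRegistry:
    def __init__(self):
        self.instance_class = {}       # instance id -> application class
        self.revoked_instances = set()
        self.revoked_classes = set()

    def register(self, instance_id, app_class):
        self.instance_class[instance_id] = app_class

    def revoke_instance(self, instance_id):
        self.revoked_instances.add(instance_id)

    def revoke_class(self, app_class):
        # One decision revokes every instance of the same client.
        self.revoked_classes.add(app_class)

    def is_trusted(self, instance_id):
        if instance_id in self.revoked_instances:
            return False
        return self.instance_class.get(instance_id) not in self.revoked_classes
```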

    The threat to an application is this: if an instance of an application can be impersonated, then an attacker can deploy numerous instances of a rogue application that impersonates a real application (e.g., through a bot-net).  These rogue applications behave badly in a way that is detectable by the authorizer.  This results initially in a revocation of trust, first from each badly behaving instance, but eventually from any instance purporting to be that application.

    The attacker in this case has successfully denied users of that application the services it provides in the ecosystem.  The reputation of the organization supplying the application also suffers.

    Denial of service might seem to be a pretty minimal attack.  No PHI was lost, damaged or stolen.  Service will be restored eventually.  This isn't about not being able to access Twitter, though.  This is healthcare data.  People have died because they didn't have their medication list readily available when they needed care.  Am I over-responding?  I don't know.

    The problem is that in the OAuth threat model, application code isn't secure.  If you've ever used a Java decompiler, you know what I mean.  I've seen C/C++ decompilers that work with x86 architectures that are also pretty good.  So the code isn't safe, nor is the data in your application, so it cannot be safely pre-configured with secrets. Even assuming that an attacker has limited resources, modern tools can reconstruct the application code readily enough to enable serious hacking (been there, done that, all for legal purposes).

    Post-configuration of the application with a secret is possible (because the application can store the secret in protected storage), but there is a chicken-and-egg problem.  Because the application code isn't secure, the protocol by which the installed application obtains its post-configured secret is known, and an attacker can then use that protocol to obtain an installed application instance's post-configured secret.  Having that, they can now impersonate the installed instance.  So anyone could masquerade as any application for which they had the code, and if they can do it once, they can do it a thousand times (or more).

    The responsibilities of members of the ABBI ecosystem are not equally shared.  Applications (running on a device) need to protect a single (or perhaps a few) end-user's data and their own secrets in an installation.  Data holders must protect the data of thousands or even millions of patients.  Authorizers enable the trust in the ecosystem, and have one of the highest levels of responsibility, because if they break down, the ecosystem breaks down.

    And what I cannot figure out is how to enable an authorizer to protect an application from being impersonated.

    This is not my area of expertise.  I know enough about security (which is quite a bit, actually) to know how not to get my face slapped, and when I need expert assistance.  This is one of those cases.  Yet, because I understand more of OAuth than many others (there are a few exceptions, including Josh M, who's been really helpful), I'm putting my face right out front.  Slap away.

    Monday, January 14, 2013

    What is a Hospitalization?

    An interesting issue on the HQMF ballot came up today in the HL7 Structured Documents workgroup that I thought was worth talking about here.  The concept that was needed was "hospitalization".  But what is that?

    The short answer is it's rather complicated.  The (really) long answer is below:

    Let me first describe a situation:

    A patient suffers stomach pain, goes to the emergency room, is evaluated and treated, and transferred to a bed for observation.  After about 20 hours, they have not gotten better, and so are admitted to the hospital.  They undergo a series of diagnostic tests, and after the doctors figure out what is wrong with them and make a diagnosis, they are transferred to another facility for treatment.  After treatment, they are then transferred back to the original facility.  They are then discharged home.  They have a follow-up visit with a specialist, who recommends therapy.  The patient walks over to therapy, and starts the first course the same day.  After the last therapy visit the patient has another follow-up with the specialist (again on the same day).  And then there is a final phone conversation with their general provider before they are finished with the condition.

    In the HL7 RIM, these various events are modeled as a PatientEncounter.  The definition of encounter in the RIM is: An interaction between a patient and care provider(s) for the purpose of providing healthcare-related service(s).

    Examples given in the RIM include: Outpatient visit to multiple departments, home health support (including physical therapy), inpatient hospital stay, emergency room visit, field visit (e.g., traffic accident), office visit, occupational therapy, telephone call.

    So let us attempt to identify all of the "encounters" this patient had.  We'll get to hospitalization eventually, but we need these concepts first.
    1. The emergency room visit.
    2. The admission for observation.
    3. The subsequent admission to the hospital.
    4. The transfer to another facility.
    5. The transfer back to the original hospital.
    6. The first follow-up with the specialist.
    7. Six visits to therapy
    8. The second follow-up with the specialist.
    9. The phone conversation with the GP.
    Except that in some cases, #1 and #2 could be part of the same encounter by some views, or it might be that #2 and #3 are parts of the same encounter, or #1, #2 and #3 could be considered to be a single hospitalization.  #3, #4 and #5 are each separate except for the case where the other facility and the original hospital are treated as the same entity, in which case they could be treated as one encounter.  Encounter #6, and the first encounter of #7 could be considered to be part of the same encounter, as could the last visit of #7 and visit #8 (under the single visit to multiple departments, if PT and the specialist are simply different departments of the same organization).  The therapist might even describe all of #7 as one encounter.  It's pretty clear the hospitalization stops after encounter #5, but where does it start?  Don't worry, I'll get there.

    Patient Views of an Episode
    Consider another case, treatment for arthritis of the knee.  Imagine the treatment includes two different attempts at physical therapy, an MRI, an injection, and finally surgery followed up by more PT.

    Imagine I want to look at how much treatment for arthritis costs.  There are a lot of ways I could view this.  "How much did the surgery cost me?" could cover both surgery and PT, or just surgery, depending on what I mean.  Did I mean just the surgery, or the surgery and the necessary follow-up care?  On the MRI, did I mean just the MRI, or also the pre- and post-MRI visits to the specialist to review the condition, order the MRI, and evaluate the results?  "How much did PT cost?" could mean each session separately, or all of them together.
    Now we introduce another concept, which is a super-set of hospitalization, called the episode of care.  Episode of care is not defined in the RIM, nor can I find an adequate description of it.  What constitutes an episode of care varies depending on context.  From the patient perspective, the broadest view of the episode of care goes from onset to resolution of the condition.  But it also might vary depending on what they want to discover (see the sidebar).  The specialist might consider this episode to be from step 6 to step 8.  The original hospital might go from 1-5, or perhaps 2-5.  The therapist's view would include all six therapy visits.

    Now, imagine that you have a quality measure that is intended to address the cases I described above.  How would you describe the episode of care, and/or the encounters related to the episode of care?

    This isn't a failing of the HL7 RIM.  In this case, it is a failure to include adequate definitions of the term encounter and episode in defining the quality measure.  It is further complicated by the fact that there are varying administrative definitions for encounter and/or episode depending on who you might be dealing with (e.g., CMS), and there may also be clinical definitions for some well-defined episodes of care (e.g., pregnancy).

    Administrative definitions of encounter don't seem to be that helpful in quality measures, because they depend on organization structures that may vary.  So, even though clinically the sequence of events might be identical for a given patient, they could be treated differently if you looked at administrative definitions.

    We could be rigorous in our definitions, but that won't solve the problem.  Let me illustrate.

    Suppose we were to define an encounter as a single contiguous stay, within one care setting (ED, Ambulatory, or Inpatient), at the same location.  We'd have to be more specific about the definition of "same location".  My first refinement of that was "same physical location", but then I had to think about medical centers and some hospitals I know.  My next refinement came up with the same building, or complex of buildings, on a contiguous property, being operated by the same organization.  I had to refine the latter to be the fuzzier "giving the outward appearance of being operated by the same organization", because some complexes are, and some aren't, operated by the same organization.  And I'd further have to relax "contiguous property" to address outliers, like the office one or two blocks over that was leased to address space overflow issues.  Then I'd have to get more concrete, because these definitions are way too fuzzy.  For the purpose of argument, assume I have.

    And then assume I'm satisfied with this.  So, here it comes: I could define a hospitalization as an episode of care defined as a sequence of contiguous encounters at the same location (as not really described above) where at least one of the encounters is an inpatient setting.  Finally, phew!
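
    That definition can be sketched in code.  This is a hedged Python sketch of the idea (all data invented, and "same location" reduced, naively, to an exact string match, dodging everything the previous paragraph just agonized over): merge contiguous encounters at the same location into one episode, and call the episode a hospitalization if any member encounter is in the inpatient setting.

```python
from datetime import datetime

# Hypothetical sketch of the operational definition above. Encounters are
# dicts with start/end datetimes, a location string, and a care setting.
def group_episodes(encounters):
    """Group encounters (sorted by start time) into episodes of
    contiguous stays at the same location."""
    episodes = []
    for enc in encounters:
        prev = episodes[-1][-1] if episodes else None
        if (prev is not None
                and prev["location"] == enc["location"]
                and prev["end"] >= enc["start"]):
            episodes[-1].append(enc)   # contiguous stay at the same place
        else:
            episodes.append([enc])
    return episodes

def is_hospitalization(episode):
    # At least one encounter in the episode is in the inpatient setting.
    return any(enc["setting"] == "inpatient" for enc in episode)
```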

    It seems as if I've adequately defined my way around the ambiguity, and we could write the measure.  Let's say that I'm satisfied with these definitions (which is a big leap), and that you are satisfied with them as well (an even bigger leap).

    While you could now define a quality measure about a hospitalization, the definitions that I've given do not operate on the data that most systems capture.  Encounter is principally an administrative concept, and the data about it is captured using administrative definitions, not rigorous definitions imposed by some misguided, though perhaps well-meaning, standards geek.  Even though what I've defined allows me to distinguish encounters 3, 4 and 5, they may be simply captured as one encounter at some organizations, and those organizations may not have the necessary data to distinguish them as separate encounters.

    So, are we back to the drawing board?  Perhaps, and perhaps not.  I think the idea of coming up with operational1 (rather than administrative) definitions of encounter, episode of care, and concepts like hospitalization, will be useful going forward.  Organizations will have to map from what they capture (mostly administrative) to the operational definitions.  The value of having consistent operational definitions of these concepts is that it makes quality measures more comparable, and less sensitive to administrative variations.

    But it won't happen overnight, and it will take some time for these operational definitions to come into use in Health IT solutions.

    -- Keith

    1 I call these operational, because they are being used in quality measures, which is the operations part of TPO (treatment, payment and operations).  There are also clinical definitions for episodes of care (e.g., pregnancy can be pretty well defined clinically).  There may be different clinical definitions for encounter, but I think that's probably too many angels dancing at this point.

    Sunday, January 13, 2013

    FHIR = Free of Heavy Infrastructure, Really!

    I just finished participating in the 2nd HL7 FHIR Connectathon.  My goals for this connectathon were relatively straightforward:

    1.  Search for a patient by id, name or other demographics (e.g., Gender and DOB).
    2.  Update the patient record with an alternative identifier used to support authorized access to their health information.
    3.  View XdsEntry resources and the Documents associated with the selected patient.
    4.  Enable Authorization and Login using OAuth and OpenID (stretch goal).

    I managed 1 & 2, starting this time with absolutely no code.  I made no progress on goals 3 and 4.  Goal 3 was really not material to my use case.  Goal 4 was simply not possible given the time I had, but I'm going off to my room now to work on that.  If I can make progress on that, I'll report about it later.

    I had a few challenges with caching of search results, which had to do with my IE settings.  That took me at least two hours to work out, and was made more complicated by some code changes which had been posted to one of the servers that broke search.  I hadn't planned on using IE as my browser, since I prefer Chrome.  However, due to a strange interaction between Chrome's implementation of XMLHttpRequest and one of the test servers, I couldn't get my solution to work in Chrome.  So, I downgraded to IE 8.

    What was interesting about this exercise was that all my code was HTML, CSS and JavaScript.  I had no need to install a web server, JSP/Servlet engine, XML or XSLT processor, or anything else.

    This has huge benefits to integrators.  Imagine that you are tasked with creating an ADT record, or updating an admission record in a Health IT system today.  What do you need?  Let's take a look at this using HL7 V2, V3 and FHIR:


    To deal with this in V2, you first need some way to access the ADT record.  In V2, you could do this using a control query message, but only if your Health IT solution supports V2 Queries for those types of records.  If it doesn't, you'll have to tap into the database.

    You'll need to be able to create and send HL7 V2 messages, and receive and parse the responses.  Likely that means that you need an interface engine of some sort.  You cannot really do this from inside a web page without quite a bit of infrastructure.  At the very least, you'd need a signed ActiveX Control that integrated with your interface engine.  More than likely, you'd be coding pages in the interface engine, or you'd be running a web server and doing the integration with the interface engine (and the Health IT system that it was connected to) behind the scenes.  Many interface engines already support generating web pages, and some even launch and work well with a bare-bones server like Jetty.


    To deal with this in V3, you will need an XML Parser, and tools to deal with Web Services, like Apache Axis 2.  Again, you'll probably need a web server.  You'll have to work through a number of integration issues with the V3 messages, including making choices about (or at least configuring) vocabularies that can be used.  If you are lucky, your Health IT system will implement the appropriate V3 messages.  You may be able to query for the patient record, or you may have to shadow the ADT content by listening to existing messages, or again, you could have to tap into the database.


    With FHIR, you need a modern web browser that supports XMLHttpRequest.  Query is built in for resources, as are the basic operations such as create, update and delete.  All of your code can reside in a web page.  What is even more significant is that the syntax you use to access the resources is very much aligned with your programming language, so long as there is good JSON support for it.  So, if you wanted to access the first patient identifier, it looks like this: thePatient.identifier[0] (taken from the code I wrote this weekend).
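
    The same one-line access works in any language with good JSON support.  Here's a Python sketch against a stripped-down Patient resource (the JSON shape below is illustrative, loosely after the draft-era FHIR spec, and the values are invented):

```python
import json

# A minimal, invented Patient resource for illustration.
patient_json = """{
  "identifier": [
    {"system": "urn:oid:2.16.840.1.113883.4.1", "value": "123-45-6789"}
  ],
  "name": [{"family": ["Boone"], "given": ["Keith"]}]
}"""

the_patient = json.loads(patient_json)

# Mirrors the JavaScript expression thePatient.identifier[0]
first_identifier = the_patient["identifier"][0]
```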

    Comparing my experience here with experiences in V2, V3 and yes, even CDA, FHIR is clearly the way to move forward.

    The FHIR team is already planning the next FHIR Connectathon.  We discussed what the focus should be at the next one.  One topic of special interest to me was paying attention to documents in FHIR.  It should be interesting.

    Friday, January 11, 2013

    Adventures in Plumbing 2

    Yesterday, my old water filter assembly began leaking from one of its compression fittings.  I told my wife I would get to it after my call, but she apparently didn't hear me say that. When she came down to check on it, she tried to get it to stop leaking, and it started spurting water.  So I quickly turned off the water.  After I finished my call, I examined it further.

    The fittings were old, and the rubber had lost its elasticity, so when it took a good knock and started leaking, there really was no getting it back together well.  Plus, the nuts had corroded to the mounting plate (who designs these things, anyway), and so no matter how much I strained, there was no loosening or re-tightening of the fitting possible.  I turned off water valves, and nudged everything back into place as best I could.  I turned back on the valves and there was a small drip.

    My father could fix just about anything around the house, and he taught all of his children what to do.  But I was pretty sure my plumbing skills weren't up to this challenge, and that it would save time, aggravation, and money to have a professional fix it.  Soldering takes practice to do well, and I had little experience ever doing it myself, and none in the last twenty years.  But getting a plumber on short notice was going to be expensive.  I decided that I would suffer with a small drip until I could get back from the HL7 WGM and supervise a plumbing repair.

    Today, my wife reported that the cold water taps in two locations weren't working.  I went back downstairs, and sure enough, had forgotten to turn back on one of the shut-off valves.  The drip really didn't get any worse, but I kept my ear out during the day.  This afternoon, someone started running water upstairs, and I could hear the drip quickly progress from drip to stream to spray.  So I shut things off again.

    This was now a mess, because I leave tomorrow morning at Oh-dark-hundred to go to the HL7 WGM.  I knew my plumbing skills weren't up to the challenge, but I checked out the internet just in case.  I found something called Sharkbite, which was just a push-to-fit fitting that promised to be water-tight and easy to use.

    I picked up the 18" hose (see picture above) and a pipe cutter, cut out the filter (we weren't using it any more after we had switched over to a water cooler), and pushed on the hose.  It all worked, simply and easily, and the repair (including a half-hour trip to the hardware store), took less than an hour.  A real plumber could do it in 5 minutes.  If you have to do plumbing, this is the way to do it.  Even an unskilled person like me could use them.

    Now today, on the HITsm tweet chat, we were asked how we would know when interoperability had been achieved.  I half-jokingly reported that it would be when this blog was no longer necessary.  But imagine, what if it was that easy to connect two healthcare IT systems?  That's where HL7's FHIR comes in for me.  FHIR makes it easy for someone with very little experience to hook things up quickly and easily.  If you have to do Healthcare IT plumbing, that's the way to do it.  I look forward to the second FHIR Connectathon this weekend.  I have to admit that I'm not as prepared for it as I'd like to be, but I'm not really worried about that.  After all, I have quite a bit of skill in this art, and FHIR makes the connections easy.

    Support: A Key Component in Delivering Standards

    There's been a lot of discussion over on the Structured Documents workgroup list about how SDWG will provide support for C-CDA and other specifications the workgroup creates.  According to the HL7 Governance, the TSC is responsible for providing interpretations and publishing guidance on the use of HL7 standards, but as one in-the-know member put it, "That mechanism has never been operationalized."  Wes Rishel alluded to some of these discussions in his post: A Much-Needed Nudge for Stage 2 Interoperability a few days ago.

    A subcommittee of Structured Documents was formed to propose how this would be implemented, and we had our first meeting yesterday.  I put together a list of questions based on our discussions, which I shared with the list.  Other members of the committee chimed in with additions (thanks to Robert, Thom, Lisa and Brian), and Brian Zvi Weiss put together a consolidated list of all of the input, which you see below.  He proposed several additions to the list, and some rewording to turn the questions into declarative statements about what we are doing.

    Over the next few weeks, I see this project making significant progress.  With regard to tools, HL7 already has GForge, which could support issue input, tracking and publication.  There are other tools out there, but unless and until we obtain a budget for anything, we'll have to use what is freely available.

    Here's the current state of things.  I welcome your input here, or on the HL7 Structured Documents mailing list.

    Mission Statement

    Establish a process for managing and responding to implementer questions about the C-CDA standard.


    Project Goals

    • Rapid start-up, and results, validated by implementer feedback
    • Increase consistency of C-CDA adoptions
    • Reduce need for pre-negotiation in the processing of discrete data
    • Provide input and experience to relevant HL7 groups for possible application outside of SDWG and C-CDA
    • Establish foundations for “examples library” for C-CDA

    Project Need

    • C-CDA focuses on template definition, not implementation guidance
    • There isn’t a proven operational model for supporting a standard for rapid, large-scale adoption

    Execution Guidance

    • Prioritize support for MU stage 2 data elements
    • Develop common terminology to parse issues to determine which process they go through
    • Create, and manage a work list of issues and ensure both immediate/tactical/interim resolution, and monitoring and follow-through of longer-term validation of interim solution and/or alternative via standards evolution process
    • Work two tracks in parallel:
      • Formulation of the proposed process
      • Experimentation on execution of elements of the proposed process with current active questions from listserv and C-CDA DSTU 1.1 errata items marked “in process/review”
    • Success Criteria
      • To be determined
    • Scope
      • To be determined

    Process Outline

    1. Question asked/posted/surfaced as per agreed mechanisms
    2. Manage the set of issues
      1. Ensure question meets guidelines in terms of scope and quality (specific, clear, relevant, proposed sample XML, etc.)
      2. Classify each issue with respect to kind of guidance necessary [Brian: proposed addition] classification impacts handling and SLA (committed time for each phase of handling)
      3. Assign the issue to relevant parties for resolution
    3. Develop resolution
    4. Seek required levels of approval as per the nature of the question and the proposed resolution (and their classification)
    5. Report to SDWG
      1. SDWG is ultimate “approving authority” either explicitly or via delegation to support sub-group via agreed mechanisms per each established classification level of question/proposed-response (bulk-reviews, individual topic discussions, post-response review, etc.)
      2. Rejected resolutions go back to earlier states, as appropriate
    6. Finalize and publish types of guidance requiring publication (more generally, work each question/guidance situation to the appropriate closure state)
    7. Special focus on build-out of examples database generated directly and indirectly in the context of resolution/guidance/responses or proactively
    8. Maintain previously generated guidance artifacts
    9. Provide input to resources responsible for current standards maintenance activities (errata processing/publishing, re-balloted versions, consideration requirements for new versions of standards, etc.)

    Key Questions to Address

    Process Details

    • In general: details of process outlined above with associated tables/flow-charts and clarity of each step (inputs, owners, outputs, SLA, etc.)
    • What point of entry should questions/interpretations be channeled through?
      • Single point of entry managed by HL7 and routed to appropriate WG/responsible person or entity for resolution (includes ANY HL7 Standard)?
      • Single point of entry managed by SDWG (limited only to questions/interpretations related to C-CDA)?
    • Roles/Responsibilities. Defining the roles in the process – for example:
      • Managing inputs – who collects the question from the sources and works with those asking the questions to ensure quality/scope of question.
      • Managing the flow – who manages the list and the items in it as they move from step to step in the process
      • Classification – who determines the classification of the issues which in turn determines the handling flow, the required approval level, etc.
      • Prioritization and resource assignment – who manages the resource pool of answerers and prioritizes the issues as per the required resources and their bandwidth
      • Developing resolution – who is authorized to formulate the answers (for each classification-based path)
    • In the context of roles/responsibilities above, what are the respective roles of “ad-hoc volunteers” who are HL7 members, “ad-hoc volunteers” who are not HL7 members, SDWG, the support sub-group of SDWG, other HL7 groups, etc.?
    • What should the turnaround be on each type of issue?
    • What types of requests are there?
      • A question that can be answered directly using material from the C-CDA DSTU or previously developed official guidance artifacts (generated through this process).
      • A question that requires the material from the C-CDA DSTU and prior guidance artifacts to be further interpreted.
      • A question that points to errata in prior guidance artifacts.
      • A question that points to errata in the C-CDA DSTU.
      • A question that identifies an issue that requires prior guidance artifacts to be reconsidered.
      • A question that identifies an issue that requires the C-CDA DSTU to be reconsidered.
    • In the context of the “types of guidance requests” above, we need clear guidelines on what is in scope and out of scope for this process, as well as guidance on quality requirements for questions.
    • What is the relationship between Questions and Issues?
    • Might several different questions relate to or be consolidated into a single issue before determining the type of guidance required?
    • What kinds of guidance are there?
      • Errata (a typo or mistake in the drafting of the standard or a prior guidance artifact)
      • Clarifications (can be explained using just the standard or prior guidance artifacts)
      • Interim Guidance (requires some interpretation of the standard)
      • Change Proposals (affects either the standard or the prior guidance artifacts)
      • New Feature Proposals (affects the standard)
      • Implementation Assistance (What am I doing wrong?)
      • External issues (The NIST Validator …)
    • How is each type of guidance developed/handled?
    • What makes guidance authoritative?
      • A resource chartered to supply pointers to the existing standard or existing guidance artifacts delivers information that addresses the question
      • Subcommittee Approval
      • Committee Vote
      • TSC Approval


    • What tool is used to manage the items and monitor/workflow their progress?
    • What tool is used to manage the examples database?
    • How is guidance distributed: mailing list, newsletter, wiki, other infrastructure (e.g., GForge), etc.?
    • How are guidance artifacts managed (stored, accessed, maintained, tracked for usage)?

    Business Model

    • Resourcing. What level of resourcing, what source of funding, mix of paid and volunteer work, etc.
    • Usage rights/cost. Who gets to use this support mechanism (everyone? HL7 members only?), what does it cost, is there a single answer to those questions or is it a tiered answer (e.g. some level of usage free to everyone, some to HL7 members only, etc.)

    Thursday, January 10, 2013

    Clinical Quality Workgroup Call Summary

    The HIT Standards Clinical Quality Workgroup met today.  We were tasked by the HITSC with responding to certain sections of the Meaningful Use Stage 3 Request for Comments.

    On Retiring Certain Meaningful Use Measures:
    The workgroup seemed to be in agreement that once a measure passes 80%, we should start looking at measures that reflect use of the data captured, rather than just testing for data capture.

    On Care Plans:
    We should look for use of certain basic data elements in care plans, and continue to promote standardization of care plan data elements.

    On Clinical Quality Measures (CQMs):
    We should consider redevelopment of quality measures that work with the information models and clinical data that are required under meaningful use, rather than simple retooling of existing quality measures.  We should also consider using HQMF Release 2.0.

    On Technology for CQMs:
    Data models are incredibly important.  We need to have a consistent data model across Health IT.  One speaker noted differences between the Health eDecisions work and the HQMF work (which was my principal complaint about it).

    On having a Core set of CQM's:
    This is rather challenging for specialty providers.  One speaker suggested that while we might want to focus on something that gives us the biggest bang for the buck, it might not be applicable to every provider.  We didn't come to consensus on our position on this call.

    On getting input for the HITPC and its workgroups:
    Attending meetings in DC is rather limiting; we should also consider gathering input at other venues.  For patients, the HITPC needs to go to them, not expect them to come to the HITPC.

    On use of Patient sourced data in CQMs:
    Yes, but... the standards are still under development.  We agreed that it is important to annotate the source of the data.

    On Process vs. Outcome Measures
    Not an either/or question, but rather both/and.  There was agreement that measure suites containing related process and outcome measures would be quite valuable.

    On Measure Development
    It is important to support consistent use of data elements, models, vocabulary and value sets.  We need a measure of quality for quality measures.

    That's about as far as we were able to get.  There were some 30 items we were tasked to address, and we were able to discuss almost all of them in the two hours before we needed to finish the call.

    Math Homework

    My daughter spent the last week on a math project for her Algebra 2 class.  The project is a poster, looking at data about an Olympic sport.  The intent of the project was to compare improvements in the speed of women and men over at least a forty-year period in a single event.  She chose Giant Slalom.  This is a downhill ski race in which the participants weave in and out of flags (gates).

    She did all of the work: collected the data table by hand, hand-drew the scatter plot, computed estimated best-fit lines (using her programmable calculator), added them to the chart, neatly drew out all of the data, used Cramer's rule and inverse matrices, et cetera.  This was several hours of work over the last week, including at least 3 hours last night.  Then she decorated her poster and began writing up her conclusions (the last step of the project).  Everything fell apart.  She struggled with the conclusion because her best-fit lines showed men getting slightly better, but women getting much worse.  It didn't make any sense.
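    For readers who haven't done this since Algebra 2: the least-squares line reduces to a 2x2 system of "normal equations", which is exactly where Cramer's rule and inverse matrices come in.  Here's a minimal pure-Python sketch of that calculation; the function name and the toy data are mine, not from her project.

```python
def fit_line_cramer(xs, ys):
    """Fit the least-squares line y = a + b*x by solving the 2x2
    normal equations with Cramer's rule (an illustrative sketch)."""
    n = len(xs)
    sx = sum(xs)                              # sum of x
    sy = sum(ys)                              # sum of y
    sxx = sum(x * x for x in xs)              # sum of x^2
    sxy = sum(x * y for x, y in zip(xs, ys))  # sum of x*y
    # Normal equations:   n*a + sx*b  = sy
    #                    sx*a + sxx*b = sxy
    det = n * sxx - sx * sx          # determinant of the coefficient matrix
    a = (sy * sxx - sx * sxy) / det  # Cramer: RHS swapped into column 1
    b = (n * sxy - sx * sy) / det    # Cramer: RHS swapped into column 2
    return a, b

# A perfectly linear toy data set (underlying line y = 1 + 2x)
# recovers its own intercept and slope:
a, b = fit_line_cramer([1, 2, 3, 4], [3, 5, 7, 9])
```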

    We went back over her data, and discovered significant differences in the numbers.  Then we saw that there were also changes in how the data was reported from year to year.  I asked her to go back over the data again and put it into a spreadsheet, checking for those changes in how it was reported, while I headed off to get more posterboard (at 9:30pm).

    While I was gone, she had discovered other challenges with the reported data, and couldn't get the forty-year period for both genders that she needed.  When I returned, I looked at her problem again.  We put all the numbers (which she had again carefully tabulated by hand) into a spreadsheet.

    I showed her how to turn that into a scatter plot.  Then I showed her how to plot a regression (best-fit) line through each data set, and get the equation.  "What's R²?" she asked me when I checked the box to display that value too.  We looked up the definition of the correlation coefficient on the web.  I then explained to her that it measures how much of the variation in the data points the fitted line accounts for.  She looked at my plot and noted that the men's results had an R² of 0.000.  Yup, I responded.  And we went digging further.
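    For the curious, R² comes straight out of the residuals of the fitted line: one minus the ratio of unexplained variation to total variation.  A small sketch of that arithmetic, using made-up numbers rather than her race data (and not the spreadsheet's actual algorithm):

```python
def r_squared(xs, ys):
    """Coefficient of determination (R^2) for a least-squares line fit:
    1 - (residual sum of squares / total sum of squares)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det   # intercept of best-fit line
    b = (n * sxy - sx * sy) / det     # slope of best-fit line
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - sy / n) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Data with no linear trend at all fits a flat line and yields R^2 = 0,
# much like the men's results she plotted:
print(r_squared([1, 2, 3, 4], [1, 3, 3, 1]))  # → 0.0
```

    An R² near zero doesn't mean the data is wrong, just that a straight line explains none of its variation, which is exactly the clue that sent us digging further.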

    We discovered after a bit of reading that Giant Slalom is really different races at each event with the same name (her words and emphasis).   The course varies each time.  The number and position of the "gates", their width, and the distances between them can all change between races.

    Her whole approach to the conclusion changed.  She reported that she had insufficient data to draw a conclusion and explained why.  She wasn't happy.  She wanted a result showing that women's times were improving faster than men's, or at least that both women's and men's times were improving.  But she didn't get one.  At least she reported her results accurately.  I hope her teacher takes into account what she learned from this project, and grades it appropriately.

    In our society's focus on evidence-based medicine, and with my own understanding of how the practice of medicine varies across different providers, I often wonder if they too are running the same races.  Earlier this week, a blog post described the Reproducibility Initiative.  That initiative reruns what its organizers think is "the same race", to see if they get similar results.  Given our recent experience, that sounds like a really good idea.

    -- Keith

    Wednesday, January 9, 2013

    Patient-centric HealthIT

    A question came up on the Society for Participatory Medicine's e-mail list the other day.  Basically, it boiled down to how we would define Patient-centric Health IT.

    As a patient, I have some pretty clear ideas about this.  To get at them, let's look at what I consider to be valuable:

    1. My Health
    2. My Money
    3. My Time
    4. Access to My Information
    5. Access to other information that is pertinent to any of the above

    Here's my initial set of requirements.  Patient-centric Health IT makes it possible for me:

    [1,2] To understand how much my health issues are costing me currently, and how much it could cost me in the future.
    [2] To understand what my costs are for different treatment options at different locations.
    [1,2,3] To be able to compare and contrast my options for different providers with respect to availability, distance, cost, quality and effectiveness.
    [3] To quickly and easily schedule appointments electronically, at times that are convenient for me.
    [3] To quickly and easily obtain a telehealth consultation for health issues that aren't urgent or emergent.
    [1,3] To quickly and easily communicate with my healthcare providers.
    [1,2,3] To be able to coordinate my care with my healthcare providers. 
    [1,3] To quickly and easily access care for urgent and emergent issues.
    [3] To quickly and easily fill and refill my prescriptions.
    [4] To access my health information electronically, automatically, without any further intervention once I've set it up.
    [4] To understand my health information.  This could be a lab report, my health record, or any other sort of health data.

    These are the kinds of things that I really enjoy working on, because I can see how it directly benefits me.

    Tuesday, January 8, 2013

    What were we trying to do?

    Wes Rishel recently posted about an ongoing discussion on the HL7 Structured Documents list.  Another discussion is also cropping up about particular wording ("such that it") used in the C-CDA specification.  And then there's the discussion about what the intent was for the C-CDA (and former CCD and C32 content) about the results organizer.

    All of this stems from one challenge, which is how we incorporate requirements (or fail to in most cases) into the specifications.  The specifications tell you what to do, but in many cases, not why.  The why is very important to implementers because it explains the reason it is the way it is, rather than some other way, and provides further implementation guidance about the intent behind each template or structure.

    Most HL7 V3 specifications include storyboards (really, examples of use cases), but CDA implementation guides usually don't, because they are still written in Word.  Even so, V3 storyboards are often rather incomplete with respect to details.

    Some of these challenges are a result of tight deadlines.  All too often, we've been rushed to get something done because of various deadlines that aren't under our control.  It takes longer to write something that explains the rationale behind the specification in addition to writing the specification.

    I've pushed for both rationale and examples (see rules 15 and 16 here), but the tooling we have today (in HL7) doesn't yet support either directly.  Back in HITSP, I insisted that value sets and data elements have more than just a name: they had to have at least a sentence defining them.  I think the same should be true for every constraint.  If we can't write a sentence explaining why a constraint is there, then perhaps we shouldn't have it.

    One of my hopes is that the ballot quality committee newly formed in the HL7 Publication workgroup will take up some of these issues to make this content easier to read.

    This isn't just a problem experienced in Healthcare Standards.  It's fairly common in other complex standards.  One of the "bibles" used by many in the structured documentation world is Charles Goldfarb's The SGML Handbook.  That book includes the entire text of the SGML standard (ISO 8879), extensively annotated.  Michael Kay's various books on XSLT are similarly valuable (I own XSLT 2nd Edition).  I hardly ever try to do any serious HTML programming without dragging down my copy of O'Reilly's Flamingo book, Dynamic HTML, by Danny Goodman.  All of these books on my shelf have extensive tabs, post-it notes, and corner turn-downs marking important sections.  Each of them is rather large (600+, 900+, and 1300+ pages respectively).  Even though I've read the SGML, XSLT, HTML, and CSS standards directly, these books are my primary sources.

    When I wrote The CDA™ Book, those books were the model for how I hoped it would be used, but I could barely manage 300 pages in the year I spent on it.   While I might readily generate more than 500 pages annotating the Consolidated CDA specification, I don't have the time (or funding) to do so, nor do many others involved in developing these specifications.  The tooling challenge has a similar issue.   I don't know exactly how to solve this problem, but I do know that it needs to be solved.  Like everything else, one key ingredient to the solution will be time.