
Friday, July 25, 2014

IHE PCC Strategic Planning

The IHE Patient Care Coordination Technical Committee met this week to review the public comments received on the work it is doing this season.  I'll write up the details of what we did later.  More importantly, the Planning Committee also met to review its strategic road map.

As part of that work, we came up with a revised mission and vision statement for the domain (which appear below) to help us guide the work.

Vision: The vision of Patient Care Coordination is to continually improve patient outcomes through the use of technology connecting across healthcare disciplines and care paths.

Mission: The mission of Patient Care Coordination is to develop and maintain interoperability profiles to support coordination of care for patients where that care crosses providers, patient conditions and health concerns, or time.

We looked at answers to the following three questions to identify key activities for the domain.

 1. What are the leading trends in Health IT and EHR system interoperability that most greatly impact you?

 2. What is the biggest change in thinking that should be applied to the problems of EHR interoperability?

 3. What capability in Interoperability would make you stand up and take notice?

Some of the topics that came up included:

  • Distributed Care Coordination
  • Dynamic Interface Development
  • Updates to QED to support a RESTful transport (using FHIR)
  • Expanding the influence of the Nursing Subcommittee

We are still evaluating the details, so there is still a chance for you to have input.  Please provide it in the comments below.

Tuesday, July 22, 2014

We done DID that

Every year the IHE PCC Committee goes through the process of updating its Strategic Plan.  This process involves brainstorming about the directions we could take with new profiles in the domain.  Yesterday during this process I came up with an extension of the idea originating in the IHE Care Management Profile, and further explored in this post.  The idea is to develop a way to perform Dynamic Interface Development (DID); the general concept is that there is a catalog of data elements from which you could choose information that needs to be transmitted in an interface. The catalog would need to be at the right level of granularity, approximately at the level of an IHE Template, a FHIR Resource, or a V3 RIM Class.  To create the interface, one would create a list of the data elements that are needed (these might be expressed as FHIR Query resources for common cases).  Having made the list, it would be fairly easy to automate creation of an interface to send the collection of data expressed in that set of queries.
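
As a thought experiment, here's a minimal Python sketch of what a DID definition might look like.  Everything in it is hypothetical: the FHIR-style query strings, the endpoint, and the function names are illustrative, not part of any profile.

    import requests  # any HTTP client would do

    # A hypothetical interface definition: each data element is expressed
    # as a FHIR-style search query, at roughly the granularity of a
    # template, resource, or RIM class.
    TRANSFER_SUMMARY_INTERFACE = [
        "Condition?subject={pid}",            # active problems
        "AllergyIntolerance?subject={pid}",   # allergies
        "MedicationStatement?subject={pid}",  # medications
    ]

    def run_interface(base_url, patient_id, queries=TRANSFER_SUMMARY_INTERFACE):
        """Execute each query in the interface definition and collect the
        resulting bundles; a real implementation would then map the
        results into the outbound message or document."""
        results = {}
        for template in queries:
            query = template.format(pid=patient_id)
            response = requests.get(f"{base_url}/{query}")
            response.raise_for_status()
            results[query] = response.json()
        return results

Once the interface definition is just data (a list of queries), generating, validating, and even exchanging interface definitions between systems becomes a much more tractable problem.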

There's a lot of work needed to get there, a lot of little details to sort out along the way, and some pretty major assumptions to test over time.

   Keith



Friday, July 18, 2014

IHE Cardiology Technical Framework Supplement Published for Trial Implementation

The IHE Cardiology Technical Committee has published the following supplement to the IHE Cardiology Technical Framework for trial implementation as of July 18, 2014:
  • Registry Content Submission - CathPCI V4.4 (RCS-C)
This profile may be available for testing at subsequent IHE Connectathons. The document is available for download at http://ihe.net/Technical_Frameworks. An accompanying sample CDA document and sample data collection form are also available for download. Comments on all documents are invited at any time and can be submitted at http://ihe.net/Cardiology_Public_Comments/.

But It Changes my Workflow

I hear from a lot of physicians and nurses the complaint that implementing an EHR changes their workflow.  I also hear (from those who've studied up a bit) that EHR X is no good because it is just an electronic version of their existing paper workflow.

The other day I tweeted:
Why would you use an EHR that DIDN'T change your workflow? Improvement = Change

A few folks commented that change is not necessarily improvement, and I certainly agree.  But let's take a step back from that point.  You are going to start using an EHR, either a different one than you are currently using, or a brand new one if you haven't used an EHR in the past.

Why would you do that at all if NOT to make an improvement in your practice?  And if you are making an improvement, that means that workflows are GOING to change.  I cannot think of a single case where a significant improvement (cost, time, efficiency, outcomes, or otherwise) didn't result from at least one significant change in workflow.

Can you?

I agree, not all changes are improvements, and not all EHR implementations are done well, BUT, my point stands.  If you are implementing an EHR, you should expect your workflow to change.  And if you are implementing it right, you should expect it to improve.

     Keith



Thursday, July 17, 2014

On Backwards Compatibility

HL7 has a great deal of experience with defining rules for backwards compatibility (which is not to say that those rules always worked).  They also have a whole document devoted to substantive change (for non-members, simply download HL7 Version 3 Normative Edition and read Substantive Change under background documents).  IHE and DICOM have similar rules about substantive change, which is part of what versioning and backwards compatibility are supposed to address.

The problem breaks down into two parts.  We'll start with two systems, N (for New) and O (for Old), which are using some version of some standard.  This is what we need for backwards compatibility that will support "Asynchronous Bilateral Cutover".

New Sender (N) sending to Old Receiver (O)

In this case, System O has certain expectations, and is already in the field.  So whatever had to be sent in the message to System O should still be possible to send in the new version of the standard (if it isn't possible, then System N and System O don't have a common set of capabilities).  Moreover, if it had to be sent under the old version of the standard, it should also be required in the new version.  If not, then the two systems cannot readily communicate.

Old Sender (O) sending to New Receiver (N)

In this case, System N also has certain expectations, but (before the new standard is fully baked) is NOT already in the field.  When receiving information from System O, System N should be able to deal with the fact that it is NOT going to get stuff that is new or changed in the new version of the standard.  Usually System N is a new version of System O, and could work under the "old rules".  So, keep working that way.  One of the things that often happens in making changes to a standard is that previously optional stuff becomes required (constraint tightening).  What System N needs to do in these cases is figure out what a safe and reasonable default behavior would be when it gets a communication from System O where those tightened constraints are not yet used.
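
To make that concrete, here's a minimal Python sketch of a new receiver applying a safe default when a newly tightened constraint isn't met by an old sender.  The field name and default value are made up for illustration.

    # Hypothetical example: suppose the new version of the standard
    # requires a "smoking_status" field that was optional in the old one.
    SAFE_DEFAULTS = {
        "smoking_status": "unknown",  # a defensible default when not sent
    }

    def normalize_old_message(message):
        """Accept a message from an old-version sender, filling in safe
        defaults for constraints that were tightened in the new version."""
        normalized = dict(message)
        for field, default in SAFE_DEFAULTS.items():
            normalized.setdefault(field, default)
        return normalized

    # An old sender omits the newly required field; the new receiver copes.
    print(normalize_old_message({"patient_id": "12345"}))
    # {'patient_id': '12345', 'smoking_status': 'unknown'}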

Moving On

Tightening of constraints should be OK behavior in moving from version O to version N of a standard.  We know that loosening constraints is often a problem as we move forward.  However, the expectation of the receiver should be that old constraints are still OK for some period of time, in order to allow others to catch up asynchronously.  Determining this set-point is an interesting policy challenge.  At the very least, you should "grandfather" older systems for at least as long as they have to "upgrade" to the new version of the standard, and perhaps a little longer to deal with laggards (as we've seen has been necessary for 5010 and ICD-10 cutovers).

At some point, you have to drop support for the old stuff.  A good time to do that might be when it is time to move to version M (for Moving on) of the standard.  What we want to do to support Asynchronous Bilateral Cutover is stage changes so that required behavior is phased in more slowly: things that were not previously required start off as desired, or preferred, behavior (required in new systems, and allowed in old systems).
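
As a rough Python sketch of those rules of thumb (the three-level conformance scale here is my simplification, not anything normative):

    # Simplified conformance levels, loosest to tightest.  "preferred" is
    # the intermediate step described above: required in new systems,
    # merely allowed in old ones.
    LEVELS = ["optional", "preferred", "required"]

    def classify_change(old_level, new_level):
        """Classify a constraint change between two versions of a standard.
        Tightening is generally safe for old receivers (they get at least
        what they expect); loosening can break them."""
        old_i, new_i = LEVELS.index(old_level), LEVELS.index(new_level)
        if new_i > old_i:
            return "tightened: old receivers OK; new receivers need defaults for old senders"
        if new_i < old_i:
            return "loosened: breaking for old receivers that expect the element"
        return "unchanged: no compatibility impact"

    print(classify_change("optional", "preferred"))  # a gentle first step
    print(classify_change("preferred", "required"))  # tighten later, at version M
    print(classify_change("required", "optional"))   # the problematic direction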

Exceptions

There will always be cases where an exception needs to be made.  That needs to be carefully thought out, and in new versions of standards, exceptions to the backwards compatibility rules should be clearly identified when the standards are balloted.  We shouldn't have to figure out where those are just by inspecting the new rules.  Some features need to be retired.  Some capabilities cannot be changed in a way that won't impact an older system.  The key is to clearly identify those changes so that we can get everyone to agree that they are good (which is not the same as agreeing that they aren't bad).

With the new Templates formalism, we should be able to identify these issues with C-CDA going forward.  I would very much like to see HL7 sponsor a project to ensure that C-CDA is made available in the new format once it finishes balloting.



Wednesday, July 16, 2014

Supercompliance with HL7 CCDA Versions

I've written several times about C-CDA Release 2.0 and issues with template versioning which are pretty close to resolution, and I've also written previously about the 2015 Certification criteria and backwards compatibility with 2014 criteria.

What I'd like to expand upon now is an issue that could occur if C-CDA Release 2.0 was adopted as optional criteria for 2015 and how we could arrange things so that a 2014 system could receive it.  It draws in part on what I described previously as a Dual Compatible CCD, but is slightly different because CCDA 2.0 is largely backwards compatible with CCDA 1.1.

The issue is that when System N (for new) sends a CCDA 2.0 document to System O (for old) that understands CCDA 1.1, there are three things that could happen:
  1. System O would correctly identify the fact that the CCDA 2.0 document did NOT comply with CCDA 1.1 and would reject it.
  2. System O would NOT correctly identify this fact, because the template identifiers are similar enough and some systems don't understand the details of how to match identifier values in HL7 Version 3 and CDA, and so it would incorrectly try to process it.  System O would handle these documents as follows: 
    a. Where the templates are backwards compatible with 1.1, System O would handle the 2.0 content correctly, BUT, 
    b. Where they are not, System O would have issues with interpretation of the content.
There are FEW cases where 2.b would occur, based on a preliminary analysis of CCDA 2.0, BUT we need to spend more time looking into those details.  One case that has been identified is the templates for Smoking Status, where there are differences in the interpretation of the effectiveTime element on the observation.
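
To illustrate the identifier-matching problem in case 2: in HL7 Version 3, an Instance Identifier names the same thing only when both the root and the extension match.  A receiver that compares roots alone will mistake a versioned 2.0 templateId for its 1.1 ancestor.  A minimal Python sketch (the identifier values are for illustration only):

    from collections import namedtuple

    # An HL7 V3 Instance Identifier has a root and an optional extension;
    # two IIs are the same identifier only when BOTH parts match.
    II = namedtuple("II", ["root", "extension"])

    # Illustrative values: CCDA 2.0 reuses 1.1 template roots and adds a
    # version extension.
    SMOKING_STATUS_11 = II("2.16.840.1.113883.10.20.22.4.78", None)
    SMOKING_STATUS_20 = II("2.16.840.1.113883.10.20.22.4.78", "2014-06-09")

    def naive_match(a, b):
        # WRONG: comparing roots alone conflates template versions.
        return a.root == b.root

    def correct_match(a, b):
        # RIGHT: root and extension must both match.
        return a.root == b.root and a.extension == b.extension

    print(naive_match(SMOKING_STATUS_11, SMOKING_STATUS_20))    # True (oops)
    print(correct_match(SMOKING_STATUS_11, SMOKING_STATUS_20))  # False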

What I would propose is that systems using CCDA 2.0 be "Super Compliant" (this is really independent of my thoughts on the 2015 certification rule).  Super Compliant means that they send documents which conform to both the CCDA 2.0 rules and the CCDA 1.1 rules, sending the same information twice only when ABSOLUTELY NECESSARY (as in the case of Smoking Status), and only in very controlled ways.

It would mean that a section or entry could declare conformance to both CCDA 1.1 and 2.0, except where that wasn't feasible (e.g., Smoking Status).  That last little note also means that we need to examine more carefully where backwards compatibility doesn't hold.
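
Here's a Python sketch of what that dual declaration might look like when generating the templateId elements for an entry (the root and extension values are chosen for illustration, not taken from the guide):

    def super_compliant_template_ids(root, version_extension):
        """Return the templateId declarations a "Super Compliant" entry
        would carry: the unversioned 1.1 form plus the versioned 2.0
        form.  The values are illustrative, not normative."""
        return [
            {"root": root},                                  # CCDA 1.1 form
            {"root": root, "extension": version_extension},  # CCDA 2.0 form
        ]

    # For example, an entry-level template (root chosen for illustration):
    for tid in super_compliant_template_ids(
            "2.16.840.1.113883.10.20.22.4.30", "2014-06-09"):
        attrs = " ".join(f'{k}="{v}"' for k, v in tid.items())
        print(f"<templateId {attrs}/>")
    # <templateId root="2.16.840.1.113883.10.20.22.4.30"/>
    # <templateId root="2.16.840.1.113883.10.20.22.4.30" extension="2014-06-09"/>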

The benefit of this is that systems which understand CCDA 1.1 will, if properly interpreting the guide and the underlying CDA Release 2.0 standard, work correctly.  Even systems that don't fully interpret the CDA Release 2.0 standard properly (e.g., get confused by the Instance Identifier data type and similar but not identical identifiers) could, and likely would, still get the correct result.

This allows systems supporting CCDA 2.0 to work with systems only understanding 1.1.  It does mean that newer systems supporting CCDA 2.0 would still have to support CCDA 1.1 as input, because not every system available would be able to send CCDA 2.0, but that is already pretty likely to be the case for MOST Certified EHR products in use today.

I'll talk a little bit more tomorrow about flavors of compatibility and breaking changes (also called substantive changes), and how we could develop some policies for advancing standards that allow new features to be created and problems to be fixed, but still ensure that systems can continue to interoperate.


Tuesday, July 15, 2014

Where there is smoke ...


... There are Chocolate-Covered Bacon-Wrapped Smoked Twinkies

A few weeks ago I got a vertical smoker for Father's Day.  Over (smoked) dinner later that week we somehow got onto the topic of disgusting fair food, things like chocolate-covered corn dogs and fried Twinkies.  And then the topic of smoked Twinkies came up.  Of course everything is better wrapped in bacon, so it became bacon-wrapped smoked Twinkies.  And then chocolate-coated, bacon-wrapped, smoked Twinkies.

As my daughter will tell you, perhaps the scariest thing about these was that they actually didn't taste that bad.  And from this experience (never to be repeated), we also learned that smoke flavor imparted into cake can be good (vanilla ice cream on smoked pound cake turned out pretty well as a follow-up).

As an experiment, it was completely ridiculous, very easy to perform, and produced some interesting results that were completely unexpected.  The cost was virtually nil, and I learned a great deal about my smoker just doing it.  That is much better than spoiling a $60 piece of brisket on a first attempt (fortunately, that didn't happen either; it was wonderful).

The point of this post is that sometimes you have to be willing to try an experiment on something completely stupid.  There's value in that, because there is no danger that anyone will ever tell you to ship it, and you can still learn a lot just by doing.  And you can throw a bunch of things at it just to see how they would work together.  And if it fails, you never expected it to work anyway, so there's no real disappointment.

[Aside: How many developers have had the scary experience of showing a weekend prototype to a product manager only to be asked how soon you could ship it?].

The trick is to do the best job you can anyway, just to see whether there is something in this completely idiotic idea.  When I smoked the Twinkies, I left a couple of them bacon-unwrapped, just so I could see the difference the bacon made.  When I chocolate-coated them, I also covered only the bottom half, again to compare bacon alone vs. bacon and chocolate.

When I design a form (e.g., for an EHR), I often throw in some features to see how they work together with the form.  Chocolate coating, as it were.  But I also leave myself an easy way to see how the form works without them.  Profiles and standards often have some optional bits too, because we aren't sure they'll be needed all the time.  When I implement a standard, implementation guide, or IHE profile, I try to include the optional bits if I can.  It's only when it becomes challenging (doesn't taste good) that I ignore the option.  As it turns out, all variations of the smoked Twinkie were edible, and some (like me) even liked them, although others (like my daughter) would never admit it.

And if, as in my Twinkie experiment, you actually learn something other than how to work with the tools, it was worth more than you expected.  If it doesn't, it was at least something you could chalk up to experience and pass the story along to your colleagues amid gales of laughter.

     Keith

P.S.  How does FHIR fit into this?  Attending my first FHIR Connectathon was just one of those throw-away experiments.  Now I'm a believer.  You should try it at least once.