Friday, May 31, 2013

Progress on CDA Harmonization



[Image: XKCD's chart of time spent versus time saved]

Everything always takes longer than you want.  The above diagram went into the minutes of our May IHE face to face meeting, courtesy of XKCD.  I'm spending a good bit of time on this project, in the expectation that we will be able to shave huge chunks of time from developing IHE profiles.  That makes it worthwhile.

My target for the next week is to simply be able to perform a comparison of section requirements between the CDA Consolidation guide and the IHE PCC Technical Framework.  Fortunately, IHE and CCDA both chunk at a fairly high level, generating requirements about code, title, and a few other section attributes, and spend most of their time on subsection or entry requirements.

So after cleaning up the crosswalk (I found some missing templates in the appendix mapping CCDA templates to HITSP/IHE/HL7/CCD templates), I'm about ready to generate the requirements from the IHE and CCDA content.  This should go fairly quickly, because there are only about 28 templates that we need to do this for.

What I find most interesting is that the Consolidated CDA has 7 templates for which it provides both "coded" and "uncoded" forms, IHE PCC has 3, and they don't overlap.  It doesn't stop me from performing the comparison, but does mean that I'll have to perform ten comparisons twice, once against the "uncoded" form, and a second time against the coded form.  In performing these comparisons, I feel as if I'm trying to solve the graph isomorphism problem.  In one way, I am, but in another way I'm not.  We already have an evaluation of the isomorphism in the mappings between templates.  What I'm really trying to generate is how different these two graphs are.

In any case, I'm making progress, and given how much time I expect it to save, I can still spend several months of effort on this problem and have it be worthwhile.  At least according to Randall in his chart above.

Wednesday, May 29, 2013

RTFM

Before I go any further with this post, let me say loudly that:
I agree, we [the HIT community] need more CDA examples to work with!

One of the discussion topics over on the Structured Documents list this week is about the need for more examples of the Consolidated CDA specification.  Two arguments being promoted are that the documentation isn't good enough to stand on its own without MORE examples, and that implementers don't read the specifications.  Oh, how true that last statement is in Healthcare IT. And how wrong it is as well.  Someone explained that "I do not expect them to look at 3000 pages of manuals on 7 different web sites..."

I have so little patience with that attitude.  Let's take a standard like XML.  I have four books on my bookshelf about XML in general, including this one that I reviewed not too long ago.  I have two books that cover XML Schema.  I have two books covering XSLT.  I have two books about Java and XML.  I have one about Java and SOAP.  I have all the W3C XML standards printed and bound.  I have links to my favorite websites on XML, which include, at the top, the W3C and w3schools, and also mulberrytech, and xml.com.

My collection of material on XML runs to more like 30,000 pages, rather than 3000.  And I know XML developers who have more. I'm what I call an HTML duffer.  I write HTML pages (and have done so professionally for quite some time), but I'm not an expert at HTML.  I only have about 10,000 pages or so of content on HTML in my library.  I'm NOT extraordinary by the way.  I know HTML duffers whose book collection exceeds my own, and experts whose collection exceeds it by an order of magnitude.

I EXPECT all engineers who need to use a specification to do their best to use it correctly.  I expect decent engineers to do more than look at one example.  I expect GOOD engineers to read the specifications and use them.  I expect GREAT engineers to read the specifications, comment on them, fix them, and read as much as they can about them.

Don't tell me proudly that you've never read a specification.  It's NOT a badge of honor.  Yeah, OK, you've come to an outstanding understanding of the technology without reading the specification.  Great; how do you have a meaningful discussion with someone who's actually read it, or even better, created it?  If you haven't read it, you've lost the opportunity to develop common ground.

Yes, we need better specifications, and we need shorter specifications, and we need easier to read specifications.  And we need a lot more examples.

But we also need professional engineers who will live up to the job title.  Not just weekend warriors who want to be able to show off what they did in their spare time.  Healthcare is simply too important for that attitude.

   Keith

Tuesday, May 28, 2013

Deviance

Today I spent about 3 hours taking several standardized tests.  Getting to the stage where I would even be taking these tests was a lot more challenging than the tests were.  This morning I was having a conversation with someone else about another set of tests about interoperability, and the many and various ways those tests could be (and are) gamed.  Later in the day I was reading about how someone else was accused of gaming meaningful use criteria.  And at some other point in time, I heard about how the HIT Policy Committee was considering using some reported measure results (a measure is a test by any definition of the term) as a proxy for some other test.  And I expect to see a lot of discussions about how these measurements are used, gamed, and abused.

In all cases where we focus on testing, a lot of attention is paid by various observers to the edge cases, places where the test can be gamed, where people do things or have interpretations that are unintended, or stretch the purpose or intent.  And in the overall scheme of things, folks are right to be concerned.  There are, of course, opportunities for abuse; and no, of course you cannot directly compare scores, because they don't account for all ranges of variation; and why yes, people do get right answers by guessing, or even making stuff up.  And yes, we can measure this stuff better.

But in general, for most of the people, or organizations, or whatever else you are measuring, when you use the material as intended, the test works, and it does provide some value.  Unless you are in that very small, select group 3 sigma from the mean, or concerned about someone claiming to be there that shouldn't be, I wouldn't worry about it.  It's simply not worth the energy.  Remember MIPS? Millions of Instructions Per Second, also known as Meaningless Indicator of Performance.  At the order of magnitude level, MIPS worked.  It was when you were fussing about the difference between 100 and 110 that MIPS was simply not worth it.

Even I am only a second order deviant, according to some tests.

Friday, May 24, 2013

Here be Dragons


"Here be Dragons" is a metaphor used on maps of old to show places where the known merges into the unknown and dangerous.

In standards, SHALL and REQUIRED, or SHALL NOT and PROHIBITED, are well-known territory.  These are places we have trod before and know well.  We know whether they are safe or not.

SHOULD and RECOMMENDED, or SHOULD NOT and NOT RECOMMENDED, define areas where our knowledge is not so absolute.  These are areas where care must be taken.  I tell engineers: treat every SHOULD and SHOULD NOT as if the word SHOULD were SHALL.  Only when you are expert enough to explain why this conformance statement cannot be SHALL can you tell me that you won't do it.  It rarely works, but it is good advice.  The point is, experts have already gone over this, and unless you understand it better than they do, work with their recommendation.

Where the real dragons are is in MAY, or PERMITTED content.  In this, danger abounds.  Because it says MAY, people are free to ignore it.  The MAYs in a specification are the places where you can do cool stuff, and where new capabilities can be explored.  And so, just as danger abounds, so also does opportunity. After all, we all know what dragons covet.

   - Keith

P.S.  In the map above, look to the mid-right, in the south-east part of Africa.  You should find a dragon, an asp, and a basilisk.

Thursday, May 23, 2013

Ours is to Reason Why, What Others Do or Nae

What follows came up on the Structured Documents call today.  Several of us agreed to propose a fix to this.  Here's my first crack at it:

There's some inconsistency between Consolidated CDA and QRDA when reporting on reasons why something was or wasn't done.  Consolidated CDA has the Immunization Refusal Reason Template, which models refusal in the way we agreed to handle the missing reasonCode attribute on various acts in CDA Release 2.  That method specified that a reason for doing or not doing something would be represented by an entryRelationship with typeCode = RSON, in an observation where observation/code indicated the reason why.  But we have no such templates in CCDA for reasons associated with other clinical activities, such as refusal of a medication, or other procedure.
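
Here's a minimal sketch of that pattern for a refused medication.  The structure (an entryRelationship with typeCode RSON carrying a reason observation) is what's described above; the specific codes are illustrative only:

<substanceAdministration classCode='SBADM' moodCode='EVN' negationInd='true'>
  <!-- ... consumable and other details of the refused medication ... -->
  <entryRelationship typeCode='RSON'>
    <observation classCode='OBS' moodCode='EVN'>
      <!-- observation/code indicates the reason why; PATOBJ (patient
           objection) from HL7 ActReason is shown for illustration -->
      <code code='PATOBJ' displayName='patient objection'
            codeSystem='2.16.840.1.113883.5.8' codeSystemName='ActReason'/>
      <statusCode code='completed'/>
    </observation>
  </entryRelationship>
</substanceAdministration>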

In QRDA Release 2, we have Patient and Provider preference templates. These are used as reasons why things aren't done and are defined as follows:
Preferences are choices made by patients relative to options for care or treatment (including scheduling, care experience, and meeting of personal health goals) and the sharing and disclosure of their health information.
Provider preferences are choices made by care providers relative to options for care or treatment (including scheduling, care experience, and meeting of personal health goals).
I would merge these two:
Preferences are choices made by a person relative to options for care or treatment, including scheduling, care experience and meeting of health goals, including the sharing and disclosure of information.
When preferences are expressed as the reason for doing or not doing something, they are more often than not related to the act done or not done by the RSON act relationship as above; however, in several places, QRDA Release 2 uses the REFR act relationship. This looks like a simple mistake to me.  To make it easier, I'd just include the act relationship and vocabulary constraints within the "Reason" template, so that we stop having to repeat all that stuff everywhere the reason is used.

Rather than having a code to identify who had the preference, I'd rather have codes that indicate the judgement or assessment made (e.g., religious preference, duplicate treatment, et cetera), and let the source of the information being recorded indicate whose preference we are talking about (provider or patient).  I'm not quite sure what the correct participation is here, but responsible party (the person, in this case, who has primary responsibility for the reason being specified) seems to be closest.  Within that, the role class could then indicate whether this person was the patient, the patient's guardian, the provider, et cetera.

Hopefully, that could clean up the confusion that we have today.


Your Mileage may Vary

If everyone purchased cars the way we purchase healthcare, we'd have:
  1. An Engine by Audi
  2. A Transmission by Chevy
  3. The Design by Detroit (really?)
  4. and Fuel Economy by Toyota
And if we paid for it the same way:
  1. Your employer would indicate up to three bankers you could use for your car loan.
  2. Your bankers would tell you which models of car you are allowed to purchase.
  3. You could still buy any car you wanted, but would have no clue what it would cost you, and how much your bankers would pay.
  4. And you'd have no clue if you were getting the best car for your money, or if it even worked before you paid for it.
And once you reached retirement age, your choices would change all over again.  And so on.  Of course, your choices (and mileage) may vary.

Wednesday, May 22, 2013

How would a Patient define a medical home?

I've been hearing a lot about patient-centered medical homes again.  The topic first hit my radar screen sometime in 2006-7 when it heated up again, after what appears to be a re-introduction of it some time in the early 21st Century.  When PCMH first showed up (on my radar), it appeared to be the next big thing.

I took a sniff at what it espoused back then in order to figure out what its impact would be on my day job.  As I looked deeper into it, the technology, standards, and information requirements of a PCMH were just what I'd have expected, and so, it being more about physician business models, it passed through my consciousness pretty much without impact.  (The same could be said for Accountable Care Organizations).

Any information worker understands that you need data to do your job, and that the better and quicker it flows, the more able and effective you can be.  Physicians are information workers, whether they realize it or not.  They are containers (silos even?) and distributors of highly specialized knowledge and services.

Given that the topic has blipped three different times this week on my radar, I thought I'd take a crack at really looking at it with my patient perspective.  You know, that viewpoint from behind spring-green colored glasses.

  1. A patient centered medical home is about me, after all, I'm the patient.
  2. It's where I prefer to go for medical care, including information, diagnosis and treatment.
  3. When I go there, they have to take me (isn't that the definition of home?).  And that means not just 9-5, Monday to Friday, but all hours and all times of day that I (or a family member) could get sick.
  4. At home, I know where to find stuff.  A PCMH makes it easy for me to get information, about my conditions, my treatments, my appointments, and my diagnostic studies and tests.
  5. At my PCMH, they can care for me for MOST of the problems that I have to deal with.  If it's obscure, or requires specialty care, they know where to go.
  6. When I call, someone I know answers the phone.
  7. At home, they know who I am.  I don't have to reintroduce myself every time.
  8. Finally, staying at home is cheaper for me than going elsewhere.
Most of what I've stated above is simply a patient centered restatement of many of the principles espoused by organizations such as NCQA, ACP, AAP, AAFP and others.  None of how I describe it has to do with technology directly, although all of it can be assisted by technology.  One of the most interesting disparities between my idea of "patient centered" and the one in the link above is when you compare my statement #8 above with this one from the 2007 Principles Document: 
Payment appropriately recognizes the added value provided to patients who have a patient-centered medical home. 
Note that what I have to say is not incompatible with the previous statement, but it certainly doesn't belong in a "patient-centric" document as stated above.  What I think patients want is stated quite clearly and objectively.  This will cost less (and if you can do it and make more money, that's fine, but do make sure we're spending less on healthcare). At least that's one of my values.

Tuesday, May 21, 2013

Temporal Relationships Again

Today I spent a bit of time with the HQMF editorial team.  One of the issues that we spent quite a bit of time on was temporal relationships.  In the RIM, temporal relationships are represented by vocabulary terms, such as SBS or EAE (Starts before Start, Ends after End).  SBS implies that X.effectiveTime.low < Y.effectiveTime.low.  So now the question of how to deal with <= comes up, and we didn't have a good answer.

To deal with relationships such as "X occurred within one hour of the start of Y", you have to deal with what the HL7 RIM calls the pauseQuantity associated with the act relationship.  The pauseQuantity is defined as being of type PQ.TIME (a physical quantity using units of time).  This is incredibly complicated to get right when you have multiple relationships between two different acts and the starting times and ending times associated with them.  If you set the pauseQuantity to 1 hour, and use the SBS act relationship type between act X and act Y, you are saying that X, if paused by one hour, still starts before the start of Y.  Since that is the exact opposite of what you want, what you would do would be to set pauseQuantity to -1 hour, and then use SAS.  That means that X, if moved backwards one hour in time, starts after the start of Y.  Are you confused?  I certainly still am.


<actCriteria>
  ... Criteria for Act X ...
  <temporallyPertains typeCode='SAS'>
    <pauseQuantity value='-1' unit='h'/>
    <actCriteria>
      ... Criteria for Act Y ...
    </actCriteria>
  </temporallyPertains>
</actCriteria>


One of the participants (Marc Hadley from MITRE) proposed that we change pauseQuantity from PQ.TIME to IVL_PQ.TIME, so that we could say from 1 to 2 hours.  I started to say that doesn't work to say what we want it to say, because ... and then realized that depending on what we are comparing, it certainly could work.  The key here is four vocabulary terms covering temporal relationships: SCW, SCWE, ECW and ECWS (Starts Concurrent With Start, Starts Concurrent with End, Ends Concurrent with End, and Ends Concurrent with Start).

Now let's look at how you could say this:  If you want act Y to have started within 0-1 hours of the start of X, you would have X be related to Y via SCW, and would set the pause quantity to the interval from 0 to 1 hour.  X.start, delayed by 0 to 1 hours, is concurrent with Y.start, which means exactly what we want.

<actCriteria>
  ... Criteria for Act X ...
  <temporallyPertains typeCode='SCW'>
    <pauseQuantity xsi:type="IVL_PQ">
      <low value='0' unit='h'/>
      <high value='1' unit='h'/>
    </pauseQuantity>
    <actCriteria>
      ... Criteria for Act Y ...
    </actCriteria>
  </temporallyPertains>
</actCriteria>

The latter means the same as the former, is more flexible, and also has the advantage of allowing the use of the inclusive attribute on the low and high bounds, so now we can also get into the finer details of less than vs. less than or equal to.

The problem is that pauseQuantity is defined in the HL7 RIM to be of type PQ.  We cannot just magically change it to a different type like IVL_PQ.  However, we can certainly state that the pause quantity is some value within a particular range by using the uncertainRange capability of the HL7 Version 3 Datatypes Release 2.  That gives us:


<actCriteria>
  ... Criteria for Act X ...
  <temporallyPertains typeCode='SCW'>
    <pauseQuantity>
      <uncertainRange>
        <low value='0' unit='h'/>
        <high value='1' unit='h'/>
      </uncertainRange>
    </pauseQuantity>
    <actCriteria>
      ... Criteria for Act Y ...
    </actCriteria>
  </temporallyPertains>
</actCriteria>


And so, we have a much simpler representation for dealing with temporal relationships, without any RIM changes.  The next step is to restrict the allowed vocabulary, when the uncertainRange form is used, to SCW, SCWE, ECW, and ECWS.  In fact, by limiting it to just those four terms we can address any kind of relationship needed between the end points of two events.

We have to bounce this past the Structured Documents workgroup, because it is a variation on the original solution, and we don't want to be making stuff up that the workgroup didn't agree to in reconciliation.  I think this will be pretty straightforward.

Monday, May 20, 2013

Heresy

One test of a good standard is the degree of skill needed by a developer to correctly understand and implement it.  One seemingly innocuous proposal for FHIR which I welcomed was to change the term "Value Set" into what some thought would be much more easily understood "Code Set".

As I said, I would welcome this.  I teach HL7 routinely, and I always have to explain what a value set is, but the term code set (or even better, code list) is much more obvious in meaning.  From my own early days coding in Healthcare IT before I ever learned HL7 (yes, there was such a time), I created objects that were all about dealing with codes.  CodeSet, CodeList, CodeMap were all names of things that I worked with routinely.

Then I learned about value sets.  Value Sets used to be HL7 lore.  It's assumed knowledge when you read one of the foundational HL7 standards (Vocabulary).  The material is now pretty well defined in HL7's Core Principles specification (which took more than 3 years to be balloted as an HL7 Standard).  So now this is canon (actually, it was canon before it was ever balloted).  Changing it to make it easier for some developer to use is clearly a non-starter.

[DYK: Did you know the HL7 Wiki has categories for Lore and Approved Lore?]

I'm amused because while I've spent a good deal of time teaching people what these things are, and how to use them properly, I also struggle with the need to make it easier.  As I reviewed again in my annual training last week, the least effective way to ensure that something is used safely is to provide documentation about safe use.  It is far better to design it in.

In the overall scheme of things, whether we call it value set or code set is not all that material.  I'll just stop for a moment, look at my audience, and say "a value set is a set of codes", and move on.  Maybe this term will catch on so well that someday I won't have to.  Looking at Google, "Code Set" generates 1.8M hits, "Code List" 5.9M hits, and "Value Set" 3M hits.  A bit more playing around shows that "Code List" appears more frequently in general IT contexts, and "Value Set" or "Code Set" more frequently in Healthcare IT contexts.  I can hope.

Of course, I fully expect to be punished for my heresy.  Until then, I'll simply have to create another rule in Outlook.

  Keith



Thursday, May 16, 2013

Who writes Clinical Notes?

This question was the fundamental question addressed in a recent Structured Documents Workgroup meeting.  At the root of it is whether or not a patient authored note belongs in the CDA Consolidation Guide as it is further developed by HL7.  One of the reasons for concern is that at present, HL7 published documents are the fundamental unit of "standardization" (and in fact, this is true for just about every SDO).  While we need better ways of publishing this content, others, such as ANSI and ISO, and regulators that reference HL7 standards, still refer to these standards by the name of the publication.  If patient authored notes become part of the CDA Consolidation standard, it becomes much easier to cite (or in this case, re-cite) them in ongoing regulatory efforts.

One of the challenges of course, is that this also expands what CDA Consolidation is, and certainly expands the efforts for the next ballot cycle on the CDA Consolidation guide.  At issue here, I believe, is a need to incorporate some work developed by one part of the community that competes with the need of other members of that same community to meet a more restricted set of goals.  I'm quite sensitive to this tension, and I often fuss myself about the crazy schedules that SD sets for itself.

Structured Documents spent quite a bit of time discussing this last year.  In September, we established the following principles for inclusion in the CDA Consolidation product (or product family, it still isn't clear which this is).  This was the agreed upon outcome:


Scope of Consolidation: “CDA templates at entry, section and document level applied in primary clinical information records and for exchange supporting continuity of care.”

Criteria for Inclusion:

  • New material will be included based on evaluation of these criteria:
  • Nine original implementation guides are grandfathered
  • New material meets the following tests:
    • High reuse of Consolidation templates
    • Covers primary data (documents originate for delivery of care, becomes part of patient record, in contrast to secondary use; templates, of course, can be reused)
    • Used for provider/provider, provider/patient communication
    • High use of semantically interoperable templates (“model of meaning”, in contrast to “model of use” templates)
Note that nowhere in Structured Documents' definition of "Clinical Information Records" do we make any distinction based on who the author of the document is.  I think many assumed that because we used the term "Clinical Information", it must be generated by a clinical practitioner.  But we never said that, and many would argue that patients can be just as, or even more clinically informed than their providers about some content.

The patient [authored|generated] [note|document] (whatever you want to call it) meets the tests described in the criteria for inclusion above. 

  1. The data in the document is intended for delivery of care, and can become part of the patient record.
  2. It reuses the General Header constraints, and adds to them to identify documents that have been authored by a patient.  It allows for use of existing CCDA sections and entries in the document to support the patient generated content.
  3. It is used for provider/patient communication.
  4. The templates contained within it follow the model of meaning structure.
We (The HL7 Structured Documents Workgroup) agreed today to include this content in the next round of Consolidated CDA balloting.  This step puts patient generated content in health information exchanges on equal footing with clinician generated content.  And when you think about it, that silly little clipboard that goes into your record when you first start in a physician practice is nothing more than the first of many pieces of patient generated content that appear in your medical record.  It's about time we acknowledged the contributions that patients already make to their records, and the need for more of this kind of communication between patients and providers in the standards for exchange.

     Keith

Wednesday, May 15, 2013

Broad and Narrow

I love finding new and interesting stuff.  One of my greatest joys is finding a new Science Fiction author to read that I haven't read before.  One way to do that is to browse stores, but another way is to talk to really interesting people about the books they like to read.  That's one of the reasons I like Twitter and other social media outlets as a way to find interesting reading material.  But it is still too narrow a venue for me, and I have other ways to find content.

I have two different media aggregation apps I routinely use.  Flipboard is what I use to browse through the various articles that friends, tweeps, links and plussers have posted to Facebook, Twitter, LinkedIn, and Google+.  I'm fairly conservative in who I follow, sticking to a somewhat narrow band within my particular field, although necessarily NOT sticking just with those I agree with (in fact, a sure way to get followed by me is to be a smart and articulate person with an opposing view).  Even so, I think of Flipboard as being a narrow-band receiver of content for me.  It really is based on my choices of people to follow.

To stretch my view of content, I've been using Zite to broaden my perspective.  Until recently, I had pretty good evidence that it was working.  There's been a ton of stuff that didn't matter, along with about 1 article in 20 that I wouldn't have run across in my usual day that was very interesting.  In other words, high recall and cruddy precision, with the very occasional gem.  But after a recent upgrade, and somewhere along the way having given Zite a way to track me better, I fear that I may have made a mistake.  I went through my top news today, and found not just one or two or even three, but more than a half dozen articles that were right within my zone of interest.

For some this would be great.  For me, it's a cause for alarm.  What I'm afraid of is that the rest of the content I ask for will be filtered based on what it knows I will like.   But, I don't trust those hidden algorithms not to filter out some jewel that I might like despite my prior reading history.  I'm not looking just for stuff I know I'll like, I'm looking for hidden jewels.  I then looked past the "Top Stories" section, into a few topic areas that are also within my zone of interest.  Yep, indeed, the filtering has reached there too. The frequency of "favored" sources seems to be much higher, and the stuff I'm seeing is also very much what I actually get in my day-to-day reading.

Dang, they figured me out, and in the process, I think the jewels may be lost.  Now I'm just going to have to find another app to bring me random content that is still interesting, and "outside the zone".  I wish someone would be smart enough to develop that app.

How does this apply to my day job?  Have you ever been searching for an answer to a question, only to find a dozen articles that simply don't cut it for you?  After hours or even days of digging, eventually you find it in some obscure place.  I'm trying to find the obscure places that I should be looking.  It isn't about what you know, but what you know about how to find out that matters in the long run.

Tuesday, May 14, 2013

One more thing

You know you have too much to do when you run from one project to the next to give each project a kick back in the air before it crashes to the ground.  This challenge is exacerbated by having too much travel, or too many meetings, or simply too much other stuff.  On days like this, a blog post is a luxury, or simply a place to vent.

At times like this, my focus is on finishing -- something -- anything.  Thankfully, I managed to get five things done today, six if you count this post.

Monday, May 13, 2013

Minimum Necessary and Open Templates

Last week at the HL7 Working Group Meeting, I taught a class on Templated CDA.  This is a class for implementation guide developers.  You might think it applies only to the kinds of folk who go to IHE or HL7 meetings, or perhaps S&I Framework or Canadian Standards collaborative meetings, but you might be surprised.  I define "implementation guide" rather broadly.  There are implementation guides that are defined at a national level (such as in the US Meaningful Use regulations), those defined at a regional or project level (such as those defined for Healtheway), and then there are those that are included with commercial products (we call those product manuals), or which are part of an organization's internal design documentation.  The use of these documents might differ, but the kinds of content they need to contain, and the various choices that a specification editor has about what could be done, are about the same.

One of the interesting topics of discussion in the class was on the use of Open vs. Closed templates.  An open template allows anything beyond what has been explicitly prohibited in the rules expressing the template.  A closed template prohibits anything beyond what has been explicitly allowed by the rules of the template.  The former is extensible and reusable.  The latter is not.  However, the latter is quite useful in cases where minimum necessary comes into play.  This is a key phrase that shows up in various places in US regulation to mean "no more than is needed" to serve a particular purpose.  Under HIPAA, exchange of information for payment or operations falls under the rubric of "minimum necessary".  However, exchanges for treatment do not (which doesn't stop people from trying to treat them as if minimum necessary were a requirement).
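
To make the open/closed distinction concrete, here's a minimal Schematron-style sketch (the rules and template OIDs are hypothetical, not drawn from any published guide).  The open template asserts only what must be present; the closed template also rejects anything it didn't enumerate:

<!-- Open template: require a section, say nothing about anything else -->
<sch:rule context="cda:structuredBody">
  <sch:assert test="cda:component/cda:section[cda:templateId/@root='1.2.3.4']">
    The body SHALL contain the required section.
  </sch:assert>
</sch:rule>

<!-- Closed template: the same requirement, plus a prohibition on any
     section that wasn't explicitly allowed -->
<sch:rule context="cda:structuredBody">
  <sch:assert test="count(cda:component/cda:section) =
                    count(cda:component/cda:section[cda:templateId/@root='1.2.3.4'
                          or cda:templateId/@root='1.2.3.5'])">
    Sections other than those explicitly allowed SHALL NOT be present.
  </sch:assert>
</sch:rule>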

In public health, the use of minimum necessary in exchanges also facilitates risk mitigation.  If you know that users are only communicating the minimum necessary to support a particular public health function, you can address data risks using that minimum necessary criteria.

I really don't like letting an operational requirement cross over into the data models that we use to express template content.  Basically, it means that to adopt the data model for a particular use case, I also get stuck with this additional piece of baggage (minimum necessary) that gets in the way of reuse.  My recommendation to my students is to define their templates in "open" terms, and then define the characteristics of the systems that exchange them as being able to, or unable to, accept open content during an exchange.

So, now I have my cake (an open template), and other systems can define how they want to eat it (with or without any additional toppings).

Friday, May 10, 2013

Versioning Templates in Definitions and Instances

We had a great discussion this morning in the Templates workgroup.  One of the agreements that we came pretty close to making was how to address referencing a templateId in an instance when you are dealing with old-style (version-unaware) templates, and new-style (version aware) templates.

In the old way, this was the correct way to reference a template.
<templateId root='1.2.3.5'/>

In the new way, this is the correct way to reference a specific version of the template.
<templateId root='1.2.3.5' extension='versionLabel'/>

Under both the old world order and the new one, the first form of template identifier reference above still needs to work.

That's because instances created using version aware templates may still need to refer to other templates that aren't yet version aware (under the new model).

So, the first example above is what I'm calling a version unaware templateId, and the second example is a version aware templateId.  The former can only point to a version unaware template definition, and the latter can only point to a version aware template definition.

Instances can use either or both forms.  And in fact, this is perfectly legal:

<act>
  <templateId root='1.2.3.5'/>
  <templateId root='1.2.3.5' extension='versionLabel'/>
   ...
</act>

In the example above, the instance of the act is declaring conformance to both the version unaware release of the template, and to the newer version aware edition.

The Art-Decor project currently uses <templateId root='1.2.3.5'/> to reference "the current version" of the template, enforcing a "dynamic" constraint to the template.  I would argue that any content creator knows WHICH version(s) of a template are being used, and so MUST report the version identifier.  That is because versions are allowed to introduce backwards incompatible changes (which is part of why we are doing this).

In any case, we will be discussing this further next week.

Thursday, May 9, 2013

Harmonizing HL7 Health eDecisions and HQMF for Meaningful Use Stage 3: Expressions

One of the challenges with spawning multiple projects with multiple consensus groups is Rishel's law: When you change the consensus group, you change the consensus.  This is especially challenging when different parts of the projects make up different components of an ultra-large scale system.

The case in point is the different approaches taken by HQMF and Health eDecisions.  It started off this morning with a discussion of harmonizing the expression language syntax used in Health eDecisions and the QDM Implementation Guide of HQMF Release 2.  But the underlying problem is even more challenging than dealing with different expression syntax (each of which have their own value propositions).

You can transform a statement of how the process should be executed or what an outcome should be into a quality measure, as I mentioned previously.  Developing the transformations between Health eDecisions and HQMF is feasible.  It's made a lot easier if the two systems share a common information and representational model, because it makes a map between the two easier to understand.

Where HQMF and Health eDecisions don't align today is that HQMF followed HL7's modeling structures to come up with a representational model, which was then transformed into an XML representation through a predefined, automated process.  The existing Health eDecisions work went straight to an XML representation, which eliminated much of the implementation complexity presented by that predefined, automated process. But it doesn't have traceability back to a higher level representation model (although one or more such models exist).  I've looked at Health eDecisions and HQMF closely enough to see the similarities between the two models that would help expose the common model.

We didn't execute on the project I mentioned, and that in part has to do with resource constraints and in part due to lack of incentives to do so.

So now we have this concern that Health eDecisions, and HQMF aren't aligned, and we'd like to see that happen for Stage 3, and we have "six months" to make that happen.

Let's start with expression languages.  HQMF Release 2 doesn't specify one.  Health eDecisions does.  The QDM Implementation Guide will define an expression language that MAY be used, but need not be.
I'd argue that we have three different requirements for expressions in HQMF:

  1. Expressions must be easily created, edited and validated by measure developing organizations, ideally using automated tools like MAT.
  2. Expressions must be efficiently executed in high volume environments in order to compute quality measures.
  3. Implementing the expressions should not result in extensive investment in software crafted to evaluate them.
The first requirement is addressed pretty well by HED's language, because it is stored as a parse tree and can easily be manipulated by web based tools that allow expression construction through creation of a visual representation of the parse tree.  It is not addressed well by JavaScript because the parse tree has to be created to represent the expression.

The middle requirement is not addressed well by HED's language, because it requires interpretation of the expression language, and the newness of the language means either using or creating code that does so.  It won't be nearly as fast as a premium grade JavaScript interpreter.  That's not a ding.  You don't get to that level of quality without time, and HED implementations haven't had that luxury.  There are some platforms in which this WILL be challenging to use.  You couldn't step out to HED's language easily from SQL, for example, and interpreting it in Perl wouldn't be my first choice for execution speed either.  However, the requirement is addressed well by commercial and open source JavaScript interpreters.  These are quite efficient, have been around a while, have a high level of quality, and work in numerous programming environments.

On the last requirement, it is also arguably difficult for HED to compete against JavaScript.  I cannot think of a platform combination I've encountered in my career that doesn't support JavaScript, except for my Arduino.  For most others I'd have to write code, or use the code provided by the HED pilot project.

The first requirement and the latter two address the needs of different groups.  I'm not a measure developer, and never will be, nor would I ever be involved in software development of products for that group.  However, I do deal with products that are the execution environment in which these expressions need to execute.  So, JavaScript meets my needs readily.  In fact, I'd say it beats HED's language hands down.  No, it's NOT platform independent, but it is ubiquitous, available on my platform choices, and I don't have to teach it to my developers.

If we had to pick just one, there'd be no solution to this problem in HQMF, but fortunately, we don't have to.  I assert that there's a transformation that can be defined from the platform independent HED syntax to the "limited" JavaScript syntax which will be provided in the QDM-Based HQMF guide.

Fortunately, Data Types release 2 (which HQMF Release 2 uses) allows multiple expression representations to be provided in an element expressing an expression.  The execution environment is permitted to choose which of the expressions to use, based on its preference.  Thus, both expressions can be provided in HQMF measures.  The former meets the needs of one group, and the latter meets the needs of another, and there's an automated way to go from A to B, so that once measure developers are satisfied with a measure and the expressions that go along with it, those expressions can be turned into an execution script.

I knew an engineer who wrote something called a Kalman Filter in Fortran.  If you read about it, you learn that this is used in guidance systems.  He sat in the control room at NASA while his software took an Apollo mission to the moon.  I would never attempt to write a Kalman Filter in an expression language like HED.  It doesn't address at all my needs for being able to easily write and express a complex matrix mathematical algorithm.  In fact, I'd prefer Fortran to just about any other language choice I could use (C++ would come first, Fortran second, and Java third).

On the other hand, HED is well suited to handle Event Condition Action rules that are graphically created (don't expect me to write the XML by hand though) and manipulated.  It meets a different need.

So, how would I harmonize these?  Provide an incentive for someone to create an open source transformation from a decent enough subset of HED expressions (see Section 5.12 of the HED spec) to the "limited" JavaScript syntax that will be provided in the QDM-Based HQMF guide, and put it into the MAT to enable translation of HED XML to JavaScript.  When a measure is published, autogenerate the JavaScript version of the expression and publish with BOTH expressions.

I've only addressed ONE issue here.  There are several more to go, and plenty of time later to write more blog posts.


Wednesday, May 8, 2013

It's Wednesday at the HL7WGM, and that means it must be time for ...

It will soon be the morning for recognition of HL7 members who've achieved special standing.  During regular working group meetings, that special standing is for "veterans" of HL7, those with 10 years or more of membership (I get my 10 year badge next year).  I seem to have a tradition of my own at HL7 meetings too, so here we go again.

As you may recall, the last award I gave out was to someone entering retirement.  This next one goes out to someone who is still early in his career.  I'm quite jealous of him, and that's really because of his youth, and his meteoric rise in expertise.  He's quite a number of years younger than I am, but already I can see that his career is blossoming.  When I met him about five years ago, he knew little about CDA, but was clearly already quite an adept developer, and quite eager to learn.  So, I taught him what I knew, and not just about CDA, but also about being a committee chair, and working in the standards space.  What he's absorbed in these years that I've known him is phenomenal.

Outside of the HL7 world (and even inside it), there are few people who can stand toe-to-toe with me in CDA debate and win it.  And there are also relatively few that have written two, let alone more than half a dozen CDA implementation guides.  This next person is one who can, and has.

Without further ado, my next Ad Hoc Harley goes to...

Tone Southerland
of Greenway Medical
for outstanding contributions in Perinatal Care

When I first met Tone, he and his wife were preparing to have their first child.  Tone took a leadership role in developing the Antepartum Summary profile in IHE.  Tone and his wife now have four children, and Tone has written, edited or majorly contributed to a total of ten CDA implementation guides on the topic of Perinatal care over the last five years.  Congratulations Tone, and when I retire, I hope that you'll stick around for another decade or so to keep these folks whipped into shape (but you better stay in shape, because I intend to have a career like Ed Hammond).

Tuesday, May 7, 2013

I'm an Implementician and other Stories from the HL7WGM


It started out as an overheard at HL7, but then turned into a misheard, and so now I'm going to claim this portmanteau word as my own neologism.  It's a combination of implementor and informatician, with the best properties of both (or perhaps that's the worst of both, it really depends on your point of view).  Used in a sentence you might say something like FHIR is built for implementicians.

I like to build stuff that works that makes both academics and developers happy.  Perhaps I'll use it as a job title on my next order of business cards, but I still think Standards Geek has more panache.

I wish I could tell you what happened with the HL7 Working Groups today, but I spent much of the day in the Board meeting.  There were some good outcomes there.  There will be a Paris WGM in May of 2015, as a result of outstanding support of the European community members from the International Council.  The foregone conclusion at the beginning of the board meeting was that it wasn't affordable, but we determined that this was a) an investment worth making, and b) an opportunity that was worth pursuing to ensure that the meeting brought value to members of both the European community and to the HL7 organization as a whole.  I was really impressed with the decision making here.

We also made progress towards development of a new membership model, but that is still under development.  The membership task force has made good progress since the last time we heard from them.

There were a number of discussions that should also result in both a more transparent board and a more effective leadership team.  We did finish the board meeting in time to resume Q4 with the working groups.  Most of the board felt that trying to hold the board meeting starting Q1 on Tuesday probably wouldn't work going forward, because of the unpredictable ending times, so we'll likely start it Q3 at the next WGM.  That pretty much guarantees two lost quarters, but two known quarters down is better than two plus some unknown additional time.

CDA R3 is still making progress, and there seems to be a general idea that this will go to ballot "soon", although I expect that there are still a few cycles that work product will have to run through.

We talked about the Consolidated CDA update project scope statement, which was floated past Structured Documents just before the WGM.  There'll be more discussion about it this week, and a final discussion next Thursday before it goes up to the steering division.

As currently outlined, that project will incorporate existing errata, make some adjustments to the Consult note, and add a referral and transfer summary, and a care plan.  These will be coordinated with existing S&I framework activities as well as ongoing IHE work with the Consolidated CDA Harmonization project it is executing.

IHE members are getting antsy about the joint ballot project, and several approached me to discuss their concerns and propose a candidate, because if we truly want to do an out of cycle ballot, we need to get it rolling NOW if we want to align IHE and HL7 comment periods.  So, I get to try to address that tomorrow in my role as HL7's liaison to IHE.

Monday, May 6, 2013

Breaking: What’s Next for Health Story?

There wasn't a press release on this, but it was announced (via the Structured Documents workgroup mailing list) today that Health Story has now moved under HIMSS.  If you are at the HL7 Working Group meeting, you are invited to attend the session described in the note below.

Health Story, for those of you who don't know already, is an industry collaboration and HL7 associate that developed many of the implementation guides that eventually (along with IHE profiles and HITSP implementation guides) went into the IHE / Health Story Consolidation Guide, otherwise known as C-CDA or Consolidated CDA, which has been named as the standard for Meaningful Use in the US.

     Keith


Hi,

There will be a Birds of a Feather meeting tomorrow [Tuesday, May 7th], 5-6pm, in Georgia 4:

The Health Story Project, in close affiliation with HL7, developed eight CDA implementation guides in quick succession and then initiated the Consolidation Project which produced Consolidated CDA. Health Story retains its close affiliation with HL7 and has moved under the organizational umbrella of HIMSS.

Come take the opportunity to take a look back at the birth & growth of the industry Project and to consider how the Project can contribute to the future of HIT.

Birds of a Feather session Tuesday from 5-6 pm in Georgia 4.

I look forward to seeing you there. Please forward this to other lists where the topic may be of interest. 

Thanks

Liora

Sunday, May 5, 2013

Home again, home again, jiggety-jog

Home is where, when you have to go there, they have to take you in.
-- Robert Frost
Last week was spent at the IHE Public Comment meetings in Oakbrook, Illinois, where we prepared IHE profile proposals for 2013 for public comment.  Oakbrook is like a second home to me, since so many IHE meetings are held there in RSNA's space.  I spend probably 3 weeks a year in the area.  So much so, that they know us by name at our local sushi joint (it finally closed last Tuesday; we ate there on their last night open).

This week is the HL7 Working Group Meeting in Atlanta (one of HL7's common venues, I've been here 3 times at least).  And I've been in the region enough to know the best local sushi place as well, and last time I was there, they recognized some of us.

In IHE I co-chair one of the committees (Patient Care Coordination) that develops CDA implementation guides, and was at one point in time a member of the IHE Board.  I spend most of my time here working on implementation guides using CDA and other HL7 V3 standards.  I do quite a bit of work harmonizing with HL7.  In IHE, I'm the HL7 guy or the CDA guy.

In HL7 I used to cochair the Structured Documents workgroup, and am now a member of the board.  I spend most of my time here working on HL7 CDA standards and implementation guides, and other V3 standards (and more recently, beyond V3, including FHIR).  I do quite a bit of work harmonizing with IHE.  At HL7, I've been referred to by some as Mr. IHE (although that's a title that was previously held by a GE colleague for many years, and I'm not quite ready to take on his role).

Elsewhere, as in the ONC S&I Framework activities, I'm considered by some to be the IHE guy, and by others, the HL7 guy, depending on which expertise I'm using.

It's always clear to me that wherever I go, I bring with me some other perspective that may not already be present.  Isn't that the point of a consensus body?  To include the needed perspectives that should be at the table.  What I find interesting is that I probably embody more of HL7 or IHE when I'm away from those organizations than when I'm present in them.  So, even though I've come back to my other home, sometimes it doesn't always feel so homey.

That's OK, here I am, and they still have to take me in.

Wednesday, May 1, 2013

Failure does not equal lack of progress

This week is the "public comment preparation" meeting for several IHE domains.  At this meeting, profile authors are supposed to bring content for Volume II of their profile supplements for committee review.  I'm working on the CDA Consolidation Harmonization profile, and I failed to bring any content.  I could blame it on software (the second to last bug has been fixed), illness (I had two run-ins with illness because I failed to take some basic precautions during recent travel), or just simple complexity, or the fact that I haven't yet figured out how to clone myself.

Even though I failed, we are still making progress, and I still expect this profile supplement to get published this year, though perhaps not as fast as we all want or need it to be.

You might be interested in how this is all going to work: how IHE is going to harmonize international profiling efforts with a US realm specification, and deal with the massive changes this is going to have on its existing work.

The first step is that we will be introducing template versioning into IHE profiles so that when a template has a version change, we don't have to adopt a new template identifier in templates that use the new version.  That was a big challenge caused by previous methods of handling changes to templates in CDA.

The old way of doing it was something like this:  Template A has a constraint that says that Template B appears in it one or more times.  However, when template B changes, it is given a new identity (e.g., template B').  In order to fix template A, we have to change one of its constraints (the one about template B) to say that it uses B' instead.  Because we've changed template A in a way that is not backwards compatible with the previous template, now we need to reidentify template A, so that it becomes template A'.

In the new way of doing things, Template A will have a constraint that says that some version of Template B appears in it one or more times.  When template B changes, it is given a new identity that is comprised of the identity for the set of template versions for it, and its version number.  So, now, when we create a new version of template B, template A will NOT need to change, because it can now include either Template B version 1, or Template B version 2 (and in fact always allowed that, we just didn't previously have a template B version 2 until it got created).

Versions will have major and minor number changes.  A change in a template minor number means that readers and writers can still understand the newer template, they just may not have gotten all the additional detail supplied in the new version.  A change in a template major number means that readers and writers will know that something has changed in a way that isn't backwards compatible.  If they don't know how to handle the newest stuff, they can at least be aware that the new template isn't something that they recognize.
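
As a sketch of what this looks like in an instance (reusing the illustrative OID from the May 10 post above, with a hypothetical major.minor version label), template A's constraint on B stays put, and the instance simply declares which version of B it used:

<act classCode='ACT' moodCode='EVN'>
  <!-- a version aware declaration of template B: root identifies the
       set of versions, extension identifies the specific version -->
  <templateId root='1.2.3.5' extension='2.0'/>
   ...
</act>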

In order to make it easy for applications to determine whether they should try to access a document, each document must also contain a Technical Framework revision number which identifies the base set of templates that must be understandable by a consumer to correctly interpret the content.  If you get a document that uses a version of the PCC technical framework that the software doesn't yet understand, it has two choices:
1.  It can just bail out, and indicate that the document version isn't supported.  This is the behavior of applications like Microsoft Word back in the 1.0 and 2.0 days.  If you tried to read a Word 2.0 file with Word 1.0, it would tell you that it didn't understand the file format.
2.  The application can make a best effort to read the document, failing only when it encounters a template that has a major revision that it doesn't understand.  This is how Adobe Acrobat Reader reports issues when it encounters a PDF document written in a version of PDF that is newer than it was designed to handle.  It only fails if it encounters something that it doesn't know how to deal with, and usually does a good enough job if the document didn't require the newer features.

How will we figure out what goes into the new templates?  That's where software comes into play. The Model Driven Health Tools project already has much of the final text of the PCC Technical Framework modeled, and also has the CDA Consolidation guide modeled.  The C-CDA guide has a cross-walk of old versus new templates in Appendix B.  I can use that crosswalk to compare the two template models, computing three sets of constraints for a template pair:

  1. Constraints that are identical in both IHE and C-CDA.
  2. Constraints in IHE that are not in C-CDA for a given pair.
  3. Constraints that are in C-CDA that are not in IHE.
Constraints of the first type will remain untouched in the IHE Technical Framework.  This is the core of the new content for PCC.  

Constraints of the second type will be reviewed by PCC to determine whether we want to re-adopt them, or leave them out.  If we readopt them, these constraints will be added to an appendix of the US National Extensions section in Volume 4 (not yet written).  Implementing those constraints on C-CDA will be what is needed to take a valid C-CDA and make it a valid implementation of the IHE PCC template.

Constraints of the third type will be reviewed by PCC to determine whether we want to adopt the C-CDA constraint as a new constraint on the PCC template.  If they are adopted, these constraints move back into the first set.  When they are not adopted, it won't harm use of C-CDA, they just won't be necessary in the IHE PCC Context.  We will be sure not to re-adopt any constraints from group 2 that would violate a constraint in group 3 (e.g., code on the Concern template), so as to avoid introducing any incompatibility between C-CDA and PCC templates.

Another change we will be adopting is transitioning from direct use of value sets or coded terms in the PCC TF to using Concept Domains.  We have already identified a set of Concept Domains that is the union of all concept domains already existing in the HITSP C32, CCD, C-CDA, and PCC Technical Framework.  Constraints dealing with vocabulary will be assigned to concept domains where needed from both sets (there will still be a few cases where direct use of value sets or codes is appropriate).  That refactoring will shift constraints from category 2 or 3 back into category 1.  In other words, if IHE and our interpretation of the C-CDA value sets result in assigning the same concept domain to equivalent constraints in both, and that results in the constraints being identical, they can be moved into the core set.

During this process, we will identify which C-CDA value set was reassigned to a concept domain, and note in the US National Extensions that this value set is bound to that concept domain.

Finally, we will internationalize some of the data type flavors, and note in the US National Extensions which data type flavor substitutions can be made.

The end result will have the side effect of creating an International flavor of the Consolidation Guide.  There's some thought that when we get finished, IHE would ballot that through HL7 through the collaborative process that HL7 and IHE announced a few months back.  I'm not even thinking about that right now.  You can see there's a lot of work to do before we get to that point.

   Keith

Classifying Documents in XDS

IHE defines, in the XDS metadata, numerous codes that are associated with the documents it indexes.

  • typeCode
  • classCode
  • eventCode
  • practiceSettingCode
  • authorSpecialty
  • authorRole
  • healthcareFacilityTypeCode
  • formatCode
  • mimeType

typeCode is a single code that classifies documents at a very detailed level and can be used for analytics and decision support by automated systems. It's really intended for machine, rather than human, use. LOINC is the most commonly used system for this classifier.

classCode is intrinsically tied to typeCode. Both codes classify documents, and typical implementations take a high level slice of the LOINC typeCode hierarchy to implement classCode.

A document with a class code indicating it was a discharge summary could appear as simple text, RTF, PDF, or CDA. A physician who needs to see what happened to a patient in a recent hospital visit is not concerned about who wrote it, the detailed classification of the content, or what specifications it conforms to. They want to know what happened in the hospital. The classCode supports those sorts of queries.

The purpose of classCode when originally envisioned was to allow providers to identify the documents they need by what they contain: e.g.: X-Ray report, Discharge Summary, Lab Report, but not go into a detailed classification like Hospital Physician Discharge Summary.

However, even X-Ray report and Discharge Summary go just a bit further than what classCode intends, because they overlap with other classifiers.

eventCode captures information about the event or events which are relevant to the document.  It's been used as a bag to code various events of interest in different profiles.  But it can also simply identify the kind of service that was performed, e.g., a consultation, or surgery.

healthCareFacilityType captures information about the kind of facility where the care was given, so that Discharge Summary is simply the combination of facility type = Hospital and classCode = Summary. I've seen different value sets used here, from the Healthcare Provider Taxonomy (which includes both setting and specialty) to SNOMED.

practiceSetting addresses the medical specialty of the practice where the report was produced, e.g., radiology, and so X-Ray report becomes a report where the practice setting is radiology.  Practice setting and authorSpecialty are often assigned to the same coding system.  That's because the practice setting could be more general, while the author's training is more specific.

authorRole addresses the role of the author in creating the document. Is this a document created by the patient, their attending physician, their nurse, et cetera.

Those who understand how LOINC classifies documents will recognize the axes used above.  If typeCode is the document's LOINC code, the five axes of document type in LOINC map to the other classifiers like this:
Kind of Document = class
Service Event = event
Setting = healthcare facility type
Subject Matter Domain = practice setting / author specialty
Role Description = author role.

This was no mistake.  Two other classifiers remain.

mimeType is used to identify the IETF mime type, registered with IANA, used for the content. We also note that at the time XDS was created, text/xml was overloaded to support things like CDA and CCR and other XML formats. These days, separate mime types are being created for different standards (and HL7 is finally in the process of finishing registration of mime types for CDA). Even so, mime type is still insufficient to determine what specification the content conforms to.

formatCode requires more explanation. The purpose of formatCode is to identify a specification that the content conforms to, over and above what is supported by mime type. For example, a document conforming to PDF/A has the same mime type as a document using PDF. While PCC has in the past used distinct formatCode values to distinguish between different document content profiles, we recently decided that because formatCode and classCode and typeCode provided the capability within a profile to distinguish content, that we could just assign a single formatCode to a profile supplement. This is because different documents in a profile have different codes describing them, and we expect that the affinity domain would be configured to support that distinction.
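
To ground all of this, here's a minimal ebRIM sketch of how classCode and formatCode are expressed on an XDS DocumentEntry.  Treat the details as illustrative; the classification scheme UUIDs and code values below should be verified against the current ITI Technical Framework:

<!-- classCode: a high level classification of the document -->
<rim:Classification
    classificationScheme="urn:uuid:41a5887f-8865-4c09-adf7-e362475b143a"
    classifiedObject="Document01" nodeRepresentation="18842-5">
  <rim:Slot name="codingScheme">
    <rim:ValueList><rim:Value>2.16.840.1.113883.6.1</rim:Value></rim:ValueList>
  </rim:Slot>
  <rim:Name><rim:LocalizedString value="Discharge Summary"/></rim:Name>
</rim:Classification>

<!-- formatCode: the specification the content conforms to, over and
     above what the mime type says -->
<rim:Classification
    classificationScheme="urn:uuid:a09d5840-386c-46f2-b5ad-9c3699a4309d"
    classifiedObject="Document01"
    nodeRepresentation="urn:ihe:pcc:xds-ms:2007">
  <rim:Slot name="codingScheme">
    <rim:ValueList><rim:Value>1.3.6.1.4.1.19376.1.2.3</rim:Value></rim:ValueList>
  </rim:Slot>
  <rim:Name><rim:LocalizedString value="XDS Medical Summaries"/></rim:Name>
</rim:Classification>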

   Keith