
Wednesday, June 15, 2011

Thoughts after a Standards Meeting

It's not a Healthcare standards meeting unless sushi has been eaten.  Tonight several (6) of us went to my favorite DC Sushi Restaurant: Sushi Taro.  We all ordered the Sushi Tasting, and it is definitely an experience to be savored.  We spent nearly three hours on these 11 courses.  Dinner tonight was on my own dime because there is no way this would fly on my T&E budget, but it was worth every penny.

The SI Framework meeting that I'm attending has been equally valuable on many different fronts.  Meeting face to face is the first opportunity many of us have had to put names to new faces.  It's also a good opportunity to catch up with old friends from HITSP, and to continue ongoing relationships in HL7 and IHE.

What's up at SI Framework?  I'm pretty much focused on the Transfers of Care work, so I don't have much to say about the LRI work.  But here are some high points:

Clinical Information Modeling:  This workgroup is closing in on the content needed for four different transfers of care use cases (discharge, referral to consult, consult report, and patient discharge instructions).  There are three tiers of information.  Tier A is the data that is needed for every transfer:  Active Problems, Meds, Allergies, et cetera.  Tier B is the stuff that should be available to be sent because it may be pertinent and relevant for a transfer.  I'm not sure about Tier C, but I think it falls into the category of nice to have but not necessary.  So, we should soon have the CIMs.  What's a CIM?  Well, these are ONC SI Framework CIMs, not HL7 constrained information models (isn't it lovely that we can reuse the same acronym for a similar thing?).  And not to be confused with Intermountain's Clinical Element Models (that is clearly a CEM).  I'm not sure why there's all this reinvention, but because we don't have a metamodel for CIMs yet (I hope to be working on that with the MDHT team), it's something that can be fixed by retrofitting what we already know about clinical models from around the world.  The workgroup lead is someone I hadn't previously met, Dr. Holly Miller.  I got to listen from the back of the room while she explained the importance of pertinent and relevant to the CIM team -- I don't know how many times I've tried to explain that to people.  This evening I upgraded her ribbon from Rock Star to Goddess.  She joined the group who headed to sushi for dinner.

Data Elements:  This workgroup has nearly completed mapping the data elements required for the use case back to the HITSP C154 work that was done in 2010.  John Donnelly led these efforts (he was another member of the dinner team tonight).  It's almost all there, which is what I would expect.  There's some confusion because there's a difference between the set of data elements used to describe something like a condition, and the clinical context in which it appears.  For example, you use the same set of fields to describe a condition regardless of whether it's part of the active problem list or part of the history of past illness.  It's the classification of the set of data elements which is dynamic (and depends upon clinical judgement about relevance), while the data elements themselves are static.  I got to spend some overdue time with this workgroup.
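To make that static/dynamic distinction concrete, here's a minimal sketch.  All names and codes below are illustrative inventions, not drawn from C154 or any CIM: the same fixed set of fields describes a condition wherever it appears, while the section it lands in is assigned separately, based on its status rather than its data elements.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    # Static data elements: identical no matter which list the entry appears in.
    code: str      # e.g., a SNOMED CT concept code (illustrative)
    display: str
    onset: str
    status: str    # "active" or "resolved"

def classify(condition: Condition) -> str:
    """Dynamic classification: the same entry lands in a different
    clinical context depending on judgement about its status,
    not on the shape of its data elements."""
    if condition.status == "active":
        return "Active Problems"
    return "History of Past Illness"

htn = Condition(code="38341003", display="Hypertension", onset="2009", status="active")
pna = Condition(code="233604007", display="Pneumonia", onset="2004", status="resolved")

print(classify(htn))  # Active Problems
print(classify(pna))  # History of Past Illness
```

The point of the sketch: if you change `status`, the entry moves to a different section, but the `Condition` structure itself never changes.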

Standards Analysis is proceeding well, but depends on the output of the CIM workgroup.  It appears as if the strategy might be C83 for today, moving towards the CDA Consolidation work when it is ready (e.g., when we have some deployment experience and a reference implementation).  The CCR doesn't seem to be up to the job, from the reports of several people (not just me).  I described it to one person thus:  You can have a fast motorcycle and a fast car.  In a short race, the motorcycle will win every time, because it has more acceleration and will get to top speed faster (that's the CCR for transfers of care).  But in a longer race, the Camaro will beat it out, because it has a higher top speed (that's CDA and CCD).  That is going to be an interesting discussion, because most of the CCR proponents aren't at this face to face meeting.  I miss seeing Steve Waldren; he was a force for good in HITSP, and we haven't connected back up since those days.

The Reference Implementation and Architecture workgroup came under quite a bit of fire when we reviewed their scope and purpose.  Even after we ripped that to shreds and reworked it, much of the architecture remained, but our focus appears to have changed.  That group seems to want to adopt me, and I'm not averse to the idea.  There's quite a bit of overlap between what they want to do and work that already exists in MDHT and the Template Meta-model work.

There's a ton of data that we need to gather.  Some of that will likely be in a template exchange format based on the template meta-model.  My own work in this space is leading me down the path of rebuilding the model from an XML Schema designed around the model.  While EMF is capable of generating a schema from a model, I'm not yet well versed enough in how it does so to get the XML Schema that I think should be generated.  So I generated an XML Schema that I'll hand-edit, and then reverse engineer that back into an EMF model.  In some work I've done this week I've already identified a gap: a business name associated with the class attribute being constrained.  That might actually enable us to tie the clinical information models, data elements and template models together in MDHT.  Then we can build a transformational architecture based on the model data using something like MDMI (which we may shortly have access to an open-source implementation of).  Once I get that working the way I like it, I'll have to explain it to the Templates workgroup at HL7, which is working on updating the Templates DSTU.  I wish all these projects weren't trying to be pertinent and relevant at the same time.

From there, I think there is a step of building an interface to support storing CIMs in an exchange instance and loading an exchange instance into a model so we can extract the data elements from CIMs.  That's a place where something like MOF2TEXT could help.  Any .Net coders out there that want to write code generation for the .Net interfaces?

There's some thought that the interface for the reference implementation should be a service.  I think the first step is to build an API and then to refactor a service interface out of it.  The former will be more fine-grained than the latter.  Since the API is model driven, it shouldn't be too difficult to get several implementations if we can get the right people engaged.
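A toy sketch of that API-first approach, with entirely invented names (nothing here comes from the actual reference implementation): the fine-grained API exposes one call per operation, and the coarser service facade is refactored out of it afterwards, composing several API calls into a single network-friendly operation.

```python
class TemplateApi:
    """Fine-grained, model-driven API: one call per operation."""

    def load_template(self, template_id: str) -> dict:
        # Stub: a real implementation would load constraints from the model.
        return {"id": template_id, "constraints": []}

    def validate(self, template: dict, instance: dict) -> bool:
        # Stub check: every constrained field must be present in the instance.
        return all(field in instance for field in template["constraints"])


class TemplateService:
    """Coarse-grained service facade refactored out of the API."""

    def __init__(self, api: TemplateApi):
        self.api = api

    def validate_instance(self, template_id: str, instance: dict) -> bool:
        # One service round trip wraps two fine-grained API calls.
        template = self.api.load_template(template_id)
        return self.api.validate(template, instance)


svc = TemplateService(TemplateApi())
print(svc.validate_instance("ccd-problems", {}))  # True (no constraints yet)
```

Because the service is just a thin composition over the API, alternative implementations only need to reproduce the fine-grained layer; the facade comes along for free.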

There are clear signs of growing pains in the SI Framework.  The late venue change (we are now in the Hilton Hotel down the street instead of in another office building) is due to 170 people registering for the meeting.  This would have been pretty normal turnout for a HITSP meeting, probably five times what they had for Direct, and predictable given the size and number of workgroups.  The ToC and LRI workgroups were stuck sharing two large conference spaces for our breakout sessions.  Communication has been a bit of a challenge, but late-breaking news has made it to people via e-mail instead of just being on the wiki.  This is a welcome change, because I do read my e-mail every day, several times a day, but I don't go to the SI Framework wiki all that often.  Wikis are great collaboration and documentation tools, but lousy ways to deal with conversations.  I wish we'd just get a list server and figure out how to integrate it into the wiki.

There were some other "up-level" conversations that we also had about the SI Framework process.  One of my challenges is having yet-another-standards-activity to follow.  ToC alone has umpteen hours of calls a week, at least twice as much as what I try to follow in HL7.  The RI workgroup was new to me because I cannot keep up.  They are spawning (in ToC and LRI) about two workgroups a month -- with more to come on at least two other projects (Provider Directories is one of those, and the other you'll hear about later).

One of the things Doug asked, first to the workgroup leaders, and second to the entire group, was "what is it that this organization does that the SDOs do not?"  There are a lot of different answers to this question.  One of them is "Support from ONC".  For some reason, neither IHE nor HL7 (nor the former HITSP) seem to get the respect from certain people at ONC or the FACAs that they'd like.  And yet when I look around the room for the SI Framework project, more than half of it is made up of HL7, IHE and former HITSP folks.  There seems to still be a belief in some quarters that these organizations haven't served the industry.  While they all have their blemishes, I think they have.  One of the comments made about the need for the SI Framework was that "we aren't there yet", but the same person also said "Interoperability is a journey, not a destination."  Clearly there are some waypoints on this journey that people would like us to hit.  I want to know what they are, and how they are being established (something not apparent in how SI Framework projects get chosen today).  This is not any one organization's problem.  It belongs to all of us, just like my jacket tells not just my story, but those of many other volunteers in many different organizations working to bring about change.

Members of IHE and HL7 will continue to participate in these efforts, but I really wish that someday there could be a come-to-Jesus meeting with all parties, where everyone could air their grievances and get over some of these impasses, so we could be more focused across the board.  There's still too much back-room sniping going on in way too many places.  Maybe it's all just "Politics" and I should just ignore it, but I'd really like to make it go away so we can focus on the real work.  This shouldn't be so dog-eat-dog competitive.  There should be a way for all of these organizations to work more closely together (maybe something like the Canadian model -- I think I'll propose something later in a blog post).

The Documentation workgroup meets in the afternoon to talk about the "transition strategy document".  This was deemed out of scope in the HL7 efforts, but really needs to exist if the ToC project is to succeed.  There needs to be a map drawn from the way we do it today (C32 Version 2.5 and CCR) to the way being proposed for the next stages of meaningful use.  I'm wondering if trying to bring in a publisher might help with this.

All-in-all, it's been a very productive standards meeting thus far.


  1. Oh, this is sounding so familiar!

    You're not the first to suggest that the Canadian Model might be something for ONC/S&I to look into more closely. It's evolved quite a bit in the last couple of years, and has provided a model that has been deployed in a number of other countries.

    As I've said in the past, being a participant in IHE (actively) and HL7 (occasionally), and having worked as staff at HITSP, I am willing to help my US colleagues understand this model and how it might fit into the US context. My door is always open, and I can be compensated with Sushi!!!! :)

  2. Well, if I knew there would be sushi involved…….

    I wish I had been able to attend; there's a lot going on in health IT these days, and cloning is still outlawed. Thanks for the update on the activities Keith.

    Regarding your analogy of the motorcycle and car; I would agree but I think the CDA/CCD is more like a kit car (still working on Lvl 3 templates and requires a lot of learning by developers). If one can get all the pieces together and get it running well, then the top speed will be higher. In the meantime the bike is well down the street. Of course, the general computer industry is using rocket power! I get reports from several people that the CCD doesn't seem up to the job either.

    I think the real issue is not CCR or CCD, but rather documents or data. It just seems people cannot let go of the old paradigms. A discharge summary is not what's important; rather, the list of current and d/c'd meds, the list of active and resolved issues, the list of results, etc. are what is important.

    The nice thing about the Direct Project was that there was a requirement to create a reference implementation - i.e., there must be a solution at the end of the work. I am struggling to see what the "solution" will be at the end of the S&I ToC.

    Anyway, it would have been nice to see you. We see things differently and have different experiences, and it's always fun to learn and debate cordially with you.


  3. Thanks Keith. When explaining what happened to others back at the farm, it helps when I can give them a link to your blog and not have to rewrite it all. Despite the "ripping to shreds" of the Architecture/RI scope, I think we came out still afloat, with a more accurate scope statement by the end. As to Steve Waldren's question about the "solution" coming out the end, I think that the ToC reference implementation will include open source code to take discrete data (provided via an API) and create content in the form of the "ToC information packages" described in the use cases. Examples: Discharge Summary (in my world, people still want this context, not just isolated components of data), Consultation Referral Request, and two others. And also code to take an inbound information package and decompose it into standardized data elements, which an EHR could then import and reconcile (though the import and reconcile are not in the scope of the ToC RI). Existing EHRs would need "adapters" to be able to use this code, though an EHR built from scratch around the Clinical Information Model might need less adaptation. ToC content needs to be secured and transported, so I foresee the ToC Reference Implementation as one piece of the puzzle that would be complemented by other pieces such as Direct Project or IHE XDR. If using Direct (which is, wisely, content-agnostic), then the ToC Information Packages would be encrypted via S/MIME and securely transported to the intended recipient. Reference Implementations, or pieces thereof, from both the Direct and ToC projects could be used.

    I think one of the challenges is how many EHRs will adopt reference implementations, since most are already "well down the road" on some of these ToC content creation functions. Will they say, "it's just as easy to create the package myself as to map to the CIM and have the RI do it"?