Monday, July 30, 2012

Moving from HITSP C32 to CCD 1.1

This is the first of a series of posts describing how to move from the HITSP C32 required under Meaningful Use to the CCDA specifications.  Since HITSP C32 was a CCD, the plan is to cover how to deal with changes between the HITSP C32 restrictions on CCD, and the HL7 CCDA version of CCD (CCD 1.1).

I'm assuming that you are going to be starting from your existing HITSP C32 implementations to support the new format.  I'm doing this by manually editing the Robust HITSP C32 example provided by NIST, and manually inspecting my changes against the CCD 1.1 specification in the HL7 CDA Consolidation Guide (when there is other testing available, I'll take advantage of that as well).

I’m first going to address the header constraints.  Later posts will address problems, allergies, medications, lab results and procedures sections, and if there is enough demand, I'll get to other sections as well.  You'll be able to find all of these posts under the 2S2 category.

The CCD 1.1 Header

You’ll need to add two <templateId> elements to the existing set under <ClinicalDocument> to conform to the CCD 1.1 Header constraints. 

  <!-- Added for CCDH: CONF:9441 -->
  <templateId root="2.16.840.1.113883."
       assigningAuthorityName="HL7/CCDA General Header"/>
  <!-- Added for CCDH: CONF:8450 -->
  <templateId root="2.16.840.1.113883."
       assigningAuthorityName="HL7/CCD 1.1"/>

The “Robust CCD” example from NIST needs no further alterations to be valid for the “CCD Header Constraints”, but we still have to deal with the General Header Constraints.  We’ll hit those next.
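If you generate these documents programmatically, a quick self-check helps catch a missing templateId. Here is a minimal sketch using Python's ElementTree; the OID roots shown are placeholders, not the real template OIDs — substitute the values from the CDA Consolidation Guide:

```python
import xml.etree.ElementTree as ET

CDA_NS = "{urn:hl7-org:v3}"

def has_template_ids(doc_xml, required_roots):
    """Return True if the ClinicalDocument carries a <templateId>
    for every OID root in required_roots."""
    root = ET.fromstring(doc_xml)
    found = {t.get("root") for t in root.findall(CDA_NS + "templateId")}
    return set(required_roots) <= found

# Placeholder OIDs for illustration only.
doc = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <templateId root="1.2.3.4"/>
  <templateId root="1.2.3.5"/>
</ClinicalDocument>"""

print(has_template_ids(doc, ["1.2.3.4", "1.2.3.5"]))  # True
print(has_template_ids(doc, ["1.2.3.9"]))             # False
```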

General Header Constraints

The H&P General Header Constraints were largely adopted in CCDA, but there are a few additional things that you need to worry about:
  • templateId elements have obviously changed.
  • The requirements for id, code, and title in the ClinicalDocument have not.
  • languageCode changes slightly: the original General Header Constraints specified the nn or nn-CC form, which is still acceptable, but CCDA allows anything from the value set 2.16.840.1.113883.1.11.11526, which is defined by RFC-4646.  That RFC is a bit more forgiving, but most users of it will follow the general nn or nn-CC pattern.
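If you emit languageCode yourself and want to stay within the conservative form, a quick check like this sketch works (the regex covers only the nn or nn-CC pattern; a full RFC-4646 validator accepts more):

```python
import re

# Constrained nn or nn-CC form from the original General Header
# Constraints; RFC-4646 permits more, but this covers common usage.
NN_CC = re.compile(r"^[a-z]{2}(-[A-Z]{2})?$")

def is_simple_language_code(code):
    return bool(NN_CC.match(code))

print(is_simple_language_code("en"))       # True
print(is_simple_language_code("en-US"))    # True
print(is_simple_language_code("english"))  # False
```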
Most of the changes are minor and affect what you can expect in participants in the header:

On addr and telecom

As before, many participants must have an addr and telecom element.  However, the new CCDA header has tighter constraints on what can be present in addr, and looser constraints on telecom.
The tighter constraints on addr ensure that addresses can be stored in information systems without having to parse them:
The address is fielded into streetAddressLine, city, state, postalCode and country elements; the first two are required, and the remaining three are recommended.  Non-whitespace text outside of elements inside the <addr> is not permitted. 
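A sending (or receiving) system can sanity-check that fielding rule with something like this sketch, using Python's ElementTree; the required/recommended split follows the description above:

```python
import xml.etree.ElementTree as ET

CDA_NS = "{urn:hl7-org:v3}"
REQUIRED = ("streetAddressLine", "city")

def check_addr(addr_xml):
    """Check a fielded <addr>: streetAddressLine and city must be
    present, and no non-whitespace text may appear outside the
    component elements."""
    addr = ET.fromstring(addr_xml)
    names = {child.tag.replace(CDA_NS, "") for child in addr}
    missing = [n for n in REQUIRED if n not in names]
    loose_text = bool((addr.text or "").strip()) or any(
        (child.tail or "").strip() for child in addr)
    return not missing and not loose_text

good = """<addr xmlns="urn:hl7-org:v3">
  <streetAddressLine>1 Main St</streetAddressLine>
  <city>Anytown</city><state>MA</state>
  <postalCode>02101</postalCode><country>US</country>
</addr>"""
bad = '<addr xmlns="urn:hl7-org:v3">1 Main St, Anytown MA</addr>'

print(check_addr(good))  # True
print(check_addr(bad))   # False
```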
For telephone numbers, we had stricter requirements in the original General Header.  We had specified that phone numbers appear in a constrained form of the tel: URL format.  The older constrained form is described below:

telephone-url = telephone-scheme ':' telephone-subscriber
telephone-scheme = 'tel'
telephone-subscriber = global-phone-number [ extension ]
global-phone-number = '+' phone-number
phone-number = digits
digits = phonedigit | digits phonedigit
phonedigit = DIGIT | visual-separator
extension = ';ext=' digits
visual-separator = '-' | '.' | '(' | ')'
CONF-HP-12: Telephone numbers SHALL match the regular expression pattern tel:\+?[-0-9().]+
CONF-HP-13: At least one dialing digit SHALL be present in the phone number after visual separators are removed.

These constraints are no longer present in the CCDA General Header, but they are still considered best practice for sending phone numbers.  NOTE: If you apply the constraints of the tel: URL from RFC-3966 (or its predecessor RFC-2806), the lack of a global phone number (a number beginning with a +) requires use of the phone-context= parameter somewhere in the URL, and if you don’t have that, the URL is not correct.  So, the + form is still preferable.
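CONF-HP-12 and CONF-HP-13 are easy to keep applying as a self-check even though CCDA no longer requires them. A sketch:

```python
import re

# CONF-HP-12: the old constrained pattern tel:\+?[-0-9().]+
TEL_PATTERN = re.compile(r"^tel:\+?[-0-9().]+$")

def valid_legacy_tel(value):
    """Apply CONF-HP-12 and CONF-HP-13: the pattern must match, and at
    least one dialing digit must remain once visual separators
    (- . ( )) are stripped."""
    if not TEL_PATTERN.match(value):
        return False
    digits = re.sub(r"[-().]", "", value[len("tel:"):].lstrip("+"))
    return len(digits) > 0

print(valid_legacy_tel("tel:+1-617-555-1212"))   # True
print(valid_legacy_tel("tel:(---)"))             # False (no digits)
print(valid_legacy_tel("mailto:x@example.com"))  # False
```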


The administrativeGenderCode and birthTime elements are still required (and administrativeGenderCode still uses the HL7 vocabulary).  The maritalStatusCode, religiousAffiliationCode, raceCode, and ethnicGroupCode elements are still optional (although maritalStatusCode is now recommended).  These use the same vocabularies as were required in the HITSP C32.
Only one <name> element is permitted in the <patient> element.  HITSP C32 allowed multiple names to be present. 
The <name> element in the new general header is now constrained to require being fielded into components for patients, just as was done in the HITSP C32.


Authors must contain an <id> element that contains their NPI.  This is a new requirement in the CCDA general header, and you will find it in a number of places, starting with the <author> element.  It isn’t clear how to indicate that the author doesn’t have an NPI (which is possible), or what to do when the NPI is unknown. 

If you do not have the author’s NPI, I would recommend this form:
<id root='2.16.840.1.113883.4.6' nullFlavor='UNK'/>

And if they do not have one, this form:
<id root='2.16.840.1.113883.4.6' nullFlavor='NA'/>

The first form indicates that you don’t know the NPI; the second, that an NPI isn’t applicable.  HL7 modeling experts will probably hate these choices, but what else should we do?  This is similar to the old “how do I say the patient’s phone number is unknown” problem, which was solved in a similar fashion years ago.
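Pulling those cases together, a hypothetical helper for emitting the author <id> might look like this sketch (the extension attribute used when the NPI is known follows the usual II pattern; the helper itself is invented for illustration):

```python
# NPI root OID, as used in the examples above.
NPI_ROOT = "2.16.840.1.113883.4.6"

def author_id_element(npi=None, has_npi=True):
    """Emit the author <id>: extension when the NPI is known,
    nullFlavor='UNK' when unknown, nullFlavor='NA' when the author
    has no NPI at all."""
    if npi:
        return f'<id root="{NPI_ROOT}" extension="{npi}"/>'
    flavor = "UNK" if has_npi else "NA"  # unknown vs. not applicable
    return f'<id root="{NPI_ROOT}" nullFlavor="{flavor}"/>'

print(author_id_element("1234567890"))
print(author_id_element(has_npi=True))   # NPI exists but is unknown
print(author_id_element(has_npi=False))  # author has no NPI
```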

The <name> element can be fielded, like the patient’s name, or it can just be a string containing the full name of the author, but you cannot mix the two as you could have under the HITSP C32 (not that anybody really tried to do that).

If the author is a medical device, you must include both <manufacturerModelName> and <softwareName>.

Data Enterer

Data enterers now require an id, address and telephone number (addr and telecom elements), unlike what was required for HITSP C32.  And if you have an NPI for the data enterer, you should include it.  I’m not sure this isn’t a copy and paste error.


Informant

In the HITSP C32, when the informant was an assignedEntity (a healthcare provider), they had to have an address (and it was optional for others).  In CCDA, all informants SHOULD have an address, regardless of who they are.  Again, the name must either be fielded, like the patient name, or just a string.

In HITSP C32, an informant that was not a healthcare provider was restricted in the kinds of relationships they could have with the patient.  All of those relationships are still permissible, but other types of relationships can also be used.

Information Recipient

In the old general header, if you had an <intendedRecipient> element, you also had to have either an <informationRecipient> or a <receivedOrganization>.  In CCDA, neither is required, but if you include neither, you might as well have omitted the <intendedRecipient> element entirely.

Legal Authenticator and Authenticator

Ensure that you include <time value='yyyymmdd'/> and <signatureCode code='S'/> in these elements.  A CCDA should have a Legal Authenticator, and may have other authenticators.  The old general header made no recommendations on the presence of the legal authenticator.

That about covers it for this first installment.  Look for future installments over the next few weeks.

Why is Software Hard?

A long time ago, I was in a meeting with a colleague and several senior managers for a project we were working on.  He and I managed different development teams and were being lambasted by a manager (not our own) with respect to a recent bug.  The manager had previously made some comments about his extensive history in software (not realizing that both my colleague and I had more experience in the industry than he had).  He went on to compare software development to car manufacturing.  Why can Toyota build such great cars and we still have bugs in our software?

My colleague made a very simple statement:  "More moving parts."

According to this website, a typical car has around 30,000 parts when you take it down to the level of a single component (e.g., a screw).

Yet we measure software in thousands of lines of code, and a typical enterprise application has millions, tens of millions, or even hundreds of millions of lines of code (and some of that can be 10, 20 or even 30 years old).  And those parts can touch on each other in so many different ways not limited to the laws of physical space.

Some of the best studies (e.g., COCOMO), reports and books (e.g., Mythical Man Month) from decades ago estimate software productivity as being around 10 lines of code per programmer-day, or around 2.5 thousand lines of code a year.  I had an engineer who worked for me at that point in time who produced (measurably) 100,000 lines of code a year (and that was also bug-free).  He was five times more effective than the next-best engineer I had working for me, and my team was arguably 5x better than Brooks' 10-lines-per-programmer-day estimate.  But even so, to consider what that team could do with respect to maintaining tens of millions of lines of code, it's just daunting.

In healthcare, we add to that complexity.  SNOMED CT has codes for several hundred thousand concepts.  ICD-9-CM has over 14,000 codes and nearly 4,000 procedures.  ICD-10-CM has more than 68,000 codes, and ICD-10-PCS has 76,000 codes.  LOINC has more than 50,000 terms. RxNORM has nearly 500,000 codes.

Back to the car analogy.  Imagine how difficult it would be to build anything, if the thread count of every screw was different.  At some point, it is amazing that software developers manage to build anything at all.

Friday, July 27, 2012

IHE PCC and IT Infrastructure Call for Proposals

Greetings IHE Community and Industry Partners,
It is with great pleasure we announce the IHE annual planning cycle has begun! The IT Infrastructure (ITI) and Patient Care Coordination (PCC) Domains are soliciting work item proposals for the 2012-2013 Publication Cycle. The Call for Proposals opens July 26 and concludes October 5, 2012. Interested parties are invited to submit a brief proposal for new IHE Profiles and/or White Papers to be considered for development in the new Publication Cycle. 
This e-mail describes the annual planning cycle process from August – December 2012. Please continue reading for more details or visit the IHE Wiki for more information.
Help Promote IHE Call for Proposals:
All IHE members and industry partners are invited to share this announcement with their committee mailing lists and other interested parties. Additional information is maintained on the IHE Wiki.

All Proposals must follow the structure and format of the attached Brief IHE Proposal Template, specifically addressing the following:
  1. What is the problem you propose to solve by this proposal, and how is that problem expressed in practice (e.g., a use case)?
  2. How would fixing this problem improve health care in practice?
  3. What specific components of standards could be used to solve this problem?
  4. Your proposal should identify one or more potential editor(s) in the event that the proposal is selected for further evaluation and possible development. If possible, please include some indication of the business and/or clinical case surrounding the situation when describing the problem. For example, is there an economic motivation for addressing this problem immediately?

Summary of IHE’s Multi-Phase Proposal Process:
1. Submit Brief Proposals by October 5, 2012:
The PCC and ITI Domains' Call for Proposals opens July 26 and closes on October 5, 2012. Submit a Brief Proposal with the attached form to the domain email listed below.

2. Planning Committee's Proposal Review Webinars - Decision Meetings:
Save the Date! Webinars are held during the weeks of October 8 & 15, 2012 on WebEx.  All Proposal authors are required to present a 15 min. summary of their Brief Proposal on one of these Webinars. Exact dates for each domain will be announced in August 2012. Please anticipate participating in 1-3 webinars. These Webinars will be decision meetings.
3. 2012-2013 Planning Proposal Evaluation Kickoff:
Save the Date! October 30-31, 2012 in Oak Brook, IL.* Click here for more details.
We urge those who submit proposals or white papers to attend the Planning Proposal Evaluation Kickoff Meeting in person or by phone. In-person advocacy has proven to be the most effective way to ensure your brief proposals are understood and accepted by the committee. 

4. 2012-2013 Technical Committee Proposal Evaluation Meeting:
Save the Date! The week of December 3, 2012, location TBD.* Click here for more details.
Authors of proposals that are accepted at the Planning Proposal Evaluation Kickoff Meeting and passed to the IHE Technical Committee for review are required to write and present a detailed proposal during the Technical Proposal Evaluation Meeting.

Deadline: October 5, 2012 at 11:59 pm CDT
Email the completed brief IHE Proposal template to the corresponding domain email address below before October 5, 2012 at 11:59pm CDT.
Domain Email              Planning Co-Chair 1    Planning Co-Chair 2
PCC Planning Committee    Keith Boone            Dr. Michael J. McCoy
ITI Planning Committee    Karen Witting          Claudio Saccavini
We look forward to working with you during the IHE 2012-2013 Publication Cycle. Please contact the IHE secretary at if you have any additional questions or need further assistance. 

Wednesday, July 25, 2012

The Patient's Workflow

Thinks happen and then thinks happen. And this one has been very interesting to watch unfold in my brain, as I try to tie together eight impossible thinks before dinner.

I've been teaching quality standards this week, covering things like CDA, CCD, CCDA, QRDA, and HQMF as they relate to PQRS and QDM, and various quality improvement initiatives.  In discussion of the QRDA Release 2.0, I was explaining how structured documents had created QDM based templates, mostly starting from the CCDA templates to map to the NQF Quality Data Model.
"For some things," I said, "we had no CCDA template to represent a concept, such as for the QDM Communication area.  So, the workgroup created communication templates that specialize for the different domain specific attributes, sender and receiver.  The possible values for sender and receiver include patient and provider (and also information systems, if I remember correctly).  So, the workgroup created three templates to support quality measures around provider communication:
  • Communication from Patient to Provider 
  • Communication from Provider to Patient
  • Communication from Provider to Provider
They didn't include Communication from Patient to Patient, because that doesn't include provider communications."
But then my S4PM badge poked me in the chest (which is amazingly difficult, given that it's sitting in a coat pocket 2000 miles from me), and I wondered a bit more about this idea.

I had a routine physical examination a couple of weeks ago, and my provider successfully met a possible quality measure with me via:  Communication from Patient to Patient Recommended.  This is QDM speak to say that this was an action he performed, where the category was "Communication", the domain specific attributes were sender=patient and receiver=patient, and the state of the action was "Recommended".  The recommendation was in relationship to weight loss, where he suggested Weight Watchers, because it isn't just the weekly weigh-ins that help, but rather the communication and support between members that also helps.

In a related development, Farzad tweeted this link to me earlier today.  The most interesting idea in the tweet and link was that these are not provider's quality measures, but rather quality measures belonging to a patient.  These are MY measures for the quality of care that I'm getting.  The link is worth reading because the authors talk about a mechanism whereby they evaluate a measure in relationship to the patient, not the provider.

We've been having some discussions on Patient engagement quality measures for providers at the Society for Participatory Medicine, but let's turn this back around.  What are my quality measures?

Where are the quality measures that patients (or consumers) can apply to themselves and their data?  Who is developing those?  And how will we automate them and deliver them to patients?  And how can patients use this information to improve their quality of care?

Somewhere, there has to be some research about  patients who get better outcomes because of what they do and can control, and an understanding of the benefits (and costs) that their own actions have with respect to the quality of care they receive. I imagine that most reading this by now are going to focus on health and wellness actions. I'd like to shift your attention away from that, because that's not where I'm headed with this.  I'm bombarded by that data all of the time, and I have a pretty good idea what I should be doing from that perspective, and what my health and wellness quality and input data and measurements are.

What I'm really after are those things that have to do with how I as a patient relate to my healthcare system, my doctors, my payers, my employer, et cetera, that could also improve my outcomes within that system.  What data should I be tracking for that?  What are the measures?  What are the guidelines?  If I were to be diagnosed with a life-altering disease, I know that one thing I would do is join a patient community.  I already have enough information to show me that the value of that action would tremendously improve the quality of care I receive.  That's a pretty obvious case, and it's quite similar to what my doctor recommended to me for weight control, save that the patient population is different.

After thinking about this some more, I reminded myself that to improve a process, you need to document it and instrument it to measure quality.  And to do this, we need to look at the patient's quality measures from the patient's perspective.  Patient engagement and empowerment isn't about focusing on patients, but rather, returning the focus TO patients.

I don't have answers yet, just questions: What is the process from the patient's perspective?  What is the patient's workflow? What are the engaged patient's guidelines?

  -- Keith

P.S.  Even HL7 is looking at what Patient Engagement means for Health IT standards at its plenary session this fall. I'm looking forward to seeing what fuel that session brings to this fire (pun intended).

Tuesday, July 24, 2012

Real Time Quality Measurement as Clinical Decision Support

The topic of "Real Time Quality Measurement" as "Clinical Decision Support" has come up recently in three different forums: in the AHRQ RFI which I mentioned earlier today, in a previous face-to-face meeting of the HIT Standards Committee, and just now in the agenda for the Clinical Quality Workgroup for tomorrow's monthly call.  I'm a member of that workgroup, but won't be able to attend the meeting, given that I'm in an all-day training teaching about standards for quality measurement.  As always in cases where I cannot attend the call, I read the materials and respond ahead of time to the workgroup, so that at least they have my input.  After rereading, I decided to share it as this post:

On the topic of conceptualizing CDS as real-time quality measurement, you’ve hit one of my favorite discussion topics.

Quality measurement is intended for process improvement. One of the first steps in process improvement is documentation of quality processes. The second step is to build measurement into those processes so that progress against quality can be measured as early as possible. In clinical care, the process is a care guideline, and when that guideline is instrumented so that it can be measured, it can also be instrumented so that it can be executable. An executable clinical guideline operating against measurable inputs is clinical decision support in a very real sense.

My post on Gozinta and Gozouta is nearly three years old, yet most of what I have to say in it still applies in the current day.

What has changed in the past three years is that HQMF is now in its second release cycle, and there are significant initiatives using it, including the Measure Authoring Tool, and Query Health. There are yet more opportunities to use it to define not just how a quality measure is computed, but also to define the inputs (and possibly even outputs) of a clinical decision support process. Just being able to describe the inputs to a CDS process in a way that would allow Health IT systems to automatically generate an appropriate interface to a CDS implementation would be a tremendous game changer.

This should be a consideration of the ONC S&I Framework Health eDecisions project. I have some ballot comments on HQMF Release 2.0 that HL7 will soon be publishing that will enable this kind of use, based on some earlier standards work that IHE did. That never got adopted, I think in part because a standard like HQMF was missing from the protocol. There is emerging work in the CDS space from the Clinical Decision Support Consortium (an active participant in the Health eDecisions project) that could readily take advantage of synergies between it, and HQMF as a description of the inputs it is expecting. The CDSC work takes a CCD document from an EHR and develops from it a list of needed interventions for a patient, which it returns to the sending EHR.

The Data Criteria section of HQMF owes its existence in part to some of that early IHE work on Care Management. The idea was that the “data of interest” to the system (in the case of HQMF, data of interest to the measure) needs to be well defined. And having defined it well, and in a computer readable format, it could be used to automatically generate an input for a clinical decision support system. That input could be a CDA Document, implemented using the CCD specification, or one of the documents specified in the CCDA specifications. It need not contain all of the specified data of interest, just that data of interest that is available to the provider, in order to be useful.
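To make that idea concrete, here is a deliberately simplified sketch. The dict layout is invented for illustration (real HQMF data criteria are RIM-based XML entries, and the OIDs here are placeholders), but it shows how well-defined "data of interest" can filter what a sending system packages up for a CDS service:

```python
def cds_input(data_criteria, patient_record):
    """Select the 'data of interest' actually available for this
    patient -- the subset to package into the document sent to the
    CDS service.  Criteria with no matching data are simply omitted."""
    return {c["id"]: patient_record[c["id"]]
            for c in data_criteria if c["id"] in patient_record}

# Placeholder criteria and value-set OIDs, for illustration only.
criteria = [{"id": "Diabetes", "valueSet": "1.2.3.4"},
            {"id": "HbA1c",    "valueSet": "1.2.3.5"}]
record = {"HbA1c": 7.2, "Weight": 180}

print(cds_input(criteria, record))  # {'HbA1c': 7.2}
```

Note that Weight is dropped because no criterion asks for it, and Diabetes is dropped because the record doesn't have it — matching the point above that the input need not contain all of the specified data of interest, just what is available.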

Knowing what data is of interest can also be used on the EHR side to prompt the provider to ask good questions. This is yet another form of Clinical Decision Support.

One of the missing points will be trying to figure out how to specify what the outputs of the Clinical Decision Support system would look like. Here, I believe we need to do more work on care planning. The output of the CDS system can be a care plan that specifies further diagnostics, interventions or goals which might be appropriate for the patient. Specifying what the outputs look like in a way that is actionable is important. But it need not be so detailed as to suggest exactly what needs to take place and in what order, as that is a case where systems might innovate.

  -- Keith

AHRQ Request for Information on HealthIT and Quality Measurement

AHRQ recently posted 15 questions about Quality Measurement in a request for information from the public.  Unlike other RFIs I've seen lately, this one is remarkably clear and concise.  Even so, I thought it would be interesting to try to summarize the fifteen questions even further.  Sometimes I find restating the questions in simpler terms makes them easier to answer:

  1. Who are you and why should we care?
  2. Whose voice isn't being heard and how can we make it louder?
  3. What do you think is hard and how could it be made easier?
  4. How should we talk to patients?
  5. Measure developers?
  6. How is quality measurement related to clinical decision support?  Are they the same thing?
  7. How can we go faster without making your life impossible?
  8. Is MAT Helping?  What else could we do?
  9. What about Natural Language Processing?
  10. What do we need to deal with longitudinal data?  
  11. How should we talk to doctors?
  12. How do we get stakeholders to talk to each other?
  13. How can we move away from claims based CQMs?
  14. What have you found to be successful?
  15. Please show us ... (some of us are from Missouri)...
These are great questions, and I'll be thinking about some of the answers over the coming weeks.  The deadline for responses is August 20th, 2012.

Monday, July 23, 2012

What is HQMF?

John Moehrke asked for this one.  I've written nearly a double dozen posts on HQMF, and more than three dozen on Query Health, but never stopped to explain what HQMF is.

The Health Quality Measure Format (HQMF for short), is an HL7 standard format for documenting the content and structure of a quality measure.  It is intended to represent quality measures used in a healthcare setting.  It is an XML document format based on the HL7 Reference Information Model (RIM), just like CDA is, but instead of describing what happens in a patient encounter, the HQMF standard describes how to compute a quality measure.

There are six or eight key components of a quality measure (depending upon measure type and level of detail).  These are structured at three different levels of detail.  At the first level of detail is metadata describing the quality measure.  This goes into the header of the document, and describes who wrote it, the dates over which it is valid, who validated it, and other details about how the measure works or is used.  The metadata makes it easy to find the quality measure.  You can write a valid HQMF document and only include this level of detail.  The body of the document can be written in a PDF or any other multimedia format.  This level of detail is intended to support legacy documentation on quality measures.

The second level of detail provides a human narrative description of the quality measure in three (or four) sections:
  1. Measure Description
  2. Data Criteria
  3. Measure Population
  4. Measure Observations
The measure description simply contains human-readable narrative describing the measure, its purpose, how it works, et cetera.  The data criteria section describes the "data of interest" to the measure.  This is where you find descriptions of the events, statuses and attributes of those events that need to be captured to make the measure effective.  The measure population section describes the major components of the measure, including the initial population, the numerator, the denominator, and various special cases (known as exclusions or exceptions depending upon how they are treated).

A typical ratio measure has criteria describing the initial population, and criteria for the numerator and denominator.  These criteria explain how to obtain counts of patients (or other items) that match the specified criteria.

Other measures (e.g., a continuous variable measure) need to describe not just what to count, but how to compute from the items selected in the initial population (e.g., to compute average ED wait times).  How those observations are computed is described in the measure observations section.

The third level of detail, needed to automate measure computation, is provided in machine-readable entries in the last three sections.  These entries appear when the measure is specified at the highest level of detail.  They provide the computer with instructions on how to count and compute the results of the measure.  For ratio measures, these entries describe how to combine the data of interest using Boolean logic to select the items to count.  For measure observations, these entries describe which items the computation uses, and the measure observation definitions then describe the computation.
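In ordinary code, that level-three logic for a simple ratio measure boils down to Boolean predicates over patient data. A sketch (the field names and criteria are invented for illustration; a real measure operates over coded RIM entries, not flat dicts):

```python
# Hypothetical patient records, flattened for illustration.
patients = [
    {"id": 1, "age": 45, "diabetic": True,  "hba1c_tested": True},
    {"id": 2, "age": 50, "diabetic": True,  "hba1c_tested": False},
    {"id": 3, "age": 30, "diabetic": False, "hba1c_tested": False},
]

def in_initial_population(p):
    # e.g., adult diabetics (Boolean logic over the data of interest)
    return 18 <= p["age"] <= 75 and p["diabetic"]

def in_numerator(p):
    # e.g., an HbA1c test was performed during the measurement period
    return p["hba1c_tested"]

# Here the denominator criteria equal the initial population criteria.
denominator = [p for p in patients if in_initial_population(p)]
numerator = [p for p in denominator if in_numerator(p)]
print(f"score: {len(numerator)}/{len(denominator)}")  # score: 1/2
```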

HQMF Release 1.0 was developed and balloted in 2009 by HL7 as a Draft Standard for Trial Use (DSTU), and was published in March 2010 as a DSTU (and has now expired).  During the two year DSTU period, the standard was used by NQF to retool more than 100 existing quality measures into electronic format.  NQF also developed the Measure Authoring Tool to create HQMF documents.  The ONC S&I Framework adopted and adapted the standard for use in its Query Health project, designed to allow health data to be queried by external sources.  (See my series of posts on Query Health).

The Query Health adapted version of HQMF became the foundation of Release 2.0 of the HQMF standard, currently being balloted by HL7.  This will also become a Draft Standard for Trial Use for two years.  MITRE has developed an experimental transform that converts HQMF R1 to HQMF R2 format, with some limitations.  I've heard reports that they've been able to use it successfully on about 80% of the NQF retooled measures.

HQMF can do more than just define "how to count".  The HQMF standard can be used to describe what the output of counting should look like.  When used in combination with the QRDA specification (an implementation guide on the HL7 CDA), HQMF can tell you what data needs to go into the QRDA.  It's also possible to use an HQMF document with just a data criteria section to describe what needs to be sent over an interface, or to describe the entries that should be present in a CCDA document (a variation on the use already designed into HQMF and QRDA).  So HQMF is the "query", and QRDA is (or can be) the output of that query.

There are three different categories of QRDA.  As related to HQMF, QRDA Category I is patient-level data for a single patient.  A collection of QRDA Category I documents can be used as the data inputs to a measure calculation.  QRDA Category II can be used to report on the patient data actually used to calculate the measure, enumerating all patients and their data in a single document.  Finally, QRDA Category III can be used to report the aggregated results.

QRDA Category I is fully described in the HL7 QRDA DSTU.  Category II and Category III are alluded to, but not described in detail.  It's easy to figure out that QRDA Category II would be very similar to Category I, but just contain more patient data.  The format for QRDA Category III is currently being worked on (in fact, I missed a meeting today because I'm teaching about HQMF and QRDA).

Given current developments and implementation efforts that have been going on in Query Health and for retooled measures, I expect HQMF to be on the short list of standards to be considered for Meaningful Use Stage 3.

This post was updated on Tuesday, July 24th, to address comments below. I clarified what HQMF is, added more information about the third level of detail, and described a bit more about QRDA.

Thursday, July 19, 2012


For the last week I've been at the IHE PCC Face to Face meeting to resolve public comments on the various profiles that were published for public comment more than a month ago.  The Request Clinical Knowledge (RCK) profile got some excellent feedback from members of the HL7 Clinical Decision Support Workgroup.  One of the standards we had considered for the RCK profile was the HL7 Infobutton SOA Guide.  However, we decided not to pursue it too far when we realized that the guide contained so much SOAP-based content. Most Infobutton implementations today seem to use the RESTful URL Implementation guide, and so we focused on that and the original standard.  It's been a bit hard to track what is going on with Infobutton in HL7 these days, because some of the final implementation guides seem to be ahead of the balloted model, although consistent with where it is going.

There was some excellent give and take in the discussions with the CDS workgroup over the last week, and it seems that several of the things we did "differently" are going to show up in new guides that they will have coming out for ballot in the near future.  One concern I have for this profile is what happens in a month or so when Meaningful Use Stage 2 arrives.  Will we be "in sync" with the final rule, or not?  I'm hoping that we will be.  If ONC names Release 3 of the URL guide, we'll be in great shape with this profile, because it does comply with it.

The updated text is already on the IHE FTP site, and should be headed off for trial implementation as of tomorrow (actually today now).  I'm preparing a bunch of Just-In-Time training content on Quality Standards for next week.  By the time the ballots are finished and reconciled, I'll already have classes done for QRDA and HQMF, and who knows, I might even start that second or third book soon.

In other news, the IHE MHD profile (Mobile access to Health Documents) also seems to be coming along.  We were all rather frustrated with the dearth of feedback received, but I'm still excited about the results.

Wednesday, July 18, 2012

Computation and HQMF

One of the things I've been struggling with in the back of my head for the past couple of weeks with HQMF is how to deal with continuous variable (computed) measures.  I nearly got it right last week when we put together the model where:
  1. The data criteria section describes data of interest.
  2. The population criteria selects items to compute with.
  3. The measure observations section defines the computations.
I said nearly.  It works for any single act where the computation is derived from attributes of that act.  But it occurred to me this evening (while working on something not quite related), that I was missing something.  Where it doesn't work is for computations dealing with two or more different acts.  I'm sure if I'd done more applicative programming at some point in my career, how to do this would have been obvious, but it wasn't last week.  I never got Prolog either.  It's weird. Somehow I can do awesome stuff in weird languages like XSLT, but some of these other twisty ways of thinking escape me (at least initially).

In any case, I realized tonight that the solution to the challenge for computations involving two or more acts is to provide a mechanism that defines a set of tuples.  Each item in the tuple is associated with a "measure population criteria" which describes whether or not the item should be considered on its own merits alone, and with zero or more constraints between two or more items of the tuple that must be true for the items to be considered.

This is essentially an "INNER JOIN" between the sets of results matched by the measure population criteria for each item.  Imagine the case where, in an ED encounter, you want to measure the time between the creation of an order to admit a patient, and the discharge of that patient from the ED encounter.  You have two acts:  The ED encounter itself (call it EDEncounter), and the order to admit (call it OrderToAdmit).  In the current HQMF model, these could both be represented as measure population criteria, and you could have a measure observation definition based on those criteria (e.g., EDEncounter.effectiveTime.high - OrderToAdmit.effectiveTime), and it would nearly work.  But let's assume that you have a patient who has had two ED encounters in the measure period.  For such a patient, the definition of the computation would have to decide which EDEncounter to associate with which OrderToAdmit.  The only way to resolve the issue is to tell the system how the two are related.

So, with two or more variables, there needs to be at least one more criteria which specifies the relationships between the acts.  It could go a number of different places, but because this criteria is critical to the computation, I'd put it in the measure observation definition as a precondition.  The precondition would reference a relevant act from the measure population criteria, and that reference would then be related (through an act relationship) with one or more other acts through the same kinds of relationships allowed for other criteria.  Multiple precondition relationships could be specified, all of which must be true for the criteria to succeed.  These precondition relationships are like the ON clauses associated with the joins.
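To make the join analogy concrete, here is a small Python sketch (the data, field names, and times are all invented for illustration) of why the pairing criterion matters when a patient has two ED encounters:

```python
# Hypothetical data: two ED encounters and two admit orders for one patient.
# Without a relationship criterion, the pairing is ambiguous.
ed_encounters = [
    {"id": "ED1", "end": 100},
    {"id": "ED2", "end": 250},
]
orders_to_admit = [
    {"id": "O1", "encounter": "ED1", "time": 60},
    {"id": "O2", "encounter": "ED2", "time": 220},
]

# Naive cross product: every encounter paired with every order (4 pairs).
cross = [(e, o) for e in ed_encounters for o in orders_to_admit]

# The precondition acts like an ON clause: only pairs where the order
# occurred within the encounter (the "component" relationship) survive.
joined = [(e, o) for e, o in cross if o["encounter"] == e["id"]]

# Measure observation: encounter end time minus order time, per pair.
deltas = [e["end"] - o["time"] for e, o in joined]
print(len(cross), len(joined), deltas)   # 4 2 [40, 30]
```

Without the join condition, four candidate pairs exist and the computation is ill-defined; with it, each order pairs with exactly one encounter.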

One thing I don't like about this is the way that references need to be used.  The challenge is that the content of the precondition is an act reference.  I've seen how having to remember the components of an identifier of an act can make things challenging for people reading specifications.  In OASIS ebXML specifications, identifiers are UUIDs or names.  Names are locally unique within the message, and can later be given globally unique identifiers assigned by the system.  We don't have a similar capability in HL7, but I just realized something about the long forgotten RUID type.  The HL7 II data type is made up of two components, the root (of type UID), and the extension.  The root typically identifies the namespace from which the identifier comes, and the identifier itself is stored in the extension component.  The root attribute is often a UUID or OID, but it can also be of the RUID type.  That's simply a type reserved by HL7 in balloted specifications.

What I'd like to do in HQMF is say that the RUID that is represented by the string "local" represents a namespace defined by the message or document in which the content appears.  Then, rather than having to remember an OID or UUID for each act that is referenced, we could just identify it by saying "local", and giving it some sort of locally unique identifier (such as a name) in the extension portion.  We could further define this namespace as being the same as the namespace used by the <localVariableName> element found in act relationship elements.
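The idea can be modeled in a few lines of Python (a purely illustrative sketch; the name table, criteria contents, and resolver are invented):

```python
# Sketch: resolve an II whose root is the reserved RUID "local" against
# a table of criteria named by this document (contents are invented).
criteria_by_name = {
    "EDEncounter": {"classCode": "ENC", "moodCode": "EVN"},
    "OrderToAdmit": {"classCode": "ACT", "moodCode": "RQO"},
}

def resolve(root, extension):
    """Dereference an act reference expressed as an II (root, extension)."""
    if root == "local":
        # "local" names a namespace defined by the containing document,
        # shared with <localVariableName> elements.
        return criteria_by_name[extension]
    # Anything else would need an OID/UUID registry lookup.
    raise LookupError("no registry for root " + root)

print(resolve("local", "OrderToAdmit"))
```

The point is that readers (and implementations) only need the short local name, not a full OID or UUID, to find the referenced act within the same document.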

In sketch form, the ED Encounter example above would appear as follows:

  <measureObservationDefinition>
    <!-- the derivation expression for the computation -->
    EDEncounter.effectiveTime.high - OrderToAdmit.effectiveTime
    <methodCode code='COUNT'/>
    <methodCode code='SUM'/>
    <!-- preconditions referencing the two acts by their local names -->
    <precondition>
      <id root='local' extension='EDEncounter'/>
    </precondition>
    <precondition>
      <id root='local' extension='OrderToAdmit'/>
    </precondition>
  </measureObservationDefinition>

Arguably, the XML should include a derivation relationship from the measureObservationDefinition to the criteria which define the variables EDEncounter and OrderToAdmit.  However, these acts are already defined and named in the context of the HQMF Document, and I see no need to provide additional XML just for the sake of "completeness".

Translated, it means
  1. For the measure populations defined by the variable names in the derivation expression (EDEncounter and OrderToAdmit),
  2. for each EDEncounter, OrderToAdmit pair 
  3. if an OrderToAdmit is related to EDEncounter by the "component" relationship (which is to say that the order occurred in the encounter)
  4. compute the derivation expression
  5. and report the sum and count over all terms.
This turns into executable code pretty well.  An example translation into SQL appears below (assuming that populations are created as views as in my prototype of 8 months ago)

 SELECT
  SUM(EDEncounter.effectiveTime.high - OrderToAdmit.effectiveTime),
  COUNT(EDEncounter.effectiveTime.high - OrderToAdmit.effectiveTime)
 FROM EDEncounter
 JOIN OrderToAdmit
 ON OrderToAdmit.EncounterID = EDEncounter.ID

It is too late to make this correction for the content that went out to ballot, but this is a ballot comment that I will make.  After all, the purpose of balloting is to detect and correct for stuff like this.

Now that inspiration has been satisfied, I need to get back to what I should have been doing...

Tuesday, July 17, 2012

Rethinking the Care Management profile in IHE

Several years ago, IHE PCC developed the Care Management (pdf) profile.  The idea behind this profile was to enable care managers (people) using a care management system to be able to access information from multiple data sources.  We had hoped that by specifying the data of interest (based on a care guideline) in a computable format, we could automate the generation of interfaces between various Health IT systems to the care management system.   The profile never took off because the standards we were using were neither well understood, nor very mature.  However, what I learned from that profile greatly influenced later HL7 work on HQMF.

I suggested that HQMF needs to have a data criteria section that describes the data of interest.  Over time, we made that section even better to support Query Health, and it will be better still in HQMF Release 2.0, soon to be out for ballot.   Interest in the Care Management profile surfaced today in the IHE Face to Face meetings happening this week.

Given where the standards are now, it seems like it is possible to specify data of interest in an HQMF, without going any further to specify a measure.  From that, we can generate a CCDA, CCD, or QRDA document from an encounter that includes the necessary data of interest, and submit that to an HIE, Care Management system or other Health IT system managing the care for a patient (e.g., in a medical home or an ACO).

This is a paradigm that is already being used in part in some of the Beacon programs here in the US, and is fairly well understood.  I think it's time to shake the dust off the Care Management profile and rewrite it so that people will implement it.

Thursday, July 12, 2012

An XSLT Breadth-First Search for Processing XML Schema (TL;DR)

Last night after spending several hours working with the HL7 HQMF ballot content, I started getting frustrated with the content structure.  The HL7 Publication DTD (yes, I know) is rather roughly documented.  It's based on the W3C Publication DTD, and as in many things in IT, "nobody knows how that works anymore".  Actually, I know just the person who does, but he's retired into his second career, taking pictures of birds.

One of the limitations (and a not unreasonable one at that) is the level of nesting allowed.  They only number sections to the fourth level.  After that, it starts getting funky (and it looks bad too).  So, I want to navigate through the R-MIM Walkthrough, and I just can't organize things the way I want.  One of the problems is that the document has been edited by several people, and the outline wasn't consistent.  Since this section was basically a walk of the DAG (directed acyclic graph) that is the RMIM, I figured I could probably automate generating the Table of Contents.

So I got someone to give me the latest schema generated by the HL7 tools, and started writing a stylesheet to traverse the schema and generate the outline.  Well, that was daring.  After all, here we are in the 11th hour (or perhaps at this point the 35th hour), and I want to completely reorganize a major chunk of the documentation.  Well, needless to say, I didn't get any sleep last night, but that's not because I reorganized things, but rather because of what I discovered when I did finally succeed.  I'll talk about that later.  What I really want to discuss today is this cool stylesheet that I wound up building.

You see, it walks the Schema, starting with the complexType that is used for the root of the HQMF document schema.  It then identifies each Participation, Role, Entity and ActRelationship in the tree, and generates my outline.

Here's the general structure I worked out.  It isn't ideal, but it works and people can navigate it.

1.1 Act
    1.1.1 Act Attributes
        Act.attribute1
        Act.attribute2
    1.1.2 Act Participations
        Act.participation1
            participation1.attribute1
            participation1.attribute2
            participation1.role1
                role1.attribute1
                role1.attribute2
                role1.player1
                    player1.attribute1
                role1.scoper1
                    scoper1.attribute1
    1.1.3 Act Relationships
        Act.relationship1
            relationship1.attribute1
            relationship1.attribute2
            relationship1.act2
1.2 Act2
 ... and so on

The focus in the HL7 RIM is on the acts being documented, and this documentation structure keeps the attention on the acts, and doesn't exceed a nesting depth of 4, so I was pretty happy.

So I began writing the stylesheet.  After about 45 lines, I had everything but the recursion back to Act working.  There were 6 templates in the whole thing.  One called Act, another Participation, another Role, another Entity and a final one called ActRelationship.   I knew I was going to have to break cycles in my traversal, because sections can have sections and so on.  So I wrote a little bit of code to detect the cycles, and the first thing that happened was a stack crash.  After a few moments adding some <xsl:message> elements, I was able to see that my cycle detection only worked for cycles of length two (after all, I knew there weren't any longer cycles in the R-MIM).

As it turns out, I was wrong.  The R-MIM design allows for something called a Choice box in which you can include multiple model elements.  Then you can attach relationships from the choice box back to itself.  This is such a common thing in HL7 V3 that there's even a special shape for it in the tools.  The problem there is that if you have such a choice box that can go back to itself, the cycle length increases.  Say you had two items, A and B, and a link between them through C.  Now instead of having to detect this loop:  ACA, you also have to detect ACBCA and BCACB.  The more items you put in the choice box, the longer the loops your detection has to catch.

OK, so I tried the next best thing which was, as I iterated over things and went down the stack, to pass a list of where I'd been.  Well, the problem with that (besides being clunky), was that it just didn't work.  There were too many pathways through the cycles, and you could get into a real mess.  Now I was really stuck and it was around 10pm.  I recalled a post by Michael Kay (author of the XSLT Programmer's Reference) somewhere about loop detection in XSLT, so I dug out my second edition.  No good.  It's described in the third edition, and he finally got the code right in the fourth edition (neither of which I have or needed until last night).  And he wrote it in XSLT 2.0, which is no good for me, because I'm still working in XSLT 1.0 (I know, I'm a glutton for punishment).

So after an hour of digging around trying to find a DFS or BFS (depth- or breadth-first search) implementation in XSLT, I gave up, made some coffee and took a walk.  Then I went back to my desk, wrote the recursive BFS code down a couple of different ways, and then made it tail recursive.  Tail recursion is a good thing to do when you are writing in XSLT, because you can overcome a lot of limitations that way.  XSLT doesn't let you change the values of variables once you set them, but a tail recursive call lets you change the input parameters, and it can be optimized by a good XSLT processor into a loop.

Here's the basic algorithm:

process(node) { ... whatever you want to do ... }

BFS(list todo, list done) {
    if (!empty(todo)) {
      head = car(todo);
      tail = cdr(todo);
      process(head);
      newNodes = nodesReachableFrom(head);
      needsDoing = (newNodes - todo) - done;
      BFS(tail + needsDoing, done + head);
    }
}

BFS(root, null);

You will note that I never modify a variable after setting it.
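For readers more comfortable outside XSLT, the same tail-recursive BFS can be written in a few lines of Python (the graph here is invented, shaped like a choice box that links back to itself); note that no variable is reassigned after it is bound:

```python
# A small graph with cycles of length > 2, like a choice box linking
# back to itself: cycles ACA, ACBCA, BCACB all exist.
graph = {"A": ["C"], "B": ["C"], "C": ["A", "B"]}

def bfs(todo, done, visit):
    if not todo:
        return done
    head, tail = todo[0], todo[1:]          # car/cdr
    visit(head)
    reachable = graph.get(head, [])
    # Only enqueue nodes not already queued or processed: this single
    # check breaks cycles of any length, not just length two.
    needs_doing = [n for n in reachable if n not in todo and n not in done]
    return bfs(tail + needs_doing, done + [head], visit)

order = []
bfs(["A"], [], order.append)
print(order)   # ['A', 'C', 'B']
```

Because the "visited" information rides along as the `done` parameter of the tail call, nothing is ever mutated, which is exactly the constraint XSLT 1.0 imposes.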

This is how it looks translated into XSLT.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema">

  <xsl:key name="node" use="." match="xs:complexType/@name"/>
  <xsl:key name="neighbors" match="xs:complexType/@name"
    use="//xs:complexType[.//xs:element/@type = current()]/@name"/>

  <xsl:template match="/">
    <xsl:call-template name="BFS">
      <!-- Start from the root complexType of the schema; the
        name 'QualityMeasureDocument' is assumed here -->
      <xsl:with-param name="todo"
        select="key('node', 'QualityMeasureDocument')"/>
    </xsl:call-template>
  </xsl:template>

  <xsl:template name="process">
    <p><xsl:value-of select="."/></p>
  </xsl:template>

  <!-- BFS generates an XML fragment containing the keys
    of elements which need to be processed in the order they
    should be handled based on a breadth-first search of the
    tree represented by the schema -->
  <xsl:template name="BFS">
    <!-- todo is a node-set containing a list of all nodes that have
      yet to be processed -->
    <xsl:param name="todo" select="/.."/>
    <!-- done is a node-set containing a list of all nodes that have
      already been processed -->
    <xsl:param name="done" select="/.."/>

    <!-- If todo is empty, we don't do anything -->
    <xsl:if test='count($todo)!=0'>
      <!-- head is the first node in todo in document order -->
      <xsl:variable name="head" select="$todo[1]"/>
      <!-- tail is the rest of todo in document order -->
      <xsl:variable name="tail" select="$todo[position() != 1]"/>
      <xsl:variable name="reachable" select="key('neighbors',$head)"/>
      <xsl:variable name="needsDoing" 
        select="$reachable[not(. = ($todo|$done))]"/>

      <xsl:for-each select="$head">
        <xsl:call-template name="process"/>
      </xsl:for-each>

      <xsl:call-template name="BFS">
        <xsl:with-param name="todo" select="$tail|$needsDoing"/>
        <xsl:with-param name="done" select="$done|$head"/>
      </xsl:call-template>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>
This is pretty generalizable to any problem where you need to traverse a DAG without crossing links.  A couple of notes are in order:
<xsl:key> and key(N, V) are XSLT elements and functions that are designed to do element lookup by key values.  These are worthy tools to have in your kit if you do a lot of XSLT programming.  Larn em!
I've set up two keys, one called "node" which returns the @name attribute of the xs:complexType element that has the name you specify.  That's a very straightforward use of key in XSLT.  However, the second one called "neighbors" is much trickier.  It uses the key mechanism to return the set of xs:complexType/@name values that are needed by the elements in the named complexType.  For what I'm doing, I'm only interested in elements, so this isn't as hard as it could have been.

The reason that these return the @name attributes instead of the xs:complexType element is that the node-set returned can then be used as an argument to the key() function.  I won't go into all the details.  You can use other problem-specific logic to find the "neighbors" of your node.

The next bit is also tricky.  The todo variable is a node-set that I'm treating as a list.  $todo[1] is the head of the list.  $todo[position() != 1] is everything but the head.  So I have built-in CAR/CDR functions (remember LISP?).

Finally, given that you have two lists of items, how do you select items in the first list that aren't also in the second list?  This is how you do that: $first[not(. = $second)].
Where most people go wrong is in writing: $first[. != $second].  Since . and $second are node-sets, they use node-set comparison logic.  X = Y is true for node-sets X and Y if the string values of any two nodes in X and Y are the same.  X != Y is true if there is a node in X whose string value is not equal to the string value of a node in Y.  If you don't believe me, read the spec.  Anyway, I showed you this little trick a few weeks ago.
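The XPath 1.0 rule is existential, so = and != can both be true for the same pair of node-sets.  A quick Python model of that semantics (lists of strings stand in for node-sets here):

```python
# Existential comparison semantics of XPath 1.0 node-sets:
# X = Y  is true if SOME x in X equals SOME y in Y.
# X != Y is true if SOME x in X differs from SOME y in Y.
def ns_eq(X, Y):
    return any(x == y for x in X for y in Y)

def ns_ne(X, Y):
    return any(x != y for x in X for y in Y)

X, Y = ["a", "b"], ["b", "c"]
print(ns_eq(X, Y), ns_ne(X, Y))   # True True -- both hold at once!

# Hence $first[. != $second] keeps too much; the correct set
# difference is $first[not(. = $second)]:
wrong = [x for x in X if ns_ne([x], Y)]        # ['a', 'b']
right = [x for x in X if not ns_eq([x], Y)]    # ['a']
print(wrong, right)
```

The predicate `. != $second` keeps "b" because "b" differs from "c", even though "b" is also in the second list; negating the equality test is what gives a true set difference.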

This works, but the output wasn't in the order I expected.  My list is really a priority queue.  The nodes in the node-set are processed in document order.  So, when I "remove" the first node from the list, I'm actually getting the first @name attribute that appears in the document.  That is NOT the order I'm adding them in however.  They get added in whatever order the Schema follows.

It's important to me that I process this stuff in BFS order, because it makes it easier to follow the documentation that way.  To fix that, I had to use another XSLT trick, and that was to keep the lists in document fragments, which I then converted to node-sets using the EXSLT node-set() extension.

BTW: It isn't clear to me yet whether BFS or DFS produces a better order for documentation, but either one can be made to work.

The final XSLT for creating my table of contents can be found here.

This took me about four hours to figure out.  It created more work for me because I was able to see what content was missing.  It also vastly improved the HQMF result because the content is generated from the HQMF artifacts, so I cleaned up a lot of naming errors introduced by a new version of the HL7 RIM and Datatypes (we were using a much older RIM and Datatypes R1.1 for HQMF Release 1.0).  And because I was able to fix those errors, I'm a lot happier about the ballot quality (although still not satisfied).  I'll probably feel better after I get some sleep.

It's time

I've been living without my children for the past several days.  They are all off at camp.  My eldest comes back on Saturday.  It seems like a little taste of retirement, or semi-retirement anyway.  My wife and I have enjoyed being able to read without the TV being on, going out to dinner and seeing R-rated movies without the kids, et cetera.  But I'll definitely be glad to get my kids back, and I'm not sure even after this week that I'm going to be ready when my eldest heads off to college.  I expect that when the time comes, I will be ready, just as a friend of mine finally decided he was truly ready to retire a couple of months ago.

It seems like I've known him forever, but it's only been slightly more than a decade.  He was one of my mentors, first in IHE, and then in HL7.  Back when I started, I knew precious little about the sausage making that is standards.  He's taught me quite a bit over the years, and I sure have missed him since his semi-retirement a few years back.  I still get to see him every now and then.

Somebody once told me that if you make it until fifty without growing up, you don't have to.  I think it was him.  I know he succeeded, he's managed to retire and still hasn't grown up.  I hope I'm as successful in my life.  I look forward to spending more time with him in my other emerging passion, as an engaged patient. What I really like is that even though he's retiring, he's planning on taking his rather detailed knowledge about HealthIT and standards into the engaged patient sphere.  And I know he won't take any crap about "you don't understand how IT works", because he did it for 40+ years, all of it in Healthcare. And the world is better for it.  He's still involved enough that I expect we'll still be running into each other at some of the same venues, and heading out to sushi to BS about days past.  And frankly, I hope just to spend more time being friends.

I don't believe in gold watches.  For me, it's gold Harleys.

This certifies that 
Glen Marshall, Unaffiliated

Has hereby been recognized for a lifetime of contributions to Healthcare and HealthIT.

Wednesday, July 11, 2012

Pushing Patients Around? Not!

I'm on NeHC's mailing list, and this is the first I've seen anything about this meeting in DC (and I know I'm not the first person to have this response either). If you happen to be a patient, and for some reason, are going to be in DC next week, here is an opportunity to tell them how to get better engaged with patients.  My first suggestion would be to provide patients with opportunities to provide input without having to hop on a plane with less than a week's notice.

Given some of the feedback I'm getting from e-patients, NeHC seems to be headed down the same path the Partnership for Patients paved a few weeks back. Fortunately, e-patients are not to be pushed around.  I'm sure they'll get a good talking-to, starting here.

Please, don't let "Patient Engagement" become the next "Green Marketing".

Consumer Consortium on eHealth Engagement Summit

The Office of the National Coordinator for Health Information Technology (ONC) is a cooperative agreement partner of National eHealth Collaborative (NeHC).

On Monday, July 16, 2012 from 10am to 4pm EDT, National eHealth Collaborative (NeHC) will be hosting the Consumer Consortium on eHealth Engagement Summit in Washington, DC. The Engagement Summit will bring together stakeholders with a common interest in engaging consumers and patients with health IT. The Summit will provide a valuable forum for networking and sharing, highlighting industry activities that are advancing the consumer engagement movement, and further developing the coordinated consumer outreach strategy of the Consumer Consortium on eHealth.

Attendees will have the opportunity to support the Office of the National Coordinator for Health IT (ONC) by providing feedback on the proposed next phase of content.  The Summit will also feature a panel discussion on best practices for community-level engagement, as well as demonstrations of innovative eHealth tools and apps aimed at engaging patients and consumers.

The meeting will take place at the offices of Venable LLP - 575 7th Street NW at the 8 West Conference Center.

We invite you to RSVP to attend the Consumer Consortium on eHealth Engagement Summit. Space is limited.

Questions? Email

Demo Your Solution at the Engagement Summit!

Do you have an innovative solution for encouraging consumer engagement?  Provide a demonstration of your solution at the Engagement Summit. Please contact Claudia Ellison, Director of Development at NeHC, at for more information.
