
Thursday, July 31, 2014

Building IHE XDW Workflows

I've been working on a few XDW workflows lately, mostly dealing with variations on eReferral and Transfer. In developing these, it became pretty important to understand how the workflow itself works.  We've documented it in general using BPMN.  Ideally, what I would like to do is be able to represent the workflow description entirely in BPMN with a few annotations describing some key pieces of metadata about the workflow.  I haven't quite gotten there yet, but am pretty close.

What I have figured out is that the key to defining the workflow is a single spreadsheet table with the following columns:
  • Trigger Event
  • Responsible Actor
  • Conditions
  • Action
  • Workflow Task
  • Task Status
  • Task Owner
  • Input Documents
  • Output Documents
  • Notify
  • Notes
The trigger event takes the form of "Workflow Task - Task Status", as in Request - COMPLETED, or a narrative description of an event, such as "Referral Response timed out" (e.g., indicating a maximum duration for a response was exceeded).  This tells you what initiates the activity.  The next two columns tell you who and when.  The responsible actor column is the workflow participant that has to do something, and the conditions tell you if there are additional preconditions that must be met.  For example, if the maximum time for a response timed out, AND there are no other acceptable responses, then the responsible actor must do something with another task.

The next columns form a "sub table", listing the things that the responsible actor must do.  It may, for example, transition one task to another state AND create two new tasks.  Each one of these gets a row.  The action is usually create or transition (or create/transition), and the workflow task is the one to be acted upon.  Create is used to create a task that doesn't already exist.  Transition is used to move a task that already exists to a new state.  Create/Transition is used when you don't know what the current status of the task is (it may not have been created yet, but it could already exist).  Task Status indicates the final state of the task that was acted upon.  The task owner column indicates which workflow participant owns the task.  Tasks can be created by the task owner, but may also be created by other workflow participants.  For example, in the workflow I'm looking at, one participant creates a Respond to Request task for each possible edge system that could respond, encouraging them to complete some activity.  During these actions, input or output documents can be added to the workflow.  Usually input documents are added when a task is CREATED or IN_PROGRESS, and output documents are added during IN_PROGRESS, COMPLETED or FAILURE transitions.  It isn't usually the case that an input is added at the end of the task.  In some cases (although I haven't had to do this yet), the only action an actor may have is to add an input or output document to another task, and do nothing else.
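To make the row structure concrete, here is a minimal sketch (in Python, purely illustrative; the class and field names are mine, not from the XDW profile) of how one trigger row and its action sub-rows might be captured:

# Illustrative only: a hypothetical data structure for one row of the
# workflow-definition table described above.  Field names are my own.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Action:
    verb: str                      # "create", "transition", or "create/transition"
    task: str                      # the workflow task acted upon
    status: str                    # final task status, e.g. "COMPLETED"
    owner: str                     # workflow participant that owns the task
    inputs: List[str] = field(default_factory=list)   # input documents added
    outputs: List[str] = field(default_factory=list)  # output documents added
    notify: List[str] = field(default_factory=list)   # participants to notify

@dataclass
class Row:
    trigger: str                   # "Task - STATUS" or a narrative event
    responsible_actor: str         # who must act
    condition: Optional[str]       # additional precondition, if any
    actions: List[Action]          # the "sub table": one entry per action

# Example: the Referral Responder accepts a referral it has been offered.
accept = Row(
    trigger="Referral Response - READY",
    responsible_actor="Referral Responder",
    condition="Referral can be accepted",
    actions=[Action("transition", "Referral Response", "COMPLETED",
                    owner="Referral Responder",
                    inputs=["Request for Referral"],
                    outputs=["Referral Response"],
                    notify=["Workflow Manager"])],
)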

An important step is that a notification should be given to the task owner and to any actor expected to be triggered by the state transition on the task.  Thus, if the Referral Requester workflow participant has a trigger on Response COMPLETED, then when the Referral Responder workflow participant transitions the Response task to COMPLETED, it should notify the Referral Requester.  For this, I'm assuming that the DSUB transaction is used for notifications, and that the workflow participants are duly notified by some other actor (perhaps the Workflow Manager) in these cases.  There's no real subscription to set up; it's just assumed that the notification transaction will occur in the appropriate cases.  When you use a Workflow Manager actor (as I did in my definition), one thing that you could do is make sure that
  1. It is listed as the intended recipient of the workflow document (meaning that it will be managing the workflow),
  2. Some system, such as the registry or repository, interprets the intendedRecipient metadata in the provide and register transaction as a signal that a DSUB style notification should be given to the actor identified (in some way) in the Intended Recipient metadata. 
  3. That workflow manager notifies any task owner of transitions that have occurred on a task that were NOT performed by the task owner itself.  For example, task creations, FAILURE or COMPLETED transitions that may be caused by other external events, et cetera.
  4. That workflow manager also notifies any actor responsible for activity on a trigger event that it can detect, e.g., a transition of a task (a sketch of this notification logic follows the list).
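To make items 3 and 4 concrete, here is a minimal sketch (Python, purely illustrative; the function and field names are mine, not from the XDW or DSUB specifications) of how a Workflow Manager might decide who gets a DSUB-style notification when a task changes state:

# Illustrative sketch of the Workflow Manager's notification rules (items 3 and 4
# above).  Names and structures are hypothetical, not taken from the XDW profile.

def who_to_notify(task, new_status, task_owner, transitioned_by, trigger_table):
    """Return the participants that should get a DSUB-style notification when
    `task` transitions to `new_status`."""
    recipients = set()

    # Rule 3: notify the task owner of any transition it didn't perform itself
    # (task creation, FAILURE or COMPLETED transitions caused by others, etc.).
    if transitioned_by != task_owner:
        recipients.add(task_owner)

    # Rule 4: notify any actor whose trigger event matches this transition.
    trigger = f"{task} - {new_status}"
    for row in trigger_table:
        if row["trigger"] == trigger:
            recipients.add(row["responsible_actor"])

    recipients.discard(transitioned_by)  # the actor that acted already knows
    return recipients

# Example: the Referral Responder completes a Referral Response task; the
# Workflow Manager has a trigger on that transition, so it gets notified.
table = [{"trigger": "Referral Response - COMPLETED",
          "responsible_actor": "Workflow Manager"}]
print(who_to_notify("Referral Response", "COMPLETED",
                    "Referral Responder", "Referral Responder", table))
# -> {'Workflow Manager'}
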
In defining workflow in this manner, the table defines what MUST be done in the XDW workflow document, BUT, it does not prohibit additional transactions.  For example, in the Referral Request task, we only note that when a referral is needed (a narrative trigger event), the Referral Requester must create a Referral Request with final status being COMPLETED, and an input document of a Referral Request Document.  I don't say anything at all about the existence of intermediate states, such as CREATED or IN_PROGRESS.  It is quite possible that these states are recorded and the various transitions shown in the XDW document over time.  However, my definition doesn't depend on their existence, and doesn't prohibit it.  This enables other useful behaviors to be added later.  These states are essentially implicit in the workflow, and simply aren't documented with any requirements.

An important point in building these tables is that you have to consider that the state of the workflow should be transparent to an outside reviewer who does NOT have any understanding of the internal business rules associated with the workflow.  So, while you might assume that since a request has been completed, the Referral Responder actors are expected to be tasked with generating a response, you wouldn't be able to tell that from the workflow document if a task didn't exist for those actors.  Thus, some tasks that don't strictly need to be recorded in the workflow document to manage state among the workflow participants still need to be made explicit, so that others (such as dashboarding applications) can give an outside viewer a good indication of the status of the workflow.

A short example of the table appears below, broken out as one block per trigger event (in my own spreadsheet it is a single, horizontally oriented table).  I'll explain the key points after the table.  This is an example meant to illustrate different features of the table; it's not the actual workflow that I wound up defining.

Trigger Event: Referral needed
  Responsible Actor: Referral Requester
  Conditions: N/A
  Action: create
  Workflow Task: Referral Request
  Task Status: IN_PROGRESS
  Task Owner: Referral Requester
  Input Documents: Request for Referral
  Output Documents: N/A
  Notify: Workflow Manager

Trigger Event: Referral Request IN_PROGRESS
  Responsible Actor: Workflow Manager
  Conditions: N/A
  Action 1: create
    Workflow Task: Dispatch Referral
    Task Status: COMPLETED
    Task Owner: Workflow Manager
    Input Documents: N/A
    Output Documents: N/A
    Notify: Referral Requester
  Action 2: create
    Workflow Task: Referral Response
    Task Status: READY
    Task Owner: Referral Responder
    Input Documents: Request for Referral
    Output Documents: N/A
    Notify: Referral Responder
    Notes: This task is created for each possible referral responder.

Trigger Event: Referral Response READY
  Responsible Actor: Referral Responder
  Condition: Referral can be accepted
    Action: transition
    Workflow Task: Referral Response
    Task Status: COMPLETED
    Task Owner: Referral Responder
    Input Documents: Request for Referral
    Output Documents: Referral Response (Required)
    Notify: Workflow Manager
  Condition: Referral cannot be accepted
    Action: transition
    Workflow Task: Referral Response
    Task Status: FAILURE
    Task Owner: Referral Responder
    Input Documents: Request for Referral
    Output Documents: Referral Response (Optional)
    Notify: Workflow Manager

Trigger Event: Referral Response COMPLETED -or- FAILURE
  Responsible Actor: Workflow Manager
  Condition: Acceptable response available
    Action: transition
    Workflow Task: Referral Request
    Task Status: COMPLETED
    Task Owner: Referral Requester
    Input Documents: Request for Referral
    Output Documents: Referral Response (Required)
    Notify: Referral Requester
    Notes: Normal completion.
  Condition: No acceptable response available and more responses expected
    Action: N/A (no task change)
    Notify: N/A
    Notes: Wait; hopefully someone will respond positively.
  Condition: No acceptable responses available and no more responses expected
    Action: transition
    Workflow Task: Referral Request
    Task Status: FAILURE
    Task Owner: Referral Requester
    Input Documents: Request for Referral
    Output Documents: Referral Responses (Optional)
    Notify: Referral Requester
    Notes: Nobody responded positively; include all negative responses as output to allow manual review/escalation.

The idea of this workflow is to handle a managed referral request process, where there is a single requester, and multiple possible referral responders.  There's a workflow manager participant that handles the business processing of the responses to choose among acceptable responses.  When an acceptable response is received, the referral request task is considered to be completed.  There are a lot of different ways to extend this workflow by adding new columns to it.  For example, you could be nice, and transition each outstanding referral response task to the FAILURE state once a referral response has been accepted, so that the Referral Responders can ignore those tasks if they haven't gotten to them yet.  You could also add a time-out event to allow the Workflow Manager to re-notify Responders of tasks that they haven't paid attention to, or escalate the request, et cetera.

Getting to Other Views

Having developed this table, there are now a couple of different things that you can do with it.  Often you need to explain multiple viewpoints of the workflow definition to show all and sundry stakeholders what is happening.  The table allows me to generate three different views:

BPMN

You can start to diagram the workflow in BPMN.  The general rules I've been following are that each workflow task becomes a task or activity in the BPMN diagram, and that each workflow participant (aka actor or task owner) gets a swim lane or pool.  Conditions are used as conditional entry points into a task.  Input and output documents are often included as components of the messages passed between tasks in different lanes/pools.
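As a rough illustration of those mapping rules, here is a small sketch (Python, purely illustrative; the row structure mirrors the hypothetical one shown earlier) that derives the BPMN skeleton, one lane per participant containing the tasks it owns, from the definition table:

# Illustrative only: group workflow tasks into swim lanes by task owner,
# which is the starting point for laying out the BPMN diagram.
from collections import defaultdict

def bpmn_skeleton(rows):
    """Return a mapping of lane (participant) -> tasks drawn in that lane."""
    lanes = defaultdict(set)
    for row in rows:
        for action in row["actions"]:
            lanes[action["owner"]].add(action["task"])
    return {owner: sorted(tasks) for owner, tasks in lanes.items()}

rows = [
    {"actions": [{"owner": "Referral Requester", "task": "Referral Request"}]},
    {"actions": [{"owner": "Workflow Manager", "task": "Dispatch Referral"},
                 {"owner": "Referral Responder", "task": "Referral Response"}]},
]
print(bpmn_skeleton(rows))
# {'Referral Requester': ['Referral Request'],
#  'Workflow Manager': ['Dispatch Referral'],
#  'Referral Responder': ['Referral Response']}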

In an ideal world, I'd have the time to do some automation with a modeling tool so that I could graphically create the content in BPMN and subsequently generate the table, or import the table from a spreadsheet and dynamically generate the diagram.  For now, I can live with managing the diagram separately from the table and going back and forth between the two.  It's simply a matter of time and priorities.

Actor Requirements

When you sort this table by responsible actor and task sequence (effectively time), you get a description of each responsible actor's view of the tasks to be performed, and what their required behaviors are.

Task oriented View

When you sort it by workflow task and task sequence, you get a view of the steps of the workflow.
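Both views fall out of the same sort-and-group operation.  Here is a minimal sketch (Python, purely illustrative; the row fields are hypothetical and assume each action row carries a sequence number):

# Illustrative sketch of the two sorted views of the definition table.
from itertools import groupby
from operator import itemgetter

rows = [
    {"seq": 1, "actor": "Referral Requester", "task": "Referral Request",  "action": "create"},
    {"seq": 2, "actor": "Workflow Manager",   "task": "Referral Response", "action": "create"},
    {"seq": 3, "actor": "Referral Responder", "task": "Referral Response", "action": "transition"},
    {"seq": 4, "actor": "Workflow Manager",   "task": "Referral Request",  "action": "transition"},
]

def view(rows, group_key):
    """Sort by (group_key, sequence), then group, yielding one list per group."""
    ordered = sorted(rows, key=itemgetter(group_key, "seq"))
    return {key: list(group) for key, group in groupby(ordered, key=itemgetter(group_key))}

actor_view = view(rows, "actor")  # actor requirements: what each participant must do, in order
task_view = view(rows, "task")    # task-oriented view: the life cycle of each workflow task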

Generalizing and Specializing

The beauty of this table format is that it also allowed me to build more complex workflows by specializing from simpler ones, again by adding columns (rows, really, in my spreadsheet).  In my case though, what we wound up doing was defining the most complicated workflow, and then removing columns (generalizing) to get to simpler ones.  It doesn't really matter which direction you do it in; what matters is that you can specialize or generalize these behaviors.

Notification (à la DSUB) is an important component of this workflow description, and the mechanism I've used is pretty straightforward.  I'm not sure whether that's something to submit as a modification to the XDW profile (e.g., a Notification option) or as part of a specific workflow profile, but I think it is probably worthwhile to consider.

Wednesday, July 30, 2014

Combined 2015 CMS QRDA Implementation Guide Now Available


News Updates | July 29, 2014

The Combined 2015 QRDA Implementation Guide for eligible professionals, eligible hospitals, and critical access hospitals (CAHs) to use for reporting electronic clinical quality measures (eCQMs) starting in the 2015 reporting year is now available on the CMS website. The 2015 Combined Implementation Guide provides technical instructions for QRDA Category I & Category III reporting for the following programs: 
  • Hospital Quality Reporting including the EHR Incentive Programs and Inpatient Quality Reporting (IQR)
  • Ambulatory programs including the Physician Quality Reporting System (PQRS), the Comprehensive Primary Care (CPC) Initiative, and Pioneer ACO
CMS accepted public feedback on the draft guide from June 10, 2014 to July 8, 2014, and has made revisions accordingly for inclusion in this release.
The CMS 2015 QRDA Implementation Guide is updated for the 2015 reporting year, and combines business requirements and information from three previously published CMS QRDA guides:
  1. The 2014 CMS QRDA Implementation Guide for Eligible Hospital Clinical Quality Measures  
  2. The 2014 CMS QRDA I Implementation Guides for Eligible Professionals Clinical Quality Measures [zip file]
  3. The 2014 CMS QRDA III Implementation Guides for Eligible Professionals Clinical Quality Measures [zip file]
About the Combined Guide
Combining the above guides into a single document provides a unified resource for implementers, eliminating the need to locate the individual program guides. More importantly, combining guides harmonizes differences among earlier versions of the CMS QRDA guides, especially between the QRDA-I guides for eligible professionals, eligible hospitals, and CAHs.
The CMS 2015 QRDA Implementation Guide incorporates applicable technical corrections made in the new 2014 HL7 errata updates to the HL7 Implementation Guides for QRDA I and III.
The new guide contains two main parts:
  • Part A is the harmonized QRDA-I implementation guide for both eligible professionals and eligible hospitals/CAHs.
  • Part B is the QRDA-III implementation guide for eligible professionals.
It also includes appendices that annotate the changes between the HL7 QRDA-I and QRDA-III standards and the CMS QRDA specific constraints. Changes between the CMS 2014 QRDA guides and the new combined guide are provided as well.

Additional Resources
The 2015 CMS QRDA Implementation Guide is available for download on the eCQM Library Page in the Additional Resources section. To learn more about CQMs, visit the Clinical Quality Measures webpage.  For questions about reporting requirements using the 2015 QRDA Implementation Guide, please refer to the specific program’s help desk or information center.

Friday, July 25, 2014

IHE PCC Strategic Planning

The IHE Patient Care Coordination Technical Committee met this week to review the public comments received on the work it is doing this season.  I'll write up the details of what we did later.  More importantly, the Planning Committee also met to review its strategic road map.

As part of that work, we came up with a revised mission and vision statement for the domain (which appear below) to help us guide the work.

Vision: The vision of Patient Care Coordination is to continually improve patient outcomes through the use of technology connecting across healthcare disciplines and care paths.

Mission: The mission of Patient Care Coordination is to develop and maintain interoperability profiles to support coordination of care for patients where that care crosses providers, patient conditions and health concerns, or time.

We looked at answers to the following three questions to identify key activities for the domain.

 1. What are the leading trends in Health IT and EHR system interoperability that most greatly impact you?

 2. What is the biggest change in thinking that should be applied to the problems of EHR interoperability?

 3. What capability in Interoperability would make you stand up and take notice?

Some of the topics that came up included:

  • Distributed Care Coordination
  • Dynamic Interface Development
  • Updates to QED to support a RESTful transport (using FHIR)
  • Expanding the influence of the Nursing Subcommittee

We are still evaluating the details, and so there is still some chance for you to have input.  Please provide it in comments below.

Tuesday, July 22, 2014

We done DID that

Every year the IHE PCC Committee goes through the process of updating its Strategic Plan.  This process involves brainstorming about the directions we could take with new profiles in the domain.  Yesterday during this process I came up with an extension of the idea originating in the IHE Care Management Profile, and further explored in this post.  The idea is to develop a way to perform Dynamic Interface Development (DID); the general concept is that there is a catalog of data elements from which you could choose information that needs to be transmitted in an interface. The catalog would need to be at the right level of granularity, approximately at the level of an IHE Template, a FHIR Resource, or a V3 RIM Class.  To create the interface, one would create a list of the data elements that are needed (these might be expressed as FHIR Query resources for common cases).  Having made the list, it would be fairly easy to automate creation of an interface to send the collection of data expressed in that set of queries.
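To make the concept a little more concrete, here is a minimal sketch (Python, purely illustrative; the catalog entries and FHIR search strings are mine, and the exact resource names and parameters would depend on the FHIR version in use) of a data-element catalog and an interface definition assembled from it:

# Purely illustrative: a hypothetical data-element catalog and a list of picks
# from it, which together define the content of a generated interface.
CATALOG = {
    "problems":    "Condition?patient={pid}",
    "allergies":   "AllergyIntolerance?patient={pid}",
    "medications": "MedicationStatement?patient={pid}",
    "body-weight": "Observation?patient={pid}&code=http://loinc.org|29463-7",
}

def build_interface(selected, pid):
    """Turn a list of catalog picks into the queries a generated interface would run."""
    return [CATALOG[name].format(pid=pid) for name in selected]

# An eReferral interface might only need problems, allergies, and medications.
print(build_interface(["problems", "allergies", "medications"], pid="example"))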

There's a lot of work needed to get there, and there are a lot of little details between here and there, and some pretty major assumptions to test out over time.

   Keith


Friday, July 18, 2014

IHE Cardiology Technical Framework Supplement Published for Trial Implementation


The IHE Cardiology Technical Committee has published the following supplement to the IHE Cardiology Technical Framework for trial implementation as of July 18, 2014:
  • Registry Content Submission - CathPCI V4.4 (RCS-C)
This profile may be available for testing at subsequent IHE Connectathons. The document is available for download at http://ihe.net/Technical_Frameworks. An accompanying sample CDA document and sample data collection form are also available for download. Comments on all documents are invited at any time and can be submitted at http://ihe.net/Cardiology_Public_Comments/.

But It Changes my Workflow

I hear from a lot of physicians and nurses the complaint that implementing an EHR changes their workflow.  I also hear (from those who've studied up a bit) that EHR X is no good because it is just an electronic version of their existing paper workflow.

The other day I tweeted:
Why would you use an EHR that DIDN'T change your workflow? Improvement = Change

A few folks commented that change is not necessarily improvement, and I certainly agree.  But let's take a step back from that point.  You are going to start using an EHR, either a different one than you are currently using, or a brand new one if you haven't used an EHR in the past.

Why would you do that at all if NOT to make an improvement in your practice?  And if you are making an improvement, that means that workflows are GOING to change.  I cannot think of a single case where a significant improvement (cost, time, efficiency, outcomes or otherwise) didn't result from at least one significant change in workflow.

Can you?

I agree, not all changes are improvements, and not all EHR implementations are done well, BUT, my point stands.  If you are implementing an EHR, you should expect your workflow to change.  And if you are implementing it right, you should expect it to improve.

     Keith


Thursday, July 17, 2014

On Backwards Compatibility

HL7 has a great deal of experience with defining rules for backwards compatibility (which is not to say that those rules always worked).  They also have a whole document devoted to substantive change (for non-members, simply download HL7 Version 3 Normative Edition and read Substantive Change under background documents).  IHE and DICOM have similar rules about substantive change, which is part of what versioning and backwards compatibility are supposed to address.

The problem breaks down into two parts.  We'll start with two systems, N (for New) and O (for Old), which are using some version of some standard.  This is what we need for backwards compatibility that will support "Asynchronous Bilateral Cutover".

New Sender (N) sending to Old Receiver (O)

In this case, System O has certain expectations, and is already in the field.  So whatever had to occur in the message sent to system O should still be possible to send in the new version of the standard (if it isn't possible, then System N and System O don't have a common set of capabilities).  Moreover, if it had to be sent under the old version of the standard, it should also be required in the new version.  If not, then the two systems cannot readily communicate.

Old Sender (O) sending to New Receiver (N)

In this case, System N also has certain expectations, but before the new standard is fully baked, is NOT already in the field.  When receiving information from System O, system N should be able to deal with the fact that it is NOT going to get stuff that is new or changed in the new version of the standard.  Usually System N is a new version of System O, and could work under the "old rules".  So, keep working that way. One of the things that often happens in making changes to the standard is that previously optional stuff now has to be required (constraint tightening).  What System N needs to do in these cases is figure out what a safe and reasonable default behavior would be when it gets a communication from System O where those tightened constraints are not yet used.
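One way to picture that: a minimal sketch (Python, purely illustrative; the field names and default values are hypothetical) of System N filling in a safe default when an old sender omits something the new version requires:

# Illustrative only: a new receiver applying safe defaults for fields that the
# new version of the standard requires but an old sender will not supply.
DEFAULTS = {
    "reconciliation_status": "unknown",  # hypothetical field newly required in version N
}

def normalize(message: dict) -> dict:
    """Fill tightened-constraint fields with safe defaults so old messages still process."""
    out = dict(message)
    for field_name, default in DEFAULTS.items():
        out.setdefault(field_name, default)
    return out

old_message = {"patient_id": "12345", "allergies": ["penicillin"]}
print(normalize(old_message))
# {'patient_id': '12345', 'allergies': ['penicillin'], 'reconciliation_status': 'unknown'}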

Moving On

Tightening of constraints should be OK behavior in moving from version O to version N of a standard.  We know that loosening constraints is often a problem as we move forward.  However, the expectation of the receiver should be that old constraints are still OK for some period of time, in order to allow others to catch up asynchronously.  Determining this set-point is an interesting policy challenge.  At the very least, you should "grandfather" older systems for at least as long as they have to "upgrade" to the new version of the standard, and perhaps a little longer to deal with laggards (as we've seen has been necessary for 5010 and ICD-10 cutovers).

At some point, you have to drop support for the old stuff.  A good time to do that might be when it is time to move to version M (for moving on) of the standard.  What we want to do to support Asynchronous Bilateral Cutover is think about changes so that required behavior is invoked more slowly, and that things that were not previously required start off first as being desired, or preferred behavior (which is required in new systems, and allowed in old systems).

Exceptions

There will always be cases where an exception needs to be made.  That needs to be carefully thought out, and in new versions of standards, exceptions to the backwards compatibility rules should be clearly identified when the standards are balloted.  We shouldn't have to figure out where those are just by inspecting the new rules.  Some features need to be retired.  Some capabilities cannot be changed in a way that won't impact an older system.  The key is to clearly identify those changes so that we can get everyone to agree that they are good (which is not the same as agreeing that they aren't bad).

With the new Templates formalism, we should be able to identify these issues with C-CDA going forward.  I would very much like to see HL7 sponsor a project to ensure that C-CDA is made available in the new format once it finishes balloting.


Wednesday, July 16, 2014

Supercompliance with HL7 CCDA Versions

I've written several times about C-CDA Release 2.0 and issues with template versioning which are pretty close to resolution, and I've also written previously about the 2015 Certification criteria and backwards compatibility with 2014 criteria.

What I'd like to expand upon now is an issue that could occur if C-CDA Release 2.0 was adopted as optional criteria for 2015 and how we could arrange things so that a 2014 system could receive it.  It draws in part on what I described previously as a Dual Compatible CCD, but is slightly different because CCDA 2.0 is largely backwards compatible with CCDA 1.1.

The issue is that when System N (for new) sends a CCDA 2.0 document to System O (for old) that understands CCDA 1.1 there are three things that could happen:
  1. System O would correctly identify the fact that the CCDA 2.0 document did NOT comply with CCDA 1.1 and would reject it.
  2. System O would NOT correctly identify this fact because the template identifiers are similar enough, and some systems don't understand the details of how to match identifier values in HL7 Version 3 and CDA, and so it would incorrectly try to process it.  System O would handle these as follows:
    1. Where the templates are backwards compatible with 1.1, System O would handle the 2.0 content correctly, BUT,
    2. Where they are not, System O would have issues with interpretation of the content.
There are FEW cases where 2.b. would occur based on a preliminary analysis of CCDA 2.0, BUT, we need to spend more time looking into those details.  One case that has been identified is the templates for Smoking Status, where there are differences in interpretation of the effectiveTime element on the observation.
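To illustrate the identifier-matching issue, here is a small sketch (Python, purely illustrative; it assumes the versioned templateId approach where the root OID stays the same and a new extension attribute carries the version, and the identifier values shown are just examples):

# Illustrative sketch of why a 1.1-era system might accept a C-CDA 2.0 template:
# if it matches templateIds on root alone, it never notices the version-bearing
# extension.  The identifier values below are examples only.
def matches_naive(template_id, wanted_root):
    # Many systems compare only the root OID.
    return template_id["root"] == wanted_root

def matches_strict(template_id, wanted_root, wanted_extension=None):
    # Correct II matching compares root AND extension; an absent extension does
    # not match a present one.
    return (template_id["root"] == wanted_root
            and template_id.get("extension") == wanted_extension)

ccda20 = {"root": "2.16.840.1.113883.10.20.22.2.6.1",  # example section templateId root
          "extension": "2014-06-09"}                    # example version marker in 2.0

print(matches_naive(ccda20, "2.16.840.1.113883.10.20.22.2.6.1"))   # True: treated as a 1.1 template
print(matches_strict(ccda20, "2.16.840.1.113883.10.20.22.2.6.1"))  # False: recognized as something new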

What I would propose that systems using CCDA 2.0 do is be "Super Compliant" (this is really independent of what my thoughts are on the 2015 certification rule).  Super Compliant means that they send documents which conform to both CCDA 2.0 rules, and CCDA 1.1 rules, and only send the same information twice when ABSOLUTELY NECESSARY (as in the case for Smoking Status), and only in very controlled ways.

It would mean that a section or entry could declare conformance to both CCDA 1.1 and 2.0 except where that isn't feasible (e.g., Smoking Status).  That last little note also means that we need to examine more carefully where backwards compatibility doesn't hold.

The benefit of this is that systems which understand CCDA 1.1 will, if properly interpreting the guide and the underlying CDA Release 2.0 standard, work correctly.  Even systems that don't fully interpret the CDA Release 2.0 standard properly (e.g., get confused about the Instance Identifier data type and similar but not identical identifiers) could, and likely would, still get the correct result.

This allows systems supporting CCDA 2.0 to work with systems only understanding 1.1.  It does mean that newer systems supporting CCDA 2.0 would still have to support CCDA 1.1 as input, because not every system available would be able to send CCDA 2.0, but that is already pretty likely to be the case for MOST Certified EHR products in use today.

I'll talk a little bit more tomorrow about flavors of compatibility and breaking changes (also called substantive changes), and how we could develop some policies for advancing standards that allow new features to be created and problems to be fixed, but still ensure that systems can continue to interoperate.

Tuesday, July 15, 2014

Where there is smoke ...


... There are Chocolate-Covered Bacon-Wrapped Smoked Twinkies

A few weeks ago I got a vertical smoker for Father's Day.  Over (smoked) dinner later that week we somehow got on the topic of disgusting fair food, things like chocolate-covered Corn Dogs and fried Twinkies.  And then the topic of Smoked Twinkies came up.  Of course everything is better wrapped in Bacon, so it became Bacon-wrapped smoked Twinkies.  And then chocolate-coated, bacon-wrapped, smoked Twinkies.

As my daughter will tell you, perhaps the scariest thing about these was that they actually didn't taste that bad.  And from this experience (never to be repeated), we also learned that smoke flavor imparted into cake can be good (vanilla ice cream on smoked pound cake turned out pretty well as a follow-up).

As an experiment, it was completely ridiculous, very easy to perform, and produced some interesting results that had been completely unexpected.  The cost was virtually nil, and I learned a great deal about my smoker just doing it.  This is much better than spoiling a $60 piece of brisket on a first attempt (fortunately, that didn't happen either; it was wonderful).

The point of this post is that sometimes you have to be willing to try an experiment on something completely stupid.  There's value in that because there is no danger that anyone will ever tell you to ship it, and you can still learn a lot just by doing.  And you can throw a bunch of things at it just to see how they would work together.  And if it fails, you never expected it to work anyway, so there's no real disappointment.

[Aside: How many developers have had the scary experience of showing a weekend prototype to a product manager only to be asked how soon you could ship it?].

The trick is to do the best job you can anyway, just to see whether there is something in this completely idiotic idea.  When I smoked the Twinkies, I left a couple of them bacon-unwrapped, just so I could see the difference the bacon made.  When I chocolate covered them, I also covered only the bottom half, again to compare with Bacon alone vs. Bacon and Chocolate.

When I design a form (e.g., for an EHR), I often throw in some features to see how they work together with the form.  Chocolate coating, as it were.  But I also leave myself an easy way to see how the form works without them.  Profiles and standards often have some optional bits too, because we aren't sure they'll be needed all the time.  When I implement a standard, implementation guide, or IHE profile, I try to include the optional bits if I can.  It's only when it becomes challenging (doesn't taste good) that I ignore the option.  As it turns out, all variations of the smoked Twinkie were edible, and some (like me) even liked them, although others (like my daughter) would never admit that.

And if, as in my Twinkie experiment, you actually learn something other than how to work with the tools, it was worth more than you expected.  If you don't, it was at least something you could chalk up to experience and pass the story along to your colleagues amid gales of laughter.

     Keith

P.S.  How does FHIR fit into this?  Attending my first FHIR Connectathon was just one of those throw-away experiments.  Now I'm a believer.  You should try it at least once.

Monday, July 14, 2014

This is not my Job

So, this morning I got a query from a colleague about a specification that is being developed for one organization that I volunteer with, regarding information that I have expertise about based on another organization I volunteer with.  We have a pretty good working relationship and regularly ask each other quick questions, and sometimes not so quick questions based on each other's expertise.  We are also working collaboratively together on yet another project.

It isn't my "job" to answer those sorts of questions.  But it sure makes my life easier when my questions are answered.  And there's no benefit to me from holding back on this information, because it is readily obtainable from other sources.  Frankly, even the longer questions aren't anything anyone could reasonably bill a client for.

This quid pro quo exchange goes on all the time in standards development.  It is what makes my job (and the job of anyone like me) possible.  After all, there is simply no way I could be expert in so many places.  And so, I'm thankful for it, and hopefully gracious in answering those quick (and even not-so-quick) questions.  After all, there is no benefit to developing standards if they are only used by a single organization.

   Keith

P.S.  Actually, my self-professed job description includes educating industry about Health IT standards, so really it IS part of my job, just not an obvious one.

Thursday, July 10, 2014

HL7's First Payer Summit

This just crossed my desk. Since I'll be at the event, I thought it was worth posting here ;-) Keith

Don’t Miss the HL7 Payer Summit
Jumpstart your interoperability initiatives with this value-packed event

Sept 18-19, 2014 • Hilton Chicago, Chicago, IL

 

More than just a set of standards for data messaging, HL7 is a family of technologies that provides a universal common framework for interoperability of healthcare data. Using HL7 technologies speeds time to launch and lowers the cost of development for interoperability initiatives aimed at improving outcomes, lowering operating costs, and achieving other critical strategic goals for your organization.

This program was created specifically with payers in mind. The two-day summit features influential industry speakers offering strategic direction and practical information on interoperability for healthcare payers, including hot topics such as ADT, mobile health, the regulatory environment and the HL7 FHIR® standard.


Schedule at a Glance

 

Thursday, September 18, 2014

9:00 – 9:45 am
What is HL7 and Why Payers Should Care

9:45 – 10:30 am
Industry Adoption and Use of HL7 Standards and Transactions

10:30 – 11:00 am
Break

11:00 – 11:45 am
ADT Case Study

11:45 – 12:30 pm
HL7 Clinical Exchange, Quality and Population Health Challenges

12:30 – 1:45 pm
Focus Group Lunch (Sponsorship Opportunity)

1:45 – 2:30 pm
Panel Session on Care Coordination, Patient Centered Medical Homes, and ACOs

2:30 – 3:00 pm
Payer Challenges in the New Patient Centered World of Health Information Exchange

3:00 – 3:30 pm
Break

3:30 – 5:00 pm
One Tree, Many Branches: Making Sense of Today’s Growing Regulatory Environment

Friday, September 19, 2014

9:00 – 10:30 am
The Mobile Health Revolution

10:30 – 11:00 am
Break

11:00 – 12:1 pm
Why All the Buzz About FHIR®

Event Pricing

Members of WEDI and AHIP receive the member rate.

Member: $400
Non-Member: $600
Save the date for this new program created just for payers! Registration opens next week. Watch the HL7 website and your inbox for more details soon.
Monday, July 7, 2014

That XML Schema is NOT the HL7 Standard you are reading

With very few exceptions*, HL7 Version 3 standards are models of interactions and information exchanges using messages between various applications acting in different roles in the system.  The HL7 Development Framework then automatically applies rules about how the interactions identified by the model are expressed in XML Schema using the HL7 Version 3 Implementation Technology Specification. That specification relies a bit further on the HL7 Version 3 Data Types (pick any of three releases) for expression of the basic data types in the message.

So when a committee like Patient Care or Orders and Observations creates an HL7 Version 3 standard, they are defining the information content of the exchange, NOT THE XML SCHEMA.  In fact, the XML is NOT the normative definition of the standard that they are producing.

As the XML ITS (Release 1) says of itself:
This document describes how HL7 V3 compliant messages can be expressed using XML. It describes how the definition of the set of valid XML instance documents is derived from a specific HL7 Message Type. It covers ISO levels 5 and 6. Those familiar with V2 might call these the "XML encoding rules" for HL7 Version 3 messages.

So, if need be, someone could create a JSON ITS, or a UML ITS (one used to exist), or a Python ITS.  If you don't like the XML, the opportunity exists to improve it, but the XML itself isn't the standard.  The artifacts you will find in a V3 standard include:
  • Story Boards
  • Application Roles (which are Informative rather than Normative at this time [still])
  • Trigger Events
  • Interactions
  • Domain Message Information Model (D-MIM)
  • Restricted Message Information Models (R-MIMs)
  • Hierarchical Message Descriptors (HMD)
  • XML Schema (which are not Normative!)
The XML schema is derived from the HMD based on rules defined in the XML ITS.  The HMD is derived from the R-MIM, which is the model used to implement an Interaction; the Interaction is defined by a set of application roles interacting according to a storyboard, on the occurrence of a trigger event.

Thus, the HMD is the "normative" description of the messages being defined by the standard, and it is another HL7 standard (the XML ITS) which turns the HMD into something that can be used in an implementation.  If you don't like the HL7 XML, you could develop another ITS.  Some work groups are trying to do so, but haven't yet figured out how to make their ad-hoc XML content actually be algorithmically derived from a RIM-based model.  I'm not too worried about these efforts, or efforts to create a JSON based ITS or any other ITS for that matter.  From my perspective, that's the old way of doing things, and I want to see how we could do it on FHIR.
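To make the point concrete, here is a minimal sketch (Python, purely illustrative; the element name, OID, and JSON shape are made up for the example) of the same abstract instance-identifier content rendered under the XML ITS convention and under a hypothetical JSON ITS:

# Illustrative only: the same abstract II (instance identifier) content rendered two
# ways.  The encoding comes from ITS rules, not from the committee's domain standard.
import json
from xml.sax.saxutils import quoteattr

patient_id = {"root": "2.16.840.1.113883.19.5", "extension": "12345"}  # example values

def xml_its(name, ii):
    # The XML ITS renders an II as an element with root/extension attributes.
    return f"<{name} root={quoteattr(ii['root'])} extension={quoteattr(ii['extension'])}/>"

def hypothetical_json_its(name, ii):
    # A made-up JSON rendering of the same content.
    return json.dumps({name: ii})

print(xml_its("id", patient_id))
# <id root="2.16.840.1.113883.19.5" extension="12345"/>
print(hypothetical_json_its("id", patient_id))
# {"id": {"root": "2.16.840.1.113883.19.5", "extension": "12345"}}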

     Keith

* One HL7 Standard is a big exception to this rule, and that is CDA Release 2.0, which in its conformance section says: A conformant CDA document is one that at a minimum validates against the CDA Schema, and that restricts its use of coded vocabulary to values allowable within the specified vocabulary domains.

Thursday, July 3, 2014

Where will you be in five years?

This is a question that I ask myself from time to time, and I imagine myself, five years hence, and make up stories about where I am and what I'm doing, and how I got there.

This question is especially relevant to Health IT folk at this time, as we are presently in an era of unprecedented EHR and Health IT adoption.  This isn't just a US perspective, but also an international one.  HL7 is also at a crossroads, as I've mentioned in my previous post (see The Future of HL7).  Just for fun, here is an imaginary day in the life of "Motorcycle Guy", on the eve of July 4th, 2019.

I've just finished making travel arrangements to the Second Quinquennial Health IT Conference to be held in (semi-exotic location).  It's a week long conference in which members of CDISC, DICOM, HL7, IEEE, IHE, ISO, OpenEHR, WHO, X12 and others, along with national standards organizations and professional societies from all over the world join together to develop an international Healthcare standards strategic plan.

I'm part of the US delegation as a member of the recently formed US Standards Collaborative.  It's a public/private partnership thingy with ONC, CMS, et al. providing a chunk of the funding (and thus getting to drive some of the agenda), but it also has significant vendor and provider engagement.  It sort of evolved, almost by accident, out of the Direct Project, the subsequent ONC-driven S&I Framework initiative, and the later formation of HL7 USA.  You see, after HL7 USA was created, when ____ suggested that it and ___ get together and do something, and those two organizations decided that it was a good idea to get together, a few of the other US based SDOs got a little concerned and decided that they wanted in (rather than trying to break it up).  It got a little bit messy for a while, but eventually everyone agreed that it would be better to work together, and so now we have our own collaborative.

There was enough momentum in that activity to get ___ and ___ to reach out to ___ and ____, and thus we had what's now known as the First Quinquennial Health IT meeting in Geneva (it seemed like a safe place to have it at the time).  We actually had a pretty good turnout, something like 500 of the top Health IT, standards, and informatics people from around the world.  All we did was talk to each other, ... oh, and we agreed on one thing, to do this again in five years.  Since then, there have been a large number of collaborations that probably wouldn't have happened if we hadn't had that first meeting.  There's not really anything formal about the way that works; the idea is that we simply get together and bounce around some ideas about what we're doing.

Anyway, it should be fun, and it will definitely take my mind off the question I've been asking myself lately: "What do you want to do when you grow up?"

So, it's an interesting little exercise, and I suspect some of the ideas that I get when I do it are just a bit far afield.  But at least it makes for some interesting daydreams.

Wednesday, July 2, 2014

What Happens when the Funding Goes Away?

There are three ways that standards really happen.  One is when a bunch of organizations get together to solve a mutually challenging problem and develop something, like CDA or HL7 Version 2.  Another is when a bunch of organizations discover that a particular piece of technology works to solve a problem and decide to make it a standard (e.g., Schematron or PDF).  The third way is when a single organization decides that a standard should exist and funds the development of it.  Large (usually governmental, but not always) organizations do this sometimes.

The question that concerns me about the latter is what happens when the funding goes away.  I've seen this occur already in some funded "meaningful use hopeful" standards projects (projects hoping to get a line in the reg at some point).  The interesting thing is what happens when the funding disappears.  Momentum gets lost, sometimes enough that the standard itself might never really get finished, or implemented, or used.

Look at Direct, for example.  It is certainly suffering from the lost momentum problem.  Was it worth it?  Given that Direct got the MU mention it was after, it will probably succeed.  But is the artificial stimulus the best approach?  Maybe, but maybe not.  I don't know what other levers to pull, but I'm surely looking for them.


Tuesday, July 1, 2014

The Future of HL7

... is looking pretty rosy right now, at least from a leadership perspective.  There are three excellent (in my opinion) candidates from which to choose the next HL7 Chair:
  • Calvin Beebe of Mayo, long time co-chair of Structured Documents, past board member and current treasurer.
  • Doug Fridsma, Chief Scientist at ONC and former Director of the Office of Standards and Interoperability.
  • Me
I'd be hard pressed to choose between Doug and Calvin if I weren't running myself, but I am, so my decision is easy.  Yours is perhaps a bit more challenging.

Elsewhere you can read my profile, and see my work experience.  Here, I'd like to talk a little bit more about my vision for HL7.

HL7 as an organization needs to change.  It needs to become more Internationally focused so as to streamline efforts globally, and at the same time, it needs to find a way to become responsive to national initiatives which need HL7 standards.  It's been more than a decade since I joined and first heard the phrase "one member one vote", and while we have made progress on that front, we still haven't achieved that goal.

At the same time, we've made a tremendous shift towards implementers of our standards, with initiatives such as freeing HL7 IP (which I've been a part of), and the development of implementer specifications like FHIR (which I've also been a strong supporter of).

So, why would you vote for me?

HL7 will be a different organization in four years.  That isn't a promise, nor a prediction, it's a simple fact.  I think the organization needs leadership that recognizes the need for change, and that is willing to act in bringing about that change.  We've made some pretty good changes in the past couple of years, but the momentum needs to increase.  We haven't stopped changing, and to grow, we need even more.  We need to complete the journey of one-member one-vote that we started a decade ago, and we need to complete the major transformation of the business model of this organization that started with our free IP initiative.  In some ways, we also need to return to our roots of developing standards for those who implement them, and focus on our customers.  You should vote for me if you think I'm the right person to lead those changes.

   Keith

P.S.  And if you feel like Doug or Calvin is the right person, you should vote for them.  Like I said at the beginning of the post, there are a lot of great choices available.