
Thursday, March 31, 2011

IHE Releases Anatomic Pathology and Eye Care Supplements for Trial Implementation

IHE Community,

Anatomic Pathology supplement (and supporting appendix) published for Trial Implementation

The IHE Anatomic Pathology Technical Committee has published the following supplement (and supporting appendix) to the IHE Anatomic Pathology Technical Framework for Trial Implementation:

This profile will be available for testing at subsequent IHE Connectathons.  The documents are available for download at  Comments should be submitted to the online forums at

Eye Care supplement published for Trial Implementation:

The IHE Eye Care Technical Committee has published the following supplement to the IHE Eye Care Technical Framework for Trial Implementation:

This profile will be available for testing at subsequent IHE Connectathons.  The document is available for download at  Comments should be submitted to the online forums at

Thoughts on Goal II of the ONC HealthIT Strategic Plan

Goal II: Improve Care, Improve Population Health, and Reduce Health Care Costs through the Use of Health IT
Objective A: Support more sophisticated uses of EHRs and other health IT to improve health system performance
OK, so here is where clinical decision support starts to play a role.  Stage 1 of Meaningful Use required use of a clinical decision support rule, and most believe that CDS (which I've written just a few posts about) will play a much bigger role in Stage 2.  I'm engaged in two different CDS projects related to this objective, and I'm feeling pretty good about where both are heading.
Since I'm not a provider, I'm not seeing outcomes from the HITRC program, but if you are going to be developing best practices, you need to start communicating them, and that's never been a government strong point, as I've mentioned previously.
It's painful that they put the 5010 rule and the switch to ICD-10-CM in as part of developing administrative efficiencies.  Yes, they will make for eventual efficiencies, but these are long-term, not short-term investments, and in the short term they compete directly with Goal I objectives and stratagems.  Eligibility and enrollment initiatives do seem to have some momentum, and they might be headed in the right direction, but payer-side work is not my forte.  I'm not hearing huge complaints, and that IS a good sign.
Objective B: Better manage care, efficiency, and population health through EHR-generated reporting measures
Quality measures need to be baked into, not bolted on to care afterwards.  Our current measures are still after the fact.  e-Measurement is one solution, but I much prefer e-Guidelines that have measurement baked in.
Objective C: Demonstrate health IT-enabled reform of payment structures, clinical practices, and population health management
The ACO rule is coming soon to a Federal Register near you (like now).  In the meantime, Beacon Communities and other pilot projects are definitely paving this road and moving ahead rapidly, if my own experience is any guide.  Unfortunately, there won't be much time to see what the Beacons manage before we have rules of the road being developed.  Cart before the horse?  Strategically, it's too soon to tell.  Legislatively, I think deadlines for regulation were already set.  Payment reform issues are beyond my usual realm of discussion here.
Objective D: Support new approaches to the use of health IT in research, public and population health, and national health security
OK, so the first thing they talk about in stratagem 1 is stratagem 2.  The key challenge I heard from State public health officials last year was how grant funding from CDC simply encourages silo mentalities.  Grants for immunization infrastructure cannot be used for cancer registry work, et cetera.  So even though both would benefit from common infrastructure for, let's say, a master patient index, the funds cannot be shared.  Let's rethink that.  Then on point two, we recapitulate infrastructure investment.  I want to make a point about infrastructure.  If the road can only be used by green buses, it isn't infrastructure, unless those green buses are carrying a major portion of the traffic, and they aren't.  I think we need to start looking at how common infrastructure can be used across the various silos of public health.  That would be a truly radical new approach.
Finally, we get into the aggregation and reuse of data for research (Stratagem II.D.3).  A standards colleague recently stated, some goals are aspirational, and others are operational.  Well, when I read this particular section, I definitely aspirated.  My deep sigh was that the strategy in this section seems to be that motherhood is great and apple pie tastes excellent.  I would agree with both of those statements, but they don't do a single thing to answer the question of how to make it possible.  Are we spending more on programs like eMERGE, caBIG or CTSA?  That would be doing something.  Recognizing that we need to do something and that it is a good idea is NOT really doing anything.  So, what are we doing?

.. I'll take up this thread again after a brief word on proposed ACO regulations  ...

The way an EHR should work

I had a visit to the Dr.'s office yesterday for my shoulder which has been bothering me.  It got to the point the day before where I couldn't type for a bit, so I called and got an appointment.   It's been about a month since I pulled something shoveling snow (we are expecting another two inches this weekend ... great), and the pain had been tapering off but I was miserable yesterday.

My Doc has been using an EHR system created by the company I work for.  When they first rolled the system out, he spent a good bit of time behind the computer, but now he's really gotten into the swing of things.  He spent most of his time with me, referring once or twice to his computer screen.  Apparently I have a pinched nerve, so he prescribed Prednisone and 8 weeks of PT.  He confirmed with me which pharmacy I use, and in less than 10 seconds, the prescription was done.

He rechecked my blood pressure, which the nurse had already taken twice; he noted those readings during one of his brief glances at the screen.  It was very high all three times.  It's been slowly creeping up, and we are going to check it again in a week when I'm not in excruciating pain (from his lips to God's ears, as my aunt would say).

I also got a referral to a specialist that I'm overdue to see.  He was able to find the guy I saw 5 years ago in just a few seconds.  Both the PT referral and the specialist referral he was able to print off in seconds.  He's been giving me paper printouts routinely from each visit for years.

In parting, I was able to ask him a few questions about Meaningful Use.  He a) knew what I was talking about, and b) knew where his practice stood.  They are not there yet, but he knows that they will be shortly.  They are getting ready to install a patient portal (I look forward to being able to schedule visits online), and they will soon have the ability to give me my data electronically.  He was quite excited about that and other capabilities they will soon have.

His office is, as you can guess, already well on top of things.  Meaningful Use didn't change where they were headed, but it will bring it about sooner.  His competency with the technology is as it should be: it serves him, not the other way around.  The other provider in that same office is one of the technology champions for the group, so it shouldn't be surprising that he gets it.

Now, the visit downstairs to what I hoped would be my PT was a completely different story.  That had nothing to do with EHR and plenty more to do with the current payment system.  She doesn't take my insurance, and so I'll have to find someone else close by.

Wednesday, March 30, 2011

Comments on Goal I of the ONC HealthIT Strategic Plan

On Monday, I looked at the differences between the 2008 ONC Strategic plan and the 2011 Strategic plan.  Today I'm looking at the first goal and providing a little bit more coherent response now that I'm no longer fuzzy.

Goal I: Achieve Adoption and Information Exchange through Meaningful Use of Health IT
Objective A: Accelerate Adoption of Electronic Health Records
This objective focuses on financial incentives, workforce education, implementation assistance and communication with providers. The main thing missing here is a truly focused marketing effort to get providers to adopt EHR technology. I don't know how many times I've been in a room in the last year filled with doctors, to find that many still haven't heard of Meaningful Use. In addition to marketing to providers, we also need to encourage patient engagement with their providers on this topic. Patients  have a role in encouraging their doctors to use EHR technology, and to share health information electronically.  Something like this commercial could work to create greater demand.
Objective B: Facilitate information exchange to support meaningful use of electronic health records
This part of the plan is weak.  It talks about fostering business models for exchange, but in the same section talks about The Direct Project, which is a technology platform, not a business model.  It also talks about the S and I Framework initiatives in the context of advancing data interoperability, but ignores much of what has already been done around HIEs.  It spends several paragraphs talking about the S and I Framework, which is addressing content to be exchanged rather than mechanisms supporting exchange, and barely touches on the half-billion-dollar State HIE program.  More needs to be done to facilitate and encourage exchange using existing, readily available technologies, rather than reinvention.
Objective C: Support health IT adoption and information exchange for public health and populations with unique needs
This one is an interesting combination that I would have separated into two pieces: public health and addressing population disparities.  CDC's efforts in public health are many, but I'm not sure that their collaboration with State public health agencies is as effective as it could be.  I'm not really familiar enough with programs around health disparities to comment so I won't.  On supporting long term/post-acute care, I can see where the Transitions of Care project will help, and I'm encouraged that SAMHSA is considering incentives for use of EHR in the behavioral health community.

HIMSS to provide free 5010 Education Week April 4-8, 2011. Register today!

Since I wrote about ICD-10 yesterday, it seems appropriate today to post this announcement about 5010 that hit my inbox.

Get Ready for Version 5010
with a full week of free webinars! April 4-8, 2011
The health care industry has less than a year left to prepare for the Version 5010 transaction set change on January 1, 2012.  Have you started to prepare for the transition? Are you getting ready to test with your trading partners?
To assist with compliance, HIMSS is supporting an education effort, Get Ready for 5010, that will sponsor a second set of free webinars April 4-8, 2011. These webinars will focus on testing for 5010. All providers should be planning to test soon if they expect to meet the end of year compliance deadline.
The webinars will feature speakers from the Centers for Medicare & Medicaid Services (CMS), provider and payer organizations, and will offer practical information and early lessons learned on:
  • Testing for Large and Small Practices and Facilities
  • How to Test with Medicare Fee-for-Service
  • Testing with Commercial Payers and Clearinghouses
The Get Ready 5010 initiative is supported by a broad group of health care industry stakeholders representing providers, payers, government, and vendors who are coordinating their efforts to support a smooth and timely transition to Version 5010.
Whether you are well along with your Version 5010 project or just starting, you will find value in one or more of these free webinars. Get all the details and register today.
Join us!

Tuesday, March 29, 2011

HealthIT and the switch to ICD-10

Several recent posts (this one among them) have talked about how the switch to ICD-10-CM and ICD-10-PCS will affect medical coding.  Chief among the concerns is that coding productivity will go down because the new US ICD-10 coding schemes simply have more codes.  This was even a brief topic of discussion on the #HITsm chat last night.

If coding continues as it has, mostly as a human-based operation, this will likely be the case.  However, I see this as an opportunity for healthcare IT vendors.  I spent several years working in Natural Language Processing, and I know that automated coding systems can approach, and potentially exceed, human skill levels using NLP techniques.  This would especially be the case if machine learning and feedback techniques are applied.

The challenge is that the NLP system has to be designed in rather special ways.  It will need to be able to provide its reasons for coding one way or another -- something that humans can do naturally, but which software does not do well.  The importance of this stems from the need to deal with audits on coded information.  I think the natural first step is to look at providing "coding assistance" to professional coders, rather than trying to replace them, as this takes advantage of existing processes.
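By way of illustration, here is a toy sketch of what "coding assistance" with auditable reasons might look like.  The code-to-phrase map below is entirely hypothetical, and a real system would use a full NLP pipeline and the complete ICD-10-CM code set rather than keyword matching; the point is only that every suggestion carries the evidence behind it.

```python
# Toy "coding assistance" sketch: suggest candidate codes AND report
# the text evidence behind each suggestion, so a human coder (or an
# auditor) can see why the code was proposed.
import re

# Hypothetical fragment of a code-to-trigger-phrase map; not real
# ICD-10-CM tooling, just enough to show the idea.
CODE_TRIGGERS = {
    "M54.2": ["cervicalgia", "neck pain"],
    "M75.1": ["rotator cuff", "shoulder impingement"],
    "J45.9": ["asthma"],
}

def suggest_codes(note: str):
    """Return (code, evidence) pairs so each suggestion can be audited."""
    note_lower = note.lower()
    suggestions = []
    for code, phrases in CODE_TRIGGERS.items():
        for phrase in phrases:
            match = re.search(re.escape(phrase), note_lower)
            if match:
                # Keep surrounding context as the "reason" for the code.
                start, end = max(0, match.start() - 20), match.end() + 20
                suggestions.append((code, note[start:end].strip()))
                break
    return suggestions

if __name__ == "__main__":
    note = ("Patient reports neck pain radiating to the left shoulder; "
            "rotator cuff strain suspected.")
    for code, evidence in suggest_codes(note):
        print(code, "<-", evidence)
```

A feedback loop would then let coders accept or reject each suggestion, with the corrections feeding a learning component.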

In addition to throughput, the benefit of automated coding could be higher accuracy and greater consistency.  It looks like an interesting opportunity in the health IT space.

Monday, March 28, 2011

What's New in the ONC Strategic Plan for HealthIT

I'm suffering from a Spring cold, so it's difficult to focus today.  I started the day off trying to read through the 80-page Health IT Five-Year Strategic Plan from ONC.  The plan was last updated in 2008 by the ONC in the Bush White House.  If you are looking for a summary of the new plan, check out John Halamka's post.

Whenever a plan changes, the first thing I want to know is what's different, and the second is why.  So I put together this little table which shows the what.  I'll save the why for when I'm a little more coherent.  The first column for 2008 shows the eight objectives from the 2008 plan.  The second column shows the matching goals from the 2011 plan.

2008, Objective 1.1: Facilitate electronic exchange, access, and use of electronic health information, while protecting the privacy and security of patients’ health information
2011, Goal I: “Achieve Adoption and Information Exchange through Meaningful Use of Health IT” discusses the centerpiece of the government’s health IT strategy over the next five years. “Meaningful use” is aimed at widespread adoption and information exchange in its first two stages, and will then build to improved health outcomes in the third stage. Activities are focused on two areas: defining the meaningful use of certified EHR technology, and encouraging the achievement of meaningful use through the Medicare and Medicaid EHR Incentive Programs and through grant programs, including information exchange with public health departments and laboratories.
2008, Objective 1.2: Enable the movement of electronic health information to support patients’ health and care needs
2008, Objective 1.3: Promote nationwide deployment of EHRs and PHRs and other consumer health IT tools
2008, Objective 1.4: Establish mechanisms for multi-stakeholder priority-setting and decision-making
2008, Objective 2.1: Advance privacy and security policies, principles, procedures, and protections for information access in population health
2011, Goal III: “Inspire Confidence and Trust in Health IT” focuses on government efforts to update its approach to privacy and security issues related to health IT, and to build greater confidence and trust in EHRs and health information exchange among providers and the public. The digitization of health records will create a new set of challenges for protecting the privacy and security of health information, but it will also open new opportunities for improving safeguards.
2008, Objective 2.2: Enable exchange of health information to support population-oriented uses
2011, Goal II: “Improve Care, Improve Population Health, and Reduce Health Care Costs through the Use of Health IT” discusses the specific ways health IT is contributing to the goals of health care reform: improved care, improved population health, and reduced per capita costs of health care. Widespread adoption of EHRs, information exchange, quality improvement initiatives, and health care reform pilots are required to implement the Affordable Care Act. As part of each of those activities, the government is investing in and leveraging health IT to create a transformed health care system.
2008, Objective 2.3: Promote nationwide adoption of technologies to improve population and individual health
2008, Objective 2.4: Establish coordinated organizational processes supporting information use for population health
2011, Goal IV: “Empower Individuals with Health IT to Improve their Health and the Health Care System” discusses how the government is designing health IT policies and programs to meet individual needs and expectations, provide individuals with access to their information, help facilitate a strong consumer health IT market, and better integrate individuals’ and clinicians’ communications through health IT. A public that has a voice in designing national health IT policies and programs and is empowered with access to its health information through useful tools can be a powerful driver in moving toward patient-centered health care.
2011, Goal V: “Achieve Rapid Learning and Technological Advancement” focuses on demonstrating ways health IT and meaningful use can enable innovation and appropriate use of health information to improve knowledge about health care across populations. In the long run, the government is pursuing a vision of a learning health system, in which a vast array of health care data can be appropriately aggregated, analyzed, and leveraged using real-time algorithms and functions. This future will require technical innovation to build on the foundation of meaningful use, as well as finding new ways to use that foundation in the practice of health care.

What's new?  

The HITECH Act has created a completely different environment for change that supports the first two goals quite well through incentives, grants, and other programs.  As I read through the 2011 plan, I find that many of the tactics being used to advance the strategic goals, especially around Goal I, are well into the execution phase.  I still think communications could be vastly improved.  It still amazes me how many providers have never heard of meaningful use.  The Government has never been good at marketing, but it needs to get better if Meaningful Use is to succeed.

Goal II still needs some work on the public health side.  The HITECH Act doesn't provide much for public health in the way of assistance, but requires providers to work with public health.  Something needs to close that gap.

Goal III needs much more communication and time.  Trust will happen eventually.  Remember when you wouldn't enter your credit card number on the Internet? Well, those days are gone, but that didn't happen overnight.

Goals IV and V are more in the future.  Individual empowerment has begun.  While some are concerned that less than 10% of Americans are using a PHR, I'm absolutely thrilled that 1 in 15 do, and that number is growing.  It is clearly an emerging market.  I expect those numbers to double or triple in the next few years.  It's pretty clear that Rapid Learning will occur as more data becomes available.  I signed up for 23andMe about a year ago, and the number of new results that I've seen over the past year has been astonishing (even if not all that useful yet).  I expect similar things to happen elsewhere.  The really interesting challenge could come from patients who want a piece of the action.

Friday, March 25, 2011

Usability Testing of EHR for MeaningfulUse

At HIMSS, Dr. Charles Friedman, Chief Scientist at ONC announced that Usability would be included in Stage 2 of Meaningful Use.   This became the tweet heard 'round the floor of the show.  As if the challenges of Meaningful Use Stage 2 timing were not already enough, this seemed to be yet another late-breaking hurdle.

Dr. Friedman and representatives from NIST spoke to the HIMSS EHR Association this afternoon about what ONC intends.  The slides he presented are below.  These are mostly a condensed version of what he presented at HIMSS.

NIST will be hosting a workshop in July focused on the topic of usability in Healthcare IT.  You can also expect that there will be upcoming opportunities for testimony and discussion in various advisory committee venues on this topic.  When I asked the inevitable question about timelines for Usability testing given existing tensions around stage 2 deadlines, I got back an interesting answer.  Dr. Friedman was not specific and essentially indicated that we'll all be seeing how this works out as it evolves.  It seemed to me to indicate that ONC might have some flexibility around the usability testing requirements for Stage 2, but I wouldn't place any bets on that.


With Apologies to Dire Straits (and Sting)

I want my, I want my CCD.

Now look at them yo-yo's, that's the way you do it
You get your money with the CCD.
That ain't working, that's the way to do it
Money for nothing and EHR for free

Now that ain't working, that's the way you do it
Let me tell you them guys ain't dumb
Maybe get a blister on your little finger
Maybe get a blister on your thumb

We got to install microfilm viewers
for all these paper documents
We got to move these file cabinets
We got to move these fax machines

The little faggot with the earring and the chopper
Yeah buddy, that's his own hair
That little faggot got his own blog page
That little faggot he's on twitter there

We got to install microfilm viewers
for all these paper documents
We got to move these file cabinets
We got to move these fax machines

Thursday, March 24, 2011

Publishing Standards using Standards

Yesterday, Dave Carlson presented on how the Model Driven Health Tools will be used to generate the guide being produced by the CDA Consolidation project.  It was like someone turned on the way-back machine for me.  Before I entered the field of Healthcare (a decade ago now), I used to work for a company (now defunct) that produced technical documentation tools working with SGML and XML.

These days there is an OASIS XML-based standard for designing, writing, managing, and publishing many kinds of information in print and on the Web: The Darwin Information Typing Architecture or DITA.  The DITA standard has several open source implementations, including DITA-OT, which MDHT is using as an output format for the model.

DITA can be readily transformed to standard formats like XHTML and PDF, or proprietary formats like RTF, or other output combinations like JavaDoc, Eclipse Help, et cetera, using a common set of inputs.

Dave is using Ant, another Java-based open source tool, to support the documentation build process.

There's plenty of content in the CDA Consolidation project that needs to be created manually.  This is incorporated into the documentation using topic maps to build the "Table of Contents" of the guide.  Getting this content from an editor into DITA format would clearly be a concern.  However, most word processors these days generate XML output, often in a standard format like OpenDocument or DocBook, or at least in a well-documented proprietary variant of a standard format.  It's pretty easy to write XML transforms from these formats to DITA to capture the necessary documentation structure, so that editors can continue to use their favorite word processors.  There are also commercial products that support editing DITA directly.
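As a rough illustration of that kind of transform (such work is usually done in XSLT), here is a sketch that renames a few DocBook elements to their approximate DITA equivalents.  The tag map is hypothetical and partial; a production transform would cover the full vocabularies and DITA's topic typing rules.

```python
# Minimal sketch: map a handful of DocBook elements onto rough DITA
# equivalents by recursively renaming elements.  Illustrative only.
import xml.etree.ElementTree as ET

# Hypothetical, partial DocBook -> DITA tag mapping.
TAG_MAP = {
    "section": "topic",
    "title": "title",
    "para": "p",
    "itemizedlist": "ul",
    "listitem": "li",
}

def docbook_to_dita(elem: ET.Element) -> ET.Element:
    """Recursively rename elements according to TAG_MAP, keeping text."""
    out = ET.Element(TAG_MAP.get(elem.tag, elem.tag), elem.attrib)
    out.text, out.tail = elem.text, elem.tail
    for child in elem:
        out.append(docbook_to_dita(child))
    return out

if __name__ == "__main__":
    src = ET.fromstring(
        "<section><title>Intro</title><para>Hello DITA.</para></section>"
    )
    print(ET.tostring(docbook_to_dita(src), encoding="unicode"))
    # -> <topic><title>Intro</title><p>Hello DITA.</p></topic>
```

Once content is in DITA, the DITA-OT toolchain takes over for producing XHTML, PDF, and the other outputs mentioned above.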

There are plenty of proprietary solutions that support complex documentation development, many of them also using DITA these days.  What the MDHT and CDA Consolidation projects have done is to develop tools that could support standards developers using open source software, generating very high quality output.

It's all still very much a work in progress, but as I think about next generation publishing tools, I'm hoping that HL7, IHE, DICOM, and other standards organizations consider the MDHT tools as a very strong starting point.  Who knows, maybe there will even be Git integration someday that will make one of my IHE colleagues happy.

Wednesday, March 23, 2011

IHE support for PCAST Use Cases

Today John Halamka posted Use Cases for Health Information Exchange discussed by the PCAST Workgroup.  There were four use cases:

  1. Push by patient of data between two points
  2. Simple Search for Data
  3. Complex Search for Data
  4. Deidentified Aggregated Data Mining
These are all presently supported by existing IHE Profiles, many of which have already been deployed in Health Information Exchanges in the US and around the world.

Push by patient of data between two points

This is the use case for which XPHR was designed.  XPHR uses CDA, which captures medications, problems, allergies, labs, and references to other studies (e.g., imaging).  The metadata includes patient identity, provenance (author, source of submission), and privacy metadata.  The assumption made by IHE PCC was that an XDS registry could act as a non-tethered PHR, able to receive pushes from providers' EHRs.  It could also support input using IHE's XDR profile, or using XDM via the Direct protocol.

These profiles include requirements to support ATNA, which requires user authentication, certificates on both sides of the communication, and audit logging by all entities engaged in the communication.

The IHE BPPC profile supports capture of patient consent, and supplies metadata about consent.

With regard to the Infrastructure John suggests is needed for this approach:

  • The XDS, XDR and XDM profiles include metadata on patient identity, provenance and privacy.
  • Categories of health data are already captured in the metadata as specified in the vocabularies selected in the HITSP C80 Clinical Document and Message Terminology Component, in the Document Metadata section, and in the LOINC codes for document sections used in the HITSP C83 CDA Content Modules.
  • Providers of Applications capable of wrapping content packages of clinical data in the CDA format can be found here
  • Providers of Applications capable of receiving the CDA format and "unwrapping" the content can be found here
  • Certificate management can be through X.509 certificates supported by a number of different providers.
  • And I agree, that policies are needed, but that's outside of the IHE scope.
Simple Search
For this use case, the IHE BPPC Profile can be used to capture the patient consent. Queries can be created using the IHE XDS Stored Query or XCA profiles to locate information for a patient from numerous sources specified in URIs as web service endpoints.  These are the very same query/response exchanges already supported in the NwHIN.  As before, XDS Stored Query and XCA require ATNA which supports authentication, audit and encryption of the exchange.
Again, on Infrastructure suggested:
  • Policy is needed, but is outside of IHE's scope.  The profiles are built to support wide variation in policy.
  • Syntax and semantics for XDS queries have been well established.
  • Applications capable of performing the queries are listed here.  Some of the current deployments can be seen here.  This is already available in NwHIN implementations, and through the CONNECT open source software.
  • An approach to identifying patients using the IHE XCPD profile has already been specified (pdf) by the NwHIN Spec Factory.  That profile supports disambiguation of patient identity.
Complex Search
Complex search is different from the perspective of the PCAST use cases, but technically implemented using the same capabilities.  What differs are policies.
  • IHE XUA can be used to communicate provider identity and role in the query transaction.
  • The "DEAS" is an XDS Registry and eMPI supporting XCPD, or an XCA and XCPD Gateway
  • Syntax and semantics for this are already specified. 

De-identified Aggregated Data Mining
This one is the "weakest" of the use cases, given that John doesn't talk about the challenges of de-identification.  In fact, the use case he gives provides data that is probably as accurate as a fingerprint for identifying a person ... a radiological image.

  • Policy is again out of scope, but please note that IHE profiles are always designed to support a wide variation in policy due to their International nature.
  • IHE is developing a handbook (doc) on this topic this year.
  • The IHE MPQ profile, released last year, is one of the infrastructure components that can be used to support this capability.

As John notes in his post, Policy is required for all of these use cases.  I'd love to see a workgroup focused on the policy issues irrespective of any standards or technology.

Tuesday, March 22, 2011

Use Cases for Links from Entries to Narrative

One of the frequent topics of discussion in the CDA Consolidation project is the IHE-required links from machine-readable entries to narrative text in the document.  This requirement goes all the way back to IHE's first CDA profile, XDS-MS, and was included as a recommendation in CCD (see page 18, quoted below):

CDA provides a mechanism to explicitly reference from an entry to the corresponding narrative, as illustrated in the following example (see CDA Release 2, section content for details):
CONF-29: A CCD entry SHOULD explicitly reference its corresponding narrative (using the approach defined in CDA Release 2, section ).

There's a rationale and a couple of use cases behind this idea. The IHE (and underlying HL7) specifications don't require that code be valued, but in those cases, you do need the originalText.  The whole point was to be able to identify the "free text" that would have been coded.  There were two choices:
  1. Duplicate the text.
  2. Leave the text in the narrative and point to it from the entry.
Duplicating the text seemed to be a potential source of errors that could be avoided by pointing to it.  Others have reported that pointing to it is also a potential source of errors, but at least that can be better detected and corrected for.  You cannot detect mismatches between entries and narrative by allowing for duplication.  

The use cases that I've encountered for using the pointers include:
Ensuring that each machine readable entry has associated narrative
One can check this in the CDA document by verifying that each reference/@value attribute can be associated with an ID attribute using that URL.  You cannot automatically verify that it is the correct narrative, but the next two use cases help.
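A minimal sketch of that first check, run against a made-up fragment (real CDA documents are namespaced and much larger; this toy ignores namespaces):

```python
# Verify that every reference/@value in a (toy) CDA fragment resolves
# to an element carrying a matching ID attribute.
import xml.etree.ElementTree as ET

def check_references(root: ET.Element):
    """Return the reference/@value fragments with no matching ID."""
    ids = {e.get("ID") for e in root.iter() if e.get("ID")}
    dangling = []
    for ref in root.iter("reference"):
        value = ref.get("value", "")
        if value.startswith("#") and value[1:] not in ids:
            dangling.append(value)
    return dangling

if __name__ == "__main__":
    # Made-up fragment: one linked entry, one dangling reference.
    doc = ET.fromstring(
        "<section>"
        "<text><content ID='prob-1'>Pinched nerve</content></text>"
        "<entry><code><originalText>"
        "<reference value='#prob-1'/><reference value='#prob-2'/>"
        "</originalText></code></entry>"
        "</section>"
    )
    print(check_references(doc))  # only #prob-2 is dangling
```

The popup and styling tricks described next build on the same ID-to-reference association.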
Providing popups that display the machine readable data associated with the Entry 
The simplest method is to use the HTML title attribute to generate the "popup", but you can get a lot fancier.  Essentially, for any element with an ID attribute, you add a title attribute to the output HTML that lists the code, display name, and coding system name.  This can also be used to facilitate manual testing of appropriate association of coded entries with narrative content.
Identifying Narrative Content that isn't associated with a machine readable entry
You can also render narrative content in a different style when there is an entry associated with it.  This is just a slight variation on the popup trick I described above.  It makes readily apparent which narrative content isn't associated with an entry.

Now, some have suggested that when the narrative is completely derived from the entries, as indicated by the DRIV typeCode attribute value on the component element, this constraint is unnecessary.  Since the document reports the narrative to be completely derived from the entries, there should be no reason to put in the linking code.  I'm of mixed opinions about this, because even when the narrative is derived from the entries, there is still a need to make sure that it was done correctly.  The linking capability makes this possible.

It's very clear that there are some who would have this be changed.  The question of whether this is even within the scope of the project is also under debate in HL7 Structured Documents.  Given that all IHE did was strengthen a CCD SHOULD constraint into a SHALL constraint, I would argue that it isn't.  However, the SDWG seems content to ignore that issue, and see what others want to do in this regard.  I know that not everyone likes this constraint, and that SDWG added it to CCD as part of ballot reconciliation to harmonize with what IHE had done.  But the fact that it was hotly debated then doesn't really make for a good excuse to weaken it now.  Even so, I'm likely to lose this battle, and will have to figure out how to ensure that this change will not cause problems even though it won't be backwards compatible.  There's quite a bit of IHE work implemented internationally which adheres to this constraint.

Monday, March 21, 2011

The Next Generation Health Platform

Do you remember the world before... 
  • 8" floppy drives?
  • 14" hard disks?
  • email?
  • desktop computers?
  • bag-phones?
  • luggables?
  • twisted pair?
  • cell-phones?
  • HTML and HTTP?
  • CD-R/W?
  • T-cons?
  • XML?
  • broadband?
  • Screen Sharing?
  • WiFi?
  • laptops?
  • USB?
  • flat panel displays?
  • DVD-R?
  • VOIP and Skype?
  • 3G?
  • Twitter and Facebook?

How did those technologies change your life?

I had a dream this morning having to do with Eclipse, iPads, and Healthcare.  Yesterday I got a sermon about how new technology (netbooks and iPhones) was enabling people to avoid being "in the moment" with others who were in the same place at the same time.  Recently I got a new smart phone.  I've been thinking about getting an iPad 2.  While watching TV at home last night with my family, we saw the ad for the AT&T Laptop Dock for the Motorola ATRIX 4G.

We are in the middle of a major platform shift.  The last major shift was to the wireless laptop, and that wasn't all that big a deal from the developer perspective.  This next one is about as big as the simultaneous introduction of the IBM PC, the Macintosh, and the bag-phone.  It will have a tremendous impact on the software to be developed in the next decade.

The new shift clearly includes 4G cellular technology, touch screens, LCD video cameras, multi-core processors and semiconductor storage.  Voice recognition will probably become more prevalent as well, although I'm not seeing signs of that yet.  This new technology will make it easier to be "in the moment" with others who aren't in the same space as you.

It's not yet clear where the new platform is going, or who will win.  Will it be Apple?  Android?  Someone else?  What will the OS be?  Apple's iOS?  Google's Android or Chrome OS?  What will the application development environment look like?  What new languages will I have to learn?  HTML5?  Flash?  Will the applications use XML or JSON, or will it even matter?

What will the new world bring?  How will your legacy applications respond?  Will they simply be re-skinned, or will they be completely renewed to take advantage of all of the features of the new technology?  What standards should you be looking for in this new technology?

Here's my short list of must-haves:

  • Mini-USB
  • Mini-HDMI
  • Micro-SD
Now, you can get USB, MicroSD/SD and HDMI adapters for the iPad 2, but they require dongles.  I hate dongles, and I already carry a bag full of standard cables.  I'm not happy about how proprietary the Apple products are, but given their popularity I may need to have one anyway.

Some things to think about for the new platform.
  • Security - Smaller devices are easier to lose or be stolen.  They need to have encryption built into storage, and strong password protection or biometric authentication to access the device.  Networking via 4G and WiFi needs to support encryption, firewalls and anti-virus software.  It won't be long before we see viruses targeted towards some of the newer unlocked or jail-broken platforms.  Locking down applications and configuration is common for corporate laptops but not so much for telephones.  IT departments need to start looking at this.  How often do you back up your phone?
  • Cloud Computing - A lot of these devices will be working in the cloud rather than performing local computing.  Organizations that plan on using these devices as client workstations will need to think more about their private or public cloud infrastructures.  That also includes networking.  Does your WiFi reach where these devices will be used?
  • UI designs will continue to advance on these platforms.  Developers need to be careful about early platform lock-in though, as most of the advances seem to be built into the platform, rather than the development environment.

Wednesday, March 16, 2011

On the New Format for HL7 CCD Training, and some stats on The CDA Book

I'm teaching CDA and CCD this week at the HL7 Education Summit.  We're delivering the training in a new format developed by Diego Kaminker, Co-chair of the HL7 Education Workgroup and HL7 Argentina Chair.  Diego did a great job restructuring the training, and I envy the students that got it.  What's different?  Well, I usually spend a half-day on CCD, but today we spent a full day.  The additional time was used for exercises creating a HITSP C32 compliant CCD document.

Also, all the students in today's CCD-focused session spent the day before with Diego and Calvin Beebe (HL7 Structured Documents co-chair) on CDA basics, so I knew what the students already knew about CDA going into the class.  That changes dramatically how I teach.  I don't need to spend class time on remedial work for those students who don't have the prerequisite skills or training.  It means that I can draw on and help to reinforce the students' existing knowledge.

Because the class was focused on Meaningful Use implementation, most of the students had already been, or were immediately going to be applying their skills.  Having motivated students makes for a much more interesting classroom environment.  I think I enjoyed this class just as much as the students did.
Even though the class was focused on Meaningful Use, we had two students from Canada who were interested to see what we were doing in the US.  One of them commented about how much Meaningful Use in the US was affecting the International EHR market.  It was a rather astute observation.

After the class, Calvin, Diego and I had dinner and talked about further improvements we want to make based on the students' feedback.  And they gave great feedback on what we had done, which will further improve this training for others.  While the next Ed Summit reverts to the original format, the November Ed Summit will probably reuse the new format.  That will give us enough time to make some of the improvements we discussed.  I also will be teaching the CCD class a little bit differently going forward, based on some of the outcomes of this class.

After dinner, Calvin asked me how much time I spent writing The CDA Book.  He estimated about 1000 hours.  I guessed about 500.  After I got back to my room, I tallied up the stats Microsoft Word keeps on editing time.  I have two documents which were corrupted by Word, and 21 uncorrupted saved copies of the work in progress.  Counting up the editing time I spent, and filling in for the corrupted gaps I'm looking at about 850 hours for the book, outline and proposal.  Adding in time I spent pitching the book to three editors, acquiring equipment and software, and preparing graphics, I figured it was still under 1000 hours.  But, I get page layouts this week to proof, so figure another chunk of time there, and time to promote it and some other stuff, and Calvin probably nailed it on the head.

I started the project in mid-November of 2009, and delivered it to the publisher in early November of 2010, so figure a year of elapsed time.  1000 hours is half a working year for most people (50 weeks X 40 hours = 2000 hours).  I know I already spend too much time doing what I do for the day job, and this was after hours, so I figure that while I was writing The CDA Book, I was putting in 70 - 80 hour weeks, and in the last month, about 80-100 hour weeks.  Total page count is around 400, so that works out to about 2.5 hours a page.  If I sell as many as I hope to, it would work out to about $20/hr after expenses.  If I sell as many copies as I expect to, it would work out to about $8.50/hr after expenses.  This is not a get rich quick scheme by any means.  I could do better answering this ad.

This was an interesting investigation because I'm considering taking on another book project.  Metrics are great.  They give me some targets for improvement.  The next book would be about using IHE profiles to create Health Information Exchange.  It's got quite a different audience, I think, than the CDA Book, but I'm still exploring the idea, even at this rate of return.  It's fun, but then again, I have a strange idea about fun.  Just ask my kids.  I have a book project to work with them on as well.  I think it'll start as a blog but be designed for eventual publication as a book.  The working title of that project is "Math you aren't supposed to Know".

Tuesday, March 15, 2011

Why there is no W3C Schema for CCD

One of the callers on today's CDA Consolidation call asked if there would be an XSD for the Consolidated CDA Implementation Guide.  This is a fairly common request that I hear from a lot of different sources.  If you have the CDA specification, it provides a W3C Schema for the HL7 CDA Standard.  But there is no W3C Schema for CCD itself, or for other implementation guides built on top of CDA or CCD.

CDA is a standard that is capable of representing a wide variety of clinical documents.  CCD and other implementation guides place further constraints on CDA, but do not do so in a way that you can create a W3C Schema (an XSD file) for them.  The reason for that is that the W3C Schema standard represents the original constraints that were present in XML and its predecessor SGML.  Those constraints required that the Element name fully define the possible model for the XML having reached a particular point within the parse of a document.  Even though parsing technology has advanced quite a bit since SGML was specified, that requirement still remains.  This means that it becomes difficult or impossible in W3C Schema to even specify a set of content constraints of an XML element based on the value of one of the attributes of that element.

In CCD, the model is not described by the element name, nor even an attribute of that element, but rather from the value of the root attribute of the templateId element which appears inside the element.  An XML parser would need to determine the particular set of constraints that would be applied using some form of lookahead.  The original SGML markup language forbade the requirement for lookahead in parsing markup. This constraint made its way into XML DTDs and subsequently into the W3C XML Schema language.  So, there is no W3C way to create a schema that can tell you how to further constrain the CDA XML for use in CCD.  The same is also true for constraining XHTML or DOCBOOK or any other general purpose markup language.
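Because such constraints live outside what an XSD can express, validators switch on templateId in code (or in Schematron).  A toy sketch of that dispatch; the rule attached to the OID here is purely illustrative:

```python
import xml.etree.ElementTree as ET

CDA = "{urn:hl7-org:v3}"

# Rules keyed by templateId/@root.  A W3C schema cannot select a content
# model based on the value of a child element's attribute, but code can.
RULES = {
    # Illustrative: require a consumable child on acts asserting this template.
    "2.16.840.1.113883.10.20.1.24":
        lambda act: act.find(CDA + "consumable") is not None,
}

def conforms(entry_xml):
    """Apply every rule whose templateId the act asserts."""
    act = ET.fromstring(entry_xml)
    for tid in act.findall(CDA + "templateId"):
        rule = RULES.get(tid.get("root"))
        if rule is not None and not rule(act):
            return False
    return True
```

This is exactly the lookahead the schema languages forbid: the validator reads a child element first, then decides which constraints apply to the parent.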

One of the benefits of W3C schema support and the reason that it is often asked for is that it enables easy translation from databases or other persistence layers based on the XML content model.  Some would argue that this limit of CCD is a problem.  I happen to like the ability to layer constraints to support incremental interoperability.  GreenCDA is one effort at moving through the layers of constraints to generate XML that has a schema.  It may be a way through the challenge as transforms from Green to normalized CDA would still provide incremental interoperability.  I'd like to see a more algorithmic approach (read: repeatable and programmable) used to derive GreenCDA schemas from the layered constraints.

But, there it is.  If you want to know why there isn't a schema, it's because of some fundamental limits placed on XML, and those limits apply to any general purpose markup language.  The same problem appears in DocBook, XHTML and any other markup language that you want to further constrain for a particular purpose.  It's also the same reason why there isn't an XML schema specific to the XDS metadata (and in fact, a repeatable algorithm for developing a GreenCDA could also be used to develop a GreenXDS metadata package).  Hmm, I'm seeing a trend here...

Monday, March 14, 2011

IHE Patient Care Device White Paper Published for Public Comment

IHE Community,

Patient Care Device White Paper published for Public Comment

The IHE Patient Care Device Technical Committee has published the following white paper for Public Comment:
·         Medical Equipment Management (MEM): Cyber Security

The document is available for download at  Comments should be submitted by April 15, 2011 to the online forums at

A Doctor is Not a Bank

All too often I've heard the comparison between the financial industry and its efforts to make transactions electronic, and the healthcare industry.  But health is not something that I can make deposits on and withdraw later.  We aren't talking about a case where there are only two organizations completing business transactions on behalf of their customers.

There's a lot more going on here.  A better comparison would be to automation supporting electronic commerce between multiple businesses.  I'll use electronic publishing as an example, since I have some history in that space.
Imagine that you had a customer needing a new web page.  You have to understand what the customer is trying to accomplish, and then design a page to meet their needs. Along the way, you have to obtain assets:  Text content, media (pictures or video), put it together, get approvals, and publish the content.  Obtaining the assets might involve negotiating access to content from others, paying someone to provide it, or simply assigning the job of creating it to someone on your staff.  Afterwards, you need to put all those pieces together into a coherent whole, possibly get someone to review and approve it, and then it gets pushed out to the web.  Anywhere along the way you may learn that there are other tasks to perform.  Some of the content may need to be coded in Flash, in which case, you might need to put a flash player download button on the site (which means you need another piece of content), et cetera. Oh, and if you are providing full service, you might also evaluate how people respond to the page, and make any adjustments necessary to improve their response.  Now, consider making that whole process electronic, and you begin to understand the complexity of healthcare. BTW:  There are systems that support this process electronically, but they are proprietary.

You have a patient, with a specific complaint or symptom. After spending a bit of time getting to know that patient, a healthcare provider has to make some objective assessments (findings), or get others to provide them (e.g., referrals or testing).  Some information may need to be generated by specialists.  Once you've determined what is wrong, you need to pick a treatment.  Oh, that might need approval (from the patient's insurer), and then you need to follow up to see that the treatment worked, and adjust as needed.

I wish I could earn excess health, deposit it with my Doctor, earn interest and come back and make a withdrawal of it from him as needed.  If that sounds ridiculous to you, then please join me in efforts to stop comparing healthcare IT interoperability to banking.

Friday, March 11, 2011

Medication Status in CCD

One of the things that people often want to know about medications is whether they are still actively being used by the patient, or whether they are historical.  Some look to find this in the status element of the substanceAdministration or supply act, but that is used for a different purpose.  It is used to record the state of the act according to the state model defined in the HL7 RIM.  For example, when dealing with a request or order for medications, the state is completed when the order is filled.  When dealing with an event, such as the administration of a medication, the state is completed when the med has been given to the patient.  That means that what the statusCode element implies about the medication depends upon the type of act (specifically its mood).

In the IHE PCC Technical Framework, the medication entry often has effectiveTime elements that tell you the intended or actual time of use (see yesterday's post for how to use those).  But even then, you may not know the dates, but still want to indicate that this is a historical medication.
So a different mechanism is needed to indicate that a medication is currently active, on hold (e.g., blood thinners before surgery), or no longer active.  The CCD specification contains a Medication Status entry that can appear within a substanceAdministration or supply act to support this.

The medication status entry should appear at the end1 of the substanceAdministration or supply act (with all the other entryRelationship elements).
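A minimal sketch of such an entry; the templateId and LOINC code below are from my reading of CCD rather than quoted from the specification, so verify them before relying on this:

```python
import xml.etree.ElementTree as ET

# Hedged reconstruction of a CCD medication status entry.  The OID
# (2.16.840.1.113883.10.20.1.47) and the "33999-4" Status code reflect
# my reading of CCD, not normative text.
MED_STATUS = """
<entryRelationship typeCode="REFR" xmlns="urn:hl7-org:v3"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <observation classCode="OBS" moodCode="EVN">
    <templateId root="2.16.840.1.113883.10.20.1.47"/>
    <code code="33999-4" codeSystem="2.16.840.1.113883.6.1"
          displayName="Status"/>
    <statusCode code="completed"/>
    <value xsi:type="CE" code="421139008"
           codeSystem="2.16.840.1.113883.6.96" displayName="On Hold"/>
  </observation>
</entryRelationship>
"""

def medication_status(xml):
    """Return the status display name carried by the entry, if any."""
    rel = ET.fromstring(xml)
    value = rel.find(".//{urn:hl7-org:v3}value")
    return value.get("displayName") if value is not None else None
```

The value element carries one of the SNOMED CT codes from the table below.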

The codes and display names that you can use for this entry are:

Code       Display Name
421139008  On Hold
392521001  Prior History
73425007   No Longer Active

I don't know what the difference between Prior History and No Longer Active is.  They are close enough to equivalent that you should be able to assume that if either of those codes is present, the medication is no longer being used.

   -- Keith

1 It need not appear exactly at the end, so long as it is with all the other entryRelationship elements.  But if you don't know where to put it, that's the best place for me to tell you, because it will be correct according to the CDA schema.

Open Source Software as a way forward for HealthIT

I'm a big fan of open source.  A lot of the software I use is open source, and much of the software that isn't relies on open source.  I've given a wee bit to the open source community, but not a whole lot.  I think my largest contribution has been publicizing it on this blog.

But I've been reading a number of posts and tweets lately that tout "Open Source" as the way forward for Healthcare IT.  Some of the recently posted links appear below.
In the interest of full disclosure, I work for a software vendor (see my profile).  You might think my views are skewed against open source.  I don't think they are, but I'll let you make your own decisions.

There's nothing inherently better or worse about the quality, features or capabilities of Open Source Software compared to commercial offerings.  In fact, for some tasks, the Open Source implementation is THE industry leader.  I challenge you to find a Java-based application that doesn't use Xerces for XML parsing or Xalan for XSLT transforms.  The difficulty of that task shows how these open source solutions dominate that space.

The two major benefits of open source seem to be cost, and the opportunity to modify the product to suit your own needs.  For the small physician office or hospital, the cost of the software seems especially attractive. For larger organizations with a skilled programming staff, the opportunity to modify the product may be even more attractive.  One of my own favorite benefits of open source is how close it brings product developers to their customers, because I think it has incredible value.  That may be the one reason why open source is viewed as a way forward.

The true costs of software ownership only start with the licensing or purchase price of the software.  The total cost of ownership can be much higher, and many open source offerings don't include the same services that commercial offerings do.  Smaller providers need to be aware of the gaps, because organizations using open source software often have to make up for what is missing.  This can include documentation, education and training, service, support, maintenance, and in the era of Meaningful Use: Certification.  Larger organizations will often have their own staff who can fill those gaps, but that just shifts the cost from one place to another (it may in fact be cheaper, but can still require a substantial investment).

Because open source efforts are often volunteer supported, they don't always have funding to support things like Certification.  Getting an EHR ONC-ATCB certified is no small task.  It requires weeks of preparation and person-days committed to testing, and can include fees1 ranging from around $20,000 to as high as $33,000 to certify a complete ambulatory EHR.  Few open source efforts have the funds to cover that kind of cost.  That responsibility would become a cost to be borne by the implementers of the product.  Smart implementers would be well advised to partner with other users in their community to share the cost of product certification in this case (see ONC FAQ 4).

Many Open Source efforts are supported by large software and/or hardware vendors.  For example, Intel has a fairly large commitment to open source supporting their processors.  If you take a look at the membership of Open Health Tools, you will see a number of large IT vendors, and several EHR vendors who participate.  Many of these have contributed substantial amounts of intellectual property, and development resources to these efforts.  Their staff are often very active in, or even leading open source efforts.  These vendors often provide services to fill in the gaps in conjunction with their involvement in the development of the software.  Other vendors or consultants may also offer services to backfill without necessarily being directly involved in the open source development efforts.  Some  vendors have created their own open source product offerings:  The code is completely free and downloadable, and you can contribute to it, but the open source effort is managed by the vendor (and in some but not all cases, the IP is retained by the vendor). These vendors often generate revenue by providing services, support, education, consulting, et cetera, around the implementation and use of the product.

The Federal Government has even supported a couple of Open Source efforts.  One of these is the CONNECT Open Source project which supports connection to the Nationwide Health Information Network.  Another appears to be the popHealth project designed to support Meaningful Use quality reporting.  In that particular case it appears (see Project Milestones on the project home page) that the project will also be ONC-ATCB certified for that purpose.  Federally supported projects have their own set of challenges.

Federal projects live as long as there is funding support from the agency, and even when the agency (or agencies) would like to renew the funding, it's not always the case that it does.  When it does, there is no guarantee that the supporting contractors will remain the same, or that they will do things the same way.  The contract for the CONNECT project was awarded to CGI/Stanley, but was protested by the original contract holder Harris.  The lights are currently on for CONNECT, but little work is happening until the dispute is resolved. This is very frustrating on two counts:  Some Federal agencies were relying on this activity to meet some of their objectives.  There are others who have invested time and effort into the activity, and in other intellectual property in the hopes of providing services that they won't be able to capitalize on as much as they had hoped to until the protest is addressed.  I don't know of any studies on risks for Federally supported open source activities.  Given my own experience with Federally funded initiatives, I would expect that there's about a two-year risk cycle to consider at the very least.

So, my advice:  Be aware of your needs.  Go into your software implementation project with your eyes open, and understand the total cost of the software you will be using.  Do look at open source efforts (here is a list to consider).  If the open source efforts meet your needs, then by all means, consider them.  Don't assume that because the cost of the software license is 0, it will be cheaper in the long run.  Do the analysis, and be aware of potential risks.  Put the same effort into vetting an open source project as you would into vetting a software vendor.  If, in the end, you decide that open source is the way to go, then it will be the way forward for you.

  -- Keith

1 Not all ONC-ATCBs publish their fee structures for certification. These figures are from publicly available information.

Thursday, March 10, 2011

Medication Dosing Regimens in CDA Release 2 the IHE Way

This started as an e-mail response to a question, but I think others will benefit from it, so I'm posting it here, and will send the querent to this page.

The question revolves around the use of effectiveTime element in the IHE Medication entry (NOTE: The IHE wiki is non-normative content, but good enough for this discussion).

The questions are:  How many effectiveTime elements are required by IHE, and what data types must be used?

The answer is a bit complicated, but then, so are medication dosing regimens.  CDA can capture a dosing regimen as complex as “On Tuesdays at 1:00pm and Thursdays at 10:00am between Memorial Day and Labor Day”.  [A dosing regimen I used to use for swimmer's ear medication].

IHE says:  If that's too hard to encode, just make it human readable.  So you needn't have any effectiveTime elements.

You can also say:  I took one dose of  acetaminophen at 2:00pm on March 9th [I had a massive headache].  So, that would go into a single effectiveTime element using the TS data type, and the value attribute would indicate the date (and time) of the dose given (or when it was intended to be given).

You can say that this IV drug was used for 8 hours.  That works similar to the single dose, but the effectiveTime is IVL_TS, with the start and stop of the dosing regimen recorded in the low and high elements respectively.

Then there's the case of take this drug once (or N times) a day for 10 days.
In that case, the first effectiveTime element records the intended or actual start and stop days (in a low and high element).  The second effectiveTime element records the frequency information using the PIVL_TS data type.

You could also say take Excedrin PM before bedtime for the next 10 days (which I had done for left shoulder pain due to a pinched nerve -- fortunately that is gone now).  In that case, the first effectiveTime still records the intended or actual start and stop days, and the second effectiveTime records the event driven dosing regimen using the EIVL_TS data type.

IHE also mentions the use of the PPD_IVL_TS data type to represent "every 4 to 6 hours", but hardly anyone uses that.  It also mentions the use of SXPR_TS to represent more complex regimens, but again, hardly anyone ever uses that.

The point is, the first effectiveTime tells you when the medication regimen was active, and the second tells you the frequency of administration.  There is no third, and in some cases, there may be none.  If you want formal constraints:

effectiveTime shall appear at most two times [0..2]
If one or more effectiveTime elements appear, the first one shall contain either the time of a single dose in the value element using the TS data type, or the duration of the dosing regimen in an IVL_TS data type.
The second effectiveTime element should represent the frequency of the dose, using either the PIVL_TS or the EIVL_TS type.  The PPD_IVL_TS or SXPR_TS data types may be used to represent more complex dosing regimens.
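As a sketch of the two-element pattern under those constraints (the dates and values are illustrative):

```python
import xml.etree.ElementTree as ET

CDA = "{urn:hl7-org:v3}"

# "Take once a day from March 10 through March 19": an IVL_TS for the
# regimen's duration, then a PIVL_TS for the frequency.
DOSING = """
<substanceAdministration classCode="SBADM" moodCode="INT"
    xmlns="urn:hl7-org:v3"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <effectiveTime xsi:type="IVL_TS">
    <low value="20110310"/>
    <high value="20110319"/>
  </effectiveTime>
  <effectiveTime xsi:type="PIVL_TS" institutionSpecified="true" operator="A">
    <period value="24" unit="h"/>
  </effectiveTime>
</substanceAdministration>
"""

def regimen(xml):
    """Return (start, stop, frequency) from the first and second
    effectiveTime elements, per the constraints above."""
    root = ET.fromstring(xml)
    times = root.findall(CDA + "effectiveTime")
    assert len(times) <= 2, "effectiveTime shall appear at most twice"
    low = times[0].find(CDA + "low").get("value")
    high = times[0].find(CDA + "high").get("value")
    period = times[1].find(CDA + "period")
    return low, high, period.get("value") + " " + period.get("unit")
```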

I would note a couple of additional points:
HL7 V3 (and thus CDA) has four different ways to fully record an interval:  low + high, low + width, width + high, center + width.  IHE has only one:  low + high.  

The only cases I've ever seen where something else is needed are: 
  1. Where start and stop time are not known with great precision, but duration is:  e.g., this procedure was performed on 1/7/2011 for 15 minutes.  That would require one of the + width representations, and if IHE encounters it for a profile (we haven't yet), I'd recommend low + width.  In HL7 we encountered this in the Procedure Note, and that's the way it was done there.
  2. There's also the case where duration is known, but days are not.  We encountered this in HL7 Claims Attachments:  Take this drug for 10 days, where it was asserted that there was insufficient context to determine the starting date.  In that case, Claims Attachments used width only for that case.

Twitter tips for Healthcare Social Media Users

I've been using Twitter to promote this blog for about two years.  Over that time, I've learned a number of ways to expand my twitter network.
  1. Content is king, but it need not be your own.  Finding good content can be difficult, but there are a few quick and easy things you can do:
    1. Locate content sources that you can trust to provide good information to your followers, especially if access is obscure or limited.  Using twitterfeed or feedburner, use that source's RSS feed to generate tweets for you.  A couple of the content sources I use include HL7 Press releases and events, and ASTM E31 Standards activity notifications.  I'm still pushing IHE to get some RSS feeds for its content.
    2. Forward public announcements that you get e-mailed to your blog (blogger has an e-mail submission capability).  You'll have to upload or relink the graphics in those e-mails, so make sure that they don't get automatically published.  I often send IHE announcements to this blog.  Do try to keep announcements limited (I try to keep it under one a day), because you should be focusing on good content.
    3. Some announcements you get have a web link.  You can save yourself time by just tweeting that link.
    4. Use something like the sidebar on your web browser to make it easy to tweet an interesting article you've read.
  2. Use Hash tags.  Hash tags attract audiences that aren't already following you.  Every important communication should have one to three hash tags.
    1. Live tweet an event important to your audience using the event hash tag.  I have a netbook that I like to use for this, as I find my phone difficult to tweet from, but sometimes it's the only resource I have.  I find that live tweeting an event using hash tags is a great way to expand your audience, especially when people want to, but cannot, attend the event you are tweeting.
    2. Attend a tweet chat.  Tweet chats are great ways to connect to new people who are interested in what you have to say, and vice versa.
  3. Leave space in your tweets for others to retweet them.  A tweet that is too long to retweet is frustrating.  Based on statistics of my own followers, I'd suggest tweets no longer than 120 characters in length if you want them easily retweeted.
  4. Engage in conversations with others.
  5. DM new followers and ASK THEM A QUESTION.  When they mention you in response, it is another way to raise your visibility (to their followers).
  6. Make Follow Friday (#FF) an opportunity to recognize new followers, and to thank those who have RT'd or mentioned you during the week.  It's polite, takes just a little bit of time (with the appropriate tools), and creates loyalty.  Don't #FF the same people each week.  Find new people to recognize.

IHE Radiology Technical Framework Documents Published for Public Comment!

IHE Community:

The IHE Radiology Technical Committee has published the following Technical Framework Documents:

Supplements for Public Comment
  • Cross-Community Access for Imaging (XCA-I)
  • Imaging Object Change Management (IOCM)
Available for download at  Comments should be submitted by March 18, 2011 to the online forums at

Supplement for Trial Implementation
  • Cross-Enterprise Document Sharing (XDS-I.b)
This updated version of the profile will be available for testing at subsequent IHE Connectathons.  Available for download at

Radiology Technical Framework
Rev. 10 of the Radiology Technical Framework incorporating all final text profiles in Radiology. Available for download at

Wednesday, March 9, 2011

Update on the IHE Reconciliation

The IHE Patient Care Coordination Technical Committee met in Canada about a month ago to review work items we are delivering this year.  I'm focusing on the Reconciliation profile in that committee.

So far, we have completed a big chunk of the Volume I work (overview, scope, use cases, actors and transactions), and are just getting started on Volume II (technical details).  Because of the many simplifications that we implemented a month ago, this is mostly a content profile.  Accessing the data used for reconciliation is supported by IHE IT Infrastructure profiles (e.g., XDS, XDR, XDM), and the structure of problems, medications, and allergies is specified in IHE PCC Content profiles.

The major additions to content include the following:

A reconciliation act which collects:

  • The identity of the person performing the reconciliation.  Open question:  Other than name, address, telephone and ID, should we gather other data (specialty, licensure, et cetera)?
  • The sources of information, either as
    • Documents, in which case we need to know the Document used, and the system (e.g., HIE home community identifier) from which it came (perhaps captured as an informant).
    • Query Results (using QED), in which case we need to know the query used to retrieve the data (captured as observation media), the performer of the query (to address issues of access control), and the system from which it came (an identifier similar to the home community ID).
  • The clinical statements that were produced as a result of the reconciliation process. 
  • Relevant links between the resulting clinical statements produced from reconciliation, and any previous clinical statements showing the progression of information.  For example, if a different medication regimen is intended to replace an existing regimen, this link should be present.  A similar case might occur where "back pain" is subsequently diagnosed as a "compressed disk", in which case the latter diagnosis might "replace" the former.
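To make the shape of the reconciliation act concrete, here is a minimal sketch of the data it collects as Python dataclasses.  All of the class and field names are my own invention for illustration; they are not drawn from the profile text.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Source:
    """One source of reconciled data: a document or a QED query result."""
    kind: str                        # "document" or "query"
    identifier: str                  # document ID, or the query that was used
    system_id: str                   # e.g., HIE home community identifier
    performer: Optional[str] = None  # who ran the query (query sources only)

@dataclass
class StatementLink:
    """Link showing how a resulting statement relates to a prior one."""
    new_statement_id: str            # e.g., the "compressed disk" diagnosis
    prior_statement_id: str          # e.g., the earlier "back pain" problem
    relationship: str                # e.g., "replaces"

@dataclass
class ReconciliationAct:
    performer: str                                           # who reconciled
    sources: List[Source] = field(default_factory=list)      # where data came from
    result_statement_ids: List[str] = field(default_factory=list)
    links: List[StatementLink] = field(default_factory=list)
```

The point of the sketch is only to show that the act ties together three things: who did the work, what inputs they used, and how the outputs relate to prior statements.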
The final piece that is needed is a discussion of the "Concern model" which HL7 uses to keep track of problems and allergies as they progress through the care and treatment process.  This will likely be added to Volume I at a high level, and then be reflected in some clarifications in Volume II on the Concern, Problem Concern and Allergy Concern entries.

One issue that needs  further attention is how to deal with updates to documents or content used in reconciliation.  The IHE IT Infrastructure DSUB (pdf) profile allows a system to subscribe to documents for a patient, but in this particular case, we'd actually want to subscribe for updates (replacements) of the specific documents used in the reconciliation act.  That's not presently supported by DSUB, but I think we'd want to see that option.  It's dealing with an edge case that we'd hope would be infrequent (replacing a document reported in error), but still important to the process.

I don't think there are good answers to this same challenge for QED yet, although the CM profile effectively acts as a subscription using very similar transactions to QED.  I'll have to think about that one.  


Tuesday, March 8, 2011

Leveraging Standards for ACO Development

One of the things I didn't get to do at HIMSS was connect up with someone who wanted a brain dump from me on leveraging healthcare standards to support Accountable Care Organizations.  One of my recent tweeps asked if EHR was required to support ACOs, and another responded that EHR is not, but that an HIE is.  To add to the alphabet soup, I'd like to throw in CDS.

What I'm hearing about most often from Accountable Care Organizations is that they are focused on cases where they can quickly realize savings.  The low hanging fruit seems to be in the area of chronic disease management, care planning and follow-up.

The top chronic diseases appear to be Diabetes [all seem to agree this is a top priority], Chronic Obstructive Pulmonary Disease (COPD), Congestive Heart Failure (CHF) [these next two are very common], Coronary Artery Disease (CAD), and Hypertension.

Other areas of attention are on care-planning and follow-up for post-surgical and post-ED visits.

From a standards perspective, the key area of focus is sharing data from various settings where the ACO has access to it.  The standard that most are already adopting is the HITSP C32/HL7 Continuity of Care Document as it is already contained within the Meaningful Use Regulation.  Another area of attention is on accessing laboratory results.

From a vocabulary perspective:
While many are still using ICD-9-CM codes for diagnoses, some are pushing for SNOMED CT codes.  For medications, the push is towards RxNorm, and for labs towards LOINC.

So far, motherhood and apple-pie, and consistent with national directions.

Where there seems to be some struggle is on the data elements needed for managing care for each of these cases.  I expect that there are still quite a number of spreadsheets being tossed around.

The next step for standardization in this area would be to apply codes to the various guidelines for treating these diseases.  The hard part is deciding on which guidelines to apply for care.  The easy part is selecting the codes for the important components.  Let's take an example of what I mean, by looking at the International Diabetes Federation's Global Guideline for Type 2 Diabetes.  I picked this guideline arbitrarily; you could use any other guideline for this effort.

Having chosen the guideline, let's take a couple of its recommendations and apply codes to them:
Guideline SD2 (see Page 9 of the PDF above) recommends a fasting oral glucose tolerance test for screening.  So, now we need to code the tests.  Out comes RELMA and we find that the most common test in LOINC® is: 1504-0 Glucose^1H post 50 g glucose PO mg/dL.  There are some 92 other result codes that we could also look for, but I won't bother listing them.

Guideline SD4 (same page) references the WHO criteria for diagnosis of Diabetes (peeling the onion as it were).  So, now I dig out my CliniClue Browser and look up the code for Diabetes Mellitus (73211009 in SNOMED CT®).  Since I'm interested at this point in all subtypes of Diabetes underneath that code, I'll deal with that as an intensional value set.
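An intensional value set like this one is defined by a rule ("73211009 and all of its subtypes") rather than an enumerated list.  Here is a toy sketch of how membership could be tested; the is-a edges below are a tiny illustrative subset, and a real system would query a SNOMED CT terminology server instead.

```python
# Illustrative fragment of the SNOMED CT is-a hierarchy (child -> parent).
# Only two edges are shown; a real hierarchy has thousands under 73211009.
IS_A = {
    "44054006": "73211009",   # Diabetes mellitus type 2 -> Diabetes mellitus
    "46635009": "73211009",   # Diabetes mellitus type 1 -> Diabetes mellitus
}

def in_value_set(code: str, root: str = "73211009") -> bool:
    """True if code is the root concept or any transitive subtype of it."""
    while code is not None:
        if code == root:
            return True
        code = IS_A.get(code)   # walk up the hierarchy one step
    return False
```

The advantage of stating the value set this way is that it stays correct as new subtypes of Diabetes are added to the terminology, without anyone having to re-enumerate the list.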

If we wanted to address other means of screening (e.g., Urine dipstick testing), I could look up the codes for those in RELMA as well.

I'll skip the Care Delivery section and move on to the Education section since Care Delivery references content in the other sections.

Guideline ED1 talks about Diabetes Type 2, so again I find the right SNOMED CT code (44054006).

Guideline LS2 talks about a type of care provider: dieticians.  I can dig up SNOMED codes for that, or use the Healthcare Provider Taxonomy codes (a Registered Dietician is 133V00000X in that coding system).

Guideline TT1 talks about HbA1c levels, so I need LOINC codes for that.  I find two codes in LOINC using RELMA (both of which are common):
  • 17856-6 Hemoglobin A1c/ % Hgb
  • 4548-4 Hemoglobin A1c/ % Hgb

Diabetes can cause retinopathy, so there are guidelines around eye screening.
Guideline ES1 talks about Eye Examinations, so I can find CPT codes for that (but not display them...)
It also discusses retinopathy, so I can find SNOMED CT codes for that (399625000) and explicitly for Diabetic Retinopathy (4855003).
And so on and so forth until I've got codes for the entire guideline.  Picking the guideline is, as I said, the hard part.  Coding the stuff is Clinical Informatics drudgery, and can be quickly done (I did a good bit of the IDF guide in about 6 hours).  Some of these codes will be for conditions, others for providers, encounter types, lab results and orders, vital signs, services, medications, et cetera.
If I'm smart, for each of these things I've got a code for, I should also have a link back to at least one (if not all) requirements for that thing in the guideline.  This can be used later!
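Those code-to-guideline links can be kept in a simple lookup table.  The sketch below is hypothetical (the table structure and function name are mine), using a few of the codes found above from the IDF guideline as illustrative entries:

```python
# Hypothetical table linking each (code system, code) pair to the guideline
# recommendation(s) that require it, using entries found above as examples.
GUIDELINE_LINKS = {
    ("LOINC", "1504-0"): ["SD2"],         # glucose tolerance test for screening
    ("SNOMED CT", "73211009"): ["SD4"],   # diabetes mellitus diagnosis criteria
    ("SNOMED CT", "44054006"): ["ED1"],   # type 2 diabetes education
    ("LOINC", "17856-6"): ["TT1"],        # HbA1c monitoring
    ("LOINC", "4548-4"): ["TT1"],
}

def recommendations_for(system: str, code: str) -> list:
    """Which guideline recommendations reference this code?  Usable later
    for infobuttons, patient education links, or decision support rules."""
    return GUIDELINE_LINKS.get((system, code), [])
```

A table like this is trivial to build while doing the coding drudgery, and it is exactly the artifact that pays off later.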
The list of coded stuff tells me what I want to see in lab reports, CDA documents, and more.  Having that list makes the job much easier, because now I can make sure that information is captured in the EHR, coded in a CDA document, or imported from laboratory reports (and mapped to LOINC where needed).
All those linked references I talked about earlier can also be used in Infobuttons in the EHR, or in patient educational content.
IHE developed the Care Management (CM) (pdf) profile to support chronic disease management.  Uptake has been slow, but I'm starting to get feedback on it, and at least one organization that I know about has implemented it.  It's designed around exactly this sort of process, and the beauty of that profile, if it were implemented fully, is the automatic generation of interfaces (no more interface engineering -- imagine taking that cost out of healthcare).
The next big challenge for ACOs is applying clinical decision support to this information.  The guidelines supply the rules that should be invoked on the data that we've coded, and those can be written in whatever form (programming language, rules engine, et cetera) needed.  Thomson Reuters is working on an AHRQ grant to develop electronic Recommendations.  This is essentially the step before coding a guideline into Clinical decision support logic.  A necessary step prior to describing the logic is what I just described, coding the guideline.
Both the CM profile and the IHE Request for Clinical Guidance (RCG) (pdf) profile support mechanisms to integrate clinical decision support as a service.  The RCG profile includes an appendix to show how to map back and forth from a CDA document to a Care provision message to support the integration of CDS as a service.  The Clinical Decision Support Collaborative is currently investigating how to integrate CDS services with EHRs, and these profiles provide some support and a standards based interface that group could take advantage of.

CDS is the next big area where standardized interfaces are needed.  IHE was just a bit early in its development of profiles on CDS, as the current market is focused on Meaningful Use Stage 1.  But stage 2 and stage 3 are coming, and CDS will play a key role in those stages.

I'd love to see some national efforts around the selection and coding of guidelines for chronic conditions that could be widely shared.  I think that effort would give ACOs a huge head start.  Most often, it's not the codes that change in the decision logic, but the various measure parameters (should HbA1c be managed to 6% or 7%? should medication X be applied at this point or that?).  Coding national guidelines in this way would let ACOs get a head start, and if they wanted to adjust the decision logic because they have a better idea of how to manage the condition, more power to them.  At least they could skip the step of finding the codes.  It pains me to understand how much clinical informatics expertise we are wasting doing the same work repeatedly (in some cases, reinventing guidelines that others have already well established).

Oh, and this would be a process that would have quality measurement built in.