Sunday, February 28, 2010

There's a Reason for That

We set up the IHE Interoperability Demonstration at HIMSS today, and will put the finishing touches on it tomorrow. This is looking to be the best demonstration in the six years that I've been a participant. There will be a lot going on this week and I'm looking forward to it all.  There's a reason for that, and that is pretty much the theme of this blog.  It started this morning...

As I entered the hall this morning and asked directions to the IHE Showcase, someone told me about the shortcut between Hall B and Hall C.  It's the best way to go, he said, because we put the Interoperability Showcase at the other end of the hall.  "I know," I responded, "you do that every year."  He replied, "There's a reason we do that..." and before he could finish I told him what it was.  The IHE Showcase draws something like 10% of the HIMSS attendees to it, and they want to pull people through the exhibit hall, so they put the big attraction at the other end of it.

It sounds like a good marketing strategy to me.  This is also the reason that several vendors I know set up their booths around the showcase (even one vocal IHE detractor I know did that for several years).  They realize that it's the place to be.

There is a reason for that.  I watched six years ago as the sole HIE core service that we'd accidentally created attracted all the attention that year (and, yes, I will forgive Bill, someday...)  Five years ago, when we demonstrated XDS as a profile instead of an accident, an analyst told one of the profile authors that we'd just invented a $2 billion a year business worldwide.  I've seen that business blow past $2b a year in the US alone a couple of years ago (see the HIMSS Analytics database), and heard estimates that it was nearly $12b a year globally a year or so ago.

There's a reason for that:  Show me any other technology that can have such an impact, and that is deployed in so many places around the globe, and is available from so many different vendors.  You can see it right here at HIMSS at the Interoperability Showcase, and around the rest of the world. That map is about to explode again.  There are about 30 different organizations using NHIN Connect, which is based on the IHE XDS profile.  Only a few of them are on that map today, but I have a mission this week to go find them and put them on it.  That will be pretty easy; they are right next to the Interoperability Showcase.

Next Tuesday will be very busy for me. There's a reason for that too.  From 1-2pm I'll be at the IHE Patient Care Coordination Committee informational session.  If you want to learn more about this domain, and what it has to offer, drop by for an hour and hear about the work we are doing, and how you can freely participate.  From 3:30 to 5:00 I'll be on one of the several "meet the bloggers" panels along with some other industry notables (some much better known than I am).  After that, one of the two presentations I'm on for this week will be the PCC update on Tuesday afternoon.  We will do something a little bit different in the presentation this year, but that's because we've done something special this year.  Come find out what we did Tuesday at 4:45 in Theater B at Booth #233 in Hall C.

I hope to see you there.  If I don't see you, I hope you have a good reason for that.


Thursday, February 25, 2010

Public Comment Period to Close Tomorrow

Tomorrow is the deadline to comment on the last round of HITSP documents.


Document Number: HITSP 10 N 461

Date: February 1, 2010
TO: Healthcare Information Technology Standards Panel (HITSP) - FOR REVIEW AND ACTION
Public Stakeholders - FOR REVIEW AND ACTION
FROM: Michelle Maas Deane HITSP Secretariat, American National Standards Institute
RE: Public Comment Period Begins on Interoperability Specifications (IS), Technical Note (TN), Capabilities (CAP) and other construct Documents

The Healthcare Information Technology Standards Panel (HITSP) announces the opening of the public comment period for the following Interoperability Specifications (IS), Capabilities (CAP), Technical Note (TN) and other construct documents:

· IS07 - Medication Management Interoperability Specification
· IS09 - Consultations and Transfers of Care Interoperability Specification
· IS11 - Public Health Case Reporting Interoperability Specification
· IS91 - Maternal and Child Health Interoperability Specification
· IS98 - Medical Home Interoperability Specification
· CAP93 – Scheduling Capability
· CAP119 - Communicate Structured Document Capability
· CAP135 – Retrieve and Populate Form Capability
· CAP136 - Communicate Emergency Alert Capability
· TN907 - Common Data Transport Technical Note
· TP13 - Manage Sharing of Documents Transaction Package
· C28 - Emergency Care Summary Document Using IHE Emergency Department Encounter Summary (EDES) Component
· C80 - Clinical Document and Message Terminology Component
· C83 - CDA Content Modules Component
· C148 - EMS Transfer of Care Component
· C154 - Data Dictionary Component
· C162 - Plan of Care Component
· C165 - Anonymize Long Term and Post Acute Care Assessment Data Component
· C166 - Operative Note Document Component
· C168 - Long Term and Post Acute Care Assessments Component
· C170 - Vital Records Component

The public comment period on these documents will be open from Monday, February 1st until Close of Business, Friday, February 26th. HITSP members and public stakeholders are encouraged to review these documents and provide comments through the HITSP comment tracking system. The documents and the HITSP comment tracking system are located on the HITSP public website.

As stated at the HITSP Panel meeting and in document HITSP 10 N 459 – No-cost extension to the HITSP contract, there is currently no plan for formal disposition of the comments gathered, and such work will be deferred until the resumption of normal HITSP activity. ANSI will export all comments gathered during the comment period, and publish them on the HITSP public website for broader access by members and industry.

Notes on Certification

One of the questions I get asked a lot is "What version of the C32, C83 and C80 specifications should I be using to meet the certification requirements in the IFR?"  The HITSP Care Management and Health Records TC waited to see what the Standards and Certification interim rule would look like BEFORE we finished updates to them.  The HITSP Panel Approved 2.0 releases of C83 and C80 were written AFTER we saw the rule, and contain provisions in them that SUPPORT that rule.  Prior Panel Approved versions (e.g., 1.1) of these specifications DO NOT contain these provisions.  So, if you want the best that HITSP has to offer for certification under meaningful use, use the 2.0 versions of the HITSP C80 and C83 specifications in your CCD implementations.  Take note: the CCHIT Comprehensive Certification still talks about Version 1.1.

The other question I get asked about a lot is what certification will look like, or how it will work.  I still don't know, because we haven't yet seen the promised rules.

Certification is a critical component for HIT products that must be completed BEFORE providers can take advantage of incentive payments.  It's nearly the end of February and we are still waiting on the proposed rule for certification for Meaningful Use.  This has a pretty significant impact in several ways:

1.  The proposed rule will likely have a 30-60 day comment period.
2.  That will be followed by at least a 30 day period to consolidate comments and generate a final rule.
3.  That final rule will likely have a 30 day period before it goes into effect.
4.  Certifying organizations will need to align their processes with the certification final rule...
5.  Which may include certification of the certifiers...
6.  Finally, products will need to complete certification...

If each of these steps is required, and takes at least a month, we are still six months away from having certified products under meaningful use.  That means maybe this summer we could see certified products.
Yes, CCHIT is going to certify products -- twice if it needs to, once to see if they meet requirements under the current IFR, and a second time if needed to address any gaps.  That certification isn't the same without the finished certification process regulation.

We are told to expect a Final Rule based on feedback to the Interim Final Rule this Spring (April - June).  That Final Rule could change certification requirements, although any major changes seem to be pretty unlikely.  Finally, the rule for Meaningful Use Incentives could also be finalized this spring, which could affect any sort of additional certification that goes over and above the Meaningful Use certification requirements.  That means that step 4 will require synchronization with the Final Rules, which could mean certified products would be available in the fall.

At our current rate, EHR products could still be certified before the end of this year, but not if we keep adding delays.  I've heard recently that ONC wants to get public input on certification processes before they even publish a proposed rule, which could delay things further.

Because of the way these processes work in government, it's very difficult to get any idea of what is going on.   During the development of regulation, the government basically acts like a black hole for information.  Lots of it may be going in, but nothing comes out until they are done.  I understand the need for this, but I very much wish that a SCHEDULE could be published so that the industry would have some idea what is happening.  A high level schedule with planned (but not promised) dates conveys very little about what is being done, but at least helps the industry to plan. 

At this time, I'm expecting another "Vacation Surprise" from HHS and ONC just like I got for Christmas last year.

Tuesday, February 23, 2010

Upcoming online Open Forum on ICT Standardization and eHealth

This Thursday I'll be participating in an open forum on Information and Communications Technology (ICT) standardization as it relates to eHealth.  The forum is being put together by Talk Standards.  You can find the announcement for the forum here.

The forum will be trying to address these questions:
  • How can ICT standardization best contribute to the development of eHealth services and systems?
  • What important lessons can be drawn from experiences around the world, including Europe and the USA?
  • To what extent should governments intervene in the standardization process to reach eHealth objectives?
  • Is it feasible that ICT standards enable patient choice and mobility; also across international borders?
I'll be contributing some of my experiences on international standards efforts and government initiatives in the US.  I hope to see you there.


Monday, February 22, 2010

What is in a Name?

What's in a name? That which we call a rose By any other name would smell as sweet.
-- Romeo and Juliet (II, ii, 1-2)
For some reason, naming things seems to be a really important act.  I have far too many years of experience as an engineer dealing with these names to really be offended when, at the very last minute, marketing changes the name of something in a user interface, and we have to search and replace all occurrences of the name used to explain it in all documentation and UI code.  As you can tell, changing the name doesn't change its behavior, but it often does change the perception.  Engineers often use the duck test when they name things.

If it looks like a duck, swims like a duck, and quacks like a duck, then it is probably a duck.
-- Various
Names are interesting because how you name something says something about your point of view.  Engineers name things based on how the code works.  We do things like create whole models of the world or significant parts of it (e.g., the HL7 Reference Information Model or the ebXML Registry Information Model) that are made up of reusable parts based on how those things act when we have to code them up.

Other people who have to use these systems are more interested in other attributes of the thing: how they are used in a specific business context, or what policies and procedures must be developed around them (the business viewpoint); what information they convey about a thing, and what specific attributes make this thing different from that thing (the informatics viewpoint).

Unfortunately, in just about all software engineering disciplines, all the good names are already being used. So the same names sometimes get used in the engineering viewpoint as in another viewpoint.  This can often create confusion because using the same name doesn't necessarily indicate the viewpoint of the namer.

Discussions around the names of things help when they expose these different viewpoints, and illustrate the different behaviours.  They can become divisive when there is a lack of clarity around which viewpoint is being discussed.  A current discussion in IHE is around the distinctions between a care plan used in nursing, and another plan used to coordinate care for chronic conditions.  There are, of course, different points of view, and unfortunately, we aren't necessarily clear about which point of view is being discussed.
From my perspective (an engineer's viewpoint), the care plan and the coordination plan require a lot of the same behaviours to be coded into the expression of the thing.  From the perspective of others, the behaviours are different.  The frequency at which the plan gets updated varies depending upon whether the plan is being used to support inpatient care or to manage care for a chronic condition from a different setting.  The determination of that frequency is a business decision based on policies necessary for appropriate care in each of these settings.  Does that mean I have to change the code I use to manage it?  Not necessarily.  Quite often, business rules are dealt with separately from the rules about representation and storage.  So, I'd like to call it a care plan ... from the engineering perspective.  But if I don't expose that perspective to others, and it doesn't use the same business rules, then its identity becomes confusing.
We have a similar problem in HL7 Version 3 constructs.  The HL7 RIM describes things from an informatics viewpoint.  We have acts and observations that describe the bits and pieces that make up a medical record described in a way that makes it easy to compute with things, and to expose similarities between things like problems and allergies.  However, these names in the RIM don't address differences in business rules around them, or perhaps even the engineering viewpoint.
A perfect example of where the informatics viewpoint and the engineering viewpoint differ in Version 3 is the distinction between codes and identifiers compared to how those same things are represented in HL7 Version 2.  Version 2 was much closer to the implementor viewpoint, and so a coded concept (CE) and an identifier (CX) both had a component called ID which was the identifier of either A) the concept or B) the thing to be identified.  In Version 3, the coded concept data type no longer calls the code value an identifier, even though it still identifies a concept from a specific coding system, and the II data type uses a term ("extension") completely different from "identifier" to talk about the part we (implementors) normally think of as the identifier part.  When I teach these data types, I start with identifier, explain how the parts map to the things that implementors think about, and then explain code and show the similarities.
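To see the renaming concretely, here is a rough Python sketch of the two viewpoints. The field names paraphrase the component names, and the structures are deliberately simplified (neither data type is really this small):

```python
from dataclasses import dataclass

# HL7 Version 2 names the first component "ID" in both data types,
# even though one identifies a concept and the other identifies a thing.
@dataclass
class V2_CE:                 # coded element
    identifier: str          # the code, e.g. "271649006"
    text: str                # display text
    coding_system: str       # e.g. "SCT"

@dataclass
class V2_CX:                 # extended composite ID
    identifier: str          # e.g. a medical record number
    assigning_authority: str

# HL7 Version 3 renames the same parts from the informatics viewpoint.
@dataclass
class V3_CD:                 # concept descriptor
    code: str                # still identifies a concept...
    code_system: str         # ...drawn from this coding system
    display_name: str

@dataclass
class V3_II:                 # instance identifier
    root: str                # namespace identifier
    extension: str           # what implementors think of as "the identifier"
```

Teaching in the order described above amounts to showing that `V3_II.extension` plays the role of `V2_CX.identifier`, and `V3_CD.code` the role of `V2_CE.identifier`.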
The purpose of the Micro-ITS (μ-ITS) is to make HL7 Version 3 easier to implement by exposing names of things using a different viewpoint.  The use of these different viewpoints is important because the implementors of HL7 Version 3 are NOT, by and large, informaticists.  They are, for the most part, software engineers and interface developers who have some experience with the business viewpoint in healthcare.  Trying to make them conform to the "informatics" viewpoint is proving to be counterproductive.  Informatics is a 2-3 year graduate degree program, but you don't need to have that background to implement an interface.
Being able to look at things from different viewpoints is what the thing formerly known as SAEAF, and now known as SAIF (The Services Aware Interoperability Framework) is helping HL7 to do.  Understanding what viewpoints are exposed and when and where they are discussed in the specifications you are working from will help people to better understand what we are talking about.
So, when you are discussing names, remember that one of the things in it is an associated viewpoint, and that it is important to expose and understand that also.

Thursday, February 18, 2010

Subsets and Value Sets

The HITFACA Blog reports:
On February 23, 2010, the Vocabulary Task Force established by the Clinical Operations Workgroup of the Health IT Standards Committee will hold a public hearing on “Vocabulary Subsets and Value Sets” as facilitators of meaningful use of electronic health records (EHRs).

And then provides a list of questions which I've responded to below:
  1. Who should determine subsets and/or value sets that are needed?
    It depends.  Subsets or value sets are needed for implementation guides and for much broader use cases such as Laboratory ordering and Results.  Consensus standards organizations should be responsible for determining which subsets or value sets are needed for their implementation guides.  Broader use cases may be driven by various initiatives at regional or national levels.  An organization responsible for harmonization of standards similar to ANSI/HITSP should also have a role in identifying value sets.
  2. Who should produce subsets and/or value sets?
    Consensus based Standards bodies should produce and MAINTAIN them.  Production seems easy, but a value set or subset that has no maintenance process has no life.  
  3. Who should review and approve subsets and/or value sets?
    It depends upon what they are used for.  Primarily the consensus groups of the producer organizations, but in some cases, such as value sets used for quality measures, review and approval could also include organizations like NCQA.
  4. How should subsets and/or value sets be described, i.e., what is the minimum set of metadata needed?
    See HITSP TN903: Data Architecture Technical Note
  5. In what format(s) and via what mechanisms should subsets and/or value sets be distributed?
    Value sets should be available in a standard format, such as the Rich Release format used by NLM for RxNORM and UMLS.
  6. How and how frequently should subsets and/or value sets be updated, and how should updates be coordinated?
    It depends on their use.  Updates for fairly static value sets should be reviewed at least every five years (ANSI rules use this figure for reaffirmation of standards).  Value sets for clinical use should be reviewed and updated at least annually.  Some value sets and subsets may need to be updated quarterly, monthly or even weekly (e.g., medications).  Updates may be delivered as a subset containing only the changes in more frequently updated value sets.
  7. What support services would promote and facilitate their use?
    Value sets should be available from a Web Service, such as that described in the HITSP T66 Retrieve Value Set Transaction.
  8. What best practices/lessons learned have you learned, or what problems have you learned to avoid, regarding vocabulary subset and value set creation, maintenance, dissemination, and support services?
    Building a value set requires a commitment to ongoing maintenance of it.  Dissemination should support both manual download and automated retrieval and update.  Support services require that there be a feedback mechanism (such as an e-mail list service) to comment on it.  Public input is absolutely necessary in the creation and maintenance of a value set.  Quick response may be needed for clinical value sets to address issues like H1N1 or new medications or treatment options.
  9. Do you have other advice or comments on convenience subsets and/or value sets and their relationship to meaningful use?
    Isn't this enough...
  10. What must the federal government do or not do with regard to the above, and/or what role should the federal government play?
    The Federal Government should have a role in the coordination of value set deployment activities.  Presently the CDC, NLM and AHRQ (USHIK) all have some role in the development or deployment of value sets, which includes overlapping distribution, delivery and maintenance responsibilities.  Duplication of these efforts is not useful.  It would be better if there was a single coordinated effort, which could include participation from all of these bodies.
    NLM has appropriate infrastructures for manual download, licensing and deployment.  CDC has appropriate infrastructures for some development of public health oriented value sets.  USHIK has appropriate infrastructures for delivery of knowledge about value sets (e.g., metadata).  To my knowledge, none of these provide for automated computer update of value sets using simple web services such as those described in the HITSP T66 Retrieve Value Set Transaction, but I believe CDC is closest to having that capability.
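The automated retrieval mentioned above is not complicated: a T66-style Retrieve Value Set request boils down to an HTTP GET keyed by a value set identifier. Here's a minimal Python sketch; the endpoint URL, the value set OID, and the parameter names are illustrative assumptions, not the normative binding.

```python
from urllib.parse import urlencode

def build_retrieve_value_set_url(base_url, value_set_oid, version=None):
    """Build a Retrieve Value Set request URL.

    The 'id' and 'version' parameter names mirror the general shape of the
    transaction but are illustrative, not the normative binding.
    """
    params = {"id": value_set_oid}
    if version is not None:
        params["version"] = version
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and value set OID:
url = build_retrieve_value_set_url(
    "https://terminology.example.org/RetrieveValueSet",
    "2.16.840.1.113883.3.88.12.80.62")
```

The point of sketching it is that a service this simple could be stood up by any of the bodies named above; the hard part is the maintenance and coordination, not the plumbing.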

Birthing of a New Standard

Green CDA has been getting a lot of buzz lately, so I figure it's time for me to add my two bits to the discussion.  First of all, this is a research project by HL7 to determine how to simplify CDA.  The first part of this project is to explore what [human] processes can be used to enable simple expression of the concepts described in the CCD implementation guide.  It's also an 80% solution that doesn't cover all the complexity supported by CDA or CCD.  Getting past the 80% solution to a full solution is the eventual goal, but that IS NOT IN the scope of the Green CDA project in HL7.

If "Green" development continues to require human intervention to "Green" other CDA implementation guides, Green CDA will suffer the same problems that other efforts have.  Green CDA is one example of a μ-ITS on some of the CCD templates, but there are over 500 templates that have been developed for CDA alone, and only a tenth of these are CCD templates, and not all of those are found in Green CDA. Green will not scale through manual efforts alone, and the complexity of the information will cause incompatibilities across different "Green" efforts.  Been there, done that, and have no desire to do it again.

So, the other leg of this research is to examine how the manual processes used to develop Green CDA can be automated through the development of a framework (and governance model) for creating a μ-ITS on an HL7 model that uses templates.   Because of the large number of templates involved, we also need a template registry (another HL7 project) to enable access to all of the template development that's been generated over the past 5 years in HL7, IHE, HITSP, Health Story and epSOS, just to name a quick handful. 

In The Standards Value Chain, Glenn Marshall does an excellent job of explaining the process of developing and implementing standards, and the related time frames. Green CDA isn't a solution that's just around the corner.  Gestation of a new standard isn't sped up by setting unrealistic goals or adding new mothers to help give it birth.

The key concept in CDA after human readable narrative is the clinical statement.  This is a sentence in a machine readable language.  It turns out that machine readability of this language is not quite as important as human understandability for implementors (it took me three years to learn to speak it).  Fixing that is going to require inventing a new language that works for both audiences.  Research and time are required, as well as technology that can simplify the XML and still support the rich information model needed for clinical decision support.  After all, if we can exchange the information, but cannot compute with it, we've defeated the purpose of the computable clinical statement altogether.

Healthcare is hard.  Making hard problems easy to solve is an even harder problem.  Don't expect a miracle this week, this month or even this year.  Just give it some time, and it will happen.  Continue to watch this space if you want to observe some of the birthing pains.

Tuesday, February 16, 2010

Templates and Vocabulary Bindings

One of the problems that the Structured Documents Workgroup has encountered over the past couple of years is the difficulty dealing with realm specific vocabulary bindings with templates.  The problem is that there isn't a universally accepted vocabulary for clinical terms across the various international realms.  So, building an implementation guide for the Universal Realm is difficult.

We want to be concrete in the selection of value sets used with a particular template that we create, but there's no agreement on the vocabulary to use.  We cannot use "fill in the blank" for everything because:

A) It's not universally available
B) Some other vocabulary is mandated nationally for that purpose

Et cetera.

So how do we resolve that problem?

I propose the following set of principles:
1.  Templates select a concept domain for a specific list of concepts.
2.  Templates show representative value sets that provide the list of concepts in that domain for a given vocabulary.
3.  Realm specific bindings take over from there.

Let's look at a simple use case: identifying vital signs at a high level.
1.  The concept domain is vital signs.  We define it to be a set of codes for recording vital signs (let's stick with the usual ones and not get too creative here, I know we can, but...)
Body Temperature
Blood Pressure
Pulse Rate
Respiration Rate
O2 Saturation
2. A representative value set from SNOMED CT is:
body temperature (386725007)
diastolic blood pressure (271650006)
systolic blood pressure (271649006)
respiratory rate (86290005)
pulse rate (78564009)
oxygen saturation measurement (104847001)

Another representative value set from LOINC is:
Body temperature (8310-5)
Diastolic blood pressure (8462-4)
Systolic blood pressure (8480-6)
Respiratory rate (9279-1)
Heart rate (8867-4)
Oxygen saturation (2710-2)
3.  Realm specific bindings determine which of the two value sets is used when the implementation is used in a particular realm.

This isn't so hard.  The really hard part is that we have to be willing to divorce the concrete bindings from the content of universal realm implementation.  As an implementer, what that means is that you have to go to a country specific list of vocabulary specified elsewhere (e.g., an appendix or a realm-specific implementation guide).  This is the so-called "HITSP problem", having to reference multiple documents to get what you need for an implementation.
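As a thought experiment, the three principles can be modeled in a few lines of Python. The SNOMED CT codes below come from the representative value set above; the LOINC codes are commonly used equivalents shown for illustration, and both realm binding choices are hypothetical examples, not actual national mandates:

```python
# Representative value sets for the "VitalSigns" concept domain
# (trimmed to two concepts to keep the sketch short).
representative_value_sets = {
    "SNOMED CT": {"271649006": "systolic blood pressure",
                  "86290005":  "respiratory rate"},
    "LOINC":     {"8480-6": "Systolic blood pressure",
                  "9279-1": "Respiratory rate"},
}

# Each realm binds a concept domain to one concrete value set.
# These bindings are hypothetical examples.
realm_bindings = {
    ("US", "VitalSigns"): "LOINC",
    ("UK", "VitalSigns"): "SNOMED CT",
}

def resolve(realm, concept_domain):
    """Return the concrete value set bound to a concept domain in a realm."""
    return representative_value_sets[realm_bindings[(realm, concept_domain)]]
```

A universal realm guide would ship only the concept domain and the representative value sets; the `realm_bindings` table is the part that lives elsewhere, in a realm-specific appendix or guide.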

The HL7 Vocabulary workgroup describes three core concepts in the HL7 Version 3 standard:
Concept Domain
Vocabulary/Terminology/Coding System
Value Set

A value set implements a concept domain by drawing terms from specific coding systems (a value set may contain concepts from different coding systems).  It concretely implements a concept domain.  What is missing here is the notion of a "Concept Set" which better defines a concept domain.  It's like a value set, except that it provides only definitions, not codes.  It's also like a concept domain, except that it is more concrete.

One way to express the contents of a concept set is to use metathesaurus concepts (e.g., from UMLS) to represent the set of concepts in a concept domain.  This doesn't solve the reference indirection problem for implementors directly, but it does allow implementors with access to the metathesaurus to automatically obtain bindings to specific vocabularies.  By replacing the representative value set with a concept set, we can potentially select across multiple vocabularies at the same time.
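A rough Python sketch of that projection: the concept identifiers below are placeholders rather than real UMLS CUIs, and the crosswalk is hard-coded here where a real implementation would query the metathesaurus.

```python
# A concept set: identifiers plus definitions, no codes.
# "CS-0001"-style identifiers are placeholders, not real UMLS CUIs.
concept_set = {
    "CS-0001": "systolic blood pressure",
    "CS-0002": "respiratory rate",
}

# (concept, vocabulary) -> code.  Hard-coded for illustration; in practice
# this mapping would be derived from a metathesaurus such as UMLS.
crosswalk = {
    ("CS-0001", "SNOMED CT"): "271649006",
    ("CS-0001", "LOINC"):     "8480-6",
    ("CS-0002", "SNOMED CT"): "86290005",
    ("CS-0002", "LOINC"):     "9279-1",
}

def value_set_for(vocabulary):
    """Project the concept set onto a single vocabulary, yielding a value set."""
    return {code: concept_set[cid]
            for (cid, vocab), code in crosswalk.items()
            if vocab == vocabulary}
```

The same concept set thus yields one value set per vocabulary, which is exactly the "select across multiple vocabularies at the same time" behavior described above.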

It's not a complete solution yet; I'm sure there are a few details that need to be worked out, but it is a start in the right direction.


P.S.  We still need a US Realm authority in HL7.  That would solve the other half of the problem that Structured Documents has to work around.  Due to the lack of that authority, US Realm guides are balloted by the entirety of the HL7 membership, and all too often, non-US realm specific concerns creep into US realm ballot comments.  Maybe someday we'll resolve that problem.  I keep hoping the replacement RFP for standards harmonization from ONC will move to address that issue.

Friday, February 12, 2010

HITSP Press Release: Contract with U.S. HHS Extended through April 2010

If you read here regularly, you already know this...

For Immediate Release

HITSP Contract with U.S. HHS Extended through April 2010

Washington DC, February 12, 2010: The Healthcare Information Technology Standards Panel (HITSP) is pleased to announce that its contract with the U.S. Department of Health and Human Services (HHS) has been extended through April 30, 2010.

“Since HITSP’s formation in 2005, the Panel has been working to advance the widespread adoption and interoperability of electronic health records,” said Fran Schrotter, HITSP project director and senior vice president and chief operating officer of the American National Standards Institute (ANSI), the organization that administers the Panel. “We are very grateful to the thousands of volunteer technical experts who have worked countless hours to create HITSP work products that help to enable health IT interoperability. Keeping this momentum going is a tremendous priority for us, and we are pleased that HHS has granted a contract extension that allows HITSP to continue its outreach efforts.”

During this contract extension period, the Panel will hold monthly informational update calls, participate in the HIMSS10 Healthcare Information Technology Conference and Exhibition, and work with the Centers for Medicare and Medicaid Services (CMS) on a Quality Data demonstration project. In addition, the HITSP website continues to be fully operational and accessible, offering stakeholders access to the Panel’s full program of work, as well as educational materials and archived webinars.

The extension will also assure that HITSP volunteers stay engaged going forward until the next phase of standards harmonization to be funded by the Office of the National Coordinator (ONC) is announced.
“ANSI and our strategic partners – the Healthcare Information and Management Systems Society, the Advanced Technology Institute, and Booz Allen Hamilton – are committed to progressing standards harmonization efforts and interoperability guidance in the health IT field,” continued Schrotter. “We are proud of HITSP’s accomplishments and are dedicated to a continued partnership. We look forward to responding to any requests for proposals issued by ONC to further this important work.”

For more information on HITSP, visit or contact HITSP secretary Michelle Maas Deane ( ; 212.642.4884).

About HITSP A cooperative partnership between the public and private sectors, the Healthcare Information Technology Standards Panel (HITSP) is a national, volunteer-driven, consensus-based organization that is working to ensure the interoperability of electronic health records in the United States.

Operating under contract to the U.S. Department of Health and Human Services (HHS), HITSP is administered by the American National Standards Institute (ANSI) in cooperation with strategic partners including the Healthcare Information and Management Systems Society (HIMSS), the Advanced Technology Institute (ATI), and Booz Allen Hamilton.


Blogging on Standards

In a few weeks I'll be at HIMSS and will be participating in a panel session of healthcare bloggers.  One of the questions I'm often asked is how I manage to write this blog and do my day job.  I used to spend from 1-4 hours on a post, but these days, my average time is anywhere from 15 to 30 minutes.  Some posts still take longer depending upon how much research went into them and the amount of time it takes to rewrite the post.

Since my day job includes activities in HL7, IHE and HITSP, I often have to respond to questions on different issues in those organizations. When I can, I try to respond in a way that gives me a blog post as well. This cuts down drastically on the amount of time I spend writing, since I've usually already put together a response.

Rewriting the content is sometimes easy, requiring just a few changes; other times it is much harder.  I spent about 8 hours each on the basic content for the Meaningful Use IFR Comments and the Meaningful Use NPRM Comments.  Rewriting that content so I could use it for the IFR took another 4 hours and about 8 rewrites (you can probably imagine why).  The second required little change and was ready in about 10 minutes.

In addition to dealing with questions on standards, I can usually get a good post about new standards development about once a month based on one activity or another that I'm working on.

I'm always trolling for topics.  Sometimes I'll start an article and put it on the shelf, saving it for a week when I don't have much time to write.

News headlines that I get through Twitter and other lists are another source of inspiration.  When I see these and have something to say, I do so, and where possible, post the link to my response as a comment on the article.  That's not a great way to get huge numbers of new readers, but it does bring a few new readers to my blog from different sources.  It's always best to strike while the iron is hot on those.  Posting comments a few days or even a few hours after the post has been tweeted gets a much lower response rate.  How do I know?

I track the use of my blog through Google Analytics.  This is a free tool that lets you know a great deal of information about where your readers are coming from, what days they are reading, how often they read, and what topics they are reading about.

I use twitterfeed to automatically tweet my posts and those of a colleague to help drive traffic to my site.

When I have a posting that is of interest to one or more of the various communities that I work with, I will often e-mail a link to the posting to the right list.  However, I do this sparingly, because I don't want to become known as a blog spammer.  I also try to make sure that the subject line of those e-mails tells the reader that it is a blog posting.  Many of us travel and have limited connectivity from time to time.  Downloading your e-mail and later finding out you need to be connected to the internet to read the post is frustrating (thanks to Ann W for that tip).


P.S.  This post took about 15 minutes to write.  I've been thinking about the topic since I was first asked to be on the Meet the Bloggers Panel at HIMSS, but I don't count that time...

Wednesday, February 10, 2010

If I had a Hammer Redux

My second post is also the second most popular post of all time on this blog:  If I had a Hammer

This morning's frustration is that ONC and CMS didn't get that message.  The current Meaningful Use regulation selecting standards (CCD and CCR) and the proposed regulation on incentives from ONC and CMS describe communicating discharge summaries in a way that doesn't work, because, ...well..., not everything is a nail.

First, let's help ONC explore what a discharge summary is:

A few of the links that come up from that query provide some ideas about what should appear:

There are a few things you can determine from this research:
  1. A discharge summary is required to have 6 things according to Joint Commission
    1. Reason for hospitalization
    2. Significant findings
    3. Procedures and treatment provided
    4. Patient’s discharge condition
    5. Patient and family instructions
    6. Attending physician’s signature
  2. A discharge summary is required to have 3 things according to CMS
    1. Outcome of Hospitalization
    2. Patient Disposition
    3. Provisions for Followup Care
  3. Common practice shows that discharge summaries typically include:
    1. Chief Complaint/Reason for Admission
    2. History of Present Illness
    3. Admission Diagnosis
    4. Discharge Diagnosis
    5. Medications on Discharge
    6. Discharge Instructions
    7. Followup Care
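The requirement lists above lend themselves to a simple checklist.  Here's an illustrative Python sketch; the section names come straight from the lists above, but the `missing_sections` helper is hypothetical, not part of any standard:

```python
# Required discharge summary content, per the lists above.
JOINT_COMMISSION_REQUIRED = {
    "Reason for hospitalization",
    "Significant findings",
    "Procedures and treatment provided",
    "Patient's discharge condition",
    "Patient and family instructions",
    "Attending physician's signature",
}

CMS_REQUIRED = {
    "Outcome of hospitalization",
    "Patient disposition",
    "Provisions for followup care",
}

def missing_sections(document_sections, required):
    """Return the required sections not present in a document (sorted)."""
    return sorted(required - set(document_sections))

# Example: a document that has everything except the signature.
doc = JOINT_COMMISSION_REQUIRED - {"Attending physician's signature"}
print(missing_sections(doc, JOINT_COMMISSION_REQUIRED))
# → ["Attending physician's signature"]
```

A real system would key these off section codes rather than display names, but the point stands: the required content is well defined and easy to check for.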
By the way, this is really old news.  The December 2001 publication of the HL7 Claims Attachments specification for Clinical Reports describes a discharge summary using the following LOINC codes.  The table below is drawn from the HL7 Claims Attachment guides, which were produced by HL7 in its role as a designated standards maintenance organization under HIPAA.  Later releases of these guides were referenced in the Claims Attachments regulation proposed by CMS some three or more years ago, and HL7 developed Release 3.0 of these guides throughout 2006 and 2007.  More than 8 years of standardization effort went into that work.


The HL7 Care Record Summary (release 1.0) further refined these requirements, and these were subsequently adopted by the IHE XDS-MS profile, which has also been selected by HITSP (see C48) and recognized by the former Secretary of HHS:
[Table: Code, Component Name, Required/Optional]

So, ignoring all the standardization that's already gone on before, let's look at the ramifications of stuffing a discharge summary into a CCD and/or CCR.
  1. What section of the CCD will the Admission and Discharge Diagnosis go in? 
    These could be listed as problems, but how will you distinguish between what was thought to be a heart attack on admission and was later found to be an ulcer (or vice versa) prior to discharge?  We could, of course, add a CDA section using the above LOINC codes, and that's allowed by CDA, but CCD doesn't say how to record these items, and CCR doesn't have the capability to add sections at all.
  2. Where will the patient's discharge condition go?  There's no place for that defined by either CCD or CCR.
  3. How about the reason for hospitalization?  There's no place for that defined by either CCD or CCR.
  4. History of Present Illness?  There's no place for that defined by either CCD or CCR.
  5. Physical Examination? Guess what I'm going to say here...
  6. Hospital Course?  Guess again...
  7. Patient Disposition? ...
  8. Outcome of Hospitalization?  ...
The most significant issue here is that in order to put a discharge summary inside a CCD or CCR, we have to stretch both of those publications past what they are intended to do.  A secondary issue is that by so doing, we will not get discharge summaries in any standard format because those publications do NOT say how to do it.

A discharge summary is a document that is mandated to be produced by a hospital on discharge or transfer of a patient from an inpatient stay.  It has required content for accreditation and existing regulation or requirements for payment.  Standards have already been developed that are consistent with some of the selected standards by the Meaningful Use IFR.  Let's stick with the standards, shall we, and not try to invent something new that doesn't work the way existing standards and implementation guides had intended.

Tuesday, February 9, 2010

Wisdom of the Crowds meets the Personal Health Record

Recently, I was completing a risk analysis on the IHE Perinatal Workflow profile.  One of the risks identified was the uncertainty, or lack of reliability, associated with externally supplied information, especially that provided by a patient through their PHR.  Another concern often expressed in the context of personal health records, though not in this specific case, was the possibility that a malicious user might use a PHR to support drug-seeking behavior.  As part of that analysis, I looked at possible mitigations for both of these risks.

In the first case, the issue is the uncertainty that a provider associates with patient supplied information, or information coming from unknown sources.  This risk isn't unique to the use of personal health records.  It already exists in the practice of healthcare when taking patient histories, or relying on externally supplied reports (e.g., discharge summaries) to provide patient information.  Existing practice includes procedures for verification of that information. 

When the PHR gets involved, what is different is the assumption that providers will perceive computer-supplied information as being more reliable than patient-supplied data.  We will need to ensure that providers are trained to question electronically supplied information the same way that they question patient or other externally provided data.  A simple acronym should help healthcare providers just as it has helped software engineers for many years:  GIGO... Garbage In, Garbage Out.

Providers can also cross-check information from a number of different sources.  This is something that health information exchanges enable that is not currently possible without extensive phone calls to the various healthcare providers for a patient.  Here we can apply the wisdom of the crowd, where the crowd in this case is the collection of healthcare providers who have seen the patient.  These same cross-checks can also be utilized to address the second risk, that of drug seeking behaviors.

The residual risk after implementing these procedures (known as mitigations) is the need to address new findings, symptoms or diagnoses previously unreported by other healthcare providers.  It is in these cases that the existing practice of verifying the information comes into play.  A final option here is the use of digital signatures on provider-supplied information.  The use of digital signatures ensures that the data came from the owner of the certificate, and that only that owner could have provided it, supporting "non-repudiation".  The source of the signed document won't be able to say "I didn't say that," because the digital signature will show that only they could have. 

This was an enlightening exercise, and I wish we had done it 4 years ago when we created the first profile supporting exchange of information with the personal health record.  It's not that these concerns aren't well founded, because certainly they are.  However, there are a number of ways to reduce or eliminate the associated risks that we've identified in the use of this and other IHE profiles.

The next time I hear someone raise these concerns, I'll take them through the risk analysis that I went through.  It will be interesting to see their responses.

Monday, February 8, 2010

How Much Information Should Patients See? All of it

One of the provisions of the HITECH Act and the regulations that have been either created or proposed as a result of it is that patients should be given copies of their health information, including problems, medications and allergies, but also discharge summaries, instructions and procedures. This seems to be a cause for concern on some fronts. See these two posts for example:
As a patient and an advocate for other patients, I want to see what another healthcare provider sees.  I don't want a restricted view into my own health records or those of other patients whose care I'm responsible for.  I know enough to ask questions when I don't know what something means.  I also want the ability to ask those questions of any healthcare provider or other information source available to me.

Ideally, giving patients copies of their medical records will be the start of a dialogue between them and their doctors.  By initiating this dialogue, providers will wind up with more educated and involved patients.  I understand how "raw reports" could cause a great deal of patient concern, especially "abnormal flags" on a lab report.  This too will need to be managed, just like any other change in our healthcare system.  If you want to do a good job, make sure that the way you convey that information is simple and easy to read.  It will not only benefit patients, but also the other providers who read those records subsequently.  But it shouldn't be used as an excuse to delay action on exchanging information with patients, or to push back the timelines of current regulations or plans.

I have one comment to make with regard to the definition of a medical record:  Any healthcare provider should already have defined for themselves what they consider to be the patient's legal medical record.  This is the record that they are responsible for maintaining for their practice.  The same definition should be used to define the medical record that the patient has access to.

All too often I've been involved in situations where trying to access my own record, or that of someone whose care I'm responsible for involves a great deal of effort, delays, and use of personal time on my part.  If you want my help, make it easy for me to get access to the information I need, and give me all of it.

Wednesday, February 3, 2010

CDA Design Patterns

Over the years I've seen a number of different patterns for collecting information in health information systems and communicating it in CDA documents.  Today's posting discusses some of the common patterns I've encountered.  Note that in this posting, I'm providing very abbreviated examples of what would appear in the CDA document; they are not valid without other details that are unnecessary for this discussion:

Checkbox Design Pattern
One of the common patterns that often needs to be represented is the checkbox pattern used in some products for review of systems, physical examination findings, or past medical history.  Often in these histories, reviews, or examinations, there are specific lists of symptoms or conditions that are being checked for.

When a box is checked, this typically results in an observation of the following form:
<observation classCode="OBS" moodCode="EVN" negationInd="false">
   <code code="finding" .../>
   <effectiveTime value="201002031714"/>
   <value xsi:type="CD" code="Crackles" .../>
</observation>

When the box is not checked, this typically results in an observation of the following form:
<observation classCode="OBS" moodCode="EVN" negationInd="true">
   <code code="finding" .../>
   <effectiveTime value="201002031714"/>
   <value xsi:type="CD" code="Crackles" .../>
</observation>
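For illustration only, here is how the checked/unchecked pair above could be generated programmatically; the only thing that changes between the two forms is the negationInd attribute.  This is a hypothetical Python sketch using ElementTree (`checkbox_observation` is not a real API, and the namespace declarations a real CDA generator would need are omitted):

```python
import xml.etree.ElementTree as ET

def checkbox_observation(finding_code, when, checked):
    """Build a checkbox-pattern observation.

    Hypothetical helper: a checked box is an affirmed observation
    (negationInd="false"); an unchecked box is the same observation
    negated (negationInd="true").
    """
    obs = ET.Element("observation", {
        "classCode": "OBS",
        "moodCode": "EVN",
        "negationInd": "false" if checked else "true",
    })
    # The code and value would carry real code systems in practice.
    ET.SubElement(obs, "code", {"code": "finding"})
    ET.SubElement(obs, "effectiveTime", {"value": when})
    ET.SubElement(obs, "value", {"xsi:type": "CD", "code": finding_code})
    return obs

# An unchecked "Crackles" box becomes a negated observation.
crackles = checkbox_observation("Crackles", "201002031714", checked=False)
print(crackles.get("negationInd"))  # → true
```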

Radiobutton Design Pattern
The Radio button design pattern asks a question about each symptom or finding, and offers one or more choices about its value. 

<observation classCode="OBS" moodCode="EVN">
   <code code="QuestionCode" ... />
   <effectiveTime value="201002031714"/>
   <value xsi:type="??" value="..." />
</observation>

One of the struggles with this design pattern is that codes for questions aren't the same as codes for findings or diagnoses.  The question is "Diabetes Status", the answers are yes/no/unknown or some variation thereof, but the diagnosis is "Diabetes" which is a different code.

For a yes/no/unknown value set for the answer, you can use the BL data type, with values of true or false to indicate yes or no, or a nullFlavor to indicate unknown.  This leads us to the next design pattern.
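Before moving on, here is a hedged sketch of that BL mapping.  The `bl_answer_attributes` helper is hypothetical; the true/false values and the UNK nullFlavor follow the BL usage just described:

```python
def bl_answer_attributes(answer):
    """Map a yes/no/unknown radio-button answer onto attributes for a
    CDA value element of type BL.  Illustrative sketch: yes/no become
    value="true"/"false"; anything else is treated as unknown and
    expressed with nullFlavor="UNK".
    """
    answer = answer.strip().lower()
    if answer == "yes":
        return {"xsi:type": "BL", "value": "true"}
    if answer == "no":
        return {"xsi:type": "BL", "value": "false"}
    return {"xsi:type": "BL", "nullFlavor": "UNK"}

print(bl_answer_attributes("Unknown"))
# → {'xsi:type': 'BL', 'nullFlavor': 'UNK'}
```

Note that a value with a nullFlavor carries no value attribute at all; that's the point of nullFlavor.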

Yes/No/Unknown Radiobutton Design Pattern
When the radio button design pattern is used with the answers yes/no/unknown, there's a tendency to model this using the checkbox design pattern.  The yes/no part can follow exactly the same pattern as the checkbox design pattern.  However, saying "unknown" is a little more challenging, because most vocabularies don't routinely support an "unknown X status" for all symptoms and findings.  When they do, the other challenge is that you have to have knowledge of the relationships between these two different codes.

So then people try to model it using the Radiobutton Design Pattern, but find that the codes for the diagnosis and the question about the diagnosis status are different.  This makes it difficult to compute against the diagnosis codes without understanding the relationships between them and the diagnosis status codes.  Also, there's a dearth of "diagnosis status" codes in most vocabularies.

There is a way to model this pattern in the value element using SNOMED CT and post-coordination:

<value xsi:type="CD" code="413350009" displayName="finding with explicit context" ...>
    <qualifier>
        <name code="408729009" displayName="finding context" .../>
        <value code="261665006" displayName="unknown" .../>
    </qualifier>
    <qualifier>
        <name code="246090004" displayName="associated finding" .../>
        <value code="286661006" displayName="Fever" .../>
    </qualifier>
</value>

This basically says there is a finding: "Fever" that has the context "unknown".  Other possible context values are "possibly present", "known present", "known absent", "confirmed", et cetera.

This means that the SNOMED post-coordinated pattern can be used in a regular fashion to represent the Yes/No/Unknown Radiobutton Design Pattern.  It's yet another model layered on top of the RIM, with the associated complexity, but it also has the benefit that this particular CDA design pattern can easily be used to represent all the variations on a theme for yes/no/unknown.
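For illustration, the post-coordinated value above could be assembled like this.  This is a hypothetical sketch: `finding_with_context` is not a real API, the SNOMED CT codes are the ones quoted in this post, and real CDA output would also need proper namespaces and codeSystem attributes:

```python
import xml.etree.ElementTree as ET

def finding_with_context(finding_code, finding_display,
                         context_code, context_display):
    """Build a post-coordinated "finding with explicit context" value,
    pairing an associated finding with a finding context qualifier."""
    value = ET.Element("value", {
        "xsi:type": "CD",
        "code": "413350009",
        "displayName": "finding with explicit context",
    })
    # Qualifier 1: the finding context (unknown, known present, etc.)
    ctx = ET.SubElement(value, "qualifier")
    ET.SubElement(ctx, "name",
                  {"code": "408729009", "displayName": "finding context"})
    ET.SubElement(ctx, "value",
                  {"code": context_code, "displayName": context_display})
    # Qualifier 2: the associated finding itself.
    assoc = ET.SubElement(value, "qualifier")
    ET.SubElement(assoc, "name",
                  {"code": "246090004", "displayName": "associated finding"})
    ET.SubElement(assoc, "value",
                  {"code": finding_code, "displayName": finding_display})
    return value

# "Fever" with context "unknown", as in the example above.
fever_unknown = finding_with_context("286661006", "Fever",
                                     "261665006", "unknown")
print(len(fever_unknown.findall("qualifier")))  # → 2
```

Swapping the context qualifier's value is all it takes to express "known present", "known absent", and the rest of the variations.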

Thanks to Sondra Renly who asked the question that got me looking down this path.

I'll be collecting more design patterns for CDA as the year goes on.  Send your favorites to me here.

Tuesday, February 2, 2010

Random Ramblings at IHE

Four IHE Domain Committees are meeting this week.  Patient Care Coordination, IT Infrastructure, and Quality, Research and Public Health are meeting in Phoenix, AZ, while Radiology meets in Oak Brook, IL.  We are meeting to kick off profile development for the 2010 season.

There's so much that goes on at a standards meeting like this; some of it is relevant to the meeting itself, and a good portion is related to other activities that the different groups are engaged in across the healthcare standards space.

Today's post is just a compilation of several bits of information that I've come across this week.

Two HIEs in Connecticut are using XDS.  A friend of mine who is involved in health information exchanges in Connecticut reported to me that two exchanges there, one in Western Connecticut (Danbury) and the other in Central Connecticut (Hartford), are using XDS and the HITSP C48 specifications.  I'll be adding those to the Where in the World is XDS map shortly, once I get more details.

The QRPH workgroup is working on the development of the Public Health Case Reporting profile.  This profile comes out of a workshop that several IHE members, including myself, participated in, as I reported here in the Making of an IHE Profile, and later in Part 2 on that same topic.  The profile proposal developed in that workshop resulted in active work on a new profile.  They've run into what I call the "Clinical Decision Problem" in Healthcare IT.  This particular problem is one I've encountered in several different interoperability scenarios, including Public Health Alerting (see HITSP T81), Chronic Care Coordination (see the IHE Care Management profile), Clinical Decision Support (see the IHE Request for Clinical Guidance profile) and now Public Health Case Reporting. 

The problem is simply that we have a standards gap in the representation of decision-making processes and guidelines for care.  I think the right answer is to develop a structured document to represent a guideline that brings in aspects of the HL7 Structured Document Architecture and Clinical Decision Support.  However, currently scheduled work in the HL7 Clinical Decision Support and Structured Documents workgroups is a necessary antecedent to this project, so there may be some delays in bringing this about.  We resolved the problem the same way that it has been solved in other situations, which is to make the description of the clinical decision support logic outside the scope of the profile.  We simply ensure that one of the actors is responsible for making the clinical decision based on information that it obtains through one or more interoperable transactions, and for responding with the appropriate result depending upon what decisions were made.  In this particular example, we are thinking that it should be possible to determine whether a case report is required depending upon the content of a CCD-based clinical document, and if so, to return a case report form that would generate the appropriate CDA-based document to report on the case.  This would occur using the IHE Request Form for Data Capture profile.

IHE PCC is developing its first workflow profile.  This is an interesting profile to work on (I'm the editor for it).  Basically, the profile brings together content from six different domains (PCC, PCD, LAB, RAD, ITI and QRPH), across 21 different profiles (XDS, PIX, PDQ, PAM, XD-LAB, LTW, SWF, SINR, APE, APS, APL, APHP, LDHP, LDS, MDS, NBS, NDS, PPVS, and MCH).  It sounds very complex, but I think the Perinatal Workflow profile does a great deal to simplify coordination of perinatal care.

Since a picture paints a thousand words (or in this case, 2,083, as I've counted them), I'll provide just a brief overview of what we are considering.

If you want more information, look at the profile supplement posted to the pcctech google group and to the IHE FTP site.