Thursday, October 29, 2009

Team Building and CDA Schematrons

I'm in Portland, Oregon for most of this week for a workout with one of our development teams.  Joining me are my boss and one of our other Standards Geeks.  I've long known the importance of face-to-face contact in developing relationships, and the benefits of my being out here for just one day are already apparent.  For one, I've learned that my boss is a better pool player than I am, but not by much ;-)

Our discussions today were far-ranging, but one of them centered on validating CDA constructs.  There's a great tool that's been developed by NIST called the CDA Guideline Validator.  This tool is based on collaborative work from NIST, Alschuler Associates, LLC, Integrating the Healthcare Enterprise (IHE) and the CCHIT Health IT Collaboration Effort "LAIKA" project.  While it is a great tool, there are still some process issues that need to be worked out for it to be of greater service to the healthcare IT industry.  Don't get me wrong, I love these tools, and point people to them several times a month, but I'd like to see a little bit more.

One of the issues that I run into is that we have to revalidate this tool every time it gets updated before we can use it as part of our testing processes.  Here are a few suggestions that I think would improve the use of this tool in certification testing and vendor implementation.

1.  I'd like to see a design document that describes the overall architecture of the validation tools.  The Schematron source code for the validator is great, and frankly, I know a good bit about how it was built, but others need access to that information as well.  This need not be a long document; it could be as short as 3-5 pages.
2.  I'd like to see an explanation of how to read the output documented somewhere.
3.  I'd really like to see a validation plan that shows how the rules implemented by the tool are tested, and a validation suite that tests the rules both positively and negatively (I sketch what such a test harness might look like after this list).
4.  Having Andrew's and Mary's e-mail contact information available is terrific, but it's a little hard for people to find when they need to report problems.  Also, not having an active bug list that people can view and track makes keeping up with the issues a bit of a struggle.  I'd like to see a real bug tracking system installed and accessible from the main page.
5.  Coordinating feedback from bugs to the organizations responsible for interpretation of the various templates is also difficult (I'm involved in three of them, and I hear from Andrew regularly with all three hats on).
6.  I'd love to see a way to directly link each reported error to the appropriate place in the document from which the rule is derived.  For HITSP and HL7 this is fairly straightforward (it's just a link to the appropriate constraint ID in the document), but for IHE it is a little more difficult.  The IHE profiles need to be a little more formal about the constraints in the profiles, now that we have removed the Schematrons themselves from the technical framework (note to self: add as a discussion item at the PCC face-to-face in two weeks).
7.  We need a somewhat more formal governance model for dealing with interpretation of the standards and implementation guides that this tool supports.  Somewhere along the way I'd like to see a way to verify that the various tests do in fact meet the requirements specified by each of the guides, and a process for resolving issues that require input from the appropriate authorities.
8. I think this tool could be taken further, and I'd like to see it run as an open source project(*) that we could all participate in. I think structuring it that way would provide more capacity for improvement. 
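To make item 3 above concrete, here's a minimal sketch in Java of the kind of positive/negative test harness I have in mind. It assumes the Schematron rules have been compiled to an XSLT that emits SVRL (the usual approach), and all the file names here are illustrative, not real.

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;

public class SchematronRuleTest {
    public static void main(String[] args) throws Exception {
        // The compiled Schematron rules, as an XSLT transform (illustrative file name).
        Transformer validator = TransformerFactory.newInstance()
            .newTransformer(new StreamSource("cda-rules-compiled.xsl"));
        // A sample that should pass the rule produces zero failed assertions...
        System.out.println("good sample failures: " + failedAsserts(validator, "good-sample.xml"));
        // ...and a sample that deliberately violates it produces at least one.
        System.out.println("bad sample failures:  " + failedAsserts(validator, "bad-sample.xml"));
    }

    static int failedAsserts(Transformer validator, String instance) throws Exception {
        DOMResult svrl = new DOMResult(); // the validator's output is an SVRL report
        validator.transform(new StreamSource(instance), svrl);
        return ((Document) svrl.getNode()).getElementsByTagNameNS(
            "http://purl.oclc.org/dsdl/svrl", "failed-assert").getLength();
    }
}

Run over a pair of samples per rule, those counts make regressions in the validator visible immediately, which is exactly what you want when revalidating each new release.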

We've already got a terrific bunch of players who developed this tool; let's put them all on the same team.

* LAIKA is already an open source project, but it only supports the C32.  I'd like to see the entire set of Schematron validation tools become a separate open source project.  Oh, and it'd be really nice if they supported the IHE SVS profile for value set validation (ITI changed the profile during public comment last year to support HTTP GET retrieval of the value set in XML for that very reason). And furthermore..., oh, just start the thing and I'll chime in.
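For the curious, value set retrieval in that SVS style is about as simple as web programming gets. Here's a hedged sketch in Java; the endpoint URL and the value set OID are both made up for illustration, since the real ones come from the repository you're querying.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ValueSetFetch {
    public static void main(String[] args) throws Exception {
        // Hypothetical repository endpoint; the id parameter carries the value set OID.
        URL url = new URL("http://example.org/RetrieveValueSet?id=1.2.3.4.5");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(url.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // the value set, as XML a validator can consume
        }
        in.close();
    }
}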

Wednesday, October 28, 2009

The H1N1 Use Case

Many of the HITSP specifications that I've worked on over the past years have been in response to prior events:


The Biosurveillance use case followed the Anthrax scares
Emergency Responder followed Hurricane Katrina
Immunization and Response Management was created around the same time we kept hearing about H5N1 Bird Flu.

The last use case is the most relevant one to my family at the moment.

Not too long ago, I listened to a story on National Public Radio (my most common source for national news that isn't an RSS feed) about the outbreak of H1N1 flu in Texas, specifically in Austin, where my family and I would be visiting for a family wedding.  The H1N1 vaccine had not yet arrived in Massachusetts, where I live.  We talked to our children's nurse practitioner, Nurse J, about our pending visit, and she agreed that the children should be vaccinated.  She hounded local public health officials regularly and eventually managed to track down doses of the spray, and my children were vaccinated several days prior to our visit to Austin (they also received their seasonal flu shots the same day).

We were never really able to address the response management portion of the Immunization and Response Management use case, mostly because it dealt with supply chain issues.  That's really outside the scope of most of the expertise involved in HITSP activities.  Somehow we need to get back to that use case and address that portion of it, so that providers like Nurse J can focus on treating patients instead of chasing down what they need to treat them.

However, what Nurse J and providers like her demonstrate is that it isn't just technology that will resolve our current problems in healthcare.  We need more good providers out there like her, who do what it takes to treat their patients, and more good patients who do what it takes to stay healthy.

Monday, October 26, 2009

HL7 Standards Activities

Usually I just twitter these reports from the TSC and post links to the HL7 Project Database, but this week there are about seven different HL7 standards initiatives to mention from the TSC, and one significant one from the Structured Documents Workgroup, so I thought I'd give a brief synopsis here:

First and most importantly, HL7 CDA Release 2.0 reaches a significant milestone next April: its fifth "birthday" as an HL7 and ANSI approved standard.  Under ANSI rules, a standard must be reaffirmed, revised or withdrawn after five years.  So the HL7 Structured Documents Workgroup has initiated a project to reaffirm CDA Release 2.0 as an ANSI/HL7 approved standard.  While the committee is presently working on CDA Release 3.0, we do not expect that work to be completed by the expiration of the existing CDA Release 2.0 standard.  A number of international projects currently use CDA Release 2.0, so there will be a ballot item in January to reaffirm the standard as is.  The project will proceed to the Structured and Semantic Design Steering Division and then to the TSC for final approval.  My expectation is that it will achieve these approvals without any difficulty.

The following three items have been approved as new standards work items by the HL7 Technical Steering Committee (TSC):

  • Security Domain Analysis Model, Release 1 for Security Work Group [WG] of Foundation and Technology Steering Division [FTSD]. This project is intended to create and ballot a single HL7 Domain Analysis Model (DAM) integrating both security access control and privacy information models.
  • Implementation Guide for CDA Release 2 Level 3, Neonatal Care Report by Structured Documents WG of Structure & Semantic Design Steering Division [SSD SD]. The implementation guide will support electronic reporting of an initial segment of the data elements in the CHNC Neonatal Intensive Care Unit (NICU) Core Data Set (CDS) from Neonatal Intensive Care providers to Children’s Hospitals Neonatal Consortium (CHNC).
  • Implementation Guide for CDA Release 2: Procedure Note (Universal Realm) by Structured Documents WG of SSD SD. This project is to design a basic procedure note in XML as a constraint on HL7 v3 CDA r2. The note will be basic enough to be used for all procedures and will develop a sample note for endoscopy. To promote standardization and acceptance, it will be closely modeled on the current HL7 CDA Operative Note.
The TSC also approved the publication or extension of several DSTUs and Informative Documents.

Finally, Charlie McCay was unanimously re-elected as co-chair of the HL7 TSC.  Congratulations Charlie!

Wednesday, October 21, 2009

IHE PCC Profile Requirements for Templates

I just finished an interesting discussion with an IT Architect who is implementing some of the HITSP Constructs (C32 Summary Documents using the HL7 Continuity of Care Document).  He identified a significant gap in explanation or detail for the various specifications that HL7, IHE and HITSP have produced.

There are at least three parts to the problem:

  • Template Inheritance
    What does it mean when template A from one specification requires conformance to template B from another?
  • Required Templates
What does it mean when an XML element adhering to template A requires the presence of template B inside it?
  • Use of Additional Constructs
    If you want to include additional XML structures inside a template, is that allowed?
Template Inheritance
HL7, IHE and HITSP have all made use of template inheritance in their specifications.  This creates a layering of constraints on the information that is allowed to be present in the final XML.  I talked about the layering of these constraints previously when I discussed Template Identifiers, Business Rules and Degrees of Interoperability.

When HL7, IHE or HITSP requires the use of a previously defined template, they never intentionally create a situation where newly imposed constraints conflict with preexisting requirements.  There have been a few mistakes (which is normal in the course of human events), but these are technical errors, not intentional overrides of any requirements.  I use the term "requirement" advisedly, because any one of these organizations may override a MAY or SHOULD if they deem it necessary.  MAY and SHOULD are not taken to be absolute requirements, only guidance (see RFC 2119).


Required Templates
For example, the IHE PCC Technical Framework indicates that for the History of Past Illness Section:
"The History of Past Illness section shall contain a narrative description of the conditions the patient suffered in the past. It shall include entries for problems as described in the Entry Content Modules."
And further down, the table requires the presence of the Problem Concern Entry.

Must that entry appear directly within the section, as given in the example?

The answer to that question is no, which would only be apparent if you read the conformance tests we provided in Schematron in previous editions of the technical framework.  The requirement is that the specified template must be contained within the section, but it need not be a direct child.  The rationale for this choice in IHE was that sections and entries may have subcomponents that contain the necessary material.  For example, sections may have sub-sections containing the necessary details.

Suppose for example that you wanted to insert an organizer to collect a bunch of problem concern entries.  Would that be legal?  Yes it would.

Doesn't that create more ways to represent the information and more optionality?  It does create more optionality, but the IHE constraints are sufficient to enable location of the required entries.  They can be found using an XPath expression rooted in the context of the outer template (which is in fact how the validation Schematrons test for this).

So, if you want to find the Problem Concern Entries of a section, you can use the following XPath expression: .//cda:act/cda:templateId[@root = "1.3.6.1.4.1.19376.1.5.3.1.4.5.2"]
This basically says to find the CDA Act elements that assert conformance to the Problem Concern template that are descendants of this element.
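If you want to try that in code, here's a minimal sketch of evaluating the expression from Java. The input file name is hypothetical; the only real requirement is that the cda prefix be bound to the urn:hl7-org:v3 namespace.

import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class FindProblemConcerns {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // without this, the cda: prefix will never match
        Document doc = dbf.newDocumentBuilder().parse("example-c32.xml"); // hypothetical file
        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "cda".equals(prefix) ? "urn:hl7-org:v3" : XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String uri) { return null; }
            public Iterator getPrefixes(String uri) { return null; }
        });
        // In practice you'd evaluate from the section element that asserts the outer
        // template; the descendant axis (.//) is what makes the nesting legal.
        NodeList hits = (NodeList) xpath.evaluate(
            ".//cda:act/cda:templateId[@root = '1.3.6.1.4.1.19376.1.5.3.1.4.5.2']",
            doc.getDocumentElement(), XPathConstants.NODESET);
        System.out.println("Problem Concern Entries found: " + hits.getLength());
    }
}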

Use of Additional Constructs
HL7, IHE and HITSP do NOT forbid the use of anything otherwise allowed by the underlying standard, except in VERY limited cases (I personally can only identify one instance, in the CCD Payer Section).  Anything not explicitly prohibited by the templates (or the underlying standard) is allowed.  The rationale for this rule is that we cannot predict the new use cases that will appear for one of these templates, and this maximizes the interoperability of the specification.

In HL7 specifications, this resulted in the specification of originator and receiver responsibilities in several HL7 CDA Implementation Guides.  I've summarized these below, but you can find the full text in the History and Physical Implementation Guide DSTU.
Originator Responsibilities
The originator shall apply a template identifier to assert conformance to that template.  It need not assert a template identifier if it chooses not to assert conformance.

Receiver Responsibilities
A recipient may reject an instance (e.g., a document) that does not conform to a particular template, but it is not required to do so.  It SHALL NOT reject an instance that is not explicitly required to assert conformance to a template.

NOTE:  These responsibilities would appear in the Platform Independent Business Viewpoint of the SAEAF Matrix I depicted here.
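In code terms, the receiver responsibility boils down to something like the following sketch. The method names and the validation hook are mine, purely for illustration; a real check would also look only at direct templateId children rather than all descendants.

import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ReceiverPolicy {
    /** True if this entry carries a templateId asserting conformance to the given template. */
    static boolean assertsTemplate(Element entry, String templateOid) {
        NodeList ids = entry.getElementsByTagNameNS("urn:hl7-org:v3", "templateId");
        for (int i = 0; i < ids.getLength(); i++) {
            if (templateOid.equals(((Element) ids.item(i)).getAttribute("root"))) {
                return true;
            }
        }
        return false;
    }

    /**
     * Reject only when conformance is asserted and validation fails; an entry that
     * never claimed the template SHALL NOT be rejected merely for lacking it.
     */
    static boolean shouldReject(Element entry, String templateOid, boolean validates) {
        return assertsTemplate(entry, templateOid) && !validates;
    }
}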

So, yes, you are allowed to include additional entries, subsections or section content within a document if it is otherwise allowed. A caveat: the CCD specification does classify a document as being a "summary of episode note". If you delve into areas where other clinical notes better represent the information you are looking for, read what I have to say about hammers here.

Tuesday, October 20, 2009

HIT Policy with Respect to Imaging Specialities

If you've been following me for the past week, you can guess that what I've been reading is the various and sundry meaningful use requirements and policy recommendations.

What has me scratching my head right now is the nearly complete lack of integration of imaging into the HIT Standards selections.  Yes, imaging reports can be shared using the HIT Standards Committee selections, but the images themselves cannot.  If the point of meaningful use is to reduce costs, and one way to reduce costs is to eliminate the need for duplicative testing, then why wouldn't we be identifying standards that allow the images themselves to be shared?  Right now the only thing the HIT Standards Committee has identified is the need to share radiology reports, and what they've chosen was originally written for the anthrax biosurveillance use case.

My wife would really enjoy not having to drive 60 extra miles to transfer films from one provider to another so that her primary care provider can view her mammograms alongside the radiology report.

The HIT Policy committee is meeting later this month (October 27 and 28) to discuss (among other things) "the mapping of core Meaningful Use objectives and existing measures to medical specialties, small practices, and small hospitals."

My hope is that some of their invited experts are from the imaging specialties, and that they talk to them about the needs to integrate imaging into the healthcare landscape.

Monday, October 19, 2009

HIT Standards recommendations and TLS 1.2 Support

I've been reviewing the outputs of the Security and Privacy (and Infrastructure) workgroup of the HIT Standards Committee for the past couple of weeks. Like many of you, I'm trying to decipher the ramifications for EHR users and implementers. I've been told that many of the standards recommendations on these sheets were established on the basis of industry readiness. However, the recommendation for SHA-2 is a departure from this criterion.


The basis for that particular recommendation is Federal agency requirements set forth by the National Institute of Standards and Technology (NIST). The policy established by NIST in March of 2006 results from research showing weaknesses in the SHA-1 algorithm. You can see the NIST policy statement here, but I've also reproduced it below.

March 15, 2006: The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash algorithms. Federal agencies should stop using SHA-1 for digital signatures, digital time stamping and other applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010. After 2010, Federal agencies may use SHA-1 only for the following applications: hash-based message authentication codes (HMACs); key derivation functions (KDFs); and random number generators (RNGs). Regardless of use, NIST encourages application and protocol designers to use the SHA-2 family of hash functions for all new applications and protocols.

An interesting post from Xuelei Fan seems to indicate that the NIST guidance could be satisfied by an acceptable profile of TLS 1.0. I haven't done enough review of his assertions to determine whether I agree with his recommendations, but they seem to be reasonably thought out.  Others have indicated that the deprecation of SHA-1 would eliminate TLS 1.0 from consideration as a protocol for securing communication channels, and would require support for TLS 1.2.
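As an aside, the digest algorithm itself isn't the bottleneck; SHA-2 has been available in the standard Java runtime for years. A quick sketch:

import java.security.MessageDigest;

public class Sha2Digest {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256"); // a SHA-2 family member
        byte[] digest = md.digest("hello, world".getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // render the 256-bit digest as hex
        }
        System.out.println(hex);
    }
}

The trouble, as the rest of this post lays out, is everything above the digest: the protocol stacks that have to negotiate SHA-2 based cipher suites.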

Implementation of security protocols is best left to experts, and unfortunately, the experts are a bit behind in supporting TLS 1.2 on many fronts. There are at least five key areas to consider:

1. Operating System Support
2. Programming Language Environment Support
3. Web Server Support
4. Browser Support
5. Communication Library Support

Operating System Support

Of the most common operating systems, only Microsoft Windows provides cryptography support in the OS itself, as far as I can tell. The most recent versions of Windows (Windows 7 and Windows Server 2008) support TLS 1.2 in SCHANNEL, the crypto library that ships with the OS. This accounts for less than 3% of the Windows OS deployment according to some sources (approximately 75% of Windows users are still on XP according to those same sources).

However, this support does not appear to be enabled by default, which creates some challenges. See http://forums.iis.net/t/1155254.aspx.

Programming Language Environment Support

Support for TLS 1.1 or 1.2 is not present in the most recent versions of Java or .NET, two of the most common platforms for application development. TLS 1.0 support is present in both the .NET and Java libraries.
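You can verify this on your own runtime with a quick probe; on the Java 6 environments I've tried, only TLSv1 comes back as available. This sketch simply reports what your JSSE provider offers.

import javax.net.ssl.SSLContext;

public class TlsProbe {
    public static void main(String[] args) {
        String[] protocols = { "TLSv1", "TLSv1.1", "TLSv1.2" };
        for (String protocol : protocols) {
            try {
                SSLContext.getInstance(protocol); // throws if the provider lacks it
                System.out.println(protocol + ": available");
            } catch (java.security.NoSuchAlgorithmException e) {
                System.out.println(protocol + ": not available in this runtime");
            }
        }
    }
}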

Web and Application Servers

The only Web Server I’ve been able to find that supports TLS 1.2 natively is Microsoft’s IIS 7.5 or later, and only if deployed on a Windows OS supporting TLS 1.2. Application server support is essential if TLS 1.2 is to be used to deploy secure web services (using either REST or SOAP).

Communication Libraries

In the past, we've seen communication libraries act as a workaround that enables application developers to keep existing socket implementations and use the library as a gateway to secure communications. However, I've found only two libraries, GnuTLS 2.8 and yaSSL, that support TLS 1.2. Even OpenSSL doesn't support TLS 1.2 yet.

Browser Support

What if you are using a thin client? Well, unless your thin client is using Opera 10, or Internet Explorer 8, you won’t see support for TLS 1.2. I have no clue whether the thin clients on smart phones support TLS 1.2, but given the dearth of support on the bigger platforms, I suspect smart phone support for TLS 1.2 is just simply not going to be there.

Commercial Support
Commercial support for TLS 1.2 is available for some web/application servers and programming languages, but presently this support only appears to be readily available from one vendor (that vendor by the way, also happens to hold the patents on some of the encryption used for SECRET and TOP SECRET classified information).

As a result of this review, I find myself extremely concerned about how the HIT Standards committee recommendation for SHA-2 would be implemented by the industry.  Basically, users of HIT Software would need to upgrade to new operating systems or platforms, or purchase additional software or hardware to meet some of these requirements.  I realize that our government partners already need to do so because of the NIST recommendation, but in some ways, this appears to be the tail wagging the dog.

Friday, October 16, 2009

Making it Possible for Doctors to Trust your Electronic Health Data

Jamie Ferguson raised a number of important issues in his posting here, titled Can Your Doctor Trust Your Electronic Health Data.  As Jamie points out, the question is whether physicians will trust and use dynamically generated health information aggregated from multiple sources. There are two different responses to the issues Jamie raises: the first is a technical/social one, and the second deals with acculturation.

Technical/Social Solutions
My first response provides a technical solution that takes advantage of existing methods to engender trust from healthcare providers.

The composition of information from multiple sources has a long tradition in healthcare, especially in clinical research.  It's hard to find a report on evidence-based medicine that doesn't claim 20 or more sources for the statements made in the research report.  Each of these statements is clearly marked, and a reference to the original work is included.

These days, the lack of references in a published report is commonly used as an initial indicator of low-quality research, and thus it engenders a low level of trust from readers. Elsewhere in technology, Wikipedia has a bot that checks articles to see how many (or few) references they cite, and marks articles that fail to meet the quality goals of the Wikipedia site. The original Google search engine premise is that the more links (links = references) there are to an article containing the search keywords (especially when the linking article also contains them), the more likely it is to be relevant (trustworthy) to the user.

Ideally, when research reports are published electronically, these references are hyperlinked to a location where the referenced content can be retrieved.  Sometimes a unique identifier, such as an ISBN (for a published work), is included, which makes it easier to retrieve from a library or order from a publisher. Either of these mechanisms enables the reader to verify (some more quickly than others) that the original statements being referenced are not taken out of context.  However, the mere presence of these references is often sufficient to provide a degree of trust.  This is especially true when other key information about the source is included that supports the case for trusting the information (e.g., a reference to a well-known author/researcher, et cetera).

When HL7 developed the CCD specification, IHE profiled it in the PCC Technical Framework, and HITSP developed the original C32 specification, we spent a lot of time talking about this same issue. The CCD specification provides several mechanisms to record the source of the information (either as the author, the informant, or a reference to the document from which it originated). The IHE work simply refines the document reference a little further, and the HITSP work makes use of these methods. The HITSP and IHE work go one step further for authors and informants, adopting constraints from an HL7 implementation guide that ensure that authors and informants are reachable by the reader of the document: each author or informant represented in the clinical document must have a name, address and telephone number. This enables the sources of information to be traced backwards.


So, a consumer's personal health record, or an aggregated record created by a provider's HIT system, can use these same concepts when reporting information about a patient's health. Aggregated healthcare records can (and should) contain references to original clinical documents.  This will engender greater trust from healthcare providers, EVEN when the original documents are not immediately available.  The fact that there is an original, and that the source of the original is accessible or reachable (perhaps with some additional work), will provide an additional degree of trust in the aggregated information.

Acculturation
The second response is that the provider response to the sharing of aggregated clinical information coming from the patient will change over time.  The initial levels of trust will be low, but as technology advances, providers gain more acceptance and experience with it, and additional methods are put into place to engender trust, more trust will come.  Look at the Internet: today, most people wouldn't think twice about entering their credit card information online, but ten years ago this was a scary prospect.  Trust is not something one develops in a relationship overnight.  It takes time and nurturing to develop.  To get the time we need, we need to take the first step -- which is to be willing to try something new.

Thursday, October 15, 2009

Interpreting the HIT Standards Spreadsheet

Like many of you who read this blog, I've been spending a great deal of time reviewing the HIT Standards Committee recommendations that have been out for about a month now, and the most recent updates to them for Security, Privacy and Infrastructure.

Interpreting the Clinical Operations Recommendations
The Clinical Operations document has been very difficult to read because it groups various functions together when making the standards recommendations, but then doesn't identify which standards apply to which functions.  I did a little calling around and digging to find out what it all means, and am reporting my findings here in the hopes of helping to dispel similar confusion others face.

However, I must note that you use this information at your own risk.  The interim final rule will be published in the Federal Register some time in December.  While I may have some good guesses, nobody except those at ONC who are writing it really knows what it will contain.

First of all, the Clinical Operations Workgroup spreadsheet was limited to two pages to avoid scaring people off.  However, if you've tried to print it out like I did, and your vision is as good (as bad really) as mine, you'll see that the information is as dense as what could have appeared in four or more pages.  Also, the tabular format really wasted a lot of white space that could have been put to better use.

Summary Records, Clinical Reports, Encounter Messages, Radiology Messages, Allergies, and Clinical Notes Content Exchange
The HIT Standards Committee recommended use of the standards specified in HITSP Capabilities 117, 118, 119, 120, 126, 127 and 137 for these purposes.  What they didn't say, which would have been helpful, is which of these capabilities applies to Summary Records (119 and 120), Clinical Reports (126 and 127), Encounter Messages (137), Radiology Messages (137), Allergies (I haven't figured that one out yet, but it appears to be 117 and 118 by a process of elimination), and Clinical Notes Content (119 and 120).

As always, when you use a term or short phrase, it's really helpful if you give a definition of it, especially if your readers weren't part of the norming process that you went through.  I'd like to point out that "Clinical Reports" and "Clinical Notes" are pretty hard to distinguish, and I've only managed to do so because I did the mapping.  This is one of the principles that the HITSP Data Architecture specification uses: each thing we discuss has both a name and a definition.  Hopefully, future communications from the Standards Committee will be more helpful here.

I won't go into all the gory details for the other requirements.  Now that you know the general pattern, you should be able to apply it on your own.  The key thing in understanding the HIT Standards document is to A) look at the HITSP Capabilities they identified, and then B) break down the requirements by function and map them to capabilities.

I'll be working on some thoughts about how to better present this information, and will communicate those to members of the Clinical Operations Workgroup.

Interpreting the Privacy and Security (and Infrastructure) Workgroup Recommendations
The output of this group is much easier to read.  There are four tabs in the spreadsheet, crossing certification/guidance with security/infrastructure (there's very little here with respect to privacy yet).  My interpretation is that the guidance tabs identify the appropriate HITSP specifications that will help you understand what to implement, and the certification tabs tell you the standards that have been recommended.  Why they couldn't just identify the HITSP specifications and leave the naming of standards to those documents is beyond my pay grade.

Document Exchange

This was most helpful except for one thing: Document Exchange.  The question I've had to track down is whether a document exchange standard (e.g., XDS.b, XDR or XDM) is required for 2011 or not.  The spreadsheets from the workgroup specifically call out required standards, and specifically name XDS.b, XDR and XDM, but only for 2013 and beyond.  However, they also call out HITSP Service Collaboration 112, which specifically names XDS.b, XDR and XDM.
 
So, I sent out several queries, and the feedback I've heard and seen from others who have been in communication with the Security and Privacy Workgroup and the Clinical Operations Workgroup is that the intent was that ONE OF these would be needed to support the exchange of patient information, just as is specified in Capabilities 119 and 120, and the underlying Service Collaboration 112.

Wednesday, October 14, 2009

World Standards Day

Today is the 63rd anniversary of the first meeting of the International Organization for Standardization (ISO).  I've been involved in healthcare standards for about 6 years, and to some degree in W3C XML standards for about 3 years before that.

Here are 50 of the standards I've used over the years.  How many do you recognize?
  1. ANSI X3.135-1989
  2. ANSI/HL7 2.5.1-2007
  3. ANSI/HL7 CDA, R2-2005
  4. ANSI/HL7 V2 XML, R1-2003
  5. ANSI/HL7 V3 RIM, R1-2003
  6. ANSI/HL7 V3 TRMLLP, R2-2006
  7. ANSI/HL7 V3 XMLITSDT, R1-2004
  8. ANSI/HL7 V3 XMLITSSTR, R1-2005
  9. ASTM E1239-04
  10. ASTM E1384-02a
  11. ASTM E1633-02a
  12. ASTM E1762-95
  13. ASTM E1869-04
  14. ASTM E1985-98
  15. ASTM E1986-98
  16. ASTM E2369-05
  17. C89
  18. C90
  19. C99
  20. CSS2
  21. DOM2
  22. ECMA-262
  23. FIPS 46-2
  24. FIPS 5-2
  25. HTML 4.0
  26. ISO/HL7 21731:2006
  27. ISO/HL7 27931:2009
  28. ISO/IEC 14882:1998
  29. ISO/IEC 14882:2003
  30. ISO/IEC 16262:1998
  31. ISO/IEC 9075-2:1992
  32. ISO/IEC 9075-2:2003
  33. RFC 1738
  34. RFC 1939
  35. RFC 2045
  36. RFC 2046
  37. RFC 2119
  38. RFC 2246
  39. RFC 2616
  40. RFC 2818
  41. RFC 3986
  42. RFC 791
  43. RFC 793
  44. RFC 821
  45. RFC 822
  46. X.509 
  47. XHTML 1.0
  48. XML 1.0
  49. XML 1.1
  50. XSLT 1.0

Tuesday, October 13, 2009

Crystal Balls

If you thought this post was going to discuss what I think the ARRA regulations will look like, I'm sorry to disappoint you.  If you want that crystal ball, look here.

Instead, I'm going to look a little further out than three months.  A colleague asked me to project where I thought interoperability standards would be in five to ten years, and I thought it would make a good posting.  So I looked into my crystal ball, and this is what I see coming:

Five Years
  • Systems generating clinical documents will start to do so using the HL7 CDA Release 3 standard.
  • Guidelines and alerts will start being generated in a standard format with machine-readable data.
  • Systems will be able to generate quality reports from machine readable specifications for quality reporting.
  • Real integration of clinical decision support in HIT systems will occur using standards.  The debate about languages for representing clinical decision support will have died down, but will not have completely disappeared from the landscape.
  • Clinical Genomics data will be customarily exchanged using standard formats.
  • Most standard-specific transports will be replaced by more traditional IT-based transport mechanisms (e.g., as web services are doing today, but more so).
  • Standard-specific security models and information will be replaced by more traditional IT-based security standards that have been profiled for use in healthcare (this is also starting to happen today).
  • Binary XML exchanges will take over where text-based XML transports left off.
  • In the US, the NCPDP and X12 standards will be based on XML formats.
  • There will be at least two more intense standards battles in the healthcare space.
Ten Years
  • Harmonization of clinical vocabularies will result in fewer terminology standards with better integration between them.
  • The US will be moving towards ICD-11.
  • Medical knowledge (not just terminology or ontologies) will start being publicly available in a standardized format.

Friday, October 9, 2009

Capabilities and Service Collaborations

In April of this year, HITSP diverged from its assigned tasks to work on an important issue for ONC: the restructuring of its prior work to support pending regulation under ARRA/HITECH.  As a result of these efforts, we modularized many of the HITSP specifications to simplify tasks for implementers.  This resulted in the creation of two new types of HITSP specification (they call them constructs, but I won't delve too deeply into HITSP-geek-speak here).

The first of these is the Capability, and the HITSP Capabilities are what the HIT Standards Committee looked at when they identified standards for meaningful use.  The second is something called a Service Collaboration, which is a specification for how to make several different pieces of an implementation work together.  Because this work was done quickly, we set up a few rules (which we may need to break in the future) about Capabilities and Service Collaborations: Capabilities may not call on other Capabilities, but Service Collaborations can; Capabilities include content, but Service Collaborations don't.  These rules seem to be more like generalizations of an ideal world.  The HITSP Internal Review Board is currently discussing all the pieces and parts as we develop new Capabilities and Service Collaborations to complete our slate of work for 2009.

Here are my own, unapproved definitions of these HITSP specification types:

Capability: A collection of service collaborations, standards and implementation guides working together to serve a business purpose.


Service Collaboration: A collection of service collaborations, standards and implementation guides working together to serve a technical function.

Capability is an integration concept, and Service Collaboration is an implementation concept. What I mean is that developers will "implement" the Service Collaborations in software, and systems integrators will put together Capabilities from Service Collaborations.  Ideally, Service Collaborations should be generalized to support a variety of use cases, whereas Capabilities would be more fine-tuned to meet business requirements.  The rules about structuring these specifications are emergent properties resulting from common design patterns used in the integration and implementation layers.

However, the dividing line between implementation and integration is fuzzy and blurred.  The HITSP Capability 119, Exchange Structured Document, sits right on that line.  Is this a business function or a technical one?  You could argue it either way.  Another issue to address is that capabilities become commoditized over time, and that pushes them across the line.

For now, I still consider Exchange Structured Document to be a capability, but I'll be very happy to see the exchange of structured documents become a service collaboration in the future.

An HL7 Version 3 WSDL Generator

A couple of weeks ago I railed at HL7 because they don't provide the tools that my colleagues and I need to build Version 3 interfaces.  To begin remedying that problem, I started on some of my own tools.  I needed to create a WSDL for an IHE profile.  Since I had already figured out that the problem could be automated, I decided to figure out how to do it.  A few hours (about 20 all told) later, I've got a solution, which took about 8-12 hours longer than hand-crafting the WSDL would have in the first place.  The ROI will show up the very next time I need a WSDL for something.

It's remarkably small, too: about 1400 lines of Java and an XSL transform that lets me customize the way the WSDL is built. I spent about 15 of the 20 hours trying to figure out how to deal with the transmission infrastructure before I gave up and punted.

The application uses the PubDB file created by the publishing facilitator for any given workgroup.  Principally, what it extracts from that database are the relationships between the application roles and the interactions.  These are then written to an XML document, which is finally transformed via XSLT into the WSDL output.
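The last step of that pipeline is nothing more exotic than the standard javax.xml.transform plumbing; here's a minimal sketch, with illustrative file names rather than the tool's actual ones.

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class InteractionsToWsdl {
    public static void main(String[] args) throws Exception {
        // interactions.xml: application roles and interactions extracted from the PubDB
        // wsdl.xsl: the customizable stylesheet that shapes the WSDL output
        Transformer transformer = TransformerFactory.newInstance()
            .newTransformer(new StreamSource("wsdl.xsl"));
        transformer.transform(
            new StreamSource("interactions.xml"), new StreamResult("output.wsdl"));
    }
}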

There are probably a dozen places where I've done something wrong, or have misunderstood the intent behind the HL7 transmission infrastructure, the mapping between messages in HL7 Version 3, and elsewhere.  However, I think it's a useful start.  Certainly it will save me some work in the future.  Who knows, maybe someday this tool will actually be used during the build process of the HL7 Version 3 standards to generate WSDLs automatically.

If you are interested, the complete project is in HL7V3WSDL.zip.

HL7V3WSDLGenerator is a rather simple software application that is designed to automatically generate WSDL files from HL7 Version 3 artifacts. To use this software you need only two more things:
  1. A copy of the publication database for the domain you want to build a WSDL for
  2. A Java Virtual Machine that supports Java 1.5 or later.

Installing the software is simple: just unzip the file to your hard drive. Running it is equally simple.  From the folder where you installed the software, type in

 
java -cp . org.hl7.v3.wsdl.WSDLGenerator PubDB.mdb

 
where PubDB.mdb is the location of a PubDB file.  Don't have one?  You can find the ones used for each ballot cycle here: http://www.hl7.org/v3ballot/html/ (just click on Source Files for the appropriate cycle, then Domains, and finally download one of the database zip files you find there; see here for an example).

 
Source code and documentation are provided; however, I must note that support is not:
 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
I'm planning on moving this into HL7 G-Forge or OHT in the future.  I'm also going to update the stylesheet (actually, provide an alternate) to support IHE-style WSDL files using ITI Volume II Appendix V rules.  If you understood that, you need to get out more often.
 
 

Wednesday, October 7, 2009

IHE PCC Planning Committee moves 5 Profile Proposals to next stage

On Monday and Tuesday the IHE Patient Care Coordination Planning Committee met to discuss and review nine profile proposals, previously reported on here.  The final set of profile proposals that we chose to move forward includes the following:

  1. Post Partum Visit Summary  -- Adding data elements from the Labor and Delivery Report and Maternal Discharge summary, and additional progress information into a report to be produced at the last post-partum visit.
  2. Newborn Discharge Summary -- Focused on routine newborn discharge and including XDS-MS Discharge Summary data elements plus data elements found in the labor and delivery report.
  3. Perinatal Workflow -- Completing the Antepartum Record and Labor and Delivery Record and integrating these and other IHE Profiles into a complete workflow for the perinatal continuum of care.
  4. Chronic Care Coordination -- Using care plans from existing IHE profiles, and a few additional messages to communicate the care plan for a patient with chronic disease, and to obtain acknowledgment of the need for services from participating providers and updates on progress on the care plan.
  5. Nursing Summary -- Working with HL7 detailed clinical models, existing IHE work, and adding input from perioperative nursing on the patient plan of care to produce a nursing summary suitable for transfers of care.
One profile proposal (the one for document templates) was withdrawn because HL7 had decided to engage in standardization in that area, and we agreed that we should let that work proceed.  We will follow and participate in that work in HL7 and, if need be, take up the question again in a future planning cycle.  Three of the nursing profiles were merged into the Nursing Summary, and the Workflow and LDR/APR completion proposals were merged into the Perinatal Workflow proposal.  We reached consensus on the slate of proposals without needing to vote for winners and losers, and the task will be continued by the technical committee in early November.  For some of these proposals we even think it may be possible to hold public comment sometime in February.

In the other room, the IHE Quality, Research and Public Health planning committee also met to review the profile proposals they had received.  The success we had at the PHIN conference in developing several profile proposals for public health almost overwhelmed the committee.  However, they managed to collaborate and found ways that each of these "public health" proposals could also help constituencies in research and quality.  As a result, they too reached a consensus, and all three of the profile proposals that were developed during the seminar at the PHIN conference moved forward in some way.

Tuesday, October 6, 2009

Turning a CDR into an XDS Repository

So, if you have an HIE that houses a clinical data store fed by a collection of HL7 Version 2 messages, how do you use the HITSP specifications to make the information in that store available as a collection of clinical documents?  That's the question that came to me today from a few people participating in some of the US national program activities.  Because this same implementation works internationally using IHE profiles, I've also indicated which of those you would use if you aren't in the US.

The answer was too long to give on a call that had already run 15 minutes over, but I promised I would detail it here.

First of all, I'm going to make some assumptions about the system under consideration:

1.  Certain observations about active problems, patient allergies, et cetera, are fed to the system using an ADT message on admission, or appear in OBX segments of other messages that are used to feed that information.
2.  Lab results are fed to the system using an ORU (unsolicited observation).
3.  Clinical documents that are generated by transcription are sent to the system using an MDM message (document management).
4.  Certain other diagnostic results (e.g., EKG results or Radiology reports) are sent using either an MDM or an ORU message.

These are fairly safe assumptions in many hospital and some ambulatory environments.

I won't get into how providers maintain the problem, allergy or medications list (appropriately updating the lists to keep them current), but will simply assume that is somehow being managed.

The question basically boils down to how one would represent the content of a CDR as a collection of documents that may be available.

Each study report or lab result, whether transcribed or sent as an ORU message, is represented as one clinical document.  In this particular context, the clinical document can be identified using the message's unique identifier as one component.  The date of the document is typically the date of the message, but in an ORU it could also be taken to be the result date, or in an MDM the document date.  You have to work out some business rules to deal with that, but you can use a consistent set that works pretty well for both MDM and ORU.  Mapping PID information into the CDA is also straightforward and consistent, but given that you've got some sort of CDR, you could obtain that from the database.

Some lab and test results coming through as ORUs are not going to be well structured; they're simply a collection of text results associated with the test (e.g., pathology or radiology reports).  These are best converted to a HITSP C62 (or IHE XDS-SD), where the document type is a LOINC code indicating a laboratory result (11502-2 LABORATORY REPORT.TOTAL works pretty well, or 47045-0 Study Result for other diagnostic testing or imaging reports; more detailed LOINC codes can be used to identify the type of test if you have them).  In these reports, the OBR (observation request) segments often identify indications, observations, diagnoses/assessments, et cetera, and can often be treated as section headings in the result.

Other lab results are going to be more structured (chemistry, hematology, et cetera).  For these, I would advise formatting the result using the HITSP C37 construct (or IHE XD-LAB).  Here the OBR segments identify different collections of results, and the OBX segments are often formatted in a tabular fashion.

Other reports, often operative notes, ED reports, and discharge summaries, are going to be coming from an MDM feed.  These can contain word processing documents, PDF, richly formatted text (RTF), or plain text.  Most word processing documents and RTF can be converted to PDF through a number of different open source and commercial technologies, or you can often extract the plain text.  These HL7 V2 messages will often contain a code identifying the type of document, which should be mapped to the appropriate LOINC document type code (see HITSP C80 Section 2.2.3.14.1 Document Class for a partial listing of LOINC codes you may want to map to).  Those documents can again be formatted as a HITSP C62 (or IHE XDS-SD).
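A sketch of that mapping step follows. The local codes on the left are hypothetical stand-ins for whatever your MDM feed carries; the first two LOINC codes are the ones mentioned above, joined by a discharge summary code as an example only (check C80 for the authoritative list).

import java.util.HashMap;
import java.util.Map;

public class DocumentClassMapper {
    private static final Map<String, String> LOCAL_TO_LOINC = new HashMap<String, String>();
    static {
        // Left side: hypothetical local document type codes from the V2 feed.
        LOCAL_TO_LOINC.put("LAB", "11502-2"); // Laboratory report.total
        LOCAL_TO_LOINC.put("RAD", "47045-0"); // Study result
        LOCAL_TO_LOINC.put("DS",  "18842-5"); // Discharge summary
    }

    static String toLoincDocumentClass(String localCode) {
        String loinc = LOCAL_TO_LOINC.get(localCode);
        if (loinc == null) {
            // Unmapped codes should be resolved by a person, not guessed at.
            throw new IllegalArgumentException("No LOINC mapping for " + localCode);
        }
        return loinc;
    }
}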

Now, what's the entry point for these documents?  Well, that's where the HITSP C32 document (or IHE XPHR) comes into play.  The important point here is that the C32 and the underlying specifications (XPHR, HL7 CCD and ASTM CCR) identify this as containing a summary of the most relevant information about the patient at a point in time.  The key word is summary; think of it as a face sheet.  It isn't meant to contain every last dot and twiddle about what happened to the patient.  It should be a summary, so that the most relevant information needed by a provider is immediately accessible, and other details can be tracked down if need be.  Why?  Because if you put every last dot and twiddle in the C32/XPHR/CCD, you will overwhelm the receiving provider with TOO MUCH INFORMATION, and they won't be able to find the relevant details.

So, in the results section of the document, you can simply put an overview of what other reports are available.  But how would you link these other documents into the C32 that you have produced?  The key here is to use the IHE Coded Results Section and include in that section external references to the reports that have the complete detail.  An appropriately configured stylesheet on the consumer side can identify and highlight text in the C32 results section so that it appears and acts just like a hyperlink would, but goes through the appropriate steps to securely access the relevant document.

Now, you have a face sheet using the C32 (or XPHR) specifications, and links to lab results (using C37 or IHE XD-LAB) and other reports (using C62 or XDS-SD) so that providers can obtain the details.  You also have a collection of other documents (structured and unstructured) that can be searched for by the document class code.  This will allow some systems to access and display just the collection of documents available to the provider for quick review to locate (for example) a relevant study.  This also enables partitioning of some of the data if need be so that some information (e.g., regarding sensitive topics such as HIV/AIDS results, et cetera) can be made more private.  This requires some thought about how you generate the C32, because you may not want to include references to more sensitive documents in a less sensitive document.  This may mean that providers with different levels of access would get different face sheets (C32 documents) depending upon the level of access that they should have to sensitive information.

One other thing to realize is that the C32 we are talking about here is something that is dynamically generated at need.  As a result, there's another commitment that you need to make: when you generate this C32 face sheet dynamically, you need to be able to uniquely identify it, and be able to store it for retrieval on demand later, so that providers who use it for care can be assured that they get the same set of bits (content) in the future.  If you dynamically generate it each time without storing what you sent, there is a danger that a software change could result in a new set of bits, which would disrupt two important principles of using documents: persistence and stewardship.  The benefit to you of storing what is exchanged is that you can simply log the identifier of the exchanged document in the audit trail and, if need be, go back and see exactly what the content of the exchange was.
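In code, that commitment looks something like this sketch; the storage and audit interfaces are hypothetical stand-ins for whatever your CDR and audit trail actually provide.

import java.util.UUID;

public class C32Publisher {
    /** Hypothetical persistence hook: must return exactly the same bytes later. */
    interface DocumentStore {
        void save(String documentId, byte[] exactBytes);
    }

    /** Hypothetical audit hook: the trail needs only the document's identifier. */
    interface AuditTrail {
        void record(String event, String documentId);
    }

    static String publish(byte[] c32Bytes, DocumentStore store, AuditTrail audit) {
        String documentId = UUID.randomUUID().toString(); // unique ID for this rendition
        store.save(documentId, c32Bytes); // persist the exact bits that were exchanged
        audit.record("C32-EXCHANGED", documentId);
        return documentId;
    }
}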

The final issue to review here is whether a C32 document is allowed to contain information from only a single visit or encounter, or whether it can be used to aggregate information from multiple episodes of care.  Some people have asserted that the latter isn't allowed.  However, when you consider the original purpose of the C32 specification, you'll realize that it must be allowed to aggregate information from multiple episodes of care.  That original use was expressed through the HITSP IS03 Consumer Empowerment specification.  The purpose of that use case was to support exchange of information between consumers and providers.  The consumer is the ultimate aggregator of healthcare information.  In the end, the accumulation of data that consumers gather and put in their C32 (or XPHR) documents will come from all of their healthcare providers.

Observations on Public Health

During a dinner conversation at an IHE meeting, we were discussing some of the problems that public health faces in this decade. To put this into context, I need to set the clock back three years.  At that time, three different constituencies (Clinical Research, Public Health, and Quality) worked within the IHE Patient Care Coordination domain for an entire year to develop a white paper describing their differences. What they came out with instead was a white paper expressing their similarities. Public health extended the original paper several steps further, analyzing two different domains within public health to identify their similarities.

They came up with several important functional requirements needed by all stakeholders:
  1. Expressing Criteria
  2. Managing Patient Identity
  3. Gathering Data
  4. Retrieving Additional Data
  5. Filtering and Data Review
  6. Analysis and Evaluation
  7. Mapping
  8. Aggregation and Reporting
  9. Communication
The next year (after several important discussions about the pronunciation of acronyms), the Quality, Research and Public Health domain of IHE was formed (QRPH -- pronounced Quirf).

Now fast forward three years. Earlier this year we met with several members of the public health community at the PHIN Conference in Atlanta to discuss The Making of an IHE Profile.  During the discussion, we brainstormed several different ideas.  Many of the needs were already addressed in existing IHE profiles (the number of times I said "IHE has a profile for that" was probably too many, but the point was made).  What came out of that were the three ideas reported here, which are being discussed in Oak Brook this week.  I'll let you know what happens to these ideas later this week.

Over dinner, we talked about how siloed funding has resulted in a siloed infrastructure (this isn't a new concept; it was identified in the early 1990s by a researcher at Johns Hopkins University).  The solution is not to create more silos, but instead to figure out how to rationalize public health.  One concerned, and very much in-the-know, person said "But that will never happen", and she's both wrong and right.  It certainly won't happen tomorrow, nor next year, or even the year after.  However, it can be changed over time.  We just need to have the persistence to make it change, and show by example how to do it.  We also need to get people thinking about the future, instead of the last battle we lost.  We are still learning the wrong things in some places.

Take, for example, the current H1N1 pandemic.  If you had asked people three years ago what pandemic we would be facing in 2009, I, and likely many people working in public health, would NOT have said H1N1 Swine Flu, but rather H5N1 Bird Flu. Where did all the money spent to deal with H5N1 Bird Flu go, and why isn't that infrastructure effective for Swine Flu?  For that matter, what about all the money that was spent to deal with SARS in the previous decade?  Examine how funding for H1N1 surveillance, education, pandemic planning, et cetera is being awarded.  Huge grants and contracts are being issued, but as a public health official told me last month, "That money runs out in June (2010) and we don't know what will happen afterwards...".  It's hard to build and maintain an infrastructure when the funding runs out because the emergency has passed.  We need to think about a sustained funding model for important public health issues.

All is not lost, because others are learning the right things. One state near me looked at its public health infrastructure a few years ago and realized that by eliminating the inefficiencies due to information silos, thereby reducing duplicated infrastructure, it could save about $10 million a year.  Upon learning this, their choice was to spend $5 million a year of that savings to build the right infrastructure, solving the problem and still saving $5 million a year.  They are now looking at adopting an IHE infrastructure statewide in the service of their population, and won't be the first state to do so (I believe Vermont has that honor).

As I've become engaged in the healthcare space I've learned about two things: patients and patience.  I no longer look for changes to occur in Internet time.  It takes about five to seven years to go from the initial stages of development for a NEW standard (as opposed to a revision of an existing one) to having several products available on the market using it.  It's going to take that long or longer to make some of the changes we need in public health (they need more than just one new standard).

Please Lord, give me patience  ... NOW!

Friday, October 2, 2009

HITSP Announces new specifications for public comment

Over the course of this week ANSI/HITSP has released several documents for public comment.  They are all available in the public review and comment area of the HITSP web site.  Presentations given over the past week describing some of these are also available from the HITSP Webinar site.

Record | Title | Review Dates
C34 | Patient Level Quality Data Message Component | 9/30/2009-n/a
C36 | Lab Result Message Component | 9/30/2009-10/28/2009
C41 | Radiology Result Message Component | 9/30/2009-10/28/2009
C70 | Immunization Query and Response | 9/30/2009-10/28/2009
C72 | Immunization Message Component | 9/30/2009-10/28/2009
C105 | Patient Level Quality Data Document Using HL7 Quality Reporting Document Architecture (QRDA) | 9/30/2009-n/a
C106 | Measurement Criteria Document | 9/30/2009-n/a
C154 | HITSP Data Dictionary | 9/30/2009-10/28/2009
CAP99 | Communicate Lab Order Message Capability Specification | 9/29/2009-10/8/2009
CAP117 | Communicate Ambulatory and Long Term Care Prescription | 10/1/2009-10/8/2009
CAP118 | Communicate Hospital Prescription | 9/30/2009-10/8/2009
CAP119 | Communicate Structured Document | 9/30/2009-10/8/2009
CAP140 | Communicate Benefits and Eligibility | 9/30/2009-10/8/2009
CAP141 | Communicate Referral Authorization | 9/30/2009-10/8/2009
IS06 | Quality Interoperability Specification (Complete Set) | 9/30/2009-n/a
IS06 | Quality Interoperability Specification | 9/30/2009-n/a
RDSS153 | Newborn Screening | 9/21/2009-10/19/2009
TN906 | Quality Measures Technical Note | 9/30/2009-10/28/2009

Thursday, October 1, 2009

That Other Office of Coordination

Today is one of those days where I work a bit for The Other Office of Coordination. This is the office that connects up national activities related to healthcare IT when those activities haven't actually engaged with each other.  It's built upon a Federated model instead of a Federal model.  There is no budget, but my raise this year was twice the pay I got last year (0).

We're recruiting right now:

There's only one responsibility for members of this office: when you hear about an activity that, in your own judgment, should be aware of a similar activity elsewhere, you must make each of the interested parties aware of the other, and of their need to talk to each other.

There's only one benefit: if we are successful, you won't have to deal with disconnected bureaucracy.

In the last three weeks this office has:
1.  Connected up two Federal agencies working on testing tools.
2.  Connected up one federal office using HITSP specifications for a specific purpose with the HITSP committee that is extending them further for that purpose.
3.  Coordinated specifications of one federally sponsored project with another one.

We have no logo or cool acronym yet.  Join up anyway.