Wednesday, November 30, 2016

Implementing Partial CDA Validation

In Partial Rejection and Levels of Validity in CDA (or anything else for that matter) I discussed levels of validation of CDA content.  Now I have to make that real.  There are two different ways to go about it.  As you might recall, here are a few of the levels in the partial validation hierarchy:

Level 0: Totally bogus content.  Is this even XML?
Level 1: The CDA Header is valid.
Level 2a: Level 1 + the narrative content is valid according to the CDA Schema
Level 2b: Level 2a + the LOINC codes for documents and sections are recognized as valid.

The first level is just doing an XML parse without validation.  This ensures the content is well-formed XML.  If a document fails this test, there's no need to go further.
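A Level 0 check is nothing more than a parse.  Here's a minimal sketch in Java using the standard JAXP APIs (the class and method names are mine, not part of any toolkit):

  import java.io.File;
  import javax.xml.parsers.DocumentBuilderFactory;

  public class Level0Check {
      /** True if the file is well-formed XML; no schema is involved at this level. */
      public static boolean isWellFormed(File file) {
          try {
              DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
              dbf.setNamespaceAware(true);              // CDA content is namespace-qualified
              dbf.newDocumentBuilder().parse(file);     // throws SAXException if not well-formed
              return true;
          } catch (Exception e) {                       // SAXException, IOException, or parser configuration
              return false;
          }
      }
  }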

The next level validates everything up through nonXMLBody or structuredBody.  This is easy.  Craft a new CDA Schema by editing POCD_MT000040.xsd as follows (delete the component element declaration at the end of the sequence and insert an xs:any wildcard in its place, as marked below):

  <xs:complexType name="POCD_MT000040.ClinicalDocument">
    <xs:sequence>
      <xs:element name="realmCode" type="CS" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="typeId" type="POCD_MT000040.InfrastructureRoot.typeId"/>
      <xs:element name="templateId" type="II" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="id" type="II"/>
      <xs:element name="code" type="CE"/>
      <xs:element name="title" type="ST" minOccurs="0"/>
      <xs:element name="effectiveTime" type="TS"/>
      <xs:element name="confidentialityCode" type="CE"/>
      <xs:element name="languageCode" type="CS" minOccurs="0"/>
      <xs:element name="setId" type="II" minOccurs="0"/>
      <xs:element name="versionNumber" type="INT" minOccurs="0"/>
      <xs:element name="copyTime" type="TS" minOccurs="0"/>
      <xs:element name="recordTarget" type="POCD_MT000040.RecordTarget" maxOccurs="unbounded"/>
      <xs:element name="author" type="POCD_MT000040.Author" maxOccurs="unbounded"/>
      <xs:element name="dataEnterer" type="POCD_MT000040.DataEnterer" minOccurs="0"/>
      <xs:element name="informant" type="POCD_MT000040.Informant12" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="custodian" type="POCD_MT000040.Custodian"/>
      <xs:element name="informationRecipient" type="POCD_MT000040.InformationRecipient" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="legalAuthenticator" type="POCD_MT000040.LegalAuthenticator" minOccurs="0"/>
      <xs:element name="authenticator" type="POCD_MT000040.Authenticator" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="participant" type="POCD_MT000040.Participant1" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="inFulfillmentOf" type="POCD_MT000040.InFulfillmentOf" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="documentationOf" type="POCD_MT000040.DocumentationOf" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="relatedDocument" type="POCD_MT000040.RelatedDocument" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="authorization" type="POCD_MT000040.Authorization" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="componentOf" type="POCD_MT000040.Component1" minOccurs="0"/>
      <xs:element name="component" type="POCD_MT000040.Component2"/>
      <xs:any processContents/>
    </xs:sequence>
    <xs:attribute name="nullFlavor" type="NullFlavor" use="optional"/>
    <xs:attribute name="classCode" type="ActClinicalDocument" use="optional" fixed="DOCCLIN"/>
    <xs:attribute name="moodCode" type="ActMood" use="optional" fixed="EVN"/>
  </xs:complexType>

This will result in the schema processor ignoring anything after the CDA Header.  Or will it?  Actually, this will fail, as the schema now violates the Unique Particle Attribution constraint of XML Schema 1.0: an element such as the optional componentOf could match either its own declaration or the wildcard, and a conforming processor isn't allowed to guess.  However, if you could be sure that componentOf would be present, setting minOccurs="1" on that declaration resolves the problem.  But not every C-CDA requires componentOf, and so that little fix won't work.  OK, what if we instead leave the component element declaration alone and change the definition of its type, POCD_MT000040.Component2, so that it can contain anything?  Yep, that works.

It should look something like this:
  <xs:complexType name="POCD_MT000040.Component2">
    <xs:sequence>
      <xs:any processContents="skip" minOccurs="0" maxOccurs="unbounded"/> <!-- accept, but don't validate, whatever the body contains -->
    </xs:sequence>
    <xs:anyAttribute processContents="skip"/>
  </xs:complexType>

So, now <component> can contain any sort of well-formed XML content, and your "header validator" won't care.
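Save that modified schema under a new name (I'll call it cda_header.xsd here, purely for illustration) alongside the other CDA schema files it depends on, and a Level 1 check becomes ordinary schema validation, for example with JAXP:

  import java.io.File;
  import javax.xml.XMLConstants;
  import javax.xml.transform.stream.StreamSource;
  import javax.xml.validation.SchemaFactory;
  import javax.xml.validation.Validator;

  public class HeaderValidator {
      /** True if the document validates against the header-only schema sketched above. */
      public static boolean isHeaderValid(File cdaFile, File headerSchema) {
          try {
              SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
              Validator validator = factory.newSchema(headerSchema).newValidator();
              validator.validate(new StreamSource(cdaFile));   // throws on the first violation
              return true;
          } catch (Exception e) {                              // SAXException or IOException
              return false;
          }
      }
  }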

An alternative implementation would use a specialized XSL identity template with some exceptions to skip any unrecognized content after componentOf, and simply delete the component element definition in POCD_MT000040.ClinicalDocument.

The next challenge is validating narrative-only content.  For that, you want to tweak how sections are validated so that you don't care about any content within <section> other than <text>, <title>, and perhaps <code>, while still validating subsection content.

That's a bit trickier.  For this case, you could define a <component> element at the top level of the schema, which would be overridden by the specializations of <component> defined within the header or entries (which you really don't care about), but which would be processed when matched by <xs:any processContents='lax'>.  However, rather than do that, my recommendation would be to create a specialized identity template that copies only what you want to validate within sections and skips anything you don't care to validate.  Then you can just use the standard CDA Schema to validate the result without any changes (because all content within a section is optional according to the schema).

In that way, what you've just done is eliminate the potentially invalid content.  There's extra value there, because what you now have is a transform of the original content which, if "narrative valid", is probably safe to keep around for viewing and transformation by a stylesheet.

That identity template is a simple exercise in software engineering.  I'll leave it to the interested reader to figure it out.
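If XSLT isn't your thing, the shape of that filter in Java with DOM looks roughly like the sketch below: strip the <entry> elements (and anything else you don't want validated) from the parsed document, then validate the result against the unmodified CDA Schema.  Treat it as a starting point, not a complete filter:

  import java.util.ArrayList;
  import java.util.List;
  import org.w3c.dom.Document;
  import org.w3c.dom.Node;
  import org.w3c.dom.NodeList;

  public class NarrativeFilter {
      private static final String CDA_NS = "urn:hl7-org:v3";

      /** Removes all entry elements in place, keeping narrative and subsections. */
      public static void stripEntries(Document doc) {
          NodeList entries = doc.getElementsByTagNameNS(CDA_NS, "entry");
          List<Node> toRemove = new ArrayList<>();
          for (int i = 0; i < entries.getLength(); i++) {
              toRemove.add(entries.item(i));        // the NodeList is live, so collect before removing
          }
          for (Node entry : toRemove) {
              entry.getParentNode().removeChild(entry);
          }
      }
  }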

Oh, one final thing: Don't be dumb and validate in easy-to-hard order.  Validate in the other order, because a good document then passes the strictest check on the first pass, which costs less in processing time.  Let the bad ones pay the performance penalty of multiple validation stages.
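Putting it all together, the classification might look like the sketch below, where the helper methods stand in for the checks described above (the names are mine, not any standard API):

  import java.io.File;

  public class PartialValidator {
      public enum Level { INVALID, WELL_FORMED, HEADER_VALID, NARRATIVE_VALID, FULLY_VALID }

      /** Strictest check first: a good document pays for a single pass; bad ones fall through. */
      public static Level classify(File cda) {
          if (isFullyValid(cda))     return Level.FULLY_VALID;
          if (isNarrativeValid(cda)) return Level.NARRATIVE_VALID;
          if (isHeaderValid(cda))    return Level.HEADER_VALID;
          if (isWellFormed(cda))     return Level.WELL_FORMED;
          return Level.INVALID;
      }

      // Stubs standing in for the checks sketched earlier in this post.
      private static boolean isFullyValid(File f)     { return false; }
      private static boolean isNarrativeValid(File f) { return false; }
      private static boolean isHeaderValid(File f)    { return false; }
      private static boolean isWellFormed(File f)     { return false; }
  }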

   -- Keith


Monday, November 14, 2016

Good Interoperability works like a shotgun, but with a single bullet

As an implementer these days, I don't have the luxury of building one-off solutions.  I have to be able to take components and put them together in multiple ways to solve multiple problems.  CDA was how we did this in IHE Patient Care Coordination, where a single section or entry could be used in multiple documents to support multiple use cases.  In fact, if I look at the IHE profiles that use the problems, medications, and allergies sections we first created, I count at least a dozen CDA documents that use them.  They became the foundation of our work for many years.

The same is becoming even more true now with HL7 FHIR.  Each FHIR resource can be used for multiple use cases, and the resources can be put together in multiple ways to achieve a solution.  If I want to build a flowsheet, I can use the Observation resource.  A graph?  The Observation resource.  A CDS intervention?  I might want to access the Observation resource.  And it's the same resource (though perhaps with slightly different constraints for different uses).

No longer do I have to concern myself with different models, schemas, et cetera, just because how I want to use the thing has changed.

So often, we have limited resources.  We want a shotgun, but all we get is a sling with a single stone. We get one shot at this.  With FHIR, I can line all my ducks up in a row and smack them down with that single stone.  It's not just two birds (use cases), but as many as I can line up.  And in fact, I don't even have to line them up all that much.  Perhaps what I have in FHIR is a flamethrower.

   Keith

Wednesday, November 9, 2016

Accidental Interoperability

I've been spending a great deal of time on the implementation side, which doesn't let me say much here.  However, I recently saw a feature one team was demonstrating and realized I could integrate one of my interop components with it to supply additional functionality.

Quite simply, when you build interop components right, this sort of accidental interop shows up all the time.  It's really nice when it does too, because you can create a lot of value through it with very little engineering investment.

Lego could have spent less time on their very simple building block, but because of the attention they paid to it, there are SO many more ways to connect those blocks, some of which I am certain were never originally intended.

Getting into the component-based mindset that enables accidental interop is sometimes quite challenging.  All too often we focus on very specific problems and fail to consider how what we are building could be done in a more general way.  That focus often lets us deliver sooner, because we can take shortcuts that optimize for the singular use case.  At the same time, that hyper-focus prevents us from looking at slightly more general solutions that might have broader use.

All too often I've been told those more generalized solutions are "scope expansions", because they don't fit the use case, and the benefits of generalization aren't immediately experienced for the specific use case I'm asked to solve for.  Yet my own experience tells me that the value I get out of more general solutions is well worth the additional engineering attention.  It may not help THIS use case, but when I can apply the same solution to the next use case that comes along, I've got a clear win.  Remember Avian flu?  That threat turned out to be a bust, yet the CDC spent a good bit of money on a solution for that use case.  Could they use any of it for Swine flu?  Yeah, you really don't want to know the answer to that.



Thursday, November 3, 2016

Patient matching and restricted charts

Patient matching is a tricky area.  Name, birth date, and gender are not sufficient to match every patient uniquely within a region.  For example, one ZIP code in Chicago contains enough John Smiths that the likelihood of an identity collision occurring within a practice in that region is statistically significant, about a 1 in 20 chance of occurring for some patient.  And John Smith is only the thirteenth most popular name in that region.

So you need other identifiers or differentiators to get a better match.

Some organizations have business rules about matching that only allow them to expose patient data to other providers if they get one and only one match.  They also have business rules that prevent displaying any data outside their practice for patients whose charts are restricted.

Combine these two issues and you have a tricky challenge that is easy to get wrong.

How do you implement the patient identity search?  Do you search only patients with unrestricted charts, or do you search over both but only display results when they are unrestricted?  You have to search with restricted patients included!  Consider: if an identity collision occurs, you still have to detect it, regardless of whether it involves a patient with a restricted chart.  If you searched only the unrestricted charts, and there were two John Smiths whose identities collided, then someone trying to access data for the one with the restricted chart would get data for the wrong John Smith 100% of the time.

So, you cannot restrict the identity search, but you have to restrict what it reports.  
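Here's a sketch of that rule in Java, with invented types standing in for whatever your patient index actually returns:

  import java.util.List;

  public class RestrictedAwareMatch {
      /** Stand-in for a master patient index search result; invented for illustration. */
      public static class Patient {
          final String id;
          final boolean restricted;
          Patient(String id, boolean restricted) { this.id = id; this.restricted = restricted; }
      }

      /**
       * The candidate list must come from a search over ALL charts, restricted or not.
       * Returns the patient only when the match is unique and unrestricted; otherwise
       * returns null, meaning "nothing to report".
       */
      public static Patient resolve(List<Patient> candidates) {
          if (candidates.size() != 1) return null;      // collision or no hit: detected, nothing reported
          Patient only = candidates.get(0);
          return only.restricted ? null : only;         // unique but restricted: still report nothing
      }
  }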

This will impact patients whose identities happen to collide with those who have restricted access to their charts.  That is where other identifiers or data (such as mother's maiden name, email, or phone number) can help differentiate patients.