
Thursday, August 27, 2009

Data Mapping and HITSP TN903

I routinely receive requests for information about how to go from an HL7 Version 2 message to a CDA document, and whether HL7 has established such a mapping. To date, I have to answer that no, it has not. I can tell you that many organizations have established such a mapping, and that I too have done the work. In fact, I have developed mappings from many HL7 Version 2 segments to CDA documents, several times over the past six years. Those of you aware of the history of CDA will realize that means Release 1 as well as Release 2.

These mappings aren't publicly available because people who do this work usually put a lot of thought and effort into this sort of mapping to solve customer problems. It's a piece of intellectual capital that has value to an organization. However, there will shortly be new tools available that will enable others to create this sort of mapping from data that can be freely downloaded. Furthermore, the mapping will not be just from Version 2 to CDA, but will also address mappings between NCPDP Script, HL7 Version 2, HL7 Version 3, CDA, X12N, CDASH and CDISC specifications.

Who is doing this work? The HITSP Data Architecture Tiger Team, and in fact ANSI/HITSP has been doing it for the past four years without even realizing it. During what I call the HITSP ARRA Diversion, when HITSP spent 90 days working on specifications and technical notes specifically to support ONC and HHS requirements for ARRA regulation, we developed the HITSP TN903 Data Architecture Technical Note. We are also being supported by AHRQ-USHIK, which has developed a data element registry that will be pivotal in the deployment of this information.

TN903 describes a new process for HITSP, whereby we:

1. Identify HITSP Data Elements (see section 4.3.1). These are then constrained as necessary with respect to precision, vocabulary, length, et cetera at the HITSP Data Element level if such constraints are warranted. A HITSP data element is defined in four columns: the first gives the identifier; the second, a short name; the third, a definition that should be clear enough for an implementer to understand; and the fourth, any specific constraints on that data element.

2. Map these HITSP data elements to specific data elements found within the respective standards (see section 4.3.2). This mapping appears inside the data mapping section of a HITSP Construct (HITSP's term for specifications whose names begin with C, T or TP). The mapping is defined by first identifying the data element in the way most appropriate to the standard (see Table 4-4).

3. Load HITSP Data Elements into the AHRQ-USHIK Data Element Registry, along with data elements defined by the respective SDOs. I've been given to understand that the most recent versions of X12, NCPDP and HL7 Version 2 are currently being loaded into USHIK. A data element registry is one of three types of metadata registries identified as being important by HITSP in TN903.

4. HITSP data element mappings can also be loaded into USHIK, so that the relationships between HITSP Data Elements and data elements defined by the standards can be identified.
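To make the four-column data element and its mappings concrete, here is a minimal sketch of how such a record might be represented in code. The class and field names are my own illustrative assumptions, not the actual USHIK registry schema, and the abbreviated CDA path is shortened for readability.

```python
# Hypothetical representation of a HITSP data element (the four columns
# from TN903 section 4.3.1) plus its standard mappings (section 4.3.2).
# Field names and structure are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class HitspDataElement:
    identifier: str        # column 1: identifier (e.g. "5.15")
    short_name: str        # column 2: short name
    definition: str        # column 3: implementer-readable definition
    constraints: str = ""  # column 4: precision, vocabulary, length, ...
    # mappings to data elements in each underlying standard
    mappings: dict = field(default_factory=dict)

subscriber_id = HitspDataElement(
    identifier="5.15",
    short_name="Subscriber ID",
    definition="Identifier assigned to the subscriber by the payer.",
    mappings={
        "X12N": "270_2100C_NM109",
        "HL7v2": "IN1-36",
        "CDA/CCD": ".../cda:participantRole/cda:id",  # abbreviated XPath
    },
)
```

A registry of such records, keyed by identifier, is essentially what loading the mappings into USHIK would make queryable in electronic form.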

The end result is that relationships between HITSP data elements and standard data elements will be available in the future in electronic form. You'll be able to clearly see that HITSP Data Element 5.15 (Subscriber ID) maps to X12N 270_2100C_NM109 67, to segment IN1-36 of an HL7 Version 2 message, and finally to /cda:ClinicalDocument//cda:act[cda:templateId/@root='2.16.840.1.113883.']/cda:participant[@typeCode='HLD']/cda:participantRole/cda:id of a CDA Document that uses CCD templates.
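As a rough illustration of the HL7 Version 2 side of that Subscriber ID mapping, the sketch below pulls field IN1-36 out of a pipe-delimited message. The sample message content is invented, and a real interface should use a proper HL7 v2 parser; this naive split ignores escape sequences, repetition and component separators.

```python
# Naive sketch: extract IN1-36 (Subscriber ID) from an HL7 v2 message.
# Sample content is invented; production code should use a real parser.
SAMPLE_V2 = "\r".join([
    "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|200908270800||ADT^A01|123|P|2.5",
    "IN1|1|PLAN01|INS01|Example Health Plan" + "|" * 32 + "SUB12345",
])

def get_field(message: str, segment_id: str, field_no: int) -> str:
    """Return field `field_no` of the first `segment_id` segment.

    Counts the segment ID as field 0, so IN1-36 is index 36. (Note that
    MSH numbers its fields differently because the field separator itself
    is MSH-1; this sketch only targets non-MSH segments.)
    """
    for seg in message.split("\r"):
        fields = seg.split("|")
        if fields[0] == segment_id and len(fields) > field_no:
            return fields[field_no]
    return ""

subscriber = get_field(SAMPLE_V2, "IN1", 36)
```

Once the mapping data is downloadable from USHIK, the segment/field coordinates driving a function like this could come from the registry rather than being hard-coded.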

What can you do with that data? Well, the first thing that springs to mind is that you can use it to automate transformations from one standard to another. I have to caution you that you will need to be careful here. These mappings are both contextual (e.g., defined for use within a specific HITSP construct) and inexact. In order to do the mapping, HITSP defines data elements at a higher level than the standards do. This allows us to identify useful concepts that we can harmonize across the standards. However, because of the inexact nature of the mappings, you will need to carefully check that they remain valid when used in other contexts (e.g., when transforming from an NCPDP Script message containing medication history into a C32 representation of the same). Even so, I expect this resource to be extremely valuable for interface developers as we move forward.
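One way to keep that contextual caveat from being forgotten in automated use is to tag each mapping with the context it was defined for and refuse lookups from other contexts rather than silently reusing them. The table contents and context labels below are illustrative assumptions, not actual HITSP artifacts.

```python
# Hedged sketch: a mapping table that enforces the "mappings are
# contextual" caveat at lookup time. Entries and contexts are invented.
MAPPINGS = {
    # (hitsp_element, source_standard) -> (target path, context defined for)
    ("5.15", "HL7v2"): ("IN1-36", "C32 insurance"),
}

def map_element(element: str, source: str, context: str) -> str:
    """Look up a standard data element path, but only in the context
    the mapping was defined for; otherwise force a human re-validation."""
    path, valid_context = MAPPINGS[(element, source)]
    if context != valid_context:
        raise ValueError(
            f"mapping for {element}/{source} was defined for "
            f"'{valid_context}', not '{context}'; re-validate before use"
        )
    return path
```

Failing loudly on an out-of-context lookup is one design choice; another would be to return the mapping flagged as unverified and let the interface developer decide.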

The HITSP Data Architecture described in TN903 was developed based on four years of experience harmonizing standards. We didn't always do it this way, and have just begun integrating this process into our harmonization efforts. We have many months before we've completed identification of data elements across the standards used in existing HITSP specifications, and at the same time, we have new data elements coming in from the fourteen existing extensions and gaps requested from HITSP by ONC.

This past week, during the HITSP Face to Face, we began integrating this into existing HITSP processes. On Wednesday, we (Care Management and Health Records) worked through the Data Element identification and mapping process with the Clinical Research Tiger Team. Their next round of specifications will be our first trial of the new process with a new specification. Most of the technical effort of data element identification and mapping to standards needed for this harmonization request has been completed. The next step involves actually writing the results down in the new Data Architecture component we will be developing, and the two new components that the Clinical Research Tiger Team will be writing. I'm very pleased with how the new process is working with this group, and hope to see early results from it in the next two weeks.

1 comment:

  1. Thanks for posting this Keith - it was very useful. It also provides those who have done these kinds of mappings lots to think about for the future.