
Monday, July 27, 2009

At the rim of the dam or the edge of a precipice?

One of the joys of HL7 Version 3 is the many different ways that you can say the same thing, depending upon the context of your message. Let's look at a laboratory test result, for example. Using the HL7 RIM, this will be encoded as an observation. Using the 2008 Normative Edition, there are at least seven ways within the published standards to say the same thing. These seven ways appear in the following HL7 Version 3 standards:
  1. Care Record
  2. Clinical Genomics Pedigree
  3. Clinical Document Architecture
  4. Clinical Statement
  5. Common Message Element Types
  6. Public Health Reporting: Individual Case Safety Report
  7. Periodic Reporting of Clinical Trials Laboratory Results
The picture below shows how each of these HL7 standards supports this concept in a slightly different way.
[Figure: the Observation class as constrained in each of the seven standards, with the differences highlighted]
I've shown how the names of the XML element and the constraints on the content of the Observation class vary using a bold red font, or, where a RIM attribute is missing, a red background. You can view the same thing here in this PDF.

Each standard has constrained the HL7 RIM Observation class differently. Arguably, there's a reason for this, but I challenge anyone to determine what it is. I can certainly accept that each of these standards had different requirements and therefore constrained the model of observation differently. But nowhere in the standards are those specific requirements expressed.
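To make the kind of variation concrete, here is a rough sketch of how a single serum potassium result might be serialized in two of these contexts. The element names, codes, and attributes below are illustrative only; they approximate the general shape of a RIM Observation serialization and are not copied from any of the seven published schemas.

  <!-- Illustrative only: approximates a CDA-style entry for a serum potassium result -->
  <observation classCode="OBS" moodCode="EVN">
    <code code="2823-3" codeSystem="2.16.840.1.113883.6.1"
          displayName="Potassium [Moles/volume] in Serum or Plasma"/>
    <statusCode code="completed"/>
    <effectiveTime value="20090727"/>
    <value xsi:type="PQ" value="4.1" unit="mmol/L"/>
  </observation>

  <!-- Illustrative only: a messaging-style rendering of the same result; some domains
       name the element observationEvent and constrain or omit attributes differently -->
  <observationEvent classCode="OBS" moodCode="EVN">
    <code code="2823-3" codeSystem="2.16.840.1.113883.6.1"/>
    <effectiveTime value="20090727"/>
    <value xsi:type="PQ" value="4.1" unit="mmol/L"/>
  </observationEvent>

Even in this toy example, what is identical at the RIM level (an Observation with a code and a value) shows up with different element names and different required attributes, which is exactly the kind of divergence the picture above highlights.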

One conclusion that you could draw from this collection of variances is that no one body is in charge of HL7 standards. Another is that the development process doesn't enforce consistency across HL7 domains. Yet another is that by failing to document requirements, we are allowing our modeling efforts to introduce unnecessary inconsistencies across domains. All of these are fair challenges to the HL7 development process.

I would propose two changes to the HL7 development process to improve the situation:

Provide Architectural Oversight

HL7 has an Architecture Review Board that was reconstituted a bit more than a year ago. The group's charter can be found here. The Technical Steering Committee was also restructured not too long ago, and its charter is here. From my perspective, the TSC provides strategic direction, the ARB tactical. I would hope that both of these groups would be involved in helping to move HL7 in a direction where these kinds of inconsistencies either eventually disappear from our work products or can be explained based upon requirements.

We are currently in the process of some strategic realignment within HL7 (i.e., the thing formerly known as SAEAF). I'd like to see some of that realignment pay attention to how different views and levels allow us to bring more consistency to the HL7 standards. We have too many artifacts pushing towards implementable messages that should be at a higher level of specification.

DICOM has Working Group 6, whose role is to "maintain the overall consistency of the DICOM standard". While Working Group 6 is arguably a choke point on the development of the DICOM standards, I am not one to argue that having more standards is better. I'd rather have better standards. I think we need something more like that within HL7.

Make Requirements Mandatory (or at Least Strongly Recommended)

I've seen other standards organizations include a stage for formal requirements definition in their process. We could certainly ballot requirements in HL7 as informative documents prior to undertaking any large project. The HL7 Templates work group is presently developing Business Requirements for a template registry. I'm finding that effort to be very interesting, and the end results will be extremely useful.

None of us would undertake a 6-18 month software development effort without requirements. Yet all too often I see our own standards development efforts do exactly that. We mistakenly assume that use cases (storyboards) will cover it, or that the D-MIM is sufficient. The D-MIM should be a realization of the domain analysis model. The domain analysis model should be a reflection of the requirements of the domain. If we don't understand the requirements of the domain (or at least the subset we are playing with), then we have no business trying to develop standards for it. If we do understand those requirements, then documenting them should not be difficult.

All too often engineers want to get right to coding, or in the HL7 world, right to modeling. Having stated and agreed-upon requirements will help to set the scope for the development effort. If the models are derived from, and are expressions of, requirements, then let's see the requirements. It would be a lot easier to validate a model if I understood the requirements that drove it. Most of the time that I have negatives on a ballot, it is because the model doesn't meet my requirements. So, let's put in more effort up front. If we do it right, developing the model should be a cakewalk.

Requirements are reusable design artifacts. Having developed and documented them once, you can use them over and over again for similar cases. Having them documented means that you don't have to have the same arguments about why X, Y, or Z should be included in a representation. I would be able to look at incongruities in the use of the Observation class in the above standards and trace them back to specific requirements.

1 comment:

  1. Great article! I had a gut feeling, but could not describe it. This blog post describes it well, especially the part on requirements. Thanks!!
    I also think it is not a good idea to make it possible to put any RIM (D-MIM derived models) content in a Clinical Document. We need something else before we do this, e.g., Detailed Clinical Models and a requirements model like EHR-S-FM.
