Wednesday, August 29, 2012

On Ballot Quality

HL7 Structured Documents is presently developing a scope statement for a new project that is, in my estimation, long overdue.  The focus of this project is to improve the quality of the ballot documents for CDA implementation guides.  I suspect that the scope of the project could grow, as ballot quality is an issue that currently affects a number of different workgroups within HL7.

Here are some specific things I'd like to see in a quality initiative:

  1. Readability:  In one recent ballot, a balloter ran the content through a reading-ease score (probably Flesch-Kincaid) and got a result of negative 5 on a scale that nominally runs from 0 to 100 (the formula is unbounded; according to Wikipedia, a passage in Moby Dick scores below -145).  Every ballot should report a readability score, even if nothing else is done with it.
  2. Full content: A recent HL7 ballot was missing the diagrams and the hierarchical descriptor.  The latter is normative content.  Other ballots routinely lack the change log, which is also required for normative content.  This one is partly my fault (and that of other committee members), because we failed to review the content when it was placed on the ballot preview site.  There needs to be a checklist for V3 XML publications to ensure that all necessary resources are included.
  3. For CDA Guides, there's some boilerplate text that SDWG has been using repeatedly in its ballots.  This text should become a reusable resource that can be used in any CDA Implementation Guide, and we should stop seeing variations in it.  These variations are usually just minor distractions, but can become major problems from time to time.
  4. No implementation guide should ever go out to ballot without at least one valid sample.  Meaningful variations should also be expressed (e.g., "patient reports no allergies" vs. a list of allergies being reported).  The sample also needs to be valid against the conformance rules in the ballot itself.
  5. Conformance tools -- We've been creating conformance tools after some ballots close, and this should almost certainly be a final publication requirement.  Ideally, we'd have preliminary tools (e.g., Schematron rules) upon completion of the ballot.
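Item 1 is cheap to automate.  Here's a minimal sketch of the Flesch Reading Ease formula, 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words); note the syllable counter is a naive vowel-group heuristic, not a dictionary-backed one, so scores are approximate:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, drop one for a trailing silent 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher is easier. Standard prose lands roughly
    between 0 and 100, but the formula is unbounded, so dense committee
    prose can (and does) go negative."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Even a rough score like this, reported with every ballot, would flag the worst offenders before balloters have to wade through them.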
Some of this is just making sure that we dot the i's and cross the t's in following the usual ballot process.  In other cases, there are improvements to existing processes that we need to execute on.
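The minimum bar in item 4 can also be checked mechanically.  The sketch below (check_cda_sample is a hypothetical helper of my own naming, not an SDWG tool) only verifies that a sample is well-formed XML with the expected CDA root element; real conformance checking against a guide's templates needs Schematron or similar:

```python
import xml.etree.ElementTree as ET

CDA_NS = "urn:hl7-org:v3"

def check_cda_sample(xml_text: str) -> list[str]:
    """Minimal sanity checks for a CDA sample: well-formed XML with a
    ClinicalDocument root in the urn:hl7-org:v3 namespace.
    Returns a list of problems; an empty list means the basic checks passed."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as e:
        return [f"not well-formed XML: {e}"]
    problems = []
    if root.tag != f"{{{CDA_NS}}}ClinicalDocument":
        problems.append(f"unexpected root element: {root.tag}")
    return problems
```

Running something this simple over every sample before the ballot opens would catch the embarrassing cases at essentially no cost.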

I complained that when we started this cycle, there didn't seem to be enough time to get all of our work done.  I wish I had been more vocal on this point, because it appears that we've gone and shot ourselves in the foot.  Rather than getting HQMF out late, it seems like we need to go through another complete ballot cycle, because the current content is missing critical components.

I'm glad SDWG is taking on this project, and I hope it improves the quality of our existing work.  

One other thing will have to change for this to work.  We have to be willing to say NO to balloting new work on a schedule where we cannot meet our quality goals.  I can understand why, when a stakeholder has critical deadlines, we would want to try to meet them.  But we have to remember that HL7 is PRINCIPALLY a volunteer organization, and that commitments made by others to their stakeholders DO NOT represent commitments from HL7 itself, or its members.

I don't think it is realistic to initiate a normative project at WGM N, ballot it at WGM N+1, and publish the content before WGM N+2 on our current cycle.  It basically means that the project scope, requirements, and content have to be developed in an 8-10 week period, followed by a 3-4 week publication period and a 30-day ballot cycle.  Unfortunately, there seems to be great demand for these short cycles, although I can think of few projects that have really succeeded in anything significantly less than a year.

At some point, we may want to consider moving to two ballots a year as a way to improve quality.  But that will ONLY work if we do fewer ballots annually.
