What keeps me awake at night are application developers who want to write systems-level code[1] while thinking they're working at the application level, and, on the other end, implementation specifications that aren't written at the application level.
There are a couple of challenges. One is understanding at what level each specification should be:
CDA and the RIM are at the systems level. They are a very powerful infrastructure for delivering semantically interoperable content for all kinds of uses, essentially an HTML for healthcare. CDA is not limited to patient summaries; it can go into great detail. But with great power comes some of the complexity you'd expect at that level. One way to make CDA easy is to focus that power on specific use cases, which is what IHE, HITSP and many others have done using templates. Another way, as with any other systems-level technology, is to provide code that makes it easy to do CDA right. I've pointed to a number of open source implementations for CDA and CCD on the open source implementations page above. You needn't worry about the details, because others have done that for you.
Now, Arien Malec (of Direct Project fame) and others have noted a dearth of .Net open source implementations for CDA. One reason is that the open source community likes open source platforms, and .Net doesn't quite fit that mold for many (see comments here). So there needs to be another driver to get to open source .Net code. Maybe Microsoft, its creator, can step up with some support, or a community of .Net developers with an interest in this can figure out what needs to be done if the right sort of organization is put around it. But realize also that having a .Net implementation supporting CDA is perceived by some as a commercial advantage. I don't know whether an open source community will appear or not, but I would hope so. Something has to drive it, though.
The remaining specifications are headed towards the application level. But there are other challenges in simplifying things. The first of these is the way specifications are delivered. Until SDOs start delivering UML models as XMI as part of the standard, most of us are stuck with PDFs, Word documents, or at the very best, XML representations of narrative text. Even organizations with rich modeling capabilities like OASIS and HL7 rarely deliver the source content for their UML models. So the specifications aren't machine readable with commonly available off-the-shelf tools (and MIF2 doesn't, and won't ever, cut it for me).
What makes machine readable specifications valuable? The ability to aggregate, automate and validate based on those specifications.
Aggregation is very important, because without the ability to aggregate information, we are left with a "peeling the onion" problem. Even though there's a way to scale up the development of implementation guides by distributing the work, there is no clean way to put all the pieces together.
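To make that concrete, here's a minimal sketch (in Python, with an invented XML format and placeholder template identifiers standing in for real OIDs; this is not any SDO's actual publication format) of what aggregation looks like once the pieces are data: templates published separately by different organizations can be merged into one registry, and the chain of dependencies between them followed automatically.

```python
# A minimal sketch: the XML format and the template identifiers below are
# invented for illustration; real guides would use OIDs and richer metadata.
import xml.etree.ElementTree as ET

# Two fragments, as if published separately by two different organizations.
GUIDE_A = """<templates>
  <template id="c83-condition" name="Condition Module" basedOn="pcc-problem"/>
</templates>"""

GUIDE_B = """<templates>
  <template id="pcc-problem" name="Problem Entry" basedOn="ccd-problem-act"/>
  <template id="ccd-problem-act" name="Problem Act"/>
</templates>"""

# Aggregate every template into one registry, keyed by its identifier.
registry = {}
for fragment in (GUIDE_A, GUIDE_B):
    for t in ET.fromstring(fragment):
        registry[t.get("id")] = t.attrib

# "Peel the onion" automatically: follow the basedOn chain across sources.
tid = "c83-condition"
while tid:
    t = registry[tid]
    print(f'{t["name"]} ({tid})')
    tid = t.get("basedOn")
```

With documents instead of data, a human has to do that walk by hand, guide by guide.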
Automation of code generation is also very valuable. It can be used to quickly and easily generate source code for implementation, validation, storage, et cetera. Give me the data and I can write, test and validate a code generator that operates off of it quite a bit faster than I can write each class separately on my own. Back when IHE PCC was still developing its Technical Framework on the wiki, I grabbed the fairly structured XHTML from it, extracted the data for the templates into XML, and wrote another transform to generate XSLT stylesheets that built correct content, with documentation, for over 400 templates in a matter of days. Even at 10 minutes per template, doing it by hand would have taken weeks. And I'm a lot more comfortable with the amount of testing I put into my automation than I ever would be with the testing I personally could put into hand-developing 400 templates ... and that doesn't count documentation! Code generation isn't just for implementing objects in Java or .Net. I can generate validation code, a WSDL or IDL or other interface definition, SQL tables, implementation documentation, you name it.
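As a flavor of what I mean, here's a hedged sketch: the template and constraint format below is invented for illustration (a real template carries far richer metadata), and the output is Schematron-like rather than a complete schema, but it shows how validation code falls out of the data.

```python
# A hedged sketch of data-driven code generation. The constraint format is
# invented for illustration; the output is Schematron-like, not an official
# or complete schema.
import xml.etree.ElementTree as ET

TEMPLATE = """<template name="VitalSign" context="cda:observation">
  <constraint path="cda:code" card="1..1"/>
  <constraint path="cda:value" card="1..1"/>
  <constraint path="cda:interpretationCode" card="0..1"/>
</template>"""

def generate_rules(template_xml):
    """Emit one assert per cardinality bound found in the template data."""
    root = ET.fromstring(template_xml)
    out = [f'<pattern id="{root.get("name")}">',
           f'  <rule context="{root.get("context")}">']
    for c in root.iter("constraint"):
        path = c.get("path")
        low, high = c.get("card").split("..")
        if low != "0":   # required element: enforce the lower bound
            out.append(f'    <assert test="count({path}) &gt;= {low}">'
                       f'{path} must appear at least {low} time(s)</assert>')
        if high != "*":  # bounded element: enforce the upper bound
            out.append(f'    <assert test="count({path}) &lt;= {high}">'
                       f'{path} may appear at most {high} time(s)</assert>')
    out += ['  </rule>', '</pattern>']
    return "\n".join(out)

print(generate_rules(TEMPLATE))
```

Swap the output strings and the same loop generates Java classes, SQL tables, or documentation instead; the data doesn't care.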
It's because it's data, and I can compute with it. So, to the SDOs out there, I have to repeat a phrase that has been used elsewhere: Give us the damn data!
The MDHT CDA Tools project, for which I awarded Dave Carlson the Ad Hoc Harley, is being used to do exactly this for CDA, delivering UML models for the following:
- HITSP C32
- HITSP C62
- HITSP C83
- IHE XPHR
- IHE XDS-MS
- IHE XDS-SD
- HL7 CDA
- HL7 CCD
- HL7 History and Physical
- HL7 Consult Note
- HL7 Progress Note
- The CMS CARE-SET Guide (based on HITSP C83)
If a model contains all the data necessary to implement at a particular level, and I have the models for all levels, then I can automatically generate the implementation guide documentation at all levels, as the sketch below illustrates. The challenges are not really technical; the CDA Tools project has already been used to deliver a very nice implementation guide for Genetic Testing Reports (zip with PDF) in an HL7 ballot.
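Here's a minimal sketch of that claim, again with invented model data and placeholder template names: once each level's constraints are data, the documentation for a derived template can be assembled by walking its chain of base templates and merging what it inherits with what it adds.

```python
# A minimal sketch, assuming hypothetical model data: documentation for a
# derived template is assembled by walking up its basedOn chain.
MODELS = {
    "cda-observation": {"basedOn": None,              "constraints": ["SHALL have code"]},
    "ccd-result":      {"basedOn": "cda-observation", "constraints": ["SHALL have value"]},
    "c83-result":      {"basedOn": "ccd-result",      "constraints": ["value SHALL use UCUM units"]},
}

def document(template_id):
    """Render one template's documentation, including inherited constraints."""
    chain, tid = [], template_id
    while tid:                      # walk from the leaf up to CDA itself
        chain.append(tid)
        tid = MODELS[tid]["basedOn"]
    lines = [f"Template: {template_id}"]
    for tid in reversed(chain):     # document base constraints first
        for c in MODELS[tid]["constraints"]:
            note = "" if tid == template_id else f" (inherited from {tid})"
            lines.append(f"- {c}{note}")
    return "\n".join(lines)

print(document("c83-result"))
```

The same walk works at any level: ask for the CCD template and you get its documentation; ask for the HITSP template and you get everything it inherits too.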
The challenges are organizational. If HL7 delivers content in UML format, and organizations use that content to develop their own guides, how does HL7 get revenue from that intellectual property when an organization wants to publish a guide based on the HL7 guide? John Halamka has some interesting thoughts in today's post on how to address those issues, and HL7 is examining them as well.
HL7, IHE and ONC are hopefully about to embark together on a project to Consolidate a number of HL7 guides with the IHE work and CCD templates that they reference, and possibly include the HITSP C83 templates. Use of MDHT and the CDA Tools is a vital part of the plan. If we can get all those communities working together, that would produce my ideal guide.
Keith
P.S. One of the keenest challenges we all face in developing implementation guides is that of self-reference. The guide provides the rules that define what is valid in the XML, which then has to be included in the guide as examples, which need to be tested against the rules in the guide. One of the most time-consuming tasks has always been generating good examples. If you give me the data, I can do that too, and automate the generation of examples!
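Here's one last hedged sketch, using an invented template format: the same data that drives validation can also emit a skeleton example, so the examples in a guide can be regenerated whenever the rules change instead of drifting out of sync with them.

```python
# A hedged sketch of example generation: the template format and the
# placeholder codes below are invented for illustration.
import xml.etree.ElementTree as ET

TEMPLATE = {
    "element": "observation",
    "required": [("code",  {"code": "EXAMPLE", "codeSystem": "X.X.X"}),
                 ("value", {"value": "0", "unit": "1"})],
}

def make_example(template):
    """Emit a skeleton XML example containing every required element."""
    root = ET.Element(template["element"])
    for name, attrs in template["required"]:
        ET.SubElement(root, name, attrs)   # one child per required element
    return ET.tostring(root, encoding="unicode")

print(make_example(TEMPLATE))
# Because the same data drives both the rules and the example, the two
# can never drift apart.
```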
[1] Don't bother to read this if you've ever implemented your own HTTP stack, HTML display engine, SSL transport, XSLT processor or XML parser that conformed to specifications. That should rule out about 10% of the readers of this blog. BTW: I've done the equivalent, but only once or twice, and it's never been a routine part of my job.
Hi Keith,
When you spoke of "organizational challenges" and referred back to "today's post" from John Halamka, I assume you were referring to this paragraph that he wrote: "We need a new model - government funding, a simple annual assessment paid to a government agency/administrative organization to enable download of complete implementation specifications, or some combination of public/private funding. Otherwise we'll suffer the problem of indirection - incomplete implementation guides which refer to proprietary information which refer to proprietary information etc." You mentioned that HL7 is addressing them, which is encouraging. But to address this problem fully, wouldn't the list of SDOs have to extend beyond HL7 to include IHE, OASIS, IETF, vocabulary suppliers, and others who provide any of the pieces of content that comprise an implementation guide? (The above list assumes that transport options in addition to information content are within the scope of the IGs). Do you see HL7 as providing the "lion's share" of that content?
Thanks,
David