Friday, December 18, 2009

If it ain't broke, don't fix it

A very long time ago, at the start of my software development career, I had two quotes from my boss written on the chalkboard in my office at the University.

Two rules of software development:
  1. Just get it to work.
  2. If it ain't broke, don't fix it.

These became a sort of test for people walking into my office.  If they looked at these two statements and questioned their implications for the software generated in that office, they passed.  More than 70% of the cost of software is not in its development; rather, it is in its maintenance and support.  Applying these two rules might get it done quickly, but won't result in something that you (or I, for that matter) want to maintain.

In the realm of laboratory ordering and reporting, we are in a situation that is the result of applying these two rules.  We have numerous laboratory order and reporting interfaces installed across the country that "work", and since they aren't "broken", there are concerns that we shouldn't spend a great deal of time fixing them.  This results in debates in the HIT Standards and Policy committees over whether to specify standards now or later, or to allow early adopters a "bye" for the first round.

The question is whether you want to drive a more expensive vehicle that is reliable, or just a clunker that spends a good bit of time in the repair shop.  The "just get it to work" attitude results in driving around a lot of lemons that have high maintenance costs, but the other approach is expensive in the short term.  And if you've already got a lemon that's running, you may not be ready to purchase that new car just yet.  It might require some planning and adjustment, but when that lemon dies, you should be ready to get something that will last.

My thoughts on this are fairly straightforward.
1.  If it ain't broke, don't fix it. 
2.  When it does break, fix it right, or get a new one that won't break like that again.

The same principle was applied to Federal Health IT infrastructure under the Bush administration via Executive Order 13410, and it should be applied to laboratory standards for meaningful use.  In essence, it says that new, upgraded, or newly developed HIT must use the recognized standards.  Basically, if it isn't being replaced, don't change it.  I know, telling this administration to pay attention to what the last one did probably won't fly, even if it was a good idea.  So, instead, look to what section 13111 of ARRA has to say about federal spending on HIT.  What's good for the gander in this case should be good for the rest of the geese.

If a provider has a working electronic laboratory interface, let it count for the first two years, but ensure that they are planning to update it to support the required standards by 2013.  That will avoid unnecessary expenditures on fixing what "isn't broken", but it will also indicate that we are serious about the use of standards.  It won't leave early adopters out in the cold over what is needed for laboratory interfaces.

Laboratory results are very important in quality measures and clinical decision support.  Avoiding standardization of the laboratory results interface will delay other facets of meaningful use that aren't being debated.  So, it's important to push for standardization and to make it clear that we will move forward.

One of the key features of CDA that makes it so implementable is "incremental interoperability".  Let's use that principle for laboratory interfaces as well.

Thursday, December 17, 2009

Vocabulary

Healthcare IT products need to deal with terminology for ICD-9-CM, ICD-10-CM, ICD-10-PCS, SNOMED-CT, RXNORM, LOINC, NDC, CPT, HCPCS, UMLS, the Healthcare Provider Taxonomy and a number of proprietary vocabularies as well.  Most of these use different file formats to exchange the data about the vocabulary.

What I'd really like to see is everyone using a standard format to exchange this information.  Preferably, I'd like that format to be XML-based to make it easier to process.  But I'd also like that representation to be fairly compact, so I might be able to live with a text-delimited format.  I can readily create an XML reader that will import common text-delimited formats into an XML document for processing, so it's not a huge problem if the format isn't XML-based.
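To make that concrete, here's a minimal sketch in Python of such a reader; the column headings and sample row are invented for illustration (real RRF releases keep their column lists in separate control files):

    import csv, io
    from xml.etree import ElementTree as ET

    # A tiny pipe-delimited sample in the spirit of RRF; the in-file header
    # row is a simplification for this sketch.
    SAMPLE = "code|display|system\n38341003|Hypertensive disorder|SNOMED CT\n"

    def delimited_to_xml(lines, delimiter='|'):
        """Wrap each row of a delimited vocabulary file in XML elements."""
        root = ET.Element('vocabulary')
        for row in csv.DictReader(lines, delimiter=delimiter):
            concept = ET.SubElement(root, 'concept')
            for name, value in row.items():
                ET.SubElement(concept, name).text = value
        return root

    print(ET.tostring(delimited_to_xml(io.StringIO(SAMPLE))).decode())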

Finally, I'd like everyone to agree on some very common concepts (e.g., "is a") that need to be expressed, so that these concepts have the same meaning across terminologies.  Ensuring that we have a set of commonly accepted (standard) relationships will certainly help us get to a point where we can reason across terminology boundaries.

The US Federal government is responsible in some way for the maintenance, delivery or mandated use of some of these vocabularies (RXNORM, UMLS, the ICD-9 and ICD-10 variants used in the US, NDC, HCPCS and the Healthcare Provider Taxonomy), and yet almost all of them use different file formats for distribution.  It's what I've come to expect from my government, but I wish it would stop.  At least the work done by NLM (RXNORM and UMLS) has a common file format: the Rich Release Format is used for both of these and uses | as a text delimiter to separate columns.  In fact, it might even be worthwhile to have a number of SDOs get together and agree to use that format (or perhaps a modification of it) to deliver vocabulary information.

Some of the vocabularies I mention are published in books with a lot of ancillary material that should also be part of the downloads.  For example, the ICD-9-CM vocabulary contains a rather large index which is incredibly valuable, along with a number of inclusions and exclusions.  But to really make good use of the vocabulary you need the data associated with these additional parts incorporated into the downloads.

Finally, I'd like to see some of the hierarchical relationships in these terminologies formally expressed within them.  LOINC, for example, contains numerous concepts describing clinical documents, but the LOINC data itself doesn't actually include some of the important relationships between the different types.  For example, the Admission History and Physical Note (47039-3) doesn't show up as being related in the document hierarchy to the Cardiology Hospital Admission Note (34094-7).  The same is also true for relationships between the various laboratory results.
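To illustrate what having those relationships would enable, here's a hypothetical sketch; the parent grouping below is invented for the example, not taken from the LOINC distribution:

    # Hypothetical is-a edges for two LOINC document type codes.  If edges
    # like these were published, "are these related?" becomes a trivial query.
    IS_A = {
        '47039-3': 'admission-notes',   # Admission History and Physical Note
        '34094-7': 'admission-notes',   # Cardiology Hospital Admission Note
    }

    def related(a, b):
        """True when two codes share a parent in the (hypothetical) hierarchy."""
        return a in IS_A and b in IS_A and IS_A[a] == IS_A[b]

    print(related('47039-3', '34094-7'))  # True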

As we in the US continue to talk about simplification and debate some of the really hard IT topics, this seems like a really simple problem to solve that could be addressed with just a little bit of the right attention.

Tuesday, December 15, 2009

Healthcare IT Standards and General IT Standards

A recurring theme of this blog is using the right tool for the right job, which is one of my father's favorite aphorisms.

I am struck by the number of times that I hear others discuss healthcare standards problems as if they are different from problems that others in the IT field have addressed.  The current case deals with modeling consents to share or release private information for use by others.  This is NOT a healthcare-specific problem; it appears in multiple business contexts (e.g., credit reporting and credit checks).  It should be a matter of profiling appropriate industry standards (and I admittedly don't know which those would be, nor do I have a personal preference) to use appropriate healthcare terminology (regarding occupation and licensure, healthcare-specific purpose, et cetera).

For some reason, though, there seems to be this need to apply HL7 modeling to this problem (and perhaps to every other problem encountered in the healthcare context).  The HL7 RIM is extraordinarily powerful, and you can model almost anything you want with it.  I know; I've modeled 100 bottles of beer on the wall with it to teach RIM modeling for Claims Attachments.  Does that mean it should always be applied?  In this particular case, I'm not certain that it should.

This particular issue is a general problem that should have a general solution available from the IT space.  It just needs to be customized to address healthcare-specific issues.  If it was correctly modeled to begin with, that should be a straightforward prospect.  If not, then the right answer might be to go back to those bodies and get them to fix it rather than perpetuate the proliferation of perplexing products purported to puzzle out the problem.

I think that there are two issues here:
1. Using a solution provided by someone else isn't necessarily sexy or cool. 
2. Inventing new solutions provides product or consulting opportunities.
Neither of these is a requirement.  I want solutions, and I want them to be commercially available and easily integrated into my current suite of tools.  Ideally, I'd like something I can buy a book on or take a class on, and a skill-set that I can hire for from the existing pool of experienced IT people.

To be fair, using solutions built by others is hard, and building it yourself always seems easier, better and/or faster.  You have to read all the existing work, understand it, and sometimes apply creativity.  But the people building standards for healthcare are bright people.  I expect them to be able to take on that task.  As for it being easier/better/faster to build it yourself, well, most of the time that's just an illusion.  Yes, what you do may be easier/better/faster, but does it really provide enough incremental value to justify all that work?  You could be spending your time on harder and more interesting problems that are much more valuable to solve.

I'm all for standardization, and I like HL7 and all the rest ..., but frankly I'd rather go to a mechanic when my car is broken than a doctor.  They charge better prices and the problem seems to stay solved longer.

Friday, December 11, 2009

UTC in HL7 Version 3

The timestamp data type is used in a variety of standards to mark the time at which an event occurred.  Most standards (including HL7 Version 3 and W3C XML Schema) rely on ISO 8601 as the base standard, which is then constrained in different ways.

Marc de Graauw asked a question about how one would represent Coordinated Universal Time using the HL7 Version 3 standards (see How to express UTC time in TS).  I did a bit of research on this and was somewhat amused by my findings:

The HL7 V3 Datatypes schema allows [0-9]{1,4} for the pattern following the + or -, so that doesn't help much.

Section 4.2.5.1 of ISO 8601 states:
When it is required to indicate the difference between local time and UTC, the representation of the difference can be expressed in hours and minutes, or hours only. It shall be expressed as positive (i.e. with the leading plus sign [+]) if the local time is ahead of or equal to UTC of day and as negative (i.e. with the leading minus sign [–]) if it is behind UTC of day. The minutes component of the difference may only be omitted if the time difference is exactly an integral number of hours.

The key phrase "ahead of or equal to UTC" indicates that +00 or +0000 are the only ways to represent UTC other than Z.  I know that zero is neither positive nor negative, but those terms are in reference to the leading + or - sign.  The statement "equal to UTC" is what makes the point, which means that -0000 isn't valid (according to 8601).

Standards using 8601 disagree: 
The W3C use of 8601 in XML Schema recognizes +00:00, -00:00, and Z as legal representations of UTC, with Z being the canonical representation. See http://www.w3.org/TR/xmlschema-2/#dateTime

Abstract Datatypes Release 1 and 2 say pretty much the same thing for the literal form of a time stamp:
In the modern Gregorian calendar (and all calendars where time of day is based on UTC), the calendar expression may contain a time zone suffix. The time zone suffix begins with a plus (+) or minus (-) followed by digits for the hour and minute cycles. UTC is designated as offset "+00" or "-00"; the ISO 8601 and ISO 8824 suffix "Z" for UTC is not permitted.

The ITS: XML Datatypes, Release 1 specification has nothing to say other than by reference to Abstract Data types.

Pragmatically, any user of HL7 V3 schemas should recognize any of +0, -0, +00, -00, +000, -000, +0000 and -0000 as a UTC time zone, but should only record UTC as +00 or +0000 (my own preference). These are all legal representations of time zones using the HL7 TS data type according to the (non-normative) schemas provided by the XML ITS.
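Here's a minimal sketch of that pragmatic rule, assuming the offset is the final component of the TS literal:

    import re

    # Accept any of the all-zero offsets the schemas permit, but always
    # write UTC back out as +00.
    _ZERO_OFFSET = re.compile(r'[+-]0{1,4}$')

    def normalize_utc(ts):
        """Rewrite a trailing all-zero offset on an HL7 TS literal as +00."""
        return _ZERO_OFFSET.sub('+00', ts)

    print(normalize_utc('20091211103000-0000'))  # 20091211103000+00
    print(normalize_utc('20091211103000+0'))     # 20091211103000+00
    print(normalize_utc('20091211103000+0500'))  # unchanged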
So, there you have it.
 
Keith
 
P.S.  This is book fodder...

Thursday, December 10, 2009

Don't Panic

DON'T PANIC

Unlike the Hitchhiker's Guide to the Galaxy, the HITECH Act of ARRA does not contain the words "DON'T PANIC" printed in big bold text on the front cover, but it should.

The HITECH provisions describing meaningful use are principally concerned with motivating healthcare providers to use electronic medical records with the anticipated goal of reducing the costs of care. They do so through INCENTIVES.  See the Wiktionary definition of the term below:

incentive (plural incentives)
  1. Something that motivates, rouses, or encourages.
    I have no incentive to do housework right now.

  2. A bonus or reward, often monetary.
    Management offered the sales team a $500 incentive for each car sold.
As we all anticipate the pending regulation, my current frustration is with various people who are panicking about there being "too much, too fast" for providers to adopt, or that the standards are not ready.  If the standards aren't ready, then how is it that 47% of the hospitals responding to the AHA Most Wired survey are already able to support CCD, or that 84% of the most wired can (see Connecting all Your Docs)?

HITECH is nothing like HIPAA in what it requires of providers.  HIPAA regulation basically stated that if you wanted to use electronic claims transactions, you had to use certain standards, you had 2-3 years to do it, and there was no money from the Federal government to help it along.  HITECH basically says that if you want to receive incentive payments, then you have to do certain things over five years, and you'll be ahead of the game; after five years, penalties for non-conformance begin.

Yes, this results in a great deal of craziness as everyone tries to ensure that they get as big a piece of the pie as they possibly can.  But...

If you aren't ready, slow down.  The world won't end tomorrow (or in the next two weeks); nobody is poised to bulldoze your practice.  You have some time to make reasoned and good decisions.  The incentives are structured so that the biggest payouts will be in the first years of technology adoption.  Waiting a little bit won't cut a huge chunk off the potential incentives you can receive; just read page 354 of ARRA.  In fact, you'll see that waiting a year (being a meaningful user in 2012) is just as good as starting in 2011.
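To put rough numbers on that claim (these figures are my reading of the Medicare incentive schedule in ARRA, so verify them against the Act itself):

    # Illustrative Medicare EHR incentive payments, in thousands of dollars,
    # keyed by the first year a provider becomes a meaningful user.
    schedule = {
        2011: [18, 12, 8, 4, 2],
        2012: [18, 12, 8, 4, 2],   # the same total as a 2011 start
        2013: [15, 12, 8, 4],
        2014: [12, 8, 4],
    }
    for start, payments in sorted(schedule.items()):
        print(start, sum(payments))   # 44, 44, 39, 24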

That doesn't mean I don't want you to start now, because I do, but do so in a thoughtful way, and if you need more time, by all means take it.

Wednesday, December 9, 2009

What I want for Christmas

From HL7:  A new ITS that makes it easier to implement Version 3 specifications and a US Realm.
From W3C:  A binary XML that reduces the footprint of XML on the wire and an updated Schema specification that enables HL7 to build that new ITS.
From HITSP:  A week or two off and some infrastructure to build better specifications.
From IHE:  Actually, I think I've gotten that one already, a full slate of active leaders in PCC.
From ONC:  Some forethought and an RFP to continue the standards harmonization process that includes some of the other streamlining of standardization that I've asked for.
From ISO and the US TAG:  Some time to play in that sandbox.
From NIST: Open sourcing of validation tools.
From a book publisher: A contract.
From Congress:  Health Reform

What I'm giving for Christmas:
To HL7: Some easy ballot comments to address.
To W3C: Feedback on that new Schema specification they have.
To HITSP: 25 hours a day.
To IHE:  Antepartum Workflow Draft -- Really, I promise this time.
To NIST: Comments on testing tools.
To Congress:  A new senator.  It'll be a little late, but...

It's just a short list really, and I've been such a good boy...

Tuesday, December 8, 2009

More on Language


This time the issue is inclusive language.  Several times over the past two days I've been involved in discussions of that particular topic in standards work.

Several readers complain that a given specification hasn't listed a given profession, care activity, or other detail pertinent to a specific job function in a list that's clearly marked as being an incomplete set of examples.

Other comments indicate that certain types of documents should be included in such a list of examples.

A specific example in one document is faulted because it uses a concrete term; the commenter indicates that the example need not use THAT concrete term, and could use others.

In another case, we are asked to rewrite requirements coming from a third party to incorporate those necessary for a specific type of product.

I have three ways to deal with these issues:
1.  Ensure example lists are clearly marked as examples, using terms such as "for example", "e.g.", or "as in the following incomplete list".
2.  Point out to the commenter that a specific example is being given that doesn't indicate preference for any given concrete term.
3.  Add the suggested term to the list, or find another change that can be made to make the commenter happy.

In all cases, these discussions and the resolutions:
1.  Do not impact the normative text of the document (what needs to be implemented in the specification).
2.  Make the reader with a specific focus feel as if their concerns are addressed.
3.  Consume time.

It's a frustrating balance.  The end result is a document that will please a wider audience, at the cost of time.  I continue to remind myself that making readers happy is an important component of getting specifications read and understood.  It is amazing what inclusive language can do for a reader, even though in the end, it may not change anything required by a specification (the specifications usually already support the given requirement).

I'm defining the terms clinician, healthcare provider and provider organization in my book right up front because I want everyone to understand what I'm saying and why there is a need to distinguish between these three entities.  I'm even having to deal with comments from reviewers on that language.



Oh Lord, give me patience, and give it to me NOW!  -- Unknown

Friday, December 4, 2009

On Language and Standards



‘When I use a word,’ Humpty Dumpty said, in a rather scornful tone, ‘it means just what I choose it to mean, neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master – that’s all.’
As someone who participates in the activities of multiple standards bodies and reads standards published by even more, I often have to jump between different viewpoints of the world.  If you've studied the dynamics of organizations, you understand that they go through several phases (all of which are necessary), including forming, storming, norming, performing and reforming.  The norming stage includes agreeing on a common language to describe things.  The last few days have highlighted the importance of language:
  • I need to be able to express something in the language of people in a particular role that I'm not familiar with so that they can understand what I'm saying.
  • I explain that the training I provide is customized to the audience: a class filled with one set of students focuses on their higher-level concerns about clinical content; a class filled with another set focuses on the pointy brackets of the XML.
  • I'm engaged in a discussion with a participant about the differences between the language they are using and the norms of the group they are communicating with.
  • I'm looking to simplify the language used in a specification so that what is being said can be written in a sentence that can be understood by three different audiences with very different languages.
  • I'm battling over the use of terms that are being defined by a group that are at variance with common definitions of those terms used elsewhere.
One of the most time consuming aspects of standards development is identifying definitions of things (note that I said identifying and not creating). Definitions are important. They provide the "white lines" that many are looking for to understand the boundaries of a thing. Many times it is also the least productive activity, because nobody can agree on a single definition. This inspires what I consider to be unnecessary invention of new terms, to which people can agree on a definition because they are describing something new rather than trying to fit it into what is already known. This results in needless explanation of new terms that are really reuses of old things in new contexts, and a lot of translation and crosswalking between concepts.

Because my mind is on the topic, I spent a few minutes writing up some of what I consider to be best practices around language used in standards:

As a participant in an SDO or Profiling Organization:
  1. Keep it simple.  The more complex a definition is, the less likely that the term it defines will be readily understood. 
  2. If you need to give something a name, make it a name that can be interpreted from the words in it without a definition (but still define it), or better yet, use a term that has a definition that suits, and cite your reference. 
  3. Use readily available and well recognized sources for definitions of terms.  I happen to prefer dictionaries like Websters or the American Heritage Dictionary as my source of definitions, or when necessary, dictionaries of computer terms, and lastly definitions created by well recognized organizations with broad participation rather than those that are specific to a single field of practice.
  4. Keep sentences simple and understandable by as broad a group as possible.  Remember that your audience likely contains the top-most C-levels of an organization right down to the recent college graduate responsible for implementing some portion of a system. 
  5. Avoid using specially coined terms that will be recognized and properly understood only by someone who has been participating in your organization for years (the phrases "class clone" and "transaction package" come to mind).  This is especially true in material that someone outside your organization needs to understand.
  6. Do create a glossary of technical terms that provides definitions and cites references to help new members.
  7. Avoid creation of new acronyms.  These are the least comprehensible to someone outside your organization.
  8. When you find cases where a different understanding of terms results in different meanings, make sure you clarify it in what is published.  If it took your group an hour to resolve issues that result from different understandings of a term, don't assume that your readers will automatically have the same understanding of those terms when you publish.
  9. Provide real-world examples.  They are often the simplest way to express what is happening in ways that everyone can relate to.
As a new participant in an SDO activity:
  1. Ask for the glossary of terms.
  2. Ask for definitive references and resources.
  3. Try to understand the terms being used by the group using the group norms instead of your own, but also...
  4. Ask for clarification when you don't  understand a term or acronym.
  5. If you find yourself disagreeing with a statement, see if the source of the disagreement is in your understanding of the terms used in the statement.  So many times I've seen a disagreement resolved by ensuring that everyone has the same understanding of the terms being used.
  6. Ask for a real world example.
I'd love to see the various SDOs and profiling organizations individually agree on a set of definitive published reference works that they will use and cite for definitions.  If we could get them all to agree collectively on these references, I'd be even happier.

Wednesday, December 2, 2009

A Canadian Perspective on Standards Harmonization

Today I have a special guest post from Mike Nusbaum.  Mike's a great guy and knows quite a bit about participating in multiple standards organizations.  To my knowledge, he has held leadership positions in ISO TC-215, HL7 and IHE, and he also facilitates and writes for ANSI/HITSP here in the US.  Mike helped establish the Canadian framework for standards harmonization, and I asked him to write a guest post on the topic.  Here's Mike:

Guest contribution by: Michael Nusbaum, BASc, MHSA, FHIMSS
(a Canadian healthcare IT consultant who also works with HITSP in the US)
A Canadian Perspective on Standards Harmonization
As the US health reform freight train continues to roar down the tracks, the IT standards imperative becomes increasingly critical.  The government's well-funded priority to stimulate reform through the establishment of an interoperable nationwide health information network (NHIN) has put incredible pressure on standards harmonization activities over the past 6 months. Clearly, interoperability is achieved through the implementation and use of standards, and funding directed towards state and regional health information exchange (HIE) initiatives is contingent upon the adoption of those standards within all stakeholder communities.

Keith has written extensively in this blog about the fractured system of standards development and maintenance in the US, and despite incredible progress over the past 3 years towards harmonization, there is still so much to be done. Standards, of course, must be developed and used in a global context, as no one country [not to mention a large multi-national vendor community] can afford to tackle this alone.  That's why standards organizations like ISO/TC215, IHE, HL7, IHTSDO and others are all focused internationally, while required to co-exist and link to initiatives being undertaken domestically (ANSI/HITSP, CCHIT, US/TAG, etc.).  In the US, there is no "umbrella" to formally coordinate domestic and international efforts.

As a Canadian who is also working extensively in the US healthcare IT community, I have had a unique opportunity to become involved in the governance and management of standards development/maintenance in both countries.  While I normally stay quiet on Canadian successes (and failures) with my US colleagues (respecting the need of each country to undertake their own voyage of discovery), there has been a recent flurry of enquiries asking the question "how does Canada do that?".  One of these enquiries recently came from Keith, who asked me to respond to his recent Call to Action posted to his blog last summer, describing the need for the US to not only harmonize standards development, maintenance, certification, vocabularies and implementation support... but to also harmonize governance through the establishment of some kind of "national organization".

[OK, here it comes...]  In fact, Canada has done exactly this, with the establishment of the Standards Collaborative that operates under the custodianship of Canada Health Infoway (the not-for-profit corporation that has been empowered by Canada's federal and provincial governments to coordinate and fund e-health initiatives). I was fortunate to have worked with Infoway a few years ago on a consulting contract, and was one of the team that developed and implemented the Standards Collaborative.  At the time, it was a radical concept to bring together all standards organizations operating in Canada, and link these to the standards development/maintenance activities that were being undertaken by Infoway to facilitate the interoperability of the national EHR "infostructure".  Now, some 4 years later, the Standards Collaborative has proven to be a model that has really worked well towards harmonizing standards (and by extension, interoperability) activities in Canada.


I won't repeat all the information that can easily be found online (e.g. see this fact sheet), except to say that just about all of the former, fragmented standards organizations and initiatives have been brought under the Standards Collaborative "umbrella", which now includes HL7-Canada, the ISO/TC215 Canadian Advisory Committee, IHTSDO liaison, DICOM liaison... and soon IHE-Canada.  Each of these "constituencies" operates in concert with the others, and while governance is harmonized, each constituency has a "head of delegation" supported by a SIG-like interest group.  The big benefit is the communication and harmonization between all of these initiatives, as well as a cohesive Canadian presence both internationally and domestically.  The "clients" (jurisdictions, health authorities and vendors) find this consolidation to have removed one of the most significant barriers to the adoption of standards and the implementation of interoperability.

Would this work in the US?  Definitely.  However, a significant and trusted leadership must be amassed (as Infoway has done in Canada), together with a fair amount of political will (which ONC is in a good position to provide, I expect).

As I continue to monitor the evolution of the healthcare IT reform agendas in both countries, I will most certainly respond to any questions (like Keith's request) that foster more synergy between the US and Canada.  Those of you who were able to attend the recent Canada-US "HIE Summit" in Philadelphia were able to observe first-hand the tremendous bi-lateral potential of such synergy.  I remain optimistic.  If anyone is interested in an offline conversation about this, you can contact me at michael@mhnusbaum.com.


Thanks for the opportunity to weigh in on this, Keith!!

-- You are welcome Mike, and thanks for taking time to do it!

Tuesday, December 1, 2009

SAEAF Revisited

You may recall this box from Demystifying SAEAF ...maybe



Well, I joined a telephone call and web exchange with a number of really bright people who are, like me, still confused about the HL7 Services Aware Enterprise Application Framework or SAEAF as it is known in HL7 circles.

We spent a good bit of time working on the "elevator pitch" about SAEAF.  What we came up with was the following:



SAEAF provides a framework for specifying how HL7 products integrate and function from different viewpoints and levels of abstraction. When HL7 products agree upon common viewpoint and abstraction details, they can be used together.

SAEAF needs to be looked at not from the perspective of one HL7 Standard or work product, but from the viewpoint of the HL7 work products as a whole.  Think about each of the edges of the box as parts of a LEGO® block. A correctly designed HL7 Enterprise Architecture will use common measurements for the connectors that allow each of the blocks to snap neatly together. This illustrates the concept of conformance to the architecture. It requires that we have some ability to test conformance and measure how well different HL7 products stack up against each other.

Monday, November 30, 2009

Cross Linking

One of the benefits of working with other experts is the ability to cross link information they generate with information that you generate, to save time and effort.  Today, John Moehrke writes in his blog about reasons why you shouldn't try to use audit logs as disclosure logs.  I'm going to save myself a longer post and refer you there, because what he has to say about ATNA and Accounting of Disclosures is important before you read further.

This is another case of the "If I had a hammer" syndrome that I posted on 18 months ago, but in this case the tool is a crescent wrench and the object it is being pounded on is a Phillips screw.  If you really compare the requirements of an Audit Log and of a Disclosure Log, you will see that you have almost none of the same business requirements, and only about a 60-70% overlap in the information requirements.  Yes, there is a common core there, but that doesn't make one equivalent to, or a superset of, the other.  Some of the requirements also conflict with each other.  An Audit Log almost certainly includes more details used for forensic investigations that would never be released in a Disclosure Log.

So, what we have identified here is two different use cases with some overlapping requirements.  This is a pretty common phenomenon in computer architecture.  The occurrence of an overlap may point to a common ancestor in the analysis and design, but it does not imply equivalence or supersetting of requirements, nor need it.  Sometimes overlaps are interesting and useful, and this one certainly is, but not nearly so much as some would expect.
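To make the overlap argument concrete, here's an illustrative sketch; the requirement names are invented for the example, not drawn from ATNA or any disclosure regulation:

    # Two overlapping but distinct sets of information requirements.
    audit_log = {"user identity", "timestamp", "patient identity", "recipient",
                 "event type", "network address", "outcome", "forensic detail"}
    disclosure_log = {"user identity", "timestamp", "patient identity",
                      "recipient", "purpose of disclosure",
                      "description of data disclosed"}

    common = audit_log & disclosure_log
    print(sorted(common))                     # the shared core
    print(len(common) / len(disclosure_log))  # roughly 0.67 in this example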

Addresses in CDA

Writing a book about a standard is different from implementing it.  When you are implementing, you can get away with ignoring parts of the standard that aren't of concern to you.  However, when writing a book about it, you need to cover details that wouldn't normally concern you, at least to explain to your audience why they do or don't matter, and when those rubrics apply.  Somewhere in the middle of that is writing implementation guides.  Because I'm now writing a book, I'm rereading the standard and discovering things I didn't know.  I'll be reporting these discoveries from time to time.

Over the weekend, I discovered something about addresses in CDA that I didn't previously know.  Now I have an even deeper understanding (and perhaps some remaining confusion) about the AD data type. 

There are about 27 different kinds of information that can appear in an address data type.  They are all different kinds of address parts (ADXP) in the Version 3 Data Types standard.  I knew that. 

What I didn't know was the difference between <streetAddressLine> and <deliveryAddressLine>, and some fine details about the XML representation of these parts.  You've probably never seen the <deliveryAddressLine> referenced in any implementation guide, but it should have been.  The difference between a <streetAddressLine> and a <deliveryAddressLine> is the difference between a physical delivery address and a PO Box, rural route or other sort of delivery address.  A dearth of examples probably contributes to my lack of knowledge.

The Version 3 data types standard (both release 1 and 2) represents the address data type (AD) as a list of address parts (ADXP), which can repeat any number of times.  Practically, some of these should only appear once (e.g., postal code, city, state, country or county), while others could appear multiple times (e.g., streetAddressLine or deliveryAddressLine).  There is also a hierarchy of address part types which seems to imply a whole/part relationship between elements of the hierarchy.  For example, you would imagine that the <streetAddressLine> could contain a <streetName> element.  However, the data types schema doesn't allow the content of a <streetAddressLine> to contain a <streetName> element.  If you are going to parse any portion of the street address in detail, you cannot wrap the parsed elements in a <streetAddressLine> element.
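Here's a minimal sketch contrasting the two forms (the address values are invented):

    from xml.etree import ElementTree as ET

    def make_addr(parts):
        """Build an HL7 V3 AD element from (tag, text) pairs, in order."""
        addr = ET.Element('addr')
        for tag, text in parts:
            ET.SubElement(addr, tag).text = text
        return addr

    # A physical delivery address uses <streetAddressLine>.
    street = make_addr([('streetAddressLine', '123 Main Street'),
                        ('city', 'Anytown'), ('state', 'MA'),
                        ('postalCode', '02134')])
    # A PO Box or rural route uses <deliveryAddressLine>.
    po_box = make_addr([('deliveryAddressLine', 'PO Box 42'),
                        ('city', 'Anytown'), ('state', 'MA'),
                        ('postalCode', '02134')])
    print(ET.tostring(street).decode())
    print(ET.tostring(po_box).decode())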

The long and short of it is that I learned something new, and now, hopefully, you have too.

Wednesday, November 25, 2009

Turducken

Thinking about what it takes to get healthcare information communicated securely from one point to another reminded me of a Thanksgiving feast I still haven't tried yet.

A Turducken is a chicken, stuffed inside of a duck, stuffed inside of a turkey, and then cooked for a good long time.  I'm told by people who've had them that they are:
  1. Wonderful to eat
  2. Complicated to prepare
I feel that way about the current set of healthcare standards. Think about it: we've got SNOMED CT, LOINC and RXNORM stuffed inside an HL7 CDA document, packaged inside an IHE cross-enterprise sharing transaction.  It is, in fact, wonderfully full of content and securely exchanged, but it is a bit complicated to prepare.

My friends' third observation on Turducken was that it was worth the effort, and that parallels my own experience with standards based health information exchanges.

Have a happy Thanksgiving for those of you in the US, and for those of you who are not, have a good week.

UPDATE:  My wife pointed out to me after having heard about this morning's post that you can purchase a Turducken already prepared.  The same is true for the multiple standards put together by HITSP.  See OHF,  IPF and CONNECT for some examples.  Another excellent comparison.

Tuesday, November 24, 2009

Looking for CDA Stories

Writing on "The CDA Book" has started.  It is amazing the amount of non-writing writing that you have to do when you start a book.  The amount of reading you have to do is also pretty mind-numbing.  Thanks to the web, at least some of that material is readily accessible, even 10 years after the fact. 

I'm looking for CDA stories right now, especially stories of the early days of CDA, PRA, or KEG.  If you have any of these you'd like to share, please let me know.  I'm also interested in real-world, "today" CDA stories.  If you have good (or bad) stories about CDA implementations today, I'd like to hear them.  Finally, I'm curious as to whether I need to spend any time in the book on "The great SDO debacle of 2006".  What are your thoughts?

Friday, November 20, 2009

Simplification

My how times change.  In 2005, the largest problem facing healthcare IT interoperability in the US was the harmonization of standards.  ANSI/HITSP was formed in October of 2005 with the goal of addressing this particular issue.  It is now four years (and a month) since HITSP was created, and we no longer talk about the "Harmonization" problem.  The problem facing us now is the "Simplification" of standards.  I'm glad to see that we are moving onto the next problem, but hope that in so doing, we don't "unsolve" the previous one.

HITSP's great success in achieving its contract objectives goes largely unnoticed in this new phase because its objectives no longer address the most pressing issue.  I've heard several complaints that the HITSP specifications are too complex, too long, and not directive enough for implementors.  These are valid complaints if you are trying to use these specifications as "implementation guides".  They were designed to specify just enough to address the harmonized use of standards.  Implementation guides are much more than that.  Many of us involved in HITSP have argued that we need to find a better way to communicate.  However, the development of implementation guides requires a great deal more resources than were made available to HITSP under the ONC contract for harmonization, and the scope of the harmonization contract did not include that requirement.

To put this all into perspective, I did a little bit of research into what we (US taxpayers) are paying for with respect to implementation guides for healthcare standards.  See http://www.fedspending.org/ for one place where you can dig up some of this data for yourself. My own survey was anything but scientific, but based on my findings, I will assert that a 50-100 page implementation guide seems to cover from 2-4 transactions, costs anywhere from $175,000 to $350,000 to develop, and is usually done in anywhere from 6-15 months.  The costs don't seem to scale up linearly with complexity either; twice as much complexity results in more than twice the cost.

So, how do we move forward from here?  The next step needs to be forward, towards simplification and education, yet it must not reject the harmonization that we just spent four-plus years and tens of millions of dollars of both public and private funds addressing.

There are some concrete actions that we can take:
  1. Make simplification of standards an important topic for SDOs and profiling organizations to address.
    The ebXML Reference Information Model and the HL7 RIM are great information models (of meaning), but what we are hearing from the "Internet" crowd is that we need to be closer to how the information is modeled for use (see my ramblings on Synthesis). So, how can we simply go from meaning to use and back again?

  2. Develop tools to make simplification easier (tool development is cheaper in the long run than just throwing more labor at the problem).
    The HL7 Templates workgroup is in the process of starting a project to build a templates registry.  Imagine having an information resource that would pull together in one place all the templates that you need to implement a HITSP construct.  One could readily use that resource to more quickly develop complete implementation guides that wouldn't have some of the challenges that our current HITSP specifications face.
  3. Move away from linear documents for specifications. 
    We are dealing with the information age; it's about time we moved away from linear documents and the constraints that they place upon us.  Developers want richly linked media to help them find what they are looking for.  The HL7 V3 Ballot site is an overdone example of what I'm talking about, but it is surely better than a 100 page document at delivering the necessary content.  One of the biggest challenges that HITSP faces is how to take the content that we now have in documents and turn it into something that would allow us to put together a real implementation guide.
  4. Figure out how to move away from single-SDO based interchange and vocabulary models
    One thing that the world wide web and XML have taught us is the power of structured information that can be easily transformed.  We have 5-6 different key standards for communication in healthcare, only two of which are in XML.  We have some 7-9 different key vocabularies, with very similar high-level models, yet no common terminology interchange format.  Can we provide a common model across the space of healthcare for both interchange and terminology standards?
These are real and concrete steps that will move us in the right direction.  Let's not revisit discussions we had 4 years ago; they were far from simple then, and they would be even more complicated now.

Wednesday, November 18, 2009

ISO+ to UCUM Mapping Table


Several years ago I was the editor of the HL7 Claims attachments specifications.  As a result of that effort, I created a mapping table from ISO+ Units to UCUM units.  ISO+ units are commonly used in HL7 Version 2 lab messages, while UCUM is a newer standard developed by the Regenstrief Institute that is required for use in CDA Release 2 and in other HL7 Version 3 standards, and was selected by ANSI/HITSP for the communication of unit information.  A much shorter list was eventually used in the claims attachment guides, but I ran across the data I generated the other day and thought I would share it here.

Please note the disclaimer:  I am not an expert on ISO+, UCUM or laboratory units in general.  Therefore, you should validate this data before using it in any clinical applications.

ISO+ Units needing a Mapping
ISO+ → UCUM | ISO+ → UCUM | ISO+ → UCUM | ISO+ → UCUM
(arb_u) → [arb'U] | 10.un.s/(cm5.m2) → dyn.s/(cm5.m2) | iu/mL → [iU]/mL | mL/hr → mL/h
(bdsk_u) → [bdsk'U] | 10.un.s/cm5 → dyn.s/cm5 | k/watt → K/W | mm(hg) → mm[Hg]
(bsa) → {bsa} | cm_h20 → cm[H20] | kg(body_wt) → kg{body_wt} | mm/hr → mm/h
(cal) → cal | cm_h20.s/L → cm[H20].s/L | kg/ms → kg/m2 | mmol/(8.hr.kg) → mmol/(8.h.kg)
(cfu) → {cfu} | cm_h20/(s.m) → cm[H20]/(s.m) | kh/h → kg/h | mmol/(8hr) → mmol/(8.h)
(drop) → [drp] | dba → dB[SPL] | L/(8.hr) → L/(8.h) | mmol/(kg.hr) → mmol/(kg.h)
(ka_u) → [ka'U] | dm2/s2 → REM | L/hr → L/h | mmol/hr → mmol/h
(kcal) → kcal | g(creat) → g{creat} | lb → [lb_av] | ng/(8.hr) → ng/(8.h)
(kcal)/(8.hr) → kcal/(8.h) | g(hgb) → g{hgb} | ng/(8.hr.kg) → ng/(8.h.kg)
(kcal)/d → kcal/d | g(tot_nit) → g{tit_nit} | m/s → ms/s | ng/(kg.hr) → ng/(kg.h)
(kcal)/hr → kcal/h | g(tot_prot) → g{tot_prot} | mas → Ms | ng/hr → ng/h
(knk_u) → [knk'U] | g(wet_tis) → g{wet_tis} | meq/(8.hr) → meq/(8.h) | osmol → osm
(mclg_u) → [mclg'U] | g.m/((hb).m2) → g.m/{hb}m2 | meq/(8.hr.kg) → meq/(8.h.kg) | osmol/kg → osm/kg
(od) → {od} | g.m/(hb) → g.m/{hb} | meq/(kg.hr) → meq/(kg.h) | osmol/L → osm/L
(ph) → pH | g/(8.hr) → g/(8.h) | meq/hr → meq/h | pa → pA
(ppb) → [ppb] | g/(8.kg.hr) → g/(8.kg.h) | mg/(8.hr) → mg/(8.h) | pal → Pa
(ppm) → [ppm] | g/(kg.hr) → g/(kg.h) | mg/(8.hr.kg) → mg/(8.h.kg) | sec → ''
(ppt) → [pptr] | g/hr → g/h | mg/(kg.hr) → mg/(kg.h) | sie → S
(ppth) → [ppth] | in → [in_us] | mg/hr → mg/h | ug(8hr) → ug(8.h)
(th_u) → [todd'U] | in_hg → [in_i'Hg] | miu/mL → m[iU]/mL | ug/(8.hr.kg) → ug/(8.h.kg)
/(arb_u) → /[arb'U] | iu → [iU] | mL/((hb).m2) → mL/{hb}.m2 | ug/(kg.hr) → ug/(kg.h)
/(hpf) → [HPF] | iu/d → [iU]/d | mL/(8.hr) → mL/(8.h) | ug/hr → ug/h
/(tot) → /{tot} | iu/hr → [iU]/h | mL/(8.hr.kg) → mL/(8.h.kg) | uiu → u[iU]
/iu → /[iU] | iu/kg → [iU]/kg | mL/(hb) → mL/{hb}
10*3(rbc) → 10*3{rbc} | iu/L → [iU]/L | mL/(kg.hr) → mL/(kg.h)
10.L → 10.L/(min.m2) | iu/min → [iU]/min | mL/cm_h20 → mL/cm[H20]


Units that are the same in ISO+ and UCUM
%, bar, g/L, L.s, mg, mmol/(kg.d), ng/L, ueq
/kg, Bq, g/m2, L/(min.m2), mg/(kg.d), mmol/(kg.min), ng/m2, ug
/L, cel, g/min, L/d, mg/(kg.min), mmol/kg, ng/min, ug/(kg.d)
/m3, C, mGy, L/kg, mg/d, mmol/L, ng/mL, ug/(kg.min)
/min, cm2/s, h, L/min, mg/dL, mmol/m2, ng/s, ug/d
/m3, d, hL, L/s, mg/kg, mmol/min, nkat, ug/dL
/min, dB, J/L, lm, mg/L, mol/(kg.s), nm, ug/g
/mL, deg, kat, m, mg/m2, mol/kg, nmol/s, ug/kg
1/mL, eq, kat/kg, m/s2, mg/m3, mol/L, ns, ug/L
10*12/L, eV, kat/L, m2, mg/min, mol/m3, Ohm, ug/m2
10*3/L, kg, m2/s, mL, mol/s, Ohm.m, ug/min
10*3/mL, fg, kg.m/s, m3/s, mL/(kg.d), mosm/L, pg, ukat
10*3/mm3, fL, kg/(s.m2), mbar, mL/(kg.min), ms, pg/L, um
10*6/L, fmol, kg/L, mbar.s/L, mL/(min.m2), mV, pg/mL, umol
10*6/mL, g, kg/m3, meq, mL/d, pkat, umol/d
10*6/mm3, g.m, kg/min, meq/(kg.d), mL/kg, pm, umol/L
10*9/L, g/(kg.d), kg/mol, meq/(kg.min), mL/m2, ng, pmol, umol/min
10*9/mL, g/(kg.min), kg/s, meq/d, mL/mbar, ng/(kg.d), ps, us
10*9/mm3, g/d, kPa, meq/kg, mL/min, ng/(kg.min), pt, uV
10.L/min, g/dL, ks, meq/L, mL/s, ng/d, Sv, V
a/m, g/kg, L, meq/min, mm, ng/kg, t, Wb

If you have corrections or comments, please post a response below.  I will make every attempt to keep these tables up to date and accurate.
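For what it's worth, here's a minimal sketch of how these tables might be applied in code, seeded with just a few of the entries above:

    # A few illustrative entries from the tables above; a real application
    # would load the complete tables.
    ISO_TO_UCUM = {
        "(cal)": "cal",
        "iu/L": "[iU]/L",
        "mm(hg)": "mm[Hg]",
        "mL/hr": "mL/h",
        "sec": "''",
    }
    SAME_IN_BOTH = {"%", "mg", "mmol/L", "mL/min", "kPa"}

    def to_ucum(iso_unit):
        """Map an ISO+ unit to UCUM, or raise if the unit is unrecognized."""
        if iso_unit in SAME_IN_BOTH:
            return iso_unit
        try:
            return ISO_TO_UCUM[iso_unit]
        except KeyError:
            raise ValueError("No UCUM mapping known for %r" % iso_unit)

    print(to_ucum("mm(hg)"))  # mm[Hg]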


Updated November 19, 2009:
Deleted mapping from ISO+ mm to UCUM to (transformation error)
Corrected mapping from ISO+ sec to UCUM '' (was ')
Updated October 20, 2010
Corrected mm(hg) to mm[Hg]
Changed oms/kg to osm/kg
verified mapping for mas (Megasecond according to HL7 V2) to Ms
deleted mapping from lm/m2 to lm
Still trying to resolve issues around (hb)
verified dm2/s2 = REM
Updated October 23, 2010
Fixed a number of problems with capitalization: Changed wb to Wb, v to V, uv to uV, sv to Sv, pAmp to pA, ohm to Ohm, mv to mV, kpa to kPa, gy to Gy, cel to Cel, bq to Bq and 8h to 8.h
Removed f, n because while these prefixes are the same, they aren't units by themselves
Removed n.s because the proper form is ns

Breast Cancer Screening

Recent news discusses the controversy over new recommendations on breast cancer screenings. While the headlines focus on the changed recommendations, I’d rather pay attention to how interoperable healthcare IT can help to identify at-risk patients and enable them to receive the necessary care. I’ve asked my colleague Scott Bolte to cover the details for us, since this is one of his areas of both expertise and passion. Here's Scott on the topic:
Everyone hates feeling helpless and wants to be able to control their life.  Rarely is that more true than when facing cancer, or trying to avoid cancer in the first place. It isn’t surprising, then, that changes in breast cancer screening recommendations have provoked passionate debate by clinicians, clinical associations, patient advocacy groups, and patients themselves.

Earlier this week the US Preventive Services Task Force (USPSTF) changed the recommended age for routine screening for breast cancer from age 40 to 50. There were other changes too, but that's the most dramatic one.  The USPSTF is the premier body in the United States for screening recommendations.  It greatly influences clinical practice and reimbursement guidelines.  However, its recommendation for change is not accepted by other authoritative organizations like the American Cancer Society (ACS).

I will not second-guess either the USPSTF or the ACS.  What I will do is point out that these recommendations are for large groups of individuals with average risk. If you go beyond the headlines, you will find that the USPSTF guidelines still explicitly recognize that some people have a genetic predisposition to develop cancer, and that the new guidelines do not apply to them.


Most people have heard of “cancer genes” like BRCA1 and BRCA2. Actually, everyone has the BRCA1 and BRCA2 genes. All genes naturally come in a variety of forms called alleles.  Some alleles are harmless.  They lead to no measurable difference, or to differences in cosmetic features like the color of your eyes. Other alleles are more significant, determining how your body processes drugs for example. But the alleles we're worried about here are certain forms of the BRCA1/2 genes that change the chance that you will develop a disease like breast cancer.

The BRCA1/2 genes tend to dominate discussions about breast cancer, but they account for less than 10% of breast cancers.  Just because someone has breast cancer, even at an early age, doesn't mean they have a genetic predisposition.  There are effective tools to identify risks of developing breast cancer, and one of the most useful is a detailed family history.

The advantage of a family history is that it reflects both genetics and environment. It can capture factors such as where you live (with corresponding exposure to environmental pollution), the family dinner table, exercise habits, and other components that increase or decrease risk. It is the interplay of genetics and environment that ultimately determines if you develop cancer.

If the new guidelines leave you feeling exposed, especially if you have a sister or aunt who has had breast cancer, I strongly recommend you assemble a detailed family history. Use a free web tool such as My Family Health Portrait from the US Surgeon General to survey not just breast cancer, but all other cancers, since they are often interrelated. That will capture not only the extent of all cancers in your family, but also critical details like maternal vs. paternal relations and the age of onset of the disease.

With the family history in hand, you and a trusted clinician can determine if additional genetic testing is appropriate. Whether it is or not, the open conversation about clinical risk - in the context of your personal tolerance for more or less testing - will determine if the new USPSTF guidelines are appropriate for you. Having an ongoing dialog with the clinician, and trusting them when they determine if your risk is high, low, or average, puts you back in control.
To follow up on Scott's posting, I'll add that in 2008, ANSI/HITSP developed the IS08 Personalized Healthcare Interoperability Specification for the communication of detailed family histories in response to the Personalized Healthcare Use Case. This specification includes the necessary detail for communicating these family histories in a wide variety of clinical documents. Healthcare providers with access to this information can thus readily identify patients who are at high risk, and act accordingly.

Monday, November 16, 2009

Two books

I just bought myself a netbook.  For the past 3 years the company notebook and the 3 or 4 computers in my house have been sufficient, but now I need a "real" computer of my own, one that can also travel when I do and doesn't need to be wrestled from my wife.

The primary reason is that I'm now seriously considering writing "The" book, and it needs to be done on my own equipment.  "The" book will, of course, be "The CDA" book, but looking over my outline, there's no way I can produce "The CDA" book that I want, so it will have to start off by being "The LITTLE CDA Book" that contains most of what you need to know.  So, it won't go into detail on the inner workings of the IHE PCC Technical Framework, the ANSI/HITSP specifications, or the CCD.  I won't spend a lot of time on CDA history (which I find fun, but most of you may not), but it hopefully will get you sufficiently up to speed on CDA.

I haven't figured out all of the details about it yet.  I'm not sure how to approach the content, but I have one or two working outlines.  I haven't lined up a publisher.  I don't even know where I'll find the time to do it (probably between the hours of crazy o'clock and insane thirty, with occasional stretches to o'dark hundred).  But I've been convinced for quite some time that it is needed, and at the last working group meeting I was arm-twisted into thinking that I could do it.

Why am I telling you this?  I'm setting myself up to succeed by telling you that I'm going to do it.  I'm also looking for your input.  What's needed in the "little CDA book"?  The big one? 

Post your feedback here, or e-mail me (see my e-mail address on the HL7 Structured Documents page).

Moving Forward

Those who cannot learn from history are condemned to repeat it.  -- George Santayana

The resurrection of the four-year-old debate between "CCR" and "CDA" seems to ignore all that has occurred since then.  If we are not careful, we are doomed to repeat our mistakes, and even if we are careful, it would appear that we are at least condemned to repeat the labor leading from our successes.

Do you remember all the hullabaloo in early 2007 celebrating the harmonization of the CCR and CDA into CCD?  As one of the 14 editors of that specification, who worked on it with members of HL7 and ASTM for more than a year, I certainly do.  At the time, it was celebrated as being one of the great successes of harmonization.  Most of us, having achieved the success of CCD, moved on.  We built on that information model to support a truly interoperable exchange for healthcare.  Only now there are some who wish to see that work discarded because "it's not internet friendly".

Lest we forget, there was a lot more than agreement on CCD needed to ensure interoperability.  There are some 80 different value sets from more than 25 different vocabularies that have been incorporated into the standards for the selected use cases.  There's also the necessity to secure the transport of that information through a variety of different topologies.

That took some three years of effort AFTER we resolved the CDA vs. CCR debate with the "both AND" of CCD.  If your definition of BOTH AND has changed (and apparently it has for some), then more work is needed on the CCR half.  We would need to bring CCR up to the same level of interoperability that we did with CCD, and that will require yet more effort.  Frankly, I'd rather spend that time making the existing standards better by taking the learnings from the internet crowd and the health informatics crowd back into the healthcare standards organizations.  That's a BOTH AND that is a step in the right direction, instead of a step backwards.

Thursday, November 12, 2009

Educating the Healthcare Professionals of the Future

One of my favorite activities is engaging with students who are learning about health informatics, and teaching them about what is going on in the healthcare standards space.  Over the past two years I've been fortunate enough to speak at Harvard, the University of Utah, and Northeastern University, and to undergraduate students at Stonehill College.  These opportunities also tickle my funny bone, because many of these organizations would consider me underqualified as a student in their graduate health informatics programs.  I also teach HL7 standards at HL7 Working Group meetings, and have given seminars in person and online on IHE Profiles and HITSP specifications.  I love to teach, and I've been told by professional educators that I'm pretty good at it.

Recently, an educator prompted me to talk about the educational needs of health information professionals.

What do health information professionals need to know?  I recently spoke to a class of undergraduates heading into the health information field at a local college near my home on that very topic.  Healthcare Information professionals need to be able to navigate amongst a complex set of issues including:
  • Technology -- EHR, EMR, HIS, LIS, RIS, PACS and PHR
  • Law and Regulation -- ARRA/HITECH, MMA, CLIA, HIPAA, ICD-10/5010 Regulations
  • Federal, State and Local Agencies
  • Quality and Policy Setting Organizations -- Joint Commission, HIT Federal Advisory Committees, NCQA and NQF
  • Standards, Terminology and Standards Development Organizations
At the very least, in each of these areas, they should be able to identify what or who is important (and not), their purpose, and where or to whom they can go to get more information.

Take a little test: which among these 56 acronyms do you recognize?  Do you know what each stands for, or how it is defined?  Where would you go to get more information about it?  If you get all of them without reference to external resources, I'll be impressed, but I expect you'll be able to figure them all out pretty rapidly through the web.

ANSI   HCPCS      IHE     PHR
ARRA   HIE        ITI     RHIO
ASTM   HHS        ISO     RIS
CDA    HIMSS      JCAHO   SCRIPT
CCD    HIPAA      LIS     SNOMED CT
CDC    HIS        LOINC   TC-215
C32    HITECH     NCPDP   TPO
C83    HITPC      NQF     V2
CMS    HITSC      NCQA    V3
CLIA   HITSP      MLLP    WEDI
CPT    HL7        NEMA    X12N
DICOM  ICD-9-CM   OASIS   XDS
EHR    ICD-10-CM  ONC     4010
EMR    IETF       PACS    5010

The importance of knowing how to find out was highlighted to me several times today in another classroom setting.  A question came up in the class on how one would roll up codes used to represent race and ethnicity.  I pointed out to the students that there is a) Federal policy in this country for representing (and rolling up) this information, and b) excellent reference terminologies that would enable them to correctly answer the question.  I wouldn't expect these information professionals to know that, but I did point out that they need to ask the question "has someone else already addressed this issue?"

Not too long ago, my home state set policy about tracking race information to help determine racial disparities in the delivery of care.  They wanted more detail than the OMB 6 categories, for which I applaud them.  However, after a little digging I learned that the policy had not been informed by some of the existing work being recommended or already adopted at the national level.  They didn't ask the question. 
Why not?  I'm not sure, but I know that those health information professionals (and policy makers) are already overwhelmed with information, and the important stuff, while all on the web, gets lost in the noise.

So, the first skill that health information professionals need to learn is not any particular set of standards, agencies, or policies, but rather critical skill in information retrieval.  How can I find out what is important?  Where are good sources of information?  How do I develop reliable sources of my own?  I was asked three questions today about HL7 Version 2 that I didn't know the answers to, but I had the answers within an hour.  They need to be able to demonstrate that skill.

Being able to read critically and identify salient points quickly is a crucial skill that we simply don't teach people.  In addition to finding relevant information, they also need to learn how to plow through it.  In one week I read three different versions of HITECH.  It wasn't fun, but it was necessary.  Many of you have slogged through HITECH, HIPAA, ARRA, MMA or other legislation or regulations that impact our field, or waded through some recent medical research, or read and commented on recently published specifications.  Ask yourself: did anyone ever teach you how to go through those documents?  There are education programs that teach these skills, but none that I've ever encountered in my own educational experience.  Can you spend an hour with a 100-page document and identify the top 10 issues you need to be aware of?  Health information professionals need to be able to demonstrate that skill as well.

Finally, the last skill is being able to communicate clearly and simply.  So much of what health information professionals do involves teams of people with a variety of very different and complex skills.  These teams range from doctors dealing with highly specialized medical knowledge, to medical researchers dealing with new drug pathways and a complicated array of regulations for clinical trials, to the billing specialist dealing with financial transactions among a half dozen different agencies all responsible for some portion of a patient's bill, to the IT staff dealing with a spaghetti network of technologies that all need to interconnect.  I can dive down into gobbledygook with the best of the geeks out there, but where I show real skill is when I can explain some of that gobbledygook to a C-level executive.  Teaching Simple English to healthcare professionals is a pretty good idea.  I think it should be a required course for anyone who has to write specifications, policy, contracts, proposals, requests for proposals, laws or regulations on any topic (and it would go a long way towards making all of our lives easier).

There are many other skills that a health information professional needs, and they are important.  But those three skills (information retrieval, critical reading, and simple communications) are fundamental for any information professional.  This is especially true for those working in healthcare.

Wednesday, November 11, 2009

IHE PCC Selects Profiles for 2010-2011 Season

IHE Patient Care Coordination met today and yesterday to review profile proposals that will be developed for the 2010 - 2011 season.  We selected 5 work items to move forward, with a sixth likely to be committed to in the next three weeks.
  1. Nursing Summary/Perioperative Plan of Care
  2. Perinatal Workflow
  3. Post-Partum Visit Summary
  4. Newborn Discharge Summary
  5. Completion of the APR/LDR profiles
  6. Chronic Care Coordination (under review, final decision in 3 weeks)
The selection of these work items is the culmination of a three-month process of proposals, multiple levels of review, and consolidation and scoping exercises held over the course of three teleconferences and two two-day face-to-face meetings.  The final selection was again by consensus, with the only formal votes being taken to approve the committee consensus to move these items forward.

We see in these proposals several exciting developments for Patient Care Coordination.  While we had some communication bobbles on the Chronic Care Coordination profile, we are excited that this work comes to us from Australia, with the support of others in the UK.  Clearly we'll have some time zone challenges, but this will certainly strengthen our domain.

Several new members joined us based on their interest in the Nursing activities and in the Newborn Discharge Summary.  We are very excited to be moving into the pediatric realm.  Finally, we expect this year to finalize the work we have been engaged in over the past three years in perinatal care.  I see the culmination of that work appearing in our first "workflow" profile, which joins many pieces together from multiple IHE domains into a workflow that supports the process of caring for a mother-to-be.

Finally, this was my first technical meeting where I wasn't a TC cochair.  I must admit I enjoyed being able to sit back and watch others put their stamp on PCC, and help us develop new and better processes for communicating amongst ourselves and with our audiences.

We also confirmed our schedule for the year, below:
  • February 1-4, 2010 (Decide Technical Direction of Profiles)
  • April 26-29, 2010 (Prepare profiles for public comments)
  • July 12-15, 2010 (Prepare profiles for trial implementation)
Finally, we hope to hold an open PCC planning meeting for HIMSS 2010 attendees, where we can tell more providers about what we are doing, and solicit their input. I'll provide more details on that meeting as they solidify, and I hope to see you there.

Keith

Taking cost out of the system

A lot of the work I've been doing is focused on taking costs out of healthcare.  One of the principles I try to apply rigorously is to ensure that the largest cost burdens are borne by the systems at the center.  Imagine that you have 100 systems connecting to one central hub.  Imagine further that some complex processing needs to take place during communications between those systems and the hub.  Where do you put the expensive node?  Why, at the center, of course.  Similarly, you avoid trying to change workflows at the edges, because those changes also incur costs.

Yet when we talk about quality reporting, most of the quality reporting initiatives put the burden at the edge, and everyone reports nicely computed measures to the center.  Instead of incurring costs at a few centralized hubs, providers at the edge are incurring pretty substantial costs (see Cost to Primary Care Practices of Responding to Payer Requests for Quality and Performance Data in the Annals of Family Medicine).

What if, instead of reporting the measures, we reported the values that went into the measurement, using existing workflows?  What if the centralized hubs were responsible for computing the measures based on the "raw data" received?  Yes, the centralized hubs would need to do a lot more work, BUT, even if that work were two or three orders of magnitude larger than it is today, the number of edge systems is five to six orders of magnitude larger than the number of central hubs.  If you have 100,000 systems communicating with you, it's certainly in your best interest to make "your job" simpler and easier and reduce your costs.  But if you are a centralized system, and "your job" also includes paying for 60% of healthcare costs, then you have a different economy to consider.  The costs incurred at the edge don't impact you today, but they will indirectly impact your bottom line tomorrow.
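To put rough numbers on that intuition, here's a back-of-the-envelope sketch in Python.  Every unit cost in it is invented purely for illustration; only the orders of magnitude matter:

```python
# Back-of-the-envelope comparison: computing measures at the edge
# vs. reporting raw data and computing at the hub.
# All unit costs are hypothetical, chosen only to show the asymmetry.

EDGE_SYSTEMS = 100_000   # providers reporting quality data
HUBS = 1                 # centralized receiver (e.g., a payer)

# Approach 1: every edge system implements measure computation.
edge_unit_cost = 10_000  # hypothetical per-practice cost to build and validate
total_edge_approach = EDGE_SYSTEMS * edge_unit_cost

# Approach 2: edges report raw data through existing workflows;
# the hub does the (much harder) computation once.
raw_reporting_cost = 500        # small per-edge cost to send raw data
hub_cost = 10_000 * 1_000       # even three orders of magnitude more work
total_hub_approach = EDGE_SYSTEMS * raw_reporting_cost + HUBS * hub_cost

print(f"compute at the edge: ${total_edge_approach:,}")  # $1,000,000,000
print(f"compute at the hub:  ${total_hub_approach:,}")   # $60,000,000
```

Even when the hub's job is made a thousand times harder, the sheer number of edge systems dominates the total.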

The HL7 QRDA specification goes a long way towards relating the data used to compute quality measures back to the data used in Electronic Medical Record systems.  However, it still requires more effort at the edge than some other approaches, because it still requires computation at the edge.  It also needs to be built upon a foundation designed for quality reporting rather than clinical documentation.

The HL7 eQMF specification strikes at the problem from a different angle.  This specification should be able to:

a) Define the raw data needed to compute measures,
b) Specify how the measures themselves are computed.

If it performs both of these functions, then electronic medical record systems should be able to report the "data they have" to systems that can compute quality measures.  This should result in a far lower implementation burden than trying to get thousands of different organizations to implement and report on these computations, and it will also help to stabilize the measures.  The measures will all be computed the same way from the raw data, so variations in how a measure is interpreted are eliminated or dramatically reduced.  This should result in even better (or at least more consistent) measures.
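Here's a minimal sketch of what that centralized computation could look like.  The measure, field names, and data are all invented for illustration; a specification like eQMF would express the measure declaratively rather than in code:

```python
# Hypothetical quality measure computed centrally from raw data
# reported by edge systems. The measure definition plays the role
# envisioned for eQMF: it names the raw data needed (problems,
# last HbA1c) and specifies how the measure is computed.

def diabetic_a1c_controlled(patients):
    """Share of diabetic patients whose last HbA1c was under 8.0%."""
    denominator = [p for p in patients if "diabetes" in p["problems"]]
    numerator = [p for p in denominator if p["last_hba1c"] < 8.0]
    return len(numerator) / len(denominator) if denominator else None

# Raw data as reported by edge systems -- "the data they have".
raw_data = [
    {"id": 1, "problems": {"diabetes"}, "last_hba1c": 7.2},
    {"id": 2, "problems": {"diabetes", "hypertension"}, "last_hba1c": 9.1},
    {"id": 3, "problems": {"asthma"}, "last_hba1c": 5.4},
]

# Computed once, the same way, at the hub -- no edge-by-edge variation.
print(diabetic_a1c_controlled(raw_data))  # 0.5
```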

IHE has developed a profile for Care Management that could readily support the reporting of the raw data (ok, so it is HL7 Version 3, SOAP and Web Services based, but that IS another discussion).  The missing specification in that profile is the one that tells it what data needs to be reported.  That could easily be eQMF.  I live in hope.

Monday, November 9, 2009

HITSP ANNOUNCES Public Comment Period on 41 Specifications


The Healthcare Information Technology Standards Panel (HITSP) announces the opening of the public comment period for the following Interoperability Specifications (IS), Capabilities (CAP), Requirements Design and Standards Selection (RDSS) and other construct documents (see below). The public comment period on these documents will be open from Monday, November 9th until Close of Business, Friday, December 4th. HITSP members and public stakeholders are encouraged to review these documents and provide comments through the HITSP comment tracking system at http://www.hitsp.org/.

  • RDSS157 - Medical Home
  • IS06 - Quality
  • IS92 - Newborn Screening
  • IS158 - Clinical Research
  • CAP99 - Communicate Lab Order Message
  • CAP117 - Communicate Ambulatory and Long Term Care Prescription
  • CAP118 - Communicate Hospital Prescription
  • CAP119 - Communicate Structured Document
  • CAP120 - Communicate Unstructured Document
  • CAP121 - Communicate Clinical Referral Request
  • CAP122 - Retrieve Medical Knowledge
  • CAP123 - Retrieve Existing Data Related Constructs
  • CAP126 - Communicate Lab Results Message
  • CAP127 - Communicate Lab Results
  • CAP128 - Communicate Imaging Reports
  • CAP129 - Communicate Quality Measure Data
  • CAP130 - Communicate Quality Measure Specification
  • CAP135 - Retrieve Pre-Populated Form for Data Capture
  • CAP138 - Retrieve Pseudonym
  • CAP140 - Communicate Benefits and Eligibility
  • CAP141 - Communicate Referral Authorization
  • CAP142 - Retrieve Communications Recipient
  • CAP143 - Consumer Preferences and Consent Management
  • TP13 - Manage Sharing of Documents
  • TP20 - Access Control
  • TP50 - Retrieve Form for Data Capture
  • T68 - Patient Health Plan Authorization Request and Response
  • TP22 - Patient ID Cross-Referencing
  • T23 - Patient Demographics Query
  • C34 - Patient Level Quality Data Message
  • C80 - Clinical Document and Message Terminology
  • C83 - CDA Content Modules
  • C105 - Patient Level Quality Data Using HL7 Quality Reporting Document Architecture (QRDA)
  • C106 - Measurement Criteria Document
  • C151 - Clinical Research Document
  • C152 - Labor and Delivery Report
  • C154 - Data Dictionary
  • C156 - Clinical Research Workflow
  • C161 - Antepartum Record
  • C163 - Laboratory Order Message
  • C164 - Anonymize Newborn Screening Results
All comments received on these documents will be reviewed and dispositioned by the appropriate Technical Committees/Tiger Teams.  The comments will be used to inform the ongoing process of standards selection and Interoperability Specification construct development.

HITSP members and public stakeholders are encouraged to work with the Technical Committees/Tiger Teams as they continue the process of standards selection and construct development. If your organization is a HITSP member and you are not currently signed up as a Tiger Team or Technical Committee member, but would like to participate in this process, please register here: http://www.hitsp.org/membership.aspx.

Thursday, November 5, 2009

Synthesis

‘When I use a word,’ Humpty Dumpty said, in a rather scornful tone, ‘it means just what I choose it to mean, neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master – that’s all.’
-- Lewis Carroll, Through the Looking-Glass
It's been interesting to read the discussions around REST vs. SOAP in the blogosphere this week shift towards HTTP and HTML, or device-based connectivity.  See blog posts from John Halamka, Sean Nolan, and Wes Rishel.  My head exploded with insight -- and the sleep that I promised myself has gone by the wayside.

I'm a web, HTML and XML geek from way back.  In 2001 I claimed 7 years of experience with XML (a test my employer passed).  I've got dog-eared and rather aged copies of the HTTP specifications (as well as the HTML and XML specs) sitting on my shelf.  In the thirty years since the development of the OSI seven-layer model, we've seen a shift in how we view HTTP.  Most mappings of the web stack treat HTTP as an "application layer" protocol, but SOAP, REST, Web Services and Web 2.0 seem to have driven it down the stack to "transport" by layering yet more on top of it.

The complexity that has been labeled "SOAP" in all these discussions is not SOAP at all, but rather the information models carried over SOAP.  There's an important difference between the information models that SOAP and RESTful implementations offer that needs to be considered.  These models, by the way, are not demanded by SOAP or REST; they just happen to be broadly adopted models that are often associated with these different protocols.

What REST implementations typically offer that SOAP implementations typically do not is something that HL7 geeks will recognize as a "model of use".  Models of use offer up business-friendly names and representations for sometimes fairly complex semantic constructs (and they do it compactly).  The business concepts map closely to the Business Viewpoint of the HL7 SAEAF model.

What ebXML, HL7 Version 3, and similar protocol specifications offer up through SOAP that REST implementations do not is a model of meaning.  The model of meaning maps closely to the Information Viewpoint of the HL7 SAEAF model.  Models of meaning contain a lot more explicit information, but they are bigger and harder to understand.  They become a language in which one must express the meaning of "simple" business concepts (although I note those concepts are not really all that simple).

Models of use are easy for people to understand, and easy to perform simple but often very useful computations with (e.g., a pretty UI).  Models of meaning make it easy to perform complex and often revealing computations (e.g., clinical decision support).  Geeks like me who've been immersed in various models of meaning don't have large problems speaking those languages and crossing between them, but trying to teach people new languages is rather hard after a certain age.  I seem to have a knack for computer languages that I just wish applied to spoken ones.

The benefits of models of use are conciseness and direct applicability to business processes, but crossing "model of use" boundaries often requires a great deal more translation (e.g., from clinical to financial).  That's because the concepts communicated in a model of use assume a great deal of implicit domain knowledge.  The domain knowledge hidden in the model of use is what makes translation hard.

The benefit of models of meaning is the explicit representation of domain knowledge using a controlled information model.  All the possible semantic relationships are explicitly stated and controlled.  This simplifies translation between different models of meaning, because one can work at the more atomic level of the controlled information model.  This is why (computer or human) language translators build "parse trees" first, and translate from those "models of meaning".  Models of meaning are also more readily marshalled into data storage systems.

The importance of models of meaning in healthcare IT comes into play when we start talking about clinical decision support.  I illustrated one such example in Gozinta and Gosouta back in August.  In short, the "model of use" described in a guideline needs to be translated into a "model of meaning" representation in order to compute the guideline through a decision support rule.
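To make the distinction concrete, here's a toy sketch of that translation.  Every structure and name in it is invented for illustration; real CDA or V3 models are far richer:

```python
# Toy "compiler" from a compact model-of-use statement to a more
# explicit model-of-meaning entry. All structures here are invented.

# Model of use: business-friendly and compact, with lots of
# implicit domain knowledge (status? negation? substance role?).
use_statement = {"allergy": "penicillin"}

def compile_to_meaning(stmt):
    """Make every semantic relationship in the statement explicit."""
    return {
        "class": "Observation",
        "code": {"system": "SNOMED CT", "concept": "allergy to substance"},
        "participant": {"role": "substance", "name": stmt["allergy"]},
        "status": "active",   # implicit in the model of use
        "negated": False,     # also implicit
    }

meaning_entry = compile_to_meaning(use_statement)

# A decision support rule can now compute over the explicit structure:
if (meaning_entry["code"]["concept"] == "allergy to substance"
        and meaning_entry["participant"]["name"] == "penicillin"
        and not meaning_entry["negated"]):
    print("alert: do not prescribe penicillin-class antibiotics")
```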

So, I think I've successfully convinced myself that we need both models of use and models of meaning in the HIT standards space.  The simple, business-oriented representations are needed to make implementations easier for engineers.  The more complex information models are needed to compute with.

I think I see a way through the muddle, but it will take some time.  The right solution will not be to simply adopt the first model of use that comes to us.  We will need to put some thought into it.  I believe we can make progress towards an answer that could come into use in 2013 (or earlier), and that would be easily adaptable to solutions deployed for 2011.

But if we move towards a model of use in communication patterns, we run into a translation problem that someone has to address.


In a nutshell, WE need to fully specify (in a normative way) translations from model of use to model of meaning and back.  The former is easy (with a common model of meaning), the latter more difficult.  Compilers are easy (use to meaning), but decompilers are hard (meaning to use).  When I say WE, I've got all my big hats on: HL7, IHE and HITSP.  And we need to agree on a common model of meaning (and this we is the SCO, for which I have no hat).  The HL7 RIM is a really good start for a reference information model in healthcare (Wes and I both know that you can say almost anything in HL7 V3, and I have the V3 model to prove it).

Having a common reference model provides the interlingua that will truly allow for interoperable healthcare standards.  If every model of use can be expressed in one (and nearly only one) model of meaning based on a common reference model, then translation between the models of use becomes a real possibility.  I know I can translate the "transports" that we've all been talking about into a model of use that would make a number of naysayers really happy.
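A sketch of that interlingua idea, again with entirely hypothetical structures: two different models of use pivot through one common model of meaning, and the "decompile" direction is where the work is, because the target model's conventions must be supplied:

```python
# Two hypothetical models of use pivoting through one common model
# of meaning. Every structure and field name here is invented.

def clinical_use_to_meaning(stmt):
    """'Compile' a compact clinical statement into the interlingua."""
    return {"concept": "diagnosis", "code": stmt["dx"]}

def meaning_to_billing_use(meaning):
    """'Decompile' into a billing model of use -- the hard direction,
    because the target's conventions and defaults must be supplied."""
    return {"reason_code": meaning["code"], "units": 1}

clinical = {"dx": "E11.9"}                   # type 2 diabetes (ICD-10-CM)
common = clinical_use_to_meaning(clinical)   # into the common model of meaning
billing = meaning_to_billing_use(common)     # out into another model of use
print(billing)                               # {'reason_code': 'E11.9', 'units': 1}
```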

There's also a way to use the same WSDL to enable either SOAP or RESTful transports, which makes interfacing a lot easier and more negotiable.  The last problem is how to secure all of this RESTfully, which I'm somewhat unsure of.  I'm not sure it's safe to leave that in the hands of the giants that gave us SOAP and WS-* (and insisted on XDS.b), but maybe they've learned their lesson.

There's a lot more engineering needed to really make this work, and this blog posting is already too long to go into all the details.  The solution isn't simple (making hard problems easy never is), and it needs to address a lot of different business considerations.  There's also a need to address the migration issues for the current installed base (not just one, but at least 10 different HIEs in the US are using the HITSP protocols, many in production, and that doesn't count the Federal agencies; a heck of a lot more internationally have been using the IHE specifications upon which the HITSP protocols are built for even longer).

My main concerns about all of this discussion are CHURN and disenfranchisement.  Over the past five years we've taken huge steps forward, and this seems like a big step backwards.  It may be a step backwards that prepares for a huge leap ahead, and because of that, I'm willing to engage.  I get what REST can do (this blog and my whole standards communications campaign are built on RESTful protocols).  The concern about disenfranchisement stems from the suggestion that a group of uber-architects could do this quickly and outside the bounds of the governance models that organizations like IHE, HITSP and HL7 impose.  If this is to work, it needs the buy-in of those organizations and their constituencies.  It needs to have two key goals: simplicity, and compatibility with the industry investments of the last five years.  XML was a three-year project that replaced SGML and changed the world.  It had those same two key goals.

If we can synthesize models of meaning and models of use together, we will truly have a model of meaningful use.

I'll probably get a heap of flak for this post tomorrow (or at least for the pun), but what can I say?