Thursday, September 30, 2010

Provider Directories

This isn't a topic I would usually write on, but given other absences, I got tapped to sit in on the HIT Policy Committee Information Exchange Workgroup deliberations and testimony on Provider Directories.  I missed the first panel's testimony because my flight was delayed for policy reasons (the crew needed sleep, a good policy).  Of course that delay due to policy also put me in the middle of rush-hour DC traffic and rain, so a 40-minute drive from Dulles took an hour and 40 minutes.

There was quite a bit of lively discussion, and you can access all the written testimony here, including mine.  [Note:  That link will get stale, so if you are looking at this after October 2, look here under September 30th, and at that point you should also get the oral testimony and Q&A].  I spent three hours carving my written testimony down to 4.5 minutes of oral remarks, which I rehearsed three times yesterday, and then wound up using only as notes for what I actually said, unrehearsed, because of what I'd heard during the day.

Now, for my OWN thoughts on this meeting, and I do mean my own, because as always, the comments on this blog represent my own opinions and not those of my employer or the standards organizations that I may represent.


There are two orthogonal axes by which I could characterize the MANY different kinds of directories discussed:

1.  Who/What is using the Directory to Communicate
2.  What purpose the communication serves.

On the who/what:  It's mostly either human-to-human, or computer-to-computer.  Very different use cases, with very different requirements.

On the purpose, it's either for treatment (e.g., ePrescribing, referral, results delivery, or other communication of clinical data to providers), payment (another very big swath), or for operations (quality management...) [which also was not discussed very much at all, although I did allude to it briefly].

Doing the math, this is a 2x3 grid, so I can identify at least 6 different kinds of directories.  For the most part, the testimony used two different terms:  Yellow Pages, and routing.  By the end of the meeting, both terms had been called into question, in part because they sometimes referred to one of the 6 kinds I identified, and at other times to a different one.  MOSTLY, but not always, yellow pages fell into the human-to-human category, and were principally addressed in payment.  CAQH's work on the Universal Provider Datasource (UPD) was mentioned numerous times throughout the day, as were several directories used by CMS.  The "Routing" directory fell into the computer-to-computer column, about a 50/50 split between treatment and payment.
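To make the grid concrete, here's a trivial sketch (Python; the labels are mine) that enumerates the six combinations:

  from itertools import product

  # The two axes described above; labels are mine.
  actors = ("human-to-human", "computer-to-computer")
  purposes = ("treatment", "payment", "operations")

  # 2 x 3 = 6 distinct kinds of directory, each with different requirements.
  for actor, purpose in product(actors, purposes):
      print(f"{actor} directory used for {purpose}")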

CDC had a very interesting use case for directories, which was communication of information TO providers, like a public address system used to let everyone know about public health alerts.  I've been working with some folks to TRY to turn that sort of use on its head, because the number of sources (and thus the need for directory updates) for alerts is much smaller than the consumer audience for them.  That use case was interesting because they need good recall (as many provider addresses as they can get), but precision is not as vital.
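For those who don't live in information retrieval, a quick sketch of what recall and precision mean here, with made-up numbers:

  # Hypothetical numbers for a public health alert broadcast.
  relevant = 10000            # providers who should receive the alert
  retrieved = 12000           # addresses the directory actually returned
  relevant_retrieved = 9500   # correct addresses among those returned

  recall = relevant_retrieved / relevant      # 0.95: most providers reached
  precision = relevant_retrieved / retrieved  # ~0.79: some waste, tolerable here
  print(recall, precision)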

I also heard repeatedly that the need was for "ROUTING" directories, and not yellow pages.  Since I was at the very end of the line, I couldn't correct that terminology soon enough.  It's not about routing.  It IS about getting access to the services needed to support computer-to-computer communication, which includes authentication, policy support (certificates), and end-point discovery.  All this talk about Web 2.0 and we are still thinking about policy for routing messages.  That really needs to change.

I heard a number of dings on HL7, but in reality, most I heard were due to the fact that some in the room didn't understand it (e.g., OBX structures), and others HAVE not chosen to implement or constrain it appropriately.

At least one thing everyone agreed on was that we need standards for core directory content.  That's actually pretty easy, because you can just examine key fields in standards like HL7, NCPDP, and X12, and requirements of them in selected guides (e.g., CAQH/CORE or HL7 V2 guides for ELR, Immunization, or CCD) to see what that common set should be.  To avoid argument about whether it is core or not, I propose a very simple rule.  If a non-technical person can recognize the content as being the same thing in two or more of the standards, then it is a candidate for the core set.  If it appears in all, then it is certainly a core component.  A lot of stuff won't show up, but that should be OK, because this SHOULD be an iterative process.  How do you begin a journey of a million steps?  By taking the first one.
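As a sketch of that rule in set arithmetic (the field names below are illustrative, not drawn from the actual standards):

  # Fields recognizably "the same thing" in each standard (illustrative only).
  fields = {
      "HL7":   {"provider name", "provider id", "address", "phone", "specialty"},
      "NCPDP": {"provider name", "provider id", "address", "phone", "DEA number"},
      "X12":   {"provider name", "provider id", "address", "taxonomy code"},
  }

  sets = list(fields.values())
  # Candidate core: recognizable in two or more standards.
  candidates = {f for i, s in enumerate(sets) for t in sets[i + 1:] for f in s & t}
  # Certain core: appears in all of them.
  core = set.intersection(*sets)
  print("candidates:", sorted(candidates - core))
  print("core:", sorted(core))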

I was amazed by the continual references to the Internet as if it appeared by magic overnight, rather than being developed over the course of the last 5 decades, and in commerce, really only the last two. The web didn't happen overnight, and not all of these problems will be solved overnight either. Most of those Internet standards we laud are on their 4th or 5th iteration.


A number of commenters reported that there WERE NO standards for this. I pointed out that in fact there are, and by the way, this problem is not one experienced just in healthcare.


The other part that folks seem to agree upon is that this cannot be centrally controlled, and that the technology should support federation (Hey, look at what those internet guys came up with!). 

Walter Suarez summed it up very well.  There is a LOT to take in, and we need to be focused.  I'd go even further: use the 2 axes I described, prioritize the bunch, then pick the two with the biggest ROI.  Because, as I pointed out over lunch, good policy has to be implementable, and implementable also means sustainable.  We cannot afford to execute on every good idea without understanding both the benefits and costs.

Oh, and by the way, if this is focused on meaningful use, I'd tend towards treatment and operations (quality), rather than payment, and computer-to-computer, but you know, that is just MY reading of the policies behind Meaningful Use.

So much for not writing today, I'm exhausted but have another 25 minutes before my flight boards.  Next up, an analysis of the ISDS work I posted on between tweets and panels.

Good and Fast work, but BUSTED Process

This is the Preliminary EHR Data Requirements for Syndromic Surveillance from ISDS.

Page 6 sections 1.2 Meaningful Use Workgroup Charter and 1.2.2 Membership are the most interesting to me, quoted below, emphasis mine:

1.2 Meaningful Use Workgroup Charter

By January 2011, the Workgroup will recommend a contemporary business model of syndromic surveillance and its core data requirements, using a community consensus-driven process.
...
1.2.2 Membership

The eight-member group consists of syndromic surveillance experts from state and local health departments from across the U.S. Workgroup members are actively engaged in day-to-day system operations and are developing or implementing health information exchange technologies.

In HITSP days, a definition like this, of a consensus group that omits consumers of a specification, users of products that need to implement it, and the vendors required to implement it, would get the second worst marks possible during "Tier 2" review; the only thing worse was a proprietary specification.

The principle of balance is completely lost here.  It may be good, fast work, and on a 30-second skim, it even looks pretty much like what HITSP came up with after months of deliberation.

BUT! The process is BROKEN, and the deadlines here will NOT WORK for Meaningful Users.




Wednesday, September 29, 2010

Sparks

The HL7 ballot closed on Monday this week.  I managed to finish voting on all the ballots I signed up for, thanks to my knowledge of a few balloting techniques.  Last Friday I was asked to testify to the HIT Policy Information Exchange Workgroup on Provider Directories, so there likely won't be a post tomorrow.

Recently I've been going back to review where we've been for the past decade+ since To Err is Human and Crossing the Quality Chasm.  This review of history shows that while an overall strategy isn't apparent, there really is one, as I alluded to in a previous post.  What is missing here is what we call in software development a problem of traceability to the requirements.  The two IOM reports identify significant problems and provide some strategies to solve them, but you won't find either as footnotes in the ARRA/HITECH laws, the related regulations, or a lot of other places.

This lack of a written overall strategy leads to a problem in defining scope in many places.  There are a thousand flowers blooming: projects demanding attention, many of which could wilt for lack of focused care and attention (and some arguably even should).  We need, as Doug Fridsma indicates in the ONC S&I framework, a way to focus.  These reports and their recommendations could be used as organizing principles (and in many ways already have been), especially if we make the links explicit.

Dr. Fridsma, by the way, will be meeting with the ITS and Structured Documents workgroup members at the HL7 Working Group meeting next week.  He has some questions for HL7, and we obviously have a few points that we'd like to make to him as well.

That reminds me that I need to practice my Ambassador presentation on CCD that I'll be giving on Monday at that event.  HL7 has the whole Monday afternoon devoted to free presentations on HL7 Standards for Meaningful Use, available to all meeting attendees (you do have to pay for meeting registration).  I'll also be teaching a class on the HL7 Continuity of Care Document Thursday afternoon.

By the way, if you missed the free HL7 webinar I did on the CCD, you can at least see the Q&A I posted on this blog.  While this presentation wasn't recorded, others may be in the future, and you can always ask HL7 for an ambassador to present for your group or event.

I'll be putting together a more detailed post on my findings after I analyze the two IOM reports, but I also have to be careful that I don't give away my next homework assignment to budding informaticists.  You see, my review was sparked not by thinking that this was a solution to my problem, but rather by a discussion I had with a BU professor who wants me to spend a day with her class.  It was only after I started my review that I realized how it could help bring focus to current events.  As always, when two seemingly unrelated things connect, sparks fly.

Tuesday, September 28, 2010

Some Sample Messages for Disease Surveillance

Someone recently asked me if I had sample messages for Disease Surveillance.  I already gave you a recipe, and I don't usually go out of my way to create or supply sample data (but will reference what I know is available), but this request helps me to prove a point.

Going to the Certification Rule (45 CFR Part 170), specifically section 302(l) we see:
§170.302(l)
Public health surveillance. Electronically record, modify, retrieve, and submit syndrome-based public health surveillance information in accordance with the standard (and applicable implementation specifications) specified in §170.205(d)(1) or §170.205(d)(2).

Back once more to 205(d) (1) or (2)...
§170.205(d)
Electronic submission to public health agencies for surveillance or reporting.
(1) Standard. HL7 2.3.1 (incorporated by reference in §170.299).
(2) Standard. HL7 2.5.1 (incorporated by reference in §170.299). Implementation specifications. Public Health Information Network HL7 Version 2.5 Message Structure Specification for National Condition Reporting Final Version 1.0 and Errata and Clarifications National Notification Message Structural Specification (incorporated by reference in §170.299).

So, as you can see, if you use HL7 Version 2.3.1, you have choices, and if you use 2.5.1, you are supposed to use this other guide which ONC has acknowledged is incorrect.

If you want to implement today under the rule, and you don't want to choose a path that you know will change, you would be likely to choose §170.205(d)(1).
But now you need to come up with some content to send, and I have a suggestion for how to do that too.

So, if you look at page 16 of the HITSP C39 specification, you will see three examples, which are reproduced below under the ANSI terms of copyright found in the C39 specification:




© 2009 ANSI. This material may be copied without permission from ANSI only if and to the extent that the text is not altered in any fashion and ANSI’s copyright is clearly noted.
Guidelines and Examples
This is an inpatient admit message that contains Chief Complaint and Admitting Diagnosis data elements.
MSH|^~\&|SendingApp^‹OID›^ISO|SendingFac^‹OID›^ISO|ReceivingApp^‹OID›^ISO|ReceivingFac^‹OID›^ISO|2007509101832132||ADT^A01^ADT_A01|200760910183213200723|D|2.5
EVN||2007509101832132
PID|1||P410000^^^&‹OID›&ISO||””||196505|M|||^^^OR^97007
PV1|1|I||||||||||||||||||||||||||||||||||||||||||200750816122536
PV2|||^^^^POSSIBLE MENINGITIS OR CVA
OBX|1|NM|21612-7^REPORTED PATIENT AGE^LN||40|a^Year^UCUM|||||F
DG1|1||784.3^APHASIA^I9C||200750816|A
DG1|2||784.0^HEADACHE^I9C||200750816|A
DG1|3||781.6^MENINGISMUS^I9C||200750816|A
This is a discharge message where the patient has expired.
MSH|^~\&|SendingApp^‹OID›^ISO|SendingFac^‹OID›^ISO|ReceivingApp^‹OID›^ISO|ReceivingFac^‹OID›^ISO|2007709101832133||ADT^A03^ADT_A03|20077091018321330025|D|2.5
EVN||2007509101832133
PID|1||P410003^^^&2.16.840.1.114222.4.3.2.1&ISO||””||193707|M|||^^^OR^97005
PV1|1|I||||||||||||||||||||||||||||||||||20|||||||||200770908122522|200770910122522
PV2|||535.61^DUODENITIS W/HEMORRHAGE^I9C
DG1|1||0005.0^STAPH FOOD POISONING^I9C||200750816|F
DG1|2||535.61^DUODENITIS W/HEMORRHAGE^I9C||200750816|F
DG1|3||787.01^NAUSEA WITH VOMITING^I9C||200750816|F
This is an A08 Patient Information Update message used to convey additional clinical information as OBX segments.
MSH|^~\&|SendingApp^‹OID›^ISO|SendingFac^‹OID›^ISO|ReceivingApp^‹OID›^ISO|ReceivingFac^‹OID›^ISO|2007509101832133||ADT^A08^ADT_A01|20075091019450028|D|2.5
EVN||2007509101832133
PID|1||P410005^^^&2.16.840.1.114222.4.3.2.1&ISO||””||198805|F||2106-3^White^2.16.840.1.113883.6.238^W^White^L|^^^OR^97006
PV1|1|E||||||||||||||||||||||||||||||||||||||||||200750910182522
PV2|||^^^^SOB, looks dusky and is coughing up blood. States has just gotten over the measles.
OBX|1|TS|11368-8^ILLNESS/INJURY ONSET DATE/TIME^LN||2007509092230||||||F
OBX|2|NM|8310-5^BODY TEMPERATURE^LN||101.3|[degF]^^UCUM|||||F|||200750816124045
OBX|3|SN|35094-2^BLOOD PRESSURE PANEL^LN||^140^/^84|mm[Hg]^Millimeters of Mercury^UCUM|||||F
DG1|1||055.1^POSTMEASLES PNEUMONIA^I9C||200750816|W
DG1|2||786.09^DYSPNEA/RESP.ABNORMALITIES^I9C||200750816|W
DG1|3||786.3^HEMOPTYSIS^I9C||200750816|W
DG1|4||782.5^CYANOSIS^I9C||200750816|W


To end this do-it-yourself guide, you need to make these messages version 2.3.1 compliant.  As far as I know, all that takes is changing the last part of the first line containing "|D|2.5" to "|D|2.3.1", and replacing the various occurrences of ‹OID› with an appropriate value for the sending or receiving application identifiers. Voilà, you have a set of sample messages that support Disease Surveillance that seem to meet the requirements under the rule.
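If you'd rather not do that by hand, here is a minimal sketch of the same edit (Python; the file names and the OID value are placeholders, not registered identifiers):

  # Downgrade the MSH version field and fill in placeholder OIDs.
  PLACEHOLDER_OID = "2.16.840.1.999999.1"  # hypothetical; use your own registered OID

  with open("c39_samples.hl7") as f:
      text = f.read()

  text = text.replace("|D|2.5", "|D|2.3.1")       # MSH version id
  text = text.replace("‹OID›", PLACEHOLDER_OID)   # sending/receiving identifiers

  with open("c39_samples_v231.hl7", "w") as f:
      f.write(text)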

The final test would of course be to develop software that can be certified, but let's take the pre-test as it were.  The approved certification test methods are specified by NIST; you can find the specifics for this criterion here.

And here is what that guide has to say about test data on Page 2:
HL7 v2.3.1 conformance is evaluated in terms of the relevant conformance statements in the HL7.2.3.1 standard based on the specific message type(s) submitted by the Vendor. Since no implementation guide has been specified, Vendors may select the message type(s) they wish to submit. The Vendor supplies the test data for this test.

Then we have the test procedure on page 3, with my emphasis on how to address the results:
  • Submit – evaluates the capability of the EHR to electronically generate the Vendor-selected syndromic surveillance information in a conformant HL7 v2.3.1 or v2.5.1 message

    • The Vendor identifies the version of HL7 to be used for this test (HL7v2.3.1 or v2.5.1). If v2.5.1 is selected, the Vendor also selects a Message Mapping Guide
    • The Vendor instantiates the Vendor-supplied test data in the EHR
    • Using EHR function(s) identified by the Vendor, the Tester verifies the presence of the test data in the EHR, generates the syndromic surveillance message and verifies that the message is conformant to the selected HL7 standard and, if applicable, the PHIN implementation guide and case notification message mapping guide.
Finally, we look at the evaluation criteria on page 4.
Inspection Test Guide – HL7 v2.3.1

IN170.302.1 – 1.01: Tester shall verify that the message is conformant with the HL7v2.3.1 for the message type and message segments generated by the EHR. The Tester may utilize an automated test tool or conduct a visual inspection of the message to conduct the evaluation. The Tester shall only evaluate those items identified as “R=Required” in HL7v2.3.1.
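If you want a rough pre-test before involving a Tester, here is a sketch of the sort of required-field check you could automate (the fields checked are my own selection, not NIST's list):

  # Crude sanity check of a couple of required MSH fields (my selection, not NIST's).
  def check_msh(message):
      msh = message.splitlines()[0].split("|")

      def field(n):
          # MSH-1 is the field separator itself, so MSH-n sits at index n - 1.
          return msh[n - 1] if len(msh) >= n else ""

      problems = []
      if not msh[0].startswith("MSH"):
          problems.append("first segment is not MSH")
      if field(9) == "":
          problems.append("MSH-9 (message type) is missing")
      if field(12) != "2.3.1":
          problems.append("MSH-12 (version) is not 2.3.1")
      return problems

  sample = open("c39_samples_v231.hl7").read()
  print(check_msh(sample) or "looks plausible")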

Your final challenge is to discover whether or not you can find someone in public health at the State level to read them.  These are not difficult messages to process, but State public health agencies may not be ready to accept the data, nor have the infrastructure or programs to do so.  It seems like that might have been a better way for CDC to spend their money rather than making us wait for yet another implementation guide that has no industry participation as of yet.

Monday, September 27, 2010

IHE Connectathon Participation Offers Unmatched Value

IHE - Changing the Way Healthcare Connects

IHE Community,

IHE Connectathon Offers Unparalleled Opportunity for HIT Testing
Vendors and developers of Healthcare IT systems have a unique opportunity to prepare their products for the emerging world of comprehensive electronic records, health information exchanges and “Meaningful Use” incentives. The IHE North America Connectathon, taking place January 17-21, 2011, at the Hyatt Regency in Chicago, is the healthcare IT industry's largest face-to-face interoperability testing event. Last year's Connectathon drew nearly 400 individual testing participants from more than 70 organizations.


Connectathon testing is based upon IHE profiles, implementation guides for HIT standards that enable effective interoperability across a wide range of clinical settings, including health information exchanges. They support capabilities required to meet standards and certification criteria issued by the US Department of Health and Human Services and qualify for incentive payments for the use of electronic health records from the Centers for Medicare & Medicaid Services.

The unique supervised testing environment of the Connectathon enables the broadest and most efficient cross-vendor testing possible, saving time and money for developers seeking to achieve effective interoperability. IHE profiles will be available for testing in eight clinical and operational domains:
  • Anatomic Pathology
  • Cardiology
  • IT Infrastructure
  • Laboratory
  • Patient Care Coordination
  • Patient Care Device
  • Quality, Research and Public Health
  • Radiology
Registration fees are structured to encourage testing across domains, in many cases allowing systems to add testing of profiles in new domains for free.

Connectathon results are published in a publicly available database and IHE will publish links to integration statements describing the IHE capabilities available in their products. Connectathon testing is also required preparation for participants in the Interoperability Showcase at HIMSS11, February 21-24 in Orlando, Florida.

Applications for testing participants are available online at http://www.ihe.net/north_america/connectathon2011.cfm. The submission deadline is Friday, Oct. 8, 2010.

Friday, September 24, 2010

Top O' the Week

Top of the week is In which I have something positive to say about ONC, which is definitely a change in direction for me.  Don't worry though, I have a low frustration threshold with BS and a great deal of left-over cynicism from the prior ONC, so if it isn't working, I'll be sure to let you know.

That same post also soared to top of the month, past the now number 2: I wanna be an ePatient and the ever popular Meaningful Use Standards Summary.

I wanna be an ePatient wants to be a YouTube video, and thanks to @RandomInterrupt I have a rhythm track, and to my daughter, some choreography.  The ball is rolling...

Upcoming events:
The HL7 Ballot closes on Monday of next week.
The HL7 Working Group and Plenary Session starts in Cambridge, MA on Monday of the following week (it actually starts Sunday, but most show up on Monday).  I'll be part of the free Ambassador talks on the HL7 Standards in the Meaningful Use Rule.
Two weeks after the HL7 meeting the IHE PCC, ITI and QPHR Planning Committees meet to discuss profile proposals that were submitted for this next round of development.

Past Events:
If you attended the HL7 Webinar on CDA/CCD that I gave last week, but your question didn't get answered, check Monday's post for answers.

Thursday, September 23, 2010

Meaningful Use IG for Public Health Surveillance WILL Change

About a month ago I posted on the topic that the implementation guide chosen to be used with the HL7 2.5.1 standard for public health surveillance was likely to change.  Confirmation came shortly after via the HIT Standards Committee, and has since been acknowledged in an ONC FAQ.

Details on what the plans are to replace this guidance have been relatively limited, and so most implementations are either still struggling with the wrong guide, or are hopefully using the advice I gave on how to use the HL7 Version 2.3.1 standard with the HITSP C39 specification.

Here's what's been going on since then.

CDC has been engaged with the International Society for Disease Surveillance (ISDS) on Disease Surveillance topics.  They asked ISDS to engage on this topic.  There was apparently a meeting of ISDS members earlier this month in New York City to discuss the issue, but I don't have many details beyond that.

Sundak Ganesan (Lead CDC Vocabulary Specialist) recently shared some of the outcomes of the earlier ISDS discussions with the HL7 Public Health and Emergency Response workgroup through its public mailing list.  You can find that document below:



ISDS subsequently engaged with HLN Consulting to help them develop a consensus-based guide.  I've had some discussion with HLN regarding this work. 

One challenge is that the existing work is in the "Biosurveillance" space, which is not necessarily the same space as Syndromic Surveillance, although there are certainly overlaps.  I would argue that the C39 specification is simple enough to get us started while we wait for a larger body of work to be completed.

The other challenge is having a public health system that is prepared to accept the data, and that is probably a strong argument from the public health perspective.  From the meaningful use perspective, though, trying to ensure that public health gets something they are prepared to use in the timeframes mentioned in the document above will fail for phase 1.  I say that because January 2011 is too late to have a specification that can start being developed upon in time for it to be implemented for phase 1.  That will mean that we wind up with a rather large variation in HL7 Version 2.3.1 implementations, which public health will be equally unready to accept.

This is a difficult challenge.  I almost think that we must step back, and look at surveillance as a phase 2 requirement at this stage, given the time frames.  It is a shame, because it really should fit into phase 1.  The only realistic way to succeed for phase 1 is to get a final specification into the development pipeline by the end of October, and even that will be unlikely to see wide adoption.

How I Voted on the InfoButton, Genetic Testing, and Progress Note HL7 Ballots

Another part of being an effective voter on an HL7 Ballot is to share your findings with others.  In that way, you can free them up to review other content in more detail (if they happen to agree), and they can at least be cognizant of issues going into the reconciliation.  All ballot comments are public information anyway, so unless you are pursuing the strategy of holding your negatives off till the end, there's no reason not to share your results widely.

HL7 Version 3 Implementation Guide: Context-Aware Knowledge Retrieval (Infobutton) – Decision Support Service (DSS) Implementation Guide, Release 1 (V3_IG_DSS_KM_INFOBUTTON_R1_D1_2010SEP)

Affirmative:

Suggestion: The content is good, but it needs a much simpler approach to explain how it works to the everyday engineer. The use of terms like "semantic signifier" means nothing to people who are not familiar with the OMG work. Conformance to OMG profiles also presents some referential problems here. This is an implementation guide, at least by its title. Please tell people how to implement correctly, not what else they have to read. It's fair to refer them to the Atom standard, but not really fair to make them read and comprehend a very high level conformance profile. I don't think rewriting this to make it easier to implement would have required any substantive change, so this is an affirmative with suggestions.

HL7 Implementation Guide for CDA Release 2: Genetic Testing Reports, Release 1 (CDAR2_IG_GENTESTRPT_R1_D1_2010SEP)
Negative


Comment: I like where you are headed, and clearly the MDHT tools have served you well here, but they need to move towards generating HL7 XML Content for balloting purposes, rather than PDF (for a number of reasons). However, the document is clearly not completely ready for ballot at the DSTU level (perhaps it should have been For Comment Only). I have disagreements with the use of "text-only" templates, and with proposals for hyperspecialized section codes.

A File upload was made for this document that can be retrieved here

HL7 Implementation Guide for CDA Release 2: Progress Note, Release 1 (US realm) (CDAR2_IG_PROGNOTE_R1_D1_2010SEP)
Negative

Comment: Inconsistencies between this document and existing IHE and HITSP work prevent me from voting affirmative, especially given the regulatory status of HITSP C32 Version 2.5. These sections MUST be at least identical to those already defined by current regulation in the US Realm. While the current work is not "inconsistent" with that regulation, it does not conform to its complete requirements.


A File upload was made for this document that can be retrieved here


How to effectively vote on standards in HL7 at the last minute

Voting on the HL7 ballots for this cycle closes on Monday.  If you are like many, you may have left this important task to the last minute, or you may simply not know what to vote on, and may also be overwhelmed by the number of things to ballot on.  There are 31 different standards and implementation guides listed on the HL7 Ballot site which you might have been able to vote on.

If you hadn't signed up to vote for one of these by Monday, you can no longer do so, as the ballot voting pools closed on Monday after being open for 30 days.  But let's assume like me that you signed up for a dozen or more of these.  Also, let's assume that your day job has overtaken events, and you aren't able to get to the materials until the last minute.  How can you vote effectively?  Well, to be truly effective, you should start sooner, and delegate work to others in your organization.  But again, just in case you haven't, here are some strategies you can use.

Prioritize your efforts.  Each ballot has a "ballot level", which begins with a letter:
N - Normative Standard
D - Draft Standard
I - Informative Document
O - For Comment

Following that will be a number indicating the number of times the specification has been balloted at that level.  So, something that says N4 is in its fourth voting cycle to become a normative standard, whereas something that says N1 is at its first.

Normative standards have the most intense consensus requirements.  To pass ballot, 75% of the pool voting either yes or no must vote yes on the ballot.  Also, at least 60% of the pool must vote yes, no or abstain to reach quorum.  Normative standards are also the closest to being in a position to be driving requirements, so I pay attention to those first.

Next are Draft Standards for Trial Use (DSTUs).  These require that 60% of the yes/no voters in the pool vote yes, and there is no quorum requirement.  DSTUs are drafts that are eventually intended to become standards later through a Normative ballot.  These will often be held stable in draft form while the Healthcare IT community experiments with them to see where the holes (if any) are.

The requirements for Informative documents are the same as for DSTU ballots, but these documents are expected to be completed afterwards, rather than being taken normative.  Informative documents are not standards as defined by HL7, but that won't stop governments from requiring them to be used (e.g., the CCD specification is an Informative document).  DSTU vs. Informative is a toss-up, since they are about equal in stature.  I focus on the normative track before the informative track unless I have an expectation that something on the informative track will become a de facto standard as CCD has.

Finally, there are for comment only ballots intended to draw early comments and feedback.  If I'm in a time crunch, I save these for last.
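Those consensus thresholds are easy enough to mechanize; here is a sketch of the arithmetic as I described it above (my code, not an official HL7 calculator):

  def ballot_passes(yes, no, abstain, pool_size, level):
      # Normative: 75% of the yes/no votes must be yes, and 60% of the pool
      # must return a vote (yes, no, or abstain) to make quorum.
      # DSTU and Informative: 60% of the yes/no votes, with no quorum rule.
      affirmative = yes / (yes + no) if (yes + no) else 0.0
      if level == "N":
          quorum = (yes + no + abstain) / pool_size
          return quorum >= 0.60 and affirmative >= 0.75
      return affirmative >= 0.60

  print(ballot_passes(60, 15, 10, 100, "N"))  # True: 80% affirmative, 85% returned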

Next, I prioritize within those categories by ballot topic.  If you can't tell which of two topics is more important, you can look at the digit following the ballot level.  Something at its first normative ballot is not as likely to pass as something at its fourth.  Again though, this is a rule of thumb, not an absolute.

Now, for everything that you signed up for that you know you won't get to, simply Abstain.  This is a courtesy as it helps the pool make quorum for normative ballots, and at least shows that you made an attempt.

Next, check the vote counts.  A ballot that is dead in the water is not really worth voting affirmative on.  Anything with more than 50% negatives almost certainly won't pass and certainly needs work.  Also, negatives come in later in the voting cycle than affirmatives, so anything even close to 50% is very likely to do worse, not better.  Negative voters who want to kill a ballot also know that supporters of a particular ballot can often get additional affirmatives at the last minute by making some phone calls, so they wait until it is too late for that before giving other voters information about the number of negatives they have to overcome.  However, a single substantive negative can be enough to kill a ballot, or at least delay it until calmer and more experienced heads can look at it.  So, don't conclude that because a ballot you think shouldn't pass is going to pass anyway, and you don't have enough "votes" to make it fail, you should avoid making a negative comment.  All negatives must go through reconciliation, and if at the end there are still negatives, there will be at least an administrative round of voting wherein you can make your case to the voters in the pool one last time.

Another reason to check the vote counts is to see who has already found problems with a ballot and what problems they have found.  If someone spots a particularly egregious problem, simply pile on, using their comment as yours, and move on to review other areas.  If a balloter finds a number of problems and seems to have done a thorough job, vote negative, reference their comments in your vote and move on.  If neither of these cases applies, get to reading.  There are two or three different ways people do this. You can print out the materials, mark them up with pen, and then go back and write your comments, or you can read online, copy/paste content into the ballot spreadsheet, and make your notes there.  Sometimes it is enough to skim, other times you must scan in detail.  Your product requirements will drive how deeply you look.

This particular cycle is pretty uncontroversial, looking at the tallies, and I've already started my review, so hopefully I won't need to use any of these techniques. 

Now, if you've been engaged all along in the development efforts, and you are quite familiar with the content, or you've voted on this ballot the last three times and know it well, get those out of the way quickly.  The former will likely be affirmatives, and the latter, assuming you've reconciled with the committee, will also likely be affirmatives (but give it one more look just to be sure).  If you haven't, of course, you are probably going to submit the same comment as last time.  Look out for that one.  If that comment has already been addressed previously, it won't work, especially if the committee understands the rules.  Also, some documents will go out for ballot to address ONLY substantive change since the last cycle.  As a committee chair, I often recommend this approach because it means that content that made it through the last cycle is frozen, which simplifies things for many and reduces the content that needs to be reviewed.

Back to the ballot for me.

Tuesday, September 21, 2010

An interesting diversion into the Death of the Semantic Web

Saturday, Monday and Tuesday's posts all got a lot of readership this week, to the point that I've hit a new high, more than 1000 page views in a day, and am currently on track to beat the records of last month and the month before.  As I have been maintaining, promoting and enhancing this blog, and being more engaged in Social Media, I think I've discovered a few ideas about social media worth pursuing more deeply.

Idea Number 1:  The semantic web is dead.  OWL and RDF and ontology may be great for computers to manage semantic relationships, but the web and its users are such that these technologies have never really become mainstream.  Instead, what has happened is that the users of the web have developed new ways to ensure that high quality, relevant content comes to them, and is readily accessible through search of semi-controlled vocabularies.

The replacement for the semantic web is the social web.  I get a daily feed of content tailored to my needs by virtue of the social networks that I participate in, and it has high relevance to my work and play.  I can search for relevant content using Hash Tags.  These are the new ontology; the semi-controlled, user defined vocabulary which is associated with this content and the links and conversations that go along with it.

I predict that we will see new information retrieval tools that build on hash tags being more broadly deployed.  Today, the tag cloud associated with this blog is directly related to the hash tags that "I" tweet when the post is done.  I expect shortly to be able to write a blog post, and have available to me a tool that identifies the relevant hash tags for me based on my social networks, and similar content.  There will be collators of hash tags who will evaluate and define them, or just record how they are used (like dictionary editors).  This will perhaps occur automatically, turning what is presently a semi-controlled user defined vocabulary into something that has just a bit more structure on it.
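A toy version of that suggestion tool is little more than co-occurrence counting; here is a sketch, assuming you have a history of posts and the tags applied to them (all data hypothetical):

  from collections import Counter
  from itertools import combinations

  # Hypothetical history of posts, each with the hash tags applied to it.
  history = [
      {"#HL7", "#CDA", "#MeaningfulUse"},
      {"#HL7", "#CDA", "#CCD"},
      {"#IHE", "#Connectathon", "#HL7"},
  ]

  cooccur = Counter()
  for tags in history:
      for pair in combinations(sorted(tags), 2):
          cooccur[pair] += 1

  def suggest(tag, n=3):
      # Rank tags by how often they appear alongside the given one.
      scores = Counter()
      for (a, b), count in cooccur.items():
          if tag == a:
              scores[b] += count
          elif tag == b:
              scores[a] += count
      return [t for t, _ in scores.most_common(n)]

  print(suggest("#CDA"))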

Aggregators will turn to automation to associate information such as what is contained in this blog into even more accessible content.  It will get tagged with hash tags that will initially be defined as they are used, and will later become more controlled as experience is gained with them.  I expect at some point that Twitter will start showing me "trending topics" and hash tags in my social networks, possibly at different degrees of removal (what are the trending topics and tags among my friends' friends).

I expect someday to pull up a visualization of my "tag cloud" that shows the relationships and interconnections between them, including connections made by my friends and possibly even their friends.  By so doing, I expect to be able to learn about new tags that may be of interest to me.

Idea Number 2:  The relationship between Social Networks and identity will fragment and reform.  Like many others, I am a member of several different social networks, some associated with my role as a healthcare standards geek, others with personal relationships developed through my college years, others related to family, and others that are just simply groups of friends with a common interest.  I am already able to join and unjoin semi-permanent social networks to alter the way I perceive the web, or it perceives me.  These semi-permanent social networks are groups that I belong to through LinkedIn, Google groups, mailing lists, interest groups in different social networking sites, et cetera.  But I will also be able to assume a temporary identity to view the world as others view it.  I can do this to some extent on some social networking sites by viewing my friends' friends pages, and expect that trend to continue.

You can already see some of what I see through my eyes on twitter using paper.li, and build newspapers for other twitter users too.  What you don't see today are the links that I clicked on, or the tweets that really grabbed my attention, but with Twitter and Google taking back control of the URLs they show to end users, that data could eventually be available.

Both of these ideas will eventually have a profound impact on the web as we see it, and on vertical markets like healthcare once they figure out how to make use of these technologies.  I have to start thinking about how these technologies will change healthcare, and won't even pretend I know where it is going.  The ride has already been pretty interesting.

The semantic web is dead.  Long live the social web.

In which I have something positive to say about ONC

I spent this morning in a room full of ONC contractors and staff at HHS Headquarters in Washington DC learning about the roll-out of the ONC Standards and Interoperability Framework.  How I got here is an interesting story in and of itself, since this blog helped get me into the room.  My post of a few weeks ago apparently wound up in Doug Fridsma's inbox with a note attached about "Risk #1" related to communications.  It wasn't just that post, but also the fact that I had done some thinking about replacing HITSP before the contract ended.

It doesn't surprise me that I have readers inside ONC and the contracting organizations, but what did surprise me was the response from ONC.  Someone suggested to Doug that I and a few others like me be invited to this meeting, Doug agreed, and so I was invited (as were a few others).  I don't know what the selection process was for this meeting, but I did see several familiar faces this morning in DC.

Doug was clear that he wanted this information to be public and made the slides he presented at this meeting available to me and thus to you:

Now before I comment on the slides, let me start off with Doug's introductory remarks, which were the clearest departure from the "old" ONC.  He reviewed the agenda, and then said, "and we've set up a couple of rooms for you afterwards, so that you can get with the different people you need to talk with".  Under the "old" ONC, communication between the different projects (AHIC, HITSP, HISPC, CCHIT and NHIN) was funneled up and down the ONC chain until very late in the process, resulting in a game of telephone, with the expected, but not funny, result.  Here there was very clearly an expectation that contractors would be involved in every step of the process, even though they might be responsible for only one deliverable.

The real meat starts on Slide 4.  This slide shows the alignment of the contracts with the various aspects of the Office of Interoperability and Standards.

BREAKING NEWS:  The contract award for Use case development went to Accenture (see Slide 4 and 8).  This was finalized either late Friday or Saturday before this meeting was held, and hasn't even hit the web yet, except in a brief tweet I made from the meeting room this morning.

I spoke with Doug on the Standards Development contract before the meeting. That contract is being re-competed for a number of reasons, including too few competitive bids and a lack of responsiveness to ONC needs in the task order issued.  There also seems to be a need for more clarity around what it could do, and ONC is apparently considering opening this up under GSA which would make it more broadly accessible.  Now, the process they used for many of these existing contracts limited the responders to those with existing government contracts under an NIH process.  It is understandable that these contract holders would not necessarily have the best expertise or most experience available for healthcare standards development.

Doug has "two-sided" conversations, as I've been told.  That means that he listens as well as talks.  When he mirrored my own concerns using my words back to the room on the "Standards Contract", I actually gained some confidence that he does.

My notes indicate that ONC is in the process of rewriting that RFP, to work with the SDOs in a very focused way, NLM, HL7, IHE, et cetera.  I personally expect these "Standards Development" efforts under this contract to be rather focused, much like past contracts that ASPE, CDC, FDA and others have let for the development of specific standards activities.  Almost all of those that I'm aware of include collaboration with existing standards bodies, so while I see this still as being somewhat of a concern, I'm less worried about the government getting into the business of "hiring out" standards development.

Part of the reason for some of the gaps in communication that I've been complaining about is, as Carol Bean explained at the meeting, that they really couldn't even talk about things while regulation was being finalized around them.  Some of the handicaps that ONC operates under have to do with laws and regulations about what a regulating body can do or say at certain periods of time.  This is also a challenge for Doug, as he doesn't come into this position with decades of experience in the Federal bureaucracy.  My advice to ONC on this topic would be to get a regulator, a marketer, and a well versed lawyer in the same room to hash out (or perhaps even, hack out) the communications strategy for ONC.  It is clearly needed, and Open Government needs to figure out how to accomplish it better.

Slide 5:
Doug talked about "government as a platform" or enabler on this slide.  The real focus here was to have these ONC contracts provide the capabilities that will enable stakeholders to do what needs to be done.  It's not about having the government in the driver's seat so much as it is to have them pumping the gas.

Slide 6 is a title, and slide 7 more of an outline, so I'll skip to slide 8.  This is the slide that Doug pitched in a major way to the HITSC two weeks ago.  You'll note again Accenture as the Use Case contract awardee.  Some notes from Doug on this slide:  While it looks like a waterfall model, it really is an iterative and agile process.  In reviewing this slide, Doug explained that we need to "Find out who is the best of breed in this area, and engage" with respect to the various activities.

BTW: I spoke to a number of people after this event with regard to a number of what I consider to be "best of breed" tools, e.g., the work that Dave Carlson and crew have done on MDHT.  You'll be pleased to hear that this sort of engagement is already happening, not just with MDHT but elsewhere, and further connections also need to (and will) be made.

Slide 9 is the mandatory quote that nobody can disagree with (even me).

Slide 10 starts to talk about the holistic vision of Standards Development that Doug has under this framework.  One of my complaints about the entire AHIC, HITSP, HISPC, CCHIT, NHIN communications debacle is that if a medical device manufacturer imposed the same communications regimen (or lack thereof) on its development processes, the FDA would shut them down.  It's clear that while there are 11 different contracts, they are all part of one process.  In fact, at one point, Doug says "The culture here is that I don't want to know who you are with" .. in fact, he goes on to "I mess up because I don't know who is working for who" ... "because it is all the same thing to me."  That, frankly, is a very refreshing attitude.  I wear way too many hats, but I can make it work for the very same reason.

Another point that Doug makes is that we need to get out of "Word" and PDF mode, and into the 21st century.  While these are my words, I believe he would very much agree with the sentiment.  He repeated several times the need for COMPUTABILITY and communication of information contained within the standards (Rich Kernan has a good story on that I'll share in a bit).  This is, from my perspective, going to be a HUGE challenge.  The process by which SDOs create their publications is the very foundation of those organizations.  Bob Yencha described a good approach:  Tell us your requirements, and we can tell you what tools we (the contractors) can provide to help you get the materials into a computable form.  Now Bob (and I) are structured documents (SGML) geeks with quite a bit of history before either of us ever got into healthcare.  We realize the challenges inherent in this task.  But at the same time, I would also welcome better tools.  The technology that we need is pretty widely available.  Solutions like DITA and DocBook would ease this work greatly.  The BIGGEST challenge will be to get SDO engagement to TAKE ADVANTAGE of resources with $ who are willing to help.  That offer is going to annoy a number of different constituencies in the various SDO organizations, but hopefully, wiser heads will prevail.

The point Doug makes: We need computable specifications, and artifacts that can help generate data, information, code and specifications.  It's more scalable.  We are presently an artisan industry; how can we make this available to non-artisans?

Slide 11: This is mostly a reiteration of what has gone on before, but here Doug makes the point that each contractor has effectively been awarded "Ownership" of a component of the S&I framework, and each is responsible for PARTICIPATION in the whole, including at control points, to make it work.  That means that the Testing contractor can look at the Use Case and say "Hey, that [requirement] is not testable".  Now that, to me, is a sign that a real S&I Lifecycle is being developed.

Slides 12-15 give a bit more detail on each of the contracts.  On the Harmonization contract Doug makes the point that "Further up the food chain that harmonization occurs, the better off we are." because we want to support the "Reuse of requirements, as well as specification and implementation."

A point I'll make to Doug with respect to Use Cases is that Use cases IDENTIFY a problem, they don't necessarily solve it.  The use of the S&I framework may do so, but we need to be careful to ensure that we have worthy Use cases to address.  We don't need more H1N1, Katrina or Anthrax use cases that are supposed to be magic bullets for major issues, but with no powder in the charge.  If you have a problem, it should have a measurable cost, and a reasonable belief that with investment, you can recoup more savings than the resources invested in solving the problem.  And when the people, volunteers or otherwise, tell you that it cannot be done "at this time" and "on this schedule", you need to believe them.  This work is like developing a product.  There is a triangle of resources, time and quality, and you may control two sides of it, but the third is simply a product of the other two.  All too often, we need it NOW, with these resources, and quality suffers.  This, I believe, is why HITSP is a dirty word in some ONC circles, because the only leg of the triangle it was left to adjust to meet ONC demands was quality.

The best development efforts set a high bar on quality (especially in healthcare), and adjust time or resources accordingly.  Some of these efforts will be open ended.  Great: let the process be iterative, and be able to accept, at the deadline, the number of iterations that could be performed in that time frame.  And you must also measure these processes, not by whether the work was completed, but by whether the desired end result was achieved.  If NHIN Direct is done, but nobody ever takes it up, is it successful?

Slide 16:  This shows a swim-lane view of the S&I Lifecycle, along with the various checkpoints or control points along the way.

Slides 18 through 20 are NIEM material, which I'll save for a subsequent, more detailed post.  I will repeat one quote from Doug on NIEM: "We are not really using the NIEM Core, the goal is to use those processes to ensure harmonization."

The NIEM process, while new to some of us in healthcare, is not foreign.  If you look at HITSP TN903 Data Architecture, and HITSP C154 Data Elements, you'll see a really good foundation for the NIEM Healthcare Core.  We need to figure out how to carry on and reuse that work.

I will also reflect, in short form, one problem with the NIEM.  Green fields and cow pastures are not the same thing, and current NIEM experience is in green fields.  NIEM will need extension not only to cover new concepts (services and behavior), but also to account for legacy.

Slide 22: I'll sum up my review of Doug's presentation on this slide.  It shows the expected effort expended by the different contract activities.  What you will see in this slide is that there needs to be engagement by each contractor in every step of the process.  The point is that each must understand what went on before and the requirements of what comes after in order to complete their task.  This diagram looks very much like the effort expended on a software development project, following a real lifecycle model.

I applaud Doug and his team, and the contractors, for what they have accomplished so far.  It is still very early days, and there are still many ways this octopus can get itself tied into knots, but it has gotten off to a great start.  There's a lot more that needs to be done, and getting industry and SDO engagement into this process will be needed.  There won't, I expect, be any big calls for participation in the early days, just a few people asked to contribute.  I am hoping that later there will be more active engagement, and even an open call for participation.

My own comments back to Doug and team at the end of the meeting amounted to a charge to engage the industry, the SDOs and the PEOs in this effort.  We are willing to help in this process, you need only to call on us and we will respond.  I am remarkably encouraged by what I see.


Monday, September 20, 2010

CDA/CCD Ambassador Questions and Answers

Last Tuesday I gave an HL7 Ambassador webinar on the HL7 CDA and CCD specifications to an audience of about 165 people. We answered about 10 questions, but due to the large audience, were not able to address all the questions asked.  This post addresses questions that were not answered and includes brief summaries of my answers to questions that were also answered in the webinar.

By far the most common question was whether attendees could get the slides. The slides are being made available to all attendees in PDF form, and I believe have already been mailed out to anyone who registered.

These are the questions we got and my answers:
  1. What is the best resource to get started? By far the best resource to get started with is the specifications themselves, which are available through HL7.  For the HITSP C32, the best document to look at is C83 CDA Content Modules, which contains numerous examples.  There are also many examples in the IHE PCC Technical Framework.
    Other good resources are the CDA and CCD Quick Start guides, which were developed by Alschuler and Associates.
  2. Is the HL7 V3/CDA R2 XML spec publicly available?
    How can the current CCD Implementation Guide be obtained? Are there legitimate sources outside of HL7? The CDA Standards are available through HL7, ANSI and ISO (yes, CDA is also an ISO Standard).  I believe the CCD specification is only available through HL7.
  3. What tools are available to create and read CDA documents?

    There are a number of tools at various levels of complexity. I recommend having a look at the Model Driven Health Tools which includes tools designed to author, publish, validate, and implement CDA documents.  There's also the NIST Validators, which will be crucial for US implementations.  Misys Open Source Solutions (MOSS) includes a CCD Generator.  IBM produced a CDA Builder that is apparently in use at NCI.  The Open eHealth Integration Platform contains a CDA Parser.  There is an Eclipse Instance Editor that can be used to validate and edit CDA Instances.
  4. Which is the best version of the implementation guide for CDA that one should use?
    What are you trying to do?  There are more than two dozen implementation guides on CDA, some of which stand alone, and others which build on top of other guides.  Most of the guides build from the CCD guide but go further, filling in the gaps that CCD wasn't designed to address.  If you are looking for CDA Implementation guides for use in the US, the best source is still ANSI/HITSP.  For Europe, I would suggest looking at the work coming out of epSOS.  One of the very first guides came out of Canada.  This post contains a diagram showing the intellectual history of many of the guides, and includes links to many of them.
  5. What is the difference between CDA Release 1, 2 and 3? That's kind of like asking what the differences between HTML 1, 2, 3, 4 and 5 are.  However, to make it short and sweet:  CDA Release 1 was the very first HL7 Version 3 standard to be produced (1999).  It uses a form of XML and a model that actually predates the HL7 Reference Information Model and XML Implementation Technology standards that we know today.  CDA Release 2.0 is the current version and is what CCD and other implementation guides are based upon.  See the next question for CDA Release 3.
  6. What are your plans with CDA R3 release?
    There are numerous changes planned for CDA Release 3.  The HL7 Structured Documents Working Group is currently building a plan, having evaluated nearly 50 Suggested Enhancements and 65 Formal Proposals.  While there are many expected changes, the biggest will be support for reuse of Domain models, support for public health, and some thoughts on changing the narrative to support a subset of XHTML.
  7. What is relationship between CDA and Green CDA? Green CDA is a simplified version (more efficient, easy to write, edit, read, etc?) of CDA? Green CDA will replace CDA? When green CDA should be used? Green CDA is essentially an experiment to see how to make it easier to produce CDA documents.  Will Green CDA replace CDA?  I don't think so, since it is more limited than CDA.  However, it does effectively provide a simpler API to produce CDA documents.
  8. Is there a file size limitation for CDA to support? CDA documents are not themselves limited to any particular size by the standard.  Size limits may be imposed by external systems.  For example, many interfaces can support messages that are 32 or 64K in length (a goodly number of pages of CDA document, probably more than most structured documents would ever need). 
  9. Are there any standards around compression or encryption of CDA documents?   The HL7 standards for the ED Data type support compression of image content inside a CDA document.  Web related standards support compression in transmission (e.g., GZIP).  (A small compression sketch appears after this list.)
  10. Can you also use CDA to show document images? or is it data only? Images are supported by the standard, but it is not intended to replace DICOM for imaging (e.g., an X-RAY, CT Scan or MRI Study).  The example document in the CDA standard shows how an image can be included in a document.
  11. Can you describe a use case for CDA and how (or perhaps if) that is different than a use case for CCD?
  12. Can you please briefly describe now a CDA might be used differently than a CCD
  13. Why can't a discharge summary be expressed as a narrative in the Encounters section?
    For these three, let me refer you to my second post "If I had a hammer" and the reason for starting this blog. 
  14. Is there such a thing as a "generic" CDA XML document, or is it always also a defined structure such as CCD?
    The CDA standard itself is intended to support a wide variety of use cases and to enable the display of clinical content using a single stylesheet.  It need not always be a defined structure such as the CCD.
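    For a flavor of what a "generic" CDA can look like, here's a trimmed sketch of an instance that conforms to nothing but the base standard (identifiers are illustrative, and required header participants such as recordTarget, author, and custodian are omitted for brevity):

      <ClinicalDocument xmlns="urn:hl7-org:v3">
        <typeId root="2.16.840.1.113883.1.3" extension="POCD_HD000040"/>
        <id root="2.25.12345"/>  <!-- illustrative document identifier -->
        <code code="11488-4" codeSystem="2.16.840.1.113883.6.1"
              displayName="Consultation note"/>
        <title>Consult Note</title>
        <effectiveTime value="20100917"/>
        <confidentialityCode code="N" codeSystem="2.16.840.1.113883.5.25"/>
        <!-- recordTarget, author, and custodian are required but omitted here -->
        <component>
          <structuredBody>
            <component>
              <section>
                <title>Reason for Consultation</title>
                <text>The patient was referred for evaluation of ...</text>
              </section>
            </component>
          </structuredBody>
        </component>
      </ClinicalDocument>

    The same single stylesheet that renders a CCD will render this document too.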
  15. Can you please elaborate on how business rules are translated into the templateId root?
    This post on Template Identifiers, Business Rules and Degrees of Interoperability probably covers the most important details.  Essentially, the template identifier is assigned to a set of business rules associated with an information item.  Those business rules can be encoded into conformance requirements that can in many cases be automatically evaluated with model-driven tools or using ISO Schematron, as in the NIST validators.
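    As a sketch (the template OID below is made up), suppose a guide requires that a Medications section carrying its template identifier contain at least one entry.  The instance declares conformance like this:

      <section>
        <!-- asserts: this section follows the business rules
             identified by the (illustrative) template OID 1.2.3.4.5 -->
        <templateId root="1.2.3.4.5"/>
        <code code="10160-0" codeSystem="2.16.840.1.113883.6.1"
              displayName="History of medication use"/>
        <title>Medications</title>
        ...
      </section>

    and an ISO Schematron rule (assuming the cda prefix has been declared with an sch:ns binding to urn:hl7-org:v3) can check the business rule automatically:

      <sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
        <sch:rule context="cda:section[cda:templateId/@root='1.2.3.4.5']">
          <sch:assert test="cda:entry">
            A section claiming template 1.2.3.4.5 SHALL contain
            at least one entry.
          </sch:assert>
        </sch:rule>
      </sch:pattern>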
  16. Is CDA also relevant for interoperability inside hospitals (between applications inside hospitals, e.g., a cardiology CVIS and the HIS), replacing HL7 ORU/MDM messages?
    Yes, CDA is very relevant for interoperability inside hospitals, but it need not replace the MDM message.  The MDM message is a way to transport a document between systems; CDA is the content to be transported (see the sketch after question 18).  Many systems in use at hospitals support use of MDM to do this today.
  17. MDM vs CCD?
    See the previous answer: MDM is the envelope, and a CCD (a kind of CDA document) is content that can ride inside it.
  18. What type of HL7 V2 message can be used when a CDA or any other file is to be exchanged using an HL7 V2 message?
    You can use the HL7 Version 2 MDM message to transport a CDA document, and the CDA standard itself covers this in section 3: CDA Document Exchange in HL7 Messages.
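    A very rough sketch of what that looks like on the wire follows.  All field values are illustrative, several required fields are trimmed, and the exact trigger event, TXA fields, and ED subtype should be checked against the MDM chapter of your Version 2 release and section 3 of the CDA standard; the point is just that the document travels base64-encoded in OBX-5 as an ED data type:

      MSH|^~\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|201009171200||MDM^T02|MSG0001|P|2.3.1
      EVN|T02|201009171200
      PID|1||MRN12345^^^MYFACILITY||Everyman^Adam
      TXA|1|DS||201009171200
      OBX|1|ED|^Discharge Summary||^application^x-hl7-cda-level-one^Base64^PENsaW5pY2Fs...(base64-encoded CDA)||||||F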
  19. Has any consideration been made for person-readable data in a multi-language environment?
    CDA supports reporting of the principal language of the document in the languageCode attribute found on both the ClinicalDocument element and the section element.  The ISO 639 code "mul" can be used to specifically identify content that is multi-lingual.  This is something needing review in Release 3.
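    A sketch, assuming a document written mostly in US English with one section in French (in the XML, the RIM attribute appears as a languageCode element):

      <ClinicalDocument xmlns="urn:hl7-org:v3">
        ...
        <languageCode code="en-US"/>  <!-- principal language of the document -->
        ...
        <section>
          <!-- overrides the document language for this section -->
          <languageCode code="fr-FR"/>
          <title>Antécédents médicaux</title>  <!-- "Past medical history" -->
          <text>...</text>
        </section>
        ...
      </ClinicalDocument>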
  20. If machine-readable entries are included, does the clinical content need to match what is in the human-readable section?
    The HL7 Structured Documents Working Group has long asserted that what is seen in the human-readable narrative, which includes document titles, section titles, and section text, is what is signed.  The machine-readable entries SHOULD match, but you can expect that there may be more clinical detail in some of this content that may not be part of the signed narrative.  For example, codes may be applied to clinical content after documents are signed.  The sketch below shows how an entry can point back at the narrative it encodes.
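    Here's a sketch of that linkage (the ID value is made up): the entry's code carries an originalText reference pointing at the narrative text it encodes, so a receiver can tie the two together:

      <section>
        <title>Problems</title>
        <text>
          The patient has <content ID="problem-1">asthma</content>.
        </text>
        <entry>
          <observation classCode="OBS" moodCode="EVN">
            <code code="195967001" codeSystem="2.16.840.1.113883.6.96"
                  displayName="Asthma">
              <!-- ties the coded entry back to the signed narrative -->
              <originalText>
                <reference value="#problem-1"/>
              </originalText>
            </code>
            ...
          </observation>
        </entry>
      </section>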
  21. Is there a list of EHR vendors that are currently able to receive CDA documents, e.g., hospital discharge summaries, directly into an ambulatory EHR like, say, [product name]?
    HL7 maintains a Product and Service guide which contains some information, but it hasn't been fully populated by all vendors supporting CDA.  IHE Connectathons also report on which vendors have successfully implemented IHE profiles, and Vendor Conformance Statements for those vendors are available (click the folder link on the report).
  22. Is there a recommendation in the CCD on how to indicate that a piece of information is considered sensitive, e.g., a diagnosis of HIV? This would be to enable the receiving application to either mask it or require an attestation as to the need to see this information.
    CDA includes an attribute called confidentialityCode on the document or any of its sections which can be used to mark the sensitivity of the document.  See this excellent post by John Moehrke on appropriate use of that attribute.  The main problem with "MASKING", as it were, is that in implementing such an infrastructure you are interfering with the wholeness and legal authenticity of the content being viewed.
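    A sketch, using codes from the HL7 Confidentiality vocabulary (codeSystem 2.16.840.1.113883.5.25), where "N" is normal and "R" is restricted:

      <ClinicalDocument xmlns="urn:hl7-org:v3">
        ...
        <confidentialityCode code="N" codeSystem="2.16.840.1.113883.5.25"/>
        ...
        <section>
          <!-- a more sensitive section within an otherwise "normal" document -->
          <confidentialityCode code="R" codeSystem="2.16.840.1.113883.5.25"/>
          <title>Problems</title>
          ...
        </section>
        ...
      </ClinicalDocument>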
  23. A CCD section (e.g., Encounters) can have many different types of entries by using CDA linkage. For example, the Encounters section can have Results Observation entries, Procedure entries, etc. Is there any standard which restricts this?
    Other than explicit restrictions in CCD (or guides which build upon it), not that I am aware of.   Most implementation guide developers for CDA take the approach that you must send what the guide requires, but are permitted to send more.  This allows for incremental interoperability (see this post).  If we had restricted the CCD entries to be only what we had modeled, CCD sections could not be reused and extended to support new use cases, and similarly for other guides.
While this question was not asked:  "How can I get someone to present an HL7 Ambassador talk for my event...", the simple answer is to e-mail hq@hl7.org or contact me.  I'm not the only HL7 Ambassador (even for CDA and CCD), and CDA and CCD are not the only topics.

Sunday, September 19, 2010

Meaningful Use Vocabulary / VADS Vocabulary Updates

This crossed my desk this weekend and may be of value to those of you implementing the NEW CDC guide for immunizations using HL7 2.5.1. Most folks I know are sticking with the 2.3.1 guide, but it may be of interest to some of you...

Hopefully one day ALL vocabularies required for meaningful use will be just as available and even more accessible!

Greetings,

CDC Vocabulary Server (PHIN VADS) has published the immunization and ELR (HL7 2.5.1) IG value sets.  PHIN VADS can be accessed at http://phinvads.cdc.gov
You may be interested in the following VADS vocabulary hyperlinks from VADS August & September 2010 releases:
  1. Immunization – HL7 2.5.1 IG Vocabulary (Meaningful Use)
  2. Electronic Lab Reporting (ELR) – HL7 2.5.1 IG Vocabulary (Meaningful Use)
  3. Fiscal Year Update: ICD-9 CM FY 2011* (Effective Oct 1st 2010) value sets: Diagnosis, Procedure
  4. National Healthcare Safety Network’s (NHSN) Healthcare Associated Infections (HAI) Clinical Document Architecture (CDA) – Release 5 IG Vocabulary:

    1. NHSN Bloodstream Infection (BSI) - Release 5 
    2. NHSN Surgical Site Infection (SSI) - Release 5
    3. NHSN Hemovigilance Incident Report - Release 5
    4. NHSN Laboratory-Identified Organism (LIO) - Release 5
    5. NHSN MDRO Monthly Monitoring Report - Release 5
    6. NHSN MDRO or CDAD Infection - Release 5
    7. NHSN Patient Influenza Vaccination - Release 5 
    8. NHSN Pneumonia - Release 5
    9. NHSN Population Summary - Release 5
    10. NHSN Procedure - Release 5
    11. NHSN Urinary Tract Infection (UTI) - Release 5
    12. NHSN Central-line Insertion Practices (CLIP) - Release 5 
    13. NHSN Generic Infection - Release 5
    14. NHSN - All Biovigilance Vocabulary for Release 5
    15. NHSN - All Healthcare Personnel Safety Vocabulary for Release 5
    16. NHSN - All Patient Safety Vocabulary for Release 5
    17. NHSN HAI - All Vocabulary for Release 5

  5. LOINC Value Sets: Lab Test Name (NND) value set for notifiable conditions based on CSTE Technical Implementation Guide (TIG), Microbiology Lab Test Name, Drug Susceptibility Tests.
  6. SNOMED Value Sets: Microorganism, Evaluating Finding, Disease, Procedure by site, Body site
  7. NHSN Healthcare Service Location Code (HL7 V3)
I have also included a document which includes the latest VADS release announcement (VADS 3.0.6 - 9/16/2010). VADS 3.0.6 (Sept 16th 2010) Release - Announcement.pdf 
Thanks

Sundak Ganesan, MD
Lead CDC Vocabulary Specialist
CDC Vocabulary & Messaging Team

Friday, September 17, 2010

Wherein I promise not to practice medicine

This article from Mass Device is too good to pass up commenting on.  Having someone who is not a computer scientist start a discussion about computer science is always interesting...
Blumenthal said. "There is a raging debate in the computer science world, which I have only lifted the lid on because I'm not a computer scientist, but it goes basically like this: Do we want a world where somebody sets very detailed standards for what computers have to do in order to create interoperability? Or do we want a world that's a little bit more like the Internet, where a minimal set of standards was created and an enormous, vibrant competition and spontaneous growth occurred?"
Citing the analogy that "you build the interstate highway, but you don't regulate what drives on it," Blumenthal admitted that the debate has not yet been settled.
"I hear both sides of that argument, constantly, and even those people who believe in the minimal set of standards aren't really sure what that minimal set is, but we're working on precisely that," he said.
Hmm, a raging debate. Where is this raging debate, I'd like to know? I haven't seen it on the Interweb.
I'm a computer scientist by profession and training, so I can speak to these comments.

Let's look at a web page (he did say the Internet, right).

First you need the standard for the content.  Right now we have a number of choices:  HTML, XHTML, or PDF, all with minor variations.  Let's say we picked XHTML because it has an X in it and is based on XML, which has to be good, right?  OK, so you did that.  That just pulled in XML.  The character encodings that XML parsers must support include UTF-16 and UTF-8, so now we have to pull in those two standards.  And these are encodings of Unicode, yet another standard.  That's 5 standards and we haven't gotten beyond content.

Oh, you want it to look pretty?  There are three different versions of CSS you can provide.

What, you want it to be interactive?  For that there's EcmaScript (JavaScript), with several browser variations, and also VBScript (not really a standard), but you get the point.

OK, so now we have interactive content.  You want to use REST right?  That's not really a standard, but it probably means that your interactive web page now needs something like JSON, yeah?  No?  Yeah!

Now, this needs to be communicated, so we need transport.  Let's use HTTP.  Sure, that's cool.  OK, now what?  Well, HTTP needs a network, so let's use TCP, which is going to pull in IP.  Now, how are you going to physically connect?  Wired?  Wireless?  Well, there's the whole family of standards underneath that, 802.11 and all the rest.  Oh, and content descriptions and packaging; we should use MIME for that.

You need to secure it.  TLS for sure, or maybe just HTTPS.  Where's your certificate?  X.509, right?  Sure.  What format is that going to be sent/stored in?  PEM?  PKCS12?  Java JKS?  What standards are you going to use to identify providers and purposes?  ASTM E2212.

That's thirteen choices for standards, and I could go on.  If that is a minimal set, then I've misunderstood the concept of minimal.  It certainly illustrates the problem that we face in healthcare, though.

Let's talk about time and wasted effort.  The Internet and these standards did not appear overnight.  When I came to Boston in 1992, I worked for a company that used DECnet.  It ran over Ethernet and had mostly given way to TCP/IP by the time I was using it.  DECnet disappeared.  Before that I used to manage a computer service company, and before that a service department at a ComputerLand.  Remember NetWare?  I could run that over RG-58 (thin Ethernet), RG-62 (ARCNET), Ethernet, twisted pair, and Token Ring.  All of those have gone by the wayside.

The first "public" web page was developed in 1992.  The HTTP Protocol was standardized in 1995.  SSL was standardized around 1994.  E-mail was standardized in 1982, although those standards didn't really take off until much later.  There's a lot of other standards that were developed in the years before that and since then.  The Internet didn't show up overnight, and it wasn't invented by one person, or one standards organization even.

We've got 7 layers in the ISO OSI model, but it appears that a good bit of this discussion is above that, in Layer 8 (Politics) or Layer 9 (Religion).

I have an interesting philosophy about religious debates (flame wars) with respect to technical topics.  Taking sides is about as useful as saying you are a democrat or a republican (even though I favor the former).  You really have to look at BOTH sides of the issue to understand what is going on.

I promise not to practice medicine, I hope Dr. Blumenthal will make a similar promise not to practice engineering.

Top O' the Week

Top of the week is The Healthcare Standards Interconnections, a visualization of how the Healthcare Standards community works together.

The top three posts of the month are:
  1. I wanna be an ePatient  A rhythm track is being written (or so I hear), and Dave and I are going to YouTube this, looking for willing performers in the Boston area... maybe in Cambridge for the HL7 Working Group meeting?  In case you haven't registered for that, today is the last day to get the Early Bird Registration rates.
  2. Meaningful Use Standards Summary  There are a couple of other things happening with quality measures that I need to follow up on.  See for example this quick tweet from @drtonyah on smoking status, to which I owe a deeper response (Thanks for digging that one up).
  3. Meaningful Use IG for Public Health Surveillance likely to Change  I've heard no news since that post, so I owe a follow-up on the Public Health Surveillance guide.  I know at least one meeting has already occurred.

Upcoming Events:
Well, I already mentioned the HL7 Cambridge Working Group meeting, but what I didn't say was that there will be an opportunity to attend a half-day seminar on standards for meaningful use on Monday, October 4th.
 
Today is also the last day for IHE PCC Proposal Submissions for 2011 development.
 
Recent Events:
Yesterday I spent the afternoon at a Knowledge Management roundtable discussion on EHR adoption.  If you missed it, search #KMForum for a quick recap.  A few quotables: "Eminence-based medicine", referring to providers who practice medicine based on their "authority", and "We are in our EHR puberty", referring to the fact that we are headed into a period of rapid change.  The topics ranged far and wide from the main point, and it was a very lively discussion.  I enjoy these meetings.  @janicemccallum, who was also there, will be writing a blog post on this.
 
On Tuesday I spent an hour with about 165 people from 8 different countries on a free webinar describing the HL7 CDA and CCD standards.  There were some really good questions, but we didn't get time to address all of them.  Look back here next week for some of the answers.