Saturday, September 28, 2013

CDA is Dead

It must be true, because isn't that what John Halamka said during the plenary session Monday morning at the HL7 Working group meeting?  And isn't that what it means when one of the cochairs proposes that we put CDA Release 3 on hold to focus on FHIR?

No, that's not what we are talking about.  Clinical documents aren't dead.  The architecture is just taking a slightly different direction.  Interest in CDA Release 3 as a specification based directly on HL7 Version 3 and the RIM is waning, in part because many national programs (including our own) have just made a significant investment in the existing CDA Release 2.  There is still some interest in fixing the wobbly bits that we've been complaining about, and in aligning with the current RIM (variously named CDA R2.1, R2.5, or some other value less than 3.0).  That's not where I'd be spending my time, but I wouldn't be averse to having it happen.  The new frontier, though, is exploring structured documents in FHIR, rather than where we've been before.

I expand on this a bit in the following snippet from an interview I gave last week at HL7. 



Clinical Document Architecture isn't going away.  We may just be looking at using some new building materials.

So, if the CDA is dead, long live the CDA!

   Keith

P.S.  See also this post via Grahame Grieve



Friday, September 27, 2013

Safe Exchange of CCDA

In teaching CCDA yesterday, we talked about how to move from CCD/HITSP C32 to CCDA.  The question came up of what to do with a CCDA document that isn't valid or is poorly structured.  I recommend two things:

  1. When exchanging CCDA documents for the first time with any partner, ALWAYS validate that the content conforms, and if it doesn't, don't import the machine readable data into your CDR.  
  2. Quarantine the content, but make it accessible to clinicians as text.  In other words, if it's valid CDA, but not valid CCDA, transform it to a human readable form (e.g., HTML) using YOUR OWN TRANSFORM, and allow that to be displayed.  Make it clear to the reader that the clinical data in this document hasn't been imported (incorporated is the word used in MU Stage 2) into the CDR.  A sketch of such a transform appears at the end of this post.
These recommendations do two things:
  1. The validation step is a prophylactic, preventing dirty data from infecting the CDR
  2. Putting "dirty" content into a quarantine makes sure that the clinical data that someone took the trouble to communicate to a provide is still available for clinical care (as human readable content), even if not in the most accessible way.
The latter part is simply an appropriate application of both Postel's law, and the principle that the patient comes first.
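
To give a flavor of what such a transform might look like, here's a minimal sketch (mine, not anything blessed by HL7).  It assumes a well-formed CDA R2 instance and only renders each section's title and narrative block; a real rendering transform would also map the CDA narrative markup (lists, tables, styleCode) to proper HTML, but even this keeps the content in front of the clinician while the machine-readable entries stay quarantined.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch: render CDA section titles and narrative as HTML.
     Assumes a well-formed CDA R2 instance; not a full CDA stylesheet. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:cda="urn:hl7-org:v3">
  <xsl:output method="html" indent="yes"/>

  <xsl:template match="/cda:ClinicalDocument">
    <html>
      <head><title><xsl:value-of select="cda:title"/></title></head>
      <body>
        <h1><xsl:value-of select="cda:title"/></h1>
        <xsl:apply-templates
            select="cda:component/cda:structuredBody/cda:component/cda:section"/>
      </body>
    </html>
  </xsl:template>

  <xsl:template match="cda:section">
    <h2><xsl:value-of select="cda:title"/></h2>
    <!-- Copy the narrative block as-is; a real transform would map
         CDA narrative markup (list, table, content) to HTML equivalents. -->
    <div><xsl:copy-of select="cda:text"/></div>
    <!-- Recurse into nested sections -->
    <xsl:apply-templates select="cda:component/cda:section"/>
  </xsl:template>
</xsl:stylesheet>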

Wednesday, September 25, 2013

A Theory of Everything

Within HL7 we have produced or are producing numerous specifications dealing with Quality Improvement:

  • VMR for CDS Logical Model
  • VMR for CDS Templates
  • VMR for CDS XML Implementation Guide
  • HED Knowledge Artifact Implementation Guide
  • Decision Support Service Implementation Guide
  • HQMF Specification
  • HQMF QDM-Based Implementation Guide
  • CCDA Specification
  • QRDA Specification
  • Arden Syntax
  • GELLO
  • InfoButton
  • QDM
Each of these addresses a variety of things, some in the same way and others differently, sometimes for really good reasons, and at other times for reasons we perhaps don't yet understand.

We didn't know (and still don't in full detail, but we've at least gathered the data) where these specifications overlapped, and what was missing.  Today, members of the Clinical Quality Improvement, Clinical Decision Support, Structured Documents and Templates groups met to talk about what our Theory of Everything is, or more specifically, our Quality Improvement Architecture.  I presented the following slide deck to help us figure this out (the magic is on slide 5).



We took the list of specifications above, and parceled out the pieces into slide 5, and put it on a white board [this picture is after we cleaned it up].

As you can see, there is plenty of overlap.  We also noted quite a few gaps, because we are missing a lot of specifications where we agree at the conceptual level (actually, the agreement exists, but as lore rather than publication). 

In fact, we agreed that at the conceptual level, we are almost in complete agreement.  The one place where we have some variation is between QDM and HL7 Clinical Statements, where there are some disconnects.  There is also a great deal of consistency at the platform independent information model level.

One of the realizations I came away with was that some of the RIM's rules for how to interpret certain structures are really behavioral/interpretation models in the computational viewpoint, rather than logical models in the information viewpoint (e.g., joinCondition has implied behaviors).

There's plenty of work to turn this diagram into an Architecture, and that's what I'll be spending the rest of my week on (when I'm not doing other things).

I was extremely pleased with how this workout went.  We had a room filled to the brim (nearly 50 people) who were writing on post-its and pasting them up on this diagram.  We came to an agreement on where we agree and disagree, and we have some plans for how to move forward.  And I totally was making this up as the week was proceeding, but it worked really well.

We still disagree on some things, but now at least, we have a much better idea about where.  There was a point in the meeting where someone was talking about one thing, and I disagreed, and then we clarified what boxes we were both talking about, and were instantly back in agreement.

It must be Wednesday at the HL7WGM

As you've probably figured out by now, I have a tradition on Wednesday mornings at the annual plenary meeting.  Of course it's a time for recognition in the HL7 community, but I have to tell you those blue vases are quite heavy.  What I've got is much cooler and lots easier to carry around.

So who's next up for an Ad Hoc Harley?  I'll give you some hints.  He's young, probably late 20s. He's smart. He knows CDA. And CCDA. And FHIR. Already his contributions to interoperability are great.

Give up?  Need more hints?  He's disruptive.  He knows OAuth. And RDF.  If you haven't figured it out by now, I'm pretty sure he has.

Let's delve a bit more: Genetics.  IT.  Software.  Geeky software.  Health IT Software.  And Poetry.  I'm told he likes poetry.  And speaks fluent French.  He's also an all around nice guy.  And smart?  Did I mention smart?  It doesn't hurt to say it again.

If I were to give this award for one of his accomplishments, I don't know which it would be.  Many of the things he's managed to accomplish are things I should have done, wanted to do, or aspired to do, but couldn't pull off.  I don't know what he'll do next, but I look forward to it; frankly, I'm also hoping he'll stick around for a while, because we [patients] need him.

So here we go...

This certifies that 
Josh Mandel of Children's Hospital Boston 


Has hereby been recognized for outstanding contributions to Healthcare IT including, but not limited to: SMART and Blue Button Plus and C-CDA Examples and Scorecard and FHIR Open Source


Tuesday, September 24, 2013

Aligning Clinical Content and HIPAA Attachments

This week at HL7, one of the topics of discussion within Structured Documents is a proposal to develop a new specification supporting capture of administrative information required in claims transactions, otherwise known as Attachments. One of the challenges posed for the claims process is that the data required by payers for claims for different services is complicated. From the CMS perspective this is further complicated by the fact that each type of service may be governed by specific regulation with respect to requirements that must be met.

One approach to this is to gather the various requirements and consolidate them in one place, and then require providers to report on all the data elements they captured or didn’t for an encounter based on those requirements. I’ve been arguing against this approach because I very much want the same data being used to record the clinical encounter to be able to be used in this scenario.

The concern being raised is that the current claims attachment guide is simply a rather thin veneer over CCDA, and none of the CCDA document types require all the necessary data to be present.  I pointed out, unsuccessfully I might add, that the requirements that meaningful use imposes to capture and report on 17 data elements in the Common MU Data Set also aren't required of CCDA, and yet CCDA is the standard.  It's a combination of those data elements being present in CCDA, the ability for them to be included in any CCDA document, the MU regulations which require their presence, and the certification testing which ensures that those 17 data elements are present in any document that is sent from an EHR system.

I don’t want HL7 to create an implementation guide that is essentially an enabler of compliance to the hundreds of regulatory requirements. As soon as any of those change, the guide would need to be updated. One of the challenges here is a demand that any data that could be required for some part of these regulations, but which hasn’t been captured, has to be recorded as not having been captured. This is a packaging problem. The data may not have been captured in the EHR, but might have been captured elsewhere in an ancillary system (e.g., the Medication Administration Record). The data being required may not even be relevant to a particular specialty. For example, providers in the imaging space shouldn’t have to capture encounter data elements that providers in the anesthesia space might, and vice versa. Even in a similar space, what the surgeon reports and what the anesthesiologist reports will have specialty-specific variations.

If HL7 were to take this approach, the complaints that we would be hearing would be very similar to those reported by Margalit Gur-Arie in her thought-provoking post on Innovative Health IT. I can hear it now: “Why do I have to report that I didn't capture this? That’s not even my job. That appears in the surgeon's report.”

The approach that I’d like to see is to treat CCDA as a buffet. From it, a provider selects the things that they need to capture and report on. There need not be an attestation that “this was not captured”, because if it isn't there, either it wasn't captured, or they don’t have access to that information, or it could be in a separate information system. There’s no attestation today in the current process that a particular piece of data wasn't captured.  Rather, there's a signature that says this is what I have.  From the provider perspective, if the data needs to be there to get paid, you can be sure they’ll figure that out.

We need to stop trying to mandate everything in the document. We should expect other forces like reimbursement and quality improvement incentives to ensure that content used for payment will show up. After all, not getting paid should be a pretty good incentive for providers to get it right.  If the specification simply enforces what is already there in regulation, what will we do when the regulation changes?  We cannot keep updating the standard every time the regulations are changed.

One response to my thoughts on this is that the EHR system will generate the report and simply insert all the necessary statements about “I didn't capture this.” The challenge here is that systems used for administrative transactions and systems used for clinical data management aren't always the same. So now the billing system needs to look at the clinical data and figure out what is missing. That clinical data could be coming from several different systems (e.g., sometimes the ambulatory provider needs the discharge summary, which isn't even in any of his systems). The attachments transaction can package not just one, but several documents in response to a query. I see no reason why we need to create one “Uber CDA” for attachments, when we can in fact submit several. And if need be, a separate attestation that this is all the data available for this claim. Thus, if it’s not there, it wasn't captured.

Please, don’t make us duplicate the effort of generating a CDA document for a patient encounter that is already required for meaningful use, and require additional gunge that makes it less useful for clinical care.  And don't make us repackage information from several separate sources into a single document, when the existing claims transactions already support multiple inputs in response to a query.

It may well be that I don't understand, but I've been involved with attachments for ten years and more (in fact, this is now my 10th year as an HL7 member).  CDA and Attachments are what brought me here to start with.

Sunday, September 22, 2013

The Best Standards are Invisible

At 10:54 today I declared success at the FHIR Connectathon.  I had been able to create a SecurityEvent on one of the FHIR Servers, and was certain that I'd be able to use it with at least two others soon enough.  From my perspective, that was the result of starting from code I'd written eight months back, updating it to the current FHIR specifications, and working through how I'd implement generating a Security Event logging a disclosure while accessing data using FHIR.  All in all I added about 200 lines of code to my 400 line application (HTML and JavaScript).

For me, success was less about creating the FHIR resource, or querying a FHIR server for a particular piece of data, and much more about solving a particular workflow problem. It really wasn't about FHIR at all. I spent so much more time thinking about how I wanted to address this problem, and designing the service to work the way I wanted it to, than I ever did worrying about the details of FHIR. And that to me is the biggest sign that FHIR will be successful.  It gets out of your (or my) way, and lets you (or me) focus on the real (or externally imposed) problems.

The servers and clients running now in FHIR are crafted in multiple programming languages and platforms, including: Java, TCL, C#, Python, JavaScript, Objective C, Ruby and Grails.  Rarely have I seen an IT standard, let alone a healthcare IT standard, with that much implementation done across that many different languages.  There are at least three different reference servers, one of which is an Open Source effort (via Josh Mandel).

Grahame Grieve, Ewout Kramer, Lloyd McKenzie, Josh Mandel, John Moehrke and many others have done a tremendous amount of work on the FHIR specification.  I look forward to resolving the ballot comments on this specification [something that I rarely say].  I'm getting quite impatient to use it, and thrilled that it has become the basis for the Blue Button Plus REST API Pilot efforts.



Friday, September 20, 2013

Words Like Loaded Pistols

I just finished reading Words Like Loaded Pistols: Rhetoric from Aristotle to Obama.  I picked it up last Sunday after it appeared prominently in the sermon.  At 322 pages, I expected to be finished with the book Monday night.  I was wrong.  It was one of those books where I needed to read a chapter (or two), and take some time to digest what it had to say.  A book that I cannot finish in a few short hours, but want to keep reading, is a rare pleasure.

One of my favorite things about the book is how much fun the author (Sam Leith) is clearly having a) writing the book, b) using his own text as an example of rhetorical figures and tropes, and c) choosing fun examples (dissecting "Kyle's Mom is a Bitch" from South Park).

Whether you need to convince a client, a colleague, or a CEO*, this is simply a book worth reading.

   -- Keith

* The book calls this rhetorical figure a Rising Tricolon


Thursday, September 19, 2013

How We Got Here is Almost as Important as Where We are Going

I met a man on Monday after the Consumer Health IT Summit who proceeded to explain to me what's wrong with Blue Button, CCDA and Meaningful Use. He was a physician, an anesthesiologist by trade.  And for ten minutes I was harangued about how badly broken smoking status and CCDA was in Meaningful Use stage 2.  How the value set was useless to him, didn't capture the data he needed, and how of course everyone knew that the right way to measure smoking history was pack-years.

I listened to his story patiently, and stopped to ask him some of the same questions I had asked 2, 3, 4, 5 and more years ago on the same topic.  The first question is what about non-cigarette equivalences (e.g., pipe, cigar or chew).  I never did  get to the next question about what is the unit of measure (because pack is as arbitrary as tablet) because we moved onto a different part of the topic.

I managed to explain that these questions had been asked (by me) as much as seven or eight years ago, when IHE was first developing some of its profiles, and again some 5 or 6 years ago when HITSP was working on other topics.  And that at the time, we never did get consensus among physicians as to what the correct way to represent the result was.  So his "everyone knows" didn't hold up, because what everyone knew was different (ask the same question of five physicians and you can get six answers; the same is true of lawyers and engineers, the only difference being the question topic).

And then we talked about the fact that the original set of concepts found in Meaningful Use stage 1 came from a CDC survey instrument.  And that the main use of these was with quality measures about smoking cessation, not assessment of cancer risk.  And that while the CDC survey concepts may have been solid, what was recorded in EHRs using those same categories blurred the lines, because of course EPs don't ask questions the way that the CDC survey did.  And then how, later in stage 2, physicians complained because these concepts didn't fit their workflow and weren't fine-grained enough (light and heavy smoker were subsequently added).  I'll note now that this points out that there are multiple uses for this datum.  And that implementers complained that a set of concepts without codes wasn't useful, and so the Smoking Status value set was born.  And that, my friends, is how we came to a value set about which the only consensus seems to be that everyone dislikes using it, but has little better to offer.

We also talked about (the new) Blue Button.  He complained that it was still stuck on documents, and that that was not useful (I seem to recall he used stronger language). Yes. For now. And had he been involved in the ABBI workgroup (too little).  Was he aware how it prepares the way (the route) for being about more than documents, supporting the data elements we want (No).  And how it's using the parts of FHIR that appear to be ready now (a bit).  Yes he was familiar with FHIR, he's on an HL7 workgroup looking at FHIR now.  And have you voted on FHIR (I never got a solid answer). I fear the answer is in the negative, and hope otherwise.

It was an interesting encounter.  I felt as if I had somehow managed to explain to one person what is happening and how we got here, and where we are going, and had some success.

Wednesday, September 18, 2013

The route is what matters

Yesterday I spent the day at ePatient Connections along with several other S4PM members and other industry notables.  The reason I was invited to speak at this conference was because of this post.  I'll upload the presentation when I get a chance to later (it's on a different computer).

I also participated in a panel discussion on How Technology is Reshaping the Patient Experience.  We were asked to talk about how we thought that was happening, and to take a futurist's view.  I went first and had two key points:

  1. Patients are beginning to use technology to reinsert themselves in the Healthcare Value Chain, and to gain better control over it.  Blue Button is simply the start of this, and we are at the very early stages.
  2. Eventually we will learn that the value isn't in control over the data, but rather in the links between different data sources.
Right now, everyone wants to give us an app, and get access to us and/or our data.  The money isn't in the app sales, so much as it is in the ability to aggregate patients like us, or our data.  But that truly is a broken model, and all it does is perpetuate the silo mentality.  The web's value isn't in the ability for me to write a blog post, so much as it is for people to link to it from a variety of places.  Its biggest value is in its links, not its little silos of data.  Just ask Google.  They get it.

I've been working with structured documents for years, even before I ever got into healthcare.  I know the value of a link.  And the links really are what we are all after, not the data.  It's the link from the outcomes back to the treatment, from treatment back to diagnosis, from diagnosis back to symptoms, and even before then back to patient behaviors, where the true value lies.  In order to create the links that have that value, we need to set the data free.  The company or entrepreneur that figures out not this, but rather how to get there from here, will be in the winning position.  We know the destination.  The route is where the real challenge lies.  The route, after all, is what a link is: a pointer of the way from here to there.

     Keith



Tuesday, September 17, 2013

Health IT Week

I spent the day at the Consumer Health IT Summit kicking off Health IT week in DC. The day was packed with activities scheduled down to 5, 10 and 15 minute blocks of time.  Even with an unscheduled break built in they still managed to end the day on time.

Farzad was our host du jour.  He briefly talked about how many of us in the room were misfits, a quote that Regina Holliday immediately riffed on in her art (see the last image).

ePatient Dave gave the keynote, "How Far We've Come: A Patient Perspective".  He ran through 40 slides in 15 minutes, an amazing pace.  The deck was remarkable for how many different topics it touched on that we've been through over the past few years, and I even got a starring role in a few slides.  A lot of what we heard today I've heard before in other settings, but many of the people in the room were hearing it for the first time.

OCR updated their memo on patients' right to access data just in time for new rules to go into effect Monday of next week (if you need to find it again quickly, it's at http://tinyurl.com/OCRmemo).  They also released new model notices of privacy practices, and added a video (below):




The folks over at OCR have been pretty busy I'd say.

There's some new work, "Under Construction" at ONC, around Blue Button: a nationwide gateway into the Blue Button eco-system.  Patients will be able to find out from that site how to access their data from providers and payers using Blue Button.  Want to be sure your patients can connect to that?  Talk to @Lygeia.  We previewed a great video that I would love to be able to use to sign up data holders.  It's a shame that at a meetup of several well-connected social media folk, there isn't a list of URLs to redistribute, but well, as Farzad said a few months back, marketing really isn't the government's forte.

We heard from a number of folks at ONC, CMS, OCR and elsewhere in HHS in the first half of the morning.  A couple of key take-aways for me:  In recent regulation, CMS proposed paying for coordination of care, a la using meaningful use capabilities like Blue Button.  Director of eHealth Standards at CMS Robert Tagalicod was also heard to say that "We are looking at incenting the use of Blue Button for Meaningful Use Stage 3."  No surprise to me, but confirmation is always good.

Todd Park (US CTO) popped in for a short cheerleading session.  If anyone can talk faster about Blue Button than Farzad, it's Todd (I feel comfortable calling him Todd because that's what my daughter calls him).

We had a short unscheduled break because the ONC folk finally realized that four hours in our seats was NOT going to cut it.  Someone needs to teach these folks how to run a conference someday.  It would also be nice if we didn't have to go through security just to get a cup of coffee (but after GSA and sequestration, what could we expect).  Even so, a quick fifteen minute break and we were back in our seats.  Lygeia wields a mean hammer.

The Consumer Attitudes and Awareness session was the first foray I'd seen where ONC started talking about marketing.  I'm still a bit disappointed that so far, ONC is only thinking about targeting people who are already sick in their Blue Button marketing efforts.  I really think that we need to change our culture, and that means starting with our youth.  I'll keep hounding them and anyone else who will listen until I get my way there.  I can teach an eight-year-old to write a HIPAA letter.  Why shouldn't we start with eighteen-year-olds?

ONC announced the winner of the Blue Button Co-Design Challenge: GenieMD.  The app looks good and it has the one main thing I want, it uses an API to access my data.  I'll have to take a look at it later.

We heard from a panel of eHealth investors.  I couldn't help but feel that they are still disconnected from patients.  The issue of monetization of eHealth seems to have two places to go: either get more revenue dollars from patients, doctors or anyone else who has it to spend, or take a cut of the savings.  I don't think any one of these investors realizes that the Healthcare market is saturated with places to spend money, and adding one more will only take in a little bit, whereas the opportunities for SAVING money are probably a lot more lucrative, and could readily pay for the investment.  The more products that are built to deliver patients, patient data, or health data to someone else for bucks, the more silos we simply create in our healthcare delivery system.  We've got to think about ways to make money by freeing up things and breaking down barriers.  As I tweeted during that session:




And that was retweeted a dozen times.

After lunch we met up again to do some work on Outreach and Awareness to Consumers, including getting into some of the marketing details around the new program, and doing some A/B testing with the audience.

Overall, it was a good day.  Now I sit in my hotel room in Philly, writing this post, and preparing for tomorrow's ePatient Connections conference where I'm speaking and on a panel.  It's not my usual venue, but I have a soft spot for Philly since my family is from here, and I grew up outside the city.  So, I'm taking a personal day for the conference here, and then I'm back home for a few days.  After that, it's the worst way to attend an HL7 conference: drive in every day, and home every night.  I'll get to see my bed, but probably not any of my family while they are awake.  I'll be at the HL7 Working Group meeting in Cambridge.

And to polish it all off, here is Reggie's art: With Lygeia, ePatient Dave, Farzad, and Leon all getting ready to depart the Island of Misfits.

Fitting I think (or should that be misfitting).  Farzad will be leaving ONC on October 5th, and had this advice to offer his successor (I think she'll do just fine).

-- Keith

P.S.  I am very sad that today was so marred by the attacks at the Navy Yard. That was about a half mile from where we all were sitting.  The Police presence in the city as we headed over to Tortilla Coast was not quite as scary as Boston a week after the bombings, but still close enough.  I'm not used to walking around a city where automatic rifles are carried at patrol ready and police are standing on every block.

Friday, September 13, 2013

Clinical Quality Workgroup

The Clinical Quality Workgroup of the HIT Standards FACA met today to discuss issues around the harmonization of information models and logic for expressing CQMs and CDS and readiness of those for use in the next stages of meaningful use.  While we could have dived right into various standards, I suggested that we first establish some principles around which we would be evaluating the standards and making recommendations.  So, I was asked to make a list of said principles.  I have my notions, but I'd like to hear yours.

Here are some examples:
1.  The standards should be aligned around a common data model.
2.  The translation from a representation in one standard to the other should be obvious and unambiguous.
3.  Value sets should be able to be separately maintained.

If you have inputs, add them as comments below (here or on Google+), or e-mail me (you can find my e-mail address on this page).

    Keith

Balloting a 1000 Page Specification

How do you ballot a 1000 page document in 30 days?  That would be 250 pages per week, or 50 pages a (work) day.  At one minute a page, that's about an hour a day.  Here are a few tactics I use:

  1. Use the change log to identify things you are worried about.
  2. Review already submitted ballots from other organizations you trust and pile on.
  3. Ballot general principles.  If you find the same problem in two areas, write your ballot comment in a general way, addressing the principle, and point to the examples you know.  If you find others later, add them on.
  4. Write your comments inline in a Microsoft Word document.  Copy and paste HTML documents into Word, or extract a PDF into Word (requires full version of Acrobat). 
  5. Use this word macro to extract your comments.  This saves precious time getting them into spreadsheet format.  If you don't have time to do that, submit your Word document as your ballot comments. 
  6. Prioritize your work.  Do the things that are of major concern first.  (BTW: This applies to balloting multiple documents as well.)
  7. Divide and conquer:  Split up the work across multiple people in your organization.
  8. If you trust the reviews of others, cite their comments as your own in your vote.  
I routinely use these techniques across multiple HL7 ballots, but rarely have had to apply them to a single specification.  I need to do so this time though, because the latest CCDA specification is some 900+ pages.  If you haven't already signed up for this ballot, it is too late.  However, if you have, then Monday the 16th is your deadline to submit comments.

I have one major recommendation that I already know I'll be submitting on CCDA 2.0 with my negative vote.  

Wednesday, September 11, 2013

The nice thing about standards...

... is not that there are so many of them, but rather, like the weather, if you wait long enough, they will change.

I have been involved in CDA standardization efforts in the US and internationally for nearly a decade now.  My first CDA implementation was on CDA Release 1.0 in 2003.  I was first involved in the final round of balloting on CDA Release 2.0 in 2004.  I edited my first CDA implementation guide in 2005 (CRS Release 1).  I wrote my first CDA profile in IHE also in 2005, and that was XDS-MS.  I helped develop CCD in 2006 and early 2007.  I was also editor for the IHE XPHR profile in 2007.  I was the first editor (and remained in an editorial position for all subsequent editions) of the HITSP C32, created in 2006 and revised five times from 2007 through 2010.  IHE XPHR was also revised again in 2008.  I was also one of many editors for the Consolidated CDA in 2011.

Over the past eight years, I've watched the standard change and improve, and I've been impressed by how far we have come.  For CCDA Release 2.0, I took some time off from any sort of editing role, but am still involved and working on how to make it better.  The HL7 ballot closes next week.

   Keith



Tuesday, September 10, 2013

My way is right

... for me.

All too often in our industry we forget that there is more than one way to accomplish the same thing.  I spent a good half-hour, and I mean good as in well-spent, with someone who is on the other side of an issue from where I approach things.  I started off our conversation with something like the following statement:

Both of us know that we have been successful in doing things in our preferred way.  We also know that we have a lot of experience doing that.  It's not that my way is right, or your way is right.  Both can lead to success.  In the last thirty years the question we are debating has yet to be decided in IT, nor in the last twenty years elsewhere, nor in the last ten in our industry.  I'm not going to move you from your preferences and you are not going to move me.  We are not going to solve in the next hour what our industry has not been able to decide in the last 3 decades.  Let us not argue about which way is better, but rather work on how to best resolve this issue so that we can move forward.

We did manage it, and we avoided an hour and a half of chest thumping that would have gotten neither of us anywhere.

--   Keith

Monday, September 9, 2013

I'm so glad I don't do imaging

I just did a little bit of math recently.  I did some rough guesstimates (counting data elements at the entry level in a CDA document for a healthy patient).  I figured that about 20 records (rows in a table) would be needed to store the clinical data for a healthy patient.  I know from other work that I've done that for a patient with a chronic condition, a few medications, some allergies, family history and a few other things, this could easily climb to 100 or even 200.  Let's work with healthy patients.

The state of Texas has about 25 million people (rounded down to the nearest 5 million for ease of computation).  If the data for every patient in Texas were captured, and all were healthy, and all were stored somewhere, we'd be talking about 20 records/person * 25 million persons = 500 million records.
In simple terms, that's half a billion records in one year.

There are around 1.2 billion office visits a year in the US.  Each would generate a bit more than one medical summary (I'm guessing less than 1.1).  To account for the fact that these patients are both healthy and sick, let's split the difference and call it 50 records per document.  That's 66 billion records.  Just to be nice, I'll say that each record can fit in 1000 bytes.  That's 66 trillion bytes.  Collect that data for 10 years.  660 trillion bytes.
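
Laid end to end (with the same rounding as above), the back-of-the-envelope arithmetic looks like this:

1.2 billion visits/year x ~1.1 summaries/visit ≈ 1.32 billion documents/year
1.32 billion documents/year x 50 records/document ≈ 66 billion records/year
66 billion records/year x 1,000 bytes/record ≈ 66 terabytes/year
66 TB/year x 10 years ≈ 660 TB, or about two-thirds of a petabyte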

How quickly you approach a petabyte.

I'm so glad I don't do imaging.

Sunday, September 8, 2013

The Solution

In CCDA Release 2.0 Out for Ballot, I explained that there is a problem with the current content.  In a comment on that post, someone involved in the effort made the statement that: "In life, things naturally improve over time, but not if there is nothing to improve upon. Some people complain, and some people deliver."

What I'm looking for is a solution to this challenge, not an excuse to stop the content from going forward.  I want to see HL7 deliver quality standards.  Not addressing a huge issue impacting the implementability of our standards when we have the opportunity is a big mistake.  Fortunately the solution is not that far off.

For all old templates X where a new template ID Y has been assigned, change the ID as follows:
Y.root = X.root
Y.extension = "2.0"
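
To make that concrete, here's what the change looks like in an instance (using the Allergies Section, entries required, root OID purely as an example; any template would do):

<!-- CCDA Release 1.1 identifier for the template -->
<templateId root="2.16.840.1.113883.10.20.22.2.6.1"/>

<!-- Proposed CCDA Release 2.0 identifier: same root, new extension -->
<templateId root="2.16.840.1.113883.10.20.22.2.6.1" extension="2.0"/>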

There should be no disagreement that these represent two different identifiers within HL7.  Some might argue that it changes the nature of the namespace represented by X.root.  I would agree, but I would also argue that this is a necessary "bending" of the nature of the namespace that developers will understand, and has a lot of value.

Let's look at few examples.

Gap Analysis
One quick way to do a gap analysis is to add extension="2.0" to all templateId elements found in the output of an existing CCDA implementation, and run that through a validation process for CCDA Release 2.0.  This is very easy to do in software, and will quickly identify areas of concern in your existing implementation.  It doesn't absolve developers of having to do a more thorough review, but what it does do is give them a way to quickly assess the amount of change necessary.  You can do that assessment with an existing document in a few minutes using global search and replace in your favorite text editor.  If you have to change all the template identifiers, that's going to take a few hours or more just to build the transform, and then test it, and then try it out.  The skill levels necessary are very different.  I can assign the former task to a junior engineer; the latter requires a greater degree of skill.

Software Update
After finishing your software changes for a particular template, you update the code to output the extension="2.0" attribute on the template identifier.  You don't have to look up, cross-reference, or memorize a 15-digit value (or an inconsistently followed OID structure change); as an engineer, you just remember "2.0".  That saves you five minutes of coding and cross-referencing.  It allows you to simply update your design documentation, which is another few minutes.  Test cases have to change, but you save a few more minutes there.  Multiply a few minutes here and there by a couple hundred templates and you've saved several days or even a week or two.

Model Updates
Until we have an interchange format, the same process that goes for the software update will also have to be followed manually to update tools like MDHT.  We start with the same template identifiers, update each model as necessary, and then add the extension attribute value.  Again, this provides the same kind of value to the implementer.

Version Compatibility Testing
Take an upgraded instance and remove all extension="2.0" attributes.  Run it through the old validator.  What happens?  You can run this test in a very short amount of time, and with less skill than is necessary if we change the OIDs wholesale.  I can write that XSLT in 10 lines or less (a sketch follows below).  I can use search and replace and do it in seconds.  Make me change every OID and it just got more difficult.
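
For what it's worth, here's roughly what that transform would look like: a plain identity copy that simply drops the proposed extension from every templateId.  The match pattern is my own sketch, not anything from the ballot.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: strip the proposed extension="2.0" from every templateId,
     producing an instance the old (R1.1) validator should recognize. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:cda="urn:hl7-org:v3">

  <!-- Identity: copy everything as-is by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Except the proposed extension attribute on templateId elements -->
  <xsl:template match="cda:templateId/@extension[. = '2.0']"/>

</xsl:stylesheet>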

The change I'm requesting makes it much easier to do an update.  How do I know?  Because I've gone through similar processes: moving from HITSP C32 version 2.3 to 2.5, where template IDs didn't change (for the most part), and from C32 2.5 to CCDA, where they did.  I wrote the former post in about a day, whereas the latter series of posts took more than a week of effort to get through, and twice that in elapsed time.

Make it easy for us to do the update.  That's what I'm asking for in my negative ballot comments.  And if you cannot wait for that ballot, do it the way the Templates work group has already said it wants to move forward.  We can even involve the Templates work group in that discussion.  I don't want to triple the effort to move up to CCDA 2.0, because I too want to move ahead.  Spending more time than we have to in order to upgrade our software isn't going to make HL7 any more friends in the industry.

    Keith

Friday, September 6, 2013

From Profile to Architecture

What's the right way to implement an IHE profile in code and/or services?  The answer, if you look at it, might very well surprise you.  I did an exercise recently with a small team of architects where we looked at a particular user scenario: handling the referral of a patient to another provider and getting back the results (e.g., the 360X Referral Project).  We broke the scenario down into several steps, each of those comprising a particular task for the user.  We went back through the steps and focused on the task of sharing a referral package from the referring provider to the referred-to provider.  From the user perspective, the healthcare provider wants to collect and share information on a specific patient with another provider.  They don't care about how it gets there, so long as it is available to that provider.  If fax is the only way to do it, so be it, but ideally they'd like it to be an electronic transmission (which has a much higher fidelity and quality of service).

So, the service that needs to be delivered is to "share a package of information" with another provider, on a single patient.  Oh, and did I mention that the referred-to provider might not even be known (it could be up to the patient)?  So the first step of the service is to figure out how to get the document to where it needs to go.  If there is a destination (the referred-to provider), routes to that provider might be discovered by looking up information in a healthcare provider directory.  Having established routes, the system must now choose one (or more in some cases).  The mechanism by which the route is chosen depends upon the needs of the referring physician and the patient.  Does it need to get there with high reliability and fidelity?  Direct might be OK, but a push using an HIE or a point-to-point connection might be better.  Or is it more important that it get there quickly, where low fidelity is OK and there's no digital route?  Fax might be the best choice.

Even without a known target provider, a route can still be established.  In regions where everyone uses the HIE, that's probably the best destination.  But another possibility is a CD or USB stick; the route is simply to media.  Another route is to a private store-and-pull location, which would allow a provider at the opposite end with the right tools and the retrieval key to access and download the content.

Before transporting the package of content over the route, a few things might need to be massaged.  For example, the patient identity might need to be mapped from the referring provider's identity domain to another identity domain that the referred-to provider understands.  You might need to map metadata about the documents across "affinity domains" (e.g., as in XCA).  These are additional services that need to be supported.  Mapping the patient identity could use the PIX, PDQ or XCPD profiles, depending on the route and the information available in the original package.

This architecture looks nothing at all like XDS, XDR, XDM or the Direct Protocol, or HPD, or PIX/PDQ/XCPD.  I've thrown in a few extra pieces like FAX (which could also be viewed as secure printing), and you could also imagine that other routes like use of DICOM CSTORE could also be used as the method of transport.

Deeper in the service layer, there are places where these profiles finally come into play, and that is between system boundaries.  Once you've determined the route to the end user, you've also helped to define the service contract used between the sharing systems, and the profile enables the two systems to communicate.

A naive implementation would develop a "Document Registry/Repository" Service, and implement the Provide and Register, Query and Retrieve Document transactions.  It's there, but only at the system boundaries.  Within the systems implementing the IHE profiles, the path to those services looks much different.

I traverse those boundaries in my head all the time without thinking about it, and so do many of my standards colleagues.  What we need to be able to do is illustrate how these services enable interoperability at the application level, without requiring applications to really get into all the gory details.  The profiles have to put that in front of engineers, but most neither need nor want to deal with that level of complexity.  The complexity exists for many reasons, some out of necessity, and some because of history (and, some would say, bad design decisions).  However, we can make it simpler for developers to understand at the level their applications need to.

What we learned from this was that the profiles and standards are the building blocks.  How you put them together is really what determines how good the space looks and how well it functions for its designed purpose.  In HL7 and IHE (and elsewhere) we build the bricks and shape the beams.  It's the architecture that makes it pretty.  We may need a few more good examples to show the way.  Here's one example, thanks to John Moehrke and others: A Service-Oriented Architecture (SOA) View of IHE Profiles, which helped to shape how my exercise was developed.

It worked out rather well.  I don't have the full picture yet (or even an artist's rendering), but I'm starting to get the picture in my head.

Thursday, September 5, 2013

Is MeaningfulUse stifling innovation in HealthIT

I write this post after reading Margalit Gur-Arie's excellent post on Alternative Health IT.  She's one of those people who, when they write, I not only listen, but also make sure others know about their posts automatically via Twitter.  It's a short list.

I'm a bit challenged by her post, because in many ways I agree, and in others, I disagree.  What I find stifling is the pace at which Meaningful Use is proceeding.  When you put an entire industry under the MU pressure cooker, the need to meet Federal mandates overwhelms everything else.  The need to develop software that is able to support a large number of externally controlled mandates can, and in many cases has, resulted in bad engineering.  You can't innovate well on a deadline.  It's not a well-understood repeatable process (actually it is repeatable, but few are able to define and execute on it, but that's for another post).  What results is often "studying to the test", and neither developers nor end users ever really learn the lessons that Meaningful Use is attempting to teach.  I've seen multiple times where developers produce a capability that meets the requirements of the test, but which fails to meet the requirements of the customer.  See John Moehrke's excellent analysis of what it takes to pass the encryption tests in Stage 1.  If you do JUST what it takes, you can wind up with something that customers don't need and won't use.

In some ways, the test itself is to blame.  But in other ways, it is our attitudes about what that software is supposed to do that are to blame.  Margalit makes several points about the utility of gathering family history and smoking status for patients for whom other things are more important.  One of the first rules of care in the ED is to stabilize the patient.  That means that other things (like capturing medical history) should wait.  If your EHR forces you into a workflow that doesn't support that necessity in the ED, by all means, replace it.  If your process requires the capture of data that you don't always need for every single patient, perhaps you should revise it.  The metrics in meaningful use require that, for patients admitted (in a hospital) or treated (in an ambulatory or ED setting), 80% have smoking status and 20% have family history recorded.  This isn't an all or nothing measure.  MU isn't saying do it every single time, but it is saying that this should be part of your practice for most or some patients.  I agree the numbers might be usefully adjusted for different settings, but arguably it is also a lot less costly and challenging (for the government) to set one measure for everybody.  Is it fair to consider that X is too high a number?  Possibly.  Is there a number that would make everyone happy?  Hell no.  So we pick one and live with it, and move on.  It's NOT quite as scary or as stupid as it might seem.

The test for providers is more challenging than the test for the developers.  It is made up of a couple of dozen questions, each one a pass/fail question, and you have a year, or 90 days, or whatever, before you find out if you've passed (although you can monitor it).  Failing any single question results in failing the test overall.  This would be like a class in which you are given 10 tests over the course of the semester, and your final grade is the lowest grade of any single one of those tests.  I think we'd be better off with a bunch of pass/fail questions and a set metric of what score is needed to pass overall (that's what the menu options do, but few actually see it that way because there often aren't enough of them to be relevant choices).

On where Meaningful Use is succeeding in developing innovation, I think there are a few places of note.  Blue Button Plus supports unprecedented patient access, and while NOT directly required by name in Meaningful Use Stage 2, is built from readily accessible components and requirements that are present in Stage 2.  I'm speaking specifically of the View, Download and Transmit requirements in the incentives rule, and on the standards side, Consolidated CDA [arguably a refinement of an innovation produced several years ago], and standards applied in the Direct Project.  Stage 3 has much more to offer I think, even though we are just starting to get a handle on what it might look like.  The Query Health initiative did some really innovative work that supports not just its particular use case (health research), but also automation of quality measurement using HL7's HQMF.  If you think developing a declarative means for specifying quality measures (and using that for research as well) isn't innovative, you certainly haven't been viewing some of the challenges we were trying to solve from my perspective.

It's not all bad.  It's not all good.  But overall, I think the end result does not paint quite as depressing a picture.  And it is just one program.  It is the biggest one we have right now, but that's about to change.  The ACO rule is kicking in, and we are starting to see providers (who now have an EHR due to meaningful use), start to look at real innovations that support better care.  For them, there's only one test score (how much savings there is at the end of a term), but they get to define the curriculum and how it will be learned.

Wednesday, September 4, 2013

CCDA Release 2.0 Out for Ballot

I'm getting ready to start reviewing the HL7 Consolidated CDA Release 2.0 specification that updates CCDA Release 1.1 presently used for Meaningful Use.  If you haven't already signed up to vote on this ballot, you have until September 9th to do so, and I would advise that you do so.

There are two main documents.  The introductory material is about 50 pages long and is in the first document.  The templates make up the second document which is now some 930 pages long (CCDA Release 1.1 was some 530 pages in length.)  To put this into context, if you spend 1 minute on each page, you will spend nearly 16 hours, or two days on the entire content.  If you spend 10 minutes a page, you would need 20 days, or four work-weeks to review it thoroughly.

I already know I'm voting negative, and that is because 63 templates now have new identifiers just because: "Updated to reference a contained template that has versioned."

  • Every existing document template ID has changed.
  • Clearly more than half of the sections have changed, and probably closer to 80% of them have.
  • Of the 110 entry templates, only 29 are unchanged.
  • And every changed template has a new identifier.

This is a mess on so many levels.  First and foremost, HL7 needs, and is working towards, a mechanism to version templates that means that we won't have to go through this exercise again.  If you agree with me, Vote Negative and simply cite my ballot comments, or write your own if you'd like.

I've been promised that the new tools will help track all those changes, but frankly, until the data is made available in an exchange formalism for the templates (which it is not), and until there are tools that can automatically generate code from that data (which there are not, but there would be once the exchange formalism existed), there's no AMOUNT of PDF or electronic text that will help my engineers avoid the pain they just went through getting to CCDA Release 1.1.  It's time to stop using DIGITAL PAPER as our mechanism to update our standards and our software.

We need to work towards a sustainable mechanism to support changing the template specifications.  We cannot afford to spend the time it took to get from CCD to CCDA again in our industry.  The templates may be good, but the mechanism by which HL7 supports the industry in adopting updated versions needs to be in place before we create another 500 pages of text to review.

Ideally, we'd also have a mechanism by which templates can be reviewed and updated individually or in smaller batches, and by which developers can access and use the content in smaller pieces.  This content is already exceeding the capacity of the software used to deliver it (as anyone with experience editing a 500-page Word document can attest).  But I'll save that battle for another day.  First and foremost, I want to avoid a repeat of the painful exercise we all just finished.