Wednesday, April 30, 2014

IHE RECON Profile Refresh

In 2011 IHE published the Reconciliation of Diagnoses, Allergies and Medications (RECON) profile, which described the interoperability requirements for reconciling these items from multiple sources.  Remarkably, without even trying, this profile was very well aligned with ONC's Meaningful Use Stage 2 requirements, which were published later in 2012, as I described here.

The profile has received little uptake, in part due to attention to Meaningful Use in the US, and perhaps because of some unnecessary complexity.  Many EHR vendors implement the basic Reconciliation capability described in the profile.  However, none implemented the recording functions around reconciliation that the profile specified, and that complexity may have been one of the reasons for the limited uptake.

This year, IHE re-factored this profile to incorporate reconciliation of other data elements needed for care management, and made it simpler and easier to implement.

The Reconciliation Agent is the principal actor in this profile, and it supports the basic operation of reconciling data from multiple lists in an appropriate user interface.  This is the same capability that I've seen most EHR vendors demonstrate, so they can claim it essentially by declaring it (and ensuring that they support the required functionality in the profile, which most likely already do).

We split off the requirement that subsequent content be created with an annotation recording that the reconciliation act was performed, and simplified the requirements for that act.  In the original profile this was required; now it is described as a named option.  Finally, the structure of the Reconciliation act itself has been simplified.  When present, it simply declares that all the items of the type indicated in that section (be they problems, medications, allergies, et cetera) have been reconciled, who did the reconciliation and when, and what the sources of information were.  It no longer has to wrap the reconciled content, and because of that, we don't need a different act for each kind of item to reconcile.
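To make the simplified act concrete, here is a conceptual sketch in Python of the assertions it carries.  This is my own illustration of the data involved, not the profile's actual CDA representation:

```python
# One reconciliation act per section: it asserts that all items of a
# given type were reconciled, by whom, when, and from which sources.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReconciliationAct:
    item_type: str        # e.g., "problems", "medications", "allergies"
    reconciled_by: str    # who performed the reconciliation
    reconciled_on: str    # when it was performed
    sources: List[str] = field(default_factory=list)  # information sources

# Because the act no longer wraps the reconciled entries, the same
# structure serves every kind of item.
example = ReconciliationAct(
    item_type="medications",
    reconciled_by="Dr. Jane Smith",
    reconciled_on="2014-04-30",
    sources=["Discharge Summary 2014-04-12", "Ambulatory Medication List"],
)
```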

Now it is easier to implement, and I hope to see many more systems using it and testing it at Connectathon next year.

Right almost by accident: The JASON Report on HealthIT Infrastructure

I've finally read through the most recent JASON report on A Robust Health Data Infrastructure.  It fails to live up to one pundit's billing of it as the "Son of PCAST", the report widely reviled by the healthcare industry.  The PCAST report at least had some semblance of professionalism about it; the JASON report is amateurish by comparison, full of outdated references, pedantic writing and unjustified opinions.  Even so, it's more right than wrong, but probably for the wrong reasons.

My favorite quote from the most recent JASON Report is:
Innovation in health care appears to be frozen by a deluge of overly ambitious, insufficiently practical, and often conflicting advice
Never have I seen such a well-qualified self-referential statement in a report before, and I have to completely agree with it.  In this case, I'd also have to add confusing advice to the list.

The report relies on out-of-date references in citing the slow growth of adoption of electronic medical records by healthcare providers.  We are in the midst of a technology revolution in terms of adoption.  In four short years, the US has emerged from the bottom of the pack of all nations with respect to Health IT to being near the top, and if the pace continues we will shortly be at the top.  With such rapid change, there is no question that there will be fits and starts and growing pains, and that we won't have gotten it right on the first try.  The critique that we aren't moving fast enough:
The level of interoperability set forth through the CMS Meaningful Use criteria, as a result of the HITECH Act, is too low to drive meaningful progress
fails to indicate what pace would be fast enough, or how we could expect to achieve it.  Meaningful Use is starting its second cycle this year.  Systems conforming to the 2014 criteria are being deployed and used, and haven't even been in place for half a year.  And yet somehow, we've already learned enough to declare that Meaningful Use Stage 2 is also a failure:
The criteria for Stage 1 and Stage 2 Meaningful Use, while surpassing the 2013 goals set forth by HHS for EHR adoption, fall short of achieving meaningful use in any practical sense. 
I would like to see what the authors define meaningful use to be, and how they can conclude that a program that has been in place for only three years has failed so significantly.  The first two-year stage jumped the national adoption level of Health IT from 30-40% to nearly 80%, according to the CDC.  In four short years we've bent the adoption curve faster than any other first-world country.  And this is too slow?

And once again, we need "scientists" to tell us:
With respect to data formats, the current lack of interoperability among the data resources for EHRs is a major impediment to the effective exchange of health information. ... However, simply moving to a common mark-up language will not suffice. It is equally necessary that there be published application program interfaces (APIs) that allow third-party programmers (and hence, users) to bridge from existing systems to a future software ecosystem that will be built on top of the stored data.
But these are scientists who fail to recognize that Blue Button Plus is in fact a published API that does just that.  No, to them it's a business model that has yet to succeed (see page 16 of the report).

And what is Blue Button Plus based on?  A standard API (called FHIR) that allows third-party programmers to bridge from existing systems.  It was developed by HL7, the only one of the hundreds of standards organizations to be mentioned by name in the entire report, and even then only ONCE, and not in the context of organizations that develop standards.

And while the report remarkably makes mention of connect-a-thons (using the NFS spelling), the authors make no mention of any such events actually occurring, such as the one a couple of weeks ago in Vienna, or next week's in Phoenix (although some might argue that one isn't quite a connectathon yet).

Of course the report is full of many other opinions and absolutes, such as:
Current EHR systems do not interoperate at all, and in many cases are unable to even exchange data between hospitals running the same system from the same vendor.
I could spend the entire day ripping the rest of the report apart, but I won't, because there are two things it got right (perhaps for the wrong reasons).

The JASON report fails to make a distinction between the lifetime longitudinal Electronic Health Record and the electronic medical record systems, payer databases, care management and other Health IT systems of which the EHR will be an emergent property.  But it properly recognizes that:
EHRs should not be things that one buys, but rather things that evolve through cultural change aided by technology
But it fails to distinguish between the components of the EHR (the certified electronic medical record systems confusingly called EHRs in the Meaningful Use program) and the emergent property that will be the EHR supporting patient care, population health and clinical research.

And secondly:
The architecture must be based on open standards and published application program interfaces (APIs) and protocols. 
There is work led by HL7, and supported by IHE, DICOM and others, under development over the last three years, known as Fast Healthcare Interoperability Resources (FHIR), that supports the API that JASON is looking for.  If only someone on the advisory committee had mentioned it to them (or perhaps someone did, but it went remarkably unreferenced in the JASON report).
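For the curious, here is a minimal sketch of what that kind of published API looks like in practice, using FHIR's RESTful pattern.  The server base URL and resource ID are hypothetical, and FHIR is still a draft standard, so check resource and parameter names against the current specification:

```python
# Any third-party program that speaks HTTP and JSON can bridge to the
# data this way, which is exactly the published API JASON asks for.
import requests

BASE = "https://fhir.example.org/base"  # hypothetical FHIR server

# Fetch a patient resource by its logical ID.
patient = requests.get(
    f"{BASE}/Patient/123",
    headers={"Accept": "application/json"},
).json()

# Search for that patient's conditions.
conditions = requests.get(
    f"{BASE}/Condition",
    params={"subject": "Patient/123"},
    headers={"Accept": "application/json"},
).json()
```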

I beg ONC, when they next commission a report, be it from PCAST, JASON or whomever else, to include on the team of advisors someone who can point the authors to current standards work so that it can at least be evaluated, and to give them some up-to-date references, so that they don't embarrass themselves by relying on data that is woefully out of date.

And, yes, ONC please continue to support HL7 efforts on FHIR, because I truly think that's the open API that you are looking for.

Tuesday, April 22, 2014

Why don't we have interoperability?

I don't know how many times I've been asked that question, or attempted to answer it.  The short answer is fairly simple.  We have the technology to support interoperability in our hands today.  Not everyone agrees on what it is, but it does exist.  Even those of us who disagree on which technology to use can still use it, even if we don't like it.  The real problem is not the technology, but rather its deployment.  It takes a lot longer to get the technology into healthcare providers' offices than anyone really wants it to.

You can argue about whose fault that is, and no matter which argument you choose, there's probably some truth in it.  It takes time.  We will get there.  This is not a technology problem.

One thing that I've observed happen more than a dozen times since I got started in this field is that you can solve the technology problems in 1/5 the time it takes to develop and agree to the information sharing policies.  I've seen that happen over and over again.  It's not about bits and bytes.  That's the easy part.

The time is really about the words on paper that wind up having to be approved by the lawyers and signed by the big bosses.

Friday, April 18, 2014

HL7 eNews -- April 17, 2014 : CDA Style Sheet Security Update Complete

This showed up in my inbox late yesterday.

April 17, 2014

CDA Style Sheet Security Update Complete

A potential security vulnerability to the long-standing CDA® (Clinical Document Architecture) style sheet was recently raised and the community took quick action to update the style sheet and address each issue.

This update addresses potential vulnerabilities exposed by use of the style sheet in many current internet applications, by preventing malicious insertion of executable code into the display instructions for non-XML (Extensible Markup Language) clinical documents (allowed as the body in Consolidated CDA), illegal table attributes, and image URIs (uniform resource identifiers) that point to potentially hostile sites.
When the style sheet was developed and evolved through community efforts, browser support for XSLT (Extensible Stylesheet Language Transform) stylesheets was not commonly seen as a potential source of vulnerabilities, and JavaScript support was not as consistent or pervasive as it is today. These are no longer safe assumptions and we have responded to the potential threat by making the following security enhancements:

  • "Sanitizing" references in the nonXMLBody of a CDA document before passing it to an IFRAME.
  • Removing table attributes such as "onmouseover" that are legal in XHTML but not allowed in CDA.
  • Allowing only local relative image URIs by default, but providing a parameter to the XSLT stylesheet to re-enable remote image support for those who need it.

The style sheet updates are not intended as a replacement for other security measures. Recipients should load CDA documents from trusted sources, validate them against both the CDA.xsd schema and appropriate Schematron schemas, scan XML files for potential JavaScript insertion before accepting them from 3rd parties, and stay current with best security practices. The vulnerabilities in the XSLT style sheet are only possible when other security measures are lax.

The updated style sheet is available here: http://gforge.hl7.org/gf/project/strucdoc/frs/?action=index.
We appreciate the action of the community to raise this issue and encourage all to continue to work to improve this utility. Special thanks to Lantana Consulting Group for working tirelessly to address these concerns quickly and efficiently.

Sincerely,
Calvin Beebe, Diana Behling, Rick Geimer, Austin Kreisler, Patrick Loyd, and Brett Marquard
Co-Chairs, Structured Documents Work Group


It is this kind of teamwork that drives the best solutions for the HL7 community, and we greatly appreciate the work of this Work Group and others who participated in this effort. The Technical Steering Committee will develop an ongoing security policy for HL7.

/S/
John Quinn
HL7 Chief Technology Officer

/S/
Ken McCaslin
Technical Steering Committee Chair

Thursday, April 17, 2014

HL7 CDA Stylesheet Patches

A few weeks ago, Josh Mandel identified problems with the sample CDA Stylesheet that is released with C-CDA Release 1.1 and several other editions of the CDA Standard and various implementation guides. There are some 30 variants of the stylesheet in various HL7 standards and guides.  A patched version of the file has been created and is now available from HL7's GForge site here.

Both the TSC and the HL7 Board are still discussing how to address this from an organizational perspective, and the Structured Documents Workgroup is currently considering how to handle these sorts of tools when they are released in the future.  Thanks very much to Rick Geimer of Lantana Group for his extensive work on the patched stylesheet (and to his children for putting up with this over the weekend).

Monday, April 14, 2014

Failing Small

One of the things I've learned over the years of building standards is that the best way to implement something new is to have several good examples of the content you are working with.  The first one should be simple, and will probably get you two-thirds of the way there.   The next should be equally simple.  If you can handle it the same way, you've probably validated your initial approach.  The next should be a bit more complex, and expose some of the edge cases you ignored in the beginning.  If it still fits in without change, you must be a master at this already.  That has never happened with anything I've worked on.

After you get that far, you really need to try this out at several scales, first just by testing, next in a pilot, and finally in regional and national deployments.  Each time you try it out, you move up a level in scale. The advantage here is that each time you can bring lessons learned into the next stage.  Unfortunately, when programs have lofty goals, they often don't allow enough time for the pilot and subsequent stages.

That can result in abysmal failures.  We've seen examples of this in the implementation of Health Insurance Exchanges in the US, including in my home state, as well as in many national projects worldwide.

The point is that not only do you want to fail fast, but also to fail small, so that you can succeed big.

Friday, April 11, 2014

Simple Metaphors to Explain Semantic Interoperability

I'm giving a presentation tomorrow at a pre-conference session at HIMSS Jeddah.  My part of the presentation covers Interoperable eHealth Exchange, and basically defines the key terms that will be used by the rest of the presenters.  So of course I have to define Interoperability, and I've also been asked to discuss Semantic Interoperability.

For me, the difference between simple interoperability and semantic interoperability is the difference between using the information that has been exchanged, and actually understanding that information.

Consider the basic HTML page.  The browser doesn't really understand the semantics of the content; it only knows what to do to display the information.  When you get into HTML 5, though, there is some markup that actually conveys semantics (see new tags added).  E-mail clients are getting better at this too.  Apple's e-mail client in iOS identifies package tracking numbers, phone numbers, and dates and times, and can do interesting things with them, such as take you to the tracking page, add the number to a contact list, or put something on your calendar.  This really highlights the difference between simple interoperability and semantic interoperability.

Using CDA, simple interoperability is what we get when we simply exchange and display CDA documents. Semantic interoperability is what happens when we are able to import the contents of a CDA document into a system, instead of just view it and let the reader understand what is inside.
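As a rough illustration of the difference, here is a minimal sketch of importing rather than merely displaying: it pulls coded observation values out of a CDA document with Python and lxml.  A real importer would locate sections by templateId and handle far more structure; this is just the shape of the idea:

```python
from lxml import etree

NS = {"cda": "urn:hl7-org:v3"}

def coded_observations(cda_path):
    """Return (code, codeSystem, displayName) for coded observation values."""
    tree = etree.parse(cda_path)
    results = []
    # Walk every observation entry in the structured body and read
    # the codes its value carries.
    for obs in tree.iter("{urn:hl7-org:v3}observation"):
        value = obs.find("cda:value", NS)
        if value is not None and value.get("code"):
            results.append((value.get("code"),
                            value.get("codeSystem"),
                            value.get("displayName")))
    return results
```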

Wednesday, April 9, 2014

XDS Query Use Cases

XDS is ten years old this year.  It's sometimes hard to believe that something I worked on that long ago is still around (even though there are other things that have been around longer).  Today I needed to recall some of the use cases for XDS Query that we considered way back then.  It's good stuff because it explains what all that metadata is useful for.

Think about a physician who is speaking with a patient about a past event that may have some bearing on their current treatment.  The patient knows when and where, or possibly who, or what was done, and might say something like:

P: Last year I had surgery for something.
Query: Time of Service AND ((Class = Operative Note) OR (Author Specialty = Surgery) OR (Facility Type = Hospital OR Practice Setting = Surgery))

P: I had something like this two years ago, and I saw Dr. Smith.
Query: Time of Service AND Author = Dr. Smith

P: The cardiologist at St. Mary's told me I had ...
Query: Author Institution  = St. Mary's AND Author Specialty = Cardiology

P: I did this test a year ago at ____, and when my doc looked at the result, he told me not to worry about it.
Query: Type Code ~ Test AND Time of Service AND Author Institution

D: We need to look at the original x-rays showing the placement of your pacemaker.
P: Oh, Dr. Smith did that, but I cannot remember when.  It might have been in ... or was it...
Query: Author = Dr. Smith AND Author Specialty = Cardiology

P: Oh, I had that done at ___ hospital.
Query: Organization AND Author Specialty = Cardiology

P: That was in ____, but I cannot remember who did it.
Query: Time of Service AND Author Specialty = Cardiology

D: We would like to talk to his cardiologist ...
Son: I don't know who that is.
Query: Author Specialty = Cardiology

D: What else happened during that stay?
P: I don't really remember.  I know I had a bunch of tests to rule out something or other, but I don't remember what it was or what they were.
Query: FindSubmissionSets with Submission Time

P: They drew blood if that helps.
Query: Practice Setting = Laboratory

And me, with my Claims Attachments hat on at the time (remember, back then the NPRM was only a couple of years late):
Insurer: Send me all documents for the current stay.
Query: Author Institution AND Time of Service

Insurer: Send me all labs for the last 30 days
Query: Practice Setting = Laboratory AND Time of Service

And so on.  These were the kinds of questions that led to the creation of the XDS Metadata way back when.  Fortunately, we also had several good lists of metadata to rely upon, including CDA Release 1, CDA Release 2, an early edition of CCR, DICOM, and CEN 13606.  What was chosen was a superset of metadata from those specifications that would allow answers to the questions above.  There is other metadata in there for other purposes, but these were the kinds of queries we were thinking about.
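To make the first example concrete, here is a rough sketch of how it might map onto the parameters of the XDS FindDocuments stored query (ITI-18).  The patient identifier and the class and practice setting codes are placeholders; your affinity domain defines the actual vocabularies:

```python
# FindDocuments (ITI-18) parameters for "Last year I had surgery":
# bound the service time to last year and filter on surgical metadata.
find_documents = {
    "queryId": "urn:uuid:14d4debf-8f97-4251-9a74-a90016b0af0d",  # FindDocuments
    "$XDSDocumentEntryPatientId": "12345^^^&1.2.3.4&ISO",
    "$XDSDocumentEntryStatus": [
        "urn:oasis:names:tc:ebxml-regrep:StatusType:Approved"],
    "$XDSDocumentEntryServiceStartTimeFrom": "20130101",
    "$XDSDocumentEntryServiceStopTimeTo": "20131231",
    "$XDSDocumentEntryClassCode": ["OP-NOTE^^1.2.3.4.5"],         # placeholder
    "$XDSDocumentEntryPracticeSettingCode": ["SURG^^1.2.3.4.6"],  # placeholder
}
```

Note that stored query parameters combine with AND (values within a single parameter combine with OR), so the ORs across different metadata attributes in that first example would actually take several queries, or one broader query filtered by the consumer.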



Third Term

I'm starting my third term of Informatics classes at OHSU.  Last term I took two classes: Medical Decision Making and Project Management.    My expectations about the classes were flip-flopped: I thought the first one would be more difficult for me than it was, and I thought the second would be less difficult because I've been managing projects for 30 years.  I was wrong.

The first one was aided by the fact that I do math for fun, and logic is the mainstay of my work, so learning formulas, ways to evaluate them, and how to assign probabilistic values to them was much like things I do all the time.  I have also played around with Markov Chains, so none of the material was new, just the ways to apply it (which were extremely valuable).

On the Project Management side, there's a ton of stuff that I have already "made lean" in the way I manage projects, but recalling the original process before the extra stuff was cut away was extremely useful.  I may have cut something away because I didn't need it in one environment (e.g., the org chart), but in other projects it's extremely useful, because the material isn't the same each time.

This term I'll be taking Quality Management and Consumer Health Informatics.  I won't make any predictions about which will be easy or hard but I suspect both will be engaging and challenging.  I hope to use some of my FHIR / HQMF work as part of my class project in Quality Management.  On the Consumer Health Informatics side, I got a bit of a jump in my first term because I did a term paper on the topic, but that doesn't mean I'm at all an expert, maybe just a bit ahead of the class on SOME of the material.

I'm hoping to avoid a repeat of the last two terms, where I spent finals week on the road.  I already have a good start.  Instead, I spent opening week between Bangalore, India and Riyadh, Saudi Arabia.

   Keith

Sunday, April 6, 2014

The Dictionary Test

One of the challenges in software engineering, healthcare standards, or any other similarly complex activity is that all of the good words are already used.  So we have to come up with names that clearly identify to readers what a thing is.  As an engineer, I observe that it is not the names that are important, but rather the concepts they encode; as long as I understand what the thing is, I don't really care what you call it.  Names are something for the marketing department to decide, not me.

However, with my consumer hat on, it is very important to me that what you call the thing really describes it, and so I use what I call the dictionary test to evaluate names.  That test is simply this: if you understand the dictionary terms used in the name of a thing, do you obtain a good understanding of what the thing is?  If not, you've failed the dictionary test.

Standards organizations have a difficult problem, because they often think that they can define what a thing is, give it a name, and make it stick.  Guess what, unless somebody really cares about following the standard, this doesn't work.  If you have to train somebody about the meaning because the name doesn't give it right away, you've put a roadblock in the way of the user of that thing.

Health Information Exchange is a perfect example of both passing and failing the dictionary test.  It passes the dictionary test because, as a noun phrase or a verb phrase, it perfectly explains a thing or a process being executed.  It fails because you cannot tell whether the thing being described is a noun or a verb.  It sticks because unlike CHIN, NHIN, HIO or RHIO (or the various decodes of those acronyms), it really does describe the thing in a very non-technical way.

So if you find yourself explaining why it is called what it is called, and it isn't too late, pick another name. And if it is, (like UML's Actor), be sure to understand the story behind the name so that people will understand it simply.

   Keith

P.S. As I understand it, Actor was a (mis-)translation from the French term used in some of the original modeling design work, that meant Role.  And Role truly is a better name for actor, because the names of actors in most UML diagrams are in fact, roles played by people or systems.  And once I explain that to a group of non-engineers, the term Actor becomes slightly less scary because I've given them the cognitive link to help them make sense of the term.


Friday, April 4, 2014

Security vulnerabilities in C-CDA Display using CDA.xsl

Today, Josh Mandel posted a security report about CDA.xsl, an example stylesheet published by HL7 to illustrate how to generate HTML from a CDA document.  The example XSLT has several security holes that an attacker can exploit when a system blithely uses it to display a CDA document without first checking that the document is valid according to the CDA standard.

I won't spend too much time on the details, since Josh covers them rather well and also did so quite responsibly.  There are quick fixes against the attacks, and one additional exposure he didn't mention: the referrer attack (his third attack listed) may also be used against the <IFRAME src=''/> content produced by the sample stylesheet.

So let me briefly outline three mitigations.  Note that these are not the only ones you might use, and that more analysis is probably needed (the danger of providing a partial solution is that people who have been burned once by using someone else's work can fall into that trap again; learn your lesson and investigate further):

  1. Prevent any output of a src or href attribute in the HTML that doesn't resolve to an http: or https: path (a minimal sketch of this kind of filtering appears after this list). Better yet, don't use IFRAMEs.
  2. Validate any CDA document against the CDA schema supplied by HL7 before you display it, and refuse to display an invalid document without first confirming with the end user that this document may result in a system compromise (in case it is really an essential document for patient care).  This has the effect of preventing documents containing the invalid content from being able to generate unexpected attributes.
  3. Don't use a browser control you don't have full security control over to display your documents.  Be sure to configure the controls that you do use to NOT generate Referrer headers.  And don't include private state information in your URLs that can be used to attack your system.
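Here is a minimal sketch of that first mitigation, assuming you post-process the HTML produced by the stylesheet using Python and lxml.  The function name and the choice of allowed schemes are mine, and this is illustrative only, not a complete defense:

```python
# Drop any src/href attribute whose scheme is not http or https.
# This blocks javascript:, data:, and similar URIs while leaving
# ordinary links and relative references alone.
from lxml import html
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def sanitize_links(rendered_html: str) -> str:
    doc = html.fromstring(rendered_html)
    for el in doc.iter():
        for attr in ("src", "href"):
            value = el.get(attr)
            if value is None:
                continue
            scheme = urlparse(value.strip()).scheme.lower()
            # A relative URI has no scheme and resolves against the
            # page's own http(s) base, so it is allowed through.
            if scheme and scheme not in ALLOWED_SCHEMES:
                del el.attrib[attr]
    return html.tostring(doc, encoding="unicode")
```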
And more importantly, the lessons learned from this experience, which HL7 or any other SDO should consider:
  1. HL7 published code samples should go through the same kind of analysis and testing that real-world code goes through, because they are likely to be seen as authoritative.
  2. Any HL7 published code should clearly note that it is the recipient's responsibility to ensure that appropriate security precautions are taken when implementing the code.
  3. Someone should be identified as being responsible for identifying and addressing security issues related to the standard, and these should be published in the standard.  IETF and IHE already have procedures that ensure this occurs.  
  4. The HL7 TSC should approve some policies in this area and specific guidance on security issues related to these sorts of examples, and the HL7 Security Workgroup should make some recommendations to the TSC for their approval.
And one additional lesson for anyone (Vendor or Provider) using SOUP (Software of Unknown Pedigree): you have a responsibility to your end users to ensure your product is secure.  That means that if you don't understand the security risks associated with SOUP, you need to do the analysis yourself and be sure that appropriate security precautions are taken.

Thanks Josh for finding this, and for taking a very responsible and difficult route to ensure that everyone was notified.

For those of you readers who need a fix: the analysis above is incomplete.  At best it serves as a quick patch you can use to mitigate the issues until you investigate further.

Keep up with Josh's additional posts, because his work will be more complete, and I hesitate to duplicate it.  I can hardly do better than someone who has been so well recognized for his contributions to standards, twice now.

   Keith

P.S.  Congratulations Josh, you are the first and probably last to receive those two awards (unless John decides to have a different policy).

Thursday, April 3, 2014

Are you providing a service? Or executing a process?

I'm in India this week.  Before I came I needed to get a couple of immunizations, so I called my Doctor's office and spoke with the receptionist, who told me they'd have to get with someone and have them get back to me.  Given that I had to make these travel plans pretty quickly, I was already quite late in the process, and I let her know I was leaving in 5 days.

The next day, after I didn't hear back and was getting concerned, I called back.  Someone in nursing should call you today, I was told.  Later (after 5pm), when I still didn't hear back, I got frustrated and called one of the Urgent Care centers associated with my physician's office.  I got no help there; they couldn't even tell me if my immunizations were up to date, even though they had access to my records.  HIPAA was the argument.  Having already had a conversation up the Privacy Officer chain with this organization, I knew that I was pretty stuck.  So when I went to my pharmacy later to pick up my medications, I spoke with the pharmacist.  Unfortunately, in Massachusetts the immunizations I needed required a prescription, and she couldn't help me, but, she said, try calling My Local Hospital's Travel Clinic.  I did later (note that this is now after 5:30pm), spoke with scheduling, and while they couldn't schedule me yet, they transferred me to the voice mail of someone who could help.  I got a call the next morning, and while I wasn't following the usual process, the NP assured me she would fit me in, told me to speak with scheduling and say I had talked to her, and told me what to tell them so they would be able to fit me in.  That call took less than 3 minutes.

Later that morning, after I had spoken to her and scheduling, the travel nurse associated with my primary called me back.  She apparently hadn't been able to get to the CDC site she usually used the day after I had called, but now that she could ... and I interrupted her and explained that I had already scheduled an appointment elsewhere, and we pretty much finished the conversation.

Needless to say I got what I needed done, but not because she provided the service I had needed.

Instead, two other people retained my business because they provided the service I needed, rather than following the process established to provide it.

You see, the first nurse couldn't do her job because she was limited to a defined process and couldn't exceed it, either by policy or simply by not understanding what else she could do.  I had already googled CDC India Immunization and found the information from the CDC that I needed, but she was apparently using a different site that was down for a day.  The CDC has multiple sites sourcing information on travel and immunizations, and I could readily find what I needed there.  She couldn't even be bothered to tell me why there was a delay.

My pharmacist couldn't do what I asked her to, but she provided a needed service by telling me where I could go.  The NP couldn't follow the usual process for me, but she too provided a needed service by figuring out how she could work me through existing processes she needed to follow.

This sort of thing happens all the time in any business.  I go to one hardware store, and they report to me that they don't have the part I'm looking for.  At another, the clerk tells me that while they don't have that part, I could use this other one, and it would work even better.  And so my plumbing problem was solved, and now when I need hardware, I go to that other store first.

It should be obvious that what you do in healthcare, and what I do in healthcare IT is provide a service, and that we shouldn't let our processes get in the way of doing that.  Processes are supposed to make it easy to provide service.  When they start getting in the way, you need to revise, adapt or change them, or realize that another process might be more appropriate to apply.

Tuesday, April 1, 2014

An HL7 Specification for Addressing Meaningful Use Driven Ballots

I'm proposing today that HL7 take a new approach to addressing the volume of ballots that have been generated recently in response to Meaningful Use initiatives.  In true HL7 expert form, I've put this forth as an HL7 Model, which you can see below.  This material has been reviewed in the past with the Pharmacy and Claims Attachment workgroups who approved it whole-heartedly in prior meetings.  This material takes an already existing and well understood process for getting through multiple ballots and translates it into standardized form.

Click on the image below to see full size.



    -- Keith

P.S.  In case you didn't get it, Happy April Fools Day.