Pages

Tuesday, December 19, 2017

Burning up IHE Profiles with FHIR

QEDm, PIXm and PDQm finished in one day!  I'm ready for the IHE North America Connectathon in record time.  It's a shame I haven't signed up for some of them.

In part, I already had most of this coded, since QEDm and PDQm are really just simple IHE wrappers around existing FHIR resources.  But what really simplified this for me was the HAPI support for translating STU2 FHIR resources into STU3 FHIR resources.  I could rather simply call on my existing code to do the work in STU2, and then translate the results to STU3 using the HAPI FHIR Converter.  Kudos to the HAPI team for making that transition so easy.
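For the curious, the conversion step boils down to something like the sketch below.  This is a minimal sketch assuming the hapi-fhir-converter module and its VersionConvertor_10_30 entry point as it existed around this era (later HAPI releases renamed it), with resources in the org.hl7.fhir "hl7org" structures, so check the release you're actually on:

import org.hl7.fhir.convertors.VersionConvertor_10_30;
import org.hl7.fhir.exceptions.FHIRException;

public class Dstu2ToStu3 {
    // Translate a DSTU2 (org.hl7.fhir.instance.model) resource to its STU3
    // equivalent.  The entry point name reflects the 2017-era converter API
    // and may differ in the HAPI release you are using.
    public static org.hl7.fhir.dstu3.model.Resource toStu3(
            org.hl7.fhir.instance.model.Resource dstu2Resource) throws FHIRException {
        return VersionConvertor_10_30.convertResource(dstu2Resource);
    }
}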

The other piece that simplified a great deal of this is that I only needed to code up one resource for QEDm, and then write the code generator.  After that, the remaining QEDm resources were quite simple.  And then I realized I could use the same code for PDQm, which grabbed me another profile quite unexpectedly.

Not to be outdone, I decided I'd add in a little extra work and do PIXm as well.

I might have to take the extra time I'd allocated myself for this task and toss off MHD as well, but I think I'll focus some attention on integrating CQL first.  That could take a solid week, but I promised myself a good scotch if I could get it done in two days.

   Keith

Thursday, December 14, 2017

The Hero's Journey as the Patient's Longitudinal Record in HealthIT

Clinical documents tell stories, but the clinical document is not a book by itself describing the full story of a patient's care.  It is merely a single chapter, or perhaps just one scene in a chapter related to the life of a patient.  Over the course of a single illness, several chapters will be needed to make up the book describing its course, or even several sequels covering significant events in a series.  Over the course of a patient's life, many such stories will be told in that milieu, making up the patient's biographical health library.

As an informaticist and informatician I'm interested in how one would classify these relationships and how that would be implemented in practice.

The metaphors that I've already started with provide an operational framework in which we can place a clinical document, and so, organize our patient libraries.  While metaphors are great teaching tools, we must also relate these terms to our own terms of art in Health IT, lest they be misunderstood.  The table below is one such mapping.

Metaphor   Term
scene      document
chapter    encounter
book       episode of care
series     health concern
milieu     disease process
library    longitudinal record

* sequel is intentionally left out, but will be addressed in Series below.

Scene

A scene occurs within a specific time period, has a setting, involves different people (the patient, healthcare providers, family members, et cetera), and tells the story of an event (service event) occurring in the life of our hero (the patient).  The hero may not even be present in the scene (e.g., as in a lab test) but the scene is still relevant to them.  The scene is represented in a clinical document which describes the setting, the time frame, and the cast of characters involved, including their roles, in this service event.  This is a document, and has all the defining characteristics thereof.

Chapter

I represent a chapter as an encounter.  A chapter ties together several scenes, in closely related time periods, and these are closely associated with some sort of common progression of the story in time.  Clinically, the encounter has many of these same characteristics.  An encounter can result in several clinical documents (scenes), in which the patient receives multiple services (e.g., a consult, a lab result, a diagnostic test and treatment) described in different documents.

Book

I use book for episode of care because the defining characteristic of a book is that it has a resolution that indicates a significant change in the story being told.  The resolution may in fact be the conclusion of the overall story, but often simply represents a change in focus, for example, from onset to diagnosis, diagnosis to treatment, or from treatment to recovery.

Series

The series represents the overarching story in which our hero is engaged.  It is the hero's journey, through various acts (books).  This journey addresses the crisis introduced in the first book of the series, and brings it to a satisfactory (though not necessarily final) conclusion.  Unlike a book, which may leave you hanging, a series brings you to a place where you can stop for a bit, and the tension has been resolved back to an acceptable (though not necessarily starting) state.  In the patient's life, this seems to be best represented as a health concern ... the crisis around which the series resolves.  Series are organized in time, where each book in the series is a sequel to the prior one.  Essentially, each new book is the story of the sequelae related to the prior one.

Milieu

Our hero's story may not be over.  Odysseus's concern, the Trojan war as told in the Iliad, has a followup ... getting home afterwards in the Odyssey.  The patient's story doesn't end after treatment ... there's also recovery.  The health concern changes over time, but the milieu remains largely the same, and is related to the disease process over time.  I had originally considered specialty as the mapping for this, but I'm finding that specialty changes as one traverses from diagnosis to treatment and recovery.  In the case of my mother's heart attack, the specialty changed from ED, to cardiology, to rehabilitation over the course of her disease.  The health concern also changed.  But what remained common was the disease and recovery process.

Library

All of these are components of the library that tells us the life story of our hero.  This is the one thing in common throughout this lifelong journey.  This is the patient's "longitudinal record."

Having provided the metaphor, I will shortly show how we can put this into practice in an XDS Registry or FHIR DocumentReference metadata describing our hero's journey.

   Keith

Wednesday, December 6, 2017

RTF to PDF Conversion

A lot of medical documents are still created in RTF format (or can be readily accessed as RTF).  This is due to the use of Word as a tool in some transcription environments.  Converting these documents to a "standard" format is a bit challenging.

There are some tools that will convert RTF to HTML (or XHTML), but my goal was to be able to convert them to PDF so that I could incorporate the content into the IHE Scanned Document format (and to the HL7 CCDA Unstructured Document format).

I needed this quickly, so I started looking for some open source libraries.  One of the ones I found was LibrePDF.  I had (from a former life) been familiar with the open source iText product, which would have been my first go-to, but unfortunately, its licensing model isn't conducive for many to use (it's AGPL, essentially copyleft), even though prior versions had been LGPL or MPL.  It dropped RTF support in later versions as well.  LibrePDF is a branch from the last MPL version of iText, and still includes the RTF parsing tools.

Unfortunately, LibrePDF doesn't really provide a great deal of information on how to use the components, so here's a quick summary:

To get what you need, include the following two dependencies in your pom.xml:

    <dependency>
        <groupId>com.github.librepdf</groupId>
        <artifactId>openpdf</artifactId>
        <version>1.0.5</version>
    </dependency>
    <dependency>
        <groupId>com.github.librepdf</groupId>
        <artifactId>pdf-rtf</artifactId>
        <version>1.0.5</version>
    </dependency>

The first one grabs the LibrePDF core components.  The second grabs the PDF-RTF tools.  When you grab the libraries you will also get bouncy-castle for decryption, encryption and signing.  You can ignore those unless you are going to be creating PDF files that require those capabilities.  For XDS-SD format PDF files, these features are not essential.

Having done that, you can now use this gist from Ajay Ramesh on GitHub to understand how RTF to PDF conversion is done.  You can comment out the line that reads:

   System.setProperty("os.name", "Windows 7");

This is no longer necessary, because the LibrePDF code doesn't have the same problem that iText 4.2.1 had reported on StackOverflow.
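For reference, the conversion itself comes down to something like the following.  This is a minimal sketch along the lines of the gist, assuming the old iText 2.x package names (com.lowagie.*) that LibrePDF retains; double-check the class names against the OpenPDF release you pull in:

import java.io.FileInputStream;
import java.io.FileOutputStream;

import com.lowagie.text.Document;
import com.lowagie.text.pdf.PdfWriter;
import com.lowagie.text.rtf.parser.RtfParser;

public class RtfToPdf {
    // Read an RTF file and write the converted content out as PDF,
    // using the OpenPDF core module plus the pdf-rtf parsing tools.
    public static void convert(String rtfIn, String pdfOut) throws Exception {
        Document document = new Document();
        PdfWriter.getInstance(document, new FileOutputStream(pdfOut));
        document.open();
        RtfParser parser = new RtfParser(null);
        parser.convertRtfDocument(new FileInputStream(rtfIn), document);
        document.close();
    }
}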

Having generated your PDF, you can now wrap it inside a CDA document, or perhaps use a FHIR DocumentReference resource.
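If you go the FHIR route, the attachment part is about this simple.  A minimal sketch using the HAPI STU3 structures, with the rest of the XDS-SD / Unstructured Document metadata (type, class, subject, author) left out:

import org.hl7.fhir.dstu3.model.Attachment;
import org.hl7.fhir.dstu3.model.DocumentReference;
import org.hl7.fhir.dstu3.model.Enumerations.DocumentReferenceStatus;

public class PdfWrapper {
    // Wrap the generated PDF bytes as the content of a DocumentReference.
    // A real wrapper needs far more metadata; this shows only the attachment.
    public static DocumentReference wrapPdf(byte[] pdfBytes) {
        DocumentReference docRef = new DocumentReference();
        docRef.setStatus(DocumentReferenceStatus.CURRENT);
        docRef.addContent().setAttachment(
            new Attachment()
                .setContentType("application/pdf")
                .setData(pdfBytes));
        return docRef;
    }
}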

     Keith

Tuesday, December 5, 2017

Cards against Humility


A lot of my friends, including those in the Healthcare Standards space play Cards against Humanity.
If you have a wicked mind, but are a fun-loving person, you've probably played the game.  If you haven't, please note, some of the text on the above link is in the NQSFWD (not quite safe for work depending) range.

This year for Christmas, I'm building some of my friends their very own personal card, in a deck design I call "Cards against Humility".  It allows their name to be played in some pretty rude scenarios in the game.  To print the deck, I used moo.com, printed rounded cards in the MOO Size using super paper in the high gloss format.

To create the cards I set up a Word template with a paper size of 2.32" by 3.46", with margins of .32" all around.  The "front" of the card (really the back in this case) is Arial Bold 30pt with the words "Cards Against Humility".  Since the back of each card can be different (really the front), I then simply listed the names of my 50 best friends and family that I wanted to be victims.  I wrote their names in Arial 18pt Bold.

You have to upload the front and each individual back as a separate PDF file to moo.com.  I did a bit of digging and found a little Word macro to save each page as a separate PDF.  It's listed below.

Sub Word_ExportPDF()
'PURPOSE: Save each page of the current Word document as its own PDF
'NOTES: PDFs will be saved to the same folder as the Word document file
'SOURCE: www.TheSpreadsheetGuru.com/the-code-vault

Dim CurrentFolder As String
Dim FileName As String
Dim myPath As String
Dim pageNo As Integer

'Store information about the Word file
  myPath = ActiveDocument.FullName
  CurrentFolder = ActiveDocument.Path & "\"
  FileName = Mid(myPath, InStrRev(myPath, "\") + 1, _
   InStrRev(myPath, ".") - InStrRev(myPath, "\") - 1)

  'One pass per page: the card front plus one back per name
  For pageNo = 1 To 51 Step 1

    'Save this page as its own PDF document
    On Error GoTo ProblemSaving
        ActiveDocument.ExportAsFixedFormat _
        OutputFileName:=CurrentFolder & FileName & Str$(pageNo) & ".pdf", _
        ExportFormat:=wdExportFormatPDF, _
        Range:=wdExportFromTo, From:=pageNo, To:=pageNo, UseISO19005_1:=True

    On Error GoTo 0

  Next pageNo

  Exit Sub

'Error Handlers
ProblemSaving:
  MsgBox "There was a problem saving your PDF. This is most commonly caused" & _
   " by the PDF file already being open."
  Exit Sub

End Sub

It took me about 10 minutes to plan this out, 10 minutes to make the list, 10 to find the macro and edit it, another 3 to fine tune, and about 10 minutes and about $45 to place the order.  For that I get customized, unique, cheap gifts for some of my best friends, and about 1/3 of my remaining shopping done.

   Keith

Wednesday, November 29, 2017

How to say no ... 2017 Edition

[Image: Rochester Bestiary, detail of a griffin]

Thanks goes out to Brett Marquard for suggesting this update!

Negation has always been a beast of a problem.  In 2011 I described a bestiary of negated concepts used in CDA, and how to relate them using the standard.  As time goes by and the standard becomes more widely used, our understanding of the best practices also changes.

So here is the original bestiary, with links to the CURRENTLY ACCEPTED best weapons for slaying these beasts, where known and provided by the CDA Examples task force.  And I'll be posting a link to this post in the original.

  1. Patient is not on THIS drug.
    Example pending, check back in a bit
  2. Patient is not on ANY drugs.
    No Medications.
  3. Patient does not have THIS ailment.
    No Known Problems
  4. Patient does not have ANY ailment.
    No Known Problems
  5. Patient does not have THIS allergy.
    Example pending, check back in a bit
  6. Patient does not have ANY allergy.
    No Known Allergies
  7. Patient does not have ANY MEDICATION allergy.*
    No Known Medication Allergies
* New in this edition

Examples for all of the above and more can be found at http://cdasearch.hl7.org/

   Keith


The subtle differences between data needed for different use cases

As a patient, I want to be able to look back at my medical history and understand what the situation was as I saw it at the time.  When I'm referred to another provider, I really only want to talk to them about the situation that I'm experiencing in the now, and for the most part, they only want to see the relevant and pertinent parts of my historical situation as it relates to now.

This creates subtle differences in the views needed to create clinical documents for these two use cases.  The HL7 Continuity of Care Document using CCDA 2.1 can work for both use cases, but the content to include varies.  Preset document formulations will almost always break for some use case.  Customization is essential.

Resolved issues and completed medications are LESS likely to be relevant in the referral case than in the historical view. In the historical view, I want to see the medications that were being used at that time (e.g., metoprolol).  In the referral case, we are likely to only be talking about my current medications (amlodipine/benazepril).  In the historical view, I'll be looking at resolved problems (cervical radiculopathy) as well as those that are still currently active (e.g., my hypertension), but for the referral, we'll probably only be discussing what is current.

What is relevant and pertinent? It depends on your use case, and it means that you have to pay attention to it closely.  It even varies according to the reason for referral.

It gets even more challenging once you start to bring payers into the picture.  What can you share with a payer?  While minimum necessary is the guard rail, there's NO crisp white line.  Can historical data be shared that preceded the payer's relationship with the patient?  I've heard arguments for both sides of that case.

Dynamic query through FHIR based interfaces can help a great deal in these scenarios.  The requester can simply ask the questions (remember Query Health?) they need answered.
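To make that concrete, here's a minimal sketch of the referral-view question asked through the HAPI FHIR client (STU3 structures; the patient id, the status filter, and the package layout of your particular HAPI release are all assumptions):

import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.dstu3.model.Bundle;
import org.hl7.fhir.dstu3.model.MedicationStatement;

public class CurrentMedsQuery {
    // Ask only the question the referral needs: what is this patient on NOW?
    // The historical view would drop the status filter and add a date range.
    public static Bundle fetchCurrentMeds(IGenericClient client, String patientId) {
        return client.search()
            .forResource(MedicationStatement.class)
            .where(MedicationStatement.PATIENT.hasId(patientId))
            .and(MedicationStatement.STATUS.exactly().code("active"))
            .returnBundle(Bundle.class)
            .execute();
    }
}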

   Keith

Monday, November 27, 2017

It's not real

I took a doctor to task over the weekend after he admitted that he didn't believe in a diagnosis of Fibromyalgia.  I told him it didn't matter to the patient what the problem was called, it was real and of concern to them, and that he needed to read up on it and learn more.  That, as an ED doc, he wasn't qualified in the appropriate specialty to make the determination about the reality of the disease, and that he needed to focus on what was happening to his patient, not what someone calls their disease.

He actually thanked me.  I hope he actually does something about it.

--   Keith

Thursday, November 16, 2017

Data Philosophy and Game Theory

It's Dirk Stanley's fault.  He asked on FB:

#CMIO #CNIO #Informatics #HealthIT and other #ClinicalJedi, need your help : Philosophically, which is more important? 
1. Data in
2. Data out 
3. Both are equally important 
All opinions welcome.

My response was pretty straightforward:
Data in, else we would not be where we are today. We need data to act, and can always get it ourselves, and trust it better when we do. But, if all are playing for optimal payoff, it could be viewed as a zero sum game, and so best view is both.

But at the same time it needs a heck of a lot more explanation.

Data sharing as presently practiced by healthcare organizations is in some ways a zero sum game, and in some ways not.  

It's not a zero-sum game for healthcare organizations. The lack of accessible data used in healthcare decision making results in efforts to obtain data. With current practice (fee-for-service), the healthcare organization will do what is necessary to obtain that data (order the test, do the assessment, et cetera).  And it will get paid to do it, so there is little to no additional cost.  For some of the testing there is little gain (many lab tests are low-margin, commodity items), and for others substantial gain.  For example, an MRI can cost $1-3K or even more, and when all is said and done there is definite value going to the creators of the data.

The data, thus gathered, also becomes an asset of the healthcare organization which gathered it, which has value to the organization when they use it, but could help a competitor in some way if they share it.  Sharing that data with a competitor has a cost. The organization has little to gain by the sharing of that data, and something to lose (potentially) if they access data shared by another party.

[Image: Victor Dubreuil, 'Money to Burn', oil on canvas, 1893]

There's money to burn in this system. Between healthcare organizations, it isn't a zero sum game. The patient (or their payer) is continuously pumping cash into the system to fill the data needs of the healthcare organization, which supports the care of that patient when they stay within the same organization.  As soon as the patient moves to a different healthcare organization, they lose the value of that data that, ultimately, they (or their employer) paid for.

Where the zero-sum part comes from is that what the provider gains with regard to acquiring data, the patient (or their payer) loses. As long as that remains the case, there's little incentive to reduce duplicate testing.  I watched this recently as a young person I know went through a month repeating almost the same tests they had gone through previously to diagnose a chronic illness because more than 3/4 of the way through that process she was forced to change healthcare providers (due to a forced change in health insurance).  There was NO reason for them to refuse the additional testing (getting the diagnosis is critical to their health), and no value for their new healthcare provider to use the previously performed tests. Their payer was out the money for the repeated testing, even though those tests had been done previously.

In the rest of the world of business, when I pay for something to be done, I own that thing. If it is a work of intellectual property, I paid for it, I own it, and I have easy access to use it. In healthcare, when a patient pays for a test (or a payer pays on their behalf), the data is treated as if it is owned by the organization that ordered the test (gathered the data), and as the patient I don't always have easy access to it.  I can only benefit from it as long as I maintain a relationship with that healthcare organization.

I can see how game theory could be applied to this situation, such that a system of value-based care could be designed where the greatest value is when there are incentives for data sharing.

Wednesday, November 15, 2017

Understanding and addressing technical debt

Architects and accountants have something in common, which is that they need to understand their organization's assets and liabilities.  For an accountant, these are fairly well understood.  For an architect, one might think that they are as well.  Your assets are your IP and processes that add value, that enable your organization to out-pace its competition.  And your liabilities are those that don't.  We have a special word in architecture for IP liabilities: it's called technical debt.

Technical debt is a great opportunity for architects to benefit their organization, and here's why: it's something that is already costing your organization in terms of resources and credibility.  You can probably count the defects in the package, the tech support calls raised, the number of open customer issues that  are due to technical debt. You can put a very clear value on it, which makes it a great candidate for reducing cost.  It isn't free, but it is often quite worthwhile.

How do you do it? It's pretty simple -- pick a mess and clean it up.  I don't just mean pick the stuff up off the floor either, like your teen would clean their room.  At the very least, polish it like a fourth year recruit at West Point.  At best, remodel, and I mean completely remodel or rebuild -- like Grahame Grieve did for HL7 Version 3 creating FHIR.  The corollary to "if it ain't broke don't fix it" should be "if it keeps breaking, stop fixing it and replace it."  When car repairs exceed the cost of payments, it makes sense to get a new car (unless you are talking about something like a 69 Pontiac LeMans*).

It's painstaking work.  Usually messes like this accrue because code becomes fragile, knowledge gets lost, and nobody knows quite how that works (or doesn't).  And yet there is still some underlying value to the code because it does something important and cannot be otherwise expunged, so some extra effort is needed.  It's like the antique in the attic that just needs the right refinishing to become an awesome heirloom.  This is frustrating work, often risky, and sometimes it's downright boring and tedious (ever read through nearly a thousand different logging messages?).  On the other hand, the value of the work can be made very clear and well defined.

The biggest challenge you will run into in trying to take on work like this is people who are concerned about the risks you are taking on. The biggest tool you have to combat risk is knowledge, and sometimes that means making the time to obtain more. The most fragile software components are usually the ones where the least is known about them. Go learn it. In the end, you'll be glad you did, even though getting it finished wasn't the most glorious thing you've ever done.

  -- Keith

P.S. As a teen, I spent the better part of a winter replacing an engine in a 69 Pontiac. It was cold, it was hard, it sucked.  It was my ride to school, and I learned a ton.  It looked something like the picture below, but was black.





Tuesday, November 14, 2017

HL7 FHIR Proficiency Exam

Take the HL7 FHIR Proficiency Exam and get acknowledged by HL7 as FHIR proficient.


Prove your proficiency with the HL7 STU3 FHIR Specification.  Become identified as an individual with FHIR proficiency.  Employers, vendors and providers, help HL7 to influence the quality of the FHIR workforce.

Note: This is a proficiency exam rather than a professional implementation credential.  HL7 is in the planning stages of a full professional certification.

Competencies Tested

It's about breadth rather than depth:

  • FHIR fundamentals
  • Resource Concepts                                                        
  • Exchange Mechanisms (includes RESTful API)     
  • Conformance and Implementation Guidance     
  • Terminology
  • Representing healthcare concepts using FHIR resources
  • Safety and Security
  • The FHIR Maintenance Process
  • FHIR licensing and IP      

Do you want to be part of the pilot?

The test is currently being piloted, and is available for a limited time to a limited number of individuals.  Space is very limited, so sign up fast.

  • Help HL7 improve the test
  • Be one of the first to be certified
Registration and logistics

Pilot Test: What to Expect

  • Online, at test centers or remote.
  • Closed book
  • 2 hours to complete 50 questions
  • Multiple choice, multi-select, and true/false.
  • No penalty for guessing.
  • Passing score (for the pilot) 70%
  • Cost $20 for members, $40 for non-members (for the pilot only).

How to prepare

Obtain HL7 FHIR Proficiency Study Package

Study FHIR STU3!

This test was made possible by:
Grahame Grieve
Brett Marquard
Brian Postlethwaite
Bryn Rhodes
David Hay
Ewout Kramer
Eric Haas
Virginia Lorenzi
James Agnew
Josh Mandel
Lloyd McKenzie
Rob Hausam
Simone Heckmann
Viet Nguyen
Mel Grieve

Monday, November 13, 2017

On evaluating abilities

When you ponder all the various evaluations of interoperability, you need to look at multiple factors.  A key component of this word, as for many other non-functional requirements of systems, is "ability".  It denotes a capability, or capacity to achieve some desired goal with some level of effort.

The same is true for other non-functional requirements: reliability, securability, accessibility, usability, affordability.  In each of these, the measure is one of degree, rather than a "yes vs. no" evaluation.

When headlines make claims about the existence or non-existence of "interoperability", they most often make the assumption that it exists or it does not.  However, when other evaluations of non-functional requirements are done elsewhere in industry, there's an assumption of degree, where achieving a particular score might assess a product as having one of these "abilities".  Consider the term "drivability" in the automotive industry for example.

When you hear that group believes that product isn't ...able, does that mean that it isn't?  In my world, no.  What it actually means is that product doesn't meet group's goals with respect to ...ability.  Unfortunately, without stating what group's goals are, there's precious little that can be done with that reporting other than to investigate further.

Were early cell-phones usable? It depends.  Did you live in an area where you had coverage?  Could you afford to use them?  Did it make and receive the calls that were important to you?  If your answers to those questions were yes, moderately, and mostly, you might say that they were somewhat usable.  If the answers were no, yes, and no, you would say no.  When I worked in the city, my answer was yes.  When I had to travel to a rural destination my answer was no.  These were the goals that warranted my purchase of a bag phone two decades ago.

Determine the goals.  Assess whether the capability meets those goals.  Only then can you assess whether the capability is sufficiently present or not.  TODAY.  Tomorrow the expectations will be different.

The bag phone I evaluated above would certainly be considered to be barely usable today, even though twenty years ago it was more than moderately useful.

   Keith

Thursday, November 2, 2017

Shifting into Sixth Gear

  1. Standards are like toothbrushes.  Everyone needs one, and everyone wants to use their own.
  2. Standards are like potato chips.  You cannot have just one.
  3. And then there's simply XKCD 927 (a well worn, perhaps even "standard" image in standards circles).



And if you look back to late 2012 and early 2013, you can see some of the discussions I had in this space around a battle between two competing standards from the same organization, one for Clinical Decision Support and the other for Quality Measurement.

What rarely happens in this space is that something new arises from the mess that actually solves two different problems ... in this case though they were two different sides of the same coin. The conditional: If (X) then (Y), and the measure: [patients for whom Y is relevant]/[patients for whom X is true].

What happened?  Clinical Quality Language is what happened.  And in the words of its inventor, "we started with an evaluation environment ... we already had the ELM infrastructure ... and we added an execution language".

Yes, I'm crediting one person for the invention because I watched how this played out, and while every standards effort is a corporate (little-c) one, this one was very much driven by one person with assistance from a cast of dozens and input from many more.  Much in the same way as FHIR was originally driven forward by Grahame Grieve, but became an effort backed by many.

In fact, CQL was recently recognized by CMS in the following fashion:

CMS Announces Transition of Electronic Clinical Quality Measures to Clinical Quality Language for the CY2019 Reporting/Performance Periods

So, for changing the paradigm in a big way, in fact, for being to CDS and Quality Measurement what Grahame Grieve was to FHIR, I'm awarding this Ad Hoc Harley as follows:

This certifies that  
Bryn Rhodes


Has hereby been recognized for changing the paradigm in Clinical Decision Support and Quality Measurement

P.S. Bryn and I are working two different tracks yesterday and today at the Digital Quality Summit in DC hosted by HL7 and NCQA.  It's no accident that I chose today to award this particular accolade, but Bryn's award was pretty much in the bag last month when I realized how long it had been since I issued one of these, and looked back at who I had missed.

Monday, October 9, 2017

Where do I find the Medication Generic Name in a CCD Document

The answer is, it depends on your CCD version:

CCDA 2.1 has this to say:
 4. SHALL contain exactly one [1..1] manufacturedMaterial (CONF:1098-7411).
     Note: A medication should be recorded as a pre-coordinated ingredient + strength + dose form (e.g., “metoprolol 25mg tablet”, “amoxicillin 400mg/5mL suspension”) where possible. This includes RxNorm codes whose Term Type is SCD (semantic clinical drug), SBD (semantic brand drug), GPCK (generic pack), BPCK (brand pack).
     1. This manufacturedMaterial SHALL contain exactly one [1..1] code, which SHALL be selected from ValueSet Medication Clinical Drug urn:oid:2.16.840.1.113762.1.4.1010.4 DYNAMIC (CONF:1098-7412).
         1. This code MAY contain zero or more [0..*] translation, which MAY be selected from ValueSet Clinical Substance urn:oid:2.16.840.1.113762.1.4.1010.2 DYNAMIC (CONF:1098-31884).

CCDA 1.1 has this to say:
 4. SHALL contain exactly one [1..1] manufacturedMaterial (CONF:81-7411).
     Note: A medication should be recorded as a pre-coordinated ingredient + strength + dose form (e.g., “metoprolol 25mg tablet”, “amoxicillin 400mg/5mL suspension”) where possible. This includes RxNorm codes whose Term Type is SCD (semantic clinical drug), SBD (semantic brand drug), GPCK (generic pack), BPCK (brand pack).
     1. This manufacturedMaterial SHALL contain exactly one [1..1] code, which SHALL be selected from ValueSet Medication Clinical Drug urn:oid:2.16.840.1.113762.1.4.1010.4 DYNAMIC (CONF:81-7412).
         1. This code SHOULD contain zero or one [0..1] originalText (CONF:81-7413).
             1. The originalText, if present, SHOULD contain zero or one [0..1] reference (CONF:81-15986).
                 1. The reference, if present, SHOULD contain zero or one [0..1] @value (CONF:81-15987).
                     1. This reference/@value SHALL begin with a '#' and SHALL point to its corresponding narrative (using the approach defined in CDA Release 2, section 4.3.5.1) (CONF:81-15988).
         2. This code MAY contain zero or more [0..*] translation (CONF:81-7414).
             1. Translations can be used to represent generic product name, packaged product code, etc (CONF:81-16875).

HITSP C32 has this to say (you can actually find this in the HITSP C83 specification):
2.2.2.8.13 Free Text Product Name Constraints
C83-[DE-8.15-CDA-1] The product (generic) name SHALL appear in the <originalText> element beneath the <code>

It's pretty clear that the preferred way to handle this changed between CCD 1.0 (HITSP C32) and CCDA 1.1, and also that some critical information loss occurred with regard to how to record the generic name between CCDA 1.1 and 2.1.  I think as industry understanding of CDA expanded, the need to express the detail about generic name probably changed, but not necessarily for the better.

If you want to include the generic name, you would do it in a translation -- when you don't already list the drug using an RxNorm code from the Semantic Clinical Drug value set (generic codes) (e.g., you use a Semantic Branded Drug code).

I have two statements to make about this:

  1. Not all implementers are informaticists or would understand the distinction between types of RxNorm codes.  We (HL7) need to remember to speak to those doing the work, not to ourselves.
  2. Brand and Generic information is already represented as relationships embedded in the RxNorm terminology itself.  The simultaneous transmission of a brand code and a generic code for that same drug simply repeats what is already present in RxNorm.  The advice I give these days would be to trust RxNorm before you trust your trading partner, and if what your trading partner tells you CONFLICTS with it, someone needs to go raise a red flag about inconsistent data (see the sketch below).
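As an illustration of that last point, NLM's RxNav service exposes those RxNorm relationships over REST, so a receiver can look up the generic (SCD) concepts for a code it was sent.  This is a rough sketch; the URL pattern is from memory, so verify it against the RxNav documentation before relying on it:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RxNormGenericLookup {
    // Ask RxNav for the SCD (semantic clinical drug, i.e., generic) concepts
    // related to the given RxCUI, returning the raw JSON response.
    public static String relatedGenerics(String rxcui) throws Exception {
        URL url = new URL("https://rxnav.nlm.nih.gov/REST/rxcui/" + rxcui
            + "/related.json?tty=SCD");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            for (String line; (line = in.readLine()) != null; ) {
                body.append(line);
            }
        }
        return body.toString();
    }
}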

   Keith


Wednesday, September 27, 2017

Security and Privacy: Where are we headed

So, the new iPhones are here, along with new security features.  Combine that with this recent bit in my inbox and I have a few predictions.

A study published in Healthcare Informatics Research finds 73 percent of medical professionals have used another staff member's password to access a patient's electronic health record at work, HealthITSecurity reports.


Facial recognition will be used to solve this problem.  Patient safety advocates will jump in to take advantage of the technology, which will be followed shortly thereafter by the computer saying, you look tired, are you sure you should be caring for patients …

At some point in time, this will move into the commercial domain (e.g., software developers, others creating IP).  It will expand into eavesdropping protection, which will lead to DOS attacks by small children popping their heads up in the seat behind you while you are trying to get work done on the plane or train or subway.

At some point at an IHE Connectathon, all testing work will stop as we all have to get exceptions to have competitors in the same room with our code, but cannot complete the process with them standing too close. This will lead to an eventual revolt against security and privacy altogether as similar challenges pop up across the business spectrum.

Eventually we will give up altogether on having any sort of privacy or security, and the world will live peacefully together.

   Keith

P.S. And then the aliens come and wipe us all out because we couldn't even hide from them properly.

Restricted Includes

Call me stupid. I spent the last 12 hours working on a performance challenge before I realized what the real solution was.  The issue was that I was using a FHIR _include parameter on an existing query to get included resources that needed to be displayed.  The performance was absolutely miserable.

To explain a bit, MedicationStatement and MedicationOrder reflect two different sides of an intention that a patient be given or be taking certain medications.  The MedicationStatement resource is (quoting DSTU2):

A record of a medication that is being consumed by a patient. A MedicationStatement may indicate that the patient may be taking the medication now, or has taken the medication in the past or will be taking the medication in the future. The source of this information can be the patient, significant other (such as a family member or spouse), or a clinician.

Whereas MedicationOrder is:

An order for both supply of the medication and the instructions for administration of the medication to a patient. 

And while neither MedicationOrder nor MedicationStatement references the other, the MedicationStatement does provide for "supportingInformation" as a Reference to any resource.  I wanted to link the two to show the physician intention along with the actual prescriptions and refills given.

But then when querying for MedicationStatement for a time period, I also wanted MedicationOrder, so I just grabbed the included references.  Needless to say, this was a MISTAKE, because a patient may have been taking a medication for years and had literally hundreds of refills (I'm not kidding here: 3 years of monthly refills on three meds is > 100, and hell, that could even be me).
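For context, the query looked roughly like the sketch below (DSTU2 structures via the HAPI fluent client; the client wiring, the patient id, the exact package paths for your HAPI release, and the server's support for that particular _include are all assumptions):

import ca.uhn.fhir.model.api.Include;
import ca.uhn.fhir.model.dstu2.resource.Bundle;
import ca.uhn.fhir.model.dstu2.resource.MedicationStatement;
import ca.uhn.fhir.rest.client.api.IGenericClient;

public class StatementWithOrdersQuery {
    // Fetch MedicationStatements for a patient and drag along EVERYTHING
    // their supportingInformation references -- including years of refill
    // MedicationOrders, which is where the performance went sideways.
    public static Bundle fetch(IGenericClient client, String patientId) {
        return client.search()
            .forResource(MedicationStatement.class)
            .where(MedicationStatement.PATIENT.hasId(patientId))
            .include(new Include("MedicationStatement:supportingInformation"))
            .returnBundle(Bundle.class)
            .execute();
    }
}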

The first sign of this was some icky performance.  But see, the MedicationOrder stuff is there not because I have an immediate use for it, but rather because I'm following the CCD/CCDA pattern long established, and I KNOW it will be used in something I have to work with downstream, so I included it.  So, it is kinda hidden and took a while to track down.  AND then I spent about 8 hours trying to improve the performance of the MedicationOrder retrieval instead of asking about the quantity of data.

It might have been advantageous to go after MedicationOrder in the _include because of my data model and processing flow, but FHIR query syntax doesn't cross into _included resources in DSTU2 (I get to play with STU3 soon, maybe they've solved the problem there).  I cannot in DSTU2 say: Give me these MedicationStatement resources, with ONLY the _included MedicationOrder resources that look like that.  Yeah, I'm sure I could use the extended query syntax to get to this, but I'm looking for a bit more elegance here (that's what engineers call complexity that looks cool).

So, here's my thought on syntax:

_include=MedicationStatement:supportingInformation:MedicationOrder(setName)&_restrict:setName.searchParam=value

This would name the set of included fields, AND allow me to set an inclusion restriction on them.
If we only had that, AND if I implemented it, my problem would be solved.  A simple matter of programming, what? Yeah.

Nah.  FHIR Query syntax is complicated enough.  But here is a use case for something we haven't thought of and the nice thing about it is that it seems simple enough to understand (even if I don't yet really know how to implement it).  Is it in the 80%?  Maybe.  I have ONE use case for this.  I could probably find others.  I'm not going to spend much more time on this, I still have to fix that performance problem now that I've found it.

   Keith




Monday, September 25, 2017

In my Inbox

This morning I received a long, not necessarily relevant announcement to an email list I don't remember subscribing to, followed by 30 replies. The replies are all from relatively educated people, many of whom know better, and are summarized below for your reading amusement:

R1: Please remove me from this list
R2: Hi R2, R3 did not send this to you ...
R3: I am not R2
R4: Please respond to the person/s directly and not send a reply to all
R5: Please remove me from all future emails concerning this program
R6: I find reply all useful when unsure who the admin is.
R7: Must you use "reply to all"
R8: Meme "Reply All"
R9: For God's sake everybody -- quit hitting 'reply all' ...
R10: Please remove me as well.
R11: The same here.
R12: This is officially ridiculous. Can everyone stop replying to all these emails?
R13: Same
R14: I don’t know what this email is either and I certainly did not send it out. Please remove me as well.
R15: Hitting reply on the original message only sends the message to the person who sent the email which should be the admin of the list.
R8: Good luck, R3! Keep me posted on the outcome.
R17: Please remove me from your list...
R8: Who's on first?
R20: You guys realize by replying all and asking people to stop replying all that you're just part of the problem, right?...
R21: I just became an Ohio State fan…
R22: I don’t know why I am on this list, so please remove me as well, whoever the admin is.
R23: And good Lord, people, there’s a contact email in the body of the original message:______
Although I must say this has been highly entertaining and a big improvement over the typical Monday.
R24: Please remove me from this list.
R25: Please remove me from this list. Thank you!
R26: Dear whomever, I already have <degree>.  I need <job>...
R27: 
R28: Me too (in reply to me too).
R29: It appears the original email came from ____. Please direct your request to her alone...
R30: Sorry R?, but hitting reply to all just fills our inboxes with garbage.

    ... and still going ...

P.S. My e-mail is simply going to point to this blog post and ask everyone to comment here.

Monday, September 18, 2017

Comparing Dynamically Generated Documents

Sometimes to see if two things are similar, you have to ignore some of the finer details.  When applications dynamically generate CDA or FHIR output, a lot of details are necessary, but you cannot always control all the values.  So, you need to ignore the detail to see the important things.  Is there a problem here?  Ignore the suits, look at the guns.

Creating unit tests against a baseline XML can be difficult because of that detail. What you can do in these cases is remove the stuff that doesn't matter, and enforce some rigor on other stuff in ways that you control, rather than leaving it to your XML parser, transformer or generation infrastructure.

The stylesheet below is an example of just such a tool.  If you run it over your CDA document, it will do a few things:

  1. Remove some content (such as the document id and effective time) which are usually unique and dynamically determined.
  2. Clean up ID attributes such that every ID attribute is numbered in document order in the format ID-1. 
  3. Ensure that internal references to those ID attributes still point to the thing that they originally did.

This stylesheet uses the identity transformation with some little tweaks to "clean up" the things we don't care to compare.  It's a pretty simple tool so I won't go into great detail about how to use it.

   Keith


<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" 
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:cda="urn:hl7-org:v3">
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  
  <xsl:template match='@ID'>
    <xsl:attribute name="ID">
      <xsl:text>ID-</xsl:text>
      <xsl:number count='//*[@ID]'/>
    </xsl:attribute>
  </xsl:template>
  
  <xsl:template match='/cda:ClinicalDocument/cda:id|/cda:ClinicalDocument/cda:effectiveTime|/cda:ClinicalDocument/cda:*/cda:time'>
    <xsl:copy>Ignored for Comparison</xsl:copy>
  </xsl:template>
  
  <xsl:template match="cda:reference/@value[starts-with(.,'#')]">
    <xsl:attribute name="value">
      <xsl:text>#ID-</xsl:text>
      <xsl:value-of select='count(//*[@ID=substring-after(current(),"#")]/preceding::*/@ID)+1'/>
    </xsl:attribute>
  </xsl:template>
  
  <xsl:template match='@ID' mode='count'>
    <xsl:attribute name="ID">
      <xsl:text>#ID-</xsl:text>
      <xsl:number count='//*[@ID]'/>
    </xsl:attribute>
  </xsl:template>
  
</xsl:stylesheet>
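P.S. If you want to wire this into a unit test, a minimal harness for applying the stylesheet with the standard JAXP APIs might look like the sketch below (the file locations are placeholders); normalize both the baseline and the freshly generated document, then compare the resulting strings:

import java.io.File;
import java.io.StringWriter;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CdaComparer {
    // Run the comparison stylesheet over a CDA document and return the
    // normalized XML as a string suitable for an assertEquals().
    public static String normalize(File stylesheet, File cdaDocument) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(stylesheet));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(cdaDocument), new StreamResult(out));
        return out.toString();
    }
}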

Wednesday, September 13, 2017

Matt the Mighty, A Precision Medicine Super Hero (Dad)

Every year in September, HL7 has its "Plenary" session. This is a half day where we hear from folks outside of the working groups on important topics related to what we do.

This year we heard from Matt Might, whom I now would christen Matt the Mighty for his Super-Dad precision medicine powers.  Either that, or as close in real life as one could come to a Doctor McCoy.

You really have to hear him tell the whole story because A) He is an awesome story teller, and B) there's simply so much more depth to it.

The long and short of it, though, is that not only does he help figure out how to identify a rare (n=1?) disease, develop a diagnostic test for it, and identify other possible sufferers, but he also finds a treatment (not a complete cure, but one addressing some effects) among already FDA-approved substances (lucking out on an OTC drug), develops model legislation that his state passes to allow "Right to Try" use of medications for these cases, and builds a process by which other n=1 disease patients can benefit from it, starting with his own son.

That's Mighty powerful application of precision medicine (pun fully intended).  If you weren't here, I'm sorry you missed it, and urge you to listen to him speak elsewhere.

   Keith

Thursday, September 7, 2017

Demand Driven Pricing

One of the things we've seen from early warnings about Hurricane Irma is a significant increase in airline fares from some airlines.  Some of this, I'm sure, is due to automated pricing algorithms on fares based on demand, for which there may very well be little or no human intervention.

That got me to thinking about how demand driven pricing AND demand driven reimbursement could have an interesting impact on prices for healthcare services IF it were possible to apply them more interactively and faster.

In the battle of algorithms, the organization with the best data would most likely win.  I see four facets to that evaluation of "Best": Breadth, Expression, Savvy, and Treatment (see what I did there?).

  • Breadth
    More bigger data is better.
  • Expression
    If your data is organized in a way that makes correlations more obvious, then you can gain an advantage.
  • Savvy
    If you know how A relates to B, you also gain an advantage.  Organization is related to comprehension.
  • Treatment
    Can you execute?  Does the data sing to you, or do you have to filter signal from a vast collection of white noise?
In the 5P model of healthcare system stakeholders, Polity (Government), Payer, Provider, Patient, and Proprietor (Employers):
  1. Who has the largest breadth of data? The smallest?
  2. Who has the best expression of data? The worst?
  3. Who has the greatest savvy for the data? The least?
  4. Who will be most able to treat the data to their best advantage? The least?

It seems pretty clear that the patient has the short end of the stick on most of this, except perhaps on their "personal" collection of data.

Payers are probably in better shape than others with regard to breadth, followed closely by Polity. The reason I say that is because government data is dispersed ... the left hand and the right hand can barely touch in some places.  Providers rarely have the breadth unless they begin to take on the Payer role as well (e.g., Kaiser, Intermountain, et cetera).

Providers have a better chance of having better expression, being able to tie treatment to condition in more detail, and have some chance at understanding outcomes as well.

It's not clear that employers are THAT much better off than patients, although frankly I don't know how much information they really have.

Treatment is where it all comes together, and right now in the US, it seems that nobody has yet found the right treatment ...

Anyway, it's an interesting place to explore further.

   Keith

Wednesday, September 6, 2017

The Good, the bad and the ugly (HL7 Ballots)

[Image: polling station]

HL7 Balloting just closed this last hour.  Here's my recap of what I looked at, how I felt about it, and where I think the ballot will wind up, from worst to best.  Note: My star ratings aren't just about the quality of the material; it's a complex formula involving the quality of the material, the likelihood of it being implemented, the potential value to end users, and the phase of the moon on the first Monday in the third week of August in the year the material was balloted.

VMR (Virtual Medical Record) 
  1. HL7 Implementation Guide: Decision Support Service, Release 1 - US Realm (PI ID: 1018)
  2. HL7 Version 2 Implementation Guide: Implementing the Virtual Medical Record for Clinical Decision Support (vMR-CDS), Release 1 (PI ID: 184)
  3. HL7 Version 3 Standard: Decision Support Service (DSS), Release 2 (PI ID: 1015)
  4. HL7 Virtual Medical Record for Clinical Decision Support (vMR-CDS) Logical Model, Release 2 (PI ID: 1017)
  5. HL7 Virtual Medical Record for Clinical Decision Support (vMR-CDS) Templates, Release 1 - US Realm (PI ID: 1030)
  6. HL7 Virtual Medical Record for Clinical Decision Support (vMR-CDS) XML Specification, Release 1 - US Realm (PI ID: 1016)

This had a total of six artifacts on the ballot.  Together they get 1 star for being able to pass muster to go to ballot.  As a family of specifications, this collection of material looks like it was written by a dozen different people across multiple workgroups with three different processes. What is sad here is that the core group of people who have been working on this material for some time (including me) is the same across much of this work, and it all comes out of the same place.  VMR was always an ugly stepchild in HL7, and these specifications don't make it much better.  Don't lose hope though, because QUICK and CQL are significant improvements, and the FHIR-based clinical decision support work such as CDS Hooks is much more promising. All appear to have achieved quorum and seem likely to pass once through reconciliation.

Release 2: Functional Profile; Work and Health, Release 1 - US Realm (PI ID: 1202)   
Yet another functional model.  Decent stuff if that is what excites you.  I find functional models boring mostly because they aren't being used as intended where it matters.  Pretty likely to pass.

HL7 Version 2.9 Messaging Standard (PI ID: 773) 
The last? of a dying breed of standard.  Maybe? Please? Not enough votes to pass yet, but could happen after reconciliation (which is where V2 usually passes).

Pharmacist Care Plan  
  1. HL7 CDA® R2 Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm (PI ID: 1232)
  2. HL7 FHIR® Implementation Guide: Pharmacist Care Plan Document, Release 1 - US Realm (PI ID: 1232)
Another duo, missing the overweight architectural structure of VMR, but certainly adequate for what it is trying to accomplish.  The question I have here is about its relevance.  Except in inpatient settings, I find the notion of a pharmacist care plan for a patient to be of very little value at this stage.  In fact, we need more attention on care planning in the ambulatory setting.

These are for comment only ballots and the voting reflects it.  While not likely to "pass", the comment only status guarantees that these will go back through another cycle.  Based on the voting, the material needs it.

HL7 Guidance: 
Project Life Cycle for Product Development (PLCPD), Release 2 (PI ID: 1328)  
HL7 continues to ballot its own processes.  What makes this one funny is that this particular ballot comes out of a workgroup in the Technical and Support Services steering division, which previously rejected another group in that division balloting a document because T3SD (their acronym) doesn't do ballots (BTW: That's a completely inadequate summary of what really happened; some day if you buy me a beer I'll get _ and _ to tell you the story.  Better yet, buy them beers).

It's a decent document, and likely to "pass".

HL7 CDA® R2 Implementation Guide: 
International Patient Summary, Release 1 (PI ID: 1087) 
I could get more excited about this particular piece of work if it weren't for the fact that it's all about getting treatment internationally, rather than being an international standard that would eliminate some of the need to deal with cross border issues.  But, it's the former rather than the latter, so only three stars.  A lot of the work spends time dealing with all the tiny little details about making everyone happy on every end instead of getting someone to make some decent decisions that enable true international coordination.

This one is tight, will likely pass in reconciliation, and is getting a lot of international eyes on it.  It's good stuff.

UDI Implementation: 

  1. HL7 Domain Analysis Model: Unique Device Identifier (UDI) Implementation Guidance, Release 1 (PI ID: 1238)
  2. HL7 CDA® R2 Implementation Guide: Consolidated CDA Templates for Clinical Notes; Unique Device Identifier (UDI) Templates, Release 1 - US Realm

By itself, neither one of these might have gotten four stars.  Together they do.  UDI needs a lot of explaining for people.  These documents help.

While the balloting looks tough (the second document is "failing" to pass by a 2/3 majority), it's all about doing what DOD, VA, and others want to ensure interoperability between them.

HL7 CDA® R2 Implementation Guide: 
Consolidated CDA Templates for Clinical Notes; Advance Directives Templates, 
Release 1 - US Realm (PI ID: 1323)  

This is a useful addition to what we can do today with Advance Directives, and a great example of how to deal with backwards compatibility right, and they almost nailed it perfectly (my one negative comment on this item is a fine point).

Not a lead-pipe cinch but surely the issues in this one will be resolved during reconciliation.

HL7 CDA® R2 Implementation Guide: 
Quality Reporting Document Architecture Category I (QRDA I) Release 1, 
STU Release 5 - US Realm (PI ID: 210) 
Useful, necessary, and boring, but of great value.  Sometimes it pays to be boring.
Definitely a lead-pipe cinch to pass.  Third highest in positive votes, with 0 negatives.

HL7 Cross-Paradigm Specification: 
Allergy and Intolerance Substance Value Set(s) Definition, Release 1 (PI ID: 1272) 
ABOUT. DAMN. TIME. An allergy value set we can all use. Nuf said.
The interesting back story here is who is voting negative (who cares) about this.  It looks like a lot of VA/DOD interoperability is going to get decided through standards. I'm pretty certain this stuff is going to get worked out, which has tremendous value to the rest of us.

HL7 FHIR® IG: SMART Application Launch Framework, Release 1 (PI ID: 1341) 
I spent the most time commenting on this one.  I'm looking forward to seeing this published as an HL7 Standard and to getting some overall improvements to what I've been implementing for the past year or so.

There's definitely some good feedback on this ballot (which means likely to take a while in reconciliation), even though it seems very likely to pass.

HL7 Clinical Document Architecture, Release 2.1 (PI ID: 1150) 
This was the surprise of the lot for me.  I expected to be bored, having said CDA is Dead not quite four years ago.  I was, pleasantly so.  There was only one contentious issue for me (the new support added for tables in tables). They got to four stars by making sure all the issues we've encountered over the past decade and more were addressed. They got an extra star by making it easy to find what had changed in the content since CDA R2.  All in all, a pleasant surprise. CDA R2 still reigns supreme, but I think CDA R2.1 might very well become regent until CDA on FHIR is of age.
Oh yeah.  It passed, so very likely to go normative, which will make discussions about the standard in the next round of certification VERY interesting.

   Keith