Friday, November 16, 2018

A Risk Assessment Exercise in Multiple Parts: Threats

Continuing my risk assessment from last post, I'd first like to report an item or three missing from the previous list of assets being protected:

  • The USB device itself (duh).
  • Other data on that device (personal or otherwise).
  • Anything that device could connect to.
Having identified what needs protection, we now need to look at what we are protecting it from:
  • Theft
  • Damage (e.g., electrical hardware damage)
  • Data Corruption
  • Loss of sensitive information
  • Exposure of sensitive information
  • Infection by malware (virus, trojan, ransomware, other)
  • Denial of Service
There are a number of downstream consequences that might result from these core threats, but these threats get at most of the root causes.  I'll look at various potential mitigations for these issues next week.



Friday, November 9, 2018

A Risk Assessment Exercise in Several Parts

Guidelines of impact relevance for IHE profiles
from the IHE Security Cookbook

One of the challenges for anyone involved in Healthcare IT standards development is being able to share documents, presentations, training and other materials used in the development of the standards.  Like many in this field, I have access not just to those materials which I need to be able to share, but also to a lot of other things that shouldn't be shared and which need to be protected.

I've been in settings where I'm creating or revising a document or presentation, and the fastest way to get it to somebody is via a USB memory device.  But if access to external storage is locked out, then I cannot share information, or accept information from devices that may be shared with me.  In some cases, it's been nearly the only way (ever try to get Wi-Fi at a very busy, yet under-provisioned conference venue ... sometimes it's just not possible).  I've been in presentation settings where the presenter system is owned by the organization and, for related reasons, is the only thing that can be used for presenting, so the only way to get content onto it may well be a USB stick.  These situations are infrequent, yet USB is often still the fastest way.

Yet USB sticks (and other devices) are a two-way infection vector, and also a way to enable transfers of huge amounts of information that sometimes shouldn't be shared. Even information that should be shared may need its own set of protections (e.g., encryption and authentication for use) to prevent it from falling into the wrong hands.

So, I need a risk assessment and mitigation strategy if I'm to justify any sort of exception to a complete lock-down.  This post represents the first of several that walk through a risk assessment process.  We'll start in this post with the assets to protect, move on next to threats, and then to assessment and mitigation.

Here's a partial list of assets that need protection.
  • My Company Issued Laptop
  • My Data
    I have pictures on my laptop that are mine, which I might want to save.  My company laptop has access to many web sites I use for both personal and professional reasons, and I may have personal data related to my work (e.g., payroll, taxes, benefits, health insurance). I want to protect that content.
  • Infrastructure
    Anything my laptop (where the USB device would be used) can access could subsequently be attacked by my laptop were it to be infected.
    • Corporate Infrastructure
    • Customer Infrastructure
  • Intellectual Property
    Anything I have access to via that laptop could potentially be a target, including:
    • Company IP
    • Partner IP
    • Customer IP
    • SDO IP
      Examples include presentations, training material, and draft content of specifications that I may be working on.  This is material I often need to share with others.
  • Individually Identifiable Data
    Various regulations require additional safeguards around certain classes of data that might be available via my laptop, including:
    • Patient Data (PHI)
    • Data about other Individuals (PII)

Consequent to threats against any of these assets are threats to my reputation, and to those of my employer, its partners and customers, and to the financial status of those organizations.  One need only look at what happened last year with the NotPetya attacks to see how much damage can be done.

I invite your comments and feedback below!



Thursday, November 8, 2018

Reassessing HealthIT Standards

After spending umpteen years having a pretty good handle on what's important and where to spend my time, I'm now back at (mostly) square one, having to reassess the standards in flight in HL7, IHE, and various other organizations after being out of many loops over the past few years on the implementation side.  For each of about 17 standards organizations, I have to assess what they are doing and how important it is to me (and to my employer), and then work out what my strategy should be.  All while drinking from a tremendous fire hose.

Below are links to where you can find more information for your own assessments, and my thoughts from my current investigations.  While I track many general IT standards bodies (W3C, IETF, OASIS, et cetera), they generally require too much in the way of resources [both time and money], and others working in those spaces are generally more qualified than I to handle that work, so they aren't listed.

HL7: There's a lot of activity around FHIR (of course), and still some activity around CDA (new guides building on C-CDA).  Other things of note: SMART on FHIR, CQL and QUICK. Also, Argonaut and Da Vinci projects can be expected to ballot or contribute some materials back through the HL7 processes.  Attachments is undergoing a shift in focus, and given what's going on with Da Vinci, this should be an interesting time for that work group.  This is an important place to be engaged if you are interested in Health Information Exchange.

IHE: ITI and PCC don't have a lot new to speak of; there's some maintenance work that needs to happen, as well perhaps as some revival.  ITI is considering whether to go to a continuous (quarterly) work cycle, something I tried unsuccessfully to do in PCC for years.  This is a good thing, I think, because it allows for adoption of things in a more timely fashion.  QRPH, on the other hand, has a few things that seem quite attractive, including new work on Aggregate Data Exchange (ADX) [FHIR-based this time, though why they didn't start there is a mystery to me], CQL (an exotic but interestingly useful language for quality measurement and clinical decision support), and PDMPs (which we've seen popping up all over the place in the US).

ISO TC215: There are some interesting things going on here, but not much for my needs.  Much of it is either medical device or process oriented.

ASTM: Haven't heard a peep in a few years here. Drill down to the sublinks and you'll see few if any new work items. 

OpenID: Something to watch, especially as it relates to SMART on FHIR.

NCPDP: A place to keep my eye on, especially as it relates to PDMP and APIs.

CARIN: Some interesting work on patient-facing APIs; a new entrée into the space that bears paying attention to.

Carequality: Some new workgroups are forming, and FHIR is coming.

CommonWell: Biggest news from CommonWell over the past 12 months has been the connection to Carequality.  I'm not seeing much else, but also not digging too deeply either.

X12: Not really doing it for me.  Everything interesting happening in standards for the Payer sector seems to be discussed in either HL7 Attachments or the Da Vinci Project right now, at least as far as I'm concerned.  If you work for a payer, your mileage could certainly vary.

Thursday, November 1, 2018

What's Changing?

With a new employer will come changes.  For the most part, little enough that I only had to make a few small edits to the policies for this blog.  My new employer is Audacious Inquiry, and to the extent that I'm adhering to my own policies, that is about all I'll say here, other than that I've known about half of the senior members of the team for quite some time and highly respect them.  There will be other venues where you can read about what I'll be doing for them in the future.

I'm looking forward to spending more time on standards work, and more time here in this, my own space, where I will talk about the standards work that I'm doing.


   Keith

Wednesday, October 31, 2018

It’s Time

For nearly thirteen years, I’ve been employed by GE Healthcare, or its successor, Virence Health, working on or implementing Health IT Standards.  Today I turned off the last device that connected me to that organization’s electronic infrastructure, and synced my email with my newest employer, where I start tomorrow.

Over that time, my skills, my influence, and all that I am on this blog have been shaped and influenced by the passionate people I have worked with, who truly care about what they do.  As much as I have taught, I have also learned.  In that time, many people have turned a vision I had into reality for me, for themselves, and for our customers.

From the manager who placed a bet on me and found the loophole that could bring me on board, to the strategist who threatened to quit if I wasn’t, to the genomicist who taught me most of what I know, to the budding and passionate architect with a single name (like Madonna), to the brilliant engineer who should have been an architect years before I ever met him, to the young lady whose name I remember simply because of her divinely festival-decorated hands, who accepted and excelled at every challenge I set before her, to the woman who project managed three teams with members across three time zones, five cities and two continents, to the services leader who crossed all the t’s I missed, and the integration specialist who passionately drove our engineering teams to do the right things for us, them, and our customers, to the guy who taught me almost everything I know about risk assessment, and the guy who made it impossible for me to say “I know nothing about DICOM”, and the other fellow who made it necessary for me to prove it, and to the person I called regularly to listen to my challenges and help me work them out, to the guy who took SMART and FHIR and built a tool that others said couldn’t be done, the woman who became her product’s CDA expert and then applied those skills to another key product, and to the best boss I ever had, who pushed me up harder and faster than any before him, and perhaps any since: I thank you.

For those, and all the teams I’ve worked with over the years, at Connectathons and showcases, through two different ONC certification regimens (pre- and post-HITECH), and beyond, you deserve recognition.

For your continued dedication through all challenges, I’m awarding you all the Ad Hoc Activa award.  It’s the workhorse of a nation, pushing its people to work every day to make their lives better, just as you are.

 Keith

P.S. If you don’t already know how to find me, don’t worry, I’ll still be around, and a later tweet will link to the details.  If you follow me on LinkedIn, you’ll likely find out tomorrow.




Thursday, October 25, 2018

Resolving Networking Woes

Networking woes are the bane of any interface engineer, service representative, or IT help desk person.  Let's talk about the various ways our network connections go down:

  1. We screw something up in the URL we are requesting and get back an error we didn't expect.
  2. Someone fat-fingers the hostname or port somewhere and we cannot find the network endpoint.
  3. The hostname doesn't match the certificate associated with it.
  4. The certificate has expired.
  5. We don't like any of the certificates that have been offered because we don't trust one of the root CAs.
  6. The host is down.
  7. The website on that host is down (e.g., the host can be reached, but the port isn't being listened to).
  8. The proxy server is down.
  9. The system isn't configured with the correct proxy server.
  10. We cannot resolve the IP address of the proxy server.
  11. DNS used to resolve the proxy server address is down.
  12. DNS used to resolve the server hostname is down.
  13. We should/shouldn't be using a proxy.
  14. We haven't entered the proper credentials to authenticate to the proxy.
  15. The firewall doesn't like our URL for some reason (someone once reported to me that since a URL contained the word "sex" it was rejected by an overly sensitive firewall).
  16. The VPN is down.
  17. DNS registration expired.
    ...
I could go on for quite a bit longer.  
The set of diagnostic tools we have is vast: ping, tracert, nslookup, ipconfig, wireshark, openssl, ... (I could go on with many more), but most of these are run from the command line, have lots of options, and require human interpretation.

DNS, proxy, host, web server, firewall.  By the time you have everything correct, at least 10 things have to be working.  If every one of them runs at five nines (99.999% availability), the whole chain runs at about 0.99999^10 ≈ 99.99%, so you are now down to four nines.  If you have 1000 customers, the chance that for at least one of them something is wrong in the notwork [sic] is about 1 - 0.9999^1000, or roughly 1 in 10.


Why do we do this to ourselves?  Wouldn't it be good if our platform software could tell us IN DETAIL exactly what is wrong when something doesn't work the way we expect it to?  Wouldn't it be even better if the platform told you what could be done to fix it?  Wouldn't it be absolutely awesome if the platform could actually take its own advice and fix the problem?
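To make the point concrete, here's a minimal sketch (hypothetical host name, plain JDK calls, no proxy handling) of how a platform could at least distinguish "DNS can't resolve it" (#2/#12) from "nothing is listening" (#6/#7) from "the TLS handshake failed" (#3/#4/#5), instead of burying them all under one generic error:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.UnknownHostException;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class EndpointDiagnostic {
    public static void main(String[] args) {
        String host = "fhir.example.com";  // hypothetical endpoint
        int port = 443;

        // #2/#12: can DNS resolve the hostname at all?
        InetAddress address;
        try {
            address = InetAddress.getByName(host);
        } catch (UnknownHostException e) {
            System.err.println("DNS cannot resolve " + host + ": check the name, or DNS itself.");
            return;
        }

        // #6/#7: is the host up, and is anything listening on the port?
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(address, port), 5000);
        } catch (Exception e) {
            System.err.println("Cannot connect to " + address.getHostAddress() + ":" + port + " - " + e.getMessage());
            return;
        }

        // #3/#4/#5: does the TLS handshake succeed (name match, expiry, trust)?
        try (SSLSocket tls = (SSLSocket) ((SSLSocketFactory) SSLSocketFactory.getDefault()).createSocket(host, port)) {
            tls.startHandshake();
            System.out.println("TLS handshake OK with " + tls.getSession().getPeerPrincipal());
        } catch (Exception e) {
            System.err.println("TLS handshake failed: " + e.getMessage());
        }
    }
}
```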

     Keith

Monday, October 22, 2018

Stages of the Software Legacy Lifecycle

Software has a life cycle just like everything else, and sometimes even goes beyond into the after-life (in keeping with the season, that's zombie-hood).  We've all had to deal with legacy code, either as developers or as users.  I've developed a 13-point scale to measure the legacy-ness of software from inception (1) to death (12) and beyond into zombie-hood (13).  The following guidelines help you place the code you are working on or using.

The form of this is "Stage / What They Say / What They are Thinking", followed by what it might really mean when talking to someone who knows something (perhaps not much) about the code in question.

1. First Date / May be Coming Soon / Never heard of that one before.

  • The product manager hasn't heard that idea before, but it sounds intriguing.  Could you say more?

2. Talking about It / In a Future Major Release / Heard of that, still thinking about it.

  • There's an open requisition for the architect who can design that.
  • It's not in the plans.

3. Engaged / In a Future Release / Heard of that, think it might be useful.

  • The architect knows what needs to happen, but needs to document it.
  • There's an open requisition for the engineers who can write that.
  • It's not in the plans yet.

4. Twinkle / In the Next Major Release / Heard of that, pretty sure it's useful.

  • There's an open requisition for the architect who can design that.
  • It's in the plans.

5. Pregnancy / Coming Soon / Know what that is, planning on it.

  • The architect knows what needs to happen, but may need to document it.
  • There's an open requisition for engineers who can write that.
  • It's in the plans.

6. Birthing / In the Next Release / Know what that is, implementing it.

  • The architect knows what needs to happen, and has documented it.
  • There are engineers who know how to write that code.
  • It's in the plans.

7. Infancy / Piloting / Still working the details out.

  • The architect knows what needs to happen, and has documented it.
  • There are engineers who wrote that code.
  • There are still some bugs preventing full shipping.

8. Childhood / It's in Production Today / Implemented it.

  • The architect who designed it is still around.
  • The engineers who wrote it are still assigned to the project.
  • It shipped.

9. Adult / In Maintenance / Been there, done that, losing interest.

  • The last engineer who knows how that works is leaving the company for another opportunity.
  • An open requisition exists for an engineer to maintain it, which is expected to be filled soon (hopefully before the last engineer who knows how that works leaves).

10. Middle-Age / Reaching the End of Life / Losing interest.

  • The last engineer who knows how all of that works left the company for another opportunity.
  • There's an intern who can compile it and can fix the occasional bug.

11. Elderly / Legacy / Lost interest, but the customers haven't.

  • The person who wrote that retired.
  • The engineer who took it over from them left the company for another opportunity.
  • There's an engineer or intern who maintains it and understands how some of it works.

12. Dead / End of Life / Customers have lost interest.

  • The intern who maintains that goes back to school full-time next week (or at least that's what they said).

13. Zombie-Hood / Extreme Legacy / Everybody BUT the customers has lost interest.

For that mission critical software that just can't go to end of life.
  • The guy who wrote that is dead.
  • The person who took it over from him has retired.
  • The engineer hired to maintain it quit.
  • There's an intern who can run the legacy compiler used for it and who can fix the occasional bug.
  • Nobody else knows how any of it works anymore.


Just like real life (in the movies), death and zombie-hood can happen to software at just about any time, and death isn't really a prerequisite stage before zombie-hood.

   -- Keith




Thursday, October 11, 2018

Challenges using SMART on FHIR for Multi-Vendor Authorization

Challenge by Nick Youngson CC BY-SA 3.0
One of the challenges for application developers in implementing SMART on FHIR is that of metadata endpoint discovery.  In order to initiate the SMART on FHIR authorization flow, the application developer needs to know the metadata or Conformance endpoint for the FHIR Server it is going to communicate with.  However, this endpoint is going to vary based on the vendor supplying the FHIR Server, the version of FHIR that is supported, and possibly even the healthcare organization deploying it.

The problem here is the use of metadata as a way to discover the authorization endpoint, when in fact the challenge for application vendors is configuring to support multiple endpoints with the same application.  If you use MyChart or other apps to access your health records (as I do), you can see how this plays out when you go to log in as a patient.  The first step of your login process has you identify your state and healthcare practice (this shows up in other applications as well).  I'm certain that somewhere in that application and others like it, that selection process is doing something to resolve the practice-specific endpoint details.

In a single vendor environment, it's pretty easy to address this, but when developing an application to support multiple environments, this can be quite challenging for the application developer.

NO, this isn't a claim that we need a global SMART on FHIR endpoint directory (although that is one way to resolve this issue).  It's more a statement that we've combined the process of conformance inspection with the problem of endpoint discovery.

Think about it: if your authorization endpoints are different, it is also likely that the FHIR conformance associated with those endpoints could be different.  Why should they both use the same mechanism?

This creates a challenge for patients, because it means that App developers are likely unable to support as broad a variety of endpoints as they'd like simply due to the configuration challenges presented to them.

Smart SMART developers will obviously work around this.  One possibility is to modify the launch sequence so that the application first asks the patient where their practice is located, using some sort of internally managed directory; the directory then resolves both the authorization endpoints AND the actual FHIR conformance endpoint the application can use to customize its operations based on what capabilities are available.

This would allow patient facing SMART applications to support multiple versions of FHIR from multiple provider organizations.
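Here's a minimal sketch of that launch sequence, assuming HAPI FHIR's STU3 client; the practice name, URL, and DIRECTORY map are all hypothetical stand-ins for an internally managed directory, and the oauth-uris extension on CapabilityStatement.rest.security is the way SMART servers typically advertise their authorization endpoints:

```java
import java.util.HashMap;
import java.util.Map;
import org.hl7.fhir.dstu3.model.CapabilityStatement;
import org.hl7.fhir.dstu3.model.Extension;
import org.hl7.fhir.dstu3.model.UriType;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;

public class PracticeEndpointResolver {
    // Hypothetical internally managed directory: practice -> FHIR base URL
    static final Map<String, String> DIRECTORY = new HashMap<>();
    static {
        DIRECTORY.put("Example Practice, Anytown MA", "https://fhir.example.org/baseDstu3");
    }

    public static void main(String[] args) {
        // Step 1: the patient picks their practice; the directory resolves the base URL.
        String base = DIRECTORY.get("Example Practice, Anytown MA");

        // Step 2: one fetch of the conformance statement serves both purposes:
        // endpoint discovery (the OAuth URIs) and capability inspection.
        FhirContext ctx = FhirContext.forDstu3();
        IGenericClient client = ctx.newRestfulGenericClient(base);
        CapabilityStatement cs = client.capabilities()
                .ofType(CapabilityStatement.class).execute();

        // Step 3: pull the SMART oauth-uris extension off rest.security.
        for (Extension ext : cs.getRestFirstRep().getSecurity().getExtension()) {
            if ("http://fhir-registry.smarthealthit.org/StructureDefinition/oauth-uris".equals(ext.getUrl())) {
                for (Extension uri : ext.getExtension()) {
                    // "authorize" and "token" carry the endpoints the app needs.
                    System.out.println(uri.getUrl() + " = " + ((UriType) uri.getValue()).getValue());
                }
            }
        }
    }
}
```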

There are a lot of other challenges.  Like anyone else, you have to get your product into each vendor's app store, and there are a lot more vendors to deal with than one typically faces for smart phone applications (two covers a lot of territory there, whereas you need 5 or 6 EHR vendors to get the same coverage).  And of course, you also have to get your app into the smart phone stores too.  Those aren't problems I'm going to try to solve here.

   Keith

Wednesday, October 10, 2018

But We're Different

The number of times I hear this phrase no longer astounds me. In making this statement the speaker rejects an offered solution because of a perceived difference based on a special need. I've often seen that the special need is similar to other special needs where the proposed solution is already in use elsewhere (Healthcare people sometimes act as if they are the only ones operating in a regulated industry).

Some years ago I led a diverse workgroup across three quite distinct stakeholders trying to solve (what appeared to me to be) the same problem. By "led", I mean cajoled, bugged, spanked (verbally), herded, out-waited, out-witted, listened, learned, fumed, and eventually rejoiced. Over the course of a year I watched this group evolve three completely separate white papers and approaches into one, and after that evolve into an IHE workgroup (QRPH). That workgroup now looks more deeply into their commonalities than they do their differences.

In my most recent dive into medication management I see a similar opportunity for CDS Vendors, EDI vendors, PDMPs, eRX & CPOE developers, payers and pharmacies to come together around a singular solution for improving medication orders.

The challenge for this group is quite different though. Unlike QRPH, which faced a lack of solutions, attention and funding, medication management has a plethora of all of the above. The poverty in what we have is commonality, dare I say it, "standards".

"Oh, yes we do too have those." it will be argued. And I will agree, the solutions have standards in the same way that an organization with 10 priorities has any priorities.  And the challenge we face in replacing 10 with 1 is best summarized by this (de-facto) standard response.

Used with permission from XKCD




HAPI FHIR BDD Testing using Serenity and RESTAssured for DSTU2

One of the challenges I've had with using Serenity and REST Assured for testing with the HAPI server relates to the Content-Type header used with DSTU2 implementations.

In DSTU2, the content types are application/xml+fhir and application/json+fhir, for XML and JSON respectively.
In STU3 and after, these changed to application/fhir+xml and application/fhir+json.
This is discussed in more detail in HAPI FHIR Issue 445.
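For reference, here's a minimal Serenity/REST Assured sketch against a DSTU2 server that just sets the older media type explicitly (the base URL is hypothetical, and this is only the happy path, not the reporting fix discussed below):

```java
import static net.serenitybdd.rest.SerenityRest.given;

public class Dstu2ContentTypeExample {
    // Hypothetical HAPI DSTU2 test server
    static final String BASE = "http://localhost:8080/baseDstu2";

    public static void createPatient(String patientJson) {
        given()
            // DSTU2 media type; an STU3 server would expect application/fhir+json
            .contentType("application/json+fhir; charset=UTF-8")
            .accept("application/json+fhir")
            .body(patientJson)
        .when()
            .post(BASE + "/Patient")
        .then()
            .statusCode(201); // HAPI answers a successful create with 201 Created
    }
}
```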

I finally figured out (I think) where to make the changes to address this in Serenity to handle appropriate reporting.  The key challenge is that I wasn't able to track down the content body of requests in error (or even those that were successful).  That detail can be found in Serenity-core Issue 448.

Unfortunately, what this requires is a hack around the given() method, in which one intercepts the given() call, gets the response back, checks to see if it is of an appropriate type, and changes the RestResponseHelper method to appropriately format the response body (an exercise left to the reader).

I'm testing this now (because I have nothing better to do at 3 am).

   Keith



Thursday, October 4, 2018

Deep, Significant and Detailed HL7 Contributions

One of the points of the Ad Hoc Harley award is to recognize unsung heroes in health IT standards. Almost all past awardees have been people I've worked with directly in standards development. While I have met the current awardee in person, we've not spent much time collaborating on anything together, even though he's a fellow New Englander. Even so, I've still benefited from his work in standards, and from his open source contributions.

According to two very smart people I know (one a past awardee), he's one of the smartest people they know. In addition to being a brilliant computer scientist [an assessment I heartily agree with having read, used and tweaked his code], he is also an accomplished singer who can harmonize with anything, a person of great humility, and a devoted family man.

While he was unable to be at the HL7 Working Group meeting this week, his work was demonstrated in several committee meetings.

His contributions include:

  • CQL
  • CIMPL
  • CDS Connect
  • Maintenance of the Quality Data Model
  • Invaluable feedback on the CDC Opioid prescribing guidelines.
  • Synthea’s Generic Module Framework


Without further ado:


This certifies that
Chris Moesel of MITRE



has hereby been recognized for deep, significant and detailed contributions to HL7 standards.

Tuesday, September 25, 2018

Clinical Workflow Integration (part 2)

Continuing from my last post, let's look at the various technologies that need to be integrated for medication ordering workflows.
  1. The EHR / CPOE System
  2. Clinical Decision Support for
    1. Drug Interactions
    2. Allergies
    3. Problems
  3. PBM drug formulary
  4. Medication vocabulary
  5. ePrescribing
  6. Prescription Drug Monitoring Program portal
  7. the provider's Local formulary
  8. Prior Authorization (for drugs which require it)
  9. Signature Capture (and other components needed for eRX of controlled substances)
Add to that:
  1. Custom Forms
  2. A Workflow Engine (props to @wareFLO)
  3. Integration with drug pricing information
  4. Medication history
Each of these 13 technologies potentially comes from a different supplier. I can count at least a baker's dozen different standards (n.b., If I went to all of the meetings for the SDOs publishing those standards I could do NOTHING else).  Some of these components want to control the user interface.  Nobody supplies all the pieces in a unified whole.  Then as you probably already understand, some of the same technology is also applicable to other workflows: refills, medication reconciliation, medication review, et cetera.  And many of these technologies are also applicable inside information systems used by other stakeholders (pharmacists, payers/PBMs, public health ...).

Many physicians' ePrescribing workflows look like the starter house on a quarter-acre lot that then got a garage, a hot tub, a later addition to have more room for the kids, a pool, and then a mother-in-law suite.  There's precious little room for anything else.  That's because, like the starter house, the workflow grew over time without appreciable planning for all the eventual users and uses.

Paper-based PDMPs have been around for a long time, and electronic ones for more than a decade, but real-time integration into medication workflows didn't really start to occur until about 4-5 years ago, and only in the last few years have they become prevalent.  Legislation now requires integration into the physician workflow in several states (with more mandates coming).

ePrescribing of controlled substances has been around for a lot less time (first authorized federally by regulation in 2010), but electronic integrations started a bit sooner in many markets than did integration with PDMPs.


It's time to rethink medication workflows, and rethink the standards.  Just about everyone in the list of stakeholders in the previous post wants to make recommendations, limit choices, get exceptions acknowledged, get reasons for exceptions, et cetera.  And everyone wants to do it in a different (even if standard) way, and to make it easier, just about everyone supplies proprietary data and APIs.

We need to unify the components so that they can work in a common way. We need to standardize the data AND the APIs.  We need to have these things take on a common shape that can be stacked and hooked together like LEGO® blocks*, and run in parallel, and aggregated.  We need to stop building things that only experts can understand, and build them in a way that we can explain it to our kids.

   -- Keith

* Yes, I know it's a tired metaphor, but it still works, and you understood me, and so would a
   six-year-old, making my last point.


Sunday, September 23, 2018

Who is responsible for clinical workflow?

Dirk Stanley asked the above question via Facebook. My initial response focused on the keyword "responsible". In normal conversation, this often means "who has the authority", rather than "who does the work", which are often different.

I applied these concepts from RACI matrices in my initial response. If we look at one area, medication management, physicians still retain accountability; but responsibility, consulting, and informing relationships have been added to this workflow in many ways over many decades, centuries and millennia.

At first, physicians did it all: prescribe, compound, and dispense. Assistants took over some responsibilities for the work, eventually evolving into specialties of their own (nurses and MAs).  Compounding and dispensing further evolved into their own specialties, with apothecaries and pharmacists taking on some of the responsibilities. This resulted in both the expansion and contraction of the drug supply: more preparations would be available, but physicians would also be limited by those available in their supplier's formulary. Introduction of these actors into the workflow required the physician to inform others of the order.

The number of preparations grew beyond the ability of human recall, requiring accountable authorities to compile references describing benefits, indications for and against, possible reactions, etc., which physicians would consult. I recall as a child browsing my own physician's copy of the PDR -- being fascinated by the details. This information is now electronically available in physician workflows via clinical decision support providers.

Compounding & preparation led to further specialization, introducing manufacturing and subsequent regulatory accountability, including requirements for manufacturers to inform reference suppliers about what they make.

Full accountability for what can be given to a patient is, at this stage, no longer under direct physician control.

Health insurance (and PBMs) changed the nature of payment, further complicating matters and convoluting drug markets well beyond the ability of almost anyone to understand. The influence of drug prices on treatment efficacy is easily acknowledged, but most physicians lack sufficient information to be accountable for the impact of drug pricing on efficacy and treatment selection. PBMs are making this information available to physicians and their staff, and EDI vendors are facilitating this flow of information.

Physicians, pharmacists, and payers variously accept different RACI roles to ensure their patients are taking / filling / purchasing their medications. In some ways this has evolved into a shared accountability. I routinely receive inquiries from all of the above; my own responsibility to acquire, purchase, and take my medications has evolved into simple approval for shipping them to my home.

Attempts to improve the availability of drug treatment to special populations (e.g., Medicaid) via discount programs such as 340B add to physician responsibilities. They must inform others of their medication choices for their patients.

Recently, information about the prevalence of opioid-related deaths and adverse events has introduced yet another stakeholder into the workflow. State regulatory agencies are informed of patient drug use, and want to share prescription information with the physicians accountable for ordering medications.

My own responsibilities as a software architect require me to integrate the needs of all these stakeholders into a seamless workflow. One could further explore this process. I've surely missed some important detail somewhere.

And yet, after all this, the simple question in the title remains ... answered and yet not answered at the same time.

     -- Keith

P.S. This question often comes up in a much different fashion, and one I hear way too often: "Who is to blame for the problems of automated clinical workflow in EHR systems?"

Wednesday, September 19, 2018

Loving to hate Identifiers

Here's an interesting one.  What's the value set for ObservationInterpretation?  What's the vocabulary?

Well, it depends actually on who you ask, and the fine details have rather little effect on the eventual outcome.

Originally defined in HL7 Version 2, the Observation abnormal flags come from HL7 Table 0078.  That's HL7-speak for an official table, which has the following OID: 2.16.840.1.113883.12.78.  It looks like this:

  • L: Below low normal
  • H: Above high normal
  • LL: Below lower panic limits
  • HH: Above upper panic limits
  • <: Below absolute low-off instrument scale
  • >: Above absolute high-off instrument scale
  • N: Normal (applies to non-numeric results)
  • A: Abnormal (applies to non-numeric results)
  • AA: Very abnormal (applies to non-numeric units, analogous to panic limits for numeric units)
  • null: No range defined, or normal ranges don't apply
  • U: Significant change up
  • D: Significant change down
  • B: Better (use when direction not relevant)
  • W: Worse (use when direction not relevant)
For microbiology susceptibilities only:
  • S: Susceptible*
  • R: Resistant*
  • I: Intermediate*
  • MS: Moderately susceptible*
  • VS: Very susceptible*

When we moved to Version 3 with CDA, we got ObservationInterpretation, and it looks something like the following.  Its OID is 2.16.840.1.113883.5.83.  The table is even bigger (click the link above) and has a few more values, but all the core concepts below (found in the 2010 normative edition of CDA) are unchanged.

ObservationInterpretation: One or more codes specifying a rough qualitative interpretation of the observation, such as "normal", "abnormal", "below normal", "change up", "resistant", "susceptible", etc.

  • ObservationInterpretationChange (V10214): Change of quantity and/or severity. At most one of B or W and one of U or D allowed.
    • B (10215, better): Better (of severity or nominal observations)
    • D (10218, decreased): Significant change down (quantitative observations, does not imply B or W)
    • U (10217, increased): Significant change up (quantitative observations, does not imply B or W)
    • W (10216, worse): Worse (of severity or nominal observations)
  • ObservationInterpretationExceptions (V10225): Technical exceptions. At most one allowed. Does not imply normality or severity.
    • < (10226, low off scale): Below absolute low-off instrument scale. This is a statement depending on the instrument, and logically does not imply LL or L (e.g., if the instrument is inadequate). If an off-scale value is also low or critically low, one must also report L and LL respectively.
    • > (10227, high off scale): Above absolute high-off instrument scale. This is a statement depending on the instrument, and logically does not imply HH or H (e.g., if the instrument is inadequate). If an off-scale value is also high or critically high, one must also report H and HH respectively.
  • ObservationInterpretationNormality (V10206): Normality, Abnormality, Alert. Concepts in this category are mutually exclusive, i.e., at most one is allowed.
    • A (V10208, Abnormal): Abnormal (for nominal observations, all service types)
      • AA (V10211, Abnormal alert): Abnormal alert (for nominal observations and all service types)
        • HH (10213, High alert): Above upper alert threshold (for quantitative observations)
        • LL (10212, Low alert): Below lower alert threshold (for quantitative observations)
      • H (V10210, High): Above high normal (for quantitative observations)
        • HH (10213, High alert): Above upper alert threshold (for quantitative observations)
      • L (V10209, Low): Below low normal (for quantitative observations)
        • LL (10212, Low alert): Below lower alert threshold (for quantitative observations)
    • N (10207, Normal): Normal (for all service types)
  • ObservationInterpretationSusceptibility (V10219): Microbiology: interpretations of minimal inhibitory concentration (MIC) values. At most one allowed.
    • I (10221): Intermediate
    • MS (10222): Moderately susceptible
    • R (10220): Resistant
    • S (10223): Susceptible
    • VS (10224): Very susceptible

It's got a value set OID as well.  It happens to be: 2.16.840.1.113883.1.11.78.  Just in case you need more clarification.

Along comes FHIR, and we no longer need to worry about OIDs any more.  Here's the FHIR table.  Oh, and this one is called: http://hl7.org/fhir/v2/0078. That should be easy enough to remember.  Oh, and it has a value set identifier as well: http://hl7.org/fhir/ValueSet/observation-interpretation.  Or is it http://hl7.org/fhir/ValueSet/v2-0078?  

Just to be safe, FHIR also defined identifiers for Version 3 code systems.  So the HL7 V3 ObservationInterpretation code system is: http://hl7.org/fhir/v3/ObservationInterpretation.  Fortunately for us, the confusion ends here, because it correctly says that this code system is defined as the expansion of the V2 value set.
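To see how little has actually changed, here's a hedged sketch using the HAPI FHIR STU3 model classes to flag an observation as High; the concept and the code are the same ones we've had since HL7 Table 0078, and only the system identifier differs:

```java
import org.hl7.fhir.dstu3.model.CodeableConcept;
import org.hl7.fhir.dstu3.model.Coding;
import org.hl7.fhir.dstu3.model.Observation;

public class InterpretationExample {
    public static Observation flagHigh(Observation obs) {
        // Same concept since HL7 Table 0078: H = Above high normal.
        // In V2 the "system" was table 0078 (OID 2.16.840.1.113883.12.78),
        // in V3/CDA it was 2.16.840.1.113883.5.83, and in FHIR it's a URI.
        Coding high = new Coding()
                .setSystem("http://hl7.org/fhir/v3/ObservationInterpretation")
                .setCode("H")
                .setDisplay("High");
        obs.setInterpretation(new CodeableConcept().addCoding(high));
        return obs;
    }
}
```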

And, so, through the magic of vocabulary, we have finally resolved what Observation interpretation means.  Which leaves us pretty much right back where we started last century.

I'm poking fun, of course.  This is just one of the minor absurdities that we have in standards for some necessary but evil reasons, which do not necessarily include full employment for vocabularists. The reality is, everyone who needs to know what these codes mean knows, and we've been agreeing on them for decades.  We just don't know what to call them.  This is one of those problems that only needs a solution so that we don't look ridiculous.  Hey, at least we got the codes for gender right ... right?  Err, tomorrow maybe.