Tuesday, September 25, 2018

Clinical Workflow Integration (part 2)

Continuing from my last post, let's look at the various technologies that need to be integrated for medication ordering workflows.
Workflow
  1. The EHR / CPOE System
  2. Clinical Decision Support for
    1. Drug Interactions
    2. Allergies
    3. Problems
  3. PBM drug formulary
  4. Medication vocabulary
  5. ePrescribing
  6. Prescription Drug Monitoring Program portal
  7. The provider's local formulary
  8. Prior Authorization (for drugs which require it)
  9. Signature Capture (and other components needed for eRX of controlled substances)
Add to that:
  1. Custom Forms
  2. A Workflow Engine (props to @wareFLO)
  3. Integration with drug pricing information
  4. Medication history
Each of these 13 technologies potentially comes from a different supplier. I can count at least a baker's dozen different standards (n.b., if I went to all of the meetings for the SDOs publishing those standards I could do NOTHING else).  Some of these components want to control the user interface.  Nobody supplies all the pieces in a unified whole.  And as you probably already understand, some of the same technology is also applicable to other workflows: refills, medication reconciliation, medication review, et cetera.  Many of these technologies are also applicable inside information systems used by other stakeholders (pharmacists, payers/PBMs, public health ...).

Many physicians' ePrescribing workflows look like the starter house on a quarter-acre lot that then got a garage, a hot tub, a later addition to make more room for the kids, a pool, and then a mother-in-law suite.  There's precious little room for anything else.  That's because, like the starter house, it grew over time without appreciable planning for all the eventual users and uses it would have to support.

Paper-based PDMPs have been around for a long time, and electronic ones for more than a decade, but real-time integration into medication workflows didn't really start to occur until four or five years ago, and only in the last few years have they become prevalent.  Several states now require integration into the physician workflow (with more mandates coming).

ePrescribing of Controlled Substances has been around for much less time (first authorized federally by regulation in 2010), but electronic integration started a bit sooner in many markets than PDMP integration did.

It's time to rethink medication workflows, and to rethink the standards.  Just about everyone in the list of stakeholders in the previous post wants to make recommendations, limit choices, get exceptions acknowledged, get reasons for exceptions, et cetera.  Everyone wants to do it in a different (even if standard) way, and to make it easier, just about everyone supplies proprietary data and APIs.

We need to unify the components so that they can work in a common way. We need to standardize the data AND the APIs.  We need to have these things take on a common shape that can be stacked and hooked together like LEGO® blocks*, run in parallel, and be aggregated.  We need to stop building things that only experts can understand, and build them in a way that we can explain to our kids.

   -- Keith

* Yes, I know it's a tired metaphor, but it still works, and you understood me, and so would a six-year-old, making my last point.

Sunday, September 23, 2018

Who is responsible for clinical workflow?

[Image: RACI chart]
Dirk Stanley asked the above question via Facebook. My initial response focused on the keyword "responsible". In normal conversation, this often means "who has the authority" rather than "who does the work", and those are often different.

I applied these concepts from RACI matrices in my initial response. If we look at one area, medication management, physicians still retain accountability; but responsibility, consulting, and informing relationships have been added to this workflow in many ways over decades, centuries, and millennia.

At first, physicians did it all: prescribe, compound, and dispense. Assistants took over some responsibility for the work, eventually evolving into specialties of their own (nurses and MAs).  Compounding and dispensing likewise evolved into their own specialties, with apothecaries and pharmacists taking on some of the responsibilities. This resulted in both expansion and contraction of the drug supply: more preparations became available, but physicians were also limited to those in their suppliers' formularies. Introducing these actors into the workflow required the physician to inform others of the order.

The number of preparations grew beyond the ability of human recall, requiring accountable authorities to compile references describing benefits, indications for and against, possible reactions, etc., which physicians would consult. I recall as a child browsing my own physician's copy of the PDR, fascinated by the details. This information is now available electronically in physician workflows via clinical decision support providers.

Compounding and preparation led to further specialization, introducing manufacturing and subsequent regulatory accountability, including requirements for manufacturers to inform reference suppliers about what they make.

Full accountability for what can be given to a patient is, at this stage, no longer under direct physician control.

Health insurance (and PBMs) changed the nature of payment, further complicating matters and convoluting drug markets well beyond the ability of almost anyone to understand. The influence of drug prices on treatment efficacy is easily acknowledged, but most physicians lack sufficient information to be accountable for the impact of drug pricing on efficacy and treatment selection. PBMs are making this information available to physicians and their staff, and EDI vendors are facilitating this flow of information.

Physicians, pharmacists, and payers variously accept different RACI roles to ensure their patients are taking / filling / purchasing their medications. In some ways this has evolved into a shared accountability. I routinely receive inquiries from all of the above; my own responsibility to acquire, purchase, and take my medications has evolved into simply approving its shipment to my home.

Attempts to improve the availability of drug treatment to special populations (e.g., Medicaid) via discount programs such as 340B add to physician responsibilities: they must inform others of their medication choices for their patients.

Recently, information about the prevalence of opioid-related deaths and adverse events has introduced yet another stakeholder into the workflow. State regulatory agencies are informed of patient drug use, and want to share prescription information with the physicians accountable for ordering medications.

My own responsibilities as a software architect require me to integrate the needs of all these stakeholders into a seamless workflow. One could further explore this process. I've surely missed some important detail somewhere.

And yet, after all this, the simple question in the title remains ... answered and yet not answered at the same time.

     -- Keith

P.S. This question often comes up in a much different fashion, and one I hear way too often: "Who is to blame for the problems of automated clinical workflow in EHR systems?"

Wednesday, September 19, 2018

Loving to hate Identifiers

Here's an interesting one.  What's the value set for ObservationInterpretation?  What's the vocabulary?

Well, it actually depends on who you ask, and the fine details have remarkably little effect on the eventual outcome.

Originally defined in HL7 Version 2, the Observation abnormal flags come from HL7 Table 0078.  That's HL7-speak for an official table, which has the following OID: 2.16.840.1.113883.12.78.  It looks like this.

Value  Description
-----  -----------
L      Below low normal
H      Above high normal
LL     Below lower panic limits
HH     Above upper panic limits
<      Below absolute low-off instrument scale
>      Above absolute high-off instrument scale
N      Normal (applies to non-numeric results)
A      Abnormal (applies to non-numeric results)
AA     Very abnormal (applies to non-numeric units, analogous to panic limits for numeric units)
null   No range defined, or normal ranges don't apply
U      Significant change up
D      Significant change down
B      Better (use when direction not relevant)
W      Worse (use when direction not relevant)

For microbiology susceptibilities only:
S      Susceptible*
R      Resistant*
I      Intermediate*
MS     Moderately susceptible*
VS     Very susceptible*

When we moved to Version 3 with CDA, we got ObservationInterpretation, which looks something like the following.  Its OID is 2.16.840.1.113883.5.83.  The table is even bigger (click the link above) and has a few more values, but all the core concepts below (found in the 2010 normative edition of CDA) are unchanged.

ObservationInterpretation
One or more codes specifying a rough qualitative interpretation of the observation, such as "normal", "abnormal", "below normal", "change up", "resistant", "susceptible", etc.

(The leading number is the nesting level; A = abstract domain, S = specializable concept, L = leaf code. Each entry shows the concept ID, mnemonic, and print name, followed by the definition/description.)

1  A: ObservationInterpretationChange (V10214)
   Change of quantity and/or severity. At most one of B or W and one of U or D allowed.
2    L: (B) 10215, B, "better"
     Better (of severity or nominal observations)
2    L: (D) 10218, D, "decreased"
     Significant change down (quantitative observations, does not imply B or W)
2    L: (U) 10217, U, "increased"
     Significant change up (quantitative observations, does not imply B or W)
2    L: (W) 10216, W, "worse"
     Worse (of severity or nominal observations)
1  A: ObservationInterpretationExceptions (V10225)
   Technical exceptions. At most one allowed. Does not imply normality or severity.
2    L: (<) 10226, <, "low off scale"
     Below absolute low-off instrument scale. This is a statement depending on the instrument; it logically does not imply L or LL (e.g., if the instrument is inadequate). If an off-scale value is also low or critically low, one must also report L or LL respectively.
2    L: (>) 10227, >, "high off scale"
     Above absolute high-off instrument scale. This is a statement depending on the instrument; it logically does not imply H or HH (e.g., if the instrument is inadequate). If an off-scale value is also high or critically high, one must also report H or HH respectively.
1  A: ObservationInterpretationNormality (V10206)
   Normality, Abnormality, Alert. Concepts in this category are mutually exclusive, i.e., at most one is allowed.
2    S: ObservationInterpretationNormalityAbnormal (A) V10208, A, "Abnormal"
     Abnormal (for nominal observations, all service types)
3      S: ObservationInterpretationNormalityAlert (AA) V10211, AA, "Abnormal alert"
       Abnormal alert (for nominal observations and all service types)
4        L: (HH) 10213, HH, "High alert"
         Above upper alert threshold (for quantitative observations)
4        L: (LL) 10212, LL, "Low alert"
         Below lower alert threshold (for quantitative observations)
3      S: ObservationInterpretationNormalityHigh (H) V10210, H, "High"
       Above high normal (for quantitative observations)
4        L: (HH) 10213, HH, "High alert"
         Above upper alert threshold (for quantitative observations)
3      S: ObservationInterpretationNormalityLow (L) V10209, L, "Low"
       Below low normal (for quantitative observations)
4        L: (LL) 10212, LL, "Low alert"
         Below lower alert threshold (for quantitative observations)
2    L: (N) 10207, N, "Normal"
     Normal (for all service types)
1  A: ObservationInterpretationSusceptibility (V10219)
   Microbiology: interpretations of minimal inhibitory concentration (MIC) values. At most one allowed.
2    L: (I) 10221, I, "intermediate"
     Intermediate
2    L: (MS) 10222, MS, "moderately susceptible"
     Moderately susceptible
2    L: (R) 10220, R, "resistent"
     Resistant
2    L: (S) 10223, S, "susceptible"
     Susceptible
2    L: (VS) 10224, VS, "very susceptible"
     Very susceptible

It's got a value set OID as well.  It happens to be: 2.16.840.1.113883.1.11.78.  Just in case you need more clarification.

Along comes FHIR, and we no longer need to worry about OIDs.  Here's the FHIR table.  Oh, and this one is called http://hl7.org/fhir/v2/0078.  That should be easy enough to remember.  Oh, and it has a value set identifier as well: http://hl7.org/fhir/ValueSet/observation-interpretation.  Or is it http://hl7.org/fhir/ValueSet/v2-0078?

Just to be safe, FHIR also defined identifiers for Version 3 code systems.  So the HL7 V3 ObservationInterpretation code system is: http://hl7.org/fhir/v3/ObservationInterpretation.  Fortunately for us, the confusion ends here, because it correctly says that this code system is defined as the expansion of the V2 value set.
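
Despite all the identifier confusion, what actually goes over the wire is mundane. Here's a minimal sketch (as a Python dict, with invented patient values) of an STU3-era FHIR Observation fragment using one of the code system URLs above; note that in Python the regex split and HTTP plumbing are not shown, just the shape of the data:

    # Hypothetical example: a high systolic blood pressure reading.
    # The interpretation system URL is the STU3-era identifier discussed above.
    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                             "display": "Systolic blood pressure"}]},
        "valueQuantity": {"value": 142, "unit": "mm[Hg]"},
        "interpretation": {"coding": [{
            "system": "http://hl7.org/fhir/v2/0078",  # one of several names for the same table
            "code": "H",
            "display": "High",
        }]},
    }

Whichever name the code system goes by this year, the code itself is still the same "H" it was in HL7 Table 0078.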

And, so, through the magic of vocabulary, we have finally resolved what Observation interpretation means.  Which leaves us pretty much right back where we started last century.

I'm poking fun, of course.  This is just one of the minor absurdities that we have in standards for some necessary but evil reasons that do not necessarily include full employment for vocabularists. The reality is, everyone who needs to know what these codes mean already knows, and we've been agreeing on them for decades.  We just don't know what to call them.  This is one of those problems that only needs a solution so that we don't look ridiculous.  Hey, at least we got the codes for gender right ... right?  Err, tomorrow maybe.



Tuesday, September 18, 2018

The Role of an Interop Expert


In my many years as a Standards Geek and later as an Interop Guru, one of the things I learned is that my customers will rarely name interoperability as a core requirement.  Nor do they want to pay extra for it. The only times they have are when it had buzz (like FHIR does now) or mandates (like C-CDA).  And if you drill into the details, they will rarely scratch the surface of the buzz or the mandate.

If you look at the work breakdown structure for a product, you'll see that a product release is made up of features that are delivered by software components that have different capabilities.  If one release has 3 features (or 30, your mileage may vary), and each feature requires 3 components, and each component needs to use 3 capabilities ... (turtles all the way down), then engineers will have to deliver 3^3 or 27 pieces of software, plus glue for 9 components, plus baling twine for 3 features.  That's a lot of work.

But if you design capabilities with reuse in mind then it gets a lot easier.  Let’s look at three features:
  • When patient labs are out of range in a lab report, please flag it for my attention.
  • Please route information to Dr. Smith when I'm on vacation.
  • Let me know when Susan’s labs come back.
Each of these is a very distinct requirement, and yet every single one of them can use the FHIR Subscription capability (which in turn can be built over the same parts used to support search) to implement their functionality.  Each one of these follows the pattern: when this, then that.  And so the biggest piece of that is determining the “when this” ... which is exactly what subscriptions are good for.
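
As a sketch of that reuse, here's roughly what the third feature ("let me know when Susan's labs come back") could look like as a FHIR STU3 Subscription, in Python. The server URL, patient ID, and notification endpoint are all hypothetical:

    # Hypothetical sketch: "when this, then that" via a FHIR Subscription.
    import requests

    subscription = {
        "resourceType": "Subscription",
        "status": "requested",
        "reason": "Let me know when Susan's labs come back",
        # "when this": the same query syntax the search capability uses
        "criteria": "Observation?subject=Patient/susan-example&category=laboratory",
        # "then that": how the server should notify us
        "channel": {
            "type": "rest-hook",
            "endpoint": "https://example.org/notify",
            "payload": "application/fhir+json",
        },
    }

    resp = requests.post("https://fhir.example.org/baseDstu3/Subscription",
                         json=subscription,
                         headers={"Content-Type": "application/fhir+json"})
    resp.raise_for_status()

The other two features differ only in their criteria and channel; the capability underneath is the same.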

It’s my job to:
  1. Know this
  2. Call it out in designs
  3. Educate my teams to think this way.
It turns an insurmountable pile of work into an achievable collection of software that can be delivered.

    Keith




Thursday, September 13, 2018

Sorting Codes and Identifiers for Human Use


These pictures demonstrate one of those little annoyances that has bothered me for some time.

[Images: search results from LOINC and from Trifolia, each sorted by code or identifier]

Have you ever used a tool to display a list of OIDs, codes, or other data?  Were you surprised at the results of sorting the codes?  Most of us who work with these things often are, but only for a moment.  That brief moment of cognitive dissonance, though, is just enough to distract me from whatever I'm doing, and thus it takes me just a little bit longer to finish.

The first example above came from searching LOINC for blood pressure, then sorting by LOINC code.  Notice that all the 8###-# series codes appear after the 7####-# and some of the 8####-# series codes (starting at page 2).  The second comes from a search of Trifolia for Immunization, sorting by identifier.  Look at the order of the identifiers used for Immunization entries in the HIV/AIDS Services Report guide: urn:oid:2.16.840.1.113883.10.20.31.3.36 comes before urn:oid:2.16.840.1.113883.10.20.31.3.4.

The problem with codes made up of multiple components, some of them numeric, is that they don't sort the way we expect when you apply alphabetic sorting rules to them.

The "right" way to sort codes is to divide them up at their delimiters, and then sort them component-wise, using an appropriate sort (alpha or numeric) for each component, based on the appropriate sort for that component, in the right order.

For LOINC, the right sort is numeric for each component, and the delimiter is '-'.  For OIDs, the right sort is numeric for each component, and the delimiter is '.'.  For both of these, the right order is simply sequential.  For HL7 urn: values, it gets a bit weird.  Frankly, I want to sort them first by their OID, then by their date, and finally by the urn:oid: or urn:hl7ii: parts, putting urn:oid: first because it is effectively equivalent to a urn:hl7ii: without an extension part.

A comparator for codes that would work for most of the cases I encounter would do the following:

  1. Split the code into an array of strings at any consecutive sequence of delimiters: code.split("[-/.:]+").
  2. Then, for each pair of corresponding components in the two arrays:
  3. If the first component is numeric and the second is not, the first sorts before the second (and conversely).
  4. If both components are numeric, order them numerically.
  5. Otherwise, sort alphabetically, using a case-sensitive collation sequence.

NOTE: The above does not address my urn:hl7ii: and urn:oid: challenge, but I could live with that.
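
Here's a minimal sketch of that comparator in Python; it's my own illustration of the rules above, not code from any library (and note that in Python the regex split comes from the re module rather than from str.split):

    import re
    from functools import cmp_to_key

    def compare_codes(a, b):
        # Rule 1: split at any consecutive sequence of delimiters.
        parts_a = re.split(r"[-/.:]+", a)
        parts_b = re.split(r"[-/.:]+", b)
        # Rule 2: compare corresponding components.
        for x, y in zip(parts_a, parts_b):
            if x == y:
                continue
            if x.isdigit() and y.isdigit():   # Rule 4: numeric vs numeric
                return int(x) - int(y)
            if x.isdigit():                   # Rule 3: numeric sorts first
                return -1
            if y.isdigit():
                return 1
            return -1 if x < y else 1         # Rule 5: case-sensitive alpha
        # All shared components equal: the shorter code sorts first.
        return len(parts_a) - len(parts_b)

    codes = ["urn:oid:2.16.840.1.113883.10.20.31.3.36",
             "urn:oid:2.16.840.1.113883.10.20.31.3.4"]
    print(sorted(codes, key=cmp_to_key(compare_codes)))
    # ...3.4 now sorts before ...3.36, as a human would expect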

There are more than 50,000 LOINC codes, 68,000 ICD-10 codes, 100,000 SNOMED CT codes, 225,000 RxNorm codes, and 350,000 NDC codes.  If sorting codes the right way saved us all just half a second per code in each of those vocabularies, we'd get back more than four days of our lives.
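
The arithmetic, for the skeptical:

    codes = 50_000 + 68_000 + 100_000 + 225_000 + 350_000  # 793,000 codes
    seconds_saved = codes * 0.5                            # 396,500 seconds
    print(seconds_saved / 86_400)                          # ~4.6 days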

I don't know about you, but I could sure use that time.

Tuesday, September 11, 2018

HealthIT Interfacing in the Cloud

I've been thinking a lot lately about Interfacing in the Cloud, and Microservices strategies, and how that makes things different.  Consider interface engines today.

The classic interface engine is a piece of middleware that supports a number of capabilities:
  1. Routing
  2. Queuing
  3. Transformation (message reformatting, tweaking messages)
  4. Translation (codes, cross walks, et cetera)
  5. Monitoring and Management
  6. Transport (HTTP/HTTPS, TCP, SOAP, FTP/SFTP, REST, SMTP/IMAP/POP3)
It's a piece of enterprise middleware, classically a monolithic application that allows you to build highly orchestrated pipelines to do the work.

In the micro-services world, each of these areas could be its own bounded context, with a few extra pieces thrown into the mix (e.g., the Message itself is probably a bounded context shared by many pieces).  Instead of building highly orchestrated message pipelines, we could be building choreographed services, even one-off services to support specific needs, composed into operational pipelines that would be highly scalable and configurable in a cloud environment.

It's an interesting approach that I frankly haven't seen anyone take on well yet.  Mostly what I'm seeing is interfaces moving into the cloud first as one great big chunk ... running in a Windows VM somewhere to start off with.  Then comes some decomposition into parts, and finally, maybe, a rational micro-services architecture.  Eventually, I think we'll get there, but I'm not so certain that the emerging victor in the interface engine competition is going to be one of the current leaders.  It's possible, but this sounds very much like one of the classic scenarios found in The Innovator's Dilemma.

I'm not entirely happy with the current situation.  I think cloud needs different thinking and a different way of doing things to handle interfaces. I really hope I don't have to build it myself.

   Keith

Thursday, September 6, 2018

What does it take to become a HealthIT Standards Expert?

As I think back to what it takes to become an expert in Health IT standards, I ponder.
  • Is 10,000 hours of study and implementation the answer?  No.  You can do it faster.
  • Is a really good memory the answer? No.  It helps, but I don't expect you to remember the HL7 root OID (2.16.840.1.113883 -- and yes, I wrote that from memory).  I do expect you to write it down somewhere.
  • Is passion the answer? Maybe, but it isn't sufficient by itself.
  • Is intelligence the answer? It helps, but again, isn't sufficient by itself.
  • Do you have to like the subject?  No.  I hate V2, but I'm still an expert.
  • Is persistence the answer?  Again, it helps, but still isn't enough.
What defines an expert?  An expert is someone with a high degree of skill or knowledge, an authority on the topic.  At least according to one reference standard.

There's a missing piece to all of this, and that's the willingness to share.  An expert has both a willingness to share their knowledge to help solve others' problems, AND a strong desire to acquire new knowledge when they cannot help.

If you have those two key things, it still doesn't make you an expert, but if you have those and time, you will eventually become one.

   Keith

Wednesday, September 5, 2018

Interface Load Estimation Rules of Thumb

Over the years I've developed a lot of rules of thumb for handling capacity planning for interfaces.  Loads are bursty; they have peaks and valleys.  The following rules of the road have worked well for me over 20 years:

Sum the Week and Divide by 5 to get your Worst Day of the Week
Sum the week (all seven days) and divide by 5 to get a daily average load.  Because weekend volumes are usually lighter, dividing seven days of volume by five inflates the average enough to cover the worst day of the week, and regardless of what day you start on, you can ignore workweek variation.

Shift Changes and Breaks follow the rule of 3
Shift changes and breaks can triple your loads (depending on user workflows).  Users want to finish up what they are doing before they leave.

Plan for 5x Peak Loads
Extraordinary events can place additional burden on your system.  Plan for a peak load of 5 times your average.  That will account for about 99.95% of your requirements, give or take.  This has to do with the relationship between the Poisson distribution used to describe "job" arrivals, the exponential distribution of interarrival times, and failure rates (e.g., the failure case where a job arrives when the system is already "at peak capacity").
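
Putting the first three rules together, a back-of-the-envelope estimate looks something like this sketch in Python (the weekly message volume is invented):

    # Hypothetical weekly volume for an interface.
    weekly_messages = 70_000

    worst_day = weekly_messages / 5      # sum the week, divide by 5 -> 14,000/day
    hourly_avg = worst_day / 24          # ~583 messages/hour on the worst day
    shift_change = 3 * hourly_avg        # rule of 3 -> ~1,750/hour at shift change
    design_peak = 5 * hourly_avg         # plan for 5x -> ~2,917/hour design target

    print(f"worst day: {worst_day:.0f} messages/day")
    print(f"shift change burst: {shift_change:.0f} messages/hour")
    print(f"design peak: {design_peak:.0f} messages/hour")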

Load varies Fractally by Time Unit
The last rule is simply this: the deeper you dig (month, week, day of week, hour of day), the more detail you'll find in instantaneous load variation.  More detail isn't better.  If getting more detail doesn't change your estimates ... stop digging.  It's just a rat hole at that point.

I've been able to apply these rules to many systems that involve human workflows: from the number of times I'd have to visit a customer based on the number of computers they had (to estimate my costs for a service contract back in the days when I managed a computer service organization), to the number of dictionary compression jobs per day when I was responsible for a compression services team working on spelling dictionaries, to the number of CDA documents a physician might generate over the course of their day.

   Keith