Thursday, August 31, 2017

URGENT HELP NEEDED (Humor)

Originally found in my personal inbox from a software developer still using a Commodore 64. I thought I'd share it today.



If you're reading this, then you are already part of a chain that goes
back to the early 1980s. Early in the morning on June 4th in 1982,
software engineer Dwayne Harris sat down to write a BLISS module for
the then-new VMS operating system. Little did he know, but a
radioactive bug had crawled into his VAX 11/785 prototype, shorted a
power supply capacitor and opened a worm-hole into another dimension.

A small type-2 semi-demonic entity emerged from this dimension and
took up residence in the VMS source repository. Fragments of the
semi-demonic entity's consciousness were also embedded in Dave
Cutler's subconscious (thus explaining the Windows NT video driver
interface).

Every 2^20 seconds, a secret society of software engineers gathers in
an unused USENET news group to ritually banish this semi-demonic
entity. Things have been going fine, but the old guard is retiring and
moving on to other projects. We are in desperate need of new software
engineers to carry on the work of this once mighty society of software
engineers.

If we fail to achieve a quorum of 0x13 participants in the banishment
ritual, the semi-demonic entity will be released and any number of
modern plagues will fall upon the online public.

In 1995, we started our ritual late and Internet Explorer was released
upon the world. Only through fast action was complete disaster averted
and MS Bob coaxed back into a vault underneath Stanford University.

Because of the recent influx of former redditors into the remains of
the USENET backbone systems, we can no longer perform our rituals. As
an alternative, we have developed this chain letter.

At EXACTLY 7:48:12PM PDT 22 July 2015 (10:48:12PM EDT) and every 2^20
seconds afterwards, we ask you to email a copy of this letter to five
software engineers in your address book. The flux of mystical
representational energy through MAE WEST and MAE EAST should be
sufficient to ward off the evil that now faces us.

Remember, for this spell to work, you must be a software engineer and
send the email to other software engineers.

Wednesday, August 30, 2017

An Ongoing Problem in FHIR

How do you find a problem that was occurring during a particular time span?  This is relevant if you are doing a search for Conditions (problems) that are active within a particular time period for something like a quality measure or clinical decision support rule. As I've previously discussed here, temporal searching is subtle.

So, suppose you have a time period with a start and an end point, and you want to find those conditions which were happening in that time period.  There are only two rules you need to care about:

  1. You can rule out anything where the onset was after the end of the time period.
  2. You can rule out anything where abatement was before the time period started.
What's left?  In the following analysis, I'm ignoring "things that happen at the boundary points". For the sake of argument, we'll assume that time is infinitely divisible and that no two things occur at "exactly the same time".  Obviously we quantize time, and boundary conditions are inevitable.  But they aren't IMPORTANT to this discussion.
  1. Things that had an onset within the time period, or before the time period started.
    1. For those items that had an onset within the time period, it's clearly in the time period!
    2. For those items that had an onset before the time period started, one of three things must have occurred:
      1. The problem abated before the time period started (which is ruled out by rule #2 above).
      2. The problem abated during the time period, in which case it clearly was occurring within the time period for some point in that period.
      3. The problem abated after the time period ended, in which case, the time period is wholly contained within the period in which the problem is active, and therefore was occurring during the time period.
  2. Things that abated within the time period, or after the time period ended.
    1. For those items with an abatement within the time period, they are clearly within the time period.
    2. For those items that abated after the time period ended, one of three things must have occurred:
      1. The problem onset was after the time period ended, in which case it is ruled out by rule #1 above.
      2. The problem had an onset during the time period, in which case it clearly was occurring within the time period for some point in that period.
      3. The problem onset was before the time period started, in which case, the time period is wholly contained within the period in which the problem is active, and therefore was occurring during the time period.
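
In code, those two rules reduce to a simple interval-overlap test.  Here's a minimal sketch in Python (the function name, and the treatment of a missing abatement as "still ongoing", are my own illustration, not part of any FHIR library):

    from datetime import date

    def is_active_during(onset, abatement, start, end):
        """True when a condition overlaps the period [start, end]."""
        # Rule 1: rule out anything whose onset was after the period ended.
        if onset is not None and onset > end:
            return False
        # Rule 2: rule out anything that abated before the period started.
        if abatement is not None and abatement < start:
            return False
        # Everything else overlaps the period somewhere.
        return True

    # A problem with onset in 2016 and no abatement overlaps all of 2017.
    assert is_active_during(date(2016, 5, 1), None,
                            date(2017, 1, 1), date(2017, 12, 31))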

So your FHIR query is Condition?onset-date=le$end&abatement-date=ge$start, using the Condition onset-date and abatement-date search parameters, with $start and $end standing in for the boundaries of your time period.
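
For example, to find problems active at some point during calendar year 2017 (the dates here are purely illustrative):

    GET [base]/Condition?onset-date=le2017-12-31&abatement-date=ge2017-01-01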

Done.  Simple ... err yeah, I'm going to stand by that.

   Keith

P.S. Yeah, so easy I had to come back and reverse the le/ge above.  Duh.

Tuesday, August 29, 2017

Or Leave the Tricky Bits to Me


One of the things that I really enjoy about my job is when I get to play with something particularly challenging, and as a result come away from the experience with a better understanding of how things work, or a better process model.

Oftentimes, code gets away from us as developers (the same is true of standards).  If you've ever been in one of those situations where, as an engineer, you found yourself having developed a piece of software from the middle out, you know what I mean.

Middle-out solutions are what you get when you have a particular problem, and basic principles are simply too basic to provide much help ... and the details are rather nebulous.  I just need to fix this one problem with ... fill in the blank.  And so you find a way to fix that one problem.  Except that later you find an oddball exception that doesn't quite fit.  And then there's another issue in the same space.

After a while you find you have this odd mess of code that just doesn't quite work, because you came at things the wrong way.  And then some thread comes unwoven and it stops working altogether ... at least for that thing you cared about right now.  That thing somehow was important enough (unlike the rest of the work) to make you take a step back and try a different approach.

Somewhere along the line you took the lenses and flipped them around so that now you can see the forest instead of the trees, or vice versa.  And now that strange jumble of code begins to make sense all over again, fitted together in a different way, to your new model of understanding.

That's what I like about my job.  When that happens.


   Keith






Monday, August 28, 2017

Skip the Tricky Bits When You Can

Someone asked me for an OID today.  I have an OID root (or seven), and needed to assign a new OID in one space to represent a particular namespace.  The details aren't important.

I considered several choices.  One of them was someoid.0 and the other was someoid.2 (since someoid.1 was already assigned).  Had I been assigning these OIDs in a meaningful order, it would have made sense for the new OID to sort before the someoid.1 I was already using; even so, I chose to assign someoid.2 instead, even though someoid.0 is perfectly legal.

Why?  Because not everyone understands that an OID can contain a 0 in one of its positions.  And choosing an OID that some might argue with is just going to create a headache for me later, when I'll have to explain the rules about OIDs to them.  I can avoid that by simply choosing a different OID.  Not only have I avoided a future support call, but I've also avoided a potential issue where someone else's incorrect interpretation of a standard could cause me or my customers problems somewhere down the line.
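
To make the point concrete, here's a minimal sketch of a syntactic OID check in Python (the function is illustrative; the arc constraints are the standard ones: the first arc is 0, 1 or 2, and the second is limited to 0-39 under the 0 and 1 roots):

    import re

    def is_plausible_oid(oid):
        """Syntactic sanity check for a dotted OID string."""
        arcs = oid.split(".")
        # Each arc is a non-negative integer; "0" is legal, but no
        # leading zeros on multi-digit arcs.
        if not all(re.fullmatch(r"0|[1-9][0-9]*", a) for a in arcs):
            return False
        # First arc is 0, 1 or 2; second arc is 0..39 under roots 0 and 1.
        if int(arcs[0]) > 2:
            return False
        if int(arcs[0]) < 2 and len(arcs) > 1 and int(arcs[1]) > 39:
            return False
        return True

    assert is_plausible_oid("1.2.3.0")  # a 0 arc is perfectly legal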

It would be nice if standards skipped the tricky bits, but we know they don't.  So, when you have a choice, think about your end-user's experience, and keep it simple.  Not every decision you make will let you do that, but for those that do, simply make it a point to think about it.  You'll be glad you did.

   Keith

Friday, August 25, 2017

Four Reasons Why Blockchain isn't the next big disruptor in HealthIT

Don't get me wrong, blockchain is cool technology, but it is probably NOT the next big disruptor in healthcare.  It's certainly a hammer in search of a nail, but many of the fasteners we are working with in healthcare simply aren't nails.

Fundamentally, blockchain is a way to securely trace (validate) transactions.  For digital currency, the notion of a transaction is fairly simple: I exchange with you some quantity of stuff ... Bitcoin, for example.  The blockchain becomes the evidence of transfer of the stuff.  It's a public ledger of "exchanges".  The value-add of the blockchain is that it becomes a way to verify transactions.

1. The Unit of Exchange is Different

What's the transaction unit in healthcare?  In my world, it is knowledge-related rather than monetary.  The smallest units of knowledge are akin to data types: a medication (code), a condition (code), a lab result (code and value), a procedure (code), an order, an attachment, an address.  Larger units are like FHIR resources, associating data together into meaningful assertions.

2. The Scale of the Problem is Different

Today, there are about 200,000 Bitcoin transactions a day.  In terms of the unit of exchange I mentioned above, a typical CCDA document embodies something on the order of 100 knowledge units.  Let's say there are 150,000 physicians in the US, and each one sees 20 patients a day.  Multiply 150,000 x 20 x 100 = 300 million transactions per day.  To put that number in perspective, Amazon sold about 36 million items on Cyber Monday in 2013.
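
As a quick check on the arithmetic (all of the inputs are the assumptions above):

    # Back-of-the-envelope scale estimate.
    physicians = 150_000        # US physicians
    visits_per_day = 20         # patients seen per physician per day
    units_per_visit = 100       # knowledge units in a typical CCDA document
    total = physicians * visits_per_day * units_per_visit
    print(f"{total:,} knowledge-unit transactions per day")  # 300,000,000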

3. Transactions are Private

When the unit of exchange is an association of an individual (the patient) with a problem, medication, or allergy, asserted by another individual (the provider), it's not the same as when the exchange is of a disclosed public quantity of stuff between two pseudonymous addresses.  Public ledgers, even with some level of protection behind them, still contain a persistent record of all transactions.  After an assertion is made, the effects are pretty permanent, including any damage done, all future assertions to the contrary notwithstanding.  Ask any patient who's ever been falsely accused of drug-seeking behavior.

4. The Fundamental Problem is Different

The challenge in health IT is not "verification" of knowledge exchanges (transactions), but rather "enabling" knowledge exchanges between two parties.  With blockchain, the question of where to go to "get the ledger" isn't an issue.  In healthcare today, it is.

Blockchain is cool tech, no doubt.  Surely there is a use for it in healthcare.  But it isn't the answer to every problem, nor specifically the answer to the "Interoperability" problem.  Right now, though, you can be assured that it is effectively a free square in your next Interoperability buzzword bingo session.

   -- Keith




Thursday, August 24, 2017

Interoperability and HealthIT: Are we there yet?



Are we there yet?  The short answer, quoting a speaker I heard last week, is: "There is no done with this stuff".  The longer answer comes below.

If you are as old as I am, you remember having to keep a case full of WordPerfect printer drivers, Centronics and serial cables, and you might even have had a serial breakout box to help you work out problems setting up printers.  Been there, done that.

What's happened since then?  Well, first we standardized port configurations based on the "IBM PC Standard".  Except that then we had to move to 9-pin serial cables.  And then USB.  And today, wireless.  Drivers were first distributed on disk, then diskette, then CD.  And now you can download them from the manufacturer, or your operating system will do that for you.

If you happen to have a printer that isn't supported, well, if it supports a standard like PostScript, we've got a default driver for that, and for PCL printers, and for several dot-matrix protocols.  So, today you can buy a printer, turn it on, autoconfigure it, and it just works, right?  Mac users had it a bit easier, but they still went from the old-style Mac universal cables to USB to ...

I upgraded my network infrastructure the other day, only to find that my inkjet printer, which had been working JUST fine with all the computers in the house, and with iPhones and iPads, no longer worked with my various Apple devices.  I tracked it down to a compatibility issue between new features of my WiFi router and my old printer.  As a consumer, my expectations of interoperability were definitely NOT met.

Which brings us back to my main point.  The expectation of users with regard to interoperability still isn't being met, even if the situation is improving.  It took us twenty-some years to get from where we were then to where we are now, and some configurations still aren't "Plug and Play" with respect to printing.

To figure out how to measure where we are with regard to interoperability, we first need to figure out what it is we want to measure.  And then we need to figure out how to measure the distance to that goal.  When "where we want to go" is an obscure location, figuring out how far we have to go is a huge challenge.

Let's assume we want "Plug and Play" interoperability.  What does that actually mean?  We probably want to start with a basic platform and set of capabilities.  You have to define that, first functionally, and then in detail so that it can be implemented.  Then we have to talk about how things are going to connect to each other.  Connecting things (even wirelessly) is hard to do right.  Just ask anyone who's ever failed to connect their Bluetooth headset to their cell phone.  Do you have any clue how much firmware (software embedded in hardware) and software is necessary to do that right?  We've actually gotten that down to a commodity item at this stage.

If we look at the evolution of interoperability in hardware spaces such as the above, we can see a progression up the chain of interoperability.

1. Making a connection between components.
This is a progression from wires and switches to programmable interfaces to systems that can automate configuration of a collection of components.
2. Securing a connection over the same.
This is a progression from internal physical security, to technical implementations of electronic security, to better technical implementations, with progressions advancing as technology makes security both easier and harder depending on who owns it.
3. Authenticating/authorizing interconnected components.
We start from just establishing identities, to doing so securely, and from complex manual configurations, to more user friendly configurations, and finally to policy based acceptance.  At some point, some human still has to make a decision, but that's getting easier and easier to accomplish.
4. Integrating via common APIs or protocols.
Granularities start out at a gross level (e.g., a CDA document) and get more refined as time goes by and communication speeds and response times get better.  Interfaces drive from data (a set of bits) to functional (a function to produce a set of bits to understand) and back to data again (finer-grained data) and algorithms (functional instructions, again, on how to produce data).  This is a never-ending cycle.
5. Adapting to capabilities of connected components.
This starts at the level of try and see if it works and respond gracefully to errors, to declaration of optional feature sets, to negotiations between connected components about how they will work together.
6. Discovering things that one can connect to.
We first start by making a list for a component, then by pointing components to lists of things, then by pointing components to places where they can find pointers to lists, and finally, by broadcast protocols where, basically, all you need to know is how to look around your environment.  Generally, though, there will always need to be a first place to look (it might be a radio bandwidth, a multicast address, or a search engine location).
7. Intelligently interconnecting to the environment one is in.
The final destination.  We don't know what this really looks like for the most part.

Where we want to go is that final stage, and arguably, that's what we have finally begun to reach with the end-user experience of installing a printer (with some bobbles).  There are still some hardware limitations on Bluetooth devices because those are mostly small things, but even that has reached stage 6.  For healthcare, we are somewhere around stage 4 with FHIR.  CDS Hooks is arguably stage 5.  Directories and networks like Carequality or Commonwell or the Surescripts RLS will be progress towards stage 6.
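
For what it's worth, here is the progression expressed as a tiny, purely illustrative Python enumeration (the names are mine, not a formal scale), with this post's rough placements:

    from enum import IntEnum

    class InteropStage(IntEnum):
        CONNECT = 1        # making a connection between components
        SECURE = 2         # securing a connection over the same
        AUTHENTICATE = 3   # authenticating/authorizing components
        INTEGRATE = 4      # integrating via common APIs or protocols
        ADAPT = 5          # adapting to capabilities of components
        DISCOVER = 6       # discovering things one can connect to
        INTELLIGENT = 7    # intelligently interconnecting

    # Rough placements from the discussion above.
    placements = {
        "FHIR": InteropStage.INTEGRATE,
        "CDS Hooks": InteropStage.ADAPT,
        "Carequality/Commonwell/Surescripts RLS": InteropStage.DISCOVER,
    }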

The progression through this stack takes time, and the more complex the system, the longer it takes.  Consider that printers, headsets, and even cell phones and laptops aren't enterprise-class computing systems.  The IT industry in general is making progress, but we aren't yet at a stage where enterprise-level ERP, CRM, and FMS systems are much further along than stage 5 or 6, even in multi-million dollar industries.  The enterprise-level EHR, RCM, and EDI systems used in similar-sized businesses are moving a bit more slowly (a classic issue in HCIT).

So, back to measurement.  "Are we there yet?" has a context.  If your goal is to get to stage 7, be prepared to wait a while and to continue to be frustrated.  In 2010, my family drove nearly 5000 miles to get sushi.  There were plenty of stops along the way, and getting to each one was exciting.  If you want to have fun along the journey, identify the waypoints, and make a point that this IS your NEXT destination.  Otherwise, sushi is still a very long way off.

   -- Keith




IHE Quality, Research and Public Health Technical Framework Supplements Published


The IHE Quality, Research and Public Health (QRPH) Technical Committee has published the following supplements for Trial Implementation as of August 18, 2017:
New Supplements
  • Family Planning Version 2 (FPv2) - Rev. 1.1
  • Mobile Retrieve Form for Data Capture (mRFD) - Rev. 1.1
Updated Supplements
  • Aggregate Data Exchange (ADX) - Rev. 2.1
  • Birth and Fetal Death Reporting-Enhanced (BFDR-E) - Rev. 2.1
  • Retrieve Process for Execution (RPE) - Rev. 4.1
  • Vital Records Death Reporting (VRDR) - Rev. 3.1
The profiles contained within the above documents may be available for testing at subsequent IHE Connectathons. The documents are available for download at http://ihe.net/Technical_Frameworks. Comments on these and all QRPH documents are welcome at any time and can be submitted at QRPH Public Comments.