
Thursday, August 31, 2017

URGENT HELP NEEDED (Humor)

Originally found in my personal inbox from a software developer still using a Commodore 64. I thought I'd share it today.



If you're reading this, then you are already part of a chain that goes
back to the early 1980s. Early in the morning on June 4th in 1982,
software engineer Dwayne Harris sat down to write a BLISS module for
the then-new VMS operating system. Little did he know, but a
radioactive bug had crawled into his VAX 11/785 prototype, shorted a
power supply capacitor and opened a worm-hole into another dimension.

A small type-2 semi-demonic entity emerged from this dimension and
took up residence in the VMS source repository. Fragments of the
semi-demonic entity's consciousness were also embedded in Dave
Cutler's subconscious (thus explaining the WindowsNT video driver
interface.)

Every 2^20 seconds, a secret society of software engineers gathers in
an unused USENET news group to ritually banish this semi-demonic
entity. Things have been going fine, but the old guard is retiring and
moving on to other projects. We are in desperate need of new software
engineers to carry on the work of this once mighty society of software
engineers.

If we fail to achieve a quorum of 0x13 participants in the banishment
ritual, the semi-demonic entity will be released and any number of
modern plagues will fall upon the online public.

In 1995, we started our ritual late and internet explorer was released
upon the world. Only through fast action was complete disaster averted
and MS Bob coaxed back into a vault underneath Stanford University.

Because of the recent influx of former redditors into the remains of
the USENET backbone systems, we can no longer perform our rituals. As
an alternative, we have developed this chain letter.

At EXACTLY 7:48:12PM PDT 22 July 2015 (10:48:12PM EDT) and every 2^20
seconds afterwards, we ask you to email a copy of this letter to five
software engineers in your address book. The flux of mystical
representational energy through MAE WEST and MAE EAST should be
sufficient to ward off the evil that now faces us.

Remember, for this spell to work, you must be a software engineer and
send the email to other software engineers.

Wednesday, August 30, 2017

An Ongoing Problem in FHIR

How do you find a problem that was occurring during a particular time span?  This is relevant if you are doing a search for Conditions (problems) that are active within a particular time period for something like a quality measure or clinical decision support rule. As I've previously discussed here, temporal searching is subtle.

So, suppose you have a time period with a start and an end, and you want to find those conditions which were happening in that time period.  There are only two rules you need to care about:

  1. You can rule out anything where the onset was after the end of the time period.
  2. You can rule out anything where abatement was before the time period started.
What's left?  In the following analysis, I'm ignoring "things that happen at the boundary points". For the sake of argument, we'll assume that time is infinitely divisible and that no two things occur at "exactly the same time".  Obviously we quantize time, and boundary conditions are inevitable.  But they aren't IMPORTANT to this discussion.
  1. Things that had an onset within the time period, or before the time period started.
    1. For those items that had an onset within the time period, it's clearly in the time period!
    2. For those items that had an onset before the time period started, one of three things must have occurred:
      1. The problem abated before the time period started (which is ruled out by rule #2 above).
      2. The problem abated during the time period, in which case it clearly was occurring within the time period for some point in that period.
      3. The problem abated after the time period ended, in which case, the time period is wholly contained within the period in which the problem is active, and therefore was occurring during the time period.
  2. Things that abated within the time period, or after the time period ended.
    1. For those items with an abatement within the time period, they are clearly within the time period.
    2. For those items that abated after the time period ended, one of three things must have occurred:
      1. The problem onset was after the time period ended, in which case it is ruled out by rule #1 above.
      2. The problem had an onset during the time period, in which case it clearly was occurring within the time period for some point in that period.
      3. The problem onset was before the time period started, in which case, the time period is wholly contained within the period in which the problem is active, and therefore was occurring during the time period.

So your FHIR query is Condition?onset=le$end&abatement=ge$start

Done.  Simple ... err yeah, I'm going to stand by that.
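
To make that concrete, here's roughly what the request might look like for a calendar-2017 measurement period.  This is a sketch, not gospel: I'm keeping the parameter names from the query above, but in FHIR STU3 the date-valued parameters are spelled onset-date and abatement-date, so check what your server actually advertises, and the patient id below is made up.

GET [base]/Condition?patient=example&onset=le2017-12-31&abatement=ge2017-01-01

Everything with an onset on or before December 31 and an abatement on or after January 1 comes back, which is exactly the set the two rules above leave standing.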

   Keith

P.S. Yeah, so easy I had to come back and reverse the le/ge above.  Duh.

Tuesday, August 29, 2017

Or Leave the Tricky Bits to Me


One of the things that I really enjoy about my job is when I get to play with something particularly challenging, and as a result come away from the experience with a better understanding of how things work, or a better process model.

Oftentimes, code gets away from us as developers (the same is true of standards).  If you've ever had one of those situations where, as an engineer, you found yourself in the position of having developed a piece of software from the middle-out, you know what I mean.

Middle-out solutions are where you have a particular problem, and basic principles are simply too basic to provide much help ... and details are sometimes rather nebulous.  I just need to fix this one problem with ... fill in the blank.  And so you find a way to fix that one problem.  Except that later you find an oddball exception that doesn't quite fit.  And then there's another issue in the same space.

After a while you find you have this odd mess of code that just doesn't quite work, because you came at things the wrong way.  And then some thread comes unwoven and it stops working altogether ... at least for that thing you cared about right now.  That thing somehow was important enough (unlike the rest of the work) to make you take a step back and try a different approach.

Somewhere along the line you took the lenses and flipped them around so that now you can see the forest instead of the trees, or vice versa.  And now that strange jumble of code begins to make sense all over again, fitted together in a different way, to your new model of understanding.

That's what I like about my job.  When that happens.


   Keith





Monday, August 28, 2017

Skip the Tricky Bits When You Can

Someone asked me for an OID today.  I have an OID root (or seven), and needed to assign a new OID in one space to represent a particular namespace.  The details aren't important.

I considered several choices.  One of them was someoid.0 and the other was someoid.2 (since someoid.1 was already assigned).  Had I been assigning these OIDs in a meaningful order, it would have made sense for the new OID to appear before the one I was already using (someoid.1); even so, I chose to assign it to someoid.2 instead, even though someoid.0 is perfectly legal.

Why?  Because not everyone understands that an OID can contain a single 0 in one of its positions.  And choosing an OID that some might argue with is just going to create a headache for me later, where I'm going to have to explain the rules about OIDs to them.  I can avoid that by simply choosing a different OID.  Not only have I avoided a future support call, but I've also avoided a potential issue where someone else's incorrect interpretation of a standard could cause me or my customers problems somewhere down the line.

It would be nice if standards skipped the tricky bits, but we know they don't.  So, when you have a choice, think about your end-user's experience, and keep it simple.  Not every decision you make will let you do that, but for those that do, simply make it a point to think about it.  You'll be glad you did.

   Keith

Friday, August 25, 2017

Four Reasons Why Blockchain isn't the next big disruptor in HealthIT

Don't get me wrong, Block Chain is cool technology, but it is probably NOT the next big disruptor in healthcare.  It's certainly a hammer in search of a nail, but so many of the fasteners we are working with in healthcare simply aren't nails.

Fundamentally, Block Chain is a way to securely trace (validate) transactions.  For digital currency, the notion of a transaction is fairly simple: I exchange with you some quantity of stuff ... Bitcoin, for example.  The block chain becomes the evidence of transfer of the stuff.  It's a public ledger of "exchanges".  The value add of the block chain is that it becomes a way to verify transactions.

1. The Unit of Exchange is Different

What's the transaction unit in healthcare?  In my world, it is knowledge related, rather than monetarily related.  The smallest units of knowledge are akin to data types: a medication (code), a condition (code), a lab result (code and value), a procedure (code), an order, an attachment, an address.  Larger units are like FHIR resources, associating data together into meaningful assertions.

2. The Scale of the Problem is Different

Today, there are about 200,000 Bitcoin transactions a day.  If we look at the unit of exchange I mentioned above, a typical CCDA document embodies something on the order of 100 knowledge units.  Let's say there are 150,000 physicians in the US, and each one sees 20 patients a day.  Multiply 150,000 x 20 x 100 = 300 million transactions per day.  To put that number in perspective, Amazon sold about 36 million items on Cyber Monday in 2013.

3. Transactions are Private

When the unit of exchange is an association of an individual (the patient) with a problem, medication or allergy, asserted by another individual (the provider), it's not the same as when the exchange is of a disclosed, public quantity of stuff between two pseudonymous addresses.  Public ledgers, even with some level of protection behind them, still contain a persistent record of all transactions.  After an assertion is made, the effects are pretty permanent, including any damage done, all future assertions to the contrary notwithstanding.  Ask any patient who's ever been falsely accused of drug-seeking behavior.

4. The Fundamental Problem is Different

The challenge in health IT is not "verification" of knowledge exchanges (transactions), but rather, "enabling" knowledge exchanges between two parties.  With block chain, the question of where to go to "get the ledger" isn't an issue.  In healthcare today, it is.

Block chain is cool tech, no doubt.  Surely there is a use for it in healthcare.  But also, it isn't the answer to every problem, nor specifically the answer to the "Interoperability" problem.  Though right now, you can be assured that it is effectively a free square in your next Interoperability buzzword bingo session.

   -- Keith



Thursday, August 24, 2017

Interoperability and HealthIT: Are we there yet?



Are we there yet?  The short answer, as a speaker I quoted last week put it, is: "There is no done with this stuff".  The longer answer comes below.

If you are as old as I am, you remember having to have a case full of WordPerfect printer drivers, Centronics and serial cables, and you might even have had a serial breakout box to help you work out problems setting up printers.  Been there, done that.

What's happened since then?  Well, first we standardized port configurations based on the "IBM PC Standard".  Except that then we had to move to 9 pin serial cables.  And then USB.  And today, wireless.  Drivers were first distributed on disk, then diskette, then CD.  And now you can download them from the manufacturer, or your operating system will do that for you.

If you happen to have a printer that isn't supported, well, if it supports a standard like Postscript, we've got a default driver for that, and for PCL printers, and several dot matrix protocols.  So, today you can buy a printer, turn it on, autoconfigure it, and it just works, right?  Mac users had it a bit easier, but they still went from the old-style Mac universal cables to USB to ...

I upgraded my network infrastructure the other day, and come to find out my inkjet printer that had been working JUST fine on all the computers in the house, and iPhones and iPads, no longer worked on my various Apple devices.  I tracked it down to a compatibility issue between new features of my WiFi router and my old printer.  As a consumer, my expectations of interoperability were definitely NOT met.

Which brings us back to my main point.  The expectation of users with regard to interoperability still isn't being met, even if the situation is improving.  It took us twenty some years to get from where we were then to where we are now, and some configurations still aren't "Plug and Play" with respect to printing.

To figure out how to measure where we are with regard to interoperability, we first need to figure out what it is we want to measure.  And then we need to figure out how to measure the distance to that goal.  When "where we want to go" is an obscure location, figuring out how far we have to go is a huge challenge.

Let's assume we want "Plug and Play" interoperability.  What does that actually mean?  We probably want to start with a basic platform and set of capabilities.  You have to define that, first functionally, and then in detail so that it can be implemented.  Then we have to talk about how things are going to connect to each other.  Connecting things (even wirelessly) is hard to do right.  Just ask anyone who's ever failed to connect their Bluetooth headset to their cell phone.  Do you have any clue how much firmware (software embedded in hardware) and software is necessary to do that right?  We've actually gotten that down to a commodity item at this stage.

If we look at the evolution of interoperability in hardware spaces such as the above, we can see a progression up the chain of interoperability.

1. Making a connection between components.
This is a progression from wires and switches to programmable interfaces to systems that can automate configuration of a collection of components.
2. Securing a connection over the same.
This is a progression from internal physical security, to technical implementations of electronic security, to better technical implementations, with progressions advancing as technology makes security both easier and harder depending on who owns it.
3. Authenticating/authorizing interconnected components.
We start from just establishing identities, to doing so securely, and from complex manual configurations, to more user friendly configurations, and finally to policy based acceptance.  At some point, some human still has to make a decision, but that's getting easier and easier to accomplish.
4. Integrating via common APIs or protocols.
Granularities start out at a gross level (e.g., a CDA Document), and get more refined as time goes by and communication speed and response times get better, and drive from data (a set of bits) to functional (a function to produce a set of bits to understand) and back to data again (finer grained data) and algorithms (functional instructions again on how to produce data).  This is a never ending cycle.
5. Adapting to capabilities of connected components.
This starts at the level of try and see if it works and respond gracefully to errors, to declaration of optional feature sets, to negotiations between connected components about how they will work together.
6. Discovering things that one can connect to.
We first start by making a list for a component, then by pointing components to lists of things, then by pointing components to places where they can find pointers to lists, and finally, by broadcast protocols where basically, all you need to know is how to look around your environment.  Generally, there will always need to be a first place to look, though (it might be a radio bandwidth, a multicast address, or a search engine location).
7. Intelligently interconnecting to the environment one is in.
The final destination.  We don't know what this really looks like for the most part.

Where we want to go is that final stage, and arguably, that's what we have finally begun to reach with the end user experience of installing a printer (with some bobbles).  There are still some hardware limitations on Bluetooth devices because those are mostly small things, but even that has reached stage 6.  For healthcare, we are somewhere around stage 4 with FHIR.  CDS Hooks is arguably stage 5.  Directories and networks like Carequality or Commonwell or the Surescripts RLS will be progress towards stage 6.

The progression down this stack takes time, and the more complex the system, the longer it takes.  Consider that printers, headsets and even cell phones and laptops aren't enterprise class computing systems.  The IT industry in general is making progress, but we aren't at a stage yet where enterprise level ERP, CRM and FMS systems are much further along than stage 5 or 6, even in multi-million dollar industries.  The enterprise level EHR and RCM and EDI systems used in similar sized businesses are moving a bit slower (a classic issue in HCIT).

So, back to measurement.  "Are we there yet?" has a context.  If your goal is to get to stage 7, be prepared to wait a while and to continue to be frustrated.  In 2010, my family drove nearly 5000 miles to get sushi.  There were plenty of stops along the way, and getting to each was exciting.  If you want to have fun along the journey, identify the way points, and make a point that this IS your NEXT destination.  Otherwise, sushi is still a very long way off.

   -- Keith



IHE Quality, Research and Public Health Technical Framework Supplements Published

IHE Quality, Research and Public Health Technical Framework Supplements Published for Trial Implementation.


The IHE Quality, Research and Public Health (QRPH) Technical Committee has published the following supplements for Trial Implementation as of August 18, 2017:
New Supplements
  • Family Planning Version 2 (FPv2) - Rev. 1.1
  • Mobile Retrieve Form for Data Capture (mRFD) - Rev. 1.1
Updated Supplements
  • Aggregate Data Exchange (ADX) Rev. 2.1
  • Birth and Fetal Death Reporting-Enhanced (BFDR-E) - Rev. 2.1
  • Retrieve Process for Execution (RPE) - Rev. 4.1
  • Vital Records Death Reporting (VRDR) - Rev. 3.1
The profiles contained within the above documents may be available for testing at subsequent IHE Connectathons. The documents are available for download at http://ihe.net/Technical_Frameworks. Comments on these and all QRPH documents are welcome at any time and can be submitted at QRPH Public Comments.

Tuesday, August 22, 2017

Old HP Printer New DLINK Router Problem Solved

So originally I thought this problem was an interaction between QoS (quality of service) capabilities of my new DLINK 890 Router and my old HP 7520 three-in-one printer because when I turned off QoS (resetting my router), the problem got resolved.  Actually, all that did was cut off connections to the printer and force the printer to retry getting a network connection, which it did.  And then it did the thing where it worked for a little while and so I thought the problem was truly solved.

But, in fact it wasn't, as proven to me by my children printing out homework (or failing to do so).  I finally traced it back to a new router capability whereby the router could do some fancy dancing to get twice the bandwidth on the 2.4GHz channel with smarter hardware.  My printer is old and completely functional, but not so smart.  Fortunately, I could turn off this feature just for the 2.4GHz band, which the printer needs but nothing else in the house really uses, and now everything seems to be working again.

How does this relate to healthcare standards?  It goes back to Asynchronous bilateral cutover -- aka backwards compatibility mode.  My new router has a mode in which it works compatibly with old stuff, and a mode in which it simply leaves that stuff behind.  The default setting is to remain compatible, but of course I knew better and messed things up for a while.

After reading through various and sundry message forums for both my router and my printer, I found nothing that would help me identify or cure this problem.  Pure slogging and some knowledge of protocols and interfaces was all that really helped.  In the end, I turned off 20/40 MHz coexistence, set the channel width to 20 MHz (the original standard), and now my printer connects and seems to work just fine on the new router.

What does that mean in the realm of implementing healthcare IT standards?

  1. Backwards compatibility is good.  Testing is better.  The 20/40 MHz coexistence feature is supposed to detect when 20 MHz equipment is in use so that the router can configure itself to talk to it, but it doesn't quite work with the hardware I'm using.
  2. Negotiating interface levels is good, but if you didn't design an interface to negotiate in the first place, you are likely to have problems.  Consider HTTP 1.0 vs. 1.1, TLS 1.0 and later releases, et cetera.  New protocols should be able to downgrade (see the sketch after this list).
  3. Make it possible for systems to have deterministic behavior controlled by a human.  That way, when all else fails, an SME can tell the system exactly what to do.  This is basically what I had to do for my printer, and for what I'm doing, it is a completely satisfactory solution.
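
For a picture of what graceful downgrade looks like when it is designed in from the start, consider the HTTP case from #2.  The client announces the highest version it speaks, and the server answers at the highest version it actually supports.  A rough sketch (the host name is hypothetical):

The request (client speaks 1.1):

GET /status HTTP/1.1
Host: printer.example.local

The response (server only speaks 1.0):

HTTP/1.0 200 OK
Content-Type: text/plain
Connection: close

The client asked for 1.1, got a 1.0 answer, and quietly drops back to 1.0 behavior (no persistent connections, no chunked encoding).  My router and printer had no such fallback conversation for 20/40 MHz coexistence, which is why rule #3 -- a human setting the channel width by hand -- turned out to be the fix.
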
   Keith

Friday, August 18, 2017

What's my Doctor's Direct Address

Lisa Nelson [a self-described CDA SME, Wife, Mother and designated daughter of two octogenarians] gave quite a fanciful skit at the ONC meeting last week.  In it, she pretended to be interrupted by her cell phone, and had conversations with her youngest child [who needed a physical sent for camp], her husband [who was sick while travelling], and her eldest child regarding the fact that one of the grandparents had fallen and was in the hospital.  Throughout, her response was the same: get their Direct address and I'll send ... followed by some followup on what to do next [which was mostly not to worry, because she had things under control].  That skit, she says, is her dream.

I believe in dreams.  An audience member sitting next to me said in an aside: "I'm not sure many providers know or could even find out what their Direct Address was."

I decided to test this out, because I don't actually know the answer to that question for my provider. Nor does he, and NOT for lack of trying.  My secure message to him was quite simple (I've somewhat redacted his responses to preserve his privacy ... my data I feel free to share, but not his):


Do you have a direct address that could be used by other providers to send you data (e.g., a CCDA)? What would I tell them when they ask for it?

Very Same Day, 4.5 hours later.

Hello Keith, 

I have never been asked these technical questions in past and I am not sure. However, I have sent a message to our [Vendor] team to let me know how this works and I will let you know accordingly . I have not heard back from them as yet. I certainly know there are non [my hcp organization] providers who already have a link and they transfer the records directly to us electronically. [other hcp organization] hospital is one of them. 

Best. 
/S/


Following Day

Hello Keith, 

I have left message with our [Vendor] team again and I have still not heard form our [Vendor] team for an answer. I am away from tomorrow for the next week. Certainly there is a electronic link for transferring records because, what I have seen is that [various] Hospitals send me some of the hospital records directly. I believe they have a link, not sure if it is CCDA. I think the [Vendor] team can establish a link if there is not already one in place with the provider you are mentioning. 

Thanks for your patience. 

Best. 
/S/


I responded thanking him for his diligence and let him know my request wasn't urgent.  I then sent an e-mail off to the Medical Director for Informatics to track it down from the other end.

My point here, though, is NOT to fault my healthcare provider.  When we designed Direct Messaging, we included the notion that patients would be empowered by it in the addressing scheme.  Anyone could have a Direct Address, and it would be a secure way for all stakeholders in the HealthIT ecosystem to exchange information.

But it's not, and there are several different explanations I've considered for why that might be:

  1. Unintermediated electronic communication between patients and their physicians is avoided by policy due to HIPAA and ...
  2. Provider to provider communication is OK (I can get just about any provider to fax records with a phone call to another provider ... but fax them to my HOME number? God forbid, and HIPAA forfend.)
Now you and I know that the HIPAA boogeyman here SHOULDN'T exist.  But it does.  And because of it and years of prior policy, there's a challenge.

Other challenges include patient matching and trust.  When a provider gets a direct message from another provider about a patient, they implicitly trust the source, and are willing to match the patient with the data in the message.  But when patients start communicating via unintermediated electronic means, well, the information goes through a different set of filters.  The first step then is to be sure that one understands WHO the source of the communication is.  Did it come from, and was it intended to be about, "my patient"?

So, handing out Direct addresses for providers to patients seems culturally to be a bad idea, because you cannot actually know how they'll use it.

The answer here is to flip the addressing scheme, I think.  My Dr's Direct Address for me should be myuserid's+routing@hcpdomain.  When I give that address out and someone uses it, the message, when received, can be securely accessed by hcpdomain, which can route it internally as appropriate based on what I used for the route.  So, if my userid was mg, and I set up a pcp route pointing to my doctor, his address would be mg's+pcp@hcpdomain.

We don't have to change the Direct Specification to support this.  That's already baked into the specification.  Patient matching is built in when mg@hcpdomain is my identity as known to my healthcare provider.

It's not his "direct address", but rather "my direct address" for him.

Let's make it easy for patients and doctors to figure this stuff out.  The Direct Project was supposed to be the on-ramp to the health exchange super-highway.  What good is an on-ramp if patients cannot find it?

    -- Keith

P.S. I had another post planned for the day, but the communication from my provider led me to rethinking Direct addressing, and I thought it relevant to the topics already discussed this week.

P.P.S. And an update for the win: I asked someone who would know at my HCP's organization when I wrote this post, and was given the answer within 12 hours.  Unencrypted e-mail is SO effective (and completely legitimate for me and my HCP to use, as I have given permission for that form of communication).







Thursday, August 17, 2017

When All Developers are Not Above Average ...

The reality of the world is that we don't all develop out of an office in Lake Wobegon.  Not all developers are above average.  Healthcare is challenging, we already know that.  By passing that challenge onto developers, we are simply transferring risk to the developer.  Making the hard stuff easy mitigates the risk, and makes it less likely for developers to do it wrong.

I talked about this earlier this week at ONC's Beyond Boundaries meeting, in the context of "Fit for Purpose" Standards.  The human mind has a remarkable capacity to find and take the easiest path. We need to design our standards with that in mind, and use those human factors IN the standard itself.

Good standards tell developers what to do to do the right thing.
Better standards tell developers how to tell if they are doing the right thing.
Great standards make it easy for developers to do the right thing.

Good standards are testable.
Better standards have tests readily available.
Great standards are computably testable*.

Good healthcare standards make health information exchange possible.
Better healthcare standards make health information exchange reliable and accountable.
Great healthcare standards make health information exchange easy.

I'll talk about measuring easy in a future post.

   -- Keith

* All the data necessary to validate conformance to the standard is readily available, from data models to value sets and everything in between.
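
As a small illustration of what "computably testable" looks like in my corner of the world today: CCDA conformance statements get turned into Schematron rules that a validator can run directly against an instance.  A trimmed-down (and purely illustrative) rule might read:

<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <ns prefix="cda" uri="urn:hl7-org:v3"/>
  <pattern>
    <rule context="cda:ClinicalDocument">
      <!-- CONF: the US Realm Header requires realmCode="US" -->
      <assert test="cda:realmCode[@code='US']">
        The document SHALL contain a realmCode with @code="US".
      </assert>
    </rule>
  </pattern>
</schema>

Every SHALL that can be expressed this way is a test a machine can run for free; the ones that can't, because a data model or value set isn't published in computable form, are the gaps the footnote above is complaining about.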

You really don't understand the CCDA standard when ...


I often hear the complaint (about CDA documents) that: I just want to see ...

At the ONC meeting on Tuesday, a provider remarked that, in order to understand what was happening for a patient, they had read through 1,800 pages of CCDA documents.  This was accompanied by the statement that they understood the standard.

I do (and did) protest.  If providers are reading that many pages, then the only thing understood about the standard is the word "Document", and understanding of the application of standards to interoperability in general is also lacking.  Just as in medicine where there is no singular magic pill to make a patient healthy, there isn't just one standard to apply to the various problems associated with interoperability.

CDA documents are snapshots in time of the data associated with a patient care event, containing the dozen and a half data elements defined by the Common Clinical Data Set.  The CCD is supposed to contain the relevant and pertinent data, but we know that what is relevant and pertinent to one provider isn't necessarily so to another.  Even so, it's how the data is presented to the end user (the provider) that is the problem, not the standard that gets the data in that data set from one provider to another.

Consider multiple ways to address this issue that have all been worked in other standards efforts:
  1. Consolidate data from multiple documents into a reasonable longitudinal view that reconciles information from across multiple sources of data.  There are OTHER standards that explain how to do this (e.g., the IHE Reconciliation Profile).  CCDA is about moving the data, and just like the web, you have to apply other standards to solve other problems.
  2. Use an XSL Stylesheet to make the data easier to read and arrange according to provider preferences (a minimal sketch follows after this list).  HL7 and ONC ran a CDA Rendering challenge that produced a number of freely available open source solutions.  CDA is about communication of data.  It is up to applications to make it usable.  CDA isn't a standard for display, or a standard for application function.  It's a standard for communication.
  3. Allow providers to incorporate the data as it becomes available.  If you implement workflows that support a 360 closed loop referral/consultation process, and enable incorporation of the data into the EHR when it becomes available, you avoid trying to manage and consume multiple documents in "one swell FUPE*".
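
Here's the promised sketch for #2: a deliberately tiny XSLT that does nothing but pull the document title and each section's title and narrative into HTML, in whatever order you choose to walk them.  The real open source renderers from the challenge do far more (including mapping CDA narrative markup to proper HTML), but the point stands: display is an application problem you solve once, outside the exchange standard.

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:cda="urn:hl7-org:v3">
  <xsl:output method="html"/>
  <xsl:template match="/cda:ClinicalDocument">
    <html><body>
      <h1><xsl:value-of select="cda:title"/></h1>
      <!-- one heading per section; reorder or filter to taste -->
      <xsl:for-each select="cda:component/cda:structuredBody/cda:component/cda:section">
        <h2><xsl:value-of select="cda:title"/></h2>
        <div><xsl:copy-of select="cda:text"/></div>
      </xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
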
FHIR isn't going to magically change the challenge of viewing "all the data", but it is going to change the approach used by folks (and that will be the subject of a future post).

   -- Keith

* That's not a misspelling, but rather an acronym standing for Fowled-Up Process Execution.

Tuesday, August 15, 2017

Beyond Boundaries Day 1 Takeaways

Information Blocking: The glass is either half empty or half full with regard to interop progress, depending upon where you stand to benefit.  Best data is two + years old, hard to understand if it is even relevant.  Progress is happening, best presentation showed the upward trends.  We need more/better data with real-time measures.  John Everson has the best presentation and backup for his assertions about progress.

Vocabulary (Semantics): If it isn't critical to the care provided, it is not as important.  Most of our discussion was around SNOMED CT, LOINC and RxNorm (aka SOLOR).  ICD?  CPT?  Not necessarily relevant to clinical care.  VSAC is good.

Interoperability Networks and Infrastructure: I find it telling that, among the 6 of the 8 "nationwide networks" participating in the discussion, half are making progress on connecting to each other (Carequality, Commonwell and Surescripts) and are working towards the one network/multiple carriers model, while the others are not quite there yet.

APIs: Perhaps the most boring and exciting panel yet, in that everyone agreed that SMART on FHIR is the way to go.  Beyond that, the focus is in part where some would direct attention differently.  Essentially, the battle about the standard is over; the contest seems to be about who might pick up the implementation guide honorary mention, and the front runner (Argonaut Project) wasn't even on the panel ... (though Micky was the moderator of the next one).  FWIW, SMART is out for ballot on the standards track in HL7 this cycle.

Third Party Uses: Standards we know (IHE, HL7 V2, V3, FHIR) = Cool, FHIR = Very Cool.  We R doing kewl stuff.  Noted complete absence of X12N from the discussion, even in payer channels where that might be natural to consider.

Some final notes: Tomorrow I am on the starting panel, Fit for Purpose, and then we will hear from dietary/nutritional, LTC, and behavioral health providers in a session titled "Across the Continuum", with a wrap-up from a panel moderated by John Halamka, and including Clem McDonald, Walter Sujansky and Aneesh Chopra.  I didn't see either John or Aneesh today, but both Clem and Walter made their presence known today.


Monday, August 14, 2017

More Thoughts on OAuth2 Dynamic Client Registration in SMART on FHIR

Avinash Shanbhag writes:


Hi Keith

We had a question for you, on your blog on “Dynamic Registration”, which I tried to post on the blog, but, somehow I don’t see it posted (Not sure, if you have ONC/HHS email filter on the comments :-)).


==== Excerpt from Blog ==

Think about the fact that the application on your phone and the same application on your wife's phone, even though they are talking to the same endpoint, don't necessarily know about each other, and so if using Dynamic Registration probably get assigned separate client_id and client_secret values.
==== Upto here ===

From the above excerpt, it seems to indicate that each instance of App would get a separate Client ID/Secret. Is that really true? Our understanding is that the client ID/Secret is generated at the “App” level, same as any manual or self-service registration process would do;  and once done, the same ID (and secret if Confidential App) will be used by each instance of the App. So, in a sense, dynamic registration is no different than manual registration other than the obvious fact that this is done via API.

Here’s link to the SMART team’s web page that describes both the manual and dynamic approach

Could you clarify if we misunderstood your point, or if our understanding of the dynamic registration spec is wrong? If my post gets published, feel free to comment publicly.

Thanks in advance and looking forward to seeing you at the Interop Forum tomorrow!

Regards
Avinash


If this is the dynamic registration call (extracted from the link above):

POST /register HTTP/1.1
Content-Type: application/json
Accept: application/json
Host: authorize-dstu2.smarthealthit.org

{
   "client_name": "Cool SMART App",
   "redirect_uris": [
     "https://srv.me/app/cool"
   ],
   "token_endpoint_auth_method": "none",
   "grant_types": ["authorization_code"],
   "initiate_login_uri": "https://srv.me/app/launch.html",
   "logo_uri": "https://srv.me/img/cool.png",
   "scope": "launch launch/patient launch/encounter patient/*.read user/*.* openid profile"
}
Here are some of the problems I see in it:


  1. There's nothing in the above that allows me to trust the application registration content in any way.

    Anyone can use this same data to register an application.  The application itself runs on the patient's device.  If giving the same data gives back the same client_id and client_secret, then these items really aren't secret.  Anyone can get the redirect_uri, the initiate_login_uri, the logo_uri and the client_name.  How does one get these?  Build a FHIR endpoint and ask the application to register itself against your endpoint.  It will gleefully tell you what it uses to register itself.
  2. Knowing what an application uses to register itself, one can easily determine what client_id and client_secret is assigned to that application.

    This can be done by building an application that follows the dynamic registration protocol using the values extracted via the exploit I explain in #1 above.
  3. There's nothing linking this registration back to a responsible party EXCEPT for an https: URL or three.

    I have 30 or 40 HTTPS URLs I control, and all I needed was an email address to create one.  I think that providers need to take some precautions to safeguard healthcare data, and that means that there needs to be some way to trace it back to a responsible entity.  That's a little bit more than what is required to create an https URL.  Also, not all redirect or login URIs would be to the Interwebs.
  4. Use of token_endpoint_auth_method assumes public apps are supported (applications that cannot manage a client_secret), and that client_secret won't be used.

    I'd prefer to put a "somebody else's problem field" around application identity management if I could.  Otherwise, I have to deal with a lot more infrastructure than I want, but some identity providers barf at this today (you know who you are).  Either I have to use an identity provider that supports public apps, or I can restrict support to private apps.  The latter seems to be preferable given the infrastructure needed to manage application identities.

    In the example given, there's no reason why this application needs to be public, given that its redirect and login URIs are "on the web".

So, client_id and client_secret, if created and maintained on a per-registration basis, need something extra to ensure that you can safely consolidate multiple registrations into a singular client_id and client_secret for any application registering with the same information.  Otherwise, to have these values actually be an id and a secret that MEAN anything, you have to do more work.  What good is a secret if everyone can know it?

That's where I get into the "too much infrastructure" problem, because now we start introducing stuff like PKI to be able to verify publishers of software statements, or external trust bundles (a la DirectTrust or NATE or ...) and ...
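
For what it's worth, the dynamic registration RFC (7591) already anticipates part of this: a registration request can carry a software_statement, a JWT signed by a publisher the authorization server has decided to trust, with the client metadata carried as claims inside it.  Such a request might look roughly like this (the host is the one from the example above; the JWT value is a truncated placeholder):

POST /register HTTP/1.1
Content-Type: application/json
Accept: application/json
Host: authorize-dstu2.smarthealthit.org

{
   "software_statement": "eyJhbGciOiJSUzI1NiIsImtpZCI6...",
   "scope": "launch launch/patient patient/*.read openid profile"
}

That answers the "who do I blame" question only by moving it: somebody still has to issue, distribute and police those signing keys, which is exactly the trust-bundle infrastructure I'm grumbling about.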

Dynamic registration might be the way forward, but it doesn't appear to be ready yet for prime time.

   Keith

Friday, August 11, 2017

Thoughts on OAuth2 Dynamic Client Registration in SMART on FHIR

Some weeks I'm just slow.  I probably spent a week trying to figure out how to deal with Client registration for SMART on FHIR applications.  One of the things I discovered in the process after I did the math:

Think about the number of different applications that patients could use.  Oh, maybe 100 if we are lucky.  Most of us will use one or two.  I know from previous experience the average number will be around 1.1 (at least at first).

Think about the number of patients a provider sees in a year.  Say they have a panel of 2-3 thousand patients; they probably see somewhere between 1/4 and 1/2 of these annually.  Call it an even 1000.

Think about the fact that the application on your phone and the same application on your wife's phone, even though they are talking to the same endpoint, don't necessarily know about each other, and so if using Dynamic Registration probably get assigned separate client_id and client_secret values.

Now, look at a 100 provider practice, and understand that some of the providers in that practice see the same patients that other providers do.  So 100 * 1000 * 50% (for overlap) = 50,000 patients seen in that practice (NOTE: these are rough numbers; what is important is the magnitude).

Now, just think about the fact that if 50% of those patients each use an application, and each one requires its own client_id and client_secret, you now have 25,000 client_id and client_secret values for what, maybe 100 applications.  And if any of those applications are busted (meaning they register more often than they need to because someone missed something in the spec), you could have some applications registering daily or hourly or whatever the expiration time is on the authorization token, and now that number could really balloon.

Is this realistic?  Is it valuable?  Is it worth having to manage? Would it not be better to have 100 client secrets and client ids which you could then manage if necessary?

I know ONC is promoting use of Dynamic Client Registration, but I look at the cost of doing that, and I am quite certain that there really is a better way.  It's interesting that Twitter, Facebook, Google and many other API publishers outside of healthcare haven't been using OAuth 2.0 Dynamic Application Registration.

I'm wondering, is Dynamic Application Registration really Smart?

   Keith

Thursday, August 10, 2017

Measuring Interoperability

One of the challenges facing ONC for more than the past decade has been measuring interoperability (yes, ONC has been around for  that long).  One of the responsibilities of the National Coordinator in the original order creating the office back in 2004 and continuing on to this day is to "include measurable outcome goals".

These are some things that one might consider measuring with regard to Interoperability:

  • Reductions in costs of care attributable to the presence or absence of Interoperability
  • Application support for Interoperability Standards and APIs
  • Transmissions of Interoperable Information
  • Speed of Uptake of Interoperable Solutions
However, not all of these are "Outcomes".  They represent three different kinds of measures, and then something special.  Cost reduction is clearly an outcome. Application support is a capability measurement. Transmissions of information is a process measurement.  Speed of uptake I'll talk more about at the end of the post.

There really is no definitive or easy way to put a $ figure on the ROI of interoperability in healthcare without spending a great deal of time.  It takes a well designed study which can show before and after effects of an interoperability based intervention.  And it isn't clear how much of the $ saving could be attributed to technology, vs. other process changes involved that use that technology.  Yet, among the above, that's in fact the only outcome measure listed.

Application support for standards and APIs is a capability measure.  We can definitively show that applications have more interoperability capabilities than they ever have before.  But we don't have good evidence linking that capability to outcomes.

Transmissions of interoperable information is something where we are actually measuring a process. For example, in ePrescribing, we are measuring the number of prescriptions sent electronically.  We also have some good studies linking that process to improved outcomes, but it's not a direct measure of outcomes.

Finally, the speed of uptake.  This is an interesting measure.  It shows something about capability, in that it demonstrates availability of a particular capability.  But it also demonstrates an outcome, one related to ease of use.  If we look at the ease of uptake and flexibility of HL7 Version 2, HL7 CDA, HL7 Version 3, IHE XD*, and HL7 FHIR, we can get a pretty good understanding of the complexity of the various standards.

As we embark next week on discussions about fitness for purpose, this is what I would consider to be an "outcome measure" for success in standards selection.  Ease of uptake directly relates to interoperable capabilities in products, interoperable processes that can be delivered upon, and real cost reductions in the implementation of Health IT solutions.

As we look at "Fitness for Purpose", I think we need to consider "adoptability" as one of the key metrics to consider.  That means that the standard needs to be readily available, easily understood and implemented.  It's a tall order for a standard to meet, and hard to tell how to get there, but I can say this: You'll know it when you see it, and it doesn't happen by accident or intention alone, but more like a bit of both.

   Keith





Wednesday, August 9, 2017

Order Word Important Not Is

This is an expression my family uses to say to each other, "You understood what I meant, stop fussing about how I said it".

One of the things we've learned after a decade and more of using XML is that the names of things and their position in a hierarchy are sufficient to provide just about all the semantics we need; element order isn't essential.  The only place where order is important is in LISTS of things that have the same name, and in that case, the order is the only essential characteristic.  XML inherited its ordering requirement because that requirement had already existed in an even older standard, SGML, and one of the stated goals of XML was to ensure backwards compatibility with SGML.

One of the weird things in CDA where the order issue shows up is in the order of the telecom and address parts of Entities and Roles.  It was recognized originally in the creation of the RIM that entities (things in green) could have a telecommunications address without having a postal address.  Since Organization extends from Entity, the telecom element had to come first, and then the address followed.

For roles (things in yellow), both addr and telecom were present, and folks put them into the natural order.  And so we have this weirdness in CDA where two different kinds of things carry an order requirement, and the required order is inconsistent between them.
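
To make the weirdness concrete, here is roughly how the two cases look in an instance (per the CDA R2 schema; namespaces and most content trimmed, values invented):

<!-- A role: addr comes before telecom -->
<patientRole>
  <id root="2.16.840.1.113883.19.5" extension="12345"/>
  <addr><streetAddressLine>17 Daws Rd.</streetAddressLine></addr>
  <telecom use="HP" value="tel:+1-555-555-1212"/>
  <patient>...</patient>
</patientRole>

<!-- An entity (Organization): telecom comes before addr -->
<providerOrganization>
  <id root="2.16.840.1.113883.19.5"/>
  <name>Good Health Clinic</name>
  <telecom use="WP" value="tel:+1-555-555-5000"/>
  <addr><streetAddressLine>21 North Ave.</streetAddressLine></addr>
</providerOrganization>

Swap addr and telecom in either one and schema validation fails, even though not a single bit of meaning has changed.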

Fortunately, for those of you working with FHIR, JSON doesn't have this restriction (but the FHIR XML surely does have an order restriction). 

Anyway, if you want to know why order is important, it's because XML and XML Schema say it is, not because it really is.  It just made things easier at the time to say that order was important.  Recall that back in the days when SGML became a standard (1986), computers had as little as 64KB of memory to work with.  We couldn't afford to read entire files into memory all at once.  These days, we don't think twice about sucking up a megabyte at a time, and complain when it takes longer than about 10 seconds to download it.

   Keith

Monday, August 7, 2017

The Challenge of Measuring Interoperability

Many years ago, I created this map showing where in the world IHE XD* profiles and HL7 CDA Implementation guides were used.

The point of the map was to show how popular the specifications were, and basically to provide enough evidence to move the needle forward on these specifications with regard to how they were seen in the industry.

This is a crowd sourced map, but most of the data came from two sources: what I knew of North America, and what another colleague knew about Europe and Asia.  Keeping this map up to date is hard.  Authoritative information is difficult to come by for a number of reasons:

  1. It's about what is implemented in the field. 
    1. The organizations that implement these specifications often don't know which specifications they are using, nor do they care to publicize this information.  
    2. The companies that sell the products that implement these specifications also implement other specifications, and aren't necessarily aware of which specifications their customers are using (some of the products I work on have dozens of different interfaces).
    3. Many of these implementations combine the work of several different products from different vendors, so none save the implementers have complete detail.
  2. The information is valuable intellectual property
    This idea is going to be challenging for some folks who think all data about Health IT should be transparent. I'm not going to get into that argument here.
Let's look at two different examples of interoperability: Exchanging transfers of care using some form of CCDA Document, and ePrescribing.  Both of these must be supported by Meaningful Users of Health IT in the US.

We can fairly state that those organizations implementing either are using the standards: CCDA for transfers of care, and NCPDP for ePrescribing.  But when we get down to the underlying exchange mechanisms, we cannot say a whole lot about how the data moves.  For ePrescribing, the transport standards are fixed by regulation (the NCPDP Telecommunications protocol), and we know most of the data moves over one network, but we don't know how it gets there (without exploring the ePrescribing solution market).  For transfers of care, there are multiple choices for transport: the CCDA can move over an HIE, it can move through the Direct protocol, or it can move over some other web services protocol (and in fact, it can move through several protocols before it gets to its final destination).  We've got some pretty good data on ePrescribing (see the ONC Data Brief on prescribing increases from 2006 to 2014), but I've not found a similar report on transfers of care (perhaps because we haven't been tracking it as long or as well).

I think the crucial point here is that in measuring interoperability, we'd like, as an industry, to be able to report successes or failures as was done for ePrescribing.  Having organizations (either vendors or providers) report what is moving and how creates an additional burden for them to manage the reporting, and additional burdens are disincentives.  It also exposes information relevant to that organization, potentially including processes, market share (e.g., number of patients treated/seen, number of providers using), or vendors.  There is a delicate balance that has to be maintained.  The information gathered needs to be a) easily captured and reported, b) meaningful in terms of measurement, and c) sensitively managed and reported.

  


Thursday, August 3, 2017

Working with Standards Organizations

One of the challenges for any organization when working with standards is that there are so many to choose from.  For many years I had the luxury of being in a position that was nearly entirely devoted to the creation, tracking, management, and leadership of standards development.

Participating in standards development is challenging.  Many organizations haven't considered it to be important.  Some simply don't have the resources, or cannot justify them.  However, over the last decade we've seen significant advances in Health IT in the use of standards, and in organizational participation and attention.  Government mandates have certainly played a part in this, but that's not necessarily a bad thing.  In fact, for the most part, this has been a good thing, but there have been a number of problems.

It's hard to get the time, the money and the essential organizational resources and commitments to do all the necessary things.  Even when I was focused on developing standards, I had a challenge paying attention to everything that MIGHT be important, and to every organization that wanted to play in healthcare.  I mostly restricted my activities to those organizations which had already demonstrated that they had an effective process: HL7 and Integrating the Healthcare Enterprise, and government (ONC) initiated activities like HITSP, the Direct Project, and the Standards and Interoperability Framework.  The latter were (mostly) an adjunct to my already existing IHE and HL7 work, although separately organized and driven by ONC, and to some degree, had a secondary governance process (often different for each project).

When things come in from elsewhere, it becomes more challenging.  Working within an established standards development framework with a well-understood governance is really helpful, because it establishes a baseline and a methodology which you can learn once, and reuse over and over.  But when you have to learn this over and over again, it becomes a problem, because you are basically going through the storming and forming process every time.  This is what participating in many S&I Framework processes was like, because each one set its own structure.  Eventually, they came closer together, but it was never quite as "organized" as IHE or HL7.  And even HL7 and IHE insiders know that what happens inside IHE or HL7 still varies depending on where you sit.  EHR is different from imaging, which is different from public health, and so on.

What's the time commitment like?  To participate actively in one of these organizations, you pretty much have to give up 5-10% of your time, or 15-20% to take on a stronger leadership role.  Doing that over multiple organizations is pretty much impossible; the typical developer can't do more than one.

Other activities, such as Argonaut, have been quite helpful; at the same time, the fact that they started "outside" of the established bodies and then brought their work into the SDOs was also challenging.  It was just one more "place to go" to pay attention to.  I'm also not fond of various initiatives where you have to pony up or be part of an invitation-only group to play, as it's just one more place to spend limited influential or financial capital.  Some like to complain that HL7 or IHE allow the insiders to have too much influence.  The reality is that either organization will accept anyone who a) wants to work with them, and b) steps up to a leadership role.  The fastest way to a leadership position is not based on who you know, but on what you can demonstrate you can do.  Some past Ad Hoc winners are just the type of people I'm talking about.  You cannot just buy your way in (though some have tried).

In the last decade, I've seen a number of different efforts try to speed up the standards development process.  I've been deeply involved in many of those.  In fact, I participated in one HL7 Standards setting process that went from 0 to done in about 2 months (3 months to award of an Ad Hoc Harley). It went fast because we were refining an existing standard (something IHE does a lot) with some very clear requirements (something we are often missing in the standards development process).  The reality is, there isn't a magic bullet.  When you start with something good, and it is easy (or already well understood) for implementers, you can go fast.  When it's novel, it's still going to take time, and you are going to have to come back and polish the rough edges.  It's that willingness to take on ownership for something that endures past the creation stage where SDOs have even more value. What should happen today if we find we want to advance (heaven forbid) the Direct Standard? Who owns it (ONC effectively, as it was never handed off to an SDO), and who has the people with the right skill set and desire to maintain it (a really good question)?

Over the past 18 months, I've now been in charge of implementing standards, rather than creating them, and I've come to appreciate even more the challenges of implementers.  I don't need for there to be "only one organization", but I've always liked it to be a small set, based on organizations that already have the right people paying attention, and I like it even more now given that I have much less time to spend outside of implementation activities.

FHIR has been a solid, driving force towards integration of standards in Health IT.  IHE, HL7 and others have acknowledged it as a path forward for all.  Even payers are joining in the fray (FHIR could be sounding the death-knell for EDIFACT based standards, now some 30 years old).  The recent SMART on FHIR ballot consolidates yet another issue under the HL7 FHIR banner. Continued collaboration between IHE and HL7 has been pursued by both organizations, as well as members of both organizations.

Could we get to a place where it could be simpler for all?  It is my enduring hope.

   Keith

P.S.  I'm back (I hope) to posting on a more regular basis, although I was out most of this week due to some bad seafood I ate over the weekend.  I'm still not completely back to full health, but close enough to write a blog post.