
Wednesday, June 20, 2018

Add, Replace or Change

Most of the interoperability problems we have today can be readily solved.  All we have to do is replace the systems we already have in place with newer, better technology.  Does that sound like a familiar strategy?

If you've been involved in Meaningful Use or EHR certification activities over the last decade, then it certainly should.  The introduction of CDA into the interoperability portfolio of vendor systems has created a whole host of new capabilities.  We added something new.

HL7 Version 3 attempted to replace HL7 Version 2, and for most of its use cases it was basically a flop.  In large part that was because of the tremendous investment in existing infrastructure that would have had to be replaced: infrastructure that already met enough of the end users' needs that they were willing to live with it, and WEREN'T willing to spend the funds to replace it with a more capable (yet complex and expensive) solution.

C-CDA was a very effective evolution of the existing CCD and IHE specifications, incorporating more or less modest changes.  It may have been more than most wanted, but it was small enough that existing technology could be retained or enhanced without wholesale replacement.

FHIR is a whole new paradigm.  It adds capabilities we didn't have before.  It replaces things that are more expensive with things that can be implemented much more easily and cheaply.  And it changes some things that can still be adapted.  For example, an XDS Registry and Repository infrastructure can be quickly modified to support FHIR (as I showed a few years back by building an MHD (IHE Mobile Access to Health Documents) bridge to the NIST XDS Registry and Repository reference platform).

The key to all of these improvements is to ensure that whatever you are adding, replacing or changing, the cost to the customer (or to your own development) is something they (or you) and the rest of the stakeholders can accept and adopt in an appropriate time frame.  FHIR has succeeded by taking an incremental approach.

The birth of FHIR was almost seven years ago.  At 6 and 7/8ths years old (because young things care about halves and quarters and eighths), FHIR is doing quite well for itself.  In that time it has come a very long way, very fast.  Version 3 never had it so good.  The closest standard I can think of with anything like this adoption curve was XML, which took 2 years from initial draft to formal publication (FHIR took 3 to get to its first DSTU).  I expect widespread industry adoption of the final form (Release 4) to happen well inside 2 years, whereas it took XML at least 3, and some would say more (although its industry base was much larger).

So, as you think about how we should be improving interoperability, are you Adding something new, Changing something that already exists, or Replacing something?  Answer that question, and then ask yourself how that choice is going to impact adoption.

Wednesday, June 13, 2018

Why AI should be called Artificial Intuition*

This post got started because of this tweet:
The referenced article really isn't about AI; rather, it's about an inexplicable algorithm.  But a lot of "AI" fits into that category, and so it's an appropriate starting point.  Intelligence isn't just about getting the right answer.  It's about knowing how you got to that answer, and being able to explain how you got there.  If you can come up with the right answer but cannot explain why, it's not intelligent behavior.  It might be trained behavior, or instinctive or even intuitive behavior, but it's not "intelligent".

What's been done with most "AI" (and I include machine learning in this category) is to develop an algorithm that can make decisions, perhaps (most often in fact) with some level of training and usually a lot of data.  We may even know how the algorithm itself works, but I wouldn't really call it intelligence until the system that implements the algorithm can sufficiently explain how its decision was reached for any given decision instance.  And to say that it reached that decision because these vectors were set to these values (the most common form of training output) isn't a sufficient explanation.  The system HAS to be able to explain the reasoning, and for it to be useful for us, that reasoning has to be something we (humans) can understand.

Otherwise, the results are simple mathematics without explanation.  Let me tell you a story to explain why this is important:

A lifetime ago (at least as my daughter would measure it), the company I worked for at the time obtained a piece of software that was the life's work of a physician and his assistant.  It was basically a black box with a bunch of data associated with it that supported ICD-9-CM coding.  We were never able to successfully build a product from it, even though we WERE able to show that it was as accurate as human coders at the same task.  In part, I believe that was because it couldn't show coders (or their managers) HOW it came to the coding conclusions it reached; because that information wasn't provided, it couldn't argue for the correctness of its conclusions (nor could it easily be trained to change its behavior).  It wasn't intelligent at all, it was just a trained robot.

Until systems can explain how they reach a conclusion AND be taught to reach better ones, I find it hard to call them intelligent.  Until then, the best we have is intuitive automata.


For what it's worth, humans operate a lot on gut feel, and I get that, and I also understand that a lot of that is based on experiential learning that we aren't even aware of.  But at the very least, humans can argue the justification for their decision.  Until you can explain your reasoning to a lesser intelligence (or your manager for that matter), you don't really understand it.  Or as Dick Feynman put it: "I couldn't reduce it to the freshman level. That means we don't really understand it."

   Keith

P.S. The difference between artificial and human intelligence is that we know how AI works but cannot explain the answers it gets, whereas we don't know how human intelligence works but humans can usually explain how they arrived at their answers.

* Other Proposed Acronyms for AI

  1. Automated Intuition
  2. Algorithms Inexplicable
  3. Add yours below ...


Monday, June 11, 2018

Why we'll never have Interoperability

I don't know how many times I've said this in the past.  Interoperability is NOT a switch, it's a dial.  There are levels and degrees.  We keep hearing that Interoperability doesn't exist in part because every time someone looks at it, where the dial actually sits doesn't match where providers expect it to be.

Few people working today remember the early command line interfaces for accessing mail (that link is for the second-generation command line).  Most people today use some form of GUI-based client, many available over the web.

To address this in this post, I'm going to create a classification system for Levels of Interoperability.
Level 0 (Absent): We don't even know what data needs to be exchanged to solve the user's problem.  We may know there's a problem, but that's about as far as we can get.

Level 1 (Aspirational): We have an idea about what data is needed, and possibly even a model of how that data would be exchanged.  This is the stage where early interoperability use cases are often proposed.

Level 2 (Defined): We've defined the data exchange to the point that it can be exchanged between two systems.  A specification exists, and we can test conformance of an implementation.  This is the stage that most interoperability in EHR systems achieves after going through some form of certification testing.

Level 3 (Implementable): An instructional implementation guide exists that describes how to do it.  This is more than just a specification.  It tells people not just what should appear where, but also gives some guidance about how to do it, some best practices, some things to consider, et cetera.  This is the stage reached when a specification has been widely implemented in the industry and you can find material about it on sites like "Stack trace".

Level 4 (Available): This is the stage in which most end-users see it.  Just because some feature has been implemented doesn't mean everyone has it.  We've got self-driving cars.  Is there one in your driveway?  No.  Self-driving cars are not available, even though several companies have "implemented" them.  The same is often true with interoperability features.  Many vendors have implemented 2015 Certified EHRs, but not all providers have those versions deployed yet.

Level 5 (Useful): This is the stage at which users would rather use the feature than not, and see value in it.  There's a lot of "interoperability" that solves problems that just a few people care about, and creates a lot more work for other people.  If it creates more work, it likely hasn't reached the useful stage.  Interoperability that eliminates effort is more useful.  There are some automated solutions supporting problem, allergy and medication reconciliation that are starting to reach the "useful" stage.  A good test of whether an interoperable solution has reached this stage is to determine how much the end-user needs to know about it.  The less they need to know, the more likely it's at this stage.

Level 11 (Delightful): At this stage, interoperability becomes invisible.  It works reliably, end users don't need to know anything special about it, et cetera.  The interesting thing about this stage is that by the time a product has reached it, people will usually be thinking two or three steps beyond it, and will forget what the capability they already have does for them.

The level of interoperability is often measured differently depending on who is looking at it and through what lens.  The CFO looks at costs and cost-savings associated with interoperability.  Is it free? Does it save them money?  If not, they aren't likely to be delighted by it.  The CIO will judge it based on the amount of work it creates or eliminates for their staff as well as the direct and indirect costs it imposes or reduces.  The CMO will be more interested in understanding whether it's allowed them to reach other goals, and will judge by different criteria.  And the end-user will want their job to be easier (at least with regard to the uninteresting parts), and to have more time with patients.

By the time you reach "delightful" (often much before) you get to start all over again with refinement.  Consider the journey we've been on in healthcare with the various versions and flavors of HL7 standards.  HL7 V1 never got past aspirational; V2 was certainly defined, though the sub-releases that added new features also went through their own cycles.  Some features in HL7 V2 even got to the level of delightful for some classes of users (lab and ADT interfaces just work; most providers don't even know they are there).  By the time the industry reaches perfection, users and industry are already looking for the next big improvement.

Do we have electronic mail? Yes.  Is it perfect yet? No.  Will it ever be?  Not in my lifetime.  We'll never have perfect interoperability, because as soon as we do, the bar will change.

Friday, June 8, 2018

Resolved: Prelogin error with Microsoft SQL Server and JDBC Connections on Named Instances of SQL Server

Just as I get my head above water, some other problem seems to crop up.  Most recently I encountered a problem connecting to a SQL Server database that I've been using for the past 3 years without difficulty. 

We thought this might be related to a problem with the VM server, which was itself misbehaving, and in fact after a restart of the VM server I was able to access a different vendor's database product on another server (which had also been causing me grief) without problems.  But I still couldn't access SQL Server.

Here were the symptoms:
  • Connections worked locally on the Server.
  • Connections worked inside the Firewall.
  • Connections that were tunneled through some form of VPN weren't working at all (with two different kinds of VPN).
I was pretty well able to diagnose the problem as being firewall related, but there are at least four firewalls between me and that server, I only have access to two of them, and unfortunately I could find no information in the firewall logs to help.  Wireshark might have been helpful, except that for some reason I couldn't get it to capture my network traffic on port 1433, which I knew SQL Server used.


If you google "prelogin error" you'll see a ton of not quite helpful stuff because for the most part, nobody seems to get to the root causes of my particular problem, but I finally managed to do so.

Here's what I discovered:

My firewall was blocking port 1434, which is the port the SQL Server Browser service uses to tell clients which port a named instance of SQL Server is actually listening on.  But even after opening that port, things were still not working right; the connection was failing with a "prelogin error".

One of the posts on the Interwebs pointed me to a Microsoft diagnostic tool used to verify SQL Server Communications.  The output of that tool contained something like the following:


Sending SQL Server query to UDP port 1434...

Server's response:

ServerName SERVER1
InstanceName SQL_SERVER_1
IsClustered No
tcp 1433

ServerName SERVER1
InstanceName SQL_SERVER_2
IsClustered No
tcp 59999


What this told me was that the instance I wanted to access was listening on a port other than 1433.  And of course, that port was blocked (which explains why Wireshark wasn't helping me, because I was looking just at port 1433 traffic).  Setting up a firewall rule to allow access to any port used by the SQL Server service resolved the issue (since I couldn't be sure the dynamically assigned port would be used again the next time the server was restarted).
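If you want to check this yourself without hunting down the Microsoft tool, the following Python sketch (not from the original post) sends the same kind of query to the SQL Server Browser service.  The request byte and response framing follow my reading of the SQL Server Resolution Protocol (MC-SQLR), so verify them against that specification before relying on this.

import socket

def list_sql_instances(host, timeout=3.0):
    """Ask the SQL Server Browser service (UDP 1434) which instances exist
    and which TCP ports they are listening on."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(b"\x03", (host, 1434))   # CLNT_UCAST_EX: enumerate all instances
        data, _ = sock.recvfrom(65535)
        # The response is a small header followed by a semicolon-delimited string
        # of ServerName;...;InstanceName;...;tcp;<port>;... entries.
        return data[3:].decode("ascii", errors="replace")

if __name__ == "__main__":
    print(list_sql_instances("SERVER1"))     # SERVER1 as in the output above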

I think part of the reason that nobody has been able to clearly state a solution is that if I'd been trying to connect to SQL_SERVER_1, the already existing rule I had for port 1433 would have been JUST fine, and I wouldn't have needed another rule.  And so the published solutions worked for maybe half the users, but not others.  Some published solutions suggested multiple different ways to configure firewall rules, some of which would have worked some of the time, and others (like mine) would work all of the time.

I realize this has nothing to do with standards, but at least half of those of you who read this blog have had your own run-ins with SQL Server.

Now, you might scratch your head and wonder how this worked before, and what happened to the firewall rules that enabled it to work.  For that I have a simple answer.  We had recently rebuilt the VM so that it had more resources to do some larger tests, and so the system was redeployed under the same name to a new operating system environment.  And my challenge happened to overlap a) the redeployment, and b) the VM needing to be rebooted.

Root cause analysis is a royal PITA, but having invested quite a few hours in it for this problem, I'll never be stuck on it for more than a few minutes again, and now, hopefully, you won't be either.

   Keith

Thursday, June 7, 2018

Building Interfaces with FHIR

Almost three years ago I created some tools to extract trigger events from HL7 V3, and then made a list of HL7 Version 2 and 3 trigger events.  I came up with over 1000 of these.  FHIR also supports messaging, and has other mechanisms (e.g., Subscription) that can be used to trigger actions.

Use Case

Recently I've been looking at a way to automate data collection for 340B reporting, which requires collecting information about the patient, prescriber and medication from MedicationOrder resources.  This program, like many others, requires only a small subset of the FHIR resource data in order to manage the reporting operations.  As with many other programs, the implementers would very much like not to have to wade through hundreds of lines of JSON or XML in order to get the data.  Both data providers and receivers would rather deal with only the minimum necessary information, even when it is perfectly OK to have more (e.g., either because of the protected status of a receiver such as public health, or because the data is for treatment rather than payment and operations).
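As a concrete (and purely illustrative) example, the data collection side of that use case might look like the query below.  This is a sketch against a hypothetical DSTU2 server; the MedicationOrder search and _include parameters are the standard ones as I understand them, but check them against your server's conformance statement.

import requests

BASE = "https://fhir.example.org/dstu2"   # hypothetical FHIR endpoint

params = {
    "datewritten": "ge2018-06-01",        # only orders written on or after this date
    "_include": [
        "MedicationOrder:patient",        # pull the referenced Patient...
        "MedicationOrder:prescriber",     # ...the ordering Practitioner...
        "MedicationOrder:medication",     # ...and the Medication along with each order
    ],
    "_count": 50,
}
resp = requests.get(BASE + "/MedicationOrder", params=params,
                    headers={"Accept": "application/json+fhir"})
resp.raise_for_status()
bundle = resp.json()                      # a searchset Bundle with orders plus included resources
print(bundle.get("total"), "matching orders")

Even so, the receiver still gets a full Bundle of full resources, which is exactly the "hundreds of lines of JSON" problem described above.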

In looking at how this interface would work, there's basically a 4 step process:

  1. Some trigger event occurs which indicates that there's some data to consume.
  2. One or more queries are executed to obtain the necessary data in a bundle.
  3. The resulting data set is transformed into a format that can be more readily consumed.
  4. The data is sent to the receiver.
FHIR Subscriptions can be used to handle MOST of the different trigger event cases.  Search or Batch transactions can specify the data collection aspect.  Sending the data is straightforward.  What isn't so straightforward is transforming the data into a simpler format for the consumer, but there is a way in FHIR to handle transformation from one structure to another (see StructureMap), and FHIR also has a mapping language for defining transformations.  The Clinical Quality Language provides another mechanism by which data can be accessed and transformed.  XSLT would also work for this if one transformed the bundle as XML content.

Right now the pieces are nearly all there: almost everything needed already exists in Subscription and StructureMap.  They just aren't put together in a way that allows the interface to be completely defined in an easy fashion.

Most of the detail needed is in how we define the third step above.  Here's how I'd put this together:
  1. I'd define a Subscription for the interface that would include the search criteria (in Subscription.criteria) that would essentially identify the trigger event.
  2. The same query used to identify the trigger event would ALSO include any other information essential for producing the data needed in the interface (e.g., _include and chained queries).  That means that any included resources would be defined in the query used for Subscription.criteria.
  3. In order to transform the data, we need to specify a mechanism by which the content can be transformed to another format. StructureMap has what I need but implementing it for the most common use cases feels like overkill, and it's still a work in progress.  I have working code that could do this with CQL today, so that's likely where I would go.  Also, StructureMap is R3/R4 content, and I have a server that's currently deploying STU2.  Everything else I need is already in Subscription in DSTU2. 
  4. For my purposes, Subscription.channel.type would be set to rest-hook and Subscription.channel.endpoint would be set to the endpoint that should receive the report.  The format of the transformed result would be reported in Subscription.channel.payload.  (A sketch of the whole Subscription follows below.)
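To make that concrete, here's roughly what such a DSTU2 Subscription could look like.  This is a sketch: the criteria, endpoint and token are hypothetical; only the overall structure (criteria plus channel.type, endpoint, payload and header) comes from the DSTU2 Subscription resource.

subscription = {
    "resourceType": "Subscription",
    "status": "requested",
    "reason": "340B reporting feed",
    # Steps 1 and 2: the trigger event, plus the extra resources to pull along with it
    "criteria": ("MedicationOrder?status=active"
                 "&_include=MedicationOrder:patient"
                 "&_include=MedicationOrder:prescriber"
                 "&_include=MedicationOrder:medication"),
    # Step 4: where and how to deliver the result
    "channel": {
        "type": "rest-hook",
        "endpoint": "https://reports.example.org/340b",   # hypothetical receiver
        "payload": "application/json+fhir",
        "header": "Authorization: Bearer <receiver token>",
    },
}

POSTing that body to [base]/Subscription would register it.  What's still missing is step 3, the transformation, which is what the next section takes up.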

Transformation

Subscription doesn't have a way to specify the transform details, so the next step would be to define an extension to support that.  Here's what I'd add to Subscription to support this:

First of all, I'd add an element called transformation to channel: an optional backbone element (cardinality [0..1], because there should be at most one per channel) that specifies how the channel data (a Bundle) would be transformed before it is sent to the endpoint.  It would have two fields, method and transform.  Method describes the method by which the transformation occurs, and transform provides a reference to the content defining the transformation.  Method would be a code drawn from an extensible vocabulary including cql, fhirmapper, operation, structuremap and xslt.  These are further described below:

cql (CQL): Transformation is defined in CQL as the tuple returned in the "return" definition, where the payload provides the source material accessed by the CQL rule.

fhirmapper (FHIR Mapper): Transformation is defined in the FHIR Mapping language.  The source is the payload containing the content associated with the query used in the subscription.

operation (Operation): Transformation is defined as the result of a FHIR operation, where the source parameter for the transformation is the payload.

structuremap (StructureMap): Transformation is defined in a StructureMap resource.  The source is the Bundle that results from the payload provided by the subscription.

xslt (XSLT): Transformation is defined in an XSLT transform.  The input is the payload provided by the subscription.  The output of the transform is what would be sent to the receiver.  While the input would be in XML, the output could be in any format (you can, for example, create JSON and CSV files with XSLT).
The transform field would be a resource reference (e.g., to a StructureMap or OperationDefinition) or a URL reference to the content of the transform.
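Because Subscription itself can't be changed, the proposed transformation element would ride along as an extension on channel.  Here's a rough sketch of what that might look like in DSTU2 JSON; the extension URL is made up for illustration, and none of this is a published FHIR extension.

transformation = {
    "url": "http://example.org/fhir/StructureDefinition/subscription-transformation",  # hypothetical
    "extension": [
        {"url": "method", "valueCode": "xslt"},
        {"url": "transform",
         "valueUri": "https://example.org/transforms/340b-report.xslt"},   # hypothetical transform
    ],
}

# Attach it to the channel from the Subscription sketch earlier:
subscription["channel"]["extension"] = [transformation]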

For xslt, fhirmapper and structuremap, the transformation would be implemented in an application-defined manner that really needs no more specification.  The input and output are fairly well understood for these types, and how the transformer is called is really not that interesting.

For operation and cql, a little more detail is needed to define how the payload is communicated and how the result is extracted from the transformation.  A transformation operation should ideally be free of side effects, causing no changes on the server (with the exception perhaps of audit records).

Operation

When an operation has only one output parameter named "return", that's what gets returned.  This works well in a transformation scenario.  The only further definition needed is how to specify the input that the transformation operates on.  What I'd probably implement for operation (because it is simplest) is to define the operation and any of its parameters in a URL, with a POST body of type multipart/form-data in which the source parameter carries the input of the transform.  That way, the fixed parameters of the transforming operation can be specified in the transform URL, and the body can be the part that is sent to the receiver.  I'd probably add a parameter to the operation called endpoint and another called header, to which I'd pass the values of the subscription's endpoint and header parameters.  In this way I could fully define in a Subscription how the server would implement the transform, and offload the work of sending the transformed result to the endpoint.

NOTE: Operation can return content OTHER than a FHIR Resource by using a return type of Binary.  In HAPI, when you do this, what happens is that the returned content is simply the content of the binary resource.  I've successfully used this to return CDA and JSON content from operations I've defined.  This eliminates a step for integrators of extracting the content from a FHIR Parameters resource, which is something many would rather NOT have to do.

CQL

For CQL, things get a little bit more complicated.  Executing a set of CQL rules produces results as name/value pairs.  The tricky bit is figuring out how to integrate the payload content (the input) into the CQL execution, but that can be implementation dependent.  The simpler part is figuring out how to get the output.  There are two ways I can see to deal with this:
  1. Return all defined values as a JSON Object. This works, but some of the defined values created in a CQL rule might be used internally, whereas others are meant for external consumption.  I have created a CQL operation that executes a CQL rule and allows you to specify which parts of the rule to return in a JSON object.  This is OK, but you can get just as much control by saying the result to be returned is in an object with a predefined name.
  2. Return the defined object using the name "return".  This is my preferred solution, as it allows the rule to have a single result (which can be a tuple), mirrors the OperationDefinition pattern, and allows the transform to be computed and extracted.
What I'd probably do in my own implementation is provide support for OperationDefinition to start with, and build an operation definition framework that would let me create operations to handle cql, fhirmapper, structuremap and xslt methods.  Then, in my subscription handler, I could translate those other forms into an operation call, and let it do the transformation and notification work.

This suggests an additional capability that would be helpful in the operations framework of FHIR, which is the ability to define where to send the response to the request, and what header parameters to supply for that interaction.  Typical RESTful protocol is in request/response pairs, but there are times when what you really want to do is send the response somewhere else as a notification.

I'd have to think carefully about the security ramifications for this, but my current thinking is that any FHIR HTTP request could have an _endpoint and _header parameter that works in a way similar to the way these parameters are used for a rest-hook subscription.  When these parameters are given, they specify the restful endpoint that would receive the resulting content, and the headers to send with it.  These parameters could then enable queries, operations or other content generating requests to send notifications to a specified URL.  When these parameters are used, the returned result of a query, operation or other FHIR API call could return the following values to indicate success:

202 Accepted: The request was accepted, is syntactically correct, and the response will be sent to the requested endpoint, but may not have been sent yet.  This is essentially a response that says: yes, I got your request and it looks good, and I'll send the results along to the designated endpoint, but you have no guarantee that it actually got there.

204 No Content: The request was accepted, is syntactically correct, and the response was sent to the requested endpoint and was accepted by it.  This tells you the response was sent and successfully acknowledged by the requested endpoint.
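To illustrate the intended behavior (this is not a feature of any existing FHIR server), a server honoring _endpoint might do something like the following before answering the original caller.  The parameter names and the 202/204 split are the ones proposed above; what to return when the forward fails isn't covered by the table, so the error code here is just one option.

import requests

def deliver_result(result_body, content_type, endpoint, extra_headers, wait_for_receiver):
    """Send generated content to the _endpoint and choose the status code to
    return to the original caller."""
    if not wait_for_receiver:
        # Queue the delivery for later (not shown) and report that we accepted the work.
        return 202
    resp = requests.put(endpoint, data=result_body,
                        headers={"Content-Type": content_type, **extra_headers})
    # 204 only if the designated receiver actually acknowledged the content.
    return 204 if resp.ok else 502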

Summary

  • Create global parameters for _header and _endpoint and add code to handle them (this is a good place to consider HTTP or HAPI Interceptors).  
  • Add extensions to Subscription.
  • Figure out how to change operation to use PUT instead of POST (for rest-hook implementation).
  • Build a transformer operation.

But Wait!

If I define _header and _endpoint parameters for a FHIR operation, then I don't even need an extension on Subscription.  Instead, my rest-hook endpoint would look like this:

[base]/$transform?_header=Header for Eventual Receiver&_endpoint=Endpoint for Eventual Receiver&transform=Transform content to use&type=Transformation Type&_method=PUT

Now, I can define my operation easily, skip the extension stuff, and the only real difference is that I'd need to make the operation support PUT as well as POST, OR, make my subscription implementation smart enough to figure that part out when the server is calling one of its own operations with content.  I can add other parameters to support URLs or content to specify the details for the transformation.
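Putting that together, the rest-hook endpoint could be built like this.  Everything here ($transform, _endpoint, _header, transform, type and _method) is the hypothetical machinery proposed in this post, not standard FHIR.

from urllib.parse import urlencode

BASE = "https://fhir.example.org/dstu2"                     # hypothetical server

params = {
    "_endpoint": "https://reports.example.org/340b",        # the eventual receiver
    "_header": "Authorization: Bearer <receiver token>",
    "transform": "https://example.org/transforms/340b-report.xslt",
    "type": "xslt",
    "_method": "PUT",
}
rest_hook_endpoint = BASE + "/$transform?" + urlencode(params)

# This URL goes into Subscription.channel.endpoint; when the subscription fires,
# the server delivers the matching Bundle to it, the operation applies the
# transform, and the result is forwarded to the _endpoint with the given _header.
subscription["channel"]["endpoint"] = rest_hook_endpoint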

OK, thanks for letting me bounce this off of you.  This was a slam-dunk.

Monday, June 4, 2018

Coloring within the Lines

When you first learned to color as a child, someone handed you an outline picture (a horse, say) and a bunch of crayons.
At first, you probably just threw blotches of color on the page, ignoring the lines. Then someone explained to you about keeping color within the lines. You practiced and practiced, and then realized that you could do really cool horse pictures within the lines. Then you turned your horse into a Zebra.
Not content with that, you made it into a Unicorn ... and later a Pegasus.

This is a metaphor for the interoperability journey we started on with Meaningful Use in 2010, and for where we are headed in 2018.

First we had data that we needed to exchange, and a name for it (we called it the common clinical data set, but a horse by any other name is still a horse).  And we watched as everyone struggled to stay within the lines (the CCD specification).  Some even created a color-by-number guide that made it easier to make a pretty CCD.

When creating this horse picture, we stayed inside the lines with HL7 CDA.  But to meet certain ONC requirements that made our horse look like a Zebra, we had to change the way we colored a little bit, and still came up with something that fit the lines (it just had a few more lines), and then we had our Zebra.

At times though, there were things that just didn't fit, and so we had to come up with a horn, or wings.  We did our best to create new lines that fit with (but not within) the existing lines, and went beyond the original drawing but still had the same general appearance, producing the C-CDA.

After a while, there's not much more to do with our horse (CDA), so we start with a new drawing (FHIR).  This horse will still get you from point A to point B, but much faster (no wings needed).

And that's progress in interoperability.