Friday, May 17, 2019

CDMA or GSM? V3 or FHIR? Floor wax or dessert topping?

One of the issues being raised about TEFCA is related to which standards should be used for record location services.  I have to admit, this is very much a question where you can identify the sides by how much you've invested in a particular infrastructure that is already working, rather than a question of which technology we'd all like to have. It's very much like the debate around CDMA and GSM.

If you ask me where I want to be, I can tell you right now, it's on FHIR.  If you ask me what's the most cost effective solution for all involved, I'm going to tell you that the HL7 V3 transactions used by IHE are probably more cost effective and quicker to implement overall, because it's going to take time to make the switch to FHIR, and more networks are using V3 (or even V2) transactions.  And even though it's more cost effective for the country, it's surely going to hurt some larger exchanges that don't use it today.  CommonWell uses HL7 V2 and FHIR for patient identity queries, if I remember correctly, while Carequality, SureScripts and others use the HL7 V3 based IHE XCPD transactions ... which are actually designed to support federated record location.  As best I know, more state and regional health information exchanges support the IHE XCPD transactions than those exchanging data using V2 or FHIR.

Whatever gets chosen, it's gonna put some hurt on one group or another.  My gut instinct is that choosing FHIR is going to hurt a lot more exchanges than choosing XCPD at this time.

And this is where the debate about V3 and FHIR differs from the CDMA and GSM debate, because FHIR is closer to 4G or 5G in the whole discussion.  Some parts of FHIR, such as querying for patient identity, are generally widely available.  But complexity comes in when you get into using these transactions in a record location service, as I've described previously, and the necessary capabilities to support "record location services" in FHIR haven't been formalized by anyone ... yet.  This is where FHIR is more like 5G.

Just like 5G, this will happen eventually.  But do we really want to focus all of our attention on this, or do we want to get things up and running and give organizations the time they need to make the switch?  I think the best answer in this case is to make a very clear statement: this is where we are today (V3), and this is where we will be going in 2-3 years (FHIR), and make it stick.  And as I've said in the past, don't make it so hard for organizations to pre-adopt new standards.

Policy doesn't always work that way ... just look at what happened with ICD-10, or maybe even Claims Attachments.  But I think where we are today is a little bit different: the industry really wants to move forward, but would also like some room to breathe so that it can move forward without stumbling along the way.  Do we really want a repeat of Meaningful Use?

We've seen how too much pressure can cause stumbles, and I think trying to use FHIR for record location services is just moving a little too fast.  I'll be happy to be proven wrong, and eat the floor wax, but frankly, right now, I just don't see it.

   Keith





Monday, May 13, 2019

Terminology Drift in Standards Development Organizations

I used to work for a company that published dictionaries, and one of my colleagues was a dictionary editor.  As he related to me, the definition of a term doesn't come from a dictionary, but rather from use.  A dictionary editor's job is to keep faithful track of that use and report it effectively.  By documenting the use, one can hope to ensure consistent future use, but languages evolve, and the English language evolves more than many.  I've talked about this many times on this blog.

It also happens to be the common language of most standards development organizations in Health IT (of course, I, as an English speaker, would say that, but the research also reflects that fact).

The evolution of special terms and phrases in standards is a particular challenge not only to standards developers, but especially to standards implementers.  As I look through IHE profiles (with a deep understanding of IHE History), I think on phrases such as "Health Information Exchange", "XDS Affinity Domain", and "Community", which in IHE parlance, all mean essentially the same thing at the conceptual level that most implementers operate at.

This is an artifact of Rishel's law: "When you change the consensus community, you change the consensus" (I first heard it quoted here, and haven't been able to find any earlier source, so I named it after Wes).

As time passes, our understanding of things changes, and that change affects the consensus.  Even if the people in the consensus group haven't changed, their understanding has, and so the definition has changed.

We started with "Health Information Exchange", which is a general term we all understood (oh so long ago).  But then, we had this concept of a thing that was the exchange that had to be configured, and that configuration needed to be associated with XDS.  Branding might have been some part of the consideration, but I don't think it was the primary concern, I think the need to include XDS in the name of the configuration simply came out of the fact that XDS was what we were working on.  So we came up with the noun phrase "XDS Affinity Domain Configuration", which as a noun phrase parses into a "thing's" configuration, and which led to the creation of the noun phrase "XDS Affinity Domain" (or perhaps we went the other way and started with that phrase and tacked configuration onto it).  I can't recall. I'll claim it was Charles' fault, and I'm probably not misremembering that part.  Charles does branding automatically without necessarily thinking about it.  I just manage to do it accidentally.

In any case, we have this term XDS Affinity Domain Configuration, which generally means the configuration associated with an XDS Affinity Domain, which generally means some part of the governance associated with a Health Information Exchange using XDS as a backbone.

And then we created XCA later, and had to explain things in terms of communities, because XCA was named Cross Community Access rather than Cross Domain Access.  And so now Affinity Domain became equivilated (yeah, that's a word) with Community.

And now, in the US, we have a formal definition for health information network as the noun to use in place of the way we were using health information exchange more than a decade and a half ago (yes, it was really that long).

So, how's a guy to explain that all this means the same thing (generally) to someone who is new to all this stuff and hasn't lived through the history, without delving into the specialized details of where it came from and why?  I'm going to have to figure this out.  This particular problem is specific to IHE, but I could point to other examples in HL7, ISO, ASTM and OpenEHR.

The solution, it would seem, would be to hire a dictionary editor.  Not having a grounding in our terminology would be a plus, but the problem there is that we'd need a new one periodically as they learned too much and became less useful.



Thursday, May 9, 2019

It's that time again...

The next person I'm going to be talking about is responsible for open source software that has impacted the lives of tens of millions of patients (arguably even hundreds of millions), tens of thousands (perhaps even hundreds of thousands) of healthcare providers, and certainly thousands of developers around the world.

The sheer volume of commits in the projects he's led well exceeds 50 million lines of code.  He's been working in the open source space for nearly a decade and a half, most of which has been supporting the work of the university hospital that employed him.

It's kind of difficult to tell a back story about him that doesn't give it completely away (and many who've used the work he's been driving already know who I'm talking about).  I'm told he's an accomplished guitar player, and I also hear that his latest album of spoken word and beat poetry will be coming out soon.

I can honestly say I've used much of the open source code he's been driving forward in four different positions for three different employers, through at least eight different releases, and I swear by the quality of the work that goes into it.  I'm not alone; the work has been downloaded or forked by several thousand developers all over the world.

I know that he sort of fell into this open source space a bit by accident: the person who had been driving one of the HL7 open source projects moved on to greener pastures, and he took up the reins.  Since then, he has taken the simplicity and usability of that open source project into a second one that has driven HL7 FHIR on towards greater heights.  Without some of the work he's done, I can honestly state the FHIR community would have been much poorer.

Without further ado:


This certifies that
James Agnew of
Simpatico Intelligent Systems, Inc.



has hereby been recognized for keeping smiles on the faces of HL7 integrators for the better part of two decades.

HAPI on FHIR is perhaps the most widely known Java FHIR server implementation available, HAPI HL7 V2 has been used in numerous projects to parse and integrate with HL7 Version 2 messages, and it is included in one of the most widely used open source V2 integration engines (formerly known as Mirth Connect, now NextGen Connect).  James has also contributed to other open source efforts supporting HL7 FHIR and HL7 Version 2 messaging.

Thursday, April 25, 2019

Record Location Services at a National Scale using IHE XCPD

One of the recent discussions coming up around the most recent TEFCA related specifications has to do with how one might implement record location services for patients at a national scale.  The basis for this is the IHE Cross Community Patient Discovery Profile (XCPD).

Here's the problem in a nutshell.  Assume you are a healthcare provider seeing a patient for the first time, and you want to find out who else might have information about this patient.  How can you do so?

The first step obviously is to ask the patient who their prior doctor was, and here's where the first fundamental challenge appears.  Sometimes the patient is unable to answer that question, either at all or at least not completely.  So, then, how do you get a complete list?  What you don't want to do is ask everyone who might ever have seen the patient anywhere in the country, because that is not going to scale.

I think that about sums it up.

The IHE XCPD profile is designed to address this.

If the patient is only able to give a partial response, then you know where to start looking.  Here's the key point: once you know where to start looking, the organizations and networks who can answer the question can also point you to others who've seen the patient, and that can get you a more complete list, which eventually will lead to closure.

But wait! How do these organizations know who else has seen the patient?  It's really pretty simple.  Somebody asked them, and in the process of asking them, also told them that they would be seeing the patient, and so the original provider gains the information about the new provider seeing them, which makes them able to answer the question accurately for the next new provider.  And so the well known provider becomes more authoritative, while the new provider is able to provide equally authoritative data.
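To make the mechanics concrete, here's a toy sketch in plain Java of the bookkeeping a responding community might do.  The names are entirely mine, not from the XCPD profile, and the real transaction is an HL7 V3 SOAP exchange that carries demographics rather than a bare identifier, so treat this only as an illustration of the "answering also teaches" behavior:

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Toy record locator: answering a discovery query also records the asker. */
public class ToyRecordLocator {
    /** Patient identifier -> communities known to hold records for that patient. */
    private final Map<String, Set<String>> knownLocations = new ConcurrentHashMap<>();

    /**
     * Handle a patient discovery query from another community.  Return the
     * locations already known, then remember the asker, so the next query
     * about this patient gets a more complete answer.
     */
    public Set<String> discover(String patientId, String askingCommunity) {
        Set<String> locations =
            knownLocations.computeIfAbsent(patientId, k -> new LinkedHashSet<>());
        synchronized (locations) {
            Set<String> answer = new LinkedHashSet<>(locations);
            locations.add(askingCommunity);
            return answer;
        }
    }
}
```

That "ask and thereby tell" side effect is what lets the list of who has seen the patient converge without anyone having to broadcast to the whole country.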

If the patient is unable to answer that question at all, then you have to figure out who else you might be able to ask.  If the patient is local, you could ask others in the area who might know the patient.  If the patient isn't local (e.g., just visiting), you might try asking others near to where the patient resides, which hopefully you can determine.  Since TEFCA is about a network of networks, it's reasonable to assume that there are some regional networks of whom you might ask about a given patient, and they might be able to ask other, smaller regional networks they know about (this could become turtles all the way down, but at some point, you'd expect to stop).
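The fan-out itself can be sketched in a few lines too.  Again, this is not XCPD on the wire; the interface and names below are hypothetical, and the hop limit simply stands in for "at some point, you'd expect to stop":

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Toy network-of-networks discovery fan-out with a hop limit. */
public class FederatedDiscovery {

    /** Something we can ask about a patient: a regional network, or a network of networks. */
    public interface Network {
        String name();
        /** Communities this network knows to hold records for the patient. */
        Set<String> locationsFor(String patientId);
        /** Other networks this one would ask on our behalf. */
        List<Network> peers();
    }

    /** Ask a starting set of networks, letting each level ask its peers, up to maxHops levels deep. */
    public static Set<String> locate(String patientId, List<Network> startingPoints, int maxHops) {
        Set<String> found = new LinkedHashSet<>();
        Set<String> alreadyAsked = new LinkedHashSet<>();
        List<Network> frontier = startingPoints;
        for (int hop = 0; hop < maxHops && !frontier.isEmpty(); hop++) {
            List<Network> next = new ArrayList<>();
            for (Network network : frontier) {
                if (!alreadyAsked.add(network.name())) {
                    continue; // don't ask the same network twice
                }
                found.addAll(network.locationsFor(patientId));
                next.addAll(network.peers());
            }
            frontier = next;
        }
        return found;
    }
}
```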

There are some other issues to address.  Just because we got the new provider and the old provider synchronized doesn't mean everyone else is.  Who has that responsibility?  That's an implementation decision.  It could be the original provider, or it could be the new provider.  Since the new provider is gaining the benefit, one could argue it's their responsibility to notify other networks that have already seen the patient that they are now seeing the patient.  That would be the way I'd implement it.

Note: This doesn't have to be perfect.  It has to be good enough to work.  Perfecting the algorithm for record location to ensure the right balance of performance and accuracy in the RLS is going to take time.  But we can certainly build something that gets the right networks talking to each other.




Saturday, April 20, 2019

Why Software Engineering is still an art

Software engineering isn't yet a science.  In science, you have a bunch of experimental procedures that one can describe, and processes that one can follow, and hopefully two people can reproduce the same result (unless of course we are talking about medical research experiments ;-( ).

Today, I wanted to add some processes to my build.  I'm using Maven (3.6.0), with Open JDK 11.0.2.  I wanted to run some tests over my code to evaluate quality.  Three hours later, and I'm still dealing with all the weirdness.


  1. rest-assured (a testing framework) uses an older version of JAXB because it doesn't want to force people to move to JDK 8 or later.
  2.  JAXB 2.22 isn't compatible with some of the tools I'm using (AOP and related) in Spring-Boot and elsewhere.
  3. I have an extra spring-boot starter dependency I can get rid of because I don't need it, and won't ever use it.  It got there because I was following someone else's template (it's gone now).
  4. FindBugs was replaced with SpotBugs (gotta check the dates on my references), so I wasted an hour on a tool that's no longer supported.
  5. To generate my code quality reports, I have to go clean up some javadoc in code I'm still refactoring.  I could probably just figure out how to run the quality reports in standalone, but I actually want the whole reporting pipeline to work in CI/CD (which BTW, is Linux based, even though I develop on Windoze).
  6. The maven javadoc plugin doesn't work with JDK 11 in some versions, but if I upgrade to the latest, maybe it will work, because a bug fix was backported to JDK 11.0.3.
  7. And even then, the modules change still needs a couple of workarounds.
In the summers during college, I worked in construction with my father.  Imagine, if in building the forms for the fountain in the center of the lobby (pictured to the right), I could only get rebar from one particular supplier that would work with the holes in the forms.  And to drill the holes, I had to go to the hardware store to purchase a special brand of drill.  Which I would then buy an adapter for, and take part of it apart in a way that was documented by one guy somebody on the job-site knew, so that I could install the adapter to use the special drill bit.  And then we had to order our concrete in a special mix from someone who had lime that was recently mined from one particular site, because the previous batch had some weird contaminants that would only affect our job site.

Yeah, that's not what I had to do, and it came out great.

Yet, that's basically exactly what I feel like I'm doing some days when I'm NOT writing code.  We've got tools to run tools to build tools to build components to build systems that can be combined in ways that can do astonishing stuff.  But, building it isn't yet a science.

Why is this so hard?  Why can't we apply the same techniques that were used in manufacturing (Toyota was cited)?  As a friend of mine once said: in software, there are simply more moving parts (more than a billion).  That's about a handful of orders of magnitude more.


   Keith

Tuesday, April 16, 2019

Juggling FHIR Versions in HAPI

It happens every time.  You target one version of FHIR, and it turns out that someone needs to work with a newer or older (but definitely different) version.  It's only about 35 changes that you have to make, but through thousands of lines of code.  What if you could automate this?

Well, I've actually done something like that using some Java static analysis tools, but I have a quicker way to handle that for now.

Here's what I did instead:

I'm using the Spring Boot launcher with some customizations.  I added three filter beans to my launcher.  Let's just assume that my server handles the path /fhir/* (it's actually configurable).

  1. A filter registration bean which registers a filter for /fhir/dstu2/* and effectively forwards content from it converted from DSTU2 (HL7) to the server's version, and converts the server's response back to DSTU2.
  2. Another filter registration bean which registers a filter for /fhir/stu3/* and effectively forwards content from it converted from STU3 to the server's version, and converts the server's response back to STU3.
  3. Another filter registration bean which registers a filter for /fhir/r4/* and effectively forwards content from it converted from R4 to the server's version, and converts the server's response back to R4.
These are J2EE Servlet Filters rather than HAPI FHIR Interceptors, because right now they really need to be.  HAPI servers aren't really all that happy about being multi-version compliant, although I'd kinda prefer it if I could get HAPI to let me intercept a bit better, so that I could convert in Java rather than pay the serialization costs in and out.
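For those who want to picture it, here's roughly what one of those registrations looks like in a Spring Boot launcher.  This is a stripped-down sketch, not my production code: the filter below only rewrites the path and forwards, and the actual payload conversion (done with the HAPI FHIR structure converters, wrapping the request and response streams) is left as a comment, because the converter class you need depends on the version pair.  It also assumes a Servlet 4.0 container, where Filter.init and destroy have default implementations.

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VersionEndpointConfig {

    /** Expose an STU3 flavor of the server at /fhir/stu3/* (the DSTU2 and R4 beans look the same). */
    @Bean
    public FilterRegistrationBean<VersionConvertingFilter> stu3Endpoint() {
        FilterRegistrationBean<VersionConvertingFilter> bean =
            new FilterRegistrationBean<>(new VersionConvertingFilter("stu3"));
        bean.addUrlPatterns("/fhir/stu3/*");
        return bean;
    }

    /** Sketch of a version-shifting filter: rewrite the path and forward to the native endpoint. */
    public static class VersionConvertingFilter implements Filter {
        private final String clientVersion;

        public VersionConvertingFilter(String clientVersion) {
            this.clientVersion = clientVersion;
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            // e.g. /fhir/stu3/Patient/123 -> /fhir/Patient/123
            String path = request.getRequestURI().substring(request.getContextPath().length());
            String forwardPath = path.replaceFirst("/fhir/" + clientVersion, "/fhir");
            // In the real filter, req and res are wrapped here so that the request body is
            // parsed and converted from the client version to the server's version on the
            // way in, and the response body is converted back on the way out.
            request.getRequestDispatcher(forwardPath).forward(req, res);
        }
    }
}
```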

In addition to converting content, the filters also handle certain HttpServlet APIs a little bit differently.  There are two key places where you need to adjust:

  1. When Content-Type is read from the request or set on the response, you have to translate fhir+xml or fhir+json to xml+fhir or json+fhir and vice versa for certain version pairs.  DSTU2 used the "broken" xml+fhir and json+fhir mime types, and this was fixed in STU3 and later (see the sketch after this list).
  2. You need to turn off gzip compression performed by HAPI, unless you are happy writing a GZip decoder for the output stream (it's simple enough, but more work than you want to take on at first).
Your input stream converter should probably be smart and not try to read the body on HEAD, GET, OPTIONS or DELETE methods (they have no body, so there won't be anything to translate).  However, for PUT, POST, and PATCH, it should.
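The mime type shuffle itself is mechanical.  Here's a small helper along the lines of what my filters do; the class and method names are mine, but the mime types are the real ones from the specs (DSTU2's application/xml+fhir and application/json+fhir versus the later application/fhir+xml and application/fhir+json), and it also captures the "only read a body where one can exist" check from the paragraph above:

```java
import java.util.Set;

/** Small helper for translating FHIR mime types between DSTU2 and later versions. */
public final class FhirMimeTypes {

    private static final Set<String> METHODS_WITH_BODY = Set.of("PUT", "POST", "PATCH");

    private FhirMimeTypes() {
    }

    /** Rewrite an STU3/R4 style content type to the older DSTU2 form. */
    public static String toDstu2ContentType(String contentType) {
        if (contentType == null) {
            return null;
        }
        return contentType
            .replace("application/fhir+xml", "application/xml+fhir")
            .replace("application/fhir+json", "application/json+fhir");
    }

    /** Rewrite a DSTU2 style content type to the STU3/R4 form. */
    public static String toCurrentContentType(String contentType) {
        if (contentType == null) {
            return null;
        }
        return contentType
            .replace("application/xml+fhir", "application/fhir+xml")
            .replace("application/json+fhir", "application/fhir+json");
    }

    /** Only attempt to read and convert a request body for methods that can carry one. */
    public static boolean hasConvertibleBody(String httpMethod) {
        return httpMethod != null && METHODS_WITH_BODY.contains(httpMethod.toUpperCase());
    }
}
```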

Binary could be a bit weird.  I don't have anything that handles creates on Binary resources, and they WOULD almost certainly require special handling; I simply don't know if HAPI has that special handling built in.  It certainly does for output, which has made my life a lot easier for some custom APIs (I simply return a parameter of type Binary, with a mime type of application/json, to get an arbitrary non-FHIR formatted API output), but as I said, I've not looked into the input side.

This is going to make my HL7 V2 Converter FHIR Connectathon testing a lot easier in a couple of weeks.  O&O (and I) are eventually targeting R4, but when I first started on this project, R4 wasn't yet available, so I started in DSTU2.  And like I said, it might be only 35 changes, but against thousands of lines of code?  I'm not ready for that all-nighter at the moment.

It's cheap but not free.  These filters cost in serialization time in and out (adding about 300ms just for the conformance resource), but this is surely a much quicker way to handle a new (or old) version of FHIR for which there are already HAPI FHIR converters, and it at least gets you to a point where you can do integration tests with code that needs it while you make the conversion.  This took about a day and a half to code up and test.  I'd probably still be at the DSTU2 to R4 conversion for the rest of the week on the 5K lines or so that I need for V2 to FHIR conversion.

   Keith



Friday, April 12, 2019

Multiplatform Builds

I'm writing this down so I won't ever forget again: when using a Dev/Build environment pair that is Windows/Unix, plan for the following:

  1. Unix likes \n, Windows \r\n for line endings.  Any file comparisons should ignore differences in line endings.
  2. Unix cares about case in filenames, Windows not so much.  Use lowercase filenames for everything if you can.
  3. Also, if you are generating timestamps and not using UTC when you output them, be sure that your development code runs tests in the same time zone as your build machine.
I'm sure there's more, but these are the key ones to remember; a small sketch for the first and third follows.
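It's plain Java with nothing project-specific about it (the class name is just for illustration): normalize line endings before comparing files, and always format timestamps in UTC.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.time.format.DateTimeFormatter;

/** Helpers for builds that run on both Windows and Unix. */
public final class PortableBuildHelpers {

    private PortableBuildHelpers() {
    }

    /** Compare two text files while ignoring CRLF vs LF differences (item 1). */
    public static boolean sameTextIgnoringLineEndings(Path left, Path right) throws IOException {
        String a = Files.readString(left).replace("\r\n", "\n");
        String b = Files.readString(right).replace("\r\n", "\n");
        return a.equals(b);
    }

    /** Emit timestamps in UTC so the dev machine and the build machine agree (item 3). */
    public static String utcTimestamp() {
        return DateTimeFormatter.ISO_INSTANT.format(Instant.now());
    }
}
```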

   Keith

P.S.  I think this is a start of a new series, on Duh moments.