Friday, May 31, 2019

Semantic Interoperability: What has FHIR Taught Us?

I've been working on expressing various queries using HL7 Version 2, Version 3, and FHIR.  What I've encountered is probably not shocking to some of you.

5 lines of HL7 Version 2 query encoded in HL7 ER7 format (pipes and hats) translate into about
35 lines in HL7 Version 2 XML, which translates into about
92 lines of HL7 Version 3 XML (about triple that of Version 2 XML), which translates into about
95 characters in one line of FHIR.
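
To make that last number concrete, here's a purely illustrative example (not the actual query from that exercise) of what a one-line FHIR search looks like:

    GET [base]/Patient?family=Smith&given=John&birthdate=1970-01-01

Everything the receiver needs in order to answer the question is sitting right there in the query parameters.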

When expanded into Version 3, a system that understands the principles of HL7 Version 3 can completely "understand" the semantics (but someone still needs to write the software to execute them).  In fact, I can turn an HL7 Version 3 XML representation into meaningful English, because the semantics of the message have been so well captured in it.

But people don't talk the way that Version 3 does, nor do computers.  We both use three additional sources of information:

Context: Why are we talking? What's our purpose in having this discussion?  Who are we talking to? What are we talking about?
World Knowledge: This includes models of the world that describe the things that we are talking about.  What do we know?  What does it look like?
Inference: Given what we are told, plus context and world knowledge, what can be inferred from the communication?  It is because of inference that Postel's law can be applied.  DWIM (Do What I Mean), that CPU operation we all wish was in our computers, can be applied when you know what the person could possibly have meant from all available choices.

HL7 Version 2 put the model in the message specification, but didn't tell you what it was talking about at a fine-grained enough level to make it possible for a human to understand the message.  It might be a language, but it's almost completely positional, with no human-readable mnemonics to aid in understanding.
HL7 Version 3 put the detailed model in the message, rather than the message specification, and used a singular, six-part model of the world (Act, Participation, Entity, Role, Act Relationship and Role Relationship) to describe everything.  V3 is clearly a language for computers to speak in, but not really one for humans. The message is completely detailed, but for a developer, there's so much repeated model to wade through that it's hard to find the point.  It's a language that talks about itself as much as it communicates.

FHIR builds on those detailed HL7 Version 3 models, but keeps that information as "World Knowledge" in the FHIR Specification (much the same way that Version 2 did).  What FHIR also did was provide a representation that makes it easy to find the point.  There's just enough granularity in the message to find the code, the identifier, et cetera, for the thing you are looking for.

FHIR goes a step further in its adoption of RESTful protocols, because the most common computer operation is to ask a question given certain inputs and get back the answers.  And there's a protocol for doing that which just about everyone using the web already understands.  It doesn't need OIDs (or anything with dots and numbers in it) to give it meaning.  We automatically know what a query parameter is.  And FHIR said "these are the kinds of things we need to be able to represent" when we ask questions.
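
To sketch what that feels like in code, here's a minimal example using the open-source HAPI FHIR client; the server URL and search values below are made up for illustration:

    import ca.uhn.fhir.context.FhirContext;
    import ca.uhn.fhir.rest.client.api.IGenericClient;
    import org.hl7.fhir.r4.model.Bundle;
    import org.hl7.fhir.r4.model.Patient;

    public class PatientQueryExample {
        public static void main(String[] args) {
            FhirContext ctx = FhirContext.forR4();
            // Any FHIR endpoint will do; this one is a public test server.
            IGenericClient client = ctx.newRestfulGenericClient("http://hapi.fhir.org/baseR4");

            // Ask the question with plain query parameters: no OIDs, just names
            // that anyone who has used the web already understands.
            Bundle results = client.search()
                    .forResource(Patient.class)
                    .where(Patient.FAMILY.matches().value("Smith"))
                    .and(Patient.BIRTHDATE.exactly().day("1970-01-01"))
                    .returnBundle(Bundle.class)
                    .execute();

            System.out.println("Matches found: " + results.getEntry().size());
        }
    }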

Semantic Interoperability?  Pah.  I don't need semantic interoperability.  I need the damn thing to do what I mean, or better yet, do what I'm thinking.  FHIR, at least, and at last, is something I can think in.

     Keith


Friday, May 24, 2019

Who created this UI? It sucks!

As someone who writes regularly, I am often just as frustrated with Microsoft Word (or any other word processor I've ever used) as others report themselves to be with the user interfaces of EHR systems. Even Apple hasn't solved the problems I need solved.

How many clicks does it take to insert a figure reference to the figure below or above?  How much work is it to create a citation for the link I just inserted into the document?  These should be one-button clicks, not the multi-step processes they are today.

Why has this crime against writers continued to persist over decades? Nay, centuries... millennia even.

Word processor designers, here's a very clear specification for what I want:

Cross References

Given I have turned the option on, when I type the words "the figure|table|section below" or "the figure|table|section above" and there is a figure or table citation within the current section, insert a reference to it; or, if it's a section, provide me with a list of sections to choose from that I can ignore if I want (so that if I continue typing, it just disappears).  And if I hit undo, treat the automatic insertion as the operation I want undone.
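
For what it's worth, the trigger half of that behavior isn't hard; here's a minimal sketch in Java with hypothetical class and method names (actually finding the nearest figure or table is, of course, the word processor's job):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CrossReferenceTrigger {
        // Matches phrases such as "the figure below" or "the table above".
        private static final Pattern TRIGGER = Pattern.compile(
                "\\bthe (figure|table|section) (below|above)\\b", Pattern.CASE_INSENSITIVE);

        /** Returns "figure", "table", or "section" when the text asks for a reference, or null if it doesn't. */
        public static String detectReferenceKind(String recentlyTypedText) {
            Matcher m = TRIGGER.matcher(recentlyTypedText);
            return m.find() ? m.group(1).toLowerCase() : null;
        }
    }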

Hyperlinked Bibliography

Given I have turned the option on, when I insert a hyperlink, reuse an existing citation source for the link if one already exists, or add a new one.  Find the individual author and creation date in the page data or metadata, or use a corporate author for the web site.  Include the URL in the citation.  If the URL includes a fragment identifier, find the text where that identifier appears and add it to the title of the reference (e.g., "Hyperlinked Bibliography" in "Who created this UI? It sucks!").  If the link is to a page in a PDF (e.g., using #page=9 in PDF links) or other media format, treat it as a "document from a website"; otherwise use "website" as the reference style.  Use the page title from the <title> tag in the page header.  Prompt me for missing information, but again, let this prompt dialog NOT interfere with my current work, and have it go away if I continue to type.  Same deal on undo here: if I say undo, first undo the automatic insertion.
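
And a similarly hypothetical sketch of just one of the rules above, picking the reference style from the link itself; only the #page= convention comes from the spec above, and the class and method names are invented for illustration:

    import java.net.URI;

    public class HyperlinkCitationHelper {
        /** Chooses "document from a website" when the link points into a document (e.g. #page=9), otherwise "website". */
        public static String referenceStyle(String hyperlink) {
            URI uri = URI.create(hyperlink);
            String fragment = uri.getFragment();
            boolean pointsIntoDocument = fragment != null && fragment.startsWith("page=");
            return pointsIntoDocument ? "document from a website" : "website";
        }

        public static void main(String[] args) {
            System.out.println(referenceStyle("https://example.com/guide.pdf#page=9")); // document from a website
            System.out.println(referenceStyle("https://example.com/post.html"));        // website
        }
    }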

Finally: Stop turning on the display of formatting marks when I want to insert an index reference term.

   Keith

Friday, May 17, 2019

CDMA or GSM? V3 or FHIR? Floor wax or dessert topping?

One of the issues being raised about TEFCA is related to which standards should be used for record location services.  I have to admit, this is very much a question where you can identify the sides by how much you've invested in a particular infrastructure that is already working, rather than a question of which technology we'd all like to have. It's very much like the debate around CDMA and GSM.

If you ask me where I want to be, I can tell you right now: it's on FHIR.  If you ask me what's the most cost-effective solution for all involved, I'm going to tell you that the HL7 V3 transactions used by IHE are probably more cost-effective and quicker to implement for all involved overall, because it's going to take time to make the switch to FHIR, and more networks are using V3 (or even V2) transactions.  And even though more cost-effective for the country, it's surely going to hurt some larger exchanges that don't use it today.  CommonWell uses HL7 V2 and FHIR for patient identity queries if I remember correctly, while Carequality, Surescripts and others use the HL7 V3-based IHE XCPD transactions ... which are actually designed to support federated record location.  As best I know, more state and regional health information exchanges support the IHE XCPD transactions than those exchanging data using V2 or FHIR.

Whatever gets chosen, it's gonna put some hurt on one group or another.  My gut instinct is that choosing FHIR is going to hurt a lot more exchanges than choosing XCPD at this time.

And this is where the debate about V3 and FHIR differs from the CDMA and GSM debate, because FHIR is closer to 4G or 5G in the whole discussion.  Some parts of FHIR, such as querying for patient identity, are generally widely available.  But complexity comes in when you get into using these transactions in a record location service, as I've described previously, and the necessary capabilities to support "record location services" in FHIR haven't been formalized by anyone ... yet.  This is where FHIR is more like 5G.

Just like 5G, this will happen eventually.  But do we really want to focus all of our attention on this, or do we want to get things up and running and give organizations the time they need to make the switch?  I think the best answer in this case is to make a very clear statement: this is where we are today (V3), and this is where we will be going in 2-3 years (FHIR), and make it stick.  And as I've said in the past, don't make it so hard for organizations to pre-adopt new standards.

Policy doesn't always work that way ... just look at what happened with ICD-10, or maybe even Claims Attachments.  But I think where we are today is a little bit different: the industry really wants to move forward, but would also like to have some room to breathe in order to move forward without stumbling along the way.  Do we really want a repeat of Meaningful Use?

We've seen how too much pressure can cause stumbles, and I think trying to use FHIR for record location services is just moving a little too fast.  I'll be happy to be proven wrong, and eat the floor wax, but frankly, right now, I just don't see it.

   Keith




Monday, May 13, 2019

Terminology Drift in Standards Development Organizations

I used to work for a company that published dictionaries, and one of my colleagues was a dictionary editor.  As he related to me, the definition of a term doesn't come from a dictionary, but rather from use.  A dictionary editor's job is to keep faithful track of that use and report it effectively.  By documenting the use, one can hope to ensure consistent future use, but languages evolve, and the English language evolves more than many.  I've talked about this many times on this blog.

It also happens to be the common language of most standards development organizations in Health IT (of course, I, as an English speaker, would say that, but the research also reflects that fact).

The evolution of special terms and phrases in standards is a particular challenge not only to standards developers, but especially to standards implementers.  As I look through IHE profiles (with a deep understanding of IHE history), I think of phrases such as "Health Information Exchange", "XDS Affinity Domain", and "Community", which, in IHE parlance, all mean essentially the same thing at the conceptual level that most implementers operate at.

This is an artifact of Rishel's law: "When you change the consensus community, you change the consensus" (I first heard it quoted here, and haven't been able to find any earlier source, so I named it after Wes).

As time passes, our understanding of things changes, and that change affects the consensus.  Even if the people in the consensus group haven't changed, their understanding has, and so the definition has changed.

We started with "Health Information Exchange", which is a general term we all understood (oh so long ago).  But then, we had this concept of a thing that was the exchange that had to be configured, and that configuration needed to be associated with XDS.  Branding might have been some part of the consideration, but I don't think it was the primary concern, I think the need to include XDS in the name of the configuration simply came out of the fact that XDS was what we were working on.  So we came up with the noun phrase "XDS Affinity Domain Configuration", which as a noun phrase parses into a "thing's" configuration, and which led to the creation of the noun phrase "XDS Affinity Domain" (or perhaps we went the other way and started with that phrase and tacked configuration onto it).  I can't recall. I'll claim it was Charles' fault, and I'm probably not misremembering that part.  Charles does branding automatically without necessarily thinking about it.  I just manage to do it accidentally.

In any case, we have this term XDS Affinity Domain Configuration, which generally means the configuration associated with an XDS Affinity Domain, which generally means some part of the governance associated with a Health Information Exchange using XDS as a backbone.

And then we created XCA later, and had to explain things in terms of communities, because XCA was named Cross Community Access rather than Cross Domain Access.  And so now Affinity Domain became equivalated (yeah, that's a word) with Community.

And now, in the US, we have a formal definition for "health information network" as the noun to use in place of the way we were using "health information exchange" more than a decade and a half ago (yes, it was really that long).

So, how's a guy to explain that all this means the same thing (generally) to someone who is new to all this stuff and hasn't lived through the history, without delving into the specialized details of where it came from and why?  I'm going to have to figure this out.  This particular problem is specific to IHE, but I could point to other examples in HL7, ISO, ASTM and openEHR.

The solution, it would seem, would be to hire a dictionary editor.  Not having a grounding in our terminology would be a plus, but the problem there is that we'd need a new one periodically as they learned too much and became less useful.


Thursday, May 9, 2019

It's that time again...

The next person I'm going to be talking about is responsible for open source software that has impacted the lives of tens of millions of patients (arguably even hundreds of millions), tens of thousands (perhaps even hundreds of thousands) of healthcare providers, and certainly thousands of developers around the world.

The sheer volume of code committed to the projects he's led well exceeds 50 million lines.  He's been working in the open source space for nearly a decade and a half, most of that time supporting the work of the university hospital that employed him.

It's kind of difficult to tell a back story about him that doesn't give it completely away (and many who've used the work he's been driving already know who I'm talking about).  I'm told he's an accomplished guitar player, and I also hear that his latest album of spoken word and beat poetry will be coming out soon.

I can honestly say I've used much of the open source code he's been driving forward in four different positions for three different employers, through at least eight different releases, and I swear by the quality of the work that goes into it.  I'm not alone; the work has been downloaded or forked by several thousand developers all over the world.

I know that he sort of fell into this open source space a bit by accident: the person who had been driving one of the HL7 open source projects moved on to greener pastures, and he took up the reins.  Since then, he has taken the simplicity and usability of that open source project into a second one that has driven HL7 FHIR on towards greater heights.  Without some of the work he's done, I can honestly state the FHIR community would have been much poorer.

Without further ado:


This certifies that
James Agnew of
Simpatico Intelligent Systems, Inc.



has hereby been recognized for keeping smiles on the faces of HL7 integrators for the better part of two decades.

HAPI on FHIR is perhaps the most widely known Java FHIR server implementation available, and HAPI HL7 V2 has been used in numerous projects to parse and integrate with HL7 Version 2 messages, and is included in one of the most widely used open source V2 integration engines (formerly known as Mirth Connect, now NextGen Connect).  James has also contributed to other open source efforts supporting HL7 FHIR and HL7 Version 2 messaging.