Monday, June 17, 2019

Telling Time the HL7 Way

If you've never been to an HL7 Working group meeting, you'll run into some shorthand that long-time HL7'ers know and that you'll have to catch up on.  The first is how we split up the day.

Officially, the day has 4 quarters, with breakfast, lunch and two breaks:


  • Breakfast starts 8-ish, and goes until 9:00.
  • Q1 goes from 9-10:30am.
  • Morning Break is 10:30-11am.
  • Q2 is from 11am - 12:30pm.
  • Lunch goes from 12:30-1:45pm.  There's plenty of time to eat, call home, and take a short meeting.
  • Q3 is 1:45-3pm and is a "short" quarter by 15 minutes.
  • Cookie break is 3-3:30pm.
  • Q4 goes from 3:30-5pm.


A "Q0" meeting (not part of the official nomenclature, but still well-understood) is before breakfast, usually 7-ish, but could also be "overlapping" with breakfast.

"Q5" and "Q6" are generally "after 5" till about 6:30-ish, and after "Q5" till whenever...  This is often where some good work happens (some would even say "the real work").

If you are doing HL7 meeting stuff from Q0 to Q6, you still have 12 hours for your day job and sleep.  Your mileage may vary.

Monday after 5pm is the cochairs' dinner.  If you want to hang with a cochair, they are likely busy from 5-7:30 or so Monday night.

Wednesday starting around 5:30 is the HL7 Reception.  This goes until about 7:30.

The first half of Monday in September is the Plenary session.

Monday and Tuesday at the WGM in January are the two Payer Summit days.

Connectathons are Saturday and Sunday before the Working group meeting.  Quarters?  Yeah, kinda.  We have them; food shows up at the right times.  But it's a Connectathon: software is ready when it's ready.  Some have been known to work until Q8 or 9, and maybe even start at Q -1.

I wanna say board meetings happen somewhere in Q3 and 4 on Tuesdays, but it's really up to the chair.

Technical Steering Committee (a governance committee) meets Saturday and Sunday.
International Council is Sunday and Thursday afternoon.
Education Facilitators Lunch is Monday most meetings.

   Keith



Friday, June 7, 2019

What's your Field of View?

When you look at something under a microscope, what you see varies based on the level of magnification.  How much you can see, and how much fine detail you can distinguish, depends essentially on your field of view.

One of the things that I've been looking at recently is personal health data stored in consumer apps and wearable devices.  Most of the details here amount to a FHIR Observation of some sort, with a code to describe the data element (and a value as a code, or quantity, or perhaps even a waveform).  We know that codes are computer friendly, but they aren't people friendly (and software developers ARE people, regardless of what others might tell you).
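
To make that concrete, here's a minimal sketch of what one of these observations looks like when you build it in code.  It uses the HAPI FHIR R4 model classes, and the LOINC heart rate code is chosen purely as an illustration, not as a statement about any particular device's data:

   import org.hl7.fhir.r4.model.CodeableConcept;
   import org.hl7.fhir.r4.model.Coding;
   import org.hl7.fhir.r4.model.Observation;
   import org.hl7.fhir.r4.model.Quantity;

   public class HeartRateExample {
      public static Observation heartRate(int beatsPerMinute) {
         Observation obs = new Observation();
         obs.setStatus(Observation.ObservationStatus.FINAL);
         // The "what is this?" lives in a terminology code (LOINC 8867-4, Heart rate),
         // not in anything a developer recognizes at a glance.
         obs.setCode(new CodeableConcept().addCoding(
            new Coding("http://loinc.org", "8867-4", "Heart rate")));
         // The value carries its units as a UCUM code.
         obs.setValue(new Quantity()
            .setValue(beatsPerMinute)
            .setUnit("beats/minute")
            .setSystem("http://unitsofmeasure.org")
            .setCode("/min"));
         return obs;
      }
   }

What a consumer app developer wants looks more like heartRate: 72.  All of the system URIs and codes above are what make the data computable and exchangeable, and they are also what make it unapproachable.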

So, when everything is an observation, it gets messy for software developers who want nice, easy-to-remember mnemonics and JSON that is focused right where they are focused.  Those are things FHIR can capture and store, but maybe FHIR isn't actually the right place for those working in this space.

PCHA and Continua have some specifications in this space too, but again, NOT easy for developers to use, because once again, too much focus on the terminology, and not on what the developer is trying to do.

We need to find a way to move terminology out of the way.  Open mHealth looks like it's in a better place for this space, but folks who've invested heavily in FHIR and other standards don't agree.  But wait, what if those developers aren't my audience?  What then?

It all depends on your field of view.  And mine, as usual, is many and varied.

   -- Keith



Wednesday, June 5, 2019

Best practices for Logging and Reporting errors in FHIR

Over the years I've built a number of micro-services implementing and using FHIR APIs, and along the way I've developed some best practices for logging and reporting the errors that occur.  Some of these follow.


Logging


  1. If a call to your API is not validly formed, log this as a warning in your service's log.  You detected an error in user input, and handled it properly.  This is NOT an error in your application, it is an error in the calling application.  You DO want to WARN someone that the calling application isn't calling your application correctly.  You don't want to alarm them that your application isn't working right, because in fact, it is working just fine.
  2. If something happened in a downstream API call that prevents the proper functioning of your application (e.g., a database read error), this is improper operation of the system, and is an ERROR preventing your service from operating (even though there's nothing wrong in the service itself), and should be logged as such.  
  3. IF you implement retry logic (see the sketch after this list), then:
    1. Log as warnings any operation that failed but finally succeeded through retry logic.
    2. Log as errors any operation that failed even after retrying.
  4. If an exception was the cause of an error, consider:
    1. If you KNOW the root cause (a value is malformed), say so in the log message, but don't report the stack trace. This will cut unneeded information from your logs, which you will be thankful for later. For example:
      try {
         int value = Integer.parseInt(fooQueryParameter.getValue());
      } catch (NumberFormatException nfex) {
         LOGGER.warn(
            "Foo query parameter ({}) must be a number.",
            fooQueryParameter.getValue());
      }
    2. If you don't know why the error occurred (there could be multiple reasons), do report the stack trace in the log:
      try (PreparedStatement st = con.prepareStatement(query)) {
         ResultSet result = st.executeQuery();
      } catch (SQLException sqlex) {
         LOGGER.error("Unexpected SQL Exception executing {}",
            query, sqlex);
         throw new InternalErrorException(...);
      }
    3. Consider pruning the stack trace at the top or bottom.  Prune from the bottom because you know your entry points, and the infrastructure before them (e.g., Tomcat, Wildfly) probably isn't that useful to you.  Prune from the top because details beyond the call your code made that threw the exception aren't necessarily something you can deal with.  
  5. DO report the query used (and where possible, parameter values in the query) in the log. Consider also reporting the database name when using multiple databases. I have often seen database exceptions like "parameter 1 has invalid type" with no query included, and no values.
  6. Consider how you might implement retry logic in cases of certain kinds of exceptions (e.g., database connection errors).
  7. Use delimiters in your logging output format to make it easier to read them in other tools (e.g., spreadsheets).  I often use tab delimiters between the different items in my logging configuration: e.g.,
    %d{yyyy-MM-dd HH:mm:ss.SSS}\t[%thread]\t%-5level\t%logger{36}\t- %msg%n
  8. Consider reporting times in the log in a timezone that makes sense for your implementation (and more importantly, to your customer).  When your customer reports they had a problem at 9:33am, you want to be able to find issues at that time in the logs without having to compute offsets (e.g., from GMT ... do you know yours?).
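
Since items 3 and 6 come up together so often, here's a rough sketch of the retry pattern I have in mind: warn when an operation eventually succeeds, and log an error (with the stack trace) only when the retries are exhausted.  The names (executeWithRetry, the description string) and the retry count are mine, purely for illustration:

   import java.sql.SQLException;
   import java.util.concurrent.Callable;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class RetryingDao {
      private static final Logger LOGGER = LoggerFactory.getLogger(RetryingDao.class);
      private static final int MAX_ATTEMPTS = 3;

      /** Runs the operation, retrying when it fails with a SQLException (e.g., a dropped connection). */
      public static <T> T executeWithRetry(String description, Callable<T> operation)
            throws Exception {
         SQLException lastFailure = null;
         for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
               T result = operation.call();
               if (attempt > 1) {
                  // Failed at least once but ultimately succeeded: a WARNING (item 3.1).
                  LOGGER.warn("{} succeeded on attempt {} after {}",
                     description, attempt, lastFailure.toString());
               }
               return result;
            } catch (SQLException sqlex) {
               lastFailure = sqlex;
            }
         }
         // Failed even after retrying: an ERROR, with the stack trace (item 3.2).
         LOGGER.error("{} failed after {} attempts", description, MAX_ATTEMPTS, lastFailure);
         throw lastFailure;
      }
   }

A call site might look like executeWithRetry("load patient 123", () -> dao.loadPatient("123")), where dao and loadPatient stand in for whatever your service actually does.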

Reporting Errors in OperationOutcome

  1. Use 400 series errors like 400 or 422 when the fault is on the part of the client (e.g., invalid operation syntax, or unsupported query combination).  NOTE: HAPI FHIR will report unsupported query combinations as a 500 series error (which I fix using an interceptor or filter).
  2. Use 500 series errors like 500, 503 or 507 when the fault is on the part of your service.
  3. DO tell the calling user what the problem is in easy-to-understand language, and if possible, include corrective action they can perform to address the issue.
  4. DO NOT include content from Exception.getMessage().
    I sometimes see this:
    catch (Exception e) {
      outcome.addIssue()
        .setSeverity(IssueSeverity.ERROR)
        .setDiagnostics(e.getMessage());
      throw ...
    }
    This is not good behavior.  You often have no clue what is in e.getMessage(), and often no control.  It can leak information about your technology implementation back to the API user, which can expose vulnerabilities (see below).  A sketch of a safer pattern appears after this list.
  5. DO NOT include the stack trace in the OperationOutcome.  This belongs in your logs, but not in the user response.  See the OWASP Error Handling cheat sheet.
  6. For database errors, you might want to report the kind of database (e.g., patient chart, provider list), but not the exact name of the database.  Again, you want to be clear, but avoid leaking implementation details.
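
For contrast with the anti-pattern in item 4, here's a sketch of the pattern I'd use instead.  It uses the HAPI FHIR R4 model classes; the error code (DB001) and the wording are invented for illustration.  The stack trace goes to the log, and the caller gets a stable, non-leaky message:

   import org.hl7.fhir.r4.model.OperationOutcome;
   import org.hl7.fhir.r4.model.OperationOutcome.IssueSeverity;
   import org.hl7.fhir.r4.model.OperationOutcome.IssueType;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class ProviderLookup {
      private static final Logger LOGGER = LoggerFactory.getLogger(ProviderLookup.class);

      OperationOutcome databaseUnavailable(Exception cause) {
         // Full detail, including the stack trace, belongs in the log ...
         LOGGER.error("DB001: Cannot access Provider database", cause);
         // ... while the caller gets a human readable message with an error code,
         // and nothing copied from cause.getMessage().
         OperationOutcome outcome = new OperationOutcome();
         outcome.addIssue()
            .setSeverity(IssueSeverity.ERROR)
            .setCode(IssueType.EXCEPTION)
            .setDiagnostics("DB001: The provider directory is temporarily unavailable. "
               + "Please retry later; if the problem persists, contact support.");
         return outcome;
      }
   }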

Use Error Codes

Finally, consider creating error codes (which can be reported to the user).  Report the error code WITH the human readable message.  The value of unique error codes is that:
  1. The error code does tell you where in your code the error occurred, but doesn't expose implementation details.
  2. Error codes can be associated with messages in ways that enable translation to multiple languages.
  3. Error codes can also be associated with actions that users can take to correct the error (if it is on their part), or which your operations staff can take to either further diagnose OR correct the error.  For example:  DB001: Cannot access Provider database.
    Then, in your operations guide, you can say things like: "DB001: This message indicates a failure to connect to the Provider database.  Verify that the database services are up and running for the provider database for the customer site ..."  A simple catalog like the sketch below can keep the codes, messages, and actions together.
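
One way to build that catalog is a simple enum.  The codes and wording here are invented for illustration; a real catalog would be driven by your own services and operations guide:

   /** Illustrative catalog of error codes, user-facing messages, and operator actions. */
   public enum ErrorCode {
      DB001("Cannot access Provider database.",
            "Verify that the database services for the provider database are up and running."),
      REQ001("The 'count' parameter must be a positive integer.",
             "No operator action needed; the caller must correct the request.");

      private final String message;
      private final String operatorAction;

      ErrorCode(String message, String operatorAction) {
         this.message = message;
         this.operatorAction = operatorAction;
      }

      /** What goes into OperationOutcome.diagnostics, prefixed with the code itself. */
      public String userMessage() {
         return name() + ": " + message;
      }

      /** What the operations guide (or a runbook lookup) says about this code. */
      public String operatorAction() {
         return operatorAction;
      }
   }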


Friday, May 31, 2019

Semantic Interoperability: What has FHIR Taught Us?

I've been working on expressing various queries using HL7 Version 2, Version 3, and FHIR.  What I've encountered is probably not shocking to some of you.

5 lines of an HL7 Version 2 query encoded in HL7 ER7 format (pipes and hats) translates into about
35 lines in HL7 Version 2 XML, which translates into about
92 lines of HL7 Version 3 XML (about triple that of Version 2 XML), which translates into about
95 characters in one line of FHIR.

When the query is expanded into Version 3, a system that understands the principles of HL7 Version 3 can completely "understand" its semantics (but someone still needs to write the software to act on them).  In fact, I can turn an HL7 Version 3 XML representation into meaningful English, because the semantics of the message have been so well captured in it.

But, people don't talk the way that Version 3 does, nor do computers.  We both use three additional sources of information:

Context: Why are we talking? What's our purpose in having this discussion?  Who are we talking to? What are we talking about?
World Knowledge: This includes models of the world that describe the things that we are talking about.  What do we know?  What does it look like?
Inference: Given what we are told, plus context and world knowledge, what can be inferred from the communication?  It is because of inference that Postel's law can be applied.  DWIM (Do What I Mean), that CPU operation we all wish was in our computers, can be applied when you know what it is the person could possibly have meant from all available choices.

HL7 Version 2 put the model in the message specification, but didn't tell you what it was talking about at a fine-grained enough level to make it possible for a human to understand the message.  It might be a language, but it's almost completely positional, with no human readable mnemonics to aid in understanding.
HL7 Version 3 put the detailed model in the message, rather than the message specification, and used a singular, 6 part model of the world (Act, Participation, Entity, Role, Act Relationship and Role Relationship) to describe everything.  V3 is clearly a language for computers to speak in, but not one for humans really. The message is completely detailed, but for a developer, there's so much repeated model to wade through that it's hard to find the point.  It's a language that talks about itself as much as it communicates.

FHIR builds on those detailed HL7 Version 3 models, but keeps that information as "World Knowledge" in the FHIR Specification (much the same way that Version 2 did).  What FHIR also did was provide a representation that makes it easy to find the point.  There's just enough granularity in the message to find the code, the identifier, et cetera, for the thing you are looking for.

FHIR goes a step further in its adoption of RESTful protocols, because the most common computer operation is to ask a question given certain inputs and get back the answers.  And there's a protocol for how to do that which just about everyone using the web already understands.  It doesn't need OIDs (or anything with dots and numbers in it) to give it meaning.  We automatically know what a query parameter is.  And FHIR said "these are the kinds of things we need to be able to represent" when we ask questions.
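
As an illustration (not the exact query from the line-count comparison above), a FHIR search is just an HTTP GET with readable parameter names, something like GET [base]/Patient?family=Smith&birthdate=1970-05-01, and with HAPI FHIR's client it reads much the same way.  The server URL and search values here are placeholders:

   import ca.uhn.fhir.context.FhirContext;
   import ca.uhn.fhir.rest.client.api.IGenericClient;
   import org.hl7.fhir.r4.model.Bundle;
   import org.hl7.fhir.r4.model.Patient;

   public class PatientSearchExample {
      public static void main(String[] args) {
         FhirContext ctx = FhirContext.forR4();
         IGenericClient client = ctx.newRestfulGenericClient("https://example.org/fhir");

         // Roughly: GET [base]/Patient?family=Smith&birthdate=1970-05-01
         Bundle results = client.search()
            .forResource(Patient.class)
            .where(Patient.FAMILY.matches().value("Smith"))
            .and(Patient.BIRTHDATE.exactly().day("1970-05-01"))
            .returnBundle(Bundle.class)
            .execute();

         System.out.println(results.getEntry().size() + " matches");
      }
   }

No OIDs, no wrapper model to wade through: the question and its parameters are right there in the URL.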

Semantic Interoperability?  Pah.  I don't need semantic interoperability.  I need the damn thing to do what I mean, or better yet, do what I'm thinking.  FHIR, at least, and at last, is something I can think in.

     Keith



Friday, May 24, 2019

Who created this UI? It sucks!

As someone who writes regularly, I am often just as frustrated with Microsoft Word (or any other word processor I've ever used) as others report themselves to be with the user interfaces of EHR systems. Even Apple hasn't solved the problems I need solved.

How many clicks does it take to insert a figure reference to the figure below or above?  How much work is it to create a citation for the link I just inserted into the document?  These should be one-button clicks, not the multi-step processes they are today.

Why has this crime against writers continued to persist over decades? Nay, centuries... millennia even.

Word processor designers, here's a very clear specification for what I want:

Cross References

Given I have turned the option on, when I type the words "the figure|table|section below" or "the figure|table|section above" and there is a figure or table citation within the current section, insert a reference to it, or, if it's a section, provide me with a list of sections to choose from that I can ignore if I want (so that if I continue typing, it just disappears).  And if I hit undo, treat the automatic insertion as the operation I want undone.

Hyperlinked Bibliography

Given I have turned the option on, when I insert a hyperlink, add a new citation source for the link, or reuse an existing one if it already exists.  Find the individual author and creation date in the page data or metadata, or use a corporate author for the web site.  Include the URL in the citation.  If the URL includes a fragment identifier, find the text where that identifier appears and add it to the title of the reference (e.g., "Hyperlinked Bibliography" in "Who Created this UI? It sucks!").  If the link is to a page in a PDF (e.g., using #page=9 in PDF links) or other media format, treat it as a "document from a website", otherwise use "website" as the reference style.  Use the page title from the <title> tag in the page header.  Prompt me for missing information, but again, let this prompt dialog NOT interfere with my current work, and go away if I continue to type.  Same deal on undo here.  If I say undo, first undo the automatic insertion.

Finally: Stop turning on display formatting marks when I want to insert an index reference term.

   Keith


Friday, May 17, 2019

CDMA or GSM? V3 or FHIR? Floor wax or dessert topping?

One of the issues being raised about TEFCA is related to which standards should be used for record location services.  I have to admit, this is very much a question where you can identify the sides by how much you've invested in a particular infrastructure that is already working, rather than a question of which technology we'd all like to have. It's very much like the debate around CDMA and GSM.

If you ask me where I want to be, I can tell you right now, it's on FHIR.  If you ask me what's the most cost effective solution for all involved, I'm going to tell you that the HL7 V3 transactions used by IHE are probably more cost effective and quicker to implement overall, because it's going to take time to make the switch to FHIR, and more networks are using V3 (or even V2) transactions.  And even though more cost effective for the country, it's surely going to hurt some larger exchanges that don't use it today.  CommonWell uses HL7 V2 and FHIR for patient identity queries, if I remember correctly, while Carequality, Surescripts and others use the HL7 V3 based IHE XCPD transactions ... which are actually designed to support federated record location.  As best I know, more state and regional health information exchanges support the IHE XCPD transactions than those exchanging data using V2 or FHIR.

Whatever gets chosen, it's gonna put some hurt on one group or another.  My gut instinct is that choosing FHIR is going to hurt a lot more exchanges than choosing XCPD at this time.

And this is where the debate about V3 and FHIR differs from the CDMA and GSM debate, because FHIR is closer to 4G or 5G in the whole discussion.  Some parts of FHIR, such as querying for Patient identity, are generally widely available.  But complexity comes in when you get into using these transactions in a record location service, as I've described previously, and the necessary capabilities to support "record location services" in FHIR haven't been formalized by anyone ... yet.  This is where FHIR is more like 5G.

Just like 5G, this will happen eventually.  But do we really want to focus all of our attention on this, or do we want to get things up and running and give organizations the time they need to make the switch?  I think the best answer in this case is to make a very clear statement: this is where we are today (V3), and this is where we will be going in 2-3 years (FHIR), and make it stick.  And as I've said in the past, don't make it so hard for organizations to pre-adopt new standards.

Policy doesn't always work that way ... just look at what happened with ICD-10, or maybe even Claims Attachments.  But I think where we are today is a little bit different: the industry really wants to move forward, but would also like some room to breathe so it can do so without stumbling along the way.  Do we really want a repeat of Meaningful Use?

We've seen how too much pressure can cause stumbles, and I think trying to use FHIR for record location services is just moving a little too fast.  I'll be happy to be proven wrong, and eat the floor wax, but frankly, right now, I just don't see it.

   Keith





Monday, May 13, 2019

Terminology Drift in Standards Development Organizations

I used to work for a company that published dictionaries, and one of my colleagues was a dictionary editor.  As he related to me, the definition of a term doesn't come from a dictionary, but rather from use.  A dictionary editor's job is to keep faithful track of that use and report it effectively.  By documenting the use, one can hope to ensure consistent future use, but languages evolve, and the English language evolves more than many.  I've talked about this many times on this blog.

It also happens to be the common language of most standards development organizations in Health IT (of course, I, as an English speaker, would say that, but the research also reflects that fact).

The evolution of special terms and phrases in standards is a particular challenge not only to standards developers, but especially to standards implementers.  As I look through IHE profiles (with a deep understanding of IHE History), I think on phrases such as "Health Information Exchange", "XDS Affinity Domain", and "Community", which in IHE parlance, all mean essentially the same thing at the conceptual level that most implementers operate at.

This is an artifact of Rishel's law: "When you change the consensus community, you change the consensus" (I first heard it quoted here, and haven't been able to find any earlier source, so I named it after Wes).

As time passes, our understanding of things changes, and that change affects the consensus.  Even if the people in the consensus group haven't changed, their understanding has, and so the definition has changed.

We started with "Health Information Exchange", which is a general term we all understood (oh so long ago).  But then we had this concept of a thing that was the exchange that had to be configured, and that configuration needed to be associated with XDS.  Branding might have been some part of the consideration, but I don't think it was the primary concern; I think the need to include XDS in the name of the configuration simply came out of the fact that XDS was what we were working on.  So we came up with the noun phrase "XDS Affinity Domain Configuration", which as a noun phrase parses into a "thing's" configuration, and which led to the creation of the noun phrase "XDS Affinity Domain" (or perhaps we went the other way and started with that phrase and tacked configuration onto it).  I can't recall.  I'll claim it was Charles' fault, and I'm probably not misremembering that part.  Charles does branding automatically without necessarily thinking about it.  I just manage to do it accidentally.

In any case, we have this term XDS Affinity Domain Configuration, which generally means the configuration associated with an XDS Affinity Domain, which generally means some part of the governance associated with a Health Information Exchange using XDS as a backbone.

And then we created XCA later, and had to explain things in terms of communities, because XCA was named Cross Community Access rather than Cross Domain Access.  And so now Affinity Domain became equivalated (yeah, that's a word) with Community.

And now, in the US, we have a formal definition for health information network as the noun to use in place of the way we were using health information exchange more than a decade and a half ago (yes, it was really that long ago).

So, how's a guy to explain that all this means the same thing (generally) to someone who is new to all this stuff and hasn't lived through the history, without delving into the specialized details of where it came from and why?  I'm going to have to figure this out.  This particular problem is specific to IHE, but I could point to other examples in HL7, ISO, ASTM and openEHR.

The solution, it would seem, would be to hire a dictionary editor.  Not having a grounding in our terminology would be a plus, but the problem there is that we'd need a new one periodically as they learned too much and became less useful.