Friday, May 17, 2019

CDMA or GSM? V3 or FHIR? Floor wax or dessert topping?

One of the issues being raised about TEFCA is related to which standards should be used for record location services.  I have to admit, this is very much a question where you can identify the sides by how much you've invested in a particular infrastructure that is already working, rather than a question of which technology we'd all like to have. It's very much like the debate around CDMA and GSM.

If you ask me where I want to be, I can tell you right now: it's on FHIR.  If you ask me what's the most cost-effective solution for all involved, I'm going to tell you that the HL7 V3 transactions used by IHE are probably more cost effective and quicker to implement overall, because it's going to take time to make the switch to FHIR, and more networks are using V3 (or even V2) transactions.  And even though it's more cost effective for the country, it's surely going to hurt some larger exchanges that don't use it today.  CommonWell uses HL7 V2 and FHIR for patient identity queries, if I remember correctly, while Carequality, SureScripts and others use the HL7 V3 based IHE XCPD transactions ... which are actually designed to support federated record location.  As best I know, more state and regional health information exchanges support the IHE XCPD transactions than exchange data using V2 or FHIR.

Whatever gets chosen, it's gonna put some hurt on one group or another.  My gut instinct is that choosing FHIR is going to hurt a lot more exchanges than choosing XCPD at this time.

And this is where the debate about V3 and FHIR differs from the CDMA and GSM debate, because FHIR is closer to 4G or 5G in the whole discussion.  Some parts of FHIR, such as querying for patient identity, are generally widely available.  But complexity comes in when you get into using these transactions in a record location service, as I've described previously, and the necessary capabilities to support "record location services" in FHIR haven't been formalized by anyone ... yet.  This is where FHIR is more like 5G.

Just like 5G, this will happen eventually.  But do we really want to focus all of our attention on this, or do we want to get things up and running and give organizations the time they need to make the switch?  I think the best answer in this case is to make a very clear statement: This is where we are today (V3), and this is where we will be going in 2-3 years (FHIR), and make it stick.  And as I've said in the past, don't make it so hard for organizations to pre-adopt new standards.

Policy doesn't always work that way ... just look at what happened with ICD-10, or maybe even Claims Attachments.  But I think where we are at today is a little bit different, which is that the industry really wants to move forward, but would also like to have some room to breathe in order to move forward without stumbling along the way.  Do we really want a repeat of Meaningful Use?

We've seen how too much pressure can cause stumbles, and I think trying to use FHIR for record location services is just moving a little too fast.  I'll be happy to be proven wrong, and eat the floor wax, but frankly, right now, I just don't see it.


Monday, May 13, 2019

Terminology Drift in Standards Development Organizations

I used to work for a company that published dictionaries, and one of my colleagues was a dictionary editor.  As he related to me, the definition of a term doesn't come from a dictionary, but rather from use.  A dictionary editor's job is to keep faithful track of that use and report it effectively.  By documenting the use, one can hope to ensure consistent future use, but languages evolve, and the English language evolves more than many.  I've talked about this many times on this blog.

It also happens to be the common language of most standards development organizations in Health IT (of course, I, as an English speaker, would say that, but the research also reflects that fact).

The evolution of special terms and phrases in standards is a particular challenge not only to standards developers, but especially to standards implementers.  As I look through IHE profiles (with a deep understanding of IHE History), I think on phrases such as "Health Information Exchange", "XDS Affinity Domain", and "Community", which in IHE parlance, all mean essentially the same thing at the conceptual level that most implementers operate at.

This is an artifact of Rishel's law: "When you change the consensus community, you change the consensus" (I first heard it quoted here, and haven't been able to find any earlier source, so I named it after Wes).

As time passes, our understanding of things changes, and that change affects the consensus.  Even if the people in the consensus group haven't changed, their understanding has, and so the definition has changed.

We started with "Health Information Exchange", which is a general term we all understood (oh so long ago).  But then, we had this concept of a thing that was the exchange that had to be configured, and that configuration needed to be associated with XDS.  Branding might have been some part of the consideration, but I don't think it was the primary concern, I think the need to include XDS in the name of the configuration simply came out of the fact that XDS was what we were working on.  So we came up with the noun phrase "XDS Affinity Domain Configuration", which as a noun phrase parses into a "thing's" configuration, and which led to the creation of the noun phrase "XDS Affinity Domain" (or perhaps we went the other way and started with that phrase and tacked configuration onto it).  I can't recall. I'll claim it was Charles' fault, and I'm probably not misremembering that part.  Charles does branding automatically without necessarily thinking about it.  I just manage to do it accidentally.

In any case, we have this term XDS Affinity Domain Configuration, which generally means the configuration associated with an XDS Affinity Domain, which generally means some part of the governance associated with a Health Information Exchange using XDS as a backbone.

And then we created XCA later, and had to explain things in terms of communities, because XCA was named Cross Community Access rather than Cross Domain Access.  And so now Affinity Domain became equivilated (yeah, that's a word) with Community.

And now, in the US, we have a formal definition for "health information network" as the noun to use in place of how we were using "health information exchange" more than a decade and a half ago (yes, it was really that long).

So, how's a guy to explain that all this means the same thing (generally) to someone who is new to all this stuff, and hasn't lived through the history, without delving into the specialized details of where it came from and why?  I'm going to have to figure this out.  This particular problem is specific to IHE, but I could point to other examples in HL7, ISO, ASTM and OpenEHR.

The solution, it would seem, would be to hire a dictionary editor.  Not having a grounding in our terminology would be a plus, but the problem there is that we'd need a new one periodically as they learned too much and became less useful.

Thursday, May 9, 2019

It's that time again...

The next person I'm going to be talking about is responsible for open source software that has impacted the lives of tens of millions of patients (arguably even hundreds of millions), tens of thousands (perhaps even hundreds of thousands) of healthcare providers, and certainly thousands of developers around the world.

The sheer volume of commits in the projects he's led well exceeds 50 million lines of code.  He's been working in the open source space for nearly a decade and a half, most of which has been supporting the work of the university hospital that employed him.

It's kind of difficult to tell a back story about him that doesn't give it completely away (and many who've used the work he's been driving already know who I'm talking about).  I'm told he's an accomplished guitar player, and I also hear that his latest album of spoken word and beat poetry will be coming out soon.

I can honestly say I've used much of the open source code he's been driving forward at four different positions for three different employers, through at least eight different releases, and I swear by the quality of the work that goes into it.  I'm not alone, the work has been downloaded or forked by several thousand developers all over the world.

I know that he sort of fell into this open source space a bit by accident: the person who had been driving one of the HL7 open source projects moved on to greener pastures, and he took up the reins.  Since then, he took the simplicity and usability of that open source project into a second one that has driven HL7 FHIR on towards greater heights.  I can honestly state that without some of the work he's done, the FHIR community would have been much poorer.

Without further ado:

This certifies that
James Agnew of
Simpatico Intelligent Systems, Inc.

has hereby been recognized for keeping smiles on the faces of HL7 integrators for the better part of two decades.

HAPI on FHIR is perhaps the most widely known of the Java FHIR server implementations available, HAPI HL7 V2 has been used in numerous projects to parse and integrate with HL7 Version 2 messages, and it is included in one of the most widely used open source V2 integration engines (formerly known as Mirth Connect, now NextGen Connect).  James has also contributed to other open source efforts supporting HL7 FHIR and HL7 Version 2 messaging.

Thursday, April 25, 2019

Record Location Services at a National Scale using IHE XCPD

One of the recent discussions coming up around the most recent TEFCA related specifications has to do with how one might implement record location services for patients at a national scale.  The basis for this is the IHE Cross Community Patient Discovery Profile (XCPD).

Here's the problem in a nutshell.  Assume you are a healthcare provider seeing a patient for the first time, and you want to find out who else might have information about this patient.  How can you do so?

The first step obviously is to ask the patient who their prior doctor was, and here's where the first fundamental challenge appears.  Sometimes the patient is unable to answer that question, either at all, or at least completely.  So, then, how do you get a complete list?  What you don't want to do is ask everyone who ever might have seen the patient anywhere in the country, because that is not going to scale.

I think that about sums it up.

The IHE XCPD profile is designed to address this.

If the patient is only able to give a partial response, then you know where to start looking.  Here's the key point: once you know where to start looking, the organizations and networks who can answer the question can also point you to others who've seen the patient, and that can get you a more complete list, which eventually will lead to closure.

But wait! How do these organizations know who else has seen the patient?  It's really pretty simple.  Somebody asked them, and in the process of asking them, also told them that they would be seeing the patient, and so the original provider gains the information about the new provider seeing them, which makes them able to answer the question accurately for the next new provider.  And so the well known provider becomes more authoritative, while the new provider is able to provide equally authoritative data.

If the patient is unable to answer that question at all, then you have to figure out who else you might be able to ask that question of.  If the patient is local, you could ask others in the area who might know the patient.  If the patient isn't local (e.g., just visiting), you might try asking others near to where patient resides, which hopefully you can determine.  Since TEFCA is about a network of networks, it's reasonable to assume that there are some regional networks of whom you might ask about a given patient, and they might be able to ask other, smaller regional networks they know about (this could become turtles all the way down, but at some point, you'd expect to stop).
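The iterative discovery process described above is essentially a transitive closure over networks: ask the networks you know about, and each answer may point you to more networks, until no new ones turn up.  Here's a minimal sketch in Java; the `Network` interface and method names are hypothetical, purely for illustration, not anything from the XCPD specification itself.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Set;

public class RecordLocator {
    /** Hypothetical stand-in for a network endpoint that can answer
     *  "who else has seen this patient?" (XCPD would carry this). */
    public interface Network {
        Set<Network> whoElseHasSeen(String patientId);
    }

    /** Starting from the networks the patient (or a regional network)
     *  named, follow referrals until closure is reached. */
    public static Set<Network> locateRecords(String patientId, Set<Network> startingPoints) {
        Set<Network> visited = new LinkedHashSet<>();
        Deque<Network> toAsk = new ArrayDeque<>(startingPoints);
        while (!toAsk.isEmpty()) {
            Network n = toAsk.pop();
            if (!visited.add(n)) {
                continue; // already asked this network
            }
            // Each answer can point us at further networks we haven't asked yet.
            for (Network other : n.whoElseHasSeen(patientId)) {
                if (!visited.contains(other)) {
                    toAsk.push(other);
                }
            }
        }
        return visited;
    }
}
```

As the post notes, the process terminates ("at some point, you'd expect to stop") because no network is asked twice.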

There are some other issues to address.  Just because we got the new provider and the old provider synchronized doesn't mean everyone else is.  Who has that responsibility?  That's an implementation decision.  It could be the original provider, or it could be the new provider.  Since the new provider is gaining the benefit, one could argue it's their responsibility to notify other networks that have already seen the patient that they are now seeing the patient.  That would be the way I'd implement it.

Note: This doesn't have to be perfect.  It has to be good enough to work.  Perfecting the algorithm for record location to ensure the right balance of performance and accuracy in the RLS is going to take time.  But we can certainly build something that gets the right networks talking to each other.

Saturday, April 20, 2019

Why Software Engineering is still an art

Software engineering isn't yet a science.  In science, you have a bunch of experimental procedures that one can describe, and processes that one can follow, and hopefully two people can reproduce the same result (unless of course we are talking about medical research experiments ;-( ).

Today, I wanted to add some processes to my build.  I'm using Maven (3.6.0), with Open JDK 11.0.2.  I wanted to run some tests over my code to evaluate quality.  Three hours later, and I'm still dealing with all the weirdness.

  1. rest-assured (a testing framework) uses an older version of JAXB because it doesn't want to force people to move to JDK 8 or later.
  2.  JAXB 2.22 isn't compatible with some of the tools I'm using (AOP and related) in Spring-Boot and elsewhere.
  3. I have an extra spring-boot starter dependency I can get rid of because I don't need it, and won't ever use it.  It got there because I was following someone else's template (it's gone now).
  4. FindBugs was replaced with SpotBugs (gotta check the dates on my references), so I wasted an hour on a tool that's no longer supported.
  5. To generate my code quality reports, I have to go clean up some javadoc in code I'm still refactoring.  I could probably just figure out how to run the quality reports in standalone, but I actually want the whole reporting pipeline to work in CI/CD (which BTW, is Linux based, even though I develop on Windoze).
  6. The maven javadoc plugin with JDK 11 doesn't work on some versions, but if I upgrade to the latest, maybe it will work, because a bug fix was backported to JDK 11.0.3
  7. And even then, the modules change still needs a couple of workarounds.
In the summers during college, I worked in construction with my father.  Imagine, if in building the forms for the fountain in the center of the lobby (pictured to the right), I could only get rebar from one particular supplier that would work with the holes in the forms.  And to drill the holes, I had to go to the hardware store to purchase a special brand of drill.  Which I would then buy an adapter for, and take part of it apart in a way that was documented by one guy somebody on the job-site knew, so that I could install the adapter to use the special drill bit.  And then we had to order our concrete in a special mix from someone who had lime that was recently mined from one particular site, because the previous batch had some weird contaminants that would only affect our job site.

Yeah, that's not what I had to do, and it came out great.

Yet, that's basically exactly what I feel like I'm doing some days when I'm NOT writing code.  We've got tools to run tools to build tools to build components to build systems that can be combined in ways that can do astonishing stuff.  But, building it isn't yet a science.

Why is this so hard?  Why can't we apply the same techniques that were used in manufacturing (Toyota was cited)?  As a friend of mine once said: in software, there are simply more moving parts (more than a billion).  That's about a handful of orders of magnitude more.


Tuesday, April 16, 2019

Juggling FHIR Versions in HAPI

It happens every time.  You target one version of FHIR, and it turns out that someone needs to work with a newer or older (but definitely different) version.  It's only about 35 changes that you have to make, but through thousands of lines of code.  What if you could automate this?

Well, I've actually done something like that using some Java static analysis tools, but I have a quicker way to handle that for now.

Here's what I did instead:

I'm using the Spring Boot launcher with some customizations.  I added three filter beans to my launcher.  Let's just assume that my server handles the path /fhir/* (it's actually configurable).

  1. A filter registration bean which registers a filter for /fhir/dstu2/* and effectively forwards content from it converted from DSTU2 (HL7) to the servers version, and converts the servers response back to DSTU2.
  2. Another filter registration bean which registers a filter for /fhir/stu3/* and effectively forwards content from it converted from STU3 to the servers version, and converts the servers response back to STU3.
  3. Another filter registration bean which registers a filter for /fhir/r4/* and effectively forwards content from it converted from R4 to the servers version, and converts the servers response back to R4.
These are J2EE Servlet Filters rather than HAPI FHIR Interceptors, because they really need to be, at least for now. HAPI servers aren't really all that happy about being multi-version compliant, although I'd kinda prefer it if I could get HAPI to let me intercept a bit better so that I could convert in Java rather than pay the serialization costs in and out.
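The path handling those three filters share can be sketched as plain Java: recognize a version-specific prefix, note the caller's FHIR version, and rewrite to the server's native /fhir/* path.  This is a minimal illustration, not the actual filter code; the class and method names are mine.

```java
import java.util.Map;
import java.util.Optional;

public class VersionPathMapper {
    // One entry per version-specific filter registration described above.
    private static final Map<String, String> PREFIXES = Map.of(
        "/fhir/dstu2/", "DSTU2",
        "/fhir/stu3/", "STU3",
        "/fhir/r4/", "R4");

    /** Returns the caller's FHIR version if the path carries a version prefix. */
    public static Optional<String> callerVersion(String path) {
        return PREFIXES.keySet().stream()
            .filter(path::startsWith)
            .map(PREFIXES::get)
            .findFirst();
    }

    /** Rewrites /fhir/<version>/X to the server's native /fhir/X. */
    public static String forwardPath(String path) {
        for (String prefix : PREFIXES.keySet()) {
            if (path.startsWith(prefix)) {
                return "/fhir/" + path.substring(prefix.length());
            }
        }
        return path; // already the server's native version
    }
}
```

In the real filters, the recognized version drives which HAPI converter wraps the request and response bodies before forwarding.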

In addition to converting content, the filters also handle certain HttpServlet APIs a little bit differently.  There are two key places where you need to adjust:

  1. When Content-Type is read from the request or set on the response, you have to translate fhir+xml or fhir+json to xml+fhir or json+fhir and vice versa for certain version pairs.  DSTU2 used the "broken" xml+fhir and json+fhir mime types; this was fixed in STU3 and later.
  2. You need to turn off gzip compression performed by HAPI, unless you are happy writing a GZip decoder for the output stream (it's simple enough, but more work than you want to take on at first).
Your input stream converter should probably be smart and not try to read on HEAD, GET, OPTIONS or DELETE methods (because they have no body, and there won't be anything to translate).  However, for PUT, POST, and PATCH, it should.
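The mime-type fix-up in point 1 is just a pair of string substitutions.  Here's a sketch of what I mean (assuming, as the FHIR specs state, that DSTU2 used application/xml+fhir and application/json+fhir while STU3 and later use application/fhir+xml and application/fhir+json); the helper class is illustrative, not part of HAPI:

```java
public class FhirMimeTypes {
    /** Translate a modern (STU3+) FHIR content type to the DSTU2 spelling. */
    public static String toDstu2(String contentType) {
        return contentType
            .replace("application/fhir+xml", "application/xml+fhir")
            .replace("application/fhir+json", "application/json+fhir");
    }

    /** Translate a DSTU2 content type to the STU3+ spelling. */
    public static String fromDstu2(String contentType) {
        return contentType
            .replace("application/xml+fhir", "application/fhir+xml")
            .replace("application/json+fhir", "application/fhir+json");
    }
}
```

The filter applies toDstu2 when setting response headers for a DSTU2 caller, and fromDstu2 when reading that caller's request headers; charset parameters pass through untouched.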

Binary could be a bit weird.  I don't have anything that handles creates on Binary resources, and they WOULD almost certainly require special handling; I simply don't know if HAPI has that special handling built in.  It certainly does for output, which has made my life a lot easier for some custom APIs (I simply return a parameter of type Binary, with a mimetype of application/json, to get an arbitrary non-FHIR formatted API output), but as I said, I've not looked into the input side.

This is going to make my HL7 V2 Converter FHIR Connectathon testing a lot easier in a couple of weeks, because O&O (and I) are eventually targeting R4, but when I first started on this project, R4 wasn't yet available, so I started in DSTU2.  And like I said, it might be 35 changes, but against thousands of lines of code?  I'm not ready for that all-nighter at the moment.

It's cheap but not free.  These filters cost serialization time in and out (adding about 300ms just for the conformance resource), but this is surely a much quicker way to handle a new (or old) version of FHIR for which there are already HAPI FHIR converters, and it at least gets you to a point where you can run integration tests with code that needs it while you make the conversion.  This took about a day and a half to code up and test.  I'd probably still be working on a DSTU2 to R4 conversion for the rest of the week on the 5K lines or so that I need to handle V2 to FHIR conversion.


Friday, April 12, 2019

Multiplatform Builds

I'm writing this down so I won't ever again forget: when using a Dev/Build environment pair that is Windows/Unix, plan for the following:

  1. Unix likes \n, Windows \r\n for line endings.  Any file comparisons should ignore differences in line endings.
  2. Unix cares about case in filenames, Windows not so much.  Use lowercase filenames for everything if you can.
  3. Also, if you are generating timestamps and not using UTC when you output them, be sure that your development code runs tests in the same time zone as your build machine.
I'm sure there's more, but these are the key ones to remember.
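For point 1, the comparison fix amounts to normalizing line endings on both sides before comparing.  A minimal sketch in Java (my own helper, named for illustration):

```java
public class LineEndings {
    /** Convert CRLF (Windows) and lone CR (old Mac) to LF (Unix). */
    public static String normalize(String text) {
        return text.replace("\r\n", "\n").replace('\r', '\n');
    }

    /** Compare expected vs. actual output so a Windows dev box and a
     *  Unix build agent agree on the result. */
    public static boolean sameIgnoringLineEndings(String a, String b) {
        return normalize(a).equals(normalize(b));
    }
}
```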


P.S.  I think this is a start of a new series, on Duh moments.

Wednesday, April 10, 2019

V2-to-FHIR on GitHub

The tooling subgroup of the V2 to FHIR project met a minor milestone today, creating the code repository for V2 to FHIR tools in HL7's Github.  If you want to become a contributor to this team, let me know, either here or via e-mail (see the send-me an email link to the right).

Our first call for contributions is for sample messages, including ADT, MDM, ORU, SIU and VXU messages from any HL7 V2 version in either ER7 (pipes and hats) or XML format.  We'll be using these samples to test the various tools that run V2 to FHIR conversion processes in the May V2-to-FHIR tooling track.  There will be more information about that track provided on the O&O V2-to-FHIR tooling call on Wednesday, April 24th at 3pm EDT.

We are looking for real world testing data, rather than simple sample messages, with the kind of variation we'd expect to see in the wild.  If you have messages that you've used for testing your Version 2 interfaces, test messages for validating interfaces, et cetera, and want to contribute, we'd appreciate your sending them along.  You can either become a contributor, or send me a link to your zip file, or send me an e-mail with your sample messages, and I'll work on getting them into the repo.

No PHI please.  Yes, we are looking for real world data, but no, we don't want real world patient identities in here.  I know you know the reasons why, but I probably should say it anyway.

In contributing this data, you will be granting HL7 the rights to use this data for the V2 to FHIR project, just as you would with any other contribution you make to an HL7 project.

EHRs are ACID, HIEs are BASE

I was talking about clinical data repositories, HIEs and EHRs with a colleague this morning.  One of the observations that I made was that in the EHR world, and some of the CDR world, folks are still operating in a transactional model, whereas most HIEs use data in an analytic fashion (although still often with transactional requirements).  There are differences in the way you manage and use transactional and analytical data.

Think about this.  When you ask an HIE for a document for a patient, are you trying to make a business (in this case, a health care related) decision?  Yep.  Is your use of this information part of the day-to-day operations where you need transactional controls?  Probably not, though you might think you want up-to-the-minute data.

Arguably, HIEs aren't in the business of providing "up to the minute data".  Instead, they are in the business of providing "most recent" data within a certain reasonable time frame.  So, if the data is basically available, and eventually consistent within say, an hour of being changed, that's probably sufficient.  This is BASE: Basically Available, Soft (possibly changing) state, with Eventual consistency.

On the other hand, when you use an EHR, do you need transactional controls?  Probably, because at the very least you want two people who are looking at the same record in a care setting to be aware of the most current state of the data.  In this case, you need Atomic, Consistent, isolated as well as Durably persisted changes.  This is ACID.

BASE scales well in the cloud with NoSQL architectures. ACID not so much.  There are a lot of good articles on the web describing the differences between ACID and BASE (this is a pretty basic one), but you can find many more.  If you haven't spent any time in this space, it's worth digging around.


Friday, April 5, 2019

Find your Informatics mentor at IHE or HL7

I was interviewed yesterday by a college student as part of one of her student projects.  One of the questions I was asked was: What would be your one piece of advice for a graduating student entering your field?

I told her that it would depend (isn't that always the answer?), and that my answer for her would be different than my general answer (because she's already doing what I would have advised others).

My general answer is to find a group external to school or work related to her profession to volunteer in, either a professional association or a body like IHE or HL7.  I explained that these organizations already attract the best talent from industry (because companies usually send their top-tier people to these organizations).  So, by spending time with them, she'll get insight from the best people in the industry.

Organizations like this also have another characteristic, which is that they are already geared up to adopt and mentor new members.  I think this might mostly be a result of the fact that they already have more work than they can reasonably accomplish; having a new victim (er, member) to help them is something that they are naturally supportive of, and as a result, they're also naturally supportive of the new member.  It's an environment that's just set up to provide mentoring.

There are days when I'm actually quite jealous of people who get to do this earlier in their career than I did.  Participating in IHE and HL7 has given me, and many others, quite a boost in our careers, and the earlier that acceleration kicks in, the longer it has to affect your career velocity.  In her case, I'm especially jealous, as she's been working in this space since middle school!

In any case, if you are a "newly" minted informaticist, health IT software engineer, or just a late starter like me, and want to give your career a boost, you can't go wrong by participating in organizations like IHE, HL7, AMIA or other professional society or organization.



Tuesday, April 2, 2019

How does Interfacing Work

This post is part of an ongoing series of posts dealing with V2 (and other models) to FHIR (and other models) which I'm labeling V2toFHIR (because that's what I'm using these ideas for right now).

I've had to think a lot about how interfaces are created and developed as I work on HL7 Version 2 to FHIR Conversion.  The process of creating an interface that builds on the existence of one or more existing interfaces is a mapping process.

What you are mapping are concepts in one space (the source interface or interfaces) to concepts in a second space, the target interface.  Each of these spaces represents an information model.  The concepts in these interface models are described in some syntax that has meaning in the respective model space.  For example, one could use XPath syntax to represent concepts in an XML model, FHIRPath in FHIR models, and for V2, something like HAPI V2's Terser location spec.

Declaratively, types in the source model map to one or more types in the destination model, and the way that they map depends in part on context.  Sure, ST in V2 maps to String in FHIR, but so does ID in certain cases, except when it actually maps to Code.

So, if I've already narrowed my problem down to how to map from CWE to Coding, I really don't need to worry much about those cases where I'd want to map an ST to some sort of FHIR id type, because, well, it's just not part of my current scope or context.

Thinking about mapping this way makes it easier to make declarative mappings, which is extremely valuable.  Declarations are the data that you need to get something done, rather than the process by which you actually do it, which means that you can have multiple implementation mechanism.  Want to translate your mapping into FHIR Mapping Language?  The declarations enable you to do that.

But first you have to have a model to operationalize the mappings.  Here's the model I'm working with right now:


  • An object to transform (e.g., a message or document instance, or a portion thereof).
  • A source model for that object that has a location syntax that can uniquely identify elements in the model, from any type in that model (in other words, some form of relative location syntax or path).
  • A target model that you want to transform to.
  • A location syntax for the target model.
  • A set of mappings M (source -> target), where each mapping may have dependencies (preconditions which must be true) and additional products (other things that it can produce which aren't the primary concepts of interest).


Dependencies let you do things like make a mapping conditional on some structure or substructure in the source model.  For example, a problem I commonly encounter is that OBR is often missing relevant dates associated with a report (even though they should be present in an ORU_R01, the reality is that they often are not).  My choices are to not map that message, or to somehow come up with a value that is close enough to reality.  So, when OBR-7 or OBR-8 is missing, my go-to field is often MSH-7.  So, how would I express this mapping?

What I'd say in this case is that MSH-7 maps to DiagnosticReport.issued, when OBR-7 is missing and OBR-8 is missing.  So, this mapping is dependent on values of OBR-7 and/or OBR-8.
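That dependent mapping can be sketched in a few lines of Java.  This is purely illustrative: I'm representing field values as a simple location-to-value map, where a real implementation would pull them out of a parsed message (e.g., with HAPI's Terser).

```java
import java.util.Map;
import java.util.Optional;

public class IssuedDateRule {
    /** Prefer OBR-7, then OBR-8; fall back to MSH-7 for
     *  DiagnosticReport.issued only when both are missing. */
    public static Optional<String> issued(Map<String, String> fields) {
        String obr7 = fields.get("OBR-7");
        if (obr7 != null && !obr7.isEmpty()) {
            return Optional.of(obr7);
        }
        String obr8 = fields.get("OBR-8");
        if (obr8 != null && !obr8.isEmpty()) {
            return Optional.of(obr8);
        }
        // Dependency satisfied: OBR-7 and OBR-8 are both missing.
        return Optional.ofNullable(fields.get("MSH-7"));
    }
}
```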


Products let you add information to the mapping that is either based on knowledge or configuration.  HL7 V2 messages use a lot of tables, but the system URLs required by FHIR aren't contained anywhere at all in the message (even though they are going to be known beforehand).  So, when I want to map OBR-24 (Diagnostic Service Section Identifier) to DiagnosticReport.category, I can supply the mapping by saying OBR-24 -> DiagnosticReport.category.coding.code and that it also produces DiagnosticReport.category.coding.system with a value of

Mapping Process Model

So now that you understand those little details, how does mapping actually work?  Well, you navigate through the source model by traversing the hierarchy in order.  At each hierarchical level, you express your current location as a hierarchical path.  Then you look at the path and see whether you have any mapping rules that match it, starting first with the whole path, and then moving on to right subpaths.

ALL matching rules are fired (unlike XSLT, which prioritizes matches via conflict resolution rules).  I haven't found a case where I need to address conflict resolution yet, and if I do, I'd rather that the resolution be explicit (in fact, you can already do explicit resolution using dependencies).
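The matching step above can be sketched as follows: a rule matches the current source location if its pattern equals the whole path or any right subpath, and all matching rules fire.  The path syntax here (dot-separated segments) is just my illustration of the idea, not a normative notation.

```java
import java.util.ArrayList;
import java.util.List;

public class RuleMatcher {
    /** e.g. path "ORU_R01.OBR-24.ID" matches patterns
     *  "ORU_R01.OBR-24.ID", "OBR-24.ID", and "ID". */
    public static boolean matches(String path, String pattern) {
        return path.equals(pattern) || path.endsWith("." + pattern);
    }

    /** ALL matching rules fire; unlike XSLT, there is no
     *  priority-based conflict resolution. */
    public static List<String> firingRules(String path, List<String> patterns) {
        List<String> fired = new ArrayList<>();
        for (String p : patterns) {
            if (matches(path, p)) {
                fired.add(p);
            }
        }
        return fired;
    }
}
```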

If there's a match, then the right hand side of the rule says what concept should be produced.  There can only be a match when the position in the source model matches the concept that you are mapping from, and there exists an equivalent target concept in the model that you are mapping to.  In my particular example: Presuming that DiagnosticReport was already in context (possibly because I said to create it in the current context on seeing an ORU_R01 message type), then DiagnosticReport.category would be created.

At some point, you reach an atomic level with some very basic data types (string, date, and number) in both the source and target models.  For this, there are some built-in rules that handle copying values.

Let's look at our OBR-24 example a bit deeper.  OBR-24 is basically the ID type.  So, moving down the hierarchy, you'll reach ID.  In my own mappings, I have another mapping that says ID -> Coding.code.value.  This rule would get triggered a lot, except that for it to be triggered, there needs to be Coding.code already in my mapping context.  In this particular case, there is, because it was just created previously in the rule that handled OBR-24.  But if there wasn't, this mapping rule wouldn't be triggered.

When I've finished traversing OBR-24 and move on to OBR-25, I "pop" context; that Coding is no longer relevant, and I can start dealing with DiagnosticReport.status.
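That context mechanism can be sketched as a simple stack (again, the names here are mine, for illustration): each mapping pushes the target it creates while its source element is being traversed, and pops it on the way back out, so context-dependent rules like ID -> Coding.code.value only fire when their required target is in scope.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class MappingContext {
    private final Deque<String> stack = new ArrayDeque<>();

    // Entering a source element pushes any target concept it created;
    // leaving that element pops the concept back off.
    void push(String concept) { stack.push(concept); }
    void pop() { stack.pop(); }

    // A rule like ID -> Coding.code.value fires only when a Coding.code
    // is somewhere in the current context.
    boolean canFire(String requiredContext) {
        return stack.contains(requiredContext);
    }
}
```

For the OBR-24 example: while OBR-24 is being traversed, the Coding.code it created is on the stack, so the ID rule fires; once traversal moves on to OBR-25, it has been popped, and the same ID rule stays quiet.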

The basic representation of the mappings are FHIR Concept maps (as I've mentioned in previous posts in this series).

Clarified bullet point one above thanks to Ed VanBaak.

Thursday, March 28, 2019

Back to the Baselines

I've been working quite a bit on mapping V2 messages to FHIR lately.  One of the telling points in V2 conversion is ensuring you run tests against a LOT of data with a lot of variation, especially in the V2 interfacing world.

If you don't test with a lot of data, how can you tell that a fix in one place didn't break the great output you had somewhere else, especially given all the possible different ways to configure a V2 interface?

To do this, you have to establish baselines, and compare your test outputs against your baseline results on a regular basis.  Then, after seeing if the differences matter, you can promote your now "better" outputs as your new baselines.
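A minimal sketch of that compare-and-promote loop might look like the following.  The class and method names are illustrative (my actual framework differs), and the "promote" step here just records a missing baseline for later human review:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

class BaselineCheck {
    // Compare test output against the stored baseline.  A missing baseline
    // is recorded from the current output (to be reviewed before promotion);
    // line endings are normalized so Windows and Unix runs compare equal.
    static boolean matchesBaseline(Path baseline, String output) {
        try {
            String actual = output.replace("\r\n", "\n");
            if (!Files.exists(baseline)) {
                Files.createDirectories(baseline.getParent());
                Files.writeString(baseline, actual);
                return true; // first run establishes the baseline
            }
            String expected = Files.readString(baseline).replace("\r\n", "\n");
            return expected.equals(actual);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The point is that "did anything change?" becomes a single call per test case, and promoting a better output is just overwriting the baseline file after you've reviewed the diff.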

Automating this process in code makes your life a lot easier.

I like to build frameworks so that I can do something once and then reuse it over and over.  For baseline testing, I decided that I wanted each test case I implemented to be able to store its outputs in folders identifying the test case in the form: testClass/testMethod/testInstance.  Those output folders would be stored under the target/test-output folder.

And baselines would be stored in the src/test/baseline folder, organized in the same way.

Then I wrote a rather small method in the base class of my testing framework that did the following (FileUtils from Apache Commons IO is great for reading and writing the content):

1. Automated the generation of FHIR Resource output as json and xml files in the folder structure.
Here's some sample code using HAPI on FHIR to do that:

   FileUtils.writeStringToFile(new File(fileName + ".xml"),
       xmlOutput = context.newXmlParser().setPrettyPrint(true).encodeResourceToString(b),
       StandardCharsets.UTF_8);

2. Compared the generated outputs to baselines.
   jsonBaseline = FileUtils.readFileToString(new File(baselineFile + ".json"), StandardCharsets.UTF_8);
   assertEquals(jsonBaseline, jsonOutput);

And finally, because HAPI on FHIR uses Logback, and Logback provides the Sifting Appender, I was also able to structure my logback.xml to contain a Sifting Appender that stores a separate log file for each test result! The value of this is huge.  Logging is part of your application's contract (at the very least with your service team), and so if your log messages change, the application contract has changed.  So, if changing a mapping changes the logging output, that should also be comparable and baselined.

The sifting appender depends on keys in the MappedDiagnosticContext (basically a thread specific map of keys to values).  This is where we store the final location of the test log output when the test starts.  My code to start and end a test looks a bit like this:
start(testName);
try {
   ... // do the test 
} finally {
   end(testName);
}

Start is a method that gets the test class and test name from the stack trace as follows:
Throwable t = new Throwable();
StackTraceElement e = t.getStackTrace()[1];
String fileName = String.format("%s/%s/%s",
    e.getClassName(), e.getMethodName(), testName);

This is a useful cheat to partition output files by test class, test method, and the specific test instance being tested by that method (I use a list of files to read; any time I want a new test case, I just drop the file into a test folder).

End is a little bit more complex, because it has to wrap some things up, including log comparisons after everything else is done.  I'll touch on that later.

It's important in log baselining to keep any notion of time or date out of your logging, so set your logging patterns accordingly.  I use this:
[%-5level] [%t] %c{1} - %msg%n%xEx

While my normal pattern contains:
[%-5level] %d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} [%t] %c{1} - %msg%n%xEx

My appender configuration looks something like this:

<appender name="testing" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>testfile</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <appender name="FILE-${testfile}" class="ch.qos.logback.core.FileAppender">
      <file>${testfile}</file>
      <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>[%-5level] [%t] %c{1} - %msg%n%xEx</pattern>
      </layout>
    </appender>
  </sift>
</appender>
The details on log file comparison are a bit finicky, because you don't want to actually perform the comparison until the end of the test, and you want to make sure the logger has finished with the file before you compare things.  After some code inspection, I determined that logback presumes it can dispose of the log after 10 seconds.

So, end looks something like this:
protected void end(String testName) {
    boolean compare = "true".equals(MDC.get("compareLogs"));
    LOGGER.info("{}: Test completed", testName);
    MDC.put("testfile", "unknown");

    if (compare) {
        try {
            // Wait for log to be finalized.
            Thread.sleep(10 * 1000 + 100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Find and compare the files and assert if they don't match.
    }
}
One other thing that I had to worry about was the fact that I use UUID.randomUUID().toString() in various places in my code to generate UUIDs for things being created.  I replaced those calls with access to a Supplier&lt;String&gt; that is part of the conversion context, so that I could swap in something with known behaviors for testing.
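A sketch of that substitution (the class and method names here are hypothetical, not my actual conversion context): production keeps the random supplier, while tests inject a deterministic one so generated ids don't churn the baselines.

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical stand-in for the conversion context's id generation.
class ConverterIds {
    private Supplier<String> idSupplier = () -> UUID.randomUUID().toString();

    void setIdSupplier(Supplier<String> supplier) { this.idSupplier = supplier; }
    String nextId() { return idSupplier.get(); }

    // Deterministic ids for testing: test-0000, test-0001, ...
    static Supplier<String> sequential(String prefix) {
        AtomicInteger n = new AtomicInteger();
        return () -> String.format("%s-%04d", prefix, n.getAndIncrement());
    }
}
```

With the sequential supplier installed, two identical test runs produce byte-identical output, which is exactly what baseline comparison needs.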

One last thing, if you build on both Windows and Unix, be sure that your file comparisons aren't sensitive to line ending format.  One way to address that is to replace \r\n with \n throughout after reading the strings from a file.  You might also find that UTF-8 / Windows Latin 1 characters are problematic depending on the character set your logging code assumes.  I generally stick with UTF-8 for all my work, but you never know about software you don't control.


P.S. Yes, I do sing bass.

Experts don't always make the best teachers

To be an expert is different from being a teacher.  To be an expert one must amass a great deal of experience in a field.  This allows you to solve complex problems ... standards-based interoperability for example.

To be a teacher is a different mind-set.  Not only must you remember all the amassed experience, but you must also forget it ... or at least remember what it was like when you didn't know the answers, and if you are really good, the moment at which you finally got it, and then be able to convey that to others.

It's taken me ten years and more to become an expert at interoperability, and while I can claim some skill at teaching, I'm far from expert at it.  As I age, it becomes more difficult for me to remember what it was like to not know something.

Experts are often called upon to train others.  We must remember that what is simple for us is not so simple for others without our experience.  And that is the critical piece of self-awareness we have to learn to develop ... to recognize that there's a certain skill we had to build, or a piece of knowledge we had to slot into place in our minds, before we could accomplish the "simple" task.


Tuesday, March 19, 2019

When the NoBlocking regulation is more complex than software

... it's time to apply software tooling.

So I went through various definitions in the Information Blocking rule and made a UML diagram.  The value of this became immediately apparent to me when I was able to see, for example, that Interoperability Element, Health IT Module, and API Technology were somewhat broken.  API Technology is certainly a Health IT Module, and should be defined in terms of that definition.

It also shows the various relationships associated with actors.  As I go through the rule, I imagine there will be other relationships that I can infer from the regulatory text (e.g., fees charged to actors by other actors).

You can see the results below, and more importantly, you can get the source.

Entities (people, organizations, and things) are classes.  Things that can be done (verbs) are represented as interfaces.  The SVG representation links back to the regulatory text, and has mouse-overs citing the source of the link or artifact.


Tuesday, March 12, 2019

How to File a HIPAA Privacy Complaint

I've been seeing a lot of tweets recently complaining about misuse of HIPAA (about a half-dozen).  Mostly from people who know better than doctors what the regulations and legislation actually say.
I tweet back, sometimes cc: @HHSOCR.  The volume's grown enough that I thought it worthwhile to write a post about it.

If your health care provider or insurer refuses to e-mail you your data, refuses to talk with you over the phone about your health data, or makes it difficult for you, there's someone who will listen to your complaint and will maybe even take action.  The HHS Office of Civil Rights is responsible for investigating complaints about violations of HIPAA.  They don't make the form easy to find (because frankly, they do have limited resources, and do need to filter out stuff that they cannot address), but they do support online complaint filing, and you can get to it online here (I've shortcut some of the filtration steps for you, if you've found this blog post, you probably meet the filter criteria).

Another way to complain is to write a letter.  I know it's old fashioned, but you can do it.  My 8-year-old daughter once wrote a letter to a HIPAA privacy officer.  You don't need to know their name, just the address of the facility, and address it to the HIPAA Privacy Officer.  It'll definitely get someone's attention.  And who knows, you just might change the behavior of the practice (my daughter's letter got the practice to change a form used to report on a visit so that it would be clearer for patients).

I've mentioned before that under the HIPAA Omnibus regulations, in combination with recent certification requirements, providers shouldn't be able to give the excuse that they are not allowed (under HIPAA) to e-mail, or haven't set up the capability to e-mail you your health data.  Those two statements are likely to be false ... but most providers don't know that (if you are reading this blog, you are probably among the exceptions).

I'd love it if HHS OCR provided a simple service that made it possible for patients to report HIPAA nuisance behavior that would a) send the provider a nasty-gram addressed to the HIPAA Privacy officer at the institution with an official HHS logo on the front cover, and b) track the number of these sent to providers based on patient reports, and c) publicly report the number of nastygrams served to institutions when it reached a certain limit within a year, and d) do a more formal investigation when the number gets over a threshold, and e) tell them all that in short declarative statements:


To whom it may concern,

On (date) a patient reported that (name) or one of their staff informed them incorrectly about HIPAA limitations.

The patient was informed that:
[ ] Healthcare data cannot be e-mailed to them.
[ ] Healthcare data cannot be faxed to them.
[ ] Healthcare data cannot be sent to a third party they designate.
... (a bunch of check boxes)

Please see HHS Circular (number) regarding your responsibilities regarding patient privacy rights.

Things you are allowed to do:
... (another laundry list).

This is the (number)th complaint this year this office has received about your organization.  After (x) complaints in a year, your organization will be reported on publicly.  After (y) complaints total, your organization will be investigated and audited.


Somebody with an Ominous Sounding Title (e.g., Chief Investigator)

I'd also love it if HHS would require the contact information for the privacy officer be placed on every stupid HIPAA acknowledgement form I've been "required" to sign (acknowledging I've been given the HIPAA notice ... which inevitably I refuse to sign until I get it), and on every HIPAA notice form I'm given.  Because I'd fricken use it. 

I could go on for quite some time about the pharmacy that couldn't find their HIPAA notice for ten minutes and refused to give me my prescription because I refused to sign the signature pad until they did so, only for them to finally discover that if they'd just given me the prescription, I would see it written on the back of the information form they give out with every medication ... but they didn't have a clue until someone made a phone call.  And of course they claimed I had to sign because "HIPAA" (which says no such thing).

I'd also love it if HHS authorized some sort of "secret healthcare shopper" that registered for random healthcare visits and audited the HIPAA components of a provider's intake processes for improvements (e.g., the HIPAA form in 6-point type at an eye doctor's office is one of my favorite stories, that's a potential violation of both HIPAA and disability regulations).  What the hell, make the payers be the ones responsible for doing it with some percentage of their contracted provider organizations, and report the results to HHS on a periodic basis.

I think this would allow us (patients) to fight back with nuisances of our own which could eventually have teeth if made widely available and known to patients.  I'm sorry I didn't think to put this in with my recent HIPAA RFI comments.  Oh well, perhaps another day, and in fact, since there was an RFI, there will be an NPRM, so these comments could be made there, and who knows, perhaps someone will even act on them.  I've had some success with past regulatory comments before.


Monday, March 11, 2019

The Phases of Standards Adoption

I was conversing with my prof. about Standards on FB the other day, and made an offhand remark about him demonstrating that FHIR is at level 4 in my seven levels of standards adoption.  It was an off the cuff remark based on certain intuitions I've developed over the years regarding standards.  So I thought it worthwhile to specify what the levels are, and what they mean.

Before I go there, I want to mention a few other related metrics as they apply to standards.  One of these is the Gartner Hype Cycle with Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity and Grahame Grieve's 3 Legs of Health Information Standards, and my own 11 Levels of Interoperability (which is really only 7).  There's a rough correspondence here, as shown in the table below.

-1 Struggling: At this stage, not only does a standard not exist, but even awareness that there is a problem it might solve is lacking.  (11 Levels: 0 Absent)

0 Aspiring: We've identified a problem that standards might help solve and are working to solve it.  (11 Levels: 1 Aspirational)

1 Testing: The specifications exist, and are being tested.  (3 Legs: 1 & 2; 11 Levels: 2 Defined)

2 Implementing: Working prototypes have been tested and commercial implementations are being developed.  (3 Legs: 2 & 3; 11 Levels: 3 Implementable)

3 Deploying: Implementations are commercially available and can be used by end users.  (3 Legs: 2 & 3; 11 Levels: 4 Available)

4 Using: Commercially available implementations are being used by real people in the real world.  (11 Levels: 5 Useful)

5 Refining: The standard, and its implementations and deployments, are being refined.  People are happy with the implementations, and should the question arise about what standard to use, the answer is obvious.  (11 Levels: 6-10, not named, up through 11 Delightful)

How are my seven levels of standards any different from the 11 levels of interoperability?  Not by much really.  What's different here, is that I've given phases instead of milestones.

This is important because each phase occurs over time and is entered into by different kinds of stakeholders according to a technology adoption lifecycle; each phase can have its own innovators, early adopters, majority adopters, and laggards.

Time is interesting to consider here, because standards and technology have sort of a quantum nature.  A standard can exist in several of the phases described above at once, with different degrees of progress in each phase; the only real stipulation is that you cannot be further along in a later phase than you are in an earlier one.

If entry to and exit from each phase were gated on completion of the phase before, the timeline for reaching the refining stage would take about 5 years, but generally one can reach the starting point of the next phase by starting 3 to 6 months after the start of the previous phase.  You may have more work to do to hit a moving target, but you'll wind up with a much faster time to market.

As Grahame points out, getting to the end of the cycle requires much more time in the market driving stage of his three-legged race than it does in the initial parts of it. 

Anytime I've done serious work on interoperability programs, I'm always working on 2-3 related projects in a complete program, because that's the only way to win the race.  You've got to have at least one leg in each place of Grahame's journey.  Otherwise, you'll reach a point of being done, simply expecting someone else to grab the flag and continue on without you.

Tuesday, March 5, 2019

Whose Interoperability Problem is this?

Is this the challenge of an EHR Vendor? Or a medical practice working with other medical practices who insist on sending faxes and paper copies, perhaps because they don't have some method of sending these over a computer network using digital communication standards such as Direct, or IHE Cross Enterprise Document sharing to the receiving practice?

Yes, we need more inter-connected medical practices.  But is that due to the lack of available interoperability options or the lack of desire to implement them, and if the latter, why is that the case?

Yes, this is an interoperability problem, but here, we have a question related to workflow:

Workflow related to implementation.
Workflow related to changing the behavior of others in your referral network.
Workflow related to changing your own behavior.

If this practice isn't acceptable, why would you continue to accept it?

Problems like the one Danny illustrated quite well above aren't necessarily due to a lack of technology (or standards, or interoperability) to solve them.  Sometimes they exist simply because the right person hasn't asked the right questions.

Some thoughtful questions to ask:

  1. What other ways could this be done?
  2. Why can't we do it another way?
  3. How much does it cost to do it the way we are doing now?
  4. What might it cost to do it a different way?
  5. What could we do with the savings?


Friday, March 1, 2019


Today I scheduled my intake appointment as a participant in the AllOfUs program.  My PCP is the PI for their efforts with AllOfUs in the group practice that I use in central Massachusetts, and so I signed up to participate this morning.

It took me about 15 minutes to sign up.  The consent process was very well done, and very well written, in quite understandable language.  I'd guess the reading level of the content was around 6-7th grade, but was also a highly accurate representation of what the program is doing, which takes quite a bit of work if you've ever had to do that sort of writing.

The surveys took me another 10 minutes to complete and were especially easy since I'd already seen them having read through the protocol previously.

What surprised me was getting a call from my practice to schedule the appointment, but my sense is, they are already very engaged in this effort (I was to have participated as a patient representative in their outreach program, but was unable to attend the initial meeting due to battery problems with my motorcycle).  That was cool, and took about 5 minutes.

I'm looking forward to see how the program operates from the patient perspective, especially since some of the standards work I'm engaged in now can help refine it from the research perspective later.


Thursday, February 28, 2019

The skinny on the NoBlocking provisions of the CuresNPRM

In my own contribution to two reams, I put together a tweet stream of over 150 tweets yesterday covering the information blocking related provisions of the Cures NPRM released by ONC during HIMSS week.  It's finally available from the Federal Register (but not yet "published"), and you can find the copy I used to summarize it from ONC here.  Next Monday you should be able to get to the web-based Federal Register content, and I've heard also that ONC will publish the Word version (which is going to be my source for comments, given that I can modify electronic commenting tools I've developed for my HL7 work to gather feedback).

I'm NOT going to give the long details of that stream in this skinny post, but you can find the full set at tweet 18 in this 14-page unroll.

The information blocking provisions are the biggest addition to ONC oversight in terms of new regulation since inception of the CEHRT program, and also have potentially the biggest raw impact since then.  The provisions impact:
  • Patients
  • Data Providers
    • Healthcare Providers
    • Health Information Networks
    • Health Information Exchanges
  • Health IT Vendors
    • Certified EHR Technology vendors
The most challenging aspects of this rule relate to the fact that data blocking is essentially defined as a behavior that would restrict, restrain, discourage or otherwise prevent access to electronic health information UNLESS ... and then the rule details 7 exceptions.  The exceptions together take about 17 pages of REGULATION, and somewhere around 180 pages of explanatory text in the preface.  That works out to about 2.5 pages per exception for the regulation alone, and around 25 pages of explanatory text per exception.

45 CFR 171 is all new content, and touches deeply on the rights and responsibilities of stakeholders with regard to exchange of electronic health information (EHI is the new acronym you need to learn), in ways that to my knowledge, are unprecedented in digital commerce.

In the main, past regulatory efforts regarding digital data tied to an individual have been related to what CANNOT flow, or in the cases of an individual, what data MUST flow to that individual.  In the case of the Information Blocking rule, the regulatory effort is about what must flow, and what legitimate reasons must exist that might inhibit that flow.  This is a very new approach.

The current challenge has to do with the fact that while the regulation touches on rights and responsibilities of stakeholders, it isn't written in a form that corresponds to any new or existing rights.  Instead, the form it is written in closely corresponds to the ruling law found in the 21st Century Cures Act.  This meets the test that all regulation must meet in being able to establish its ties back to the supporting legislation, but unfortunately doesn't make it very easy to understand.

I think my comments on this regulation are going to be an attempt to reinterpret it based on rights and responsibilities that appear based on the intent of the legislation.  But this is an area where I think I'm going to need to get some expert assistance.  Because while I can probably figure out the right model, I'm not entirely clear on the constraints that current law might impose in interpretation.


Tuesday, February 26, 2019

A Brief summary of my IHE ACDC Profile and A4R Whitepaper Proposals

This week I'm at IHE meetings, and am submitting one of the first (and second) out-of-cycle proposals to IHE workgroups following the continuous development process adopted by IHE PCC and IT Infrastructure (and under consideration in QRPH).

The first of these is the ACDC (Assessment Curation and Data Collection) profile, which advances assessments in a new way that hopefully addresses the challenge that there are literally tens of thousands of assessment instruments we'd like to be able to exchange in an interoperable manner.  The goal here is to disconnect the encoding of assessment question and answer concepts from standardized vocabularies such as LOINC and SNOMED CT (which can take some time), yet still enable the capture of assessment data, and handle the encoding task in volume at scale using a mechanism that still needs some R&D.

The profile addresses two connected use cases: the process of assessment instrument acquisition (e.g., by a provider or vendor from an assessment instrument provider or curator), and the process of data collection through a singular common resource, identified by its Questionnaire canonical URL (basically, a web accessible URL that also acts as a unique identifier).

The first use case allows the assessment instrument curator to make it possible for an assessment instrument acquirer to search for, and explore available instruments and metadata, and eventually get back enough information to make an acquisition decision, and after taking the necessary steps to obtain access (e.g., licensing, click-through, or whatever), to then acquire the executable content (the full Questionnaire resource).

The next use case allows a system that has access to an executable assessment instrument to ask an assessor application to gather the essential data to make the assessment (it could be through a user interface that simply asks the provider or patient to answer some questions, or it could be more complex, involving answering questions based on available EHR data, or providing adaptive responses based on other data).  The response to this inquiry would be something like a Bundle containing a) the QuestionnaireResponse, b) new resources that may have been created as a result of processing the Questionnaire, and c) perhaps some ClinicalImpression resources that provide the assessment evaluation.

For example, a questionnaire implementing APGAR Score might result in one QuestionnaireResponse resource, and six ClinicalImpression resources, one each for the scores associated with the 5 components of the APGAR score (respiration, heart rate, muscle tone, reflex response and color) and the overall APGAR score result (the sum of all the component scores).
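The scoring arithmetic behind that example is worth making concrete.  The class below is purely illustrative (a real implementation would pull the component scores out of the QuestionnaireResponse and emit ClinicalImpression resources): each of the five components scores 0 to 2, and the overall APGAR score is their sum.

```java
import java.util.Map;

class ApgarExample {
    // Each of the five components (respiration, heart rate, muscle tone,
    // reflex response, color) scores 0, 1, or 2; the overall APGAR score
    // (0-10) is simply the sum of the component scores.
    static int overallScore(Map<String, Integer> components) {
        return components.values().stream().mapToInt(Integer::intValue).sum();
    }
}
```

So a newborn scoring 2, 2, 1, 2, and 1 on the five components gets an overall score of 8, and the Bundle would carry the five component ClinicalImpressions plus a sixth for that total.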

Assessment instruments are commonly used to collect data essential in clinical research regarding a patient's current cognitive or functional status, data related to social determinants of health at the start of a research program, or during the course of research (to determine patient progress).  They are also used to collect data about patient reported outcomes, most often at the end of the research intervention, but perhaps also periodically (also to assess progress).

As we undertake the R&D effort to address the scaling problem, we'll keep track of which approaches we attempt and how well each succeeds, and will report this in the Assessments for Research (A4R) white paper.  The white paper gives us the opportunity to explore different approaches and publicize our findings in a way that helps drive forward progress on encoding assessment instruments in an interoperable manner.

Some thoughts as we've discussed this profile thus far:

  1. It's important to classify assessment data elements by a set of categories that might be important for research.  The hierarchical structure of the Questionnaire allows for groups which enable us to introduce groups that can be used to record classification codes.  For example, if one is performing research on alcohol use, one might be interested in seeking a wide variety of data from multiple assessment instruments.  Enabling use of classification codes in the Questionnaire will enable groups of related questions from multiple instruments to be identified.
  2. How should automated pre-population of responses be addressed?  For example, in cases where there is sufficient data in an EHR system to answer common questions re: age, gender, et cetera, how might this be enabled in the encoding of the Questionnaire resource?
A lot of what goes into these proposals is based on work on assessments and automated data capture for research that started in IHE in 2006 on the RFD profile, in 2007 in the Assessments work done by Patient Care Coordination, and continued through ONC efforts on Structured Data Capture in 2012 (which greatly influenced the work of the FHIR Questionnaire and QuestionnaireResponse resources), and current work being done on Patient Reported Outcomes in FHIR by HL7.

As John Moehrke said, this work won't be "Done Dirt Cheap", which also means it likely also won't be a dirty deed that doesn't get the job done either.  I think we're about to rock on assessments.

Update: Both proposals were accepted today. More later as the work progresses.

Tuesday, February 19, 2019

The short and long of the PatientAccess rule

I never did finish up my regulatory summary post on the patient access rule last week, even though I finished reading the regulation text on Monday of last week. So I'm going to combine that with the detail review.  While the rule still hasn't been published in the Federal Register, you can find the preprint from CMS here.

The Short of It

This is what the reg says, and my responses to it. I start there because I don't want to anchor myself in the regulator's thinking just yet. It's also a LOT less text to read.  

Patient Access

Think of "mom" below as Medicare enrollee, and Kingle as Medicaid enrollee. These are real people for me, which helps me to think about the impacts of the rule.  

Patient Access for Mom

Mom's MA organization has to provide APIs that allow her to use an app, after mom approves it, to access standardized claim data, adjudications, appeals, provider payments (remittances) and co-payments (cost-sharing) within one business day of claim processing. This is essentially an API form of an EOB, but CMS doesn't use that phrase anywhere in the rule; see how they describe things here.

Mom can also get standardized encounter data within 1 day, provider directory data, including names, addresses, and phone numbers within 30 days of update, and clinical data and lab results within one day.

And because Mom is also covered by a Part D plan, she'll be able to get information about covered medications, pharmacy directory data, and formularies.

All using the standards that are adopted by the Secretary at 45 CFR 170.215, which includes FHIR DSTU2, ARCH, Argonaut Data Query, SMART, OIDC and FHIR STU3 ... or some more advanced version of the standards unless specifically prohibited; what @HealthIT_Policy (Steve Posnack) calls raising the upper bar.

Mom can tell her Medicare advantage organization to go get data from her previous plan up to five years after changing the plan.

MA providers have to participate in a trusted exchange where they can exchange this data.

Patient Access for Kingle

Now, Kingle, a Medicaid beneficiary I know basically gets the same rights as Mom, because the States have to do this for them too. And just like Mom, Kingle gets access to the same data. With all the same aforementions and aforesaids thereunto pertaining.

And MA and States must provide web sites in which mom and Kingle can get all the information they need about this stuff, including their rights, and how to bitch to the OCR and the FTC if need be.

And 438.242 of the rule says that Health Information Systems must "Participate in a trusted exchange network" which exchanges health information, supports secure messaging or query between payers, providers and PATIENTS.

Patient Access for Any Life CMS Touches

All of the aforemented, aforesaids and thereuntos apply to CHIP beneficiaries as well, and qualified health plan members in federally facilitated exchanges (27 states), and some other stuff. So, basically, if the Feds give money to states or MA organizations to provide healthcare, they have to give mom, Kingle or any other beneficiary access to their data.

Conditions of Participation

Conditions of Participation translated means: if you get paid by CMS funding, you have to do this. If you happen to be a hospital participating in Medicare or Medicaid, and you have an EHR (just about all of them) implementing HL7 V2.5.1 ADT messages (all of them), then the system must send notifications with patient name, doctor, hospital and diagnosis.

Critical Access Hospitals and psychiatric hospitals have the same responsibilities as other hospitals. The reason for separating these out is so that CMS can change its mind about them individually in the final rule, because some of them might complain very loudly, and this gives CMS a way to codify their requirements differently.

Qualified Health Plans in Federally funded (facilitated) Exchanges have to do the same thing as MA, States, CHIP plans, et cetera, or get a special exception by having a good reason for non-compliance and a timeline for correction. And even then, their special exception becomes public.

Thus ends my analysis of the rule itself, and you know just about everything I do about what the proposed regulation says.

The Long of It

This is where I analyze the front matter, which contains the regulator's justifications for the regulation content. In here, you also find the alternatives they considered (and which are still fair game in the final text), the other questions they are specifically asking you to respond to, and what they say they aren't going to do, or may do later. This is good reading, but I don't want to read it first, because I want to have my own thoughts in front of me before I read theirs.

For whom the bell tolls

The Patient Access rule applies to Medicare/Medicaid Fee for Service, CHIP and CHIP entities, Medicare Advantage, Managed Care, prepaid inpatient & ambulatory health plans, and qualified health plans in federally facilitated exchanges. Nearly anywhere CMS writes a check, as best I can figure.

All patients can have their clinical and administrative data travel with them, with complete records available to their providers, and payers should be able to exchange with other payers. Of course, APIs play a significant role "without special effort". Every definition of interoperability in government references the now famous “without special effort” IEEE text, which I was writing about here in 2013.

APIs will use FHIR as the Standard

The most important part of the history in this section for me is the call out of the Da Vinci Project Coverage Requirements Discovery (CRD) profile with CMS, and work on prior auth for CPAP.  Between that and the recent letter of thanks that CMS sent to HL7, we can get a very strong idea of some of the directions of CMS thinking around APIs for Medicare/Medicaid in the future.  There's also some potential here for FHIR to supplant X12N EDI transactions for HIPAA in the future, and some appetite in the industry for the same ... just as a related trend to think about.

HIPAA allows for APIs

In a somewhat long-winded response, @CMSGov reminds covered entities that patients are not covered entities and that they can direct covered entities to send data to third parties (and apps by association) by right under HIPAA (as revised by the Omnibus regs). You might recall my own observations on the impacts of the HIPAA Bus and e-mail; well, they apply equally to APIs according to this text (and my own analysis).

Everyone to Use FHIR

The rule would require “MA organizations, state Medicaid and CHIP FFS programs, Medicaid managed care plans, CHIP managed care entities, and QHP issuers in FFEs (excluding issuers of SADPs)” use HL7 FHIR for APIs. CMS intends “to prohibit use of alternative technical standards that could potentially be used for these same data classes and elements, as well as earlier versions of the adopted standards named in 42 CFR 423.160 [the HIPAA ePrescribing standards], 45 CFR 162 [the HIPAA transaction standards] & proposed at 45 CFR 170.213” 

CMS also reports a “wish to assure stakeholders that the API standards required of ...[that list of payees]... under this proposal would be consistent with the API standards proposed by ONC [in the Cures rule].”  Where no standards are elsewhere mandated and HIPAA transaction standards are the only ones available, the rule would “require entities subject to these proposals to use these HIPAA standards when making data available through the API.”  In payer-to-payer information exchanges, they could still use the HIPAA transaction standards they already have, or could use FHIR et al. for exchange required by the rule, not throwing the baby out with the bathwater.

Pages 32-57 are mostly about the use of FHIR and APIs, and some not-quite-so-new stuff already in regs from CMS.  If you read the Cures rule, you'll see a lot of similar discussion here.  It's mostly background though.  The key point is that the APIs are going to be FHIR, the same set as selected by ONC (though I imagine some thought will be needed around claims and EOB statements as that work progresses in HL7).

Page 57 starts the discussion from CMS’s view on standards update process, and is following ONC's lead in the Cures rule.

The open API in the rule “would include: adjudicated claims (including cost); encounters with capitated providers; provider remittances; enrollee cost-sharing; and clinical data, including laboratory results (where available).” Simplifying that for the non-EDI crowd: claims data is what docs send to payers, cost/remittance/cost-sharing is what patients get from insurers on an EOB statement, and clinical data is what providers put in their EHR (using USCDI).  Also available via APIs would be provider directories and medication formulary data.
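To make that EDI-to-FHIR translation concrete, here's a toy example of pulling cost figures out of an ExplanationOfBenefit-shaped resource. The resource below is pared way down, and the amounts and category codes are invented for illustration, so treat it as a sketch of the idea rather than a real payer response:

```python
import json

# A pared-down, hypothetical ExplanationOfBenefit resource -- real FHIR
# EOBs carry far more detail; amounts and codes here are invented.
eob = json.loads("""
{
  "resourceType": "ExplanationOfBenefit",
  "status": "active",
  "total": [
    {"category": {"coding": [{"code": "submitted"}]},
     "amount": {"value": 250.00, "currency": "USD"}},
    {"category": {"coding": [{"code": "benefit"}]},
     "amount": {"value": 180.00, "currency": "USD"}}
  ]
}
""")

def total_by_code(resource, code):
    """Pull a total amount (e.g., what was billed vs. what the plan
    paid) out of an EOB's 'total' list by its category code."""
    for t in resource.get("total", []):
        if any(c.get("code") == code for c in t["category"]["coding"]):
            return t["amount"]["value"]
    return None

billed = total_by_code(eob, "submitted")
paid   = total_by_code(eob, "benefit")
print(f"billed ${billed:.2f}, plan paid ${paid:.2f}, "
      f"patient share ${billed - paid:.2f}")
```

Multiply that little subtraction across millions of EOBs from an API and you get exactly the kind of price-comparison material I speculate about below.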

CMS has to say much of the same things over and over because different regulations apply to different entities they pay under programs legislated at different times, and some require slightly different variations because of those variations.

Miscellaneous but Important Short Topics


Much of the rule is to be applicable by January 1, 2020, but for some (CHIP), by July 1, 2020. That’s not a typo. A shot and a beer that the industry response is going to push for a later date is a bet I think nobody will take; maybe we should bet on the actual date appearing in the final rule instead.  Given rule deadlines, January 1, 2020 is very short notice.  The rule still hasn't been published; let's say it is on March 1. Then the comment period goes through March and April, and CMS can start putting together its responses afterwards.  I'd allow another 60 days or so for that to get done, and it still has to go through another week or more at OMB before publication as an FR.  So, call it about 90 days total.  That means an FR could show up in the late July/early August time frame, with an implementation date 6 months away?  That seems VERY tight, especially for the payer space.  I'm guessing those dates will move in response to industry pushback.

Color Commentary

CMS claims patient access is “designed to empower patients by making sure that they have access to health information about themselves in a usable digital format and can make decisions about how, with whom, and for what uses they will share it.”

It sure as hell will as I read it!

This kind of data will make unprecedented price transparency available to patients through APIs, and the third parties they wish to share it with under the rule. Imagine if you will what one could do with EOB and price data from millions of patients; think of intervention studies where the intervention is a change in health plan, for example.  Think about what patients who pool their data with others might learn:  Under plan 1, doctor D charges $X for procedure P; what does doctor D charge under plan 2?  What does doctor E charge under plan 1?  How many of procedure P do doctors D and E do?  What's the cost of procedure Q? As I think these parts through, this could be earth-shattering in enabling patient cost control; it almost makes me sorry not to be on Medicare just yet.

The scary part is who else will try to take advantage of it... and I see many opportunities for abuse here... especially in terms of resale of patient data gathered by apps, even anonymized or aggregated.  I think much thought is needed on the unforeseen consequences, and a risk analysis on these components is something I think the industry should certainly do in response, with feedback to CMS on the results.

Trusted Exchange Network

As I read through the section on Trusted Exchange Networks in the rule, I don't see enough words for me to equivalate [yes, that’s a word] it to the trusted exchange framework, though I see parallels. A framework is not a network (just ask someone from Carequality/Sequoia if they are a network).  I think there will certainly be tie-ins between the two, but I don't think they are the same thing.

Complexity in the Rule

CMS comments on the need to align Medicare and Medicaid to support care, but the rule also makes it clear that CMS needs to align many programs (MA, Part D, CHIP, FFE and others) on standards. A better rule structure with common content might improve compliance.  This, as I said earlier, has in part to do with the legislative background associated with CMS responsibilities under so many programs.  I think a "Chinese Menu" approach might be applicable here, where, like ONC, CMS creates a list of requirements that other sections reference as appropriate.

(Some) States need to Up Their Game on Dual Eligible Patients

Under Increasing the frequency of federal-state data exchanges for dually eligible individuals, CMS is telling states that do this monthly that daily exchange is necessary, and will help cut costs and improve patient outcomes for both CMS and the states that are behind.

The new "Wall of Shame"

CMS runs through background on information blocking from pages 126 through 135, and notes that CMS will publicly publish attestations regarding information exchange on the three questions in section I here.

NPPES to support Electronic Contact Information

Under the rule, CMS would use its NPI provider directory to publish digital contact information for both individuals and facilities, eliminating the problem I described here.  This is a thought that I've dropped in various suggestion boxes over many years, and it was discussed very early on in the Direct project.

ADT Notifications

Under conditions of participation for hospitals, the rule would require some form of notification (i.e., a functional capability) to be given to other providers upon patient admission, transfer or discharge, but does not require a specific standard for it, for those providers with 2015 CEHRT having HL7 V2.5.1 ADT messages (see 170.299(f)(2)).  Special call-outs for psychiatric hospitals and critical access hospitals allow CMS to use the same or different requirements for these kinds of facilities in the final rule.  This is a smart move by CMS to alleviate the challenges that might be raised by those institutions with special requirements.
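For readers who haven't stared at V2 before, here's roughly what such a notification carries. The message below is a hand-made ADT^A01 sketch with all identifiers and values invented (the rule names the data elements, not a message layout), plus a few lines of Python pulling out the name/doctor/hospital/diagnosis content:

```python
# Hypothetical HL7 V2.5.1 ADT^A01 notification; every value is invented
# for illustration. Segments are separated by carriage returns, fields
# by '|', and components within a field by '^'.
RAW = "\r".join([
    "MSH|^~\\&|ADT1|EXAMPLE HOSPITAL|||201905170830||ADT^A01|MSG0001|P|2.5.1",
    "PID|1||123456^^^MRN||DOE^JANE||19470214|F",
    "PV1|1|I|2W^389^1||||1234^WELBY^MARCUS^^^DR",
    "DG1|1||I10^Essential (primary) hypertension^ICD-10-CM",
])

def parse_adt(message):
    """Split an HL7 v2 message into {segment_id: fields}, keeping the
    first occurrence of each segment (enough for this sketch)."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], fields)
    return segments

seg = parse_adt(RAW)
patient   = seg["PID"][5].replace("^", " ")  # PID-5: patient name
doctor    = seg["PV1"][7].split("^")         # PV1-7: attending doctor
# MSH is off by one when split on '|' because MSH-1 *is* the separator,
# so index 3 is MSH-4, the sending facility (the hospital).
hospital  = seg["MSH"][3]
diagnosis = seg["DG1"][3].split("^")[1]      # DG1-3: diagnosis description

print(patient, doctor[1], hospital, diagnosis)
```

The point is just how modest the payload is: a hospital's existing ADT feed already carries everything the condition of participation asks for, which is presumably why CMS felt comfortable requiring the function without naming a new standard.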

Requests for Information

The last part of the Patient Access NPRM isn't about rules, but rather questions that CMS wants to get feedback on before it makes more policy in this space.  There are three key topics, and I'd suggest you read and respond to these:

  1. Supports for Long-term and Post-Acute Care
  2. Patient Matching
  3. Innovation Center Models for Advancement

The End (for Now)

And that takes us to the end of the interesting stuff in the front matter.  The rest (from page 172 to the start of the regulation) covers regulatory disclosures that talk about the costs of the rule, data collection, and other stuff that is required of regulators, but is generally very difficult to analyze without deep economic expertise.  However, if you have that ability and provide feedback in this space (not many do), it would probably wake someone up.