Wednesday, September 18, 2019

One more time ...

It's a Wednesday morning in mid-September; those of us at the HL7 Working Group Meeting will shortly be hearing from Ed Hammond, and some will be awarded vases in Duke Blue.  That means it's also time for me to award the next Ad Hoc Harley.

This next awardee is someone I'd best describe as a passionate lifetime student.  I first met this person while I was chair of the Patient Care Coordination workgroup in IHE and they were just learning CDA.  I love to teach, and they absorbed just about everything I could teach and then sought out more.

It didn't take long before the two of us could spend an hour of committee time discussing the exact way in which some small piece of CDA could be modeled so that it was just right, and the semantics meant exactly what they should.  They're a very strong leader, so we were sure to clash on some topics, but always with mutual respect, retaining our friendship even when we disagreed.  Health IT is better for those sorts of interactions, something not everyone appreciates (but I do, and so do they).

Before too long they had become CDA Certified, were leading a workgroup developing multiple CDA specifications, and had authored or edited as many CDA implementation guides as I had.  They've been deeply involved in specifications for long term care and quality measurement, first in CDA and later in FHIR.  Most recently they've been leading terminology efforts with a team developing specifications that will aid the underserved by enabling the capture of Social Determinants of Health in the HL7 Gravity Project.

I look forward to continuing to work with her, and to learning what she has to teach me in this space, as I award the next Ad Hoc Harley to ...

Lisa Nelson, BA, MBA, MMI of MaxMD


for being a lifetime student and a teacher of leaders...


Friday, September 13, 2019

Life will find a way

The defining behavior of an organization or organism is that it will resist change that is perceived to threaten its existence.  Few organizations (or organisms for that matter) fail to do so, or ever plan for the point at which the need for their existence ceases to be present.  This is also true of work groups and committees, even those built to address temporary situations, unless there's some very specific Hayflick limit assigned to them at their creation (and even that isn't always sufficient).

So very often as change agents, we fail to take this into account when we envision change.  For example, this recent post from Z-Dogg MD talks about the hidden consequences of Medicare for All.  I can't fault Z-Dogg's logic here, but what I do fault is the fundamental assumption that "other things remain the same".  Yes, if Medicare for All were to become a thing, the existence of hospitals, and of medicine as a profession, would certainly be threatened.

But here's the catch.  Hospitals are organizations, and medical professionals are organisms.  There's an appropriate meme for this:
 

And the challenge for many is that we won't actually understand how, or in what way, the organizations and organisms threatened by these changes will adapt.  But adapt they will.

One thing routinely taught in both User Experience and Informatics classes is an awareness of unintended consequences.  With any big change, there will be consequences.  In a complex system, some of those consequences are certainly going to be undesirable to those organisms or organizations whose existence is perceived to be threatened by that change.  But simple logic that assumes the system isn't going to adapt to the change isn't going to cut it.  Yet few will think beyond it.

In the case of Medicare for All, I can honestly say that I haven't a clue what will happen, but if hospitals are threatened, they will actively seek out ways to remain profitable.  And physicians who are accustomed to a certain level of income may in fact leave medicine, but they will eventually be replaced by others who never knew that level of compensation (see this summary).  Other impacts might be to pressure institutions that train physicians to do so in a way that doesn't leave them with crippling debt. We know this will happen because a hole in the ecosystem ... or the economy won't be left vacant for long.






Tuesday, September 10, 2019

How Things Fall Apart



It doesn't take an evil overlord to destroy something.  Things will fall apart on their own if you simply wait long enough.

Beyond a certain size, a software project, a publication, a technical specification, or a standard becomes unmanageable.  This can be mitigated with processes, but processes executed by humans aren't nearly as repeatable as processes managed using automation.

Let's take a look at an example of how something like this happens in standards...

In HL7 Version 2, the PV1 segment reports information about a patient visit.  An important thing that happens during an inpatient or emergency department visit is that the patient is discharged.  You will want to know about this, in terms of both where to, and when.  In HL7 Version 2.1, where is captured in PV1-37 Discharged To Location, and when in PV1-45 Discharge Date/Time.

Let's look at the evolution of PV1-37 over time.

Version 2.2 (datatype CM)
  Components: <code> ^ <description>
  Definition: indicates a facility to which the patient was discharged. Refer to user-defined table 0113 - discharged to location.

Version 2.3.1 (datatype CM)
  Components: <discharge location (IS)> ^ <effective date (TS)>
  Definition: This field indicates a facility to which the patient was discharged. User-defined table 0113 - Discharged to location is used as the Hl7 identifier for the user-defined table of values for this field.

Version 2.5.1 (datatype DLD)
  Components: <Discharge Location (IS)> ^ <Effective Date (TS)>
  Definition: This field indicates the healthcare facility to which the patient was discharged and the date.  Refer to User-defined Table 0113 - Discharged to Location for suggested values.

Version 2.8.2 (datatype DLD)
  Components: <Discharge to Location (CWE)> ^ <Effective Date (DTM)>
  Definition: This field indicates the healthcare facility to which the patient was discharged and the date. Refer to User-defined Table 0113 - Discharged to Location in Chapter 2C, Code Tables, for suggested values.

Originally, PV1-37 was just a composite made up of a code and a description.  Between 2.2 and 2.3, the data type and description of the second part of the composite changed from <description> to <effective date (TS)>.  Why?  I don't know.  It could quite simply have been someone asking "What do they put in the description field?", someone else responding "Usually the discharge date", and from there maybe they decided to set the data type to TS, not realizing this data was already in PV1-45. NOTE: It's really hard to say what happened, because this was back in 1994.  The HL7 List Serve only tracks back to 2012 for the pafm and V2 lists, the historical tracker database only goes back to Version 2.5, which came out in the 2006/7 time frame, and the ballot desktop only goes back to 2004.

In the 2.4-2.5 time frame the CM data type was replaced with a real data type (DLD), which was defined as being an IS followed by a TS.  That we can determine from change request 135 in the HL7 change tracking database for Version 2.

Over time, IS data types were switched to CWE data types, and TS to DTM as you can see in Version 2.8.2 above.

So, somewhere along the way, we lost track of why <description> became <effective date>, and duplicated what was already in PV1-45 Discharge Date/Time.
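
To make the duplication concrete, here's a purely illustrative example (the field values are invented for this post) of what a post-2.3 message can end up carrying:

    PV1-37 Discharged to Location:  HOME^201909101300
    PV1-45 Discharge Date/Time:     201909101300

The effective date in the second component of PV1-37 tells a receiver nothing that PV1-45 doesn't already say.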

People move on, knowledge gets lost, cruft and minor issues accumulate, technical debt accrues.  Eventually, technology needs to be rebuilt, just like anything else, in order to be relevant, useful and most of all, well understood by all currently using it, or it collapses under its own weight.




Monday, September 9, 2019

On The Proliferation of Implementation Guides

For all of the problems that standards solve, they also create more.

There's even a standard comic for this one, XKCD 927, below.


Standards are like potato chips, you can't have just one.  I've said this many times.  The challenge we face, though, is that everyone has a somewhat different use case.  So it's more a matter of 14 different flavors of potato chip, rather than 14 different potato chips.  And somehow we have to put all of them into the same bag (maybe) to make everyone happy.
  • Company A wants to record blood pressure using a consumer device simply and easily.
  • Provider B wants to get that information as easily as possible, and their organization wants it to cost as little as is reasonably possible.
  • Professional Society C wants it to conform to the highest quality of standards that can be met practically using consumer grade equipment.
  • Medical device manufacturer D wants the data to contain everything they feel is essential to cover their FDA requirements for capture of the information.
  • Medical Researcher E wants to have ALL of the data the same way every time for the most reliable measurements.

Every single one of these constituents has reasonable requirements for why they want what they want. Many of the requirements conflict with each other in many ways.  How do we get from here to there in a way that we can make use of the data?
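
To make those flavors concrete, here's a sketch (purely illustrative; the LOINC and UCUM codes are real, but the shape and values are just an example) of the bare-minimum blood pressure Observation Company A might be satisfied with, built with the HAPI FHIR R4 model.  Everything the other constituents care about (cuff size, body position, method, device details, the raw signal) is exactly what's missing from it.

    import org.hl7.fhir.r4.model.CodeableConcept;
    import org.hl7.fhir.r4.model.Coding;
    import org.hl7.fhir.r4.model.Observation;
    import org.hl7.fhir.r4.model.Quantity;

    public class MinimalBloodPressure {

        // A deliberately minimal BP reading: the LOINC panel code plus systolic/diastolic
        // components, and nothing else.  Constituents C, D, and E above would all want more.
        public static Observation minimalReading(double systolic, double diastolic) {
            Observation bp = new Observation();
            bp.setStatus(Observation.ObservationStatus.FINAL);
            bp.getCode().addCoding()
              .setSystem("http://loinc.org").setCode("85354-9").setDisplay("Blood pressure panel");

            bp.addComponent()
              .setCode(new CodeableConcept().addCoding(
                  new Coding("http://loinc.org", "8480-6", "Systolic blood pressure")))
              .setValue(mmHg(systolic));
            bp.addComponent()
              .setCode(new CodeableConcept().addCoding(
                  new Coding("http://loinc.org", "8462-4", "Diastolic blood pressure")))
              .setValue(mmHg(diastolic));
            return bp;
        }

        // Helper that expresses a value in mm[Hg] using UCUM.
        private static Quantity mmHg(double value) {
            return new Quantity().setValue(value).setUnit("mmHg")
                .setSystem("http://unitsofmeasure.org").setCode("mm[Hg]");
        }
    }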

The way this is done today in standards is essentially by eminence, rather than by evidence.  We compensate for a lack of real data (under-engineering) by over-engineering a solution. To be honest, it's pretty damn difficult to gather the evidence needed.  How would you put a number on any of the following?
  1. What's the actual value to research for having ALL of the data (not just the BP measures, but the signal that was interpreted to produce the measure)? 
  2. How much of the FDA medical device requirements are essential for patient care?
  3. How much variance is reduced by recording cuff size?  
  4. What is the impact of not recording it on patient care?  
  5. How accurate is the data that is recorded? 
  6. What is it actually going to cost to get that data to a healthcare provider?
  7. What's the cost to develop an app that allows this data to be transmitted using various standards?


Can you even get data that would be aligned with measures associated with any of these questions?  Would you be willing to sit in a room with a number of noted physicians and even ask some of these questions?  Because if you do, you will be challenging the status quo somewhere.

But the status quo right now is that this isn't happening in a practical, cost-effective fashion, and someone has to engineer a solution to make it work.  How much engineering is actually going to happen, though, without looking at all of these factors?  Because if you don't address those factors, you aren't really doing engineering (you may be over-engineering a perfect solution, a sure sign of under-engineering).

Until we can solve this problem, we're simply going to be stuck with a whole bag of things that is probably more than we need.




P.S. When this ad came out, I did eat just one, just to prove that I could.

Friday, August 23, 2019

The 3rd Annual ONC Interoperability Forum

A dozen years ago, there were 200 people deeply involved in Interoperability programs in the US.  Over this last week I attended the third annual Interoperability Forum hosted by ONC, and I can safely say that there are now at least 1000 people deeply involved in Interoperability programs.  It was hard enough to keep track of it all back then, and I can honestly say that I don't envy Steve Posnack his new role (nor his past one), as well served as we are in this country by having him in it.

I would have to say that 21st Century Cures represents the second reboot of our Interoperability program in the US.  The "first" program was under the ONC created by Bush's Executive Order 13335.  The first reboot of the US program came via ARRA/HITECH, when ONC was enshrined in law by Congress.  Having followed national programs in Canada, Europe, Asia and the Middle East, I can safely say that programs at this level reboot about every 5-7 years.

Big themes this year:

  • Not just payer engagement, but strong payer leadership in interoperability initiatives.
  • Learning from the international community ... International keynotes book-ended this conference: Dr. Simon Eccles from the UK's NHS program kicked things off, and a team from the Netherlands including long-time IHE participant Vincent van Pelt wrapped it up.
  • APIs were watchword number 1, quickly followed by TEFCA and then Privacy.
    • Ready, FHIR, Aim... that's the state of FHIR right now.  In Gartner hype cycle terms, we are so at the peak of inflated expectations.  This isn't a bad thing, or even a ding, it's just me acknowledging where we are at.  We have to be there at some point.  I expect that FHIR will cross the chasm swiftly though, with the brain power that is working on uses of it to solve so many challenges.
    • TEFCA and the framework are still ramping up from what I can figure; there's so much that people have yet to work out (but are working madly to do so).  The big challenge: how to get to an endpoint given a patient, provider or payer identity.
    • The biggest concern of many around APIs is how to protect consumers (b/c this data isn't protected by HIPAA).  I'll say this again: 20 years ago you never would have entered your credit card number on the web; we will figure this out, and mistakes will be made, just as before.  Yes, we can learn from mistakes of the past, but the reality is, you won't discover the real problems until they crop up.
      As to the assertion that consumers aren't ready for APIs, that's like saying that consumers weren't ready for the internal combustion engine.  It's also the wrong concern. Consumers are ready for what the APIs (internal combustion engine) will be used for: Apps (cars, tractors and lawnmowers).
  • The thing that everyone wanted to know and nobody could figure out was "When's the (Cures|CMS|HIPAA) Regulation|Next version of TEFCA going to show up?" 
    • Cures had 2000 comments.  It was planned for Q3 this year, but then there was a shutdown before it ever got published (that publication had been planned for Q4 last year).  The pressure is still on, but my bet: Q1 2020.  I'm sure that ONC wants it out sooner, but I don't see how it could happen in Q3 at this stage (it would already need to be in OMB), and in my opinion, Q4 is still dicey based on what we heard from Elise Anthony on day 1 about the review that's still going on (recall that last year, it was in OMB in October, and we still didn't see it before Christmas).
    • I heard that the CMS regulatory agenda (the RegInfo site is down this morning so I cannot verify) is suggesting November for its final rule (Learn how to use these tools if you need to track this stuff).
    • In case you missed it, the 42 CFR Part 2 Update just came out, which is why it isn't listed above.


Thursday, August 15, 2019

Getting FHIR Data from mHealth devices and applications

I've been spending a good bit of my time working on understanding health data in mobile apps and devices.  Most of my research tells me we need to look at what the problems really are, rather than to assert that ___ will solve the problem.

There's not really a good collection of FHIR data coming from mobile apps and devices that could be used for any sort of analysis.  To address this problem, the Mobile Health Workgroup in HL7 is sponsoring a track at the HL7 September FHIR Connectathon to explore what kind of FHIR resources come out of these devices, and produce that collection for analysis.

The workgroup is hosting a meeting on August 23rd at 11am Eastern to discuss this track if you would like to learn more.  Coordinates are below:


Web Meeting:
Dial-in Number (US): (515) 604-9930
Access Code: 836039
International Dial-in Numbers: https://fccdl.in/i/mhealth
For 24/7 Customer Care, call (844) 844-1322


Saturday, July 13, 2019

Optimizing Inter-Microservice Communications using GZip Compression in HAPI on FHIR

It's widely known that XML and JSON both compress really well.  It's also pretty widely known that one should enable GZip compression on server responses to improve server performance.  Not quite as widely known: you can also compress content being sent to the server (for POST or PUT requests).  Also, most people can tell you that JSON is smaller than XML.

And size matters when communicating over a network.

So it should be obvious that one should always GZip compress data whenever possible when connecting between two servers, right?

Uhmm, not so much, but you could already see that coming, because what would I write a blog post about if it were true?

Here's the thing.  Compression saves time for two reasons:

  1. It takes less time to transmit less data.
  2. There's less packet overhead with less data.
But it also takes CPU time to compress the data.  So long as the CPU time taken to compress the data on one side, and uncompress it on the other side, is LESS than the savings in transmission and packet overhead, it's a net win for performance.
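
As a rough rule of thumb (my shorthand, not a formal model), compression wins on a given hop only when:

    time to compress + time to decompress  <  (bytes saved × 8) / bandwidth  +  round trips avoided × network latency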

Let's look at data transmission:

A maximum transmission unit (MTU) is about 1400 bytes.   This takes a certain amount of time to transmit over the network.  Here are some values based on different networking speeds:
    Bandwidth (Mbps)    Time (ms)
                   5        2.24
                  10        1.12
                  20        0.56
                 100        0.112
                 200        0.056
                 300        0.037
                1000        0.012
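
For reference, the arithmetic behind the table is straightforward: a 1400-byte packet is 11,200 bits, so

    11,200 bits ÷ 5,000,000 bits/s = 2.24 ms
    11,200 bits ÷ 10,000,000 bits/s = 1.12 ms
    11,200 bits ÷ 1,000,000,000 bits/s ≈ 0.011 ms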

Depending on network speeds, eliminating a single packet can save anywhere from 12 µs to 2.2 ms.  This isn't very much, but if you have to send more than one packet, then you also have interpacket latency, which is basically the round-trip time from client to server for acknowledgements.  (ACKs don't need to be immediate in TCP; a certain number of ACKs can be outstanding at once, so latency isn't introduced on every packet sent.)  But your network latency also has an impact on throughput (network latency is generally measured on the order of 10s of ms).

I ran an experiment to see which method was fastest when sending data in a POST/PUT request, using GZip or not using GZip, and the results were interesting.  I sent 200 create requests in which I controlled for the size of the resource being sent, in terms of the number of packets required to send it, from 1 to 10 packets of data (where by packet I mean a single TCP segment transmission, controlled by the maximum MTU size).  I sent the requests in two different formats (XML and JSON), over three different networks.

For a control network, I used localhost, which involves no transmission time or effective latency.  I also did the transmission over my local network, so that it actually went from my system, to my router, and then back to my system.  And then finally, I transmitted from my system to my external IP address (so it left the router, went to my cable modem, and came back through it).

I burned the first batch of 200 requests to initialize the code through the JIT compiler.

Here's what I found out:

  1. Don't bother compressing on localhost, you are just wasting about 2ms of compute on a fast machine. 
  2. Don't bother compressing within your local network (i.e., to a switch and back).  Again, about 2ms loss in compute on a fast machine.
  3. Going across a network boundary, compress JSON after 3 packets, and XML always*.
  4. Use JSON rather than XML if you are using a HAPI server.  JSON is ALWAYS faster for the same content.  For smaller resources, the savings is about 20%, which is fairly significant.

What does this mean for your microservices running in your cloud cluster?  If they are talking to each other over a fast network in the same cluster (e.g., running on the same VM, or within the same zone with a fast network), compression isn't warranted.  If they are communicating across regions (or perhaps even different zones within the same region), then it might be worth it if your content is > 4.5K, but otherwise not.  A single resource will generally fit within that range, so generally, if what you are compressing is a single resource, you probably don't need to do it.
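
If you do decide a given hop is worth compressing, turning it on in a HAPI FHIR client is essentially a one-liner.  Here's a minimal sketch (not my test harness; the endpoint URL is a placeholder, and the interceptor's package can vary a bit between HAPI versions):

    import ca.uhn.fhir.context.FhirContext;
    import ca.uhn.fhir.rest.client.apache.GZipContentInterceptor;  // package may differ by HAPI version
    import ca.uhn.fhir.rest.client.api.IGenericClient;
    import org.hl7.fhir.r4.model.Observation;

    public class CompressedCreate {
        public static void main(String[] args) {
            // FhirContext is expensive to create; build it once and reuse it.
            FhirContext ctx = FhirContext.forR4();
            IGenericClient client = ctx.newRestfulGenericClient("http://example.org/fhir");

            // GZip the outgoing request bodies (response compression is a separate
            // negotiation via Accept-Encoding).  Per the findings above, only bother when
            // the payload spans several packets and crosses a real network boundary.
            client.registerInterceptor(new GZipContentInterceptor());

            Observation obs = new Observation();   // stand-in for whatever resource you're creating
            client.create()
                  .resource(obs)
                  .encodedJson()                   // JSON was consistently faster than XML in my tests
                  .execute();
        }
    }

The point of the sketch is just that the knob exists and is cheap to flip; whether to flip it should come from measurements like the ones above.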

It won't hurt much; you'll lose a little bit of performance (less than 5% for a single request if the server doesn't do much work), and much less if the server does something like a database store [all my work was just dumping the resource to a hash table].

The very limited savings you get from turning on outbound compression in the client for an interservice request swaps compute time (which you pay for) for network time (which is generally free within your cloud), and saves you precious little in the performance of a single transaction.  So any savings you get actually comes at a financial cost, and provides very little performance benefit.  Should your cloud service be spending money compressing results?  Or delivering customer functionality?

Remember also when you compress, you pay in compute to compress on one end, and decompress on the other.

    Keith

* My results say the compression is faster, but that the difference in results (< 2%) isn't statistically significant for less than 2 packets for XML.  I had to bump the number of tests run from 20 to 200 to get consistent results in the comparison, so it probably IS a tiny bit faster, I'd just have to run a lot more iterations to prove it.