
Tuesday, July 31, 2018

A Book Review for HealthIT Communicators


My wife works at a library, and often brings home books she enjoys reading.  Recently she told me about a book she was reading by Alan Alda about his work using improvisation to improve communication in science and technology.  I grabbed a digital copy for the family, including the audio-book version read by Alan.

So much of this book resonated with my experiences over the last decade and more, starting with the event that gave me the reason behind the name of this blog.

Many years ago, I found myself speaking to an audience with a slide deck in front of me, behind a podium 20 feet from the front of a raised stage, in an auditorium with an orchestra pit.  After 15 seconds of trying to connect with the audience in that awkward position, I ditched the podium mike, grabbed the hand-held and went to the front of the stage.

I was now back in a familiar setting, that of the theater, in which I had spent some three semesters with a fantastic director during my early college days.  And I responded to that setting with all of the training* I had been given about speaking (projection, diction), connecting with audiences, and MOST of all, improvisation.  The first thing I did was go off script**.

I used that entire stage as I talked (from down stage right) about how Cross Enterprise Document Sharing (XDS) would enable patients living and being cared for in Hartford (where we were meeting) who traveled to Danbury (crossing to stage left) to get access to their documents, traversing back to center stage to reference my slides on the large backlit projection screen behind me (somewhat like a TED setup).  I could see the audience track me with their eyes from one side of the stage to the other.

That presentation was a turning point in my career in terms of how I approached speaking at public events.  What I learned that day was what Shakespeare already knew: "All the world's a stage, and all the men and women merely players ..."  Since then, I no longer need the trappings of the stage, merely the impression that I have one, to apply the training I so thoroughly enjoyed.  Improv was the key for me, as it was for Alan, as he describes in his book.

This is an awesome book for technology communicators.  Read it.  Enjoy it.  Apply it.  I have the same sense about this book that others related to me about my presentation that day: he nailed it.

   Keith

* For those of you who don't know me well, I am somewhat of an introvert.  Dealing with people in groups is exhausting, and I can spend hours on my own quite satisfied.  But I loved theater because it was a safe environment to go be other than my introverted self.  So I explain myself as being a well-trained introvert, and it was that early theater and improv training I had in high-school and college that I'm speaking of.

** I no longer script presentations, but I do prepare and rehearse them, especially if I've only done them once or twice.

Monday, July 23, 2018

Will Faxes never Die?

[Image: a very old fax machine]
A question came to me from a former colleague that I found interesting:

Can you convert a fax document into a CCDA for Direct Messaging?

Yes.  Here's how, at three different levels of quality:

Unstructured Document

This is the simplest method, but it won't meet requirements for 2014 or 2015 Certification.  It's still somewhat useful.

  1. Convert the fax image into a commonly supported file format such as PDF or PNG.
  2. Create a CCDA Unstructured Document wrapping that file (a sketch of the body pattern follows this list).  Optionally, apply the IHE Scanned Document requirements to the content as well.
  3. Include the document in a Direct message.
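
For illustration, here's a minimal Java sketch of step 2's body pattern: the scanned file is base64-encoded and dropped into a CDA nonXMLBody.  The file names are hypothetical, and the full ClinicalDocument header (template IDs, patient, author, custodian) is omitted; this shows only the part that carries the scanned content.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class UnstructuredBodySketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical input: the fax, already converted to PDF.
            byte[] scan = Files.readAllBytes(Paths.get("faxed-note.pdf"));
            String b64 = Base64.getEncoder().encodeToString(scan);

            // The Unstructured Document pattern: the scan travels inside a nonXMLBody.
            String bodyFragment =
                "<component>\n" +
                "  <nonXMLBody>\n" +
                "    <text mediaType=\"application/pdf\" representation=\"B64\">" + b64 + "</text>\n" +
                "  </nonXMLBody>\n" +
                "</component>";
            Files.write(Paths.get("unstructured-body-fragment.xml"),
                        bodyFragment.getBytes("UTF-8"));
        }
    }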

Getting to Electronic Text

Getting the content to meet 2014 or 2015 requirements for certification means doing quite a bit of additional work.  The first step is to get to electronic text.
  1. Apply text recognition (OCR) technology to the scanned image to turn it into electronic text.  This converts the image content into letters and symbols that the computer can recognize.
  2. From that text, create a bare-bones narrative structure with headings and section content, applying some very basic Natural Language Processing (NLP) to recognize which lines are headings and which are body text (see the sketch after this list).  Signals such as line spacing, paragraph formatting and font styles are helpful here, and would often be available from a better-than-basic text recognition pass.  From here, you could create a Level 2 CDA (note, NOT a C-CDA yet) that would be MORE useful to the receiver.
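
As a rough idea of what that "very basic NLP" might look like, here's a hypothetical Java sketch that splits OCR output into sections using a naive heading heuristic (short ALL-CAPS lines, or short lines ending in a colon).  A real implementation would also use layout and font cues, but the overall shape is the same.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    public class SectionSplitter {
        // Naive heuristic: a short ALL-CAPS line, or a short title-case line ending
        // in ":", is treated as a section heading; everything after it belongs to
        // that section until the next heading appears.
        private static final Pattern HEADING =
            Pattern.compile("^(?:[A-Z][A-Z /&-]{2,40}|[A-Z][A-Za-z ]{2,40}:)$");

        public static Map<String, StringBuilder> split(String ocrText) {
            Map<String, StringBuilder> sections = new LinkedHashMap<>();
            String current = "PREAMBLE";
            sections.put(current, new StringBuilder());
            for (String line : ocrText.split("\\r?\\n")) {
                String trimmed = line.trim();
                if (HEADING.matcher(trimmed).matches()) {
                    current = trimmed.replaceAll(":$", "");
                    sections.putIfAbsent(current, new StringBuilder());
                } else {
                    sections.get(current).append(line).append('\n');
                }
            }
            return sections;
        }
    }

Each recovered heading and its narrative can then become a section (title plus text) in the Level 2 CDA body.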

Getting to Certification

After getting to electronic text, now you need to get to CCDA Entries.  It can be done; I've been there and done that, more than a decade ago.
  1. From the previous step, you now need to code the section headings to LOINC (a sketch of such a mapping appears below), and match the document content to an appropriate C-CDA template (knowing that you can also mix and match sections from other C-CDA documents into the base C-CDA CCD requirements).  At this point, you are at Level 2 with coded sections.
  2. Finally, you need to run some specialized NLP to recognize things like problems, medications, allergies, et cetera ... and THEN
  3. Convert that specialized content into the entries required by the C-CDA template chosen in step 1.
And now, you COULD meet 2014 or 2015 Certification requirements.
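
To make step 1 concrete, here's an illustrative Java sketch of a heading-to-LOINC lookup.  The codes shown are the ones C-CDA uses for the corresponding sections; the heading variants are hypothetical, and a real system would need a much larger synonym table plus human review of anything it can't match.

    import java.util.Map;

    public class SectionCoder {
        // Illustrative only: a few common heading variants mapped to the LOINC codes
        // C-CDA uses for the matching sections.
        private static final Map<String, String> HEADING_TO_LOINC = Map.of(
            "MEDICATIONS", "10160-0",          // History of Medication use
            "CURRENT MEDICATIONS", "10160-0",
            "ALLERGIES", "48765-2",            // Allergies and adverse reactions
            "PROBLEM LIST", "11450-4",         // Problem list
            "PROBLEMS", "11450-4",
            "RESULTS", "30954-2",              // Relevant diagnostic tests and lab data
            "PROCEDURES", "47519-4"            // History of Procedures
        );

        /** Returns the LOINC section code for a heading, or null if unrecognized. */
        public static String codeFor(String heading) {
            return HEADING_TO_LOINC.get(heading.trim().toUpperCase());
        }
    }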

Would I do this?

The question not asked, but which I will answer is:

Would you convert a fax document into a CCDA for Direct Messaging?

Probably not, AND certainly not for the structured content option.  Current NLP algorithms being what they are, you could probably get to about 95% accuracy with the structured markup, which means about 1 error in 20 items.  That's MULTIPLE structure recognition errors per page, NOT a level of accuracy appropriate for patient care.  The level of effort involved in cleaning up someone else's data is huge, and the value obtained from it is very rarely worth the cost.  You are better off figuring out how to give your high-volume fax senders a way to send you something better.

I might consider implementing the Unstructured Document mechanism as a feature for providers that are getting faxed today, as many would find it of some use.  It's not really much more though than giving them an e-mail end-point attached to a fax number, so again, of very little additional value.

Thursday, July 19, 2018

Sweat the Small Stuff

Small things can sometimes make a big difference.  The difference between an adequate piece of equipment and an excellent one can be huge.  Sometimes the things that you need to change aren't your equipment, but rather yourself.  That's more than just money; it's time and effort.  It's hard.  It's annoying.

The way that small things can make a big difference is when they provide a small but consistent improvement in what you do.  Keyboards, for example.  Today I had to switch back to a crappy backup keyboard because the 5 and 7* keys on my Unicomp keyboard died.  I can already feel the change in my typing speed.  More than half my work is typing, and the better keyboard is the difference between 70 WPM and 75 WPM.  That's a 6.667% difference in speed.  It's small, and I can live with it for a short period of time.

What will using the cheaper keyboard cost me?  Well, I don't spend all my typing time at top speed, so really, it only impacts 50% of my work.  But that 50% is the most productive time I have, because the rest is meetings and overhead.  So I'm not just losing 6.667% of my time; I'm losing it out of my most productive activity.

Amortized over a year, that's a whole month of my productive time that I somehow have to make up for.  There goes vacation.  All for lack of a minor improvement.  I'll probably get the Unicomp repaired (it's a great keyboard when it works), but I've got a better one on order with Cherry MX blue switches.  They have a similar feel to the buckling-spring switches in the Unicomp's IBM-style keyboard and are the current "state of the art" for typists as best I can tell.  And if one breaks, I can replace the dang switch, which I cannot do on the Unicomp without two to three hours of effort.

A colleague and I were talking about how making personal changes can help you in your efforts.  His point was that many cyclists spend hundreds of dollars (or even more) to reduce the weight of their bicycles by a few ounces to get greater hill-climbing speed.  He noted that losing a few pounds of personal weight can have a much greater impact (I'm down nearly 35 lbs since January, but my bike has never had a problem with hill-climbing, so I wouldn't know about that).

Learning to touch type was something I succeeded (if you can call a D success) in doing in high school, but never actually applied (why I got the D) until someone showed me that I was already doing it (but only when I wasn't thinking about it).  After discovering that, over the next six months, I went from being a two finger typist to four, and then to eight, and then to ten.  That simple physical skill has a huge impact on my productivity.

I now make it a point, when I learn a new application, to understand how to operate it completely without removing my fingers from the keyboard.  And I train myself to operate the applications I most commonly use that way, because it makes a small difference that adds up.  It's a seemingly trivial thing that greatly improves my productivity.  Yeah, I ****ing hated it when Microsoft changed the keyboard bindings in Office (and I still remap some that I have long familiarity with), but I spent the time to learn the new ones.  It ****ed me off for six months, but afterwards it paid off.

Here's where this starts to come into play in Health IT.  We KNOW that there are efficient and inefficient workflows.  We KNOW that changing workflows is really going to yank people's chains.  How do we get people who want to keep doing things the way they always have to make even small changes?  And more importantly, what is going to happen to those non-digital-natives who have to adapt to an increasingly digital world when their up-and-coming colleagues start having more influence?

When we get rushed, we let the small stuff slip.  It's a little bit more time, a little bit more effort.  And the reward is great and immediate: we get more done.  But the small stuff has value.  It's there to keep us from making mistakes.  Check in your code before the end of the day ... but I'll have to take a later train ... and then your hard drive dies tomorrow, and you have to redo the day's work.  Which would you rather have?

Sweat it.  It's worth the effort.

So, what small thing are you going to change?  And what big difference will it make?

   Keith

* 7 is pretty darn common in hash tags I use, and in e-mails I write.  That's pretty dang frustrating.

Wednesday, June 20, 2018

Add, Replace or Change

Most of the interoperability problems we have today can be readily solved.  All we have to do is replace the systems we already have in place with newer better technology.  Does that sound like a familiar strategy?

If you've been involved in Meaningful Use or EHR certification activities over the last decade, then it certainly should.  The introduction of CDA into the interoperability portfolio of vendor systems has created a whole host of new capabilities.  We added something new.

HL7 Version 3 attempted to replace HL7 Version 2, and for most of its use cases it was basically a flop.  In large part that was because replacing it meant discarding tremendous investments in existing infrastructure, infrastructure that already met some viable percentage of the capabilities end users were willing to live with, and those users WEREN'T willing to spend the funds to replace it with a more capable (yet complex and expensive) solution.

CCDA was a very effective change to the existing CCD and IHE specifications, incorporating more or less modest revisions.  It may have been more change than most wanted, but it was small enough that existing technology could be retained or enhanced without wholesale replacement.

FHIR is a whole new paradigm.  It adds capabilities we didn't have before.  It replaces things that are more expensive with things that can be implemented much more easily and cheaply.  And it changes some things that can still be adapted.  For example, an XDS Registry and Repository infrastructure can be quickly modified to support FHIR (as I showed a few years back by building an MHD (IHE Mobile Access to Health Documents) bridge to the NIST XDS Registry and Repository reference platform).
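
As a rough illustration of that kind of bridge, the MHD "find document references" transaction is just a FHIR search on DocumentReference, which the bridge translates into an XDS registry stored query behind the scenes.  A hypothetical client call (the endpoint and patient id below are made up) might look like this in Java:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class MhdFindDocuments {
        public static void main(String[] args) throws Exception {
            // Hypothetical MHD Document Responder endpoint and patient identifier.
            String base = "https://example.org/fhir";
            URI query = URI.create(base + "/DocumentReference?patient=12345&status=current");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(query)
                    .header("Accept", "application/fhir+json")
                    .GET()
                    .build();

            // The response is a FHIR Bundle of DocumentReference resources, which a
            // bridge like the one described would populate from XDS DocumentEntry metadata.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }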

The key to all of these improvements is to ensure that whatever you are adding, replacing or changing, the costs to the customer (or your own development) are acceptable, and the result can be adopted by them (or you) and the rest of the stakeholders in an appropriate time frame.  FHIR has succeeded by taking an incremental approach.

The birth of FHIR was almost seven years ago.  FHIR at 6 and 7/8ths years old (because young things care about halves and quarters and eighths) is doing quite well for itself.  In that time, it has come a very long way, and very fast.  Version 3 never had it so good.  The closest standard I can think of with anything like this adoption curve was XML, which took 2 years from initial draft to formal publication (FHIR took 3 to get to its first DSTU).  I expect widespread industry adoption of FHIR's final form (Release 4) to come well inside 2 years, whereas it took XML at least 3, and some would say more (although its industry base was much larger).

So, as you think about how we should be improving interoperability, are you Adding something new, Changing something that already exists, or Replacing something?  Answer that question, and then answer for yourself the question of how that is going to impact adoption.

Wednesday, June 13, 2018

Why AI should be called Artificial Intuition*

This post got started because of this tweet:
The referenced article really isn't about AI; rather, it's about an inexplicable algorithm.  But a lot of "AI" fits into that category, and so it is an appropriate starting point.  Intelligence isn't just about getting the right answer.  It's about knowing how we get to that answer, and being able to explain how you got there.  If you can come up with the right answer, but cannot explain why, it's not intelligent behavior.  It might be trained behavior, or instinctive or even intuitive behavior, but it's not "intelligent".

What's been done with most "AI" (and I include machine learning in this category) is to develop an algorithm that can make decisions, perhaps (most often in fact) with some level of training and usually a lot of data.  We may even know how the algorithm itself works, but I wouldn't really call it intelligence until the system that implements the algorithm can sufficiently explain how its decision was reached for any given decision instance.  And to say that it reached that decision because these vectors were set to these values (the most common form of training output) isn't a sufficient explanation.  The system HAS to be able to explain the reasoning, and for it to be useful for us, that reasoning has to be something we (humans) can understand.

Otherwise, the results are simple mathematics without explanation.  Let me tell you a story to explain why this is important:

A lifetime ago (at least as my daughter would measure it), the company I worked for at the time obtained a piece of software that was the life's work of a physician and his assistant.  It was basically a black box with a bunch of data associated with it that supported ICD-9-CM coding.  We were never able to successfully build a product from it, even though we WERE able to show that it was as accurate as human coders at the same task.  In part, I believe that was because it couldn't show coders (or their managers) HOW it came to the coding conclusions it reached, and because that information wasn't provided, it failed to be able to argue for the correctness of its conclusions (nor could it be easily trained to change its behavior).  It wasn't intelligent at all; it was just a trained robot.

Until systems can explain how they reach a conclusion AND be taught to reach better ones, I find it hard to call them intelligent.  Until then, the best we have is intuitive automata.


For what it's worth, humans operate a lot on gut feel, and I get that; I also understand that a lot of that is based on experiential learning that we aren't even aware of.  But at the very least, humans can argue the justification for their decision.  Until you can explain your reasoning to a lesser intelligence (or your manager, for that matter), you don't really understand it.  Or as Dick Feynman put it: "I couldn't reduce it to the freshman level. That means we don't really understand it."

   Keith

P.S. The difference between artificial and human intelligence is that we know how AI works but cannot explain the answers it gets, whereas we don't know how human intelligence works but humans can usually explain how they arrived at their answers.

* Other Proposed Acronyms for AI

  1. Automated Intuition
  2. Algorithms Inexplicable
  3. Add yours below ...



Monday, June 11, 2018

Why we'll never have Interoperability

I don't know how many times I've said this in the past.  Interoperability is NOT a switch, it's a dial.  There are levels and degrees.  We keep hearing that Interoperability doesn't exist in part because every time someone looks at it, where the dial actually sits falls short of provider expectations.

Few people working today remember the early command line interfaces for accessing mail (that link is for the second generation command line).  Most people today use some form of GUI-based client, many available over the web.

To address this in this post, I'm going to create a classification system for Levels of Interoperability.
Level 0 - Absent: We don't even know what data needs to be exchanged to solve the user's problem.  We may know there's a problem, but that's about as far as we can get.

Level 1 - Aspirational: We have an idea about what data is needed, and possibly even a model of how that data would be exchanged.  This is the stage where early interoperability use cases are often proposed.

Level 2 - Defined: We've defined the data exchange to the point that it can be exchanged between two systems.  A specification exists, and we can test conformance of an implementation.  This is the stage that most interoperability in EHR systems achieves after going through some form of certification testing.

Level 3 - Implementable: An instructional implementation guide exists that describes how to do it.  This is more than just a specification.  It tells people not just what should appear where, but also gives some guidance about how to do it, some best practices, some things to consider, et cetera.  This is the stage that is reached when a specification has been widely implemented in the industry and you can find material about it on sites like Stack Overflow.

Level 4 - Available: This is the stage in which most end-users see it.  Just because some feature has been implemented doesn't mean everyone has it.  We've got self-driving cars.  Is there one in your driveway?  No.  Self-driving cars are not available, even though several companies have "implemented" them.  The same is often true with interoperability features.  Many vendors have implemented 2015 Certified EHRs, but not all providers have those versions deployed yet.

Level 5 - Useful: This is the stage at which users would rather use the feature than not, and see value in it.  There's a lot of "interoperability" that solves problems that just a few people care about, and creates a lot more work for other people.  If it creates more work, it likely hasn't reached the useful stage.  Interoperability that eliminates effort is more useful.  There are some automated solutions supporting problem, allergy and medication reconciliation that are starting to reach the "useful" stage.  A good test to see whether an interoperable solution has reached this stage is to determine how much the end-user needs to know about it.  The less they need to know, the more likely it's at this stage.

Level 11 - Delightful: At this stage, interoperability becomes invisible.  It works reliably, end users don't need to know anything special about it, et cetera.  The interesting thing about this stage is that by the time a product has reached it, people will usually be thinking two or three steps beyond it, and will forget what the capability they already have does for them.

The level of interoperability is often measured differently depending on who is looking at it and through what lens.  The CFO looks at costs and cost-savings associated with interoperability.  Is it free? Does it save them money?  If not, they aren't likely to be delighted by it.  The CIO will judge it based on the amount of work it creates or eliminates for their staff as well as the direct and indirect costs it imposes or reduces.  The CMO will be more interested in understanding whether it's allowed them to reach other goals, and will judge by different criteria.  And the end-user will want their job to be easier (at least with regard to the uninteresting parts), and to have more time with patients.

By the time you reach "delightful" (often much before), you get to start all over again with refinement.  Consider the journey we've been on in healthcare with the various versions and flavors of HL7 standards.  HL7 V1 was never more than aspirational; V2 was certainly defined, though its various feature-adding sub-releases went through their own cycles.  Some features in HL7 V2 even got to the level of delightful for some classes of users (lab and ADT interfaces just work; most providers don't even know they are there).  By the time the industry reaches perfection, users and industry are already looking for the next big improvement.

Do we have electronic mail? Yes.  Is it perfect yet? No.  Will it ever be?  Not in my lifetime.  We'll never have perfect interoperability, because as soon as we do, the bar will change.


Friday, June 8, 2018

Resolved: Prelogin error with Microsoft SQLServer and JDBC Connections on Named Instances of SQL Server

Just as I get my head above water, some other problem seems to crop up.  Most recently I encountered a problem connecting to a SQL Server database that I've been using for the past 3 years without difficulty. 

We thought this might be related to a problem with the VM server, which was showing trouble of its own.  In fact, after a restart of the VM server, I was able to access a different vendor's database product on a different server that had also been causing me some grief, but I still couldn't access SQL Server.

Here were the symptoms:
  • Connections worked locally on the Server.
  • Connections worked inside the Firewall.
  • Connections that were tunneled through some form of VPN weren't working at all (with two different kinds of VPN).
I was pretty well able to diagnose the problem as being firewall related, but there are at least four firewalls between me and that server, I only have access to two of them, and unfortunately I could find no information in the firewall logs to help.  Wireshark might have been helpful, except that for some reason I couldn't see any of my network traffic on port 1433, the port I knew SQL Server used.


If you google "prelogin error" you'll see a ton of not-quite-helpful material, because for the most part nobody seems to get to the root cause of my particular problem.  I finally managed to do so.

Here's what I discovered:

My firewall was blocking port 1434, which is the port that the SQL Server Browser service uses to enable named instances to find the right SQL Server service to connect to.  But even after opening that port, things were still not working right; the connection was still failing with a "prelogin error".
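
For context, this is roughly what the named-instance connection looks like from the JDBC side.  The server, instance and database names below are hypothetical; the point is that when you specify an instance name rather than a port, the Microsoft JDBC driver has to ask the SQL Server Browser service on UDP 1434 which TCP port that instance is actually listening on.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class NamedInstanceConnect {
        public static void main(String[] args) throws Exception {
            // With instanceName and no explicit port, the driver queries the SQL Server
            // Browser service (UDP 1434) to discover the instance's TCP port.
            String byInstance =
                "jdbc:sqlserver://SERVER1;instanceName=SQL_SERVER_2;databaseName=MyDb;encrypt=false";

            // Alternative that skips the UDP 1434 lookup: name the TCP port directly.
            // This only helps if the instance listens on a fixed, known port.
            String byPort =
                "jdbc:sqlserver://SERVER1:59999;databaseName=MyDb;encrypt=false";

            try (Connection con = DriverManager.getConnection(byInstance, "user", "password")) {
                System.out.println("Connected: " + !con.isClosed());
            }
        }
    }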

One of the posts on the Interwebs pointed me to a Microsoft diagnostic tool used to verify SQL Server Communications.  The output of that tool contained something like the following:


Sending SQL Server query to UDP port 1434...

Server's response:

ServerName SERVER1
InstanceName SQL_SERVER_1
IsClustered No
tcp 1433

ServerName SERVER1
InstanceName SQL_SERVER_2
IsClustered No
tcp 59999


What this told me was that the server I wanted to access was listening on a port other than 1433.  And of course, that port was blocked (which explains why Wireshark wasn't helping me, because I was looking just at port 1433 traffic).  Setting up a firewall rule to allow access to any port used by the SQL Server service resolved the issue (since I couldn't be sure the dynamically assigned port would be used again the next time the server was restarted).

I think part of the reason that nobody has been able to clearly state a solution is that if I'd been trying to connect to SQL_SERVER_1, the rule I already had for port 1433 would have been JUST fine, and I wouldn't have needed another one.  So the published solutions worked for maybe half the users, but not the others.  And some published solutions suggested multiple different ways to configure firewall rules, some of which would have worked some of the time, and others (like mine) that would work all of the time.

I realize this has nothing to do with standards, but at least half of those of you who read this blog have had your own run-ins with SQL Server.

Now, you might scratch your head and wonder how this worked before, and what happened to the firewall rules that enabled it to work.  For that I have a simple answer.  We had recently rebuilt the VM so that it had more resources for some larger tests, so the system was redeployed under the same name to a new operating system environment.  And my challenge happened to overlap a) the redeployment, and b) the VM having been rebooted.

Root cause analysis is a royal PITA, but having invested quite a few hours in it for this problem, I'll never have to spend more than a few minutes on it again, and now, hopefully, you won't either.

   Keith