Friday, March 25, 2016

I really think we should ... Oh look, squirrel

So the question came up, in some meeting somewhere during the last working group meeting, about whether to pursue use of the FHIR StructureDefinition as a mechanism to capture CDA Templates.  I wasn't at that meeting or I would have shown my distaste for the idea then.  Grahame has been playing with StructureDefinition and has demonstrated, quite successfully I think, that he can use it to represent models for everything from HL7 Version 3 and CDA (being a Version 3 derived spec, that should be no surprise) to, I would guess, V2, X12, and even arbitrary XML schemas and other models, provided they follow a few fairly simple rules.

Thus, I introduce to you the squirrel in question.  It's a cute squirrel, and even a rather powerful one.

Why do we need this?  We have a perfectly good standard for representing CDA templates, if we would just use it in the tools HL7 presently uses to publish so much of its own documentation.

But it isn't FHIR related, and so it loses to the flavor of the month (year? decade?), I think.  I suspect chasing this squirrel will only distract from further work on something else that might benefit the HL7 community at large ... for example, liberating CDA Templates from Trifolia in a standard format.

The only benefit I see this distraction may have is to keep people tied up in a harmless activity, harmless at least to anyone but the squirrel.  But I think StructureDefinition will survive that race.

   Keith

Wednesday, March 23, 2016

Pilot Error

Tales of failure often involve a sequence of multiple errors, each of which by itself is not fatal (or in this case, vaguely comedic), but which together produce a totality that is hard to fathom.  I was reminded of this today when I went to register for the last class I need to complete my degree program, only to find out that it is on-campus only.

  1. I've been checking for the last three terms that I knew what I needed, AND
  2. Knowing also that scheduling can sometimes change, I did my best to get as much out of the way as I could up front, AND
  3. seeing that I had one dependency I couldn't meet last term, I checked with the instructor to be sure that the class was to be offered this term, AND 
  4. received confirmation ...

BUT,

  1. I failed to check that it was going to be available online, AND
  2. The syllabus I read was out of date, AND 
  3. I failed to notice that the syllabus was out of date, AND 
  4. I missed the wee bit in the course catalog noting it was offered online only in odd years.

I have two hopes left: either I'm wrong, or I can do something else.  And no matter which happens, no babies die if I don't graduate the same year as my eldest, so I'll live either way, and still graduate, just a little bit later than I wanted to.

Gah, pilot error.  Or in other words: the person most responsible for making sure things lined up the way they were supposed to failed to accomplish that, in part by failing to note other errors or discrepancies in the available data.  In this case, I'm the pilot, and guess what, I'm human too.

In healthcare, we often rely on the physician to be the pilot ... but she or he is only human as well, and on a bad day is likely only to perform as well as the systems and people that surround him or her. Design for humanity ... design for error, and the world will be a better place.

   -- Keith


Saturday, March 19, 2016

Thinking?

The title of this post is a common response in my household to the question "What were you thinking?", offered when, after due consideration, the recipient realizes: Oh yeah, that was probably not so smart.

It's the question I had on receipt of a "secure email" the other day, coming from a healthcare institution.  I won't name the institution, because the solution is a commercial one crafted by an Internet security provider (Proofpoint) that apparently thinks it is a good idea.  Let me explain to you how it works:

When someone sends an e-mail that this software thinks needs to be protected, the software takes the body of the e-mail, encrypts it in some form, base-64 encodes that encrypted content into an XML payload, then base-64 encodes that into an HTML form.  It then sends that HTML page in an e-mail as an attachment to the original recipient.  The body of the e-mail is an official looking page containing the sender's logo, a lock icon, and text explaining that you have received a secure e-mail and that, in order to access it, you either need to open the attachment or click on a link.
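For the curious, here is a minimal sketch of that layered packaging.  Everything below is illustrative: the real product's cipher, XML schema, HTML template, and URLs are unknown to me, so the "encryption" step is a placeholder and the form action is made up.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SecureMailSketch {
    public static void main(String[] args) {
        String body = "Your lab results are ready.";

        // Step 1: encrypt the message body (placeholder; the real cipher is unknown).
        byte[] encrypted = body.getBytes(StandardCharsets.UTF_8);

        // Step 2: base-64 encode the ciphertext into an XML payload.
        String xml = "<securemessage>"
            + Base64.getEncoder().encodeToString(encrypted)
            + "</securemessage>";

        // Step 3: base-64 encode that XML into an HTML form, which becomes
        // the attachment the recipient is told to open.
        String html = "<html><body>"
            + "<form method='post' action='https://secure.example.com/reader'>"
            + "<input type='hidden' name='payload' value='"
            + Base64.getEncoder().encodeToString(xml.getBytes(StandardCharsets.UTF_8))
            + "'/><input type='submit' value='Read your secure message'/>"
            + "</form></body></html>";

        System.out.println(html);
    }
}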

I'd love to do a video that shows how this system works, and compare it to a phishing attack.

Consider that you have just received an e-mail that appears to be from an institution that you have a relationship with.  It looks official, and bears the correct logo [highlight the institution logo on both the phishing and secure e-mail].  When you click on the "more information" link, it clearly goes to the institution's web site, which you can quite readily verify.  The e-mail asks you to open the attached file to obtain your secure message.  Feeling secured by all that you have done to ensure your security, you now open the attachment.  Once again, it looks very good and official, bears the right logos, and even bears a copyright from a trusted security provider.  It asks you to click a button to retrieve your message.

You do so.  At this stage you are taken to an HTTPS page on the web with a long URL that looks right at a quick glance, and that asks you, since this is your first time, to create a user name and password to access your message.  So far, both systems appear to work in nearly the same way.  So, you create your account.

One of these systems will then decrypt the packet sent to you and the other will send your username and password to pirate bay, where someone will then drain all your bank accounts.

The question is, should you open this attachment?  The answer in both cases, for most people, is: Hell no.

  1. You don't have the training to distinguish the attached file (which may contain a zero day exploit) from any other attached file which could infect your computer.
  2. You shouldn't expose your password management procedures to others whose security you cannot verify.  Many of you have pretty poor ones to begin with (like using the same username and password for everything).
  3. All of the trust signals in the scenario above (the logos, the official-looking pages, the plausible-looking links) are things that ANY competent software engineer or hacker can produce.  In fact, if it can be done, a hacker doesn't need to do it him or herself; they can very likely simply steal it.

Why does an internet security provider believe that encouraging people to engage in behavior that security experts advise against, and other security products protect against, would be a good idea? Well, that goes back to another common response in my family to the "What were you thinking?" question:

It seemed like a good idea at the time?

-- Keith

P.S.  When I first saw this message, I actually thought it had originated from my corporate security folks, who craft similar messages in order to encourage people to take their phishing refresher class, an honor I have thus far managed to safely avoid (it's the reward for clicking on an attachment or link in their generated e-mails).  Yes, I get phished internally as training on what to avoid.

Friday, March 18, 2016

It's both what and who...

You've probably heard the line: It's not what you know but who you know that counts.

In standards more than anything else, that's always been true, and yet not true at the same time.  I've found that if you can figure out who to ask, and WHAT to ask, you've conquered 90% of the problem.

What you are doing here is working the problem backwards.  You have a question, and you don't know the answer.  So the next question is WHO has the answer.  But when you don't know who actually has the answer, what do you do?  Some get stymied at this point, and thrash around until they find out who.  I've been on the end of e-mail chains six and eight messages long that are evidence of that sort of thrashing.

So, the next question you have to figure out is WHAT form the answer will likely take. From there, you can often figure out the WHO.  This is a fairly obvious step, but it's surprising how many never take it.  So, if it is an HL7 Vocabulary question, the place to look would be active members of the HL7 Vocabulary workgroup.  If you can narrow it down to SNOMED CT or LOINC, that cuts down the number of likely people to two or three.  And so on.

But even then, sometimes you still won't know WHO.  You might have a good starting point for a question once removed from your original question.  Try this:

Dear XXX, I am trying to figure out _____ because _____.  Is this something you can help me with? If not, can you tell me who might be able to answer my question?

Worded this way, your question will often get you at least one more lead.  When you follow up with your new lead, acknowledge the lead giver:

Dear YYY, I understand from XXX that you are the expert in ____. I'm ...

Suck up a little.  We love it.

If your problem requires more than a one or two line answer, you may need to be prepared to pay for it, but often you will be surprised by how many two- to four-page answers you will get.  Some of the experts in my particular field like to show off their expertise (I don't personally know anyone like that ... #sarcasm).

Be persistent, and don't be afraid to look ignorant.  We understand in this field that ignorance is a correctable problem, and are often more than happy to apply the correction.  In fact, the cure to ignorance is a healthy dose of curiosity.

Thrice today knowing the what told me the who, and knowing the who and asking the right question got me answers quickly. If you keep digging for what and who, eventually your contact list will look like mine.

You won't need to know the answers to all standards questions off the top of your head, but you will know how to get them quickly, and that is JUST as valuable.

   Keith


Teaching an Old Dog New Tricks -- Part 1

One of the challenges of being a solo developer is that you get a favorite set of tools and you stick with it for so long that you often don't keep your skills up to date. But, you can teach an old fart new tricks.

Unfortunately, sometimes when I have to learn those new tricks is when someone has added that tool to my bag without me looking at it first.  I encountered that with Maven today.  I've been using OpenShift to deploy my Pubmed for HealthIT Standards project, and it uses Maven to build my Java sources when I push to my git repository.

But I'm of the old school where you found your libraries, downloaded them, dropped them into WEB-INF/lib and went to town.  That doesn't work anymore, and so I had to learn to use Maven.

Unfortunately, the Maven documentation isn't designed for old farts like me, and so it took me a little bit to figure out what should probably be obvious.  If you are a young blood, this post will probably seem a bit stupid, but I'm betting there are enough others like me out there that it will be just the thing.

Where before you found your libraries, downloaded them onto your computer, moved them into a build structure, and then copied them into WEB-INF/lib when you deployed or built, you skip all that when using a Maven build.

Instead, go find how your library is identified in Maven's repository.  Most libraries built for use with Maven will tell you, with some XML that looks like this:

<dependency>
  <groupId>org.apache.pdfbox</groupId>
  <artifactId>pdfbox</artifactId>
  <version>1.8.11</version>
</dependency>

What you do is add that XML to the pom.xml file for your build, and Maven will automatically download the jar file from the Maven central repository.  If, for some reason, you need to get a library from some other repository, you also need to add a section like this to your pom.xml file:

<repository>
  <id>eap</id>
  <url>http://maven.repository.redhat.com/techpreview/all</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>
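
For orientation, here is roughly where those two fragments land in a pom.xml.  The project coordinates below are placeholders; the point is simply that dependency fragments go inside <dependencies>, and repository fragments inside <repositories>:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder coordinates for your own project -->
  <groupId>com.example</groupId>
  <artifactId>my-webapp</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <dependencies>
    <!-- Dependency fragments, like the pdfbox one above, go here -->
    <dependency>
      <groupId>org.apache.pdfbox</groupId>
      <artifactId>pdfbox</artifactId>
      <version>1.8.11</version>
    </dependency>
  </dependencies>

  <repositories>
    <!-- Additional repository fragments, like the one above, go here -->
  </repositories>
</project>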

There, now you are done, and off you go.  I love the simplicity of it; I just wish someone had explained it for a guy like me who's been doing things the hard way for three or more decades.

--  Keith

P.S.  I have a number of these to write, because I've been learning a few new tricks lately.  Three upcoming posts are on MongoDB and Bootstrap (which I'm using now), and AngularJS, which I hope to be using soon.  The first two were mostly straightforward, but I'm finding the last to be a bit, well, hopelessly confusing.  I really don't care about dependency injection (actually I do, but not in chapter 1).  What I want to learn is how to quickly build something, and so far, the three books I've looked at through my Safari subscription have been nearly useless.

P.P.S. The reason I had to learn Maven is that my build broke when I added a Jar to WEB-INF/lib that was needed for the Maven compilation.  For some reason, Eclipse had set the class path right for the Java build (probably because I made a manual adjustment to it), but Maven wasn't configured correctly.  Fortunately, it only took me an hour or so to figure it all out and get my build going again.

Wednesday, March 16, 2016

Investing in Automation

I've been burning the midnight oil these past few weeks to automate a workflow that, had I executed it manually, I would probably be done with by now.  The compensation for automating this workflow, at least in terms of my time, works out to have not been worth my investment -- at least if I'm just counting my time.  But...

  1. Eventually, this will allow me to offload a piece of work that right now, as a manual task, only I (or someone similarly skilled) can do, enabling someone less skilled to do the same thing. Given that there are about 500 people like me with that specific set of skills in healthcare, that's really useful.  I don't scale up (something we've all learned by now).
  2. I now much better understand the task being performed, and so can find ways to improve tasks leading up to it in terms of consistency, so as to make the automation much less complex.  That process knowledge is really useful, and I can apply it not to just this process, but to many others like it.
  3. I now have a written and repeatable record of what actually has to be done to perform the task, in a way that I can actually exchange with someone else, and they might benefit from that too. And many of the things I've learned how to do can also be applied to other problems.
  4. The intellectual challenge in automating this workflow is SO much more interesting and intellectually stimulating than the hours of copy and paste work that I would have been otherwise doing.  So my boredom level is lower, but job satisfaction is higher, and as a result, my overall quality and outlook on life is much improved.
  5. My rather accurate, and repeatable description of the process (in software) is something that others can also improve upon.  I'm certain (because I don't write code every day -- well, mostly not when, oh, OK, so I do write code every day, but not for production use -- except ...), well anyway, I'm still certain that someone could improve what I've written.  Including me a few years later ...

So add all this up, and what I get is a useful piece of code, several hours spent doing something more interesting than stupid and mindless repetition, a better understanding of the process, and most of all, job (or in this case, school) satisfaction.  And eventually, this work will amortize itself, just not for me.

This, I think, is what makes developing software so interesting for some people. There is so much intangible value here, over and above what my time investment does for me.

There's a certain satisfaction I also get from teaching, so what is more satisfying than teaching the stupidest thing in the world (a computer) how to do something almost human, and having it do so successfully?

   -- Keith

P.S. In case you were wondering WHAT I was writing, it was the metadata indexing algorithm for my Pubmed for Standards project.  Embedded in that algorithm is a great deal of knowledge and process I didn't even know existed when I started off on this effort.


ONC's statement of Purpose doesn't justify Certification Oversight Expansion

I just started reading the Expansion Rule and am finding myself both confused and alarmed.  At first blush, it all sounds good, but the ramifications are a bit, well, staggering.

First of all, I look at the justification for the expansion, which looks OK in part.  The bits about having oversight over the testing labs look good, as does the fact that ONC will have a more direct role in addressing testing issues, and even the fact that they can do their own monitoring seems OK.  But then I get to some of the concerning bits.

First, they cite patient safety as a possible trigger.  Now, ONC-ACBs are already certifying systems according to safety-enhanced design requirements, and I can see where, if something came up with regard to patient safety, it becomes something that needs to be looked at.  Frankly, as a patient, I'm very happy that ONC may decide to do its own looking.

But then there's this nebulous area where Certified components interacting with non-certified components come into play.  And that's where the challenge starts to show up.  This starts to give ONC oversight of EHR system components that aren't involved with the certification program, under some very loose reading of the statute as far as I can tell.

While the "purpose" for which ONC was created includes all the things they cite in the regulation, HITECH did NOT give ONC authority over all those topics described in the "Purpose".  Instead, it gave them a more narrow set of requirements focused (mostly) on creating a certification program, selecting standards, and coordinating federal efforts on HIT strategy and policy.

I don't see how using the purpose statements as a justification for expansion into EHR surveillance beyond the certification program holds up, and given that loose a reading, ONC could also justify taking on a large number of other roles which overlap with other agencies' responsibilities.

Under similarly loose readings, one could suggest that:

  • Improving quality could authorize ONC to take on AHRQ responsibilities.
  • Reducing costs could authorize ONC to take on CMS responsibilities.
  • And reducing errors, the justification presently being used, could authorize ONC to take on some of FDA's responsibilities.

This is a bad idea on several fronts.

If there's a patient safety issue with a certified product, certainly ONC should address it.  Addressing product issues like that is already covered under the QMS (quality management system) requirements of meaningful use, and a failure of a vendor to adequately provide or execute on QMS for a certified product would certainly apply in those cases.

But when we start talking about interactions with non-certified product, I have at least four concerns:

  1. Overlap with FDA requirements for those components that are classified under medical device regulations.
  2. Confusion about responsibilities when certified components and non-certified components are described as a single unit.
  3. Confusion about vendor responsibilities when the challenge may be introduced by product interactions between multiple vendors, some of which may be certified.
  4. Issues where the root cause could be "modification" in the field by non-vendor personnel.  

While ONC is rightly concerned that there need to be ways for some federal agency to act when a patient safety issue is discovered and is not being appropriately addressed, neither its ACBs nor, frankly, ONC itself has the necessary experience to do so, and as I read the legislation, neither do they have the necessary authorization.

Attacking patient safety as an EHR Certification problem is not the right way to handle this.  Safety is built in, not bolted on.  Don't try to bolt it on to the certification process as an afterthought either.  Put some real thought into how to handle the safety issue, and don't confuse the two.  Talk to your colleagues over at FDA, they have a few decades of experience with addressing those sorts of challenges.

I've still got a lot more reading to do; this is just my first reaction to the first half, which is all I've read thus far.

   Keith

P.S. As a reminder, these are my own personal views, not those of my employer or anyone else.

Tuesday, March 15, 2016

Status Update on my Pubmed for HealthIT Standards Project

So my capstone project has taken several twists and turns since I first conceived of it, but is still a going concern.  I am very nearly to the point of having many IHE profiles loaded into my index, and will very quickly thereafter have a large number of HL7 standards and implementation guides also loaded.

What have I learned thus far?

  • The lack of a standardized publication format for standards makes it challenging to build an index of them but not impossible.
  • You need deep understanding of the content to parse anything useful from it.  I'm not the only person who could do this, but you'd certainly need someone with both standards expertise, as well as a good deal of experience dealing with structured documentation.  Fortunately, I've got a good bit of that in my background before I ever got deeply involved in standards.
  • PDF is no longer the bane of standards' existence, but there is still enough pain that you need to invest pretty significantly to get anything useful out.  Fortunately, I managed to find Apache's PDFBox in time to rescue me from days of copy and paste (instead I traded days of coding, but I can guarantee I won't be sorry).

Agile is clearly the way to approach this, even to the point of building out the index content.  I'm starting with the basic stuff, and will support full-text searching of titles and abstracts, but will probably take a bit longer to get to some more complex coded detail (such as coding system or actor types).  For that, I'm going to see what looks to be useful as I go.

I expect in a couple of weeks to have the first build available for public testing, so that people can readily see what is available.  I had hoped to be finished with that at the end of this term (coming very soon), but automating extraction of metadata from the PDFs took both longer than I thought, and shorter than I thought.

In the original case, I never thought it would be possible for me to do that, but when I looked at copying and pasting data on a dozen or more profiles from a half dozen or more documents, I was daunted by the manual task.  Few would have the patience or experience to do that drudgery. When I discovered PDFBox, and realized that I could automate much of it, I HAD to do it.  For one, the quality of the indexing would jump by at least an order of magnitude, and the speed by two orders (once I got everything coded up).  There's enough use of PDF in the standards world that this would be a godsend.
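To give a flavor of what that automation looks like, here is a minimal PDFBox (1.8.x) sketch that pulls a document's built-in metadata and a page range of plain text to mine for titles, actors, and the like.  The file name is hypothetical, and my real indexing code does considerably more:

import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.util.PDFTextStripper;

public class ProfileMetadataSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical input; any IHE profile supplement PDF would do.
        PDDocument doc = PDDocument.load(new File("IHE_Profile_Supplement.pdf"));
        try {
            // Document-level metadata, when the publisher bothered to set it.
            System.out.println("Title: " + doc.getDocumentInformation().getTitle());

            // Extract plain text from the first few pages to mine for
            // the profile name, abstract, and actor/transaction lists.
            PDFTextStripper stripper = new PDFTextStripper();
            stripper.setStartPage(1);
            stripper.setEndPage(3);
            System.out.println(stripper.getText(doc));
        } finally {
            doc.close();
        }
    }
}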

The ideal solution (and the one I had originally envisioned) would be to get SDOs to agree to a standard format, but that's not really something I can fit into a six-month project.  I might get one to move in that direction, but four or more?  Not at the same time.  I have to give them a reason to do so, and thus, I changed my plans a bit.  Let's give people a taste of what such an index can do, and see if they can then be convinced.

So, stay tuned, I should be tossing it out there on the web real soon.

Monday, March 14, 2016

Workflow Management State vs. Business Logic State

Imagine you are engaged in a complex business process, and your boss comes along and asks how you are doing on the task type for the concerned party (a business identifier, in many cases).

You respond: I've finished subtask A, am working on subtask B, and subtasks C and D are still waiting on input from the subtask performer.  I can't do subtasks E and F until those are done.

What you've just done is break down the business logic of your workflow into the components necessary to manage the workflow.  It's probably the kind of input your boss wants because his job is to make sure the workflow is completed efficiently, and he wants to know where he should take action.

Workflow management deals with the status language (finished, working on, waiting, can't do).  Business logic deals with the specifics (the subtasks, parties, and performers), which link tasks to execution.  When we talk of the state of a workflow, we often confuse these two sets of "state".  The overall task is in progress (workflow management state), and the task itself has completed one subtask, has another three in progress, with a final two waiting to be started (until their inputs are ready).  The business logic state of this workflow is often described in shorthand, assuming a stepwise set of refinements through execution of subtasks.

It is awareness of this distinction that enables workflow to be automated in a consistent way, and yet still linked to business logic.  At times I find it difficult to explain to folks in healthcare that even though your workflows are different (in fact, everyone's are), there is a common set of task states that can be applied to any task within a workflow.
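As a sketch, such a common state vocabulary might look like the following, loosely patterned after WS-HumanTask; the names are illustrative, not any normative code system:

enum TaskStatus {
    CREATED,      // the task exists, but no one has picked it up yet
    READY,        // inputs are available; waiting to be claimed
    IN_PROGRESS,  // an owner is actively working on it
    ON_HOLD,      // blocked, e.g., waiting on input from another performer
    COMPLETED,    // finished successfully, outputs produced
    FAILED        // terminated without producing its outputs
}

The business logic state (which subtask, for which party, with which inputs) rides alongside, in the task's type, subject, inputs, and outputs, not in this vocabulary.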

Once you wrap your head around that, the Task resource makes a lot more sense.

Friday, March 11, 2016

Back to Work...flow


It's taken me a while to get back to the work I've been doing in Workflow for FHIR.  I finally got the Task resource rebuilt as we had discussed back in late November so that people can see what it looks like as a resource, and so we can start playing with it hopefully at the next FHIR Connectathon.

The resource is based in part on IHE's Cross-Enterprise Document Workflow, and on WS-HumanTask, and from there back to a long list of prior workflow standards.  The Task resource represents a task to be performed.  It has a name, possibly a code to go along with that, and hopefully a description.  The subject of the task describes in some way, or provides the key input indicating, what should be done.  This could be a procedure, an order, or some sort of request.  The task basically says "do this thing", where the combination of the subject and the task type represents the thing to be done (e.g., refill a medication).  In that example, the type supplies the "verb", and the resource is the noun that is acted upon.

The task has a creator and may have an owner.  Creator is important because creation of a task conveys some sort of privilege, as does ownership.  I don't actually say what privileges are granted, because that might vary from system to system, or even within systems across different workflows.  Task has a state machine that is loosely modeled after the WS-HumanTask state machine.
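To make that concrete, here is a hand-rolled XML sketch of a Task instance along the lines just described.  The element names are inferred from the prose and the draft build of the day, so don't treat them as normative, and the task type's code system is made up:

<Task xmlns="http://hl7.org/fhir">
  <!-- The "verb": what kind of thing is to be done -->
  <type>
    <coding>
      <system value="http://example.org/task-types"/>
      <code value="refill"/>
    </coding>
  </type>
  <description value="Refill the referenced medication order"/>
  <status value="in-progress"/>
  <!-- The "noun": the resource the verb acts upon -->
  <subject>
    <reference value="MedicationOrder/example"/>
  </subject>
  <!-- Creation and ownership each convey some (system-defined) privilege -->
  <creator>
    <reference value="Practitioner/example"/>
  </creator>
  <owner>
    <reference value="Organization/example"/>
  </owner>
</Task>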

There are a number of operations that I've defined on a Task resource which are shorthands for different kinds of things that are often done to tasks in a workflow.  These operations can be readily mapped to WS-HumanTask operations in many cases. I've simplified and/or combined some because I could readily merge semantics.  For example, setting an input or output to nothing effectively removes that input or output, so there is no need for a separate delete operation.

Some of these operations assign or delegate ownership of the task.  Others set inputs or outputs.  Yet others progress the task through its state machine.  These operations need not be supported by all implementations, but their presence and definition allows for some very controlled access points to be established for task management.  This allows access control mechanisms to be put into place that ensure that only authorized users are manipulating workflows in an appropriate fashion.  A system implementer might allow unfettered read access of its workflow, but then only allow certain kinds of systems (based on their role [e.g., as owner or creator]) to access different operations, and restrict which operations can be performed based on the state of the task.  For example, a completed task might not allow any access to the start operation.  Only a system task might be allowed unfettered write access to the task resource.

This enables robust task management at one end of the spectrum, and very simple workflow management at the other end.

While IHE distinguishes between Workflow and Task, we recognized in FHIR that this view of workflows is really in the eye of the beholder.  One person's (or organization's) task is another organization's workflow.  The provider says "do this lab test", and thinks of that as a task, but the lab has a whole workflow around the test, involving receiving the specimen, staging it, performing the test using the automated equipment, validating the results, reporting on them, and properly disposing of, or preserving, the specimen as necessary.  So, there's only a Task resource, and it can reference subtasks to address a complex workflow like the one just mentioned.

I'm glad to have gotten this much done today.  My first full build of FHIR took me an unprecedented 103 minutes, although later full builds were down around 15-20 minutes or so.  I've got quite a bit more to do to add documentation to this resource before I'll consider it ready for some connectathon testing, but this is a good (re)start.

   -- Keith

Wednesday, March 9, 2016

An impatient patient

There are days when the medical profession stretches my patience to the breaking point.

I called a provider today to ask them to forward my daughter's medical records to their new practice. The office manager reported that I would have to come into the office to fill out a records request form.  I told her that under HIPAA, I cannot be required to come into the office to complete such a request. I'll note (though I didn't go into this detail) that furthermore, since this request is for treatment, it doesn't even need a request from me; it could come from my daughter's new provider.  Why do I need to do this?  Because my current provider doesn't have access to the immunization registry that my previous provider reported her immunizations to.  Why?  I have NO clue!  And for the privilege of populating this provider's knowledge tables, I will pay for a level 3 new patient visit, which is necessary in order to obtain an accurate physical examination from this new provider, but is somehow not covered by my insurance policy (although the physical exam is).

On the same day, I listened to multiple medical professionals go on and on (and on and on) about the value, or lack thereof, of "No Known Allergies", and why, if it is only temporally valid, it need ever be recorded.  Do these people even practice?  In a doctor's visit on Monday, my wife talked to her provider on three separate occasions regarding her treatment, medications, et cetera.  And in between each of these discussions, the provider had seen another patient or looked at another chart.  Are providers so good that their memory of who among the 30 or more patients they see a day has which allergy is correctly aligned with the patient in front of them?  Some days I cannot remember which call I'm even on, and my schedule looks nothing like a doctor's.  They are no more superhuman than I am.

And while I completely respect my doctor's need for safety, would it really be so bad if, when stopping to ask me the same question for the 14th time in the last two years, he bothered to check what they already should know, and confirmed it and any possible changes, rather than making me REPEAT and possibly forget something I said before?  I know this can be done well, because I've seen it done a number of times, by at least three different providers.  But I only see this behavior about 10% of the time.

And finally, if I hear one more complaint about how an EHR interferes with patient face time, I think I might just cry.  I've seen this done well too, with two different systems, by the same provider (my former PCP).  This is a skill, and you HAVE to practice it, consciously, until it becomes habit, or else you will develop habits that will make your patients dislike your behavior. However, don't blame your behavior on tools used poorly. Learn to use the tools well.  It can be done, but you have to THINK about it, and do so critically and consciously.

   Keith


Saturday, March 5, 2016

Is that a concept domain or a value set?

One of the challenges currently being investigated for Certification under the ONC 2015 edition of certification requirements has to do with APIs and the Common Clinical Data Set.  Let's look at a simple example.

According to the rule, sex must be coded:
(n) Sex—(1) Standard. Birth sex must be coded in accordance with HL7 Version 3 Standard, Value Sets for AdministrativeGender and NullFlavor (incorporated by reference in § 170.299), attributed as follows:
(i) Male. M
(ii) Female. F
(iii) Unknown. nullFlavor UNK

However, in HL7 FHIR, sex is coded using Administrative Gender, defined as:
  Code      Display   Definition   v2 Map (0001)   v3 Map (AdministrativeGender)
  male      Male      Male         =M              =M
  female    Female    Female       =F              =F
  other     Other     Other        <A, <O          <UN
  unknown   Unknown   Unknown      =U              =UNK

As you can see here, there is a direct, one-to-one and onto mapping between the value set specified for Meaningful Use and the FHIR required value set.  Conceptually, they support the same concept domain.
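Expressed as code, that consensus map is trivial.  Here's a sketch in Java; the class and method are mine for illustration, not any published API:

public class GenderMap {
    /** Maps a FHIR administrative gender code to the code (or nullFlavor)
     *  named in the 2015 Edition rule quoted above. */
    static String fhirToMeaningfulUse(String fhirCode) {
        switch (fhirCode) {
            case "male":    return "M";
            case "female":  return "F";
            case "unknown": return "UNK"; // nullFlavor UNK in the rule
            default:
                // "other" is the one FHIR code with no Meaningful Use target.
                throw new IllegalArgumentException("No MU mapping for: " + fhirCode);
        }
    }
}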

In this case, I don't think ONC meant to preclude the use of FHIR simply because the actual values of the codes are a little different.  As the FHIR mapping table shows, the FHIR codes mean exactly the same thing.  I see two ways forward to address this problem:

  1. Subregulatory guidance to the ONC testing bodies and certifiers that this sort of mapping, where a direct one-to-one and onto map is provided and published in a consensus standard, should be acceptable for coding fields in the Common Clinical Data Set.
  2. Changing the FHIR Coding system to be the HL7 Version 3 coding system for Administrative Gender.

I prefer solution 1, as the FHIR coding system purposefully clarified and reduced the challenges that V3 created with Undifferentiated (UN).  Furthermore, I believe that solution to be aligned with the goals of ONC in specifying the use of an API.  The two systems support exactly the same concept domain, and the consensus-standard condition in solution 1 ensures that this exception only works where the mapping is created by an SDO in a consensus manner.  This prevents others from creating their own maps from SNOMED CT, LOINC, or other specified standards that would duplicate the use of those codes.  In other words, it minimizes the impact and ensures that we can continue to use FHIR, but still be in accord1 with the standard.

-- Keith

P.S. I'll let you know how the investigation turns out.

1 Accord - To be in agreement or harmony, agree -- which these value sets do, given that they represent the same concept domain.

Wednesday, March 2, 2016

Built in, not bolted on

Walking through HIMSS you can find a lot of products that "support interoperability."  Many are little more than some variation on a data transformation engine, perhaps combined with some form of database storage.  As a technology user, these bolt-on solutions might look like the perfect way to fix the lack of interoperability in your Health IT infrastructure.  With one purchase, you too can solve all your interop issues.

We all know what that is.

So what is one to do? Here are some of my tests for interoperability solutions: Does your vendor know more than just the letters H-L-7, I-H-E, or F-H-I-R? Ask them if they've ever sent someone to a Connectathon or an IHE or HL7 meeting.  See if they even know and can list some HL7 standards or profiles. Do they know that a CCD or CCDA is based on CDA, and that CDA is based on HL7 Version 3 and X-M-L?  Can they unparse these acronyms, or do they have someone in their booth who can?

If not, be careful, they might just be full of it, and you might be stepping in it.

If you want to see interoperability that works, come on down to the Interoperability Showcase, where you can see products from different and competing vendors working together.  You won't see that anywhere else on the show floor.