Monday, December 5, 2016

Why roll your own?

Why should a developer use their own code when there's a ton of high quality open source already out there?

I hear a lot of different reasons:

  1. It's too hard to learn someone else's stuff.
  2. How do I know it is good?
  3. How will we maintain it?  If there are bugs, I have to be able to fix them.
  4. It doesn't fit our design.
All of these are good reasons IF, in fact, they are good reasons.  Too few do the work to figure this out.
  1. It is easier to write and understand your own code base than it is to understand what someone else did.  But most people don't even try.  Look, folks, this is why they pay you the big bucks ... to understand complex stuff.
  2. You have to look at it.  And I mean really look.  There are a lot of ways to find good stuff.  Is anyone else using it? That's a good sign.  Is it frequently updated? Another good one.  Does it have a vibrant community around it? Another good one.  If the last update was two years ago, you may be SOOL (an acronym that basically means out of luck).
  3. That's one of the cool things about finding good open source: a vibrant open source community will both accept and fix bugs.  And if you are serious (and I mean truly serious) about #1, and choose that source base, you'll also become a contributor back to it.  So not only can you adopt the fixes of others, but you can also make your own fixes.
  4. This can often be a problem.  How flexible is your stack?  How many more dependencies can you take?  If you have a good stack, you are much more likely to find open source that fits onto it without additional dependencies.  You may be challenged around upgrades of stack components, though, and that's where contributing back becomes important.
What's the value here for adopting open source?
  • Time To Market -- Adopting somebody else's code can let you focus on other things that are necessary to bring your product to market.
  • Maintenance -- It isn't just on new development that you could be saving; it might also be on maintenance.  A good open source project will also maintain the solution for you.  And that has tremendous value.  Software costs are often quoted as 20% development, 80% maintenance.
  • Reputation -- Using good open source can enhance the reputation of your own products (if the open source base has a good reputation).  Contributing back can also be a significant reputation enhancer for your organization.
How does this fit in with standards?  A lot of open source projects integrate using standards. Some even BECOME standards (e.g., Schematron).  That's one of the values of standards.

Tomorrow, I'll talk about some cool new open source projects for CDA.

   -- Keith




Wednesday, November 30, 2016

Implementing Partial CDA Validation

In Partial Rejection and Levels of Validity in CDA (or anything else for that matter) I discussed levels of validation of CDA content.  Now I have to make that real.  There are two different ways to go about it.  As you might recall, here are a few of the levels in the partial validation hierarchy:

Level 0: Totally bogus content.  Is this even XML?
Level 1: The CDA Header is valid.
Level 2a: Level 1 + the narrative content is valid according to the CDA Schema
Level 2b: Level 2a + the LOINC codes for documents and sections are recognized as valid.

The first level is just doing an XML Parse without validation.  This will ensure content is well-formed XML.  If you fail this test, no need to go further.

The next level validates everything up through nonXMLBody or structuredBody.  This is easy.  Craft a new CDA Schema by editing POCD_MT000040.xsd as follows (the deletion and insertion are marked with XML comments below):

  <xs:complexType name="POCD_MT000040.ClinicalDocument">
    <xs:sequence>
      <xs:element name="realmCode" type="CS" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="typeId" type="POCD_MT000040.InfrastructureRoot.typeId"/>
      <xs:element name="templateId" type="II" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="id" type="II"/>
      <xs:element name="code" type="CE"/>
      <xs:element name="title" type="ST" minOccurs="0"/>
      <xs:element name="effectiveTime" type="TS"/>
      <xs:element name="confidentialityCode" type="CE"/>
      <xs:element name="languageCode" type="CS" minOccurs="0"/>
      <xs:element name="setId" type="II" minOccurs="0"/>
      <xs:element name="versionNumber" type="INT" minOccurs="0"/>
      <xs:element name="copyTime" type="TS" minOccurs="0"/>
      <xs:element name="recordTarget" type="POCD_MT000040.RecordTarget" maxOccurs="unbounded"/>
      <xs:element name="author" type="POCD_MT000040.Author" maxOccurs="unbounded"/>
      <xs:element name="dataEnterer" type="POCD_MT000040.DataEnterer" minOccurs="0"/>
      <xs:element name="informant" type="POCD_MT000040.Informant12" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="custodian" type="POCD_MT000040.Custodian"/>
      <xs:element name="informationRecipient" type="POCD_MT000040.InformationRecipient" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="legalAuthenticator" type="POCD_MT000040.LegalAuthenticator" minOccurs="0"/>
      <xs:element name="authenticator" type="POCD_MT000040.Authenticator" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="participant" type="POCD_MT000040.Participant1" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="inFulfillmentOf" type="POCD_MT000040.InFulfillmentOf" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="documentationOf" type="POCD_MT000040.DocumentationOf" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="relatedDocument" type="POCD_MT000040.RelatedDocument" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="authorization" type="POCD_MT000040.Authorization" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="componentOf" type="POCD_MT000040.Component1" minOccurs="0"/>
      <xs:element name="component" type="POCD_MT000040.Component2"/>
      <xs:any processContents/>
    </xs:sequence>
    <xs:attribute name="nullFlavor" type="NullFlavor" use="optional"/>
    <xs:attribute name="classCode" type="ActClinicalDocument" use="optional" fixed="DOCCLIN"/>
    <xs:attribute name="moodCode" type="ActMood" use="optional" fixed="EVN"/>
  </xs:complexType>

This will result in the schema processor ignoring anything after the CDA Header.  Or will it? Actually, this will fail, as the schema now violates the Unique Particle Attribution constraint of XML Schema 1.0: an element such as <componentOf> could match either its own (optional) declaration or the wildcard, making the content model ambiguous.  However, if you could be sure that componentOf would be present, setting minOccurs="1" on that declaration resolves the problem.  But not every CCDA requires that, and so that little fix won't work.  OK, what if we instead keep that last <component> element and change the definition of its type so that it can contain anything?  Yep, that works.

It should look something like this:
  <xs:complexType name="POCD_MT000040.Component2">
    <xs:sequence>
      <xs:any processContents="skip" />
    </xs:sequence>
    <xs:anyAttribute processContents="skip"/>
  </xs:complexType>

So, now <component> can contain any sort of well formed XML content, and your "header validator" won't care.

An alternative implementation would use a specialized XSL identity template with some exceptions to skip any unrecognized content after componentOf, and simply delete the component element definition in POCD_MT000040.ClinicalDocument.

The next challenge is validating narrative-only content.  For that, you want to tweak the section definitions within the document so that you don't validate any content within <section> other than <text>, <title>, or perhaps <code>, while still validating subsection content.

That's a bit trickier.  For this case, you could define a <component> element at the top level which would be overridden by the specializations of <component> defined within the header or entries (which you really don't care about), but which would be processed when matched by <xs:any processContents='lax'>.  However, rather than do that, my recommendation would be to create a specialized identity template that copies only what you want to validate within sections, and skips anything you don't care to validate.  Then you can just use the standard CDA Schema to validate the content without any changes (because all content within a section is optional according to the schema).

In that way, what you've just done is eliminate the potentially invalid content.  There's extra value there, because now what you have is a transform of the original content which, if "narrative valid", is probably safe to keep around for viewing and transformation by a stylesheet.

That identity template is a simple exercise in software engineering.  I'll leave it to the interested reader to figure it out.
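
If you'd rather not start from scratch, here's a minimal sketch of what that stylesheet could look like.  The choice of which children of <section> to keep is my assumption; adjust to taste:

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:cda="urn:hl7-org:v3">

    <!-- Identity template: copy everything through unchanged by default. -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>

    <!-- Inside a section, drop anything that isn't narrative (title, code,
         text) or a subsection wrapper (component), including all entries. -->
    <xsl:template match="cda:section/*[not(self::cda:title or self::cda:code or
        self::cda:text or self::cda:component)]"/>

  </xsl:stylesheet>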

Oh, one final thing: don't be dumb and validate in easy-to-hard order.  Validate in the other order, because it will cost less in processing time for good documents.  Let the bad ones pay the performance penalty of multiple validation stages.
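
Concretely, staged validation might look something like this (the header-only schema file name is a placeholder of mine; javax.xml.validation is the stock Java API):

  import java.io.File;
  import javax.xml.XMLConstants;
  import javax.xml.parsers.DocumentBuilderFactory;
  import javax.xml.transform.stream.StreamSource;
  import javax.xml.validation.SchemaFactory;

  public class StagedValidator {
    /** Returns the highest validation level the document achieves. */
    public static String check(File doc) {
      SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
      try { // Hardest test first: the full, unmodified CDA Schema.
        sf.newSchema(new File("POCD_MT000040.xsd")).newValidator()
          .validate(new StreamSource(doc));
        return "Level 2a or better";
      } catch (Exception fullSchemaFailed) { /* fall through */ }
      try { // Next: the header-only schema built above.
        sf.newSchema(new File("POCD_MT000040_HeaderOnly.xsd")).newValidator()
          .validate(new StreamSource(doc));
        return "Level 1";
      } catch (Exception headerSchemaFailed) { /* fall through */ }
      try { // Cheapest diagnostic last: is it even well-formed XML?
        DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(doc);
        return "Well-formed XML, but the header is not valid";
      } catch (Exception notEvenXml) {
        return "Level 0: totally bogus content";
      }
    }
  }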

   -- Keith



Monday, November 14, 2016

Good Interoperability works like a shotgun, but with a single bullet

As an implementer these days, I don't have the luxury of building one-off solutions.  I have to be able to take components and put them together in multiple ways to solve multiple problems.  CDA was how we did this in IHE Patient Care Coordination, where a single section or entry could be used in multiple documents to support multiple use cases.  In fact, if I look at the number of IHE profiles that use the problems, medications and allergies sections we first created, I count at least a dozen CDA documents that use them.  They became the foundation of our work for many years.

The same is becoming even more true now with HL7 FHIR.  Each FHIR resource can be used for multiple use cases, and the resources can be put together in multiple ways to achieve a solution.  If I want to build a flowsheet, I can use the Observation resource.  A graph? The Observation resource.  A CDS intervention? I might want to access the Observation resource.  And it's the same resource (though perhaps with slightly different constraints for different uses).

No longer do I have to concern myself with different models, schemas, et cetera, just because how I want to use the thing has changed.

So often, we have limited resources.  We want a shotgun, but all we get is a sling with a single stone. We get one shot at this.  With FHIR, I can line all my ducks up in a row and smack them down with that single stone.  It's not just two birds (use cases), but as many as I can line up.  And in fact, I don't even have to line them up all that much.  Perhaps what I have in FHIR is a flamethrower.

   Keith

Wednesday, November 9, 2016

Accidental Interoperability

I've been spending a great deal of time on the implementation side, which doesn't let me say much here.  However, I had a recent experience where I saw a feature one team was demonstrating in which I could actually integrate one of my interop components to supply additional functionality.

Quite simply, when you build interop components right, this sort of accidental interop shows up all the time.  It's really nice when it does too, because you can create a lot of value through it with very little engineering investment.

Lego could have spent less time on their very simple building block, but because of the attention they spent, there are SO many more ways to connect those blocks, some of them I am certain were never originally intended.

Getting into the component-based mindset that enables accidental interop is sometimes quite challenging.  All too often we focus on very specific problems and fail to consider how what we are building could be done in a more general way.  When we get that focused, it often enables us to deliver sooner, because we are focused on the singular use case, and can take shortcuts to optimize for it.  At the same time, that hyper-focus prevents us from looking at slightly more general solutions that might have broader use.

All too often I've been told those more generalized solutions are "scope expansions", because they don't fit the use case, and the benefits of generalization aren't immediately experienced for the specific use case I'm asked to solve.  Yet my own experience tells me that the value I get out of more general solutions is well worth the additional engineering attention.  It may not help THIS use case, but when I can apply the same solution to the next use case that comes along, then I've got a clear win.  Remember Avian flu?  That threat turned out to be a bust, yet CDC spent a good bit of money on a solution for that use case.  Could they use any of it for Swine flu?  Yeah, you really don't want to know the answer to that.




Thursday, November 3, 2016

Patient matching and restricted charts

Patient matching is a tricky area.  Name, birth date and gender are insufficient to match all patients reliably within a region.  For example, one ZIP code in Chicago contains enough John Smiths that the likelihood of an identity collision occurring within a practice in that region is statistically significant: about a 1 in 20 chance of occurring for some patient.  And John Smith is only the thirteenth most popular name in that region.

So you need other identifiers or differentiators to get a better match.

Some organizations have business rules about matching that allow them to expose patient data to other providers only if they get one and only one match.  They also have business rules about not displaying any data outside their practice for patients whose charts are restricted.

Combine these two issues and you have a tricky challenge that is easy to get wrong.

How do you implement the patient identity search?  Do you search only patients with unrestricted charts, or do you search both but only display if you get unrestricted results?  You have to search with restricted patients included!  Consider: if an identity collision occurs, regardless of whether it occurs for a patient with a restricted chart or not, you still have to detect it!  If you were to search just the unrestricted records, and there were two John Smiths whose identities collided, then when someone tried to access data for the one with the restricted chart, they would get data for the wrong John Smith 100% of the time.

So, you cannot restrict the identity search, but you have to restrict what it reports.  
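
In rough code, the order of operations matters more than the API (every type and method below is hypothetical; the point is where the restriction gets applied):

  /** All types here are made up; the logic is the point. */
  MatchResult findPatient(PatientIndex index, Demographics query) {
    // Search over ALL patients, restricted charts included; otherwise
    // collisions involving a restricted chart go undetected.
    List<Candidate> matches = index.search(query);
    if (matches.size() != 1) {
      return MatchResult.noUniqueMatch(); // collision or no hit: report nothing
    }
    Candidate only = matches.get(0);
    if (only.isChartRestricted()) {
      return MatchResult.notDisclosed();  // unique match, but we may not reveal it
    }
    return MatchResult.found(only);       // safe to return demographics
  }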

This will impact patients whose identity happens to collide with those that have restricted access to their chart.  That is where other identifiers or data (such as Mother's maiden name, email or phone number) can help differentiate patients.





Thursday, October 27, 2016

Partial Rejection and Levels of Validity in CDA (or anything else for that matter)

One of the things that I learned in my Informatics education was that there are many different ways to evaluate the validity of something.  It's not a yes/no question, but rather a multi-faceted scale.  Most informaticists are familiar with the 5- or even 7-point scales used to evaluate quality of evidence.  This is essentially the same idea.

A recent Structured Documents discussion took up what it means to "reject" an "invalid" CDA document.  When you look at these terms, they seem like yes/no, binary decisions.  But here is how you can turn this into shades of gray:

Level 0: Totally bogus content.  Is this even XML?
Level 1: The CDA Header is valid.
Level 2a: Level 1 + the narrative content is valid according to the CDA Schema
Level 2b: Level 2a + the LOINC codes for documents and sections are recognized as valid.
Level 3a: Level 2b + the entries are schema valid according to CDA.
Level 3b: Level 3a + the codes are recognized.

Here's how a system can respond after making these assessments (note that possibly available actions at higher level include those at the level below):

Level 3b: Discrete data can be imported into the system.
Level 3a: Some data can be coded based on string matching, and for that data which has matching codes, that data can be imported into the system after validation by a healthcare provider.
Level 2b: Narrative only sections (such as Reason for Visit or HPI) can be imported, but no discrete data.
Level 2a: After performing some simple pattern matching, some narrative only sections can be imported, after validation by a healthcare provider.  The document can safely be displayed to the end user.
Level 1: The CDA Header is valid.  The fact that a document has been received for a patient from a particular organization can be displayed to the end user.
Level 0: You might be able to look at some magic numbers in the data to figure out what the heck someone sent you, but there's no way to even assess what patient it was for unless you have that stored in some other metadata (thus Direct+XDM evolves as a good solution for sending files).  You might be able to figure out an appropriate viewer for the content, but even then, there are no guarantees it is safe.

Validity?  It's not a switch.  It's a dial.  And mine goes to 11.  Rejection? That's just a pre-amp filter.




Thursday, October 6, 2016

Preadopting FHIR Release Content

There are many times when you aren't quite ready to adopt a new release, either because it isn't fully baked yet (as for FHIR STU3), or you just aren't ready to suck up a whole release in a development cycle to get the ONE feature you would really like to have.

The FHIR Extension mechanism is a way that you can add new stuff to an existing FHIR release. But how should you use it for problems like those described above?

I'm going to be playing with this in the near future because I have some STU2 stuff going on right now, but I really need some of the fixes to Coverage in STU3.  Right now, STU2 talks about subscriber*, but doesn't actually link the coverage to the member (the actual patient being treated)! STU3 fixes that, and I want to preadopt the fix.

So, how am I going to handle my extensions?  Well, first of all, for every field I add, what I want to do is name it using the STU3 (Draft) URL (and later the STU3 URL).  So my extension URL becomes something like http://hl7.org/fhir/2016Sep/coverage-definitions.html#Coverage.beneficiary_x_, and bang, I have a referenceable URL that pretty clearly defines the extension.

What does FHIR say about that?

The url SHALL be a URL, not a URN (e.g. not an OID or a UUID), and it SHALL be the canonical URL of a StructureDefinition that defines the extension.

Apparently what I've done isn't quite right, because it doesn't follow the letter (even though it follows the spirit) of the FHIR requirements for extensions. Right? So, NOW what?  Somehow we need to give each extension a name.

Actually, let's see what happens when I plug StructureDefinition into this equation.  I now get:
http://hl7.org/fhir/2016Sep/StructureDefinition/Coverage#Coverage.beneficiary_x_.  Click the link to see where that goes.  Dang!  Pretty fricken close. StructureDefinition/Coverage produces a redirect that goes to fhir/coverage.html instead of fhir/2016Sep/coverage.html.  So close.

In fact, that's nearly close enough for me.  It seems that if we were to fix the redirect problem on the HL7 server, this would do exactly what I need.  Note: it's NOT a structure definition that defines an extension, it's a structure definition that defines a resource.  But really?  Do I have to point you to a structure definition that defines it AS an extension?  Or can I just do the nearly right thing and get away with it?
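
To make this concrete, here's roughly what a pre-adopted beneficiary would look like on the wire in STU2 XML.  The references are illustrative, and per the above, the url follows the spirit rather than the letter:

  <Coverage xmlns="http://hl7.org/fhir">
    <!-- pre-adopted STU3 beneficiary, carried as an STU2 extension -->
    <extension url="http://hl7.org/fhir/2016Sep/StructureDefinition/Coverage#Coverage.beneficiary_x_">
      <valueReference>
        <reference value="Patient/example"/>
      </valueReference>
    </extension>
    <!-- what STU2 already lets me say -->
    <subscriber>
      <reference value="RelatedPerson/example"/>
    </subscriber>
  </Coverage>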

I may want to adopt a resource that doesn't even exist yet.  Say I want to use some clinical decision support, but I've invested a bit already in Argonaut stuff based on STU2.  How do I handle that? Fortunately, FHIR has the Basic resource, but I'm going to need to extend the heck out of it.  No problem, I just use the same technique, with gusto, and even better, I could put some automation behind it to populate my stuff.  And so could anyone else.

I wonder what Grahame will think about this?

    Keith

P.S. *  Also broken as defined in STU2, and not yet fixed in STU3 because I am not a patient at my wife's OB/Gyn practice, nor am I a patient at my children's Pediatric practice, but they clearly have me on file as some sort of RelatedPerson.  There's already a tracker for this.

Tuesday, October 4, 2016

FHIR to EHR Data Mappings

I've looked at a lot of EHR database schemas over my career.  One of the things that I've found about FHIR is that it captures the essential model of an EHR system (just about ANY EHR system).  There really shouldn't be any surprises here.  It's based on six years of work that led to CCD, C32 and CCDA, as well as a lot of other good information sources.  You can also find a good bit of that model already present in CIMI, via some of Intermountain's detailed clinical modeling efforts from the past.  What FHIR does differently, I think, is expose the right grain size, compared to what prior efforts in HL7 have attempted.

As both Grahame and John have pointed out, yesterday's diagramming work still leaves a lot to be desired.  However, I will continue on to see where it goes.  I think there's an important piece of information there we should all be looking at very carefully.  I was already quite pleased to see that the patient and physician show up right in the center of it all.  Clustering algorithms like those used in GraphViz tend to surface interesting relationships like that.

I'm wondering if there are some better clustering algorithms to play with that might help identify interesting groupings and see how we did with resource classification.  Clearly the grouping of Individuals nailed it in Identification (under Administration).  Algorithms like these might be worth digging into.

   -- Keith

Monday, October 3, 2016

Towards a FHIR UML Resource Relationship Diagram

One of the things that I feel is missing from FHIR documentation is a UML-like diagram of the Resources and their relationships to each other.  With over 100 resources, this is rather challenging to produce.  Fortunately, due to the data driven nature of FHIR development, the production can be automated using layout tools like GraphViz (Grahame had a love-hate relationship with GraphViz during early FHIR development, which resulted in him using something else).

What I did was create an XSLT which produced a graphviz input file after processing all of the FHIR StructureDefinition inputs.  To create a single XML file, I cheated a bit, and simply ran:

c:\build\publish> for /r %f in (*.profile.xml) do type "%f" >> all.fhir.xml

After that, I edited all.fhir.xml, stripped all the extra XML declarations, and wrapped it in an element named FHIR (in the FHIR namespace).  There are ways to automate that, but this was simple and easy.  Note: The output will include some things that may have been part of the FHIR Build, but not part of the specification (e.g., Account in STU2).  I dealt with that in a later step.

After that, I used a number of different layouts supported by GraphViz to see which worked.  Just using the basic options produces a drawing about 30" x 150".  That's not really easy to use.  The best layout I could find was sfdp, which produces a clustered layout based on edge weights, using a force-directed spring model.  The layout still looked dramatically ugly because the lines overlap the nodes, so I set the edges up to use orthogonal connectors.

That looked close enough to be useful, so the next thing I did was to color code the nodes based on the Resource categorization used on the Resources page.  Clinical resources took on a range of purple colors (by subcategory), identification resources were red, workflow became cyan, infrastructure blue, conformance various shades of gray, and finance was green.  The colors helped me to understand how things were clustering in the diagram, so that I could possibly hand tune it.
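
For flavor, the generated GraphViz input looks something like this.  The handful of nodes and edges below are an illustrative excerpt I typed by hand, not the XSLT's actual output:

  digraph fhir {
    layout=sfdp;     /* clustered, force-directed placement */
    splines=ortho;   /* orthogonal connectors keep lines off the nodes */
    node [style=filled];
    Patient      [fillcolor=red];           /* identification */
    Practitioner [fillcolor=red];
    Observation  [fillcolor=mediumpurple];  /* clinical */
    Claim        [fillcolor=green];         /* finance */
    Observation -> Patient      [label="subject"];
    Observation -> Practitioner [label="performer"];
    Claim       -> Patient      [label="patient"];
  }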

I plan on doing hand tuning with a graphics editing tool.  Right now I'm checking out GraphVizio, a Visio plugin that lets me import GraphViz drawings.  Running the layout computation on the graph through Visio is worthy of a long coffee break (actually, the same is true for the various layout methods).  I'm hoping Visio gets out of my way long enough to make this a worthwhile exercise; otherwise, I may have to revert to some other drawing tools.  I still don't like all the stuff Visio added for connecting things.  As soon as I move something a bit, it auto-reroutes stuff I've carefully positioned, screwing things up again.  I haven't played around with it long enough to figure out how to turn that off.

While I'm messing around with the diagram, I thought I'd show my initial progress:


As you can see, there's still a ways to go.  The diagram is still way too big, and you cannot even zoom it large enough to read.  What is interesting is that in fact, the FHIR Resources do tend to cluster, and Patient, Practitioner, and RelatedPerson find themselves at the center of care, just like you might expect.

Friday, September 30, 2016

A long journey ends...

OK.  So I've been at this Master's degree thing for the majority of the time I've been with my current employer.  It took me three-plus years to find the program that would take me (the Online MBI Program at Oregon Health & Science University), and almost three more to complete it, but I'm FINALLY FINISHED!  And the funny thing is that I finished in the first place I looked, some six or more years ago.

Wow.  What a journey.  I think my favorite three classes are in order:

  1. Human Computer Interaction (the most recently completed)
  2. The Practice of Healthcare (the one I thought for sure was going to kill me)
  3. Introduction to Informatics (one that had me on the floor laughing in the first week)
But quite frankly, I enjoyed every single one, and aced them all (save one, in which I only got an A, and it wasn't in any of the ones above). I'm about 2/100ths of a point from perfect, which quite honestly does NOT reflect my own perception of my ability at all.

I think the biggest thing that I learned over the last three years was the dramatic gulf that still exists between the "practical", technical, software engineering disciplines, and the academic, but also highly intuitive, medical disciplines.  The latter are both more and less science than the former, or perhaps I have that reversed.  At the population level, the math and science in healthcare is all there.  Across the software engineering disciplines, not so much.  In the aggregate, software is still high art.  And yet at finer grains, healthcare is so much more art than the day-to-day of writing code and implementing algorithms, which is almost all logic (math).

One of the things that I have very clearly decided is that I will focus much more attention on teaching in the future.  I've always loved teaching.  One of the most enriching experiences I had was being a TA for the Standards and Interoperability class.  In most of the teaching I do, I see students for a day or two. I never really get to know them or see them grow over the course of a term.  Even though my time teaching was very short, working with others over a period of seven weeks, the last of which covered almost a week together in the same space, was truly different.  I got to see people learn and grow and even change the way that they think, in ways I would not have expected, yet pleasing nonetheless.

Again, I have to profusely thank my advisor, Dr. William Hersh.  Without his support I would never have entered the program, let alone finished it.  I have to say he made it interesting for me in many ways I didn't expect, one of which I hope you'll be seeing in a journal sometime in the next year.

Today, I sign off differently, tomorrow I'll be back to the same-old, same-old.  

   Keith W. Boone, MBI

P.S.  In a couple of weeks I'll be able to share the content of my capstone report with you.  Hopefully I'll be able to put all that writing energy that's been going elsewhere for the last three years back into this blog.



Friday, September 16, 2016

Other People's Stuff

Everyone likes to use their own toothbrush.  We know where it has been, and it fits our hand perfectly.  Someone else's toothbrush is just, well, ick!

The problem with standards is that they often have that "other person's toothbrush" feel to them.  It's not the way I'd do it, or I don't understand why they did X when clearly Y is so much better.  It takes a while sometimes to overcome that icky sensation of putting that thing in our mouth.

Eventually, if we keep at it, it becomes ours, to the point that we might actually find ourselves facing the very same challenge trying to convince others to use what has now become "our standard."

It is certainly true that trying to learn something new, or use something different from what we are accustomed to, is hard.  "I don't have time for this; why can't I just do what I've been doing?" I hear.  In fact, you might actually not have time.  But you may also be missing an opportunity to learn from what others have done.  Only you can decide which is more important.

Standards are all about using other people's stuff.  Few people are in a position to craft standards; many more are in a position to use them.  If, though, after asking yourself "Is this the stuff I need to be worrying about, or is something else more important?" you conclude that something else is more important, consider whether using other people's stuff might benefit you, so that you can move on to that more important thing.

It's always easier to understand what you did on your own, rather than to comprehend someone else's work and logic.  But that logic and rationale is present.  If you learn the knack of it, you can do awesome things.

   Keith



Wednesday, September 7, 2016

Changes

Ch-ch-ch-ch-Changes (Turn and face the strange)
Turn and face the strain
Ch-ch-Changes
Don't have to be a richer man
Ch-ch-ch-ch-Changes
Ch-ch-Changes (Turn and face the strange)
Don't want to be a better man
Time may change me
But I can't trace time
   -- David Bowie

I'm more than six months into my new position, and there have been a lot of changes over the past few months.

I dropped my eldest daughter off at college last week.  I still haven't adjusted to that.  I found myself wondering at 4:00 today why she wasn't home from school yet.  Oh yeah, I reminded myself.  November for Thanksgiving.

Next week I finish the last class in my Masters in Biomedical Informatics.  That and turning in my final capstone paper are all that stand between me and my degree.  I've learned a lot over the last three years in that program, and I cannot recommend it highly enough.  Bill Hersh has put together a great program at OHSU.  Whether you go for the certificate, the masters, or even just the 10x10 program, it's all good stuff.

My standards work is slacking off as my implementation work picks up.  I'm principal architect for three teams working on interoperability stuff.  I wear three hats: some days I'm an architect, others a product manager, and others, an engineering manager.  Some days I do all three, sometimes at the same time.

My schedule is split between three time zones: the usual left-coast/right-coast split that has been the norm for most of my life, but now also about 4 hours in the middle of the night (12am-4am) Bengaluru time.  I sleep when I'm tired, which is not, as you might expect, "most of the time".

I still struggle with what I want to be when I grow up, forgetting that since I managed to reach 50 without doing so a couple of years back, I don't actually have to grow up, and I have a certificate from my family to prove it.

I suppose that some day when I retire, I will want to teach full time, rather than spending about a third to half of my time doing that.  What I think that really means is that my projects will become my students, rather than having students because of my projects.

Now if I could just figure out how to get the next six things done that I need to before the day is over, without moving to somewhere like Mars, or worse yet, Mercury or Venus.

   Keith

Tuesday, September 6, 2016

FHIR in India

In case you missed me, I've just gotten back from a fifteen-day trip, the last eleven days of it in India.  While there I conducted three training sessions on FHIR: one internally at GE offices in Whitefield, Bangalore, one for HL7 India, and a final one for a partner organization in Mumbai.  All told, I gave an overview of FHIR to nearly 200 software developers working in healthcare.

Image via @msharmas

There's a great desire to learn more about FHIR in India, and I was privileged to be there to spark the fires, as it were.  I am grateful to HL7 India, which was able to pull together a half-day-plus session in Bangalore on short notice.  I expect we'll be doing more together to follow up, and I'll likely be back later to conduct some advanced sessions.  I'm also trying to get a FHIR Connectathon started in India as well; more on that later as plans come together.

   Keith

Sunday, August 14, 2016

Well, Shit.

My twitter feed is all abuzz this morning on the death of @jess_jacobs.  Jess was a woman with a challenging illness who documented some of the complete failures of our health system to provide her with even barely adequate care.  I've met her briefly about four times in my life.  Her death saddens me this morning, not because I knew her well, but because she did great things with her life, and because we share at least one thing in common, our jackets.  But in some ways I feel the way that others do when their favorite celebrity dies.

Among many of my friends Jess IS a celebrity.  But outside our world, she is not known well enough, nor is her story told often enough.  So I will walk today in honor of Jess, and instead of telling my story, I will tell hers.

   Keith










Friday, August 5, 2016

Offering an Informal FHIR Chat, Whitefield area, Bengaluru, India

I'll be in Bangalore for about 10 days later this month, to work with several of my teams during the week of the 22nd, and to deliver some standards training internally the following week.  One of my architects suggested that we set up an informal FHIR chat on the weekend I'll be staying through, either Saturday, August 27th, or Sunday August 28th for folks in India who want to learn more about FHIR.

Timing is too tight for me to arrange a venue through any sort of official channels, however, others in the region might be able to put something together.

So, here is the offer.  I'm free for the entire day either Saturday or Sunday, and can deliver an overview of HL7 FHIR for developers in India.  If you are interested in helping to pull this together, please either leave a comment for me here, or e-mail me at keith.boone@ge.com.

   Keith

Friday, July 29, 2016

Round tripping identifiers from CDA to FHIR and back

I think I solved this problem for myself last year or so, or else the answer wouldn't be so readily available in my brain.

The challenge is this: You have an identifier in a CDA document.  You need to convert it to FHIR. And then you need to get it back into CDA appropriately.

There are three basic cases to consider if we ignore degenerate identifiers where nullFlavor is non-null (a fourth will show up below, when we need to carry native FHIR identifiers):
  1. <id root='OID'/>
  2. <id root='UUID'/>
  3. <id root='OIDorUUID' extension='value'/>
For case 1 and 2, the FHIR identifier will have system =  urn:ietf:rfc:3986, and the value will be urn:oid:OID, or urn:uuid:UUID.
For case 3, it gets a tiny bit messy.  You need a lookup table to map a small set of OIDs to FHIR URLs for FHIR defined identifier systems.  If the OID you have matches one of the FHIR OIDs in that registry, use the specified URL.  Otherwise, convert the OID to its urn:oid form.  If it is a UUID, you simply convert it to its urn:uuid form.

Going backwards:
If system is urn:ietf:rfc:3986, then it must have been in root-only format, and the value is the OID or UUID in urn: format.  Simply convert back to the unprefixed OID or UUID value, and stuff that into the root attribute.

Otherwise, if system is a URL that is not in urn:oid or urn:uuid format, look up the identifier space in the FHIR identifier system registry and reverse it to an OID, which goes into root.  If system is in urn:oid or urn:uuid format, just convert back to the unprefixed OID or UUID value, and stuff that into root.  In either case, the extension attribute should contain whatever is in value.
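
As a sketch, here's both directions in code.  The helper names are mine, and the lookup table is truncated to a single illustrative entry from the FHIR registry:

  import java.util.Map;

  public class CdaFhirIds {
    static final Map<String, String> OID_TO_URL =
        Map.of("2.16.840.1.113883.4.1", "http://hl7.org/fhir/sid/us-ssn");
    static final Map<String, String> URL_TO_OID =
        Map.of("http://hl7.org/fhir/sid/us-ssn", "2.16.840.1.113883.4.1");

    static boolean isUuid(String s) {
      return s.matches("(?i)[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}");
    }

    /** Forward: CDA id root/extension to a FHIR { system, value } pair. */
    static String[] toFhir(String root, String extension) {
      String urn = (isUuid(root) ? "urn:uuid:" : "urn:oid:") + root;
      if (extension == null)                        // cases 1 and 2: root only
        return new String[] { "urn:ietf:rfc:3986", urn };
      // case 3: prefer a registered FHIR URL for the OID, else the urn form
      return new String[] { OID_TO_URL.getOrDefault(root, urn), extension };
    }

    /** Backward: FHIR { system, value } back to CDA { root, extension }. */
    static String[] toCda(String system, String value) {
      if ("urn:ietf:rfc:3986".equals(system))       // root-only round trip
        return new String[] { value.replaceFirst("^urn:(oid|uuid):", ""), null };
      if (system.startsWith("urn:oid:") || system.startsWith("urn:uuid:"))
        return new String[] { system.replaceFirst("^urn:(oid|uuid):", ""), value };
      String oid = URL_TO_OID.get(system);          // registered FHIR system URL
      if (oid != null)
        return new String[] { oid, value };
      throw new IllegalArgumentException("needs the convention described below");
    }
  }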

So now then, you might ask, how do I represent a FHIR identifier that is NOT one of these puppies in HL7 CDA format.  In other words, I have a native FHIR identifier, and CDA had nothing to do with generating it.  So, there's a system and a value, but no real way to tell CDA how to deal with it.  To do that, we need a common convention or a defined standard.

So, pick an OID to define the convention, and a syntax to use in the extension attribute to represent system and value when system cannot be mapped to an OID or UUID using the conventions above.  In this manner you can represent a FHIR identifier in CDA without loss of fidelity, because CDA does not put any limits on what extension can contain.  Oh, and modify the algorithm above to handle that special OID as case four.
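
For instance, such a convention might look like the following.  The OID is deliberately drawn from the 2.999 documentation arc (i.e., made up), and the "|" separator is just one plausible syntax, borrowed from FHIR's token search parameters:

  <id root="2.999.9.99999" extension="http://example.org/fhir/mrn|12345"/>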

I'll let HL7 define the standard, select the OID, and specify the syntax.  I have better things to do with $100 than register an OID for this purpose.  But clearly, it could be done.

   Keith





Tuesday, July 26, 2016

Don't ask them to tell me what I should already know

This particular issue shows up quite a bit in standards based exchange, and frankly drives me a bit crazy.  Somewhere in the message someone asks for several correlated pieces of information to be communicated.  A perfect example of this is in the HITSP C32 Medication template.  We had to provide an RxNORM code for the medication, a code for the class of medication, and a code for the dose form within the template.  We also had to provide places to put the medication brand and generic names. Folks insisted that all of this information was useful, and therefore must be communicated in the exchange.

However, we used RxNORM codes to describe the medication.  Have you ever looked at the content in the RxNORM Vocabulary?  If I gave you the RxCUI (code) of 993837 in a communication, here's what RxNORM will tell you about it.


Within the terminology model, I can give you the precise medication ingredients and doses within each tablet, tell you that it is in tablet form intended for oral use, and identify the brand name and generic form.  Now, what were you going to do with all of that other information you asked me to code?

Having redundant information is helpful in that it helps you spot errors.  If the code is 993837 and the reported medication is something other than Tylenol #3 or Acetaminophen 300 mg / Codeine 30 mg, then there is a problem.  So, it is helpful to have SOME redundancy.  But when all those other codes are also present, the sending system needs as much knowledge as is already in RxNORM to produce that information, and we just lost some (if not most) of the benefits of using a vocabulary in exchanging the information.

There's so much redundancy in the coded and fielded information in the HITSP C32 Medication template as to be ridiculous (and while I argued against it, I did not succeed).  The RxNORM code is all you need, plus, just to be sure that the sender knows what it is talking about, one of either the brand name or the clinical name of the drug. Everything else after that is redundancy UNLESS you can identify a specific use case for it.

In an information exchange, you should pay attention to exchanges that duplicate already existing knowledge about real things in the world, especially when knowledge bases such as RxNorm exist. The need to exchange world knowledge between systems exists when the receiver of the communication cannot be expected to be readily aware of all of that world knowledge.  If I ask you to get rid of the dandelions in my yard, it doesn't really help a whole lot to tell you to get rid of the yellow dandelions, unless I have some very specialized varieties of dandelions or I've been watering them with food dye.

If you are expecting someone to transmit information that can be inferred from world knowledge, ask yourself if that is truly necessary.  You should always include enough redundancy to enable a receiver to ensure that the sender knows what it is talking about, but don't include so much that a receiver would be overwhelmed, or the sender would basically be duplicating the content of a knowledge source.  After all, we have reference works and reference vocabularies so that we can look things up.

   Keith

Tuesday, July 19, 2016

Do you have the vision to use HealthIT standards?

One of the challenges of meaningful use is in how it has diverted attention from other uses of standards that could have value.  Use of the Unstructured Document template provides one such example.  Unstructured document specifications, either IHE Scanned Documents (XDS-SD) or the C-CDA Unstructured Document, support exchange of electronic text that is accessible to the human reader (CDA Level 1), but not in structured form (CDA Level 3).  A common use case for this kind of text is in dictated notes, something we still see an awful lot of, even in this nearly post-Meaningful Use era.

Some even incorrectly believe that one cannot use these specifications because they are "forbidden" by meaningful use.  While users of these specifications cannot claim that use towards qualification for meaningful use, that program is NOT specifying what they can or cannot do elsewhere to improve care.  And again, while use of these specifications does not count towards Certification of EHR systems under the 2015 Certification Edition criteria, certification is the lower bar.  You can always do more.

There are a number of benefits for using unstructured documents in exchanges where structured detail is not present.  One of these is simply to make the text available to systems that can apply Natural Language Processing technology to the text to produce structured information.  I've worked on two different systems that were capable of doing that with high reliability, and there are commercial systems available to do this today.  This sort of use can be applied to clinical care, or to so-called secondary uses (often related to population-based care or research).

Provider organizations won't ask for this specification by name; rather, they will ask for the capabilities that it supports.  This has always been the problem of standards.  Meaningful Use, MIPS and MACRA eliminate that problem by making systems directly responsible for implementing standards, and making providers care about that by affecting their pocketbook when they use systems that do not.

The challenge for good systems developers and product managers is in applying standards like these to system design, without having their customers asking for it directly.  That takes vision.  Do you have the vision to use standards that you aren't required to?  

Friday, July 15, 2016

A FHIR-Resistant Gauntlet

Someone asked me for my opinion on adopting STU3 resources in an environment that presently supports STU2.  At first this seems to be quite challenging.  But then, as I thought about it, the following idea occurred to me:

It would be a simple matter of engineering to take an STU3 StructureDefinition, and re-write it as an STU2 StructureDefinition that is a profile on an STU2 Basic resource. Such a structure definition would be ideally suited for transfer to an STU3 environment when it is available, but would work in an existing STU2 environment today.

It eliminates one of my objections to pre-adoption of new resources, uses the Basic resource in a way it is intended to be used (to prototype new stuff), and provides a useful way to test new stuff in existing environments.

I don't have the time personally to write such a tool, but would love to see someone take up this gauntlet I just threw down.

-- Keith

P.S.  Such a tool could also support changed resources, if the tooling was smart enough to understand certain kinds of changes.  It could create extensions for new fields added, restrict fields that are removed, and ignore those for which there were simple name changes (detectable perhaps through a combination of w5, V2 and V3 mappings).

Monday, July 11, 2016

Building software that builds software enforces quality

There's an interesting discussion over on the AMIA Implementation list server about software quality. As is often the case in many of my posts, it intersected with something I'm currently working on, which is a code generator to automate FHIR Mapping.  Basically I'm building software that builds software.

You find out a lot of things when you do that.  First of all, it is difficult to do well.  Secondly, it's very hard to create subtle bugs.  When you break something, you break it big time, and it's usually quite obvious.  The most common result is that the produced software doesn't even compile.  The reason for this has to do with the volume of code produced.  A software generator often creates hundreds or thousands of times more software than went into it.  And even though it is difficult to do well, when you do it that way, you can produce 10, 20 or even 100 times the code that you would otherwise, with extremely high quality.

Software generators are like factories, only better, in this way.  If a component on the assembly line is bad (a nut, a bolt, et cetera), that results in a minor defect.  But ALL of the materials produced by a software generator, with very small exception, are created by automation.  And a single failure anywhere in the production line halts assembly.  You get not one failure, but thousands.  The rare one-offs that you do get can almost always be attributed to bad inputs or a rare cosmic radiation event.  Most of the time we are dealing with electrons and copies of electrons; we never have just one bad bolt.  We have thousands of copies of that bad bolt.

Another interesting thing happens when you use code generators, especially if you are as anal retentive as I am about eliminating unnecessary code.  You will often find that the code you are generating is exactly the same with the exception of a few parameters which can often be determined at generation time.  When this happens, what you should do is base the class you are generating on a common base class, and move that code to the base class, with the template parameters that you specify.  This is a great way to reduce code bloat.  Rather than having fifteen methods that all do the same thing, the superclass method can do the common stuff, and the specialized stuff can be moved to specialization methods in the derived class.  Template parameters can help here as well.

In the example I'm working on, my individual resource providers all implement the basic getResourceById() method of a HAPI IResourceProvider in the same way, with specialized bits delegated to the derived class.  That's 35 lines of code that I don't have to duplicate across some 48 different FHIR resources.  I really only have to test it once to verify that it does what it is supposed to do in 48 different places.  If I was writing that same code 48 times, I guarantee that I'd do it differently at least once (if I didn't go crazy first).  No sane engineer would ever write the same code 48 times, so nobody using code generation should make the software do it that way either.
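
The shape of that pattern, as a HAPI-flavored sketch (the names are mine, and the fetch details are left abstract, so don't take this as the actual generated code):

  import ca.uhn.fhir.rest.annotation.IdParam;
  import ca.uhn.fhir.rest.annotation.Read;
  import ca.uhn.fhir.rest.server.IResourceProvider;
  import ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException;
  import org.hl7.fhir.instance.model.api.IBaseResource;
  import org.hl7.fhir.instance.model.api.IIdType;

  /** The common read logic lives once in the base class... */
  public abstract class BaseResourceProvider<T extends IBaseResource>
      implements IResourceProvider {

    @Read
    public T getResourceById(@IdParam IIdType theId) {
      T resource = fetch(theId.getIdPart()); // the only specialized bit
      if (resource == null) {
        throw new ResourceNotFoundException(theId.getValue());
      }
      return resource;
    }

    /** ...and each generated provider supplies only this. */
    protected abstract T fetch(String id);
  }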

I once worked with a developer who generated a half million lines of code in this fashion.  In over two years of testing, implementation and deployment, his code generated a grand total of 5 defects.  For him, we could translate the traditional defects/KLOC metric to defects/MLOC, and he still outperformed the next best developer by an order of magnitude.

That, my friends, is software engineering.

   Keith

Friday, July 8, 2016

Thoughts on the HIT100

If you've been following Health IT for any length of time on Twitter, you are probably familiar with the now annual HIT100 "competition".  I've done pretty well over the years in being recognized by my fellow tweeters.  In 2011 I came in second, and tenth in 2012 and 2013.  In 2014 I never saw the official results, but unofficially I was somewhere in 42nd to 44th place.  In 2015, the HIT99 was announced, and I showed up in 22nd place.  This year I'm probably in the top 50, but haven't really paid much attention to it, except for the nominations in my stream.

I liked the HIT100 when it first came out, because it truly did introduce me to new faces in Health IT. Over the years, it's become less relevant, somewhat steeped in contention between competing parties, and clearly more of a popularity contest at the top levels.  BUT, it's still a good list of people to pay attention to.

What it has stopped being is a place where I can identify interesting people to watch that I'm not already paying attention to.  I know most of the people on these lists.

Michael Planchart (@theEHRGuy) has done us all a great service in starting the HIT100, and regardless of what I've heard about motivations, I also thank John Lynn (@techguy) for ensuring that something like it continued in 2015.  And, I'm glad Michael's back in the saddle for 2016.  I'm staying out of the middle of any debate as to the virtues or vices associated with the 2014 and 2015 contests.

I look forward to seeing the results.  I know some of those who have been trolling for nominations, and like I've previously said, the standings in the top ten don't matter much to me.  I hope the event continues, and with less angst than in prior years.

But I'd also like to see a new event celebrating people we wouldn't otherwise hear about or notice, perhaps because they have something valuable to say, but don't know how to say it so that we can all hear it, or perhaps because they are new in the field and we just haven't noticed them yet.  I'm not going to start it (at least any time soon), and would love to see someone take this idea up and run with it.

     Keith

P.S. For all of you who nominated me this year, I am extremely grateful.  There have been some lovely GIFs in those nominations, which I share below, along with some truly satisfying feedback.  I'm trying to get back into blogging (it's been months since I posted two days in a row!) as I finish up my degree and get steady in my new role(s).



Thursday, July 7, 2016

The Value of Standards Maintenance

We recently looked at two fairly simple issues on the DSTU Tracker on CCDA 2.1 in Structured Documents.  I thought I'd take the opportunity to break down the cost of this sort of effort.

To establish costs, we first need to establish the cost for the people involved.  Some of the effort in maintenance is discovery and diagnosis within an organization of an issue that needs to be raised up to the maintaining organization.  For US developers, I use an estimate of about $75/hr in costs, which averages over a range of salaries from junior to senior developer.  For senior staff (architects, analysts and others) involved in HL7 standards calls, you can double or triple that, because these staff are much more senior.  I'd call it about $200/hr for that group.

On today's call, we addressed two simple issues.  Each of these issues likely involved about 4 hours of development time: 3 to analyze and assess, and 1 to convey the resulting problem to the organization's HL7 representative.  That representative has about two hours of preparation effort associated with each issue: 1 hour to learn and assess the issue with their developer, and another hour of research into the issue, resulting in the development of a change proposal.  Each issue took about 15 minutes to discuss, assess, and agree to a change on the call, with 24 people on the call ($4800/hr!).

4.0   *     75 =    300
2.0   *   200 =    400
0.25 * 4800 = 1200
------------------------
                      $1,900

So, for the two issues we addressed today, the approximate cost was almost $4,000, and these were SIMPLE ones.  More complex ones take much longer, often two or three committee calls, sometimes with the committee spending as much as two hours on the issue before coming to a resolution, and in many cases, between committee reviews, there's a smaller workgroup assessing the issue and proposing solutions (call it five people), which usually spends 2 or more hours on the topic ($1,000 or more).

So, a single issue has a cost borne by industry volunteers ranging from $1,900 to as much as $15,000.

Consolidated CDA has 255 comments against it today, each of which has been addressed at some point by the working group.  The value of this maintenance ranges from $0.5 to $3.8 million!

To come to a more precise estimate, we have to estimate how many of these issues fall into the easy category, how many fall into the hard, and how many are somewhere in between.  I'd estimate about 10% of issues are in the difficult category, a bit more than half in the easy category, and the rest somewhere in the middle (which I'd estimate costs about $6K each).  That works out to roughly 140 easy x $1.9K + 89 middling x $6K + 26 difficult x $15K, or about $1.2M, for maintenance of CCDA 1.1.

   Keith



Tuesday, June 28, 2016

A Review of the iPad Pro 12.9"

Every year I get myself a present with my annual bonus (in years that I get a bonus, which has been most of them in my career).  This year, I decided to replace my quite functional iPad 2 with an iPad Pro.  I had several reasons for upgrading, one of which was to make sure that my mother would be able to enjoy my wife's iPad (she got my old one, and hers went to my mother), and because I wanted the bigger display and the multitasking support.  I'm often trying to do two or three things at once.

What I like:  The bigger footprint means that it is easier to read and operate.  My high score on one of my video games went up because I can be more precise with the larger display.  And my aging eyes like the bigger screen for reading, which I use my iPad for quite a bit.  It's also easier to see the map while driving.  The multitasking isn't quite all I could ask for, but it beats what I used to have, which was nothing.  I also like what the bigger display does for helping me to manage my schedule.

What I dislike: The bigger footprint means that it is harder to carry around, and nearly impossible to use one-handed.  I think I need a different case to be able to use it nearly the way I used to use my iPad while I was moving around on foot.  I use my iPhone now for mapping my way through the city while walking, and I don't carry the iPad around with me everywhere.  That actually is a bonus, because I'm less likely to bury myself in it at the dinner table.

I haven't spent any money on the pen or a keyboard case; I already have two Bluetooth keyboards that will work with it (I recently acquired a slim full-sized Bluetooth keyboard for my travel setup). I may get the pen later just to play with it.

I put it in a Targus case very much like the one I used to have that I finally wore out, and am quite happy with the case, but am still getting used to the fact that the fold is about 1/3 of the way in the back, which means I don't use it physically the same way as my old case.

The form-factor is different enough that my use of the device is different, but not substantially so for my needs.  Overall, I think I like it, but kinda wish I'd gotten the smaller iPad Pro in the original form factor that I've been accustomed to.

I'm probably going to spend some time playing with it as an external monitor for my laptop, to see how well that works for me.  I'm presently carrying around an AOC external USB display when I travel that has more screen, but fewer pixels.  Using my iPad for that could shorten my trek through airport security.  At least once over the past two weeks I had five bins filled going through the TSA line; using my iPad for the same purpose could drop that by two, and get my travel computing back into one bag.

   Keith

Wednesday, June 15, 2016

Apple iOS 10 to support HL7 CDA in HealthKit

This is pretty big news from Apple for HL7, and something I'm feeling rather proud of myself about, even if I only played a small part in getting us to this stage.

Thanks to Andy Stechishin of the HL7 Mobile Health Workgroup for spotting this one and bringing it to the attention of HL7 Membership.

   Keith

P.S. Now if we can get the technology giant to pay attention to FHIR, my day will be made.

Thursday, June 9, 2016

In the Rough

The thing I like about coding is that at its best, it is a fluid way to solve problems.  The best code to work on is the crux of a solution.  The scaffolding is boring.  The shell and outer layers are finish work: I can do a craftsman-like job on them, but frankly they are boring.  The fun is in figuring out some new way to solve a tough problem, and what I really like to work on is the heart of the problem.

I think the most fun for me is teasing out the solution.  Often I feel as if I can see in my mind's eye how the solution is hidden behind or within a rough crystalline structure.  I look at it, pick it up, turn it, shine light on it from different directions, see which way the shadows are cast, and more.  At some stage, I catch just a wee glimpse at the right angle, and that narrows down some options.

Eventually, I see just enough to know where to put the knife, and cut.  And cut again.  When I started I had only a small clue of the basic shape, but when I finish ...

Image via Smithsonian.com


Tuesday, June 7, 2016

Selecting Population Cohorts in FHIR using extension search parameters

The basic idea is pretty straightforward, right?  What you want to be able to do is identify a set of patients who meet particular criteria.  Give me all men 18 years old or older with a diagnosis of X, and a lab test Y where the result is > Z units.  The question is how to express that in FHIR.

If it were SQL, what you would want to say is something like this:

SELECT DISTINCT P.* FROM PATIENT P
JOIN DIAGNOSIS DX
ON DX.PATIENT_ID = P.ID AND DX.CODE = X
JOIN RESULTS R
ON R.PATIENT_ID = P.ID AND R.CODE = Y
WHERE P.DOB <= EighteenYearsAgo
AND P.GENDER = 'M'
AND R.VALUE > Z

Of course, I just made that database schema up, but the reality is that it is probably not all that far from what you already know.  However, we don't write FHIR queries in SQL, so how do we say this?

First of all, the resource we want to return is Patient, so the first thing we know is that we will start with:

GET [base]/Patient?

Because that will return patients.  If we had wanted Observations, we'd start with:
GET [base]/Observation?

Next, we can add the basic parameters on gender and age to the query.  These will be AND'd together, giving us:

GET [base]/Patient?gender=male&birthdate=leEighteenYearsAgo
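
EighteenYearsAgo is of course a placeholder; in code you'd compute the actual date (and note le rather than lt, so that someone born exactly eighteen years ago today is included, matching the <= in the SQL above).  A quick sketch, watching for the leap-day edge case:

from datetime import date

today = date.today()
try:
    cutoff = today.replace(year=today.year - 18)
except ValueError:  # today is February 29
    cutoff = today.replace(year=today.year - 18, day=28)

# Patients born on or before the cutoff are 18 or older.
query = "[base]/Patient?gender=male&birthdate=le" + cutoff.isoformat()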

But the next bits are tricky.  In fact, they're not really possible in FHIR.  I can find those Conditions where the code is X, and I can find the results where the code is Y and the value is > Z, and I can even get the patients associated with each.  But I cannot get the patients AND'd across these two JOINs.

For example, finding the patients in addition to the conditions where condition.code = X:
GET [base]/Condition?code=X&_include=Condition:patient

And finding results where the code is Y and the value is > Z:
GET [base]/Observation?code=Y&value-quantity=gtZ&_include=Observation:patient

But as I look at it, _include isn't a JOIN; instead it is a query with multiple results.  Chaining doesn't cut it either: it supports joins in the forward direction, but I want the reverse.  I'm thinking about a syntax something like this:
GET [base]/Patient?gender=male&birthdate=ltEighteenYearsAgo&_join=Observation:patient,Condition:patient&Condition.code=X&Observation.code=Y&Observation.value=gtZ

In the above, the _join argument establishes the join criteria: Observation's patient search path has to reference the found patient, and the same goes for Condition's patient search path.  The reality is we could probably figure out the join criteria automatically in this case, since there's only one thing in Condition or Observation that would really point to a patient.  So Observation:patient and Condition:patient are (or could be) implied.  If I change the search parameters a little bit, what I get is:
GET [base]/Patient?gender=male&birthdate=ltEighteenYearsAgo&patientCondition-code=X&patientObservation-code=Y&patientObservation-value=gtZ

Now I have three search parameter extensions: patientCondition-code, patientObservation-code and patientObservation-value, which I can use in my profile for Patient, and with these, I can do the necessary joins in my happy little server.
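
For what it's worth, here's a sketch of how one of those extensions might be declared as a SearchParameter resource.  The URL is invented, and since the reverse-join semantics go beyond what an xpath can express, the server has to implement the actual logic itself:

{
  "resourceType": "SearchParameter",
  "url": "http://example.org/fhir/SearchParameter/patientCondition-code",
  "name": "patientCondition-code",
  "status": "draft",
  "description": "Matches Patients having at least one Condition whose code equals the supplied token.  The reverse-join logic is implemented by the server.",
  "code": "patientCondition-code",
  "base": "Patient",
  "type": "token"
}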

I might prefer a little less verbose syntax for these parameters (e.g., condition-code rather than patientCondition-code), but for the most part, the principles are still the same.
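
Until a server supports something like that, a client can approximate the join itself: run the Condition and Observation queries separately and intersect the patient references (no _include even needed, since the reference alone identifies the patient).  A minimal sketch in Python against a DSTU2-style server; the base URL and the X, Y and Z codes are placeholders:

import requests

BASE = "http://example.org/fhir"  # placeholder FHIR base URL

def patient_refs(resource_type, patient_element, params):
    """Collect patient references from all matching resources,
    following paging links until the server runs out of results."""
    refs = set()
    url = BASE + "/" + resource_type
    while url:
        bundle = requests.get(url, params=params).json()
        for entry in bundle.get("entry", []):
            resource = entry["resource"]
            if resource["resourceType"] == resource_type:
                refs.add(resource[patient_element]["reference"])
        # Follow the "next" link; it already carries the parameters.
        url = next((link["url"] for link in bundle.get("link", [])
                    if link["relation"] == "next"), None)
        params = None
    return refs

# Condition.patient in DSTU2 (it becomes Condition.subject later)
with_dx = patient_refs("Condition", "patient", {"code": "X"})
with_lab = patient_refs("Observation", "subject",
                        {"code": "Y", "value-quantity": "gtZ"})

# The AND of the two JOINs: patients matching both criteria.
cohort = with_dx & with_lab

You'd still have to apply the gender and birthdate filters, either with a third query against Patient or by chaining (e.g., patient.gender=male) on each of the two queries above.  It works, but it's exactly the kind of client-side join that the syntax I'm proposing would make unnecessary.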

   Keith

Monday, June 6, 2016

The Nightly Dumb

There's a design pattern, often used in healthcare and in other enterprise integrations with legacy systems, that involves a nightly update.  What often happens is that a day's transactions get aggregated into a single big file (by taking a dump of the day's work -- which is why it is called the nightly dump).  Then this single big file is sent off somewhere else to be processed.

It's a convenient design.  It doesn't have to deal with transactions or locks ... the day's work is already done and nobody is using the transactional system.  It won't impact your users, because the day's work is already done and nobody is ...
Batches of stuff can be processed very quickly because the day's work is ...

You get the picture.

This is such an often-used design pattern, and yet for some reason it feels wrong that it is so widely used.  There's a heuristic being applied ... that at the end of the day, the day's work is done, which we know isn't really the case.  There's always that one record or two that we never got back to and that still needs to be updated.  Yet we often act as if the heuristic were in fact a law of physics, only to discover to our chagrin that it is not, and that we need to account for that in our processing of the nightly dump.
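
A toy model makes the gap in the heuristic visible (the records and times are invented):

from datetime import datetime, timedelta

# Each record carries the time it was last updated.
records = [
    {"id": 1, "updated": datetime(2016, 6, 6, 14, 30)},
    {"id": 2, "updated": datetime(2016, 6, 6, 23, 55)},
    {"id": 3, "updated": datetime(2016, 6, 7, 0, 15)},  # corrected after the dump ran
]

dump_time = datetime(2016, 6, 7, 0, 0)
window_start = dump_time - timedelta(days=1)

# "The day's work is done" heuristic: take everything updated in the window.
tonight = [r["id"] for r in records if window_start <= r["updated"] < dump_time]
missed = [r["id"] for r in records if r["updated"] >= dump_time]

print("in tonight's dump:", tonight)    # [1, 2]
print("waits another day:", missed)     # [3]

Record 3 is that one record we never got back to, and every consumer of the dump needs reconciliation logic to cope with it.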

Oftentimes when I see this pattern in use, I question it.  The answer I typically get is "this is more efficient".  And yet, I wonder, is it really?  Is it more efficient to wait to send something?  Or is that a premature optimization?  Is the delay between sending it and subsequently finding a problem that needs to be corrected really the best way to handle this situation?

One thing I can tell you. The nightly dump is interfacing, not integration. When you have to wait 24 or 48 hours for the nightly dump to have taken place before another system can act on what was done elsewhere, someone is bound to notice somewhere.  And when they do, they are sure to start swearing.  Because at that point, the nightly dump will basically feel as if someone just took a dump on them.

   Keith

P.S.  In case you were wondering, yeah, I just got dumped on.

Friday, June 3, 2016

Boundaries around Task, MessageHeader and Operation in FHIR

On this week's FHIR Workflow call, we discussed the similarities between the Task and MessageHeader structures in FHIR.  The notions we associate with messaging, services and tasks have a lot of overlap, and Lloyd asked us to consider these for next week.

Taking a crack at this:
Task as a resource is a record of things to do or what has been done.
Messages are a record of things that have been communicated.
Operations are a record of services to perform.  We don't really have a resource that records the invocation; what we have is a resource (OperationDefinition) that describes HOW an invocation can be performed.

Communications are part of "things that can be done", and communications can initiate other activity.

The focus of the MessageHeader resource is to ensure that a message gets where it needs to go in a timely, consistent and predictable fashion.  To do that, it needs to know the source and destination of the message, the content of the message, and other critical details needed to ensure appropriate routing with respect to timeliness, consistency and predictability.

The focus of the Task resource is to capture the status of activity in providing a service.  This enables the steps required to perform a service to be monitored, managed and adjusted as necessary to optimize service delivery.  To do this, the Task resource needs to know how a task is composed, what actors may be engaged in or affected by the task, and the parameters that may affect the performance of the task.

An operation initiates a service, and the OperationDefinition resource describes how that service is called.  To do this, it describes what the service does, and the parameters that might affect its behavior.

In system automation, we should realize that ANY detectable event can be used as a trigger to invoke activity.  Thus, FHIR-based systems might use any of the three ways above to automate the delivery of services.  Receipt of a message could trigger activity.  Creation (or update) of a task could trigger activity.  Invoking an operation could trigger activity.  And even creating or updating a resource could be a way to invoke activity.

So, we should be looking at design patterns for the invocation of services. Messaging, services, and workflow are all design patterns that can support this.  Invocation of a service is performed by "binding" that service to an event, where event could be described as the "discovery" of a resource meeting certain criteria.


  1. In a message-oriented system, the arrival of a message is (or leads directly to) the invocation of a service.
  2. In a task-oriented system, the update of a task can lead to the invocation of a service.
  3. In an operation-oriented system, the invocation of an operation leads to (or simply is) the invocation of a service.


All three can be used together.  A message can be sent from one system to another saying "Please perform this service".  Upon receipt of the message, the receiving system can create a set of tasks which must be performed to manage the activity.  Upon creation of one of these tasks, another component monitoring its task queue can invoke activity directly using one or more operations.

One of those invocations could then involve communication with another system, requiring messaging to communicate a request to perform additional services (e.g., a lab, on obtaining a positive identification of Type A Influenza, might request subtype evaluation from a reference lab).
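
To make that chain concrete, here's a rough sketch of the first two steps: a message Bundle asking the reference lab to perform the service, and the Task the receiving system might create to track it.  The endpoints, codes and ids are all invented, and the Task element names follow the STU3 draft we've been looking at on the calls, so treat this as illustrative only:

{
  "resourceType": "Bundle",
  "type": "message",
  "entry": [{
    "fullUrl": "urn:uuid:00000000-0000-0000-0000-000000000001",
    "resource": {
      "resourceType": "MessageHeader",
      "timestamp": "2016-06-03T10:00:00Z",
      "event": { "system": "http://example.org/events", "code": "perform-service" },
      "source": { "endpoint": "https://ordering.example.org/fhir" },
      "destination": [{ "endpoint": "https://lab.example.org/fhir" }]
    }
  }]
}

And on receipt, the lab's workflow engine might create:

{
  "resourceType": "Task",
  "status": "requested",
  "code": { "text": "Influenza A subtype evaluation" },
  "focus": { "reference": "DiagnosticOrder/123" },
  "for": { "reference": "Patient/456" },
  "description": "Perform subtype evaluation as requested by the inbound message"
}

From there, a component watching the task queue can invoke the appropriate operation directly.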

On boundaries: messaging is used when what needs to be monitored or ensured is the communication activity.  Tasks are used when what needs to be monitored or managed is the workflow activity.  Operations are used to invoke behaviors directly.

With RESTful approaches, the essentials of messaging are already addressed within the HTTP or HTTPS layer in terms of routing and communications.  However, more complex messaging scenarios can use FHIR Messaging to support more complex communication management.

There may also be cases where a system needs to both ensure communications and manage workflows.  In these cases, messaging can be used to communicate tasks which reference the activity to be performed.

   Keith

Thursday, June 2, 2016

Why do we put up with it?

I've so often heard "Why do people put up with bad EHR systems?"  Before you go there, for those of you who read this blog, answer me this:

Why do you put up with a system for storing meeting invitations that doesn't have a place to put a teleconference number and passcode, and a web URL, in a way that makes it easy for you to dial into the teleconference and connect to the screen sharing service?

When you get to that answer, you'll have the answer to your first question.

   Keith

Thursday, May 19, 2016

What's in your inventory?

Today I discovered something in my inventory that I didn't know I had, which could solve a particular problem.  The challenge was simply that this is my inventory:

Image via Michael Coghlan

And the solution I needed was some collection of these parts arranged in the right way.  The key to having an inventory that looks like this is that what you can build (and sell) is only limited by your imagination. What's in your inventory?

     Keith



Thursday, May 5, 2016

The battles of Relevant and Pertinent

In the year-plus since we started the Relevant and Pertinent project, I've relearned many lessons that would have been readily apparent had I come from a military background.  Two of these stand out in my mind:
No battle plan survives past first contact with the enemy.  
What we planned, and what I expected we would execute, has gone through many iterations.  So far though, our general strategy has remained: provide tools to enable developers to ensure that providers aren't overwhelmed by data that is neither relevant nor pertinent to what they are doing.
Never give an order that you know won't be obeyed.
One of the biggest debates we've had in this project is what to do when we've decided that something isn't relevant. The main concern was related to unintended consequences. What if ... we don't send it and it was important ... what if ...

So, we've decided not to decide what to do.  Instead, what we will do is provide rules for assessing the relevance and pertinence of content on a very coarse-grained scale:
  • More Relevant
  • Somewhat Relevant
  • Less Relevant
The data bears out fairly well that there are three clusters of relevance, and that the clustering is somewhat insensitive to provider experience with CCDA documents.  It's hard to argue with that.

In this, we follow the advice of Sun Tzu: 
The supreme art of war is to subdue the enemy without fighting.  -- The Art of War
We will provide some suggestions of different ways to use these assessments to avoid overwhelming providers with too much information, but these won't be requirements of the informative document, merely some things to consider.  Thus, we avoid the battle.
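
As a purely hypothetical illustration (the actual assessment rules will come from the informative document, not from me here), a renderer might sort rather than filter, so nothing is ever lost, just demoted:

from enum import Enum

class Relevance(Enum):
    MORE = 3
    SOMEWHAT = 2
    LESS = 1

def display_order(entries, assess):
    """Surface the more relevant CCDA entries first without discarding
    anything -- demote rather than drop, and the battle is avoided."""
    return sorted(entries, key=lambda e: assess(e).value, reverse=True)

# A stand-in for the project's rules; this stub is entirely invented.
def assess(entry):
    return Relevance.SOMEWHAT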

I'm fairly hopeful that we can shortly get to the point of Mission Accomplished.

   Keith

Friday, April 29, 2016

Connecting the Dots: MIPS & MACRA, 2015 Certification Edition and HIPAA Omnibus

This is definitely one of those cases that most people will likely miss unless it is explicitly called out, so here I go.

Within the recently released MIPS/MACRA proposed regulations (which I call NuMu, for reasons that should be readily apparent), CMS indicates that the legislation (law, not regulation!) says:

To prevent actions that block the exchange of information, section 106(b)(2)(A) of the MACRA amended section 1848(o)(2)(A)(ii) of the Act to require that, to be a meaningful EHR user, an EP must demonstrate that he or she has not knowingly and willfully taken action (such as to disable functionality) to limit or restrict the compatibility or interoperability of certified EHR technology. Section 106(b)(2)(B) of MACRA made corresponding amendments to section 1886(n)(3)(A)(ii) of the Act for eligible hospitals and, by extension, under section 1814(l)(3) of the Act for CAHs. Sections 106(b)(2)(A) and (B) of the MACRA provide that the manner of this demonstration is to be through a process specified by the Secretary, such as the use of an attestation.

The 2015 Edition Certification rule requires that Health IT supporting the View, Download or Transmit capability must:

(C) Transmit to third party. Patients (and their authorized representatives) must be able to:
(1) Transmit the ambulatory summary or inpatient summary (as applicable to the health IT setting for which certification is requested) created in paragraph (e)(1)(i)(B)(2) of this section in accordance with both of the following ways:
(i) Email transmission to any email address; and
And finally, the HIPAA Omnibus rule suggests that patients may request e-mail transmission of the records they ask for.  See OCR guidance here.
As a result, a common situation I run into -- I request e-mail transmission and my HCP says "we aren't set up to do that" -- nearly disappears after they adopt 2015 Edition technology.  After all, the technology has to support e-mail transmission to any e-mail address, and the provider cannot knowingly or willfully take any action to disable that capability.
Even if the MACRA/MIPS regulation changes, that doesn't matter, because what I've highlighted above is IN LAW, not regulation, and the other regulations I've referenced are final.
So, if your provider tells you that they aren't set up for that, you might need to connect the dots for them.  Ask them if they are using a 2015 certified product (which will become possible for many starting in 2017, and much more common in 2018), then reference the legislation above, the 2015 certification requirements, and the HIPAA Omnibus requirements.
This is NOW an existing part of public policy, and my head just exploded as I realized its ramifications for patients in 2017 and beyond.

   Keith

P.S. This post, Connecting the Dots: MIPS & MACRA, 2015 Certification Edition and HIPAA Omnibus by Keith W. Boone, is licensed under a Creative Commons Attribution 4.0 International License.
Based on a work at http://motorcycleguy.blogspot.com/2016/04/connecting-dots-mips-macra-2015.html.