Tuesday, December 15, 2015

OK Everybody, Synchronize your Resources

You know how, in secret agent TV shows from the 70's and 80's, the agents all synchronize their watches at the start of a major operation so that everyone knows what is supposed to happen and when?  If you failed to do that, you could really mess things up, because you wouldn't be doing the appropriate things at the right time.  Inevitably something goes wrong in those shows [the target gets back on the elevator to collect a forgotten item], and somehow a message needs to be sent from one party to another to keep everyone on track and out of trouble [after seeing a Morse code flash from a car mirror at street level, someone posing as a secretary on the 11th floor distracts the target long enough for the agent to finish his data download].  And of course the communicated signal might have been "Abort!", but that's not what happens, and somehow everything works out in the end.

We have similar problems in healthcare when dealing with workflows.  The problem is really in how we communicate and ensure that the processes of two different organizations involved in a workflow remain synchronized.  Let's look at a sample set of communications in a workflow to produce a laboratory report:

Placer (Physician/Order Placer) and Filler (Lab/Order Filler):

  Placer: I need lipids, an A1C, and a cardiac panel for this patient and sample.
  Filler: OK, got it.
  Filler: Hey, can you send me a new sample, this one is spoiled.
  Placer: OK, got it.
  Placer: Here's a new sample.
  Filler: Thanks, got it.
  Filler: Tests are in progress.
  ......
  Placer: How are those tests doing?
  Filler: Lipids are in progress.  A1C in progress.  Cardiac Panel completed, awaiting final review.
  Filler: Here are the lipids results.
  Filler: Here's the A1C results.
  Placer: I need those results STAT!
  Filler: Tests are all done.
  Filler: Here's the cardiac panel.
  Filler: Tests are all done.
  Filler: Here's the cardiac panel again, the results were amended.

As you can see, there are several points where control passes from one system to another in order to complete the workflow, and to stay synchronized, the two systems may need to signal each other in some way.  One way is to query for information [e.g., how are those tests doing?] to be sure that the current state is recorded (you might have missed a message somewhere).  Another way is to respond to a "request" with an acknowledgement that you have accepted the work [OK, got it], or moved it to a new state [Tests are in progress].  Out of band, even though one system might be "in control", another system could assert privileges to regain control (e.g., an out-of-band message to change priority [I need those results STAT!], or, not shown, [Cancel that order]).  And you may tap someone on the shoulder to say, whoops, something changed [Here's the cardiac panel again ...].

The challenge here is that the Test Order as a resource is actually two (or more) information objects. The physician's "Order" resource is managed by the Order Placer system.  The Lab's "Order" resource is managed by the Order Filler system.  These two systems could communicate using their own resources, but then you have to worry about who gets to edit which resource when, and that gets complicated.  Some have suggested that there are two resources, the Request and the Response, and that the request is owned and managed by the placer, and the response by the filler.  All that does is hide the fact that there are two information objects that need to be synchronized; it's simply the whack-a-mole problem.  We hide the complexity over here, and it pops up over there.  Others, like myself, are saying: there are one or two resources, and how you use them to manage your workflow depends on how your systems are integrated, and your "internal" workflow management infrastructure and your "external" workflow management infrastructure could look like the same thing, but be managed differently.
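To make the "two information objects" problem concrete, here's a minimal sketch in plain Python (not any HL7 or FHIR artifact; the class, field, and status names are all invented for illustration) of a placer-side order and a filler-side order that reference one another and have to be re-synchronized whenever the filler's state changes:

from dataclasses import dataclass

@dataclass
class PlacerOrder:
    """The physician's view of the order, managed by the Order Placer system."""
    placer_order_id: str
    status: str                         # e.g. "ordered", "in-progress", "resulted"
    filler_order_id: str | None = None  # link to the filler's information object

@dataclass
class FillerOrder:
    """The lab's view of the order, managed by the Order Filler system."""
    filler_order_id: str
    placer_order_id: str                # link back to the placer's information object
    status: str                         # e.g. "received", "in-progress", "completed", "amended"

# The synchronization step: some communication (message, query, or notification)
# carries the filler's state across, and the placer maps it into its own model.
# Neither side edits the other's object directly.
FILLER_TO_PLACER_STATUS = {
    "received": "ordered",
    "in-progress": "in-progress",
    "completed": "resulted",
    "amended": "resulted",
}

def synchronize(placer: PlacerOrder, filler: FillerOrder) -> None:
    """Update the placer's view of the order from the filler's current state."""
    placer.filler_order_id = filler.filler_order_id
    placer.status = FILLER_TO_PLACER_STATUS[filler.status]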

FHIR needs to support order management workflows in environments where the norm is Choreography [collaborating systems], and in environments where the norm is Orchestration [centrally controlled systems].  

I'm still trying to figure out how to address this.  I think the first step is to acknowledge that the step of "Synchronizing our workflows" has to occur, and that some communications may need to be exchanged which may NOT be a full resource.

The real trick in all of this is making sure that whatever the workflow is, it all works out in the end.

   Keith

Thursday, December 10, 2015

Finding the problem is ONLY the first step

There's something just not right with this model. I don't know what it is, but ... It niggles at the back of my brain. It doesn't work.  It's missing these properties that I expect, but I don't know how to get them out of it, or bake them into it.  And I should... it's just math.

I fuss and fight with it, and finally, I see what I'm missing. It should have been obvious all along.  I no longer wonder about what is wrong with the model, but now have to go and fix it.  I don't know how to do that yet, but I know where I can go find out.

Sometimes finding out what the problem is doesn't immediately lead to a solution, but it's a good first step. Now to go find some more magic smoke.

    -- Keith

P.S. Technology is just like normal stuff this way.  The other day I headed out for an appointment, but the bike wouldn't start (and my eldest had the truck). It's cold out, I thought... the battery probably needs to be put on the trickle charger these days.  So I was going to roll it down the driveway and start it that way, but it wouldn't roll easily.  Thinking I was in gear, I turned the key on (there was enough juice to operate the controls, just not enough to start it), but it was in neutral.  I checked more closely, and sure enough the rear tire was flat.  Must be a slow leak, I thought, have to fix that, let's grab the compressor to put air in it.  Picked up the compressor and the handle broke off the plastic hose.  So I cancelled my appointment, fixed the compressor, started to put air in the tire, and found that air was leaking out the valve stem.  Tried to find a valve stem online, failed.  Called the shop, and they explained that the valve is attached to the tube (not a tubeless tire as I'd thought), so I arranged for them to come get it and replace the tube.  I could do it myself, but I've got other stuff people pay me to fix (like the problem above) so that I can pay others to fix my stuff.


Monday, December 7, 2015

What is a Standard?

In my spare time [yes, such a thing does exist], I've been thinking about how my "PubMed for Standards" capstone project should index standards.  Which leads to the question in the title of this posting.

It's not the usual form of this question; rather, what I'm trying to figure out is what the "unit of indexing" in the database should be.  Let's take some examples:  IHE Cross Enterprise Document Sharing (XDS), Web Access to DICOM Objects (WADO), DICOM Key Object Selection Document, HL7 Messaging Standard Version 2.3.1 (HL7 V2.3.1), HL7 V2.3.1 Patient Administration Message, HL7 Version 3, HL7 Clinical Document Architecture Release 2, HL7 Version 3 Person Registry, the IHE PCC Technical Framework, HL7 Consolidated CDA, HL7 CCD Version 1.1, and FHIR.

Now, how would you like to see these indexed in a retrieval system?  Each of these "publications" works in different ways.  XDS lives in the ITI Technical Framework, and is described in Section (Chapter) 10 of Volume 1, and several sections in Volumes 2a, 2b, and nearly all of 2x.  Web Access to DICOM Objects is the last of 18 parts of the DICOM Standard.  A Key Object Selection Document is a DICOM Information Object found in section A.35.4.1 of Part 3 of the DICOM standard.  HL7 Version 2.3.1 is a 1026-page document with 12 chapters and 5 appendices; Patient Administration Messages make up one of these chapters (chapter 3). HL7 Version 3 is an aggregation of several standards published on the web by HL7 under the Version 3 title.  Clinical Document Architecture is one of those several standards, published under the Universal Domains section of that publication, and Person Registry is a topic area under the Patient Administration domain of the same Universal Domains section.  The IHE PCC Technical Framework is a collection of profiles developed by the Patient Care Coordination Domain.  HL7 Consolidated CDA is an implementation guide, and HL7 CCD Version 1.1 is a type of document described within it. FHIR is a standards framework: a collection of data types, resources, and protocols used for the development of health IT interfaces and systems.

Every single one of these may be a relevant result in a query for information about standards.  As a system designer I may want to learn about all of FHIR, or just a single resource; all of CDA, or just a single template; all of DICOM, or just a single part, or part of a part, et cetera.

The challenge this creates for developing an appropriate index is understanding the granularity. The publication unit is just one level of granularity.  The right unit size is perhaps best understood as the smallest invariant unit for provenance of the information: the smallest unit about which you'd provide author information, or, even better, the smallest unit that calls for a brief explanation of the thing being described as a whole.

What I've finally worked out is that an indexable unit is something for which I can identify some form of abstract: a subsection labeled abstract, description, introduction, purpose, scope, or some similar heading.  That is what I'm going to call a "Standard" -- for the purpose of indexing -- because that's the level at which it can be used or reused.
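As a rough sketch of the idea (this is just an illustration, not the actual schema for my capstone project), an indexable unit might carry something like the following:

from dataclasses import dataclass, field

@dataclass
class IndexedStandard:
    title: str                  # e.g. "Cross Enterprise Document Sharing (XDS)"
    publisher: str              # e.g. "IHE", "HL7", "DICOM"
    parent_publication: str     # e.g. "IHE ITI Technical Framework, Volume 1"
    location: str               # section/chapter reference or URL within the parent
    abstract: str               # text of the abstract/description/purpose/scope section
    keywords: list[str] = field(default_factory=list)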

Where I won't go is indexing templates below the document level in CDA, or publication units below the Implementation Guide in FHIR, because the degree of proliferation once I break that level of granularity becomes unreasonable ... at least for the scope of my capstone project.

So, if I can reasonably find an abstract, and it isn't a fine grained data object, I would index it.  Where does that leave FHIR resources or OpenEHR templates?  Honestly, I don't know.  Both are as fine grained as CCDA or IHE PCC templates, and yet, they are also considered by some to be primary standards with some level of separately maintained provenance.  I think what I want to do is leave them out for now.

The next area for me to address is vocabularies and value sets, but the real answer for that is that it is out of scope.  I'm trying to fill a gap, and for value sets and vocabulary, UMLS, VSAC, and PHINVADS all address the location problem for these resources.  I don't need to do something to address that non-gap.

   Keith

Thursday, December 3, 2015

Why should HealthIT Workflow integration be Different?

In a traditional (read "non-Healthcare" related) workflow scenario:

  1. A workflow produces a good or service through the collaboration of different parties.
  2. The good or service is developed in various stages, through various processes coordinated between the parties, with various checkpoints to ensure quality.
  3. Along the way, various information, services or goods are consumed as inputs, or produced as outputs, eventually generating the final good or service that is the goal of the workflow.
Under the above description, lab tests, imaging or other diagnostic studies, screenings, treatments, medication orders, or referrals are all activities involving workflow.  The good or service produced by a diagnostic study is information about a patient used in the diagnosis of disease.  Treatment provides a service that will either cure, or at least alleviate the symptoms of illness.  So really, there is no difference here.

The flow in workflow is of outputs to inputs between tasks being executed by different participants. The fundamental unit of work is a task.  This is reflected in many different models of workflow in the various standards that I discussed yesterday.  Tasks have a fundamental state diagram that is well described in OASIS Human Task.  Essentially, they are first created, then ready to be claimed, then reserved (claimed), and finally in progress and completed (or failed).  Some special states exist to account for non-recoverable errors (recoverable errors eventually complete or fail), premature exit (e.g., cancellation of an order), or becoming obsolete (it is no longer necessary to treat a patient who is no longer ill).  Tasks can also be in a special suspended (held) substate from which they return once they are unsuspended (e.g., putting a medication on hold while surgery is being performed).  It's conceivable that a task can go from any of the previously mentioned states to any other (interface engine administrators often "requeue" messages that "unrecoverably" failed after patching the code).
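As a rough illustration of that lifecycle, here's a small Python sketch of the states and the "normal" transitions between them.  The state names loosely follow OASIS Human Task, but the transition table is my own simplification, not a normative list:

from enum import Enum

class TaskState(Enum):
    CREATED = "created"
    READY = "ready"              # waiting to be claimed
    RESERVED = "reserved"        # claimed by a performer
    IN_PROGRESS = "in-progress"
    COMPLETED = "completed"
    FAILED = "failed"
    ERROR = "error"              # non-recoverable error
    EXITED = "exited"            # premature exit, e.g. the order was cancelled
    OBSOLETE = "obsolete"        # no longer needed
    SUSPENDED = "suspended"      # on hold; resumes to a prior state

# "Normal" forward transitions.  As noted above, real systems sometimes jump
# between arbitrary states (e.g. requeueing a task that "unrecoverably" failed).
NORMAL_TRANSITIONS = {
    TaskState.CREATED:     {TaskState.READY, TaskState.EXITED, TaskState.OBSOLETE},
    TaskState.READY:       {TaskState.RESERVED, TaskState.SUSPENDED, TaskState.EXITED, TaskState.OBSOLETE},
    TaskState.RESERVED:    {TaskState.IN_PROGRESS, TaskState.READY, TaskState.SUSPENDED, TaskState.EXITED},
    TaskState.IN_PROGRESS: {TaskState.COMPLETED, TaskState.FAILED, TaskState.ERROR,
                            TaskState.SUSPENDED, TaskState.EXITED, TaskState.OBSOLETE},
    TaskState.SUSPENDED:   {TaskState.READY, TaskState.RESERVED, TaskState.IN_PROGRESS},
}

def can_transition(current: TaskState, new: TaskState) -> bool:
    """True if the move is one of the 'normal' transitions sketched above."""
    return new in NORMAL_TRANSITIONS.get(current, set())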

Tasks have potential performers, those who could perform the task, an actual performer, a creator, and an administrator who can change it (often the creator and administrator are the same party).  Many of these roles can be delegated or reassigned (forwarded).

Tasks have inputs and outputs. These are essentially named slots to which you can attach things (much like Parameters used with FHIR operations).

There are a couple of other attributes of task that might be useful: Priority, type (of task), type of performer (used to qualify potential owners).

Here's my first crack at what Task looks like in FHIR.

<Task xmlns="http://hl7.org/fhir"> 
 <identifier><!-- 0..1 Identifier Task Instance Identifier --></identifier>
 <type><!-- 0..1 CodeableConcept Task Type --></type>
 <performerType><!-- 0..* Coding Task Performer Type --></performerType>
 <priority><!-- 0..1 Coding Task Priority --></priority>
 <status><!-- 1..1 Coding Task Status --></status>
 <subject><!-- 0..1 Reference(Any) Task Subject --></subject>
 <definition value="[uri]"/><!-- 0..1 Task Definition -->
 <created value="[dateTime]"/><!-- ?? 1..1 Task Creation Date -->
 <lastModified value="[dateTime]"/><!-- ?? 1..1 Task Last Modified Date -->
 <owner><!-- 0..1 Reference(Device|Organization|Patient|Practitioner|RelatedPerson) Task Owner -->
 </owner>
 <creator><!-- 1..1 Reference(Device|Organization|Patient|Practitioner|RelatedPerson) Task Creator -->
 </creator>
 <manager><!-- 0..* Reference(Device|Organization|Patient|Practitioner|RelatedPerson) Task Manager -->
 </manager>
 <input>  <!-- 0..* Task Input -->
  <name value="[string]"/><!-- 0..1 Input Name -->
  <value[x]><!-- ?? 0..1 * Input Value --></value[x]>
 </input>
 <output>  <!-- 0..* Task Output -->
  <name value="[string]"/><!-- 0..1 Output Name -->
  <value[x]><!-- ?? 0..1 * Output Value --></value[x]>
 </output>
</Task>
As a resource this is very well aligned with HumanTask and XDW, and would integrate fairly well in a workflow system defined using BPMN and/or orchestrated or choreographed using other standards. It doesn't have any specific requirements about "how" the task is managed, but it enables the task to be managed and tracked.
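To show how the draft structure might be used, here's a hypothetical instance written as a Python dict shaped like FHIR JSON.  The Task above is my proposal, not a published FHIR resource, and all of the identifiers, codes, and references below are invented; this one represents a "lab supervisor reviews a positive result" task:

lab_review_task = {
    "resourceType": "Task",
    "identifier": {"system": "urn:example:tasks", "value": "task-001"},
    "type": {"text": "Review positive result"},
    "performerType": [{"system": "urn:example:performer-types", "code": "lab-supervisor"}],
    "priority": {"system": "urn:example:priority", "code": "stat"},
    "status": {"system": "urn:example:task-status", "code": "ready"},
    "subject": {"reference": "DiagnosticOrder/example"},        # the order this task belongs to
    "created": "2015-12-03T09:00:00-05:00",
    "lastModified": "2015-12-03T09:00:00-05:00",
    "creator": {"reference": "Device/lab-information-system"},  # the system that spawned the task
    "input": [
        {"name": "result", "valueReference": {"reference": "Observation/example"}}
    ],
}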

Task Resources like this MAY be aggregated into a workflow instance resource, which allows a collection of related tasks associated with a given workflow to be managed together.  My first crack at that resource looks something like this:

<Workflow xmlns="http://hl7.org/fhir"> 
 <identifier><!-- 0..1 Identifier Workflow Instance Identifier --></identifier>
 <name value="[string]"/><!-- 1..1 Workflow Name -->
 <description value="[string]"/><!-- 0..1 Workflow Description -->
 <subject><!-- 0..1 Reference(Any) Workflow Subject --></subject>
 <definition value="[uri]"/><!-- 0..1 Workflow Definition -->
 <created value="[dateTime]"/><!-- 1..1 Creation Date -->
 <author><!-- 1..1 Reference(Device|Organization|Patient|Practitioner|
   RelatedPerson) Workflow Author --></author>
 <tasks><!-- 0..* Reference(Task) Workflow Tasks --></tasks>
 <status><!-- 1..1 Coding Workflow Status --></status>
 <failureReason><!-- ?? 0..* CodeableConcept Failure Reason --></failureReason>
</Workflow>
Now, if you want to put these two together, you can manage just about any workflow someone wants to throw at you.  Do medication orders for certain substances need to be reviewed before being filled?  That's a task that is part of the medication ordering workflow.  Does a positive result need to be confirmed by the lab supervisor?  That's a task that  is part of the laboratory testing workflow. These tasks can be associated with their subjects through the subject reference, without ever interfering with the interpretation of the subject resource.  If you want to dig into the workflow associated with an order, you could query for workflow.subject = resource, or task.subject = resource, and find the workflows still open, or the tasks as yet uncompleted.
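Here's a sketch of what those queries might look like from a client, using Python and the requests library.  Since Task and Workflow are draft resources proposed in this post, the endpoints and the subject/status search parameters are assumptions, not part of the published FHIR specification:

import requests

FHIR_BASE = "http://fhir.example.org/base"   # made-up server base URL

def open_workflows_for(subject_reference: str) -> dict:
    """Find workflow instances for a subject (e.g. 'DiagnosticOrder/123') that are still in progress."""
    response = requests.get(
        f"{FHIR_BASE}/Workflow",
        params={"subject": subject_reference, "status": "in-progress"},
    )
    response.raise_for_status()
    return response.json()   # a searchset Bundle of matching Workflow resources

def incomplete_tasks_for(subject_reference: str) -> dict:
    """Find tasks for a subject that are not yet completed (comma means OR in FHIR search)."""
    response = requests.get(
        f"{FHIR_BASE}/Task",
        params={"subject": subject_reference, "status": "ready,reserved,in-progress"},
    )
    response.raise_for_status()
    return response.json()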

This approach simplifies integration of clinical workflows with clinical data without making any assumptions about the workflows that go along with it.  And best of all, almost none of it is specific to healthcare.  Everything I've described above comes from existing IT standards and systems, but can be readily applied to healthcare.  

Wednesday, December 2, 2015

A Brief History of non-HealthIT Workflow Standards

I first started playing with workflow standards in the late 90's, before I ever got involved in Healthcare.  Digital publication was (and still is) all the rage, but back then it was only for high-end users, and there was a good deal of workflow involved that crossed multiple organizations.  It was a critical concern of many customers of the XML database product I worked on.

If you wanted to publish an article, you might need some photos or graphics to go with it.  You needed to acquire those, negotiate rights, develop and approve content, do layout, et cetera, before the article ever went live.  You might even have needed some time to convert the article from a proprietary format to a standard format using XML.

Back then there were numerous workflow engines (I recall one from Toshiba that I investigated fairly deeply), and also some standards.  The Workflow Management Coalition (WfMC) had been around for a bit (about 5 years) and had evolved some workflow APIs.  A number of workflow engines even supported those APIs.  It was like a DOM for workflow, except that back then we had to interface to those APIs using CORBA, because REST hadn't been invented yet.  There were also a number of other workflow products, including a family of products described as groupware, the most famous of which had a darling little name that we morphed into an epithet describing its most egregious failure: we called it Bloats [Lotus Notes].

We are now looking into workflow in FHIR, but I think at the moment only superficially, assuming that there are some common structures and capabilities built into (mostly) ordering and fulfillment processes.  Here's where I'd advise taking a step back from "Healthcare IT" and looking at what is already available in existing IT.  But to do that, we probably need a brief history lesson on what already exists, and its purpose as it relates to workflow.

WfMC created XPDL and Wf-XML, but these weren't even extant at the time I was looking at their efforts.  Instead, what I had been looking at was the Workflow Client Application (Interface 2) Application Programming Interface, otherwise known as WAPI.  This specification, circa 1996, describes the "things a client needs to do" to integrate with a workflow engine.  We want to integrate workflows in healthcare, not integrate with a workflow engine, but this is a pretty good description of some of the functional capabilities we need to support.

OASIS formed the BPEL workgroup sometime in 2003, to develop the Business Process Execution Language.  The Object Management Group (OMG) developed BPMN (Business Process Modeling Notation), and eventually BPMN 2.0.  All of this stuff is fine, but the reality is that it doesn't meet the needs of workflow implementers very well.  XPDL, BPEL and BPMN make it possible to describe and execute workflows, but they don't really help much in enabling the capture of essential data for workflow management.  There's a whole passel of discussion in the workflow community on Choreography vs. Orchestration, but the reality is that it doesn't matter to us; both describe execution, which is a step beyond where we want to be in FHIR at first.

OASIS Human Task does support some of the information capture needs.  If you look at Human Task and many of the operations it supports, you'll see a great deal of overlap with the WAPI described above.  Human Task defines a WSDL for its operations: an API or set of functionality that systems supporting workflow need.  Where Human Task succeeds is in describing the data that is needed to manage a workflow.  Where it mostly fails is in the heavyweight SOAP/WSDL-based approach used to manage that workflow.

Human Task became the basis for IHE's Cross Enterprise Document Workflow (XDW) [now in Final Text form!].  IHE's XDW takes the description in Human Task and turns it into the closest thing to a workflow resource that we have today: an XML document that becomes a repository for a specific workflow instance's state. Updating that document updates the state of the workflow. The biggest challenge that XDW implementers have is the grain size.  It's off by one.  What they want to update is a task in a workflow, not the entire workflow.

So, take a look at the various references for workflow here.  Next time I'll explain what I think we need to do with this in FHIR, and how it will enable healthcare workflows in Healthcare IT.

   Keith



Tuesday, December 1, 2015

If I were designing the perfect FHIR Profile Authoring Tool

First of all, when it installed, I'd have it ask a few questions about who I was, my name, e-mail, et cetera. Then I'd have it ask some questions about the organizations I was affiliated with, possibly selecting some of them (e.g. HL7, IHE, etc) from a pick list, or adding others.

When the application starts, I'm going to want to do one of three things: Start where I left off from my last project, or from where I left off on a recent project, or start a new project.  When first loaded, I'm going to have to create a project.

That project is going to be one of five things: Creating an IG, a value set, a profile, an operation or an extension.  That project is going to be created on behalf of one of my organizational affiliations, which will have some preset rules for how the project structure works (or maybe I want to apply my own personal rules -- because I might be setting them for the organization).  Those rules affect where in the project certain files are stored, and in what sort of repository they are stored (e.g., CVS, SVN, a file system, a website or FHIR Server, et cetera).  Having some useful presets would be good.  For example, for GAO, I set up a particular file system structure for the contents of the IG, and the files were checked into SVN.  Files are named in a certain way based on the kind of resource, its id and a few other bits of trivia perhaps.  You could come up with a couple of different schemes.

An IG would start off with one of a few sample IG starter page sets, adhering to a particular content design.  I could swap them out for the most part without fear that my customized content would be messed up.  Similarly, a profile, value set, operation or extension would also allow for some preset designs and customized content files.

When creating an IG, a wizard would guide me through certain selections: "style", kinds of content to include (profiles, value sets, extensions, operations), et cetera.  It would take me through the process of identifying actors (an IHE term most closely associated with Application Role in HL7 Version 3), to which a conformance statement might apply.  It would have me define the conformance statement for each actor, including some optional features that might be implemented (right now, FHIR conformance resources don't really have support for options though).

When creating a profile in an IG, I'd specify the resource and possibly a role being played.  The tool would create the profile name based on the IG name/code, the resource name, and the role.  Thus, a profile for a patient resource in the Guideline Accountable Ordering IG (GAO) would be named gao-patient, and one for an ordering provider might be gao-ordering-practitioner (the form defaulting to igcode[-role-name]-resourcename).
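A tiny Python sketch of that default naming rule (purely illustrative):

def profile_name(ig_code: str, resource_name: str, role: str | None = None) -> str:
    """Compose a profile name of the form igcode[-role-name]-resourcename."""
    parts = [ig_code] + ([role] if role else []) + [resource_name]
    return "-".join(p.lower() for p in parts)

# profile_name("gao", "patient")                        -> "gao-patient"
# profile_name("gao", "practitioner", role="ordering")  -> "gao-ordering-practitioner"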

After having named a profile, I'd basically be asked to check off those properties that I wanted to profile, and then quickly modify their values (cardinality, must support, etc.).  Where an existing value set was present, I'd be asked if I wanted to restrict or extend it where permitted.  On creating a new value set, I'd be given simple ways to select codes from existing value sets that I'd used previously or which already exist.  On creating an operation, I'd be asked to name the parameters, and further specify their details.  As things went on, I'd discover the need to create a new profile, and could add that profile to a resource elsewhere, and the tool would remember that I referenced an as yet unknown profile, and would put it on my to-do list to create.

Building the IG would be a two-stage process.  The first stage would take the information I supplied in separate component files, and put it together into FHIR resources sufficient to be the guide.  The second stage would compile those FHIR resources into the content we see on the FHIR IG site today, essentially running the FHIR IG build part of the process over those resources.  The tool would also verify that I followed a number of best practices... that I had validating examples for each profiled resource, that I had examples of input and output parameters, that I had conformance statements that linked to those profiles, and that every profile, value set, and operation was linked back to something in the IG (ever create a set of constraints that was never referenced anywhere?  It's been done before.  I'll bet you can't find the CCD template that was never referenced by any other CCD template, and so was hardly ever used).

Anyway, that's my wish list.  Get crackin' on it guys, would you.

   Keith


The Tinker Tax

This weekend I replaced the leaky radiator in my truck.  I know from past experience that this is about a half day job.  A shop would probably charge me 2-3 hours of labor for it.  It took me two days.  I saved about $150 on the labor, but in the process, broke my transmission fluid line, and did so in a way that I couldn't fix it myself.  It cost me about $400 to have that repaired professionally (because that's not a job I would take on not having the tools, and having already screwed it up once). The end result is that I paid about $400 to save $150, so a $250 educational experience.

A friend of mine (Gila Pyke) calls that the tinker tax.  It's a useful measure of the value of a learning experience. In this case, I actually did learn how to do something, just a bit too late for it to pay off THIS time. Next time I'll know better and be able to save myself the hardship of having to take my broken repair job to a pro to fix the fix.

How does this apply to Health IT?  Well, the first time you do something you've never done before, you need to be ready to pay the tinker tax.  The tinker tax isn't always measured in dollars. Sometimes it is measured in time.  Either way, you should budget for it.  I knew when I undertook my radiator repair job that if I failed I could afford to take it somewhere else, and quite honestly, the additional work that was done wasn't wasted (I didn't badly break something that wouldn't have needed repair soon anyway).  But often, Health IT projects are undertaken without any cushion, without any contingency, and with the assumption that everything will work the first time.

Really?  When's the last time that happened for you in any other undertaking that you did for the first time?  Be prepared to pay the tinker tax.