Thursday, January 28, 2016

A belated post on Day 1 at the IHE Connectathon

I'm a bit behind in posting on my Connectathon experiences this week.  After you read my day one post you might understand. I started off this year as I always do, hunting down the guy who was going to tell me what our priorities were for the week so I could help him make a plan for succeeding (a plan I usually expect him to have prepared in advance).  Some years are better than others, and there's already a plan to work from.

When I got to his seat, I had a stunning realization. Oh shit, that's where I'm sitting.

   Keith

Monday, January 25, 2016

Five things you can do to succeed in HealthIT

1. Reuse, repurpose, but do not reinvent.  
NIH (Not Invented Here) is anathema; if you must invent, invent something new, not something old. That way you are always spending your time on problems that haven't already been solved.  That gives you capabilities nobody else has, and thus, no competition.

2. Build upon and contribute back to the excellent work of others. 
Being associated with good work creates value and reputation without the commensurate investment associated with building alone.  It's not about the tech, it's about the rep.

3. Be known for what you can do with what you have, not with what you have.  
Your most prominent customer feedback will be on your service, not your technology. What you have can change overnight, skills take longer to develop.

4. If you have really good technology (or skills), maintain it.  
Technology and skills need to be significantly refreshed about every three years or so to stay current, and need ongoing maintenance to avoid getting stale between refreshes.  The worst thing you can do with great technology is to let it become just good, and the fastest way to do that is to rest on your laurels.

5. Mentor others (Thanks to one of my mentors, Glen Marshal, for reminding me of this)
Leave your competitors in the dust, but not your colleagues.  Don't be irreplaceable; you will also be unpromotable.

Thursday, January 21, 2016

Is this your home or are you just visiting?

It's fairly common in standards work groups and organizations to see fluctuating membership depending on the topic currently under consideration.  Some people are interested in the topic, rather than the work group (or SDO), and others vice versa.  I had just finished visiting with CDS on a project that I'm working on for HSI today, when I realized that the notion of "visiting" and the notion of "home" really impacted how I dealt with the work group.

Home is where you go when you have nothing more urgent to do.  But it also describes who can call on you when something urgent comes up, or who will wait for you to become available when they know they need you.

"Just visiting" describes those places where you go because they are working on something of interest to you, but they don't always do that.

How you deal with standards differs depending on whether you are home or just visiting.  When you are home, you have a bit more leeway.  When you are just visiting, you have to work with the group that you are visiting, and be more accommodating towards their needs.

HL7 is home.  IHE is home. HL7 Structured Documents is home.  IHE Patient Care Coordination is home (though it used to be IT Infrastructure). Any group I co-chair is home. In HL7, more and more, FHIR is becoming another home, and may push me into being more of a visitor to Structured Documents, but hasn't yet.  It really depends on how much attention SD pays to FHIR.  Healthcare Standards Integration is kind of home, sort of like an extended family vacation home though, in part because it is still nascent.  Attachments is kind of like my kid's house.  I'm just visiting, but I get many of the benefits of being home.  S&I Framework projects are about just visiting.  Because of the short length of these projects (most are 1-3 years), it's hard to "set up house".  HL7 CDS is a close friend's house -- I'm still just visiting, but we know each other pretty well, and I know where the beer and glasses are kept.

As I shift roles, I expect that it may influence where home is in the coming years.

Where is your home?  Are you planning on moving anytime soon?

   Keith



Wednesday, January 20, 2016

Leveled Up

Last night (on the eve of my birthday) I accepted a role as Principal Interoperability Architect within GE Healthcare IT.  My new role also includes interoperability product management responsibilities. My first official act in this new role will be attending the IHE North American Connectathon this Monday, where a number of my colleagues will be testing my employer's products.  This Connectathon will NOT be my first, but it will certainly be my first in this new position.  I'm certainly going to be looking at interoperability in a whole new way in the coming years.

I've been working towards obtaining this kind of responsibility for the past year or so.  You, my readers, might ask how this will change my blogging content and habits, or my standards participation.  I've already given that considerable thought:

  1. My blog posts here will continue to focus on standards and standards-related policy.  Any work that I participate in for the development of standards has always been within public view, and will continue to be so.  I don't talk about my employer's products in this blog, and that won't change either.  I have other venues for that sort of thing.
  2. Habits will very likely change, just as they did when I started school.  I'll try to keep up, but will likely be overwhelmed with more operational responsibilities that make it difficult to do so.  I know I can do better than a post a week (on average), but even when I finish school later this year, I likely won't return to the post-a-day pace I kept before I started it.
  3. Standards participation will become more focused.  I'll have to implement, instead of teach implementers.  It won't likely have a huge impact on quantity or quality of participation, but will likely affect the quadrants that I focus on.
Finally, my business card will change.  No longer will it say Standards Geek. In the past I've been responsible for the creation of standards, and now I will be responsible for implementing them in product.  As is fitting, when standards get implemented, my title (on my business card) will also reflect that.  I think I'll start introducing myself as an "Interoperability Geek".

   -- Keith




Tuesday, January 19, 2016

Bring out Your Dead ...

ONC and CMS clarified today what last week's premature announcement of the death of Meaningful Use by Andy Slavitt means for Doctors and Hospitals:

  • Current law requires continued measurement of Meaningful Use as specified under existing regulations!  MACRA neither eliminates MU, nor makes it better.  But ONC and CMS are apparently listening well enough to put a good spin on things.
  • The M in MACRA stands for Medicare, and only affects physician and clinician payment adjustments. EHR incentive programs for hospitals still need work.
  • The new approach will likely take a couple of years to figure out, though you can expect some of that regulation to start showing up later this year.
  • CMS will be making it easier for healthcare providers to claim hardship.


As Monty Python would say: "It's not dead yet!"

 

ONC Accepting Comments on Standards Advisory

In my inbox ...

   Keith

ONC is Accepting Public Comments on the 2016 Interoperability Standards Advisory
ONC is accepting public comments on the recently released 2016 Interoperability Standards Advisory [PDF - 2MB]. Public comments will be used to begin the process of developing the 2017 Advisory. The comment period will be open for about 60 days and will end at 5 p.m. on Monday, March 21, 2016. The public comment submission form and the preferred comment template [XLSX - 22 KB] are available for download.

The Interoperability Standards advisory is a coordinated catalog of existing and emerging standards and implementation specifications developed and used to meet specific interoperability needs. It represents the results of ongoing dialogue, debate and consensus among industry stakeholders on the standards and implementation specifications that are best available. But most importantly, it is a critical element of our delivery system reform vision where electronic health information is unlocked and securely accessible to achieve better care, smarter spending, and healthier people.

If you need additional information or more time to submit comments, please contact Chris Muir at Christopher.Muir@hhs.gov before the March 21, 2016 deadline.

Monday, January 11, 2016

And then Magic Happens

Engineers do this all the time.  We make some simplifying assumptions about how a design will work, so that we can then focus on other more important stuff.  Sometimes these simplifying assumptions are well understood, and sometimes they presuppose some things that have yet to be proven.  This is what I'm referring to in the title.  It's those simplifying assumptions where "magic [science as yet insufficiently explained] happens".

In discussing FHIR workflows in Q3 and Q4 today at the HL7 WGM, I described some of the simplifying assumptions about how communication would be synchronized between a sender and a receiver, asserting, without showing the proof, that there were several ways these systems could be synchronized without having to worry about the exact details.

Grahame balked (rightly so, as we do have to explain this to our readers at some point), and he wasn't convinced that I was right.  He could figure it out quite well on his own, but I'm the one making the assertions, so it is fitting that I prove it.

So, here are the assumptions and assertions:

  1. There are three actors: a sender (which could be an Order Filler or Order Placer), a system tracking tasks which I'll call the workflow system, and a receiver (which could be an Order Placer or an Order Filler).
  2. The sender can create or update a resource (the general pattern is the same, the only difference is between PUT and POST), and as a result of that create or update, the receiver can be seen to eventually act upon it within a reasonable time period, ensuring synchronization at some granularity of time units.
So basically, what we want to ensure in all cases is that once the sender creates or updates the task, the receiver eventually acts upon it within a reasonable period of time.

There are three ways to do this that I covered:
  1. Polling
  2. Subscriptions
  3. Grouping (Receiver and Workflow system are part of the same system)

Polling

Polling looks like this: the polling system periodically makes a query for tasks of interest (in this case, tasks assigned to the polling system), and acts on the received results.
Not shown here, but which may need to be accounted for:
  • Time should be synchronized for all systems (so the _lastUpdated query works).
  • The receiver should recognize when a bundle contains a [specific version of a] Task resource it has already acted upon, and not act upon it again.
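
To make that concrete, here's a minimal sketch of a polling receiver in Python (using the requests library). The base URL, the owner search parameter, and the act_on_task handler are all illustrative assumptions on my part, not anything a particular server promises:

    import time
    import requests

    FHIR_BASE = "http://workflow.example.org/fhir"  # hypothetical workflow system of record
    ME = "Organization/receiver-1"                  # hypothetical identity of this receiver

    def act_on_task(task):
        """Placeholder for the receiver's real work (filling the order, etc.)."""
        print("Acting on", task["id"], "version", task["meta"]["versionId"])

    def poll(since, seen_versions):
        """One polling pass: fetch Tasks assigned to us that changed since 'since'."""
        params = {
            "owner": ME,                       # assumed search parameter for task assignment
            "_lastUpdated": "gt" + since,      # relies on reasonably synchronized clocks
        }
        bundle = requests.get(FHIR_BASE + "/Task", params=params).json()
        for entry in bundle.get("entry", []):
            task = entry["resource"]
            version = (task["id"], task["meta"]["versionId"])
            if version in seen_versions:       # skip versions we've already acted upon
                continue
            seen_versions.add(version)
            act_on_task(task)

    if __name__ == "__main__":
        seen = set()
        since = "2016-01-01T00:00:00Z"
        while True:
            poll(since, seen)
            since = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
            time.sleep(30)                     # polling interval is a local policy choice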

Subscription

Polling is slow and inefficient in some cases.  Some systems would rather be notified using subscriptions.  That would look like this (note: I didn't go into details about the four different kinds of subscription channels which could be supported):
In some cases, subscriptions are really just a tap on the shoulder telling you that you need to ask for data, and in other cases the data is already presented to you in the subscription communication channel.  So that middle GET on the bottom right might not be necessary.  Again:
  • Time should be synchronized for all systems (so the _lastUpdated query works).
  • The receiver should recognize when a bundle contains a [specific version of a] Task resource it has already acted upon, and not act upon it again.
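
Here's a rough sketch of what registering for that notification might look like, again in Python with requests. The criteria string, the endpoint, and the rest-hook channel choice are illustrative assumptions; what a given server actually supports will vary:

    import requests

    FHIR_BASE = "http://workflow.example.org/fhir"   # hypothetical workflow system of record

    # Hypothetical Subscription: notify our endpoint whenever a Task assigned to us changes.
    subscription = {
        "resourceType": "Subscription",
        "status": "requested",
        "reason": "Notify receiver of tasks assigned to it",
        "criteria": "Task?owner=Organization/receiver-1",   # assumed search expression
        "channel": {
            "type": "rest-hook",                            # one of the subscription channel types
            "endpoint": "https://receiver.example.org/notify",
            "payload": "application/fhir+json"              # omit to get a bare notification
        }
    }

    response = requests.post(FHIR_BASE + "/Subscription", json=subscription)
    response.raise_for_status()
    print("Subscription created at", response.headers.get("Location"))

If you leave out the payload, the notification is just that tap on the shoulder, and the receiver still does the GET to fetch the changed Task.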

Grouped

Finally, there is grouping the workflow system with the intended receiver.  In this case, you can simply treat it as if there were a magical subscription present.  You don't need to send information back and forth because it is already there.
This is the simplest of the three cases, and might be the preferred implementation for simple workflows where you "post something" and it just magically happens.  In this case, it is additional logic in the server, unspecified by FHIR, which simply ensures that tasks are processed.

In this example, you can ignore time synchronization and logic to avoid reprocessing a task, since it won't get messed up that way (although I'm sure someone will figure out how to do that anyway).

As you can see in all of the above, the preconditions and postconditions are reasonably met in a couple of different ways.  Hopefully this addresses some of the concerns raised about how the "magic happens."


    -- Keith

Friday, January 8, 2016

Are you a HealthIT craftsman or an artist?

Many would argue that you cannot make a silk purse out of a sow's ear. I would note that it has been done. Any competent craftsman can pick the right tools and the right components to build a beautiful product.  What makes the difference between mere competency and artistry is the ability to take crap and turn it into something functional and beautiful in a way that completely hides (or, as in the example below, revels in) its origins.


Oftentimes in Health IT we are not able to choose our tools, or the components that we must work with.  The ever-present effort to incrementally improve our understanding of the information that we need to care for patients often results in new tools and models that simply aren't available to us.  Somehow, we have to figure out how to bring that knowledge and tooling back into an existing infrastructure and "make it work".  Many times this occurs with insufficient time, resources, et cetera.

The results of this can lead to frustration or failure.  Meaningful Use exemplifies just such a case, where new tools and technologies are forced onto a constituency that is not ready, cannot be trained, and is not adequately supported in their use.  And yet there are many cases where organizations have been quite successful in crafting solutions that do work, and work quite well, with what they have.

Challenges such as these are rampant in our industry.  They can either be avoided based on a craftsman's assessment of the inadequacy of the available resources, or treated as an opportunity for an artist to excel.  Which of these you choose to be is really a personal decision, and there is no fault in preferring one over the other.  I like being able to be a craftsman, but I thoroughly enjoy the challenge of being an artist.

We are all a bit of each.  It's rare to ever find a place where you can do real art with the finest of available materials.  One day, I hope to find a role where I can do the work of an artist with the materials I would prefer to use as a craftsman, but like most of us, I expect I'll have to wait quite a bit for that chance.  Until then, I'll continue to consider which of these two roles I'm playing as I move forward in a project.

   Keith



Wednesday, January 6, 2016

A Workflow Framework

What follows is specification gobbledygook that flowed from my brain as a result of our FHIR Workflow discussions in HL7 earlier today.  It doesn't really explain much (specs often don't), but it's essential content for somewhere, so I'm saving it here for further conjuration.

  Keith


The essential components of a workflow infrastructure for FHIR include the following:
  1. A task resource which tracks the state of a service request, and which may be referenced by other task resources (known as subtasks).
  2. A workflow system of record that keeps track of task resources.
  3. At least one requester of services (e.g., order placer) that creates task resources indicating that a task needs to be performed.
  4. At least one performer of services (e.g., order filler) that modifies task resources upon performance of a task.
  5. An optional service monitor that tracks the state of task resources.
  6. An optional service manager that can change the state of task resources.
The service requester must create task resources, and monitor the status of task resources that it creates as needed to ensure work is done.

The service performer must read task resources and monitor the status of those resources as they are updated to ensure the appropriate performance of work being requested.

The service monitor may read task resources to enable other functions (e.g., escalation, service level monitoring, dashboarding, et cetera).  Within a functioning workflow environment, it need not be present.

A service manager may write task resources to force a change in state of a task to support other functions (delegation, escalation, tie breaking, et cetera). Within a functioning workflow environment, it need not be present.

The system of record must maintain task resources in their current state.  It should support access to the history of the task resource. It should also support query and subscription to updates of a task resource. It MAY support additional capabilities, such as enforcement of certain constraints when a task is modified.  For example, it might insist that a task in the "Completed" state first be brought back to an "In Progress" state before certain properties are allowed to be modified.  It may also support storage of other resources.
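
As a minimal sketch of that kind of locally imposed rule (the field names, status codes, and the rule itself are just illustrations, nothing FHIR mandates):

    # Hypothetical server-side guard applied by the system of record before accepting
    # an update to a Task resource. The rule shown (no edits to a Completed task until
    # it is moved back to In Progress) is only an example of a local constraint.

    PROTECTED_FIELDS = {"owner", "input", "requester"}   # illustrative field names

    class TaskUpdateRejected(Exception):
        pass

    def check_update(current: dict, proposed: dict) -> None:
        """Raise TaskUpdateRejected if the proposed update violates local workflow rules."""
        if current.get("status") == "completed":
            changed = {f for f in PROTECTED_FIELDS if current.get(f) != proposed.get(f)}
            if changed and proposed.get("status") != "in-progress":
                raise TaskUpdateRejected(
                    "Task is completed; move it back to in-progress before changing: "
                    + ", ".join(sorted(changed))
                )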

The system of record may be a composition (e.g., federation) of systems, so long as all systems provide a consistent view of the current state of a task within certain limits (e.g., federation may incur some delays between synchronization of views, but these should be small).  The mechanism by which such federation is accomplished is unspecified.

The system of record for a task may migrate from one system component to another as the state of the task changes.  In this case, it must be apparent to all systems participating in the workflow environment which system is the system of record. The mechanism and model for this is not specified, and we only note that REST allows for Representational State Transfer, thus, that a task resource could be transferred from one system to another.

The mechanism by which the service requester and service performer monitor activities on tasks is unspecified.  This may occur through periodic polling, use of subscription, or even out of band communications to ensure that the information stored in the system of record is tracked by the service performer and service requester.

The results of task execution should be independent from the mechanism used for task monitoring. In other words, two systems which are cooperating in a workflow through the exchange of information in a task resource should not be able to distinguish whether the task is being processed by a system that monitors task status through polling or through a subscription mechanism.  This is not to say that there may not be advantages to the use of one mechanism or another, but that the effect should not be sufficiently distinguishable so far as the execution of the workflow is concerned.

To request that a task be performed, the service requester shall create a task resource on the system of record in any state less than "In Progress" (e.g., Created, Ready, Assigned).  In response to the creation of such a task, a service performer may assign the task to itself, or if it is already assigned, may delegate or reassign the task to another system.  Once the task is in progress, the service performer should mark it so.  A task can be suspended or resumed by the service performer or another system actor, or it may be cancelled.  Upon completion of the task, the service performer will mark the task as complete.  If the task cannot be completed normally, the service performer should mark it as failed and provide a reason for the failure.  If a task is not accepted by a service performer, this should be handled as a failure, with the reason for failure indicating the reason that the task was rejected.
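
A minimal sketch of that lifecycle, in Python with requests, might look like the following. The status codes and the owner element are illustrative; the Task resource is still a draft, so take the details with a grain of salt:

    import requests

    FHIR_BASE = "http://workflow.example.org/fhir"   # hypothetical system of record

    # 1. Requester creates a task in a state "less than In Progress".
    task = {
        "resourceType": "Task",
        "status": "ready",                                   # e.g. created / ready / assigned
        "description": "Fill lab order",
        "requester": {"reference": "Organization/placer-1"},
    }
    created = requests.post(FHIR_BASE + "/Task", json=task)
    created.raise_for_status()
    task_url = created.headers["Location"].split("/_history")[0]

    # 2. Performer assigns the task to itself and marks it in progress.
    current = requests.get(task_url).json()
    current["owner"] = {"reference": "Organization/filler-1"}
    current["status"] = "in-progress"
    requests.put(task_url, json=current).raise_for_status()

    # 3. On completion (or failure) the performer records the outcome.
    current = requests.get(task_url).json()
    current["status"] = "completed"        # or "failed", with a reason for the failure
    requests.put(task_url, json=current).raise_for_status()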

Tasks can have sub-tasks.  The service performer can decide, when it accepts a task, how to break the work into subtasks and subsequently manage them.  The subtasks need not be stored in the same system of record as the original task, nor need they be assigned to the same task performer.  These subtasks should reference the parent task so that the entire workflow status associated with the initial task can be determined.  In BPMN parlance, a collaboration diagram represents a workflow, and within that collaboration an activity can be represented as a subprocess with separate activities being performed by other workflow participants.  Within FHIR, the task represents a business process, either at the level of workflow collaboration or at the subprocess level, and so comprises the functions of both the workflow and task information items.



Workflow Services vs. Storage in FHIR

The question of the day is: Should there be one resource (Task) or two (ActionRequest/ActionResponse)?

I've been a proponent of the one resource approach, but will certainly acknowledge the value of a request/response approach as well.  I think though, the real question is not about resources per se, but rather about functionality.  Today, FHIR is mostly about describing a set of common behaviors (CRUD) around resources, with a little functionality beyond storage and search.

Workflow is about the execution (behavior) of activities.  The kinds of behavior that need to be effected can be seen in the list of WS Human Task Event Types and Data.  That list describes the activities involved in workflow management: Management of the task, changing task states, assignment of tasks, capture of task related data, et cetera.

In an object oriented model, these events are the operations that can be performed on a task.  In FHIR, these could be treated as operations on a Task resource that made appropriate changes to the resource, or responded with error messages when those changes could not be made due to required behaviors of the object.  These operations are where we get into a "Request/Response model".  The operation is the request, and the updates to the Task resource, or the error raised in response to the request is the response.
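
If we went the operation route, an invocation might look something like the sketch below. The $change-status operation, its parameters, and the error handling are entirely hypothetical; nothing like it has been defined yet:

    import requests

    FHIR_BASE = "http://workflow.example.org/fhir"   # hypothetical system of record

    # Hypothetical "$change-status" operation on a Task: the request asks for a state
    # change, and the response is either the updated Task or an OperationOutcome error.
    params = {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "status", "valueCode": "suspended"},
            {"name": "reason", "valueString": "Awaiting specimen recollection"},
        ],
    }
    response = requests.post(FHIR_BASE + "/Task/example/$change-status", json=params)

    if response.ok:
        print("New status:", response.json()["status"])    # updated Task is the response
    else:
        outcome = response.json()                           # an OperationOutcome explains the refusal
        print("Rejected:", outcome["issue"][0]["diagnostics"])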

There are a number of different ways to deal with workflow.  As I've previously mentioned, there are different models for collaboration on workflows, notably around the differences between orchestration and choreography.  I like the discussion in the previous link about Orchestration being intra-institutional vs. choreography being inter-institutional.  I think it helps to narrow the discussion a bit.

As a resource, Task serves to record the status and progress of a workflow, but does not imply a specific workflow structure or rules around how it might be choreographed or orchestrated.  If multiple organizations are able to create and update task resources, that would enable choreography. Task operations could effect request/response behavior patterns to support enforcement of a limited set of allowed task state changes and workflow controls (e.g., per WS Human Task requirements), but still enable either kind of workflow.  Implementation Guides on the use of the Task resource would assist in defining specific institutional or orchestrated workflows in a more refined way.

The only net benefit I see behind the request/response resource pattern is that of supporting separate maintenance of resources, such that the initiator of a workflow can create and maintain the request resource, and the responder can create and maintain the response resource.  But frankly, I don't see a lot of value in that, because there's never a single resource that tells you what the state of the task is at any point in time.  Instead, you must look at a sequence of requests and responses, and manage the state yourself.  I know that's a bad idea.  Any time you have multiple systems implementing complex logic like that, you'll have multiple systems looking at the same sequences and coming to different conclusions, based on their interpretation of how workflow is supposed to work, about what the state of the system is.  It is much better to say "this is the state of the system", and allow the maintainer of that state (resource) to impose rules on how it might be changed if it feels a need to, or alternatively, allow it to be completely ignorant of any transition rules.

We don't have to choose between choreography and orchestration, and we don't need to choose between a task resource or request/response patterns.  We can simply say: Here is the resource.  Here are operations that work in this way on the resource.  And a system can say: I implement this resource, and I implement these operations, and you get both.

Tuesday, January 5, 2016

The Age of Innovation

This is the week that everyone wants to predict the next Uber for Healthcare, the new Amazon for the Patient, the ...

Last year, the next big thing was ...

And this year it will be ...

Well, you get it.

The new kid on the block, by the time most of us are reading about it, has usually been around for quite a few years, and is just now getting noticed. Let's look at the age of some innovative things from our past that we see talked about in the headlines as being icons of innovation:

Twitter is 7 years old, it just entered First grade, and sits right next to Uber.  The iPhone is 8, no, 9 years old -- several years away from Middle School age.  Facebook at 10 years isn't even old enough to have its own account, let alone a page. Amazon is studying for its learner's permit. REpresentational State Transfer (Fielding's original paper giving RESTful architecture its name) is old enough to drive. The MP3 player is a decade older than the iPhone and is old enough to vote. The Innovator's Dilemma is just a bit older. Windows 95 is old enough to drink.  Big Data is a thing.  It has been for decades.  It's been around long enough to collect a master's degree (probably in data science).  The web is about the same age as Big Data.  e-Mail is positively middle-aged.  And if your in-box is like mine, showing it around the middle girth too.

Innovation, true innovation, takes time to be successful.  The next big thing (e.g., FHIR), has already been around for half a decade.  It's younger than Twitter, Uber, Facebook, Amazon and all the REST (pun intended).  It's not just the next big thing. It's the next big think, and I cannot wait to see what it becomes when it grows up.

   Keith



Monday, January 4, 2016

In Hindsight

In Nobody Knows, I raised several questions about the coming year:

The US national program is at a time of transition, from the carrot to the stick. Incentives are no more. Penalties kick in this year. Will that work?

Ha. No. Meaningful Use is largely irrelevant according to some, a problem still for vendors and providers according to others, and mostly unsuccessful in moving the US healthcare system beyond basic EHR functionality.

At the same time, many hope that the incoming congress will pass new laws changing the way the Meaningful Use program works. Will that happen?
Not yet, maybe this year.  There have been several attempts.

The Meaningful Use Stage 3 proposed rule, which many had projected to drop December 23rd as usual, is still waiting in the wings. Clearly it is no longer business as usual at ONC. What's in the proposed rule?
Now we know.  Except funny thing is, we still don't.  Not exactly.  There's some stuff waiting in the wings based on the comment period.

ONC now hosts at most three of the original people staffing it under ARRA. Few really have a clue what is going to happen here either. What will ONC look like when it grows up?
Hasn't happened yet, and probably won't even begin until later in 2017.  Face it, ONC still has no effective leadership and won't until well into 2017 due to elections.

FHIR will soon be launching its second DSTU. The first pilots of FHIR using DSTU 1 will soon be appearing. Will it work, or won't it?
I'm betting it will, in a big way. More on that later.

IHE and HL7 will soon have a joint workgroup. There is a lot of opportunity here to bring together these two organizations, which have had an on-again, off-again, love-like-hate-envy-love relationship. Will it be successful?
The joint workgroup has been created, and we've even initiated some joint projects, but there's still plenty of untapped potential.  Way too much in my opinion.

I'm looking forward to 2016 in ways that I haven't previously.  With advances in Meaningful Use mostly behind us, it's time to start executing on real interoperability.  And I'm about to kick some ...

   Keith