Wednesday, April 29, 2015

Relevant and Pertinent, the solution is near in HL7

The Project Scope Statement for the HL7 Relevant and Pertinent Project just went to the TSC and yet another group (CIC) is considering signing on.  It's time to start planning how to make this work.  I've talked several times here on this blog about the importance of these key words in the CCD specification.

The purpose of this project is to create an informative document from HL7 which would describe how an automated creator of C-CDA documents might identify relevant and pertinent information for inclusion into the clinical document.  While I know some of you following this already have ideas about how to go about it, (as do I), our first step will be to create an engagement model which we can complete with a number of different groups.

We will first need to identify what the important questions are that we need providers to answer, and figure out how we will engage with them successfully.

I'll be sending out a Doodle poll shortly to the HL7 Structured Documents workgroup listserv to choose a time for these discussions, and to initiate the group who will help us plan how to develop this document.  That group will then take charge of implementing the outreach to providers to gather the necessary answers to these questions.

I've already heard from many interested groups who we will be reaching out to.  If you are interested in participating in this project, drop me an e-mail (just change the DOT to a . in my e-mail address before you send it).

     Keith

Ready, Aim ....

FHIR!
My brain is on FHIR, literally.  Everything I've been doing since HIMSS started has been about the HL7 FHIR standard, and it's kept me tremendously busy.  Let me work through my list...

I had numerous discussions about FHIR with many people while at HIMSS.  I heard the HL7 sessions were packed, although I didn't try to go to them myself.  I spent most of my time promoting FHIR to various folks and explaining what it is and how they could take advantage of it.  I spent a few minutes talking to Josh Mandel about the SMART technology, and I see how it and FHIR together can shake up an industry.  That was a long overdue conversation, and I'm glad we got to catch up.

After HIMSS I spent a solid week polishing off draft versions of two FHIR-based profiles that I'm editing for IHE PCC.  We just wrapped the public comment edition of one today, and another should be done tomorrow.  Both of these were smaller and simpler to write than any CDA profile I've ever developed.  A third IHE profile is being developed by a team at Allscripts adding FHIR to the RECON profile.  Tomorrow I'll also be helping out a colleague in ITI who is looking into how to profile FHIR operations.  I have a template for that, although these days, I suppose I should say a "FHIR profile".

Before dinner Tuesday evening, I discussed how we plan on teaching the OHSU Standards and Interoperability course next term, and one of the topics we will be beefing up with my help is FHIR. Yes, I did say "we".  I'll be assisting in the instruction, which is a real honor, considering they usually offer that opportunity only to doctoral students.

Thursday I head off to Orlando to give a FHIR workshop.

Later this year I hope to be at FHIR Dev Days to talk about my experiences profiling FHIR with IHE and perhaps even by then, implementing some of those profiles.

Friday at noon Eastern, I'll be hosting the #HITsm tweetchat, and the discussion topic is, again, FHIR.

The questions for the tweetchat are below:

Q1: Where is #HL7 #FHIR on your hype cycle? Technology Trigger, Peak of Inflated Expectations, or headed to the Trough of Disillusionment?
Q2: When do you think #HL7 #FHIR will reach mainstream use?
Q3: Are you planning on adopting #HL7 #FHIR or #SMART for anything in the near future?
Q4: What do you know about #HL7 #FHIR? Have you gotten any FHIR training or are you considering it in the near future?
Q5: What’s the best (or worst) #HL7 #FHIR pun you have heard?

Of course, there are a few other things I've been up to as well, and I'll catch you up on some of them later today.

Thursday, April 9, 2015

IHE Clinical Mapping Profile

We completed the standards selection process for the IHE Clinical Mapping profile today. The purpose of this profile is to support mapping of terminologies from one system to another.  We have two use cases for this profile, which we are working on together with PCD this year:

  1. PCD's use case involves mapping from IEEE Medical Device Codes to LOINC, for the purpose of translating from IEEE vocabularies into LOINC vocabularies for Vital Signs.  This supports, for example, the need of organizations which must report vital signs values in LOINC, as is proposed in the 2015 Certification criteria.
  2. PCC's use case involves translation of SNOMED CT codes for clinical use to ICD-10 codes for billing.
We plan on having two (or possibly three) transactions in this profile.

The first transaction relies on the FHIR $translate operation on the ConceptMap resource.  In this transaction, you would specify the input code, and some optional context (in an extension to the $translate operation).  The output would be the concept in the target vocabulary.  
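As a rough sketch of that first transaction: the request body would be a Parameters resource. Everything below is illustrative (the system URIs and the code value are placeholders, not from the profile), and the parameter names follow the ConceptMap $translate operation definition, which has shifted between FHIR releases, so check the version in use:

```python
import json

# Illustrative only: system URIs and the code value are placeholders.
SOURCE_SYSTEM = "urn:iso:std:iso:11073:10101"  # IEEE 11073 MDC
TARGET_SYSTEM = "http://loinc.org"             # LOINC

def build_translate_request(code, source_system, target_system):
    """Assemble the Parameters resource POSTed to [base]/ConceptMap/$translate."""
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "code", "valueCode": code},
            {"name": "system", "valueUri": source_system},
            {"name": "target", "valueUri": target_system},
        ],
    }

params = build_translate_request("12345", SOURCE_SYSTEM, TARGET_SYSTEM)
print(json.dumps(params, indent=2))
```

The response would carry the matching concept(s) in the target vocabulary, along with a result flag.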

We haven't decided whether or not to also incorporate the $batch operation from this resource; that would be the "possibly third" transaction.

We will have an optional transaction from the Clinical Mapper to support download of the ConceptMap resource, so that the translation could be performed locally.  The reason this is optional is that some translations can be readily supported and implemented algorithmically from the ConceptMap data, while other translations (e.g., SNOMED CT to ICD-10) might not be as simply represented or implemented.  This option would apply to both actors in this profile.
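For the simple algorithmic case, a downloaded ConceptMap's element/target pairs can drive a plain lookup. This is a minimal sketch over a made-up, flattened mapping table (none of these codes are real); a production translator would also honor the equivalence and dependency information a ConceptMap carries:

```python
# Made-up, flattened stand-in for downloaded ConceptMap content.
concept_map = {
    ("urn:iso:std:iso:11073:10101", "mdc-code-1"): ("http://loinc.org", "loinc-code-1"),
    ("urn:iso:std:iso:11073:10101", "mdc-code-2"): ("http://loinc.org", "loinc-code-2"),
}

def translate_locally(system, code):
    """Return (target_system, target_code), or None when the map has no entry."""
    return concept_map.get((system, code))

result = translate_locally("urn:iso:std:iso:11073:10101", "mdc-code-1")
```

The harder translations (like SNOMED CT to ICD-10) don't reduce to a lookup like this, which is exactly why the local option can't be mandatory.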

So now we have two profiles in PCC which are planning on using FHIR DSTU-2 content.

     Keith



Wednesday, April 8, 2015

I will not profile policy, I will not profile policy, I will not ... x100

I probably need to write this more than 100 times, since I wind up doing it so often.

We reviewed the work I had done on the GAO profile I talked about yesterday.  For a lot of what the CDS system needs to do, it:

  1. Is not interested in certain details.
  2. Would prefer not to know other details.
From an interoperability perspective, it can simply ignore #1, rather than having us profile them out.  For #2, there are questions of liability, in which the receiver really doesn't want to be responsible for dealing with someone else's PHI.

The first case represents the profile's lack of requirements or interest.  Rather than profile these out, we are going to identify those that are permitted by the profile, but which may be suppressed either by the sender or the receiver for business, security or privacy reasons.  These are mostly, in this case, business reasons.

The second case represents the receiver's specific requirement, in certain cases, that some details not be sent, because they then become responsible for dealing with the content in ways that they would rather not have to.  In this case, we'd like the profile to support their use of the content, but we won't impose their policies for integration with their system.  Instead, we'll let them impose those policies.  So here, we will identify those data elements for which there may be some concerns in the security considerations section, and note that receivers may prohibit use of certain data elements according to their own policy.

This greatly simplifies the profile.
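The suppression approach described above can be sketched as a simple policy filter applied at either end of the exchange (the element names and payload here are invented for illustration, not taken from the profile):

```python
# Invented element names: data the profile permits, but which a sender or
# receiver may suppress for business, security, or privacy reasons.
SUPPRESSIBLE = {"insurance-details", "referring-provider-notes"}

def apply_suppression_policy(payload, suppressed):
    """Drop permitted-but-suppressed elements before sending (or on receipt)."""
    return {k: v for k, v in payload.items()
            if not (k in SUPPRESSIBLE and k in suppressed)}

order = {"service-code": "img-001", "indication": "cough",
         "insurance-details": "plan-xyz"}
sent = apply_suppression_policy(order, {"insurance-details"})
```

The point is that the policy lives with the trading partners, not in the profile: the profile only names which elements are fair game for suppression.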

Tuesday, April 7, 2015

Guideline Accountable Ordering

I've spent the last several days using HL7® FHIR® tooling to build the Guideline Accountable Ordering profile in PCC to go along with the CDS for Radiology work that is happening in that domain.  Imaging is the principal use case for this profile, given the need for CDS to be used on imaging orders paid for by Medicare based on recent legislation [see section (q)] passed in the "Doc Patch" last year.

Building the profile is fairly straightforward, except that the build process is not really designed for single profile development. That will have to change at some point.  There are times when I have to wait 25 minutes to find out that I have another syntax error.

The basic outline of GAO is that there is a request, comprised of an order for a diagnostic test, and a response that indicates whether the order is appropriate.  The order at this stage is "proposed", not yet "requested", so that the ordering physician can be "guided" by the CDS implementation if he/she so chooses.

The order contains various pieces of data, including the ordered service, the identifier of the ordering provider, a timestamp, an identifier, and various data elements providing indications, reasons, and related clinical data that might help a CDS service determine appropriateness.  Little about the patient need be exchanged initially (e.g., gender and date of birth), and little detail about the service: its code, a code for indications, and perhaps other data known to be necessary (patient weight, for example, might also be important).

The service can respond with an affirmative "within guidelines" response, indicating that the order is appropriate (and which guideline it was evaluated against), a negative "outside of guidelines" response, or a "no guidelines apply" response, indicating that no guidelines are available to evaluate the request against.  The service can also provide a link to a request for more information, either as a questionnaire which can be responded to, or as a web-based service which could be invoked to guide the clinician through more questions and answers (essentially the web-based implementation of the questionnaire).
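To make the three response categories concrete, here is a hypothetical sketch of how an ordering system might branch on the service's answer. The enum values and the branching policy are my own shorthand, not anything defined by the profile:

```python
from enum import Enum

# Invented shorthand for the three GAO response categories.
class GuidelineResult(Enum):
    WITHIN = "within-guidelines"
    OUTSIDE = "outside-of-guidelines"
    NONE_APPLY = "no-guidelines-apply"

def handle_response(result, more_info_link=None):
    """Decide what an ordering system might do with the CDS service's answer."""
    if result is GuidelineResult.WITHIN:
        return "proceed"           # appropriate; record which guideline applied
    if result is GuidelineResult.NONE_APPLY:
        return "proceed"           # nothing to evaluate against; order stands
    if more_info_link:
        return "gather-more-info"  # follow the questionnaire or web service link
    return "review"                # outside guidelines; clinician reconsiders
```

Note that "no guidelines apply" and "within guidelines" both let the order proceed; only the "outside of guidelines" case pushes the clinician toward more questions or reconsideration.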

Right now, I'm looking at the response which comes back.  The closest thing that I've found is the Provenance resource.  This is a bit of an edge case for provenance, and some have suggested that an "Authorization" resource might be appropriate.  This would of course have to wait for DSTU 2.1, so I'm going to stick with the present approach because it is fairly close to what we determined were our requirements.  If it turns out we need to change it, we can.

In the meantime, I'm going to go back to playing with this profile and the FHIR build tools.  I think my most recent build is finished.  I wish I knew why it keeps doing a full build every time.

     Keith




Monday, April 6, 2015

A Domain Model for Workflow

To your right is the model we produced on last Friday's IHE call discussing where we are going with the BPMN white paper.  This is a simplified domain model that we produced to help explain where the BPMN workflow definition fits in, and which shows which other data objects are used in a workflow.

We started with a use case familiar to one of the IHE members on the call, the need to manage care for patients via regional programs.  In his case, it was national programs in the third world, but another participant who contributed to this model was clearly dealing with a US-centric care management model as well.

We have in this model a care management program which is defined by guidelines.  Patients are enrolled in this program, and are the subject of care plans, which customize guidelines for each patient based on certain criteria.  These guidelines reference a variety of different workflows that are defined: Patients are screened, tested, diagnosed, treated and followed up on, each of these separate steps potentially involving multiple parties.

To enact the care plans, the Care Management program can instantiate or invoke a workflow on an enrolled patient, making the patient the subject of this workflow.  The workflow itself is composed of multiple tasks, which have various data elements as inputs and outputs, the output of one task quite possibly being the input to the next one.  The care plans themselves have outcomes, both potential (goals) and actual, and these outcomes are also measured by the various data elements.
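A rough sketch of that part of the domain model in code (the class and field names are my own shorthand for the boxes described above, and the screening/testing tasks are invented examples):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    inputs: List[str] = field(default_factory=list)   # data elements consumed
    outputs: List[str] = field(default_factory=list)  # data elements produced

@dataclass
class Workflow:
    definition: str                                   # e.g., a BPMN definition
    subject: str                                      # the enrolled patient
    tasks: List[Task] = field(default_factory=list)

# The output of one task feeds the input of the next, as screening feeds testing.
screen = Task("screening", inputs=["demographics"], outputs=["screening-result"])
test_task = Task("testing", inputs=["screening-result"], outputs=["test-result"])
wf = Workflow("care-program-screening", "patient-123", tasks=[screen, test_task])
```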

This domain model addresses the work that providers need to do on behalf of patients, and exposes the data at the level of granularity where it needs to be readily accessed in the various interoperable components.

I don't think there are any big surprises in this model.  I'm certain people could argue for different refinements; for example, data elements associated with inputs, outputs and outcomes certainly also relate to data element definitions which appear within either the guidelines or the workflows that support them, or in the goals that show up in care plans.  If you are focused on those aspects of the system, I would agree, you probably need to go a bit deeper.  I'm still focused on what's in the dotted lines, and so think I have enough detail to address that.  HL7 FHIR Resources could readily address data elements, but what is missing for me now, and for workflow in general in healthcare, is a ready mechanism to represent the Workflow and Task boxes.  This will be one of the areas that I'll be talking to folks in the HSI workgroup in HL7 about in the coming weeks, and we'll see where this goes.

   Keith

Friday, April 3, 2015

Towards a FHIR-based workflow model

Earlier this week I talked about differing views of workflow.  We discussed this today on our call to further the BPMN white paper. There are a couple of good posts in that discussion, so I'll start with the first part, and follow up on Monday with what we produced.

Basically, with the Workflow Document in XDW, we have a very good view of what is happening to a patient.  It shows all the different kinds of participants in the workflow, the tasks that they are engaged in, and how the workflow is proceeding for that patient.  Now, if we stack all these instances in a pile and look at a side view, we get a different perspective.

In this view, all the participants in the workflow line up vertically by type of participant.  And different instances of a participant type might be involved in different workflows, as you can see to the left here.

You could sort the participant work by task (as each participant might be involved in different tasks), and by participant instance, producing yet another view.  This view becomes a task queue, where multiple tasks of the same type can be viewed all at once, or filtered by the participant assigned to them.
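The re-sorting described above amounts to pivoting patient-centric task instances into per-participant work queues. A sketch, with an invented flat record layout standing in for the task data found across many workflow documents:

```python
from collections import defaultdict

# Invented flat view of task instances: (patient, task type, assigned participant).
task_instances = [
    ("patient-1", "read-study", "radiologist-A"),
    ("patient-2", "read-study", "radiologist-A"),
    ("patient-1", "order-test", "physician-B"),
]

def build_task_queues(instances):
    """Pivot patient-centric task instances into per-participant work queues."""
    queues = defaultdict(list)
    for patient, task_type, participant in instances:
        queues[participant].append((task_type, patient))
    return queues

queues = build_task_queues(task_instances)
```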

The two standards we were discussing in the IHE Radiology meetings on Remote Read, XDW and Unified Procedure Step - RESTful Services (UPS-RS), excel at two different views.  XDW excels at the patient-centric view, and UPS-RS excels at a worklist-centric view.

What I think is needed over the longer term is a standard that supports both viewpoints.  XDW is a great start for this.  I did a technical evaluation comparing UPS-RS to XDW a while back, and we did a similar evaluation in IHE Radiology, with the end result that from a data perspective, both UPS-RS and XDW contain all of the necessary data to manage the workflow.  There are some things that UPS-RS does better especially for Radiology, and some that XDW does better, but for the most part, the two are equal.

Developing new standards is not something that IHE does.  For that, I'd look to HL7, and specifically FHIR.  What I would like to do is propose a couple of new resources for the next round of FHIR work (DSTU 3), which would support workflow management.  The Workflow Resource would support collecting a bunch of tasks, and referencing a workflow definition, a patient, and an owner or manager of the workflow instance.

This would overcome some of the challenges we have in XDW today.  Most workflow document updates are simply to add a task, and leave the workflow metadata itself otherwise unchanged. Replacing an entire document to accomplish this is cumbersome.  The benefit that FHIR provides here is that the Workflow resource and all of its task resources can be separately managed with separate states and histories.  This provides a patient-centric view of a single workflow instance, a population view over a variety of instances (which I haven't mentioned thus far), and a task-centric view of tasks.
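The contrast can be illustrated with a toy model (all structures and names here are invented, not from XDW or FHIR): in the document style, appending one task means replacing and re-versioning the whole workflow document; with separately managed resources, only the one task record changes:

```python
def add_task_document_style(workflow_doc, task):
    """Document-style: the whole document is replaced just to append one task."""
    new_doc = dict(workflow_doc)
    new_doc["tasks"] = workflow_doc["tasks"] + [task]
    new_doc["version"] = workflow_doc["version"] + 1  # one version for everything
    return new_doc

def add_task_resource_style(workflow_resource, task_store, task):
    """Resource-style: each task has its own state and history."""
    task_id = "task-%d" % (len(task_store) + 1)
    task_store[task_id] = {"detail": task, "status": "ready", "version": 1}
    workflow_resource["task_refs"].append(task_id)  # workflow holds references
    return task_id

doc_v2 = add_task_document_style({"tasks": [], "version": 1}, "review-images")
task_store, workflow = {}, {"task_refs": []}
add_task_resource_style(workflow, task_store, "review-images")
```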

Problem solved?  Not really; now I need to propose the work and get it accepted through HL7, and then we need to profile this to make it work across the health care continuum.  Monday I'll be talking about how these different pieces could fit together.  What I'm proposing here is a better way to do what XDW does, and it will take some time to get it done.  Hopefully, I can get HL7 to start on the standards side of this in time for IHE to begin work on the other half.

I'm certain some people will be unhappy about a proposal to change or update XDW, but in my own opinion, I think this would be a great way to move workflow forward.  It's just going to take some time.  I'm happy to move this forward at Internet speeds, but what I will very much try to avoid is to move it forward at ONC's preferred speed (yesterday).

   Keith



Thursday, April 2, 2015

Your problem is not ours

This is a recurring theme for a lot of capabilities being suggested for meaningful use.  The "you" and "us" in this case are the US Federal Government (represented by CMS, ONC, CDC and others) and healthcare IT stakeholders, respectively.

How does this show up?

Vaccine Inventory: A physician's EHR is intended to capture information about clinical care of patients, not to manage national inventories of vaccines.  Aren't there better ways to capture this?  I know we already send a lot of this data to local public health in other ways, so why do you need to push this onto the EHR?  In this particular example, while I might agree that some Health IT may need to track vaccine inventory, and that other codes (such as NDC) may be applicable, I do NOT agree that these should necessarily be exchanged in clinical documents designed to support clinical care.  Use the right tool for the job.

Fraud Investigation: Some parts of CMS would like us to create an electronic process to support fraud investigations, which, as described in the 2015 certification rule, requires some three different digital signatures in our health IT systems.  This would replace an existing process where they presently accept faxed copies of documents with wet signatures, in some cases applied by a stamp.  All these signatures are in service of what?  Who is going to pay for the additional digital signature infrastructure?  Where is the governance of these certificates?  How will this be governed?  If we have learned anything from Direct, it is that trust models are important, and that radical changes to trust models should be undertaken very carefully.

Social Services Data Capture:  Modules in physician health IT systems are proposed in the 2015 Certification rule to become the mechanism of data capture for a great deal of VERY sensitive personally identifiable information, some of it not directly related to healthcare issues.  Gender identity, annual income, or use of social services are certainly related, but not directly applicable.  While this might be useful in the context of providing many social services, the standards proposed in the 2015 Certification rule were not developed in coordination with the IT systems used by social services.  I'm NOT certain that this additional requirement is something that should be added to the responsibilities of healthcare providers without consideration of the other impacts it has on healthcare services.

In any case, I think we need to be very careful about how we create new criteria for health IT modules.  In the standards advisory published by ONC, a couple of these issues (Fraud investigation and much of the data that I mention with regard to social services data capture) weren't even investigated by Federal Advisory Committees.


Wednesday, April 1, 2015

Why we still need the CCR

Ha, made you look!  Happy April 1st.

   -- Keith