Wednesday, December 1, 2010

Reconciliation of Problems

Today IHE PCC members held our first call on the Reconciliation profile approved by the PCC Technical Committee for the 2011-2012 season, discussing issues around reconciling problems.  This is probably the second in a series of posts on the Reconciliation profile.

What follows are some not-quite-stream-of-consciousness (but nearly so) thoughts on that meeting.  It was, by the way, very productive, thanks very much to Wendy (whom I've now embarrassed), who lent the expertise she gained dealing with this problem in Cancer Registries to this topic.

Here is a short summary of what I think were some of the important conclusions that we reached:

Assumptions:
There are two stages to reconciliation: the "automation" stage, where software analyzes the information and proposes reconciled results, and the "human decision" stage, where a healthcare provider selects the appropriate outputs.

One question this raises, which we did not address at all, was whether the patient could select the appropriate outputs.  There is no reason why this profile could not be used by a PHR, but it broadens the use case.  I think that becomes an open question in the profile.

We focused on the first stage, and particularly on problems.  The first step we identified was that there might be a filter that selects the items from the problem list that need to be reconciled.  In CCD, and therefore in IHE profiles, a problem can be classified as a complaint, symptom, finding, problem, condition, diagnosis, or functional limitation.  We pretty much agreed that reconciliation should focus on problems, conditions, and diagnoses.  I don't know that we came to a conclusion on functional limitations.  On complaints, symptoms, and findings, we generally agreed that these are typically point-in-time events that do not need to be reconciled.  So, a patient report of a runny nose on 11/1 is different from a report of a runny nose on 11/2, even though they might be related to the same diagnosis.
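
To make that filtering step concrete, here's a minimal sketch in Python (the field names are made up for illustration; the profile itself operates over CCD/CDA entries, not code):

```python
# Hypothetical classifications, mirroring the CCD problem types.
RECONCILABLE = {"problem", "condition", "diagnosis"}

def select_for_reconciliation(entries):
    """Pass only problems, conditions, and diagnoses on to reconciliation;
    complaints, symptoms, and findings are treated as point-in-time events.

    `entries` is assumed to be a list of dicts with a 'classification' key."""
    return [e for e in entries if e["classification"] in RECONCILABLE]
```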

I asserted that all of these items have an identity, and that if you encounter two problems with the same identity, then according to the semantics of the information model they must refer to the same problem.  If they don't, that is a bug.  This is the only case we identified where two items are the same with absolute certainty.
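
In code terms, the identity test is the only exact one; a tiny sketch, again with hypothetical field names:

```python
def same_problem_by_identity(a, b):
    """Entries with the same identifier must, per the information model,
    be the same problem; if they aren't, that's a bug in the source data."""
    return a.get("id") is not None and a["id"] == b["id"]
```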

The next step is to find items that refer to the same kind of problem.  There is a clear case: if the codes identifying the problem are the same for two items, then the items on the list could refer to the same problem.  So, if the code represents "ankle sprain" in both cases, you have a candidate pair.  Coding systems have built-in hierarchies (ICD-9-CM) and is-a relationships (SNOMED CT), which means that two different codes could also represent the same problem at different levels of specificity.  Identifying these requires clinical knowledge that is outside the scope of the profile, but the profile needs to be aware that this knowledge can be applied.

But code is not enough; you also need the time span.  If the time spans are identical, the candidate match is very strong.  If they overlap, it is strong, but not quite as strong.  If they are separated by some time then, depending upon the problem, they could still be the same problem.  For example, if the problem is "lung cancer" and the two time spans are separated by five years, this might be considered two separate problems, but not if the separation is less than that.  For other diseases, like flu, the relevant time span might be shorter, and for chronic conditions like diabetes, time may not be relevant at all.  Again, this is clinical knowledge that would need to be applied.
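
Here's a rough sketch of how that candidate scoring might look.  The `is_a` callback and `max_gap` threshold are stand-ins for the terminology and clinical knowledge that the profile deliberately leaves out of scope, and all of the field names are hypothetical:

```python
from datetime import timedelta

def candidate_match_strength(a, b, is_a=lambda child, parent: False,
                             max_gap=timedelta(0)):
    """Rough scoring of whether two problem entries could be the same problem.

    `is_a` stands in for a terminology service (SNOMED CT is-a, ICD-9-CM
    hierarchy) and `max_gap` for the clinically determined separation within
    which two episodes may still be one problem.  Each entry is assumed to
    carry a 'code' and a 'span' of (start, end) dates."""
    related = (a["code"] == b["code"]
               or is_a(a["code"], b["code"]) or is_a(b["code"], a["code"]))
    if not related:
        return "none"
    if a["span"] == b["span"]:
        return "very strong"                      # identical time spans
    if a["span"][0] <= b["span"][1] and b["span"][0] <= a["span"][1]:
        return "strong"                           # overlapping time spans
    gap = max(a["span"][0], b["span"][0]) - min(a["span"][1], b["span"][1])
    return "possible" if gap <= max_gap else "unlikely"
```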

Deciding which code to apply in the reconciled result when hierarchy or is-a relationships exist led to a discussion about which one is preferred, the authoritativeness of the source, and the originality of the information.  Diagnostic tests, for example, tend to produce more definitive and finely specified results than an initial diagnosis (a gross generalization).  There seemed to be a preference for more fine-grained results, but there is also a desire to be able to access the original diagnosis when available (I think for quality and other metrics).  I think the jury is still out on this, but I personally lean towards the refined result.
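
If we do lean toward the refined result, the selection rule is simple enough to sketch; `is_a` is again a placeholder for a terminology service, and nothing here is drawn from the profile itself:

```python
def prefer_more_specific(code_a, code_b, is_a):
    """Lean toward the finer-grained code when one subsumes the other, but
    hand back the original too, so it stays available for quality measures.

    `is_a(child, parent)` stands in for a terminology service.  Returns
    (preferred, retained_original), or None when the codes are unrelated
    and a human has to decide."""
    if is_a(code_a, code_b):      # code_a refines code_b
        return code_a, code_b
    if is_a(code_b, code_a):      # code_b refines code_a
        return code_b, code_a
    return None
```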

One thing we did not address is body site.  Some problems (like ankle sprain) would clearly be different if they applied to a different target site (left vs. right).  So that may also need to be considered.

There are basically three kinds of sets to identify: sets of problems that are identical (see above), sets which are nearly so (same code and clinically overlapping times), and sets which contain refinements (slight variations in code and clinically overlapping times); plus, cutting across these, sets where two or more items contain conflicting information (document A says flu resolved at time X, and B says flu is still active at time Y, where Y > X).
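
Put another way, the automation stage has to label each proposed grouping by kind, and flag conflicts within it.  A sketch of the buckets:

```python
from enum import Enum

class MatchKind(Enum):
    IDENTICAL = "same identity (same id; must be the same problem)"
    NEAR_MATCH = "same code, clinically overlapping times"
    REFINEMENT = "related codes (is-a), clinically overlapping times"

# Cutting across all three: a grouping may also need a conflict flag, e.g.
# document A says the flu resolved at time X while document B says it is
# still active at a later time Y.
```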

For problems, there is secondary information, like problem status, comments, and severity, which doesn't seem all that useful for determining whether two problems are the same, but which may itself need to be reconciled.

We agreed that comments should not be carried forward to a "new" problem that results from the reconciliation process, because as the data changes, the comment may lose relevance.  For example, a diagnosis of flu that subsequently morphs into pneumonia as more information becomes available could have a comment to the effect of "watch out for pneumonia" in the flu stage, because the patient might be particularly susceptible.  If that comment were retained when the problem transitioned into "pneumonia", it would no longer make sense.

On a similar note, but not widely discussed, was "severity".  The severity of a problem can change over time, going up or down.  It's really an "annotation" on the problem that should have its own effective time and could appear in multiple instances, but we didn't model it that way in IHE.  As modeled in IHE, the Severity observation is not separable from the original diagnosis.  So, I think that if we want to deal with severity tracking over time, we need to create a new template which includes the effective time of the severity observation.  That won't break anything, and the original IHE severity template is optional on problems, so we can fix it without breaking existing implementations.
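
If severity did get its own effective time, a reconciled problem could carry a small history of severity observations, roughly like this (purely illustrative; this is not the existing IHE template):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SeverityObservation:
    value: str            # e.g. "mild", "moderate", "severe"
    effective_time: date  # when this severity applied, independent of the problem's time

@dataclass
class ReconciledProblem:
    code: str
    # Severity becomes a repeatable, time-stamped annotation rather than a
    # single value welded to the original diagnosis.
    severity_history: list[SeverityObservation] = field(default_factory=list)
```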

Now for some thoughts we didn't address on the call:
Going back to HL7 semantics for a problem, using the example of flu observed on 11/1 as an active problem: on 11/12 it's still an active problem.  If on 11/14 the problem (flu) is resolved, then this is simply part of the state diagram for an act, going from new on 11/1, to active, to completed finally on 11/14.  But if on 11/14 it instead becomes pneumonia, what you really have is a new act (the observation of pneumonia) that replaces or succeeds the previous act (the observation of flu).  Following the HL7 Concern Tracking model, pneumonia is a new concern that either replaces the previous concern about flu, or could have the flu concern as one of its sub-components.  I think, though, that in this case we can safely say that while these are related, they are not the same concern.  But for ankle sprain, where the observation was originally just "ankle sprain" but gets refined to "ankle sprain of deltoid ligament", the concern is the same, BUT the observation which is its subject is refined to the new code.
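
Here's a loose way to picture that distinction in code; the class and method names are mine, not HL7's:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    code: str
    status: str = "active"   # new -> active -> completed, per the act state machine

@dataclass
class Concern:
    observations: list[Observation] = field(default_factory=list)

    def refine(self, new_code: str) -> None:
        """Same concern, refined subject: 'ankle sprain' becomes
        'ankle sprain of deltoid ligament'."""
        self.observations.append(Observation(code=new_code))

# Flu evolving into pneumonia is different: that is a *new* concern that
# replaces or succeeds the old one (or carries it as a component), not a
# refinement of the same concern.
```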

That raises another issue, which is: whose concern is this?  If the concern "belongs" to the author, then changing the identity of the author changes the identity of the concern.  But according to the HL7 Concern tracking model, concerns are "of the patient", rather than owned by any single provider:
Thus the concern class can function as a grouper for all activities associated with a specific patient-related problem. The problem can, due to different observations, evolve over time and can be tracked or managed by one or more care professionals. Different professionals may have different opinions of the nature of the problem, but all their observations are grouped under the same concern. So essentially, the concern class is used to track what affects the patient. It is independent of the assigned profession, and can have different personal diagnoses attached. It is what ties all underlying activities (Acts) together.
So, no problem here.  The concern act is a collaboration between healthcare providers, not owned by any single one of them.

That introduces a new problem.  Who is the author of a concern that has been reconciled?  Is it authored by the original provider, by the reconciling provider, or both?  I think the choice here is pretty simple, but again, we didn't address it on today's call.  Realistically, as concerns are updated through reconciliation, the latest reconciler becomes an author at the time of the reconciliation, and the concern can retain previous authors.  The next question I have is what needs to be recorded in a reconciled problem list.  I think that, at a bare minimum, the most recent author needs to be retained in the transmission of the information, but prior authors may be retained as well.
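
A trivial sketch of that authorship rule (names are illustrative only):

```python
from datetime import datetime

def record_reconciliation_author(prior_authors, reconciler_id, when=None,
                                 keep_prior=True):
    """The latest reconciler becomes an author as of the reconciliation time;
    prior authors may (but need not) be retained when the list is transmitted."""
    when = when or datetime.now()
    retained = list(prior_authors) if keep_prior else []
    return retained + [{"author": reconciler_id, "time": when}]
```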

Some other things that came up:
1.  We need a way to indicate that a list of items was reconciled, by whom, on what date, and using what sources of information.

My analysis is that this is an act (reconciliation), with a performer (the reconciler), that occurred (in event mood) on a specific date and time.  The act references information sources, which can be external documents (a CDA document) or information sources (a clinical data repository returning results via QED).  Describing the former is easy: just use the document ID.  Describing the latter is more difficult, as you would at the very least need to describe the query, the information source, the date/time of the query, and the identity of the querier.  We could potentially address that problem by requiring the results of any query to be stored in a document so it could be referenced later if need be.  (A rough sketch of what such a record might carry appears after this list.)

This act can be contained within the organizer for the type of list being produced (MEDLIST, PROBLIST, et cetera), which could simplify some of the data recording for the contained content.

2.  Interim results from automation need to include all of the data that needs to be (or was) reconciled, but final results need only include references to what was reconciled.  That is because the reconciliation agent needs to be able to show the differences between the sources, but the final reconciled list need not, so long as all information can be traced back.

3.  There needs to be some way to represent the clustering of information.
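
To tie items 1 and 3 together, here is a rough, purely illustrative sketch of what a reconciliation act and its clusters might carry.  None of these names come from the profile, and a real implementation would express this as CDA entries rather than Python:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class InformationSource:
    """Either a referenced document (by ID) or a described query."""
    document_id: Optional[str] = None
    query_description: Optional[str] = None  # query, source, time, and querier

@dataclass
class Cluster:
    """One grouping produced by the automation stage: references to the source
    entries judged to describe the same problem, plus the proposed result."""
    source_entry_refs: list[str] = field(default_factory=list)
    proposed_result: Optional[dict] = None   # full data in interim output,
                                             # reference-only in the final list

@dataclass
class ReconciliationAct:
    performer: str                            # the reconciler
    effective_time: datetime                  # when reconciliation occurred (event mood)
    sources: list[InformationSource] = field(default_factory=list)
    clusters: list[Cluster] = field(default_factory=list)
```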

1 comment:

  1. FWIW, a video that covers HL7 v3 Concern acts: http://www.vimeo.com/10865197

    A reconciliation just sounds like any other observation in v3. Many observations (e.g. a diagnosis) are based on all sorts of information sources, i.e. all observations include an element of reconciliation.
