Tuesday, June 30, 2015

Allergies, Severity and Criticality in HL7 CCDA

A rather long document wound up in my e-mail this morning, sent to Structured Documents co-chairs, HL7 leadership, and several other individuals (myself included).  The e-mail addresses concerns from the HL7 Patient Care Workgroup about a patient safety issue with the current C-CDA templates for allergies, specifically in how these templates address the criticality of an allergy vs. the severity of a reaction.  C-CDA does NOT in fact relate criticality to an allergy, only severity.  BTW, that concept of allergy severity goes back to 2005 and before; it was adopted in CCD and carried forward into subsequent releases.

The Patient Care Workgroup has been working on these topics for a number of years, and believes that they should be addressed in the HL7 C-CDA DSTU 2.1 Update Project.  I could readily be convinced that this is the case.

The concern of the Patient Care Workgroup is that the existing templates do not appropriately address how allergies should be presented to a clinician, and as a result produce a patient safety concern that should be addressed as soon as possible.

I'm pleased to see HL7 taking on this kind of challenge, and I look forward to seeing how it changes our processes going forward.  In this case, we have a "conflict" between two governance groups over something that could affect patient care.  Arguably, there should be a process to address this sort of issue.  Medical device manufacturers already have processes to address patient safety issues in the technology they produce; I think HL7 will need to adopt similar processes.

We saw a somewhat similar issue (addressing security rather than safety) crop up last year with CDA stylesheets.


Friday, June 26, 2015

Pedantry has its place

A couple of recent vocabulary discussions on HL7 working group lists made me reflect on the topic of precision in the definitions in standards.  We (standards developers) spend a great deal of time being very precise in our definitions, often making very fine distinctions about things that the people who wind up using them in the real world aren't aware of and, for the most part, don't need to be.

We need this degree of precision in our processes, but we have to remember that the precision is for our own use, not necessarily for that of the developer.  What developers who implement the standards need is something that is clear and obvious and makes sense.  If we cannot take our very precise definitions and explain them to a developer, then, as Richard Feynman would say, we don't really understand them ourselves.

   Keith

Thursday, June 25, 2015

He was always my manager...

Sad news came to me yesterday about the best manager I never had.  Kirby Mansfield was the Director of Software Development at the Software Division of Houghton Mifflin, and one of the people principally responsible for my being hired by Houghton Mifflin and coming to work in Boston.

It was the summer of 1992 (I think) when I came up to Boston to visit my best friend Tom (one of the reviewers of the CDA Book, and a software development colleague I've worked with at three different companies), and to spend time with him and my soon-to-be girlfriend, later my wife.  Tom brought me into work with him at One Memorial Drive (now the Microsoft NERD Center) one day to meet his buddy Win, and his boss Kirby.

When he introduced me to Kirby, we started talking about what I was doing, and Tom made some excuse about having to go to a meeting, and I found myself in a job interview I never expected with one of the kindest and gentlest people I would ever meet in life.  It was so subtle that it took me about 15 minutes to discover what Tom had contrived. While I wasn't in the mood for a new job at that time, my situation changed about 3 months later, and I called Kirby back.  "About that job you were talking about?" I said to him, "I'm interested."

Kirby had just been promoted up the chain, and so I would now be talking to his replacement.  We did a short phone interview, and I was hired within about 48 hours. Kirby made sure I got a good relocation package and a good salary.

Over the next decade (yes, I worked for the same company for that long), Kirby would never be my manager.  I always reported to someone else, and he migrated quickly to the top; I never quite caught up with him.  But what I remember most about him was that he was always on the floor talking to people up and down the chain, always seeking the opinions of others.  He would patiently explain our business strategy to anyone who asked, and always encouraged everyone to do their best.  Kirby listened, and when appropriate, he also changed his mind.

Kirby was the kind of manager everyone always dreams about.  He was considered to be a mentor to about a half dozen different people I know, all of whom became excellent managers under his tutelage.  That's a pretty significant achievement when you think about it.  Being a mentor is a very special relationship.

I caught up with him every now and then over the years, but never frequently enough -- at least as I look back at it now.  Kirby was never my manager, but in my heart, he will always be my manager.

Friday, June 19, 2015

Remember When?

Remember how almost nobody had a PC, and now everyone does?  Remember how nobody had a word processor or spreadsheet, and now everyone does?

How many of you remember what it took to install an interface card in an IBM PC or compatible system?  You remember jumpers, IRQ settings, port addresses?  Do you remember configuring drivers?  And then various changes to the technology came along, and after a few years, we just plugged it in and it worked.  Well, mostly.  Some cards didn't live up to the standards.  Some had configuration jumpers for different features anyway.  And some pairs of cards just wouldn't work together at all.

Do you remember what it was like in the days of setting up printers with your favorite word processor?  Especially when it needed a custom driver?  And then, when Windows came along, we no longer had to configure every application, but now we needed to install a driver from the manufacturer for our printer when we hooked it up?

And then Windows 95 came along and got rid of all of that with Plug and Play.  Well, most of it.  OK, some of it.  And it got better over time.

And cables?  Remember having to build serial cables?  Or getting long parallel cables?  Now we have USB, or even WiFi and Bluetooth.

So, now, you can just plug something into your PC, and it works, mostly.  Drivers are automatically installed, downloaded or even updated over the Internet.  How long did that take?

Let's take a look, why don't we:

The IBM PC was announced in 1981.
Windows 3.0 was announced in 1990.
Plug and Play came with Windows 95.
USB 1.0 was announced in 1996, but didn't reach general adoption until USB 1.1 in 1998.
WiFi came out as 802.11 in 1997, but it took 802.11b in 1999 before it became widely adopted, and then the WiFi Alliance was born.
Bluetooth showed up at the turn of the century.

These days, nearly 35 years later, you just plug it in, and it works. Well, mostly.  Sometimes you still need to deal with those crap consumer driver disks that the manufacturers like to give you for home use products.  And sometimes it still doesn't work.

Some of you reading this blog never had to deal with this OLD stuff; it was simply before your time.  But for those folks in DC who think major technology advances happen in 3-5 year increments, I wish they'd think back to the days before the Internet, and remember how long ago that was.  It didn't happen overnight.  A baby born on the day the IBM PC was announced is barely old enough to hold office in Congress, but still isn't old enough to run for President.

Yes, we've all got a long way to go for interoperability in healthcare.  But at the same time, we also aren't in the enviable position of having only two or three vendors with near monopolies on the applications and platforms to choose from (who then get to adopt the standards).  No, it's more like two or three thousand vendors, and the standards that I'm referencing are about 3-4 layers higher up on the stack of standards that got us to where we are today plugging in a printer.

When's the last time someone just connected major infrastructure components in any business's enterprise with the expectations that some have put forth in healthcare?  Never.  Think about it.  What other industry's technology infrastructure has received so much attention?  Forget banking.  Anything that requires nothing more than a 2400 baud modem to communicate a single transaction isn't on the same footing as healthcare.  If you have to ask why this is so, you probably aren't qualified to be making major decisions about technology infrastructure.

So, do me a favor, Senators: get out of my way and let me work.  I get the need, and unlike many, I actually know what to do about it.

Thursday, June 18, 2015

HL7 Template Versioning Rears its ugly head ... again

One of the challenges we've discovered in the C-CDA 2.1 DSTU Update project is the process needed to update templates that reference updated templates.  This involves what we've grown to know as the "ripple effect": a change to one template, such as the Result Observation, requires new versions of templates all the way up the chain, resulting in changes to 13 other templates.  Each of these changes requires six additional steps, according to a list provided by Brett Marquard, my co-lead on this project:
  1. Update the Title
  2. Update the Identifier
  3. Update the conformance binding to the new template extension
  4. Re-add the figure and update it, since versioning does not carry figures forward
  5. Update the contained entity
  6. Update the containing entity
  7. (my addition to the list) Update the examples!
I had proposed today on the SDWG call that we consider version-specific references to a template to really be errata.  The point of changing to the new versioning strategy was to allow a version non-specific reference to be made to a contained template in a conformance constraint.

A contained template reference can be rewritten from:

Results Section (entries optional) (V2) (optional)
  1. SHALL contain exactly one [1..1] Result Organizer (identifier: urn:oid:2.16.840.1.113883.10.20.22.4.1) (CONF:15516).
to the following:
Results Section (entries optional) (V2) (optional)
  1. SHALL contain exactly one [1..1] Result Organizer (identifier: urn:oid:2.16.840.1.113883.10.20.22.4.1 or later revision) (CONF:15516).
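In instance terms, the difference is whether a receiver matches on both the root and the extension of a templateId, or on the root alone.  A minimal sketch (the extension date here is illustrative, not the published one):

  <!-- Version-specific reference: both root AND extension must match -->
  <templateId root="2.16.840.1.113883.10.20.22.4.1" extension="2014-06-09"/>

  <!-- Version non-specific reference: match on the root alone, so any
       later revision of the Result Organizer template also satisfies it -->
  <templateId root="2.16.840.1.113883.10.20.22.4.1"/>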
Pros for this proposal:
It eliminates the necessity to ripple.  Because this change would be deemed errata, it does NOT require a change to anything using the Results Section, and Results Section can use a later version in other guides calling on it.

Note: The guide itself can have a conformance constraint that indicates that all referenced templates must conform to at least the versions of the referenced templates present in the guide.  This turns into a single Schematron constraint; it might be complex, but it can certainly be generated from the list of template identifiers present in the guide (see the sketch below).
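As a sketch of what that constraint might look like for one template (illustrative only; the date extension is made up, and a real rule would be generated for every identifier in the list):

  <sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
    <sch:ns prefix="cda" uri="urn:hl7-org:v3"/>
    <sch:pattern>
      <!-- Require any reference to the Result Organizer template to be at
           or above the version published in this guide.  Date-formatted
           extensions are compared numerically by stripping the hyphens,
           since XPath 1.0 cannot order-compare strings. -->
      <sch:rule context="cda:templateId[@root='2.16.840.1.113883.10.20.22.4.1']">
        <sch:assert test="not(@extension) or
                          number(translate(@extension, '-', '')) >= 20140609">
          Result Organizer references must conform to at least the version
          of the template present in this guide.
        </sch:assert>
      </sch:rule>
    </sch:pattern>
  </sch:schema>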

Cons raised on the call:

  1. The specificity of the guide is reduced.
  2. The tooling would need to be changed to support this.
  3. There are concerns that this means all versions of a template would need to be backwards compatible in the future.

While I agree that this reduces the specificity of the guide, the present degree of specificity is what causes this ripple effect, and this is, as one person put it: "... a bad bad problem ...".

With regard to the tooling, yes, we would need to address this in the tooling at some point.  However, we could automatically correct the output of the tooling by making a list of the template OIDs that require this change, and applying the changes to the Word document automatically.  Later changes would be needed to address the binding change, but those could be made after publication.  This is, of course, not ideal, but better perhaps than further delays.

Finally, with regard to backwards compatibility, I would argue that if there are major changes to the way a template is modeled (such as was done for problem and allergy status), the appropriate way to handle those changes is to DEPRECATE the old template, and create a new one if need be (we didn't need to do that for those templates, because we handled it differently).

Yes, this would ALSO require some changes to the automatically produced Schematron in the tooling, but I can automate many of those changes as well.  I would also note that tooling issues, while important, should be secondary if a viable workaround exists.  I'm getting really discouraged about how proprietary HL7 tooling is preventing standards work from progressing the way it needs to.  If it were open source, at least I could work on fixing the tooling issues.

The HL7 Templates DSTU recognizes containment of another template as one of the possible kinds of constraints on a template (see section 2.10.2).  It also recognizes static and dynamic bindings associated with the containment constraint (see section 2.10.5):
... an artifact is bound to an element either as
  • STATIC, meaning that they are bound to a specified version (date) of the artifact,
  • or DYNAMIC, meaning that they are bound to the most current version of the artifact. 
Value set bindings adhere to HL7 Vocabulary Working Group best practices.
A STATIC binding is a fixed binding at design time whereas a DYNAMIC binding implies a need for a look-up of the (most recent) artifact at runtime.
It can also be found in practice to “freeze” any binding defined as “dynamic” to the most recent artifact at the time of the official publication, making “dynamic” bindings actually “static” for the most recent version. This makes the publication stable with regards to the binding of artifacts.
I'm proposing this as ONE possible solution to the ripple effect.

Another possible solution is to automate making some of the changes to the templates.  When we originally made these version number changes, I worked with Lantana to make the changes using an XSLT on a download of the templates in the guide (including fixing examples), and the material was then re-imported into the guide.  I could do this again.
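I don't have that transform in front of me, but the shape of it is simple: an identity transform plus one override per template identifier that needs a new version.  A minimal sketch (the root and date are illustrative):

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:cda="urn:hl7-org:v3">
    <!-- Identity template: copy everything through unchanged -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>
    <!-- Override: stamp the new version extension onto one
         specific template reference -->
    <xsl:template match="cda:templateId[@root='2.16.840.1.113883.10.20.22.4.1']">
      <templateId xmlns="urn:hl7-org:v3"
          root="2.16.840.1.113883.10.20.22.4.1" extension="2014-06-09"/>
    </xsl:template>
  </xsl:stylesheet>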

Wednesday, June 17, 2015

Principle Driven Development of Standards

One of the things we did for the C-CDA DSTU 2.1 project was to identify a number of principles we would use to make decisions about how to modify templates written in C-CDA DSTU 2.0 to support backwards compatibility with C-CDA DSTU 1.1.  So long as we agree upon the principles (and we've tested them out), we can simply do the work, using these principles to guide us.  Those principles have become invaluable as a means of determining what needs to get done.

Principles come in a variety of forms.  They can be:

  1. part of your methodology or process.
  2. derived from the scope of your project.
  3. based upon best practices.
Using principles to guide the development makes it easier to make decisions.  All too often in development, we wind up answering the same kinds of questions repeatedly (and in the same way). If you find yourself doing that, you've probably found a place where you should be defining a principle.

   Keith

Monday, June 15, 2015

Security is also about Accessibility

This morning's supposedly quick stop at my bank's safe deposit box reminded me of a point which sometimes needs clarification.  I left the house early to go to my bank to get my motorcycle title from my safe deposit box (I'm upgrading from a 650 to an 1100 later this week).  After getting into the safe, we couldn't get the safe deposit box keys to work.  There were two challenges:  First, the bank manager didn't know which of his keys he needed to use, because there's a separate one for each set of boxes, and they aren't labeled.

Secondly, no matter which of his keys we tried, with his and my key together we still couldn't open the box.  He called a locksmith, and after they showed up, we were able to get in.  Unfortunately, I didn't get to see the locksmith drill the box, because he was able to find the right key just by looking at them, and then jiggle the key and the door just right to open it.  The problem was that the box next to mine had recently been reinstalled incorrectly, making mine ever so slightly inaccessible.  That delay cost me several hours of my time.

Security is about protecting assets, but an asset is useless if you cannot use it.  A key which locks everyone out, including the people who need to use the asset, is almost as bad as leaving the asset unsecured.  In fact, given the risk profile I'm dealing with (what I store and what it needs to be protected against), a good fireproof safe in my house would probably be a better investment than a safe deposit box.

The same is true of patient records.  The HIPAA Privacy and Security regulation is very much like my safe deposit box was this morning.  It does just as much (or, in fact, more) to keep me away from my health records as it does to secure me against others accessing them inappropriately.  A delay in accessing those records could cost a lot more than time.

   Keith

P.S. As a reminder, today is the last day to comment on MU Stage 2.  See Regina Holliday's blog for her comments on NoMUwoME.

Friday, June 12, 2015

FHIR and BlueButtonPlus for Newbies

This one is for ePatient Dave, who asks:


Blue Button Plus (BB+) is an initiative that started at a White House sponsored meeting several years ago (I got to attend with my daughter).  I've written a number of posts about this particular effort, and participated in the Blue Button Plus Pull initiative of the ONC Standards Initiative project.  That workgroup produced an API based on an early draft of the HL7 FHIR standard.  Integrating the Healthcare Enterprise further developed that into a specification called Mobile Access to Health Documents (MHD for short).  BB+ Pull and MHD are specifications that meet the requirements of the ONC Certification and Standards regulations for EHR systems to support View and Download.  A sister specification, BB+ Push, uses a different protocol (called Direct) based on e-mail to support the "Transmit" requirement of that regulation.

BB+ Pull and VDT capabilities allow you to access the same kinds of clinical data at nearly the same level of fidelity that healthcare providers have in their EHR systems.  This allows patients to use this data in ways of their own choosing, including sharing the data with other healthcare providers, or tracking and using it in applications on their own computers and mobile devices.

As a standard, FHIR has really taken off, reaching heights of awareness in the trade press, academic organizations, and among medical professionals that many standards organizations would drool with envy over.  HL7 and FHIR's creators have much to be proud of in this.  I've awarded several of the developers an Ad Hoc Harley award, including Grahame Grieve, its chief architect, and Josh Mandel (who was also very involved in the BB+ work).  However, FHIR is also still in the early adoption stages.  Several vendors are supporting its development and incorporating it into products, but it will take some time for those versions of the products to reach a doctor near you.  From product release to mainstream deployment across a customer base can take several years.

While FHIR has really taken off, Blue Button Plus has yet to reach similar dizzying heights.  The Blue Button Connector page lists 140 hospitals, 9 pharmacies, 4 labs, and 39 payers who provide BB+ access, but few of these allow you (as BB+ Pull does) to send those records to your favorite application.  There are also about 13,000 physicians who have attested to MU Stage 2 and provide some access for download, but they likely don't provide the BB+ Pull capabilities.

Both of these specifications have a lot to offer patients, and we are likely to see even more utilization of FHIR, Blue Button Plus, and IHE's Mobile Access to Health Documents, as well as another FHIR-related ONC initiative, the Data Access Framework, which will provide access to granular health data instead of documentation of encounters.  However, at the current rates of development and deployment, it will probably be five years or more until I, as a resident of a rural community, can find a nearby physician who uses a system that supports these standards.  If I were to drive an hour to a Boston-based physician, I might have to wait only three years for wide-enough deployment to readily find a physician who could give me my damn data.

   -- Keith

Thursday, June 11, 2015

How Transparency Enables Competition in a Market

So, I'm planning to have solar PV installed at my new house.  The state of Massachusetts and the federal government provide a lot of incentives to enable this.  I can get 30% of my installation cost back in federal tax credits, and something like $1000 in state credits, plus I can generate value through SRECs.  The state publishes a list of installers, as well as the cost of installation in $/Watt.  I've been using that list to pick solar companies to talk to.  The most popular installer also happens to be one of the more expensive, at $5.50/W, while others are in the $3.50 to $4.00 range.
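That per-watt difference adds up quickly.  For a hypothetical 5 kW residential array (my arithmetic, not the state's):

  5 kW at $5.50/W = $27,500;  5 kW at $4.00/W = $20,000
  Difference: $7,500 before incentives, or $5,250 after the 30% federal credit.

Published pricing is what makes that comparison possible at all.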

In ONC's report on "Data Blocking", they note that pricing for interfaces is unknown.  Wouldn't it be interesting if providers incented under MU also had to report the price they paid for a system in order to get the incentive?

Oh, damn, the incentives are done. Maybe next time.

Should HL7 support backwards compatibility only, or backwards compatibility as an option, in the C-CDA 2.1 DSTU Update?

One of the outstanding questions that the C-CDA project has to address is whether the 2.1 DSTU will require backwards compatibility only, or whether there will be both a mode supporting backwards compatibility and a mode where the C-CDA 2.0 constraints are simply adopted.  We haven't made a decision yet.

Some would argue that supporting backwards compatibility only will simplify the guide for people.  I have argued that it will require HL7 to create a new project further down the line, and will not enable C-CDA 2.1 to be used in projects which are presently dependent upon C-CDA 2.0, some of which, like QRDA, also have a regulatory component naming a version of the standard.

To move forward, I will be proposing the following wording be adopted in constraints that are added to support backwards compatibility:
To support backwards compatibility, XXX SHALL/SHOULD/MAY YYY
This highlights the constraints added to support backwards compatibility.  Then we add language to the top of the specification depending upon which choice we make:

  1. If we choose backwards compatibility only, we state:
    This specification adds constraints supporting backwards compatibility.  These constraints are identified by beginning with the phrase "To support backwards compatibility", and are otherwise equivalent to any other constraint.
  2. If we choose to support both, we state:
    This specification adds constraints supporting backwards compatibility.  These constraints are identified by beginning with the phrase "To support backwards compatibility."  These are conditional constraints. When a document instance declares that it is supporting backwards compatibility by [mechanism to be established], these constraints must also be followed.  
This will allow the project team to create the constraints now, and make the choice later about whether these constraints are conditional or not.

HL7 is looking for feedback on this.  If you are a member of the Structured Documents mailing list, you can provide your feedback there, on next week's Thursday call, or on tomorrow's C-CDA 2.1 DSTU Update project call.  As always, any feedback you provide in comments here I will forward to the workgroup, supporting it as best as I am able even though I have my own preferences.

Personally, I could still be convinced of option 1 by an argument that resonates with my own concerns, but am still strongly leaning towards option 2.

Tuesday, June 9, 2015

The role of a patient with respect to their chart

For my ethics class I'm writing a term paper that develops an HL7 model showing the various entities, roles, participations and their relationships associated with health information exchange.  The point of this model is to illustrate the various roles with regard to patient data, in order to facilitate ethical evaluation of "Data Blocking".

In the process, I find that there is no appropriate HL7 Role class that can be used to relate a person to the health chart associated with their care.  The health chart role is scoped by the organization that maintains it, when in fact I had expected it to be scoped by the patient.  The RIM also allows me to relate the organization to a patient chart via the maintainer role, so I think there's some duplication here.  The best I can do is show that the patient role has indirect authority (a role relationship) over the health chart role.  But I need a direct role relationship between the patient and their chart, because roles confer rights and responsibilities.  I think for this paper I'll use the patient as the scoper and connect the healthcare organization via the maintenance relationship.

This gap is interesting because it shows the disconnect in thinking about patients and their rights and responsibilities with respect to their chart.

Correcting this gap seems fairly tricky, save that there is little to no use of HLTHCHRT in the HL7 2014 Normative Edition.  It might be worth proposing that this role be deprecated, and that a new role, Patient Chart, be added, where the player is the Health Chart entity and the scoper is the Patient.

I'll have to think about that.  RIM Harmonization is a pretty challenging process, and I've already got a lot of irons in the fire (or is that FHIR?).

   Keith

Monday, June 8, 2015

The case against a negationInd extension for supply and encounter in CDA and QRDA

It's been proposed recently that the prohibition in CDA Release 2.0 against extensions altering the semantics of the information doesn't apply to RIM-based extensions.  What the standard has to say in section 1.4 on Extensibility is [emphasis mine]:

Locally-defined markup may be used when local semantics have no corresponding representation in the CDA specification. CDA seeks to standardize the highest level of shared meaning while providing a clean and standard mechanism for tagging meaning that is not shared. In order to support local extensibility requirements, it is permitted to include additional XML elements and attributes that are not included in the CDA schema. These extensions should not change the meaning of any of the standard data items, and receivers must be able to safely ignore these elements. Document recipients must be able to faithfully render the CDA document while ignoring extensions.
The rationale for including this extension in QRDA is to enable one measure (of 93) to report that something was not supplied to the patient.

We did a risk analysis of this today on a special Structured Documents call.  Here's the scenario:

Data is captured for quality reporting and quality improvement activities.
Within that context, rules are created to ensure that patients are followed up on if they aren't getting appropriate treatment as evidenced by supply records.

  1. A patient with DVT risk is ordered prophylaxis.
  2. However, that prophylaxis is not supplied for some reason (e.g., patient couldn't afford to pay).
  3. This is recorded using the extension (see the sketch after this list).
  4. A system that uses the proposed extension will correctly detect that the supply did not occur, and can initiate followup.  However, a system that does not use the proposed extension and is developed with the understanding that unrecognized extensions can safely be ignored will not recognize that the supply did not occur.  In fact, it will instead recognize the opposite, that supply did occur.
  5. As a result of this, no followup on the necessary intervention is performed.  
  6. Due to the resulting delay in detecting that the prophylaxis was not given, a life-threatening health event occurs.
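To make the failure mode concrete, here is a sketch of what such an extension might look like on a supply act.  The attribute name and namespace are illustrative, not a quotation of the actual proposal:

  <supply classCode="SPLY" moodCode="EVT"
      xmlns="urn:hl7-org:v3" xmlns:sdtc="urn:hl7-org:sdtc"
      sdtc:negationInd="true">
    <!-- A receiver that "safely ignores" the extension attribute sees a
         completed supply: the exact OPPOSITE of what the sender meant -->
    <statusCode code="completed"/>
  </supply>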

We need to assess three things:

  1. The severity of harm, which we evaluated as critical/life threatening [different organizations may use different terms].
  2. Probability of occurrence of the hazard: Care management systems today are looking for the absence of the supply signal.  Presenting a supply signal using QRDA with the extension will always cause this trigger to fail.  Because of the way it interferes with followup activities, it is unlikely to be noticed by a provider, so the probability of occurrence is high.
  3. Likelihood of harm: This would have to be rated as well.  Mechanical prophylaxis can reduce the risk of DVT from 27% to 13% alone, or when used with medications, from 15% to 2% [see http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1925160/]; see the arithmetic after this list.
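Working those published figures into an absolute risk reduction (my arithmetic, not the paper's):

  Mechanical prophylaxis alone: ARR = 0.27 - 0.13 = 0.14, so roughly
  1 in 7 patients (1/0.14) who silently miss prophylaxis suffer a DVT
  they would otherwise have avoided.  With medications added:
  ARR = 0.15 - 0.02 = 0.13, or about 1 in 8.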
If you work out the rest, you realize this is not really a workable solution by itself; something else has to be done.  Documenting the extension and putting some clear warnings around it is something we know doesn't work well: trying to create patient safety in a design through labeling is the least effective method, according to the FDA.

If I were the product manager for the 93 quality measures, I'd pull the one in question and release it later when CDA Release 2.1 (or as seems more likely, CDA on FHIR) has the capacity to address the need.

Thursday, June 4, 2015

Sproing

Boone's farm is shaping up this spring.  Quotes are on the way for Solar and the side deck, a dozen or so chicks are in their coop, blueberry netting arrived on the truck today, the first vegetables are planted, rhubarb has already been harvested at least twice, horse inquiries are making some progress, kitty cat #2 has arrived (April), kitty #1 (Ferb) and the rabbit (Nutmeg) have seen their new vet, and a new puppy is next on the list of animals to show up, along with finding someone to do the basement remodeling work.  I've negotiated a brushwacking of the back 40 (actually closer to 0.40) for a case of mead, and have to start restringing (wiring) the fencing this weekend after putting up the blueberry netting.  The smoker has already seen two roasts, and about a dozen chicken legs and wings.

On other fronts, BPMN is heating back up, the HL7 Relevant and Pertinent project was approved last week and is moving forward, the C-CDA 2.1 DSTU Update is also moving forward, and IHE materials for Clinical Mapping (CMAP), Guideline Accountable Ordering (GAO), Remote Patient Monitoring (RPM), and RECON on FHIR are out for public comment, as is the DAF Implementation Guide for document access.

As classes finish up for spring, I'm enrolled in two classes next term.  I'll be helping to teach Interoperability and Standards, and am looking forward to taking Evidence Based Medicine from my Informatics professor.

It's looking to be a productive summer.  Hopefully I can keep up.

Wednesday, June 3, 2015

Interoperability Then and Now

Interoperability is all the rage today; just ask Twitter, the federal government, or even your local doctor.  Ten years ago, I often had to explain to people why they needed interoperability, and how to ask for it.  Now everybody is focused on it, but as some have said: I may not be able to define it, but I know it when I see it.

The needle has swung completely the other way since then. As such needles do, this one is due for an [over-]correction.  Many providers have suggested that we don't need Meaningful Use to tell them what to do. Many patients still say that they cannot get their damned data.  After seven years of waiting, I can finally get my data, but my provider's portal barely provides any sensible presentation.

I like John's idea that we are now at the trough of disillusionment on interoperability, and I think that it is quite true.  Now that we are paying attention, it seems like we aren't yet getting what we want.  The technology might be there, but the workflows aren't, or the two aren't as tightly coupled as they could or perhaps should be.  It will be interesting to look back on this point in time about three years from now to see what we did right, and what we did wrong.

   Keith

Monday, June 1, 2015

Precision Research vs. Precision Care

One of the topics that shows up repeatedly in discussions of Comparative Effectiveness Research is the mismatch between the data quality requirements of research and those presently met by EHR data used for patient care, as noted, for example, in this article.

What then, are the impacts of research on care, if data gathered for care is not as precise as that used for research?

Because of these differences:

  1. Patients who qualify for an intervention according to EHR data may include patients who wouldn't qualify according to the guideline produced from the research.
  2. Patients who should have qualified but didn't according to EHR data might be missed because the EHR does not capture data according to the guideline.
  3. Interventions provided according to the EHR may not be the same interventions specified according to the guideline.
  4. Interventions captured by the EHR that should have been appropriate according to the guideline might not be captured in a way that they are recognized as being appropriate.
  5. We very likely aren't capturing the outcomes in either case, and if we are, we likely have similar challenges with regard to that data capture.
So we have noise that introduces variability in who gets treated, and in accurately capturing who was treated and what their outcomes were.

My question is: if research indicates the number needed to treat is, say, 50, what is it really, given the differences between theory and practice?  Is the promise of all of this precision in medicine real?  If not, what needs to happen to make it so?
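A back-of-the-envelope illustration, with made-up numbers: an NNT of 50 implies an absolute risk reduction of 1/50 = 2%.  If only 70% of the patients the EHR flags actually qualify under the guideline (item 1 above), and only 90% of the recorded interventions actually match it (item 3), then in practice:

  Effective ARR ≈ 0.02 x 0.70 x 0.90 = 0.0126
  Effective NNT ≈ 1 / 0.0126 ≈ 79

Noise in qualification and capture alone could push a research NNT of 50 toward 80 in the field, before we even account for the outcomes we aren't measuring (item 5).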