Thursday, January 31, 2019

A QueryHealth for AllOfUs

Those of you who have been reading this blog for years are already familiar with what I've written in the past about Query Health.  For those of you who haven't, check the link.

Recently, I've been looking into All of Us, a precision medicine research program that takes the ideas of Query Health to the next level.  The original thinking on Query Health was about taking the question to the data.  All of Us has a similar approach, but instead of querying data in possibly thousands of information systems, it collects data sourced from thousands of information systems into a raw data research repository, and uses a cloud-based infrastructure to support research access to the curated data prepared from that raw data.  The best detailed description I've found so far is in the All of Us Research Operational Protocol.

There's a lot to be learned from Query Health, and the first thing that any group putting together a large repository of curated and anonymized data must address is certainly security and confidentiality.  Anonymization itself is a difficult process, and given the large data sets being considered, there's no real way to make the data fully anonymous.

Numerous studies and articles have shown that you don't need much to identify a single individual from a large collection of data collected over time.  A single physician may see 3-6 thousand patients in a year.  Put data from two of them together and the overlap is going to be smaller.  Add other data elements, and pretty soon you get down to a very small group of people, perhaps a group of one, which combined with other data can pretty easily get you to the identity of a patient.

For Query Health, we had discussed this in depth, and regarded counts and categories smaller than 5 as something needing special attention (e.g., masking of results for small counts).  There was a whole lot of other discussion, and unfortunately my memory of that part of the project (over 8 years old now) is rather limited (especially since it wasn't my primary focus).
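
To make that concrete, here's a minimal sketch (in Python, with made-up categories and counts) of the kind of small-count masking we discussed.  A real implementation would also need complementary suppression, since a masked cell can sometimes be recomputed from row and column totals.

    # A minimal sketch of small-cell masking, assuming query results arrive
    # as a simple mapping of category -> count; the threshold of 5 follows
    # the rule of thumb discussed above.
    SMALL_CELL_THRESHOLD = 5

    def mask_small_counts(counts):
        """Mask counts below the threshold so that rare combinations
        can't be used to help re-identify individuals."""
        return {category: (count if count >= SMALL_CELL_THRESHOLD else "<5")
                for category, count in counts.items()}

    results = {"diabetes+asthma": 1200, "diabetes+asthma+rare-rx": 3}
    print(mask_small_counts(results))
    # {'diabetes+asthma': 1200, 'diabetes+asthma+rare-rx': '<5'}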

Another area of interest is patient consent, and how that might relate to "authorization" to access data via APIs from other external sources.  A lot of this can be automated today using technologies like OAuth2, OpenID Connect, and, for EHR data, SMART on FHIR.  But as you look at the variety of health information data repositories that might be connected to All of Us through APIs, you wind up with a lot of proprietary APIs with a variety of OAuth2 implementations.  That's another interesting standards challenge, though probably not on the near-term horizon for All of Us, considering their present focus.
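
For those who haven't seen it in action, the front end of a SMART on FHIR launch is just the OAuth2 authorization code flow.  Here's a rough sketch in Python of building the authorization request; the endpoints, client id, and redirect URI below are all hypothetical, and a real app would discover the endpoints from the server's SMART configuration.

    # Sketch of a SMART on FHIR authorization request (OAuth2 authorization
    # code flow); all URLs and identifiers below are hypothetical.
    from urllib.parse import urlencode

    authorize_endpoint = "https://ehr.example.org/oauth2/authorize"

    params = {
        "response_type": "code",
        "client_id": "my-research-app",              # assigned at registration
        "redirect_uri": "https://app.example.org/callback",
        "scope": "launch/patient patient/Observation.read openid fhirUser",
        "state": "some-random-value",                # CSRF protection, unique per request
        "aud": "https://ehr.example.org/fhir",       # the FHIR server the token is for
    }

    # The user agent gets sent to this URL; the authorization server redirects
    # back with a code, which the app then exchanges for an access token.
    print(authorize_endpoint + "?" + urlencode(params))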

It's interesting how everything comes back eventually, only different.  One of my ongoing roles seems to be "standards historian", something I never actually thought about.  I'm guessing if you hang around long enough, that becomes one of the roles you wind up adopting by default.

Tuesday, January 29, 2019

Relearning: One of my Favorite Things

One of my favorite things to do is to go back to something I learned how to do once and relearn it.  I actually mean that.  I keep all of the textbooks from classes that taught me something that might be useful later, because quite often I'll need to know it again in the future, even when I don't have plans for it today.  Those books live in different places in my office, one for each era of my career (I arrange by topic, and I generally float between topics over time), so as long as I remember where and what I was learning, I can go back and find the book.

Earlier today, I was looking at a graph that clearly showed the effect of an intervention (basically, hiring my employer to take something on), and I was trying to remember how to evaluate that effect.  Later, I was trying to identify when and why measuring days between events is a good process measure.  Both of these topics are related to control charts, a method used for statistical process control.  Over the last week, I was also looking at how to detect a regime change in order to trigger further activity.

So, I dug up my second edition copy of the Healthcare Quality Book (now in its third edition), and turned to the chapter on statistical tools for quality improvement (Chapter 7 in the second edition).  There we find a whole chapter on the utility of control charts, and how to compute the values associated with them.

This would allow me to take a chart like the one below (a graph counting some item N over time T) and, through a series of computations, produce a chart like the one on the right, which clearly shows where the rate of growth of N enters a new regime.  For this graph, it's quite obvious, but that's because we've got a couple of years of sample data; what I want is to know that the regime is changing sooner, rather than later.

In other words, I don't want to wait a day longer than I have to in order to see the effect.  Control charts let me do that.  Why is this important?  Often, providers want to know when something abnormal has happened.  What the control chart does is help to establish a normal range of variation, which makes it possible to detect variation due to special cause, i.e., variation that falls outside of the normal range.
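
To make the computation concrete, here's a minimal sketch (in Python, with made-up data) of the individuals / moving range (XmR) flavor of control chart; the 2.66 factor is the standard constant for moving ranges of two observations.

    # A minimal sketch of an individuals (XmR) control chart computation.
    def xmr_limits(values):
        """Compute the center line and upper/lower control limits."""
        mean = sum(values) / len(values)
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        mr_bar = sum(moving_ranges) / len(moving_ranges)
        return mean, mean + 2.66 * mr_bar, mean - 2.66 * mr_bar

    def special_cause(values):
        """Flag points that fall outside the normal range of variation."""
        mean, ucl, lcl = xmr_limits(values)
        return [(i, v) for i, v in enumerate(values) if v > ucl or v < lcl]

    daily_counts = [12, 14, 11, 13, 12, 15, 13, 12, 27, 14]  # made-up sample data
    print(special_cause(daily_counts))  # [(8, 27)] -- the spike is special cause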

In my particular case, I was looking for a way to signal that a particular event needed special attention, and with control charts I can readily do that visually; anything I can visualize, I can generally compute.  Unfortunately, I haven't yet found the chapter that talks about G/H charts, but that simply means I have more reading and searching to do.

I'd never be in a position to apply these charts to what we applied them to in class, because I'm not a doctor.  But I do get to develop software that doctors use, and I also happen to have my own set of "patients" that often need diagnosis and treatment (software systems).  Control charts can work for them just as well as they do for quality management in patient care.

   Keith


Thursday, January 24, 2019

An Old Man's Tools

This weekend, my children and I cleared our driveway of about 3 inches of snow covered by about 1/2 inch of ice, in bitter cold weather after Winter Storm Harper.  While "shoveling snow" (mostly breaking ice chunks), I found myself saying something to my children that my father used to say to me: "Let the tool do the work."  Somehow, I was never as able as he was to let the tool do the work, and now I understand why.

It's the approach of a more experienced (older), less physically energetic (lazier) tool user.  Letting the tool do the work requires an understanding of HOW the tool works: how to hold it, how to use (and abuse) it, what physical advantages it provides, and how best to take advantage of it to get the job done.  In the main, that can all be chalked up to experience with the tool (or ones like it), including how much experience one has trying to be lazy with it.  My youngest pointed out to me that I also had some physical advantages she didn't (height, and uhmmm... weight).  I found her a post-hole digging bar better suited to her capabilities and stature (it had more mass and could just be dropped through the ice), and she was more successful with it than with the shovel.

The same thing applies to software engineering.  The tools you select define the level of skill needed to get the job done.  I can do just about anything with XSLT (and for some things, it's my favorite tool for the job), but that's not a skill that everyone has.  I can write a code generator, but again, that's not necessarily a skill that every engineer has.  In defining my approach to software problems these days, I have to look at tools and approaches differently.  I have to find ways to get things done so that I'm not the one who has to do them.  I have to select and identify tools in ways that enable others to do the work, because I don't scale.

My old man's tools aren't necessarily the tools I select for MY use, but rather the tools I select for younger men and women to use.  My job, like my father's before me, is to pick the right tools, and to teach others how to use them according to their skills.

   Keith

P.S. Yes, my birthday was Sunday, and (well) in my fifth decade, I can certainly claim to be "old", although my children still insist I have yet to grow up.

Tuesday, January 22, 2019

The January HL7 Working Group Meeting

HL7 Working Group meetings are a phenomenal way of keeping up with what HL7 is doing. The biggest challenge I have is that I cannot be everywhere at once.  January is also the HL7 Payer Summit, and so there is a lot of activity going on.

The working group meeting starts on Saturday with two activities: The FHIR Connectathon, and the International Council meetings.  I didn't go to either this year, but heard we had about 240 people at the 20th FHIR Connectathon.  I expect to be at the next Connectathon.  For a variety of reasons, a lot of payers were in attendance at the FHIR Connectathon trying out some of the new Da Vinci profiles (e.g., CRD).

While I had some minor challenges with my flight out on Sunday (for some reason my expense application notified me of flight cancellation, but United completely failed to let me know until I arrived at the airport), I did arrive just in time for the working group meeting "proper", the Monday to Thursday (and sometimes Friday) meetings that make up the bulk of HL7 activities.

Structured Documents reviews their plan for the week first thing Monday morning, and I make mine.  I still consider SDWG to be my "home committee", but my interests are too divergent to spend all my time with any single group.  My plan for the week was to spend some time with SDWG, CQI and CDS, Patient Care in a large joint meeting with CQI and CDS and others, Attachments (another "old home"), and FHIR-I (joint with SDWG).  Structured Documents is one of the work groups piloting the use of Confluence to capture the WGM activities.  You can see the output from SD here, including the results of the Wednesday morning and Wednesday afternoon sessions that I also attended.

Provenance

The Monday joint meeting with Patient Care discussed a new Provenance project, in which several workgroups have an interest.  PC was going to sign on as an "Interested Party", a commitment with no actual governance associated with it, and there was some ongoing discussion during the week between the Security workgroup and the Community Based Care and Privacy workgroup (formerly Community Based Collaborative Care, renamed to reflect what has been the reality for this workgroup over the last half-decade) about who might become primary sponsor.  I had suggested the EHR workgroup (not even in the room at the time, but the one workgroup that has published functional requirements on Provenance); they later declined.  It eventually wound up, I think, with Patient Care as a sponsor (a role that does provide project governance).

Attachments

I got to spend some time with Attachments at their joint meeting in the Payer Summit and hear from Steve Posnack and folks from the CARIN Alliance, and I popped in for a few other meetings to hear what is going on.  Attachments is following the Da Vinci project very closely, so it is a good place to learn what is going on there if you aren't a member of the Da Vinci project.  There are a number of payers and clearinghouses working on Coverage Requirements Discovery, a topic I have some interest in.  It's basically a "pre-prior-auth" specification that allows a provider to ask a payer what data requirements the payer has to cover care for a given condition (including whether a prior authorization is needed, since there are no "standards" for what requires one).  FHIR has got payers excited, and for good reasons.  It's allowing them to modernize a Health IT infrastructure that's long been in need of rejuvenation.
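
CRD is built on CDS Hooks, so the provider-side call looks roughly like the sketch below (in Python).  The service URL, patient, and order are hypothetical; treat this as a sketch of the pattern rather than the actual specification.

    # A rough sketch of the kind of CDS Hooks request CRD builds on; the
    # service URL and all identifiers below are hypothetical.
    import json, urllib.request, uuid

    request_body = {
        "hook": "order-sign",                    # fired as the provider signs an order
        "hookInstance": str(uuid.uuid4()),
        "fhirServer": "https://ehr.example.org/fhir",
        "context": {
            "userId": "Practitioner/123",
            "patientId": "456",
            "draftOrders": {                     # FHIR Bundle of the orders being signed
                "resourceType": "Bundle",
                "entry": [{"resource": {"resourceType": "DeviceRequest", "status": "draft"}}],
            },
        },
    }

    req = urllib.request.Request(
        "https://payer.example.org/cds-services/coverage-requirements",
        data=json.dumps(request_body).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req) would POST this; the payer's response is a
    # set of "cards" describing coverage requirements, e.g. whether prior
    # authorization or additional documentation is needed.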

Da Vinci

Da Vinci has several projects going on.  I'm not specifically a member of the Da Vinci project (it too requires a membership fee, and I've got a limited budget).  Even so, many of the Da Vinci specifications are being balloted in HL7 projects, and you can follow them through their sponsoring workgroups.


The eHDX project was discussed somewhat in the Patient Care joint meeting.

SD to be Cleaning up the Template Publication Format

Wednesday morning I found myself in a discussion of a topic that I complained about on Twitter and FB a year ago: the use of PDF (or other document formats) for multi-ream documents (if printed).  SD won't do that any more once projects in flight are finished (you can't retroactively make a project do more work than it committed to; HL7 would never get anything done that way, as they learned from V3).  I nudged that one along after Bret complained ("can you perhaps state that as a motion?"), and I quickly seconded it.  We spent a lot of time working out the exact wording of the motions and the follow-on details, but just about everyone was on board with this.

Technically, this is not as huge a challenge as it once was.  Later in the week, Structured Documents assimilated the Templates workgroup, and the FHIR StructureDefinition and ImplementationGuide resources can handle most of the heavy publication lifting that the former V3 Templates specification was supposed to help with.  Between the publication of CDA R2 as a StructureDefinition and the aforementioned resources, CDA templates can be published using FHIR IG tooling with perhaps some modest changes.  I'm still using Trifolia as my de facto reference for C-CDA templates until HL7 fixes this problem.
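
To give a flavor of what that looks like, here's a skeletal sketch of a CDA template expressed as a FHIR StructureDefinition (as a Python dict); the URLs and names are illustrative, not the identifiers HL7 actually publishes.

    # A skeletal sketch of a CDA template as a FHIR StructureDefinition;
    # every URL and name below is illustrative.
    template = {
        "resourceType": "StructureDefinition",
        "url": "http://example.org/cda/StructureDefinition/us-realm-header",
        "name": "USRealmHeader",
        "status": "draft",
        "kind": "logical",
        "abstract": False,
        "type": "ClinicalDocument",
        "baseDefinition": "http://example.org/cda/StructureDefinition/ClinicalDocument",
        "derivation": "constraint",
        "differential": {
            "element": [
                # constrain templateId to be present, carrying the template's OID
                {"id": "ClinicalDocument.templateId",
                 "path": "ClinicalDocument.templateId",
                 "min": 1},
            ]
        },
    }

Because the template then becomes just another conformance resource, the same FHIR IG tooling that renders profiles can render it, which is what makes this cleanup feasible.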

HSP Marketplace

No plan stays the same once the battle is engaged, and I found myself in other places, most notably a quarter with the SOA workgroup "battling" out my negative ballot feedback on the Health Services Platform Marketplace.  As battles go, this one was mild, since the workgroup took most of my feedback into account.  The biggest discussion was around Chapter 4, the API, where my major comments were, simply put: "use API documentation tools to document the API" and "describe your concepts better".  Again, they agreed in general, and we spent most of our time arguing the specifics.  The HSP Marketplace is shaping up, but I think it still has a long way to go.

V2 to FHIR

I also spent time with Orders and Observations (a.k.a. O&O, or simply OO) on the V2 to FHIR specification.  I spent the last half hour of one meeting trying to find out what was going on there and failed due to agenda overload, but I DID get a good overview Wednesday afternoon.  This one has meat on its bones, and offers incredible value as healthcare organizations figure out the many and various ways to get data into FHIR format.  I also spent Thursday morning from 7 to 8 am with several supporters of this project talking about the tooling needed for this specification.  We agreed in principle to use the FHIR ConceptMap or StructureMap resources to capture the results of this effort, so that the end result will be computable.  I am clearly going to be spending more time with this group, as this is directly related to my current work.
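
As a sketch of what "computable" means here, an R4 ConceptMap for one small piece of V2 to FHIR mapping might look roughly like the following (expressed as a Python dict); treat the URLs and equivalences as illustrative rather than the actual work product of the project.

    # A minimal sketch of a FHIR R4 ConceptMap mapping HL7 V2 administrative
    # sex (table 0001) to FHIR administrative gender; details are illustrative.
    concept_map = {
        "resourceType": "ConceptMap",
        "url": "http://example.org/fhir/ConceptMap/v2-sex-to-fhir-gender",
        "status": "draft",
        "group": [{
            "source": "http://terminology.hl7.org/CodeSystem/v2-0001",
            "target": "http://hl7.org/fhir/administrative-gender",
            "element": [
                {"code": "M", "target": [{"code": "male", "equivalence": "equivalent"}]},
                {"code": "F", "target": [{"code": "female", "equivalence": "equivalent"}]},
                {"code": "U", "target": [{"code": "unknown", "equivalence": "equivalent"}]},
            ],
        }],
    }
    # Because the map itself is a FHIR resource, a converter can load it at
    # runtime and apply it mechanically rather than hard-coding the mapping.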

What I Missed

I had planned to spend time with both the Clinical Decision Support and the Clinical Quality Information workgroups this time around, and failed, because a 30-minute diversion (SOA) turned into an entire quarter, and OO's V2 to FHIR project was more immediately compelling.  The case for cloning is never stronger than when attending an HL7 Working Group meeting.  I also missed out on some discussions in Public Health (formerly Public Health and Emergency Response; the latter simply folds into the former), and Infrastructure and Messaging (InM) [which simply had nothing on its agenda needing my immediate attention].  I'm certain I'll have to spend time with these groups at future meetings.

     Keith

Friday, January 18, 2019

Asking for Help with Grace and Dignity

Although she may claim otherwise, one of my standards colleagues has asked for help with great grace and dignity.  In case you haven't heard, Lisa Spellman has been diagnosed with colon cancer.  Her prognosis is good, but the treatment (radiation and chemotherapy) is challenging (as some of you are well aware) in many ways.  One of the ways the treatment is challenging is financially, and that's where you can help.

I know many of you already know Lisa from her work in standards.  She's been involved in standards at IHE, HIMSS, ISO and most recently DICOM.  She has always brought great energy and excitement to what for many would be the dull observation of pork, spice and fillers being ground and stuffed into sausages by a bunch of geeks spouting technobabble.

Her sister started a GoFundMe to help with the financial impacts of treatment.  If you want to participate, check it out at https://www.gofundme.com/lisa-spellman-medical-treatment.

Next week, I'll return with a summary of the January HL7 Working Group meeting.

   Keith

Saturday, January 12, 2019

There is Always Another Way

There's more than one way to skin a cat, or to route to a solution.  When presented with a problem, a good software engineer can usually think of at least two ways to solve it.  And most often, the next thing we do is pick one and invest time and effort into it.  The choice selected may not be the best one; it may be a compromise based on competing factors.

So then, let me unpack this tweet (shown below, from Thomas Beale, a well-regarded expert in OpenEHR):
"What we should have: a universal health content model library. What the message people always do: impose their own content 'standard' rather than cooperate with mature approaches to modelling. What we get: dissonance."

There's a little bit of sour grapes in this because of the long-standing competition between HL7 (the message people) and OpenEHR.  These are competing standards in the Health IT spectrum that take different approaches, each with its own pros and cons.  HL7 focuses more on exchange, OpenEHR more on data modeling.  Both have target markets for adoption, which overlap to some degree.  HL7 has a significant (order of magnitude) advantage in general adoption; OpenEHR has a significant advantage in existing clinical models, its physician engagement model, and clinical validation of content (models which the HL7 CIMI workgroup and the AMA are, separately, also developing).

Thomas goes further in the blog post he links to from the above statement, saying (emphasis mine):
My view is that the only scalable way to create the semantic specifications is for them to be artefacts outside of both vendor products and outside of specific communications formats.
Where healthcare computing needs to go is a complete separation of models of semantics of healthcare and the technologies used to implement solutions at any given time.

You can see where I'm going here when I highlight "the only scalable way" and "complete separation".  As soon as one starts talking about the "only way", the discussion has been elevated from layer 7 of the protocol stack to layer 8 or 9.  Absolutist statements such as these don't allow for compromise.  When the rubber hits the road, compromises are needed, because a working solution is by definition one that has shipped, and perfection is the enemy of the good.  FHIR R4 has shipped, and FHIR is already available in Health IT products covering better than 87% of the US hospital market and 69% of the ambulatory market (the numbers are surely better now, given the age of my reference and the data it used [it's at least 6 months out of date]).

I generally agree with Tom's statement about separating model semantics from implementation technologies, but I don't come to the same conclusion that the separation must be complete.  FHIR is a new technology that is meant to make Health IT software deliverable, and it seems to be delivering on that promise.  R4 shipped in late December (a couple of days after Christmas).  IHE just finished updating three profiles to R4 and published those specifications for public comment last week.  HAPI on FHIR is on track to deliver an implementation of R4 in a few more weeks.  The US Core for R4 is out for ballot.  That's delivery.

Back to the original tweet: FHIR specifies a content standard for resources, and it takes some aspects of the semantics into the implementation of the technology to make life easier for developers.  That's OK; I'm willing to live with that compromise in order to ship.

   Keith

P.S. If I wanted to be snarky, I'd point out that there are probably better words than "mature" to describe well-established or proven methods in technology, preferably ones that don't imply lack of change.


Tuesday, January 8, 2019

A Bet on Standards for the next round of Certification

It's been pretty clear that ONC is putting its money behind FHIR, and we know that 21st Century Cures will require new certification requirements, and it's pretty much certain (based on the last round of regulation) that a standard for APIs is going to be specified.

I see two choices: FHIR DSTU2 with Argonaut, or FHIR R4 with US Core, either of them with SMART on FHIR.  Having read through the new US Core over the last week (for the HL7 ballot cycle), it's pretty clear that it essentially replaces most of what was found in the original Argonaut specifications.

So, I'm placing my bets on FHIR R4 with US Core and SMART on FHIR.

What I'm NOT placing bets on is when the regulation will actually show up.  HHS has some appropriations already, and ONC is still moving some things forward, but OMB is on furlough, the Federal Register site is non-operational, and NIST has also shut down (not that they are essential for publishing the rule, only afterwards, in implementing it).

Given the current situation, I'd rather the rule had shown up on December 24th.

   Keith


Friday, January 4, 2019

What's a Standard?

What's a standard supposed to look like? What does it tell you?

These questions aren't as simple as you might think.

Consider the following different kinds of "standards":

  1. A standard for light bulb bases.
  2. A collection of functional requirements that CAN be applied to a particular type of software.
  3. A collection of functional requirements that HAVE been applied to a particular type of software.
  4. A protocol specification such as HTTP.
  5. The specification for a markup language.
  6. The specification for a specific implementation of a markup language (or perhaps this is it).
  7. A standard for a programming language.
  8. A standard for performing a particular test.
  9. A standard for performing a particular task.

All of these are standards.
ASTM recognizes six types of standards: test method, specification, classification, practice, guide, and terminology.
NIST recognizes a somewhat different set, one of which they describe as process, but you won't be able to use this link because the US government shut down last year.
I even have my own classification.

HL7 publishes functional specifications (see 2 and 3 above), protocol specifications (like #4, but see MLLP, which is essentially the HTTP of Version 2), schemas and specifications (very much like HTML 5), and other things.  Some of them they call standards, others informative documents, and many use these terms interchangeably.  Some call them specifications, others implementation guides, and others profiles.

Some standards describe best practices, models of systems, or provide for ways to talk about things.  Others talk about bits and bytes (or octets).  That's what you were probably thinking about.

There's really nothing magic about describing what a standard is.  It's a way to do something that some group has agreed to follow.  The something part is VERY nebulous.  The some group part is very nebulous.  The enforcement mechanism of the agreement is very nebulous.  In some cases, even the standard itself is very nebulous.

The biggest distinction between what a standard is and isn't has to do with who agreed to it.  Portable Document Format wasn't a standard until it was.  The difference had to do with who agreed to it.  In the first case, it was a single company.  In the second, it was an international standards committee.
This was the "de facto" (in fact) standard for C, until it was supplanted by an earlier version of this as the "de jure" (in law/jurisdiction) standard.

People like standards because it means they don't have to think hard, just smart. And that's why "who" made it a standard is very important.  Sometimes, even when everyone agrees, it seems that nobody agrees on what the standard is (see HTML5, or perhaps you want HTML5) [but at least they agree on what it does].

I read standards like this (a very architectural cookbook), and like this (A pretty decent functional standard with some pretension of being an API as well, but with some notable deficiencies in that latter part that I hope will be corrected in the standards making process), and like this (a vaguely useful thing if you like your standards regurgitated into another publication format with a lot of reused content), and like this (something important, but possibly blown away by this, we'll see what happens when it shows up).

And yes, it's ballot season at HL7, just in case you were wondering.  How do I feel about that?  In a word: Nebulous.

In a thousand words:

     Keith

Thursday, January 3, 2019

Mad Libs for HealthIT


One of the predictions I made yesterday about what isn't going to change in 2019 is that we'll still be hearing about a lack of standards and interoperability in Health IT.

Something I've learned about these phrases over the years is that the speaker drops an important adjective or prepositional phrase, which prevents full comprehension of a statement that, from their viewpoint, is almost inevitably true.

Consider the following two statements:

There are no ____A____ Health IT standards ____B____.

and

We don't have interoperability ____C____ in Health IT ____D____.

For A, substitute one of the following:
  • Agreed upon
  • ANSI Approved
  • Available
  • Deployed
  • Easy to Use
  • Easy to Deploy
  • Inexpensive
  • Implemented
  • Pervasive
  • Rolled Out

For B, substitute one of the following:
  • in <HealthIT Product we use today>
  • that we are willing to use
  • that we know about
  • that won't change our workflow
  • that won't impact our revenue

For C, add "between my institution and my" and fill in with one of the following:
  • patients
  • lab
  • referral network
  • local hospital
  • public health agency
  • clearing house
  • payer
  • billing service

For D, add "because", and fill in with one of the following:
  • the vendor hasn't been selected yet
  • we don't know how to do that
  • we haven't upgraded our Health IT to support that
  • we haven't deployed that feature yet
  • we aren't willing to pay for it
  • they don't have that capability yet (because <pick one of the above and change we to they>)
  • the [regulations|measures] that we want to support aren't out yet

Some of these are good reasons why there isn't interoperability (or perhaps even standards), and others aren't.  When this information is communicated up the chain to Health IT leaders, regulators, or legislators, these auxiliary clarifying phrases (if they were ever uttered) are inevitably dropped to simplify the message.

So, in 2019, before I let these statements bring my blood to a boil, I've resolved to try to identify what might be missing from them when I see them.  And if and when I can figure it out, I'll try to report on it here.

   Keith




Wednesday, January 2, 2019

What's not going to change in 2019 for HealthIT?

While many are offering their opinions on what will be new or interesting for this new year, or reviewing the previous year, I thought I'd offer some observations on what isn't going to be different in 2019:

FHIR is still a work in progress.  Yes, FHIR Release 4 marks the first normative version of the standard, but much remains to be done, and HL7 will continue to work on it.

Certification will remain a focus of ONC.  Yep, Meaningful Use may be dead, but there's the CMS program after that, and the one after that, and the other program, all of which will continue to require certified EHR technology.

APIs are still the way forward.  Most providers now have access to APIs, but the challenge will be to move them into production.  2019 will be a critical year for many providers to get APIs rolled out to patients.  The roll-out still allows for a 90-day reporting period, giving providers a smaller window for demonstrating the promoting interoperability capabilities, but it will require use of 2015 Edition certified technology (instead of a combination of 2014 and 2015 Editions, as last year).

Finally, we'll still be hearing about a lack of HealthIT standards, and a lack of interoperability in 2019.  More on that tomorrow.

     Keith