
Thursday, May 11, 2017

L or l? The UCUM Liter Code

Filed under stupid stuff I know.

The code for liter in the UCUM case-sensitive variant is both l and L.  You can use either.

See http://unitsofmeasure.org/ucum.html#section-Derived-Unit-Atoms for details.  Basically, what they say:
In the case sensitive variant the liter is defined both with an upper case 'L' and a lower case 'l'. NIST [63 FR 40338] declares the upper case 'L' as the preferred symbol for the U.S., while in many other countries the lower case 'l' is used. In fact the lower case 'l' was in effect since 1879. A hundred years later in 1979 the 16th CGPM decided to adopt the upper case 'L' as a second symbol for the liter. In the case insensitive variant there is only one symbol defined since there is no difference between upper case 'L' and lower case 'l'.
So much for using codes to define distinct meanings for atoms of this vocabulary.  Ah well.  At least they defined L in terms of l.
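
In code, the practical consequence looks something like this (a minimal sketch in Python; the helper names are my own, not part of any UCUM library):

    # Compare case-sensitive UCUM codes while treating the two liter
    # symbols as equivalent.  Only common liter forms are mapped here;
    # a real implementation would tokenize the full UCUM expression.
    _LITER_FORMS = {"l": "L", "ml": "mL", "dl": "dL", "cl": "cL", "ul": "uL"}

    def normalize_liter(code: str) -> str:
        """Map lowercase liter symbols to the NIST-preferred upper case form."""
        return _LITER_FORMS.get(code, code)

    def same_unit(a: str, b: str) -> bool:
        return normalize_liter(a) == normalize_liter(b)

    assert same_unit("l", "L")          # both mean liter
    assert same_unit("ml", "mL")        # prefixed forms too
    assert not same_unit("mol", "mL")   # mol is a different atom entirely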

   Keith 

Friday, April 28, 2017

From $450 to 450¢

From $450 to 450¢ is a 99% savings.  That's what I recently found in reviewing my medication costs.

I recently compared prices on a medication a family member is using.  It comes in several dose forms and strengths.  To keep the math the same while keeping the drug anonymous, I'll call the strengths 1, 2, 3, 4, 6, and 9.  They can be taken in capsule or tablet form (but not in all strengths), or in oral (pediatric) form.  Not all combinations are available from our PBM, but most are.

Here are the approximate prices and sigs for a 90 day supply:

Strength 1 Capsule 6xDay: $13.50
Strength 2 Capsule 3xDay: $4.50
Strength 4 Capsule + Strength 2 Capsule: $17.50
Strength 1 Tablet 6xDay: $561.00
Strength 2 Tablet 3xDay: $422.00
Strength 6 Tablet 1xDay: $445.00

Note that Strength 2 taken three times a day delivers the same daily dose as Strength 6 taken once a day, yet for the same 90 days the capsules cost $4.50 and the tablets $445.00.  To be very blunt about this: What the FUCK!?

So now I've gone through EVERY prescription in the household, and so far this is the only one that is that whacked out. But like I said, WTF?

   -- Keith

Wednesday, April 26, 2017

Refactoring Standards

Code and standards (which are a special kind of code) grow with age.  When you started with your code, you had a plan.  It was fairly simple and so you wrote something simple, and it worked.  After a while you realized you could make it do something special by tweaking a little piece of it. Sometimes (if you designed it right), you can add significant functionality.  After a while, you have this thing that has grown to be quite complex.  Nobody would ever design it that way from the start (or maybe they would if they had infinite time and money), but it surely works.

The growth can be well ordered, it can produce some benign little outgrowths, or it can even turn tumorous.  Uncontrolled growth can be deadly, whether in a biological or a computer system.  You have to watch how things grow all the time.  After some time, the only solution may be a knife.  Sometimes the patient's guts get a major overhaul afterwards, even while the patient remains fully functioning and alive.

When the normal adaptive processes work in standards, these growths naturally get pruned back.
It's interesting to watch some of the weird outgrowths of CCR become more and more vestigial over time through various prunings in CCD and CCDA.  FHIR on the other hand, well, that started as a major restructuring of V3 and CDA, and is very much on the growing side.

   -- Keith


Wednesday, April 19, 2017

Separate but Equal Isn't

Every now and then (actually, more frequently than that), two topics from different places somehow collide in my brain to produce synthesis.  This post is a result of that:

I've been paying a lot of attention lately to care planning.  It's been a significant part of the work that I've been involved in through standards development over the last decade, and it comes into special focus for me right now as I work on implementation.

In another venue, the topic of data provenance came up, along with the concern that providers have about patient-sourced data.  Many of the challenges providers cite are that patients' concerns don't necessarily use the "right terms" for clinical conditions, that patients "don't understand the science" behind their problems, or that the data is somehow less exact.  My father's use of the term "mini-stroke" so annoyed one of his providers (a neurologist, who told him: "there is no such thing") that it likely resulted in his failing to get appropriate care for what was probably transient ischemia, resulting in an actual stroke, which eventually led to his ultimate demise through a convoluted course of care.

This concern about the veracity or provenance of patient data leads to designs which separate patient health concerns and information from provider-generated data.  Yet the patient's concerns are what initiate the evaluation process: it starts with the patient's subjective experience, gathers objective evidence through skilled examination, applies knowledgeable assessment of those details, and leads to cooperative and effective planning.

The care planning work that I've been involved in over the past decade originated in early work on patient-centered medical homes driven by physician groups, incorporated work from several different nursing communities in HITSP, HL7 and IHE, and eventually resulted in a patient plan of care design, which subsequently evolved into the designs implemented in both the HL7 CCDA and FHIR specifications.

The patient's plan of care should NOT originate from a separate but equal collection of data, but rather from an integrated, patient-included approach that does not treat the patient's subjective experience as being any less valuable to the process.  Both FHIR and CCDA recognize that in their designs.  After all, if the patient didn't have any complaints, physicians wouldn't have any customers.
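
To illustrate in FHIR terms: a patient-reported concern can live in the same Condition resource pool as provider-diagnosed problems, with its origin carried as metadata rather than enforced by a separate silo.  A rough sketch (STU3-flavored; the ids and text are made up):

    {
      "resourceType": "Condition",
      "id": "concern-example",
      "subject": { "reference": "Patient/example" },
      "asserter": { "reference": "Patient/example" },
      "verificationStatus": "provisional",
      "code": { "text": "mini-stroke (patient's own words)" }
    }

The asserter tells you who said it; nothing in the design requires the data to be stored apart from the clinician's problem list.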

It's past time we integrated patients into the care process with their physicians, and keeping their data "separate" isn't the right way to go.  If my provider wants to be my medical home, he needs to remember it's my home, not his, and we, as implementers, need to help with that.

   -- Keith



Friday, March 31, 2017

The Longitudinal Identity of a CCD

This question comes up from time to time: for a given patient, is there a unique identifier that identifies the CCD as it evolves over time?

The answer is no, but to understand that, we need to talk a little bit about identifiers in CDA and how they were intended to be used.

Every CDA document released into the wild has ONE and only ONE unique identifier by which it is known to the world.  That is found in /ClinicalDocument/id.  During the production of a clinical document, there are some workflow cases where the document has to be amended or corrected.  And so there is a need to identify a "sequence" of clinical documents, and possibly even to assign that sequence an identifier.  The CDA standard supports this, and you can find that in /ClinicalDocument/setId.

BUT... that field need not be used at all.  You can also track backwards through time using /ClinicalDocument/relatedDocument/parentDocument/id to see what previous version was revised.  And the standard requires neither of these fields to be used in any workflow.
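
Here's how those pieces fit together in the header of a revised document (a sketch only; the OIDs and extensions are made-up example values):

    <ClinicalDocument xmlns="urn:hl7-org:v3">
      <!-- The ONE globally unique id for THIS release of the document -->
      <id root="2.16.840.1.113883.19.5" extension="doc-0002"/>
      <!-- ... code, title, effectiveTime, etc. ... -->
      <!-- Optional: identifies the series this release belongs to -->
      <setId root="2.16.840.1.113883.19.5" extension="series-1234"/>
      <versionNumber value="2"/>
      <!-- ... participants and other header elements ... -->
      <!-- Optional: points back at the release this one replaces -->
      <relatedDocument typeCode="RPLC">
        <parentDocument>
          <id root="2.16.840.1.113883.19.5" extension="doc-0001"/>
        </parentDocument>
      </relatedDocument>
    </ClinicalDocument>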

So ... couldn't I just use setId to track the CCD for each patient?

Yes, but fundamentally, you'd be doing something that fails to take into account one of the properties of a ClinicalDocument, and that is context.  Context is the who/where/what/when metadata associated with the activity that the clinical document is reporting on.  When setId is the same for two clinical documents, the assumption is that the context associated with the content is the same, OR is being corrected, not that it is being changed wholesale.

The episode of care being reported in a CCD is part of its context, as is when the information was reported.  If you want to report on a different episode of care, it's not just a new document, it's also a new context.  And that's why I suggest that setId should be different.

This is mostly a philosophical debate, rather than one related to what the standard says, but when you think about the history of clinical documents, you might agree that it makes sense.

Clinical Documents aren't "living" documents.  A CCD is, by definition, a summary of relevant and pertinent data "at a point in time."  It's that "point in time" part of the definition that makes the CCD document a static item.



Thursday, March 30, 2017

Diagnostic Imaging Reports in a CCD

I clearly missed something somewhere, probably because I assumed nobody would try to include a document in a document after having hammered people about it for a decade.  My first real blog post was on this topic.

Here’s the challenge: according to the Meaningful Use test procedures for View, Download and Transmit, diagnostic imaging reports are to be included in CCD content.  The test data blithely suggests this content:

  •  Lungs are not clear, cannot rule out Anemia. Other tests are required to determine the presence or absence of Anemia.

I can see where this summary of a full report might appropriately appear in the “Results” section of a CCD document, but this isn’t a diagnostic imaging result.  Here are some sample Diagnostic Imaging Reports: http://usarad.com/pdf/xray/cr_chest.pdf, http://www.smartteleradiology.com/sites/default/files/sample-reports/CR-Sample-Chest-01.pdf, http://www.mtinformation.com/medical-reports/x-ray-samples.  I’m reminded of Liora’s “This is a document” slides from her Intro to CDA class, and for good reason.

The content might be stored as a text report, a Word document, a PDF, or, even worse, a scanned image.  It really depends on what the supplier of the report provides.

The NIST guidance is sub-regulatory, but these are the testing guidelines set forth for the certifying bodies.  However, what I also missed is that the regulation itself says that CCD is the standard for imaging reports.  It's in that line of text that reads:

(2) When downloaded according to the standard specified in § 170.205(a)(4) following the CCD document template, the ambulatory summary or inpatient summary must include, at a minimum, the following data (which, for the human readable version, should be in their English representation if they associate with a vocabulary/code set):

(i) Ambulatory setting only. All of the data specified in paragraph (e)(1)(i)(A)(1), (2), (4), and (5) of this section.

(ii) Inpatient setting only. All of the data specified in paragraphs (e)(1)(i)(A)(1), and (3) through (5) of this section.

Clear as mud, right?  Here's what (e)(1)(i)(A)(5) says:

(5) Diagnostic image report(s).

Oh damn.

But wait! I can create a DIR, change the document type and header details a bit, and then magically it becomes a CCD.  So, can I create a CCD for each diagnostic image, and in that way have a "summary" representation of the report?

Nope: Back to the test guide:

3. The tester uses the Validation Report produced by the ETT: Message Validators – C-CDA R2.1 Validator in step 2 to verify the validation report indicates passing without error to confirm that the VDT summary record is conformant to the standard adopted in § 170.205(a)(4) using the CCD document format, including: the presentation of the downloaded data is a valid coded document containing:

  •  all of the required CCDS data elements as specified in sections (e)(1)(i)(A)(1); 
  •  for the ambulatory setting, the Provider’s Name and office contact information as specified in section (e)(1)(i)(A)(2); 
  •  for the inpatient setting, admission and discharge dates and locations, discharge instructions and reason(s) for hospitalization) as specified in section (e)(1)(i)(A)(3);
  •  laboratory report(s) as specified in section (e)(1)(i)(A)(4), when available; and 
  •  diagnostic imaging report(s) as specified in section (e)(1)(i)(A)(5), when available.  
Oh well.  Seems like I need to get my hammer out, this time to fit an entire document into a sentence-shaped hole.
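
If you're wondering what that hammering looks like, here's a sketch of one workaround: wedge the report narrative into the CCD's Results section.  (LOINC 30954-2 is the standard Results section code; everything else here is made-up example content, not a blessed template.)

    <component>
      <section>
        <code code="30954-2" codeSystem="2.16.840.1.113883.6.1"
              displayName="Relevant diagnostic tests and/or laboratory data"/>
        <title>Results</title>
        <text>
          <paragraph>CHEST X-RAY, 2 VIEWS (example report narrative)</paragraph>
          <paragraph>Lungs are not clear, cannot rule out Anemia. ...</paragraph>
        </text>
        <!-- coded result entries would follow here -->
      </section>
    </component>

The prose carries the freight; the structure of the original report is, of course, lost in translation.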



Friday, March 17, 2017

Patient Access: It's coming at you faster than you might think

This crossed my desk this morning via POLITICO:

GAO: PEOPLE AREN'T LOOKING AT THEIR ONLINE DATA: The Government Accountability Office took aim at the accessible data requirement in meaningful use in a report released Wednesday. The report, requested by the Reboot group (which include Sens. Lamar Alexander and John Thune), argues that while the vast majority of hospitals and eligible professionals make data available for patient and caregiver consumption, the percentage actually following through isn't high — perhaps 15 to 30 percent, depending on the care setting and data analyzed.
Now, why that's the case is the familiar debate — is it a lack of interest from patients, or perhaps technical difficulties? A GAO analysis suggests the former is definitely at play. The office analyzed the top 10 most popular EHRs and found patient participation rates ranging from 10 to 48 percent.

Ultimately, however, GAO hits ONC for its lack of performance measures for its initiatives — whether, for example, using Blue Button increases patient uptake of data. HHS and ONC concurred with the recommendation.


Here's my take on this.

TL;DR: It isn't as bad as the GAO makes it out to be.  The report is based on nearly two-year-old data, and based on it and prior data, we seem to be within reach of a major milestone: 50% of all patients having accessed their data.

Remember early fax machines? e-mail? These are technology diffusion challenges which took a number of years (even decades) to get over.  We've finally reached a stage where nearly 90% of all providers are capable of offering patients their data in hospital or ambulatory settings, and now people are really getting to use this stuff.

What is the GAO report telling us?  First of all, it is telling us about events in 2015, not 2016 as you might expect.  It takes the government a while to act on the data it gathers, and the accompanying study also took some time to set up and execute.  This is in fact why many opponents of the MU deadlines said they were coming at us too fast: we couldn't even analyze data from prior years to figure out how to course-correct before it was time to update the regulations.  We are hopefully past that stage now.

Secondly, we need to look at this in a slightly different context.  Remember, I said this was a technology diffusion challenge.  If you are a long-time reader of this blog, you might recall an article I wrote about the curve you might expect for technology adoption.  It's a logistic growth curve.

The GAO numbers are saying that in 2015 we were around 30% for ambulatory use, and 15% for hospital use, of data access by patients.  Where are we now?  It's hard to project forward from one data point, because fitting the logistic curve requires estimating two parameters: a time scale and the inflection point.  The inflection point is at 50%, and is where the rate of adoption reaches its maximum value.
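
For reference, the curve being fit looks like this (assuming adoption saturates at 100% of patients):

    P(t) = 1 / (1 + e^(-k(t - t0)))

where k is the growth-rate (time scale) parameter and t0 is the inflection point, the date at which P(t) crosses 50%.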

To make something useful out of this data, you have to go back to some similar ONC reports on patient utilization.  You can find the data in ONC Data Brief 30, which includes information from ONC Data Brief 20.  The challenge here is that the GAO report doesn't quite report the same thing, so you have to adjust a bit.

I know from a colleague of mine from early IHE years that some patients get most of their healthcare in the hospital setting (i.e., the ER), while others get their care principally from ambulatory providers, and still others use both.  That means some patients have been given access to their data through a hospital and others through ambulatory providers, so the overall number of patients given access is probably greater than either figure alone, though, since these are largely overlapping sets, less than their sum.  So, if I simply take the ambulatory data from the GAO report, and compare the number offered access and the number who used it to similar figures from the previous ONC briefs, I can start to get somewhere.  Here's the basic data.

Year   Offered access   Accessed (of those offered)   Total accessing
2013   28%              46%                            12.9%
2014   38%              55%                            20.9%
2015   87%              30%                            26.1%

The number offered access is different in each year, so I have to normalize the results, which I do by multiplying the % of patients offered access by the % of those offered who actually accessed data, giving the total % of patients accessing records (e.g., for 2015, 87% × 30% ≈ 26.1%).  That's the number we really care about anyway.

Now I have three points, which is JUST barely enough to estimate the parameters of the logistic growth curve.  How to fit the data?  Well, this paper was good enough to tell me.  Basically, you compute an error function, which in the paper was least squares (a common enough curve-fitting criterion), over the range of possible parameter values.  So I did, and generated a couple of surfaces which show me where I might find the parameters that give the best fit.  Can I tell you how good the fit is?  Not really.  I'm working with three data points and estimating two parameters, so there's only one degree of freedom remaining.  This is about as back-of-the-napkin a hack as it gets.
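
If you want to reproduce the arithmetic, here's roughly what that grid search looks like (a minimal Python sketch; the parameter ranges are illustrative, not the exact grid I used):

    import math

    # (year, fraction of ALL patients who accessed their data), from the table above
    data = [(2013.0, 0.129), (2014.0, 0.209), (2015.0, 0.261)]

    def logistic(t, k, t0):
        # P(t) = 1 / (1 + e^(-k(t - t0))), saturating at 100%
        return 1.0 / (1.0 + math.exp(-k * (t - t0)))

    def sse(k, t0):
        # Sum of squared errors over the three observed points
        return sum((logistic(t, k, t0) - p) ** 2 for t, p in data)

    # Brute-force grid over the two parameters: rate k, inflection year t0
    best = min(
        ((k / 100.0, 2015.0 + i / 12.0)
         for k in range(20, 80)        # k from 0.20 to 0.79
         for i in range(0, 60)),       # t0 from Jan 2015 to Dec 2019, monthly
        key=lambda kt: sse(*kt),
    )
    print("best (k, t0):", best)       # lands near k ≈ 0.4, t0 in 2017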

Let's look at some pictures to see what we can see here:
First up, we need to look at the error surface to find the minimum.

[Figure: error surface over the (k, t0) parameter grid]


We can pretty well see that it's somewhere in the lower blue region, but we have to get to a much finer level of detail to pin it down.  Sometimes tables are better than pictures.  The minimum is somewhere in the middle of this table; if I expanded the table even further, you'd see the errors getting larger all around the area we are focused on.

Error by candidate inflection date (rows) and growth rate k (columns):

           k=0.36   k=0.38   k=0.40   k=0.42   k=0.44   k=0.46
Jan-2017    1.03%    0.70%    0.44%    0.26%    0.14%    0.07%
Apr-2017    0.54%    0.30%    0.15%    0.06%    0.04%    0.07%
Jul-2017    0.23%    0.09%    0.04%    0.05%    0.12%    0.25%
Oct-2017    0.07%    0.04%    0.08%    0.19%    0.36%    0.58%
Jan-2018    0.05%    0.12%    0.26%    0.47%    0.72%    1.02%

The minimum error for this fit puts the inflection point somewhere in 2017, which is also where the curve crosses 50%.

There's still one more picture to show.  This projects us forward along the boundaries of the fitted range.  As you can clearly see, the projections show we are nearly at the 50% point.  That's a major milestone, and something to crow about.  It also tells me that unless there's another bump to push things ahead faster, we won't get to 90% access until sometime between 2021 and 2024, five to seven years from now.  We have just such a bump (API support) coming in 2018.
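
For the record, the 90% crossing follows directly from the curve: P(t) = 0.9 when t = t0 + ln(9)/k ≈ t0 + 2.2/k.  With t0 in mid-2017 and k between 0.36 and 0.46, that lands roughly between 2022 and 2023, and letting t0 slide across its fitted range widens the window to about 2021 through 2024.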


It isn't all as dark and gloomy as the GAO report suggests, but it might have been if that report were telling us where we are now, instead of where we were two years ago.

This is a rough-and-ready calculation.  I'm using data that was gathered through different means, and which may not be comparable.  I don't have enough points to make any statements about the reliability of the projections.

It's still good enough for me.  It shows that things aren't as bad as the GAO report might have suggested.  ONC and HHS really need to PLAN ahead for this kind of reporting, so that we can create the dashboards needed to produce this sort of information as it becomes available, instead of 2 years after the fact.

Data reporting for 2016 is just around the corner.  Two numbers, the % of patients offered access and the % of patients who use it when it is offered, will be enough to tell me how good my projection is for now.  If those two numbers multiplied together come anywhere between 30 and 40%, I'll feel very comfortable with this projection.