
Thursday, May 11, 2017

L or l? The UCUM Liter Code

Filed under stupid stuff I know.

The code for Liter in UCUM Case Sensitive is both l and L.  You can use either.

See the UCUM documentation for details.  Basically, what they say:
In the case sensitive variant the liter is defined both with an upper case ‘L’ and a lower case ‘l’. NIST [63 FR 40338] declares the upper case ‘L’ as the preferred symbol for the U.S., while in many other countries the lower case ‘l’ is used. In fact the lower case ‘l’ was in effect since 1879. A hundred years later in 1979 the 16th CGPM decided to adopt the upper case ‘L’ as a second symbol for the liter. In the case insensitive variant there is only one symbol defined since there is no difference between upper case ‘L’ and lower case ‘l’.
So much for using codes to define distinct meanings for atoms of this vocabulary.  Ah well.  At least they defined L in terms of l.
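If you're processing UCUM codes in software, the practical upshot is that both symbols must be treated as the same unit in the case sensitive variant. A minimal sketch of normalizing them (my own helper, not part of any UCUM library):

```python
# Hypothetical helper: collapse UCUM's two case sensitive liter symbols
# ('l' and 'L') to one canonical form.  NIST prefers 'L' for the U.S.
LITER_SYMBOLS = {'l', 'L'}

def canonical_liter(code, prefer='L'):
    """Return the preferred liter symbol for either accepted code."""
    return prefer if code in LITER_SYMBOLS else code
```

Note this ignores prefixed units like ml/mL, which carry the same ambiguity.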


Friday, April 28, 2017

From $450 to 450¢

From $450 to 450¢ is a 99% savings.  That's what I recently found in reviewing my medication costs.

I recently compared prices on a medication a family member is using.  It comes in several dose forms and strengths.  To keep the math the same, I'll call the strengths 1, 2, 3, 4, 6 and 9.  They can be taken in capsule or tablet form (but not in all strengths), or in oral (pediatric) form.  Not all combinations are available from our PBM, but most are.

Here are the approximate prices and sigs for a 90 day supply:

Strength 1 Capsule 6xDay: $13.50
Strength 2 Capsule 3xDay: $4.50
Strength 4 Capsule + Strength 2 Capsule: $17.50
Strength 1 Tablet 6xDay: $561.00
Strength 2 Tablet 3xDay: $422.00
Strength 6 Tablet 1xDay: $445.00
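The headline arithmetic is easy to check (prices rounded as in the list above):

```python
# $4.50 (Strength 2 capsule, 3x/day) vs. roughly $450 for the tablet forms,
# both for a 90-day supply of the same total daily dose.
capsule_cost = 4.50
tablet_cost = 450.00

savings = (tablet_cost - capsule_cost) / tablet_cost
assert round(savings * 100) == 99  # a 99% savings
```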

To be very blunt about this: What the FUCK!?

So now I've gone through EVERY prescription in the household, and so far this is the only one that is that whacked out. But like I said, WTF?

   -- Keith

Wednesday, April 26, 2017

Refactoring Standards

Code and standards (which are a special kind of code) grow with age.  When you started with your code, you had a plan.  It was fairly simple and so you wrote something simple, and it worked.  After a while you realized you could make it do something special by tweaking a little piece of it. Sometimes (if you designed it right), you can add significant functionality.  After a while, you have this thing that has grown to be quite complex.  Nobody would ever design it that way from the start (or maybe they would if they had infinite time and money), but it surely works.

The growth can be well-ordered, or it can have some benign little outgrowths, or they can even be tumorous.  Uncontrolled growth can be deadly, whether to a biological or a computer system.  You have to watch how things grow all the time.  After some time, the only solution may be a knife.  Sometimes the patient's guts get a major overhaul, even though the patient remains fully functioning and alive afterwards.

When the normal adaptive processes work in standards, these growths naturally get pruned back.
It's interesting to watch some of the weird outgrowths of CCR become more and more vestigial over time through various prunings in CCD and CCDA.  FHIR on the other hand, well, that started as a major restructuring of V3 and CDA, and is very much on the growing side.

   -- Keith

Wednesday, April 19, 2017

Separate but Equal Isn't

Every now and then (actually, more frequently than that), two topics from different places somehow collide in my brain to produce synthesis.  This post is a result of that:

I've been paying a lot of attention lately to care planning.  It's been a significant part of the work that I've been involved in through standards development over the last decade, and it comes to special focus for me right now as I work on implementation.

In another venue, the topic of data provenance came up, and the concern that providers have about patient sourced data.  Much of the challenge for providers is that patients don't necessarily use the "right terms" for clinical conditions, or "don't understand the science" behind their problems, or that the data is somehow less exact.  My father's use of the term "mini-stroke" so annoyed one of his providers (a neurologist who reported to him: "there is no such thing"), that it likely resulted in his failing to get appropriate care for what was probably transient ischemia, resulting in an actual stroke, which eventually led to his ultimate demise through a convoluted course of care.

This concern about veracity or provenance of patient data leads to designs which separate patient health concerns and information from provider generated data.  Yet those same concerns initiate the evaluation process starting first with the patient's subjective experience, gathering of objective evidence through skilled examination, knowledgeable assessment of those details, leading to cooperative and effective planning.

The care planning work that I've been involved in over the past decade originated in early work on patient centered medical homes driven by physician groups, incorporated work from several different nursing communities in HITSP, HL7 and IHE, and eventually resulted in a patient plan of care design, which was subsequently evolved into work implemented in both the HL7 CCDA and FHIR specifications.

The patient's plan of care should NOT originate from a separate but equal collection of data, but rather, from an integrated, patient included approach, that does not treat the patient subjective experience as being any less valuable to the process.  Both FHIR and CCDA recognize that in their designs.  After all, if the patient didn't have any complaints, physicians wouldn't have any customers.

It's past time we integrate patients into the care process with their physicians, and keeping their data "separate" isn't the right way to go.  If my provider wants to be my medical home, he needs to remember: it's my home, not his, and we, as implementers, need to help with that.

   -- Keith

Friday, March 31, 2017

The Longitudinal Identity of a CCD

This question comes up from time to time: for a given patient, is there a unique identifier that identifies the CCD for the patient as it evolves over time?

The answer is no, but to understand that, we need to talk a little bit about identifiers in CDA and how they were intended to be used.

Every CDA released into the wild has ONE and only ONE unique identifier by which it is uniquely known to the world.  That is found in /ClinicalDocument/id.  During the production of a clinical document, there are some workflow cases where the document has to be amended or corrected.  And so there is a need to identify a "sequence" of clinical documents, and possibly even to assign that sequence an identifier.  The CDA standard supports this, and you can find that in /ClinicalDocument/setId.

BUT... that field need not be used at all.  You can also track backwards through time using /ClinicalDocument/relatedDocument/parentDocument/id to see what previous version was revised.  And the standard requires neither of these fields to be used in any workflow.
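That backwards walk is easy to mechanize. Here's a minimal sketch (my own, not from any toolkit) that follows the parentDocument chain through a set of documents held in memory; the document ids are invented:

```python
# Walk a CDA revision chain backwards via relatedDocument/parentDocument/id.
import xml.etree.ElementTree as ET

NS = {'hl7': 'urn:hl7-org:v3'}

def parent_id(doc_xml):
    """Return the @root of relatedDocument/parentDocument/id, or None."""
    doc = ET.fromstring(doc_xml)
    el = doc.find('hl7:relatedDocument/hl7:parentDocument/hl7:id', NS)
    return el.get('root') if el is not None else None

def revision_chain(doc_store, start_id):
    """Given {id: document xml}, list ids from newest back to the original."""
    chain = [start_id]
    while (pid := parent_id(doc_store[chain[-1]])) is not None:
        chain.append(pid)
    return chain
```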

So ... couldn't I just use setId to track the CCD for each patient?

Yes, but fundamentally, you'd be doing something that fails to take into account one of the properties of a ClinicalDocument, and that is context.  Context is the who/where/what/when metadata associated with the activity that the clinical document is reporting on.  When setId is the same for two clinical documents, the assumption is that the context associated with the content is the same, OR is being corrected, not that it is being changed wholesale.

The episode of care being reported in a CCD is part of its context, as is the when the information was reported.  If you want to report on a different episode of care, it's not just a new document, it's also a new context.  And that's why I suggest that setId should be different.

This is mostly a philosophical debate, rather than one related to what the standard says, but when you think about the history of clinical documents, you might agree that it makes sense.

Clinical Documents aren't "living" documents.  A key definition of a CCD document is a summary of relevant and pertinent data "at a point in time."  It's that "point in time" part of the definition that makes the CCD document a static item.

Thursday, March 30, 2017

Diagnostic Imaging Reports in a CCD

I clearly missed something somewhere, probably because I assumed nobody would try to include a document in another document after having hammered people about it for a decade. My first real blog post was on this topic.

Here’s the challenge:  According to the Meaningful Use test procedures for View, Download and Transmit, diagnostic imaging reports are to be included in CCD content.  The test data blithely suggests this content:
  •  Lungs are not clear, cannot rule out Anemia. Other tests are required to determine the presence or absence of Anemia.

I can see where this summary of a full report might appropriately appear in the “Results” section of a CCD document, but this isn’t a diagnostic imaging result.  Having looked at some sample Diagnostic Imaging Reports, I’m reminded of Liora’s “This is a document” slides she uses in her Intro to CDA class, and for good reason.

The content might be stored as a text report, a word document, a PDF, or even worse, a scanned image.  It really depends on what the supplier of the report provides.

The NIST guidance is sub-regulatory, but these are the testing guidelines set forth for the certifying bodies.  However, what I also missed is that the regulation also says that CCD is the standard for imaging reports.  It's in that line of text that reads:

(2) When downloaded according to the standard specified in § 170.205(a)(4) following the CCD document template, the ambulatory summary or inpatient summary must include, at a minimum, the following data (which, for the human readable version, should be in their English representation if they associate with a vocabulary/code set):

(i) Ambulatory setting only. All of the data specified in paragraph (e)(1)(i)(A)(1), (2), (4), and (5) of this section.

(ii) Inpatient setting only. All of the data specified in paragraphs (e)(1)(i)(A)(1), and (3) through (5) of this section.

Clear as mud right?  Here's what (e)(1)(i)(A)(5) says:

(5) Diagnostic image report(s).

Oh damn.

But wait!  I can create a DIR, change the document type and header details a bit, and then magically it becomes a CCD.  So, can I create a CCD for each diagnostic image, and in that way have a "summary" representation of the report?

Nope: Back to the test guide:

3. The tester uses the Validation Report produced by the ETT: Message Validators – C-CDA R2.1 Validator in step 2 to verify the validation report indicates passing without error to confirm that the VDT summary record is conformant to the standard adopted in § 170.205(a)(4) using the CCD document format, including: the presentation of the downloaded data is a valid coded document containing:

  •  all of the required CCDS data elements as specified in sections (e)(1)(i)(A)(1); 
  •  for the ambulatory setting, the Provider’s Name and office contact information as specified in section (e)(1)(i)(A)(2); 
  •  for the inpatient setting, admission and discharge dates and locations, discharge instructions and reason(s) for hospitalization) as specified in section (e)(1)(i)(A)(3);
  •  laboratory report(s) as specified in section (e)(1)(i)(A)(4), when available; and 
  •  diagnostic imaging report(s) as specified in section (e)(1)(i)(A)(5), when available.  
Oh well.  Seems like I need to get my hammer out, this time to fit an entire document into a sentence shaped hole.

Friday, March 17, 2017

Patient Access: It's coming at you faster than you might think

This crossed my desk this morning via POLITICO:

GAO: PEOPLE AREN'T LOOKING AT THEIR ONLINE DATA: The Government Accountability Office took aim at the accessible data requirement in meaningful use in a report released Wednesday. The report, requested by the Reboot group (which includes Sens. Lamar Alexander and John Thune), argues that while the vast majority of hospitals and eligible professionals make data available for patient and caregiver consumption, the percentage actually following through isn't high — perhaps 15 to 30 percent, depending on the care setting and data analyzed.
Now, why that's the case is the familiar debate — is it a lack of interest from patients, or perhaps technical difficulties? A GAO analysis suggests the former is definitely at play. The office analyzed the top 10 most popular EHRs and found patient participation rates ranging from 10 to 48 percent.

Ultimately, however, GAO hits ONC for its lack of performance measures for its initiatives — whether, for example, using Blue Button increases patient uptake of data. HHS and ONC concurred with the recommendation.

Here's my take on this.

TL;DR: It isn't as bad as the GAO makes it out to be.  The report is based on nearly two year old data, and based on it and prior data, we seem to be within reach of a major milestone: 50% of all patients having accessed their data.

Remember early fax machines? e-mail? These are technology diffusion challenges which took a number of years (even decades) to get over.  We've finally reached a stage where nearly 90% of all providers are capable of offering patients their data in hospital or ambulatory settings, and now people are really getting to use this stuff.

What is the GAO report telling us?  First of all, it is telling us about events in 2015, not 2016 as you might expect.  It takes the government a while to get their act together acting on the data that they gather, and the accompanying study also took some time to set up and execute.  This is in fact why many opponents of the MU deadlines said they were coming at us too fast, because we couldn't even analyze data from prior years to figure out how to course correct before it was time to update the regulations.  We are hopefully past that stage now.

Secondly, we need to look at this in a slightly different context.  Remember I said this was a technology diffusion challenge.  If you are a long time reader of this blog, you might recall an article I wrote about the curve you might expect for technology adoption.  It's a logistic growth curve.

The GAO numbers are saying we are around 30% for ambulatory use, and 15% for hospital use of data access by patients in 2015.  Where are we now?  It's hard to project forward from one data point, because fitting the logistic curve requires estimating two parameters, a time scale, and the inflection point.  The inflection point is at 50%, and is where the rate of adoption reaches its maximum value.

To make something useful out of this data, you have to go back to some similar ONC reports on patient utilization.  You can find the data in ONC Data Brief 30, which includes information from ONC Data Brief 20.  The challenge here is that the GAO report doesn't quite report the same thing, so you have to adjust a bit.  I know from a colleague of mine from early IHE years that some patients get most of their healthcare in the hospital setting (i.e., the ER), while others get their care principally from Ambulatory providers, and others have used both.  That means that some patients have been given access through a hospital, and others through ambulatory providers, and the number of patients who have been given access to their health data is overall, probably greater than the sum of the parts, but we know these are largely overlapping sets.  So, if I simply take the ambulatory data from the GAO report, and compare the number offered access and the number who used it, to similar figures from the previous ONC briefs, I can start to get somewhere.  Here's the basic data.

Year Offered Accessed Total
2013 28% 46% 12.9%
2014 38% 55% 20.9%
2015 87% 30% 26.1%

The number offered access is different in each year, so I have to normalize the results, which I do by multiplying the % of patients offered access to the % offered access who did access data, to get the total % of patients accessing records. That's the number we really care about anyway.

Now I have three points, which is JUST barely enough to estimate the parameters of the logistic growth curve.  How to fit the data?  Well, this paper was good enough to tell me.  Basically, you compute an error function, which in the paper was least squares (a common enough curve fitting function), over the range of possible parameter values.  So I did, and generated a couple of surfaces which show me where I might find the parameters that give the best fit.  Can I tell you how good the fit is?  Not really.  I'm working with three data points, and estimating two parameters.  There's only one degree of freedom remaining.  This is about as back of the napkin hack as it gets. 
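For the curious, the grid search itself is only a few lines. This sketch redoes the fit from the three normalized points above (offered % × accessed %); the grid ranges and step sizes are my own choices:

```python
# Least-squares grid search for the logistic curve's two parameters:
# the inflection point t0 (where access hits 50%) and a growth scale.
import math

# (year, fraction of all patients who accessed their data)
data = [(2013, 0.28 * 0.46), (2014, 0.38 * 0.55), (2015, 0.87 * 0.30)]

def logistic(t, t0, scale):
    return 1.0 / (1.0 + math.exp(-scale * (t - t0)))

def fit(t0_range=(2015.0, 2020.0), scale_range=(0.20, 0.80)):
    best = (float('inf'), None, None)  # (sse, t0, scale)
    t0 = t0_range[0]
    while t0 <= t0_range[1]:
        scale = scale_range[0]
        while scale <= scale_range[1]:
            # sum of squared errors over the three data points
            sse = sum((logistic(t, t0, scale) - p) ** 2 for t, p in data)
            if sse < best[0]:
                best = (sse, t0, scale)
            scale += 0.02
        t0 += 0.05
    return best
```

With three points and two parameters this is exactly the back-of-the-napkin exercise described above: the minimum lands in 2017, the estimated 50% point.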

Let's look at some pictures to see what we can see here:
First up, we need to look at the error surface to find the minimum.

Error Surface

We can pretty well see it's somewhere in the lower blue region, but we have to get to a much finer level of detail to find it perfectly.  Sometimes tables are better than pictures.  The minimum is somewhere in the middle of this table.  If I expanded data points in the table even further, you'd see they are getting larger all around the area we are focused on.

0.36 0.38 0.40 0.42 0.44 0.46
Jan-2017 1.03% 0.70% 0.44% 0.26% 0.14% 0.07%
Apr-2017 0.54% 0.30% 0.15% 0.06% 0.04% 0.07%
Jul-2017 0.23% 0.09% 0.04% 0.05% 0.12% 0.25%
Oct-2017 0.07% 0.04% 0.08% 0.19% 0.36% 0.58%
Jan-2018 0.05% 0.12% 0.26% 0.47% 0.72% 1.02%

The minimum error for this fit occurs somewhere in 2017, which is also the 50% saturation point.

There's still one more picture to show.  This projects us forward along the boundaries of the fitted range.  As you can clearly see, the projections show we are nearly at the 50% point.  That's a major milestone, and something to crow about.  It also tells me that unless there's another bump to push things ahead faster, we won't get to 90% access until sometime between 2021 and 2024, five to seven years from now.  We have just such a bump (API support) coming ahead in 2018.

It isn't all as dark and gloomy as the GAO report suggests, but it might have been if that report was telling us where we were now, instead of where we were two years ago.

This is a rough and ready calculation.  I'm using data that was gathered through different means, and which may not be comparable.  I don't have enough points to make any statements about the reliability of the projections.

It's still good enough for me.  It shows that things aren't as bad as the GAO report might have suggested.  ONC and HHS really need to PLAN ahead for this kind of reporting, so that we can create the dashboards needed to produce this sort of information as it becomes available, instead of 2 years after the fact.

Data reporting for 2016 is just around the corner.  Two numbers: The % of patients offered access, and the % of patients who use it if it is offered will be enough to tell me how good my projection is for now. If those two numbers multiplied together come anywhere between 30 and 40%, I'll feel very comfortable with this projection.

Thursday, March 16, 2017

I got my Data

A while back several of us HIT Geeks and e-Patients were having a discussion about HIPAA, patient data access challenges, et cetera.  Prior to that I had written a post connecting the various dots between HIPAA, the Omnibus rule, MIPS and MACRA, and the Certification rule.

In that conversation I accepted an implicit challenge to get my health data via unencrypted e-mail.  I wrote to someone at my healthcare provider organization in early January, and within two days I had gotten some resistance; then I got caught up in various meetings and never followed up.  My healthcare provider has a portal, which I use and through which I can quite easily get my data already, and in fact often do, which was probably another reason for resistance.

When I finally responded in early February with my acknowledgement of the risks and the fact that I understood them, I got my data the very next day.  I'd made my point.  Two emails and I had it.  Any delays in getting it were my own fault for not following up.

Caveats: I have a good relationship with my provider organization, and also know important thought leaders in that organization, and they know me.  I was able to make points others might not be able to.  But when breaking trail, it's usually a good idea to put the person most experienced at it out in front.  And that's where I was and what I did.

  -- Keith

Wednesday, March 15, 2017

Principle Driven Design

When you need to get something done quickly, and it's likely to involve a lot of discussion, one tactic I sometimes use is to get everyone to agree upon the principles which are important to us, and then to agree that we will apply those principles in our work.

It's quite effective, and can also be helpful when you have a solo project that is confused by lots of little different relationships between things.  If you work to establish what the relationships are in reproducible ways, and connect them, what you wind up with is a design that follows from a set of principles ... or even simple mathematical relationships.  And the output is a function of the application of those principles.

It works quite well, and when things pop out that are odd, or don't work out, I find they are usually a result of some principle being applied inappropriately, or that your data is telling you about some outlier you haven't considered.  When HL7 completed the C-CDA 2.1 implementation guide in 6 weeks (a new record I think for updating a standard), we applied this tactic.

Having spent quite a few weeks dealing with the implementation details, I can tell you that it seems to have worked quite well.  And my most recent solo foray into obscure implementation details was also completed quite quickly.


Friday, March 10, 2017

Art becomes Engineering when you have a Reproducible Process

The complaint that software engineering isn't really engineering, because each thing you make is its own piece of art, is often true.  But the real art of software engineering isn't making one-offs.

Rather, it is figuring out how to take simple inputs into a decision making process that generates high quality code in a reproducible way.

I often build code that is more complicated than if I just wrote it directly, because I take the time to figure out what data is actually driving the decision making process that the code has to follow.  Then I put that data into a table and let the code do its work on it.

When the process has been effectively tested, adding more data adds more functionality at a greatly reduced cost.
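A trivial illustration of the style (the message types and handlers here are invented, not from any real system):

```python
# Table-driven dispatch: the decision making lives in data, not branches.

def handle_admit(msg):
    return "admitted " + msg['patient']

def handle_discharge(msg):
    return "discharged " + msg['patient']

# The table.  Supporting a new event type is one new row plus its handler,
# rather than another arm on a growing if/elif chain.
HANDLERS = {
    'ADT^A01': handle_admit,
    'ADT^A03': handle_discharge,
}

def process(msg):
    handler = HANDLERS.get(msg['type'])
    if handler is None:
        raise ValueError('unsupported message type: ' + msg['type'])
    return handler(msg)
```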

This same method is, by the way, how FHIR is built.  FHIR is truly engineered.  The art was in developing the process.


Wednesday, March 8, 2017

Monitor your "improvements"

Sometime last year, to better manage my blog I thought I would try out Google+ Comments on it.

It turned out to be a disaster on three fronts:
1.  I could no longer delete inappropriate comments.
2.  Commenters must have a Google+ account, a restriction I find inappropriate on this blog.
3.  I no longer received e-mails about comments on the blog, which has now put me six months behind answering questions I didn't even know were being posted.

All of that because I failed to monitor the impact of what my "intervention" did.  Don't I know better? Yes, I do.

Year before last I recall a presentation at AMIA by Adam Wright, PhD, a fellow alum of OHSU, on how changes to a clinical decision support system resulted in a failure for certain notifications to fire, and thus to be acted upon.  While I cannot find the paper, a related poster is here. One of my favorite classes at OHSU was on how to measure the impact of an intervention quantitatively.

I should have been able to detect the low volume of questions, but didn't.  Fortunately in my case, I just failed to get feedback and had reduced capacity to use my blog.  That situation is now corrected.


Wednesday, February 22, 2017

Transforming from a fhir:identifier to V3:II

So this blog entry has been sitting here waiting for me to post something about how to transform FHIR Identifiers into CDA II elements in a way that works 100%.  The DocumentReference resource has an example that shows this content:

  <system value='urn:ietf:rfc:3986'/>
  <value value='urn:oid:'/>

What this content means is that the master identifier for the document being referenced is a URI as defined by RFC 3986.  For CDA geeks it should be obvious that the above implies that the content of the CDA document would have this in it:

   <id root=''/>

But what about other URIs?  Well, if the URL format is urn:uuid:some-uuid-string, that's also pretty straight forward.  The output for id is simply:

  <id root='some-uuid-string'/>

OK, good.  What if the value is some other form of url?  What do we do now?  Grahame fixed this one for us by registering a new OID.

  <id root='2.16.840.1.113883.4.873' extension='some other form of url'/>

So, great, now we know how to handle all the cases where identifier.system = 'urn:ietf:rfc:3986'

What about the rest of the possibilities?
There are only three URLs you need to map to an OID described in the FHIR Identifier registry.  You can set up those lookups in an XML file somewhere.

How then, would you map an identifier containing a url in root whose corresponding OID you don't know?

There are three ways to handle this case:
1. Come up with something that produces a valid II datatype in CDA, recognizing that it might not be totally cool.
2. Set the II as being not fully specified (and so requiring use of nullFlavor), perhaps committing the URL to another attribute so that the receiver might have some chance of fixing it on their end.
3. Give up completely.

Option 3 is not an option I like, so I discarded it.  You can do what you want, I almost never give up.
Option 2 is what I decided to use for my purposes.  It works, is technically valid, and might cause some complaints, but I can say, well, that URL is NOT KNOWN to my system, so we produce something technically correct with all the info the receiver might use to fix it.  So for:
    <system value=''/>
    <value value='my example id'/>

I would use something like:
  <id nullFlavor='UNK' extension='my example id'/>
This says, I don't know the full II, I've told you what I know, you deal with it.

But, you could cheat, and say something like: "We know that url#fragment-id means the thing inside url with fragment identifier fragment-id, so we'll treat it like that."  This is what you would get then:

  <id root='2.16.840.1.113883.4.873'
      extension=' example id'/>

I'm not fond of this, because it isn't a totally legitimate answer, although most geeks might understand your reasoning, and what you generated.  It also means that you can no longer trust the content of any id that uses the value of 2.16.840.1.113883.4.873, and receivers really should be able to.

Fortunately for my uses, the nullFlavor case will only show up in exceptional circumstances.  All the naming systems I use in FHIR have an associated OID mapping.
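Pulling the cases above together, the whole decision procedure fits in one small function. This is a sketch in Python rather than XSLT; the KNOWN_SYSTEMS table stands in for whatever lookup you build from the FHIR identifier registry, and its one entry is just an example:

```python
# Map a FHIR Identifier (system, value) to the attributes of a CDA II.
HTTP_URL_OID = '2.16.840.1.113883.4.873'  # the OID registered for URLs

KNOWN_SYSTEMS = {
    # naming system URL -> OID; e.g., the U.S. SSN system
    'http://hl7.org/fhir/sid/us-ssn': '2.16.840.1.113883.4.1',
}

def identifier_to_ii(system, value):
    if system == 'urn:ietf:rfc:3986':
        if value.startswith('urn:oid:'):
            return {'root': value[len('urn:oid:'):]}
        if value.startswith('urn:uuid:'):
            return {'root': value[len('urn:uuid:'):]}
        # some other form of URL: use the registered OID as the root
        return {'root': HTTP_URL_OID, 'extension': value}
    if system in KNOWN_SYSTEMS:
        return {'root': KNOWN_SYSTEMS[system], 'extension': value}
    # option 2: technically valid, but flagged as not fully specified
    return {'nullFlavor': 'UNK', 'extension': value}
```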

For what it's worth, the same challenges exist with code system mapping, and I do the same thing: Make sure all my FHIR code systems map to an OID as well.

Wednesday, February 15, 2017

An XSLT Design Pattern (and not just for FHIR to CDA conversions)

I have a couple of XSLT design patterns that I've been using and improving over the past decade, which I pulled out again last night to do some FHIR to CDA transformations.  XSLT isn't what you'd call a strongly object oriented language, nor a strongly typed one.  However, the design patterns I used borrow from experiences I've had with strongly typed, object oriented languages such as Java.

The design pattern is a general one which I've used for a number of strongly typed schema to schema conversions.  FHIR to CDA is just one example.  V2 XML to CDA (or back) is another one, and I've also done this for CDA to ebXML (XDS related work), and at one point for CCR and CDA, even though I'd never really call CCR strongly typed.

The first design pattern is in how I define my transformations.  There are two ways in which you can call a template, and they have different benefits.  Sometimes you want to use one, and sometimes you want to use the other.  When going from one format to another in XML, usually there is a target element that you are trying to create, and one or more source elements you are trying to create it from.

My first method uses apply-templates.
<xsl:apply-templates select='...' mode='target-element-name'/>

You can also call apply-templates with parameters.
<xsl:apply-templates select='...' mode='target-element-name'>
  <xsl:with-param name='some-param' select='some-value'/>
</xsl:apply-templates>

This is pretty straight forward.  I use mode on my transformation templates for a variety of reasons, one of which is that it allows you to build transformation trees whose processing the XSLT engine automates.  More often than not, the name of the mode is the output element I'm targeting to generate.

The second method works more like a traditional function call, using call-template:
<xsl:call-template name='target-element-name'>
  <xsl:with-param name='this' select='the-source-element-to-transform'/>
  <xsl:with-param name='some-param' select='some-value'/>
</xsl:call-template>

Now, if the source element is always the same for a given target element (e.g., a FHIR address going to a CDA addr element), you create your actual template signature thus:

<xsl:template name='addr' mode='addr' match='address'>
  <xsl:param name='this' select='.'/>
  ...
</xsl:template>

Note that name and mode are the same.  What you want to create here is an <addr> element in CDA. The input you want to convert is a FHIR address element (I checked several dozen places, just about everywhere the Address datatype is used, it is always called address in FHIR resources).  So that explains the first line.  The name of the template, and its mode, identify what you are trying to create.

The second line is critical, and it has some serious impact on how you write your templates.  Instead of assuming a default context for your transformation, the <xsl:param name='this' select='.'/> sets up a parameter in which you can explicitly pass the transformation context, but which will implicitly use the current context if no such parameter is passed.  When you write your transformation, you have to be careful to use $this consistently, or you'll have some hard to find bugs, because sometimes it might work (e.g., when called using apply-templates), but other times not (e.g., when called using call-template).  It takes some diligence to write templates this way, but after a while you get used to it.

Here's the example for converting a FHIR Address to a CDA addr element.  It's pretty obvious:

<xsl:template name='addr' mode='addr' match='address'>
  <xsl:param name='this' select='.'/>
  <xsl:variable name='use'><xsl:choose>
    <xsl:when test='$this/use/@value="home"'>H</xsl:when>
    <xsl:when test='$this/use/@value="work"'>WP</xsl:when>
    <xsl:when test='$this/use/@value="temp"'>TMP</xsl:when>
  </xsl:choose></xsl:variable>
  <addr>
    <xsl:if test='$use!=""'>
      <xsl:attribute name='use'><xsl:value-of select='$use'/></xsl:attribute>
    </xsl:if>
    <xsl:for-each select='$this/line'>
      <streetAddressLine><xsl:value-of select='@value'/></streetAddressLine>
    </xsl:for-each>
    <xsl:for-each select='$this/city'>
      <city><xsl:value-of select='@value'/></city>
    </xsl:for-each>
    <xsl:for-each select='$this/state'>
      <state><xsl:value-of select='@value'/></state>
    </xsl:for-each>
    <xsl:for-each select='$this/postalCode'>
      <postalCode><xsl:value-of select='@value'/></postalCode>
    </xsl:for-each>
    <xsl:for-each select='$this/country'>
      <country><xsl:value-of select='@value'/></country>
    </xsl:for-each>
  </addr>
</xsl:template>

[There's a little XSLT shorthand nugget in the above transform, another design pattern I use a lot:
  <xsl:for-each select='x'> ... </xsl:for-each>

 is a much simpler way to say:
  <xsl:if test='count(x) != 0'> ... transform each x ... </xsl:if>

especially when x is a collection of items.]

Now, if you want to drop an address in somewhere, you can do something like this:
  <xsl:apply-templates select='address' mode='addr'/>

Or, you can also do it this way:
  <xsl:call-template name='addr'>
    <xsl:with-param name='this' select='$patient/address'/>
  </xsl:call-template>

Either works, and in developing large scale transformations, I often find myself using the same template in different ways.  The elegance of this design pattern in XSLT extends further when you have two or more source elements going to the same target element.

In that case, you have two templates with the same mode, but different match criteria.  AND, you have a named template.  Let's presume in the FHIR case you want to map a Resource.id or Resource.identifier to an id data type (it's not too far-fetched an idea, even if not one I would use). You then write:

  <xsl:template mode='id' match='id'> ... </xsl:template>
  <xsl:template mode='id' match='identifier'> ... </xsl:template>

  <xsl:template name='id'>
    <xsl:param name='this' select='.'/>
    <xsl:apply-templates select='$this' mode='id'/>
  </xsl:template>

And the final named template simply uses the match type to automatically select the appropriate template to use based on your input value.

Sometimes you want to create an output element that isn't always called the same thing, even though it uses the same internal structure.  Using default parameters, you can set this up.  Let's look at the example above.  Not EVERY id element is named id.  Sometimes in CDA they have a different name depending on what is being identified.  For example, in the CDA header, you have setId (in fact it's nearly the only such case for id).  A more obvious case is code.  Most of the time, code is just code, but sometimes it's ____Code (e.g., priorityCode), but the general structure of a priorityCode (CE CWE [0..1]) is pretty much the same as for code (CD CWE [0..1]).  So, if you were going to convert a FHIR CodeableConcept or Coding to a CDA CD/CE data type, you might use the same transformation.

  <xsl:template mode='code' match='code' name='code'>
    <xsl:param name='this' select='.'/>
    <xsl:param name='element' select='"code"'/>
    <xsl:element name='{$element}'>
      ...
    </xsl:element>
  </xsl:template>

You get the idea.  Usually, you want to generate <code>, and so you say nothing.  Sometimes you want to generate something different, and so you add a parameter.
  <xsl:call-template name='code'>
    <xsl:with-param name='this' select='$that/code'/>
    <xsl:with-param name='element' select='"priorityCode"'/>
  </xsl:call-template>

Remember you can also pass parameters using apply-templates, so this also works:
  <xsl:apply-templates select='$that/code' mode='code'>
    <xsl:with-param name='element' select='"priorityCode"'/>
  </xsl:apply-templates>

Enough chatter, back to work.  HIMSS is only a couple of days away.

   - Keith

P.S.  I've missed being able to post while I've been heads down working towards HIMSS and the Interop showcase.  Hopefully I'll get more time when I get back.

Friday, January 20, 2017

FHIR Product Roadmap Jan 2017 update

This crossed my desk this morning via Grahame Grieve (HL7 Product Manager for FHIR).

   -- Keith

R3 plans
The FHIR project is presently finalising "STU3" (Standard for Trial Use, release 3). This 3rd major milestone is currently close to completion. We've been meeting in San Antonio this week to finalise ballot reconciliation, perform testing and quality activities, and we are now focusing on preparing the final publication package. Following our publication plan we expect to be publishing release 3 on or about Mar 20.
R4 plans
Once R3 is published, we will start working on release 4. The various committees that manage the different parts of Release 4 have been discussing their scope of work for R4, and planning their engagement and implementation activities to support that this week.
Some of the major things under consideration for Release 4:
· Improvements across all domains
· CDS Hooks integrated in the FHIR specification
· Query language framework
· Support for integrating research and clinical practice
The most significant change is that R4 is expected to be the first 'normative version'. It's important to understand what that means. We will continue to follow our overall maturity model, where content gradually matures through testing and implementation activities that demonstrate success in the real world. The end of the process is "normative" ("FMM level 6"), where the standard becomes stable, and breaking changes are no longer considered.
Only some portions of the specification are candidates for being normative. We are currently considering balloting the following parts of the specification as normative:
· Infrastructure (API, data types, XML/JSON formats, conformance layer resources like StructureDefinition and ValueSet)
· Administration (amongst others Patient, Organization, Practitioner)
We will continue to seek and receive comments about this. Some clinical resources may be considered, depending how implementation experience unfolds this year.
Overall planned R4 timeline:
· Dec 2017: publish first draft of R4 for comment (finalise plans for normative sections)
· Jan 2018: first R4 based connectathon(s)
· April 2018: ballot R4
· May – Sept 2018: ballot reconciliation
· Oct 2018: publish FHIR R4
We will conduct a round of market consultations in Aug/Sept 2017 to seek comment on this timeline from the FHIR community.
Note that this timeline anticipates that we publish R4 in October irrespective of the outcome of the normative ballot. Anything that has not passed normative ballot will continue to be published as STU. We are still working on preparation, quality and balloting processes to support the normative FHIR ballot.
Longer term, we anticipate following R4 with a roughly 18 month publication cycle, with increasing normative content.
Implementation Progress
FHIR is a specification for a common API for exchanging healthcare data between information systems. Any information system supporting the healthcare system can choose to implement the common API, and exchange data following the rules. FHIR enables a 'healthcare web' to exist, but doesn't actually create it.
HL7 is pleased to work on the FHIR specification with many hundreds of partners, who are all implementing the specification to exchange data in service of the healthcare needs of their enterprises, their customers, and, ultimately, patients. HL7 does not 'implement' the specification (other than various prototype/test services) – our partners and other healthcare services do.
Argonaut Project
One particularly important sub-group of the FHIR community is the Argonaut project, a joint project of major US EHR vendors to advance industry adoption of FHIR. We've had many questions about the Argonaut implementation timeline for EHR access. With regard to the Argonaut community:
· The Argonaut STU2 specification for Common Clinical Data Set elements is close to being finalized and will be announced shortly. The Argonaut STU3 specification for Provider Directory will be published after final balloting of STU3
· Most Argonaut members who are certifying an API with ONC are using the Argonaut specification; most certifications are expected in Q1/Q2 2017
· Software roll-outs have commenced — progress will vary depending on the vendor
· It is presently unknown what the adoption rate by provider institutions will be — MU and MACRA in the US provide incentives to make a patient-facing API available by the end of 2017
· Some of the Argonaut interfaces provide additional functionality not yet described in the Argonaut specification, but there is considerable appetite for additional services beyond what is currently available. The Argonaut project will be making announcements about its future plans shortly in a press release, through collaboration channels, and at

Tuesday, January 17, 2017

Semper et semper ascendens deinceps

Always and always riding forward.  If you remember the original reference, you know what's coming next.

I had the honor today of having Wes Rishel in my CDA, XDS and FHIR class.  Wes was the guy who co-opted my skills for his workgroup (Attachments), and for HL7 as a whole.  He has had an outstanding career as an Analyst for Gartner, and is a past co-chair of the HL7 Organization.  Now retired, Wes is doing some side consulting work with his rural health exchange.  He's one of the examples I hold up before me that tell me I'll never need to retire from doing what I love.

Ed Hammond is another young fellow at HL7 whose stamina is outstanding.  Ed has been a mentor to many in HL7 and I count myself among his mentees.  Ed teaches Informatics at Duke University, and has the longest tenure on the HL7 Board of any known person, having reached the pinnacle of leadership at HL7 as Cochair Emeritus.  He's been recognized in many other forums (AMIA for example).  He's so influentially involved in so many places that I want to make a Cards Against Humanity card that says "It wouldn't be the same ______ without Ed Hammond."

Both Wes and Ed are hereby inducted into the 2017 class of the Lords and Ladies of the Ad Hoc Harley.

Wes Rishel and Ed Hammond
Semper et semper ascendens deinceps
(ever and ever riding forward)
And now you both can also add LLAHH (Ladies and Lords of the Ad Hoc Harley) after your names if you so wish.

P.S. Pictures don't lie. Ed never ages. He looks the same as he did in 1990.

Sunday, January 15, 2017

The FHIR $relevant Query

So I finally got my implementation of the "$everything" query working on my HAPI server.  I started this at the FHIR Connectathon whilst waiting for people to hit my server.  It was in response to a need I had to generate a f***-ton of sample data in FHIR for testing and documentation.

Basically I have a code generator that reads FHIR profiles.  It's smart enough to assist me in producing some cool stuff.  After I got done, I wondered what I should do with it.

Given that this week we are closing in on the Relevant and Pertinent ballot, I think I'll just throw it away (just kidding ... I'll keep it around for debugging purposes), but I think the next query I build will be the "$relevant" one.

Here's my initial take on what that might look like:
Allergies = any active allergies.
Conditions = any that have been active in the last 30 days
Medications = any that have been active in the last 30 days, or any that were discontinued for lack of effect or toleration.
Labs = Most recent result of any lab type in last year.
Procedures = last year history and any currently scheduled
Vitals = most recent set
Immunizations = Last year for non-pediatric patients, otherwise all.
Encounters = any upcoming and last 90 days.
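
As a rough sketch, most of the criteria above could be approximated with a set of ordinary FHIR searches.  The base URL here is invented, and the search parameter names are approximations that vary by FHIR version; a real "$relevant" operation would do this filtering server-side:

```python
from datetime import date, timedelta

def relevant_queries(patient_id, base="https://example.org/fhir", today=None):
    """Sketch: FHIR search URLs approximating the $relevant criteria above.

    The base URL is hypothetical and parameter names are approximate; several
    criteria (e.g., "active in the last 30 days") need logic beyond plain search.
    """
    today = today or date.today()
    ago = lambda days: (today - timedelta(days=days)).isoformat()
    return [
        # Active allergies
        f"{base}/AllergyIntolerance?patient={patient_id}&status=active",
        # Conditions; the 30-day activity window needs extra filtering
        # (clinical status and abatement aren't simple date searches)
        f"{base}/Condition?patient={patient_id}",
        # Medications; "discontinued for lack of effect or toleration"
        # likewise needs logic beyond a plain search
        f"{base}/MedicationStatement?patient={patient_id}",
        # Labs from the last year; "most recent per type" is client-side work
        f"{base}/Observation?patient={patient_id}&category=laboratory&date=ge{ago(365)}",
        # Vitals; the most recent set is picked from the results
        f"{base}/Observation?patient={patient_id}&category=vital-signs",
        # Procedures in the last year (scheduled ones also match ge-today-minus-a-year)
        f"{base}/Procedure?patient={patient_id}&date=ge{ago(365)}",
        # Immunizations in the last year (all of them, for pediatric patients)
        f"{base}/Immunization?patient={patient_id}&date=ge{ago(365)}",
        # Encounters in the last 90 days plus any upcoming
        f"{base}/Encounter?patient={patient_id}&date=ge{ago(90)}",
    ]
```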

What's missing here is probably the most recent clinical impression.

-- Keith

Saturday, January 14, 2017

Healthcare Standards Blog in Feedspot Top 100 Healthcare Blogs

This showed up in my inbox yesterday.  I reposted here with Anuj's permission.


Hi Keith,

My name is Anuj Agarwal. I'm Founder of Feedspot.

I would like to personally congratulate you as your blog Healthcare Standards  has been selected by our panelist as one of the Top 100 Healthcare Blogs on the web.

I personally give you a high-five and want to thank you for your contribution to this world. This is the most comprehensive list of Top 100 Healthcare Blogs  on the internet and I'm honored to have you as part of this!

Also, you have the honor of displaying the following badge on your blog. Use the below code to display this badge proudly on your blog.


 Anuj Agarwal
 Founder, Feedspot
 Linkedin . Twitter

Friday, January 13, 2017

Faking it with the FHIR Basic Resource

The FHIR team created the Basic resource to support extensibility.  It works great, except that HAPI only supports one read() method for each resource, and sometimes you have more than one thing for which you need to extend Basic. For my needs, I've been looking at Account and Transaction (representing either a payment or a charge).  So, how do I use Basic to implement these non-FHIR "resources"?

What I finally worked out was to use a named operation.

With one set of operation parameters, the operation responds as for FHIR Read and is idempotent.
With another set of parameters, it responds as for a FHIR Search (and also is idempotent).
With another set it would respond as for Create/Update, where the distinction between Create and Update merely depends on whether the resource included in the POST/PUT contains an identifier or not.
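
The parameter-based dispatch might look something like this sketch (the function and its names are hypothetical, not HAPI's API):

```python
def dispatch_basic_operation(params, resource=None):
    """Hypothetical dispatcher for a named operation on a profiled Basic resource.

    Chooses read/search/create-or-update semantics from what the caller
    supplied, mirroring the three modes described above.
    """
    if resource is not None:
        # Create vs. update is decided solely by whether the POSTed/PUT
        # resource carries an identifier.
        return "update" if resource.get("identifier") else "create"
    if "id" in params:
        return "read"    # behaves like FHIR read; idempotent
    return "search"      # behaves like FHIR search; also idempotent
```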

This gives the user of the profiled Basic resource an experience that is pretty close to what they would get with a true FHIR Resource (and consequently makes it easier to adopt new resources that have been profiled in this fashion).