Tuesday, September 29, 2015

All the review in the world isn't as good as trying to implement a standard

I'm [back] in the midst of trying to create a C-CDA <--> FHIR translator.  I already know from experience that just looking at a specification won't tell you where the implementation issues might appear ... however, this current experience just serves as an additional reminder.

Some things I've discovered thus far in FHIR's DSTU 2: The Condition and AllergyIntolerance resources take slightly different approaches to capturing status.  Condition separates verification status (provisional, differential, confirmed, refuted, entered-in-error, and unknown) from clinical status (active, relapse, remission, and resolved).  AllergyIntolerance, however, combines these into a single status list (active, unconfirmed, confirmed, inactive, resolved, refuted and entered-in-error).  Note the similarity among these three lists; it suggests that some subsequent harmonization may be needed.

I also noted that AllergyIntolerance simply has onset as a dateTime, whereas Condition allows onset[x] to be a dateTime, Age, Period, Range or string.  The onset of an allergy is reported with imprecise dates and times just as frequently as that of a condition, perhaps even more so; this is another place where these two resources could converge a little better.
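
For example, here is what the two shapes look like side by side (a minimal sketch using DSTU 2 element names, trimmed to just the status and onset elements):

<!-- Condition: status split in two, onset[x] allows several types -->
<Condition xmlns="http://hl7.org/fhir">
  <clinicalStatus value="active"/>
  <verificationStatus value="confirmed"/>
  <onsetString value="childhood"/>
</Condition>

<!-- AllergyIntolerance: one combined status, onset is a dateTime only -->
<AllergyIntolerance xmlns="http://hl7.org/fhir">
  <status value="confirmed"/>
  <onset value="2010-03"/>
</AllergyIntolerance>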

That's why I keep on writing code, even if I never expect it to show up in product.

    Keith

Friday, September 25, 2015

Developing Standards is like building a deck

... or any other complex project.  Except that all too often, missing a deadline (especially in the prime years of Meaningful Use) is considered such a failure that shortcuts get taken, and those shortcuts can have a dramatic impact on quality.  I'd much rather do the job right than do it over.

I just recently had a deck put on my house, and it's about a month late (100% schedule overrun).  But I'm happy with the work that has been done. What were the delays?



Labor: Sometimes people simply weren't available (e.g., subcontractors for plumbing and electrical). When the electrician failed to show up the third time, I did the job myself in about 3 hours.  It was a small enough job that getting him to show up was difficult.  Fortunately, I could make do with my own skills.  In standards, this often happens when a specialist isn't available to review some critical work (as happened recently in some revisions to the Claims Attachments work).  The first problem is in understanding that you need a specialist, and the second is finding one who is available on your schedule, or adjusting your schedule.

At other times (especially during holidays), labor simply isn't available, and you have to deal with the down-time.  Or other higher priority work (to the laborers) gets in the way of your priorities.  Once again, you have to get the work done when the labor is available.

Materials: Sometimes the material runs out before the job does, and more needs to be acquired.  In standards development, we often aren't dependent on materials, but we are often dependent on either data, or other projects to provide materials that we need.  You either have to make do without, take your best guess, or wait until the right stuff is there.  Frankly, I don't want a tri-colored deck with different planks, so I waited it out.  We did expedite the reorder, and sometimes there are ways that you can expedite the delivery of dependencies; for example, the DAF project needed some materials from FHIR DSTU 2 and CDA on FHIR, so they chipped in to help where applicable.

Review: We were at the mercy of the building inspector's schedule on the deck project, although I live in a small enough town that it usually meant a delay of only a day or two.  However, sometimes that messed up other schedules (I'm about a month behind on another project due to delays on the deck).  In the C-CDA 2.1 DSTU Update project, review was critical, and so we made sure to plan ahead for how long that would take, and I set aside that next week for my own review. Unfortunately, that's not always the case in other standards projects, and frankly, one of the problems with standards development is that we often don't know what will work until we try it.  This is a place where engineering is different from software development (which is still more art than science) [and standards development is very much like software development in many places].  DICOM has an excellent review process for its standards development, but it also induces delays that others would find difficult to accept.

Change orders, a.k.a. scope creep.  My electrical challenges were a result of a change order late in the project.  The electrician could have done most of the work anytime, with the final finish [installing the exterior fixture] taking no more than a half hour.  However, because I added this near the finish of the job, we were at the mercy of the electrician's schedule.  Having your scope nailed down well at the beginning is great, but when you don't, expect delays.  The Relevant and Pertinent project didn't originally have a short physician survey in its scope, but after project review, we decided that we needed to do it anyway.  That's caused several delays, but we aren't going to sacrifice on quality to make up for lost time.

In any project management, there's a triangle of resources, scope and quality.  One of your resources is time, and if you cannot afford to give up time, you may well be trading off quality or scope.  I'd much rather reduce scope and produce a high quality project, but even that is not always a possibility. All too often I've seen government contractors push to hit a schedule because either their lords and masters demand it, or because if they don't finish on time, they don't get paid for any additional time.

When you budget a project, you should always leave yourself a reserve (in time or money) to deal with change.  All too often, when we initiate a standards project, we don't allow for that.  I'm grateful my deck is finally done (or very near so, we are still waiting for the building inspector), and I'm also grateful that most of the projects that I've been involved with this year are similarly close or beyond the stage of being done.

Of course that means that I'm about ready to start the next set of projects, both at the house, and in HL7 and IHE.  IHE's call for proposals for PCC, ITI and QRPH went out a while back and that deadline is looming (tomorrow).  I'm presently working on a PSS in Clinical Decision Support to propose adoption of an IHE profile for Guideline Appropriate Ordering as a FHIR Profile in HL7. The reward as always for a job well done... is another job.  That's just as true in standards as it is in contracting.

   Keith


Wednesday, September 23, 2015

A whirlwind tour on FHIR

[Photo: Fire Tornado by Karen O'D]
I think I just finally solidified my travel in November.  It starts off with three days in Oak Brook, IL, where I will be spending the first day (November 11th) training IHE developers about FHIR.  That will be followed by two days of profile proposal technical committee review, in which I expect to see several proposals focusing on using FHIR to implement interoperability in various use cases.

Following that I head off to AMIA for several days where I expect to spend at least some time in sessions discussing FHIR, including various panels, a paper, several posters (Monday 5pm), and a system demonstration.  There will of course be other things attracting my interest as well.

I head out mid-day on Tuesday though, so I won't be able to see everything on FHIR at AMIA, because I'll be heading off to FHIR Developer Days in Amsterdam.  This is the second (annual) edition of this event in Amsterdam.  I'll be talking about how IHE Patient Care Coordination developed two FHIR profile proposals, one of which (GAO) we are hoping to ballot through the HL7 adoption process (see chapter 18 of the HL7 GOM).

I expect to be thoroughly exhausted when I'm done. But that is what happens to your fuel when you put your FHIR into the whirlwind.

   -- Keith

P.S.  The last bullet on my "What is FHIR?" slide says "A source of bad puns."  I would also add, and good marketing.

Monday, September 21, 2015

A long long time ago ... (XML from Word part 2)

... continuing from XML from Word.

To begin your unflattening, you will have to prepare a piece of data to explain what the structure of the final output needs to look like.  If you are simply unflattening HTML or Word using heading numbers, this is fairly straightforward.  If your document has a good bit more style and structure, you may need to do a bit more work.  Assuming you have a good XML Editor (just about any decent one can do this next step), you should be able to produce an XML Schema from a sample XML document.  The schema will suck, looking something like this:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <xs:element name="body">
    <xs:complexType>
      <xs:choice maxOccurs="unbounded">
        <xs:element ref="p"/>
        <xs:element ref="h1"/>
        <xs:element ref="h2"/>
        <xs:element ref="h3"/>
        <xs:element ref="h4"/>
        <xs:element ref="h5"/>
        <xs:element ref="h6"/>
      </xs:choice>
    </xs:complexType>
  </xs:element>
   ...
</xs:schema>

Take the table of <xs:element> names and put them into another file somewhere, and add attributes that indicate the nesting level for each element, like this:
<table>
  <element ref="body" level="0"/>
  <element ref="h1" level="1"/>
  <element ref="h2" level="2"/>    
  <element ref="h3" level="3"/>    
  <element ref="h4" level="4"/>    
  <element ref="h6" level="5"/>    
  <element ref="h6" level="6"/>    
  <element ref="p" level="7"/>    
</table>
This table basically assigns a nesting level (or precedence) to each element name, so that you (or software) can figure out how deeply each element should be nested in the output.
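
To make the goal concrete, here's a tiny made-up example (glossing over exactly how the text content gets carried along).  A flattened input like this:

<content>
  <h1>Introduction</h1>
  <p>Some text.</p>
  <h2>Scope</h2>
  <p>More text.</p>
  <h1>Definitions</h1>
</content>

should unflatten into something like this:

<content>
  <h1>Introduction
    <p>Some text.</p>
    <h2>Scope
      <p>More text.</p>
    </h2>
  </h1>
  <h1>Definitions</h1>
</content>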

Where the magic comes in is how I use it next to apply the structure.  You can do this sort of processing of a list of elements really easily in Java or JavaScript or C++ if you understand how to write a parser for a language whose parse tree can be described with operator precedence.  But if you want to do this using XSLT, you'll need a lot of research, or a really twisted brain, to figure this out. Fortunately for you, I just spent the last week in Portland, so my brain is already twisted after three days of Evidence Based Medicine at OHSU ;-).

To make this stylesheet work, you are going to need to run two passes over your XML (or have two separate stylesheets).  The first pass simply adds an attribute to each element that assigns it the precedence level from the table above, then turns the result tree into a node-set (via the EXSLT node-set extension function) and sends it to the next phase.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:exslt="http://exslt.org/common" 
  extension-element-prefixes="exslt" version="1.0">

  <xsl:output indent="yes" method="xml"/>
  <xsl:variable name="prec" select="document('precTable.xml')"/>

  <xsl:template match="/">
    <xsl:variable name="pass1">
      <content>
        <xsl:for-each select="content/*">
          <xsl:copy>
            <xsl:copy-of select="@*"/>
            <xsl:attribute name="text">
              <xsl:value-of select="."/>
            </xsl:attribute>
            <xsl:attribute name="_level">
              <xsl:value-of select="$prec/table/element[@ref=local-name(current())]/@level"  />
            </xsl:attribute>
          </xsl:copy>
        </xsl:for-each>
      </content>
    </xsl:variable>
    <xsl:apply-templates select="exslt:node-set($pass1)/content/Tabular" mode="process"/>
  </xsl:template>

What this does is basically run through each element child of <content> and add an _level attribute to that element.  It gets the level by finding the element in the /table/element list of the precedence table, looking for one whose @ref attribute matches the name of the element.  Why do I do this step?  Locality of reference for the next ugly bit.  Basically, this is an optimization that makes the next optimization really shine.  My file has 35000 lines.  The algorithm that you might figure out for yourself in XSLT (if you can twist your brain around it) runs on the order of O(n^3).  On my first attempt at an algorithm, I was looking for the children of each parent.  That lookup that I preprocess here would be needed 42 trillion times if not preprocessed, and it doesn't run quickly since it is essentially a linear search.  Even with the optimized version below, this lookup is best not repeated if you can precompute it, so I do.

The algorithm I finally figured out after failing several times runs a lot faster. I estimate it is around O(n log n). I owe Jeni Tennison a beer if I ever see her again (and Steve Muench one too), because I wouldn't have figured it out were it not for her post on his algorithm.

What I realized was that each element in the file can have a unique key computed which identifies its parent, and that key can be expressed in XSLT as the unique identifier of the first preceding sibling of that element whose level in the hierarchy is lower than that of the element.  You declare this key in XSLT using the following lines:

  <xsl:key name="parent" match="*"
    use="generate-id(preceding-sibling::*[@_level &lt; current()/@_level][1])"/>

Then, these next two templates do the magic restructuring:

  <xsl:template match="/" mode="process">
    <content>
      <xsl:apply-templates select="/content/*[1]"/>
    </content>
  </xsl:template>

  <xsl:template match="*" mode="process">
    <xsl:copy>
      <xsl:copy-of select="@*[local-name()!='_level']"/>
      <xsl:apply-templates mode="process" select="key('parent',generate-id())"/>
    </xsl:copy>
  </xsl:template>

That's a remarkably short bit of code for the magic it performs!  The first template simply kicks things off.  For each element the next template processes, it makes a copy of the XML (using xsl:copy and xsl:copy-of), and then inserts the content of all of the nodes which claim (through the parent key) to be its direct children.  If you instrument the output with <xsl:message> elements as I did when I first ran it, you'll see a BIG pause (at least if you run a 35000 line file through it), and then magically, everything will come out in a great big WHAM!

What is happening here is that the first pause is the indexing stage, where the XSLT processor goes: "OK, he really does mean to use the parent key, I better go make an index." (Yes, I tend to anthropomorphize software).  Then it visits every node (all 35000 of them), and executes the XPath expression in the use attribute.

generate-id(preceding-sibling::*[@_level &lt; current()/@_level][1])

That XPath expression says: of the preceding siblings whose level is less than mine, take the first (nearest) one.  Most XSLT processors are smart about any expression which ends in the pattern [number], especially when number is 1, or the expression last().  That usually means that the expression can be computed more efficiently and short-circuited once the first item is found.  The indexing step likely has an average case execution time of O(n log n).  Each element generates an index key.  The elements are found in O(n).  At the deepest layer, there are O(n) nodes, and it takes constant time to find their parent. There are O(log n) layers in the tree, and it takes approximately the same amount of time to compute their parent (less, actually, for balanced trees of breadth X [each node containing X children]).  The recursive processing step is O(n) once the index is precomputed. Putting all that together gives O(n log n), which finally made this work without a week of processing time.

The real trick here was instead of trying to find the children of each node, turning the problem on its head and finding the parent of each child.  That is what makes the whole algorithm simple.

How does this apply to standards?  The file I was processing was a vocabulary table written in a giant Word document.


Saturday, September 19, 2015

A long long time ago ... (XML from Word)

A very long time ago (more than 15 years), I worked on a product that allowed you to take inputs from various formats and restructure them as XML (or SGML).  It was a very useful tool, and made it very easy to convert Word documents to XML, especially when those documents didn't have a great deal of nested structure.

This is fairly common: Word, HTML and many other file formats don't really handle heading level nesting the way you would output information in XML.  When you wind up with a document that has a lot of "structural" information in its styles, getting that structural information represented in your XML can be very handy.  But it can be a royal PITA to get that structure back from the Word document.

I used to do this with a Word macro, but these days I find it easier to extract the styled information into an HTML file.  Use "Save As...", choose Filtered HTML as your output format, and what you will get is pretty decent HTML which won't contain a lot of Word-specific gunge.  Your next step will be to remove all the stupid content in between <o:p> and </o:p> tags that Word inserts to support empty paragraph and whitespace handling in various versions of the IE browser (from about 5.X on they changed various things that needed special HTML handling for each version).

After you've done that, you need to tidy up the HTML so that it is proper XHTML to begin the final phase of restructuring.  To do this, I use jtidy, the Java implementation of Dave Raggett's Tidy program.  The command line is fairly simple:

java -jar jtidy.jar -m -asxml filename

This command will read filename, clean up the HTML and turn it into XHTML (-asxml), and then modify (-m) the original file to contain the cleaned-up output.

So what was
<p class=foo><span class=bar>Stuff<br></span></p> 
becomes:
<p class='foo'><span class='bar'>Stuff<br/></span></p> 
This will make your life a lot easier in the next two steps.

The next step simply uses the class attribute as the element name in the output.  So all tags are now rewritten using the class names (which were originally your style names in Word).  Here's the stylesheet to start XML-ifying the XHTML.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" 
  xmlns=""
  xmlns:html="http://www.w3.org/1999/xhtml"
  version="1.0">
  <xsl:output method="xml" indent="yes"/>
  <xsl:strip-space elements="*"/>
  <xsl:template match="html:head"/>
  <xsl:template match="html:body">
    <content>
      <xsl:apply-templates/>
    </content>
  </xsl:template>
  <xsl:template match="html:*">
    <xsl:choose>
      <xsl:when test="contains('1234567890',substring(@class,1,1))">
        <xsl:element name='_{@class}'>
          <xsl:apply-templates/>
        </xsl:element>
      </xsl:when>
      <xsl:otherwise>
        <xsl:element name='{@class}'>
          <xsl:apply-templates/>
        </xsl:element>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>
  <xsl:template match="html:a">
    <xsl:attribute name="id">
      <xsl:value-of select="@id"/>
    </xsl:attribute>
  </xsl:template>
</xsl:stylesheet>

Now, you still have this flattened XML.  What you need to do is "unflatten" it, and I'll explain how to do that in my next post.

Monday, September 14, 2015

Code Generators: A love hate relationship

Some of my favorite projects have involved code generators.  I once wrote an LL(1) parser generator which processed SGML DTDs with some added stuff to make some great looking output from SGML.  Another time I took the XHTML from the IHE Wiki and ran it through a transform which then generated code which would output conforming IHE PCC Sections according to the PCC Technical framework.

My present work involves transforming from CDA and C-CDA to FHIR and back.  To do that, I'm annotating the output from Trifolia with statements that express how to do the mapping.  Then my code generator essentially writes the transform from the C-CDA entries to the appropriate FHIR output (the document, section and narrative transforms I wrote by hand; these are the scaffolding upon which my code generator operates, and I didn't need to have that part of the build automated).
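
To give a feel for it, the templates that come out of the generator look roughly like this sketch (simplified and illustrative: it assumes xmlns:cda="urn:hl7-org:v3" on the stylesheet, and the 'CodeableConcept' mode stands in for one of the hand-written data type utilities):

<!-- a C-CDA Problem Observation entry becomes a FHIR Condition -->
<xsl:template match="cda:observation[cda:templateId/@root='2.16.840.1.113883.10.20.22.4.4']">
  <Condition xmlns="http://hl7.org/fhir">
    <code>
      <xsl:apply-templates select="cda:value" mode="CodeableConcept"/>
    </code>
  </Condition>
</xsl:template>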

What I love about code generators is that once you get them right, the code they produce is inevitably correct.  And since the code generator can produce a LOT of code from a large input, this can be incredibly valuable from a software development perspective.

What I hate about code generators is that "ONCE you get them right" part.  Doing that is tricky, and a small change can also be very damaging.  Refactoring a code generator to get yourself out of a design dead-end is like threading a maze blind-folded sometimes.  And the wrong fix on a tiny bug can break TONs of code. However, once you finally get it right, the code is often nearly rock solid. After all, what the computer does really well is repetition.

Code generators are also especially difficult when the language you are writing your code generator in is the same as the one it will be producing.  Going from XSLT to JavaScript (or vice versa) is a heck of a lot easier than XSLT to XSLT or JavaScript to JavaScript. One challenge is that the levels of escaping you have to go through to ensure the correct output syntax are a pain.  Another is that you have to keep two different execution contexts in your head: the one you are writing your code generator in, and the one that it is writing code in.  I find myself wondering why a variable I clearly declared in one context doesn't exist in the other until I start looking in the right place.  That's not a problem when I have execution contexts in two different languages (var x = 0; is so much different from <xsl:variable name="x">0</xsl:variable>).
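
For the XSLT-to-XSLT case, the usual way to keep those two contexts straight is xsl:namespace-alias: anything you want to emit is written under an alias prefix, and only real xsl: elements execute now.  A minimal sketch (the alias namespace URI is arbitrary):

<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:axsl="urn:xslt-alias">
  <!-- axsl: elements are written out as xsl: elements in the generated stylesheet -->
  <xsl:namespace-alias stylesheet-prefix="axsl" result-prefix="xsl"/>
  <xsl:template match="/">
    <axsl:stylesheet version="1.0">
      <!-- generated code: this variable exists only in the output stylesheet's context -->
      <axsl:variable name="x">0</axsl:variable>
    </axsl:stylesheet>
  </xsl:template>
</xsl:stylesheet>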

What I also love about code generators though, is the challenge they provide, and the satisfaction that they do a tremendous amount of work that I couldn't have done by coding manually.

One of the great things about FHIR is its almost recursive relationship with itself (sort of like writing an XSLT to generate an XSLT, something only a true Geek could enjoy).  The fact that Conformance, OperationDefinition, and SearchParameter resources exist to define how a FHIR server works is very comforting to me.  It means that I can stay within the same context when trying to do several things at once (as is often the case with Interoperability).  However, I think my favorite two "recursive" resources in FHIR are ImplementationGuide and TestScript.

Windows NT developers at Microsoft years ago used to talk about "eating your own dog food", referring to the fact that they had to use the OS they were building to build the OS that they were building, as they built it.  Well, FHIR is doing the same thing, and starting to develop the necessary resources to build the resources that will build the standard.

It's something that only a Geek could love from an aesthetic viewpoint.  From an outcomes viewpoint, I think the implementers of FHIR and the users of systems that implement it will love it too.  Because once you get it RIGHT, the code rocks.

Thursday, September 10, 2015

Me on FHIR at @OHSUInformatics next Thursday

I'll be in Portland, OR next week for my EBM class and some other stuff. While I'm there, I'll be giving a talk this coming Thursday. It will be a high-level introduction to FHIR.

Wednesday, September 9, 2015

I'm gonna have to write another letter ...

This time to my primary care provider.  I just downloaded my last Visit Summary from my PCP (it's been a seven-year wait for that capability).  I did a manual review of the CDA content, and validated it with the CDA Schema and the C-CDA 1.1 Schematron delivered by HL7.  While there are plenty of online tools that you can use to do such validation, I do not recommend them unless you want your PHI hanging around on the web somewhere.  We aren't all John Halamka after all.

Here are the errors that I found.

<ClinicalDocument
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="urn:hl7-org:v3"
  xsi:schemaLocation="urn:hl7-org:v3 CDA.xsd">

The xsi:schemaLocation may be valid on an XML document, but it is never appropriate to use in an exchanged CDA document.  According to section 1.4 of the XML Implementation Technology Specification - Data Types (used by the CDA R2 Standard):
xsi:schemaLocation must not be used in an instance to point to a schema document to be used for validation. Instance processors are expected to provide their own schemas.
This is a mostly harmless error, as demonstrated above.  The reason that schemaLocation must not be used, however, is related to security concerns.  An improperly coded receiver could attempt to read the file at that location.  If the location were a web URL, it could be a specially constructed URL designed to cause some sort of stack overflow that might result in execution of code of an attacker's choice.
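
Catching this is a one-rule check.  Here's a minimal Schematron sketch of my own (this rule is not part of the HL7-delivered schematron) that would flag it:

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:ns prefix="xsi" uri="http://www.w3.org/2001/XMLSchema-instance"/>
  <sch:pattern>
    <sch:rule context="/*">
      <sch:assert test="not(@xsi:schemaLocation)">xsi:schemaLocation must not be present on an exchanged CDA document.</sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>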

Two other errors are easily spotted: the visit summary itself does not correctly reflect information found in the narrative of the clinical document or in other data on the portal.  It indicates that I'm on unknown meds, but elsewhere on the portal they have correctly recorded all my medications.  There's clearly no reason for this inaccuracy in the visit summary. It also incorrectly reports my smoking status in the document, even though it is directly reflected in the problem list in the visit summary.

Effective times are missing in the machine readable data, but present in the narrative.  This is clearly an error.

There are also some translation issues for problems coded in ICD-9-CM as translated to SNOMED CT:

  • Cardiomegaly (SNOMED CT) is NOT left atrial enlargement (ICD-9).  The former applies to the whole heart, while the latter applies only to one portion of the heart.
  • Obesity (SNOMED CT) is not overweight (ICD-9); the two have different clinical definitions.

And while I'm writing a letter, I'm also personally e-mailing technical staff of the vendor who creates this portal and pointing them to this blog post.  There are several bugs in my visit summary resulting from their software, and I expect them to be fixed in a future version.

   Keith

Here's the e-mail I just sent to the vendor's technical folks:

In my role as a patient of a practice that uses your product, I recently downloaded a visit summary from my healthcare provider through their implementation of your portal, and I have discovered several implementation errors in the CDA summary it provides. Please read: http://motorcycleguy.blogspot.com/2015/09/im-gonna-have-to-write-another-letter.html for details. Shortly I will also be contacting my healthcare provider (your customer), and letting them know of these particular errors.

I'm honestly just wanting to see this fixed, and I'm hoping you will respond appropriately. Since this is a personal issue for me, please direct all communication on this matter to this e-mail address. If you would like to speak to me personally, I can be reached at ___________.

        Keith W. Boone

CDA -> FHIR : Working Backwards to go Forwards

I've been spending quite a bit of time on various FHIR and CDA related topics.  As I mentioned a bit ago, I got distracted by conversions of FHIR JSON to XML and back.  There was a reason for that, as there was a piece of FHIR JSON I wanted in XML so that I could convert it to CDA.

So why am I working backwards on the CDA -> FHIR side?  It could be that you start with what you know.  Or it could be that I figured it would be easier to reverse the process once I figured out how to do it from a different direction.  As the case may be, I've made some progress.

Here's what I did:

  1. I downloaded the CDA Templates from the C-CDA DSTU 2.1 Release using Trifolia.
  2. I selectively annotated some of the entry templates (section and document templates are boring for the most part) with markup explaining how to map the CDA entry to a FHIR Resource or component thereof.
  3. I wrote some utility stuff to map certain CDA basic data types to the FHIR equivalents (I have a similar set for the reverse mapping already done).
  4. I wrote a code generator (in XSLT) that generated an XSLT stylesheet from the Template XML I downloaded.
  5. I wrote another XSLT that managed the basic parts of the CDA Document to FHIR Bundle conversion, and invoked templates in the stylesheet I automatically generated.
The end result is that I have a model driven transform of CDA to FHIR, and I can probably take the same stuff and figure out how to reverse it.

There are some things that don't quite work because they appear in different orders in CDA and FHIR. For example, the identifiers in a Problem Observation appear AFTER some data in the Problem Act. Since my transform processes things in CDA order (at least for now), it generates some not-quite-correct output.  However, that output is close enough for me to figure out how to address that problem later.  I could probably even use my JSON/XML translator to help with the cleanup [oh damn, there went more sleep as I just came up with that].

   -- Keith


Tuesday, September 8, 2015

Where's your Standards Review in your HealthIT research project?

One of the challenges with implementation projects using standards is that many do not have an appropriate process for review of existing standards.  This challenge is exacerbated by the fact that it is rather difficult to do a search for standards.  There's no "PubMed" for standards like there is for healthcare research.

With hundreds or even thousands of standards available related to Healthcare IT, and yet thousands more in general IT that might be applicable, how is an implementer able to find what is relevant?

Even in HL7, it's difficult to find things: HL7 itself records information about all the standards it publishes (and these are also reported to ANSI), but international affiliates of HL7 also develop standards for use within their respective realms.

Building such an index would be a difficult project, as there isn't really even a standardized format for documenting standards (each SDO has at least one, and some have different formats depending on the type of project).

And determining the appropriate inclusion criteria for such an index would also be challenging. Does HTTP belong in such an index?  What about OAuth or OpenID Connect?  What about Direct? These are all examples of more general purpose IT efforts, some of which were created specifically to meet the needs of Healthcare.

It's an interesting challenge, perhaps even one worthy of a capstone project.

   Keith

Saturday, September 5, 2015

A letter I just wrote to a doctor in response to an incorrect medical bill

Dear Dr. _______,

I am returning this bill to you unpaid, as it clearly is for services not provided to my daughter based on the accompanying diagnosis code: 364.81 “Floppy Iris Syndrome”.  IFIS was first described by Chang and Campbell in the Journal of Cataract & Refractive Surgery in 2005 (see http://www-ncbi-nlm-nih-gov.liboff.ohsu.edu/pubmed/15899440 ).  Based on their description, this is clearly not a syndrome my daughter suffers from. This disease is diagnosed intraoperatively, typically during cataract surgery.  Since you are not treating my daughter for cataracts, nor did she receive any eye surgery, I am presuming that this bill and related charges are incorrect. 

As a result of sending this bill to my insurance company, my daughter will now have an incorrect diagnosis in her insurance record, which could result in denial of services later in life due to a pre-existing condition.

Thus, I am requesting that you correct the diagnosis, and send a new claim to my insurer.  Upon receiving a corrected bill with an accurate diagnosis, I believe appropriate adjustments will be made to your bill, and I will, upon receipt of an accurate description of the services being billed for, pay them based on my insurer's agreement. 

Sincerely,


/s/

Keith W. Boone

Squirrels on FHIR

Distractions in my family are often identified by someone shouting Squirrel! (It's a reference from the movie UP).  So, here is my obligatory squirrel picture.

My current distraction is the FHIR JavaScript conversion tool that I built earlier this week.  At the end of the day today, I decided to give myself some time to play.  I realized that I could probably write an XSLT that would handle 90% of the conversion of FHIR XML to JSON.  From there I could recode it in JavaScript, and then tie it to the data I've generated.  This will only take a few minutes I thought.  An hour later and one false start left me KNOWING it could be done.

Three distractions later, and it's now tomorrow.  BUT: You can now use my test page to go in either direction, and I've fixed a bug I found in the original script, as well as created a method to reverse the conversion.  I've also restructured the repository a bit based on a suggestion from Josh.

It too isn't QUITE finished.  Again, I have to deal with all that cruft around properties that start with underscores.  The FHIR documentation isn't quite clear about what the correct behavior is, AND the examples only go so far.  What I've written though, is good enough for my purposes, which is to test things out.

But of course, I realized that if I just refactor things to actually construct the JSON object as I read the XML, then I could use JSON.stringify to ...

ENOUGH!  Back to my homework...

Thursday, September 3, 2015

My FHIR JSON->XML Converter in JavaScript is now on GitHub

So, I've updated the FHIR Converter, fixed some bugs and posted it on GitHub for folks to play with. It is presently based on the DSTU 2 QA release and XML Schema.  You can try it out at my test page.

Usage is fairly straightforward:
  1. First you create a new converter, specifying the indent level to use in the XML.
    var FHIR_JSONtoXML = new FHIRConverter(2); 
  2. Then you convert the FHIR JavaScript object
    var xml = FHIR_JSONtoXML.toXML(object); 

The toXML() member function will convert the object to an XML representation. It assumes that the object itself complies with the FHIR specification, and does not (yet) attempt to handle input implementation errors or validation.

I've tested it with the FHIR JSON Discharge Summary Example (since my main goal was to support FHIR to CDA conversion), but I haven't done a lot of other testing.

Some things it doesn't yet handle (or which I haven't tested) are extensions and identifiers on primitive types.  Please log your bugs over on GitHub.  I'm not committing to keep this up in GitHub, but others might find it useful, and if the FHIR Core team thinks it is worthy of putting into the build, then I will volunteer to maintain it.  If that happens, I'll let you know.

   Keith

Wednesday, September 2, 2015

What is in a name?

I've been watching a debate on the HL7 Patient Care list about the name of a work product currently described as the "Patient Generated Document, Personal Care Goals, Preferences, and Priorities", which is a change from the original "Personal Emergency, Critical, and Advance Care Plan Document".

One part of the debate is the proper term to use where "Patient" is presently. As an engineer, I learned a long time ago that the names of things don't much matter from an implementation perspective.  As long as I know what you are talking about (in other words, we have a good definition of what it is), then I can implement it.  While the name could make it more obvious (and thus more usable), it isn't an essential attribute.

Most of the debates about what to call current or prospective users of healthcare goods and services resort to some name defined and specified from a position of authority, but with little evidence to back it up.  At best the evidence is based on consensus of an expert group which lacks representation from non-expert subjects who fit the definition of the category being defined.  I find such debates to be:

  1. Largely useless.
  2. Tending towards annoying as the resulting list traffic usually just offers up more opinions based on other aforementioned authority figures (or based on personal understanding and authority).
  3. Vaguely amusing, as often some of the debaters are the same folk who wonder why people think HL7 (or any other standard) is so difficult to understand.
If you really want to solve this problem, build up some evidence.  Do a study.  In this example, find out (click the link to weigh in) what current and prospective users of healthcare goods and services actually want to be called. There are scientific methods to approach that problem.  But please, stop wasting time arguing from a position of perceived authority, because frankly, the reality, when you dig into the evidence is much different from perception.

The name of the thing might be important to understanding, but what is more important (and has always been in any vocabulary discussion), is the meaning behind the name. Don't limit yourself to too few words (80 characters is often not enough) to describe a concept, but also don't make the definition so prescriptive that it can only be used in a single use case.

If it comes down to picking a name, rely on a published authority that the common man will understand.  If what you mean is not the same thing as you'd find in a dictionary, either find a name that better describes the concept, or give it a brand new name that won't be confused and overloaded. Forget your authority, rely on the evidence that can be provided by your target audience.  Failing that, avoid the desire to come up with the perfect name, and the endless debate that goes with it.  Because you cannot succeed.  Somebody won't like it.  I may not even like it, but as long as I understand WHAT you mean, I can live with it.

What is in a name?  A possibly shared understanding of the concept.  Don't rely on the possibility. Ensure it.  

I've built a little survey for you to tell us all what you think about the name, and why.  You can take it here.
   Keith

P.S.  I find myself also amused by the fact that while I'm taking a course on Evidence Based Medicine, I'm also thinking about the uses of evidence in standards.

P.P.S.  Just to add my own two cents to the debate, I like Health Consumer or Healthcare Consumer as the name of the concept of current and/or prospective user of healthcare goods and services. Consumer doesn't quite cut it because its definition doesn't cover the context of health or healthcare, and patient doesn't quite cut it because its definition doesn't quite address the prospective aspect of the concept.  So there, I'm just as guilty as the next person.

Tuesday, September 1, 2015

Nasty Recursive Things

Said in the voice of Gollum.

That's how I feel after spending a couple of days generating a FHIR JSON to XML converter in JavaScript.  I'm in the middle of creating a FHIR to CDA converter, and I happened to have some FHIR example objects that were nearly correct in JSON that I needed to clean up, but not in XML.  I suppose I could have figured out how to do this in HAPI or some other such tool, but I thought I'd look into this bit of a programming challenge as a way to learn more about FHIR serialization formats.

My first problem was that I started this project somewhere around 1 in the morning early Sunday. Why?  Because I was bored, I think, but also inspired by Grahame's ask in the FHIR Skype chat about whether he could generate anything useful from the FHIR data to help in such a conversion.  I realized that I could generate the necessary data from the FHIR Schemas.  I pretty quickly cobbled together an XSLT transform (yes, XSLT is, for some reason, one of my favorite languages to code in) that operated on the FHIR Schemas.  It generates a JavaScript Object from the Schema for each simple or complex Schema type defined therein.  Here's the definition for Basic:

"Basic": {
    /*Basic is used for handling concepts not yet defined in FHIR, narrative-only resources that don't map to an existing resource, and custom resources not appropriate for inclusion in the FHIR specification.If the element is present, it must have either a @value, an @id, or extensions
     */
  "name": "Basic",
  "base": "DomainResource", 
  "attrs": [
  ],
  "elems": [
    { /*Identifier assigned to the resource for business purposes, outside the context of FHIR. */  
      "name": "identifier", "min": "0", "max": "unbounded","type": "Identifier",
      "isPrimitive": false }, 
    { /*Identifies the 'type' of resource - equivalent to the resource name for other resources.*/
      "name": "code", "min": "1", "max": "1", "type": "CodeableConcept", "isPrimitive": false }, 
    { /*Identifies the patient, practitioner, device or any other resource that is the "focus"
       * of this resource. */
      "name": "subject", "min": "0", "max": "1", "type": "Reference", "isPrimitive": false }, 
    { /*Indicates who was responsible for creating the resource instance. */
      "name": "author", "min": "0", "max": "1", "type": "Reference","isPrimitive": false }, 
    { /*Identifies when the resource was first created. */
      "name": "created", "min": "0", "max": "1", "type": "date","isPrimitive": true }, 
  ]
},

Pretty gross, huh?  It gets a bit uglier. In the above, identifier points to a pretty long type, which eventually references IdentifierUse, which eventually references IdentifierUse-list.
"Identifier": {
    /*A technical identifier - identifies some entity uniquely and unambiguously.If the element is present, it must have a value for at least one of the defined elements, an @id referenced from the Narrative, or extensions
     */
  "name": "Identifier", "base": "Element", 
  "attrs": [  ],
  "elems": [ { 
    /*The purpose of this identifier.*/  
    "name": "use", "min": "0", "max": "1", "type": "IdentifierUse", "isPrimitive": true }, 
     ... elided for readability ...
   ]
},
"IdentifierUse-list": {
  "name": "IdentifierUse-list",
  "base": "xs:string", 
  "values": [ "usual", "official","temp","secondary",]
},
"IdentifierUse": {
    /*Identifies the purpose for this identifier, if knownIf the element is present, it 
     * must have either a @value, an @id, or extensions
     */
  "name": "IdentifierUse", "base": "Element", 
  "attrs": [ { "name": "value", "type": "IdentifierUse-list" }, ],
  "elems": [ ]
},

By drilling down through the types, you can eventually determine that the "use" property of the Identifier type is a primitive type.  But it takes a while.  Rather than try to figure this out at run-time, I precompute the isPrimitive property from the FHIR Schema.  Anything that derives from Element and doesn't have any elements of its own I treat as a FHIR primitive.  As it turns out, this precomputation of isPrimitive meant that all the work I did for simpleTypes in XML Schema was pretty much wasted. Except that it might be useful later.
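
In XSLT terms, that "derives from Element and has no elements of its own" test looks something like this sketch (assuming xmlns:xs="http://www.w3.org/2001/XMLSchema" is declared on the stylesheet; the mode name is just illustrative):

<xsl:template match="xs:complexType" mode="isPrimitive">
  <!-- a FHIR primitive: the type extends Element and declares no child elements of its own -->
  <xsl:value-of select=".//xs:extension/@base = 'Element' and not(.//xs:element)"/>
</xsl:template>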

Processing the JSON gets a bit tricky as well.  The JSON output for elements that can have cardinality > 1 is an array, but when cardinality is limited to at most one, it's a FHIR object. However, the output of each of the elements in the array is exactly the same.  So I created a function to process these objects, and if there is only one object and it isn't an array, I make it an array.  I also have to deal with the extra gunge for a property. This is the stuff that, for property X, gets stored in property _X.

It took me quite a while to get this right because I had mutually recursive functions in JavaScript. And my JavaScript chops are a bit rusty, so I forgot to declare a couple of variables, which made them globals, and that made them not have the value they were supposed to after the recursion.  It took me a while to figure all this out, and I still haven't figured out everything I need to do with the extra bits stored in extensions on primitive types.  But, what I have works closely enough for now (but needs a lot more testing).  I'm fixing to put it over in GitHub at some point once I get some more free time, so that folks who want to convert FHIR from JSON to XML can.

Oh, and the JavaScript object that I generate also has all the necessary data to go from XML to JSON as well.  That will have to wait for another time.  And, you could actually use it to validate the JSON. Again, another time.

   Keith