Tuesday, September 29, 2020

The SecretLab Titan Chair (Review)


At one point in my life I managed the service department of a Computerland, and our particular store was chosen by the state to deliver custom software and accommodations enabling individuals with disabilities to use computers.  As part of this, I wound up spending a good bit of time investigating computer ergonomics, and I have for some time paid significant attention to the ergonomics of my home workspace, especially after aging a bit necessitated a change in my setup.

I had invested in a very good office chair, but that poor chair is now not quite a decade old, and the foam on the seat has degraded to the point that I've had to put at least two additional cushions underneath.  It was time for a new chair.  I'm in this thing like eight hours on a regular day, and even more on some of my irregular days, so money was less important to me than comfort (having thrown my back out twice in the last two quarters).  I spent the time investigating, and finally decided on getting myself the Titan chair from SecretLab (I get nothing if you click that link).

I wasn't impressed with the stock on hand when I did my initial search (only the smaller unit was available), and I didn't want to wait a month or more for a new chair (I'm thinking that Amazon has spoiled me).  Fortunately, over the course of a week of research, they actually wound up with both the unit and colors I wanted available for immediate shipping, which meant I'd only need to wait a week.  Your mileage may vary, but I can tell you, if I could have seen and tested the chair in a showroom, I would have waited a month.  It's difficult, though, to wait a month for something that you aren't quite sure will do the job when you needed it last week.

The chair arrived today (one day early). UPS dropped it in the middle of my long driveway instead of bringing it to the door.  At about 70 lbs, I can understand why they didn't want to walk it another 50 ft to my doorway.  The humor of me having to haul a 70 lb box into my house by myself to help my back wasn't lost on me (my old chair only weighed about 45 lbs or so).

The unboxing itself confirmed to me that I'd made the right choice of chair, and the build quality was also good.  The box was huge, well packed and padded, and came with high-quality assembly tools (a Phillips screwdriver that I'll likely be using for a while around the house, and a steel hex wrench), and the four steel bolts you need for the base, all packed neatly in a foam insert instead of shrink-wrapped plastic.  The chair itself went together in less than fifteen minutes, and I spent about a third of that time just moving it from the living room through the narrow hallway into my office (I probably should have done final assembly in my office).  

  • This thing is a monster.  The construction is much more impressive than my older office chair, but I would expect that having paid 2.5x as much for it.
  • It's the first chair I've ever owned where, when the hydraulics are all the way up, my feet DON'T lie flat on the floor, which is thrilling.
  • The placement of the included neck pillow: again, not a feature I thought I'd care about, but when leaning back to think about something, it's perfectly placed and sized (and completely adjustable).
  • It has a reclining feature which I NEVER would have thought to include as a requirement for an office chair, but having tried it out, I can tell you that yes, I COULD fall asleep in this thing.
  • The armrests (also adjustable up, down, sideways and rotating) were perfectly placed as shipped.  I tried other orientations to be sure.  And again, I have room up, down, sideways or otherwise.
  • It rocks (and that's adjustable).  I twitch and bounce while in deep thought, and there's something comforting about a chair that responds instead of basically being a dead weight.  My old chair did too, but that was more wobble than rock.  And I can adjust from locked in place to free rocking with a simple click.
  • For a simple tweet, my 3-year warranty just became a 5-year warranty.
Frankly, so far, there are only three cons to this thing:
  • I had to wait longer than I wanted for it to arrive.
  • It's expensive [about $450 depending on the model] (but worth it).
  • It's heavy, so if you've done something stupid to throw out your back, have someone to help you move it around.
As with any piece of furniture, the true test will be time.

My youngest is laughing at me, because dad has a massive gamer setup for his "work": a powerful laptop, three displays (a 44" ceiling-mounted display, a 37" desk-mounted swivel side monitor, and my laptop's own 16" 4K display), a gamer keyboard with blinking LEDs (that I turn off), and to my side, my Xbox with a noise-cancelling headset and a flashy rainbow-light wired extra-button controller for when I want to kick back, and now a fancy gamer racing chair.

I could have gone with the D.Va model, but I don't play that character so well, and the pink would clash with my office décor.

     Keith

P.S. A couple of you asked for a review, so here you have it.


Wednesday, September 23, 2020

Why computers should manage combinatorial explosion in test cases

In A Test Case Generator for FHIR and SUSHI (and SANER), I wrote about how I'm working on generating test cases, and about a little language for test case generation.

Here's one thing (among many) that I encountered.  Data for individual test cases should be dealt with independently of other test cases, so that tests don't interfere with each other.  That's why unit tests have setup and teardown.  But the test case generator is creating data for test cases that will be stored in a FHIR Server, and the FHIR Server cannot necessarily do setup and teardown between each test.

So, I cannot use the same patient for each test case, but rather, each test case must refer to patients created specifically for it, so that all the test data can be loaded into a FHIR Server for use at the same time.

It's a lovely little nuance of integration testing that you don't really have to deal with in unit testing.  I've accounted for it in data production for test cases, but it's made for some pretty interesting challenges, as I now have about four phases for parsing and generating the data.

Parsing of the test model happens in the first phase. I have this working.
The generation step has at least three phases:
  1. Generating the essential resources and their variants.  I should probably talk about test cases and variants, and will do so in more detail below.  This step has to be done in a particular order, because an encounter cannot refer to a patient or location until those two are defined (on purpose, so that the test case author deals with ordering, and I don't have to deal with forward references to things that don't exist yet).  I have this working.
  2. Generating Sushi code for each variant needed.  I have this working.
  3. Packaging a set of resources into a bundle for each variant of the test case.  I'm working on this now.

Test Cases and Variants

For my purposes, a test case is a package of data needed to test a measure group: a bundle of resources.  I have a test case with an encounter, patient and location with the following linkages:

Patient patient (stands alone)
Location location (stands alone)
Encounter.patient  references Patient/patient
Encounter.location references Location/location

I have multiple measure groups, and I want to test the different facets of inclusion/exclusion criteria for the group.  So, an encounter in a test case might be "in-progress" or "entered-in-error".  These are two variations for the test case for one of the encounter measure groups.  If the test case is TestCase1, these variations will be labeled something like TestCase1a and TestCase1b, to distinguish them.  The bundles will be different for each variation.

Also, while each bundle might contain multiple linked resources (e.g., patient, location, and encounter),
the patient, location and encounter in each bundle must be distinct from the patient, location and encounter resources in other variants.  

So, with three possible variations on location, two on encounter, and one on patient, we'll see six (3 x 2 x 1) different cases for Encounter in the bundles.

Bundle1: TestCase1a
Encounter11.patient references Patient/patient11
Encounter11.location references Location/location11

Bundle2: TestCase1b
Encounter12.patient references Patient/patient12
Encounter12.location references Location/location12

Bundle3: TestCase1c
Encounter13.patient references Patient/patient13
Encounter13.location references Location/location13

Bundle4: TestCase1d
Encounter21.patient references Patient/patient21
Encounter21.location references Location/location21

Bundle5: TestCase1e
Encounter22.patient references Patient/patient22
Encounter22.location references Location/location22

Bundle6: TestCase1f
Encounter23.patient references Patient/patient23
Encounter23.location references Location/location23
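The six bundles above are just the cartesian product of the per-resource variant counts (2 encounter x 3 location x 1 patient).  A minimal sketch of that expansion, using a hypothetical naming scheme that matches the ids above (this is not the actual generator code):

```java
import java.util.ArrayList;
import java.util.List;

public class VariantExpander {
    // Expand encounter x location variants into bundle labels, giving each
    // bundle its own distinct patient/location/encounter ids so that all the
    // test data can coexist on one FHIR Server.
    static List<String> expand(String testCase, int encounterVariants, int locationVariants) {
        List<String> bundles = new ArrayList<>();
        char suffix = 'a';
        for (int e = 1; e <= encounterVariants; e++) {
            for (int l = 1; l <= locationVariants; l++) {
                String id = "" + e + l; // e.g., "23" -> Encounter23, location23
                bundles.add(testCase + suffix + ": Encounter" + id
                    + " -> Patient/patient" + id + ", Location/location" + id);
                suffix++;
            }
        }
        return bundles;
    }

    public static void main(String[] args) {
        // Produces the six TestCase1a..TestCase1f lines listed above
        expand("TestCase1", 2, 3).forEach(System.out::println);
    }
}
```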


Tuesday, September 22, 2020

A Test Case Generator for FHIR and SUSHI (and SANER)

I've often heard the complaint of combinatorial explosion with respect to creating test cases to fully test a system.  The problem is acute.  One part of the solution is good analysis, but the other part of it is automation.

It must be my week for mini-languages, because here is another example of a mini-language, this time used for test case generation.  I think I might have caught the language virus.

TestCase Case1:
    Patient patient X 30 with (
        identifier.value = Identifier,
        identifier.system = "http://sanerproject.org/testdata/patients",
        name.given in "firstnames",
        name.family in "lastnames",
        gender in "genders",
        birthDate within '@1930-09-09' to '@2020-09-09'
    )
Values
    // This is a set of common last names, it is purposefully of prime length
    "lastnames": {
        Smith
        Johnson
        Williams
        Brown
        Jones
        Garcia
        Miller
        Davis
        Rodriguez
        Martinez
        Hernandez
        Lopez
        Gonzalez
        Wilson
        Anderson
        Thomas
        Taylor
        Moore
        Jackson
        Martin
        Lee
        Perez
        Thompson
    }
    // This is a set of first names that are gender free, also of prime length
    // and mutually prime with the set of last names.
    "firstnames": {
        Alex
        James
        Blake
        Kyle
        Drew
        Taylor
        Kennedy
        Jordan
        Parker
        Avery
        Ryan
        Brooklyn
        Cameron
        Logan
        Emerson
        Charlie
        Ezra
    }
    "genders": {
        male
        female
    }

This example says: generate a test case (in a bundle) containing the resource "Patient" with identifier "patient", and do it 30 times.  Take the identifier.value from an auto-incrementing counter.  Set the identifier.system to a fixed value.  Pull given and family names from predefined lists of values, iterating over them until done.  Take gender from another list with only two codes.  Generate birth dates from a range of values.
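The list iteration is effectively modular indexing driven by the counter; because the list lengths (23 last names, 17 first names) are mutually prime, name pairs don't repeat until 23 x 17 = 391 patients.  A sketch of the idea using shortened (but still mutually prime) lists; the class and method names here are hypothetical, not the generator's actual code:

```java
import java.util.List;

public class NamePicker {
    // Shortened stand-ins for the post's lists: lengths 3 and 5 are mutually
    // prime, so name pairs repeat only every 15 patients.
    static final List<String> FIRST = List.of("Alex", "James", "Blake");
    static final List<String> LAST = List.of("Smith", "Johnson", "Williams", "Brown", "Jones");

    // Pick names by cycling each list independently with the same counter
    static String name(int counter) {
        return FIRST.get(counter % FIRST.size()) + " " + LAST.get(counter % LAST.size());
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            // identifier.value comes from the same auto-incrementing counter
            System.out.println((i + 1) + ": " + name(i));
        }
    }
}
```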

Now, if that was all my language did, you'd not be terribly impressed (or at least I wouldn't be).

But, what if you could generate multiple resources, and link them correctly by identifiers?  Now we are starting to get somewhere, but it still isn't all that much better than what one can already do with an Excel spreadsheet (as we did manually for the first set of test data for SANER automation).

But I also need test cases where I have encounters with and without reasonReference as variations, with and without reasonCode values matching a certain value set, and observation and condition resources that match or don't match selection criteria.

So, what if I could specify variation within a field like this:
Patient patient1 
    /* as before */

Condition condition1 with (
   code in COVID19Diagnosis OR in NotACovid19Diagnosis,
   patient = patient1
)

Encounter encounter1 with (
   reasonReference = condition1 OR missing,
   reasonCode in COVID19Diagnoses OR missing,
   subject = patient1
)

And what if what the test case generator spit out were six different bundles, where each bundle contained a patient1, condition1 and encounter1 meeting one of the appropriate mixes of criteria?

Bundle1: 
condition1 with COVID19Diagnosis, 
encounter1 with reasonReference to condition1 and reasonCode in COVID19Diagnosis
Bundle2:
condition1 with COVID19Diagnosis,
encounter1 with reasonReference to condition1 and reasonCode missing
Bundle3:
condition1 with COVID19Diagnosis
encounter1 with reasonReference missing and reasonCode in COVID19Diagnosis
Bundle4:
condition1 with NotACovid19Diagnosis
encounter1 with reasonReference to condition1 and reasonCode in COVID19Diagnosis
Bundle5:
condition1 with NotACovid19Diagnosis
encounter1 with reasonReference to condition1 and reasonCode missing
Bundle6:
condition1 with NotACovid19Diagnosis
encounter1 with reasonReference missing and reasonCode in COVID19Diagnosis

OK, now we are talking about something useful.
And if patient1 can vary (non-essentially) across these encounters, and so can encounter location, we've gone a step better.

Yes, this could produce a muck-ton of data.  But, the computer did it, you didn't have to.  All you had to do was give it the correct instructions to do something useful, and it produced something that you can use.

Anyway, more on this later as I continue my experiments.   So far, this was about two days of work, and what I have to show for it is this sample output:

Instance: patient3
InstanceOf: Patient
Description: "Generate sample patients with random characteristics"
* birthDate = "1964-06-09"
* extension[0].extension[0].url = "ombCategory"
* extension[0].extension[0].valueCoding = urn:oid:2.16.840.1.113883.6.238#2054-5 "Black or African American"
* extension[0].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-race"
* extension[1].extension[0].url = "ombCategory"
* extension[1].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity"
* gender = #male
* identifier.system = "http://sanerproject.org/testdata/patients"
* identifier.value = "3"
* name.family = "Williams"
* name.given = "Drew"
* name.given[1] = "Taylor"
Instance: patient4
InstanceOf: Patient
Description: "Generate sample patients with random characteristics"
* birthDate = "1975-09-09"
* extension[0].extension[0].url = "ombCategory"
* extension[0].extension[0].valueCoding = urn:oid:2.16.840.1.113883.6.238#2076-8 "Native Hawaiian or Other Pacific Islander"
* extension[0].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-race"
* extension[1].extension[0].url = "ombCategory"
* extension[1].extension[0].valueCoding = urn:oid:2.16.840.1.113883.6.238#2135-2 "Hispanic or Latino"
* extension[1].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity"
* gender = #female
* identifier.system = "http://sanerproject.org/testdata/patients"
* identifier.value = "4"
* name.family = "Brown"
* name.given = "Kennedy"
* name.given[1] = "Jordan"
Instance: patient5
InstanceOf: Patient
Description: "Generate sample patients with random characteristics"
* birthDate = "1986-12-09"
* extension[0].extension[0].url = "ombCategory"
* extension[0].extension[0].valueCoding = urn:oid:2.16.840.1.113883.6.238#2106-3 "White"
* extension[0].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-race"
* extension[1].extension[0].url = "ombCategory"
* extension[1].extension[0].valueCoding = urn:oid:2.16.840.1.113883.6.238#2186-5 "Non Hispanic or Latino"
* extension[1].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity"
* gender = #male
* identifier.system = "http://sanerproject.org/testdata/patients"
* identifier.value = "5"
* name.family = "Jones"
* name.given = "Parker"
* name.given[1] = "Avery"
Instance: patient6
InstanceOf: Patient
Description: "Generate sample patients with random characteristics"
* birthDate = "1998-03-10"
* extension[0].extension[0].url = "ombCategory"
* extension[0].extension[0].valueCoding = http://terminology.hl7.org/CodeSystem/v3-NullFlavor#UNK "Unknown"
* extension[0].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-race"
* extension[1].extension[0].url = "ombCategory"
* extension[1].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity"
* gender = #female
* identifier.system = "http://sanerproject.org/testdata/patients"
* identifier.value = "6"
* name.family = "Garcia"
* name.given = "Ryan"
* name.given[1] = "Brooklyn"
Instance: patient7
InstanceOf: Patient
Description: "Generate sample patients with random characteristics"
* birthDate = "2009-06-09"
* extension[0].extension[0].url = "ombCategory"
* extension[0].extension[0].valueCoding = http://terminology.hl7.org/CodeSystem/v3-NullFlavor#ASKU "Asked but no answer"
* extension[0].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-race"
* extension[1].extension[0].url = "ombCategory"
* extension[1].extension[0].valueCoding = urn:oid:2.16.840.1.113883.6.238#2135-2 "Hispanic or Latino"
* extension[1].url = "http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity"
* gender = #male
* identifier.system = "http://sanerproject.org/testdata/patients"
* identifier.value = "7"
* name.family = "Miller"
* name.given = "Cameron"
* name.given[1] = "Logan"

Monday, September 21, 2020

FluentQuery for FHIRPath

 The SANER Project has been using FHIRPath to define situational awareness measures.  It needs to query at least one FHIR Server to evaluate data for the measure, and the server to query is something that can be defined by the implementer.  Although perhaps I should say "servers" to query, because it's possible that data may be available from more than one location, as a recent commenter on our measures suggested.

For example, in some places, laboratory data may be available from multiple sources: both the hospital lab and a health information exchange, for example.  The existence of any lab data from either would be sufficient to rule a patient into (or out of) a measurement.  We are using the resolve() method to resolve the content of these queries (and yes, I have taken pagination into account in my own implementation of resolve), which leads to measures that are somewhat unreadable for those not deeply versed in FHIR search.  

Here are a couple of examples:

( %Base + 'Encounter?' +
  '_include=Encounter:subject&_include=Encounter:condition&' +
  '_include=Encounter:reasonReference' +
  '&status=in-progress,finished' +
  '&date=ge' + %ReportingPeriod.start.toString() +
  '&date=lt' + %ReportingPeriod.end.toString()
).resolve().select(entry.resource)

Patient.distinct().where(
  %Base + 'Observation?_count=1' +
  '&status=registered,preliminary,final,amended,corrected' +
  '&patient=' + $this.id +
  '&date=gt' + (%ReportingPeriod.start - 14 'days').toString() +
  '&code:in=' + %Covid19Labs.url +
  '&value-concept:in=' + %PositiveResults.url
).resolve().select(entry.resource as Observation)

Sure, you can read these.  Of course you can, but what about your staff, leadership, or customers?  You'd probably have to explain that the first looks for Encounters, and their referenced subject, condition and reasonReference resources, where the Encounter is either in-progress or finished and the date is within the reporting period.  The second is an existence test that succeeds if there is at least one Observation resource for a patient where the date of the observation is no more than two weeks before the reporting period, showing a positive result on a COVID lab test.

But wouldn't something like below be MUCH easier for not just you, but also for your analysts, leadership and customers to read?

findAll('Encounter',
  including('subject','condition','reasonReference'),
  with('status').equalTo('in-progress'|'finished'),
  with('date').within(%ReportingPeriod)
).onServers(%Base)

Patient.distinct()
  .whereExists('Observation',
    for('patient', $this),
    with('status').equalTo(
      'registered' | 'preliminary' | 'final' | 'amended' | 'corrected'),
    with('date').greaterThan(%ReportingPeriod.start - 14 'days'),
    with('code').in(%CovidLabs),
    with('value-concept').in(%PositiveResults)
  ).onServers(%Base)

Of course it probably would.  The reference implementation of FHIRPath that Grahame developed provides support for custom functions, but unfortunately, custom functions don't (yet) have access to the left-hand side (the focus) of the expression.  I'm in the process of fixing that in my fork of his FHIRPathEngine code.

Once they do, here's how I see this working.  A query builder expression returns a URL for the query being built, taking as input any previously constructed part of the query.  For the most part, these are simply specialized concatenations.

There are two functions, findAll() and whereExists(), that start a query builder expression.  These functions return a string containing the currently built URL.  findAll('resourceType') would simply return the string 'resourceType?', while whereExists('resourceType') would return 'resourceType?_count=1' (supporting an existence test).

The returned URL is evaluated by an onServers() function, which is like resolve() except that onServers() takes a list of base URLs, and the search is executed for each combination of search URL and server (with pagination resolved).  In the case of whereExists(), resolution is allowed to stop after the first matching resource is found on any server (although the queries might be executed in parallel).

onServers() is effectively equivalent to resolve().select(entry.resource), but there's one little bit of extra juice, because it can do this for more than one server at a time.  onServers() is the principal reason that my function set needs access to the focus: so that it can produce the cartesian product of the URLs and the server base addresses.  And if there are multiple queries, onServers() could be smart enough to send a batch query or use FHIR bulk data to get its results.
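Since most of the proposed functions reduce to specialized string concatenation, the builder idea can be sketched in a few lines.  The class and helper names here are hypothetical (and this sketch takes the resource type explicitly in including(), rather than inferring it from findAll() as proposed above):

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FluentQuerySketch {
    // with('status') starts a named query parameter
    static Param with(String name) { return new Param(name); }

    static class Param {
        final String name;
        Param(String name) { this.name = name; }
        // equalTo(...) -> "name=v1,v2,...,vn"
        String equalTo(String... values) {
            return name + "=" + String.join(",", values);
        }
        // in(valueSetUrl) -> "name:in=valueSetUrl"
        String in(String valueSetUrl) { return name + ":in=" + valueSetUrl; }
    }

    // including(...) -> "_include=Type:name" parts joined with "&"
    static String including(String type, String... names) {
        return Stream.of(names)
            .map(n -> "_include=" + type + ":" + n)
            .collect(Collectors.joining("&"));
    }

    // findAll joins all the parts into a relative search URL
    static String findAll(String type, String... parts) {
        return type + "?" + String.join("&", parts);
    }

    public static void main(String[] args) {
        // Builds the same Encounter search shown earlier in this post
        String url = findAll("Encounter",
            including("Encounter", "subject", "condition", "reasonReference"),
            with("status").equalTo("in-progress", "finished"));
        System.out.println(url);
    }
}
```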

The work above is really out of scope for SANER, although we might consider including it as an appendix for others to consider (that's probably the easiest way to address a couple of issues in the ballot with respect to scope, making sure we've got it documented, but not requiring it to be used for successful implementation).

One of the values of FluentQuery is that it provides a means for someone to write a query without it being completely dependent on how the query URL is constructed.  A lightweight implementation can construct a URL as it goes (and that will be the first implementation that I write).  But other implementations could do some smart things, like:

  1. Limiting queries to what a FHIR Server supports, and handling some of the filter parameters differently.  
    1. :in and :not-in
      Not every server supports code:in queries on Observation for example.  But it's a really valuable way to simplify the writing of the query.  
    2. _has
      Rewriting has queries for servers that don't support them.
      Resource1?_has:Resource2:name=value can be handled by first querying for Resource2?name=value&_elements=name, with a post-filter that collects all the references, and then performing Resource1?_id=Reference1,Reference2, et cetera.
  2. Return lists that defer pagination of results until needed.

Search Functions

findAll(ResourceType [, QueryFunctionExpression]*)

The findAll function constructs a relative search URL for the specified resource type, appending the queries specified by any of the QueryFunctionExpression values separated by an &, and returns this in a string.  When executed, this query will return all results including those returned via pagination.

whereExists(ResourceType [, QueryFunctionExpression]*)

The whereExists function constructs a relative search URL for the specified resource type, adds _count=1 to it, and appends the queries specified by any of the QueryFunctionExpression values separated by an &, and returns this in a string.  This query is used for existence tests.

Query Execution Functions

onServers([reliability, ] Servers*)

The onServers function executes a search on the specified servers.  Servers is a list of fully qualified base URLs.  findAll queries resolve the data on all servers (including all pages from each server), and then select entry.resource from all returned Bundle resources.  A whereExists query stops after finding at least one match on any server, returning the first matching resource (and any included resources associated with it).  Implementations are free to execute searches serially or in parallel.  

The optional reliability parameter indicates how to handle failures during a search, and can take the value 'skip' or 'fail'.  Skip means that if any server fails to respond, or responds with an error code (or an exception of some sort), resolution should act as if a Bundle was returned containing a single OperationOutcome resource that describes the kind of error that occurred.  This ensures that queries succeed, but there may be missing data.  Fail means that if any server fails to respond, the expression should throw an exception.  This gives expression writers some limited control over how to handle error conditions when a server is being queried.  Implementations should consider retrying failed queries in either case.

The following executions are equivalent:
onServers('http://example.com/server1','http://example.com/server2')
onServers('http://example.com/server1'|'http://example.com/server2')

Query Parameter Name Functions

The query parameter name functions construct the first half of a query parameter to add to the query URL.  They simply return the name as a string.

with(Name)

The with function constructs the first half of a named query parameter.  It should be followed by a Query Parameter Value Function to construct the second half (the part containing the equals sign).

for(Name, Reference|Resource|Identifier)

The for function constructs a complete query parameter that matches a resource or identifier.

If the second parameter is a reference or resource id, the query parameter is written as Name=Reference.

If the second parameter is a resource, this is converted to a reference to that resource, and treated as above.

If the second parameter is an identifier, this is converted to a reference-by-identifier search, and written as Name:identifier=Identifier.

including(Names)

The including function specifies what other resources should be included.  If any value in Names doesn't start with a resource type, it is prefixed with the type of resource specified in the findAll() or whereExists() function.

has(Name)

has constructs the first half of a named query parameter supporting _has searches.  The value will be _has:Name.  Chained _has parameters are possible.  It should be followed by a Query Parameter Value function to construct the second half (the part containing the equals sign).

Query Parameter Value Functions

Query parameter value functions produce the second half of a query parameter (the part containing the equals sign).

A value can be a primitive, Quantity, Coding, CodeableConcept, or Reference type.

For the Quantity type, the value will be expressed in number|system|code form as required by Quantity parameters.  If system and code are empty but unit is present, a Quantity value will be expressed as number||unit with no system.  If none of system, code, or unit is present, a Quantity value will be expressed as number.

For the Coding type, the value will be expressed in system|code form as required of Token parameters.

Period parameter values work with Date, DateTime, Instant and Period data types.  A date promotes to an appropriate Period in these cases.
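The Quantity and Coding serialization rules above amount to a small formatting function.  A sketch, with hypothetical method names, passing the relevant fields in as strings:

```java
public class TokenFormat {
    // Quantity -> number|system|code, number||unit, or bare number,
    // following the three cases described above (null means "empty")
    static String quantityToken(String number, String system, String code, String unit) {
        if (system != null && code != null) {
            return number + "|" + system + "|" + code;
        }
        if (unit != null) {
            return number + "||" + unit; // no system, unit only
        }
        return number; // just the number
    }

    // Coding -> system|code as required of token parameters
    static String codingToken(String system, String code) {
        return system + "|" + code;
    }

    public static void main(String[] args) {
        System.out.println(quantityToken("5.4", "http://unitsofmeasure.org", "mg", null));
        System.out.println(quantityToken("5.4", null, null, "mg"));
        System.out.println(quantityToken("5.4", null, null, null));
        System.out.println(codingToken("http://loinc.org", "94500-6"));
    }
}
```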

equalTo(Values*)

Appends =Values[1],Values[2],...,Values[n] to the query parameter.  Note that Values can repeat, or be a list, or both.  The following three expressions are equivalent:

equalTo('in-progress','finished')
equalTo('in-progress'|'finished')
equalTo('in-progress,finished')

equalToComposite(Value1, Value2)

Appends =Value1$Value2 to the query.

notEqualTo(Values*)

Appends =neValues[1],Values[2],...,Values[n] to the query parameter. Note that Values can repeat, or be a list, or both. The following three expressions are equivalent:

notEqualTo('in-progress','finished')
notEqualTo('in-progress'|'finished')
notEqualTo('in-progress,finished')

greaterThan(Value), greaterThanOrEqualTo(Value), lessThan(Value), lessThanOrEqualTo(Value), approximately(Value)

Appends =prefixValue to the query parameter, where prefix is gt, ge, lt, le, or ap as appropriate.  Value must be a singular value.

startsAfter(Period), endsBefore(Period)
If Period is any date type, promotes that to a Period first.
Appends =prefixValue to the query parameter, where prefix is sa or eb respectively, and Value is Period.start or Period.end respectively.

within(Period)

If Period is any date type, promotes that to a Period first.
Given that name is the query parameter name, appends =gtPeriod.start&name=ltPeriod.end to the query parameter.  If end is not present (a period open at the end), then it appends only =gtPeriod.start to the query parameter (this is one of the functions that needs access to the focus).

not(token), text(token), above(token|uri), below(token|uri), in(uri), notIn(uri)

Appends :modifier=token|uri to the query parameter, where modifier is not, text, above, below, in, or not-in appropriately. 


Wednesday, September 16, 2020

Generating your own FHIR Narrative with XML using SUSHI

Building an implementation guide relies on a stack of transformations.  It's a good thing we know how to stack things up on top of each other these days.  But what happens when something in the stack isn't quite adequate?  How do you manipulate things to make it all work?

This example poses a particularly interesting challenge: I want to be able to generate my own narrative from the XML version of the FHIR resource.  I won't have the XML when SUSHI is done, only after the IG Builder completes.  I can easily get the XML version from the JSON version by running this little program:

    // Uses HAPI FHIR (ca.uhn.fhir.context.FhirContext, ca.uhn.fhir.parser.IParser,
    // org.hl7.fhir.instance.model.api.IBaseResource, and java.io).  The xp and jp
    // parsers referenced below are created once from a shared context:
    private static final FhirContext ctx = FhirContext.forR4();
    private static final IParser xp = ctx.newXmlParser().setPrettyPrint(true);
    private static final IParser jp = ctx.newJsonParser().setPrettyPrint(true);

    public static void main(String args[]) {
        IParser op = null;
        IParser ip = null;
        for (String arg: args) {
            IBaseResource r;
            File fin = new File(arg);
            File fout;
            if (arg.endsWith(".json")) {
                fout = new File(fin.getParent(), fin.getName().replace(".json", ".xml"));
                op = xp;
                ip = jp;
            } else if (arg.endsWith(".xml")) {
                fout = new File(fin.getParent(), fin.getName().replace(".xml", ".json"));
                op = jp;
                ip = xp;
            } else {
                System.err.println("Do not know how to convert " + arg);
                continue;
            }

            try (FileReader fr = new FileReader(fin); FileWriter fw = new FileWriter(fout)) {
                r = ip.parseResource(fr);
                op.encodeResourceToWriter(r, fw);
            } catch (IOException e) {
                System.err.printf("Cannot convert %s: %s\n", arg, e.getMessage());
            }
        }
    }

But then, how do I insert my narrative into the build process?  If it has to happen post-SUSHI processing, then the IG Builder won't execute it.  I'm only doing this for ONE resource (or perhaps a few), not the 97 or so that I otherwise generate using SUSHI.  It will be updated infrequently, and I don't want to give up the convenience of having the automated SUSHI run in IG Publisher do most of the work for me.  

But hey, if you can make your transform stack run in a circle the way the Crazy Russian does with Pringles, you can make it work.

Here's what I decided to do:

In the Resource that I want generated text, I include the following:

Instance: ComputableCDCPatientImpactAndHospitalCapacity
InstanceOf: PublicHealthMeasure
Title: "Computable CDC Patient Impact and Hospital Capacity"
...
* insert ComputableCDCPatientImpactAndHospitalCapacityText

Then I create a RuleSet named ComputableCDCPatientImpactAndHospitalCapacityText and modify my XSLT to generate that ruleset instead of the content of the DIV.  The template to generate the ruleset looks like this:

<xsl:template match="/">
    <xsl:text>RuleSet: </xsl:text><xsl:value-of select="fhir:Measure/fhir:id/@value"/><xsl:text>Text&#xA;</xsl:text>
    <xsl:text>* text.status = #generated&#xA;</xsl:text>
    <xsl:text>* text.div = """&#xA;</xsl:text>
    <div xmlns="http://www.w3.org/1999/xhtml">
        <xsl:apply-templates select="/fhir:Measure"/>
    </div>
    <xsl:text>"""&#xA;</xsl:text>
</xsl:template> 

Finally, my process to update the generated narrative is to do the following:

  1. Generate the resources using SUSHI
  2. Run my XSLT to generate the Narrative
  3. Rerun the IG Publisher
Basically, I'm using SUSHI to generate an input to a second run of SUSHI (run by the IG Publisher).

I'm not happy with this.  There should be a way to insert this generation step somewhere else in the tool chain, and update the tool chain (perhaps with templates) so that I don't have to rely on the FHIR Generated narrative, but can rather build my own from the XML resource.

It's a bit uglier even than that, because some of the resource content (e.g., description, definition, et cetera) is actually Markdown rather than HTML, and I had originally used Markdown tags to support bulleted lists, definition lists, and links in some of that content where appropriate.  I punted on that problem by simply replacing that limited markup with its (again limited) HTML equivalents, since HTML markup is allowed in Markdown.


Wednesday, September 9, 2020

The Bug that Almost Got Away

I discovered a tricky issue in bed counting while finishing up the automated measure for bed counts.

The tricky bit is that what you are counting is beds (locations), but what you can find out about based on the reporting period (via a FHIR query) is encounters, and that's what I almost counted.

Here's where it gets ugly.  A patient can have multiple encounters in the same day.  And they can be in different beds (locations) during the same encounter.

It can be worked out so long as a few simplifying assumptions can be made.

  1. Locations referenced by an Encounter are ordered from most to least recent.
  2. Patients can only be in one location at a time.
  3. Each location can handle only one patient.
  4. Encounters are ordered from most to least recent.
  5. If an encounter is active, then the most recent location is occupied by the patient.

Let L be a set of locations and encounters
For each active encounter E during the reporting period:
    if E.location.location is not in L
        Add E to L
        Add E.location.location to L
    end if
End For

At the end of the loop, L will contain:
  1. All occupied locations.
  2. The most recent encounter to cause that location to be occupied.
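Under those assumptions, the loop can be sketched in plain Java (the data shapes here are hypothetical stand-ins; the real data are FHIR Encounter resources):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BedCount {
    // Stand-in for an Encounter: an id plus its locations, ordered
    // most to least recent (assumption 1 above)
    record Encounter(String id, List<String> locations) {}

    // Encounters are assumed ordered most to least recent (assumption 4);
    // the first encounter seen for a given location wins.
    static Map<String, String> occupiedBeds(List<Encounter> activeEncounters) {
        Map<String, String> occupied = new LinkedHashMap<>();
        for (Encounter e : activeEncounters) {
            String mostRecent = e.locations().get(0);
            occupied.putIfAbsent(mostRecent, e.id());
        }
        return occupied;
    }

    public static void main(String[] args) {
        // enc2's patient moved from bed-3 to bed-7; enc1's patient is in bed-3
        List<Encounter> encounters = List.of(
            new Encounter("enc2", List.of("bed-7", "bed-3")),
            new Encounter("enc1", List.of("bed-3")));
        System.out.println(occupiedBeds(encounters)); // bed-7 and bed-3 both occupied
    }
}
```

At the end, the map holds each occupied location and the most recent encounter that put a patient there, matching the two bullets above.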
Here's almost the same thing in FHIRPath.

aggregate(
  iif($total.select(location[0]).location contains $this.location.location.first(),
      {}, $total | $this))

Three lines of precious brevity. 

There were so many places I could have gone wrong, and I won't actually know whether it works until maybe Thursday.  But at least now I'll be counting occupied beds at the end of the period, rather than all beds that were occupied at any point during the period.