Friday, August 20, 2021

Stratifying Race and Ethnicity for SANER

Variation is the bane of standards.  Eliminating needless variation is part of my job.  Doing it in a way that doesn't increase provider (or developer) burden is an indication that it's been done right.

I've looked at a lot of state and national dashboards while working on the SANER Project, and one thing I notice is the variation in how data is reported with respect to race and ethnicity classifications (strata).  Often, when reported publicly, these two different categories are combined into smaller sets, with groupings like multiple races, other, and unknown.

ONC National Coordinator Micky Tripathi noted this kind of Health IT reporting variation in his keynote at a recent Strategic Health Information Exchange Collaborative (SHIEC) conference.

Federal reporting uses separate fields for race and ethnicity, and allows multiple values to be reported for race.  According to OMB reporting requirements, there are 5 possible values for race (not counting various flavors of unknown) and two values for ethnicity.

Reporting multiple races means that there are several ways to report none (flavors of null including unknown, refused to answer, and did not ask), 5 ways to report one race, 10 ways to report two races, 10 ways to report three races, 5 ways to report four, and 1 way to report all five (the 31 non-null combinations are just 2^5 - 1), resulting in around 33 categories.

Combining that with the various ways to report ethnicity (again with flavors of none) results in about 165 possible reporting categories.  Looking at the actual statistics, only about 50 of those categories would generally be needed for a given facility (e.g., those with frequency greater than a few tenths of a percent) to stratify populations according to race and ethnicity, if combinations that don't occur are not reported, and perhaps even fewer for smaller facilities.  It wouldn't even be possible, for example, for a 100-bed hospital to use all of the category combinations.

The data is generally rolled up into a much smaller number of reporting categories, which vary between states, and these often also differ from how federal dashboards report the same data.  Different states have different racial and ethnic makeups, and public reporting of race and ethnicity data at these levels is designed to address potential disparities relevant to the state.

Given that many state departments of health also support reporting to federal agencies, how does one normalize reporting without needing 51 separate specifications?

The best way to handle this is to stratify by the combination of race and ethnicity, and report all combinations that actually occur.  In other words, don't report 0 values for combinations that don't exist, as those can be inferred from the data.  This enables states to roll the data up into a smaller set of categories for their public reporting, yet retain the data needed for federal reporting, and it enables federal reporting to roll up differently.  When automatically computed, this level of stratification does not introduce a reporting burden on the reporting providers.
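
To make the roll-up idea concrete, here's a minimal sketch (my own illustration; the category labels are hypothetical stand-ins, not the actual OMB code systems) showing how counts kept at full race-by-ethnicity granularity can be re-aggregated into one state's coarser categories:

    import java.util.Map;
    import java.util.stream.Collectors;

    public class StrataRollup {
        public static void main(String[] args) {
            // Fine-grained strata: key = "race combination|ethnicity", value = count.
            // Combinations with a count of 0 are simply absent.
            Map<String, Integer> fineGrained = Map.of(
                "White|Not Hispanic or Latino", 120,
                "White|Hispanic or Latino", 30,
                "Black or African American|Not Hispanic or Latino", 45,
                "Asian+White|Not Hispanic or Latino", 5,
                "Unknown|Unknown", 12);

            // One hypothetical state roll-up: ethnicity trumps race, and any
            // multiple-race combination collapses into "Multiple Races".
            Map<String, Integer> rollup = fineGrained.entrySet().stream()
                .collect(Collectors.groupingBy(
                    e -> stateCategory(e.getKey()),
                    Collectors.summingInt(Map.Entry::getValue)));
            rollup.forEach((category, count) ->
                System.out.println(category + ": " + count));
        }

        static String stateCategory(String key) {
            String[] parts = key.split("\\|");
            String race = parts[0], ethnicity = parts[1];
            if ("Hispanic or Latino".equals(ethnicity)) return "Hispanic";
            if (race.contains("+")) return "Multiple Races";
            return race;
        }
    }

A federal roll-up would simply swap in a different categorization function over the same fine-grained counts, which is the whole point of stratifying at this level.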



Tuesday, August 3, 2021

Thinking Ahead

A very long time ago, when I worked at Florida State University, I had two rules for programming written on my board:

  1. Just get it to work.
  2. If it works, don't mess with it.
It became a test of anyone who read it: the ones who suffered some sort of visible reaction while interpreting the ramifications were definitely the people that I wanted to have around me.

If you just get it to work, and don't mess with it, you have something that has the very least effort put into it.  If you need to change it in the future, good luck with that.

Along the same lines, when I was growing up, a kid who lived on my street had a small car, I think it was a Dodge of some sort, and he wanted to beef it up.  So he took out the engine and tranny and replaced them with MUCH bigger ones.  Somehow he managed to make it all fit together after modifying the drive shaft, but he forgot one very important thing: engine torque.  When he finally started the car after spending the better part of a year on the project, he wound up warping the frame.  The car was a total loss.  The point of this story is that you can only do so much with a limited infrastructure.

One of the challenges with FHIR adoption for some use cases is that there are existing HL7 interfaces (labs, ADT feeds, immunizations, et cetera) that are already widely deployed, adopted, and working, in fact working well.  There's little desire to replace these interfaces with FHIR-based interfaces because:
  1. What we have is good enough for what we are doing.
  2. It's working right now.
But as we keep pushing the interoperability needle higher and higher, eventually we will have to replace these interfaces.  When should we do that?
  1. When what we have isn't good enough for what we want to do.
  2. Or when we can't easily make what we want to do work with what we have right now.
The HL7 V2 to FHIR project is an example of what happens when interfaces get stuck in these situations: we cannot easily connect them to newer infrastructure so that we can do more with them, so we build things that enable us to convert from one to the other (see the sketch after the list below).  The very existence of the project demonstrates that there's more that we want to be able to do with the data present in HL7 Version 2 messages.  This might include things like:
  • Aggregating data from multiple sources
  • Providing more sophisticated searching capabilities
  • Enabling data subscriptions
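
By way of illustration, here's roughly what such a conversion looks like when hand-rolled with HAPI's V2 and FHIR libraries (my own sketch, not the V2-to-FHIR project's published mappings):

    import ca.uhn.fhir.context.FhirContext;
    import ca.uhn.hl7v2.DefaultHapiContext;
    import ca.uhn.hl7v2.HL7Exception;
    import ca.uhn.hl7v2.model.Message;
    import ca.uhn.hl7v2.parser.PipeParser;
    import ca.uhn.hl7v2.util.Terser;
    import org.hl7.fhir.r4.model.Patient;

    public class AdtToPatient {
        public static void main(String[] args) throws HL7Exception {
            String adt =
                "MSH|^~\\&|SENDER|FAC|RCVR|FAC|20210803||ADT^A01|MSG0001|P|2.5.1\r"
              + "PID|1||12345^^^MRN||SMITH^JAN||19800101|F\r";

            // Parse the V2 message and pull fields out with a Terser.
            PipeParser parser = new DefaultHapiContext().getPipeParser();
            Message message = parser.parse(adt);
            Terser terser = new Terser(message);

            // Build the corresponding FHIR resource from the PID segment.
            Patient patient = new Patient();
            patient.addIdentifier().setValue(terser.get("/.PID-3-1"));
            patient.addName().setFamily(terser.get("/.PID-5-1"))
                             .addGiven(terser.get("/.PID-5-2"));

            System.out.println(FhirContext.forR4().newJsonParser()
                .setPrettyPrint(true).encodeResourceToString(patient));
        }
    }

Once the data is in FHIR form, the aggregation, search, and subscription capabilities above come along more or less for free.
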
There's a lot of effort and cost associated with replacing something that works with something else, and it's hard to justify that when the thing that's working is in fact still working.  But if there were a way to upgrade, replacing your scooter with a Corvette (and you could justify the need for a Corvette), then it might in fact be worthwhile.

When interface standards are mandated by regulatory policy, it's pretty difficult to upgrade.  Consider what happened with the X12 5010 standards, or the whole discussion around CCDA 1.1 and CCDA 2.1 backwards compatibility.  It's even more difficult when it all has to happen in a very short time frame.  We need to consider how to have policy enable these kinds of shifts over REASONABLE time frames.  Two years is not enough time to roll out a new standard without severely impacting an industry's capacity to do anything else but roll that out.  We know that from experience (or at least I hope we do).

But what would the next generation of ADT, lab, immunization, or other standards look like?  And what would they enable us to do that the current ones don't?  It's time to start thinking about that.


Monday, August 2, 2021

YAML as a FHIR Format

 

YAML (short for "YAML Ain't Markup Language", though I simply prefer "Yet Another Markup Language") is a file format that further simplifies writing structured data files.  A while back I struggled with writing measures for the SANER Project because both the XML and JSON formats have some minor issues that make it hard to hand-code the expressions.

XML sucks because, well, XML sucks.  Whitespace is really valuable for formatting code, but XML just wants to make it go away.

JSON isn't much better, because you have to escape newlines, and once again you cannot see your code structure in expressions.
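
For illustration, here's roughly how the same multi-line FHIRPath expression looks in each format (a hand-written sketch, not an actual SANER measure):

    JSON:  "expression": "findAll('Encounter')\n  .where(status = 'in-progress')\n  .exists()"

    YAML:  expression: |
             findAll('Encounter')
               .where(status = 'in-progress')
               .exists()

The YAML literal block (the |) is what you actually want to read and edit.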

I dinked around a bit with YAML input and output a while back, and now, because I'm creating a new measure, I wanted to get it working properly.  I have now done that, at least insofar as my YamlParser correctly round-trips from JSON to YAML and back to JSON.

The key to making this work is using Jackson to convert between JSON and YAML, and configuring YAML quoting so that strings that look like numbers (e.g., "001") don't get incorrectly treated as numbers when converting between the two.

The method newYAMLMapper() creates a correctly configured Jackson YAMLMapper.

    import com.fasterxml.jackson.dataformat.yaml.YAMLGenerator;
    import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;

    public static YAMLMapper newYAMLMapper() {
        YAMLMapper m = new YAMLMapper();
        return
          // Write multi-line strings as literal blocks (|) so expressions stay readable.
          m.enable(YAMLGenerator.Feature.LITERAL_BLOCK_STYLE)
           // Keep strings quoted so "001" isn't re-read as a number.
           .disable(YAMLGenerator.Feature.MINIMIZE_QUOTES)
           // Always emit \n rather than platform-specific line breaks.
           .disable(YAMLGenerator.Feature.USE_PLATFORM_LINE_BREAKS)
           // Don't wrap long scalar values across lines.
           .disable(YAMLGenerator.Feature.SPLIT_LINES);
    }

Methods for converting between YAML and JSON are fairly simple:

    import java.io.IOException;

    import com.fasterxml.jackson.core.JsonProcessingException;
    import com.fasterxml.jackson.databind.JsonMappingException;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public static String fromYaml(String yaml)
      throws JsonMappingException, JsonProcessingException
    {
        // Read the YAML into a generic object tree ...
        ObjectMapper yamlReader = newYAMLMapper();
        Object obj = yamlReader.readValue(yaml, Object.class);

        // ... then write that tree back out as JSON.
        ObjectMapper jsonWriter = new ObjectMapper();
        return jsonWriter.writeValueAsString(obj);
    }

    public static String toYaml(String jsonString) throws IOException {
        // Parse the JSON ...
        JsonNode jsonNodeTree = new ObjectMapper().readTree(jsonString);
        // ... and save it as YAML.
        return newYAMLMapper().writeValueAsString(jsonNodeTree);
    }
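
A quick round-trip check (my own example, not part of the utilities) shows why the quoting configuration matters:

    String json = "{\"id\":\"001\",\"count\":7}";
    String yaml = toYaml(json);   // "001" stays quoted in the YAML output
    String back = fromYaml(yaml); // round-trips to {"id":"001","count":7}

With MINIMIZE_QUOTES enabled instead, the YAML would contain id: 001, which would come back as the number 1.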

Converting from streams and readers works similarly.

The YamlParser class implements IParser.  It contains an embedded jsonParser to convert resources back and forth between Java classes and the JSON format, and then uses the toYaml and fromYaml methods in its encodeResourceToString and parseResource methods to read and write the YAML format.  It's NOT the most efficient way to read or write YAML for FHIR, but it works (correctly, as best I can tell).

    public static class YamlParser implements IParser {
        private final IParser jsonParser;
        YamlParser(FhirContext context) {
            jsonParser = context.newJsonParser();
        }

        @Override
        public String encodeResourceToString(IBaseResource theResource) 
            throws DataFormatException
        {
            try {
                return toYaml(jsonParser.encodeResourceToString(theResource));
            } catch (IOException e) {
                throw new DataFormatException("Error Converting to YAML", e);
            }
        }
        @Override
        public <T extends IBaseResource> T 
            parseResource(Class<T> theResourceType, InputStream theInputStream)
            throws DataFormatException {
            try {
                return jsonParser.parseResource(
                    theResourceType, fromYaml(theInputStream));
            } catch (IOException e) {
                throw new DataFormatException("Error Converting from YAML", e);
            }
        }
        ...
    }
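
Wiring it up looks something like this (a sketch; it assumes the fromYaml overload that takes an InputStream, per the note above about streams and readers):

    FhirContext ctx = FhirContext.forR4();
    YamlParser yamlParser = new YamlParser(ctx);

    Patient patient = new Patient();
    patient.addName().setFamily("Smith").addGiven("Jan");

    // Encode to YAML, then parse it back to confirm the round trip.
    String yaml = yamlParser.encodeResourceToString(patient);
    Patient parsed = yamlParser.parseResource(Patient.class,
        new ByteArrayInputStream(yaml.getBytes(StandardCharsets.UTF_8)));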

All of the setter/getter methods on YamlParser delegate the work to the embedded JsonParser, as shown in the examples below.  

        @Override
        public void setEncodeElementsAppliesToChildResourcesOnly(
            boolean theEncodeElementsAppliesToChildResourcesOnly) {
            jsonParser.setEncodeElementsAppliesToChildResourcesOnly(
                theEncodeElementsAppliesToChildResourcesOnly);
        }

        @Override
        public boolean isEncodeElementsAppliesToChildResourcesOnly() {
            return jsonParser.isEncodeElementsAppliesToChildResourcesOnly();
        }


A full-blown implementation can be found at YAML Utilities for FHIR.

Monday, June 28, 2021

Help Adrian Get Treatment

I'm asking this at the request of my eldest adult child, who goes by Aeowolfe these days.  Some of you may have met him a few years back, or seen him help me out in some videos for Health IT in the past.

My eldest child's partner has been suffering from chronic pain since mid-2017.  The onset of the disease occurred while they were at college, where my eldest met them.  Since then, they have lost the support of their family, and have subsequently been diagnosed with fibromyalgia, hypermobility disorder (similar to Ehlers-Danlos syndrome, missing only a direct family member diagnosis to match the clinical criteria), and endometriosis.  Our family has been providing support in various ways, including helping them get signed up for MassHealth and disability through Social Security (which took over 3 years).

They've found a treatment plan they think will help (ketamine-lidocaine infusions), but they cannot afford it at their present income, and insurance doesn't yet cover this treatment because it is new (there have been several clinical trials with positive results).  So my eldest has started a GoFundMe for Adrian.  If you are able and willing to contribute, you will have both my thanks and that of my eldest.



Wednesday, June 16, 2021

Creating your FHIR Artifact JIRA Specification

When you go to ballot, or any form of publication for a FHIR IG through HL7, you have to provide an XML document that defines the pages and artifacts that the HL7 Jira system will use for reporting issues or comments on the specification.

If you are smart, you create an initial template in the HL7/JIRA-spec-artifacts page when you start your project.  If you aren't, you wait a while and then create it.

The initial version should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<specification
  gitUrl="https://github.com/HL7/fhir-project-mhealth"
  url="https://hl7.org/fhir/uv/mhealth-framework"
  ciUrl="http://build.fhir.org/ig/HL7/fhir-project-mhealth"
  defaultWorkgroup="mh" defaultVersion="0.1"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="../schemas/specification.xsd">
  <version code="0.1"/>
  <artifactPageExtension value="-definitions"/>
  <artifactPageExtension value="-examples"/>
  <artifactPageExtension value="-mappings"/>
  <page name="(NA)" key="NA"/>
  <page name="(many)" key="many"/>
  <page name="(profiles)" key="profiles"/>
</specification>

But you will need to add lines for each artifact (Profile, ValueSet, et cetera) and page (.md or .html file) that you generate.

  <artifact name="Artifact Title" 
    key="artifact-id" id="artifact-filename-without-extensions"/>
  <page name="Table of Contents" key="toc"/>

This can be awfully tedious, but the IG Publisher can create an updated one for you.  Although if you haven't created an initial one in JIRA-Spec-Artifacts, for some reason it doesn't seem to create one for you.  I'm not clear on why it doesn't, but I found a workaround.

What you do is create that initial version and copy it to your templates folder, naming it jiraspec.xml.  Then you run the IG Builder without a vocabulary server:
C:\myproject>  JAVA -jar "..\%publisher_jar%" -ig ig.ini -tx n/a

Telling the IG Builder that you don't have a vocabulary server makes it assume that you do NOT have an internet connection, so it also doesn't try to get your templates or copy the current JIRA spec over the file I had you create above.  Now when you run the IG Builder, it will create an initial JIRA spec file for you, from which you can then generate a pull request to https://github.com/HL7/JIRA-Spec-Artifacts/xml.

Once you've finished, you can find the created specification in your project in template/jira.xml, which you can then rename appropriately and send a pull request to the JIRA-spec-artifacts page.

Ideally, this specification configuration could be more automated in a FHIR IG Build, but for now what we have works.  It's just a bit of a pain.


Wednesday, June 9, 2021

The Interop needle in 2030


ONC has been asking about Health Interoperability Outcomes for 2030.  Some sample statements they'd like completed include:

  • “Because of interoperability, ______ before/by 2030.”
  • “Because of interoperability, before/by 2030 [who] will [what].”

In Why We'll Never Have Interoperability, I note that the goal posts are always moving, bars are ever being raised, and the needle just keeps going around and around.  So I thought it would be interesting to look at this from the viewpoint of "what will we be concerned about" in 2030 as the next steps, rather than focusing on what we've accomplished.

Given all of that, these are the problems I think we will still be trying to solve:

  1. Making challenging content understandable to the average patient.
  2. Making standards of care (e.g., care guidelines) measurable and computable.
  3. Understanding the actual cost of care.
  4. Crossing domain boundaries (care, payment, social services, public health, emergency medical services, research).  Each of these domains is still widely separated from the others with respect to standards.
  5. Bringing those domains forward, since few of them have progressed as far with APIs as simple as FHIR's.
  6. Handling variations in dialects of FHIR as supported by different vendors.

And these are the problems that I think we will have made headway on:

  1. FHIR will have become ubiquitous in hospital and ambulatory practice interfaces between healthcare systems and devices.  It will be available not just in EHR systems, but also in departmental, laboratory, medical device, revenue cycle, and, to some degree, imaging systems (although DICOM will retain significant dominance).
  2. Patient-facing APIs will be ubiquitous.
  3. Visit scheduling, and much of the pre-visit "paper-work", will be done via the web for most patients.
  4. FHIR will have crossed into the payer space and HIPAA transaction standards, invading some of the territory previously owned by X12 and NCPDP.
  5. We will start to see FHIR transition into other healthcare-related domains (e.g., EMS reporting, social services), but adoption will be limited.



Monday, May 24, 2021

A Source Code Profiler for FHIRPath

I wrote my first source code profiler in 1992, mostly as an exercise to play with, but also because back then I didn't have the money for a professional one, and I wrote about it in Dr. Dobb's Journal.  That was a sampling profiler; it used the timer interrupt on a PC to determine where in the code the application was running.  Since then, I've had to write my own profiler several times professionally, for different reasons and purposes.  Once I wrote some code to instrument some BASIC code and profile it.  That code was running a web site for a major hardware vendor's web store (I had been flown onsite to address a critical sporadic outage).  While the profiler didn't show what was causing the outage, it did demonstrate that the application our company had developed wasn't the source of the problem, and I later DID determine what the cause was, and we fixed it.  I was given my first handheld computing device as a reward for that save.  And I've been called on to develop profiling tools for other custom languages and application implementations as well.

The focus of this week's FHIR Connectathon for SANER was performance and scaling.  We finished up at noon yesterday, but I was already wired to the rim with caffeine, so I looked for something else interesting to do.  As a result, I instrumented my version of the FHIRPath engine (it's basically a copy of the HAPI engine with some tweaks for use as a SANER measure computer) to profile the measure code.

There are three places in SANER where a FHIRPath expression is evaluated to compute a measure:

  1. Measure.group.population, which produces the counts for the population.
  2. Measure.group.stratifier, which produces the strata for each element in the population.
  3. Measure.supplementalData, which specifies how to collect supplemental data for each element in a population.
Each of these may have an associated FHIRPath expression, and of course, I want to know how each expression performs.

The FHIRPath engine produces a parse tree of ExpressionNode items.  The tree is walked by the FHIRPathEngine.evaluate functions.  What I did was simply add some code to trace the execution times in those functions in my FHIRPath engine code.

The core class and methods to support tracing in my engine are below.  Note: this is hacked proof-of-concept code I wrote in about an hour; documentation and everything else is coming.

       class Trace {
            String expr;                 // text of the expression being profiled
            long start, end, children;   // timestamps and accumulated child time (ns)
            ExpressionNode node;         // the parse tree node this record timed
            int parent;                  // index of the calling record (-1 at the root)
            int index;                   // this record's position in the trace list

            private Trace(String expr, int index) {
                this.expr = expr;
                this.index = index;
                this.start = System.nanoTime();
                this.parent = -1;
            }

            private Trace(int parent, int index, ExpressionNode node) {
                this.parent = parent;
                this.expr = (parent == -1 ? null : traces.get(parent).expr);
                this.index = index;
                this.node = node;
                this.start = System.nanoTime();
            }

            public void done() {
                this.end = System.nanoTime();
                long childTime = end - start;
                top = this.parent;
                if (top != -1) {
                    // Roll this call's elapsed time into the parent's accumulator
                    // of child time, having already popped back to the parent.
                    traces.get(top).children += childTime;
                }
            }

            public void done(ExpressionNode node) {
                this.node = node;
                done();
            }

            public String toString() {
                StringBuilder b = new StringBuilder();
                if (parent == -1) {
                    b.append(",,,,,,,\"").append(expr == null ? " " : expr.toString().replace("\"", "\"\"")).append("\"\n");
                }
                b.append("@").append(Integer.toHexString(expr.hashCode())).append(",").append(index).append(",")
                    .append(parent).append(",").append(start).append(",").append(end).append(",")
                    .append(end - start - children).append(",\"").append(node.toString().replace("\"", "\"\""))
                    .append("\"");
                return b.toString();
            }

            public void setExpression(String expression) {
                this.expr = expression;
            }
        }

        public void dump(File f, boolean append) throws IOException {
            try (FileWriter w = new FileWriter(f, StandardCharsets.UTF_8, append); PrintWriter pw = new PrintWriter(w);) {
                pw.println("Expr,Index,Parent,Start,End,Time,Expression");
                for (Trace t : traces) {
                    pw.println(t);
                }
            }
            traces.clear();
        }

        private Trace trace(String path) {
            if (!TRACE) {
                return dummy;
            }
            Trace trace = new Trace(path, traces.size());
            traces.add(trace);
            top = traces.size() - 1;
            return trace;
        }

        private Trace trace(ExpressionNode node) {
            if (!TRACE) {
                return dummy;
            }
            Trace trace = new Trace(top, traces.size(), node);
            traces.add(trace);
            top = traces.size() - 1;
            return trace;
        }

Sprinkled into key places in the functions that actually execute the parse tree are tracing statements, the trace(...) and trace.done() calls in the code below:

        public List<Base> evaluate(Base base, ExpressionNode expressionNode) throws FHIRException {
            Trace trace = trace(expressionNode);
            try {
                List<Base> list = new ArrayList<Base>();
                if (base != null)
                    list.add(base);
                log = new StringBuilder();
                return execute(new ExecutionContext(null, base != null && base.isResource() ? base : null,
                    base != null && base.isResource() ? base : null, base, null, base), 
                    list, expressionNode, true);
            } finally {
                trace.done();
            }
        }

What goes on is that the call to trace(expressionNode) adds a new timestamped execution record to a stack of records; the record also points to the record of the calling method (often the one at the top of the stack).  Then, when the function is finished, trace.done() simply adds the ending timestamp, accumulates the time of this call into an accumulator for child times in the parent record, and returns the stack pointer to the parent's record.
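
In use, it looks something like this (a sketch; the engine setup and the expression are placeholders, and findAll is one of the SANER extensions):

    // Hypothetical setup; workerContext is whatever IWorkerContext the engine needs.
    FHIRPathEngine engine = new FHIRPathEngine(workerContext);
    ExpressionNode expr = engine.parse("findAll('Encounter').exists()");
    engine.evaluate(encounter, expr);            // runs with tracing enabled
    engine.dump(new File("profile.csv"), false); // write the records, clear the traces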

When a user of the engine calls the dump() method with a file, the execution data is dumped to that file in CSV format.  Records look something like this:

    Expr       Index Parent           Start             End    Time Expression
    @63efb167      1     -1 611441841022000 611451511487500 4711600 findAll('Encounter', ...
    @791ea926      4      3 611441845767800 611441845782600   13400 'Encounter'
    @791ea926      5      4 611441845771000 611441845772400    1400 'Encounter'
    @6344cac3      6      3 611441845783900 611441847333500   12700 including('subject', 'diagnosis', ...
    @6344cac3      7      6 611441845785400 611441847322300 1533300 including('subject', 'diagnosis', ...
    @27688320      8      7 611441845786100 611441845787600    1200 'subject'
    @27688320      9      8 611441845786600 611441845786900     300 'subject'
    @39126a23     10      7 611441845788000 611441845789200    1000 'diagnosis'
    @39126a23     11     10 611441845788600 611441845788800     200 'diagnosis'
    @35c9a3be     12      7 611441845789500 611441845790400     700 'reasonReference'
    @35c9a3be     13     12 611441845789900 611441845790100     200 'reasonReference'


The Expr column identifies the unique expression from the parse tree being evaluated.  The index and parent describe the evaluation tree (which may evaluate the same expression multiple times with different inputs).  The start and end reflect the timestamps (in nanoseconds from an arbitrary base).  Time represents the elapsed time (again in nanoseconds), exclusive of time spent in child evaluations.  The expression gives the full text of the evaluated expression.

You can take this data into a spreadsheet, create a pivot table report, and come up with useful reports on those parts of the expression taking the longest, as shown below.

    Expression                          Time (ms)  Executions
    onServers(%Base).exists()           9145.7723         282
    onServers(%Base).where(...            405.8266           2
    $total | $this                         47.0435          94
    %ReportingPeriod.start - 1 'year'      10.4663         423


I normalized times to milliseconds so that the result would be more meaningful.  This shows that the retrieval of primary data takes about 9.1 seconds, and a secondary retrieval about 0.4 seconds, while everything else takes 50 ms or less.

It also demonstrates some opportunities for optimization in the FHIRPath engine, for example the 10 ms taken on the multiple evaluations of the common sub-expression:

      %ReportingPeriod.start - 1 'year'

This is likely more costly than it needs to be.  Since the FHIRPath engine today does nothing with common subexpression elimination, the profile shows the cost associated with re-evaluating these throughout the parse tree.  That's probably an opportunity for a whole other post.

Keith