
Thursday, June 7, 2018

Building Interfaces with FHIR

Almost three years ago I created some tools to extract trigger events from HL7 V3, and then made a list of HL7 Version 2 and 3 trigger events.  I came up with over 1000 of these.  FHIR also supports messaging, and has other mechanisms (e.g., Subscription) that can be used to trigger actions.

Use Case

Recently I've been looking at a way to automate data collection for 340B reporting, which requires collecting information about the patient, prescriber and medication from MedicationOrder resources.  This program, like many others, needs only a small subset of the FHIR Resource data in order to manage the reporting operations.  As with many other programs, the implementers would much rather not have to wade through hundreds of lines of JSON or XML to get at that data.  Both data providers and receivers would rather deal with only the minimum necessary information, even when it is perfectly OK to send more (e.g., because the receiver has protected status, such as public health, or because the data is being used for treatment rather than payment and operations).

In looking at how this interface would work, there's basically a four-step process:

  1. Some trigger event occurs which indicates that there's some data to consume.
  2. One or more queries are executed to obtain the necessary data in a bundle (a sample query for my use case appears just after this list).
  3. The resulting data set is transformed into a format that can be more readily consumed.
  4. The data is sent to the receiver.
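For the 340B case, step 2 might be a single search.  A rough sketch of that query (the resource name and search parameters are DSTU2; the exact criteria would depend on the program's requirements):

  GET [base]/MedicationOrder?status=active&_include=MedicationOrder:patient&_include=MedicationOrder:prescriber&_include=MedicationOrder:medication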
FHIR Subscriptions can be used to handle MOST of the different trigger event cases.  Search or Batch transactions can specify the data collection aspect.  Sending the data is straightforward.  What isn't so straightforward is transforming the data into a simpler format for the consumer.  FHIR does have a way to describe transformation from one structure to another (see StructureMap), and it also has a mapping language for defining the transformation.  The Clinical Quality Language (CQL) provides another mechanism by which data can be accessed and transformed.  XSLT would also work if one transformed the bundle as XML content.

Right now the pieces are nearly all there: almost everything needed already exists in Subscription and StructureMap.  They just aren't put together in a way that allows the interface to be completely defined in an easy fashion.

Most of the detail needed is in how we define the third step above.  Here's how I'd put this together:
  1. I'd define a Subscription for the interface that would include the search criteria (in Subscription.criteria), which would essentially identify the trigger event.
  2. The same query used to identify the trigger event would ALSO include any other information essential for producing the data needed in the interface (e.g., _include and chained queries).  That means that any included resources would be defined in the query used for Subscription.criteria.
  3. In order to transform the data, we need to specify a mechanism by which the content can be transformed to another format. StructureMap has what I need but implementing it for the most common use cases feels like overkill, and it's still a work in progress.  I have working code that could do this with CQL today, so that's likely where I would go.  Also, StructureMap is R3/R4 content, and I have a server that's currently deploying STU2.  Everything else I need is already in Subscription in DSTU2. 
  4. For my purposes, Subscription.channel.type would be set to rest-hook and Subscription.channel.endpoint would be set to the endpoint that should receive the report.  The format of the transformed result would be reported in Subscription.channel.payload.
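Putting those four points together, a rough sketch of what such a Subscription might look like in DSTU2 (the criteria reuses the 340B query from earlier; the endpoint, payload and header values are hypothetical):

  {
    "resourceType": "Subscription",
    "status": "requested",
    "reason": "340B reporting data feed",
    "criteria": "MedicationOrder?status=active&_include=MedicationOrder:patient&_include=MedicationOrder:prescriber&_include=MedicationOrder:medication",
    "channel": {
      "type": "rest-hook",
      "endpoint": "https://example.org/340b/receiver",
      "payload": "application/json+fhir",
      "header": "Authorization: Bearer (token for the receiver)"
    }
  }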

Transformation

Subscription doesn't have a way to specify the transform details, so the next step would be to define an extension to support that.  Here's what I'd add to Subscription to support this:

First of all, I'd add an element called transformation to channel.  It would be an optional backbone element (cardinality [0..1], because there should be at most one per channel) that specifies how the channel data (a Bundle) would be transformed before being sent to the endpoint.  It would have two fields, method and transform.  Method describes the method by which the transformation occurs, and transform provides a reference to the content defining the transformation.  Method would be a code, containing values from an extensible vocabulary including cql, fhirmapper, operation, structuremap and xslt.  These are further described below:

  • cql (CQL): Transformation is defined in CQL as the tuple returned in the "return" definition, where the payload provides the source material accessed by the CQL rule.
  • fhirmapper (FHIR Mapper): Transformation is defined in the FHIR Mapping language.  The source is the payload that contains the content associated with the query used in the subscription.
  • operation (Operation): Transformation is defined as the result of a FHIR operation, where the source parameter for the transformation is the payload.
  • structuremap (StructureMap): Transformation is defined in a StructureMap resource.  The source is the bundle that would result from the payload provided by the subscription.
  • xslt (XSLT): Transformation is defined in an XSLT transform.  The input is the payload provided by the subscription, and the output of the transform is what would be sent to the receiver.  While the input would be in XML, the output could be in any format (you can, for example, create JSON and CSV files with XSLT).
The transform field would be a resource reference (StructureMap, OperationDefinition) or URL reference to the content of the transform.
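Since today this would have to ride as a complex extension on channel, here's a rough sketch of how the channel portion of the Subscription might carry it (the extension URL, transform location and payload type are all hypothetical), shown for an XSLT transform that produces CSV:

  "channel": {
    "type": "rest-hook",
    "endpoint": "https://example.org/340b/receiver",
    "payload": "text/csv",
    "extension": [{
      "url": "http://example.org/fhir/StructureDefinition/subscription-transformation",
      "extension": [
        { "url": "method", "valueCode": "xslt" },
        { "url": "transform", "valueUri": "https://example.org/transforms/340b.xsl" }
      ]
    }]
  }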

The xslt, fhirmapper and structuremap methods would implement the transformation in an application-defined manner that really needs no more specification.  The input and output are fairly well understood for these types, and how the transformer is called is really not that interesting.

For operation and cql, a little more detail is needed to define how the payload is communicated and how the result of the transformation is extracted.  A transformation operation should ideally be idempotent, meaning it causes no changes (with the possible exception of audit records) on the server.

Operation

When an operation has only one output parameter named "return", that's what gets returned.  This works well in a transformation scenario.  The only further definition needed is how to specify the input that the transformation operates on.  What I'd probably want to implement for operation (because it is simplest) is to define the operation and any of its parameters in a URL, with a POST body of type multipart/form-data, where the source parameter defines the input of the transform in the body of the post.  In that way, fixed parameters of the operation transforming the content can be specified in the transform URL, and the body can be the part that is sent to the receiver.  I'd probably add a parameter to the operation called endpoint and another called header, to which I'd pass the values of the subscription endpoint and header parameters.  In this way, I could fully define in a Subscription how the server would implement the transform, and offload the work of sending the transformed result to the endpoint.
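A rough sketch of what such a call might look like under that approach (the operation name, query parameter names other than source, endpoint and header, and all URLs are assumptions on my part):

  POST [base]/$transform?method=xslt&transform=https://example.org/transforms/340b.xsl&endpoint=https://example.org/340b/receiver&header=Authorization:%20Bearer%20(token)
  Content-Type: multipart/form-data; boundary=sep

  --sep
  Content-Disposition: form-data; name="source"
  Content-Type: application/json+fhir

  { "resourceType": "Bundle", ... the subscription payload ... }
  --sep--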

NOTE: An operation can return content OTHER than a FHIR Resource by using a return type of Binary.  In HAPI, when you do this, the returned content is simply the content of the Binary resource.  I've successfully used this to return CDA and JSON content from operations I've defined.  This saves integrators the step of extracting the content from a FHIR Parameters resource, which is something many would rather NOT have to do.
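Pulling that together, a rough sketch of an OperationDefinition for such a transform operation in DSTU2 might look like the following.  The operation name and the transform parameter are my own assumptions; source, endpoint and header are the parameters described above, and the Binary return follows the note just given:

  {
    "resourceType": "OperationDefinition",
    "name": "Transform",
    "status": "draft",
    "kind": "operation",
    "idempotent": true,
    "code": "transform",
    "system": true,
    "instance": false,
    "parameter": [
      { "name": "source", "use": "in", "min": 1, "max": "1", "type": "Bundle",
        "documentation": "The content to transform (the subscription payload)" },
      { "name": "transform", "use": "in", "min": 0, "max": "1", "type": "uri",
        "documentation": "Reference to the content defining the transformation" },
      { "name": "endpoint", "use": "in", "min": 0, "max": "1", "type": "uri",
        "documentation": "Where to send the transformed result" },
      { "name": "header", "use": "in", "min": 0, "max": "1", "type": "string",
        "documentation": "Headers to send along with the result" },
      { "name": "return", "use": "out", "min": 1, "max": "1", "type": "Binary",
        "documentation": "The transformed content, returned directly as described above" }
    ]
  }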

CQL

For CQL, things get a little bit more complicated.  Executing a set of CQL rules produces results as name/value pairs.  The tricky bit is figuring out how to integrate the payload content (the input) into the CQL execution, but that can be implementation dependent.  The simpler part is figuring out how to get the output.  There are two ways I can see to deal with this:
  1. Return all defined values as a JSON object.  This works, but some of the defined values created in a CQL rule might be used internally, whereas others are meant for external consumption.  I have created a CQL operation that executes a CQL rule and allows you to specify which parts of the rule to return in a JSON object.  This is OK, but you can get just as much control by saying that the result to be returned is an object with a predefined name.
  2. Return the defined object using the name "return".  This is my preferred solution, as it allows the rule to have a single result (which can be a tuple), mirrors the OperationDefinition pattern, and allows the transform to be computed and extracted.
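Under the second option, the CQL "return" definition for the 340B case might evaluate to a tuple that serializes to something like the following (the field names here are purely illustrative), and that object is what would be sent along to the receiver:

  {
    "patient": "...",
    "prescriber": "...",
    "medication": "...",
    "quantity": "..."
  }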
What I'd probably do in my own implementation is provide support for OperationDefinition to start with, and build an operation definition framework that would let me create operations to handle cql, fhirmapper, structuremap and xslt methods.  Then, in my subscription handler, I could translate those other forms into an operation call, and let it do the transformation and notification work.

This suggests an additional capability that would be helpful to add to the FHIR operations framework: the ability to define where to send the response to a request, and what header parameters to supply for that interaction.  Typical RESTful protocol works in request/response pairs, but there are times when what you really want to do is send the response somewhere else as a notification.

I'd have to think carefully about the security ramifications of this, but my current thinking is that any FHIR HTTP request could have _endpoint and _header parameters that work in a way similar to how endpoint and header are used for a rest-hook subscription.  When these parameters are given, they specify the RESTful endpoint that would receive the resulting content, and the headers to send with it.  These parameters could then enable queries, operations or other content-generating requests to send notifications to a specified URL.  When these parameters are used, a query, operation or other FHIR API call could return the following values to indicate success:

  • 202 Accepted: The request was accepted, is syntactically correct, and the response will be sent to the requested endpoint, but may not have been sent yet.  This is essentially a response that says: yes, I got your request and it looks good, and I'll send the results along to the designated endpoint, but you have no guarantee that it actually got there.
  • 204 No Content: The request was accepted, is syntactically correct, and the response was sent to the requested endpoint and was accepted by it.  This tells you the response was sent and successfully acknowledged by the requested endpoint.
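For example, a request using these proposed _endpoint and _header parameters might look like the following (URLs and token are hypothetical); the server would forward the result bundle to the endpoint and answer the caller with one of the codes above:

  GET [base]/MedicationOrder?status=active&_include=MedicationOrder:patient&_endpoint=https://example.org/340b/receiver&_header=Authorization:%20Bearer%20(token)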

Summary

  • Create global parameters for _header and _endpoint and add code to handle them (this is a good place to consider HTTP or HAPI Interceptors).  
  • Add extensions to Subscription.
  • Figure out how to change operation to use PUT instead of POST (for rest-hook implementation).
  • Build a transformer operation.

But Wait!

If I define _header and _endpoint parameters for a FHIR operation, then I don't even need an extension on Subscription.  Instead, my endpoint would look like this:

[base]/$transform?_header={header for eventual receiver}&_endpoint={endpoint for eventual receiver}&transform={transform content to use}&type={transformation type}&_method=PUT

Now, I can define my operation easily, skip the extension stuff, and the only real difference is that I'd need to make the operation support PUT as well as POST, OR, make my subscription implementation smart enough to figure that part out when the server is calling one of its own operations with content.  I can add other parameters to support URLs or content to specify the details for the transformation.
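Filled in with hypothetical values for the 340B case (all URLs and the token are placeholders of my own), that endpoint might look like:

  [base]/$transform?_header=Authorization:%20Bearer%20(token)&_endpoint=https://example.org/340b/receiver&transform=https://example.org/transforms/340b.cql&type=cql&_method=PUT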

OK, thanks for letting me bounce this off of you.  This was a slam-dunk.

1 comment:

  1. Have you looked at GraphQL? That would be a lighter-weight choice than CQL in this context.
