Thursday, October 14, 2021

Responding before reading @alissaknight's: Playing with FHIR: Hacking and Securing FHIR APIs

If you've been living under a rock, you missed the latest big explosion in the FHIR World, Alissa Knight's most recent report: Playing with FHIR: Hacking and Securing FHIR APIs.  I've downloaded it and looked at the fantastic graphics, but I've not really read it yet.  This is the kind of report that will require, I suspect, as deep a read as some Federal regulations.  If I get the time (perhaps this weekend), I may even do a tweet-through-read-through.

I have some initial thoughts based on my reading of the tweets in this stream:

Here's what I expect to find, noting that this is only my guess as to what is happening:

Various apps rely on a FHIR back end that will allow a replay attack to be performed, whereby:

  1. The attacker obtains access to the authorization token used by the application's FHIR API calls, and uses it to make other API calls on a different patient. 

    NOTE: There are a number of ways to obtain this authorization token, depending on how the application is constructed and the level of access one has to developer tools and to the application's hardware and software.  Assume that the hostile attacker is the one in a million who has access to all of that, not the common high-school student (but don't count them out either, some of them are that good).  Is it a Java or .NET app?  There's a debugger for that, and I can almost assuredly find all of your assemblies or jar files on my device emulator, and debug the code running on a device.  Did you obfuscate your code?  If not, I can decompile it; there are tools for that too.  And even obfuscation is not enough when you consider that platform calls are still going to be obvious, and all the important ones are well known, so I can work back the stack to the code I need to investigate manually.

  2. The attacker constructs new API calls to make requests using the same authorization token.

  3. The call succeeds because the only check the back-end server performs on the authorization token is that it is a valid token issued by the appropriate authorizer (a rough sketch of this follows below).
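
To make that concrete, here's a minimal sketch of the replay in Java.  The endpoint, patient ID, and token are all hypothetical; the point is that nothing in the request itself ties the token to the patient it was issued for:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReplaySketch {
    public static void main(String[] args) throws Exception {
        // A bearer token captured from the legitimate app's traffic, or dug out of
        // the device as described in the NOTE above (value is hypothetical).
        String stolenToken = "eyJhbGciOi...";

        // Same token, but a different patient than the one it was issued for.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://fhir.example.org/fhir/Patient/some-other-patient"))
                .header("Authorization", "Bearer " + stolenToken)
                .header("Accept", "application/fhir+json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // If the server only checks that the token is valid, and not what it
        // actually authorizes, this prints somebody else's record.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```
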
The fix is not simple.  Basically, the back-end developer needs to bind the access control restrictions associated with the original access request that obtained the authorization token to the subset of data it authorizes access to (this is the easy part), and enforce them on every request (this is the hard part), incoming and outgoing (unless you can prove that the outgoing data will match the authorization based on how the query works).  JWTs provide more than adequate capability to bind exactly those details in an easily accessible way.  Essentially, the claims in the JWT should indicate what is allowed to be accessed, when, how, and by whom.  Sadly, for my first FHIR Server, JWT was still a work in progress, so I simply encoded an index in the token which pointed to memorialized claims in a database, constructed when the token was first created.
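
Here's a rough sketch, in Java, of what that binding and per-request check might look like.  I'm assuming the jjwt library for parsing and SMART on FHIR-style claim names ("patient", "scope"); your authorization server's claims will differ, but the shape of the check is the same:

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class TokenBinding {

    private final SecretKey signingKey;

    public TokenBinding(String sharedSecret) {
        // Key used to verify the token really came from our authorization server.
        // HS256 requires at least a 256-bit (32-byte) secret.
        this.signingKey = Keys.hmacShaKeyFor(sharedSecret.getBytes(StandardCharsets.UTF_8));
    }

    /**
     * Verify not just that the token is valid, but that its claims authorize
     * THIS request: this patient, this resource type, this operation.
     */
    public boolean authorizes(String jwt, String patientId, String resourceType, String operation) {
        Claims claims;
        try {
            // Throws if the signature is wrong or the token has expired.
            claims = Jwts.parserBuilder()
                    .setSigningKey(signingKey)
                    .build()
                    .parseClaimsJws(jwt)
                    .getBody();
        } catch (JwtException e) {
            return false; // not even a valid token, let alone an authorized one
        }

        // Claim names here ("patient", "scope") follow SMART on FHIR conventions;
        // whatever your authorization server memorializes works the same way.
        String authorizedPatient = claims.get("patient", String.class);
        String scope = claims.get("scope", String.class);

        boolean samePatient = patientId.equals(authorizedPatient);
        // Deliberately crude scope check for the sketch; real code should parse
        // scopes properly (wildcards like patient/*.read won't match a contains()).
        boolean scopeCovers = scope != null
                && scope.contains("patient/" + resourceType + "." + operation);

        return samePatient && scopeCovers;
    }
}
```

The important part is that authorizes() runs on every request, not just once when the token is issued.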

Once you can do that much, fine-grained access control is actually quite simple.  That was originally considered to be out of scope in my first effort for reasons of cost and complexity, but because we had to build the right security infrastructure even to support coarse-grained access control, fine-grained control simply became just a little bit harder, but also a worthwhile differentiator from other offerings.

As I said earlier, my guess as to what is wrong is only supposition, an experienced, wild-assed guess at what I will find upon reading.  But if I can guess it before reading, then arguably, so can someone else as smart as I am, with barely even an incentive.  Consider also that intelligence and quality of character are not necessarily correlated, nor do they even need to exist in the same body for a vulnerability like the one I described above to be readily and easily exploited by someone with enough motive and knowledge about how to use tools that people smarter than them wrote.

In the world of the great World Wide Web, FHIR API servers on Java, .NET, or other platforms all have the basic concepts of servlets, interceptors, filters, hooks, pointcuts, or similar concepts by other names, which are able to perform these checks and validations both before and after the call completes, so that you can:
  1. Verify that what is being asked for is allowed to be asked for (never assume that the querent of your back end can only be your application).
  2. Verify that the data being returned also matches what the authorization allows the end user to see, and either filter it or simply reject the request (after having performed some work that you wish you hadn't), as sketched below.
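
In Java terms, a servlet Filter (javax.servlet, assuming Servlet 4.0 or later) is one natural place to hang both of those checks.  This sketch reuses the TokenBinding class from the earlier sketch; the request parsing here is a stand-in, since a real FHIR server already has a parsed request to consult:

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class AuthorizationFilter implements Filter {

    // Reuses the TokenBinding sketch from earlier; the secret is a placeholder.
    private final TokenBinding tokenBinding =
            new TokenBinding("a-demo-shared-secret-at-least-32-bytes-long!");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String auth = request.getHeader("Authorization");
        String token = (auth != null && auth.startsWith("Bearer ")) ? auth.substring(7) : null;

        // Hypothetical request parsing: a real FHIR server already knows the
        // compartment, resource type, and interaction being requested.
        String patientId = request.getParameter("patient");
        String resourceType = request.getRequestURI().replaceFirst(".*/fhir/([^/?]+).*", "$1");

        // Check 1: is this request allowed to be asked for at all?
        if (token == null || patientId == null
                || !tokenBinding.authorizes(token, patientId, resourceType, "read")) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN,
                    "Token does not authorize this request");
            return; // fail before doing the work, not after
        }

        chain.doFilter(req, res);

        // Check 2 belongs here: wrap the response so the outgoing bundle can be
        // inspected, and drop anything outside the authorized compartment before
        // it goes back out the door.
    }
}
```
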
I've led teams building commercial FHIR servers (writing some of the code myself) on top of readily available open source frameworks a number of times in my life (most are still in use), and I've also implemented other API interfaces pre-FHIR.  When I built in security controls, I bound the security tokens to the specifically authorized purpose and sources, and I didn't trust that the token would be used only for that purpose; I made sure it was verified every time.

I have to tell you that was also a moderate pain point, because it cost about 10% of my processing time budget. I could, however, justify it by citing the specific regulatory text regarding fines for breach, and that, I think, made a huge difference in my ability to argue for that kind of security (this was perhaps the hardest part).

I expect to be reading a wake-up call; we'll see how my prediction turns out.  Remember too, APIs are still a work in progress.

Security is hard.
Good Security is harder.
Great Security is an ongoing work in progress.
Perfect security does not exist.
