My brain is still burning with all the discussions from the last week. I thought I'd spend this morning knocking one off my list, which is the ABBI PULL security model using OAuth 2.0. I spent some time with John Moehrke over lunch and breaks talking about it, and we think we've worked out the key details.
Software Deployment Models
There are four different deployment models for software using the ABBI PULL API that I can imagine, each with its own security concerns:
- Running in a Web Server
- Embedded in a Device
- Installed Natively on a Computer or Device
- Running in a Web Browser
I'm going to look at each of these separately, and the same provisions of my last post apply: Slap me if I get it wrong.
Running in a Web Server
The first two are probably the easiest to address. In the first, you have a limited number of instances of the software, (typically) controlled by the software developer. The website running the software can be secured from inspection by the end user, as can its secrets, so these applications can use dynamic registration pretty easily. They can also have their own identity associated with them, secured by the software manufacturer (or in some cases, by the organization deploying it) to ensure that the installed software instance can be identified. Protecting those identity credentials is up to the developer/manufacturer/deployer of the software.
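To make the dynamic registration idea concrete, here is a minimal sketch of the kind of client metadata a web-server-hosted application might submit. The field names follow OAuth 2.0 Dynamic Client Registration (RFC 7591); the application name and URLs are made up for illustration.

```python
import json

# Hypothetical client metadata for a server-hosted application.
# Field names are from RFC 7591; the values are invented examples.
registration_request = {
    "client_name": "Example PHR Viewer",
    "redirect_uris": ["https://phr-viewer.example.com/oauth/callback"],
    "grant_types": ["authorization_code"],
    # A server-hosted app can keep a secret, so it can authenticate
    # to the token endpoint as a confidential client.
    "token_endpoint_auth_method": "client_secret_basic",
}

# In practice this JSON body would be POSTed to the authorization
# server's registration endpoint; the response assigns a client_id
# (and, for a confidential client, a client_secret kept on the server).
body = json.dumps(registration_request, indent=2)
print(body)
```

The point is that because the secret never leaves the web server, this deployment model supports the strongest form of client authentication with the least fuss.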
Embedded in a Device
In the second case, embedded in a device, I'm talking about software with no user-accessible memory. So things like activity meters, glucometers, blood pressure cuffs, et cetera, could be included (depending on how they are built; I'm not talking about something you attach to your smartphone or tablet, but rather a wholly contained device). There will likely be several orders of magnitude more of these devices and their installed software than in the first case. Here we count on two things: the physical security of the device, which keeps someone from getting at the internals of the software, and a device-specific credential provisioned by the manufacturer. FWIW: it is possible to crack open the device and read out the software if standard components are used. That's a risk, but there are precautions device manufacturers can take to protect the software, or they can simply choose to accept it. The risk is somewhat mitigated by the fact that only a few software installations would be inspected this way. So if each device is provisioned with something like a device-specific certificate signed by the manufacturer, what is at risk is the software identity associated with a single device (for each device thus inspected). This assumes that the device manufacturer would deploy with something as secure as a signed certificate. If they just use a device serial number, it would be relatively easy to impersonate multiple devices by making up serial numbers.
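The difference between a bare serial number and a manufacturer-signed credential can be sketched in a few lines. A real deployment would use something like an X.509 certificate per device; here an HMAC keyed with a manufacturer secret stands in for the signature just to keep the example self-contained, and all the names and values are invented.

```python
import hashlib
import hmac

# Stand-in for the manufacturer's signing key (in a real system this
# would be the private key behind a certificate, never shared).
MANUFACTURER_KEY = b"demo-manufacturer-secret"  # hypothetical

def issue_credential(serial: str) -> str:
    """Done once at the factory: bind a signature to the serial number."""
    sig = hmac.new(MANUFACTURER_KEY, serial.encode(), hashlib.sha256)
    return f"{serial}.{sig.hexdigest()}"

def verify_credential(credential: str) -> bool:
    """Done by the data holder: check the signature, not just the serial."""
    serial, _, sig = credential.partition(".")
    expected = hmac.new(MANUFACTURER_KEY, serial.encode(), hashlib.sha256)
    return hmac.compare_digest(sig, expected.hexdigest())

genuine = issue_credential("GLUC-0001")
assert verify_credential(genuine)              # a provisioned device passes
assert not verify_credential("GLUC-9999.0")    # a made-up serial fails
```

With a bare serial number there is nothing to verify, so any made-up value looks as good as a real one; with a signed credential, an attacker who cracks open one device only learns that one device's identity.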
Installed Natively on a Computer or Device
In the case where software is installed and running natively on a computer (e.g., laptop or desktop) or device (e.g., smartphone or tablet), there are several challenges, as I've previously mentioned. It is technically feasible to separately provision each downloaded instance of the software with its own identity, stored in secure storage, but that may not work for some software distribution models (on media, or downloaded through an app store controlled by an outside organization). It is also technically feasible to require a separate software registration step to provision the software (as is done with many software applications), through which the software can be identified.
The approach that I've settled on here is to delegate the risks associated with the identity of the installed software instance to the software manufacturer. The manufacturer would need to either provision the installed and running application with its own secure identity, or use a shared identity among all identical instances of the same package. Since this is a choice the software manufacturer makes, they have some ability to control what happens should someone try to impersonate their software, and can determine what is an acceptable level of risk to them and their customers. Applications which help a patient keep track of non-life-threatening health data (e.g., diet) might require less stringent security measures than those which ensure that patients are taking their medication appropriately, for example.
The choices made by the manufacturer can be described to the authorizer, so that data holders and authorizers can also make decisions about whether to allow the application access and how to alert users based on the type of application identity (shared or separately provisioned).
Web Browser Based
Finally, for web browser based applications, we would need to delegate some of the risk of using the application to the patient. In the commercial world, it's hard for me to imagine an application that is completely browser-based (i.e., has no web-server component associated with it) that could secure its application identity, but it is technically possible. I think there are two cases here as well: the software developer could choose to deliver a separate identity to each "download" of the browser-based application, or they could share an identity across all such implementations. Given the ease with which a web-browser based application's identity can be obtained, it doesn't seem all that useful to deliver separate identities to each instance of the application running in a different web browser. There are some ideas I'm playing with here (e.g., tying the application identity to a user authentication), but this requires more thought.
P.S. That's one down (partially), and at least five more to go...