Thursday, July 27, 2023

TLS 1.2, Server Name Indication (SNI) and SOAP via CXF

It seems that I am destined to become a deep expert in the vagaries of TLS these days.  My most recent challenge was in figuring out why Server Name Indication (SNI) extensions weren't simply working in my BC-FIPS implementation that I talked about in the last few posts.

Background on SNI

For a brief moment, let's talk a little about SNI.  TLS is a lower-layer session protocol on top of TCP that encrypts communication, and HTTP (as HTTPS) is a higher-layer application protocol on top of TLS.  When you connect to an IP address over TCP and then initiate a TLS connection, the application layer hasn't yet seen the HTTP request, let alone the Host header, so the server must choose its certificate before it knows which site the client wants.  SNI serves, in TLS, the same function as the HTTP Host header: it names the host the client intends to reach.

In HTTP, the Host header allows one server to service multiple web sites or DNS endpoints, but unless SNI is used, every endpoint must be served with the same certificate, either using a wildcard or multiple subject alternative names. SNI allows one host to serve multiple sites with a different certificate for each site.
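For illustration, here's what explicitly requesting SNI looks like in plain JSSE (a minimal sketch; the host name is a placeholder):

import java.io.IOException;
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SniExample {
    public static void main(String[] args) throws IOException {
        // Create a TLS socket, then explicitly set the SNI host_name extension.
        SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault()
            .createSocket("www.example.com", 443);
        SSLParameters params = socket.getSSLParameters();
        params.setServerNames(Collections.singletonList(new SNIHostName("www.example.com")));
        socket.setSSLParameters(params);
        socket.startHandshake();    // The ClientHello now carries server_name = www.example.com
        socket.close();
    }
}

Normally HttpsURLConnection arranges this for you behind the scenes, and that's exactly the "magic" that breaks down with BCFIPS, as described next.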

Integrating SNI with Apache CXF and BCFIPS

Reading through the BCFIPS documentation, you'd think at first that all you need to do is enable SNI extensions by setting jsse.enableSNIExtension=true.  Sadly, that's not quite enough, as section 3.5.1 (Server Name Identification) states:

"... Unfortunately, when using HttpsURLConnection SunJSSE uses some magic (reflection and/or internal API) to tell the socket about the "original hostname" used for the connection, and we cannot use that same magic as it is internal to the JVM. 

To allow the endpoint validation to work properly you need to make use of one of three workarounds:"
The documentation then goes on to suggest the recommended workaround as follows:

3. The third (and recommended) alternative is to set a customized SSLSocketFactory on the HttpsURLConnection, then intercept the socket creation call and manually set the SNI host_name on the created socket. We provide a utility class to make this simple, as shown in the example code below.  
    SSLContext sslContext = ...;
    URL serverURL = ...;
    URLConnectionUtil util = new URLConnectionUtil();
    HttpsURLConnection conn =
        (HttpsURLConnection) util.openConnection(serverURL);
That's pretty simple.  What URLConnectionUtil.openConnection does is wrap the socket factory provided by conn (see HttpsURLConnection.setSSLSocketFactory) in one whose createSocket sets the SNI server name extension after calling the original factory's createSocket.
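I haven't reproduced URLConnectionUtil's source here, but the wrapping technique it describes looks roughly like this (a sketch; SniSettingSocketFactory is my own illustrative name, not a BC class):

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

/** Wraps another factory and sets the SNI host_name on each socket it creates. */
class SniSettingSocketFactory extends SSLSocketFactory {
    private final SSLSocketFactory delegate;

    SniSettingSocketFactory(SSLSocketFactory delegate) {
        this.delegate = delegate;
    }

    private Socket withSni(Socket s, String host) {
        if (s instanceof SSLSocket) {
            SSLSocket ssl = (SSLSocket) s;
            SSLParameters p = ssl.getSSLParameters();
            // Set the SNI host_name on the freshly created socket.
            p.setServerNames(Collections.singletonList(new SNIHostName(host)));
            ssl.setSSLParameters(p);
        }
        return s;
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return withSni(delegate.createSocket(host, port), host);
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localAddr, int localPort) throws IOException {
        return withSni(delegate.createSocket(host, port, localAddr, localPort), host);
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return withSni(delegate.createSocket(host, port), host.getHostName());
    }

    @Override
    public Socket createSocket(InetAddress host, int port, InetAddress localAddr, int localPort) throws IOException {
        return withSni(delegate.createSocket(host, port, localAddr, localPort), host.getHostName());
    }

    @Override
    public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        return withSni(delegate.createSocket(s, host, port, autoClose), host);
    }

    @Override
    public String[] getDefaultCipherSuites() { return delegate.getDefaultCipherSuites(); }

    @Override
    public String[] getSupportedCipherSuites() { return delegate.getSupportedCipherSuites(); }
}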

So, looking at CXF, it's the HttpURLConnectionFactory class that calls url.openConnection.  We could simply override that class and replace that call with a call to util.openConnection, according to the code in that class.  Here's the original.

    public HttpURLConnection createConnection(TLSClientParameters tlsClientParameters,
        Proxy proxy, URL url) throws IOException {
        HttpURLConnection connection =
            (HttpURLConnection) (proxy != null ? url.openConnection(proxy) : url.openConnection());
        if (HTTPS_URL_PROTOCOL_ID.equals(url.getProtocol())) {
            if (tlsClientParameters == null) {
                tlsClientParameters = new TLSClientParameters();
            }
            try {
                decorateWithTLS(tlsClientParameters, connection);
            } catch (Throwable ex) {
                throw new IOException("Error while initializing secure socket", ex);
            }
        }
        return connection;
    }

And my modest adjustment to the first two lines:

        URLConnectionUtil util = new URLConnectionUtil(
            tlsClientParameters == null ? null : tlsClientParameters.getSSLSocketFactory()
        );
        HttpURLConnection connection =
            (HttpURLConnection) (proxy != null ? util.openConnection(url, proxy) : util.openConnection(url));

But for some reason, that didn't work.

Debugging this, what I found was that the decorateWithTLS method also wraps the connection's socket factory, but it fails to look at the SSL socket factory that may have already been set on the HttpsURLConnection that was passed into it.

Here's a picture of that method (a screenshot of the decorateWithTLS source, not reproduced here).
It goes on for almost another 100 lines, doing all sorts of weird gyrations that low-level code needing to work with multiple libraries often does, including reflection and a bunch of other oddities.

What's missing here is an initial check to see if connection is already an HttpsURLConnection, and if so, whether it already has an SSL socket factory set other than the default.  In that situation, that factory (the one created by URLConnectionUtil) is the one that needs to be wrapped yet again.  Looking through everything this method does, I realized:
  1. I don't care about anything other than JSSE implementations.
  2. My socket factory is always set when I enter this method, and that's the one to use.
So, I replaced the middle if statement in my overridden function with:

    if (HTTPS_URL_PROTOCOL_ID.equals(url.getProtocol())) {
        if (tlsClientParameters == null) {
            tlsClientParameters = new TLSClientParameters();
        }
        HostnameVerifier verifier = SSLUtils.getHostnameVerifier(tlsClientParameters);
        connection.setHostnameVerifier(verifier);
    }

That very much simplifies everything, since the only thing decorateWithTLS does that matters for me is setting the hostname verifier.

So, that is how I enabled SNI with BCFIPS in an older version of Apache CXF.  There's other code you will need as well, because you'll have to get that subclass that creates the connection into the factory used by the Conduit.  That's outlined below.

public class HTTPConduit extends URLConnectionHTTPConduit {
    public static class Factory implements HTTPConduitFactory {
        @Override
        public org.apache.cxf.transport.http.HTTPConduit createConduit(HTTPTransportFactory f, Bus b,
            EndpointInfo localInfo, EndpointReferenceType target) throws IOException {

            HTTPConduit conduit = new HTTPConduit(b, localInfo, target);
            // Perform any other conduit configuration here
            return conduit;
        }
    }
    public HTTPConduit(Bus b, EndpointInfo ei, EndpointReferenceType t) throws IOException {
        super(b, ei, t);
        // Override the default connectionFactory.
        connectionFactory = new ConnectionFactory();
    }
}

Elsewhere in your application, you should include an @Bean declaration to create that bean in one of your configuration classes.

@Configuration
class MyAppConfig {
    // ...
    @Bean
    HTTPConduitFactory httpConduitFactory() {
        return new HTTPConduit.Factory();
    }
    // ...
}

Thursday, July 13, 2023

Debugging TLS Protocol Failures in BC-FIPS and Spring Applications

Debugging TLS protocol failures can be a nightmare.  With JSSE, you can use the old standby JVM option:

    -Djavax.net.debug=ssl,handshake,data,trustmanager,help

to get detailed reporting of what is happening.  Usually that provides more than enough (in fact too much) information to debug the protocol problem, but when using BCFIPS, guess what, it doesn't work anymore.  Why? Well, while these command line arguments make debugging easier, they also transmit decrypted information to the console, which is a huge leak of sensitive information.

So, what's a developer to do?

BCFIPS uses java.util.logging to provide reports on protocol failures.  These reports do NOT include decrypted information, but do include enough information to tell you exactly where the protocol failure happened.  But to get java.util.logging working with a Spring Boot application that uses Logback as its logging engine, you have to jump through just a few small hoops.

First, you need to include jul-to-slf4j in your dependencies.  This is a bridge between java.util.logging and SLF4J implementations.

        <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>jul-to-slf4j</artifactId>
        </dependency>

Next you'll need to activate the bridge during application startup.  It's a good idea to do this as early as possible (before bean loading even).

      import org.slf4j.bridge.SLF4JBridgeHandler;

        ...

      public static void main(String ... args) {
          SLF4JBridgeHandler.removeHandlersForRootLogger();
          SLF4JBridgeHandler.install();

          ...

Once you've done all of the above, you will start getting BCFIPS logs reported via Logback.  But the SLF4J bridge has a cost, so you'll want to add a bit of optimization: configure Logback's LevelChangePropagator to propagate Logback level configuration back to JUL, so that you don't pay the bridge's overhead for disabled logging statements.

<configuration>
  <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
    <!-- reset all previous level configurations of all j.u.l. loggers -->
    <resetJUL>true</resetJUL>
  </contextListener>

To enable reporting on protocol errors, add a logger for the BC JSSE provider package:

  <!-- Enable BC Debug Logging by setting level to DEBUG or TRACE -->
  <logger name="org.bouncycastle.jsse.provider" level="INFO"/>

With all of that in place, BCFIPS protocol diagnostics will show up in your Logback output.

I tracked down my problem to an issue with TLS 1.2 renegotiation: my client was connecting to a server that first allowed the connection, and then renegotiated with client authentication to get at my client certificate.  BCFIPS disables renegotiation by default; to enable it under a limited set of circumstances (that are secure) you can add:
    -Dorg.bouncycastle.jsse.client.acceptRenegotiation=true
to your java command line, or set it in System properties at application startup.
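The programmatic equivalent is a one-liner, as long as it runs before the first TLS connection is attempted:

    // Equivalent to the -D command line flag shown above.
    System.setProperty("org.bouncycastle.jsse.client.acceptRenegotiation", "true");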


Monday, July 10, 2023

Dynamically Reloading TLS Trust and Identity Material

Wouldn't it be nice if you didn't have to restart your server to update keys, certificates or trust stores?  I've spent a good bit of time on this across both client and server implementations, and so I have a few pointers.  If you've read the last two posts, you know I've been working through requirements and implementation.  Now I'm going to add this auto-renewal of trust and key material to that effort.

Most folks will just need to deal with setting up trust and key managers for their web application.  That's fairly straightforward.  The challenge that I face with this particular application is that there are at least three different ways that trust and key material is provided to the underlying application, depending on how the connection is handled.

The basic idea is to set up a polling thread that periodically checks for changes in trust material, and when a change happens, signal anyone that has registered for those change events to update trust material in whatever way they need.

For my uses, inbound connections go through the server, which is what most will have to deal with.  But I also have two different types of outbound connections which are configured in different ways.  Some are SOAP using Apache CXF, others are RESTful API calls made through the HttpsURLConnection class (those APIs aren't that difficult to work with, and so don't need much more).  But each requires a different way to communicate trust and identity material to the system.

Let's start with the first, and most common:

Since Apache Tomcat 8.5 there has been an API that enables you to reload key and trust material through the protocol handler for the connection.  During embedded Tomcat setup (if you do it programmatically), you create a Connector and add it to the service.  This connector is where you add the SSLHostConfig and set up the protocol parameters (e.g., connection timeout, max connections), et cetera, through a protocol handler derived from AbstractHttp11Protocol.

Somewhere in this process you will eventually wind up with three things:

  1. The Connector connector.
  2. The SSLHostConfig configuration.
  3. The protocol handler nioProtocol.
    // Configure SSL
    connector.addSslHostConfig(configuration);
    // Get the protocol handler
    Http11NioProtocol nioProtocol = (Http11NioProtocol) connector.getProtocolHandler();
    // Do any other configuration of the protocol handler here.
        ...

After all of this is where you add the magic.  What you are doing here is calling a method to register a runnable in a list of callbacks to invoke when trust or key material needs to be updated.  I use this model because three different components need to do something to update trust and key material in the system I'm working with.
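That registration method isn't shown in full in this post; here's a minimal sketch of what the snippets below assume:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Each component registers a Runnable that refreshes its own view of the
    // key and trust material.  A thread-safe list, since the polling thread
    // iterates while other threads may still be registering.
    private final List<Runnable> trustChangedListeners = new CopyOnWriteArrayList<>();

    public void addSslTrustChangedListener(Runnable listener) {
        trustChangedListeners.add(listener);
    }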

    // set up to reload configuration.
    addSslTrustChangedListener(() -> nioProtocol.reloadSslHostConfigs());

My actual implementation of the runnable is a little more complex, because I reuse portions of code that access key and trust stores, but generally, the main idea is to call reloadSslHostConfigs() to force a refresh of key and trust material.

CXF is a bit easier.  I'm still using XML configuration for the HTTPConduit that is used, but for the bean containing the TLSClientParameters on that conduit, I set up a runnable to refresh the socket factory thus:

    @Bean(name="tlsParamsClientWs")
    public TLSClientParameters getTLSClientParameters() {
        TLSClientParameters p = new TLSClientParameters();
        // Force reload of Socket Factory
        p.setSSLSocketFactory(getSocketFactory());
        // Add listener to update the factory.
        addSslTrustChangedListener(() -> p.setSSLSocketFactory(getSocketFactory(true)));
        return p;
    }

This method constructs the bean that contains the client parameters, and then adds a listener that forces an update of the SSLSocketFactory.  You may be able to just update the parameters and let the factory be created for you; I need a bit more control for my application.  Note: the getSocketFactory() and getSocketFactory(boolean forceReload) methods aren't shown here.

For my outbound RESTful connections, which for now use HttpsURLConnection since they aren't that complicated, I have one last listener which relies on a bean that eventually calls the getSocketFactory() method referenced above.

This enables all of my inbound and outbound connections to dynamically respond to updates in trust material, with the addition of a scheduled executor that checks for changes to files every 10 seconds (configurable), and then calls each trust-changed listener (catching exceptions inside the loop so that an exception thrown by any single listener doesn't break the next one).

I'm not going to reproduce all of the code; it's fairly straightforward.  You can use something like the Java WatchService (see https://dzone.com/articles/how-watch-file-system-changes) or work with the commons-io monitor classes.
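For what it's worth, the checkForUpdates() method referenced in the code below can be as simple as a timestamp comparison; a sketch, with placeholder store paths:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.FileTime;

    // Placeholder locations for the shared key and trust stores.
    private final Path keyStorePath = Paths.get("/etc/pki/server-keystore.bcfks");
    private final Path trustStorePath = Paths.get("/etc/pki/server-truststore.bcfks");
    private FileTime lastReload = FileTime.fromMillis(0);

    private boolean checkForUpdates() throws IOException {
        // Report a change when either store file is newer than the last reload.
        FileTime keyTime = Files.getLastModifiedTime(keyStorePath);
        FileTime trustTime = Files.getLastModifiedTime(trustStorePath);
        FileTime latest = keyTime.compareTo(trustTime) >= 0 ? keyTime : trustTime;
        if (latest.compareTo(lastReload) > 0) {
            lastReload = latest;
            return true;
        }
        return false;
    }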

This is the basic idea though:

public void startMonitoring() {
    ScheduledExecutorService s = Executors.newSingleThreadScheduledExecutor();
    s.scheduleAtFixedRate(this::updateTrust, 10, 10, TimeUnit.SECONDS);
}

public void updateTrust() {
    try {
        if (checkForUpdates()) {
            for (Runnable trustChangedListener : trustChangedListeners) {
                try {
                    trustChangedListener.run();
                } catch (Exception e) {
                    LOGGER.error("Failed to update trust material", e);
                }
            }
            reloadCount = getReloadCount() + 1;
            clientStoreOutOfDate = serverStoreOutOfDate = false;
            LOGGER.info("Key and Trust stores updated.");
        }
    } catch (IOException e) {
        LOGGER.error("Could not determine trust material update status", e);
    }
}

You will probably have to do a bit of work to make this operate in your own environment, but now you can see how to integrate it with both server and client endpoints in several different ways.

Friday, June 30, 2023

Addressing technical challenges with BC-FIPS


Last week I talked about the requirements for implementing TLS with a certified encryption module (specifically Bouncy Castle FIPS, or BC-FIPS).  Today I'm going to tell you a bit more about how one might technically go about this, and the specific technical challenges that you may run into.

First of all, BC-FIPS provides some installation instructions that a) no longer work with JDK-11, and b) don't play well with Spring Boot uber-jar class loading using standard classpath override mechanisms.  I never found the root cause for this problem; what I wound up doing was simply loading the BC-FIPS security providers dynamically at application startup.

These (non-working) instructions include modifications needed to the JDK, specifically the java.security file and lib/ext folders.

There are three aspects of this configuration:

  1. Creating a compliant SecureRandom (this is described in the BC-FIPS documentation).
  2. Installing the BC FIPS Security Provider.
  3. Installing the BC JSSE Security Provider.

I do this in a static method BEFORE database initialization.  The reason for this is that DB initialization code needs to be able to get a FIPS compliant socket factory to initialize the connection pool.

private static void init() {
    // This is necessary initialization to use BCFKS module
    CryptoServicesRegistrar.setSecureRandom(getSecureRandom());
    Security.insertProviderAt(new BouncyCastleFipsProvider(), 1);
    Security.insertProviderAt(new BouncyCastleJsseProvider(), 2);
}


/**
 * Generate a NIST SP 800-90A compliant secure random number
 * generator.
 *
 * @return A compliant generator.
 */
private static SecureRandom getSecureRandom() {

    /*
     * According to NIST Special Publication 800-90A, a Nonce is
     * A time-varying value that has at most a negligible chance of
     * repeating, e.g., a random value that is generated anew for each    
     * use, a timestamp, a sequence number, or some combination of
     * these.
     *
     * The nonce is combined with the entropy input to create the initial
     * DRBG seed.
     */
    byte [] nonce = ByteBuffer.allocate(8).putLong(System.nanoTime()).array();
    EntropySourceProvider entSource = new BasicEntropySourceProvider(new SecureRandom(), true);
    FipsDRBG.Builder drbgBldr = FipsDRBG.SHA512
        .fromEntropySource(entSource).setSecurityStrength(256)
        .setEntropyBitsRequired(256);
    return drbgBldr.build(nonce, true);
}

The code above effectively does what the BC-FIPS documentation's recommended changes to the JDK's java.security file would do.  I make all the recommended changes except the ones that initialize the security providers, because I cannot configure the JDK to load the BC classes from the lib/ext folder since that is no longer supported in JDK-11.  The alternative suggested is to put the location of those classes on your classpath during application startup.  However, I also discovered that doesn't work, likely due to conflicts with how uber-jar classloading works (in fact, those classes technically are on the classpath in the uber-jar).  I also swap out the default keystore format from JKS to BCFKS to ensure compliance with BC-FIPS KeyStore requirements.  Technically JKS is fine for certificate stores, but frankly, I didn't even want to enable JKS support in case something broke somewhere else.
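If you'd rather not edit java.security at all, the keystore-type swap can also be done programmatically in the same init() method shown above; a one-line sketch:

    // Equivalent to setting keystore.type=BCFKS in java.security;
    // KeyStore.getDefaultType() will now return BCFKS.
    Security.setProperty("keystore.type", "BCFKS");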

If your database is in the cloud (e.g., AWS or Azure), you may need to add a certificate to cacerts to enable the database connection using JSSE (BC-FIPS or native Java JSSE code).  I just do this to the cacerts file in the deployed JDK.

    keytool -keystore cacerts -storepass SECRET -noprompt -trustcacerts -importcert -alias awscert -file certificate.der

Next, to convert cacerts to BCFIPS format, this is what you will need to do:

    keytool -importkeystore -srckeystore cacerts -srcstoretype JKS -srcstorepass changeit \
        -destkeystore jssecacerts -deststorepass changeit -deststoretype BCFKS -providername BCFIPS \
        -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath lib/bc-fips-1.0.2.3.jar

This does the conversion, and will create a new file "jssecacerts" in the BCFKS format.  The JDK looks for jssecacerts before cacerts, and so now I have both formats still hanging around in case I need them.

A simpler way to do this conversion is with KeyStore Explorer, a tool I often use to inspect or modify key and trust store content.  This tool already has BCFKS support built in, even if it may not be BCFIPS compliant (straight BC also supports the BCFKS format; it's just not a certified component).

Finally, you'll have to change how you configure SSL/TLS for your server and/or client components.  Our system configures these programmatically using beans for KeyStore, TrustStore, et cetera, but other servers may just use property or configuration values (e.g., server.xml for Tomcat).

Anywhere the default keystore type is present, you'll need to change the type of keystore to BCFKS, and if the provider type is specified, you'd use BCFIPS (as for keytool commands above).

If you want to get a KeyManagerFactory, TrustManagerFactory, or SSLContext programatically, here's how you'd get those:

KeyManagerFactory keyMgrFact = KeyManagerFactory.getInstance("PKIX", "BCJSSE");
TrustManagerFactory trustMgrFact = TrustManagerFactory.getInstance("PKIX", "BCJSSE");
SSLContext sslContext = SSLContext.getInstance("TLS", "BCJSSE");


This ensures that all encryption used to protect key and trust material is FIPS compliant.  Sadly, neither the JKS nor the PKCS12 format's built-in encryption is itself compliant.
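If you load stores programmatically, the same type and provider names apply; a minimal sketch (the path and password are placeholders):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.KeyStore;

    // Load a BCFKS trust store through the certified BCFIPS provider.
    KeyStore ks = KeyStore.getInstance("BCFKS", "BCFIPS");
    char[] password = "changeit".toCharArray();
    try (InputStream in = Files.newInputStream(Paths.get("jssecacerts"))) {
        ks.load(in, password);
    }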

In the continuing saga of this effort, I just recently completed another set of code changes that ensures that I can just drop in new key and trust stores on a shared file system, and all my servers will automatically reconfigure themselves with the latest and greatest.  This greatly simplifies updating certificates for annual renewals or for other reasons, with zero downtime.  More on that later.

Thursday, June 22, 2023

TLS, FIPS and the Bouncy Castle Certified Encryption Module

Image Courtesy of Wikipedia
History

Becoming educated in a topic seems to offer opportunities to become yet further educated, or in other words, once you've demonstrated expertise in a particular technology, more problems related to it will come your way.  So be careful what you work on.

Many years ago, I had to work out how to implement the IHE ATNA profile.  I spent quite a bit of time on this project and became rather expert at diagnosing TLS problems, and configuring Tomcat to support the IHE Audit Trail and Node Authentication Profile (ATNA).  So much so that I first wrote on my experiences in the IHE ATNA FAQ.

Java has come quite a ways since then.  When the ATNA FAQ was originally written, I think I was using JDK 1.4 or 1.5, which did not have great support even for TLS 1.0.  Now Java has cranked JDK versions past 11 all the way to 21.  Although, for reasons I will explain below, mine only goes to 11 for this post (I do use JDK 17 for other development).

TLS has also come a long way, releasing new versions: first 1.1, then 1.2, and most recently 1.3, and I'm damn near certain there will be a 1.4 and maybe even a 1.5.  Many a system supports TLS.  But often, when working for large corporations or government agencies, you need to go even further, using a NIST Certified FIPS implementation of TLS.  That's one of the problems I've had to work with a team to solve.

Problem Statement

I'm presently embarked upon completely integrating FIPS certified encryption into a Java application that I'm working on (it's already integrated for inbound and outbound communications, this is for other uses).  That application runs on JDK 11 in a Spring Boot 1.5 Java application running inside an Alpine Linux based Docker container, and had already used Bouncy Castle for its crypto activities. Getting all the details right in that environment is a tricky prospect, as I will explain in later posts.  This is just the intro so that folks can understand a bit more about the requirements to be met.

Bouncy Castle

Anyone who does anything with Java and TLS probably is familiar with the Legion of the Bouncy Castle (BC) Crypto libraries.  And if you've been doing Health Information Exchange development work, you are also likely to be aware of NIST FIPS 140-2.  Some of you may even have used BC in FIPS compliant mode (or perhaps had to enable FIPS on your Windows Servers or elsewhere in Java code).  

I haven't seen a lot of attention on using FIPS certified encryption (except in AWS or Azure's Government Cloud environments), and less so in pure Java software implementation. The point of using a certified encryption module is to be sure that the encryption is secure, and the point of having a FIPS compliant mode is to ensure that other insecure encryption capabilities cannot be used.  These are essential requirements in a system where the potential impact to confidentiality, integrity or availability is of at least moderate concern. In healthcare, I believe we'd all agree that it's at least that important when exchanging healthcare data.  And also, there's a federal standard for the term "moderate", as found in the lesser known FIPS 199.

The straight BC libraries, while supporting encryption, aren't NIST Certified Modules.  That's an extra step that requires testing by NIST NVLAP certification laboratories, much like ONC Certification is also performed by accredited laboratories.  Instead, you have to use the FIPS Certified versions of the Bouncy Castle libraries.  These libraries are largely compatible with the non-certified libraries, but are missing some capabilities those libraries have, frankly because those capabilities aren't certifiable.  The straight libraries certainly support encryption, but may use ciphers that aren't considered secure.

The BC FIPS libraries are currently NIST certified for up to JDK-11.  If you look at BC's roadmap for FIPS certified modules you will see that the first BC FIPS releases supporting JDK versions higher than 11 are 1.0.2.4 and 2.0, and those should both support JDK 17.  The 2.0 stream is being submitted through testing under FIPS 140-3, while 1.0.2.4 is tested under FIPS 140-2 requirements.  The first BC FIPS 1.x release to be tested under 140-3 will be 1.0.3.  While FIPS 140-2 is still the minimum requirement for many government agencies (those classified as FIPS Moderate), those agencies will require FIPS 140-3 certified modules in the near future.  After 2026 there won't be any FIPS 140-2 certified products (the certifications expire), and new products are currently being certified under FIPS 140-3.

Bouncy Castle FIPS modules are freely available for download via Maven, but the latest and greatest code bases are only available to support contract holders.

There are other suppliers of NIST Certified encryption modules for Java, but the Legion of the Bouncy Castle is probably the most widely known, and has the broadest use by other respected software providers (e.g., RedHat, Oracle, and many others).  Some Java implementations rely on OpenSSL, another widely known crypto package.  I prefer to stick with pure Java solutions when I can, so OpenSSL is not my favorite option.  It's also difficult to configure in Tomcat (which isn't to say that Bouncy Castle is easy, just not as hard).

Now that I've bored you to tears with the requirements that I have to work with, and the solution that was selected (BC-FIPS), later posts will talk more about the implementation details, so that maybe more Java based Healthcare IT applications will take on this prospect.  It would be nice if someday all encryption in Health IT was done by encryption modules that were rigorously tested and which refused to implement insecure protocols (such as SSL, TLS 1.0 or 1.1).  But until implementing such encryption is a LOT easier in Java (and other) applications, that's going to be a hard row to hoe.

Tuesday, June 20, 2023

My HTI1 comments to @ONC_HealthIT

This is what I just submitted for HTI-1 comments.  It's a text file, not a PDF or Word document with a lovely cover letter.  ONC doesn't need all that.  It's generally ordered in the same way as their comment template, but I chose NOT to comment on a bunch of things, and I didn't label it.  Frankly, that all goes back to my first comment: This rule is so extensive, and covers so much new detail, that the current deadline for submission of comments is simply too short to process the material adequately.

There's a ton of small issues with spelling and grammar.  It's what happens when all I have is 30 minutes to summarize everything I've just spent this 4-day weekend working on when I wasn't BBQ-ing or playing video games or reading cheap Sci-fi novels or something else.

For what it's worth, I've put over 80 hours of thought into reading, commenting on, and getting feedback from others on this particular rule (at least 10 of that this weekend).

------------

Stop producing such large rules every two to three years. Instead, consider adding smaller chunks of optional certification criteria more frequently (e.g., annually) to address specific topics (e.g., Public Health reporting, Scheduling, Subscriptions, et cetera) and give adequate time for implementation.  Make a schedule for updating the rule (every three years), and stick to it, and leave out the kitchen sink.  I love and hate the RFI questions.  It's a good way to get out the vote, as it were, each big rule essentially being a presidential election.  At the same time, it's extra work in an already huge endeavor that could be better served by an annual strategic RFI inquiry.

Please do discontinue year themed editions. The years were always wrongly applied in any case.

Yes, please do use the definition of "Revised Certification criteria" as defined in the proposed rule.

Please do adopt the most current published versions of USCDI, FHIR US Core and C-CDA Companion Guide, but provide a reasonable time period for implementation, no less than two years after publication of a final rule, and preferably 3.

Please use current guidance on Sex Parameters for Clinical Use (rather than the badly named Sex for Clinical Use) and do not treat this as a patient observation. You could literally kill somebody if you mess this up and get it confused with Sex/Gender.  Pay more attention to the parts of this that are truly important, which are those parts outside of Male-Typical and Female-Typical, where these observations need MORE work.

Where the dictionary will do, please stop defining terms to mean something in conflict.

"Provide" already has an adequate dictionary definition; what more do you need?  If there is something extra, please say it.

"Demographics" already means observations about a person useful for classification; why do you need to add "observations" to the name?

A singular fairness measure doesn't exist.  Corbett-Davies and Goel describe 3 mechanisms to ensure fairness, and for one of these, 8 different measures; Berk et al give 7, and Corbett-Davies and Goel already describe area under the ROC Curve.

Others have shown that if effects differ between groups, fairness is not possible to establish.  Read page 69 in the chapter on Fairness in "The Alignment Problem" by Brian Christian.  Instead, report on how fairness was approached, and leave it as text.

There simply isn't a single number here yet, and there likely won't be for some time.

The new DSI criteria raise an issue of anti-competitiveness that ONC should consider carefully. Certified algorithms have a higher regulatory bar. Uncertified algorithms can be used and built on APIs. Providers will use both. Consider carefully how the impact of requiring certified clinical decision support capabilities in a Base EHR (for which providers are incented by CMS) interacts with the need to promote competition. Ensure that this regulation and further ones like it don't create a requirement to generate a one-off feature to meet the criteria, but instead promote and advance clinical decision support use in EHR systems in a fair and competitive way.  In other words, ensure that certified clinical decision support has a perceived benefit other than just checking a box on a feature list for a Base EHR system.

The Predictive decision support definition should include the word clinical:

​Predictive decision support intervention means technology intended to support **clinical** decision-making based on algorithms or models that derive relationships from training or example data and then are used to produce an output or outputs related to, but not limited to, prediction, classification, recommendation, evaluation, or analysis. 

Yeah, NTP is good enough as it is.

Start using industry standard ways of defining SLAs if you are going to start including SLAs such as performance times in a rule.

e.g., 95% of requests are completed in 15 minutes.

Please stop referencing draft content in section 299. Yes, I understand ONC is coordinating 10 or more different moving parts but if I can get it done in time as an unpaid volunteer, then people who are being paid by an ONC contract should be able to get it done in time as well. Otherwise, find other contractors who can. It feels sloppy.

Patient demographics and observations: Leave the title "Patient Demographics".  They are all demographics.  All demographics can be classified as observations about the patient.  The name change does little to add clarity and instead promotes a distinction between classically conceived demographics and novel demographics that really makes the latter second-class citizens in data collection.

DS4P sucks.  It's not a good implementation guide, for CDA or for FHIR.  It does little to explain how to use existing FHIR features to meet an existing need, and it's been primarily driven by the VA and one ambulatory vendor with an add-on product, with little adoption anywhere else.  This work needs a do-over.  The CDA work requires hundreds of lines of XML to do what the V3 RIM intended in one line.  The FHIR work provides NO conformance criteria (profiles on uses of defined terminology on FHIR resources).  There's nothing at all that addresses how to express break glass (essential when security tags are introduced with an exception for emergency care).

You really should look into what is going on with the IHE Patient Consent on FHIR (PCF) profile for future rulemaking.

Josh Mandel is awesome.  I love some of what Argonaut has done. But honestly, Scheduling misses the boat for patient needs in a broad sense, and as currently specified only serves providers or payers with an existing relationship to a consumer needing an appointment. This needs more patient/consumer focused attention.

On RFI inquiries: ONC clearly needs someone to help them develop a plan for strategic adoption of standards. This is part marketing, part industry leadership, part alliance development, and then several parts execution in standards development. ONC focuses well on the latter part of this, but fails to accomplish it elsewhere. Argonaut and Da Vinci initiatives seem to have improved on the former parts, but there are still missing constituencies, especially those focused on patient empowerment. Some of those missing constituencies lack the marketing, industry leadership and standards awareness skills necessary to pull it off. ONC could help here, but the model used by Helios for Public Health is not one that seems to be developing the necessary industry leadership or momentum to drive itself forward without continued ONC and CDC support, and there's no such group as yet for patient empowering initiatives.

I like that the TEFCA manner allows QHINs and QHIN participants to focus on interoperability using nationally recognized standards.  I think it could be more clearly written.

With regard to data segmentation, I've previously developed FHIR APIs for Certified EHR systems that allow data to be restricted at the patient, visit or observation level, limiting access for different users and purposes of use. The first step is to ensure that systems are able to tag the data in certain ways for a limited set of use cases, the second is to ensure that sensitive data associated with a "restricted" visit can be tied back to that visit so that the restriction associated with that visit (e.g., a self-pay visit) can be identified, and then lastly to ensure that only users with specific access (e.g., emergency care, or with access to restricted visits) can access such data, and only when those accesses are requested.  There is no rocket science in this effort, but a lot of due diligence. The key challenges are:

1. Ensuring that the data access layer understands
   a. The associated user access privileges and
   b. The requested purpose of use
2. Only retrieving and returning data that is allowed by 1a and 1b.

Most EHRs do NOT have or drive this capability into product. It's expensive to rework systems that weren't designed with this kind of security at the outset.

I would focus first on the "restricted" visit use case (e.g., self-pay visit).  

1. Define the security flag associated with this visit.
2. Define the access roles and privileges associated with the use of this information.
3. Define the application functions associated with the display of this information to a user in
   a. The EHR
   b. The PHR
   c. Treatment, payment and operations use cases.
   d. Other disclosures.
4. Define the mechanism by which purpose of use is communicated via APIs (e.g., Scope, HTTP Category Header).
5. Define the application functions to support "break the glass" functionality.
6. Define what happens when restricted data is requested but not authorized (e.g., via search).
   a. Can a user know that restricted data exists?
   b. Is this a feature available only to some users but not to others?  (e.g., a provider can know that restricted data exists that will be shown if they have break-the-glass privileges, but others will NOT be shown any indication).

Friday, June 2, 2023

HTI1 Robin's Eggs


For those who've been reading this blog for a decade or more, you probably know what a Robin's Egg is.  For those who don't, click the preceding link.

And while Robin is no longer with us, these eggs live on in her memory.  For those who want their Robin's eggs for HTI-1, you can find them here.

There are two files you can grab: 

  1. An edited version of ONC's 508 Compliant Word document containing the text of the rule.  Most of the reformatting is simply adding headings to the damn thing so that it has a navigable table of contents.
  2. A spreadsheet containing all 36 tables from HTI-1.