
Thursday, March 28, 2019

Back to the Baselines

I've been working quite a bit on mapping V2 messages to FHIR lately.  One of the critical points in V2 conversion is ensuring you run tests against a LOT of data with a lot of variation, especially in the V2 interfacing world.

If you don't test with a lot of data, how can you tell that a fix in one place didn't break the great output you had somewhere else, especially given all the possible different ways to configure a V2 interface?

To do this, you have to establish baselines, and compare your test outputs against your baseline results on a regular basis.  Then, after seeing if the differences matter, you can promote your now "better" outputs as your new baselines.

Automating this process in code makes your life a lot easier.

I like to build frameworks so that I can do something once and then reuse it over and over.  For baseline testing, I decided that I wanted each test case I implemented to be able to store its outputs in folders identifying the test case in the form testClass/testMethod/testInstance.  Those output folders would live under the target/test-output folder.

And baselines would be stored in the src/test/baseline folder, organized in the same way.
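So for a hypothetical test method, the two trees line up something like this (the class, method, and file names here are purely illustrative):

   src/test/baseline/com.example.AdtConverterTest/testAdmit/ADT_A01_01.json    (baseline)
   target/test-output/com.example.AdtConverterTest/testAdmit/ADT_A01_01.json   (latest run)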

Then I wrote a rather small method in the base class of my testing framework that did the following (FileUtils from Apache Commons IO is great for reading and writing the content):

1. Automated the generation of FHIR Resource output as JSON and XML files in the folder structure.
Here's some sample code using HAPI on FHIR to do that:

   xmlOutput = context.newXmlParser().setPrettyPrint(true).encodeResourceToString(b);
   FileUtils.writeStringToFile(new File(fileName + ".xml"), xmlOutput,
      StandardCharsets.UTF_8);


2. Compared the generated outputs to baselines.
   jsonBaseline = FileUtils.readFileToString(new File(baselineFile + ".json"), StandardCharsets.UTF_8);
   assertEquals(jsonBaseline, jsonOutput);
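Wrapped up in the base class, the whole thing amounts to something like this (a sketch; writeAndCompare and testPath are names I've made up for illustration, and b is the resource being converted):

   protected void writeAndCompare(FhirContext context, Bundle b, String testPath)
         throws IOException {
      String fileName = "target/test-output/" + testPath;
      String baselineFile = "src/test/baseline/" + testPath;

      // 1. Generate the FHIR Resource output as pretty-printed JSON and XML.
      String jsonOutput = context.newJsonParser().setPrettyPrint(true).encodeResourceToString(b);
      String xmlOutput = context.newXmlParser().setPrettyPrint(true).encodeResourceToString(b);
      FileUtils.writeStringToFile(new File(fileName + ".json"), jsonOutput, StandardCharsets.UTF_8);
      FileUtils.writeStringToFile(new File(fileName + ".xml"), xmlOutput, StandardCharsets.UTF_8);

      // 2. Compare each output to its baseline.
      assertEquals(FileUtils.readFileToString(new File(baselineFile + ".json"), StandardCharsets.UTF_8), jsonOutput);
      assertEquals(FileUtils.readFileToString(new File(baselineFile + ".xml"), StandardCharsets.UTF_8), xmlOutput);
   }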

And finally, because HAPI on FHIR uses Logback, and Logback provides the Sifting Appender, I was also able to structure my logback.xml to contain a sifting appender that would store separate log files for each test result! The value of this is huge.  Logging is part of your application's contract (at the very least with your service team), so if your log messages change, the application contract has changed.  And if changing a mapping changes the logging output, that should also be comparable and baselined.

The sifting appender depends on keys in the MDC (Mapped Diagnostic Context, basically a thread-specific map of keys to values).  This is where we store the final location of the test log output when the test starts.  My code to start and end a test looks a bit like this:
try {
   start(messageName);
   ... // do the test 
} finally {
   end(messageName);
}

Start is a method that gets the test class and test name from the stack trace as follows:
Throwable t = new Throwable();
StackTraceElement e = t.getStackTrace()[1];
String fileName =
  String.format("%s/%s/%s", 
    e.getClassName(), e.getMethodName(), testName);

This is a useful cheat to partition output files by test class and method, and by the specific test instance being tested by that method (I use a list of files to read; any time I want a new test case, I just drop the file into a test folder).
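So start amounts to little more than computing that name and handing it to the MDC (a sketch; "testfile" matches the discriminator key in the appender configuration shown below):

   protected void start(String testName) {
      // Element [0] is this method; element [1] is the test method that called it.
      Throwable t = new Throwable();
      StackTraceElement e = t.getStackTrace()[1];
      String fileName = String.format("%s/%s/%s",
         e.getClassName(), e.getMethodName(), testName);
      // The sifting appender reads this key to pick the log file for this test.
      MDC.put("testfile", fileName);
   }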

End is a little bit more complex, because it has to wrap some things up, including log comparisons after everything else is done.  I'll touch on that later.

It's important in log baselining to keep any notion of time or date out of your logging, so set your logging patterns accordingly.  I use this:
[%-5level] [%t] %c{1} - %msg%n%xEx

While my normal pattern contains:
[%-5level] %d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} [%t] %c{1} - %msg%n%xEx
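One way to wire this up is to define both patterns as properties in logback.xml so the appenders can reference them by name (a sketch; "normalPattern" is just my name for the second one):

   <property name="timelessPattern"
      value="[%-5level] [%t] %c{1} - %msg%n%xEx" />
   <property name="normalPattern"
      value="[%-5level] %d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} [%t] %c{1} - %msg%n%xEx" />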

My appender configuration looks something like this:

<Appender name="testing" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>testfile</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${testfile}" class="ch.qos.logback.core.FileAppender">
        <file>./target/test-output/${testfile}.log</file>
        <append>false</append>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <pattern>${timelessPattern}</pattern>
        </layout>
      </appender>
    </sift>
</appender>

The details of log file comparison are a bit finicky, because you don't want to actually perform the comparison until the end of the test, and you want to make sure the logger has finished up with the file before you compare things.  After some code inspection, I determined that logback presumes it can dispose of the log after 10 seconds.

So, end looks something like this:
protected void end(String testName) {
   boolean compare = "true".equals(MDC.get("compareLogs"));
   // Tell logback the logging session for this test is over.
   LOGGER.info(FINALIZE_SESSION_MARKER, "Test completed");
   MDC.put("testfile", "unknown");

   if (compare) {
      try {
         // Wait for logback to finalize the log; it hangs on to
         // the file for 10 seconds after the session is finalized.
         Thread.sleep(10 * 1000 + 100);
      } catch (InterruptedException e) {
         // Restore the interrupt flag rather than swallow it.
         Thread.currentThread().interrupt();
      }
      // Find and compare the files, and assert if they don't match.
   }
}
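The comparison itself is the same dance as the resource comparison, just pointed at the log files (a sketch; fileName is the same testClass/testMethod/testInstance path computed in start, and the baseline path mirrors the output path from the appender configuration above):

   String logOutput = FileUtils.readFileToString(
      new File("./target/test-output/" + fileName + ".log"), StandardCharsets.UTF_8);
   String logBaseline = FileUtils.readFileToString(
      new File("src/test/baseline/" + fileName + ".log"), StandardCharsets.UTF_8);
   assertEquals(logBaseline, logOutput);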

One other thing that I had to worry about was the fact that I use UUID.randomUUID().toString() in various places in my code to generate UUIDs for things that were being created.  I just replaced those calls with access to a Supplier<String> that was part of the conversion context, so that I could replace it with something that had known behaviors for testing.
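In sketch form (the context and setter are names I've made up; the real point is that tests can swap in a deterministic supplier):

   // Production default: random UUIDs, same behavior as before.
   Supplier<String> idGenerator = () -> UUID.randomUUID().toString();

   // Test setup: a predictable sequence, so baseline comparisons stay stable.
   AtomicInteger counter = new AtomicInteger();
   conversionContext.setIdGenerator(() -> String.format("test-uuid-%04d", counter.incrementAndGet()));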

One last thing: if you build on both Windows and Unix, be sure that your file comparisons aren't sensitive to line-ending format.  One way to address that is to replace \r\n with \n throughout after reading the strings from a file.  You might also find that UTF-8 / Windows Latin 1 characters are problematic depending on the character set your logging code assumes.  I generally stick with UTF-8 for all my work, but you never know about software you don't control.
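A tiny helper applied to both sides of every comparison takes care of that:

   // Normalize line endings so Windows and Unix builds compare equal.
   private String normalize(String s) {
      return s.replace("\r\n", "\n");
   }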

   Keith

P.S. Yes, I do sing bass.

Experts don't always make the best teachers

To be an expert is different from being a teacher.  To be an expert one must amass a great deal of experience in a field.  This allows you to solve complex problems ... standards-based interoperability for example.

To be a teacher is a different mind-set.  Not only must you remember all the amassed experience, but you must also forget it ... or at least remember what it was like when you didn't know the answers, and if you are really good, the moment at which you finally got it, and then be able to convey that to others.

It's taken me ten years and more to become an expert at interoperability, and while I can claim some skill at teaching, I'm far from expert at it.  As I age, it becomes more difficult for me to remember what it was like to not know something.

Experts are often called upon to train others.  We must remember that what is simple for us is not so simple for others without our experience.  And that is the critical piece of self-awareness that we have to learn to develop ... to recognize that there's a certain skill we had to develop, or a piece of knowledge we had to slot into place in our minds, before we could accomplish the "simple" task.

   Keith

Tuesday, March 19, 2019

When the NoBlocking regulation is more complex than software

... it's time to apply software tooling.

So I went through various definitions in the Information Blocking rule and made a UML diagram.  The value of this became immediately apparent to me when I was able to see, for example, that Interoperability Element, Health IT Module, and API Technology were somewhat broken.  API Technology is certainly a Health IT Module, and should be defined in terms of that definition.

It also shows the various relationships associated with actors.  As I go through the rule, I imagine there will be other relationships that I can infer from the regulatory text (e.g., fees charged to actors by other actors).

You can see the results below, and more importantly, you can get the source.


Entities (people, organizations, and things) are classes.  Things that can be done (verbs) are represented as interfaces.  The SVG representation links back to the regulatory text, and has mouse-overs citing the source of the link or artifact.

   Keith

Tuesday, March 12, 2019

How to File a HIPAA Privacy Complaint

I've been seeing a lot of tweets recently complaining about misuse of HIPAA (about a half-dozen), mostly from people who know better than doctors what the regulations and legislation actually say.
I tweet back, sometimes cc: @HHSOCR.  The volume's grown enough that I thought it worthwhile to write a post about it.

If your health care provider or insurer refuses to e-mail you your data, refuses to talk with you over the phone about your health data, or makes it difficult for you, there's someone who will listen to your complaint and will maybe even take action.  The HHS Office for Civil Rights is responsible for investigating complaints about violations of HIPAA.  They don't make the form easy to find (because frankly, they do have limited resources, and do need to filter out stuff that they cannot address), but they do support online complaint filing, and you can get to it online here (I've shortcut some of the filtration steps for you; if you've found this blog post, you probably meet the filter criteria).

Another way to complain is to write a letter.  I know it's old fashioned, but you can do it.  My 8-year-old daughter once wrote a letter to a HIPAA privacy officer.  You don't need to know their name, just the address of the facility, and address it to the HIPAA Privacy Officer.  It'll definitely get someone's attention.  And who knows, you just might change the behavior of the practice (my daughter's letter got the practice to change a form used to report on a visit so that it would be clearer for patients).

I've mentioned before that under the HIPAA Omnibus regulations, in combination with recent certification requirements, providers shouldn't be able to give the excuse that they are not allowed (under HIPAA) to e-mail, or haven't set up the capability to e-mail you your health data.  Those two statements are likely to be false ... but most providers don't know that (if you are reading this blog, you are probably among the exceptions).

I'd love it if HHS OCR provided a simple service that made it possible for patients to report HIPAA nuisance behavior that would a) send the provider a nastygram addressed to the HIPAA Privacy Officer at the institution with an official HHS logo on the front cover, b) track the number of these sent to providers based on patient reports, c) publicly report the number of nastygrams served to institutions when it reached a certain limit within a year, d) do a more formal investigation when the number gets over a threshold, and e) tell them all that in short declarative statements:

e.g.,


To whom it may concern,

On (date) a patient reported that (name) or one of their staff informed them incorrectly about HIPAA limitations.

The patient was informed that:
[ ] Healthcare data cannot be e-mailed to them.
[ ] Healthcare data cannot be faxed to them.
[ ] Healthcare data cannot be sent to a third party they designate.
... (a bunch of check boxes)

Please see HHS Circular (number) regarding your responsibilities regarding patient privacy rights.

Things you are allowed to do:
... (another laundry list).

This is the (number)th complaint this year this office has received about your organization.  After (x) complaints in a year, your organization will be reported on http://www.hhs.gov/List-Of-Privacy-Nuisance-Violators.html.  After (y) complaints total, your organization will be investigated and audited.

Sincerely,


Somebody with an Ominous-Sounding Title (e.g., Chief Investigator)
/s/




I'd also love it if HHS would require the contact information for the privacy officer be placed on every stupid HIPAA acknowledgement form I've been "required" to sign (acknowledging I've been given the HIPAA notice ... which inevitably I refuse to sign until I get it), and on every HIPAA notice form I'm given.  Because I'd fricken use it. 

I could go on for quite some time about the pharmacy that couldn't find their HIPAA notice for ten minutes and refused to give me my prescription because I refused to sign the signature pad until they did so, only for them to finally discover that if they'd just given me the prescription, I would see it written on the back of the information form they give out with every medication ... but they didn't have a clue until someone made a phone call.  And of course they claimed I had to sign because "HIPAA" (which says no such thing).

I'd also love it if HHS authorized some sort of "secret healthcare shopper" that registered for random healthcare visits and audited the HIPAA components of a provider's intake processes for improvements (e.g., the HIPAA form in 6-point type at an eye doctor's office is one of my favorite stories; that's a potential violation of both HIPAA and disability regulations).  What the hell, make the payers actually be the ones responsible for doing it with some percentage of their contracted provider organizations, and report the results to HHS on a periodic basis.

I think this would allow us (patients) to fight back with nuisances of our own, which could eventually have teeth if made widely available and known to patients.  I'm sorry I didn't think to put this in with my recent HIPAA RFI comments.  Oh well, perhaps another day.  In fact, since there was an RFI, there will be an NPRM, so these comments could be made there, and who knows, perhaps someone will even act on them.  I've had some success with regulatory comments before.

   Keith

Monday, March 11, 2019

The Phases of Standards Adoption

I was conversing with my prof. about standards on FB the other day, and made an offhand remark about him demonstrating that FHIR is at level 4 in my seven levels of standards adoption.  It was an off-the-cuff remark based on certain intuitions I've developed over the years regarding standards.  So I thought it worthwhile to specify what the levels are, and what they mean.

Before I go there, I want to mention a few other related metrics as they apply to standards: the Gartner Hype Cycle (with its Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity), Grahame Grieve's 3 Legs of Health Information Standards, and my own 11 Levels of Interoperability (which is really only 7).  There's a rough correspondence here, as shown in the table below.

| Phase | Description | Hype Cycle | Grahame's 3 Legs | 11 Levels of Interoperability | Time (y) |
|---|---|---|---|---|---|
| -1 Struggling | At this stage, not only does a standard not exist, but even awareness that there is a problem that it might solve is lacking. | | | 0 Absent | |
| 0 Aspiring | We've identified a problem that standards might help solve and are working to solve it. | Trigger | 1 | 1 Aspirational | 1-4 |
| 1 Testing | The specifications exist, and are being tested. | Peak | 1 & 2 | 2 Defined | ½-1 |
| 2 Implementing | Working prototypes have been tested and commercial implementations are being developed. | | 2 & 3 | 3 Implementable | ½-1½ |
| 3 Deploying | Implementations are commercially available and can be used by end users. | Trough | 2 & 3 | 4 Available | 1 |
| 4 Using | Commercially available implementations are being used by real people in the real world. | Slope | 3 | 5 Useful | 2-3 |
| 5 Refining | The standard, and its implementations and deployments, are being refined. | Plateau | 3 | 6-10 (not named) | 2-4 |
| | People are happy with the implementations, and should the question arise about what standard to use, the answer is obvious. | | | 11 Delightful | ? |

How are my seven levels of standards adoption any different from the 11 levels of interoperability?  Not by much, really.  What's different here is that I've given phases instead of milestones.

This is important because each phase occurs over time, is entered into by different kinds of stakeholders according to a technology adoption lifecycle, and can have innovators, early adopters, majority adopters, and laggards within it.

Time is interesting to consider here, because standards and technology have a sort of quantum nature.  A standard can exist in several of the phases described above at once, with different degrees of progress in each phase; the only real stipulation is that you cannot be further along in a later phase than you are in an earlier one.

If entry to and exit from each phase were gated on completion of the phase before, the timeline for reaching the refining stage would be about 5 years, but generally one can reach the starting point of the next phase by starting 3 to 6 months after the start of the previous phase.  You may have more work to do to hit a moving target, but you'll wind up with a much faster time to market.

As Grahame points out, getting to the end of the cycle requires much more time in the market driving stage of his three-legged race than it does in the initial parts of it. 

Anytime I've done serious work on interoperability programs, I'm always working on 2-3 related projects in a complete program, because that's the only way to win the race.  You've got to have at least one leg in each place of Grahame's journey.  Otherwise, you'll reach a point of being done, and simply expect someone else to grab the flag and continue on without you.


Tuesday, March 5, 2019

Whose Interoperability Problem is this?

Is this the challenge of an EHR vendor?  Or of a medical practice working with other medical practices that insist on sending faxes and paper copies, perhaps because they don't have a way to send these to the receiving practice over a computer network using digital communication standards such as Direct or IHE Cross-Enterprise Document Sharing?

Yes, we need more inter-connected medical practices.  But is that due to the lack of available interoperability options or the lack of desire to implement them, and if the latter, why is that the case?

Yes, this is an interoperability problem, but here we have a question related to workflow:

Workflow related to implementation.
Workflow related to changing the behavior of others in your referral network.
Workflow related to changing your own behavior.

If this practice isn't acceptable, why would you continue to accept it?

Problems like the one Danny illustrated quite well above aren't necessarily due to a lack of technology (or standards, or interoperability) to solve them.  Sometimes they are simply because the right person hasn't asked the right questions.

Some thoughtful questions to ask:

  1. What other ways could this be done?
  2. Why can't we do it another way?
  3. How much does it cost to do it the way we are doing now?
  4. What might it cost to do it a different way?
  5. What could we do with the savings?


   Keith

Friday, March 1, 2019

AllOfUs

Today I scheduled my intake appointment as a participant in the AllOfUs program.  My PCP is the PI for their efforts with AllOfUs in the group practice that I use in central Massachusetts, and so I signed up to participate this morning.

It took me about 15 minutes to sign up.  The consent process was very well done, and very well written, in quite understandable language.  I'd guess the reading level of the content was around 6th-7th grade, yet it was also a highly accurate representation of what the program is doing, which takes quite a bit of work if you've ever had to do that sort of writing.

The surveys took me another 10 minutes to complete and were especially easy since I'd already seen them having read through the protocol previously.

What surprised me was getting a call from my practice to schedule the appointment, but my sense is, they are already very engaged in this effort (I was to have participated as a patient representative in their outreach program, but was unable to attend the initial meeting due to battery problems with my motorcycle).  That was cool, and took about 5 minutes.

I'm looking forward to seeing how the program operates from the patient perspective, especially since some of the standards work I'm engaged in now can help refine it from the research perspective later.

   Keith