I've been working quite a bit on mapping V2 messages to FHIR lately. One of the key lessons in V2 conversion is that you need to run tests against a LOT of data with a lot of variation, because the V2 interfacing world gives you plenty of both.
If you don't test with a lot of data, how can you tell that a fix in one place didn't break the great output you had somewhere else, especially given all the different ways a V2 interface can be configured?
To do this, you have to establish baselines, and compare your test outputs against your baseline results on a regular basis. Then, after seeing if the differences matter, you can promote your now "better" outputs as your new baselines.
Automating this process in code makes your life a lot easier.
I like to build frameworks so that I can do something once and then reuse it over and over. For baseline testing, I decided that I wanted each test case I implemented to store its outputs in folders identifying the test case in the form testClass/testMethod/testInstance. Those output folders would live under the target/test-output folder.
And baselines would be stored in the src/test/baseline folder, organized in the same way.
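For example, for a single test instance the two locations might be computed like this (a sketch; testClass, testMethod, and testInstance are just illustrative names):

// Matching output and baseline locations for one test instance.
String relativePath = String.format("%s/%s/%s", testClass, testMethod, testInstance);
File outputFile = new File("target/test-output", relativePath);
File baselineFile = new File("src/test/baseline", relativePath);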
Then I wrote a rather small method in the base class of my testing framework that did the following (FileUtils from Apache Commons IO is great for reading and writing the content):
1. Automated the generation of FHIR resource output as JSON and XML files in the folder structure.
Here's some sample code using HAPI FHIR to do that (b is the resource being written):
// Render the resource as pretty-printed XML and write it alongside the other test outputs.
xmlOutput = context.newXmlParser().setPrettyPrint(true).encodeResourceToString(b);
FileUtils.writeStringToFile(new File(fileName + ".xml"), xmlOutput, StandardCharsets.UTF_8);
2. Compared the generated outputs to baselines.
// Read the baseline and compare it to the output just generated.
jsonBaseline = FileUtils.readFileToString(new File(baselineFile + ".json"), StandardCharsets.UTF_8);
assertEquals(jsonBaseline, jsonOutput);
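Putting both steps together, the helper in the base class ends up looking roughly like this. This is a sketch under my own naming assumptions (writeAndCompare isn't the real method name, b is the FHIR resource produced by the conversion, fileName and baselineFile follow the folder layout described above, and the usual Apache Commons IO, HAPI FHIR, and JUnit imports are assumed):

// Sketch: write JSON and XML renderings of the converted resource to
// target/test-output, then compare each against its baseline.
protected void writeAndCompare(FhirContext context, IBaseResource b,
        String fileName, String baselineFile) throws IOException {
    String jsonOutput = context.newJsonParser().setPrettyPrint(true).encodeResourceToString(b);
    String xmlOutput = context.newXmlParser().setPrettyPrint(true).encodeResourceToString(b);

    FileUtils.writeStringToFile(new File(fileName + ".json"), jsonOutput, StandardCharsets.UTF_8);
    FileUtils.writeStringToFile(new File(fileName + ".xml"), xmlOutput, StandardCharsets.UTF_8);

    String jsonBaseline = FileUtils.readFileToString(new File(baselineFile + ".json"), StandardCharsets.UTF_8);
    String xmlBaseline = FileUtils.readFileToString(new File(baselineFile + ".xml"), StandardCharsets.UTF_8);

    assertEquals(jsonBaseline, jsonOutput);
    assertEquals(xmlBaseline, xmlOutput);
}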
And finally, because HAPI FHIR uses Logback, and Logback provides the SiftingAppender, I was also able to structure my logback.xml to contain a sifting appender that stores a separate log file for each test result! The value of this is huge. Logging is part of your application's contract (at the very least with your service team), so if your log messages change, the application contract has changed. So if changing a mapping changes the logging output, that should also be comparable and baselined.
The sifting appender depends on keys in the Mapped Diagnostic Context (MDC, basically a thread-specific map of keys to values). This is where we store the final location of the test log output when the test starts. My code to start and end a test looks a bit like this:
try {
    start(messageName);
    ... // do the test
} finally {
    end(messageName);
}
Start is a method that gets the test class and method name from the stack trace as follows:
Throwable t = new Throwable();
StackTraceElement e = t.getStackTrace()[1];
String fileName =
    String.format("%s/%s/%s",
        e.getClassName(), e.getMethodName(), testName);
This is a useful cheat to partition output files by test class, test method, and the specific test instance being run by that method (I use a list of files to read; any time I want a new test case, I just drop the file into a test folder).
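Putting that together with the MDC key the sifting appender discriminates on (shown in the configuration below), start() ends up looking roughly like this. It's a sketch; MDC here is org.slf4j.MDC, and whether compareLogs gets set here or somewhere else is up to you:

// Sketch of start(): derive the output path from the calling test's class
// and method, then tell the sifting appender (via the MDC) where to log.
protected void start(String testName) {
    Throwable t = new Throwable();
    StackTraceElement e = t.getStackTrace()[1];  // [0] is start() itself, [1] is the caller
    String fileName = String.format("%s/%s/%s",
        e.getClassName(), e.getMethodName(), testName);
    MDC.put("testfile", fileName);   // picked up by the sifting appender's discriminator
    MDC.put("compareLogs", "true");  // checked later in end()
}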
End is a little bit more complex, because it has to wrap some things up, including log comparisons after everything else is done. I'll touch on that later.
It's important in log baselining to keep any notion of time or date out of your logging, so set your logging patterns accordingly. I use this:
[%-5level] [%t] %c{1} - %msg%n%xEx
While my normal pattern contains:
[%-5level] %d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} [%t] %c{1} - %msg%n%xEx
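The timeless pattern is what the test appender uses; I expose both as properties in logback.xml so the ${timelessPattern} reference in the configuration below resolves. Something like this (a sketch; normalPattern is just my own name for the second property):

<property name="timelessPattern" value="[%-5level] [%t] %c{1} - %msg%n%xEx"/>
<property name="normalPattern" value="[%-5level] %d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} [%t] %c{1} - %msg%n%xEx"/>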
My appender configuration looks something like this:
<appender name="testing" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>testfile</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <appender name="FILE-${testfile}" class="ch.qos.logback.core.FileAppender">
      <file>./target/test-output/${testfile}.log</file>
      <append>false</append>
      <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>${timelessPattern}</pattern>
      </layout>
    </appender>
  </sift>
</appender>
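The appender still has to be attached to a logger for anything to reach it; in a test configuration the simplest thing is to reference it from the root logger (a sketch, with the level left to your taste):

<root level="INFO">
  <appender-ref ref="testing"/>
</root>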
The details of log file comparison are a bit finicky, because you don't want to actually perform the comparison until the end of the test, and you want to make sure the logger has finished with the file before you compare things. After some code inspection, I've determined that Logback presumes it can dispose of the log after 10 seconds.
So, end looks something like this:
protected void end(String testName) {
    boolean compare = "true".equals(MDC.get("compareLogs"));
    // Tell the sifting appender this test's logging session is done.
    LOGGER.info(FINALIZE_SESSION_MARKER, "Test completed");
    MDC.put("testfile", "unknown");
    if (compare) {
        try {
            // Wait for the log to be finalized.
            Thread.sleep(10 * 1000 + 100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Find and compare the files and assert if they don't match.
    }
}
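The comparison elided in that last comment can be as simple as reading both logs and asserting equality, as long as you normalize line endings first (more on that below). A sketch, assuming fileName is the same testClass/testMethod/testInstance path computed in start():

// Sketch: compare the captured test log against its baseline,
// normalizing line endings so Windows and Unix builds agree.
String logOutput = FileUtils.readFileToString(
        new File("target/test-output/" + fileName + ".log"), StandardCharsets.UTF_8)
        .replace("\r\n", "\n");
String logBaseline = FileUtils.readFileToString(
        new File("src/test/baseline/" + fileName + ".log"), StandardCharsets.UTF_8)
        .replace("\r\n", "\n");
assertEquals(logBaseline, logOutput);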
One other thing I had to worry about was that I use UUID.randomUUID().toString() in various places in my code to generate UUIDs for things being created. I replaced those calls with access to a Supplier<String> that is part of the conversion context, so that I could swap in something with known behavior for testing.
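As a sketch (ConversionContext and nextId are names I'm making up for illustration, using java.util.function.Supplier), the idea looks like this:

// Sketch: the conversion context owns the ID generator, defaulting to random
// UUIDs in production but replaceable with something predictable in tests.
public class ConversionContext {
    private Supplier<String> idGenerator = () -> UUID.randomUUID().toString();

    public String nextId() {
        return idGenerator.get();
    }

    public void setIdGenerator(Supplier<String> idGenerator) {
        this.idGenerator = idGenerator;
    }
}

// In a test, install a deterministic supplier so generated IDs are stable:
AtomicInteger counter = new AtomicInteger();
context.setIdGenerator(() -> String.format("test-uuid-%04d", counter.incrementAndGet()));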
One last thing: if you build on both Windows and Unix, be sure your file comparisons aren't sensitive to line-ending format. One way to address that is to replace \r\n with \n after reading the strings from a file. You might also find that UTF-8 vs. Windows Latin-1 characters are problematic, depending on the character set your logging code assumes. I generally stick with UTF-8 for all my work, but you never know about software you don't control.
Keith
P.S. Yes, I do sing bass.