Friday, January 30, 2015

Thursday and Friday at CAT15

Thursday is when the rubber meets the road.  By Thursday night, you've passed all the easy stuff, but if you are like the people sitting around me, you are trying to make sure that you can get that one last test started.  Because if you don't start it by Thursday to get it into the queue, you'll be dropping that profile (it's a Connectathon policy so that monitors aren't swamped with profile tests that won't be worth checking on Friday).

A lot of new code gets written on Thursday as the key priorities have all been hit, and people are reaching for their stretch goals.

One of the things that is new at Connectathon as of 2015 is how many people have already finished the hard stuff.  Many vendors have already finished their XDS, XCA and ATNA tests.  CDA viewing is pretty much done.  One profile (Family Planning) had all vendors finished with it by Wednesday, including a couple of late adds from the Connectathon floor.

I think there are two factors contributing to this.  One is that the testing infrastructure is improving year over year, which makes a lot of the testing go quicker.  Gazelle has improved to the point that we experienced only a few capacity problems over the week.  There is still room for improvement here, and also for automating some of the verification processes, but compared to where we were ten years ago, this is monumental change.  The network is no longer spelled with two o's, and even though we had a bobble with a switch at our table, it was readily resolved.

The second change is the vendor community's interest in interoperability.  I see a lot of newbies here this year, which would have led me to expect more people struggling.  But there are also a lot of people here who know the ropes and have helped them along.  And the volume of information that is available about how to implement the profiles, and the public domain implementations (many of which have been tested here over several years), also make it a lot easier.

Some folks still tell me that Connectathon won't scale, but in fact it has.  The model has also been adopted by a number of regional projects for preparing for real world implementations (we call these projectathons).  It's like the old concern about the cable network being used for Internet access. Detractors said the cable infrastructure wouldn't scale either, but it certainly appears to have done the job.  Connectathons will continue to be an important interoperability event, simply because there's nothing else like it where you can get so much done in so little time.

What takes weeks in the field, we do in hours here.

     Keith

Thursday, January 29, 2015

Wednesday at CAT15

By Wednesday you should have talked to a Connectathon monitor at least a few times.  Let me tell you about these folks, many of whom are close friends.

Almost all of the people who act as Connectathon monitors are volunteers.  Most of them aren't getting paid this week to be here.  I know a guy who takes his vacation time to do this every year because he has fun with it.  A lot of my consultant friends show up here instead of billing hours this week.  There are only a few whose day job qualifies them to be monitors who also get paid by their employers to show up (that has to do with IHE rules about vendor neutrality).  And all of them I know would do it for the love of the work anyway, and are still giving up time with friends and family to be here.

The monitors are here to help you pass.  If you succeed, they succeed.  If they fail one of the tests you submitted, it is almost certainly because you or one of your testing partners missed something in the specifications. If you start with that assumption, you will be right most of the time. I'd bet that 95% of the problems discovered on the Connectathon floor could be readily fixed if everyone implemented one process: RTFM. Read everything that you can, the specifications, the test plan, and the daily announcements. Check the FAQ pages on the IHE Wiki.

And if you have read what you can, and the spec or the test still isn't clear, start by asking others questions.  Talk to others who may know more than you do.  There are plenty of people on the test floor who've been around for a few years.  If you and your test partners cannot agree on how the spec reads, read it again, and ask around.  If you still can't resolve it, it's time to ask a monitor for their interpretation.

Be prepared to wait; they may have a queue of people in front of you, but they will follow up, and they will offer good advice.  I find it is better to ask questions rather than complaining.  It gets you a lot further.  For example, ask where you are in the validation queue, rather than complaining that your tests aren't being validated.  Ask who could help, rather than assuming they have the time to walk you through it (because they don't).  Ask who might be good to test with; they usually have good advice here (they know who does good work).

When there is a question about interpretation, start with explaining that you are confused about what something in the spec means, and be prepared to show them the line in question. Show that you've done your homework; they will appreciate it. You might also ask them for advice about who to talk to for information on the profile that you are having challenges with if you cannot find someone yourself. Many of the profile editors are on the Connectathon floor including some monitors.

Connectathon is the ultimate stress environment; we are often called upon here to do in hours what we are usually given days or weeks to accomplish.  It's the perfect environment to practice how you'd work with a customer in a stressful situation.  A guaranteed way to fail at Connectathon is to be disagreeable to the monitors.  It's not that they'll fail you just because you are being a pain (because they won't); it's that by being agreeable, you might be able to get that extra minute of their time or guidance that will help you pass.

If this is your first Connectathon, I guarantee that by the time you are done, you will have made friends with several of the monitors.  When you make your plans to come back next year, remember your friends; they'll be here again too.

Tuesday, January 27, 2015

Tuesday at CAT15

Tuesday at Connectathon is where real interoperability starts to happen.  Monday is about figuring out your network connections, getting set up, and starting to execute on your no-peer tests.  On Tuesday you really start working with peer systems.

A couple of things could happen today which would impact your work.

You might discover that a peer you have been working with abandoned a test you had started with them to work on another problem.  This is a persistent problem at Connectathon, because much of the work is interrupt driven, and your stack (or their stack) can often overflow.  It's a good idea to keep a list of things you haven't finished, and check up on the peers you are working with to ensure that you and they are still on track to finish the test.  Never expect someone else to complete a test without checking on it.  It's a mutual responsibility.

The only way to fail (or succeed) fast, is to get the work checked out fast.  Don't forget to mark any tests you've completed as being ready for validation.  I can't tell you how many times I've heard someone groan because they discovered hours or even days after finishing a test that it didn't get validated because nobody marked it as being ready for that step.  The validation queues get longer as the week goes on, so finish that step as soon as you can.  Keep working after the monitors go home if you need to, but get that in the queue today if at all possible.

Next, you may very well discover a blocking problem.  This usually happens when you fail your first or second verification attempt on a profile.  You need to quickly analyze those failed verifications.  I've seen teams that learned on Thursday or Friday that a test they had completed failed validation, and then there's a mad scramble to fix whatever the problem is in time to get the missing test validated.  So remember to keep track of the tests you've done and make sure they are getting validated.  If you wait until Thursday, it could be too late.

When fixing a blocking problem, you will often stop work on tests that won't be valid.  Abort them so you don't have to wade through the clutter to find the results you really need.  And tell your test partners why you are doing that.  And for those tests you simply put on hold, make sure to go back and complete them.

You may discover today that something needs to be rebuilt (and pray that it is today rather than tomorrow). If you don't have the code and compiler locally, make sure that your teams back home have a way to get you an update quickly. Don't count on the Connectathon network to enable you to access a 300Mb download quickly and easily.  Consider what is happening on this network - with hundreds of engineers testing messages. Sometimes a screen refresh to an external website can take several minutes, even for a 140 byte tweet.  FedEx might just be faster...

Late this afternoon, you will probably discover your first ATNA bug in your new HTTP or other stack.  Check the ATNA FAQ for some of the best advice gathered by myself and others over the past decade.  You should already have Wireshark installed.



Monday at CAT15

Yesterday at the IHE Connectathon I got pulled in to help a developer who had been pulled in at the last minute to replace someone else who couldn't make it.  Since he was unfamiliar with the tools and the process, I showed him how to make a plan to succeed.

The first thing you do for your product is prioritize your work.  Since  profile actors embody the chunks of functionality being tested, they become what you prioritize.  Some profile actors are critical because they are going into shipping product, or could be of use at several customer sites, or will be demonstrated this year at the HIMSS Interoperability showcase. These have to get tested.

Some profile actors are nice to have.  These might be used in future releases, or could be important to a single customer, or you are thinking about demonstrating this profile actor at HIMSS, but haven't committed to that yet.  The nice to have stuff should get tested, but not if it prevents the critical things from getting done.

Finally, there are your stretch goals, or what I call gravy testing.  These are the profile actors that got added to your plate last week by your product manager to support some hare-brained scheme you just heard about, or the ones you found yourself signed up to test because some attractive showcase leader walked up and asked if you support ___, and you said anything less convincing and firm than no about a profile you first heard of five minutes ago.  The important thing about these is to remember NOT to let somebody else's priorities become yours simply because you haven't assigned any of your own.

Once you've assigned priorities for actors, you can prioritize tests.  If you are doing anything requiring Consistent Time (CT) or Audit Trail and Node Authentication (ATNA), do your CT test first.  Then you should look for tests you could have prepared for before you got here (e.g., generating a CDA document to upload for verification).  The monitors expect these first, and it's always good to be nice to the monitors.  Ideally, this is on a stick, and you just upload your samples (you can sometimes do this before you even get the system you are testing unboxed).

Do your tests basically in this order:  No Peer Tests first, Peer to Peer tests second, workflow tests (supporting multiple peers) last.  Don't worry if you take some out of order.

Things not to do: Do not do options testing on a profile actor before nice to have tests are done unless the option is truly critical.  Options fit into the gravy.  The only exception to this is when multiple options are provided in a profile, and at least one must be supported.  Fine, pick your one and call that critical (if the profile actor is critical) or nice to have.

Also in the gravy category are tests with your own products.  These might also fit into the nice to have category for you, but the reality is, you shouldn't need IHE to set up a Connectathon for you to test with your own products.  Do it if you can, but only after you are safely past the critical and nice to have stuff.

After you've made your plan, you have one more job to do.  Go through the tests which are critical and find out who your potential partners can be.  Get their table numbers.  Now, go walk the floor (after you've completed your CT test and uploaded your samples), and introduce yourself to them. Let them know you'll be working together this week.

Ok, so that's Monday (a day late).  I'll try to get Tuesday done just a bit earlier.  What can I say, I had a critical issue to address yesterday, my blog is just gravy.


Thursday, January 22, 2015

IHE and HL7 create joint Healthcare Standards Integration Workgroup

This afternoon the first official meeting of the Healthcare Standards Integration workgroup was convened at the HL7 Working Group Meeting.  A roomful of nearly 30 people met to review feedback on FHIR resources that had already been jointly developed by IHE and HL7.

This is a very significant event in the collaboration between HL7 and IHE.  I've been (for the past 5 years or so) the HL7-appointed liaison to IHE, and had struggled to get the two organizations to coordinate their activities better.  Others in HL7 and IHE had expressed similar frustrations.  About 6 months ago this came to a head, and IHE offered HL7 an opportunity to coordinate, and proposed some ways in which we could work together more closely.

The new work group is presently addressing some of its growing backlog of FHIR ballot comments, and will shortly begin working on developing their mission and charter.

I very much look forward to this new mechanism for collaboration between IHE and HL7, as it gives us a way to connect formally, rather than through the various informal mechanisms that we have tried in the past.

We have already significantly reduced the backlog of FHIR issues.

I'm sure you'll see more details on the official press releases coming from IHE and HL7, but those won't be coming out for a few days at the very least.

Wednesday, January 21, 2015

Finding a Resource matching a value set in FHIR

Cecil Lynch posed this challenge to me in the bar, and of course it being late, and my birthday (which meant I wasn't paying for beers today), I couldn't find the answer immediately.  But I posed the following question to the FHIR Implementers list on Skype:
"I seem to recall a discussion here earlier where someone described how you could issue a query to request conditions that matched a value set expansion.  Anyone remember what that looked like?"
Lloyd McKenzie had the answer not three minutes later (just after Cecil left the bar).
"Search for ":in" here: http://hl7-fhir.github.io/search.html
(also ":not-in") "
I had promised Cecil a blog post on the answer to his question here, so here it is.

His question: How can I find a FHIR Observation resource where Patient = John Smith, and Observation.code = Culture, and Observation.value is in a Value Set describing all codes for Salmonella.

If you read the recommended link and find the second occurrence of :in on the page it says:

in: the search parameter is a URI (relative or absolute) that identifies a value set, and the search parameter tests whether the coding is in the specified value set.  The reference may be literal (to an address where the value set can be found) or logical (a reference to ValueSet.identifier).

So, the query is:

GET [base]/Observation?subject.name=John+Smith&name=System|Culture&value:in=ValueSetReference
And what this means is: find me the observations whose subject has the name John Smith, where the type of observation is Culture from some code system identified by System, and whose value appears in the Value Set resource that can be retrieved from ValueSetReference, where in this particular case, the Value Set resource would somehow describe all codes for Salmonella.
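
To make that concrete, here is a minimal sketch of issuing the same search from Python using the requests library.  The base URL, the System|Culture coding, and the value set reference are all hypothetical placeholders, and the exact search parameter names may vary with the FHIR version your server supports.

import requests

FHIR_BASE = "http://example.org/fhir"  # hypothetical FHIR server base URL
SALMONELLA_VS = "http://example.org/fhir/ValueSet/salmonella"  # hypothetical value set reference

# The parameter names mirror the GET example above; they may differ by FHIR version.
params = {
    "subject.name": "John Smith",   # chained search on the subject's name
    "name": "System|Culture",       # placeholder system|code for the culture observation
    "value:in": SALMONELLA_VS,      # the observation value must be in the value set
}

response = requests.get(FHIR_BASE + "/Observation", params=params)
bundle = response.json()
print(bundle.get("total"), "matching observations")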

Now, the beauty of this is that the FHIR server performing the query doesn't actually have to expand the value set more than once to perform this or any other query against the value set.  And if the value set resource is resident on the same server as the observation, it may not even need to fully expand the value set to perform the search; it only needs to execute an algorithm that determines whether the code in a candidate resource could be in that expansion.  There are a couple of ways the server could do that, including precomputing a DFA for matching all codes in the value set.
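
Here is a rough sketch of that idea, using a simple hash set of (system, code) pairs rather than a DFA; the server base URL is hypothetical, and the parameter used with the $expand operation has varied across FHIR versions.

import requests

FHIR_BASE = "http://example.org/fhir"  # hypothetical FHIR server base URL

def expand_once(value_set_url):
    """Expand the value set a single time via the FHIR $expand operation.
    (The parameter used to identify the value set varies by FHIR version.)"""
    resp = requests.get(FHIR_BASE + "/ValueSet/$expand",
                        params={"identifier": value_set_url})
    contains = resp.json().get("expansion", {}).get("contains", [])
    return {(c.get("system"), c.get("code")) for c in contains}

def value_in_expansion(observation, expansion_codes):
    """True if any coding on the observation's value is in the pre-computed expansion."""
    codings = observation.get("valueCodeableConcept", {}).get("coding", [])
    return any((c.get("system"), c.get("code")) in expansion_codes for c in codings)

# Expand once, then reuse the set for every candidate Observation the search evaluates.
salmonella_codes = expand_once("http://example.org/fhir/ValueSet/salmonella")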

I was told that if I could show that this was possible in FHIR, Cecil would become a believer.

Now, I'm already a believer in FHIR, but this exercise demonstrated for me once again why: within 10 minutes of asking my question, I had the answer, and it was documented right there all along.

     -- Keith

Tuesday, January 20, 2015

Is Your CCDA Document Relevant?

One of the challenges with Meaningful Use has to do with the way that it started; first with one kind of document, the CCD (using the HITSP C32 specification), later migrating to CCDA which supported multiple document types.  Meaningful Use never gave any guidance on which CCDA document types to use for different purposes, and in fact, defined content based on a combination of document types found in CCDA.  As a result, most folks are still using CCD, which really is designed to be a general summary of care.  But physicians generate other kinds of documents during routine care, such as a history and physical note, or a consult note, or an imaging report.

Because the 2011 edition required use of CCD, and these weren't related to what physicians were generating, the documents were automatically generated.  And since they were automatically generated, there's no physician in the loop to determine what is relevant and pertinent.  Vendors have built interfaces to make it possible for physicians to select what is relevant and pertinent, but that takes time in the physician workflow.  So vendors get told to automate the generation process, and depending on how they do that, the result is often less than ideal.

By about 65 pages.

I've talked about this problem previously, and at the time, it seemed to me that the solution should have been obvious: Use the physician's existing documentation process to select what is pertinent and relevant.

But the evolution of Meaningful Use from a document that physicians don't generate to a collection of documents which they might use assumed a level of integration with physician workflows that simply wasn't allowed for by Meaningful Use timelines.  Basically the setup of clinical documentation workflows is something that is usually done during an initial EHR installation.  It isn't redone when the system gets upgraded, because that is usually a non-essential disruption for the provider.  But that is what it would take to use the new document types.  Now, that process will likely occur over time as hospitals and providers update their systems to improve their workflows, but in the interim, it leaves automatically generated CCD 1.1 as the documents that get exchanged for Meaningful Use.

Now we get into the disconnect.  Most EHR developers that I've talked to know that clinicians are the best judge of what content is pertinent and relevant to exchange.  But with no clinician in the loop for automatically generated documents, the choice of what is relevant ends up being set as system configuration parameters.  Lawyers and HIT administrators (and sometimes physicians) tend to err on the side of caution when trying to figure out what should be sent to a receiving system in this configuration.

Which results in a 70 page "summary" of the patient data, which is at least 65 pages too long for anyone to read (and probably closer to 68 pages too long for the average physician).  As a result, when these documents get created and sent, a physician will look at them once or twice, and then decide they aren't useful and never look at them again.

How do we fix this?  I ran into a similar challenge over the last year, and the answer that I came up with then was to talk to clinicians about the right rules for limiting the data that would be provided.  There were three sets of limits: time based, event based, and state based.

Information about active issues needs to be presented regardless of time (assuming appropriate management of active and resolved status in the provider's workflow).  This would include problems (conditions), allergies, and any current (active) medication.

Event based limits are based on recent care, e.g., activities done in the last encounter.  So any problem or allergy marked as resolved in the last encounter would also show up so that you could see recent changes in patient state.  Additionally, any diagnostic tests and/or results related to that last ambulatory visit would also be included (inpatient stays need a slightly different approach because of volume).  Finally, the most recent vital signs from the encounter should be reported.

Time based limits deal with things that have happened in recent history.  For an adult patient, you probably want the last year's immunization history.  You likely want to know about any problems, whether active or resolved, in the patient's recent history (we wound up equating recent history to the last month).

There are certain exceptions that may need to be added, for example, for pediatric immunizations you might extend history to a longer period, but in an age dependent way.
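
As a purely illustrative sketch (none of the names or defaults here come from a standard), the kinds of limits described above might be captured as configuration that clinicians can review and override:

from datetime import date, timedelta

# Hypothetical relevance limits; the categories mirror the ones described above,
# and clinicians should be able to review and override all of them.
RELEVANCE_LIMITS = {
    "state_based": {                        # include regardless of age
        "problems": "active",
        "allergies": "active",
        "medications": "active",
    },
    "event_based": {                        # tied to the most recent encounter
        "resolved_problems": "last_encounter",
        "diagnostic_results": "last_encounter",
        "vital_signs": "most_recent_in_encounter",
    },
    "time_based": {                         # recent-history windows
        "immunizations": timedelta(days=365),
        "resolved_problems": timedelta(days=30),
    },
}

def within_window(entry_date, window, today=None):
    """True if a dated entry falls inside a time-based relevance window."""
    today = today or date.today()
    return (today - entry_date) <= window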

HL7 Structured Documents has agreed to begin working on a project in which we will work with clinicians, and reach out to various medical professional societies, to help define what constitutes a reasonable set of limits for filtering relevant data.  I'm thinking this would be an informative document which HL7 would publish and which we would also promote through the various professional societies collaborating on this project.

So, I think I was both wrong and right in my original post on this topic.  It is impertinent for a developer or a system to decide what is relevant, but it is possible, with the participation of clinicians, to develop guidance that system implementers could use to configure the filter for what should be considered relevant.  And my advice to system designers is that while you might want to supply a good set of defaults, you should always let the clinician override what the system selects.