Wednesday, December 17, 2014

What does a HIE policy cost?

A colleague of mine and I were talking about policies the other day. She's been involved in policy development in a number of places, and I've had an inside view of the process a few times myself, as well as a lot of anecdotal evidence.

So, to start with, a typical policy document might be 50 pages or so. A professional policy writer might produce about a page a day, and cost about $1,000 to $1,500 per day.  You could go lower or higher of course, but this is a decent ballpark range.  So that's $50,000 to $75,000 right from the get-go.  Now of course, if you already employ such a person, you might argue the rates with me, but once you account for your real costs (salary, office, equipment, and benefits), you are right back in that range.  Of course, that budget is already allocated, and so might be an easier pill to swallow.

Now, let's look at the work involved.  You probably need a few face-to-face meetings; hopefully you already have the resources to host those, otherwise you have to pay for that, but for now, let's just assume you have them.  And you need a few people, call it 10, to help you work out your policies.  Some of these will be lawyers ($$$$) and policy experts ($$$$) and security experts ($$$), and a bunch of C-level folk (or one level down).  Again, pricey.  You probably need them for two to three face-to-face meetings of a couple of days each.  Hopefully they are volunteers and you don't have to pay for their time or travel, because if you do, their time could cost you, oh, call it $250/hr.  And then you need a few teleconferences.  We won't worry about the T-con facilities, because you are probably already paying for those, and for this project, it's small change.  So, call it 10 two-hour meetings where not everyone shows up every time, and you have 20 hours * 5 people = 100 hours.  So there goes another $25,000 into the policy.  And if you had three days' worth of face-to-face meetings with the whole team, then you have another 24 hours * 10 people = 240 hours, or $60,000.  So that's a grand total of 340 meeting hours at $250/hr: $85,000.  And if you are signing the checks and managing this, you are likely involved for at least some part of the effort just to oversee the whole project.  So there goes another 4 hours a week times 10 weeks = 40 hours.  Throw yourself into the pool, and 40 * $250 = $10,000.

Add it all up and there went somewhere between $145,000 and $170,000, and three to six months of time.  Hopefully you had volunteers, and that cuts it down quite a bit. But I'm betting you are paying for at least one lawyer and one policy expert, and those people don't come cheap. Even so, you might get by at around $125,000.  And depending on your overhead, the number of people, and the length of the project, you might pay even more than these figures.
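For those who like to check the math, the whole model fits in a few lines. This is just a sketch of the arithmetic above; every rate and hour count is an assumption from this post, not a quoted price.

```python
# Back-of-the-envelope policy cost model; all figures are illustrative
# assumptions from the discussion above.
pages = 50
writing_low, writing_high = pages * 1000, pages * 1500  # $1,000-$1,500 per page-day

rate = 250                    # $/hr blended rate for lawyers, experts, executives
telecon_hours = 10 * 2 * 5    # 10 two-hour calls, about 5 attendees each
f2f_hours = 3 * 8 * 10        # 3 days of 8 hours with all 10 people
oversight_hours = 4 * 10      # 4 hrs/week of project oversight for 10 weeks

meetings = (telecon_hours + f2f_hours + oversight_hours) * rate
print(writing_low + meetings, writing_high + meetings)  # 145000 170000
```

Change any single assumption (the blended rate, the head count, the number of calls) and the total moves by tens of thousands of dollars, which is exactly why these estimates are ranges.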

Back to our 50 page document.  There are about 250 words on a page, so a grand total of 12,500 words.  That gives us a range of $12 to $16 per word.  Dang, I'd love to get that.  I wrote 92,138 words for The CDA(tm) Book; I seem to be a few orders of magnitude off.

Not all words are of equal value.  Some stuff is pretty straightforward.  Other words are much more important.  I've seen a room of ten people argue for an hour over two words.  At $250/hr, that's $2,500 for the hour, or around $1,250 per word.  Remember that figure the next time you get into a debate over words.

-- Keith

Wednesday, December 10, 2014

Project Argonaut for the HL7 community

This showed up in my Inbox this morning via Grahame Grieve and was sent to the HL7 FHIR mailing list.  Given that there's a lot of interest in what Project Argonaut is, and few details, I thought others might benefit from this as well. I reproduce it here with Grahame's permission (and since that list is open for anyone to join, there are no issues there either).

-- Keith

Project Argonaut was announced last week. You can see the announcement here. That press release was intended for an external community, and didn't address lots of important questions for the HL7 community itself. So here's an outline of what Project Argonaut means in practice for HL7.

FHIR Ballot on May timeline

Project Argonaut arose in response to our concern that we may not be able to deliver the FHIR DSTU2 for the May ballot next year. Lots of implementers - a much wider community than the identified Argonauts - have a strong interest in that outcome. The Argonauts decided to step up and ask us what they could do to help us meet the May deadline. 

We really appreciate that, and we're glad of the help. But like everything, it comes at a price. We haven't committed that we will absolutely hit the May deadline - we can't - but we've committed to giving it a really good shot. That does mean that there's going to be more pressure than there already was, and the FHIR/HL7 leadership (including the EC, TSC, FGB, and FMG) will need to manage the impact of this on our community. Another price is that the Argonauts have specific goals related to the JASON task force report; Project Argonaut covers more than merely "publish the FHIR DSTU 2 in May".

Specifically, Project Argonaut will have 3 main sub-projects:

1/ Security

The first project is to review the way Smart-On-FHIR uses OAuth and OpenID Connect. Specifically, Dixie Baker will drive this, and consult with the Project Argonaut community and beyond to ensure that the arrangements meet their requirements and also are consistent with wider industry best practices in these regards. 

This work is actually outside HL7. Once it's complete, and the specifications have been tested in real systems and proven satisfactory, it is our plan to work with the Security WG to bring the resulting spec into the HL7 community as a full standard, probably as part of the FHIR specification (though not dependent on FHIR - this would have use in a wider context).

2/ CCDA --> FHIR mapping 

Our biggest issue with the May ballot is mapping between CCDA and FHIR. It's our intent to publish a full, detailed CCDA profile for FHIR that will describe how to represent in FHIR the same content that is in CCDA, and provide detailed conversion support. Publishing this as a full profile is not part of the May ballot (and hasn't been for a while); instead, we plan to provide this as an implementation guide subsequently. But this will only work if we're confident that there's basic alignment between CCDA and FHIR, and we can only be sure of this once we've done detailed mapping between the two. So this detailed mapping is a pre-condition of DSTU 2; accelerating it is the principal outcome of Project Argonaut for the FHIR specification itself.

How this will work is that a small group of people - mainly the ones already doing the work on a volunteer basis - will be paid to focus a significant amount of their time on performing detailed mapping between CCDA and FHIR. This will be done openly, using normal HL7 channels (a mix of email, Skype, wiki, and Google documents). The mapping documents that this group prepares will probably be similar to the CDA -> FHIR mapping document, and everyone is welcome to observe/review/comment on the outcomes. Any issues in FHIR resources that this group identifies will be forwarded to the relevant owning committee for resolution using the normal committee processes (the small Project Argonaut team has no authority in an HL7 sense).

3/ FHIR implementation testing 

Although the Argonauts are supporting us to produce the FHIR DSTU 2, they have a specific interest in it - the parts that relate to the JASON task force recommendation around API-based access to healthcare data. Based on this, the current draft for comment includes a brief Argonaut implementation guide that describes Meaningful Use based access to EHR data and documents.

As part of the Argonaut project, we'll be performing several implementation-based activities that have the aim of verifying that the DSTU and this profile are actually suitable for their purpose. Those of you who have already been involved in the FHIR project will know that this is basically business as usual for us; it's just expanding our existing approach to a new community. So one of the streams at the San Antonio Connectathon in January will be focused on the Argonaut interest of MU-based access to data, using "Fetch Patient Record" to fetch the patient record as MU-compliant resources (note that the Argonaut interest is in granular access APIs, but we're just testing this coarsest level of access for this connectathon). There'll be other engagement activities as well, which will be announced as we get on top of them.

Plenty of people have asked "How do I get involved with project Argonaut?".  There are two answers to this.  

  • In terms of supporting the HL7 project work directly (suitable for the existing HL7 community): reviewing the CCDA mappings, participating in connectathons, reviewing the draft DAF work, and participating in the ballot cycles and related committee work are all excellent ways of being involved
  • At a corporate level, lots of external organizations have expressed interest in becoming involved. We're working on scaling up the FHIR community to support this, in association with the Argonauts. There will be more news on this soon.

One final note: Lots of people have asked about the relationship between Project Argonaut and the ONC DAF project. Well, it's simple: the Project Argonaut work is accelerating the first phase of the DAF project.


Friday, December 5, 2014

What's the question?

Whatever it is, if it involves Health IT, interoperability, or anything else, the answer seems to be FHIR.  And I agree, FHIR has plenty to offer the industry with regard to interoperability.  Today, the #HITsm tweetchat was all about FHIR, and the recent (yesterday) announcement of the Argonauts project by HL7 and a variety of other participants.  There's no real cabal here, or perhaps I should say that differently: just about every cabal out there is involved.  It's the cabal of cabals.  You've got HIT Standards Committee leadership and CommonWell leadership and HSPC leadership, and leaders of many other factions I'm sure.

I get to laugh a little bit about all this buzz, because I recognized FHIR was going to be pretty significant quite some time ago (in fact 2+ years ago).  And John Moehrke has been involved with the FHIR Management Group since it was initiated.  The doers have been at it for a few years already, it's good to see some of these other folk join in ;-)

And now of course, everyone wants to know what FHIR is and has advice on how to move it forward and make it better.  If you want to understand FHIR, start first with this primer.  Whoops, that's not a primer, that's actually the standard.  But it is already a pretty good place to start.  If you want to see FHIR in action, what I'd suggest you do is sign up for the HL7 FHIR Connectathon happening in January in San Antonio.  If you cannot make it to that event, you still have an opportunity to see what is happening with FHIR in IHE at the IHE Connectathon in Cleveland [frankly, I'd prefer the weather in San Antonio, but will be at both events].  If you want to get involved in FHIR, join HL7, and get on the listservs, the Skype chat, the wiki, or just about any other medium available.  While FHIR presently is being developed by just about all of HL7, soon there will be an HL7 Workgroup devoted solely to FHIR.

The Argonauts project is about hitting the closing stretch on the next release of FHIR, which is expected to be DSTU Release 2 in May.  There are a couple of major components related to that project (mostly about a few parts of what FHIR DSTU 2 expects to deliver):

  1. Documents
  2. Granular Data
  3. Integration with OAuth 2.0
  4. Project Management
The OAuth 2.0 piece is a bit extra, but is part of what needs to happen to support things like a RESTful Blue Button API here in the US.  Already there's some work that Josh Mandel did for Blue Button + Pull, and for which IHE started the Internet User Authentication (IUA) profile a couple of years ago.

The big news about FHIR is not so much about Jason or the Argonauts (most all of whom die in the end, except perhaps Heracles), and more about the fact that like the Argo, FHIR has the biggest collection of Interoperability heroes on board.  With a crew like this, it should be an amazing ride. Are you in?


Thursday, December 4, 2014


My 1000 words for the day (and actually, they are Regina's):

Wednesday, December 3, 2014

Bones or Monty Python?

A couple of weeks ago, Bob Wachter wrote: Meaningful Use: Born 2009, Died 2014? Earlier this year, John Halamka suggested we need to Declare Victory for Meaningful Use (see item #5) by 2016.

I'm not so sure Meaningful Use is dead.  It's pretty clear it's a program in transition, but that writing was already on the wall with the transition from payments (2014) to penalties (2015).  I fully expect to have another 500 pages to read this holiday season.

How about you? Who's right? Bones or the old coot from Monty Python?


Tuesday, December 2, 2014

FHIR is all the Buzz

This morning I got another unsolicited sales request, but this one amused me.  The company reports "extensive experience in FHIR and HL7 integration, EMR, HIS, Cloud Engineering, Big Data Integration, Hadoop, Java, J2EE based HA Architectures, Web-services, Android, iPhone and a host of other cutting-edge technologies" buzz-words!

I love it that FHIR has reached Buzz-word status, but even more so, for the right reasons.

Wednesday, November 26, 2014

What does that cost?

What if we all picked one day in 2015, any day, so long as it is a normal working day for doctors and said, "on this day, I will ask my doctor about the cost of ____."  I would call this "National Cost of Care Awareness Day," and we would have it on April 1st (thanks Margalit for both suggestions).

And we would promote it widely.  And everyone would call their doctors, and their doctors would say "I don't know" more times in one day than they've ever said perhaps in an entire year.  And maybe we could ask our insurers the same question.

So, I've made the proposal, who will back it?  Who besides me will promote this?  And more importantly, who will do it?


Tuesday, November 25, 2014

One Less Thing

This one showed up last week in my inbox while I was busy being overwhelmed with AMIA ... From my perspective, this is a good thing, because it takes one more thing OUT of my travel budget by coordinating a number of different activities. I wish there were more of this kind of collaboration, and less of the other ...

Dear IHE USA participants,

Today, a strategic partnership between the Interoperability Workgroup (IWG), HIMSS and IHE USA was announced to streamline the process for achieving connectivity between EHR and HIE systems. This alliance will strengthen IHE USA's current program to improve the quality, value, and safety of healthcare by enabling rapid, scalable, and secure access to health information at the point of care. ICSA Labs  has been selected as the testing and certification body for this effort. Read the full Press Release to learn more.

"Both IWG and IHE USA have worked independently with notable accomplishments. However, the consolidation of our efforts, along with the commitment of these organizations, offers the most promise to create a real, lasting impact and make interoperability a reality in healthcare," said Joyce Sensmeier, MS, RN-BC, CPHIMS, FHIMSS, FAAN, president of IHE USA.

IWG, HIMSS and IHE USA will announce the opening of the new testing and certification program at the IHE North American Connectathon Conference 2015 in Cleveland, Ohio at the HIMSS Innovation Center on Wednesday, January 28, 2015. We cordially invite you to join us as we announce the next phase for IWG's testing and certification program. Visit IHE USA's website to learn more about this conference and register online today.

Thank you for your support and efforts as we raise the bar for interoperability and industry standards in the US. If you have any additional questions please contact


The leadership and staff at the Interoperability Workgroup, HIMSS and IHE USA.

Monday, November 24, 2014

Workflow Automation

I've now seen more than a dozen, and written almost a half-dozen workflow profiles over the past two years.  After the first two, the remainder get simpler.  After the first three, if I cannot automate some part of the process, I must be sleeping.  For my last five, I automated a huge chunk of the content development.  And as it turns out, with BPMN 2.0, there is an XML expression for the semantics of what gets depicted in an IHE Workflow profile, addressing things like task ownership, attachments, messaging, and sequencing.  As workflows get more complex, we'll need to incorporate branching and notifications as well.

So for the next IHE PCC cycle of profile development, I will be building an appendix to the PCC Technical Framework profiling the use of BPMN to describe IHE Workflows.  The next IHE Workflow profile to come out of PCC (and Radiology), Remote Read, will take advantage of what I get done to facilitate implementation.

A couple of quick notes on the value of this:

  1. I've shown that you can automate development of conformance rules, test plans, and implementation testing via an XML representation of the workflow.
  2. I've also shown that a lot of documentation can also be generated from the XML representation.
  3. You could actually plug the BPMN into your workflow automation tools to simplify your implementation.

It remains to be seen whether I can profile BPMN 2.0 and its XML representation of the workflow semantics (I care some about the pretty pictures, but not terribly much) to provide the same set of capabilities.  But so far, early investigations are promising.  Since I already have a set of five workflows for which I've developed my own little DSL, basically I'll be looking at how to represent that DSL in BPMN 2.0.
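To make the idea concrete, here's a minimal sketch of the kind of extraction this enables: a made-up BPMN 2.0 process for a remote read, parsed with the Python standard library to recover task names and sequencing. The task ids and names are illustrative, not from any IHE profile; only the element and attribute names come from the BPMN 2.0 schema.

```python
# Minimal sketch: pull tasks and their ordering out of a (hypothetical)
# BPMN 2.0 process definition using only the standard library.
import xml.etree.ElementTree as ET

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

bpmn = """<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="RemoteRead">
    <task id="t1" name="Request Read"/>
    <task id="t2" name="Perform Read"/>
    <task id="t3" name="Report Results"/>
    <sequenceFlow id="f1" sourceRef="t1" targetRef="t2"/>
    <sequenceFlow id="f2" sourceRef="t2" targetRef="t3"/>
  </process>
</definitions>"""

root = ET.fromstring(bpmn)
tasks = {t.get("id"): t.get("name") for t in root.iter(BPMN_NS + "task")}
flows = [(f.get("sourceRef"), f.get("targetRef"))
         for f in root.iter(BPMN_NS + "sequenceFlow")]

# Emit the task sequence that documentation and test plans could be built from.
for src, tgt in flows:
    print(tasks[src], "->", tasks[tgt])
```

The same traversal that prints this sequence could just as easily emit conformance rules or test-plan steps, which is the automation argument in a nutshell.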

It's one of those things you'll get to look forward to here in 2015 as I develop that content in IHE.  And hopefully, as the IHE/HL7 Joint Workgroup continues its slow but steady progress toward creation, we'll be able to look at capturing workflow details in FHIR.  But that will have to wait at least for DSTU 3 (the content train for DSTU 2 left the station months ago).

    -- Keith

Friday, November 21, 2014

My First AMIA

Believe it or not (and most people there found it difficult to believe), prior to last week, I had never been to an AMIA meeting before.  I think part of the reason few believed I had never been is because I knew so many people there, whom I have met in HL7, IHE, HIMSS, HITSP, S&I Framework or other settings over the past decade.  And while I knew so many, even more knew me.

The scientific sessions were probably the most interesting.  Adam Wright had a great presentation on using Poisson parameters to detect changes in alert activation levels.  What was cool about that was that you could apply the technique to any countable event, and I've got a ton of different applications for that.  Another presenter had shown (or thought she had shown) that NOT using a system was better than using it badly, but in fact, as it turned out (at least in my analysis), she had NO data in the study to illustrate that at all; in fact, in her control arm, where the system was not used, folks did not fare as well as when the system was used imperfectly.

One session I attended was about the formation of the Healthcare Services Platform Consortium (HSPC), a panel discussion led by Stan Huff and other HSPC members.  Wow!  Just what I needed.  Another consortium to pay attention to.  Oh, and this one is run by physician-centered organizations, so it is clear that it will be balanced.  I get a little tired of consortium creation as a job function.  After all, there's already not enough going on, including IHE, HL7, CCC, the HIT Standards Committee, CommonWell, S&I Framework (and before that HITSP), CIMI and ...  I honestly don't know how Stan does it, given that he's leading four (now five) of these efforts, but I sure don't have the travel budget for this.  Don't get me wrong, I really don't have any complaints about direction, just about the lack of coordination.  We have plenty of that going on already.

There was quite a bit of buzz about FHIR going on throughout AMIA, and it played a major role in at least 5 sessions that I attended.  Healthcare Informatics called it a Smoking Hot Topic at the meeting, and Neil Versel talked about it as a Public API on his blog over at Healthcare IT News.  I think we've just recast Phoenix Rises as a FHIR-bird.

Probably my favorite activity was the poster session, where I got to meet people doing some very real implementation work, many using the standards that I know and love, and write about here.  There were easily half a dozen projects that I found interesting, and I really enjoyed spending time with the people investigating the use of these standards.  In several cases I was able to point them to some other things they might also look at to improve further.  For example, one person was showing a system he and others had built to merge CCD documents, so I pointed him to the IHE PCC Reconciliation profile for some advice on how to identify and display reconciled medication lists.

I also find it fun to walk down University row, and say "Sorry, I'm already in a program..." whenever anyone asks me if I'm interested.

Of course, while AMIA was going on, the rest of the world didn't stop.  I had double homework in one class (at least if I wanted extra credit), and a draft of my term paper due for another.  Which meant that I got precious little sleep.  I'll plan better for next year, because while this year is my first visit to AMIA, it won't be my last.

Saturday, November 15, 2014

IHE PCC Profiles for 2015

Several IHE Domains met last week to discuss work items for the coming season of interoperability. IHE IT Infrastructure's 2015 Plan is described by John Moehrke on his blog.  IHE PCC started with eight work items, which we reduced to six and moved forward with.

RECON on FHIR takes the IHE Reconciliation work and adapts it for use with FHIR.  The basic challenge addressed by this profile is how to represent the reconciliation act.  Fortunately, FHIR has the List resource, and that actually covers everything we probably need for this profile, and was designed in part for this kind of use.

Clinical Decision Support for Radiology is one we'll be developing on behalf of IHE Radiology, but it will likely remain a PCC profile.  The idea behind this one is that ordering physicians and radiologists here in the US will soon be required to verify that certain kinds of imaging orders are appropriate in order to get paid.  Other countries, while not so draconian in their policies, are starting to have an interest in this sort of order review as well.  So, we want to have some sort of CDS verify the order as it is created and as it is received, and to gather additional information if necessary.  For this, we'll likely use something like RFD if additional information needs to be gathered, but otherwise simply return a token of some sort from the CDS system indicating that the order is deemed appropriate according to some guideline.  Then the receiver can simply verify the token if so desired, or test the order against its own set of appropriateness guidelines.
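As a thought experiment, that token could be as simple as a keyed hash over the order facts. Nothing here is specified by the profile; this sketch just illustrates the issue/verify round trip, and the shared secret, order id, and guideline identifier are all made up.

```python
# Hypothetical sketch of an appropriateness token: the CDS service signs a
# few order facts with a shared secret; the order receiver re-verifies.
# This is one possible mechanism, not anything the profile defines.
import hashlib
import hmac

SECRET = b"shared-cds-secret"  # assumes an out-of-band key exchange

def issue_token(order_id: str, guideline: str) -> str:
    """CDS side: sign the order id and the guideline it was judged against."""
    msg = f"{order_id}|{guideline}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_token(order_id: str, guideline: str, token: str) -> bool:
    """Receiver side: recompute and compare in constant time."""
    return hmac.compare_digest(issue_token(order_id, guideline), token)

tok = issue_token("order-123", "ACR-appropriateness-v1")
print(verify_token("order-123", "ACR-appropriateness-v1", tok))  # True
print(verify_token("order-456", "ACR-appropriateness-v1", tok))  # False
```

The appeal of a scheme like this is that the receiver doesn't need to re-run the CDS logic; it only needs to check that the token matches the order it received.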

Device Observation Semantic Bridge will probably get renamed.  The point of this one is that some EHR systems would like to have a simplified summary of device observations created to make it easier for them to store the information, in CDA rather than HL7 V2 format.  So, we'll have a content profile that explains how to convert V2 device observations to CDA, create a summary, and link it to the device observation.  We'll also have another integration profile that explains how to get vocabulary mappings from one system to another.  We will likely use CTS2 to support that, although there may be some interaction with FHIR as well.  One supplement, two profiles.
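To give a flavor of the conversion problem, here's a toy sketch that splits an HL7 V2 OBX segment into fields and pulls out the pieces a summary entry would need. The segment is a made-up example, the field positions follow HL7 V2.x, and the dict is just a stand-in for the eventual CDA target.

```python
# Toy illustration of the V2 -> summary mapping idea: split an HL7 V2 OBX
# segment on its field separators and extract code, value, and units.
# The dict below is a simplified stand-in for a CDA entry, not real CDA.
obx = "OBX|1|NM|8867-4^Heart rate^LN||72|{beats}/min^^UCUM|||||F"

fields = obx.split("|")
code, display, system = fields[3].split("^")   # OBX-3: observation identifier
summary = {
    "code": code,                              # LOINC code for the observation
    "display": display,
    "value": fields[5],                        # OBX-5: observation value
    "units": fields[6].split("^")[0],          # OBX-6: units
}
print(summary["display"], summary["value"], summary["units"])
```

The real profile has to handle far more than this (repetitions, data types, escape sequences, and the semantic mapping itself), which is exactly why the vocabulary-mapping companion profile exists.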

Remote Patient Monitoring takes what Continua and IHE have been doing and demonstrating for the last half decade and makes it into a profile.  This one should really be very simple. The main point is to have a profile that enables Continua specifications to be tested at connectathon.  This one may go back to PCD once we have finished it, and we expect this one to come out early because it is so well understood.

Remote Read is a Workflow profile that will likely be owned by IHE Radiology when it is complete, but which will be supported by PCC during development.  I also expect to work on an appendix for IHE PCC which shows how to define a Workflow Profile using BPMN 2.0.

Data Access Framework will be an interesting animal.  One part of it will be a framework for supporting document queries in health information exchange, which will become a PCC profile. Another part will be developing US national extensions, mostly on IHE ITI profiles, for how they are to be used (which options and vocabulary are required, for example), and the last part will be an implementation guide that pulls it all together, to be developed in IHE USA.

One profile didn't make it through because the author wasn't there to defend it. Other work includes cleaning up some long-standing problems with a missing "Share Content" transaction in Content profiles, and doing further outreach in Nursing.

Thursday, November 13, 2014


As I said last week, I've got a lot going on.  This week I'm at IHE meetings, and I'll update you tomorrow on what we did, but first I need to take care of something else that's overdue, also related to what PCC is doing at IHE this week.

Last cycle, Patient Care Coordination accepted the Data Access Framework (DAF) White Paper as a work item, and we are proceeding further with it this year.  This couldn't have happened without the contributions of one person in particular.  He was assigned to this task as an ONC contractor, and like many, had not been deeply involved in IHE before.  But he adapted well to the process, and with a bit of guidance on my part, developed a good deal of the content for the white paper on his own, incorporating feedback from the S&I Framework group on DAF.  That white paper was completed early, went out for two rounds of public comment, and was finally published a couple of weeks ago.

IHE has a lot going on, and this particular white paper involved something like a dozen IHE profiles from two domains (PCC and IT Infrastructure), and quite a bit of complexity to address the various needs for interoperability within, between and across organizations.  Our intrepid author was able to dig into the details of those profiles, and understand a new set of processes for him, and even help us invent new processes in PCC to address the development of the framework.

He'll be joining us again this year to help move the Data Access Framework as a profile, a set of national extensions, and an Implementation Guide (in IHE USA) that will help set forth the standards for interoperability we could be seeing in the near future in this country, and also to explain the IHE framework that has been used internationally for Information exchange.  Without further ado, let me thank the next recipient of the Ad Hoc Harley:

This certifies that  
Nagesh (Dragon) Bashyam of Drager Consulting 

Has hereby been recognized for his contributions to IHE and to Interoperability in the US

Thanks again Dragon, and I look forward to working with you again this year!

Thursday, November 6, 2014

Small talk and small things

November is a packed month:

  • I'm in Saudi this week, teaching people how to teach others what we have been doing for the past two years.
  • Next week is the IHE Technical Face-To-Face Meeting
  • Following that, I'll be headed to AMIA for my first time (as a Student)
  • I have a term paper due shortly.
  • Boxes still have to be unpacked, and I need to remove leaves from more than an acre of property.
My move is complete, and we are quite happy with it (when I'm there).  School is going fairly well. I've got two courses this semester: Scientific Writing, which I love, and Introduction to Biostatistics, which I'm also enjoying.  So far, both classes have been pretty easy for me, although I seem to want to make my writing class harder than it needs to be.  The same is probably true for my Biostatistics class.  But I have allocated the time for these classes, so I'm pushing myself to learn as much as possible in them, even if that goes beyond the set curriculum.

One of my current challenges is finding a doctor for my family in my new home town.  I have some criteria they have to meet:
  1. They absolutely must have a patient portal.
  2. They should have an office within 15 minutes of the house.
  3. They probably should be in my home state (Massachusetts), even though I'm two minutes from Woonsocket, Rhode Island.  The rationale for this is that the hospitals that I would want to go to (or have a family member in) if need be are both in Massachusetts, and that makes things a lot simpler.
  4. What I read about them online should be positive.
  5. If possible, I'd like them to be in a physician group large enough to include the variety of specialties that we've used in the past, including pediatrics, orthopedics, gastroenterology and ob/gyn.  In addition, with my mother living with us part of the year, I'd also like them to have someone in the practice that specializes in gerontology.
The biggest challenge has been finding a physician who meets these criteria and who is taking new patients.  So far, I've been striking out.  Technology doesn't help much for a patient in this situation.  My health insurer's information is insufficient.  It has little about the availability of a patient portal, focusing only on physicians who work with the insurer's own Health IT product and not reporting on portals supported by anyone else, and it says nothing about whether or not the physician is taking new patients.  And "taking new patients" is a dubious term when at least two physicians who are "taking new patients" don't have an appointment for more than four months out.

The last time I changed physicians was with my last move, and it was easy.  I picked up the phone and called one, and they made an appointment for me.  But back then, I lived a heck of a lot closer to Boston, a major city with a serious medical industry in it. Now I am learning what it means to be living in a rural area, near smaller cities, and being stuck with the choices that many are stuck with.  I've also set a much higher bar this time than I had the last time.  Fortunately, I can probably go another three months before I need to have a new physician in place.

This is definitely a first world problem, and it gives me a new found understanding of what a physician shortage might actually look like.  Frankly, for me and most of my family, and our health issues, we don't much need a physician.  I've been thrilled with the care that my children get from their NP, and would be just as happy with an NP or PA for most of my ailments.  By the time it gets serious, I would need to see a specialist in most cases, and it's been pretty obvious even to me what kind of specialist I need.

There are a lot of big problems in healthcare.  This is a small one.  An annoyance.  But I wonder why I have to put up with it, and with the many other small annoyances that come with our healthcare system in this country.  We tend to focus on the big things in Healthcare IT, but what if we paid attention to the myriad of small things?  What would be the economic impact of that?  Over most of my life, the cost of those small annoyances probably adds up to a significant amount in terms of my time and money.  Telemedicine just became an option for me, and so I'm thinking about exploring the possibility further.

And I'll also be thinking a good bit about how Health IT could be addressing those small problems for patients as well.


Wednesday, October 29, 2014

Contained FHIR Resources

Currently under discussion in the FHIR community is what to do with contained resources, and how they could be searched (or not).  I argue that they should be searchable, and that FHIR should specify how they can be returned in a standard way, but also that NOT every FHIR-conforming system need be able to search for contained resources.  At present, to retrieve any contained resource in FHIR, you must perform a query that returns its container; you cannot directly query the resource type of the contained resource.  You can chain a query into that resource type, but then you have to know the association between the container and the contained resource.

But not every system will know that, and the association may well never have been considered by the system designer. In analytics, the engines often pick up some pretty strange associations in the data they process, completely ignoring any notion of what you or I would consider elegant information system design.  Even so, having found an association through analytics, you now want to be able to do something based on it.  You might use the presence of a particular fact to drive some sort of decision support, perhaps a dashboard, or something else.  So you need to be able to get to instances of that fact without necessarily knowing the association.

"Wait!" you argue, "of course the analytics system knows the associations too."  And so it does, for this design.  But the association discovered could be an important trigger that works with other systems that handle similar, but not identical, data.  You find these sorts of associations published all the time in the medical literature: presence of X leads to Y.  So even the analysis may come from somewhere else.  And you don't care about the container.  You care about finding all the instances of resource X so that you can prevent Y, or treat Y, or even simply reward the behavior associated with X and its related resources.  You might want to count all the Xs, or inspect them further to see if a case qualifies for some sort of intervention, or do something else with them.  What you need to do is not particularly important, just that you need to do something with these things.  So, if you cannot find a particular X, then you have data locked away that isn't all that useful.

Let's look at the mechanics of how this might be implemented:

Using an analogy, let's say you have a system representing people, drivers, cars, registrations, licenses, a registration authority, and a licensing authority.  Each could be represented as a separate resource and might have a separate identity. And a road trip resource might have a destination, and be associated with a driver, a car, and the people who are its passengers.

But what do you do about the hitchhiker that you might need to keep track of for the road trip, but don't much care about later?  In FHIR, you could create the hitchhiker as a contained passenger resource associated with the road trip.  After all, outside of the context of the road trip, he doesn't much matter, but he should be listed as part of the resources associated with the road trip.

Now, as resources go, any person resource can be searched, and you can locate any person who has been on a road trip from Chicago to Milwaukee.  Or can you?  Actually you cannot.  Because if John picks up a hitchhiker (Keith) for a road trip from Chicago to Milwaukee, and decides he doesn't care to track Keith as a person, he could just contain Keith as a person resource in the road trip resource using the passenger attribute of the road trip resource.

So far, so good.  But now look at what happens when John goes to do some queries about how his car is being used.  The way that FHIR works today, he would be able to find all the drivers and all the road trips associated with the car registered in his name, but what he cannot find is all the passengers who have been in his car.  That's fine, you say, we didn't care about that passenger enough to make it a full-blown resource anyway.  And there is some benefit here, because you don't have to return that contained passenger as a resource.

BUT: What happens when you query for road trips?  The query specification says that you are permitted to support chaining of queries across associations.  So John could arguably query for a road trip where Keith was listed as a passenger.  But he could not query for passengers named Keith who have been on road trips with him and get that one where Keith was listed as a contained resource.  So even though the contained resource couldn't be returned, it still needs to be indexed so that the chained search works right.
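To make the indexing point concrete, here is a minimal sketch in plain Python. The resource shapes are invented for this analogy ("RoadTrip" is not a real FHIR resource, and this is a toy in-memory store, not a real FHIR server): a search over top-level resources alone misses the contained Person, while also indexing contained resources with an implicit _container link makes him findable.

```python
# A RoadTrip resource that contains a Person (the hitchhiker) inline.
road_trip = {
    "resourceType": "RoadTrip",
    "id": "trip-1",
    "contained": [
        {"resourceType": "Person", "id": "p1", "name": "Keith"}
    ],
    "driver": {"reference": "Person/john"},
    "passenger": [{"reference": "#p1"}],  # local reference into 'contained'
}

# Top-level store: only independently identified resources get indexed.
store = {
    ("Person", "john"): {"resourceType": "Person", "id": "john", "name": "John"},
    ("RoadTrip", "trip-1"): road_trip,
}

def search(resource_type, **params):
    """Naive search over top-level resources only."""
    return [r for (t, _), r in store.items()
            if t == resource_type
            and all(r.get(k) == v for k, v in params.items())]

def search_with_contained(resource_type, **params):
    """Also index contained resources, recording the implicit container."""
    results = search(resource_type, **params)
    for (t, _), container in store.items():
        for c in container.get("contained", []):
            if (c["resourceType"] == resource_type
                    and all(c.get(k) == v for k, v in params.items())):
                results.append({**c, "_container": f"{t}/{container['id']}"})
    return results

# Keith exists in the data, but a plain Person search cannot see him:
print(search("Person", name="Keith"))            # []
# With contained resources indexed, he is found, along with his container:
print(search_with_contained("Person", name="Keith"))
```

The second function is the whole argument in miniature: the contained resource need not be independently addressable, but it does need to be in the index for searches (chained or direct) to work right.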

Implicitly, you can view containment as a particular kind of association (an aggregation).  The container has the contained resource embedded within it, and the contained resource has an implicit association with one and only one container.  So what if we said that "_container" acted as if it were an attribute on the Any resource, resolving to the resource that holds the contained resource?  If we did that, we could resolve a lot of challenges with searching contained resources.

If you wanted to search for resources that were only contained, you could do that by ensuring that _container was valued.  If you wanted to search for resources that were not contained, you could do that by ensuring _container was not valued.  If you wanted to search for resources that were contained by a resource of a particular type, you'd use _container:ResourceType, just as you would for other associations.

The mechanics of how this would work are pretty clear, and for those use cases for when you want to access "contained" resources, now you have a way to specify that they be present in the search results.  For a contained resource, you could ask to _include resources that appear along the _container association.  If I lost you, let me put it this way.  When John goes to search for people who have been on road trips in his car, he can say GET [base]/Person?_container:RoadTrip&_include=Person._container and he would get a bunch of Person resources, and where they were contained, he would get their containers (which would happen to include those Person resources)*.

Now, an analytics system that simply wants to keep track of what Keith is doing really doesn't care whether the activity is RoadTrip, or Sleeping.  It just wants to find Keith and see what he is up to.  It doesn't care about all the associations, it just needs to report to Keith's wife what he is doing.  Now it can do so without any prior knowledge about how John is keeping track of what he does with Keith.

   -- Keith

* Looking at this a bit more deeply, I can see that GET [base]/Person?_container:RoadTrip is pretty useless without the _include=Person._container.  I'm not sure where to go with that.  We could determine that use of _container required _include, we could make that automatic, or we could come up with something else.  For now, I'm not really worried about it, since it doesn't much matter for the immediate discussion.

Saturday, October 25, 2014

IHE Radiology Profile Selections

This comes to me via my colleague Chris Lindop (who is also IHE Radiology planning co-chair):

The IHE Radiology Planning Committee has selected 3 profiles to develop for trial testing in 2016.  

The proposal development work items selected are:

The Radiology Technical Committee will meet to kick off development work on these profiles Nov. 4-7 at RSNA HQ in Oak Brook, IL.  Please review these profiles or forward them to others. 

The first two of these I mentioned earlier today with regard to PCC planning.  As you can see, this will be a coordinated effort between committees if all goes through.   

Friday, October 24, 2014

2015-2016 IHE PCC Planning

IHE Patient Care Coordination, Quality, Research, and Public Health, and IT Infrastructure met the last two days to discuss profiles and work for the 2015/2016 development cycle.  You can find the minutes of PCC's meeting here, as well as the final evaluation spreadsheet we will be passing on to the technical committee for their review.

I'll make a few comments on this year's work items:

Radiology and Patient Care Devices have some work they'd like PCC's help with.  From the PCD perspective, one of these is simply a matter of documenting existing PCD and Continua efforts in a profile so that it can be tested at Connectathon for Home Health Monitoring.  Another would make it easier for medical devices to communicate with some EHR systems, supporting the exchange of PCD-01 messages through a "Semantic Bridge" (a fancy way of saying: interface engine) that translates them from one form (HL7 V2.6) to another (e.g., C-CDA or maybe even FHIR) that might be more digestible for some EHR systems.

On the Radiology side, they'd like to see Remote Reading Workflow, and support for CDS during the imaging ordering process.  For the former, we are thinking about an XDW-based workflow profile, perhaps combined with another submission (Basic Testing Workflow), and updating the referral workflow profile.  For the latter, I'm thinking this is something like what QRPH has already done with RFD and CRD, except in this case, instead of getting back a form, you also have the option of simply getting back a token that says the imaging procedure is authorized based on the data provided.  Otherwise you would get back a form asking for the information needed to verify that the procedure is warranted, and eventually that "authorization" token might be returned.

Finally there is the DAF proposal.  This one is challenging.  The basic ask is that we prepare an implementation guide for S&I Framework, but this really isn't an IHE International profile proposal.  So we are looking at putting together something like a template for developing such a guide in IHE PCC, and then having IHE USA fill it in.  So there'd be some joint work, but not an IHE Profile per se.  The template work might need to be addressed by the DCC (Domain Coordination Committee), with some help from PCC.

These are, of course, just my opinions; this is still my sabbatical year from chairing anything.


P.S. There's also some IHE/HL7 work that I'm going to be proposing once that joint workgroup gets officially established, but we still have some work to do there.  That's another blog post.

Tuesday, October 21, 2014

JASON and the EHRnauts

I thought I was done with JASON and the EHRnauts for a little bit, when this query popped into my inbox via Will Ross over at Redwood MedNet.  He points to this bit of wisdom:

From the report:
To the extent that query capabilities are included in MU Stage 3, we are at an awkward moment in standards development: Older standards such as XDS/XCA are mature but inherently limited, whereas newer API-based standards are not yet ready for large-scale adoption. We believe it would be detrimental to lock the industry in to older standards, and thus, we recommend that ONC mobilize an accelerated standards development process to ready an initial specification of FHIR for certification to support MU Stage 3.
I love it when people raise the point about limits, without delving into what those limits are.  It always sounds so authoritative.  Yes, documents are limited.

Here are some of the limitations of documents:
  1. You can only operate on what appears within documents.  
  2. You have to have some idea about which documents you want.  
  3. When dealing with multiple documents, you have to deal with redundancy and ambiguity.
  4. Documents are coarser grained than some problems want to deal with.
There are also benefits to documents:
  1. A document can be operated on by a human using very little technology (Human Readability), or by a computer.
  2. Documents link each fact reported to an encounter or visit, a healthcare provider, and an institution (Context).
  3. A document provides the complete record of the encounter or visit, not just individual parts that can be interpreted out of context (Wholeness).
  4. The content of a document can be retrieved at any point in time in the future in a way that is repeatable (Persistent).
  5. A document links data to the organization that gathered, uses and manages it (Stewardship).
  6. A document can be signed by a healthcare provider (Potential for Authentication).
Anyone who has studied CDA will recognize where these properties come from.

Now, as to the limits, all of these can be overcome, the question is who does it.  The fundamental organization of data in an EHR around information documented during an encounter isn't likely to change whether you look at a large grained document-centric approach, or a finer-grained data item-centric approach.

You'll only ever be able to find information that has been gathered, or the fact that it hasn't been gathered.  The document-based approach means that you need to look at several documents to determine that for a time period; a finer-grained approach means that the system you ask for that information must look at all data items in that time period.

You will always have to have some idea about what you want to ask for.  In the document-centric approach, that can be based on document metadata such as who, where, what or when.  Those questions are often asked at the first level of the physician's workflow in their search for more information.  Finer-grained approaches will allow more detailed questions to be addressed that come later in the evaluation of the patient: Did they have this test? If so what were the results? Was ____ ruled out?  When was the last time ___?

When dealing with multiple facts over time, you will have to deal with redundancy, ambiguity and disagreement.  It is quite possible in an EHR today to have one physician assert something, and another deny that same thing, and both may be correct, or they may conflict.  This is true regardless of whether the data items are accessed through coarse- or fine-grained mechanisms.  Documents increase the degree to which this occurs because of the wholeness principle: you get all of the relevant data about the encounter, not just a few small pieces of data.  But you should be able to readily resolve those issues, because you will always have them to deal with as soon as you have multiple data sources.  Documents just make the problem visible sooner, because each document can be (and often is) treated as a single data source, whereas with fine-grained access mechanisms the problem only shows up once you have more than one data source.

The key issue is that some folks want to get right down into the computer automation of tricky bits, which means that they often don't want what they consider to be the excess baggage of documents. Agreed, we need a better way, and the industry is working on it.

I also love it when I hear "lock in", because frankly, when something better comes along, people will use it, regardless of what the government says or does (consider how mobile has driven healthcare). In most cases, the best thing it can do is get itself out of the way ;-)

To say (as the JASON Task Force does) that "There is currently no industry- or government-led plan or effort focused on ubiquitous adoption of standardized Public APIs." is technically correct.  But let me ask you, where was the plan to adopt HTTP and HTML and CSS for the World Wide Web?  XML and Schema?  MIME and SMTP and POP for eMail?  If anywhere, it was in the minds of the creators of those standards and the implementers. There was no government program driving adoption.

Right now, HL7, CDISC, DICOM, IEEE, OpenEHR, and IHE have all rallied around FHIR as the way forward for a variety of different use cases (see for example IHE's Mobile Access to Health Documents effort).  Major vendors, national programs, industry consortia, and other organizations have publicly announced support for FHIR in products, programs and services currently being developed.  This kind of thing is nearly unprecedented in health care.  To come upon it after the fact and try to impose some US federally crafted plan to make it happen is just a bit ambitious, don't you think?  After all, it worked so well the last time with Direct.

My advice is to tread carefully, as ONC has already been doing.  Offer support and assistance, encourage communication among different groups, maybe even fund some development.  However, I think HHS needs to avoid the arrogance of thinking that it could plan this much better than is already occurring naturally in the industry.

I leave you with these thoughts:  The Web took about five years to be widely used, and another five to really mature.  FHIR has been with us for a bit more than two years.  Rather than asking whether we can afford to wait, consider whether it will be worth it to rush.  It won't be too much longer.

-- Keith

P.S.  I was very impressed with the JTF report.  It was a very thoughtful response to the original work.

Friday, October 17, 2014

Take as Written

One of the requirements for the HITSP C32 as specified by AHIP was that a medication entry provide the free text sig.  That specification met the requirement by indicating that it could be found in the <text> element of the <substanceAdministration> element.  This has resulted in implementation confusion because some feel that it meant that ONLY the sig should appear in substanceAdministration/text, and others felt that it could be part of it, but that it could also include the medication and other details (such as preconditions or other cautions or instructions like take with food).

The latter interpretation is what the HL7 RIM specifies should appear in the medication template's substanceAdministration/text element.  Some systems want access to just the sig, and cannot determine where it begins or ends in the medication template.  So Structured Documents is now working on a new template that can be used within the Medication template to show JUST the sig. Rick Geimer and I have been tasked with coming up with a proposal.

Here's essentially the proposal that we discussed on the SDWG call yesterday.  We add a new template called "Medication Free Text Sig" (or some similar name) which provides the free text sig component of the medication.  This template becomes a recommended component of the substanceAdministration template now, and perhaps becomes required in future editions of C-CDA.  It looks something like this:

    <paragraph ID='med-1'>
      <content ID='medname-1'>Amlodipine-Benazepril 2.5-10mg</content>
      <content ID='sig-1'>one capsule daily</content>
    </paragraph>
    <!-- the paragraph above lives in the narrative; the entry follows -->
    <substanceAdministration classCode='SBADM' moodCode='INT'>
      <text><reference value='#med-1'/></text>
      <consumable typeCode="CSM">
        <manufacturedProduct classCode="MANU">
          <manufacturedMaterial>
            <code code="898352"
                displayName="Amlodipine 2.5 MG/Benazepril hydrochloride 10 MG Oral Capsule"
                codeSystem="2.16.840.1.113883.6.88" codeSystemName="RxNorm">
              <originalText><reference value="#medname-1"/></originalText>
            </code>
            <name>Amlodipine 2.5 MG / Benazepril hydrochloride 10 MG Oral Capsule</name>
          </manufacturedMaterial>
        </manufacturedProduct>
      </consumable>
      <entryRelationship typeCode='COMP'>
        <substanceAdministration classCode='SBADM' moodCode='INT'>
          <code code='422096002' displayName='Take'
              codeSystem='2.16.840.1.113883.6.96' codeSystemName='SNOMED-CT'/>
          <text><reference value='#sig-1'/></text>
        </substanceAdministration>
      </entryRelationship>
    </substanceAdministration>
This component simply provides access to the free text content of the sig portion of the medication entry.  Like the travel history section, certified EHR systems could start using a template like this today.  Consider this my prescription for how to solve the problem.

   -- Keith

Thursday, October 16, 2014

On a Process for Rapid Template Development

Recently a question crossed the Structured Documents Workgroup List about how to record information about a patient's recent travel.  You can probably guess what recent media events motivated that question.

IHE had long ago developed a template for foreign travel, as part of XPHR. Since this section wasn't required in the HITSP C32, it was not further developed in C-CDA.  However, that doesn't stop anyone from using it, even under Meaningful Use.  The Foreign Travel template is simply a section containing a narrative description of travel.  Narrative capture of travel history is what most EHR systems today support.  This usually appears somewhere in the social history section of the patient chart, and is accessible to any provider caring for the patient in most EHR systems.

For cases of communicable disease, if you want the EHR to be able to apply clinical decision support to recent foreign travel, you would need coded entries, or natural language processing over the narrative.  To code the travel information, you will need to do more in this section.  The basic activity being documented is travel, and so you could readily capture that in an act, with location participants for each place visited.

<act classCode='ACT' moodCode='EVN'>
  <code code='420008001' displayName='Travel'
      codeSystem='2.16.840.1.113883.6.96' codeSystemName='SNOMED-CT'/>
  <effectiveTime><!-- This might be optional, see participant/time below -->
    <low value='Starting Date for Travel History'/>
    <high value='Ending Date for Travel History'/>
  </effectiveTime>
  <participant typeCode='LOC'>
    <!-- one participant for each location visited, detailed below -->
  </participant>
</act>
The participant would represent the various locations visited during the time period described in the travel act.  We need not go into the entity level since participant.role can capture what we need.

<participant typeCode='LOC'>
  <time>
    <low value='Starting Date for this Location'/>
    <high value='Ending Date for this Location'/>
  </time>
  <participantRole classCode='TERR'>
    <!-- The code might be optional, and could identify locations using a value
         set such as Geographic Location History -->
    <code code='Code identifying location'
        codeSystem='Code System for Locations (e.g., ISO-3166)'/>
    <addr><!-- city, county, state, country; see constraints below --></addr>
  </participantRole>
</participant>
We would probably want to constrain participantRole so that exactly one addr element was present (and mandatory), containing at least one of city, county, state or country.  I would also recommend that country always be present, and that if city or county is present, state also be present.  For disambiguation purposes, you might need to know which of the twelve New Londons in the US your patient was recently in.
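Under those constraints, a populated addr might look like this (the values are hypothetical, chosen to show the disambiguation point):

```xml
<participantRole classCode='TERR'>
  <addr>
    <city>New London</city>
    <state>CT</state>
    <country>US</country>
  </addr>
</participantRole>
```

With state and country always present, decision support can tell New London, CT apart from its eleven namesakes without guessing.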

Some have suggested that location could be rolled up into a code, as I have shown above in participantRole/code.  While I agree that would make certain kinds of decision support easier, it is something that could be done within the clinical decision support module, rather than being specified within the EHR.  The Geographic Location Value set referenced above shows why this might be a problem, as it contains codes describing locations at different levels.

So, now back to the main point.  We quickly went through this model on the Structured Documents workgroup list service in less than three days.  It would take us several months to roll this out as a new template.  We need a model of development and consensus building somewhat like what OpenEHR does for archetypes, allowing for quicker development and deployment of these sorts of artifacts.  I also think that this is the way some of these templates should be developed in the future. We should develop a model that can be approved through a general call for consensus, and then periodically we can roll up several of these templates into a release which gets a more thorough review through the HL7 ballot process.

This would allow HL7 to be responsive to rapidly developing health issues, without having to make it something that we have to panic about.  Note that foreign travel is relevant only for some cases. There are plenty of other ways to be exposed to disease, including everyday activities like going to school or work, or shopping.  For that you might want to be looking at other information in the patient health record, such as their home address, and workplace and school contacts.  IHE also included entries for those contacts in XPHR.

   -- Keith

Wednesday, October 15, 2014

A Short and Sensual Sojourn

One day last week I took a ride on my motorcycle to pick up my daughter from school. Having recently moved into this rural neighborhood, I took to the back roads to explore. To ride is to be connected to the environment around you. The roads were narrow, winding and tree covered. Gleefully I followed them over to her school, absorbing all I could see, smell and feel. Combining the visuals of fiery fall foliage with the aroma of recent rain and fallen leaves, and the soft touch of cool fall air brushing by me, I almost reached a sensual nirvana. All too soon I arrived at my destination. After collecting my daughter, we reversed my original course back to our new home. She also marveled at our new surroundings. The more I explore this new place, the happier I am to have moved here. Why I made such a lifestyle shift was readily answered in a single and all too short ride, I realized, as I pulled into my space at the end of the driveway.

About this text: This was originally part of a homework assignment for my Scientific Writing class, which I then updated a bit. I enjoyed it and am still digging out of boxes, so there is no excess brain to write with for something else today. I promise there will be something with meat on it soon.

Thursday, October 9, 2014

Worth Waiting For

About 12 years ago I was to have given a sermon at my church.  In planning for it, I asked our rector how long a sermon should be.  He answered with this bit of wisdom: "It takes as long as it takes."  Of course, he didn't realize I was asking about what I needed to do next week, and was thinking I was commenting on the length of his sermon that week.  But the phrase stuck with me, and the next week, when he realized WHY I was asking, we had a good laugh over his non-answer.

It is now October 2014.  This time last year HL7 was hurriedly trying to put the final pieces together for C-CDA Release 2.0, so that we could get it out in time for regulation.  It's still not published yet, although it soon will be, and the reality is that we apparently didn't need it in such a hurry.  Recent discussions on HL7 FHIR and the CDA and C-CDA on FHIR projects indicate that the dates for these don't all line up for the DSTU FHIR Ballot schedule.  There's some discussion about whether or not we should delay it.

I for one am all for taking as long as it takes to do things right.  I'm a bit tired of rushing stuff out to ballot to meet deadlines that were made by people who don't necessarily understand real world healthcare provider upgrade and deployment schedules.  We have plenty of work to do, and the industry has gotten itself pretty convinced about the right way to go.  Now we just have to convince folks that it will be worth waiting for.

Thus ends my sermon for this day...


Wednesday, October 8, 2014

Assumed Ignorant

My internet was down for about an hour yesterday.  It could be readily traced back to a specific piece of hardware, and fortunately for me, I happened to have a replacement on hand that wasn't the same make and model that was causing massive internet outages all over the world.  Even when I upgrade, I hardly ever throw anything away.  I have at least three routers and a Hub sitting in my office, unused since they've been replaced with faster equipment.  So, once I knew what the problem was, I dug out my old Netgear Wireless router, reset it to factory defaults, and plugged it in to get us limping back along until Belkin could fix whatever it messed up.

The tech at Charter couldn't explain to me what was wrong, only that it was a problem with the Belkin router.  Belkin couldn't explain what was wrong, only that some software change had caused the problem.  My bet is that the server at Belkin that the routers ping to verify that they really have Internet access was either down, decommissioned, or renamed.

In trying to work through the problem with my Internet tech support guy, I ran into a problem that patients (especially chronic ones) have with their doctors.  I know more than your average Internet user about networking.  By the time I've called the cable guy, I've gone through all the standard Tier 1 fixes, sniffed the network if necessary, and have a pretty good idea the problem is NOT at my end. I tried to explain that to this guy, but he didn't have ANY training about how to talk to a tech savvy customer.  He only knows his scripts.  I've had doctors like that too, who try to dumb stuff down for me because "It's too complicated."  I'd like to show them some of the code I've had to maintain in my life.

In any case, I wish there was something we could do about the attitude that customers or patients should be assumed ignorant until proven otherwise.  I think that there are some basic skills, such as being able to reset your internet box, fill up your tank, change and flush your oil and coolant, throw a breaker, or understand our health, and the healthcare system (such as it is) that should be part of everyone's basic education.  And I think the same thing goes for Physicians and technology!

When did assumed ignorant become the default, and why do we let people get away with it?

Monday, October 6, 2014

Life Flow

For the past few weeks I have been rearranging my life so that I could move. As we move into the new house, my family is redesigning our spaces and technology solutions to better fit our life.  The kitchen is not just about making and eating meals, it is about breakfast, lunch and dinner.  So now one part of my kitchen is devoted to making accessible the appliances that we ALL use in the morning, from coffee maker to toaster oven to microwave, while another part is devoted to the more intense meal preparation for dinner done by one or two persons at a time.  

My office, which used to hold half our books, now holds over 90% of them.  The internet router, telephone base station, and main printer, which used to be spread throughout the house, are all right next to me so that the "hey Honey, why won't the printer print?" (or wifi connect, or phone dial) question need not be shouted across half the house.  I can watch the smoker outside from my desk. The exercise and entertainment centers can now be used together, synergistically.

There is so much stuff, we are still in the "just get it out of boxes and get it to work" stage in many places.   We'll rough out the life flow as we do that, and then fine tune it as we find the problems (after a coffee spill, it was noted that we probably want the marble-topped sideboard under the coffee pot rather than the oak-topped one).  This is real, rational and agile all at the same time.  I wonder if there isn't a lesson to be learned from this for healthcare.  Except that I don't know if most physicians could live through the kind of chaos that roughing out and testing workflows would require.

Friday, September 26, 2014

Finance and Twentieth Century Medicine

I'm moving to the country in a few days, to a small farm about fifty miles from Boston.  The process of buying a house is rather complex, sort of like getting healthcare.  The next time someone mouths off to me about how the financial services sector has interoperability down pat, I am going to laugh so very hard at them.

1.  We transacted most of our data exchanges through e-mail and fax, with some telephone and web mixed in.
2.  Every data exchange was paper or PDF based.  Structured data?  I can hear the underwriter evilly laughing in the background.  Yes, please, send me your structured data so we can print it out and transfer it into our underwriting forms manually.
3.  Get me a quote and fax it... (On the hazard insurance policy).

Sure, that is interoperable... as 20th-century medicine.


P.S. What the finance sector has learned is how to use interoperability to take THEIR costs out of the system, not MINE.  We should remember that for healthcare too.

Thursday, September 25, 2014

All the Good Names are Taken

A recent thread on the HL7 FHIR List points to one of the real challenges in computer science.  You see, if you don't get to a particular space first, someone else grabs all of the good names.   For example, "namespace" happens to be already used as a reserved word in five different programming languages.

I propose a novel solution to this problem, which is to use common dictionary meanings for terms first. Only when extreme precision is necessary would we disambiguate, and only after demonstration that such disambiguation is necessary.  In these cases, we would subscript the name with its definition number in a commonly used dictionary resource like Wiktionary.  If no definition suited, only then would we create a new term which did not otherwise exist in the dictionary.  We would assign someone to then add it to Wiktionary, effectively claiming the space.  In this way, maybe we could actually explain how standards work to the average C-level.
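To make the idea concrete, here is a toy sketch.  The `disambiguate` helper and the Unicode-subscript convention are entirely my own invention for illustration, not anything actually proposed on the FHIR list:

```python
# Toy sketch of dictionary-sense disambiguation: use the plain term by
# default, and append a subscripted Wiktionary definition number only
# when extreme precision is required.
SUBSCRIPTS = str.maketrans("0123456789", "\u2080\u2081\u2082\u2083\u2084\u2085\u2086\u2087\u2088\u2089")

def disambiguate(term, sense=None):
    """Return the plain term, or the term tagged with its dictionary
    definition number when disambiguation has been shown necessary."""
    if sense is None:
        return term
    return term + str(sense).translate(SUBSCRIPTS)
```

So `disambiguate("namespace")` stays plain, while `disambiguate("namespace", 2)` yields "namespace₂", pointing at the second Wiktionary sense.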


P.S. ;-)

Friday, September 19, 2014

The HL7 September Plenary

I spent a good bit of time at the recent HL7 September Working Group Plenary meeting over the past five days with a lot of different workgroups.

While my home is usually Structured Documents, I only spent two quarters with them, first on forward planning and next hearing about the DAF FHIR project (which I'll talk about a bit more later).  We also talked briefly about ONC's HIT Standards and Policy FACA's feedback on C-CDA and also recent issues regarding the quality of C-CDA documents being produced.  I agreed to bring this up in the HL7 Policy Advisory Committee meeting later on Wednesday.

I spent a good quarter with InM and ITS talking about the Data Access Framework PSS to create Query and Response profiles, in part satisfying one of the gaps identified in IHE's white paper on the Data Access Framework (this is the link to the public comment version; the Final is to be published soon).  One of the challenges here is that DAF wants to develop profiles that eventually will take advantage of the C-CDA on FHIR project, but they want to do things sooner than that project will be ready so that people can take advantage of the Query and Response profiles to test them.  I made the point that this needs to be coordinated across the other HL7 CDA/FHIR projects, and the feedback I got was "That isn't our project."  This is a common misconception among folks who bring projects to HL7: they think they own the project.  The reality is, it becomes an HL7 project, and HL7 needs to do what it must to manage and coordinate ALL of the projects in its portfolio.  So, there will be some coordination there, and hopefully, we'll figure out how to do that properly.

Another good quarter was spent on QUICK, in which we talked quite a bit about my ONE negative comment on QUICK, which was the "Bad" ballot you can read more about in this post.  We traded a lot of thinking about what QUICK is trying to do.  One of the challenges of this work is that they think some of the names of things in FHIR are actually misnamed when approached from a quality and/or clinical decision support perspective.  I think there are probably three or four things that QUICK needs to do to address these mismatches, including getting some change proposals on the FHIR agenda to address some of these naming issues.  After all, if FHIR is truly EHR focused, we need to recall that at least in one market (the US), both Clinical Decision Support and Quality Measurement are key features that have to be present.

I spent a quarter with the HL7 Policy Advisory Committee, in which we spent about half the time planning the Policy Summit to be held in early December, and the other half discussing how to respond to concerns raised by the HIT FACAs on C-CDA.  We already have many processes within HL7 to address such feedback, and HL7 members use these to get improvements into the standards pipeline.  For example, the Examples task force headed by Brett Marquard has already begun work on some of the examples that had been identified by the FACA.  Fortunately, we've been tracking these issues, but it might be nice if someone actually fed them more directly into HL7.  We'll be working on how to streamline that.

I spent a quarter with the Attachments workgroup, and we resolved some issues with esMD, but more importantly, Paul Knapp, chair of the HL7 Financial Management workgroup, showed up to report on what he has been doing with FHIR in the Financial sector.  A while back I wrote a post about how Blue Button Plus and EOB data might be used to help reduce costs, but one of the outstanding issues has been the missing content standard for an EOB.  Building from the work that Paul has already completed with Claims and Remittances, we believe that FM could create (and Attachments would support) an EOB resource that could be used with Direct, Blue Button Plus, or any other transport.

Thursday morning I spent at the Payer Summit, giving payers a very high level view of HL7 Standards, along with many other HL7 luminaries.  It wasn't the largest room, but it was certainly chock full of some very interested payers.  I wasn't able to stay for the full summit, but I heard many good things.  Also speaking at the Summit was Brian Ahier (@ahier on twitter).

Finally, I spent my last quarter at the Working Group meeting with a number of HL7 and IHE members discussing the formation of a joint workgroup between IHE and HL7, preliminarily known as the Healthcare Standards Integration workgroup.  The IHE board has already approved this in principle, and we are following the HL7 Governance process to finalize the new workgroup, with the expectation of final IHE board approval.  Hopefully it will be in place before we complete the 2015/2016 Profile Selection process with several IHE Domains in October/November.

All in all, it was a pretty busy week, and I was quite happy to get home to finish packing for my big move to the country, a week from tomorrow.

Thursday, September 18, 2014

That takes guts

Normally I do this post Wednesday morning, but quite honestly I had day job and personal distractions (I'm moving in about a week) this week, so I'm doing it today.  Wednesday morning at the HL7 Plenary the Godfather of Health Level 7, Ed Hammond, gives out the Ed Hammond awards, and I traditionally also give out an ad hoc award.  I do that not so much to compete with Ed (I hope I can do what he does when I reach that degree of tenure), but to continue the tradition.

Tuesday morning I saw a combination of ribbons on an HL7 Member's badge that I found stunning. They were "First Time Attendee" and "Co-chair".  When I asked further, I discovered that this person was a new co-chair of perhaps the most technically challenging, and also most difficult to manage, collection of people (which is a compliment, not a critique).  The Security Workgroup is relatively small, but contains some of the top names in Health IT Security, and has always been a very challenging place to engage.  I leave that to my colleague John Moehrke, who has much more experience in this area.  I know enough about security to know that I'd rather defer to seasoned experts than to try to do it myself.

So this combination of badges deserves special recognition, because while it takes guts as an HL7 first-timer to join the Security workgroup, it takes even more than that to be willing to co-chair the group.  An extra special thanks, and here we go ...

This certifies that 
Alexander Mense of HL7 Austria 

Has hereby been recognized for having the guts to take on a role as co-chair of the HL7 Security Workgroup

The FHIR Code

A guest post from one of the FHIR Chiefs: Lloyd McKenzie

A little over three years ago, when Grahame introduced the concept that would become the FHIR(TM) standard, he didn’t just have a set of technical ideas for how to better share healthcare information. He also had some fairly strong ideas about what we needed to hold as “important” as we pursued that new approach. The technical approach has evolved, in some places quite a lot. However, the underlying priorities have remained pretty consistent.

Principles are actually core to FHIR – or to any standards effort. They drive what gets produced. They also guide the community. If the principles aren’t well understood or clearly expressed, it’s easy for a standard to drift and lose focus. It’s also easy for it to deliver the wrong thing. V3 had a really strong focus on “semantic” interoperability. We made great strides in that space. However, we sort of lost track of the fact we still needed technical interoperability underneath that. (And that ease of use was sort of relevant too . . .)

Some of those principles, such as “the 80%”, have been widely shared (though not always well understood). Others have found their way into presentations in slides such as the FHIR Manifesto. However, we’d never really sat down as a project and written down exactly what the fundamental principles of FHIR were or why we felt those principles were central to what FHIR was.

So the FHIR Governance Board (with review from the FHIR Management Group) has written down what we see as the “core principles” of FHIR – the FHIR Code, if you will. These are the underlying drivers that we feel should guide every design decision, every methodology rule, every step we take in deciding on scope, ballot timelines, etc. They can be found on the HL7 wiki.

I don’t think any of these principles will be a surprise to those who have been following the FHIR project. They pretty much all stem from the first principle:

FHIR prioritizes implementation 

Note that these aren’t hard and fast rules, but guidelines. You can’t say “I’m an implementer, I don’t like what you’re doing – therefore you’re violating FHIR core principles”. But they do reflect the spirit of what we’re trying to do and we’ll try to adhere to them as much as we can. (As well, we interpret “implementer” in the broad sense – we don’t only care about those who write code but about all those who use FHIR.)

The FHIR Code isn’t done though, because FHIR isn’t a top-down process. It’s about community (Grahame’s been reinforcing that a lot this week). And as I write this, I realize we may have missed a principle that should be added to the list. In any case, we want the principles to be reflective of the desires of the community – so we’re throwing them out to implementers and the broader FHIR community:

Do these principles reflect your vision for FHIR? Is this what should be guiding our decisions? Will this help us to keep our focus on the right things? Are they clear enough?

We’ll take your feedback (here, on the FHIR list, the implementers’ Skype chat, or any other means you choose). Then we’ll seek feedback as part of the next FHIR DSTU.