
Friday, August 30, 2013

Mashup Generation

In my younger days, I played role playing games, wrote fan fiction, and hung out on bulletin boards using my modem and phone in the pre-Internet era.  Today, my eldest daughter role plays while writing serial fan fiction with friends on an internet site that essentially repurposes forum infrastructure, all on her cell phone.  She lives in the generation that invented mash-ups, and can take four unrelated concepts, throw them together, and build something not just novel, but also quite useful.  She and her friends think nothing of spending hours writing stories together over the internet.  And I've read this stuff.  I'd pay real money for some of it in e-book form. They are not only creating their own entertainment, but also learning and practicing great skills, and creating an economy all their own.  I'll show you my stories if you show me yours.

This gives me hope, especially if we can get this generation to apply those same skills to some of the more challenging problems we face.  Imagine if your healthcare providers used an internet forum in real-time to discuss your case.  What would that look like?  It might feel foreign to doctors of today, but to those in my daughter's generation, it will be as natural as breathing.  If only we can get out of their way.


Thursday, August 29, 2013

IHE Cardiology Technical Frameworks and Technical Framework Supplements Published

The IHE Cardiology Technical Committee has published the following Technical Framework Volumes as of August 29, 2013:
  • Volume 1 (CARD TF-1): Integration Profiles
  • Volume 2 (CARD TF-2): Transactions
The committee has also published the following supplements to the IHE Cardiology Technical Framework for trial implementation as of August 29, 2013:
  • Displayable Reports (DRPT)
  • Evidence Documents Profiles Cardiology Domain Options: Stress Testing CT/MR Angiography (ED)
  • Image-Enabled Office (IEO)
  • Resting ECG Workflow (REWF)
  • Stress Testing Workflow (STRESS)
These profiles may be available for testing at subsequent IHE Connectathons. The documents are available for download on the IHE website, where comments on all documents are invited at any time.

Wednesday, August 28, 2013


For several reasons I've been working on putting a service oriented face around several interoperability specifications and profiles.  I need to demonstrate an architecture that is readily understandable and which can be composed from/with a variety of off-the-shelf components.  As one who's been developing software and service components for decades, I understand quite a bit about what the SOA craze is about.  As a friend and I discussed over the phone today, twenty years ago the buzz was Object Orientation, now it's service orientation, but the compass still points in the same general directions.

I managed somehow to inherit about $200 of Thomas Erl's SOA series of books a couple of years ago.  I've browsed through them a couple of times, but never seriously.  Today, I think I understand why.  I'm more than halfway through Principles of Service Design and am quite frustrated.  It seems that the secret to writing a book about SOA these days is to use a Service Noun Phrase every other sentence (just like I did here).  It may be just that I don't like Erl's writing style, but frankly, I got as much or more out of this 33-page IHE white paper on the topic than I did from the first 300 pages of his book.  I'm glad I didn't buy these books because so much of what I read feels like markitecture, and if not that, common sense.

Last night I had SOA nightmares.  Today I'm feeling a bit better about it, but I'm still struggling to grasp how it is different from what I know already.  For me, the challenge in truly learning something new in the art of software development is in understanding three things:
  • what is it that is like what you already knew but has a different name now, 
  • which of those things are almost like what you already knew, but is slightly different (and in what way),
  • what is truly new.
Once I've figured that out, I can apply it and decide for myself whether the differences matter, and whether the new stuff is really important.  I'll get there, but it can be painful.

And as an added bonus, here is an SOA Buzzword Bingo card for you, just to share some of my frustration.

Business Logic | Independence | Contract  | Governance  | Orientation
Binding        | Composable   | Service   | Granularity | Loose Coupling
Boundary       | Inventory    | Meta Data | Flexibility | Taxonomy

Tuesday, August 27, 2013

Back to School

So today I finished off a bunch of paperwork related to going back to school for me.  I've been accepted to the Biomedical Informatics Program at Oregon Health and Science University.  My faculty adviser will be fellow blogger Bill Hersh, known in blogging circles as the Informatics Professor.  Getting to this stage was quite a bit of work, and involved jumping a number of hoops, given that I'm a non-traditional student.  That's code for I don't have a Bachelor's degree.

One of the questions I'm often asked is why I want to do this, especially given that I already teach classes at this level.  I've been a guest-lecturer to MS students in Northeastern's Informatics program and Johns Hopkins' MPH degree programs, and have also participated in symposia at Harvard Medical School and at the University of Utah.  In addition, I've written the only textbook on the HL7 CDA standard.

I laugh at myself a bit, and I also point out that some of the schools where I've taught won't even let me into their masters' programs. I understand why, and it's not the program or the people involved in it who are at fault. Two deans and one program chair failed where Bill (and I) succeeded.  It's the accreditation process and procedures that are put into place to ensure that we've got quality programs at those schools that don't allow for exceptions (and ensure that everyone gets their pound of flesh).

My first answer is a bit off the wall.  The reason that I want to be here is to learn what my students are learning.  That's actually a pretty good reason when you think about it.  It makes it easier to connect and to understand what they already know (or should know).  The other reason is because while I've got really detailed specialist knowledge, what I don't have is some of the breadth of others who've been through a program like this.  In my day job, I'll learn it as I go when I realize there is a gap, but this gives me an opportunity to identify a bunch of gaps at once, and concentrate on addressing them.

I hope that in entering this process as late as I am, and as set in my ways as I am, I don't run into the potential dangers of academia.  I expect to be as cantankerous and as challenging a student as I was as an undergraduate (I didn't say I never went to college, just that I never finished).  I just hope that I don't wind up scaring my teachers away.

As I enter into this new distribution of efforts, I'll be stepping back from some other activities.  Recently new co-chairs were elected for IHE Patient Care Coordination. For the first time in eight years (yes, it has been that long), I won't be co-chairing a committee in IHE. But I'll still be around and participating.  I will continue on in my role as an HL7 Board Member, and I don't rule out more advanced participation on the HL7 Board in the future.

Monday, August 26, 2013

Social Media for Market Research

I'm not often called on to do market research, but when it falls into my area of expertise, I'm all over it. Back when I first entered the technology field more than two decades ago, it meant being aware of who the right analysts were (e.g., Frost and Sullivan, Gartner, IDC, et cetera), and then talking to your marketing people to get access to the right research.  It also meant being aware of your own market, who your competitors were, and doing some digging.

But how would you quickly find out who the competitors are in a market you don't know?  And what do you do when you don't have access to the right market research, or even worse, when you are the first person doing that sort of in-depth analysis?

The advent of social media makes some of this a lot easier.  Want to know who works for the competition? Find someone on Twitter that you know works for them, and then traverse their links and their friends' links. One of those clusters is likely to be a batch of co-workers.  There's a tool that graphs your LinkedIn connections that I don't recall how to find, but what I do remember was that it was a quite accurate reflection of my employment and education history.

Want to know who your competitors are, or the competitors of another business?  Find them on LinkedIn.  Then find past employees of the organization and then where they are now.  There's a pretty good chance (especially in specialized industries) that it will be a competitor. The more specialized the industry, the greater the chance of it being a close competitor.

Want to find out what your competitor is doing technology-wise?  Have a look at the technology skills of their employees.  Do they know anything about technology X?  Go look.  How many people on Twitter mention X in their profile?  How many followers do they have? What's their Klout score?

Want to find out how much of your market is using Java vs. .Net?  Search for those skills and then filter by companies in your network. Want to know where a competitor has offices?  Find out where their employees are.

How big is the company?  Easy for publicly held firms, just go check their 10-K.  But what about privately held companies?  Is that a 200 person company or a single person consultancy? Look at how many people list them in social media.  Find their LinkedIn page and the answer is right there.  If they don't have one, find out how many are on social media, and compare that number to those of similar companies where you do know the size.
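The comparison described above can be sketched as a simple calibration: take the ratio of real headcount to visible social-media profiles at similar companies of known size, and apply it to the target. All the numbers and names here are invented for illustration.

```python
# Hypothetical sketch: estimate a private company's headcount by comparing
# its social-media presence to that of similar firms of known size.

def estimate_headcount(target_profiles, comparables):
    """target_profiles: people listing the target company on social media.
    comparables: list of (known_headcount, profile_count) for similar firms."""
    # Average ratio of real headcount to visible profiles among the peers.
    ratios = [headcount / profiles for headcount, profiles in comparables if profiles]
    avg_ratio = sum(ratios) / len(ratios)
    return round(target_profiles * avg_ratio)

# Example: peers with known sizes and their visible profile counts (made up).
peers = [(200, 150), (500, 400), (50, 35)]
print(estimate_headcount(120, peers))
```

It's a fuzzy estimate, of course, which is exactly the point of the next paragraph: some of this data is hard and accurate, some of it gray.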

Looking at a potential location?  How many people in this city (or state or nation) have these skills?  The right query and filter will answer that for you.  An acquisition?  Same question, different filter.

There's an amazing amount of data in those links.  Some of it is hard and accurate, others fuzzy and gray.  Use it now while you can, because I'll guarantee that eventually you'll have to pay for it.  So get the value out of it while the getting is good.

And remember, anytime someone adds something new to your landscape, understand that there's always something of value that can likely be found underneath, above, inside, beside or behind it.  Even rocks deserve looking under.

Friday, August 23, 2013

IHE USA's Registration Kick-off Webinar | August 27, 2013

In my inbox this afternoon ...

Prepare Your Team for Health IT's Largest Interoperability Testing Event:
The IHE North American Connectathon 2014

Integrating the Healthcare Enterprise's (IHE) North American Connectathon provides an unparalleled opportunity for interoperability testing and problem resolution. Take an active role to advance your products and test at the IHE North American (NA) Connectathon, January 27-31, 2014. To learn more, register for the Registration Kick-Off webinar.
Discover the Benefits of Connectathon Testing
Reduce Development Costs and Time to Market:
  • Debug systems within minutes leveraging a broad cross-section of industry partners on-site
  • Leverage 15 years of test tools developed in partnership with IHE and NIST
Build Quality into your Products:
  • Certify your products for interoperability using an ISO accredited lab and ONC-authorized certification body
  • Implement best practices using IHE Integration Profiles to enable key interoperability capabilities
  • IHE's structured, supervised and independent testing environment ensures the highest quality products

Meet Industry's Standards for Interoperability:
  • New! Test emerging standards offered by industry partners including Continua, Health Story Project and ONC S+I Framework Health eDecisions
  • Prepare for MU Stage 2 certification requirements focused on Consolidated CDA integration
  • Prepare for integration with key North American initiatives that leverage IHE Profiles including:
    • ONC S+I Frameworks
    • Meaningful Use Stage 2 Certification
    • HealtheWay
    • New York eHealth Collaborative
    • Care Connectivity Consortium
IHE Connectathons are held annually across the globe. The NA Connectathon is sponsored by IHE USA in collaboration with IHE Canada. To learn more visit our website or contact
Join the conversation. Follow us on Twitter at @IHEUSA or visit our YouTube Channel.


Thursday, August 22, 2013

Information Modeling vs. Implementation Representations

I was thinking last night about how HL7 modeling works going back to the RIM, and how the methodology provides a lot of knowledge relevant to the information model, but that knowledge hardly changes in the communications. Then I was contrasting that rather detailed methodology to some simpler approaches.  FHIR for example, provides one or more mappings from a resource to HL7 V2, V3 or other specifications. Then I looked at the VMR Logical Model.

Through all of this I was trying to determine how an XML Implementation Technology Specification might work that would allow the generation of simpler XML for HL7 Version 3 specifications, and potentially tie FHIR resources more formally back to V3 models.  I don't really have a huge need for all this formalism by the way, but my brain likes to play these tricks on me while I'm trying to go to sleep.

I thought about the rules to generate XML from the VMR UML. That was pretty easy. I realized the missing piece was not actually the translation to XML, but rather the linkage back to the RIM.  And then what popped into my head was the idea that UML logical models such as the VMR map back to information models such as the Clinical Statement through a variety of applied patterns or templates that could be described through a (possibly parameterized) stereotype.  The stereotype and parameter values provide the information modeling knowledge that allows for semantic translation back to the HL7 RIM.  The UML model provides the structure that is used for developing the XML or other representation of the content.

You can apply these stereotypes to classes, relationships and attributes of a UML model.  The parameterized stereotype for a class might well establish how one sets an information context from which attributes and relationships of that class in a logical model access information via the reference information model.  For example, I can now take the class representing the FHIR Condition Resource and associate it with the RIM Condition Class via a UML Stereotype.  I can attach to the severity attribute a UML stereotype representing an "Annotation", with parameters indicating that the type of annotation is "Severity" (from an appropriate coding system).  That stereotype can be associated with a RIM model fragment where the content of the severity attribute in the UML model represents the value attribute of an observation class associated with the base observation by a "subject of" relationship.
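To make the idea concrete, here is a small sketch of what a parameterized stereotype might look like as data. This is not an HL7-published format; the stereotype name, the RIM path notation, and the "SEV" code are all illustrative stand-ins.

```python
# Illustrative sketch (not an HL7 artifact): a parameterized "Annotation"
# stereotype that ties a logical-model attribute back to a RIM pattern.

ANNOTATION = {
    "stereotype": "Annotation",
    # RIM fragment the stereotype expands to: an observation related by a
    # "subject of" relationship, whose value carries the attribute content.
    "rim_pattern": "Observation[code={annotationType}]/value",
}

# Applying the stereotype to the severity attribute of a Condition-like
# class; the annotation type code "SEV" is hypothetical.
severity_mapping = {
    "class": "Condition",
    "attribute": "severity",
    "stereotype": ANNOTATION["stereotype"],
    "parameters": {"annotationType": "SEV"},
}

# Expanding the parameterized pattern yields the RIM path the attribute
# maps to, carrying the modeling knowledge without putting it on the wire.
rim_path = ANNOTATION["rim_pattern"].format(**severity_mapping["parameters"])
print(rim_path)
```

The point is that the mapping lives in the model, not in the message: a generator can use it to produce the simplified XML or JSON, while the semantics remain recoverable.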

Essentially what this allows is the creation of simplified, nearly arbitrary UML logical models, related back to the RIM and so providing full semantic information in the standard.  And while the models can be nearly arbitrary, there are some natural patterns that would appear because they become the easy way to group or relate things together.  One can readily generate XML or other representations from these logical models, for example, as FHIR already does to generate XML or JSON.  And you have full RIM semantics stored in the model, but not necessarily needing to be conveyed in the message because it's knowledge captured in the model.

I touched on this idea that the domain knowledge in the model need not be transmitted in every message briefly back in 2009 in Synthesis, and also a little bit in A Triangle has Three Sides.  [One of the nice things about having a blog is being able to see where you wrote about similar issues].  We've been struggling with this idea of simplification in HL7 for quite some time.  I think this idea might provide the bridge between current V3, FHIR and CDS artifacts.

Tuesday, August 20, 2013

Funded vs. Unfunded Standards Development

I've noticed a trend of late.  A significant number of standards efforts are being pushed forwards by funded projects, instead of through unfunded initiatives.  I'd call them "volunteer-led" initiatives, but in an SDO, there are few real volunteers.  I get paid by my employer to participate in standards activities, as do most people I know who are involved.  It is simply a matter of direct vs. indirect economic benefit.

What I find disturbing about this trend is that unfunded initiatives that get significant industry backing are where I think the gold is in standardization.  And all these funded initiatives mean that there is more competition for volunteers and mindshare (yes, volunteers still contribute a big chunk to funded efforts).  It makes it difficult to develop truly innovative efforts.

Let's look at a couple of examples on either side:
FHIR is an unfunded initiative being led by HL7 members.  The CCD efforts were also unfunded.   On the other hand, CCDA was certainly backed by ONC, and CMS backed development of HQMF Release 1 and 2.  And then there's the CDC backed HAI work.

From an innovation perspective, I think the unfunded efforts have more industry value.  I find it difficult to identify a funded effort that created something new, rather than refined something that already existed. Although I do believe that HQMF was one of the more innovative funded efforts (but it also received quite a bit of unfunded support, so maybe that made a difference).

Well, you can make a business of just about everything someone values, and I can't argue against that.  But it does make me wonder where the next innovation will come from, and if we won't be able to do it because we are working so hard on everyone else's agenda items.  Do we really need a seventh release of the HAI specifications (I'm NOT kidding)?  Isn't there a better use of time?

I'm not certain which is the better way to go, or where the right balance is.  It just seems somewhat disturbing.


P.S.  One of the challenges with some funded efforts is that often the customer for the work isn't deeply involved in the work, which can make it harder to figure out what their real requirements are.

Monday, August 19, 2013

What is a standard made of?

Everyone knows what a standard is, right?  It's got a schema, and some XML and some UML diagrams.  Somewhere there are references to some fundamental pieces, and there's a set of services that are being defined.  Well, uhm, yes, maybe, or maybe no.

Here's a short list of things that you might find in a standard, or which appear as a standard.
  • Actors
  • Collaboration
  • Conformance Statement
  • Data Types
  • Domain Analysis Model
  • Implementation Specification
  • Implementation Guide
  • Information Model
  • Interaction Model
  • Logical Model
  • Glossary
  • Schema
  • Service Specification
  • Use Cases
Which of these are standards and which of these are parts?  It's a trick question.  Every one of these has been the principal component of some form of standard or another.  And I'm also missing a bunch of parts.

Understanding the parts, and how the pieces fit together, and how "your" standards community uses them is one of the biggest challenges for new standards developers.  

There are two failure modes that commonly crop up with new standards participants:
  • Assuming that because they have something to teach, they have nothing to learn.  If you cannot speak in the language of your students, how will you connect with them?  Especially if you aren't already in a traditional teacher/student set of roles, but rather in the role of equals?  This is especially a problem with experienced developers who are just being introduced to a new body of work or an SDO for the first time.
  • Learning the vernacular, buying into the methods, and forgetting to look beyond it.  There's never just one way to do things. This happens more often with less experienced developers. Once you get it, see how you can apply it to something else, and alternately, how you can apply other things to it.  If you think your method is perfect, think again.  It isn't.  At least as far as I have seen in any SDO I've been engaged with.
Interestingly enough, those same two failures crop up pretty often in the old guard as well.

Here is my suggestion for how to approach this: 
  1. (Pretend to) throw away what you know. 
  2. Learn the vernacular of the body that you are working with. 
  3. Internalize and incorporate it into what you know, and vice versa.
Lather, rinse and repeat as often as necessary.

Friday, August 16, 2013

Evaluating Standards

I've been involved in several national and international efforts where at some point, a standards selection needs to be made.  This happens in IHE, has occurred in the past in HITSP, and frequently occurs in the S&I Framework projects.  It also occurs in the HIT Standards Committee, often based on recommendations of various sub-committees.  And in the past, when I was active in Continua, we spent time on it there as well.

Overall, the process worked pretty much the same way: the first step was to investigate the possibly relevant standards, then they would be evaluated, and finally we would propose which would be used.  Each organization that did the work had different criteria that it evaluated the standards against, but they often came up with the same general set:

  • Fairness
  • Appropriateness
  • Cost to Use
  • Maturity
They all approached it somewhat differently, some with more process and some with less process.  


Fairness

One criterion that is sometimes overlooked has to do with the development process and availability of the standard.  In all the processes I've been involved in, there is some time spent on availability, but not a lot on other aspects of development.  The need for a standard to be freely accessible to the public has emerged as an important criterion in various initiatives worldwide (and it helped sell the idea of freely accessible IP to the HL7 board).  The other issue is who can participate in standards development and whether that process is fair.

There are a couple of ways in which this criterion is approached, but for the most part, there are few cases where it has ever become hugely relevant.  The real issue that is never addressed or put on the table is dominance of participants in the standards making process.  You can usually find multiple standards supporting the same capability, and different players and 800-lb gorillas in different places.  

I've rarely seen a standard be dropped from the selection process because it was unfairly produced.  Usually what happens is that it doesn't make it on the list because it does not get past the first stage of identifying standards.  "Oh, that's not a standard, that's ____'s proprietary way of doing it."   If that is not really a questionable statement, it's not worth spending a lot of time on.

I have seen some initiatives fail to list appropriate standards in the initial list, and that usually has to do with dominance of one group in the identification process.  It's usually an easy problem to spot and fix, but it is often a sign of struggles to come in the selection process.


Appropriateness

Is the standard appropriate to the task at hand?  Does it meet the needs of the use case?  Was it designed for this use case, or a closely related one? Will it work?  Does it add value?  Is it technically appropriate?
More often than not there will be multiple choices to do the job, each equally capable as far as real-world engineers are concerned.  I'm not talking about uber-geeks here. If the average engineer cannot see a difference, then the difference is for the most part, not worth discussing.  It isn't what the experts think, but rather what the engineers in the field are going to do with it that matters.  Ubergeeks specialize and know all the ins and outs and details of the standard.  We can chest-thump our favorite standards with the best, and often do.  Ignore the chest-thumping gorillas and go look at what the rest of the geeks are doing.

I've watched a lot of standards selection processes go into deep rat-holes on appropriateness, when in fact, what is really happening is that other battles are being fought (e.g., cost).  Focus on what is relevant.


Cost to Use

This has two aspects, the second of which is always difficult to evaluate.  The first aspect is how much it costs to acquire or license the standard for use.  This is pretty easy to work out.  The second aspect has to do with what it costs to deploy the standard in product.  This is rarely investigated deeply because few really have the time or inclination to develop a full-on project schedule to do a cost comparison, and those schedules are only initial battle plans that will change immediately upon trying to execute them.  Cost and maturity can be quite intertwined.

One really good way to address cost is to look at ease of use.  Evidence of that can be found by looking for maturity.


Maturity

The question of maturity really isn't about how long the standard has been around, but rather about its prevalence and support for it in the market.  A standard like XML was already rather mature shortly after it arrived simply because it was a refinement of the existing SGML standard, and could be supported by SGML processors.  A ten-year-old standard that nobody has implemented is a lot less relevant than a specification that isn't even a standard yet that is being implemented all over the place.

There are a lot of different ways you can get at maturity: How many open source implementations are there?  How compatible is it with existing technology?  How much technology uses the standard?  These all get at what the standard's position is in the market.

You don't need to ask an uber-geek what the market position is, in fact, that's not the best idea, because each uber-geek has his or her own set of tools they like to use, and they will definitely be influenced by their preferences.  That's what market research companies do.  Can't find the research?  That in and of itself is an indication, and usually not a good sign.

A couple of good substitutes for maturity:  How many books on Amazon can you find on the standard?  How many classes or education or certification programs can you find on the standard?  What does Google (or Bing) tell you?  Look not only at the number of hits, but also at the diversity of sources (1000 hits from one source just means you've found ONE authority).  Where are the Internet communities discussing it?  If the standard is being discussed on Stack Overflow, that's probably a good sign.
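The diversity check above is easy to operationalize: rather than counting raw hits, count how many distinct sources are talking about the standard. A rough sketch, with made-up URLs:

```python
# Sketch of the "diversity of sources" check: 1000 hits from one domain
# means one authority, not broad adoption. The URLs below are invented.

from urllib.parse import urlparse
from collections import Counter

def source_diversity(hit_urls):
    """Count hits per domain; many distinct domains suggests real presence."""
    return Counter(urlparse(u).netloc for u in hit_urls)

hits = [
    "https://example-sdo.org/spec",
    "https://example-sdo.org/faq",
    "https://stackoverflow.com/questions/1",
    "https://dev-blog.example.com/post",
]
diversity = source_diversity(hits)
print(len(diversity))  # distinct sources discussing the standard
```

Four hits from three domains says more about market presence than a thousand hits from the standard's own publisher.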

Putting it all Together

Different people and projects will have different concerns about fairness, appropriateness, cost and maturity, and so will apply different weights to each of the various aspects.  And proponents of one standard over another will often weight the various factors to favor their preferences.  Weights are subjective inputs and will influence selections.  There is no single right answer here.  It depends on what you are trying to do.

Don't expect the amount of process that you put into the evaluation to be all that influential in the credibility of the assessment.  That is a lot more dependent on how deep you really dig into the facts vs. how much you rely on opinion and rhetoric.  The IHE process is fairly lightweight, but the assessments are pretty quick and well informed.  Other processes are fairly rigorous (e.g., HITSP), but the rigor of that process didn't substantially improve the credibility of the assessment.  If you are assessing standards on these axes on a scale of 1-10, you are probably already too fine-grained.

A three or five point scale will provide a great deal more reliability, and will also help you identify cases where there truly are equally capable competing standards.  It's important to recognize when you have that situation, and to make a conscious decision.  Even if the reason that you select standard A over standard B is simply that you like or are more familiar with Standard A, that is still a good reason for choosing standard A when all else is equal.  After all, you have to live with those choices.
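The coarse-scale, weighted approach described above can be sketched in a few lines. The criteria come from the post; the scores and weights are invented, and in practice are exactly the subjective inputs that will sway the outcome.

```python
# Sketch of the scoring approach: coarse 1-3 scores per criterion and
# subjective weights. Scores and weights below are purely illustrative.

CRITERIA = ["fairness", "appropriateness", "cost", "maturity"]

def weighted_score(scores, weights):
    return sum(scores[c] * weights[c] for c in CRITERIA)

weights = {"fairness": 1, "appropriateness": 3, "cost": 2, "maturity": 2}
standard_a = {"fairness": 3, "appropriateness": 3, "cost": 2, "maturity": 3}
standard_b = {"fairness": 3, "appropriateness": 3, "cost": 3, "maturity": 2}

a = weighted_score(standard_a, weights)
b = weighted_score(standard_b, weights)
print(a, b)  # a genuine tie: fall back on a conscious choice
```

With this weighting both standards score 22: exactly the "equally capable" case where familiarity is a perfectly defensible tiebreaker.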

Thursday, August 15, 2013

A Low Res HITECH Destination

Here it is.  What is it?  Why, it's where you can send me my data, Doc.
Crafted with QR Pixel

Wednesday, August 14, 2013

What do you call that?

You know, that thing that has the same name as itself because it can be composed of smaller versions of itself?  Like an order, which can have sub-orders, or a section which can have subsections.  Except that they only go so deep for the most part.  But when you talk about them, you have to be careful or you wind up with confusion as to what level you are at in which point in time.

In this particular case, what I'm looking to identify is a line item on an order, in a way that is quite clear to everyone what is meant.  The challenge is that the notion of order, as one transfers from a CPOE system to other systems can actually involve several separate orders being distributed external to the CPOE system.

I think the right way to describe what is entered into the CPOE system is to use the term "requisition", that being the list of things that the provider wants to give or administer or have done to/for the patient.  And the right way to describe the external results of the CPOE process are the generation of one or more orders related to that single requisition.  But I find people who are comfortable with the CPOE process or with the order receipt process to all be very comfortable with their own notion of what an order is, even though they have different "views" of it.
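The self-similar structure in question, and the requisition/order distinction proposed above, can be sketched as a small recursive data structure. The class and field names here are mine, not drawn from any standard.

```python
# Minimal sketch of the proposed vocabulary: a "requisition" (what the
# provider enters in CPOE) fans out into one or more "orders", each of
# which may contain sub-orders -- the thing named after itself.

from dataclasses import dataclass, field

@dataclass
class Order:
    description: str
    sub_orders: list = field(default_factory=list)  # orders can nest

@dataclass
class Requisition:
    """What the provider wants done to/for the patient."""
    orders: list = field(default_factory=list)

def line_items(order):
    """Flatten an order into its leaf-level line items."""
    if not order.sub_orders:
        return [order.description]
    items = []
    for sub in order.sub_orders:
        items.extend(line_items(sub))
    return items

req = Requisition(orders=[
    Order("CBC panel", sub_orders=[Order("Hemoglobin"), Order("WBC count")]),
    Order("Chest X-ray"),
])
print([item for o in req.orders for item in line_items(o)])
# ['Hemoglobin', 'WBC count', 'Chest X-ray']
```

Naming the levels distinctly means you can always say which one you mean, which is the whole problem with calling every level "order".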

It's all very confusing, and sometimes I wish English was more precise, and sometimes I wish people were more precise, and sometimes, well, sometimes, I just don't care what it is called, just so long as we both understand what we are talking about.  But it would be nice if there were a single word for that having just the right nuance that everyone understood.


Tuesday, August 13, 2013

What's the Rush?

I got into a debate... no, not really, call it a fight with one of the SD co-chairs today over our rush to get CCDA Release 2.0 out the door.  I've been asking why we are trying so hard to get it out, and I finally hear today that ONC would like it finished by December.  Now, when the project was first discussed (around the Working Group Meeting in January), that was a perfectly understandable deadline.  After all, if we wanted something that could be referenced in the Meaningful Use Stage 3 NPRM, having a DSTU Ballot in September is ideal.  We could even have the final document perhaps ready in time to be referenced by the NPRM (subtract a few months from the NPRM date if you want to align with regulatory requirements), or at least very close.

Things are slightly (to say the least) different since CMS announced that Stage 3 wouldn't arrive until 2014.  Several people I've spoken to (although none in any Federal agency) seem to believe that Q3 or Q4 is likely, and I agree.  Although trying to get anyone to admit that in government would be next to impossible.  Even so, it's pretty reasonable to assume that deadlines are just a bit looser than they were when we were first discussing this project.

Why do I want to wait?  My main reason is that I would really like the next edition of Consolidated CDA to do two things:

  1. I want it to version templates the way that the forthcoming Templates DSTU proposes.
  2. I'd like one of the adjuncts to the C-CDA DSTU to be in the new machine readable Templates DSTU format.
There are a couple of reasons for this:  My biggest complaint about the CCDA was not the changes we made to it, but the fact that SO MANY of the template identifiers changed, and the amount of work that created for implementers.  We HAD to change the identifiers, but there were several better ways to handle that.  That's ONE of the things that the Templates DSTU (and the IHE CDA Harmonization profile) are trying to work out.  The cost of dealing with template identifier changes is not something I want implementers to have to bear again.  I know how much time I already spent on it here.
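To make the versioning point concrete, here is a sketch (my own illustration, not the DSTU's normative syntax) of how versioned template identifiers could look in a CDA instance: the template's root OID stays stable across editions, while an extension attribute identifies the specific version.  Implementers can then recognize the template family even when a new edition is published.  The root OID below is the C-CDA Allergies section; the extension date is invented for illustration.

```xml
<!-- Sketch only: the extension value is invented for illustration. -->
<section>
  <!-- The root OID identifies the template family and stays stable
       across editions; the extension identifies the specific version. -->
  <templateId root="2.16.840.1.113883.10.20.22.2.6.1" extension="2013-08-01"/>
  <code code="48765-2" codeSystem="2.16.840.1.113883.6.1"
        codeSystemName="LOINC" displayName="Allergies and adverse reactions"/>
  <title>Allergies</title>
  <text>No known allergies</text>
</section>
```

Under a scheme like this, a receiver that only understands the template family can still match on the root, while one that cares about the edition can match on root plus extension.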

Meaningful Use Stage 4 isn't out of the question.  I think that before we go into Stage 3, we need to have a solid basis for how we manage change in templates before we issue a new edition of CCDA.  And our current mechanism isn't it.  It doesn't matter how much documentation you provide, until it is provided in a standard, machine readable format, it isn't going to be easy to update EHR software to support new editions of the standard.  I've been through this several times already, and I know many of my colleagues feel the same way.

My other big issue is trying to align international projects with the CCDA.  Having template identifiers change underneath me just as I start to get things aligned makes it nearly impossible for me to get anything done.  IHE and HL7 agreed to synchronize on CCDA.  I'm almost done with the IHE synchronization.  Given the extra half year needed to polish CCDA into Release 1.1 for Meaningful Use, IHE is a full cycle behind.  I don't want to have to redo all that effort and analysis because of the new template identifiers, or figure out how to piece these things back together using the new machine readable change log I've been promised.

So, if I'm right, and Stage 3 isn't "just around the corner", why are we killing ourselves trying to get it done, instead of trying to do it right?  I'd rather spend my time getting the Templates DSTU out in a timely fashion than fight yet one more battle about speed vs. quality.  Then I'd have machine readable CCDA specifications, based on a standard, and that should make the whole industry happy.  I could download the templates into MDHT, and just generate code.  That would make me joyous.

It might also be nice to have a little bit of CCDA experience in production behind us before we finish Release 2.  Implementation experience is valuable, but the real challenges won't start showing up for a few more months, when healthcare providers start using the software that EHR implementers have been building and certifying.

I'm guessing that one of the factors at work here is that money is running out of ONC's war-chest.  The chest was already almost empty, but they appear to have managed to move some research dollars into CCDA improvement efforts.  That could be ONE reason why the push is still on.  If so, it's the wrong reason.

Monday, August 12, 2013

What are Procedures really?

As always when you try to standardize things, there are just some things that refuse to fit into any well defined or shaped bucket.  My case in point today is the concept of "Procedure".  I made the happy mistake of observing on the FHIR list that an HL7 RMIM being used to illustrate a question was incorrectly using the V3 Procedure to describe observations.  After all, every V3 expert knows that the definition of Procedure involves the intention to change the state of the subject.  This provides a fairly decent white line from which one can adequately distinguish acts whose principal intent is to gather data from those which are intended to do something to the patient.  The reason this was a happy mistake is that it gives us the opportunity to think over a complex, contentious, and previously resolved issue in V3.  Maybe we can do it better this time in FHIR.

That line is certainly blurred when you get into procedures like colonoscopy (where polyps can not only be observed but also removed), or coronary catheterization (with possible angioplasty or stent insertion).  And there are other diagnostic procedures which have been known to also have a therapeutic effect.

However, the medical nomenclature used by physicians, nurses, and perhaps most importantly, administrative staff makes this distinction quite a bit more troublesome.

I observed that a procedure often involves some degree of risk outside of the norm, but wasn't certain whether that was a useful distinction.  Others observed that administratively, things like venipuncture have a procedure code and appear on the bill, but don't necessarily show up on the patient chart with any great frequency, nor do they have much clinical significance (in most cases).  Others point out that abdominal ultrasound probably has less risk to the patient than venipuncture (although perhaps not from the perspective of a device manufacturer).

One definition contributed by Cecil Lynch that I particularly liked is: "An Act that requires the alteration of the physical condition of the subject or the investigation of a body space not amenable to direct observation without instrumentation."  It sort of gets at the notion that this isn't the run-of-the-mill observation or in-office activity.  But again, we get to shades of gray that make it hard to figure out what to do with other procedures.

One of the things that I'd like to see is a clear distinction being made between the "billable event" and the "clinically relevant" activity.  What (usually) makes a procedure worth noting to a healthcare provider?  What does it tell them?  Is it merely a point of reference and/or inference?  Why would they care about it?  Perhaps if we could get at the why, then we could get at the what.

Until then, and even afterwards, there will always be a big gray area.  And I don't expect that will ever change until how we pay for care is much more aligned with how we provide it.

How would you describe it?

Friday, August 9, 2013

They got me started

OK, so ONC got me started yesterday on this topic with their call for participation.

This project arises from the idea that it would be a way to execute on a memorandum of cooperation between the European Commission and the US signed by Secretary Sebelius back in 2010.  For some time nothing really happened.  Then there were some high level meetings between ONC and EU, and still, nothing really has happened.  And now we have an ONC S&I Framework project that wants to do the following.
  1. Create initial set of use cases, based on community and stakeholder input
  2. Compare the data/document structures used in the US and EU by comparing the consolidated CDA (C-CDA) and the exchange standards used in epSOS
  3. Compare US and EU vocabularies, terminologies and clinical models relevant to selected use cases to identify areas of overlap and commonality
  4. Create best-effort Template and Terminology Mappings for selected use cases based on an agreed methodology
  5. Create strategy briefs on long term adoption and sustainability of EU/US health record exchange focusing on 7 areas: cross-vendor integration; education; security and privacy; incentives; standardization; innovation; research
  6. Identify available resources and opportunities for aligning them (technology and standards to support ongoing collaboration with vocabularies, modeling, and interoperability)
  7. Create a common interoperability testing strategy including testing plan and tools
  8. Agree on specifications, standards, architecture, tests for the validation pilot
  9. Validate selected use cases in the transatlantic setting
  10. Release feasibility analysis document for EU/US health record exchange

#1: Use cases are how S&I works, and I don't recall a project that started without them.  In this case, though, I'd skip right to the next items, and save use cases for phase 2.  There's enough work to keep an army of experts busy for the next three years.  We already have the use cases in the US and in the EU.  If you must have a use case, call it medical summary and be done.  Let's not waste much more time on it.  It is, after all, the most commonly chosen first use case in any regional program, and it's pretty clear from the rest of the list that that is where we need some alignment.

#2: On comparing the data structures, I'm so glad you asked.  epSOS is principally based on the IHE PCC Technical Framework, plus extensions mostly in support of e-prescribing.  CCDA comes from the HITSP C32, which added further constraints on the PCC Technical Framework.  They both originated from HL7 CCD.  IHE has been in the process of harmonizing its technical framework with CCDA for the last two years, led by yours truly.  We are readying content for public comment in a few weeks, and are considering taking some of it through the HL7 ballot process.  Please use that process to your advantage in this activity.

#3: Comparing US and EU vocabularies and terminologies: My advice would be to focus on how to harmonize medication vocabularies.  That is the biggest problem internationally.  The second biggest problem is finding a vocabulary or value set that supports allergies.  Problems and labs are best addressed through international efforts like SNOMED and LOINC, especially given their recent announcement of cooperation.  Get out of their way, and I mean it.  Don't throw up any roadblocks or create any distractions; it has already been too long in coming.  Vocabulary alignment belongs to Workgroup #4.  Deciding which of those vocabularies are important belongs to Workgroup #3 (see #5 below).

#4: For template and terminology mappings, I really hope you'll take a look at the existing IHE work which also has been too long in coming.  It will address quite a bit of what you are trying to deal with here.  My advice: Don't redo, rather reuse.

#5: This is actually a really good one to start with.  This should be a third workgroup though.  It's neither tactical nor technical, but rather strategic.  Let's not confuse them.

#6: This is strategic and gets assigned to Workgroup #3 again.  And if you want to align resources, it would be good to start talking with them before just haring off and doing, and expecting them to follow your lead.  ONC and the EC could quite readily have approached IHE, ISO, HL7 and other SDOs to create a global summit addressing standards alignment.  But it seems that ONC thinks that S&I Framework is the right place to start that.  I'm thoroughly unconvinced, mostly because of how my international colleagues view ONC.  At the doer level, ONC doesn't have the reputation or standing to pull it off.  They have done more to annoy international standards developers than they have to recruit them to their cause.  ONC stumbles around the international front like a clumsy American tourist in Paris who thinks that people will simply understand them if they speak slowly and loudly enough.  These might be my words, but I'm echoing what I'm getting back from the international community.

#7: I can think of at least one organization that does testing internationally and is already quite involved in EU projects.  Have you considered working with them?  Hint: They do this thing called a Connectathon.

#8. Save this for phase 2, after we've done some aligning, and identified a common use case.

#9. Again, save this for phase 2.  And do you mean test it and prove out the concept, or do you mean pilot something, or is there some expectation that something useful and lasting would come out of this?  I want to see the latter, not the former.  Please save your pilot for something that you are pretty sure someone would pony up for, rather than hoping that if we build it, they will come.

#10.  Phase 2.  Like I said, there's enough work for multiple years in this proposal.  Let's work first on alignment around a particular topic (e.g., templates for a patient summary), and then move on to actually doing it once we have it.

So, overall recommendations: break this into some doable phases; coordinate with international SDOs; add a workgroup dealing with strategy and one on vocabulary; and finally, pay attention to what others are already doing in this space.

Thursday, August 8, 2013

Call for Participation EU-US eHealth Cooperation Initiative

This just crossed my desk.  Don't EVEN get me started.  Ah, damn, too late.  But you won't have to read it until tomorrow.

   -- Keith

Call for Participation
EU-US eHealth Cooperation Initiative

The ONC Office of Science & Technology and DG CONNECT invite you to attend the EU-US eHealth Cooperation webinar on Wednesday, August 14, 2013 from 10:00am - 12:00pm (ET)/4:00pm - 6:00pm (CEST).

This international initiative will focus on developing a global health information technology framework. The framework seeks to support an innovative and collaborative community of public- and private-sector entities working towards the development, deployment and use of eHealth science and technology to empower individuals, support care, improve clinical outcomes, enhance patient safety and improve the health of populations.

This initiative will be composed of two Work Groups:
  1. Interoperability Work Group: working to develop internationally recognized and utilized interoperability standards for EHRs; and
  2. Workforce Development Work Group: working to create strategies for the development of skilled health IT workforces.
These two Work Groups will come together on Wednesday, August 14, 2013 at 10:00am (ET)/4:00pm (CEST) to discuss project activities and will then meet separately on a weekly basis:
  1. Interoperability Work Group will meet every Wednesday from 10:00am - 11:00am (ET)/4:00pm - 5:00 pm (CEST) starting August 21, 2013.
  2. Workforce Development Work Group will meet every Thursday from 10:00am -11:00am (ET)/4:00pm - 5:00pm (CEST) starting August 22, 2013. 
To stay current with the progress of the EU-US eHealth Cooperation initiative and its Work Groups, participants should register by visiting the Project Sign Up page located at For more information, please visit the EU-US eHealth Cooperation’s Wiki page located at

Your experiences, thoughts, expertise and solutions are critical to the success of this international collaboration.  We look forward to your participation and getting to know you as part of this new initiative.

Authorship is not the same as Legal Authentication

One of the challenges I keep running into in the use of CDA is a set of misunderstandings of its purpose and, resulting from that, misinterpretations of what should be done with content within it.  In this case, an assertion was made that a CDA document is created as an extract from an EHR.  The subsequent claim is that each piece of data in the EHR carries some legal authentication provenance, and that this should be attested to within the document.

While extraction from an EHR is one way in which a document can be created, it is certainly not the only way, and even in that case, the legal authentication of data in the EHR is still happening at the encounter level, not at the level of the datum.

The focus of the CDA standard is to provide documentation of services performed in the context of a clinical encounter.  For the most part, these are commonly understood to be the reports that providers produce in a consultation, physical examination, intake or discharge from a hospital, or when reporting on a procedure or surgical operation.

In all of these cases, the document serves as the legal and medical record of the encounter.  As such, there really can be one and only one final legal authenticator to meet both medico-legal and accreditation requirements.  That attestation indicates that the provider is taking legal responsibility for the combination of content in the document.  That doesn't mean that they necessarily agree with everything that appears in that record, but it does mean that the document is a true and correct representation of what occurred and of the information available to the provider during the encounter.  If there are questions about information contained in the document, the legal authenticator certainly should, and in CDA can, provide comments about their concerns regarding any data represented in the document.

I don't want to see an evolution where every datum that ever appears within a document has to carry provenance indicating who originated it and when in order to assign final legal responsibility to that datum.  At some point, the provider taking care of the patient has to agree to that datum in documenting the encounter, and that agreement is a case where there needs to be an assignment of legal responsibility.  If they were misinformed and acted upon it, there is still a case for them being responsible for checking on uncertain data.  After all, who but they are able to evaluate it clinically?  Furthermore, while final responsibility rests with the legal authenticator, anyone who is able to act as an author also has responsibilities.

The idea being promoted here seems to be coming from the need to avoid fraud.  But avoidance of fraud isn't the principal reason for documenting a clinical encounter.  Estimates of improper payments to Medicare range from 3 to 10% overall.  Yet the cost of developing an IT infrastructure which could track data to this level, enforce non-repudiatable attestation to it, and accommodate the necessary changes and interruptions in provider workflows would wind up costing healthcare a great deal more than we could save by doing it.  Is that really where we want to go?

I don't think so.  The point of the EHR is not to avoid fraud, but rather to provide for better care at reduced costs.  Let's not forget that goal and subsequently introduce more effort and cost into the healthcare process just because there is a technical capability that could be executed on.  A car running on hydrogen as a fuel is also technically feasible; why don't we see more of them?  It isn't about technical feasibility.

Wednesday, August 7, 2013

For Public Service

It should be obvious what this post is, and who it is about.  If not, I'll follow my usual pattern anyway.

Rules are rules, even when they are arbitrary and you made them up yourself.  There are two that I'm pretty inflexible about.  When it is deserved, it goes out, and when it is given for some contribution, it's never given for that same kind of contribution again.  I'd already mentally reserved this for someone else, but it turns out I was wrong, and it goes to somebody other than I'd thought.

If you had asked me three or four years ago what I thought of this person, I'd tell you he was a total pain in my posterior.  He had a vision that was not accomplish-able, some ideas that were unworkable, and an attitude about standards organizations that made his own organization unable to do its job well.  Some of my opinions still haven't changed, but others have.  I've known everybody in his position by name, and for the most part, everyone who has been in that position has also known me by name as well.  But he was the first person in that role that I really connected with.  It started at HIMSS a few years back, before he had his current role, when he strode up to me, held out his hand, and exclaimed "Motorcycle Guy!"  I was impressed.  He'd actually been reading what I wrote for him.

He's also the first guy in his position that I think really connected with the geek culture in Healthcare IT.  Whether or not I agreed with him, he's made me think, and think hard.  And he's done so for the rest of the industry as well.  He's had a huge impact on where we are right now as a nation.  I hope the next person in his position is just as strong, and just as pushy.  Because we all need it.

The first time he read what I had to write, it was probably routed to him via an e-mail that said "you should see what this guy has to say about X", where X was something he was pushing and I wasn't on board with.  I said he's been a pain in my posterior.  I'll certainly bet the feeling has been mutual.  And I'll bet that wherever he lands, he'll still be one.  That's not necessarily a bad thing.

When you ride a long way, a pain in the posterior has a way of making you shift your balance, get up off your rear, and hold yourself more upright.  If you slouch, the pain returns, and you have to reassert yourself again.  It keeps you going, and it makes sure that while you are getting yourself there, you are also doing so with attention and care, rather than sloppily.  He's been that kind of pain, and believe it or not, it does help.

Without further ado, this next Ad Hoc Harley goes to ...

This certifies that 
Farzad Mostashari, National Coordinator

Has hereby been recognized for his contributions in
Public Service to Healthcare IT.

P.S.  I'd never really thought Farzad would be the guy who got the first public service ad hoc, and I still can't half-believe I'm doing this.  It just goes to show you how he can make you rethink things.

Tuesday, August 6, 2013

On Models

I've had a lot of discussion with folks about models recently: as it relates to HQMF, as it relates to HIE, and as it relates to EHR products.  The question that almost always comes up is "What is the best model?", and/or "What model do you recommend?"  Unfortunately, my normal answer of "It depends..." fails to satisfy many listeners.  I suspect all too many are simply looking for an easy answer.  I thought I would try again, but differently.

The essentials of your model, whatever it is, are going to be something that everyone can agree on.  In developing a model for medical summaries, the top three and top five items are pretty easy to list out:  Problems, Medications and Allergies are the top three.  Few would argue that.  The next two are labs, and immunizations.  After that it gets a bit fuzzy.  For the top three, we know you need the name of each thing, and likely a code to go with it.  Dates are also important.  Anything beyond that is gravy.  We can all pretty much agree on these essentials.  My model and yours and the models of anyone else will contain these things.

Names in the model don't much matter, but definitions do.  If I call it problem and you call it concern and someone else calls it diagnosis, the names themselves aren't the issue.  But we often wind up comparing the words, rather than the concepts.  What matters is that my definition of problem pretty much matches somebody else's definition of concern.  If the concepts are close, we can approximate an exchange of meaning.  If I can put my thoughts down in a way that you understand, we've communicated.

I can hear the screams: "But what about semantic interoperability?  What you just said means that nobody will thoroughly understand the information being communicated."  Yep.  Nor is it any better even when we share the same mother tongue.  But it gets better.

Communication (and interoperability) is not a science.  It is an art form perfected over time.  What we are able to communicate initially will be rough, but useful, so long as we recognize the limitations [sort of like me asking "When is the toilet?" in Japanese (which does work)].  It will get better, and we'll have a deeper understanding of what is being communicated as time progresses (as I eventually learned my interrogative verbs). Nuances will begin to appear if the communication is sustained over time.

Continued communication creates the model, not the other way around.  If you doubt me, try this experiment.  Call a friend or family member with whom you spent a great deal of time in childhood, or with whom you went to school, and talk to them for a while.  Listen to yourself.  Can you hear your communication pattern change as you talk with them?  Does your accent from your childhood gain strength?  I'll bet it does.

Don't try to create a model that perfectly captures all the idioms between two systems.  Instead, create enough of a model to get started, and evolve it over time.  The model you evolve is one that is created with a number of small shifts in agreement, and the direction that it takes may surprise you.  I recall where we started with the HL7 Care Record Summary (CRS) and IHE XDS-MS profile.  I know many of the subsequent tweaks that have been applied as they evolved into CCD, XPHR, CDA4CDT, C32 and finally C-CDA.  Many idioms created in CRS and XDS-MS still live on, but new ones have evolved as well, including some pretty important ideas about how to say no and I don't know.

It took time to get here.  We (those of us using C-CDA) have a model that works for us.  Others (those using another model) have a model that works for them.  There's enough cross-talk that we can still communicate because at the core, we all have a common understanding.  That's not a standard, and the boundaries are fuzzy but growing and evolving.

You can start here (with CCDA) if you want, or you can start from there (pick another model).  If you have nothing, either is a good place to start.  If you do, you should know that by the time you learn it, those using it will have moved further on.  If you want to start with your own model, study those of others.  You will find enough similarities to develop enough of a model that you'll be able to communicate with others using the common core.

Language is a living thing.  Your model is only the first (or second, or third) approximation to it, and it too will evolve.

Monday, August 5, 2013

Is my next doctor as important as my next Doctor?

In the last nearly fifty years, I've probably had about as many primary care doctors as the BBC has had Doctors.  I've never been really fussy about my doctors, and oh so much more fussy about my Doctors.  Given all the fuss about the next Doctor, I thought I'd better rethink that.  After all, you put your life in your doctor's hands; surely you should have some incentive to be just as concerned:

  1. My next doctor will probably be male.
    I'm a guy, and while the idea of a doctor who is female might be titillating, it would also be very distracting.
  2. My next doctor will probably be tall and handsome.
    I'm going to need to trust this person pretty quickly, and rely on them quite a bit, so they need to appeal to me visually.  What can I say, I'm human.
  3. He'll probably not be ginger.
    Even among the Irish the odds are against that. Just sayin'
  4. More importantly, my next doctor will be there when I need him.
    Life doesn't happen on a schedule.
  5. I'll be able to reach him easily.
    My doctor will certainly be no more than a cell-phone call away, but I also expect his systems will allow me to access him through any communications media. 
  6. He'll have access to a plethora of technology I don't understand.
    Fortunately, he'll be able to explain it simply.
  7. But he'll be able to figure most things out using just one or two simple tools.
  8. He'll be able to adapt technology to his own use.
  9. He'll have had a LOT of experiences.
  10. He'll be brilliant.
  11. He'll be enthusiastic about his work.

As I read back through these, my requirements for my doctor pretty much match my expectations for my next Doctor.  Hmm. I guess he is pretty well-named.

-- Keith

Friday, August 2, 2013

Unclear on the Concept

I've been thinking about Concept Domains, and the Concept Domain Binding syntax in HL7, and how that applies to IHE's refactoring of its profiles to accommodate the CDA Consolidation guide.  A recent exchange on the Structured Documents workgroup list had me really scratching my head.  It had to do with interactions between allowed subtypes of Null Flavor on a data element, based on the coding strength associated with the concept domain binding to a value set.  Essentially, if the concept domain indicated that the value set being used was CWE (coded with exceptions allowed), then nullFlavor='OTH' would never be needed to represent a code from another coding system; but if it was CNE (coded with no exceptions), it would be.

My head hurt just thinking about it.

Now, let's add in the idea of MIN and MAX, where MIN specifies the set of values that all systems must support, MAX specifies the set of values that all systems could support, and ignorable, and all that.  (This information can be found in Core Principles.)

Clearly my head is about to explode. And if my head is about to explode, clearly there have already been some major aneurysms elsewhere in the Health IT world.

I get that the way to describe all the possible cases is important to vocabularists, but what I don't get is WHY implementations need the capability to bind using ALL possible variations.  What is the use case for MIN/MAX/IGNORED?  Why do I need all this complexity?

I don't see a need for specifying this at a regional or national (or multinational) level.  Systems that are implementing an actor (IHE)/application role (HL7) need to support the specified vocabulary without error. If you say the vocabulary is some dynamic subset of RxNORM, then in each interaction, both players must behave well in the presence of a valid code, and in cases where there is some dispute about the status of a valid code (for example, one of the systems hasn't applied the latest updates).

An implementable specification certainly needs to be written to the level of detail where you know what vocabulary terms will cause a system response, which may be stored but otherwise ignored, and what is acceptable.  However, for regional, national or multinational implementations (e.g., HIE), it seems that the most reasonable way to express vocabulary is by what actor pairs produce/consume within the context of an interaction.

From an IHE Profile perspective, the distinctions of MIN/MAX seem to be overkill, and in the CDA Harmonization efforts, I'm simply ignoring them.  Assume that MIN = MAX and IGNORED = the null set.  In terms of CWE/CNE coding strength, the key point is that implementations must be able to support CNE with a specified value set.  In the real world, those value sets may be expanded upon in an implementation or for a project (and could remain CNE, but with a new value set).  Specifying CWE in an IHE profile is basically saying: you are free to ignore the vocabulary binding if you want.  There is no point in having a constraint that can be ignored, so we won't use CWE either.

Thus, we will bind to a value set in the profile, and that is how it will be tested and what a declaration of conformance (known as an integration statement) means.  We don't expect these value-set bindings to be fixed in code, but rather to be configurable in product, so that implementations can adjust them as needed.
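To show the CWE/CNE distinction in instance terms, here's a sketch of my own (the codes and OIDs are invented for illustration) of what each coding strength means when the concept you need isn't in the bound value set:

```xml
<!-- Sketch with invented codes.  Under a CNE binding, a concept outside the
     bound value set cannot be sent directly; the common CDA idiom is to send
     nullFlavor="OTH" and carry the actual code as a translation. -->
<code nullFlavor="OTH">
  <originalText>Some concept not in the bound value set</originalText>
  <translation code="L-1234" codeSystem="9.9.9.9.9"
               displayName="Local code for the concept"/>
</code>

<!-- Under a CWE binding, the out-of-value-set code may simply be sent as-is. -->
<code code="L-1234" codeSystem="9.9.9.9.9"
      displayName="Local code for the concept"/>
```

Which is to say: under CNE, a receiver can rely on every directly coded value being drawn from the bound value set; under CWE, it can't, which is exactly why CWE amounts to an ignorable constraint.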

So, binding syntax?  We don't need no binding syntax.  I'm pretty clear on that concept.