Friday, December 31, 2010

What will we do with IT?

While others are thinking hard about how to make Healthcare IT better with new stuff, I'm thinking about how we can take what we've started with and use it in more interesting ways.  As David Tao points out, we are about to open the fire hoses with patient data here in the US.

According to NCHS survey data for 2007, there were more than 1.2 billion visits to office-based physicians, emergency rooms, outpatient centers (pdf), and hospital stays.  If 1 in 5 providers attains meaningful use next year, and assuming that accounts for 20% of visits and hospital stays (which is probably undercounting), that gives 240 million visits for which a summary document could be generated.  If even 5% of those visits generate a summary of care document using the HITSP C32, that will produce 12 million documents.  Most organizations, I would expect, will not even bother to ask, but will simply generate those documents.  At the very minimum, I would estimate that there will be something like 100 million summary documents floating around.  This is not a huge amount in the era of Google, but still quite a bit of data.  I personally expect an order or two of magnitude more than that.
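The back-of-envelope arithmetic above can be sketched in a few lines (the visit count is the NCHS figure; the percentages are the assumptions stated in the text):

```python
# Back-of-envelope estimate of summary documents generated,
# using the NCHS 2007 figure and the assumptions stated above.
total_visits = 1_200_000_000   # visits and stays in 2007 (NCHS)
mu_share = 0.20                # assumed share of visits at meaningful-use providers
doc_rate = 0.05                # assumed share of those visits producing a C32 summary

mu_visits = total_visits * mu_share    # 240 million
documents = mu_visits * doc_rate       # 12 million
print(f"{mu_visits:,.0f} visits -> {documents:,.0f} summary documents")
```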

What will we do with all of this data?
  1. Read it.
  2. Chart it.
  3. Index it.
  4. Normalize it.
  5. Measure it.
  6. Reduce it.
  7. Expand it.
  8. Merge it.
  9. Split it.
  10. Dashboard it.
  11. Evaluate it.
  12. Secure it.
  13. Store it.
  14. Anonymize it.
  15. Sell it?
  16. Buy it?
What IT do we need to deal with it?  This is a HUGE opportunity for research and innovation.  What will we do with it?

Thursday, December 30, 2010

Bored? Not really

I used up most of my vacation before the end of the year.  So, I have lots of time where there is nobody to talk to, no calls to attend, and nothing new to do (other than that which was asked for by the Office of No Christmas).

So what am I doing this week?
  • Finishing ballot reconciliation on HL7 publications I started (OID ballot and Structured Document Architecture)
  • Reading HL7 Ballots for which my votes are due next Monday (hData is what I have left to look at).
  • Putting together my travel calendar for the next 12 months for IHE, HL7, HIMSS, RSNA and other events 
  • Estimating my travel costs for the next year.
  • Reviewing my annual goals and how I did against them.
  • Updating educational materials for the IHE Connectathon Conference (pdf) and HIMSS11
  • Starting yet another viewpoint on the PCAST report
  • Planning code changes for an alerting project
  • Getting people to talk to each other before a small misunderstanding becomes a huge screwup
  • Writing up a quick brief on what IHE PCC is doing this year.
If you are working this week, what are you focusing your attention on?

Wednesday, December 29, 2010

LOINC Version 2.34 and RELMA Version 5.0 Now Available

Just crossed my desk...

Regenstrief Institute and the LOINC Committee are pleased to announce
that LOINC Version 2.34 and RELMA Version 5.0 are now available for
download at:

This LOINC release contains a wealth of new content, with highlights
including the PROMIS survey, additional PhenX terms, a comprehensive
package of cephalometric terms, and a detailed cytogenetic testing
template. We've also been very busy adding enhancements to RELMA, like
revising all of the search features to use the Lucene search engine.
This includes the multilingual search capabilities as well, but the
foreign language search features are still considered to be in BETA
TESTING, so proceed with some caution and let us know of any problems
you encounter:

Also in this release, we've added support for multiple replacement terms
where appropriate. For detailed information about this and all the other
features of the release, please see the LOINC news announcement:

Kind Regards from the LOINC Development Team at Regenstrief

How should we pay for standards development in Health IT?

I know of a few different models:
  1. Organizations pay $$$ to be members.  Then the SDO gives away the standards for free.  This includes organizations like OASIS, IETF and the W3C.
  2. Organizations pay $$-$$$ to be members, and the SDO makes the material available to members for free, and non-members for a fee $/$$ (often the cost of membership).  ASTM and HL7 are like this.
  3. Organizations that use the standards pay $-$$$ for them.  CPT is like this.
  4. Governments whose populations use the standards pay $$$$ for them, and the standards are available to those populations for free.  IHTSDO (SNOMED CT) is like this.
  5. Governments develop them $$$-$$$$ and give them away for free.  RxNORM and UMLS are like this.
  6. Some organizations pay $$$ to participate in related testing events, and the guides are given away for free.  Other development costs are paid for by benefactors.  IHE is like this.
$ - tens to hundreds
$$ - hundreds to thousands
$$$  - tens of thousands or more
$$$$ - Millions or more

There are challenges to each model. 

Model #1 doesn't seem to work for vertical markets like healthcare because the $$ available are simply not enough to support the work and still give away the IP.

Model #2 can cut out "little guys" and open source initiatives who don't have resources to purchase the standard or become members.

Model #3 annoys the people who have to pay, but at least puts the onus on the organization benefiting from their use.  Collecting payment is a challenge.  It's not clear how this funding is distributed when payment is for an individual product/standard.

Model #4 is interesting, except when it means that governments exert too much control over what they are paying for.

Model #5 lacks balance on the development side.  I like it for dealing with large aggregations of data like RxNORM or UMLS, but it's not so good when other content needs consensus development.

Model #6 doesn't seem to scale up and relies on largess from organizations that want to influence what is going on.

Models 1, 4, 5 and 6 rely on largess from somewhere to make standards freely available.  Models 2 and 3 make users of the standards pay for them.  In all cases, the payers receive some benefit.  In case 1, it seems to be prestige and influence in the development.  Case 2 is sort of an amalgamation of cases 1 and 3.  Case 3 is pretty clear that the users benefit, and pay for the standard.  But I know of very few cases where this model is used elsewhere.  Cases 4 and 5 seem to argue for public good as the reason for investing by governments.  Case 6 is interesting, as the benefit that comes from testing is certainly worth the investment.

The reason this is interesting to me has to do with my post on HL7 Strategic Initiatives.  It has the most to do with HL7's business model.  It should be no secret that HL7 is considering a new model.  One of the strategic initiatives is "Align HL7's business and revenue models to be responsive to national bodies while supporting global standards development" and the two measures of success are:
  • A new HL7 International business/revenue model has been developed
  • A revised HL7 International membership/governance has been developed
HL7 would love to be able to make its IP more accessible to the users.  The key question is what is the best way to do that.  I have my own opinions, and would love to hear yours.

Tuesday, December 28, 2010

Moving to the Next Big Thing

Quite a bit is at stake for us in the US with respect to healthcare and healthcare IT.  The Federal government has invested billions of dollars to "make IT better" (pun intended).  One of the challenges though is to allow an appropriate amount of time for implementation to take effect, and to spur new growth through innovation.  Yet everyone seems determined to line up with the next silver bullet, or is working furiously to invent it and make it the "new standard".

What we forget is that most standards are established by writing down the most common and/or best practices.  It's not really possible to determine "best practice" from a theoretical perspective.  That's why the word is "practice".  It means that it has to be used before it can be judged.  Theory can take you only so far.

Having established standards, the next step is to learn to use them and build from them.  The next "big thing" isn't a "new standard", but rather new uses to which we can put the newly established standard.  It takes time.  Crowd-sourcing new ideas isn't really feasible.  Adding more resources doesn't work.  You need the right pieces in the right place, at the right time.  I've seen it happen a couple of times and most often it's accidental.

A few years ago, one of the goals assigned to the team that I worked with was to create a certain quantity of inventions.  I didn't work on an R&D team, and in fact, I spent more than 50% of my time on the development of standards, for which the very idea of patents is nearly anathema.  I set myself a personal goal related to the team goal.  Trying to force creativity in that realm, though, really didn't work, no matter what I tried.  After an entire year, I completed filing on one new idea.  That one new idea, though, resulted from the confluence of right pieces, right place, right time, and right problem. 

I know of one organization that had learned how to manage invention, but they fragmented after building some of the coolest technology I ever saw in healthcare IT.  They worked on it this way:

1.  Define the problem.
2.  Determine what is needed to develop a solution.
3.  Break down the solution into known and unknown components.
4.  For each unknown component, learn what can be learned, and what isn't known yet.
5.  Plan out the necessary learning on the "what isn't known part".
6.  Execute on it.
7.  On successfully learning what is needed, plan the implementation.

It takes a very process-mature organization to be able to plan and execute on a learning activity, and to be willing to NOT plan beyond it until they have the results of the learning.

We are still learning how to use the standards we have, and to build innovations from them.  If I have to read one more report about "the best way" to solve the healthcare IT problem that includes theories on how to create the next big standard, with no industry examples to learn from, I might just scream.  Let's figure out what we can do with what we have today before we try to figure out what we must have tomorrow.  I think we'll find that what we have is much more capable than we've learned yet.

I've already seen some pretty cool ideas using just the CCD and CDA specifications.  Some of those are being used to solve real world problems in clinical decision support and quality measurement.

Wednesday, December 22, 2010

IHE and HL7 Consolidation Project Initiation Starts

I just got this and will be participating with about three hats on...

On behalf of the Health Story Project and HL7 Structured Documents Work Group I am pleased to announce and invite your participation in the launch of the HL7/ IHE Health Story Implementation Guide Consolidation Project.

The Project will take place under the auspices of the Office of the National Coordinator (ONC) Standards and Interoperability Framework and is a collaborative effort among the Health Story Project, Health Level Seven (HL7), Integrating the Healthcare Enterprise (IHE) and interested Healthcare industry stakeholders - collectively, the Consolidation Project Community.

ONC has launched a wiki page to support the project community which you can access here:

NOTE: When you first click the link, you will need to Register for a JIRA wiki account and request a login before you can access the page.

FIRST MEETING: January 4, 2011

# Recurring meeting: Tuesday @ 11:00 AM EST - 12:30 PM EST  
# Conference line: 1-770-657-9270, Participant Passcode: 310940
# Webmeeting: tbs

In short, the Project will:

  1. Republish all eight HL7/Health Story Implementation Guides plus the related/referenced templates from CCD in a single source.
  2. Update the templates to meet the requirements of Meaningful Use, in other words, augment the base CCD requirements to meet the requirements of HITSP C32/C83.
  3. Reconcile discrepancies between overlapping templates published by HL7, IHE and HITSP.

The Project will not develop new content. (There is a parallel project starting up under ONC to address new content requirements – this is not that project.)
The Project wiki includes links to source material including the HL7 Project Scope Statement.

Please let me know if you have any questions. I look forward to a successful collaboration.



12 Months of 2010

On the 12th month of 2010, my country gave to me...

12 Initiatives Advancing
11 Tweeters tweeting
10 contracts leaping
9 leaders leading
8 Programs Granting
7 Reports for reading
6 Jobs a training
5 golden things
4 Calls a day
Advisories and
1 O------ N----  C----

Merry Christmas and Happy New Year to all

Simple things Scale Up

The world of Healthcare Standardization is pretty rarefied as careers go.  I would estimate that there are about 5000 practitioners around the world who are "dedicated" to the work.  Of those, there may be about 300-400 who have standards as a "full-time" job.  I would estimate that more than half are in consultancies.

I've met, and spoken one-on-one more than once with, about 500 of these (I know a lot of them).  There's a list of about 50 that I run across in more than one setting in my travels.  There's another list of about 10 that I join for sushi on a regular basis, wherever we are.

I would estimate that about one third to one half of this crowd is based in the US. 

Downstream from the standard setters are the standards implementers.  Interface engineers, healthcare IT consultants, hospital and group practice IT staff, et cetera.  For every one of "us" there are about 100 - 1000 of "them" (and note that you can be in both the us and them crowd).  And for every one of them, there are yet again 10,000 to 1,000,000 patients who will be impacted by the standards they implement and the software they write (and again, you can participate in all places).

What I and others do will impact millions of lives, including my own, and those of my family.  It's a heady responsibility, and one that I take very seriously.  I know others like me do as well, because I've heard at least one personal story for every one of that first group of 500 that I mentioned.  All can tell heartbreaking stories of what has happened to them or to a loved one for lack of interoperable IT, and many have several.  I've got at least a dozen.  Most could be solved by simple things.

You are almost all going to be spending time with family over these holidays.  Spend some more time listening to their healthcare stories.  Figure out what simple things could have been done and figure out ways to make implementing those simple solutions even easier.  Go back and implement them.

Tuesday, December 21, 2010

More than half of US ambulatory physicians use an EMR in 2010

W00T! My glass is more than half full today.

Early results from the National Ambulatory Medical Care Survey (NAMCS), done every year by the CDC, project that more than 50% of office-based physicians have some form of EMR or EHR, according to a recent publication by the National Center for Health Statistics.   Three states have better than 75% penetration according to the data:
  • Minnesota at 80.2%
  • Massachusetts at 77.3% (I live here)
  • Wisconsin at 75.4%
Even the bottom three states have better than 35% penetration, including:
  • Kentucky at 38.1%
  • Louisiana at 39.1%
  • Florida at 39.4% (I used to live here)
The NAMCS survey samples non-federal office-based physicians, and does not include radiologists, anesthesiologists and pathologists.

The report goes on to say that more than 10% of providers on average have a fully functional EHR (as defined by a 2008 RWJ report).  The NAMCS definition of a fully functional EHR differs from what would qualify an eligible provider for Meaningful Use, but the two are pretty close.  A table included in the report shows some of the differences, and you can also download the survey instrument (see question 18).

As a final caveat, this report is based on preliminary survey returns.  Even so, it's a good way to end the year.

P.S.  I'll have to reread the RWJ report.  It needs to be added to my history of meaningful use with regard to defining a fully functional EHR.

Monday, December 20, 2010

2010 in Review

  1. According to Blogger and Google Analytics this blog received its 100,000th page view. 
    Google under-reports stats but even so, based on the data I have and projections I can make, it was very likely in the middle of this year.
  2. In September, this blog crossed the 10,000 page views per month mark (Blogger Stats).
    And still rising.  I may even be able to continue breaking monthly records in December, traditionally the month with the lowest response.
  3. In December, this blog averaged more than 3000 page views per week (Blogger Stats).
    Because you are still reading, even in December.  It might be a result of activities of ONC (the Office of No Christmas) again this year.
  4. Three posts crossed the 1000 views mark this year (Blogger Stats):
    1. Meaningful Use Standards Summary (+5000)
      This post is more popular than any other all time, and is still getting nearly 500 page views per month.
    2. Moving from C32 Version 2.3 to 2.5 (+1500)
      Also very popular, about 250 views per month.
    3. Open Source Standards Implementations (+1400)
  5. Sometime this year (Late September I think), I got more than 1000 hits on a single day.
  6. I initiated the Ad Hoc Harley Awards at the beginning of the year.  There were five award winners this year, resulting in more than 2500 (Analytics + Blogger stats) page views.  Here they are again:
  7. The top four keywords for this blog are (in rank order):
  8. I finished my CDA Book, and it's in the publisher's (Springer) hands.  I expect it to be available in the first half of 2011 (maybe even the first quarter), and there should be an electronic edition. 
  9. This blog was noted in several places, most recently by HL7 Standards (See banner at top right).
  10. I was elected to the HL7 Board (after my second try).
  11. Also of Note:  The Where in the World is XDS Google Map received its 100,000th view after 1 year of being in existence.
Now for some looks into a crystal ball for 2011:

Hot Topics:
  • Meaningful Use Stage 2
  • ONC Standards and Interoperability Initiatives
  • HL7/IHE/HealthStory CDA Implementation Guides
  • Clinical Decision Support
Widely Read:
  • Reviews of Pending and Final Regulation on Meaningful Use for Stage 2
A New Year's Resolution:
  • I will try to focus more on worldwide initiatives around Healthcare Standards. 
    I've fallen off as Meaningful Use takes up so much of my day job.  Look for at least one post a week with international relevance, and hopefully more.
  • A 1500 page view day
  • My 500th Twitter Follower (could still come in late December, but not likely)
  • A 2000+ hit post on something other than Meaningful Use
  • Passing the 250,000 views (A quarter million) mark
  • 5 More Ad Hoc Harleys to be Awarded...

Happy Holidays, Merry Christmas, and have a happy Calendar New Year.  My "working new year" comes in Early February, where I hope to see many of my US readers at HIMSS11.  I'll see many other international readers in Sydney at the HL7 Working Group Meeting in January, or the IHE Connectathon the following week in sunny Chicago.

I still have a few posting days left this year.  I spent most of my vacation time this summer traveling around North America.

On and hcsm

Like many in the Blogosphere, I use a collection of tools to help publicize this blog and other interesting information I find online.  One of those tools is  Some folks seem to like it and others don't.

For:  It's a nice collection of stuff that ___ reads.  I can read it all at once.  I don't have to click the link to see what it is about.

Against:  Too many ____ Daily's tweets and retweets just add junk tweets to twitter. 

The most recent complaint goes a bit deeper.  It's about how redirects ad revenue to's ad content instead of the original publisher's.

I like to use because it automatically aggregates what the people I'm following are reading, and I read my own front page that it creates for me every day.  I see stuff I wouldn't otherwise see because I don't have time to click on every tweeted link.

It also organizes the tweets by topic, although that clearly needs more work.  I find it rather odd what gets put where some days.  Organization of tweets and links by hashtag is the most valuable to me.  The other tabs are not as good.

I don't like the tabs, because I'd rather scroll through one long page than use tabs (it's quicker or at least feels that way).

I don't worry about ad content. provides a valuable service, and for that, I'm willing to put up with their ads.  They are at least pertinent.  Also, if I want all the details on an article, when I click through, I'll get the original ads that were posted with the article. doesn't usually include the full content of a post unless it's mostly a graphic or video.  In that case, I suppose there might be an issue for creators of rich media content, but that's not typically an issue for most of what I read.

I like what does for this blog-site, for my twitter presence, and for my readers.  What do you think about it?

Friday, December 17, 2010

Doug Fridsma to Speak at IHE Connectathon Conference

It's a day for IHE Announcements, here's the latest...

The annual IHE N.A. Connectathon Conference will take place on January 18, 2011 in Chicago, IL. The Connectathon Conference will highlight the organizations and leaders driving the adoption of standards-based health IT solutions.

IHE USA is pleased to announce that Doug Fridsma, M.D., Ph.D., Director, Office of Standards and Interoperability, ONC, will be a keynote speaker addressing current U.S. health IT policies and initiatives. Attendees will enjoy a full program day, networking lunch, breakout sessions (a new feature for 2011), and a guided tour of the IHE N.A. Connectathon, health IT's largest interoperability testing event. Registration is now open.

P.S. I'll be speaking at this conference as well...

IHE USA Announces Incorporation

IHE USA represents the interoperability needs and requirements of the US healthcare system
CHICAGO (December 17, 2010) – After more than a decade of improving the way healthcare systems share information for optimal patient care, IHE USA announces its incorporation.

The mission of IHE USA is to drive adoption of standards-based interoperability to improve patient care through innovation, standards profiling, testing, education and collaboration.

IHE USA, founded and supported by HIMSS and the Radiological Society of North America (RSNA), is a not-for-profit organization that operates as one of four deployment committees of IHE International®. Each deployment committee is a distinct organization with its own governance rules and business models that adhere to IHE Principles of Governance, but has the flexibility to meet the needs of its members and interoperability priorities of a specific country or region. Other deployment committees are:

IHE Europe: Denmark, France, Germany, Israel, Italy, Norway, Spain, the Netherlands and United Kingdom
IHE Asia-Oceania: Australia, China, Japan and Korea
IHE North America: Canada and USA

“IHE technical frameworks simplify data sharing by giving healthcare providers easy access to secure patient information with the development of more than 100 IHE Profiles since 1997,” said Joyce Sensmeier, MS, RN-BC, CPHIMS, FHIMSS, FAAN, who is HIMSS Vice President, Informatics. “IHE USA will continue to strengthen and support the ongoing efforts of the many volunteers and organizations that drive the development and use of standards-based interoperability in electronic health record systems in the United States.”

IHE USA sponsors activities in the three primary areas of testing and test tool development, education and training, and implementation support of IHE Profiles. Plans for 2010-11 include:
  • IHE North American Connectathon, held January 17-21, 2011 in Chicago
  • IHE North American Conference and Connectathon tour, held January 18, 2011 in Chicago
  • Annual IHE Educational Webinar Series – June through September each year
  • Presentations at conferences and workshops
  • IHE Demonstrations, such as the HIMSS Interoperability Showcase™, held at the HIMSS11 Conference and Exhibition in Orlando, Florida on February 21-24, 2011.

As background on the establishment of IHE, HIMSS and the RSNA founded IHE International® in 1997 as an initiative led by healthcare professionals and industry across the globe who work together to improve the way healthcare systems share information in support of optimal patient care. Now, more than a decade later, IHE International has become a global organization with more than 300 members that represent healthcare, education, government, professional societies, trade associations and industry throughout the world.
  • For more information on IHE USA, visit or contact the IHE USA Secretariat at
  • Learn more about IHE Connectathons here.
  • Register for the IHE N.A. Connectathon Conference 2011 here.
  • Find out more about the IHE Free Educational Webinar Series here.

About IHE
IHE USA ( is a not for profit organization founded in 2010 that operates as a deployment committee of IHE International®. The mission of IHE USA is to drive adoption of standards-based interoperability to improve patient care through innovation, standards profiling, testing, education and collaboration. IHE USA improves the efficiency and effectiveness of healthcare delivery by supporting the deployment of standards-based electronic health record systems, facilitating the exchange of health information among care providers, both within the enterprise and across care settings, and enabling local, regional and nationwide health information networks in the United States, all in a manner consistent with participation in the activities of IHE International, Inc.

HIMSS is a cause-based, not-for-profit organization exclusively focused on providing global leadership for the optimal use of information technology (IT) and management systems for the betterment of healthcare. Founded 50 years ago, HIMSS and its related organizations have offices in Chicago, Washington, DC, Brussels, Singapore, Leipzig, and other locations across the United States. HIMSS represents more than 30,000 individual members, of which 68% work in healthcare provider, governmental and not-for-profit organizations. HIMSS also includes over 470 corporate members and more than 85 not-for-profit organizations that share our mission of transforming healthcare through the effective use of information technology and management systems. HIMSS frames and leads healthcare practices and public policy through its content expertise, professional development, and research initiatives designed to promote information and management systems’ contributions to improving the quality, safety, access, and cost-effectiveness of patient care. To learn more about HIMSS and to find out how to join us and our members in advancing our cause, please visit our website at

For more information, contact:
Joyce Lofstrom/HIMSS

Thursday, December 16, 2010

Writer's Block and a discourse on XHTML for CDA

So, I'm following my own rule about writer's block, staring at a nearly blank page.  I've discussed previously the notion that CDA should not create its own XML for narrative, instead using XHTML.  It's on my to do list, so let's write about that...

The ramifications of using XHTML for CDA Narrative:

For one, there are a number of XHTML tags that have semantic equivalents in CDA.  We could readily change content to span, paragraph to p, list to ul or ol, styleCode to class, and linkHtml to a with little effort.  That would go a long way towards equivalence of the two.  At the very least, this MUST be done.  There is absolutely no reason to maintain different tags at this level.
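As a rough sketch of what that tag mapping could look like in code (a partial, illustrative mapping, not a complete CDA narrative vocabulary):

```python
# Sketch: rewrite CDA narrative element names to their XHTML equivalents,
# per the correspondence described above.  The mapping is illustrative
# and intentionally incomplete.
import xml.etree.ElementTree as ET

CDA_TO_XHTML = {
    "content": "span",
    "paragraph": "p",
    "list": "ul",      # would be "ol" when @listType='ordered'
    "item": "li",
    "linkHtml": "a",
}

def to_xhtml(elem):
    """Recursively rename CDA narrative tags and map styleCode -> class."""
    elem.tag = CDA_TO_XHTML.get(elem.tag, elem.tag)
    if "styleCode" in elem.attrib:
        elem.set("class", elem.attrib.pop("styleCode"))
    for child in elem:
        to_xhtml(child)
    return elem

narrative = ET.fromstring(
    '<paragraph>See <content styleCode="Bold">allergies</content></paragraph>'
)
print(ET.tostring(to_xhtml(narrative), encoding="unicode"))
# -> <p>See <span class="Bold">allergies</span></p>
```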

In CDA, there are sections of content, and each section can have subsections. XHTML doesn't really deal with that notion very well in heading tags. You have "dividers" or section headers using h1 through h6, but nothing to wrap the section. So, a CDA section is more like an XHTML div element.  We could even use the h1 through h6 tags to represent the title of the section. This would be a CDA "profile" of XHTML, restricting certain structures that we don't like.

The next challenge is attaching participations to the section. Participations are repeatable and of varying types: Author, Informant, Subject, the Generic Participant, et cetera.

This challenge is more fundamental and gets right at the RIM based representation of clinical documents.  The document is a composition of narrative sections and "header" information, and sections are compositions of entries, and with optional subsections.  You can attach participations and entries to sections, but if you move the section structure over to XHTML, the linkage is gone and needs to be added back.  Also, the "inheritance" of context in the narrative needs to be readdressed.

Now, the XML ITS is simply one representation of an HL7 model, and is the one upon which CDA is presently based. But arguably, CDA R3 is a composition of two things: optional domain model content (so-called level 3), and a "narrative view" of that content.  Slicing and dicing the domain model into sections and subsections breaks it apart and interferes with the inheritance of context in the domain model.  So, there could arguably be two layers of "context", one imposed by the narrative view, and another imposed by the domain model view.

So, does the narrative view use XML ITS sort of structure to convey context?  I would suggest that it probably does not.  The "narrative" view is meant to be present for human readability, and should remain as true to a standard designed for presentation like XHTML as possible, maybe introducing a few things that XHTML rendering applications could easily ignore.

We could assume context conduction of the "CDA Header" into the narrative, because the only context that could be overridden at the section level are things like author, informant, subject or a generic participant (not present in R2).  So, how do you assign these participations to the section?

One way would be to require all authors, informants, subjects and other participants to be listed in the CDA Header.  These are, after all, participants in some component (and I use that word on purpose) that is part of the document, and so, are also participants with respect to what appears in the document.  This presents a small challenge because we'd like to be able to quickly identify the "principal" subject of the document in the CDA header.  I'd propose that we add an organizer to the document header whose express purpose is to act as a collector for document participants that aren't "primary" to the document context (and so, by being referenced in the organizer, don't flow down via context conduction).

Now, each of the participants can exist in the XML, but I want to easily reference them.  We could use ID/IDREF or some other referencing mechanism.  I like ID/IDREF because the XML parser verifies uniqueness of ID and existence of an ID for every IDREF, but others may prefer another referencing mechanism.

So, a div element representing a section would need to attach the participants that are associated with it.  That becomes rather easy by adding one or more attributes to the div element that list the IDs of the participants associated with the section.  You could have a general cda:participants attribute support all of them, and use the participation types, et cetera, in the model, but that's a bit harsh on the consumer.  Easier would be to support an attribute for each major participation type: author, informant, subject, and, to meet any other needs, participant.  The extra attributes will be ignored by most XHTML renderers, are easily removed, and by using the IDREFS data type, can be checked by the XML parser.  So, here is the new section markup:

‹div cda:author='a1 a2' cda:informant='i1'›
 ‹p›The patient has pharyngitis‹/p›
‹/div›

Note how participations are linked to the "section" via the cda:author attribute.
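
To make the linkage concrete, here is a sketch of how the header organizer and the narrative might fit together. This is purely illustrative: the participantCollector element and the overall shape are hypothetical names I'm using for discussion, not defined CDA markup.

```xml
<!-- Sketch only: participantCollector and the document shape are
     hypothetical, not part of any defined CDA schema -->
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <!-- "secondary" participants are collected in the header so they
       do NOT flow down by context conduction; each carries an XML ID -->
  <participantCollector>
    <author ID="a1">...</author>
    <author ID="a2">...</author>
    <informant ID="i1">...</informant>
  </participantCollector>
  <narrative>
    <!-- the div references header participants via IDREFS attributes -->
    <div xmlns:cda="urn:hl7-org:v3"
         cda:author="a1 a2" cda:informant="i1">
      <p>The patient has pharyngitis</p>
    </div>
  </narrative>
</ClinicalDocument>
```

An XML parser with access to the DTD or schema would then verify, for free, that every ID referenced from cda:author or cda:informant actually exists in the document.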

There are three other class attributes on the section:  code, languageCode, and confidentialityCode.  Now, I'd stick with the XHTML lang attribute to represent the semantics implied by languageCode and call that done (they even use the same vocabulary). [Note: I'll deal with multilingual content in a separate post].  That leaves code and confidentialityCode.

Now, the next challenge is classifying the section (dealing with section.code).  What I'd love to be able to say is: put the code in an attribute on the div element, but there's not really a clean way to express a CD as a string as far as I know.  The alternative is to keep a parallel section structure, or to add elements to the div element that appear in another namespace.  I'd kinda like to keep the narrative "clean", not introducing stuff that could wind up being displayed if you just sent the narrative to the browser.  That means that only attributes (like cda:author above) could be added.

So, we'd need to attach a section element to the div using a model similar to what was done with author above.  I haven't decided at this point how to approach this step.  If there were a string representation of CD, I know what I'd do: right away I would add cda:code and cda:confidentialityCode attributes to the div element.
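
For illustration only: if a string form of CD existed (today none is defined), the attribute approach might look something like the hypothetical markup below. The code@codeSystem string syntax is invented here purely to show the idea, and the codes are placeholders.

```xml
<!-- Hypothetical: assumes an invented "code@codeSystemOID" string form
     of CD; no such serialization is actually defined -->
<div xmlns:cda="urn:hl7-org:v3"
     cda:code="12345-6@2.16.840.1.113883.6.1"
     cda:confidentialityCode="N@2.16.840.1.113883.5.25">
  <p>Section narrative goes here...</p>
</div>
```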

Well, that sums up about two hours of thought and research.  So much for writer's block.

Wednesday, December 15, 2010

Another XSLT Trick

Today I had to process an XML document that was made up of several sequences of elements that needed to be grouped together in an outer element.  This is a pretty common task for me when converting between different formats.  Usually I use EXSLT sets capability to deal with list intersections but this time the XSLT parser I deployed to didn't support it, and I didn't want to have to test another implementation.

So, looking at this XML:

‹items›
 ‹item›Outer Item 1‹/item›
 ‹item›Inner Item 1.1‹/item›
 ‹item›Inner Item 1.2‹/item›
 ‹item›Inner Item 1.3‹/item›
 ‹item›Outer Item 2‹/item›
 ‹item›Inner Item 2.1‹/item›
 ‹item›Inner Item 2.2‹/item›
 ‹item›Outer Item 3‹/item›
 ‹item›Outer Item 4‹/item›
 ‹item›Inner Item 4.1‹/item›
 ‹item›Outer Item 5‹/item›
‹/items›

Assume you want to produce this:
‹items›
 ‹outer›‹title›Outer Item 1‹/title›
  ‹item›Inner Item 1.1‹/item›
  ‹item›Inner Item 1.2‹/item›
  ‹item›Inner Item 1.3‹/item›
 ‹/outer›
 ‹outer›‹title›Outer Item 2‹/title›
  ‹item›Inner Item 2.1‹/item›
  ‹item›Inner Item 2.2‹/item›
 ‹/outer›
 ‹outer›‹title›Outer Item 3‹/title›‹/outer›
 ‹outer›‹title›Outer Item 4‹/title›
  ‹item›Inner Item 4.1‹/item›
 ‹/outer›
 ‹outer›‹title›Outer Item 5‹/title›‹/outer›
‹/items›

  1. So the outer items wrapper is the same.  Inside it you have more work to do.
  2. The first step is to process all item children of items that contain the text "Outer Item" (or whatever other matching criterion signals the start of a new list).
  3. The next step is to find the end of the list of inner components.  That's simply the next "Outer Item" that follows this one in sequence.
  4. Now, the trick.  You process each following sibling of the Outer Item that precedes the end point.
Now for the XSLT that does the work.
‹xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"›
    ‹xsl:template match="items"›
        ‹xsl:copy›‹!-- 1 --›
            ‹xsl:apply-templates
                select="item[contains(.,'Outer')]"/›‹!-- 2 --›
        ‹/xsl:copy›
    ‹/xsl:template›
    ‹xsl:template match="item"›
        ‹outer›‹title›‹xsl:value-of select="."/›‹/title›
            ‹xsl:variable name="endPoint"
                select="following-sibling::item[contains(.,'Outer')][1]"/›‹!-- 3 --›
            ‹xsl:for-each select="following-sibling::item[
                    not($endPoint) or . = $endPoint/preceding-sibling::item
                ]"›‹!-- 4 --›
                ‹item›‹xsl:value-of select="."/›‹/item›
            ‹/xsl:for-each›
        ‹/outer›
    ‹/xsl:template›
‹/xsl:stylesheet›

Now, if you think about what this is doing, it doesn't seem to be the MOST efficient way to process because you are creating two lists using the preceding-sibling and following-sibling axes in XPath, and then intersecting them.  But:
1) these lists are likely to be delayed in their complete evaluation until needed, and
2) A smart XSLT processor can and SHOULD recognize this idiom and use a more efficient evaluation

It's a handy thing to know how to do when you have to process lists of stuff that doesn't use list-type XML markup to indicate list boundaries, and you don't have access to Java or EXSLT extensions to support procedural programming.
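
For comparison, here is the same positional-grouping task sketched in plain Python. This is not the XSLT above, just an illustration of the grouping logic it performs, using the same sample data.

```python
# Group a flat list of items into "outer" groups: an item whose text
# starts with "Outer" begins a new group, and every following
# non-"Outer" item belongs to the most recent group -- the same
# positional grouping the XSLT performs with sibling-axis intersection.
def group_items(items):
    groups = []
    for text in items:
        if text.startswith("Outer"):
            groups.append({"title": text, "inner": []})
        else:
            groups[-1]["inner"].append(text)
    return groups

flat = [
    "Outer Item 1", "Inner Item 1.1", "Inner Item 1.2", "Inner Item 1.3",
    "Outer Item 2", "Inner Item 2.1", "Inner Item 2.2",
    "Outer Item 3",
    "Outer Item 4", "Inner Item 4.1",
    "Outer Item 5",
]
grouped = group_items(flat)
```

With procedural code the single forward pass is trivial; the XSLT trick exists precisely because XSLT 1.0 gives you no mutable "current group" to append to.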

Now, can you guess what I was doing?  It's related to a previous post on some other blog...

Tuesday, December 14, 2010

A Review of ONC Proposed Initiatives for HealthIT Standards and Interoperability

General Comments
ONC needs to read The Standards Value Chain: Where Health IT Standards Come From before assigning timelines.  Their timelines for going from availability of implementation guides to deployment in software are very aggressive.  Without an external driver (like Meaningful Use), there is little to increase demand, especially when other things (such as Meaningful Use) demand work in other areas.  ONC also needs to take into account what the healthcare industry has the capacity to adopt.  There are 12 initiatives listed below, and 11 of them are supposed to be operational at some level somewhere between 2011 and 2013.  Given that Meaningful Use 2012 is right around the corner (from an implementer perspective), I think there needs to be a wider spread.  Sure, I'd like it sooner, but capacity is hard to increase without changes in process.  Few of these projects speak specifically to how they address that issue.

Clinical Summary
The scope of this effort is unclear because the challenge is very poorly stated.  The definitions of the data elements for the HITSP C32 Version 2.5 appear in HITSP C154.  C154 follows best practices for data element definition that are rarely followed anywhere else (HL7 Vocabulary does some of this, but few others do).  Every data element is uniquely identified, named, AND defined, and includes data element constraints on values.  Further detail on vocabulary is completely aligned with vocabularies SELECTED by ONC for Meaningful Use.  Complaints about ambiguity in vocabulary should be addressed, but please recall that it was ONC, and not HITSP, which created the ambiguity.

Realistically, the challenge is getting at the requirements from the available documentation, and that requires collaboration between HL7, IHE and ONC, along with tooling such as MDHT's CDA Tools work.  That collaboration has been started and should be continued.  This initiative should be combined with the Templated Clinical Documents initiative because the two overlap considerably in scope.  CCD is but one of many possible clinical summaries, as I've mentioned previously.  Any challenges in using CCD would be experienced with any other clinical summary developed in that effort unless both are resolved.

Templated Clinical Documents
The challenge and scope statements here clearly describe the problems and a way forward.  This is highly relevant, and work already in progress.  Most of the templates can be found in the CDA Tools work today, and appear in the HITSP C83 specification, which implementors of CCD/C32 will be readily familiar with.

Lab Interface Improvement
Been there, done that.  Hopefully this project will write the guide and not spend yet more time reevaluating vocabularies.  Use HL7 2.5.1 (because it meets CLIA requirements), LOINC for orders (300 codes cover 95+% of orders) and results (another set is being developed by Regenstrief), SNOMED CT for body parts and pathogens, and RxNORM for drugs.  Write the guide, and get the labs to conform.

Medication Reconciliation Improvement
This is a really good idea, but the scope statement is not very clear.  IHE is working on the Reconciliation profile and I'd love to get some help on that from ONC.  We've already had some discussion on problems and our scope includes medications if we can fit it in.  Additional resources would help.

Provider Directories
I testified on provider directories to the HIT Policy committee a few months back, and there is as they note, an IHE Healthcare Provider Directory profile (pdf) on this.  This might focus on open source implementations of the IHE profile in support of The Direct Project and others like it.

Syndromic Surveillance
ISDS just sent around another version of the document for review, as I reported earlier this month.  I think they should just write the guide in HL7, as recommended.

Quality Measures
They got it right about the linkage between standards, vocabulary and measures.  The REAL challenge is making sure that evidence-based medicine is linked to measures to start off with.  I've talked previously about how quality processes need to have measurement built in.  Previous work on quality measure specifications in HL7 was rushed due to contract deadlines, but will be taken up again in a new project that the SDWG and CDS workgroups are expected to be collaborating upon.  It's going to take TIME to do this right.

Population Health Query
There are two big challenges here.  The first is addressing privacy and security concerns on third parties being able to query institutional data.  The second one (partially addressing the first) would be describing aggregate queries in an appropriate form; so that results are not just deidentified, but also aggregated in the responses.  2012 is very aggressive for this.

Clinical Decision Support
My most and least favorite topic.  It's my most favorite because the opportunity to improve care is almost boundless.  It's my least favorite because it attracts quite a few academics, many of whom have been locked in ivory towers.  There's so much going on in this space that it's hard to keep track of it all.  It's completely wrong to try to exchange rules first, but that's the holy grail, so everyone starts there.  The FIRST step is to identify the data that needs to be exchanged to support evidence-based medicine, including clinical decision support and quality measurement.  The SECOND step automates interfacing to ensure such information can be exchanged without creating new interfaces manually.  The THIRD step involves standardizing how the information is used computationally, and is what academics seem to have gotten hung up on.  But computation is programming, and you need to use the right tool for the job, and that includes programming languages.  Some problems could be pattern matching, others rule based, and others intensely computational.  Few languages support all paradigms equally, which is why I avoid the temptation to create another language for CDS.

IHE's Patient Care Coordination domain developed the Query for Existing Data (QED pdf), Care Management (CM pdf) and Request for Clinical Guidance (RCG pdf) profiles to support this kind of exchange.  It is still ahead of its time mostly because it addresses the SECOND step without strongly addressing the FIRST.  QED may finally have gained enough implementors to go final this year.

Blue Button
Blue Button is a much-discussed initiative developed originally by Markle and implemented by the VA and DoD.  The chief problem with Blue Button is its fixation on human-readable text.  I'd prefer to see more work to take standards-based XML and make it human readable, rather than generate text versions of the standards.  The self-displaying CDA project in HL7 is one such effort.

Green Button
This is the weakest of all of the proposals.  The value to providers is pretty good for that once-every-5-to-10-years moment when they consider replacing their EMR system with something from a different vendor, but it does nearly nothing to address day-to-day value in healthcare.  Yes, it would open up the EMR market and prevent lock-in, and that has value, but I think the cost savings possible here are really quite a bit smaller than for any of the other initiatives.

Value Set Development
Value set development on lab orders is done and similar work on results is in process by LOINC.  Other necessary value sets might include anatomy and pathogens.  HITSP C80 described the value set for anatomy as: Shall contain a value descending from the SNOMED CT® Anatomical Structure (91723000) hierarchy or Acquired body structure (body structure) (280115004) or Anatomical site notations for tumor staging (body structure) (258331007) or Body structure, altered from its original anatomical structure (morphologic abnormality) (118956008) or Physical anatomical entity (body structure) (91722005).  Pathogens could also come from SNOMED CT.

The real challenge is not in defining the value sets, but rather making them deployable and publically available.  IHE has developed the Sharing Values Sets profile (SVS pdf) that could support exchanging standardized value sets in XML.
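
As a purely schematic illustration of what "deployable value sets in XML" could mean, a shared value set might look something like the sketch below. The element names here are my own illustrative choices, not the actual SVS schema; the SNOMED CT code and OID are real.

```xml
<!-- Illustrative sketch only; NOT the IHE SVS schema -->
<ValueSet id="1.2.3.4.5.6" displayName="Example anatomy value set" version="1">
  <ConceptList>
    <!-- 2.16.840.1.113883.6.96 is the SNOMED CT code system OID -->
    <Concept code="91723000"
             codeSystem="2.16.840.1.113883.6.96"
             displayName="Anatomical structure"/>
    <!-- ... additional concepts descending from the hierarchies
         listed above ... -->
  </ConceptList>
</ValueSet>
```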

Monday, December 13, 2010

Prioritizing ONC Initiatives for HealthIT and MeaningfulUse

Recently ONC published a request for feedback on the Standards and Interoperability Framework Prioritization process and proposed initiatives. An overview of the S&I framework can be found here (pdf).

They ask two key questions on the prioritization criteria (xls):
  • Are the current criteria appropriate and sufficient to evaluate Initiatives?
  • Are there additional criteria within the four categories that should be included?
And then they ask you to assess the initiatives (ppt) based on your own weights using the spreadsheet.

The S&I Prioritization framework is a good start. It provides some process on making decisions about how initiatives are to be prioritized, but doesn’t get into several details that are of interest to the healthcare industry. Notably absent from this spreadsheet is who gets to participate in these evaluations and how. A prioritization process that does not include affected stakeholders is of limited value.  There needs to be more detail added to the prioritization framework to determine how stakeholder input is provided.

The prioritization spreadsheet represents a decision-making workflow that should be addressed in stages. Relevance should not be weighed against or alongside feasibility. If the project isn’t relevant, then feasibility doesn’t matter, and vice versa.

The scoring and weighting is completely unspecified in the priorities. It would have been better to provide some guidance in this framework so that it could be spit on or applauded. Having nothing at all is not the best way to get feedback. On having both scores and weights, I’d simply drop one or the other altogether. The scores ought to be used to assess a SINGLE initiative, not to compare two or more against each other.  Scoring should be relatively straightforward and something that the committee can reach consensus on.  A five-point scale (Low, Low-Medium, Medium, Medium-High, High) or similar is easiest to use to reach consensus.  The scores are used to facilitate final decision making, not to "make" the decision for you.  This accounts for individual weighting of the importance of the activities.

These initiatives cannot really be compared to each other, but are part of a portfolio of initiatives. An initiative that completely nails ONE goal, at low cost, with a high chance of success, and with few resources SHOULD be strongly considered. But one that hits 10 goals, costs 5 times as much, and has a 50% chance of success might well be scored as equivalent to the former under a linear weighting system, yet should be INTENSELY reviewed before adoption.
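
A toy calculation (entirely hypothetical numbers, purely illustrative) shows how a linear benefit-per-cost score can hide that difference:

```python
# Hypothetical numbers only: a linear expected-benefit-per-cost score
# rates a small near-certain win and a large risky bet as roughly
# equivalent, even though their risk profiles are very different.
def linear_score(goals_met, cost, p_success):
    # expected goals delivered per unit of cost
    return goals_met * p_success / cost

small_sure_win = linear_score(goals_met=1, cost=1, p_success=0.9)
big_risky_bet = linear_score(goals_met=10, cost=5, p_success=0.5)
# 0.9 vs 1.0 -- nearly the same score, very different downside
```

The near-identical scores are exactly why a score should facilitate the decision rather than make it.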

The first stage of the evaluation should assess the relevance of the initiative. Initiatives that are not relevant need no further evaluation. Relevance of an initiative should also account for subsequent initiatives which it enables. In IHE, relevance of profile proposals is addressed by the planning committee. For the S&I framework, these might be assessed by the HIT Policy Committee.

The next stage of evaluation should address feasibility. Initiatives which cannot be done because they are not yet feasible should be postponed until such time as they become feasible, and a reassessment of relevance should be done at that time also. For initiatives which are not feasible, a question that should be asked is what could make this initiative feasible. That question may identify enabling initiatives which should be done first. In IHE, the feasibility is assessed by the technical committee. In the same vein, feasibility for S&I would be assessed by the HIT Standards Committee.

Relevance and feasibility feed into a third decision making step. That step compares costs of proceeding with the project (which can be assessed during the feasibility phase) to the expected benefits (which can be assessed during the relevance phase). There really isn’t any tab in the ONC S&I Prioritization Framework that addresses cost/benefit or return on investment. The costs should also look at opportunity cost. The prioritization framework must assume finite resources to complete initiatives. Executing one initiative may consume resources needed for another (opportunity cost). It should also consider what other initiatives might be enabled (benefits) by a project. There needs to be some guidance in the framework to determine the extent of resources available for execution. Cost/Benefit and ROI decisions should be jointly assessed based on relevance and feasibility.  Aligned with cost/benefit or ROI evaluation is a determination of likelihood of effectiveness. Some of that is described in the usability/accountability tab. If the question of effectiveness cannot be answered, then research or pilots should be done first prior to starting a large project.

This is essentially the same thing that any good organization goes through in prioritizing projects to be completed. Will this project meet the needs of our organizational stakeholders and their mission? Can it be done? Are the benefits worth the cost (what is the ROI)? Will it be useful and effective?

If you have looked at the spreadsheet supplied by ONC, you’ll note that I’ve addressed three of their four areas, and did not address Evidence-Based Medicine and Research Support. These are simply questions focused on relevance of the initiative, as focused by existing EBM and Research initiatives. I see no need to call these out separately.

So, here are the changes I recommend to the Prioritization Framework:
  1. Group everything on Relevance in one place (including applicability to EBM and Research Goals)
  2. Add a section on costs and potential savings / benefits. It may be hard, but any good organization estimates costs and benefits before initiating projects.
  3. Develop a way to determine available resources, and ensure that each project / initiative specifies resource needed for success, INCLUDING volunteer resources.
  4. Ensure that the prioritization process includes adequate industry input from providers, payers, vendors and consumers. Each project should have input from affected industry stakeholders, not just assessments from HIT FACAs.
  5. Drop weights, and use simpler scoring criteria, recognizing that weights are subjective and that initiatives cannot be compared to each other

There are two things that the prioritization process also needs to account for.  One is that there must be an opportunity to say NO to an initiative.  HITSP never had that opportunity and was expected to "scale up" 100% year over year.  If NO cannot be said to a proposed S&I initiative, then the same problem will appear there.  Also, S&I needs to account for, and allow for, initiatives to fail.  Failures teach as much as, or more than, successes.  If we aren't failing, we aren't trying hard enough.

Tomorrow (or Wednesday if I run out of time), I'll post my assessment of the initial proposals...

An Overview of ONC's Vision and the Role of HealthIT and HITECH in Health System Change and Health Care Reform

Another one... this time from ONC

2010 ONC Update
December 14-15, 2010
Washington, D.C.

Tuesday, December 14, 2010
8:30 am - 5:00 pm EST
8:30 – 9:00 am Opening Remarks
Kathleen Sebelius, Secretary, U.S. Department of Health and Human Services (HHS)
Introduction by David Blumenthal, MD, MPP, National Coordinator for Health
Information Technology, Office of the National Coordinator for Health
Information Technology (ONC), HHS

9:00 – 9:45 am An Overview of ONC’s Vision and the Role of Health IT and HITECH in Health
System Change and Health Care Reform
David Blumenthal, MD, MPP, National Coordinator for Health Information
Technology, ONC
Donald Berwick, MD, Administrator, Centers for Medicare and Medicaid Services

9:45 – 10:15 am An Overview of ONC’s Strategy and Programs
Farzad Mostashari, MD, ScM, Deputy National Coordinator for Programs and Policy, ONC

10:15 – 11:00 am Break

11:00 – 12:15 pm Update on Privacy Regulations and Activities in the Office of the Chief Privacy Officer
Joy Pritts, JD, HHS Chief Privacy Officer, ONC

12:15 – 12:30 pm Break

12:30 – 2:00 pm Getting to Health Information Exchange
Farzad Mostashari, MD, ScM, Deputy National Coordinator for Programs and Policy, ONC
Doug Fridsma, MD, PhD, Director, Office of Standards and Interoperability, ONC
Claudia Williams, Acting Director, State Health Information Exchange Program, ONC

2:00 – 2:15 pm Break

2:15 – 3:30 pm An Overview of HITECH Programs Supporting Providers in Achieving Meaningful Use
Moderator: Mat Kendall, Director, Office of Provider Adoption and Support, ONC
Panelists: Paul Kleeberg, MD, Clinical Director, REACH
Robyn Leone, Regional Extension Center Director, Colorado Regional Health
Information Organization
Norma Morganti, Executive Director, Midwest Community College Health IT
Consortium, led by Cuyahoga Community College
Rick Shoup, Director, Massachusetts eHealth Institute

3:30 – 3:45 pm Break

3:45 – 5:00 pm An Overview of Medicare and Medicaid Incentive Programs

Moderator: Michelle Mills, CMS
Panelists: Robert Anthony, CMS
Elizabeth Holland, CMS
Jessica Kahn, CMS

@NationaleHealth discovers #HCSM

Crossed my Desk this AM...

Contact: Jenna Bramble
National eHealth Collaborative
(202) 624-3275
Like, Follow, Watch, Discuss: NeHC Launches Social Media Efforts
National eHealth Collaborative Joins Facebook, Twitter, and YouTube
WASHINGTON, DC (December 13, 2010) – National eHealth Collaborative (NeHC) last week launched its official Facebook page, Twitter feed, and YouTube channel, kicking off its push to develop stronger methods for seeking stakeholder feedback and promoting collaborative discussion about health information exchange.
Following NeHC's NHIN University program, NHIN 204 – Beacon Communities, NeHC Interim CEO Aaron Seib announced that questions from the class would be posted on the new NeHC Facebook page for continued discussion among interested stakeholders. NeHC staff also live-tweeted the class through the Twitter handle @NationaleHealth, and a new video on the Beacon Communities program was posted on NeHC's new YouTube channel.  NeHC's YouTube channel will also feature footage from past NeHC Stakeholder Forums, as well as educational materials and clips from NHIN University classes and other NeHC events.
"We see social media as just one more avenue we can use to leverage two-way communication with stakeholders," said Aaron Seib. "At the end of the day, our goal is to create a robust community of stakeholders who are talking and collaborating to solve some of the most pressing problems facing the widespread adoption and implementation of health information exchange."
NeHC encourages stakeholders to use the Facebook page as a forum for discussion and for sharing relevant information and links, as well as a resource for keeping up with NeHC news and events. Facebook users can find the NeHC page by typing "National eHealth Collaborative" into the search bar. Twitter users can follow NeHC at @NationaleHealth, and the NeHC YouTube channel is also live.
 About National eHealth Collaborative
National eHealth Collaborative (NeHC) is a public-private partnership that enables secure and interoperable nationwide health information exchange to advance health and improve health care.  Working in conjunction with its partners, including the Office of the National Coordinator for Health IT (ONC) in the U.S. Department of Health and Human Services (HHS), NeHC engages stakeholders in a collaborative and consensus-driven way to realize common goals that lead to transformative change.  NeHC reaches broadly into all sectors of healthcare and health IT, employs open and inclusive methods, and makes its outcomes broadly available for continued improvement.  This philosophy and approach allows NeHC to offer a uniquely balanced perspective that leverages diverse points of view and provides the essential public-private platform for collaboratively pursuing solutions to universal trusted and effective health information exchange. 

Friday, December 10, 2010

Getting Real input from the Public

One of the challenges I've seen over the last six to seven years of being involved in the development of healthcare standards is that of obtaining good public input on the development of standards or regulation affecting the industry.

The first problem is one of effective marketing about the opportunity to provide comment.  In the legal space, notices of pending law and regulation make it out only to state or federally sponsored web sites.  Every law or regulation is treated equally in that regard.  With standards, again notice only goes out to members of the SDO or profiling organization, rather than broadly to the industry.  In both cases, affected populations can remain largely ignorant of pending actions because there's no concerted effort to notify them of these opportunities.

The media is largely unhelpful.  Even when they provide information about pending laws or regulation, they rarely provide information about where the public can provide feedback.  For example, this AP story published in the Boston Globe talks about a pending IRS rule that affects consumers using health cards to pay for non-prescription drugs.  But nowhere in this article did the AP provide the link to the pending change (pdf)  so that consumers could find it and provide feedback to the IRS and Treasury department.

Similarly, you will rarely if ever find a report, even in the healthcare IT related media, on new work available for comment from HL7, IHE, CAQH, Continua, X12 or the numerous other organizations that produce standards and implementation guides.  Perhaps it is due to lack of demand (in which case the fault goes right back to the consumers), but I think it is due in part also to a lack of attention to this topic from the producers.  We hear all about new initiatives from organizations like the Robert Wood Johnson Foundation, or Markle's Connecting for Health, because those organizations have highly evolved and well-staffed marketing arms.  But technical standards development organizations, which are mostly volunteer led, simply don't have that expertise freely available.  I think there are a few things that could be done differently.  For one, if every SDO put out a press release and sent it to the relevant media outlets, I think they might get a bit more attention to their efforts.

From the side of the public, there are other challenges.  At a public event a few months ago titled Crossing the Infrastructure & HITECH Meaningful Divide, I spoke to a largely non-technical audience about Meaningful Use and standards, and the need for their participation in the discussions both at the regulatory level and at the standards-making level.  The audience responded that there are some very big challenges for them.  For many organizations, devoting sufficient resources and expertise to make a meaningful contribution is time intensive and daunting from an outsider's perspective.  It's hard to understand, and even harder to be heard, when you don't speak the language of the SDO "insider".  And yet, the formation of that language is a necessary component of the norming process.  We establish these little languages and vernaculars so that we can understand each other and know what we are talking about.  And unless you've recently come into the discussion with an outsider's perspective, it's very hard to know what you need to explain, because you've already learned the language, and so don't know what others don't know.

But, if you want to regain some control over what is going on around you, you also need to be listening and participating in the right places. The best way to make the SDOs more responsive to YOUR needs is to talk to them.

Thursday, December 9, 2010

ONC Seeking Comment on PCAST Report on HealthIT

I've spent most of yesterday and today digesting the PCAST (pdf) report on HealthIT.  Now ONC has put forth an RFI seeking your comments.  Click the link to get the whole RFI including details on how to respond.  The specific questions are detailed below so you can start thinking about your answers.  The back of the book is here if you get stumped ;-)


P.S.  Updated URLs for the RFI as of 12/10/2010

ONC seeks comment (pdf) on the questions below. Comments on other aspects of the PCAST report are also welcome.

1. What standards, implementation specifications, certification criteria, and certification processes for electronic health record (EHR) technology and other HIT would be required to implement the following specific recommendations from the PCAST report:

a. That ONC establish minimal standards for the metadata associated with tagged data elements;

b. That ONC facilitate the rapid mapping of existing semantic taxonomies into tagged data elements;

c. That certification of EHR technology and other HIT should focus on interoperability with reference implementations developed by ONC.

2. What processes and approaches would facilitate the rapid development and use of these standards, implementation specifications, certification criteria and certification processes?

3. Given currently implemented information technology (IT) architectures and enterprises, what challenges will the industry face with respect to transitioning to the approach discussed in the PCAST report?

a. Given currently implemented provider workflows, what are some challenges to populating the metadata that may be necessary to implement the approach discussed in the PCAST report?

b. Alternatively, what are proposed solutions, or best practices from other industries, that could be leveraged to expedite these transitions?

4. What technological developments and policy actions would be required to assure the privacy and security of health data in a national infrastructure for HIT that embodies the PCAST vision and recommendations?

5. How might a system of Data Element Access Services (DEAS), as described in the report, be established, and what role should the federal government assume in the oversight and/or governance of such a system?

6. How might ONC best integrate the changes envisioned by the PCAST report into its work in preparation for Stage 2 of Meaningful Use?

7. What are the implications of the PCAST report on HIT programs and activities, specifically, health information exchange and federal agency activities and how could ONC address those implications?

8. Are there lessons learned regarding metadata tagging in other industries that ONC should be aware of?

9. Are there lessons learned from initiatives to establish information-sharing languages (“universal languages”) in other sectors?

The Language of HealthIT

Yesterday I summarized my thoughts on the Health IT report from the President's Council of Advisors on Science and Technology (PCAST).  The advisors made several recommendations, one of which was to "develop" a language for healthcare that could be used in web services and which could be used to access disaggregated healthcare data.

I propose that such a standards-based language already exists, and will explain in more detail.  It's clear that the PCAST has looked into Health IT standards to some degree.  But it's also clear that they aren't experts on the topic.

I've gone through the report again to pull out their suggested requirements for such a language:
  1. A robust platform for creating user interfaces, decision support, storage and archiving. (p38, item #2)
  2. Support for cross-organization data exchange (p38, item #3)
  3. Support for strong privacy protection (p38, item #4)
  4. Health IT systems must have the ability to communicate and aggregate health information in the ways needed to serve patients, doctors, and researchers. (p39, 1st para, 1st sent.).
  5. Enable communication of clinical data. (p39, 2nd sent.)
  6. Enable innovators to develop new tools to use healthcare information. (p39, 3rd sent.)
  7. * Support policy and governance models to drive innovation. (p39, 4th sent.)
  8. * Support appropriate access to information (p39, 2nd para, 1st sent.)
  9. * Service Oriented Architecture (p39, last para)
  10. Use XML (p41, 2nd para).
  11. Support attachment of metadata on data (p41, 4th para) elements including:
    1. Identifying information about the patient
    2. * Privacy protection information
    3. The provenance of the data—the date, time, type of equipment used, personnel
  12. * Availability of a national infrastructure for finding health data, and for controlling access to it (p41, last para).
  13. Services to include: services would include those associated with crawling, indexing, security, identity, authentication, authorization, and privacy (p42, 1st sent.)
  14. * Support patient (or agent) ability to restrict access to the data. (p42, 1st para)
  15. Support dynamic aggregation from multiple systems (p42, 2nd para)
  16. Interoperable and intercommunicating in conformance to a single Federal standard (p43, 2nd para)
  17. * Auditable for compliance with privacy and security policies (p43, 2nd para)
  18. Extensible language for tagged data elements (p43, 5th para).
John Halamka summarizes these requirements as follows (check his blog tomorrow for details):
  1. Content for Health Information Exchange should be expressed in XML using structured data with controlled vocabularies/code sets in a format that is infinitely extensible
  2. Data elements should be separable and not confined to a specific collection of elements forming a document.
  3. Metadata included as attributes of data elements should include at least person name/date of birth/data element name
  4. Privacy controls should restrict query of data elements per patient preference. Data elements should be queryable using search engine technology (with privacy filters)
  5. De-identified data should be available for population health, research, surveillance etc.
So, let's see where HL7, IHE and HITSP have already addressed these issues:
Architecture and Standards
The PCAST report talks about different kinds of things.  It does not separate requirements for transport, content, security and privacy, et cetera.  It does not address application requirements vs. interface requirements.  Applications provide services.  Services have interfaces.  Not all applications will support the same set of services.  Some of the "complaints" PCAST makes about existing standards aren't really appropriate, because they have an application requirement in mind that isn't met by one of the services they are looking at.

Now, privacy and security information need to be maintained and stored separately from clinical data.  About the only thing that needs to be stored with the clinical data is PERHAPS its privacy/security classification.  One reason for separating these is that they can change independently of the clinical information (new laws/regulations, new or altered patient preferences, et cetera).  The same is true for more specific access control and audit information.  So there isn't "one row" or set of XML elements that would necessarily appear together in an exchange that includes all of the privacy/security information with the clinical data.  I won't address the privacy/security or infrastructure requirements (marked with a * above) in this part of the post.  I will say that these have been addressed to some degree in the existing IHE and HITSP specifications, and that more work has been done in HL7 on access controls.

One set of standards (CDA/CCD) is currently used under HITECH/Meaningful Use to send/receive collections of discrete data in documents.  Now, a document is an aggregation of information for a single visit.  The CDA standard is designed for sending aggregations of data specific to a visit.  To complain that it doesn't support other forms of aggregation fails to acknowledge what it was designed to do.

CDA supports many of the requirements from above, including: #1, #2, #4-6, #10-11, #13, #15, #16 and #18

#1:  CDA is a robust way to store the clinical data and metadata, and can support UI and decision support through other application behaviors.  Some implementors have developed decision support services already based on exchange of CDA and CCD, and there are other interfaces to access the clinical data contained within the CDA document, e.g., the IHE QED profile (pdf).

#2:  This is already being done with CDA at Partners, South Shore Hospital, MA Share to the SSA, in the Vermont HIE, between the VA and DOD, and in other HIEs across the country.  It appears as a requirement in the current Meaningful Use regulations (45 CFR 170.205(a)(1)).

#4:  CDA/CCD is a way to aggregate this information for the collection of data about a SINGLE VISIT.  Other specifications, such as the IHE QED profile mentioned above, can access the same data across visits using nearly the same XML.  Work is under way in HL7 on CDA Release 3, and on a new ITS that could use the SAME XML for all uses.

#5:  CDA enables exchange of clinical data for a visit.  Other profiles of HL7 RIM based standards support other ways to aggregate the data across visits.

#6:  I mentioned the Clinical Decision Support example under #1 above.  You can use CDA and QED to create a patient specific report based on content in a clinical data repository.  QED becomes the method to aggregate and CDA the way to report the aggregation in human readable narrative.

#10:  Yep, CDA uses XML.

#11:  All of the requested data appears in the CDA header and can also be attached to sections or entries in the CDA document.
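To make #11 concrete: the CDA header carries exactly this sort of metadata as structured XML.  Below is a hedged sketch, not a conformant CDA instance (real documents use the urn:hl7-org:v3 namespace and proper identifiers; the names and OIDs here are illustrative stand-ins), showing how patient identity and provenance can be pulled out of a header with any off-the-shelf XML parser:

```python
import xml.etree.ElementTree as ET

# Simplified, namespace-free sketch of CDA-style header metadata.
# Identifiers and dates below are made up for illustration.
doc = """
<ClinicalDocument>
  <effectiveTime value="20101229"/>
  <recordTarget>
    <patientRole>
      <id root="2.16.840.1.113883.19.5" extension="12345"/>
      <patient>
        <name><given>John</given><family>Doe</family></name>
        <birthTime value="19540712"/>
      </patient>
    </patientRole>
  </recordTarget>
  <author>
    <time value="20101229"/>
    <assignedAuthor>
      <id root="2.16.840.1.113883.19.5" extension="99999"/>
    </assignedAuthor>
  </author>
</ClinicalDocument>
"""

root = ET.fromstring(doc)

# Identifying information about the patient
patient_id = root.find("./recordTarget/patientRole/id").get("extension")
birth = root.find(".//birthTime").get("value")

# Provenance: when the document was created, and by whom
created = root.find("./effectiveTime").get("value")
author_id = root.find("./author/assignedAuthor/id").get("extension")

print(patient_id, birth, created, author_id)  # 12345 19540712 20101229 99999
```

The point is simply that the metadata PCAST asks for isn't an add-on: it's already required structured content in the document header.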

#13:  CDA is a document format that can be crawled and indexed at a fine-grained level.  In fact, the IHE QED profile anticipates that use and further describes how the two work together.

#15: The IHE QED profile is designed to support query and aggregation of data from multiple sources using almost the same markup found in CDA.  That profile can be used for research, gathering data to generate quality metrics, population surveillance, et cetera.
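QED itself is a query profile over HL7 Care Record content, but the aggregation idea behind #13 and #15 can be sketched in miniature.  Everything below is a toy illustration under my own simplifications (made-up element names standing in for the actual HL7 V3 markup; the LOINC code 2345-7 and the LOINC OID are the commonly cited values for serum glucose, but verify before relying on them):

```python
import xml.etree.ElementTree as ET

# Toy sketch: each visit yields a document containing coded observations;
# a longitudinal view crawls and indexes each document's entries (#13),
# then aggregates across visits and sorts by time (#15).
visit_docs = [
    """<document>
         <observation code="2345-7" codeSystem="2.16.840.1.113883.6.1"
                      time="20100105" value="98"/>
       </document>""",
    """<document>
         <observation code="2345-7" codeSystem="2.16.840.1.113883.6.1"
                      time="20101103" value="104"/>
       </document>""",
]

entries = []
for xml_text in visit_docs:
    root = ET.fromstring(xml_text)
    for obs in root.iter("observation"):
        entries.append((obs.get("time"), obs.get("code"), obs.get("value")))

# One longitudinal view of the same coded data across all visits
longitudinal = sorted(entries)
print(longitudinal)
```

Because the per-visit documents and the cross-visit query use nearly the same markup, the aggregating system doesn't need a second data model: that's the property QED trades on.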

#16:  CDA can be part of that.  There really isn't ONE single standard, but a collection of them that work together (e.g., CDA/CCD, LOINC, SNOMED, et cetera).  See #18 below as well.

#18:  CDA, like all HL7 Version 3 based standards, supports extensibility by binding the content to vocabulary standards that contain the detailed medical knowledge about problems (e.g., ICD or SNOMED), diagnostic results (LOINC), procedures (CPT, ICD or SNOMED), and medications (RxNorm).
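What makes that binding extensible is the pattern itself: every coded element carries both a code and an OID naming the code system it came from, so a new vocabulary slots in without any change to the markup.  A minimal sketch (the OIDs and the SNOMED code below are the commonly cited values, but check them against the HL7 OID registry before relying on them):

```python
# Sketch of the code/codeSystem binding pattern behind #18: the element
# names which vocabulary a code comes from via an OID, so new code
# systems can be registered without inventing new markup.
CODE_SYSTEMS = {
    "2.16.840.1.113883.6.96": "SNOMED CT",
    "2.16.840.1.113883.6.1": "LOINC",
    "2.16.840.1.113883.6.88": "RxNorm",
}

def describe(code: str, code_system_oid: str) -> str:
    """Resolve a coded element to a human-readable code system name."""
    system = CODE_SYSTEMS.get(code_system_oid, "unknown code system")
    return f"{code} ({system})"

# 38341003 is the commonly cited SNOMED CT concept for hypertension
print(describe("38341003", "2.16.840.1.113883.6.96"))  # 38341003 (SNOMED CT)
```

Adding a vocabulary is a one-line registry entry; the exchange format itself never changes.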

The architecture that PCAST asks for already exists, and CDA is one part of it.  The other parts include the HL7 RIM, the HL7 Care Record, and HL7 Care Record Query standards.

More work is needed.  One work item is to make implementation guides easier to use.  HL7, IHE, Health Story and ONC are on it.  Another is to address transport.  NHIN Direct is a simple model, but isn't the robust exchange that is needed for other use cases like access to longitudinal data.

As a platform, the HL7 CDA and Care Record standards, as profiled by IHE, and with the additional rules and vocabulary selected by ANSI/HITSP, build the healthcare language that PCAST wants.

The language is just one part of the platform.  We need transport and we need security and privacy controls on that information.

NOTE:  The PCAST report speaks to the state of healthcare IT as deployed in the majority of institutions today, but DOES NOT speak to the capability of current healthcare IT solutions that are being deployed now. Much of what is currently available from Healthcare IT products supports the language that I just described.
Innovators are already using this language, based on the HL7 RIM, to do very interesting things.  You can aggregate the data in multiple CDA documents into a CDR (via services that crawl, index and identify using existing profiles like XDS and QED).  Then you can query that CDR to get data for a variety of uses (QED).  Then you can apply clinical decision support (IHE CM and RCG -- also using HL7 Care Record), definitions of quality metrics (HL7 QDS), produce reports on quality (HL7 QRDA), or generate an immunization report for a patient (using HL7 QED and HL7 IC as demonstrated by IHE two years ago).  It's all there.  We just have to use it, instead of spending time trying to reinvent it.
There's a huge difference between what is available and what has been deployed.  Advances in the development of these standards have taken some time to reach the healthcare marketplace, but they are getting there.  The reason for that is NOT a technology problem, but rather how Health IT infrastructure is invested in, updated, and deployed in the US.  The problem is not in developing the architecture and specifications, because believe it or not, we've been there and done that.
The challenge is in getting it deployed.  There are several theories on why it hasn't been deployed.
  1. Some fault the standards or the ease in which they can be understood and used. 
    Ease of understanding is perhaps a reasonable argument, but NOT a complete one, because I know that a significant share of vendors in the market DO understand them and have implemented them.  HL7, IHE, Health Story, ONC, and others are working on the ease-of-use problem. 
  2. Others explain that there's no incentive to deploy them. 
    That is also a reasonable argument.  One part of it has to do with who benefits from their use, and that's NOT a technology issue, but a policy one that I won't even attempt to address.  There are incentives through ARRA/HITECH and Meaningful Use which are pushing things forward, starting with exchange of data about visits and aggregated data about quality, but over the longer term (e.g., beyond 2015), more will be needed.  We need to think about how to accomplish that long-term investment in healthcare IT, and make it worth the investment of healthcare providers. 
Ripping and replacing existing work won't solve either problem; it will just set us back to where we were 5-10 years ago, without addressing the fundamental issues.  Our healthcare system, and the IT it uses, is large, complex and emergent in its behaviors.  We need to take the work that has already been done and use it to guide the system toward the behaviors we want to see: use of the healthcare language.  That evolution will enable the revolution that PCAST is looking for.