
Saturday, May 29, 2010

Touring Granville Island in British Columbia

A friend and colleague in healthcare standards turned me onto this place the last time I was in Vancouver.  I didn't have time to go then, but I did today.  Granville Island is small, and it's not really an island.  It's a small peninsula just south of downtown Vancouver.  The place was created by filling in two sandbars in the early 1900's and was originally home to a number of factories.  In the mid to late 1970's, Granville Island was converted to an artistic and cultural center, and it is now home to a number of low-rent studios that reside in the former factory buildings.  There are several theaters, rows of art studios representing all types of art, a public market that has just about any kind of fresh food you can imagine (I picked up some elk and bison pepperoni), and plenty else to look at and spend money on.  I spent probably more than I should have, but felt it was well worth it.

I headed over to Granville Island to go see the Artisan Sake Maker's studio in Railspur Studios and to experience their locally made Osake Junmai Ginjo Nigori Genshu.  I also tried several varieties of their Junmai sake, including the Junmai Nama Genshu, the Junmai Nama, and the Junmai Nama Nigori.  I purchased a couple of bottles of the Sake to take home.  The premium Nigori has just a bit of carbonation, and so has a little more bite to it than your usual nigori sake.  It was a bit surprising but also very good.  Osake is now growing their own rice in Canada and expects to be making Sake with it next year.

Here's just a bit of Sake terminology for you:
Sake: Wine made from water, rice, yeast and koji
Koji: A mold used to turn rice starches into sugar that the yeast can then turn to alcohol.
Junmai:  Pure, the Sake is made only with traditional ingredients and there is no added alcohol.
Nama:  Micro-filtered rather than pasteurized.
Nigori:  Unfiltered sake.  Includes particulates which are stirred up into the wine when serving.  Usually a little bit sweeter and full bodied because of the rice particulate.
Ginjo:  The rice is milled finer than for regular Sake, and contains only 60% of the grain (the outer part is milled away).
Daiginjo:  Even more of the outer grain is milled away, leaving 50% or less of the grain.
Genshu:  Undiluted, with a strength of about 18-21% alcohol (note: Sake is usually more alcoholic than the equivalent volume of wine).

Just like there are wine and scotch regions, there are also "Sake" regions in Japan.  I'm not well enough versed in Sake to know the various regions -- but I'm learning.

Thursday, May 27, 2010

Off to Canada

While most of my friends will be enjoying the long weekend, tomorrow I'm headed off to Canada to help set up one of our systems for yet another interoperability demonstration.  This demonstration will be at e-Health 2010 in Vancouver, BC.  If you happen to be in the area, stop by the Interoperability Showcase booth.

Following that, I head to Milwaukee to help teach a course on IT standards, and then head down to our Barrington office outside Chicago, IL.  Then it's back to Boston.  This follows traveling to Rio, so in three weeks I'll have racked up more than 20K miles, which is about three times what I logged in the entire first quarter.

Wednesday, May 26, 2010

The May HL7 Working Group Meeting

The May HL7 Working Group Meeting was held at the Hotel Windsor Barra in Rio de Janeiro, Brazil.  Getting to the meeting was a challenge for many HL7 members.  United Flight 861 from Dulles Airport in Washington DC was rescheduled due to a mechanical problem, leaving several hundred travelers waiting nearly 15 hours for a new plane.  The replacement flight stopped in São Paulo and did not go on to Rio, which required rebooking for those headed to Rio.  I was one of those travelers and had already spent 6 hours waiting for the plane.  US Air Flight 800 out of Charlotte, NC had issues with navigation instruments and turned back after getting not quite halfway there.  The crew fixed the problem after a while and turned around toward Rio again, only to realize that they had been in the air too long (read: not enough fuel), and so turned around once more to land back in the US.  A flight from Puerto Rico had weather problems landing in Texas, and had to land at another airport to refuel, causing that contingent to miss their connection to Rio.  Most of us eventually arrived, safely I might add.  The meeting was not as lightly attended as Kyoto, but not as heavily attended as meetings in the US.  I'm told that there were more than 200 in attendance in Rio.

I spent Monday with the Structured Documents Workgroup.  We reviewed the week's agenda and approved an updated mission and charter for the workgroup, which will be posted on the SDWG site once approved by the TSC.  We also updated our governance (which again requires further approval).  The Structured Documents workgroup follows the usual workgroup processes with one exception:  quorum is 5 members (with balance), and must include two cochairs.  We also updated our three year plan.  That completed the first half of Monday.  The second half was consumed with hearing about template tooling and template registry efforts.  The template tooling pilot ran an example template through three different technologies to develop templates.  The first of these was the Model Driven Health Tools that several HL7 tooling developers have been writing in Eclipse.  This is a very cool tool, and you can find out more about it at the website.  I recommend taking a look at it.  The second was an illustration of template development using the Static Model Designer being developed by HL7 UK members as a replacement for the current Visio tooling.  This is also pretty cool stuff, but requires more experience in HL7ese.  Both tools were up to the job of developing the templates in the pilot.  The last prototype wasn't discussed due to a mixup in scheduling.

I also gave a report out to the SDWG on progress on the Templates Registry Pilot.  If you've been following me, you know my hard drive had an unfortunate meeting with a coke bottle a few weeks ago.  I don't back up installed software because it inevitably has to be reinstalled again, so I lost several days work and haven't been able to catch up because there was other (more urgent) software to be reinstalled.  I'm back on that this week.  We did review the platform configuration and the Data Model for the registry.

I spent part of Monday with the Marketing Committee discussing the HL7 Ambassador program.  We will be putting together a set of Ambassador talks around Meaningful Use (which includes the CDA/CCD talk that I give), and will promote that session for the Cambridge Working Group meeting in October.  We also discussed the new Ambassador webinars (see the press release earlier today).  I will be giving one on CDA and CCD later this year (and in other venues sooner).

Tuesday early morning was the Education Breakfast.  I slept miserably the night before (a bit of Montezuma's revenge caught me), but managed to attend and was better by Q2.  We reviewed classes and education schedules for the Cambridge WGM, and talked about some new content that needs to be developed (specifically around SOA).

I spent some time Tuesday afternoon with the ITS Workgroup, discussing hData activities and the Micro-ITS.  We reviewed the current set of requirements for a Micro-ITS and the Micro-ITS tool developed by Robert Warden (building from the MDHT work, if I remember correctly).  At that meeting we agreed to move the Micro-ITS work forward as a ballotable item to set constraints around what a Micro-ITS is and isn't, and how they relate to the HL7 RIM.

I tried to join the Security workgroup later in the day, but ran into problems since I hadn't brought my Skype headset downstairs.  It was hard to follow the meeting when everyone else in the room was using Skype and several other participants were attending remotely.  After about 5 minutes we had a network problem and that meeting effectively broke up.  I spent the rest of my time that day slightly misplaced, thinking the ARB meeting on SAIF was Tuesday Q4 (it turned out I was supposed to be there Wednesday Q4).  It was fine anyway, because I got to hear the ARB address Lloyd's comments on SAIF, many of which were well made.

Wednesday morning a couple of members of the SDWG (including me) met with the EHR workgroup to review the current status of the mapping of interoperability requirements from CDA based on the EHR Functional Model.  There were two formal proposals to CDA Release 3 (Audit and Access Controls) whose status was reviewed.  The first was rejected by Structured Documents as it covers a feature of a medical records management system, not a clinical document.  The latter was accepted to support marking the sensitivity of a document with multiple sensitivity codes (these are found in the misnamed "confidentialityCode" attribute in the clinical document).

During lunch I was "interviewed" by the chair of the Patient Care workgroup, who was gathering data on use of the Care Provision messages developed by that workgroup.  I'm the author of several IHE profiles, including Query for Existing Data, Care Management and Request for Clinical Guidance that use these, as well as having developed a prototype implementation for one of our products.  So, he got an earful from me, and I owe him a followup e-mail.

Early Wednesday afternoon I again met with the ITS workgroup, and we reviewed once more the requirements of a Micro-ITS.  Charlie McCay had a number of good questions, and as a result I nominated him to help me write up the Micro-ITS requirements that we agreed would be balloted next cycle.

Late Wednesday afternoon I met with the TSC to review progress on the respelled SAIF (see previous posts on SAEAF Revisited and Demystifying SAEAF ... Maybe).  We talked about the feedback that the ARB and TSC have received on SAIF so far, and reviewed a presentation by the technical services workgroup that explained how HL7 artifacts could map into the framework.  This is where TSC chair Charlie McCay uttered the now famous: "The resolution to that [issue] is that I'm stuffed and I still have work to do" in response to a query for people who could help him finish up some SAIF deliverables.

Thursday morning I spent with the Templates workgroup, and we repeated the discussions that we had jointly with them in Structured Documents Monday afternoon on platform configuration and the data model.  I really must get back to work on the build environment, and get those parts and documentation loaded into GForge or OHT, so I'll sign off with that.

Health Level Seven Offers New Webinar Series

This must be the week for press releases.  This also crossed my desk this morning (for noon release).  Note, I happen to be one of several HL7 Ambassadors.  I deliver Ambassador presentations on the HL7 Clinical Document Architecture standard and the Continuity of Care Document implementation guide.  These presentations are about 20 minutes each with 10 minutes for questions (30 minutes total), or they can be combined into a one hour presentation.



Health Level Seven, Inc.
Contact: Andrea Ribick
+1 (734) 677-7777




Health Level Seven Offers New Webinar Series


The first webinar is an introduction to HL7’s role in developing interoperability standards to bring together the world’s disparate healthcare systems.

Ann Arbor, Michigan, USA – May 26, 2010 – Health Level Seven® (HL7®), the global authority for interoperability and standards in healthcare information technology with members in 55 countries, today announced the first in a series of HL7 Ambassador Webinars - “The HL7 Healthcare Connection.”

The HL7 Healthcare Connection is a free webinar that will be held on Tuesday, June 8 from noon to 1 pm ET.

Grant Wood, senior IT strategist with Intermountain Healthcare's Clinical Genomics Institute, HL7 ambassador and member of the HL7 Clinical Genomics Work Group, will discuss how the implementation of HL7 standards and messaging architecture solves the problems of disconnected healthcare systems and serves as a vehicle for interoperability with disparate healthcare IT systems, applications and data architectures.

HL7's healthcare standards play a key role in the exchange of electronic data in much of today's global healthcare community and represent some of the most widely implemented healthcare standards in the world.  HL7's standards provide a comprehensive framework that improves healthcare delivery, optimizes both clinical and administrative workflow, creates a shared language, and enhances knowledge transfer among all healthcare stakeholders, including healthcare providers and their patients, government agencies, the vendor community, and other related standards groups.

This webinar is free and open to anyone interested in healthcare IT, and is the first in an ongoing series of HL7 Ambassador webinars in development. To register, please visit http://www.hl7.org/.

HL7 Ambassadors present standardized presentations about HL7 as speaker volunteers. They are available to present at local, regional or national conferences. Please contact HL7 at +1 (734) 677-7777 if you would like to schedule an HL7 Ambassador for an upcoming event.

About HL7 International
Founded in 1987, Health Level Seven International is the global authority for healthcare information interoperability and standards with affiliates established in more than 30 countries. HL7 is a non-profit, ANSI-accredited standards development organization dedicated to providing a comprehensive framework and related standards for the exchange, integration, sharing, and retrieval of electronic health information that supports clinical practice and the management, delivery and evaluation of health services. HL7's more than 2,300 members represent approximately 500 corporate members, which include more than 90 percent of the information systems vendors serving healthcare. HL7 collaborates with other standards developers and provider, payer, philanthropic and government agencies at the highest levels to ensure the development of comprehensive and reliable standards and successful interoperability efforts.

HL7’s endeavors are sponsored, in part, by the support of its benefactors: Abbott; Accenture; Booz Allen Hamilton; Centers for Disease Control and Prevention; Duke Translational Medicine Institute (DTMI); Eclipsys Corporation; Eli Lilly & Company; Epic Systems Corporation; European Medicines Agency; the Food and Drug Administration; GE Healthcare Information Technologies; GlaxoSmithKline; Intel Corporation; InterSystems Corporation; Kaiser Permanente; Lockheed Martin; McKesson Provider Technology; Microsoft Corporation; NHS Connecting for Health; NICTIZ National Healthcare; Novartis Pharmaceuticals Corporation; Oracle Corporation; Partners HealthCare System, Inc.; Pfizer, Inc.; Philips Healthcare; QuadraMed Corporation; Quest Diagnostics Inc.; Siemens Healthcare; St. Jude Medical; Thomson Reuters; the U.S. Department of Defense, Military Health System; and the U.S. Department of Veterans Affairs.

Numerous HL7 Affiliates have been established around the globe including Argentina, Australia, Austria, Brazil, Canada, Chile, China, Colombia, Croatia, Czech Republic, Denmark, Finland, France, Germany, Greece, Hong Kong, India, Italy, Japan, Korea, Mexico, The Netherlands, New Zealand, Romania, Russia, Singapore, Spain, Sweden, Switzerland, Taiwan, Turkey, United Kingdom, and Uruguay.

For more information, please visit: http://www.hl7.org/


# # #

The College of American Pathologists becomes IHE Lab Domain Sponsor

This press release crossed my desk this morning:

NEWS RELEASE
CAP STS
500 Lake Cook Road
Suite 355
Deerfield, IL 60015
800-323-4040
847-832-7700
www.capsts.org
www.cap.org/DIHIT

CAP STS CONTACT
Candace Robertson
847-832-7764
crobert@cap.org

IHE CONTACT
Chris Carr
630-368-3739
secretary@ihe.net
FOR IMMEDIATE RELEASE

May 25, 2010

CAP BECOMES IHE LABORATORY DOMAIN SPONSORING ORGANIZATION
CAP and IHE Collaborate to Advance Health Information Interoperability

Northfield, Ill. and Oak Brook, Ill. May 25, 2010—Integrating the Healthcare Enterprise International, Incorporated (IHE) has named the College of American Pathologists (CAP) as the primary Sponsoring Organization of the IHE Laboratory Domain.

Healthcare and industry professionals globally initiated IHE to improve the way computer systems in healthcare share information. IHE brings together healthcare information technology stakeholders to implement standards for communicating patient information efficiently. The CAP, a leader in the practice of pathology and laboratory medicine, includes a division devoted to assisting clients pursuing semantic interoperability for electronic health records (EHRs) and other applications.

The IHE-CAP collaboration will accelerate the process for defining health IT standards and promote health IT interoperability for the laboratory—complementing efforts in many countries to create national EHR systems.

"We are thrilled to have the CAP becoming a sponsor of the IHE Laboratory Domain," said David S. Mendelson, co-chair of the IHE International Board and professor of Radiology and chief of Clinical Informatics at Mount Sinai Medical Center, New York. "The clinical laboratory has been a critical part of IHE's expansion across the spectrum of care and it will be very beneficial to have such an important stakeholder organization driving the adoption of interoperability standards in that domain."

The CAP, as the primary Laboratory Domain Sponsor, will be responsible for supporting domain operations, including the development, publication, and maintenance of IHE Technical Frameworks. Technical Frameworks are globally recognized specifications for the implementation of standards to achieve effective systems interoperability. One issue the Laboratory Domain will address in 2010-2011 is the next generation of Laboratory Device Automation.

The CAP has more than 40 years of experience in healthcare terminology standards development, resulting in the creation of SNOMED Clinical Terms® (SNOMED CT®). Its SNOMED Terminology Solutions Division (STS) leads CAP’s standards and health IT initiatives through its Diagnostic Intelligence and Health Information Technology (DIHIT) team. CAP members and DIHIT staff will represent the CAP in the IHE collaboration.

“The CAP’s clinical and technical expertise in health IT, paired with our long-standing relationships with key stakeholders worldwide, will greatly enhance the development of the Laboratory Domain,” said Kevin Donnelly, CAP STS vice president and general manager. “Our goal is to ultimately improve patient outcomes through interoperable health IT systems that provide data to assist in diagnoses, clinical decision support, and improvements throughout the healthcare system.”

“We are pleased to support the IHE in the Clinical Lab Domain. This emphasizes the importance of pathologist stakeholders’ input to improve quality of care in supporting the pathologist’s role as being central to the patient care team,” said David L. Booker, MD, FCAP, chairman of Pathology, Trinity Hospital, Augusta, Ga. Dr. Booker is a member of the DIHIT Committee, CAP liaison to the IHE Anatomic Pathology Technical Committee, and the co-Chair of the Health Level Seven International (HL7) Anatomic Pathology Work Group (APWG).

The IHE works to promote interoperability in health IT through coordinated adoption of appropriate standards and supports the activities of domains such as Cardiology, Radiology, Laboratory and IT Infrastructure. The Healthcare Information and Management Systems Society (HIMSS), the Radiological Society of North America (RSNA), and the American College of Cardiology (ACC) sponsor IHE.

About CAP STS
SNOMED Terminology Solutions™ (STS), a division of the College of American Pathologists (CAP), is the leading organization in pursuing semantic interoperability for electronic health records by offering customized best-practice terminology implementation and education services. Our goal is to ultimately improve patient care through supporting the pathologist’s role as chief diagnostician/clinical care advisor and advancing interoperable EHRs. The Diagnostic Intelligence and Health Information Technology (DIHIT), a department within CAP STS, is committed to advancing health IT standards, practices, and tools, such as the CAP Diagnostic Work Station initiative; and standardized electronic reporting, including the CAP electronic Cancer Checklists (CAP eCC). The CAP is a medical society that serves more than 17,000 physician members and the laboratory community throughout the world. It is the world’s largest association composed exclusively of board-certified pathologists and is widely recognized as the leader in laboratory quality assurance. The CAP is an advocate for high-quality and cost-effective patient care. For more information, visit www.cap.org/DIHIT or write to snomedsolutions@cap.org.

About IHE
Integrating the Healthcare Enterprise (IHE) is a global initiative dedicated to advancing health information technology by achieving standards-based interoperability. IHE brings together stakeholders to implement standards to address critical information sharing needs. Through its proven process of collaborative development, testing, demonstration and implementation, IHE accelerates the real-world deployment of effective electronic health record systems. For more information, visit www.ihe.net. SNOMED CT® is a copyrighted work of the International Health Terminology Standards Development Organisation. ©2002-2010 International Health Terminology Standards Development Organisation (IHTSDO®). All rights reserved. SNOMED CT® was originally created by the College of American Pathologists. “SNOMED,” “SNOMED CT,” and IHTSDO are registered trademarks of the IHTSDO. All other trademarks used in this document are the property of their respective owners.

###

Tuesday, May 25, 2010



IHE Community,

The IHE IT Infrastructure (ITI) Technical Committee has published the following supplements to the ITI Technical Framework for Public Comment:

  • Cross-Enterprise User Assertion - Attribute Extension (XUA++)
  • Deferred Document and Dynamically Created Content (D3S)
  • Healthcare Provider Directory (HPD)
  • Query Enhancements to Sharing Value Sets
The documents are available for download at http://www.ihe.net/Technical_Framework/public_comment.cfm. Comments should be submitted by June 24 to the online forums at http://forums.rsna.org/forumdisplay.php?f=198.

Monday, May 24, 2010

Governance, SDOs, PEOs and NHIN Direct

Next week I head to Vancouver and then to Milwaukee. In Milwaukee I'll be teaching a generalized course on interoperability standards to Masters students in software engineering, along with one of my colleagues. As part of that course, I teach how to work with standards organizations, and the need for consensus based standards in industry.

As I review the course materials today, I'm looking at some potential disconnects between what I teach, and where I participate. Where I spend most of my time now is in IHE, HL7 and NHIN Direct, and last year, you could replace NHIN Direct with ANSI/HITSP. 

Not too long ago, I spent about an hour getting my ear bent about the governance, processes and status of some of the organizations that I participate with.  IHE International was incorporated in 2009 and its Principles of Governance can be found here.  Anyone who wants to participate in IHE can do so without charge, and after becoming a member, can vote on and participate in any activities.  ANSI/HITSP, while no longer under Federal contract, still maintains its web site.  Documentation about HITSP processes can be found on that site.  When it was active, anyone who wanted to could join HITSP and vote on and participate in any activities.  HL7 is an accredited ANSI Standards Development Organization, and publishes its bylaws and Governance and Operations Manual on the web.  Membership is by fee, but anyone who wants to participate can become a member and can vote on and participate in any activities.  Non-members can also participate in committee discussions on standards and in the HL7 mailing lists.  HL7 standards can also be voted on by non-members for a modest administrative fee.

HL7 is a Standards Development Organization. IHE and HITSP are (or were in the case of HITSP) Profiling and Enforcement Organizations (PEOs -- an acronym I believe to have been invented by Wes Rishel).  But NHIN Direct is the oddest duck of the lot. 

NHIN Direct is an "Open Government" project.  There's some detail about the NHIN Direct project on their FAQ page.  I have not been able to find detailed documentation about governance or process on the NHIN Direct pages.  Some of it is there, but other parts are missing.

For example, there are three ways to participate:
1.  By being a member of the core group (which I happen to be).
2.  By joining the wiki and e-mail lists and participating there (which I also do).
3.  By passively participating using the resources provided by NHIN Direct.

Someone from a large research organization that does quite a bit of government work asked me last week how you get into the first group (his organization wasn't able to, but had tried).  I didn't and still don't have an answer.  I know it's by invitation, but I don't know what the qualifying criteria are, and I have yet to find anything other than what is stated on the FAQ.

Recently, NHIN Direct announced a new co-chair for one of the workgroups.  I wasn't aware that A) there was a vacancy, or B) what the process would be to "run" for that vacancy.  Apparently the process for the selection of leadership in NHIN Direct is not documented either, and I didn't see any call for a vote on the new leadership.  BTW:  I'm not against this particular leader; I think he's a pretty good one, even though we disagree on several issues of substance.

Back to the class that I will be teaching:  I describe six key features of consensus standards bodies.  These features are derived from the HITSP Tier 2 process (word document), US Federal law, and a circular published by the Office of Management and Budget describing policies on Federal use and development of voluntary consensus standards and on conformity assessment activities.  Here are my six features, accompanied by an analysis of how NHIN Direct stacks up:
  • Open -- Membership should be open to all affected parties.  In NHIN Direct, there are two classes of membership, contributors and decision makers.  Decision making isn't open to all affected parties.
  • Balanced  -- Members should come from providers, suppliers and consumers of affected products.  I see some balance in the membership of NHIN Direct, but no documentation of it.
  • Process Oriented  -- The organization should have a defined process.  A very weak link here, as there is little documentation of any process in NHIN Direct.
  • Appeals -- Decisions should be able to be reviewed and appealed.  I don't see any documentation, nor would I know what those processes would be in NHIN Direct.
  • Consensus Based -- Decision making should be based on a consensus of the organization members.  This is one of NHIN Direct's strong points.  It's very clear that everyone has a chance to be heard, and I've actually learned a number of ways to improve consensus building in other organizations where I participate from the NHIN Direct work.
  • Maintenance -- Specifications produced need to provide for ongoing maintenance.  Because there's very little documentation about NHIN Direct, and because it is a "Project" of ONC rather than an organization, it's not clear what the ongoing maintenance process will be.
I very much support the activities that NHIN Direct is working on, because I think that they will enable smaller healthcare providers to exchange clinical data between themselves and other providers.  However, NHIN Direct has quite a bit of work to do before I would even consider putting them into the category of a consensus based standards body.

Which leads me to my final questions:  What should be done with the NHIN Direct specifications when they are complete and implemented?  Should they be run through a Standards Development Organization ballot or voting process?  Should NHIN Direct try to become an SDO or PEO (Profiling and Enforcement Organization)?  What do you think?

Wednesday, May 19, 2010

NIST Testing Plans for Conformance Testing of CCDs under Meaningful Use

The HL7 Structured Documents Workgroup spent more than an hour yesterday talking to a representative from NIST about their conformance testing plans for meaningful use as they are related to Patient Summaries. We spent most of that time talking about the emboldened text in the following paragraph from the conformance test for the certification criteria defined in §170.304 (f) of the IFR.  That conformance test description can be found here: http://healthcare.nist.gov/docs/170.304.f_ElectronicCopyOfHealthInformation_v0.5.pdf:
"Per ONC guidance, the requirement for displaying structured data and vocabulary coded values in human readable form requires that the received XML (CCD or CCR) be rendered in some way which does not display the raw XML to the user. In addition, the standardized text associated with the vocabulary coded values must be displayed to the user. There is no requirement that the actual coded values be displayed to the user, however, the Vendor may choose to do so. The Vendor may also choose to display locally defined text descriptions of the vocabulary codes, however, the standardized text must always be displayed."
The key problem with this paragraph is that it misunderstands the use of controlled vocabularies like SNOMED CT, ICD-9-CM, RxNORM and LOINC. There is no “standardized text” associated with these codes when used in clinical practice, or perhaps it might be better to say that there are a lot of different choices for which text you could use. What is standardized is the code, and the text helps to supply the meaning for the code. But for any given code, there isn’t only one text that could generate the specified code.
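
To illustrate, consider a single SNOMED CT concept: it carries several legitimate descriptions, and a provider's interface vocabulary may add more, so there is no single "standardized text" to test against. Here is a minimal sketch, with an illustrative (not authoritative) description list:

# One code, many legitimate human-readable renderings.  The descriptions below
# are illustrative only; consult the terminology itself for the actual list.
SNOMED_DESCRIPTIONS = {
    "38341003": [                    # concept commonly used for hypertension
        "Hypertensive disorder, systemic arterial (disorder)",
        "Hypertension",
        "High blood pressure",
        "HT - Hypertension",         # a typical interface-vocabulary synonym
    ],
}

def narrative_matches_code(code: str, narrative: str) -> bool:
    # A display test can only ask whether the narrative is *a* valid rendering
    # of the code, not whether it equals one standardized string.
    return narrative.strip().lower() in (
        d.lower() for d in SNOMED_DESCRIPTIONS.get(code, [])
    )

print(narrative_matches_code("38341003", "High blood pressure"))  # True
print(narrative_matches_code("38341003", "Elevated BP"))          # False: local text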

In addition, many provider organizations have developed interface vocabularies into these coding systems. By making providers use specified text in the narrative, the regulation changes not only how providers exchange information, but also the words that they use to provide care, and therefore the way that they provide it. In my experience, this is the best way to get providers to reject an implementation. The text that they use is developed based on many years of experience providing care, and is what they find to be best suited for that purpose.


The point about codes is that they allow computers to talk to each other in ways that computers will understand, without dealing with the possible variations that can be found in narrative. The problem that ONC sees is that providing the codes in a “level 3” coded entry way in CCD is thought to be too complex for some to implement. After four years of testing this in HITSP and IHE, and seeing many commercial and open source implementations that support “level 3” CCDs, I have to argue that it isn’t that complicated, and if you are going to require the “codes” or specific text values in the CCD, it’s better to do it right.

The notion that the “code” or a controlled term needs to appear in the text is one that appears in a different use case, “claims attachments”. In attachments, the codes are used to adjudicate payment decisions, not deal with treatment. Providers can likely tolerate this sort of requirement when it comes to getting paid, but not when it impacts how they provide care. Frankly, I think that the claims attachment work needs to be refreshed to reflect what we’ve learned about communicating patient summaries and clinical documentation in the last three years. It would be better if the claims process simply reused what we are requiring for clinical care. The idea that we pay for the care provided and not some code designed specifically for payment is one I’ve discussed before on this blog.

Even so, if ONC doesn’t want to require use of “level 3” CCDs, I have to wonder what they will do in CCR, which doesn’t have a standardized text representation of the data. Using that standard, you essentially would be providing the code anyway.

The suggestion from Structured Documents was to build off of other testing that NIST is already conducting. In order to show that the narrative in a "level 2" CCD supports the appropriate vocabulary, you need to show that A) the text produced in the CCD is coming from the appropriate list in the EHR system, and B) the EHR uses one of the necessary coded vocabularies in the IFR to maintain that list.

It was a very productive meeting, and I have to give NIST a great deal of credit for reaching out to the SDOs to answer some of these implementation questions. I would not want to be stuck between the rock and the hard place that they are in. They basically have regulatory text from three different places that they have to deal with, and aren't in a position to change any of it to make sense. They just have to make it all work. I continue to be impressed with the work that NIST has done, and their willingness to reach out and ask tough questions, and continue to work through until they get solutions.

Tuesday, May 18, 2010

Questioning the Question

Some days things come out of my mouth that make me stop and want to say: Why didn’t I think of that? The internal conversation continues something like this: But you just did. Okay, why didn’t I think of that sooner? Sometimes it takes the right question at the right time to cause my brain to synthesize a good (and sometimes even really good) response.

The question at this particular point in time was: How can I query for a document containing certain information I need?  I stopped the questioner and asked for more context, as I usually do. Essentially, I'm trying to get at the real problem. The problem in this case was a process where a patient is sent for referral, and certain clinical data is sent with the referral. The clinical data being sent is specified by an implementation guide that indicates required and optional data. In some number of cases, though, the optional data is more important than in others. It depends on the clinical context, and it's hard to judge when it will be valuable.

The questioner didn't have all the details on how often this would occur, or how likely it would be to delay care. The proposed process simply mirrored an existing paper referral process, but put more interoperability requirements around it.

There are two ways to solve this person’s problem. Solution 1 would have been to simply answer his question. I could have done that using at least three different standards or implementation guides or IHE profiles, some of which are more mature than others.

The problem can also be simply resolved using solution 2. Solution 2 makes all of the data necessary to make any referral decision required in the original content sent with the referral. This has a related implementation cost, which is unknown, but can be estimated. The other solution (send the data that is easy to get, and in exceptional cases perform one, and possibly more, round trip communications to obtain additional data when needed) also has costs, likewise unknown but also estimable.

The right answer in this case is to do a bit more digging before coming up with a solution.

Let’s look at some examples using imaginary data. I’ll use $5,000 / year as the generic annual maintenance cost of a single interface, and multiply that value by a factor of 10 for initial implementation of each interface. These are imaginary values. They could be higher or lower, and the purpose is not to scare anyone, just to show an example of the analysis.

Solution 1 has 6 interfaces: two outbound and one inbound on the referrer side, and two inbound and one outbound on the referred-to side. For the sake of argument, let's assume that the two outbound and inbound interfaces are identical; it's just the addition of a query / response pair. That gives us a total of four interfaces that have to be implemented and maintained, with some interrelated moving parts. That adds up to an implementation cost of $200K for the four interfaces, plus $20K per year to maintain them.

Solution 2 is a bit more complicated to implement for the outbound and inbound interfaces, call it a 50% increase in implementation cost, but the same costs to maintain them. Solution 2 has an outbound interface on the referring side, and an inbound interface on the referred to side. The total implementation cost is $150K for these two interfaces, and annual maintenance is $10K / year.

Taking the analysis that far would indicate an immediate win for solution 2, but let’s assume that it wasn’t an immediate win and take the analysis further. Let’s say it was three times harder to implement the more complex interface. You’d be at $300K to implement, and $10K / year to maintain, which would take 10 years to provide sufficient ROI for this solution to become equal in cost to solution 1 (300K + 10K * 10 = 200K + 10 * 20K = 400K).
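
A minimal sketch of the arithmetic above, using the same imaginary figures (the interface counts, costs and multipliers are the assumptions from this example, not real data):

# Back-of-the-envelope cost comparison for the two referral solutions,
# using the imaginary figures from the example above.
ANNUAL_MAINTENANCE = 5_000                     # per interface, per year
INITIAL_BUILD = 10 * ANNUAL_MAINTENANCE        # per interface

def total_cost(interfaces: int, years: int, build_factor: float = 1.0) -> int:
    # Implementation cost plus maintenance over a number of years.
    build = interfaces * INITIAL_BUILD * build_factor
    maintain = interfaces * ANNUAL_MAINTENANCE * years
    return int(build + maintain)

for years in (1, 5, 10):
    solution1 = total_cost(4, years)                    # four simpler interfaces
    solution2 = total_cost(2, years, build_factor=3.0)  # pessimistic 3x build case
    print(years, solution1, solution2)
# At 10 years both reach 400K: the pessimistic Solution 2 (300K build + 100K
# maintenance) catches up with Solution 1 (200K build + 200K maintenance),
# matching the break-even point described above.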

So far, so good, we understand the analysis. But against implementation costs we also have to weigh other possible costs and risks to see what could be tolerated. Solution 1 incorporates a potential workflow delay. Anytime a delay is introduced into a workflow, we have to ask ourselves what the impact would be on patient care, and whether it introduces a risk to the patient. If 1 referral out of 20 needs more data, and the delays introduced in getting that data result in an impact on patient care again 1 time out of 20, is that an acceptable risk? Is one chance in 400 an acceptable likelihood for this risk? I cannot answer this question without knowing the possible impact on the patient (it could even be me). If life threatening, then I certainly wouldn't accept those chances, but if this is simply a minor inconvenience, I might.

If the patient is being referred for consultation on a potential cancer finding, a delay in the referral process could introduce a life threatening risk, but for a routine and non-life-threatening condition, it might not.

Now, let me take this one step forward into implementation guide development. We have two different use cases, and in those, two different potential solutions, each with associated costs, and different risk profiles depending upon the context. The referral process is a common use case that can be as simple as a referral to an ENT for an ear infection, or as complex as a consult for a patient with a possible cancer diagnosis. The choices we make for the content to be exchanged are broad because the use case is broad, and without more specifics, we cannot require more data because it would introduce an implementation cost that might not be acceptable for a wide variety of common use cases. That has to be weighed against other use cases that require more stringent controls on data exchange, because those address rarer, but disproportionately more complex and costly, care.

How do you weigh the value of these two use cases against each other? What choices do you make about the data that is required and optional to cover both these cases (assuming you even know about these two cases in the first place)? What is the value of the solution for the common but non-life-threatening case? And what is its value for the rarer but life-threatening cases?

I don’t have a good process to explain to others on how to decide this sort of thing. For one, I’m not clinical, so I have to rely on the clinical judgment of others to make many of these assessments. Secondly, there isn’t really a way to objectively measure these risks. To do so would be to put a value on the cost of human life, which may be technically feasible, but is arguably infeasible in so many other ways.

There are some tools that we can use. Cost / benefit calculations, even using fictional data can be very revealing. If you are right even on the order of magnitude here, the results to the real world are often comparable[citation needed]. Risk assessments are also extremely valuable, and should be done even before cost / benefit calculations. Even with these tools though, expanding a use case to provide broader coverage can make things too complicated to completely analyze using these tools. Then, you just have to go with your gut.

In some ways, standards development is still an art. Someday it will become a science, and when we get there, I’ll know it, because there will be nothing else left for me to do but enjoy life. I hope that day comes when or after I decide to retire.

[citation needed] I wish I could find the research that backs this up. When I do get into a Masters program it would be an interesting topic for further study.

Thursday, May 13, 2010

Today I have my Marketing Hat On

If you've been following the Where in the World is XDS Map, you've seen that it's grown, but I know I'm missing at least half of what is out there.

So now I've created a Survey using Survey Monkey that will allow you to supply me with the information I need to update the map.  I'm about to go hunting down HIEs, vendors and anyone else I can find who can help me fill in this map.  Don't wait for my e-mail, get to me first!

And if you happen to run across any information about XDS availability in any other form, product announcements, open source or otherwise, please let me know.


View Where in the World is CDA and XDS in a larger map

Legacy, Leapfrog, Lifecycles and a Rant

In what I consider to be still the early days of my software development career,  I was responsible for updating the decompression code in the spelling correction product that my company developed and licensed to other vendors.  You are probably still using it now, because that technology was eventually purchased by Microsoft.

When I worked on it, it was already old code, somewhat hard to read, and rather crufty, but it worked.  Of course, I hated it.  Nobody should ever have to deal with legacy this old.  A couple years later I had to update it again to support a new search enhancement product.  I had more leeway then, so I rewrote that code and incorporated the new memory mapped IO technology that was by then included with the Windows 32 APIs.  Overall I more than doubled the speed of that module, and it worked better, was easier to read, and all that.  I no longer hated it, because I had rewritten it.

I later profiled the whole spelling correction system, only to find out that my improvement only improved the total speed of the system by twenty percent.  With Moore's law still in effect, my improvement was outpaced by the technology in a little more than three months.  Which is what the rewrite took from start to finish.

A friend tells a similar but probably apocryphal story, of optimizing code in a Fortran compiler.  Ten years later a bug is reported, which is traced back to his optimization, which as was later discovered could never work.  In ten years of deployment, the code he optimized was used by only one customer.

The moral of this story is to put your efforts where they will have value, and understand what that value is.  The value of reworking something that already works comes from several places:
  1. More maintainable.  This is a big plus, especially for "legacy code" that needs a lot of maintenance.
  2. More extensible.  This can be valuable, but only if you have some idea where those extensions are going to come into play.
  3. Faster, better use of resources.  Look out for Moore's law.  Reductions by an order of magnitude or more are needed to be really useful.
It took me years to break the habit of immediately trying to fix something that already worked.  What I began to realize was that I was spending more time "fixing" things, with little to show for it, and not enough time building off that which already worked.  I began to look more critically at what was worth fixing and what wasn't.  I relearned how to build from someone else's advanced position by leapfrogging ahead of it.

So, how does this relate to standards?  It takes quite a few years for a standard to go from idea to working product, and a few years more before it is "widely available".  One author I happen to respect estimates about 3 - 5 years in this paper on the Standards Value Chain.  I've worked with some pretty cool standards, and the people who helped to write them, including HTML, XML, XSLT, XPath and XML Schema.  My own experiences with these standards, which now "run the net", reflect that judgement.  I also see that holding true with healthcare standards, in more or less the same time span, depending upon the novelty of the technology and the entrenchment of that which it attempts to replace.  (One of the things holding back HL7 Version 3 is that it attempts to replace working "legacy" technology, without enough return on investment.)

Now, on to health information exchange, and standards and implementation guides for that, and the purpose of this rather long post.  I find myself dreadfully annoyed at a poster who comments on the "legacy" of implementation guides like XDS, XDR and XDM.  These guides are about where I'd expect them to be in an adoption trend that needs about 3-5 years before you see them become available in products.

Usually, I don't respond to "attacks" like this one, but because I'm really annoyed at people who don't read specifications they are responsible for today, I feel like cutting loose.  I'll probably be sorry later.  Here's the text I found annoying:

The NHIN-Direct project has given us the opportunity to step back from legacy technologies and consider a greenfield solution to allow physicians to actually talk to each other for the benefit of their patients. It has also proven that the HIT community is shackled to its bloated legacy constructs and has become incapable of admitting its missteps or daring to think outside the box. We wouldn't want to lower market entry barriers and put pressure on the incumbent vendors to actually deliver value in a truly competitive market, would we? I'm just thankful that we have seen the user cost of HIT solutions significantly decrease over the past 5 years. (Oops)

First of all, I'm involved and participating in NHIN Direct, and contributing code and solutions, so please don't consider me to be "shackled" to anything.  Yes, I'm building off of earlier work.  I find that to be helpful, not harmful.  Reuse, according to one source, is five times more effective than rework.

XDS, XDR and XDM were "invented" for the purpose of allowing physicians to actually talk to each other for the benefit of their patients.  XDS resulted purely by accident, but the idea was right, and it was purely "thinking out of the box".  That was six years ago, and back then it was a greenfield solution.  It took two years to get that solution completed to the point where it was ready for implementation, and another two years to see it become readily available in products.  These days it is available not only from vendors in the HIT community and in open source, but also from major IT vendors.  That's about on, or even slightly ahead of, schedule.  Over that time, IHE went back and corrected some "mistakes" producing XDS.b (and retained some others), and all that is available too.

Lower market entry barriers?  How about 10 open source projects supporting these standards, ready for implementation in healthcare environments, with real world implementations using them?  Free code is a pretty low entry point.  Code that's been tested by somebody else is also a big plus.  I have to compete against open source, as well as every other major IT Vendor out there, who also supports that set of standards.

Now, what will NHIN Direct deliver? Specifications and working code that support features already available through other solutions. Translating this into my own words, I believe that these will be prototypes that will take some time to develop into hardened products that will be available on the market about 1-2 years from completion of this first phase, if there is a demand for them. I expect there will be, and I also expect that NHIN Direct will break into the adoption curve early.  What is the value here?  The value is connecting the more than 400,000 physicians out there in small practices, which is why I'm participating in these activities.  If I can make XDS/XDR or XDM simple enough for use in NHIN Direct, I will, and would be stupid not to try.

Now, back to my original point.  What is the value of starting completely over?  I'll admit that something is to be gained, and I likely know where many of the problems are in the existing specifications.  With relatively few exceptions, there's very little technology we use today that wouldn't benefit from reinvention (including the automobile, my biggest pet peeve).  But what is the cost?  What will happen to the dozens of implementations of that technology within a few hours of where I live (including the hospital I would use should I need to)?

Richard Soley, CEO of the Object Management Group, talks about the N+1 standards problem.  According to Richard, a new standard rarely if ever reduces the number of standards used to accomplish a task.  Instead, it creates yet another choice that needs to be implemented.  From my own perspective, I have to acknowledge the truth of that.  XML hasn't completely replaced SGML yet even though it's clearly better (and in fact, you are using SGML technology right now, I'll bet).

So, back to the value statement.  What else could I be doing more productively with my time?  Personally, I think there's a lot to be said for figuring out how to deal with the "Clinical Decision Support" integration problem, where there is not a lot of traction.  I've been working on that for the past couple of years.  Of course, by the time that problem gets solved, I'll have to deal with someone else telling me that's a legacy solution and that they have a better one.

Please, beat me to it.  Then I can go back to playing leapfrog.

Wednesday, May 12, 2010

patientId and sourcePatientId and XDS Metadata

I've been working on the NHIN Direct project, and the Content Packaging workgroup keeps running into a thorny problem with XDS metadata around patient identifiers and demographics.  There are three metadata elements that are relevant:  patientId, sourcePatientId and sourcePatientInfo.

Part of the problem is that requirements behind why particular items make it into final specifications are not often reported in the specifications themselves (SDO's and PEO's take note, traceability from requirements to specifications is important).  So understanding why patient identifiers are present in the metadata is important to coming up with a solution.  As best as I can reconstruct from my own knowledge, here are the functions of these metadata elements:

patientId
  • XDS: Identifies the patient whose information is being shared to the XDS registry that receives it.  Used to ensure the data is put with the right records.
  • XDR and XDM: Identifies the patient whose information is being shared to the receiver.  May be ignored by the receiver.  It could be the receiver's patient identifier when known in advance.

sourcePatientId (XDS, XDR and XDM)
  • Identifies the patient whose information is being shared, from the perspective of the sender of that data.  Used to enable audit trails and diagnostics.

sourcePatientInfo (XDS, XDR and XDM)
  • Provides further confirmation of the patient whose information is being shared, from the perspective of the sender of that data.  Used to enable diagnostics.

Given that this is reconstructed from memory, and because my memory is incomplete, I invite your comments on these details.

The problem is that a simple message coming from e-mail, containing a CCD or CCR, may need to be routed through a gateway that translates it into XDM or XDR metadata.  You can get just about everything you need from the "routing metadata" used by NHIN Direct.  That metadata includes:  from, to, date, and message id.

Submitter = From
IntendedRecipient = To
submission date / time = message date
message id can be traceably mapped to a submission identifier
hash, size and mimeType can be computed from the content.
typeCode and classCode can be set to generic values indicating "clinical data", or for certain types of documents (e.g., CCD or CCR), can be determined by inspection.
Author can be determined by inspection of some content, and can be assumed to be "From" for others (just plain text), but would have to be omitted elsewhere, and that's ok, because it's only required if it is known.
confidentialityCode could be slugged to a fixed value established according to policy by the gateway.
creationTime can be determined by inspection for CCD or CCR, but not for some other formats.  It could use message date as a proxy.
sourcePatientInfo can be determined based on inspection of CCD or CCR documents, but not other formats, and is optional in the metadata.
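
For the fields that can be computed mechanically, here is a minimal sketch of what such a gateway might do, assuming the XDS document hash is the usual SHA-1 over the document bytes; the function and dictionary keys are illustrative, not taken from the NHIN Direct specifications:

import hashlib
from email.message import EmailMessage

def derive_entry_metadata(msg: EmailMessage, part) -> dict:
    # Illustrative mapping of routing metadata and content-derived values onto
    # XDS DocumentEntry-style fields; the field names here are hypothetical.
    body = part.get_payload(decode=True)          # the attached CCD, CCR or text
    return {
        "submitter": msg["From"],
        "intendedRecipient": msg["To"],
        "submissionTime": msg["Date"],
        "submissionId": msg["Message-ID"],
        "hash": hashlib.sha1(body).hexdigest(),   # XDS document hash (SHA-1)
        "size": len(body),                        # size in bytes
        "mimeType": part.get_content_type(),
    }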

But patientId and sourcePatientId are required, and can be obtained by inspection of only some formats.  So what can be done for those?

Based on the intention for these fields, when using XDR and/or XDM, they could be based on anything that uniquely identifies the patient, and both could contain the same value.

There are several possible solutions, but the one that I would propose is the following:

When the patient and message have a one to one relationship (the message is about only one patient, which is the usual case), the message id provides a source of unique identity.  This has the benefit of not being a Medical Record Number and possibly not being individually identifiable information.  According to the Internet Message Format, the message id must be unique, and there are a couple of ways to make that identifier map into the appropriate identifier (CX) data type used in the metadata.  One of these is to assign an OID used to identify patients by these message identifiers.
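
As a rough sketch of that idea (the OID below is a placeholder chosen purely for illustration; a real assigning-authority OID would be established by policy, and the formatting would need to be checked against the metadata specification):

# Hypothetical construction of patientId / sourcePatientId from the
# RFC 5322 Message-ID, using a placeholder assigning-authority OID.
PLACEHOLDER_OID = "2.999.1.2.3"   # example-only OID, not a real assignment

def patient_id_from_message_id(message_id: str) -> str:
    # Strip the angle brackets from a value like "<unique@example.org>".
    local_id = message_id.strip().lstrip("<").rstrip(">")
    return f"{local_id}^^^&{PLACEHOLDER_OID}&ISO"

# Both patientId and sourcePatientId could carry the same derived value.
print(patient_id_from_message_id("<20100512.1234@direct.example.org>"))
# -> 20100512.1234@direct.example.org^^^&2.999.1.2.3&ISO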

Of course, none of this is necessary when the content contains an XDM format zip file, because that will already have all of the necessary metadata in it. 

Tuesday, May 11, 2010

I'm on TV

Well, not yet actually, but I soon will be.  TipTV to be exact.  Last week I video-taped about 6 hours of training on CDA, CCD and XDS at our TipTV studios.  The studios are located in the GE Healthcare Institute where we hold many of our training and education activities just outside of Milwaukee.

The training group works with eMedia Studios and Services to do the filming.  Jim was my director, and Keith the camera operator.  I was Mr. Boone, or "the other Keith", for the two days of filming.  A description of the studios appears on eMedia's web page here, and you can even take a walkthrough tour.  I was in Studio B (the first studio you see on the tour).

Taping for the day started out scary.  I had just finished putting the finishing touches on my presentations the night before and saved them.  But an inappropriately placed coke bottle ended up bathing my laptop on the way to the studio.  I managed to recover the presentations by swapping the hard drive into another computer before that hard drive finally did go belly up.  So in the first two hours we put twenty minutes of CDA training into the can.  It took me a bit, and some prompting from the director, to get back into my groove.  We polished off the CDA class and the XDS class that same day.

Thursday morning, I dropped my computer off to be replaced, and we video-taped the CCD presentation.  In 12 hours, we put 12 hours of video "in the can".  They taped both me and what I was projecting on my laptop, so 6 hours of training became 12, all of it filmed in HD.  I finished in time to pick up the replacement laptop before I headed back to Boston.

The taping they did of me used a green screen ("chromakey") background, so who knows, I could show up in Paris, or Hawaii, or with the pyramids behind me in Egypt in the final production.  We are hoping to have this finished sometime in the next six weeks, but have to fit it in with other productions that are being edited.  The studios include advanced editing equipment, a graphics department, and lots of other "Cool toys".

Although I've acted in theater before, and had some stage productions video taped, I never taped in a studio before.  This was a new and interesting experience for me, and one I hope can be repeated.  TipTV has quite a bit of continuing education on the use of diagnostic imaging equipment, but this is the first time they've put effort into developing training on standards.  I hope it's successful.  The lead for this project feels that it will be, because as she says, where else can you get it, but from the horse's mouth.

My response?  Whinny

Monday, May 10, 2010

News from IHE Europe

The dust has finally settled (literally), and most attendees of the IHE European Connectathon have successfully made it home. Air traffic after the Connectathon was disrupted by the eruptions of an unpronounceable volcano in Iceland, to the point of disrupting IHE meetings in the US at the beginning of May for some.

In the meantime, two IHE National Deployment committees have joined IHE International, bringing the total number of regional deployment committees to nine.  IHE Suisse was created in March of this year, and IHE Turkey in November of 2009.

A recent press release from IHE Europe (the Regional Deployment organization for the European continent) observes "that several IHE profiles were endorsed for national programmes in several countries where IHE national initiatives are active."  Some of those initiatives can of course be found on the Where in the World is XDS map, and others will be added when they become known.

At this year’s European Connectathon, a total of 2,250 interoperability tests were carried out, and of these 1,950 passed. 80 profiles and 94 systems were tested in five domains (Radiology, Cardiology, IT Infrastructure, Laboratory and Patient Care Coordination) by 66 companies, bringing together over 250 engineers.  Also occurring with the event were meetings of DICOM working groups and other interoperability activities.

Peter Künecke was re-elected co-chair of IHE-Europe back in April.  He reports that "we have seen a significant shift from a testing platform to a true European Forum on interoperability."

IHE Europe has created a new role, Director of Interoperability, to monitor EU projects in which IHE is involved, including the European Patient Smart Open Service (epSOS), the Healthcare Interoperability Testing and Conformance Harmonisation (HITCH) project, and Calliope.  Past User Co-Chair Karima Bourquard has been appointed to this newly created position.

The 2011 Connectathon will be held at Leopolda Storica in Pisa from April 11 to 15.

-- Keith

P.S.  The post is an amalgamation of information found in 3 recent press releases from IHE Europe.  For more information, please contact:

Peter Künecke
IHE-Europe Vendor Co-Chair

vendor.cochair@ihe-europe.net


Jacqueline Surugue

IHE-Europe User Co-Chair
user.cochair@ihe-europe.net

Thursday, May 6, 2010

You are being watched

When I started this blog almost two years ago, I almost immediately hooked it up to Google Analytics.  Google Analytics provides an amazing amount of data about you, my readers.

I recently did a little study about who reads this blog using that tool.  I gathered up information about the top 500 network sources hitting this blog, and then segmented those sources into 9 categories (a rough sketch of how such a bucketing could be automated appears after the list below):

Network:  These sources are communications companies providing internet services to the general public.  Also included in this category are hotels, libraries and other methods of public access that I could clearly identify.  About 55% of all visitors come from these sources, and they account for about 70% of all visits.  After figuring this out, I threw out all of these sources, since they tell me essentially nothing about the readers: I cannot tell where these readers are employed.

Vendors:  This includes anyone remotely identifiable as selling IT products for the healthcare industry.  Vendors account for approximately 40% of the remaining visitors, and also for about 40% of all visits.

Universities and Research Organizations account for about 20% of non-network visitors, and 15% of all visits.

Healthcare provider organizations include about 15% of the visitors, and 15% of the traffic.

Governmental agencies include a little less than 15% of the visitors, and a little more than 20% of the traffic.  There are fewer readers in government than elsewhere, but they seem to be paying more attention.  I am certainly heartened by that statistic.

Payers, Quality Organizations, and a few odd ducks account for about 10% of visitors, and a little less than 10% of visits.

The smallest group is consultants, and these range in size from 5-10 person organizations all the way up to 1000+ person organizations.  They account for 5% of the visitors to this site, and about 2% of the traffic.  Basically, the consultants are not paying much attention to me.  Some of them get quite a bit of my feedback directly, and don't necessarily need to read this blog to know what I think.  The rest probably don't care.  I'm not sure what to make of that, but I'm not losing any sleep over it either.

If you've done the math, you realize that my percentages don't quite add up (the non-network visitor shares above total roughly 105%, for example).  There are rounding errors; I rounded to the nearest 5%.
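
For the curious, here's a rough sketch of how that kind of bucketing could be automated.  It's purely illustrative: the keyword lists and the function are invented for this example, not the actual criteria I used.

    # Illustrative only: bucket Google Analytics "service provider" names into
    # the reader categories described above.  Keywords are invented examples.
    CATEGORY_KEYWORDS = {
        "Network": ["comcast", "verizon", "hotel", "library"],
        "Vendor": ["health it", "medical software", "imaging systems"],
        "University/Research": ["university", "institute", "college", "research"],
        "Provider": ["hospital", "clinic", "medical center"],
        "Government": ["state of", "department of", "ministry of", "federal"],
        "Payer/Quality": ["insurance", "health plan", "quality"],
        "Consultant": ["consulting", "advisors"],
    }

    def categorize(source: str) -> str:
        """Return the first category whose keyword appears in the source name."""
        name = source.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(keyword in name for keyword in keywords):
                return category
        return "Other"

    print(categorize("Example University Medical Informatics Dept"))  # University/Research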

Many, many years ago, when I worked for a "free magazine", we used to have to publish annually a Controlled Circulation report.  This blog is "Free media", so you can consider this my controlled circulation report. Magazines produce these reports so that their advertisers can be aware of who the audience is.  I'm producing this report not so that I can advertise (see the new policy page), but so that both you and I know who is reading this blog.

     Keith

P.S.  An amusing anecdote:  While reading through the list of networks being used to access this blog, I discovered that a couple of governmental agencies reading it are in the intelligence business.  It seems it was my turn to watch the watchers ;-)

Wednesday, May 5, 2010

Transmitting Attachments using XDR

I encountered an act of Healthcare IT Coordination this morning that deserves mention.  I spent about 30 minutes with a team from the Centers for Medicare and Medicaid Services discussing the capabilities of XDS and XDR.  They want to be able to use NHIN CONNECT to allow providers to submit medical records to them when CMS is auditing claims.  What is truly remarkable here is the reuse of existing work from the NHIN project to share documents for clinical use, moving it toward the financial realm.

If this pilot succeeds, it seems possible that others could use this same process in support of claims attachments.  In a world with too many standards, could this be a case where we see fewer rather than more?

Also notable is that CMS has been working with the SSA, which has a very similar use case, and is leveraging that organization's work as much as possible.

Tuesday, May 4, 2010

Meaningful Interoperability is not defined by Meaningful Use

I spent an hour today on a call with NIST (along with many other HL7 leaders) regarding the testing framework they are presently developing for meaningful use.  One of the issues that NIST correctly identified is that the standards selected for meaningful use are not sufficient to support interoperability.  They pointed out to ONC that the SDOs have spent many person-years developing implementation guides that ensure interoperability.  Because these were not selected by the IFR, NIST has been directed to fill the gaps in a few short weeks.

To resolve this problem, NIST is working with HL7 and other SDOs to identify what is enough to ensure interoperability.  They are, in fact, creating "baby" implementation guides.  I would not want to be stuck between their rock and hard place right now.  The danger here is that years of consensus building and implementation efforts could be rendered completely irrelevant if the wrong choices are made. Hopefully the choices made by NIST and the SDOs will enable the use of, and not conflict with, existing guides without requiring their use.  Yet those same choices need to be strict enough to ensure interoperability.

If you look at the schedules that NIST has to commit to, this is insanity.  HL7, IHE and HITSP spent years developing standards and implementation guides that do exactly what is needed for various use cases in the IFR (immunizations being one of those). The number of person-hours spent developing and refining these guides and building consensus around how they should look is immense.

The EHRA also recognized this as a problem, and is currently reviewing a set of recommendations to its members on what implementation guides should be used.

My recommendation to NIST is to implement NO MORE than what is in the rules in their testing framework, even if it doesn't guarantee interoperability.  Then, to ensure interoperability, the next part of the process would be to verify that products DEMONSTRATE interoperability with other systems using those standards.  The vendor community has been doing this at IHE Connectathons for years.  Demonstration is one of four ways to validate conformance (the other three are Test, Inspection and Analysis).  Combining a demonstration of interoperability with verification that the product correctly implements the base standard would be an effective way to meet the goals of the IFR and the Certification process.

In this way, we can use the standards specified in the IFR, and the implementation guides that we chose to support, without concerning ourselves with reinvention, insane schedules, and introduction of incompatibilities.

The end of an ERA

Started with ARRA.  I just hope none of it is in error.

HITSP closed its doors quietly last Friday.  The web site will be retained at least until the new Harmonization process is announced, and I have my own copies of all the content, as I've mentioned previously.  I've heard that we could hear something in the next month on the new harmonization process, and by my lights, that's soon enough.  I will point out again that better continuity of care could have been established by the coordinators caring for our nation's healthcare IT.  They knew the planned discharge date and could have executed accordingly.

My chief concerns right now are whether the new harmonization process will be as open to input from interested parties as other consensus-based standards models, and whether it will address the time, scope, and quality triangle that wasn't addressed in assigning HITSP deliverables.  Consensus takes time to establish.  Processes being experimented with in other initiatives (e.g., NHIN Direct) seem to be more open in some ways, and more closed in others.  Time will tell whether they work.

Whatever that new process will be, I intend to participate.  At the same time, I've used the HITSP hiatus to get more involved in local activities around Healthcare IT.  That's introduced me to new people and places where there's still a great need for education about standards. 

In the last few years in HITSP, I've written or edited more than a ream of paper's worth of text, and read ten times that. Most of it has been useful, some of it more than others.  I've met scores of really bright, really energized people.  One thing that I've learned about all of them is that they are all extremely passionate, and really care about healthcare.  It's not hard to understand why when you start listening to their personal stories.

Volunteers (and leaders) will always get credit for the work that was done in an organization like HITSP.  But there is one group of people that really deserves special mention: The contractors that kept HITSP going through thick and thin.  Thanks to people like Cindy, Don, Lori, Gila, Anna, Bob, Suzi, Sarah, Gene, and Allyn (to name a few), HITSP was able to deliver as much as it did.  Yes, they got paid for it, but frankly, there were times when they didn't and they still kept plowing.  Most of us were funded to do this work on company time. They were on their own time and often weren't compensated nearly as adequately as they should have been based on the hours they put in.
The allocation of work to the allocation of hours never computed for many of them.  They often signed up for a 20-hour stint and sucked up 40- and 60-hour weeks (I've got the Skype chats and e-mail trails to prove it).

There are a few volunteers who I also think deserve a special mention.  Thank you Scott, for teaching me more about medications than I ever thought I wanted to know, and also to you Rachel, who did the same for X12N transactions.

Finally, I want to thank the many nurses who participated in these activities.  Nothing is more tenacious than a nurse who feels that a patient isn't being cared for right.  Yes, the doctors and the geeks will always be there, but the job won't be done right until the nurses finish and bless it.

Bless you all, and I hope to see you in the next phase.

Monday, May 3, 2010

Get out and Educate

I spent last Thursday and Friday at the Governor's Conference on Healthcare IT hosted by Deval Patrick.  It was a very interesting conference, and a bit eye-opening.

I was impressed by two things at this conference:
1.  The overall desire to improve healthcare.
2.  The lack of knowledge about the status and availability of healthcare IT standards.

It appears to me that Standards do not exist in the minds of most of the attendees of this conference unless they are recognized by the Federal Government.  The lack of awareness of standards was apparent in many sessions.  Few people were aware that the basic standards used in several New England HIE initiatives were the same.  Fewer still understood what would be needed to close the gaps so that these initiatives could communicate with each other.

I attended one session where a consultant from Accenture reported to the attendees: "There are no standards."  I expect that sort of response from organizations that lack critical awareness of standards, but not from those that fund a CTO role in a Healthcare Standards Organization like HL7 International.  That response, by the way, followed a statement earlier in the day from John Halamka reporting the exact opposite.  Rest assured that I had a follow-up comment on that remark.

A more accurate statement would be that current regulation does NOT recognize the existing standards for many of the functions required under meaningful use.  There's a reason for that, as elaborated upon by Dr. David Blumenthal the day before.  We lack a great deal of the deployment experience necessary to determine which standards should be selected for each one.

At the whole conference, there were perhaps four people I recognized who've been involved in healthcare standards activities, including HITSP, WEDI, HL7, IHE and others.  If we want national and regional policy to make sense with regard to technology, then more of us who have some notion of what that technology can do need to be involved.  It's time to get out of the cube or the basement office and into the public eye to educate those making decisions.

The challenge for Standards organizations is that our efforts are done on a volunteer basis.  That includes marketing.  We need to rethink that.  If you want someone to use your products, you need to make appropriate investments to ensure that decision makers are aware of them and their benefits.

 My mantra for this year: Get out and educate.