
Friday, January 29, 2010

No-Cost Extension to Contract announced by ANSI/HITSP

Recently, I've seen a few media reports that HITSP is over, and at least one clarification/retraction.  To ensure that everyone knows what is really going on, I've been given permission to reproduce this communication from the ANSI/HITSP secretariat to the members...



Document Number: HITSP 10 N 459

Date: January 28, 2010

TO: Healthcare Information Technology Standards Panel (HITSP) and Public Stakeholders - - FOR INFORMATION

FROM: Michelle Maas Deane

HITSP Secretariat

American National Standards Institute

RE: No-Cost Extension to the HITSP Contract

Please note that the following remarks were made to the HITSP Panel meeting on January 25, 2010 by Frances E. Schrotter, Senior Vice President and Chief Operating Officer, ANSI.

ANSI is pleased to announce that the Government has granted HITSP a no-cost extension to the current contract which will continue through April 30th, 2010. Among other things, this will enable HITSP to have a presence at the upcoming HIMSS conference, and support the quality reporting activities being demonstrated in the Interoperability Showcase there.

Because the extension is ‘no-cost’, there will necessarily be a ramp-down in HITSP operations which will have the following characteristics:

  1. The primary HITSP Contractors, ANSI (including GSI Health), HIMSS, Booz Allen Hamilton, and ATI, will remain engaged to operate the initiative during the extension period.
  2. The public website will remain active at least through the extension period, to allow for access to HITSP’s body of work by members and industry.
  3. HITSP will not convene the Board or Panel during the extension period beyond 1/31/2010. Further, none of the committees (Technical nor Coordination) will be officially convened during this period.
  4. While we will not have a need to continue to engage our many subcontractors who have served as writers and facilitators, I want to take this opportunity to express our deep appreciation to each and every one of you. Without you, we could not have accomplished what we did and we all owe you a great debt of gratitude. We remain hopeful that an opportunity will arise in which we will reengage you. In the mean time, we welcome your voluntary participation in HITSP activities during this period, recognizing your important role as a stakeholder in this process as well as staff.
  5. Current term limits notwithstanding, ANSI would be very pleased if the existing leadership and membership of HITSP would maintain their existing positions in the organization during the extension period, albeit with reduced activity. Specifically –
  • ANSI would be very pleased if our Chair, Dr. John Halamka, would agree to remain as Chair during this period, serving as a public face of HITSP, and particularly being available to represent HITSP in the scheduled Standards Town Hall during the HIMSS10 Conference and Exhibition in March.
  • ANSI would be very pleased if the current HITSP Board members would agree to maintain their seats during the extension period, and agree to remain listed on the website as such, in the event a circumstance arises in which their service is needed. If there are Board members who do not wish to maintain their seats, we ask that they notify Fran Schrotter or Michelle Deane from ANSI, or alternatively, Lee Jones, HITSP Program Manager.
  • ANSI would be very pleased if the current member organizations of the Panel would agree to continue their membership, and remain listed on the website as such. If there are Panel members who do not wish to maintain their seats, we ask that they notify Fran Schrotter or Michelle Deane from ANSI, or alternatively, Lee Jones, HITSP Program Manager.
  • ANSI would be very pleased if the current leadership of the Technical Committees, Tiger Teams and Coordination Committees would agree to continue in their offices, to be available to respond to issues triaged to them from ANSI regarding the HITSP body of work, as needed. If there are Committee/Tiger Team co-chairs who do not wish to maintain their seats, we ask that they notify Fran Schrotter or Michelle Deane from ANSI, or alternatively, Lee Jones, HITSP Program Manager.
  • ANSI is not disbanding the HITSP committees, though they are not anticipated to be formally convened to accomplish official HITSP work during the period post 1/31/2010. We understand the sense of community engendered in these groups, and therefore will continue to make available the Technical and Coordination committee membership areas of the SharePoint collaboration site so that those currently with access credentials will maintain their level of access unless surrendered. Similarly, the listservs will remain active, though the privileges to directly post to them will be restricted. Please see Michelle Deane from ANSI with any specific questions on those matters.
  • During the extension period, ANSI asks that those holding any of the roles just described would seek prior-authorization from ANSI before speaking authoritatively on behalf of HITSP in a formal setting, such as a publication or public presentation or speech. This will allow for ANSI to better manage the official public posture, messaging, and obligations of HITSP. Please contact Fran Schrotter or Michelle Deane from ANSI, or alternatively, Lee Jones, HITSP Program Manager, for any such authorization.
Regarding other matters of import for the current period, we offer the following commentary in response to recent inquiries:
  • The current work deliverables of the committees will all be delivered to ONC with full communication of their current state. This makes it available to the Government for all future deliberations they deem relevant to leverage the strong work products of HITSP. Further, those portions of the current period’s work that are ready for public comment will, in fact, be published for public comment, and made available on the HITSP website in typical fashion consistent with activity. The comment tracking system will be opened to receive comments, and those members currently with privileges in that system to see those comments will retain that level of access. We believe that convening the comment period enhances the work of the Technical Committees/Tiger Teams by garnering broader input on the subject matter, thereby increasing the value of the artifacts for the industry and the government. There will be a couple of notable differences between this upcoming comment period and historical ones, namely:
    1. There is currently no plan for formal disposition of the comments gathered, and such work will be deferred until the resumption of normal HITSP activity.
    2. ANSI will export all comments gathered during the comment period, and publish them on the HITSP public website for broader access by members and industry.
  • We recognize the importance and relevance of the recently-published Interim Final Rule (IFR), and Notice of Proposed Rule Making (NPRM), and have received inquiries regarding HITSP mounting a formal unified response to submit during their respective comment periods. We are unable to support a unified response inasmuch as it would require gathering individual input, synthesizing same into a coherent response, and an additional convening of the Panel to review and approve the response. Unfortunately, these activities are not feasible within the parameters that were outlined regarding the extension period we are entering. However, HITSP certainly encourages its member organizations to respond individually, including the HITSP perspective into your commentary. If there are other ideas you may have as to how HITSP may be helpful toward that end, please let us know so we can determine the best way we can be helpful there.




Thursday, January 28, 2010

IFR vs IDE

I'm still playing catch up from being away from the office for two weeks, so today's post comes late and is a little disjointed.  Recently I had a very good experience with standards that I'd like to share.  The other day, an old computer I inherited suffered a terminal power supply failure, but I needed to get data off its hard drive.  I bought an external drive enclosure, plugged the drive into the standard EIDE connector, checked the very nicely labeled jumper connections on the drive, and installed it into the enclosure.  The enclosure contains a very nice little gadget that bridges between the EIDE connector of the drive and a USB connection.  I plugged the drive into the USB connection and my home laptop recognized it nearly immediately.  I was able to use some diagnostic tools to fix the partition corruption and will later offload the files we wanted.

Now, a USB connection has 4 connectors, two for power and two for signal.  An EIDE connector is one of those big wide things that uses ribbon cables.  One transmits data serially and the other in parallel.  There are also a lot of details about interrupts, addressing, and other stuff on the EIDE connector that all get serialized in USB communications.  Fortunately for me, there's a nice little simple-minded piece of hardware that goes from the EIDE connector to the USB connector.  It is very simple-minded because these two standards (and the USB standard for media) are very well specified.  The most perplexing thing would have been the drive jumpers, but those were also very well labeled and the instructions on the enclosure were very clear.  The way the two operate is very different, but the functionality is the same, and bridging between them is nearly painless and VERY inexpensive.  This is interoperability at its best.

On the flip side, when we look at doing something that should seem very simple (connecting an interface), it can take quite a bit longer.  You'd think that all you need to do there is hook up a network cable, enter an internet address (and port), install a few certificates, and be done.  In fact, to achieve interoperability at the transport level, that is all you need to do, and I can train someone to do it repeatedly and well in less than an hour.  It's the other issues that take up all the time.  What is the format of the data?  What has to be there?  What should be there?  Et cetera.  What are the policies that surround the use of the interface?  And on and on.

What I realized from this experience is that SOAP vs REST is a pointless discussion SO LONG AS THE FUNCTIONALITY IS IDENTICAL where it matters (I remain unconvinced, but let's ignore that for now).

1.  Transport is easy, and if functionally two transports are the same, bridging between them is also easy (and cheap).  SOAP, REST, I really don't care.
2.  Deciding on content to exchange is hard.
3.  Crisp documentation is a big help.

That hard drive?  The hardest thing about making it work again was selecting the appropriate content format to exchange.  Fortunately for me, there were only 18 choices for the file system format, and I knew exactly which one would work for my situation; for other things, the crisp documentation was really valuable.  For healthcare content exchanges, there might be 18 choices for the first item you need to decide, and 18 more choices for each of the items that remain.  That's something like 18^18, which is a pretty big number (~40 sextillion).  Those are the real choices that we need to reduce.

Let's look at a very simple example from the recent IFR.  In order to communicate information between an EHR and public health agencies, we are directed to use HL7 Version 2.3.1 or HL7 Version 2.5.1.  That's a start, even if it is only a choice between two items.  Look further.  Which of the more than 100 HL7 V2 messages should be used?  The IFR doesn't say.  Could I use a BAR message (post a charge transaction) in HL7 V2.5.1 to send information to public health?  That seems remarkably unlikely.  Given what I know, it should be ADT, ORU or MDM, and probably the second or first.  Having gotten that far, there are a number of different segments that have to be debated.  Where does the patient ID go?  PID-2, PID-3 or PID-4?  Well, according to the standard, it could be any of these...  Moving on, what must PV1 contain?  Do you even need a PV1?
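
To make that last ambiguity concrete, here is a minimal Python sketch (the identifiers, assigning authority and patient name are all made up) that builds two PID segments, each syntactically legal against the base standard, carrying the same patient ID in different fields:

    # Hypothetical illustration only: two PID segments a receiver might see.
    def pid_with_id_in_pid3(patient_id):
        # PID-3 (Patient Identifier List) is where most implementation guides
        # expect the medical record number to appear.
        fields = [""] * 9
        fields[0] = "PID"
        fields[1] = "1"                              # PID-1 Set ID
        fields[3] = patient_id + "^^^GOODHEALTH^MR"  # PID-3 Patient Identifier List
        fields[5] = "DOE^JANE"                       # PID-5 Patient Name
        return "|".join(fields).rstrip("|")

    def pid_with_id_in_pid2(patient_id):
        # PID-2 (Patient ID) is older, but still shows up in 2.3.1-era interfaces.
        fields = [""] * 9
        fields[0] = "PID"
        fields[1] = "1"
        fields[2] = patient_id                       # PID-2 Patient ID
        fields[5] = "DOE^JANE"
        return "|".join(fields).rstrip("|")

    print(pid_with_id_in_pid3("12345"))  # PID|1||12345^^^GOODHEALTH^MR||DOE^JANE
    print(pid_with_id_in_pid2("12345"))  # PID|1|12345|||DOE^JANE

Without an implementation guide to pin this down, both are "standard", and a public health receiver has to be prepared for either.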

So, let's stop worrying about whether everything should be SOAP or REST, and start worrying about these more challenging issues.  I'm concerned about what a system needs to do for public health surveillance.  The current IFR has basically solved the EASY problem without any guidance for the hard one, and there is no crisp documentation identified that would help.

If you happen to have any insight or ideas about what is intended for public health reporting, or what SHOULD go there, I'd very much like to hear your thoughts and would also ask you to share them with others.

Wednesday, January 27, 2010

IHE Product Registry Open for Submissions and other IHE news



IHE Community,

IHE North America Connectathon Sees Expanded Participation
The IHE North America Connectathon, which took place January 11-15 in Chicago, set new records for number of participants, systems tested and successful tests performed. In addition to nearly 500 individual testing participants, more than 120 attendees took part in a one-day conference associated with the testing event. Read more.

Interoperability Showcase at HIMSS10
The annual Interoperability Showcase will be presented at the Healthcare Information and Management Systems Society (HIMSS) 2010 conference March 1-3 in Atlanta, Ga. With 73 participating vendors and organizations, the HIMSS 2010 Interoperability Showcase illustrates how interoperability drives improvements in the quality, safety and efficiency of care. Read more.

Patient Care Device User Handbook for Public Comment
The IHE Patient Care Device domain has released its User Handbook 2010 edition for public comment. Healthcare administrators who make purchasing decisions, clinical engineers, IT systems analysts and medical technology evaluators will find the handbook a valuable resource. It describes how to use IHE PCD profiles to improve how the integration capabilities of systems and devices are selected, specified, purchased and deployed. Comments are requested by February 12th, 2010. Read the handbook.

Product Registry Open for Submissions

IHE has developed a new resource for developers of healthcare IT systems to publish information about the interoperability capabilities of these systems. The IHE Product Registry will enable vendors to develop and publish IHE Integration Statements for systems that are available commercially or as open source code. Users will be able to browse or search this information by system type, IHE profiles and actors implemented, company name and other criteria. The Product Registry is ready to receive submissions now at http://product-registry.ihe.net/. We also welcome feedback from submitters and users on how the registry might be improved.


The Product Registry will replace the IHE Integration Statement page at http://www.ihe.net/. That page will no longer be maintained. Companies that have published Integration Statements linked to that page are strongly encouraged to publish their information in the Product Registry.



Tuesday, January 26, 2010

Meaningful Use NPRM Comments

As promised, I've been reviewing the Meaningful Use NPRM.  Below you can find some of my comments as this proposed rule relates to the use of certified EHR technology and the Meaningful Use IFR.  In the following, Roman text is quoted from the NPRM.  Text in italics is my commentary on it.  In this review I have only focused on issues of clinical content used to meet the objectives, the measures of them, and the relationship of the NPRM to the IFR.  I have not addressed any issues related to payment, schedules, et cetera.

§495.6 Meaningful use objectives and measures for EPs, eligible hospitals, and CAHs.
(c) Stage 1 criteria for EPs and eligible hospitals or CAHs.
On each of the Measure sections of this part, the text "of all unique patients seen by the EP or admitted to an eligible hospital or CAH" should be amended to include "during the EHR reporting period".  This applies to sections (c)(2)(ii), (c)(3)(ii), (c)(4)(ii), (c)(5)(ii), (c)(7)(ii), (c)(11)(ii), (c)(13)(ii) and (c)(14)(ii).  This is a minor but necessary clarification.  I don't believe that HHS intended for the measure criteria to include all patients ever seen by an EP, eligible hospital or CAH, but the measures don't explicitly state the reporting period.


(5)(i) Objective.
(A) Preferred language.
(B) Insurance type.
(C) Gender.
(D) Race.
(E) Ethnicity.
(F) Date of birth.
(G) For eligible hospitals or CAHs, the date and cause of death in the event of mortality.
Subpart (5)(i)(B) should indicate what is meant by Insurance Type.  There are several vocabularies used to describe insurers, including those found in the 4010 and 5010 implementation guides from X12N and others found in HL7.  At the very minimum, we need to know what distinctions are important here.
Subpart (5)(i)(D) and (E) should reference OMB guidance on the reporting of race and ethnicity.  It will not be helpful to report race and ethnicity if everyone does it differently and the results cannot roll up to the OMB categories.


(5)(ii) Measure. At least 80 percent of all unique patients seen by the EP or admitted to the eligible hospital or CAH have the demographics specified in paragraphs (c)(5)(i)(A) through (G) of this section recorded as
structured data.
This section should be amended to state 'recorded as structured data, or an indication that the patient declined to provide this information or does not know it.'  This change is needed because under OMB guidance and as elsewhere defined in healthcare standards (e.g., HL7), race and ethnicity are self-declared by the person being so classified, and such classification should be voluntary.
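
As a purely hypothetical illustration of what that amendment lets a system capture, the record below distinguishes a value the patient provided from one the patient declined to give.  The field names are my own, and the code shown is from the CDC Race and Ethnicity code set; nothing here comes from the rule itself.

    # Hypothetical structured demographics record; field names are illustrative.
    demographics = {
        "preferred_language": "eng",
        "insurance_type": "commercial",
        "gender": "F",
        "race": {"status": "declined"},   # patient chose not to self-report
        "ethnicity": {
            "status": "provided",
            "code": "2186-5",             # Not Hispanic or Latino
            "code_system": "2.16.840.1.113883.6.238",  # CDC Race & Ethnicity
        },
        "date_of_birth": "1947-01-21",
    }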

(c)(6)(i) Objective.
(1) Height.
(2) Weight.
(3) Blood pressure.
(B) Calculate and display the body mass index (BMI) for patients 2 years and older.
(C) Plot and display growth charts for children 2 to 20 years including body mass index.
(ii) Measure. For at least 80 percent of all unique patients age 2 years or older seen by the EP or admitted to the eligible hospital, record blood pressure and BMI and plot the growth chart for children age 2 to 20 years old.
The measure in subpart (6)(ii) requires the recording of BMI, not height and weight.  However, BMI can be dynamically computed from height and weight as recognized in subpart (B), and the latter two have other clinical uses (e.g., weight-based dosing).  I would recommend that the measure be altered to replace BMI with height and weight.  This measure would then be aligned with 45 CFR §170.302(e) (Record and chart vital signs) found in the IFR, which indicates that a system should "...electronically record, modify, and retrieve a patient's vital signs including, at a minimum, the height, weight, blood pressure, temperature, and pulse."
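
The computation itself is trivial, which is part of the argument for recording height and weight and deriving BMI when it is needed.  A minimal sketch (the units and rounding are my own choices, not anything from the rule):

    def bmi(weight_kg, height_cm):
        """Body mass index = weight in kilograms / (height in meters) squared."""
        height_m = height_cm / 100.0
        return round(weight_kg / (height_m ** 2), 1)

    print(bmi(70.0, 175.0))  # 22.9 -- derived on demand, never stored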

Also, 45 CFR §170.302(e)(3) in the IFR states "Plot and display growth charts. Plot and electronically display, upon request, growth charts for patients 2-20 years old.", but (6)(ii) requires that the growth chart actually be plotted for children age 2 to 20 years old.  In this case, the NPRM should be altered to state that the EP, eligible hospital or CAH has enabled functionality to plot a growth chart for children age 2 to 20 years old.

Why?  While an annual review of the patient's BMI should be performed, it should not be made necessary for every visit made by the patient.  It may not be relevant for the condition for which the patient is being treated (e.g., a referral to an ENT for an ear infection), and would require providers to engage in additional activity in order to be a meaningful user, without a specific medical benefit to the patient.


The modified section (6)(ii), as I suggest rewording it, appears below:
(6)(ii) Measure. (A) For at least 80 percent of all unique patients age 2 years or older seen by the EP or admitted to the eligible hospital, record blood pressure, height and weight, and
(B) The EP, eligible hospital or CAH has enabled functionality to plot a growth chart for children age 2 to 20 years old.


(d) Additional Stage 1 criteria for EPs.
Under this section, several references are made to use of certified EHR technology to report, transmit or provide information.  However, these transmissions can be performed in a number of ways, only a few of which conform to use of the standards selected by the Meaningful Use IFR and which would be required for certification.  For example, prescriptions could be ordered by the certified product using FAX technology rather than the selected NCPDP SCRIPT standard.  I would like to see these sections clarified to make clear what form of report, transmission or provision of this information is acceptable.
 

(4)(i) Objective. Send reminders to patients per patient preference for preventive/follow-up care.
(ii) Measure. Reminder sent to at least 50 percent of all unique patients seen by the EP that are 50 years of age and over.
The objective and measure are not coordinated.  There are a number of reminders (e.g., immunization) that are appropriate for patients under 50 years of age.  I would recommend removing the age constraint on the measure.  Yes, this will increase the burden on providers to remind patients of necessary treatment, but to compensate, I would suggest that the percentage be reduced to 25% of all unique patients.



(5)(i) Objective. Provide patients with an electronic copy of their health information (including diagnostic test results, problem list, medication lists, and allergies) upon request.
(ii) Measure. At least 80 percent of all patient requests for an electronic copy of their health information are provided it within 48 hours.
(6)(i) Objective. Provide patients with timely electronic access to their health information (including diagnostic test results, problem list, medication lists, and allergies) within 96 hours of the information being available to the EP.
(ii) Measure. At least 10 percent of all unique patients seen by the EP are provided timely electronic access to their health information.
I happen to like this one, as it basically gives patients the right to electronic access to their information without some of the rigamarole I've had to go through in the past.  However, these two basically state similar things, with two different measures of performance.  I would remove one of these and alter the other to include the requirements of the first.  For example:
(6)(i) Objective. Provide patients with timely electronic access to their health information (including diagnostic test results, problem list, medication lists, and allergies) upon request within 96 hours of the information being available to the EP.
(ii) Measure. At least 80 percent of all patient requests for an electronic copy of their health information are provided it within 96 hours of the information being available to the EP.

This restatement also eliminates the issue of having to rely on patient participation to be seen as a meaningful user, as would be the case in the existing measure under (6)(ii).


(8)(i) Objective. Capability to exchange key clinical information among providers of care and patient authorized entities electronically.
(ii) Measure. Perform at least one test of certified EHR technology's capacity to electronically exchange key clinical information.
As noted under my comments on the Meaningful Use IFR, the capability to communicate key clinical information crosses the boundaries of provider type, and so this requirement should be moved up to section (c).  All provider types should be able to receive key information regardless of the provider type that communicated it.  Definitions of "key information" produced by a provider type might be retained under section (d) and section (e) and referenced in section (c).


(e) Additional Stage 1 criteria for eligible hospitals or CAHs.
Under this section, several references are made to transmit or provide information, or to report "in the form and manner specified by CMS." However, these transmissions can be performed in a number of ways, only a few of which conform to use of the standards selected by the Meaningful Use IFR and which would be required for certification. I would like to see clarification made to these sections to be clear what form of report, transmission or provision of this information is acceptable and ensure that it is aligned with the standards selection (e.g., PQRI).


(3)(i) Objective. Provide patients with an electronic copy of their health information (including diagnostic test results, problem list, medication lists, allergies, discharge summary, and procedures), upon request.
(ii) Measure. At least 80 percent of all patient requests for an electronic copy of their health information are provided it within 48 hours
(4)(i) Objective. Provide patients with an electronic copy of their discharge instructions and procedures at time of discharge, upon request.
(ii) Measure. At least 80 percent of all patients who are discharged from an eligible hospital or CAH and who request an electronic copy of their discharge instructions and procedures are provided it.
Again I like these, but they are repetitive and can be combined.  I would merge them into one requirement:

(3)(i) Objective. Provide patients with an electronic copy of their health information (including diagnostic test results, problem list, medication lists, allergies, discharge summary, discharge instructions and procedures), upon request.
(ii) Measure. At least 80 percent of all patient requests for an electronic copy of their health information are provided it within 48 hours.

(5)(i) Objective. Capability to exchange key clinical information (for example, discharge summary, procedures, problem list, medication list, allergies, and diagnostic test results) among providers of care and patient-authorized entities electronically.


§495.332 State Medicaid (HIT) plan requirements.
    ...
(f) Optional--proposed alternatives. A State may choose to propose any of the following, but they must be included as an element in the State Medicaid HIT Plan for review and approval:
    ...
(2) (i) Additional requirements for qualifying a Medicaid provider as a meaningful user of certified EHR technology consistent with §495.4 and §495.316(e) of this part.
(ii) A State may propose additional meaningful use objectives beyond the Federal standards at §495.6, if they do not require additional functionality beyond that of certified electronic health record technology. See also §495.316(e).
§495.8 Demonstration of meaningful use criteria.

(a) Demonstration by EPs. An EP must demonstrate that he or she satisfies each of the applicable objectives and associated measures under §495.6 of this subpart as follows:
(1) For CY 2011,
(iii) For Medicaid EPs, if, in accordance with §495.316 and §495.332, CMS has approved a State's additional criteria for meaningful use, demonstrate meeting such criteria using the method approved by CMS.
I'm not particularly in favor of adopting standards only to allow them to be altered or modified so that we wind up with 56 different requirements across the country.  While the States need to have input in how they deal with Medicaid recipients, altering the meaningful use criteria on a state-by-state basis will not be beneficial to patients or providers countrywide.  I'd like to see §495.332 clarified to state that "functionality" includes the selected standards.  For example, the electronic transmission of prescriptions could functionally be performed using standards other than the one selected in the IFR.

Monday, January 25, 2010

IHE Announces Dose Compositing Supplement for Public Comment

The IHE Radiation Oncology Technical Committee has published the following profile for Public Comment:


Dose Compositing
For this profile, the term Dose Compositing is used to denote the process of combining information from two spatially-related 3-D dose matrices (represented as DICOM RT Dose objects). Two use cases are supported by this profile. The first use case (Registered Dose Compositor) involves accepting two dose instances and a spatial registration instance and combining the spatially-registered doses to produce a new dose instance. The second use case (Compositing Planner) involves accepting a (prior) dose instance and a spatial registration instance and creating a new treatment plan and dose instance(s) based on the prior dose.

This Supplement can be found at
http://wiki.ihe.net/index.php?title=Frameworks#IHE_Radiation_Oncology_Technical_Framework

IHE Sponsors welcome comments on this document and the IHE initiative. They should be directed to the discussion server at http://forums.rsna.org/ or to:

Director of Research
American Society for Radiation Oncology (ASTRO)
8280 Willow Oaks Corporate Drive, Suite 500
Fairfax, VA 22031
ihero@astro.org

Comments will be accepted on this supplement until February 26, 2010.

What happens to HITSP Now?

As many of you know, ANSI/HITSP's contract with ONC expires on January 31st.  Many have assumed that with the expiration of this contract, HITSP would also disappear, but this is NOT the case.  ANSI/HITSP was created in 2005 by ANSI in collaboration with HIMSS, the Advanced Technology Institute (ATI) and Booz Allen Hamilton, prior to any government contract.  Major funding for HITSP activities over the last four years has come from HHS through the award of the ONCHIT-1 contract to HITSP, and other contracts have also been awarded.

The expiration of the ONCHIT-1 contract will have several impacts on ANSI/HITSP, but the organization is not disappearing.  Leaders of ANSI and of HIMSS have indicated in previous communications to HITSP members that there were plans to continue the organization after the expiration of the ONCHIT-1 contract.  HITSP also has another contract with CMS that continues (as I understand it) through the 2010 HIMSS Interoperability Showcase where HITSP specifications used for quality reporting will be demonstrated.

Carol Bean of the Office of the National Coordinator indicated in her comments to the HITSP Panel that there will be an RFP for an organization to replace HITSP, which will be "coming soon".  As always, ONC can state very little about any pending issue that has not been released through official channels.  She did indicate that there is funding available to support continued communication through HITSP for its harmonization activities (although that cannot be used to respond to any RFP).  Communications in HITSP include conference calls, mailing lists and the HITSP web site.  I'm sure we will hear more from the HITSP program management team about pursuit of that funding opportunity in the near future.

John Halamka also referred to the RFP and indicated that it will very likely result in a differently named organization; he suggested the Standards Harmonization Collaborative.  This is similar to the name used in Canada, as Mike Nusbaum reported here earlier this year in A Canadian Perspective on Standards Harmonization.  The Canadian Standards Collaborative operates under the custodianship of Canada Health Infoway.  In Hello again, it's me, stirring up the pot I talk about what a similar organization might do in the US.

One component of this new organization would address one of the common issues mentioned in various communications on the web with regard to access to some of the standards, and at the same time address a longstanding issue in HL7: the lack of an HL7 US Affiliate.  Some of the details of what being an affiliate entails are described in the 2009 Affiliate Agreement Form, available as a Word document from the International Council page of the HL7 Web site.  Among the functions of an International Affiliate are:
  • To represent the interests of the Affiliate realm to HL7 through voting on HL7 ballots, participation in HL7 governance, and through a seat on the International Council
  • To make HL7 standards available to affiliate members
  • To be able to localize HL7 standards (Word document) for use in the Affiliate's realm, including the ability to specify vocabularies used with HL7 standards
I look forward to the RFP for the replacement for HITSP.  Many of us who have been volunteers and leaders in the HITSP activities will certainly be active in any replacement, and I intend to be one of those who are. 

On other topics, I have begun review of the NPRM with respect to how it aligns with the standards selected in the IFR and will be posting those comments tomorrow.

    Keith

P.S.  There have been some concerns raised within HL7 with regard to how a US Affiliate could impact HL7's revenue.  Realistically, I don't believe this will be a large impact.  I do not believe that many US organizations that are currently members of HL7 directly would defect to the US affiliate just because it provides a less expensive way[1] to get access to the standards.  In so doing they would lose the principal benefit of HL7 membership: the ability to vote on HL7 standards and governance.  A US Affiliate would have those same privileges, but they would be exercised on behalf of the affiliate in its entirety, not by individual members.  Most HL7 members that I know of find the most significant benefit of membership to be the ability to vote, rather than access to the standards (although that is also important).  HL7 members have much more influence in voting and governance of HL7 International, which is as it should be.
[1] Our brethren in other countries can access HL7 standards by being a member of the affiliate organizations.  Affiliate membership is often much less expensive than HL7 membership.  HL7 membership ranges from $1000 to more than $18,000 depending upon your organization's revenue or budget.  In comparison, an organization can join HL7 UK for £650 + VAT (~$1230 USD), HL7 Australia for $350 AUD (~$315 USD), or HL7 India for 10000 INR (~$215 USD).

Friday, January 22, 2010

Template Registry

There were plenty more people here in Phoenix on Friday than usual.  That's because Phoenix is having a major rainstorm, rather than there being a ton of great topics being discussed on this last day of the HL7 Working Group Meeting.  However, there was one topic that I did especially stay for:

Templates Registry Pilot Kickoff

The Templates workgroup hosted a morning joint meeting with the Structured Documents, Patient Care and Vocabulary workgroups to kick off the Templates Registry Pilot project.  This project builds on the requirements that came from the Templates Registry Business Requirements project, which can be found on the HL7 Templates Registry GForge site.  We reviewed a slide deck describing these requirements at a high level.

We spent the rest of the meeting going over what our first steps would be and starting to put together a plan for the first few iterations of development.  A large number of people in the room volunteered to participate in various aspects of using the registry and designing some of the components, but we are still looking for resources to help with the development.

Iteration I will consist of a key design document on the registry metadata content, and development of one major component to support registration of templates and viewing of a template registration.  The registry metadata content will likely be derived from the ISO 15000 eBusiness Registry Information Model, which provides a schema for registry metadata.  There are open source implementations of these specifications available on SourceForge, and the registry standard is freely available on the OASIS web site.  It will, by necessity, include some limited ability to integrate with terminology services, likely through a very simple interface, and will also house a lightweight template repository for those contributors who cannot store their artifacts elsewhere on the web.  The first UIs will be very light on features.
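
Since the metadata design document is itself an Iteration I deliverable, nothing below is decided.  It is only my sketch of the kind of record a template registration might carry, loosely in the spirit of ebRIM; every field name is hypothetical.

    # Hypothetical template registration record for the pilot registry.
    template_registration = {
        "id": "2.16.840.1.113883.10.20.1.11",  # template OID (example value)
        "name": "Problems Section",
        "description": "Section-level template for the patient's problem list.",
        "status": "active",                    # e.g. draft | active | retired
        "version": "1.0",
        "submitter": "Example Organization",
        "classifications": ["section-level", "CDA R2"],
        # Where the full artifact lives if it is not stored in the light-weight
        # repository hosted by the registry itself.
        "external_location": "http://example.org/templates/problems-section.xml",
    }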

Iteration II will add UI to support terminology lookup and review processes.

During the meeting we identified a new requirement not previously captured: the ability to mark templates that an organization is using.  This could be used to make the community aware of who is using a particular template for the purposes of notification (which we did cover in the requirements), but it could also be used to help build a community around templates.

In parallel with these efforts, we will need to review available technology for the notification infrastructure, user credentialing and audit, and possible infrastructures to incorporate that support CTS.

All in all it was a successful kick-off, and at least 15 people signed up to help in different phases.

     Keith

Implementation Technology Specifications

I met today with the Implementation Technology Specification (ITS) Workgroup to discuss ITS issues related to the HL7 CDA Standard, both for the current CDA Release 2.0 and also CDA R3 currently being developed by the HL7 Structured Documents Workgroup.

One Schema to Rule them All and Bind them
In the first meeting, we discussed Grahame Grieve's proposal for a RIM-based ITS.  This ITS is, I think, rather important for the representation of HL7 Version 3 messages and documents.  The RIM-based ITS uses the HL7 RIM semantics to produce a single schema that describes a RIM-based artifact (message or document).  The beauty of the RIM-based ITS is that it contains something on the order of 50 or so classes.  This is a volume that can be readily taught to engineers, rather than the current collection of 50 domains, 1000+ interactions, and I'm certain an order of magnitude more "clones"[1].  It also associates the RIM class names with the XML element names so that element names are meaningful.  This eliminates much of the cognitive dissonance caused by the current XML ITS.
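
As a rough illustration of the naming difference (my sketch of the flavor of the idea, not Grahame's actual schema), compare a clone-named element with what a RIM-named equivalent of the same encounter might look like:

    # Today's XML ITS: the element name is the clone name invented for one design.
    clone_style = """
    <encompassingEncounter>
      <effectiveTime value="20100125"/>
    </encompassingEncounter>
    """

    # A RIM-based ITS: the element is named for the RIM class it restricts, and
    # the semantics ride on structural attributes rather than the clone name.
    rim_style = """
    <act classCode="ENC" moodCode="EVN">
      <effectiveTime value="20100125"/>
    </act>
    """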


The ITS workgroup agreed to develop a scope statement for a project that would develop this as an HL7 standard.


The μ ITS 
The idea of the μ ITS is spawned by two other initiatives, as I stated in a previous post: hData and the Green CDA work of Alschuler and Associates and Bob Dolin.  Taking the ideas of these two projects a step further, I came up with the concept of the μ ITS.  We discussed this idea in the ITS meeting along with the other two ideas, and came to the conclusion that they address three areas of concern that have some overlap and some differences.  The ITS work group agreed to develop a scope statement to be reviewed and sent up the chain.  I'm writing the first draft with collaborators from MITRE (developers of hData).  Structured Documents agreed to write a scope statement for development of a proof of concept for the Green CDA using the CCD templates, and to work with the ITS and Implementation and Conformance work groups on it.

Now, here's how I think this all breaks down:
The μ ITS is a collection of principles describing a micro-ITS, its required properties, artifacts and deliverables.  hData is a transport mechanism that uses one or more micro-ITS implementations to exchange information.  Green CDA is an instance of a micro-ITS.  A fourth project is a μ-Processor.  The μ-Processor takes as input a collection of business rules governing an exchange described in a controlled fashion, some business names, and an algorithm for unifying them.  As output it produces a micro-ITS.  The specification of the μ ITS will be informed by the development of a prototype μ-Processor.


The RQ-ITS

Finally, there is one last ITS I'd like to mention in passing.  It is derived from the work of the HL7 Clinical Decision Support work group on the URL-based implementation guide for Context-Aware Information Retrieval (Infobutton).  That specification is a RESTful query for information pertinent to a clinical act in a particular context.  It occurs to me that several HL7 interactions, some of which are queries and others of which are retrieval operations, could benefit from a similar specification.  The RESTful Query ITS presents a set of rules by which an HL7 message that retrieves a resource is converted from the HL7 model presentation (an R-MIM) into an HTTP GET request.  Interactions which are appropriately designed can be readily and quickly implemented using these rules.  I believe that the URL-based Infobutton specification is an implementation that could very much inform this sort of ITS.  One of the things that would likely go into an RQ ITS is the application of business names to an HL7 model artifact.  The business names would then become the parameters of the query.
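
In that spirit, the conversion amounts to flattening the model into named query parameters on an HTTP GET.  A small hypothetical sketch follows; the parameter names and codes are illustrative of the style, not a normative list from the Infobutton guide.

    from urllib.parse import urlencode

    def build_get_request(base_url, params):
        # Flatten business-named parameters into an HTTP GET query string.
        return base_url + "?" + urlencode(params)

    url = build_get_request("http://example.org/knowledge", {
        "mainSearchCriteria.v.c": "233604007",                # pneumonia (SNOMED CT)
        "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.96",  # SNOMED CT code system
        "patientPerson.administrativeGenderCode.c": "F",
        "age.v.v": "63",
    })
    print(url)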

Summary
Today was a very productive day.  I was able to help coordinate the efforts of two different committees trying to reach similar goals.  Throughout there was a spirit of collaboration and even more importantly, the willingness to look at innovation in HL7.   Not everyone feels comfortable with these directions.  There are some slippery slopes that we could slide down with no way back up.  A few pitons have been driven in to rope us off, and even those don't seem all that substantial.  However, we do have to listen to our customers, and these initiatives are certainly attempts to meet their needs.

I'm not sure that Green CDA is something that HL7 would have considered a few years ago, nor do I believe that we would have had the necessary experience to approach it as we are today.  The same is true of the μ ITS.  I'm hopeful that we'll see significant progress on both of these initiatives in Rio in May, and look forward to playing with some of the ideas that we've been exploring this week.  Tomorrow I'll tell you about another toy that I plan on playing with in the coming months ... Template Registries.



     Keith


P.S. I hadn't realized that I had created a triple pun with the name μ ITS. 
  1. The Greek letter MU is one character before the Greek letter NU, and the MU ITS is a step back from the New ITS.  
  2. When translated to its English pronunciation MU is an acronym for Meaningful Use.  
  3. Finally, the μ symbol is used for the metric prefix micro, also meaning small, and a μ ITS is intended to create smaller and simpler messages (this is the one I missed)

[1] A clone is a renamed copy of a RIM class that has been restricted by a working group to a subset of the full RIM semantics. The name of the clone in an HL7 message or document appears as the XML element name, but it "carries" no semantics.

Thursday, January 21, 2010

Attaching Backwards and Forwards

Claims Attachments
I met with the Claims Attachments workgroup today.  This is one of the groups that have made CDA what it is today, and they didn't even know it.  This group has been working for more than 10 years to get Attachments part of US regulation.  Their work was done on time.  The specifications were ready shortly after CDA Release 1.0 became a standard more than 8 years ago, and they were updated again about 4 years ago to support CDA Release 2.0.  We've simply been waiting for the powers that be to finish the regulation required by the HIPAA laws.

It is because of the work of this group that we had the narrative structures and codes for more than 10 different kinds of clinical documents already specified when HL7 and IHE began the work on CDA Release 2.0 implementation guides 4 years ago.  Their efforts contributed to the standardization of Discharge Summaries, Referrals, Consultations, History and Physicals, Laboratory Reports, ED Reports, Nursing Notes, Operative Notes, Procedure Notes, Progress Notes, and yes, even the CCD.  These have appeared or soon will appear in numerous implementation guides from HL7, IHE and ANSI/HITSP.

Even though the attachments regulation has never been finished, implementation guides that are compliant "computer-decision variants" (otherwise known as Level 3 documents) of the Laboratory claims attachment implementation guide are being exchanged today in at least three different states (one state-wide), and similar documents are being exchanged worldwide!  The same is true for human decision variants (Level 1 and 2) for at least three different document types in the Clinical Reports guide.  The only thing holding Clinical Reports back from being "computer-decision variants" is differences between the attachment requirements for use of billing codes and minor structural variances from the CCD work that followed after those guides were done.

Continuing the spirit of recognition from yesterday, these are results to be very proud of.  I'd like to commend the committee on a job very well done.  In the ten years since your inception, you've very quietly and very much in the background changed the healthcare landscape.  The tables of LOINC codes for different document types in the AIS guides are the very same tables that I and others used in HL7 and IHE in creating our very first (and second and third) implementation guides.

As it stands today, the most recent proposed regulation is two years old, and there is no known date when it will be final.  The Health Reform bill now in jeopardy has provisions in it to bring this back in both the House and Senate versions (with different dates).  My hope is that some useful form of the healthcare reform bill passes, and that it does set forth a reasonable deadline for the Claims Attachments regulation that many of us have been waiting for.

What I recommended to this group for a direction to move forward in was to revise the Clinical Reports Attachment Implementation Guide to:
  1. Reference existing specifications from ANSI/HITSP, IHE and HL7 as the basis for the reports that can be exchanged for either the Human Decision Variant (Level 1 and 2) or Computer Decision Variant (Level 3)
  2. Reference the templates in the CCD specification for use with the Computer Decision Variant
  3. Document how to add billing codes to the clinical content in a way that will allow existing documents used in clinical exchanges to be created so that they
    1. Can easily have billing codes added to them (using the translation code mechanism of CDA, as sketched just after this list)
    2. Or can be easily produced in a way that they can be used both for clinical and administrative purposes.
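
For illustration, here is a sketch of what the translation mechanism looks like in CDA XML.  The codes are examples of mine, not values taken from the attachments guides.

    # Example only: a procedure code carrying both a clinical (SNOMED CT) code
    # and an equivalent billing (CPT) code via the CDA <translation> element.
    procedure_code = """
    <code code="80146002" codeSystem="2.16.840.1.113883.6.96"
          codeSystemName="SNOMED CT" displayName="Appendectomy">
      <translation code="44950" codeSystem="2.16.840.1.113883.6.12"
                   codeSystemName="CPT-4" displayName="Appendectomy"/>
    </code>
    """
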
However, to accomplish this goal, the Claims Attachment Workgroup will need the help of EHR Vendors.  The group is currently composed of mostly payers and a few providers, with little EHR vendor representation.  Given where we stand on Healthcare reform and proposed timing, the time to start on this project is now.  Attachments needs to take advantage of all of the work that is being done for meaningful use, and needs to have new guides ready for a new regulation should Healthcare reform pass.  Please consider sharing your time with this workgroup in HL7. 

I know we all have our heads wrapped around the impacts of the meaningful use NPRM and IFR, and the pending certification NPRM.  However, we also need to be looking forward to the future when meaningful use is real.  I'd very much like to see alignment of CDA implementation guides across both the administrative and clinical spectrum.  You may feel like it's too soon to consider now, but I know that if we don't think about it, we'll certainly be scrambling to deal with it later.  I've been there and done that several times, and I'd rather not have a year like the last one where everything that needs to be done with standards is needed yesterday.

We also discussed issues outside of the scope of attachment content, but certainly of interest to this community.  Several people expressed interest in exploring how healthcare information exchanges could readily support the exchange of attachments between payers and providers instead of, or in addition to traditional models using X12N transactions.  There are a lot of ways this could be made to work, and it could very well bring together some very powerful synergies between payers, providers and patients.  I'm still thinking about this one.

    Keith

P.S.  I haven't forgotten about the μ ITS, I'm still thinking about the details.

Tuesday, January 19, 2010

Recognition

Tomorrow morning HL7 recognizes various member organizations for their sponsorship of HL7 activities, and at every September Plenary meeting it also recognizes individual contributions to HL7 via the Ed Hammond Awards.  I decided this evening, based on a random comment someone made to me, that I am going to institute the Ad Hoc Motorcycle Guy Harley Award.  Unlike the Ed Hammond awards, which come with a pretty vase in Duke Blue, the Ad Hoc Motorcycle Guy Harley award appears in black and chrome and comes with no physical object of any monetary value.  However, the bragging rights you do get may be more valuable; we'll see as time goes on.

Tonight I was reminded of the contributions of one person to the use of CDA in multiple standards venues, and I'd like to honor that person with the first ever Ad Hoc Motorcycle Guy Harley Award.  The rules of who gets the Ad Hoc Motorcycle Guy Harley award are completely arbitrary.  There is no nominating committee, although nominations are always welcome.  The bar to recognition is fairly high if the first recipient is any indication, and I hope to maintain the quality of recipients in subsequent awards.  I won't award more than one a year for the same type of industry service, and I expect to award no more than five a year.

This first award recipient is a software engineer working in a relatively obscure office.  The person receiving this award has contributed a great deal to one of the most frequently referenced tools that I point software engineers to who are implementing CDA documents.  He diligently reviews specifications from HL7, IHE, and ANSI/HITSP, comments on them using all the appropriate processes, follows up to ensure each issue he has raised is addressed by the relevant standards body, and then updates the work of his organization to see that implementors of CDA benefit from his work.  He spends tireless hours looking at what would be to most people a meaningless combination of XML, XPath Expressions and CDA constructions, reviews and patiently explains to others how the software his organization develops got the results it did, and what they must do to correct their problems; has his e-mail address posted online, answers innumerable e-mails on a weekly basis, especially in the months of October through January, and yet maintains a very even and fair keel through all of this.

This certifies that 
Andrew McCaffrey of NIST 


Has hereby been recognized for outstanding contributions to the forwarding of Healthcare Standardization

Andrew, congratulations and thank you for your many years of service developing the NIST CDA Validator and reviewing countless HL7, IHE and ANSI/HITSP technical specifications.  Your colleagues may have put me up to this, but I got to choose how you were mentioned, and I truly believe that you deserve significant recognition for all the work you have done on behalf of CDA interoperability, not just here in the USA but worldwide.  In honor of this recognition, the NIST CDA Validator also gets a link on this blog.

Hot topics at the HL7 Working Group Meeting

There were a number of topics that came up today at the HL7 Working Group Meeting, but two of them seem to deserve attention here.  Both of these are related to meaningful use, but in completely different ways.

CCR and CCD Translations
The first topic that was heavily debated in the Structured Documents working group meeting was whether or not HL7 should publish transforms between the HL7 CCD format and the ASTM E2369 CCR format and back, and document the limitations of such a transform.  Most of the debate was between cochairs, with me on the side of publication, two others adamantly against it, and a third being relatively neutral.

Conceptually, we all agree that the only meaningful way forward (e.g., for 2013) for communicating clinical information is through the CDA standard and the HL7, IHE and HITSP implementation guides based on it.  Where we differ is in tactics.  Some would have HL7 do nothing to make it easier to use CCR with EHR systems, and others would make it easier to transition between use of CCR and use of CDA.  I'm in the second camp.  We voted on whether to develop a project to do this, and the outcome is that the committee agreed to do so, though not unanimously given the divisions.

Even though I'm not fond of the CCR XML, I must acknowledge that A) It's been regulated as a requirement that applications be able to view them at a very minimum, and B) that regulation is extremely unlikely to change as a result of ANY industry feedback.  Having acknowledged that, there are a couple of ways forward.  1) Penalize those who have gone down one path or another, or 2) make it easier to cross between the two.  My own thoughts are to take the latter road, because this will make it easier for others to come to HL7 and use CDA.  That includes both vendors and healthcare providers.  Most of the latter are simply caught in the middle of this debate. 

This has the danger that it also enables implementors to go the other direction.  I have the same attitude about this concern as I do about the concern that sharing healthcare data could cause a healthcare facility to lose customers.  If you believe that providing better service to your customers will enable them to leave you to go elsewhere, you need to look at what you are doing much more closely.  However, if you really believe in what you are doing, this can only be to your own advantage.

The industry needs this, and it will be done whether or not HL7 does it.  I would prefer to see HL7 take the lead and do it right because it is the appropriate thing to do to serve our constituents.  Some feel that this will simply perpetuate the problem of two standards, and those concerns are indeed reasonable.  However, HL7 cannot ignore the lessons that CCR had to teach us.  Furthermore, we must not ignore the needs of our constituents just because their requirements (imposed from an external body), seem not to be in HL7's best interests tactically.  The strategic move is to serve the industry, and by so doing, gain the trust and confidence that will allow that industry to adopt the HL7 CDA standard.

This is the high road, or path less travelled by.  It may be more difficult, but to me it seems to be the right direction to go.  I may not like having to choose this path, but it will be a better result for patients.  I am here not for any one company or organization.  I develop healthcare standards for  my wife, my children, my mother and anyone else who needs healthcare.  If there is something I can do to make sure that they get the right care, I will, and this seems to be it.

Green CDA
Green CDA is good for the environment because it uses fewer electrons.  The idea behind Green CDA is that there is a simpler way to exchange CDA documents that meets 80% of the capabilities of CDA, but with XML that is much simpler to understand.  This is another of the lessons learned from CCR.  As someone who has taught CDA to more developers than I can count, and who has implemented CDA using numerous implementation guides, I very clearly see the point of pain Green CDA is going after, and I agree wholeheartedly that it needs to be addressed.  Green CDA is very similar to the hData format developed by MITRE.
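
To give a feel for the simplification being discussed (my own illustration, not the actual Green CDA or hData schema), compare a result in CDA R2's generic form with a hypothetical "green" rendering of the same fact:

    # The same systolic blood pressure, expressed two ways.  The codes are real
    # LOINC and UCUM values, but the "green" element name is invented here.
    cda_form = """
    <observation classCode="OBS" moodCode="EVN">
      <code code="8480-6" codeSystem="2.16.840.1.113883.6.1"
            displayName="Systolic blood pressure"/>
      <statusCode code="completed"/>
      <effectiveTime value="20100119"/>
      <value xsi:type="PQ" value="120" unit="mm[Hg]"/>
    </observation>
    """

    green_form = """
    <systolicBP date="20100119" value="120" unit="mm[Hg]"/>
    """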

There are some limitations in Green CDA as currently proposed that I have some concerns about, and so I propose something just a little bit different.  I'll just introduce this concept below, as it needs more fleshing out.

The μ ITS
Green CDA is simply another Implementation Technology Specification, or ITS, for use with the CDA standard.  The same techniques could readily be applied to any other HL7 Version 3 message or document with similar benefits.  I believe that we should approach this as an alternate ITS for HL7 Version 3.  I call this the μ ITS because it is close to the New ITS proposed a few years ago in HL7, but takes a slight step backwards.  Also, the μ ITS takes its name from the synthesis of model of meaning and model of use that I discussed earlier this year.  When I told the cochair of the TSC, a past supporter of the New ITS, that we needed something like this, he was floored, because I adamantly and successfully opposed that ITS.  I still think the idea is a little bit wonky, because I understand the value of communicating meaning vs. use, but I don't see a way around it.  I guess it goes to show that I can indeed be educated, even though I turn 45 in two days.

It's getting late, and the details of the μ ITS are still rolling around in the back of my head.  I'm going to sleep on it and write more about it tomorrow.

     Keith

Friday, January 15, 2010

Day 5 and off to Phoenix

On the morning of day 4 you start to get grades (red, yellow, green) for profiles that you've failed, may be failing, or have passed.  The morning of day 4 usually has very little red because there isn't much danger of failing yet, but if you make no progress on a profile that day, it goes red Friday morning.  I woke up to two reds this morning on Day 5 that I didn't understand.

Check Status Frequently
Tracking it down, I was reminded that you need to check status continually.  In this case, one critical path test was paused because of missing information in the logs, and I never noticed.  Because I didn't notice, I didn't rerun it, so there was no progress yesterday, and thus a red flag.

Avoiding Test Failure
In this particular case, the test wasn't clear about what information was expected, but it was easily accessed from my logs, so I resubmitted it.  I've seen other systems get tests paused or failed for not following instructions.  A complaint I commonly hear from connectathon monitors, managers and profile authors is that reading appears to be a lost art these days.  One of the connectathon monitors wears a shirt whose acronym can be interpreted as Read The Free Manuals.  It's important to read the instructions on the tests and then follow them, and to be prepared to provide even more information than is asked for, just in case.

Have the Right Team
Something to add to my list for success is the makeup of the team you send to connectathon.  To be successful, a system should have at least one technical expert who can quickly find and correct problems in your application code, and a detail-oriented manager who can plan how to execute the tests and track that they've succeeded.  Few systems succeed without both skill sets, and you rarely find them in one person (when you do, keep them around).  If you are sending more than one product, you should also have someone to manage your overall participation.  If your application has several subsystems, you may need more than one technical expert.  It also helps to have prior connectathon experience.

I've been fortunate: one of the teams I'm working with has a person with both skill sets, another has multiple experts and a manager ensuring execution, a third read everything very closely, and the fourth has multiple experts and plenty of connectathon experience.  We didn't get everything done that we wanted to, but we did get everything done that we had planned.  That's a successful connectathon in my book.  Well, off to Phoenix.

Thursday, January 14, 2010

Top Ten Posts for 2009

I'm taking a brief break from connectathon reporting tonight to report on the top 10 postings from last year.

As I reported back in August, I've been tracking the performance of this blog since inception.  The list below shows what you thought was the most popular reading last year.  The number to the right of each post indicates its popularity, expressed as the number of standard deviations above the mean readership for a post.

10. Data Mapping and HITSP TN903 (3.2)
9. IHE PCC Profile Requirements for Templates  (3.4)
8. At the rim of the dam or the edge of a precipice?  (3.7)
7. What is HITSP Doing?  (3.9)
6. Demystifying SAEAF...maybe  (3.9)
5. Template Identifiers, Business Rules and Degrees of Interoperability  (4.0)
4. IHE Releases Trial Implementation Profiles  (4.3)
3. If I had a Hammer  (5.4)
2. Clinical Decision Support (5.8) 
1. Laboratory Orders   (7.2)


January is already an interesting month, since two posts have in one week moved into the top 10, and this month is also on track to be the best ranked month in my history.  The two posts are: 
I was both surprised and thrilled at the popularity of the Where in the World is XDS site.  I've created a smaller and more memorable link for it here:  http://tinyurl.com/wwxds.

Day 4

I have little to report today because everything is going really well this year.  The teams I am working with are done with their planned testing and are moving on to stretch goals.  So I will spend a bit of time talking about the Connectathon monitors, most of whom are volunteers.

This year there are about 50 IHE Connectathon monitors and 4 managers, and another 5 HITSP monitors.  The connectathon monitors posed for a group shot, which you can see below.  The guy lying on the floor in front of everybody is Steve Moore.  He's been involved in connectathons for a decade and runs the whole testing activity.  Many of the other people here have given up a week of their own time and traveled to the event to help with the testing.  A few of the monitors come from elsewhere in the IT industry and show up each year to help us out.  Others have been engaged in IHE for many years and have cochaired one of the technical committees or authored one or more of the profiles we are testing.



The connectathon monitors work very hard, and Steve the hardest of all.  Last year when we finished connectathon, we had on the order of 3000 tests processed by the connectathon monitors.  This week in the middle of day four we are already at that mark.   At our current rate, the monitors are processing around 125 tests an hour.

In addition to the Connectathon monitors, we also have 5 monitors reviewing conformance to the HITSP specifications. 



As a participant in the IHE Connectathon for the past 6 years, I have come to have a great deal of respect for the work of the connectathon monitors.  This is an immense undertaking that couldn't be done at all without them.  When we get to see the Certification process NPRM for meaningful use later this month, I hope that there will be opportunities in it to take advantage of all of the work that the IHE volunteers do at connectathon to enable interoperability testing and certification.

We will see many of the monitors again at the HIMSS showcase, as many also perform as docents at the HIMSS 2010 Interoperability Showcase.  Docents take people on tours of the Interoperability Showcase and walk them through the different demonstration scenarios.  There are still a few opportunities to participate in the showcase as a docent.  If you have some familiarity with Healthcare IT and interoperability and want to be a docent, contact me (see the Contact Me link at the upper right of this page), and I will forward your information along to the showcase organizers.

If you are a connectathon monitor, Thank you! for all your work. 
If you are a connectathon participant, please thank the monitors as well. 

Good luck to all, and I'll see many of you next week at the HL7 Working Group Meeting in Phoenix.  Follow the activities there on twitter using #HL7WGM.

   Keith

P.S.  I appreciate hearing from all of you this week who read what I write here.  Keep the feedback coming, and if you disagree with anything I write here, post a comment.  Also, if there's a topic you'd like me to cover, drop me a line.  I can be educated...

Wednesday, January 13, 2010

3 Down, 2 to Go

What a difference a day (and a good night's sleep) makes.  The TLS problems of yesterday were solved by adding -Dhttps.protocols=TLSv1 and -Dhttps.cipherSuites=TLS_RSA_WITH_AES_128_CBC_SHA to the startup script.  These two parameters set the default protocols and cipher suites supported by the JVM and are described in more detail in the JSSE Reference Guide.  Setting these two parameters completely eliminates the need for the ATNASocketFactory that I've been using for years.  I updated the ATNA_FAQ just a few minutes ago with this new information, and also added some information about setting the default host name verifier for those who run into that issue.
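If you can't touch the startup script, the same defaults can also be set programmatically, as long as it happens before the first HTTPS connection is opened.  Here's a minimal sketch (my own illustration, not code from the FAQ):

// Equivalent of -Dhttps.protocols and -Dhttps.cipherSuites on the command line;
// these JSSE defaults apply to HttpsURLConnection-based connections.
public class TlsDefaults {
    public static void main(String[] args) {
        System.setProperty("https.protocols", "TLSv1");
        System.setProperty("https.cipherSuites", "TLS_RSA_WITH_AES_128_CBC_SHA");
        // ... start the rest of the application here ...
    }
}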

The fun today was ensuring that I had the right SOAP headers for some of the web service calls.  According to some of the tests, if you are performing a web service call that returns information in XOP/MTOM format, you must make the call using XOP, even though you may not have any need for MTOM.  The way to ensure that you are using XOP with SAAJ is to set the Content-Type header to specify application/xop+xml.

The following Java code will do that and set the action component appropriately for the RetrieveDocumentSet transaction.

import javax.xml.soap.*;  // MessageFactory, SOAPMessage, SOAPConstants

MessageFactory mf = MessageFactory.newInstance(SOAPConstants.SOAP_1_2_PROTOCOL);
SOAPMessage msg = mf.createMessage();
// application/xop+xml signals XOP; the action parameter names the transaction
msg.getMimeHeaders().setHeader("Content-Type", "application/xop+xml;charset=utf-8;action=\"urn:ihe:iti:2007:RetrieveDocumentSet\"");
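Here's a hedged sketch of how a message built this way might then be sent using SAAJ's SOAPConnection; the endpoint URL is a placeholder rather than a real connectathon address, and building the RetrieveDocumentSet body is omitted:

// Populate the SOAP body with the RetrieveDocumentSet request (omitted), then send it.
SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
SOAPMessage response = conn.call(msg, "https://repository.example.org/xdsrepositoryb");
conn.close();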

Getting past that took several hours, because it isn't very well documented anywhere.  Well, now it is.  Of course, if you are using something other than SAAJ, this may not be an issue for you, and if you are using Open Source tools, they've already addressed the issue in most cases.

After yesterday, I expected to end today with more homework.  Instead, I'm very pleased to report that I'll be working on the CDA book tonight (and this blog posting will already be done before dinner).  This year, even though we have more than 100 actors supporting XDS at connectathon, things have gone a lot more smoothly than in prior years.  This is due in part to the maturity of the testing tools (they are much better than last year) and of Kudu, which went through serious growing pains last year.

Many of the teams I'm working with are done or nearly done with their testing, and need only to finish group testing on demand.  Others that I've talked to are reporting very similar experiences with their systems, so it's not just us.  I believe that this also reflects the maturity of the IHE profiles and implementation tools.  Most of the issues cropping up now are in the integration of those tools with products, rather than in the tools themselves.

More tomorrow...

TLS and AES

Day 0 of Connectathon is about planning for success, and Day 1 of connectathon is about content.  Everyone is stuffing content into Kudu, ripping it  down and viewing or importing it, et cetera.  But if Day 1 is about content, Day 2 is about pushing documents to registries.

It never fails that I wind up seriously debugging someone's TLS connection on Connectathon Day 2, and this year was no exception.  Certificates that worked on the development system back home didn't work on the production system we shipped to the connectathon; first the certificates seemed to be at fault, then the configuration.  The rule should be that what you test with is what you ship to connectathon, but at some point things need to move off of developer systems and onto the hardware that you want to have in the demonstration.  It's better to make that move before connectathon rather than after.  I cannot blame this team; some issues just plain trump connectathon, and I'm as much or more at fault than any of them.

Building Certificates
I've been using the same script to build connectathon certificates for the last four years.  The commands are listed in the ATNA FAQ (one of the top 20 most popular pages on the IHE Wiki).  I wind up rebuilding keys rather frequently for my teams or for others on the connectathon floor (when I have time).   One of these days I'll build a web tool that will enable people to generate certificates in all the popular formats just so I can stop having to do it myself.
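When certificates are the suspect, it also helps to confirm that the keystore on the box in front of you is the one you actually tested with.  Here's a minimal sketch (my own illustration; the keystore type, path and password are placeholders) that dumps what's in a Java keystore:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

// Minimal sketch: list the certificates in a keystore so you can verify that the
// keystore you shipped to connectathon is the one you tested with back home.
public class KeystoreCheck {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(new FileInputStream(args[0]), args[1].toCharArray());
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();) {
            String alias = aliases.nextElement();
            X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
            System.out.println(alias + ": " + cert.getSubjectDN()
                    + " expires " + cert.getNotAfter());
        }
    }
}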

Debugging the Connection
One of the critical tools I use to debug TLS connections is Wireshark.  Using it I can tell within seconds (or minutes, if I have to wait for a resend) why a TLS connection isn't working.  It just tells me what the error is without me having to go look it up in the RFC (unlike the Java logs).  It takes me about 3-5 times as much effort to diagnose connection problems with the Java logs.  I'm still using the version I installed 3 years ago (it was called Ethereal back then), and it is still saving me a lot of time.  Another handy use for Wireshark is to capture proof that you are communicating securely when the connectathon monitors show up.  I save off the logs for each transaction into files that I store on my system for that purpose.

Updating the ATNA FAQ
The ATNA FAQ is rather old as IHE FAQs go, and its age is showing.  I started writing it about a year after the trial implementation of the ATNA profile was released (I implemented ATNA the second year it was available), and finished the first draft in a Word document that I circulated among the IHE community during and after the 2006 connectathon. 

The problem I'm running into now is that the code I wrote four years ago to restrict the encryption algorithms no longer works on my platform combination.  Axis2 and other SOAP tools now use the popular HttpClient libraries to marshal their requests, and my code doesn't work with those tools (yet).  It may simply be that the methods used to set the default secure socket factory have changed in newer versions of Java.  I'll have it fixed in the morning.
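For reference, the general shape of that older socket-factory approach looks something like the sketch below: wrap the default SSLSocketFactory, restrict every socket it creates, and install the wrapper as the default.  This is my own illustration, not the actual ATNASocketFactory code, and it only affects HttpsURLConnection-based stacks, which is exactly why it stops helping once the SOAP toolkit switches to HttpClient.

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Illustrative only: wraps the JVM default factory and forces TLSv1 with AES-128.
public class RestrictedSocketFactory extends SSLSocketFactory {
    private static final String[] SUITES = { "TLS_RSA_WITH_AES_128_CBC_SHA" };
    private final SSLSocketFactory delegate = (SSLSocketFactory) SSLSocketFactory.getDefault();

    private Socket restrict(Socket s) {
        ((SSLSocket) s).setEnabledProtocols(new String[] { "TLSv1" });
        ((SSLSocket) s).setEnabledCipherSuites(SUITES);
        return s;
    }

    public String[] getDefaultCipherSuites() { return SUITES; }
    public String[] getSupportedCipherSuites() { return SUITES; }
    public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        return restrict(delegate.createSocket(s, host, port, autoClose));
    }
    public Socket createSocket(String host, int port) throws IOException {
        return restrict(delegate.createSocket(host, port));
    }
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return restrict(delegate.createSocket(host, port, localHost, localPort));
    }
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return restrict(delegate.createSocket(host, port));
    }
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return restrict(delegate.createSocket(address, port, localAddress, localPort));
    }

    public static void main(String[] args) {
        // Install as the default for HttpsURLConnection-based clients.
        HttpsURLConnection.setDefaultSSLSocketFactory(new RestrictedSocketFactory());
    }
}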


Ranting and Raving
Rant

I'm still wishing there was a .Net solution to support AES on older Windows platforms.  This particular Microsoft technical bulletin indicates that AES is present on some versions of Windows XP and Server 2003 as well as Vista and Server 2008 (in the encrypting file system).  Note here that AES is also used by Office 2007.

However, if you or I want to use AES for anything through a Microsoft OS, you must have Server 2008 or Windows 7.  I've heard several reports of concerns about having to change operating system requirements for products in order to meet meaningful use requirements (AES is one of the best encryption options that comply with the IFR).  Can you imagine what it would mean to have to upgrade the OS on every computer in a large hospital or IDN to support meaningful use?  That's one way to spend money, but not, I'm sure, what was intended by the HITECH Act.  Why isn't AES supported on those older versions?  One good reason might be that the implementation referenced in the technical bulletin isn't FIPS certified.

Rave
The number of vendors supporting XDS this year has grown by leaps and bounds over the last two years.  I don't have the exact numbers (yet), but I looked briefly at a slide reporting them last night, and it looks to have better than doubled, and the growth in Registries and Repositories is even better.

Testing this year is also much smoother than last year.  Kudu is snappier, and the number of tests completed on Day 1 (and Day 2) seems to be much improved.  Overall, the testing seems to be going very well.

Day 3 will be about PIX and PDQ, and I hope to tell you how well that went tomorrow evening.



P.S. I've added the IHE Connectathon to the "Where in the World is XDS" map for this week only.  That map is available also from http://tinyurl.com/wwxds to make it easy for you to point others to it.

Tuesday, January 12, 2010

Connectathon Day 1 (and Day 0)

Today (actually yesterday if you want to get technical) was the first day of the IHE North American 2010 Connectathon.  And the day before that was load in.  This is the first year I was here Sunday morning for Connectathon, and I'll tell you that it makes a big difference.
  1. You have plenty of time to get your equipment connected to the network, verify connectivity with others and get ready for Monday when the real work happens.
  2. You can review your connectathon tests and make a plan to complete them.
  3. You can write code uninterrupted to fill in any gaps that you may have discovered.
Now for observations on Day 1:
The morning is usually spent dealing with basic connectivity, Kudu review, and preparing and uploading a bunch of data.  Most of this can be done back in the office or on Sunday.

The first thing most people do on Day 1 is install a hosts file on their machine and configure it to connect to the IHE Connectathon network.  This is basic networking.  You'd be surprised, though, at the number of people who don't bother with this step until they show up.  I have a key fob that I circulated with the hosts file, and I gave my teams the spiel on how to succeed (or fail).  I was already set up, having been here Sunday.  I must admit, though, that more people were ready to go at the start than last year.
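A quick way to verify that the hosts file is actually active is to try resolving a few of your peers' system names.  A minimal sketch; the names below are placeholders, so substitute entries from the hosts file you were given:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Sanity check: do the connectathon hostnames resolve on this machine?
public class HostsCheck {
    public static void main(String[] args) {
        String[] peers = { "registry_acme", "repository_example", "pixmgr_sample" };
        for (String peer : peers) {
            try {
                System.out.println(peer + " -> " + InetAddress.getByName(peer).getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(peer + " does not resolve; is your hosts file installed?");
            }
        }
    }
}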

There are a couple of things that catch people off guard during this time.  One of them is the proliferation of virtual machines.  If you have a VM, you either need 2 IP addresses (and most people only ask for one), or you need to know how to configure your VM host.  Most VM hosts provide a way to route network traffic into the VM through Network Address Translation.  This allows the VM to have a network address completely separate from the host, while the host acts as the VM's connection to the outside world.  If you don't know how to configure your VM to support that (or if you've never done it before), the best way to deal with it is to count up the VMs and add 1 to determine the number of IP addresses that you need to request.

The other issue is that many of these systems use multiple machines to complete their work (e.g., a server and an interface engine).  Some folks have assumed that they only need an IP address for the machine that people will be connecting to, and that they don't need one for the other servers because they can just send traffic to the separate system.  But if you are given a DHCP address, guess what?  Systems on the "DHCP network" cannot communicate into the Connectathon network, and vice versa.  So make sure you ask for an IP address for every machine that's doing real work.

Another thing people spend time on in the morning is installing and testing certificates.  That should already have been done during MESA (pre-connectathon) testing; however, there are some good and bad reasons why people are still dealing with it.  The better reasons have to do with configuring new hardware for connectathon after MESA testing, and the worse ones have to do with not completing that testing before you show up.

One of the first interoperability tests performed is time synchronization.  There are two rules here:
  1. Set your clock to Central time.  That's where we are after all.
  2. Sync up with your time server using SNTP or NTP.
There's a FAQ on how to set up common operating systems for the CT profile on the IHE Wiki.  There are a couple of issues on time synchronization that you need to be aware of.  If you are having problems getting your server to synchronize, this document describes some of the reasons why your clock doesn't sync up right away on Windows.  I suspect you can also use this information to address similar issues on Unix systems.  Basically, systems using NTP slew the clock to get it correct, and it takes a while for things to sync up, but eventually it gets there.  The only way to force them to jump the clock is to stop all synchronization activities, update the clocks, and start them up again.
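If you want to see how far off your clock is before a monitor does, one option is the NTP client in the Apache Commons Net library.  A minimal sketch, assuming that library is on your classpath and using a placeholder time-server name (substitute the server assigned for the connectathon):

import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

// Minimal sketch: report this machine's clock offset from the time server.
public class ClockOffsetCheck {
    public static void main(String[] args) throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(5000);
        TimeInfo info = client.getTime(InetAddress.getByName("timeserver.example.org"));
        info.computeDetails();  // fills in offset and round-trip delay
        System.out.println("Clock offset (ms): " + info.getOffset());
        client.close();
    }
}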

 For XDS edge systems, the next set of steps is usually to perform the CONTENT_CREATOR and CONTENT_CONSUMER tests.  Some folks want to get going right away sending documents using XDS, but that is not the best strategy. 
  1. Many people are still getting set up and working the bugs out with TLS and ATNA, and starting right off with a Provide and Register transaction will take a good deal of time.  Let others work out their issues first and then work on those tests.
  2. Similarly, Registry Stored Query or Retrieve Document transactions require that there be some documents to work with, and there just aren't that many to test with initially.
If you are a content consumer, design your application so that there is a debug mode in which you can download documents from Kudu and process them from the local file system.  That's a way to knock out a dozen or more tests quickly on day 1.  If you are a content creator, make sure you have a way to write files out to your local file system so that, again, you can knock out a half dozen tests quickly.
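On the consumer side, the hook can be as simple as a command-line entry point that feeds a downloaded file into the same import path the product normally uses.  A minimal sketch of the idea; renderDocument() is a stand-in for whatever your application actually does with the content:

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a Content Consumer "debug mode": read a document downloaded from
// Kudu and push it through the same code path as a live XDS retrieve.
public class LocalFileConsumer {
    public static void main(String[] args) throws IOException {
        InputStream in = new FileInputStream(args[0]);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        for (int read; (read = in.read(buffer)) > 0; ) {
            out.write(buffer, 0, read);
        }
        in.close();
        renderDocument(out.toByteArray());  // same path as the live retrieve
    }

    // Placeholder for the product's real document import/render pipeline.
    private static void renderDocument(byte[] document) {
        System.out.println("Rendering " + document.length + " bytes of content");
    }
}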

Of course, if everyone follows these strategies, there will still be no documents to process later in the day.  That's why it's important to have two people assigned to your system.  While one of you is knocking down a double dozen tests, the other should be getting the peer to peer testing going.

BTW:  If you've figured out a way to quickly run several tests, run more than the necessary three.  Why?  Because some of your tests may fail (never of course your own fault).  Running more than the necessary three will ensure you get at least what you need, and will also help others pass their own tests.

One of my favorite things about connectathon is that it's all about getting everything to work.  There's no benefit to you if your competitors fail, because this week, they are your partners.  This week you can solve problems together in a day that would have taken you a month in the field.  Go forth and make it work, your customers and mine will be much happier!

Sunday, January 10, 2010

How to Succeed or Fail at Connectathon

In addition to the application software you are testing, you should have the following things in your bag or on your laptop:
  • Network Sniffer (Wireshark is what I like to use)
  • A wired hub, cables and power strip.
  • Copies of all the IHE, HL7, DICOM, IETF or other specifications you implemented.
  • Application Development Tools (debugger, compiler, et cetera)
If you just want to fail, leave some of it at home.  For the rest, see the table below:

Planning
To succeed:
  1. Make a spreadsheet of all of the tests you need to complete by copying the data from Kudu. Use one tab each for No Peer Tests, Peer to Peer Tests and Group Tests.
  2. Review your actors. If you have not completed MESA testing for some of your actors, DROP them.
  3. Hide rows for ANY OPTIONAL tests. If you have time at the end, you can do these, but not before you've completed REQUIRED tests.
  4. Delete rows for tests required for Options that you don't implement.
  5. Plan for weather delays.  Chicago in January often has snow.  Plan to arrive the day before you are needed.
To fail: Skip the planning step.  You don't need any planning to fail.  You won't need to look at Kudu until Monday around 10:00pm (which is when your plane lands anyway, right?).

Load in and Setup
To succeed:
  1. Drop your box on the table, plug it in, cable it, and check network connections.
  2. Configure it to use your assigned IP address.
  3. Activate the /etc/hosts or /windows/system32/drivers/etc/hosts file that you created last week.
  4. Update it with any changes that you find out about.
To fail:
  1. Search for your equipment that was shipped late and hasn't arrived yet.
  2. Find a place that will recover your data from your un-backed-up hard drive.
  3. Ask a monitor what your IP address should be.
  4. Add to your hosts table every time you need a new address.

No Peer Testing
To succeed:
  1. Do your Time Client test.
  2. Upload the filled-out ATNA Questionnaire to the Wiki.
  3. Upload all of the objects you expect others to be able to render.
  4. Render all objects that others have provided for you (check back frequently for newly posted objects).
  5. Provide testing hooks to enable no peer testing of your applications.
To fail:
  1. Search the internet for how to configure Windows to get its time from an NTP server.
  2. Ask someone what ATNA is and whether your product needs it.
  3. Start building the objects you need others to render on day 2.
  4. Don't worry about rendering objects until Friday.
  5. Require applications to use all workflow steps to perform no peer tests.

Peer to Peer Testing
To succeed:
  1. Design your system to be easily reconfigured without a reboot.
  2. Store the configurations that you need to use for the connectathon.
  3. Find test partners for each transaction and make a plan for when you are going to test with them.
  4. Save optional tests for last.
To fail:
  1. Design your system so that a reconfiguration requires a reboot of your server.
  2. Enter configuration parameters manually each time you have to reconfigure.
  3. Test with partners before telling them you are going to run a test.
  4. Spend a lot of time on an optional test that isn't working while you still have required tests to perform.

Group Tests
To succeed: Make sure that by Wednesday evening you've completed at least one of every test you need for the group tests.
To fail: Start testing critical path features Thursday morning.