This crossed my desk today and is worth noting as an example of open government, even though it's on a topic I don't usually address (administrative transactions). The webcast is apparently being supported by the VA, as reported on the Meeting Agenda page.
I love this idea and hope it catches on with other healthcare-related federal advisory committees... not that I have any particular two in mind or anything.
NCVHS will pilot a new webcast project on Friday, December 3rd during the Standards Subcommittee hearing. As always, we are researching innovative ways to inform and invigorate stakeholders and our external customers. The pilot will allow participants to view and hear all presentations in real time as well as download copies of all documents presented for future use. Audio will be provided through the actual webcast and not a dial-in number. The link to the webcast will be placed on the meeting agenda at http://www.ncvhs.hhs.gov/101203ag.htm on the day of the meeting and we’re working on the plans to archive the documents for subsequent retrieval. Evaluation of the pilot will be based on web participation, so please use this new feature if you cannot attend the hearing, spread the word to other interested parties, and give us any feedback about its utility. Continuation of this service also is dependent on resource availability, but we would like to explore options and possibilities for improved access nonetheless.
Take care,
Lorraine
Lorraine Tunis Doo
Sr. Policy Advisor, CMS
Monday, November 29, 2010
Healthcare Messages Over the Internet: The DirectProject
An announcement regarding the Direct Project...
November 29, 2010
The Direct Project announced today the completion of its open-source connectivity-enabling software and the start of a series of pilots that will demonstrate directed secure messaging for healthcare stakeholders over the Internet. The Direct Project specifies a simple, secure, scalable, standards-based way for participants to send authenticated, encrypted health information directly to known, trusted recipients over the Internet.
Also announced:
- A new name - the Direct Project was previously known as NHIN Direct
- An NHIN University course, The Direct Project - Where We Are Today, to be presented by Arien Malec, November 29 at 1 PM ET, sponsored by the National eHealth Collaborative
- An extensive list of HIT vendors (20+) that have announced plans to leverage the Direct Project for message transport in connection with their solutions and services
- Presentations at the HIT Standards Committee on Tuesday, November 30, where three or more vendors will be announcing their support for the Direct Project
- A thorough documentation library including a Direct Project Overview
- Best practice guidance for directed messaging based on the policy work of the Privacy and Security Tiger team
- A new web site at DirectProject.org
- A new hashtag, #directproject, for following the Direct Project on Twitter
The Direct Project is the collaborative and voluntary work of a group of healthcare stakeholders representing more than 50 provider, state, HIE and HIT vendor organizations. Over 200 participants have contributed to the project. Its rapid progress, transparency, and community-consensus approach have established it as a model of how to drive innovation at a national level.
Today, communication of health information among providers and patients is most often achieved by sending paper through the mail or via fax. The Direct Project seeks to benefit patients and providers by improving the transport of health information, making it faster, more secure, and less expensive. The Direct Project will facilitate “direct” communication patterns with an eye toward approaching more advanced levels of interoperability than simple paper can provide.
The Direct Project provides for universal boundaryless addressing to other Direct Project participants using a health internet “email-like” address.
The Direct Project focuses on the technical standards and services necessary to securely transport content from point A to point B, and does not specify the actual content exchanged. When the Direct Project is used by providers to transport and share qualifying clinical content, the combination of content and Direct Project-specified transport standards may satisfy some Stage 1 Meaningful Use requirements. For example, a primary care physician who is referring a patient to a specialist can use the Direct Project to send a clinical summary of that patient to the specialist and to receive a summary of the consultation.
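Under the hood, the Direct Project's transport is essentially secure e-mail: an S/MIME-protected message sent over SMTP to a Direct address. Here is a minimal JavaMail sketch of that pattern, not the Direct Project reference implementation; the HISP host, the addresses, and the attachment name are hypothetical, and the S/MIME signing and encryption step is assumed to be handled by the sender's HISP or security agent.

```java
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

public class DirectSendSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "hisp.example.org"); // hypothetical HISP
        props.put("mail.smtp.starttls.enable", "true");  // TLS on the hop to the HISP

        Session session = Session.getInstance(props);
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("drsmith@direct.clinic.example.org"));
        msg.setRecipient(Message.RecipientType.TO,
                new InternetAddress("drjones@direct.hospital.example.org"));
        msg.setSubject("Referral: clinical summary");

        // Attach the clinical content (e.g., a CCD); Direct does not constrain it.
        MimeBodyPart summary = new MimeBodyPart();
        summary.attachFile("clinical-summary.xml");
        MimeMultipart body = new MimeMultipart();
        body.addBodyPart(summary);
        msg.setContent(body);

        // In a real deployment, the message is S/MIME signed with the sender's
        // certificate and encrypted to the recipient's certificate (here assumed
        // to be done by the HISP) before it leaves the organization.
        Transport.send(msg);
    }
}
```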
How might the Direct Project be Used?
In 2009 and 2010, Congress and agencies of the federal government created regulations that require physicians and hospitals participating in the ARRA/HITECH incentive programs for meaningful use of EHR technology to:
- send messages and data to each other for referral and care coordination purposes;
- send alerts and reminders for preventive care to their patients;
- send patients clinical summaries of their visits and of their health information;
- receive lab results from labs;
- send immunization and syndromic surveillance data to public health agencies; and
- integrate with HIT vendor systems.
Each capability can be enabled with point-to-point secure e-mail or in a more integrated manner as HIT vendors and public health agencies enable communication with the Direct Project.
How will the Direct Project affect states and Health Information Exchanges?
States that are receiving federal funding to enable message exchange are being asked by the ONC to facilitate Stage 1 Meaningful Use information exchange. The Direct Project may serve as a key enabler of directed messaging for all states and Health Information Exchanges. Even states that have some level of health information exchange capability need to address areas not currently covered by a regional or local Health Information Organization (HIO).
As state plans seek to address a means to fill the gaps in exchange capability coverage, the Direct Project may provide a bridge to enabling the basic exchange requirements for Stage 1 Meaningful Use. The Direct Project does not obviate the need for state planning for HIE, nor does it undercut the business case for HIOs. More robust services can be layered over simple directed messaging that will provide value to exchange participants.
There are already organizations that have announced the establishment of national clinical exchange networks, including integration with the Direct Project. States and HIOs will need to decide how best to provide Direct Project services to their constituents, whether by partnering with existing exchange networks or incorporating direct messaging into the services they provide.
The Direct Project Implementation
The Direct Project is organizing real-world pilots to demonstrate health information exchange using the Direct Project standards and services. Six pilots are ramping up, including:
Rhode Island Quality Institute, Redwood MedNet and MedAllies will be sending Continuity of Care Documents to other providers for referrals and transitions of care. VisionShare will be linking to immunization registries. CareSpark (Tennessee) will be linking the VA with private clinics providing health services to veterans. And Connecticut's Medical Professional Services, an IPA, will be linking Middlesex Hospital with primary care providers.
To help the Direct Project implementers, an open source reference implementation of the Direct Project standards and services has been developed under the guidance of the Direct Project. To ensure the broadest possible participation, the reference implementation has been implemented in two flavors: Java and .Net.
The HISP
Connectivity among providers is facilitated by Health Information Service Providers (HISPs). HISP describes both a function (the management of security and transport for directed exchange) and an organizational model (an organization that performs HISP functions on behalf of the sending or receiving organization or individual).
The Direct Project is bound by a set of policies that have been recommended to the HIT Policy Committee (HITPC) or are being examined by the HITPC’s Privacy and Security Tiger Team for directed messaging. Within this context, the Direct Project has developed best practice guidance for secure communication of health data among health care participants who already know and trust each other. The Direct Project assumes that the Sender is responsible for several minimum requirements before sending data, including the collection of patient consent. These requirements may or may not be handled in an electronic health record, but they are handled nonetheless, even when sharing information today via paper or fax. For example, a sender may call to ask whether a fax was sent to the correct fax number and was received by the intended provider.
The following best practices provide context for the Direct Project standards and services:
- The Sender has obtained the patient’s consent to send the information to the Receiver.
- The Sender and Receiver ensure that the patient’s privacy preferences are being honored.
- The Sender of a Direct Project transmission has determined that it is clinically and legally appropriate to send the information to the Receiver.
- The Sender has determined that the Receiver’s address is correct.
- The Sender has communicated to the Receiver, perhaps out-of-band, the purpose for exchanging the information.
- The Sender and Receiver do not require common or pre-negotiated patient identifiers. Similar to the exchange of fax or paper documents, there is no expectation that a received message will be automatically matched to a patient or automatically filed in an EHR.
- The communication will be performed in a secure, encrypted, and reliable way, as described in the detailed Direct Project technical specifications.
- When the HISP is a separate entity from the sending or receiving organization, best practice guidance for the HISP has been developed for privacy, security and transparency.
The Direct Project is not targeted at complex scenarios, such as an unconscious patient who is brought by ambulance to an Emergency Department. In the unconscious-patient scenario, a provider in the ED must “search and discover” whether the patient has records available from any accessible clinical source. This type of broad query is not simple and direct, and it therefore requires a more robust set of health information exchange tools and services that the Direct Project does not provide.
The Direct Project is an integral component in a broader national strategy to have an interconnected health system through a Nationwide Health Information Network (NHIN). The NHIN is “a set of standards, services and policies that enable secure health information exchange over the Internet. The NHIN will provide a foundation for the exchange of health IT across diverse entities, within communities and across the country, helping to achieve the goals of the HITECH Act.”
Brian Ahier is chairman of the State of Oregon’s Health Information Technology Oversight Council Technology Workgroup. Rich Elmore is Vice President, Strategic Initiatives at Allscripts. David C. Kibbe is a family physician, senior advisor to American Academy of Family Physicians and co-founder of the Clinical Groupware Collaborative.
Thursday, November 25, 2010
Engage With Grace - 2010
If you are reading this on Thanksgiving day in the US, print this out and read it at home.
Done reading? Print it out and share this with family members today. To learn more, see http://www.engagewithgrace.org/
Keith
P.S. In keeping with the traditions of the day, and the rounds of thanks making it around the Interweb, thanks to all today in healthcare who are working to make IT better. You know who you are, and if you are reading this, yes, I do mean you.
Wednesday, November 24, 2010
PHIN VADS Webinar Series - Supporting Meaningful Use
Crossed my desk today...
Greetings,
The CDC Vocabulary and Messaging team announces an upcoming free web seminar series about the CDC vocabulary server (PHIN VADS), which supports the distribution and implementation of vocabularies associated with public health Meaningful Use (MU) measures and the HL7 messaging implementation guides.
The PHIN VADS application can be accessed at http://phinvads.cdc.gov
Mark the Dates:
- November 30th, 2010, 1 p.m. to 2:30 p.m. Eastern: VADS Content and Application Overview. Teleconference and LiveMeeting link: http://www.phconnect.org/events/phin-vads-content-application
- December 2nd, 2010, 1 p.m. to 2:30 p.m. Eastern: VADS Application Integration and Web Services Overview for Developers. Teleconference and LiveMeeting link: http://www.phconnect.org/events/phin-vads-application
- December 3rd, 2010, 1 p.m. to 2:30 p.m. Eastern: VADS Value Set Authoring Tool, for value set creators and quality measure developers. Teleconference and LiveMeeting link: http://www.phconnect.org/events/phin-vads-value-set-authoring
To keep the seminars at a manageable size, we maintain a maximum of 99 simultaneous participants logging on during any seminar.
The CDC Vocabulary and Messaging team has launched a new collaborative workspace for PHIN VADS in Public Health Connect (phConnect). This online forum will facilitate the adoption of vocabulary and messaging standards, as well as collaboration among value set developers, quality measure developers, implementers, messaging analysts, public health departments, vendors, SDOs, and the CDC Vocabulary and Messaging team. The PHIN VADS workgroup is one of the workgroups of the public health Vocabulary and Messaging Community of Practice (VMCoP).
Please visit the PHIN VADS forum online at http://www.phconnect.org/group/phinvads to discuss any PHIN VADS value set development and implementation issues. The forum also includes all the resources related to the VADS application and content.
The CDC Vocabulary and Messaging team recently provided testimony to the Office of the National Coordinator (ONC) Federal Health IT Standards Committee vocabulary task force regarding vocabulary development and PHIN VADS value set distribution.
CDC PHIN VADS testimony hyperlinks on the Federal Health IT website:
- One Stop Shop for Meaningful Use Vocabulary - September 1st, 2010 (Written Transcript, Audio)
- Best Practices and Lessons Learned: Vocabulary Infrastructure - March 23rd, 2010 (Written Transcript, Audio)
Please contact the CDC Vocabulary and Messaging team at PHINVS@CDC.GOV if you have any questions about PHIN VADS or the webinar series.
CDC Vocabulary and Messaging Team
A Perfect Implementation Guide
John Halamka talks about "perfect" specifications in his recent post on what keeps him awake at night. His example happens to be technically incorrect in several places, and I'm certain it was expensive to produce, but that only goes to prove my point in this post.
What keeps me awake at night are application developers who want to write systems-level code [1] thinking that it's at the application level, and on the other end, implementation specifications that aren't written at the application level.
There are a couple of challenges. One is understanding at what level each specification should be:
CDA and the RIM are at the systems level. They are a very powerful infrastructure for delivering semantically interoperable content for all kinds of uses, essentially an HTML for healthcare. It's not just limited to patient summaries, but can go into great detail. But with great power comes some of the complexity that you expect at that level. One way to make CDA easy is to focus that power on specific use cases, which is what IHE, HITSP and many others have done using templates. Another way, as with any other systems-level technology, is to provide code that makes it easy to do it (CDA) right. I've pointed to a number of open source implementations for CDA and CCD on the open source implementations page above. You needn't worry about the details because others have done that for you.
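To make the template idea concrete, here is an illustrative CDA fragment (not normative; consult the guides themselves for the real constraint definitions) showing how each layer of constraint simply adds another templateId to an entry:

```xml
<!-- Illustrative only: a problem act declaring the templates it conforms to.
     The OIDs are the published CCD and IHE PCC template roots; verify against
     the guides before relying on them. -->
<entry xmlns="urn:hl7-org:v3">
  <act classCode="ACT" moodCode="EVN">
    <templateId root="2.16.840.1.113883.10.20.1.27"/>    <!-- HL7 CCD problem act -->
    <templateId root="1.3.6.1.4.1.19376.1.5.3.1.4.5.2"/> <!-- IHE PCC problem concern -->
    <code nullFlavor="NA"/>
    <statusCode code="active"/>
  </act>
</entry>
```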
Now, Arien Malec (of Direct Project fame) and others have noted a dearth of .Net open source implementations for CDA. One reason for that is that the open source community likes open source platforms, and .Net doesn't quite fit that mold for many (see comments here). So there needs to be another driver to get to open source .Net code. Maybe its creator can step up with some support, or a community of .Net developers with an interest in this can figure out what needs to be done if the right sort of organization is put around it. But realize also that having a .Net implementation supporting CDA is perceived by some as a commercial advantage. I don't know whether an open source community will appear or not, but I would hope so. Something has to drive it.
The remaining specifications are headed towards the application level. But there are other challenges in simplifying things. The first of these is the way specifications are delivered. Until SDOs start delivering UML models as XMI as part of the standard, most of us are stuck with PDFs, Word documents, or at the very best, XML representations of narrative text. Even organizations with rich modeling capabilities like OASIS and HL7 rarely deliver the source content for their UML models. So, the specifications aren't machine readable with commonly available off-the-shelf tools (and MIF2 doesn't and won't ever cut it for me).
What makes machine readable specifications valuable? The ability to aggregate, automate and validate based on those specifications.
Aggregation is very important, because without the ability to aggregate information, we are left with a "peeling the onion" problem. Even though there's a way to scale up the development of implementation guides by distributing the work, there is no clean way to put all the pieces together.
Automation of code generation is also very valuable. It can be used to quickly and easily generate source code for implementation, validation, storage, et cetera. Give me the data and I can write, test, and validate a code generator that will operate off of it quite a bit faster than I can write each class separately on my own. Back when IHE PCC was still developing its Technical Framework on the wiki, I grabbed the fairly structured XHTML from it, extracted the data for the templates into XML, and wrote another transform to generate XSLT stylesheets that build correct content. In a matter of days, that covered over 400 templates, with documentation. Even if I spent 10 minutes per template, that would be weeks. And I'm a lot more comfortable with the amount of testing I put into my automation than I ever would be with the testing I personally could put into the development of 400 templates ... and that doesn't count documentation! Code generation isn't just for implementing objects in Java or .Net. I can generate code to validate, create a WSDL or IDL or other interface, generate SQL tables, or produce implementation documentation; you name it.
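As a toy version of that technique (the input format here is hypothetical, not the actual PCC wiki extract), an XSLT transform can read template definitions from XML and emit another XSLT stylesheet; xsl:namespace-alias is what lets the generator write literal xsl:* elements into its output:

```xml
<!-- Reads <templates><template name="..." oid="..."/>...</templates> (a made-up
     extract format) and generates a stylesheet with one named template per
     CDA template, each emitting the correct templateId element. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:out="urn:example:output-alias"
    xmlns:cda="urn:hl7-org:v3">
  <xsl:namespace-alias stylesheet-prefix="out" result-prefix="xsl"/>
  <xsl:output indent="yes"/>

  <xsl:template match="/templates">
    <out:stylesheet version="1.0">
      <xsl:apply-templates select="template"/>
    </out:stylesheet>
  </xsl:template>

  <xsl:template match="template">
    <out:template name="{@name}">
      <cda:templateId root="{@oid}"/>
    </out:template>
  </xsl:template>
</xsl:stylesheet>
```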
It's because it's data, and I can compute with it. So, to the SDOs out there, I have to repeat a phrase that has been used elsewhere: Give us the damn data!
The MDHT CDA Tools project that I awarded Dave Carlson the Ad Hoc Harley for is being used to do exactly this for CDA, delivering UML models for the following:
- HITSP C32
- HITSP C62
- HITSP C83
- IHE XPHR
- IHE XDS-MS
- IHE XDS-SD
- HL7 CDA
- HL7 CCD
- HL7 History and Physical
- HL7 Consult Note
- HL7 Progress Note
- The CMS CARE-SET Guide (based on HITSP C83)
If a model contains all the data necessary to implement at a particular level, and I have the models for all levels, then I can automatically generate the implementation guide documentation at all levels. The challenges are not really technical, as the CDA Tools project has already been used to deliver a very nice implementation guide for Genetic Testing Reports (zip with PDF) in an HL7 ballot.
The challenges are organizational. If HL7 delivers content in UML format, and organizations use that content to develop their own guides, how does HL7 get revenue from that intellectual property when an organization wants to publish a guide based on the HL7 guide? John Halamka has some interesting thoughts in today's post on how to address those issues, and HL7 is examining them as well.
HL7, IHE, and ONC are hopefully about to embark together on a project to consolidate a number of HL7 guides with the IHE work and CCD templates that they reference, and possibly include the HITSP C83 templates. Use of MDHT and CDA Tools is a vital part of the plan. If we can get all those communities working together, that would produce my ideal guide.
Keith
P.S. One of the keenest challenges we all face in developing implementation guides is that of self-reference. The guide provides the rules which define what is valid in the XML, which then has to be included in the guide in examples, which needs to be tested against the rules in the guide. One of the most time consuming tasks has always been generating good examples. If you give me the data, I can do that, and automate some generation of examples!
[1] Don't bother to read this if you've ever implemented your own HTTP stack, HTML display engine, SSL transport, XSLT processor or XML parser that conformed to specifications. That should rule out about 10% of the readers of this blog. BTW: I've done the equivalent, but only once or twice, and it's never been a routine part of my job.
Tuesday, November 23, 2010
Computers and Doctors: Let each do what they do best
Today's reading included a number of different but related posts:
- The Computer will See you Now, about how computers interfere with patient/provider interactions
- The too informed patient in the age of WebMD, a puppet show about how one doctor deals (badly) with an informed patient
- The Future of Health: Robots, Enchanted Objects, and Networks, a post by @SusannahFox on scaling healthcare
So, people are miserable at data entry, whereas computers are flawless. On the other hand, computers are only as good as people at understanding other people in the very best of cases, and getting them to do that well using traditional inputs like handwriting or voice is hard. So, we need to spend some time thinking about how to make it easy for Healthcare providers to communicate to IT solutions.
Computers are great at search, but humans are not so good. On the other hand, people are really good at identifying relevant material and thinking about alternative search strategies. So, we need to incorporate more search strategies into HIT and provide support for human assistance.
Computers are tireless and good at repeated interaction, but people aren't so good. People, on the other hand, can pick up quite a bit in a single interaction from cues that computers today don't even see or hear. So, we need to think about which repetitive tasks we can assign to the computer, and where we need the human interaction.
The challenge in Healthcare IT is how to let each one do what it does best, and make the two talk to each other in ways that work well for both. You hear a lot of talk about user interfaces and computer interfaces, but we need to think more about the "social interface" that HIT provides (or mostly doesn't) in order for it to be really effective.
Monday, November 22, 2010
IHE Radiology Technical Framework Supplements Published for Trial Implementation
IHE Community,
IHE Radiology Supplements Published for Trial Implementation
The IHE Radiology Technical Committee has published the following supplements to the IHE Radiology Technical Framework for Trial Implementation:
- Basic Image Review (BIR)
- Mammography Acquisition Workflow (MAWF)
- Radiation Exposure Monitoring (REM)
The profiles will be available for testing at IHE Connectathons beginning in January 2011. The documents are available for download at http://www.ihe.net/Technical_Framework/index.cfm. Comments should be submitted to the online forums at http://forums.rsna.org/forumdisplay.php?f=12.
Friday, November 19, 2010
Natural Language Processing and CDA
I got my start with HL7 CDA while working with a company that did extensive natural language processing. Before that, I worked for nine years with a company that developed linguistic software (spelling, grammar, search enhancement) and licensed SGML- and XML-based content and publishing tools. One of the discussion topics at AMIA this year was Natural Language Processing and how it could be supported in CDA. Now, I know that many have expressed interest in this area over the years, including some folks at the VA in Utah, others at the University of Utah, still others at IBM who have written on it, and several vendors who are doing NLP-based processing for healthcare and working with CDA (almost all of whom are involved in the Health Story implementation guides these days).
I can tell you that CDA Release 2 is a good but not ideal fit for Natural Language processing tools, and that I hope that CDA Release 3 will be somewhat better. It was pretty easy for me to take the NLP output of a system that I had worked with and generate a CDA document, but CDA was not the solution of choice for direct NLP. It could be in R3, but not without some changes and extensions to datatypes and the RIM.
Most NLP systems that work with electronic text operate by pipelining a number of annotation processes over the text. Lower-level processes feed into higher-level ones, and produce sequences of annotations as they go. Some of the first processes might segment text into natural boundaries, like paragraphs and sentences, based on punctuation and other cues. The next step is identifying word boundaries (and actually, depending upon how you do it, they can be done in either order or simultaneously). Just identifying sentence boundaries would seem to be simple; after all, you can just check for periods, question marks, and exclamation points. Right?
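Wrong, of course. A deliberately naive splitter (my own sketch, not any particular NLP system) shows why: abbreviations, initials, and decimals all end in a period too.

```java
import java.util.Arrays;
import java.util.List;

public class NaiveSentenceSplitter {
    // Split after ., ! or ? when followed by whitespace and a capital letter.
    static List<String> split(String text) {
        return Arrays.asList(text.split("(?<=[.!?])\\s+(?=[A-Z])"));
    }

    public static void main(String[] args) {
        // "Dr." trips the heuristic, so one sentence becomes two; the decimal
        // in "98.6" survives only because no whitespace follows the period.
        String note = "Dr. Smith saw the patient. Temp was 98.6 F. No fever.";
        split(note).forEach(System.out::println);
    }
}
```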
This annotation can be kept in numerous ways, as can the text associated with it. Some systems keep the text whole, and annotate separately by using pointers into it. Others insert markup into the text (maybe even in XML format) with additional data. There are many levels of analysis, and subsequent levels build on prior ones (and may even correct errors in prior ones). So, it isn't uncommon for the different layers of analysis to be stored as separate streams of content. Sometimes a mixed model is used, where some structural markup is used to identify word, sentence, and paragraph boundaries, but deeper analysis is stored separately.
So, NLP representations can annotate inline or separately. The CDA R2 model is a little bit of both: structural markup (sections, titles, paragraphs, content, tables) is inline, but clinical statements are stored in a separate section of the CDA document and often (but not always) point back to the original text to which they apply.
I highlight the term original text above advisedly, because certain HL7 data types support reference back to the original text to which a concept has been applied (it is the originalText property of these data types). Two of these are the Concept Descriptor (CD, or coded concept) and Physical Quantity (PQ). That makes it very easy to tie some higher-level concepts or measurements back to the text in the document.
But other data types don't support that. Many numeric spans in text can be represented using PQ, but not all numbers in text are measurements of something (they may be ordinals), so PQ is not always appropriate. There are other important data types for which you want to link back to the original text, including dates (TS), URLs, numbers (INT, REAL), names of people (PN), organizations (ON), places (ADDR) and things (EN). Unfortunately, neither Data Types R1 nor R2 supports identifying the original text associated with these parts. What is interesting is that some of the more esoteric HL7 data types like PN, TELECOM and ADDR are more important in NLP than they are elsewhere in computing. Some slightly more complex data types like IVL_* also need to support text; I can think of at least three or four cases where you'd see them used in narrative text.
While I mentioned dates above, I didn't mention times. That's because I'm cheating. A time without a date is a measurement in some time unit (hour, minute, second) from midnight. HL7 doesn't really have a "time" data type separate from that use of PQ, as far as I can tell. So, a time can point back to original text (using PQ), but a date cannot.
So, to fully support NLP in CDA Release 3, we need to fix data types in the next release to allow just about all data types to point back to original text. For NLP use today, we can extend the XML representation of those types in either R2 or R3 to include an extension element that supports pointing back to the original text.
Those pointers back to original text are relative URLs local to the current document, so they usually look something like #fragment_id, where "fragment_id" is the value of an ID attribute on an element surrounding the text being referenced.
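For example, an illustrative CDA R2 fragment (the IDs and codes here are made up for the example) ties a coded problem back to its narrative:

```xml
<section xmlns="urn:hl7-org:v3"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <text>
    The patient presented with an <content ID="problem-1">ankle sprain</content>.
  </text>
  <entry>
    <observation classCode="OBS" moodCode="EVN">
      <!-- other required elements omitted for brevity -->
      <value xsi:type="CD" code="845.00"
             codeSystem="2.16.840.1.113883.6.103" displayName="Ankle sprain">
        <originalText>
          <reference value="#problem-1"/> <!-- relative URL into the narrative -->
        </originalText>
      </value>
    </observation>
  </entry>
</section>
```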
But then you run into ambiguity problems in NLP. The classic sentence "Eats shoots and leaves" has two interpretations as punctuated (and three if punctuation is known to be missing). Is leaves a verb or a noun? It could be either. So there are two alternate parses of the sentence: one treats [shoots and leaves] as a compound subject, and the other treats the whole thing as a compound sentence with two components, [eats shoots] and [leaves]. So, how do you mark that up to support both parses? There's no way to interleave these two parses of the text with XML markup that signals both sets of boundaries at the same time. To do that well you'd need something like LMNL (pronounced luminal), which is probably better for NLP anyway, but let's stick with XML.
There are ways to annotate the phrase so that you can point to non-sequential spans using a URL fragment identifier. That requires using a pointer scheme in the fragment identifier, such as that defined by XPointer, or, separately, a simpler restriction using this not-quite-IETF RFC for the xpath1() pointer scheme. Both schemes solve the multi-span problem quite nicely, and the xpointer() scheme is supported in a number of open source processors (even for .Net). So, as a way to point back into text, it solves the overlapping-span problem pretty well, so long as you don't have ambiguity at levels lower than word boundaries.
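To make that concrete, here is a hypothetical illustration: one markup of the narrative, with word-level anchors, supports both parses, because each reading is just a different xpointer() fragment over the same anchors.

```xml
<text xmlns="urn:hl7-org:v3">
  <content ID="w1">Eats</content>
  <content ID="w2">shoots</content>
  and
  <content ID="w3">leaves</content>
</text>
<!-- Reading 1 (compound sentence), verb phrase "eats shoots":
       #xpointer(//content[@ID='w1'] | //content[@ID='w2'])
     Reading 2 (compound subject), subject "shoots ... leaves":
       #xpointer(//content[@ID='w2'] | //content[@ID='w3']) -->
```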
So, we can identify the small stuff, but how can we then make clinical statements out of it? That's mostly easy. For Acts, which is really the workhorse of the RIM, you have Act.text. Act.text can point to the narrative components that make up the clinical statement, and the clinical statement becomes an annotation on the narrative. But what about the other RIM classes? The two key ones that would be needed are Entity and ActRelationship, but Role is important too. If I write:
Patient Name: Keith W. Boone
I'd want to be able to point back to the text for the role and the entity. There's no RIM solution for that, so again, an extension would be needed to encompass Entity.text, Role.text, and ActRelationship.text, so that these things could be tied back to narrative.
So, what will be better in CDA R3, if we are missing text in several RIM classes and originalText in data types? Well, if we go with XHTML to represent the narrative, and allow the XHTML div, table, ul and ol tags to be treated as organizers (a section is an organizer), then we can divvy up the CDA document into two parts. The narrative is an ED data type, with an explicit mapping from certain XHTML element types used to structure the text into organizers. The other part is the set of annotations on the text: the clinical statements, which can follow committee models without being broken up by the CDA document content organizers.
There are two ways we could go here: Inline or separate. Do we allow a clinical statement inside an XHTML text organizer to be a representative of what is said in that text? That hearkens back to CDA R1 structure, and that would eliminate some of the need for Entity.text et al, but may not be expressive enough for NLP requirements, and would have some other issues with representations of committee models. So, I think there is a component of the CDA document that contains the clinical statements, and the clinical statements point to the text.
That leaves another important problem, which is how to deal with the context conduction that was traditionally handled through the section structure of the document. That requires another post, and isn't really related to NLP processing.
Thursday, November 18, 2010
Reconciliation
No, this isn't about me making nice with someone after teeing them off ...
This is about an IHE Profile proposal that I presented (doc) to the PCC Technical Committee today on reconciliation of data elements found in multiple sources of information. The problem is that too much data can be overwhelming. According to a paper by George Miller that is more than fifty years old, human working memory capacity is seven plus or minus two. Later research shows this varies somewhat depending upon how the information can be chunked, but the limitation is still present. And yet, numerous studies show that the average number of medications taken by high-risk populations (elders, patients with chronic conditions, et cetera) exceeds seven. So, for complex cases, the data that needs to be reconciled could exceed human working memory, which is a recipe for error. And you cannot just go out and buy an upgrade, either.
Or can you? We use applications that synchronize data all of the time and notify us of changes. Every time you sync your smart phone, a reconciliation process is executed. So, how can we make EHR technology support this better?
That's the idea behind the Reconciliation profile. The point is that you have a service, call it the Reconciliation Service, that:
- Knows how to get information from other information systems, including EHRs, PHRs, Immunization Registries, disease registries, ePrescribing hubs, or HIEs.
- Understands the semantics by which problems, meds, allergies, et al, are exchanged (via CCD/C32).
- Can automatically identify duplicated, updated, or new information based on those semantics,
- And can interact with a User Agent to display to the provider, and get updates back from the provider on these changes,
- And can report the reconciled results.
Those synchronizing systems can do this very well because they have some common rules about how they exchange calendar and contact information. We can take advantage of some of those same rules. If two problems have the exact same universal ID (a data element on clinical statements required by all IHE profiles), then they must, according to the semantics, represent the same problem. So we can find some of this easily if we agree not to lose identifiers in exchanges. But we can also take advantage of other rules to identify possible duplicates where IDs aren't synced. In those cases, the system might look at the code used to describe the problem, and whether the dates overlap. Two instances of flu with overlapping dates can be identified as likely being the same "problem". There are other heuristics that could also come into play, especially in vocabularies that have a hierarchy or "is-a" relationship. If document A indicates a problem on 11/18/2010 of ICD-9-CM code 845 (ankle sprain), and document B indicates a problem using ICD-9-CM code 845.01 (ankle sprain of deltoid ligament) on the same date, then this is also a case where automation can identify a potential duplicate, with richer results.
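A sketch of what those heuristics might look like in code (Java, with a simplified Problem record standing in for data extracted from a CCD/C32 entry, and a toy stand-in for a real terminology service's "is-a" query):

```java
import java.time.LocalDate;
import java.util.Objects;

public class ReconciliationSketch {

    record Problem(String universalId, String code,
                   LocalDate onset, LocalDate resolved) {}

    // Toy stand-in for a terminology service: does 'specific' refine 'general'?
    // (For ICD-9-CM, 845.01 is-a 845; a real system would query the hierarchy.)
    static boolean subsumes(String general, String specific) {
        return specific.startsWith(general);
    }

    static boolean datesOverlap(Problem a, Problem b) {
        LocalDate aEnd = a.resolved() == null ? LocalDate.MAX : a.resolved();
        LocalDate bEnd = b.resolved() == null ? LocalDate.MAX : b.resolved();
        return !a.onset().isAfter(bEnd) && !b.onset().isAfter(aEnd);
    }

    static boolean likelyDuplicate(Problem a, Problem b) {
        // Rule 1: identical universal IDs mean the same problem, by definition.
        if (Objects.equals(a.universalId(), b.universalId())) return true;
        // Rule 2: same or hierarchically related codes with overlapping dates.
        boolean related = a.code().equals(b.code())
                || subsumes(a.code(), b.code()) || subsumes(b.code(), a.code());
        return related && datesOverlap(a, b);
    }

    public static void main(String[] args) {
        Problem docA = new Problem("1.2.3.4^100", "845",
                LocalDate.of(2010, 11, 18), null);
        Problem docB = new Problem("5.6.7.8^200", "845.01",
                LocalDate.of(2010, 11, 18), null);
        System.out.println(likelyDuplicate(docA, docB)); // true: flag for review
    }
}
```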
The output of this process would be a list of duplicates, updates, and new information in a structured format that could be presented in a user interface, to be accepted, rejected or modified by a healthcare provider.
So, instead of looking at 2 or 3 lists of 8-10 items, he or she can look at one consolidated list showing a composite view of the information, highlighted with respect to variances over time.
The output of the interaction with the provider would be the reconciled list, plus the following pieces of data: Who performed the reconciliation, when, and against what data sources. That information would flow into the next system that needed to perform reconciliation, which would further benefit the process.
So, if you have document A, B and C containing information to reconcile, and C indicates that it contained a reconciliation of A and B already, then C already contains one provider's reconciled list.
The profile doesn't specify how to do the reconciliation, but it does identify certain cases where some kind of reconciliation can be done, and how to record the results to indicate what was done. So there are some functional requirements that the reconciliation service needs to meet to generate the output. It also creates opportunities for some systems to be really smart about how they identify duplications and changes, which can greatly add value.
Reconciliation is something that is done on just about every visit or transfer of care for a patient. Because of the volume of data that might be dealt with, it's a process that is ripe for some interoperable automation. We vote on whether this proposal goes forward tomorrow morning. I expect that it will go forward, as it has quite a bit of support.
That's some cool technology. I want to implement it.
Functional Governance
When is a requirement not a functional requirement? When it gets into implementation details. One of the discussions going on in HL7 is around a project currently under development which seems ready to specify implementation rather than functionality. I say "seems" because all I have to go on is the project scope statement and statements by one of the committee leaders pushing the project forward, and that's where I have gotten that impression. I haven't yet looked at the draft materials that the project has produced because, to my knowledge, they aren't published yet.
I won't go into all of the details; if you've been following some of the HL7 list traffic, you already know them. HL7 develops several kinds of standards. One of these is functional models: sets of requirements on specific kinds of healthcare IT, like the EHR Functional Model or the PHR Functional Model. Other types of standards are messaging and services standards like Version 2 or Version 3 (messages and documents), or Clinical Decision Support or Terminology (services).
So, the problem here is one of how vocabulary should be used in a functional model. For a specific use, it's certainly acceptable to say that X shall be able to be communicated using Y vocabulary. For example: the system shall have the capability to exchange laboratory results using LOINC. But other uses get into implementation. To say that problems shall be able to be stored using ICD-10-CM might go too far, depending upon the purpose of the system.
Storage is, more often than not, one of the implementation characteristics by which a functional requirement to exchange information is met. But there are other mechanisms by which that same functional requirement can be met. I know of at least two external third-party vocabularies whose claim to fame is that they are A) more granular than vocabularies like SNOMED or ICD-* and B) easily mapped to either. If a product that stored information using either of those vocabularies were able to regurgitate that information in an exchange using an appropriate vocabulary, then it meets the functional exchange requirement. What it does internally is left up to it.
Now, in this particular case, it's clear that this project has a favored vocabulary, and that vocabulary is also either completing, or in the process of, being mapped to another, broader, more widely used vocabulary. The favored vocabulary is a licensed product that is not as readily available in some regions as the other one. And while the licensing costs are rather inexpensive as vocabularies go, there's still substantial time and energy spent A) verifying suitability for purpose, B) licensing the product, C) accepting delivery and incorporating it, et cetera, especially if an alternative is already available and licensed. The cases where it surfaces as a functional requirement also matter, because it could change a lot of implementation details, or just a few.
So, the question is: how should a functional model specify the use of a terminology? That's an interesting question, and not one that can be answered in a few days (I think). But I think it needs to be answered before this particular project will be ready to move forward. The pressure is on to push this project forward because balloting deadlines are nearing, and the project team (and committee leadership) thinks they are ready to publish. Those are not necessarily good reasons to move forward. There are reasons why we have this "three-tiered" governance model in HL7 (Committee, Steering Division and finally TSC). So, I'm pushing back and saying this particular problem needs to be resolved before I would feel comfortable voting yes to move the project forward.
Another part of the issue has to do with where and how it is appropriate to select a specific vocabulary for a specific topic. I'm certain that the particular domain expertise is not widely present in the EHR committee, although it may be quite present in the project team. I do not want to see the HL7 Functional Models become a back door to the selection of vocabularies without some way to ensure that the whole process encourages appropriate domain-specific review. Furthermore, requiring a vocabulary in a particular FM should be predicated on a level of acceptance of that vocabulary as being appropriate to the purpose. But that isn't an EHR FM sort of decision to make, at least not as I understand it from the perspective of HL7 governance.
Essentially, my pushback is this: A) I don't want someone to tell me how to implement the system; I want them to tell me what they want it to do, and I'd like to know how that will be ensured not just in this project, but in all projects with a similar issue (because this isn't the first time I've encountered this). And B) how can we be sure that requirements to use a particular vocabulary are vetted in an appropriate domain before they show up as product requirements? If we decide to trust the process used to create the vocabulary, and its acceptance for a particular purpose is well established in the industry, I'm OK with that. What I'm not OK with is using the EHR Functional Model as a way to gain acceptance for a specific vocabulary without seeing the vetting process happen.
Of course, some people probably think I'm blowing off steam and way over the top. One of the e-mails seemed to indicate that. That could be. I don't know this particular vocabulary from Adam: it's the first time I've heard of it, I'm not a licensee and cannot look at it, and I don't know if it is suitable for the particular purpose it is being promoted for, nor would I even be able to tell if it were. But I also know that most of the balloters on EHR Functional Models would be in my shoes as well -- so, I'll push a little bit more.
If nothing else, this project has raised an important issue through the HL7 governance process, and those issues will get put on the table and, I expect, be addressed. That's all I'm asking for, and that is, after all, the function of the governance process.
A Progress Report on Self Displaying CDA
As promised, here is a progress report on Self Displaying CDA. Now, the whole premise of this project is that the XML, CSS and CDA standards are sufficient to embed a CSS stylesheet inside a CDA document so that you can display it automagically in a browser.
So, let's take a look at what that would require:
- A CDA document. OK, I have plenty of those.
- An XML Stylesheet Processing Instruction which uses just a fragment identifier to indicate the location of the element containing the stylesheet. It looks like this:
<?xml-stylesheet href='#css' type='text/css'?>
- An element with an ID containing a CSS stylesheet.
- A browser that knows how to apply CSS to XML (I have plenty of those).
So, what doesn't work?
Well, my proof of concept starts with a pair of demonstration files for Displaying XML with CSS. I simply added an element at the very end of the catalog.xml document from that site:
<css ID='css'>
</css>
Then I inserted the CSS stylesheet content inside it, fired up IE 8 and viewed the result. It worked. Then, I tried this test in four other browsers. Firefox succeeded, but Opera, Safari and Chrome failed. And Opera made my **** list for overriding my browser preference without asking.
Without looking at the source code (because that would be cheating -- but which you can get for some of the engines these browsers are using), I tried to figure out if there was something I was doing wrong. And there was. The problem is with href="#css" in the stylesheet PI. That's a relative URI to the "current resource", which points to the element identified by the ID (type, not attribute) having the value "css". But there's no way for the XML parser to really know that the ID attribute on the css element is of the xs:ID type. Firefox and IE are just guessing. And unless the parser knows that the attribute is of the xs:ID type, it cannot resolve the reference. And because specifying the CDA schema location isn't legal according to the HL7 XML ITS, there's no way to tell the parser how to find the schema so that it can figure this out. So that's a stalemate.
But I recalled that the W3C had a fix for this, at least in the DTD for XML Schema (put that in your parser and validate it), where they included ATTLIST declarations for all the XML Schema elements using the ID attribute. So, I knew a way around this (or so I thought). I could include a DOCTYPE declaration that would tell the parser that the attribute was of the ID type, just like the W3C did. So I very carefully added a DOCTYPE declaration, and put it after my XML declaration as required by the XML standard, and before the stylesheet PI, so that the parser would know that the ID attribute on the css element was of type ID BEFORE it tried to process the stylesheet PI. Like you see below:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE CATALOG [<!ATTLIST css ID ID #IMPLIED> ]>
<?xml-stylesheet href='#css' type='text/css'?>
That failed to work either, and it was also a little bogus to include a DOCTYPE that wouldn't validate the XML but only defined an ID attribute for one element. Then I was wishing that the W3C had defined an xml:ID attribute in the XML standard. They didn't, but they did define xml:id later. OK, so now I had something else to try, but I was betting it wouldn't work either... 5 minutes later ... NOPE, that failed too in the other browsers, but at least it still works in IE and Firefox.
But that at least gives me a rationale for what SHOULD be done inside the CDA document, maybe. And it also says something else about what should happen to ID attributes that were manually added to the CDA schema. They may need to use xml:id instead. There's a good reason to do it that way.
Anyway, I'm stuck with IE 8.0 and Firefox, at least until someone fixes WebKit (for Safari and Chrome) or Opera.
So, the first question this Guide needs to answer is which attribute to use to identify the stylesheet. Should it be an xml:id attribute, or should it be the CDA ID attribute on the ... OH DANG ... and then I realize that there's a placement problem.
The ID attribute appears on the <observationMedia> element containing the <value> element containing the stylesheet, instead of on the <value> element directly. This is ONE element too far away from the stylesheet (and as it turns out, THAT DOESN'T matter to the two browsers I'm stuck with here, but I'd like to be legit). The figure below illustrates the problem. The one marked notLegal is where the attribute really SHOULD (shall?) go.
<observationMedia ... ID='legal'>
  <value mediaType="text/css" representation="TXT" ID='notLegal'>
</observationMedia>
OK, so here's a thought. If we use xml:id, it's already a legal CDA extension, and can go right on the <value> element just fine -- well, as long as the CDA producer and receiver already deal with legal CDA extensions. And that puts it in the right place instead of one element above the right place. But now that becomes a different problem, because it puts an extension in the mix. Well, that's one to save for committee discussion.
So the next big problem is the table box model and whether or not I can get it to work in both browsers. There's a work-around if it doesn't: simply say that Self-Displaying-CDA documents either DON'T contain tables, or contain padding, use monospaced fonts for display, and use block rather than table display styles.
The CSS stylesheet? What? You want the CSS stylesheet? That's pretty easy, but also needs more work to deal with the styleCode attribute, which it doesn't handle just yet, and a half dozen other details. What follows is what I have so far, and it's still pretty slim, about 4K. The commented out table display types are what IE DOESN'T seem to support.
paragraph, addr, text { display: block; }
ClinicalDocument { display: block; }
#css { display: none; }
entry { display: none; }
name { display: block; }
section>text { display: block; }
table { display: block; /* table; */ border-spacing: 2px; }
tr { display: block; /* table-row; */ vertical-align: inherit }
thead { display: block; /* table-header-group; */ vertical-align: middle }
tbody { display: block; /* table-row-group; */ vertical-align: middle }
tfoot { display: block; /* table-footer-group; */ vertical-align: middle }
col { display: table-column; }
colgroup { display: table-column-group; }
td { display: inline-block; /* table-cell; */ vertical-align: inherit; }
th { display: inline-block; /* table-cell; */ font-weight: bolder; text-align: center; vertical-align: inherit }
table>caption { display: block; /* table-caption; */ text-align: center; }
structuredBody { margin: 8px; display: block; }
ClinicalDocument>title { font-size: 2em; margin: .67em 0 }
structuredBody>component>section>title { font-size: 1.5em; margin: .75em 0; }
structuredBody>component>section>component>section>title { font-size: 1.17em; margin: .83em 0 }
structuredBody>component>section>component>section>component>section>title { margin: 1.12em 0 }
structuredBody>component>section>component>section>component>section>component>section>title { font-size: .83em; margin: 1.5em 0 }
structuredBody>component>section>component>section>component>section>component>section>component>section>title { font-size: .75em; margin: 1.67em 0 }
title { padding-top: 10px; font-weight: bolder; display: block; }
addr { font-style: italic }
sub { font-size: .83em; vertical-align: sub }
sup { font-size: .83em; vertical-align: super }
list { margin: 1.12em 0; margin-left: 40px; display: block; }
list[listType="ordered"] { margin: 1.12em 0; margin-left: 40px; display: block; list-style-type: decimal; }
list[listType="unordered"] { margin: 1.12em 0; margin-left: 40px; display: block; list-style-type: disc; /* disc, not decimal, for unordered lists */ }
item { display: list-item; }
list>list { margin-top: 0; margin-bottom: 0 }
br:before { content: "\A"; white-space: pre-line }
streetAddressLine:before, city:before, country:before { content: "\A"; white-space: pre-line }
center { text-align: center }
:link, :visited { text-decoration: underline }
:focus { outline: thin dotted invert }
So, how do you use this? You create an observationMedia element in any section of your CDA document. On the value element, you use mediaType='text/css', representation='TXT' and xml:id='css'. Then you include the above stylesheet text inside the value element and add the appropriate stylesheet PI (see the top of this page).
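Put together, the entry might look something like the sketch below. This is my working sketch, not committee-approved markup; the entry wrapper and the classCode/moodCode values are what I'd expect for observationMedia, but check the final profile text.

<entry>
  <observationMedia classCode="OBS" moodCode="EVN">
    <!-- xml:id (a legal CDA extension) marks the element the PI points at -->
    <value xml:id="css" mediaType="text/css" representation="TXT">
      ClinicalDocument { display: block; }
      /* ... remainder of the stylesheet above ... */
    </value>
  </observationMedia>
</entry>

with the PI at the top of the document pointing at it: <?xml-stylesheet href='#css' type='text/css'?>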
There are still some details to work out. That PI should be an "Alternate" stylesheet so the browser won't barf and so you can offer up an XSLT based one for use. There should be a document template in the CDA to identify it as conforming to the rules for "self-displaying-cda" (or maybe there are a couple of them with different degrees of conformance ... haven't gotten there yet, but table support or not leads me to think that there are degrees).
Oh, and sorry, but no embedded images in Self-Displaying-CDA. There's just no way to handle them. XSLT can work, but not CSS ... yet.
To follow the development more closely, check out the HL7 Wiki pages for the project.
Wednesday, November 17, 2010
NHIN 203 - An Update on The DirectProject
NHIN 203 – The Direct Project: Where We Are Today
DATE: Thursday, November 29, 2010
TIME: 1:00 – 2:30 pm ET
FACULTY:
- Arien Malec – Coordinator, The Direct Project
AUDIOCONFERENCE: (866) 699-3239 or (408) 792-6300
(Please join the event with a computer system first and follow the audio instructions on the screen.)
ACCESS/EVENT CODE: 668 619 540
ATTENDEE ID: You will receive this number when you join the event first with a computer connection.
READ THE NHIN 203 COURSE DESCRIPTION AND LEARNING OBJECTIVES
Review the full Fall Semester Course Catalog: www.NationaleHealth.org/NHIN-U
Did you miss any of the NHIN University 2010 Spring Semester?
Recordings and transcripts are available here.
Tuesday, November 16, 2010
How NOT to make Beer
One of my hobbies, which I haven't done in quite some time, is home brewing my own beer. A very long time ago I started brewing with a long-time friend of mine, way back when I lived in Tallahassee, Florida. We were inventing our own recipe, and so we decided to get about 9 lbs of malt: a combination of light powder malt, canned amber and dark malt, plus some cracked crystal malt, and hops and yeast (I don't remember what kind). My buddy had a huge spaghetti pot that we brewed in, holding about 4.5 gallons of liquid.
We filled it to our usual depth and started to add the malt. But we usually brewed with about 6-7 lbs of malt, and this time we had over 10 lbs, and well, it just didn't fit -- which we discovered about two-thirds of the way into the process. So, in a panic, we grabbed a smaller pot and began bailing. Then we tried to mix the malt together in a way that would make sure that both pots had the same content. At the stage where we were adding hops, the challenge of two pots was trying to figure out how to keep it even, or not. We figured out that it didn't really matter, and so put most of the hops of one kind in the larger pot, and the remaining smaller amount of the other kind in a hop bag in the smaller one.
Once we finished, we transferred the contents to our 5 (actually 6) gallon wort container. I came back the next day and we pitched the yeast. We moved the just-pitched beer into his study (which in that rented house had long, ugly shag carpet) where it would bubble away while he and his wife went away for the week. Now, this was in Florida, and it got hot, and we had more wort than usual in our bucket. So, needless to say, the airlock was blown off the wort bucket when the heat and lack of headspace caused significant over-pressure.
He came back to an interesting smell coming out of his study, but most of the fermenting beer was still in the bucket, so he recapped it. The next weekend I came over to help move it to the glass carboy where the yeast could settle out of the beer. So, we siphoned the beer from the bucket into the carboy. That worked great. Then we went to put in the rubber stopper, and I don't remember who, but I think it was my friend, who slammed it in so hard that it went down through the neck into the bottle. So now we had this little floating rubber ship in the glass bottle that was meant to keep the carboy closed. It was college days, and we had only one rubber stopper, so we figured out how to get it out. It took some sterilizing solution and some hanger wire.
After that bit of bobble and a few weeks settling, it was time to bottle. So we had to first siphon from the carboy back into the bucket to keep the sediment out. My friend started the siphon, and managed to get way too big a mouthful ... and so he spit it out into the bucket ... that we were actively siphoning into. It wasn't intentional, he was just trying to keep from drowning in not quite finished beer. We looked at each other, and laughing said almost simultaneously: "No known pathogens grow in beer". Lots of nasty things could grow in beer, but nothing that could harm us (especially at the alcohol levels we brewed with). So, we added a bit of priming sugar and bottled, and then set the bottles back into the study to be stored for a little while until they were ready for drinking.
I did say it was Florida, and that it was warm, right? So again, my friend wound up with beer on that shag rug after a couple of the bottles exploded.
A few weekends later, it was time to test our work. Amazingly, it was fantastic. We had tried to write down the recipe, but really, with everything that went wrong, and with all of the unreproducible additives, there was no way we could ever reproduce what we called "Scary Beer".
Now, Scary Beer was a fantastic beer, and met all of our requirements. It had lots of malt, and alcohol, and these were much in demand. It worked because even though we had a miserable process, there were too many other things that made it nearly impossible to fail. The fact that we were making beer, which is somewhat self-purifying, made up for a lot of our failures.
But, this is not a process I would try to repeat, and frankly, it was unrepeatable. We lacked a certain discipline in writing down how we did what we did, and as a result, have never really been able to repeat it.
The moral of this story is that a successful process is not necessarily a repeatable one, especially if you didn't keep good notes. In some cases, as in this one, it's not one I would even think desirable to repeat.
I'll let you apply that lesson where you will. I think right now, I'll go have a beer.
Converting from an HL7 Version 2 Message to CDA
This post was originally written for and published on the HL7 Standards blog, here: http://www.hl7standards.com/blog/2010/11/16/converting-from-an-hl7-version-2-message-to-cda
One of the most common ways to integrate the HL7 CDA Standard into existing solutions is to create a CDA document from an HL7 Version 2 message. Three HL7 Version 2 messages are commonly used to do this: MDM_T02, ORU_R01, and ADT_* messages. The way that I like to do this is by converting the HL7 Version 2 message from its delimited "pipe and hat" (ER7) form into an XML format, and then transforming that via XSLT to a CDA document.
There are HL7 schemas for most Version 2 releases that members can obtain from the HL7 Web site. Most interface engines I've worked with provide some way to convert from HL7 V2 to XML, but few make it easy to use the standard HL7 schemas for these messages. The existence of these must not be well known to interface developers. A couple of interface engines do make it pretty easy to convert to an XML format that is close enough to work with. Pragmatically, I often choose the easiest way to get the messages into XML, and not necessarily the most standards-compliant way. It would be nice if interface engine vendors supported the standards.
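To make the conversion step concrete, here is a single text-valued OBX segment in pipe-and-hat form, followed by its rendering in the HL7 Version 2 XML encoding (the field values are made up, and I've assumed a release where OBX-3 is a CE data type):

OBX|1|TX|11488-4^Consultation note^LN||Patient is improving.||||||F

<OBX xmlns="urn:hl7-org:v2xml">
  <OBX.1>1</OBX.1>
  <OBX.2>TX</OBX.2>
  <OBX.3>
    <CE.1>11488-4</CE.1>
    <CE.2>Consultation note</CE.2>
    <CE.3>LN</CE.3>
  </OBX.3>
  <OBX.5>Patient is improving.</OBX.5>
  <OBX.11>F</OBX.11>
</OBX>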
Of course, there is quite a bit more involved in these mappings, and I describe some of these details in “The CDA Book”, which I expect to be available in the first half of 2011. There is an entire chapter that describes the mappings field by field for each of the above segments, and also provides details on how to map from HL7 Version 2 data types to the data types used in CDA and other HL7 Version 3 standards. But if you cannot wait for the book, at least you know that A) it can be done, and B) where to start.
Having created the XML, the next step is to map the contents into the CDA standard. The HL7 message segments map very well to the CDA standard, as you can see from the table below.
Segment    CDA Element
MSH/EVN    <ClinicalDocument>
NK1        <participant>
NTE        <text>
OBR        <section>, <observation>
OBX        <observation>, <section>, or <nonXMLBody>
ORC        <inFulfillmentOf>
PID        <recordTarget>
PV1        <encompassingEncounter>
SPM        <specimen>, <procedure>
TXA        <ClinicalDocument>
The MSH/EVN segments contain information about when the message was created and its identifier. This can be used to help fill in details about the CDA document that was created.
The NK1 segment contains information about next of kin and emergency contacts. Most CDA implementation guides put these in as <participant> elements in the CDA header, where the typeCode XML attribute is set to IND.
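For instance, a next-of-kin entry from NK1 might land in the header as something like the sketch below; the classCode and relationship code are illustrative, and the right values depend on NK1-3 (relationship) and NK1-7 (contact role) in the actual message.

<participant typeCode="IND">
  <associatedEntity classCode="NOK">
    <!-- personal relationship from the HL7 RoleCode system -->
    <code code="MTH" displayName="mother"
          codeSystem="2.16.840.1.113883.5.111"/>
    <associatedPerson>
      <name><given>Jane</given><family>Doe</family></name>
    </associatedPerson>
  </associatedEntity>
</participant>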
The OBR segment represents ordered tests in an order message, and completed tests in a result message. These are often panels, and would appear in their own section of a CDA document. The order information itself can be recorded as <observation classCode='OBS' moodCode='RQO'> to describe the order and participants (e.g., the ordering physician would be the author of an observation in request mood).
The ORC segment represents order details, usually the order number. That would most commonly be mapped to the place in the CDA header meant to store these details.
The PID segment describes the patient. In fact, just about every field in the PID segment has a home in the <recordTarget> element of the CDA header.
The PV1 segment describes the visit. That too has a place in the CDA header that describes the visit for which the document is written.
The SPM segment describes the specimen and the procedure for obtaining it. Parts of that segment should map to a <specimen> participant, and other parts to a specimen-taking <procedure> act in a clinical statement in the CDA document.
The TXA segment unfortunately only appears in MDM messages. I say unfortunately because the TXA segment is what the CDA header is modeled upon, and contains many document specific details that other messages such as the ORU and ADT just don’t address.
Finally, we come to the all important OBX. The OBX segment is the workhorse in HL7 Version 2 messages. Because it follows the name/value pattern, it can be used to record just about anything. But, because it does that, it can also be mapped to just about anything else.
In MDM messages, the OBX segment actually contains the contents of a clinical document. Some systems already send CDA documents in the OBX segment of an MDM message, but for this post, I'm assuming you aren't working with one of those. If you are working with an MDM message, the contents of the OBX segment will typically wind up in the <nonXMLBody> element of the CDA document.
In ORU messages, the OBX segment usually describes a test result and the value associated with that result. Depending upon the feed, this could be an <observation> within a section (e.g., a lab message), or the <text> associated with an entire <section> described by the OBX (e.g., part of an imaging report).
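To show the XSLT side of this, here is a rough template that maps the text-valued OBX from the earlier example into a CDA entry. It's a sketch, not an excerpt from the book: value handling for the other OBX-2 data types (NM, CE, and so on) is omitted.

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:v2="urn:hl7-org:v2xml"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="urn:hl7-org:v3">
  <!-- Map one text-valued OBX to a CDA observation -->
  <xsl:template match="v2:OBX">
    <entry>
      <observation classCode="OBS" moodCode="EVN">
        <code code="{v2:OBX.3/v2:CE.1}"
              displayName="{v2:OBX.3/v2:CE.2}"
              codeSystemName="{v2:OBX.3/v2:CE.3}"/>
        <value xsi:type="ST">
          <xsl:value-of select="v2:OBX.5"/>
        </value>
      </observation>
    </entry>
  </xsl:template>
</xsl:stylesheet>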
Keith W. Boone has been very active in the development of the CDA Standard and CDA implementation guides for the last 7 years. He works as a Standards Architect for GE Healthcare and in that role participates in the development of standards through a number of SDO organizations including HL7, IHE, ISO, ASTM and Continua. He is presently co-chair of committees in HL7 and Integrating the Healthcare Enterprise that develop CDA implementation guides, and will take a position on the HL7 board in January of 2011. He was the co-chair in ANSI/HITSP of the Care Management and Health Records Technical Committee, and editor of the HITSP C32 Summary Documents using the CCD specification developed by that organization. He has authored or edited more than a score of CDA implementation guides and related specifications. His most recent work, "The CDA Book", is a textbook on the HL7 Clinical Document Architecture standard that will be published by Springer in the first half of 2011.
Monday, November 15, 2010
Some Thoughts on Meaningful Use Phase 2
Last week I was kicking around some ideas for Phase 2 of Meaningful Use with some colleagues in the standards and interoperability space. My basic idea is that now that ONC has taught itself and some EHR vendors how to use a hammer, we should move on to the other tools in our toolbox. Essentially, what I'd like to see is that all summary documents meet the ONC data requirements, without necessarily requiring the C32 document format. A document just needs to conform to the section and entry requirements.
The change in the regulation is pretty simple:
§ 170.205 Content exchange standards and implementation specifications for exchanging electronic health information. The Secretary adopts the following content exchange standards and associated implementation specifications:

Essentially, what we are saying is that the test results, problems, medications, allergies (and procedures if a hospital or CAH) would be in conformance with HITSP/C83 CDA Content Modules, per the following rules:
(a) Patient summary record—
(1) Standard. Health Level Seven Clinical Document Architecture (CDA) Release 2, Continuity of Care Document (CCD) (incorporated by reference in §170.299). Implementation specifications. The Healthcare Information Technology Standards Panel (HITSP) CDA Content Modules Component/C83 (replacing Summary Documents Using HL7 CCD Component/C32) (incorporated by reference in §170.299).
(2) Standard. ASTM E2369 Standard Specification for Continuity of Care Record and Adjunct to ASTM E2369 (incorporated by reference in §170.299).
§ 170.304 Specific certification criteria for Complete EHRs or EHR Modules designed for an ambulatory setting.
(f) Electronic copy of health information. Enable a user to create an electronic copy of a patient's clinical information, including, at a minimum, diagnostic test results, problem list, medication list, and medication allergy list in ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...
(h) Clinical summaries. Enable a user to provide clinical summaries to patients for each office visit that include, at a minimum, diagnostic test results, problem list, medication list, and medication allergy list. If the clinical summary is provided electronically it must be ...Provided on electronic media or through some other electronic means in accordance with ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...
(i) Exchange clinical information and patient summary record — (1) Electronically receive and display. Electronically receive and display a patient's summary record, from other providers and organizations including, at a minimum, diagnostic tests results, problem list, medication list, and medication allergy list in accordance with the standard ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...
(2) Electronically transmit. Enable a user to electronically transmit a patient summary record to other providers and organizations including, at a minimum, diagnostic test results, problem list, medication list, and medication allergy list in accordance with ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...
§ 170.306 Specific certification criteria for Complete EHRs or EHR Modules designed for an inpatient setting.
(d) Electronic copy of health information. (1) Enable a user to create an electronic copy of a patient's clinical information, including, at a minimum, diagnostic test results, problem list, medication list, medication allergy list, and procedures ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...
(f) Exchange clinical information and patient summary record — (1) Electronically receive and display. Electronically receive and display a patient's summary record from other providers and organizations including, at a minimum, diagnostic test results, problem list, medication list, medication allergy list, and procedures in accordance with ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...
(2) Electronically transmit. Enable a user to electronically transmit a patient's summary record to other providers and organizations including, at a minimum, diagnostic test results, problem list, medication list, medication allergy list, and procedures in accordance with ... The standard ... specified in §170.205(a)(1) or §170.205(a)(2)...

So, what changes are required for systems meeting Phase I requirements to meet the criteria as rewritten? Depending on how they were written, possibly none, and otherwise small ones. If those solutions were written to import data based on the C83-defined sections, NO changes are required. If they were written to work only with the HITSP C32, then they have to condition their imports on the C83-defined sections and entries (which, by the way, the C32 already requires).
What changes in the tests? Instead of looking for the C32 template ID, you look for the C83 template ID, and if it is present, you test for the presence of the appropriate entries from C83 to meet the criteria. Again, not a big change.
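To make that concrete, here is a minimal sketch in Python of what such an import or test check might look like, assuming the inbound document is plain CDA XML. The section template OIDs below are placeholders I have made up for illustration; the real identifiers are defined in HITSP/C83, so look them up there before relying on any of this.

# A minimal sketch of checking for C83 section templates instead of the C32
# document template. The OIDs below are illustrative placeholders, not the
# real C83 template identifiers -- look those up in HITSP/C83 itself.
import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}

# Hypothetical section template IDs, keyed by the criterion each satisfies.
C83_SECTIONS = {
    "problem list": "2.16.840.1.113883.3.88.11.83.EXAMPLE.1",
    "medication list": "2.16.840.1.113883.3.88.11.83.EXAMPLE.2",
    "medication allergy list": "2.16.840.1.113883.3.88.11.83.EXAMPLE.3",
    "diagnostic test results": "2.16.840.1.113883.3.88.11.83.EXAMPLE.4",
}

def check_c83_sections(path):
    """Report which required sections are present and carry coded entries."""
    root = ET.parse(path).getroot()
    findings = {}
    for name, oid in C83_SECTIONS.items():
        present, has_entries = False, False
        # Walk every section and match on its declared templateId root.
        for section in root.iter("{urn:hl7-org:v3}section"):
            roots = [t.get("root") for t in section.findall("cda:templateId", CDA_NS)]
            if oid in roots:
                present = True
                # The criteria want coded entries, not just narrative text.
                has_entries = section.find("cda:entry", CDA_NS) is not None
                break
        findings[name] = (present, has_entries)
    return findings

# Example: findings = check_c83_sections("summary.xml")

The same routine serves both the importing system and the certification test: the test asks whether the right sections and entries are present, and the importer conditions its behavior on the very same check.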
Huh? What is the point of this change?
The point is that ONC is using the C32 as a hammer right now. The way the regs are currently written, physicians document their care, and then incorporate content from what they've already documented into a C32. Under the rules as rewritten above, we can actually use and exchange the original documentation created by the healthcare provider during the encounter. That way, a History and Physical, Consult Note, Discharge Summary, Referral Note, et cetera, which contains the same summary information as a C32, can stand in its place.
Now, it really only took a year or two for many engineers to pick up on this concept at IHE Connectathons. That is because engineers are inherently lazy (I speak for myself here), and in a good way: we want to avoid writing new code, and so we look for ways to reuse what we have already written. So, by making sure that problem, allergy, and medication lists show up the same way in every IHE profile, we made sure that we didn't have to rewrite the code that dealt with them. The same is true in the HITSP specifications as much as possible (we didn't have time to fix everything, but we did cover most of it). So, if your system can generate a Consult Note (C84), H&P (C84), Referral Note (C48), Discharge Summary (C48), ED Note (C28), or Patient Summary (C32), you can meet the criteria as rewritten above, because they all rely on the same definitions in C83.
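Here is a sketch of that reuse payoff, building on check_c83_sections from the sketch above. The document-level template OIDs are again made-up placeholders; the point is only that the section-level code never changes, whichever document type arrives.

import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}

# Hypothetical document-level template IDs for the HITSP constructs named above.
DOCUMENT_TYPES = {
    "2.16.840.1.113883.3.88.11.EXAMPLE.84": "Consult Note / H&P (C84)",
    "2.16.840.1.113883.3.88.11.EXAMPLE.48": "Referral Note / Discharge Summary (C48)",
    "2.16.840.1.113883.3.88.11.EXAMPLE.28": "ED Note (C28)",
    "2.16.840.1.113883.3.88.11.EXAMPLE.32": "Patient Summary (C32)",
}

def import_summary(path):
    """Identify the document type, then reuse the same section importer."""
    root = ET.parse(path).getroot()
    doc_roots = [t.get("root") for t in root.findall("cda:templateId", CDA_NS)]
    doc_type = next((name for oid, name in DOCUMENT_TYPES.items() if oid in doc_roots),
                    "Unknown CDA document")
    # check_c83_sections (from the earlier sketch) is identical for every type,
    # because all of these documents rely on the same C83 section definitions.
    return doc_type, check_c83_sections(path)

One importer, six document types: that's the lazy-engineer dividend.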
There are other advantages:
1. The same documents that are generated as part of existing provider workflows can be used to meet the patient summary requirements. There are simply more kinds of documents, and their additional content can be made available to patients for better access. For example, the discharge summary (already required under existing regulations in inpatient settings) can be used to meet both the discharge summary and patient summary requirements.
2. Documents are identified based on the type of visit: Consultation, H&P, Discharge, ED Note, et cetera, so when patients or providers go looking for information, they can see the type of service provided.
3. The same content used for clinical care can support claims attachments for just about every kind of note described in the Clinical Reports attachment guide, which is another requirement we'll need to meet by 2014 under the Affordable Care Act.
For #3, the Patient Protection and Affordable Care Act (P.L. 111-148) states in §1104(c)(3):
(3) HEALTH CLAIMS ATTACHMENTS.—The Secretary shall promulgate a final rule to establish a transaction standard and a single set of associated operating rules for health claims attachments (as described in section 1173(a)(2)(B) of the Social Security Act (42 U.S.C. 1320d–2(a)(2)(B))) that is consistent with the X12 Version 5010 transaction standards. The Secretary may do so on an interim final basis and shall adopt a transaction standard and a single set of associated operating rules not later than January 1, 2014, in a manner ensuring that such standard is effective not later than January 1, 2016.

Now, wouldn't it be novel if the care we were getting and the information being used to pay claims for that care used the same standards?