People often ask me why I'm involved in standards, or what it does for my employer. I can speak generally on this topic because I've had several different employers with an interest in standards, as well as experience with other organizations that engage with IT standards, and specifically Health IT standards.
The most commonly perceived benefit is that an organization gains an advantage by taking part in the development of a standard it uses, sometimes to the exclusion of others in its industry. This makes me giggle more than just a bit, because while I wish that it might be true, it generally isn't. If you are dealing with a consensus-based SDO, anyone who wants to can participate in the development process. In HL7, you don't even need to be a member of the organization, or pay any fees to participate in the development. Your only cost is your time. So, there's no exclusive access.
The "opportunity" presented by participation is early access to content that could eventually become a standard. But most standards development processes take quite a bit of time to get to the point of developing content that is sound enough to begin building software to support. You wind up trying to hit what is either a fast moving target in the early stages, or something which moves at a glacial pace only to introduce new ideas later that break what you might have spent months coding. Building product at the same time as the standard is being developed means that you have to deal with change, and with schedules that are outside your own control. This is especially challenging in healthcare where you have regulatory processes that might also need to be overlaid on your development efforts.
I recall one organization that I worked with that was trying to build an XSLT stylesheet editor at the time that the standard was being developed. Over the course of XSLT development, starting from the first submission in August of 1997 to the first working draft in August of 1998, to the final recommendation in November of 1999 there were numerous design changes that required a great deal of code rework. Having early access doesn't mean that you will have a successful product either. That particular project died before the standard itself became widely used. There was just too much change for the product to accept.
Another common misperception is that an organization can simply donate something that it has developed to become a standard. While some standards organizations do provide mechanisms by which standards developed outside the organization can be voted on to become a standard (HL7 does allow this), that's a very hard process for a commercial vendor to succeed at. In part because the standards development process is open to all, and because affected organizations will want to protect themselves, rarely will a commercial vendor be able to "donate" its IP without having to accept a number of improvements and changes to it. One of the biggest challenges in the development of the XML and HTML Document Object Model Level 2 was dealing with existing work that had been done by two leaders in web browser development. The Web Browser wars were fought as much on T-Cons and in W3C meetings as they were in the marketplace. The two biggest opponents in those wars steadfastly refused to allow features into the standard that would have given one or the other a significant advantage in its products.
Organizations must also address the SDO's development process and methodology. A specification that isn't developed using the organization's process, specification structure, and methodology simply won't be accepted without significant change. MITRE has devoted a significant amount of time to the development of its hData specification in order to fit with HL7's methodology. And, by the way, it's very rare that a single organization could develop something on its own that couldn't stand some improvement when examined through the eyes of others (that too is a benefit of the SDO process).
So, what is the benefit to participation if it doesn't provide a significant commercial advantage over the competition? The benefit is elsewhere. Use of standards allows an organization to focus on its core competencies and proprietary* value. Most organizations (except interface engine suppliers) don't focus on "interoperability". Instead they focus on optimizing workflows to perform specific tasks, using technology to enable their customers to better perform their jobs.
Interoperability? Those are wheels on the car. Customers didn't use to ask for them by name (Meaningful Use did change that to a great extent); they were simply expected to be there and to work. The only time something new came into play is when it added a capability that didn't previously exist (like IHE's Cross Enterprise Document Sharing, or XDS, profile), and customers wanted and asked for that capability. That's why, for example, most HL7 ADT and Laboratory interfaces are several revisions behind the current release of HL7 Version 2.x. Until Meaningful Use came along, the standards that customers were using worked and were good enough for what they indicated they wanted to accomplish.
Another place where using standards is a benefit is access to the expertise of broad groups of experts. The existing CDA and CCD implementation guides are not the work of one person. The HL7 CCD specification has the input of 14 separate editors, and an even wider group of people who provided commentary, input and improvements.
Use of exchange standards allows organizations to avoid work. They don't need to spend time and effort developing, supporting and maintaining proprietary mechanisms to connect things. Think of what you do today to connect a printer to your PC. Printer manufacturers don't focus on page description languages (any more at least), USB connectors and serial interface characteristics. Those are instead addressed by standards. Where the printer manufacturers focus their attention is on printing speed, resolution, quality and cost. Could you have imagined 10 years ago being able to get 300dpi, 8-page-per-minute printing that worked with virtually any computer you owned for less than $50?
A final benefit is customer appreciation of standards. While they may not always ask for them by name, they do appreciate when they are supported. Being able to say "we use the standards in our product" is a great check-box item on a product specification sheet. Don't get me wrong, employers also like to be able to say "we led the development of standards" too, but even that rarely shows up in RFPs. My first job working with standards was with an organization that led much of the development on XML based standards, including XSLT, XPath, DOM2 and others. Having that experience can be valuable, but only if that leadership also shows up in the quality of your products, and their ability to meet a customer's need. I've always liked to think that my experience in standards has value, and when it hasn't, I've found a different place to work.
Keith
* Proprietary used to mean "belonging or pertaining to a proprietor" (and still does in some circles). A proprietary product is something sold by a proprietor. It should be a neutral term, rather than a judgmental one, but in today's jargon it has changed in meaning. You can build a proprietary product based on standards. In fact, most interface engines and middle-ware products are just that.
Monday, January 31, 2011
Friday, January 28, 2011
A critique of adoption models for Health IT
John Moore of Chilmark Research recently brought Watts Humphrey's book back to mind in his post on developing a maturity model of Healthcare Information Exchange. HIMSS Analytics has a similar model for EHR Adoption. I happen to have a copy of Watts' book on my shelf, so I cracked it open again.
The problem with both of these models is that they work from assumptions about where we are with technology products today, and where we would like to be. They'll need to be changed when the entry-level criteria for being an HIE or having an EMR improve. I like to think about levels more in terms of organizational capabilities, rather than fixed feature sets.
Page 6 of Watts Humphrey's book talks about 5 levels of organizational maturity. The first level is "Initial", and you get it for free. There is no "maturity" at level 1. Where I worked, we added a "Level 0", which is when you don't even know about the CMM. This was stolen from the fourth order of ignorance (see The Five Orders of Ignorance). John's criterion for level 1 for an HIE is that you get something, but that something is ill-defined and provides basic capabilities (e.g., a portal).
At level 2, you get into a repeatable process, which implies basic management control. John talks about providing basic referral capabilities in his model for HIEs, but I'd rather look more closely. An HIE at level 2 of the CMM should be able to repeatedly roll out new features to its customers. This is somewhat different than providing a fixed set of features.
Level 3 in the CMM is the "Defined Process". Here, an organization has a documented process and follows it. It is an important step, but as Dr. Humphrey points out, only an interim one. There are no guarantees that the process is effective. I see this as being applied to HIE governance. An HIE at this level would have written procedures in place that describe how it goes about developing new exchange capabilities. John has HIEs start gathering basic metrics and improving the HIE experience.
Level 4 is where real change begins to happen, because at that point, both projects and processes begin to be measured and evaluated. This is what quality measurement is about. An HIE at this level will be able to tell you how effective its offerings are in improving patient care. John has HIEs driving compliance across the organization. This is one of his best measures, because he's looking at the effects of the HIE processes (driving compliance), instead of a specific set of features.
Finally, Level 5 optimizes processes. When products and processes aren't measuring up, they are changed in ways that introduce efficiencies and improvements. An HIE at this level will be constantly improving not just its product offerings, but also the way it rolls them out. This is what is needed to establish the appropriate scaling in HIE efforts. One of John's goals here is really good: improve operational efficiency and effectiveness of care. But the others are still too feature-driven.
As critical as this post is, I realize that these fixed models of maturity or adoption are useful. John has a good list describing what HIEs should strive for today, and the HIMSS Analytics model for EHR Adoption helps organizations figure out a plan for EHR deployment.
As I look back through Watts' framework, I can see a number of other places where it can be applied in our nation's Healthcare IT programs, and ways in which different aspects of it have been applied. Current initiatives are trying to improve healthcare everywhere at once. We are after level 4 and 5 improvements (automating the generation of quality metrics, and process redesign in standards development) at the same time as we are trying to create a repeatable process for enabling information exchange. We should look at these programs through the lens of quality improvement as well, and see what the right order of steps is.
Take a look at Watts' book, and figure out where your organization might be. The company I worked for wound up at level 3 after about 18 months of working towards it (and also became ISO 9001 certified). Getting to that level was a huge improvement from where we were.
Wednesday, January 26, 2011
Just in Time Presenting Skills
I have over 200 slide decks on my computer. Every year I have to create at least ten new decks. I also have decks of slides given to me by standards colleagues and coworkers that I have permission to borrow from as appropriate. I have decks I produced for IHE, HL7, HITSP, EHRA and others, and then I have the internal decks I use at work.
You want a presentation on IHE or HL7 or HITSP, I got a dozen of em. Need a new one? I can put it together given a day's notice, or even less in some cases. I might not like you for asking, but part of my job is to promote these activities.
Often times, many of you will have to do the same thing. How can you do it? I've already given you a couple of hints: have ready-made content you can reuse. But here are a few more.
First of all, the words on the slide are NOT important to your audience. They will get copies of those words later, and can read them faster than you can speak them. The best kind of slide to speak to is one where you can change the message to suit your audience or purpose. A single picture illustrating an event. A FEW words describing a concept. If you use too much text, the slide becomes too tied to one purpose to be reused for other things.
Second, practice before speaking, even if only in your head. Don't create a slide and figure you will wing it. Know what you want to convey. Again, don't worry about the words. I use the same HL7 CCD deck every time, and different words each time. There is NO script. I know what I want to convey, and the slide is my prompter, but not my script.
Can you speak for five minutes on one slide? Ten? Try it. It's a worthwhile skill to develop. Most folks spend 2-3 minutes on a slide. If you can spend 5-10 minutes and make the image on the slide memorable or tied into your story, do it.
I hate animation on slides. Why? Because as soon as I have to use the same deck in a webinar, I have to undo it all. If you animate, use a second slide that changes what was on the previous slide a bit. That will play on a webinar. I find other people's slide animations disruptive, because I don't pause to click where they would. I use their stories, but in my own words, and with my own stories interwoven.
Oh, and stories are great. Any time you can come up with a simple vignette, tale, or longer story that illustrates what you are trying to achieve, it makes your point more memorable.
I'll finish this post with a little story and a last tip. It's hard to engage with your audience standing behind a podium. If you have space, use it. One of the best IHE presentations I ever gave was up on a stage in Connecticut. Twenty feet back from the edge of the stage was the podium, and next to me was a HUGE screen showing my slides (this was before I'd ever seen TED). I started to present, and less than 30 seconds in, excused myself and moved to the front of the stage. Then I used the entire space, pretending that stage right was Connecticut and stage left was California, and moved documents from one end of the stage to the other while describing XDS. You could tell from where the audience was looking that I'd captured their attention. I also used the deck and screen, pointing and emphasizing not with a laser, but with my hands and body. If you read my first post, that's the same presentation where I got the name for this blog.
Well, back to my current Just in Time presentation.
Tuesday, January 25, 2011
EHR and CDS can improve Quality when you Plan for It
If all you do is use better tools, and you don't measure all the benefits those tools provide, of course you won't see improvements. I just finished reading through the Archives of Internal Medicine report on the use of EHRs and Clinical Decision Support to improve quality.
There are some problems with the way this particular study was designed. The intervention "Use of an EMR with Clinical Decision Support" is used to address 20 specific quality of care metrics. However, the intervention as defined is not designed to address any one, let alone all of those quality of care metrics specifically. In fact, provider use of EHRs may be motivated by several factors: improved use of clinician time, better capture of information used for reimbursement, avoidance of medication errors, use of ePrescribing or electronic billing, as well as support for higher quality of care for specific conditions. The study only addresses the last as a motivator for use.
In order to obtain quality improvements for the specific measures cited in the report, you need to plan for it when deploying an EMR in your practice. Just because an EMR supports clinical decision support doesn't mean that it provides ALL CDS possibilities all of the time. Different practices have different workflows for which the CDS capabilities of products may need to be configured. For example, some products provide documentation templates that support certain clinical guidelines and have CDS rules enabled. But these guidelines and the templates implementing them must be customized to specific care settings based on population age (e.g., pediatric/geriatric), gender (e.g., ob/gyn), formulary, et cetera. The most commonly enabled CDS feature in EHRs is med/med and med/allergy interaction avoidance. Yet there is not a single quality measure in the study that addresses that issue, even though it is the most commonly cited issue I hear about alongside the IOM report "To Err is Human".
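To make the configuration point concrete, here's a minimal sketch (in Python, with an invented two-entry interaction table and invented rule and setting names; nothing here is clinical guidance) of how a practice-level setting determines which CDS checks even fire:

# Illustrative toy example: the interaction table and configuration keys
# below are hypothetical, not drawn from any real product.
INTERACTIONS = {
    frozenset(["warfarin", "aspirin"]): "increased bleeding risk",
    frozenset(["simvastatin", "clarithromycin"]): "increased statin levels",
}

# A practice enables only the rules relevant to its workflow and population.
PRACTICE_CONFIG = {"enabled_rules": {"med_med"}}

def check_order(new_med, active_meds, config=PRACTICE_CONFIG):
    """Return alerts for a new medication order, honoring practice configuration."""
    alerts = []
    if "med_med" in config["enabled_rules"]:
        for med in active_meds:
            reason = INTERACTIONS.get(frozenset([new_med, med]))
            if reason:
                alerts.append("%s + %s: %s" % (new_med, med, reason))
    return alerts

print(check_order("aspirin", ["warfarin", "metformin"]))
# ['aspirin + warfarin: increased bleeding risk']

The point of the sketch: if the quality measures being studied depend on rules a practice never enabled or configured, the "intervention" never touched them.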
The conclusions of the report don't surprise me given its design. Nor should they surprise anyone else who reads it. If the treatment (use of EHR + CDS) is non-specific to the problem (the specific 20 quality measures), the expectation that it would have any effect is a marginal hypothesis to begin with. A better study would be to examine the effects of EHR use with CDS where implementation is designed to improve a specific quality measure. In that case, the intervention would be designed to treat the specific issue and the results would be much more relevant.
The data gathered in the NAMCS surveys is insufficient to conclude that decision support doesn't help, but right now, that's all the data the study authors had to work with. I suspect that future studies, which will include EHRs that have CDS rules enabled to support high-priority quality measures, will have better results. To better explore those results, it might be useful to start gathering metrics on which quality measures healthcare providers have enabled CDS to support.
Updated at 1/25 3:39 pm
A good analogy to this study would be a clinical trial on the use of medications to treat a specific set of conditions, where the use of any medication at all was enough to be counted in the numerator, and then measuring the impact of medication use on disease treatment.
Competitiveness and Standards
I pulled this sentence from a review I'm doing for the NIST RFI on Standards:
Standards can be used competitively to provide an advantage to one group over another, or to create a level playing field for all parties.

How do standards impact organizations and their competitiveness? The rest of this post explores this idea in today's context:
Replacing an existing standard is costly. Doing so disadvantages those who have invested in the technology that standard supports. That includes not just the implementers of those standards, but also the customers of those implementers. Backwards compatibility is a key focus of many SDOs when they upgrade or replace their existing standards. This eases the pathway to adoption of new features.
When all parties agree to the same standard, the playing field is level (or at least more level), especially when using consensus processes and governance. When the playing field is changed, it creates a disadvantage for those who have already been playing on the existing field, often to the advantage of those who have not, as the latter have no investment to replace.
Wes Rishel's law of consensus also comes into play. When the consensus group is changed, the consensus may also change.
Participating in consensus-based standards development is expensive, mostly in terms of committed effort and travel, and to some degree, in other expenses related to participation (e.g., membership fees). Having more places to participate is not necessarily better from the point of view of one who participates in multiple locations. I've also found that getting a consensus organization to change its behaviors is easier from the inside than from the outside (I've had experience with both).
On the other hand, having other organizations working outside is a good way to get new ideas into existing SDOs. If you've read The Innovator's Dilemma, you probably realize that you need small teams to think up new ideas. Doing so collaboratively with existing organizations can be beneficial in that you can take advantage of some of the existing infrastructure without being burdened by all of it, if the projects are set up appropriately. HL7 has mechanisms by which other organizations can engage that may be worth pursuing (e.g., through an MOU).
From the point of view of standards, more is less, and less is more. That goes for both organizations and standards initiatives.
Monday, January 24, 2011
Public Responses to the PCAST Report
All comments to the PCAST report will eventually be made public on the web. But until the government releases them, you still have some opportunity to read comments others have already made publicly available, if you are interested.
First is my "Netizens 2.0 Response". This really isn't a response by me alone, but a collection from all the healthcare bloggers I could find, whether I agreed with them or not.
Comments to ONC: PCAST HIT Report Becomes a Political Piñata (Vince Kuratis) -- Vince provides a list of comments from others and a summary of each. You should read his summaries. Many (but not all) of the reports he comments on are listed below.
EHRA - I had some input here.
ACR and RSNA (pdf)
IHE USA (pdf) - I also had some input here.
HL7 (pdf) - While I provided some input here, I don't see it in the final result.
Brian Ahier (Healthcare IT Evangelist)
Healthcare Information and Management Systems Society
American Hospital Association (pdf)
Federation of American Hospitals (pdf)
The Center for Democracy & Technology
Project HealthDesign
The Markle Foundation
The Clinical Groupware Collaborative (Microsoft Word)
The Society for Participatory Medicine (pdf)
Siemens (pdf)
FasterCures
CLOUD
Comments on the PCAST Report (Margalit Gur-Arie)
-- Keith
P.S. Once again, Vince beats Google and Bing for precision (relevance) and recall within the first five pages
Friday, January 21, 2011
This is Connectathon
By the time most of you read this, the connectathon is largely over. If you haven't finished your testing tonight, you'd better have only one or two more tests to get through tomorrow. Otherwise, you might as well be outta here -- which could be the only silver lining in this cloud, given the travel delays anticipated today.
Think of what follows below as "Director's Commentary".
0:00 - 0:07 As you come in to this video, you'll see someone on the far right looking for the person he expects to be sitting right here.
0:04 - 0:12 Moving in from the right are engineers from two different products working out what they need to do to make it work.
0:12 - 0:18 In the middle background you can see this one fellow in quite a hurry to see what's going on with his partner over there.
0:16 - 0:23 And then we have this guy who is NOT running to the bathroom (the bathrooms are behind us in this scene). He must have even more to get done this week.
0:25 - 0:29 Striped Shirt: "If you do this it will work." Blue Hooded Sweatshirt: "Ok! I got it now."
0:28 - 0:31 He's reading the spec. Can I keep him?
0:33 - 0:34 Engineer to carefully listening monitor ... So what we do to protect the data is...
0:36 - 0:40 That guy taking notes doesn't usually work with the other two.
0:41 - 0:43 Team Programming
0:48 - 0:50 Hmm, what is going on here?
0:51 - 0:52 Lotsa Interop happens inside these boxes. They use LBL (little blinking lights) technology.
0:54 - 1:00 On the phone back to the office. "What I need you to do is ..."
For some of us, it is goodbye for another year; for others, see you in committee in Toronto in two weeks; and for others, 'til we meet in Pisa in a few months.
For all of us, it is Connectathon, and to all who participated, "Good Luck! And see you again next time."
P.S. Thanks to all my birthday well-wishers and to whoever had them send a cake to my room.
Thursday, January 20, 2011
Information Session on Nationwide Health Information Network Specifications - January 26, 2011
Another press release crossing my desk...
Information Session: Nationwide Health Information Network Specifications
Please join members of the Nationwide Health Information Network technical team for a special information session on implementing Nationwide Health Information Network specifications. This orientation will bring together technical consultants, entities that have implemented the specifications, and organizations just starting the process of implementation in an informative exchange of best practices and a robust question and answer session.
Updated January 24th
When: Wednesday, January 26, 2011; 3:00-4:00 p.m. EST
Dial-in: 1-888-282-9568
Meeting ID: 2523818
For more information, email bechang@deloitte.com
NHIN University: NHIN 205 - The NHIN Value Proposition - Monday, Jan. 24 @ 2pm
NHIN 205 – Revisiting the NHIN Value Proposition
Come kick off a new year for National eHealth Collaborative's NHIN University with a look at the many ways in which participating in the Nationwide Health Information Network can improve American health and healthcare. NHIN 205 – Revisiting the NHIN Value Proposition will feature representatives from organizations currently participating in the Nationwide Health Information Network Exchange to deliver updates on the progress of the Exchange program and discuss the value generated by live health information exchange. The faculty will highlight how participation in the Exchange may be relevant to your organization's short and long term information exchange needs.
NHIN 205 – Revisiting the NHIN Value Proposition
DATE: Monday, January 24, 2011 (add to your calendar)
TIME: 2:00-4:00 pm ET
FACULTY:
- Michael Matthews – Chair, NHIN Exchange Coordinating Committee; CEO, MedVirginia
- John Mattison, MD – Chief Medical Information Officer, Kaiser Permanente
- Kitt Winter – Director, Division of Health Information Technology, Office of the Chief Information Officer, Social Security Administration
AUDIOCONFERENCE: (866) 699-3239 or (408) 792-6300
(Please join the event with a computer system first and follow the audio instructions on the screen.)
ACCESS/EVENT CODE: 663 168 203
ATTENDEE ID: You will receive this number when you join the event first with a computer connection.
READ THE NHIN 205 COURSE DESCRIPTION AND LEARNING OBJECTIVES
Review the full Fall Semester Course Catalog: www.NationaleHealth.org/NHIN-U
Did you miss any NHIN University classes? Recordings and transcripts are available here.
Comments Sought on Potential Stage 2 Meaningful Use Objectives
Sorry, this one has been sitting in my outbox for a week or so...
Comments Sought on Potential Stage 2 Meaningful Use Objectives
The Health IT Policy Committee is seeking comments on potential stage 2 objectives from the public on a draft document of preliminary recommendations it has developed. See recommendations and directions for submitting comments [PDF - 188 KB]. The Policy Committee will consider all of these comments in forming the stage 2 meaningful use recommendations it presents this summer to the U.S. Department of Health & Human Services.
The period for request for comments will be 45 days, ending February 25th.
Comments can be submitted electronically to regulations.gov.
Wednesday, January 19, 2011
Two Rules for Connectathon
Two rules of software development:
1. First make it work, then make it right.
2. If it ain't broke, don't fix it.
I've written about these rules before. For connectathon they apply in spades.
Violate rule number 2 at your own risk. If you decide that you need to do a clean install just before you get here, make sure you have it all. Sometimes doing unnecessary work to fix things does more damage than living with what you had already tested.
On rule number 1, if you are having a problem, work first on getting it fixed to the point that it passes. Make the fix elegant and maintainable when you have more time (and keep good notes).
There's another thing about rule #1. If you cannot get it to work with partner A, don't spend hours trying to work it out. Find another partner to test with to see if you can just get it to work. Don't let a blocking problem keep you from completing other activity.
Finally, one last note: If you cannot get it to work with A + B + C all at once, simplify. Try to make it work with just one of them. Then add the other, then put it all together. Don't try ATNA TLS + ATNA Logging + PIX Query + XDS Provide and Register as your very first test.
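To illustrate the "one thing at a time" idea, here's a minimal sketch (Python; the partner host, port, and CA bundle file name are hypothetical placeholders, not real connectathon assignments) of testing just the secure channel by itself before layering anything else on top of it:

import socket
import ssl

HOST, PORT = "partner-a.example.org", 8443   # hypothetical test partner

def check_tls_alone(host, port, ca_bundle="connectathon-ca.pem"):
    """Step one: can we even complete a TLS handshake with this partner?"""
    # ca_bundle stands in for whatever test CA the event issues.
    ctx = ssl.create_default_context(cafile=ca_bundle)
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g., 'TLSv1.2' if the handshake succeeds

print("Handshake OK:", check_tls_alone(HOST, PORT))

Once the handshake works in isolation, add the audit message, then the PIX query, then the document submission.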
Tracking your progress at Connectathon
After over eight (!) years participating at the IHE Connectathon, I've developed a number of techniques to determine how on track I or teams I'm supporting are. These are really simple metrics.
Day one:
Did you come with your stuff and are you on the network by 9:00? Good
Are you still having network problems at 10:00? Bad
At the end of the day, have you done ALL your no-peer testing? Good
Are they all verified? Excellent.
Have you made progress on peer-to-peer testing? Good
Have you made 0 progress? Really bad.
NOTE: In past years, connectathon monitors have been told on Wednesday to focus priorities on peer-to-peer testing, because if you haven't finished your no-peer tests by then, likely you will fail.
Day two:
At the end of the day, do you feel like you will finish peer-to-peer tomorrow? Good
Have you nearly finished or finished peer-to-peer testing on any profile? Excellent.
Do you still have more than half of your peer-to-peer testing to finish? Bad
Still not connected? If you cannot connect tomorrow, better check for an early flight out.
Day Three:
At the end of the day, are you done peer-to-peer testing? Good.
Are they all verified? Excellent.
Are you nearly done? Don't panic YET, but get it done first thing, or stay late if possible.
Still not connected? Go home.
Day Four:
Did you make progress on group tests? Good.
Did you finish group tests? Excellent.
Still not connected? Why are you here?
All done? Excellent. Now is the time to go for stretch goals or help colleagues.
Day Five:
Ready to be told you can leave? Excellent.
Nearly done? OK, but scramble.
If, along the way you discover that you WON'T be able to meet some of the criteria for a profile, and you don't have another reason to keep testing it, DROP it, or at least don't waste any more time on it. Dropping a profile is a favor to others who may otherwise try to test with you.
This morning's connectathon lesson is brought to you by the Llama.
The Connectathon Conference
It's a bit of a different experience being an attendee at the Connectathon Conference than it is being on the Connectathon floor. For one, since I was speaking, I wore a suit. Most of the people attending the conference don't know me, and I figure it makes it easier for them to accept me as an expert if I don't get too much in their face about it. For another, it's an opportunity to hear about how people are using IHE in the real world.
I missed Lisa Spellman's introduction, but I caught Elliot Sloane's (IHE International Board Chair) presentation. Elliot talked about the history of IHE (going back to 1998), its growth, and its newest (and oldest) international member (IHE USA - where he is also on the board). He also talked about the growth of connectathon participation this year, where we have over 100 companies testing more than 55 IHE profiles. Last year it was more than 80 companies and 55 profiles. That's better than 20% growth. At a dinner later in the evening, I was talking about IHE's growth since I first showed up on the scene. It's pretty darn impressive. Since 2001, the North American Connectathon has grown in participants at an average rate of better than 12% a year, and when I look at the total figures (Europe, Asia and other Connectathons), it is growing even faster, better than 15% a year. I wish I was getting that kind of return on my investments.
Here's a chart of the data I dug up from the Connectathon results on connectathon participation.
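For the curious, the growth arithmetic is just a compound annual growth rate. Here's a quick sketch with made-up endpoint counts (the real figures are in the chart above):

def cagr(start, end, years):
    """Compound annual growth rate between two counts."""
    return (end / start) ** (1.0 / years) - 1

# Hypothetical example: 30 participating companies in 2001, 100 in 2011.
print("%.1f%% per year" % (100 * cagr(30, 100, 10)))  # about 12.8% per year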
Dr. Doug Fridsma, Director of Standards and Interoperability at ONC, joined IHE at the Connectathon Conference as the keynote speaker. Doug ran the audience through his current vision of the Standards and Interoperability Framework. He highlighted some key points. One was that specifications should be crisp, include everything you need and nothing more, and be extensible. He also highlighted the three recently announced initiatives, one of which, the HL7/IHE/Health Story Consolidation project, includes participation from 3 IHE co-chairs and one board member, along with several other IHE members. I also introduced several members of the IHE Lab workgroup to Doug, and I believe we convinced them to participate in the Laboratory Interface Improvement project.
The Consolidation project is intended to address the biggest issue HITSP was never funded or contracted to address. That is the creation of a one-stop specification for the implementation of CDA documents. It goes just a bit further in that not only is there a specification, but also a UML model. That's critical for future IHE/HL7 and ONC collaboration efforts. If we all publish the necessary data in the UML model in a standard format like XMI (I just picked that one out of a hat), implementers of CDA would be able to:
1. Use off-the-shelf tools to create software for reading and writing CDA documents.
2. Create implementation guides based on the CDA standard.
3. Share implementation guide data for use by others.
It restores what we tried to accomplish in the first years of IHE PCC by creating the content on the wiki, but in a much better way. The MDHT CDA Tools project produces DITA output. That can be transferred to proprietary formats like Microsoft Word, or standard formats like PDF, DocBook or even, heaven forbid, well-linked XHTML. I'm eager to see what we can do with the tool, and hopeful that IHE PCC can begin to use it to develop profiles. IHE Radiology is looking at MDHT as well, because they have some CDA templates to create this year also.
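As a tiny proof of the XMI idea above, here's a sketch (Python; the file name is a hypothetical placeholder, and a real MDHT export would differ in structure) of scraping UML class names out of an XMI file with nothing but the standard library:

import xml.etree.ElementTree as ET

tree = ET.parse("cda-templates.xmi")  # hypothetical XMI export of the model
for elem in tree.iter():
    # XMI 2.x marks UML elements with a namespaced xmi:type attribute.
    xmi_type = next((v for k, v in elem.attrib.items() if k.endswith("}type")), None)
    if xmi_type == "uml:Class" and elem.get("name"):
        print(elem.get("name"))  # one candidate class/template per line

If implementation guide data really were shared this way, tooling like this is all it would take to start generating readers, writers, and documentation from the same model.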
Mike Nusbaum gave his update on IHE Canada and how it now fits into the Canadian Standards Collaborative (until recently, IHE Canada participated indirectly, but wasn't part of the collaborative).
Lee Jones (formerly of the former ONC and HITSP) gave an update on the current health IT environment. The most memorable phrase from his slides was "Meaningful Frenzy", which pretty much describes my life for the last year and the next two. He also pointed out that operational challenges seem to be more daunting than the technology ones that the PCAST report tries to address. He notes that payers haven't until recently been engaged in mainstream interoperability, but that recent acquisitions of HIT by payers may be signaling a change.
After lunch, we heard from Dr. Keith Dreyer, who is doing some truly cool things with image sharing in EHR systems at MGH. IHE has recently developed the Image Enabled Office profile, which looks very similar to the image integration capabilities in Mass General's LMR. Dr. Dreyer also reported some amazing reductions in the use of high-cost imaging procedures using CPOE and Clinical Decision Support.
Next I gave a workshop on creating IHE profiles, and again, we came up with a winner and three other proposals that will be forwarded to other IHE domains. The winning submission, which I will champion at the next opportunity in PCC, is enabling information exchange from pre-surgical information systems to the hospital HIS system. The American Dental Association is sponsoring a Dentistry domain in IHE. Several new IHE Dentistry members joined in this meeting and developed their very first profile proposal. The first meeting is later this week, so they'll already have something to discuss. I didn't plan it that way and neither did they, but we've already ensured that domain will hit the ground running. I've got another idea for IHE Eye Care on dealing with umpteen imaging devices, and I'll forward that to the cochairs. Finally, my personal favorite will get a free ride simply because I like it. The idea is to extend the Request for Clinical Guidance (pdf) profile to enable providing feedback to a CDS service. The key idea here is that after the service provides some alternatives, the EHR can send back a response indicating what was done. That will enable the supplier of the decision support to do a bunch of cool stuff, most notably benchmarking and metrics, but also use that feedback to improve CDS.
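To make the feedback idea concrete, here's a sketch of what the EHR's return message might carry. The field names and values are invented for illustration; the actual profile extension would define the real message format:

import json

# Hypothetical feedback payload from the EHR back to the CDS service.
feedback = {
    "guidanceRequestId": "req-12345",   # ties back to the original request
    "recommendationsOffered": ["order-lipid-panel", "start-statin"],
    "actionTaken": "start-statin",      # what the clinician actually did
    "disposition": "accepted",          # vs. "overridden", "deferred", ...
}
print(json.dumps(feedback, indent=2))

Even a payload this simple would let the CDS supplier compute acceptance rates per recommendation, which is exactly the benchmarking the proposal is after.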
After all that, we gave the attendees a tour of the Connectathon floor, and Doug got a personal tour of the floor led by several IHE luminaries. I tagged along for fun.
All in all, it was a good day. I did manage to check in with my teams today, and they seem to be mostly on track.
Tuesday, January 18, 2011
The Knack is not the only skill you need
You have all likely seen this video.
I'm in a room filled with people who have the Knack. You know the type. Give them a great big box of stuff that requires assembly, and they immediately begin to put it together without reading the instructions.
The problem is, at the end, when there is one screw left over, they don't know what they've missed. So now they begin going over the instructions in detail to figure it out. And they eventually do figure it out, because they know that if they don't, something bad could happen.
Remember, I'm in a room full of these people. Their jobs this week are to put together interoperable solutions. The screws that are left over result in failed tests.
I cannot tell you how many times over the past eight years I've read the instructions to someone else, or the specification, or how many times my colleagues have done the same thing. It usually results in a polite "Oh...thank you" and they resolve their issue to move on to the next problem.
The ones I want to keep are the ones who learn the most valuable lesson of this interaction. They start to read the instructions first.
After all, if the solution was so obvious, IHE would not have been asked to solve the problem in the first place.
So, my best advice for this week: Read.
Monday, January 17, 2011
A Virtual Connectathon
Those of you who regularly follow me know where I am this week... the IHE North American Connectathon. I think that Connectathon is one of the coolest events the healthcare industry puts on, and it's because of the collegial atmosphere and get-it-done attitude that permeates the whole event. I'll be live-posting on Connectathon more this week, but I wanted to start off the day with an interesting and related exploration. A couple of days ago, Doug Fridsma (the keynote speaker at the Connectathon conference on Tuesday) commented to me that this is something that should also be available virtually and year-round (he attended the HL7 Working Group meeting which I was also at last week).
Help Needed: I'm gathering all the web commentary on the PCAST report I can find for a submission to the ONC RFI. If you have links that may not be what I have already seen, tweet me or send me an e-mail with them.
I've been thinking about this topic a little bit, and wondering what a virtual connectathon would look like. There are a few challenges to overcome to make the connectathon a virtual event, and the first of these is the intensity of the atmosphere. This year's North American Connectathon includes more than 150 systems (up 25% over last year), which will execute tens of thousands of tests over the course of one week, all reviewed by a team of more than 50 people. According to Bill Majurski, about 70% of the registrants will be using XDS in some form.
I recommend that product teams send at least two people to Connectathon per product being tested, which gives a fixed cost of about two weeks of effort. You can do it with one (I've been there and done that), but that's a mountain of effort to put on one person's plate. Teams also put in about 1-3 weeks of effort pretesting each profile (not including development time). Most companies test more than one profile, and can often take advantage of overlaps in product requirements to reduce the aggregated time. Even so, it's still a large time commitment. Spreading that effort out in a virtual event over the course of a year reduces the intensity.
But the intensity is one of the reasons why connectathons are so valuable. Participants have a week to succeed. To do so, they MUST work with their partners, sometimes deep into the night. There is no other choice, and this necessity makes for partnerships unheard of in the real world. How, in a virtual event, can we ensure this kind of participation? Outside this room, these people are often stiff competitors, but inside, they are your testing partners. You work face to face with your partners to succeed at this event, sometimes shouting down the row, Skyping, or talking on cell phones, while making code changes live. A success is often celebrated with a beer or dinner later with these same people.
So a virtual connectathon has to have another purpose, and another way to ensure success, than the event I know and love. It's pretty hard to share a beer virtually, and without the deadline, impossible to get that kind of coordinated effort among competitors. Some other possibilities come to mind. One is to have shorter, more frequent regional events. We've done that for several years. IHE members attended a number of different events and demonstrations, some of which included Connectathon-like testing: the VITL Summit, the eHealth Connecticut Demonstration (2008), and PHIN's annual conference. While it adds value, it's not what I think Doug is asking for.
So what would a virtual, year-round testing event look like? Who are the stakeholders? What are the benefits? What are the costs? Who would pay for them?
Stakeholders and Benefits
- Governments
- Readily available testing of healthcare IT that meets regional requirements.
- HIE Organizations
- An opportunity to test their systems with new products as they become available.
- Healthcare Providers
- Products that have been tested more recently and frequently than annually. The ability to test homegrown solutions and integrations.
- Healthcare IT Vendors
- The ability to test more frequently than annually, the ability to test at much lower cost and intensity of effort, the ability to test with partners that you didn't have the opportunity to test with at connectathon.
There are a couple of things that go into the cost equation. You probably need a virtual private network to set up a "Connectathon" like network environment. You need someone to manage this, assign access, et cetera. You also need monitors / test proctors and someone to manage them. Connectathon itself requires a team of 3-4 people to manage all that, plus a team of about 50 monitors. You could probably get by with a team of 1-2 to manage the virtual environment and manage test results to start off with, but you still need a larger team to proctor the tests.
The IHE monitoring team is made up of volunteers. The concentration of time at Connectathon is what makes it possible for many volunteers to participate. Some of the monitors are friends of friends of IHE who got dragged in once and keep coming back, others are IHE committee chairs or participants, and a few others are contractors working on large projects (e.g., HITSP) who come as part of their work. Most come to Connectathon because they like the atmosphere, support the work being done, and they also get free travel to Chicago in January. Some are giving a week of their lives for nothing more than the experience on the Connectathon floor, using up vacation time to boot. Many are rather skilled IT people, some with very specialized skills.
Something else has to be done to provide value for monitors other than the free travel because there is no free travel once you go virtual, and you've also radically changed the atmosphere. Because of the diversity of experience, you'd probably need to pay several skilled individuals to do the job, and you'd need to invest in their skills as well. My guess is that to do as much work as the 50 volunteers do in one week over the course of a year, you'd need to hire 3-6 part-time contractors to do the same work, and you'd also have to spend some time training them, which might also include participation in new IHE work.
I'd ballpark that a virtual connectathon would have a budget of anywhere from a half-million to a million dollars depending upon how you got the monitors.
Paying for It
So, just to make it simple, let's say that the cost was a cool million dollars. Cheap at twice the price. How could that be funded? At $4000 per system you'd need to get 250 systems to participate in the virtual connectathon. That's more systems than participate in the annual connectathon, and an enrollment likely not to be reached for some time if ever. I could see maybe 100 systems after two years if the right incentives were present to participate, and in the first year, 25 - 50 systems if you were lucky.
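To make the enrollment sensitivity concrete, here's a quick back-of-the-envelope sketch in javascript using only the numbers above; the enrollment levels are just the guesses from this post, nothing more:

// Per-system fee needed to cover a $1M budget at various enrollment levels.
const budget = 1000000;
for (const systems of [250, 100, 50, 25]) {
  console.log(systems + ' systems -> $' + (budget / systems).toLocaleString('en-US') + ' per system');
}
// 250 systems -> $4,000 per system (the break-even enrollment at a $4000 fee)
// 100 systems -> $10,000 per system
// 50 systems  -> $20,000 per system
// 25 systems  -> $40,000 per system

In other words, at realistic first-year enrollment the fee would have to be five to ten times higher, which is why finding ways to subsidize the effort matters.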
We could maybe cut the networking costs and staffing requirements by getting the state HIEs to supply resources, equipment, et cetera to the effort. Maybe the RECs, which have already developed some requirements of their own, could support testing efforts by supplying monitoring and management skill sets in exchange for having a testing environment and audience they could use for their own purposes. Educational institutions could support testing as well, exchanging student assistance with testing for the training that the virtual connectathon experience provided and the educational credits that the institution offered. Hmm, those two weren't even in my original list of stakeholders.
We still lose the intensity of the live event, but that could be supplied (in smaller doses) by other special-purpose, time-driven events. State HIE X could sponsor an event designed to test conformance of a variety of systems to an immunization message. REC Y could sponsor an event designed to test conformance of a variety of systems to their own region's initiatives. Sponsorship of an event might have its own costs borne by the sponsor, with possible additional fees assessed to event participants, but those fees would have to be reasonable.
I could certainly see the advantage to vendors to be able to have a system connected year round where these sorts of tests could be routinely performed. It could eliminate a lot of duplicated effort around the country, and could enable many activities not possible in current Connectathons. Some of those same resources could be used to support other opportunities in education and innovation later, but I wouldn't want to get too defocused in the first year or two.
It's an idea worth pursuing further. If I were on the board of IHE USA, I'd be thinking long and hard about this one. We could make this idea work, and it could shortly become self-sustaining. It wouldn't be the same as Connectathon, but nothing virtual ever would be. If you happen to be at Connectathon this week and have read this post, I'd be interested in getting your feedback.
Sunday, January 16, 2011
More of Robin's Eggs
Robin's at it again. Here are the bookmarked final rules from 2010 and January 2011, which are available in Google Docs for viewing or download:
- ONC Standards Final Rule
- ONC Permanent Certification Program Final Rule
- CMS Meaningful Use Incentives Final Rule
And here's a worksheet you can use for commenting on Stage 2 objectives.
Netizens 2.0 Response to PCAST
Government 2.0 (of the USA) meet Citizens 2.0 (of the World)
There is an amazing capacity among us humans to hear what we think is important, and not hear at all what we don't care about. We respond to what makes us feel strongly and not at all to that which doesn't. The PCAST report was something that many of us who blog responded to in these terms. Some of us who responded may not be interested in formulating a direct response to ONC. After all, this is a US issue, right? Others of us will revise our initial visceral responses into much more finely tuned words and phrases better suited to use with policy makers.
I've gathered up commentary from the Blogosphere on the PCAST Report to put together what I call the "Netizens 2.0" response. The only editing I've done is to make it readable as a PDF document (which means changing background and text colors) and to remove unnecessary multimedia (YouTube copies of the PCAST video, the Presidential logo, copies of the PCAST report, and pictures of the commentary authors). I'm also including the comments on the comments. I figure that there's as much or more expertise on the net that has already commented on the report as went into the report itself, and that expertise is already much more focused on healthcare, so it is vital reading.
This response WON'T have the finely edited turn of phrase that is typical of formal public responses to federal RFIs. It's rough, and even the very slight reformatting I did do wasn't quite as thorough as I would have liked, given the hours in which I had to fit this in. But it will carry the cogent thoughts of very skilled professionals who also write well to a much broader audience.
Here is the list of postings that I included. If I missed your favorite, my apologies.
- Summarizing Early PCAST HIT Critiques: “Brilliant, but they didn’t do all their technical homework.”
- Why Context and Relevance
- My Review of the PCAST Report on HealthIT
- The Language of HealthIT
- Recasting e-health in the USA
- Realizing the Full Potential of Health IT to Improve Healthcare for Americans: Yet Another Path Forward?
- The Health IT report is very good; some opinionated suggestions
- Imaging and the PCAST (President's Council of Advisors on Science and Technology) Report
- The Spirit of PCAST
- Don’t re-invent the (clinical content) wheel…
- Toward a Universal Exchange Language for Healthcare
- PCAST and Clarke’s Third Law
- PCAST Report Comment: Looking Toward Truly Patient-Centered Health Data
- Realizing the Full Potential of Health IT to Improve Healthcare for Americans: The Path Forward
- PCAST Report on Health Information Technology
- Thoughts on the PCAST Report
- Understanding the PCAST Report
- IT Everything
All told it's about 65 pages of feedback. Happy reading.
-- Keith
P.S. Contrary to what the PCAST reports, modern search engine technology provides inadequate recall and precision on queries. I had to use both Google and Bing to find some of these reports, and even then, @VinceKuraitis did a much better job than either. Thanks Vince, yours was the first on the list.
Friday, January 14, 2011
IHE Workshop Results
Today I facilitated an IHE Workshop for the third time. This is always an interesting program. The first half of this half-day program is spent describing how IHE is structured, what it does, why, the benefits, how the processes work, et cetera.
The second half is done in five stages.
Step 0: We review the IHE profile proposal template.
Step 1: This step includes brainstorming interoperability problems in healthcare. We simply come up with an unconstrained list of problems that IHE might be able to solve.
Step 2: The next step divides the room into teams of 3-5 people and has each team select a particular problem to solve. Each team has to develop a profile proposal (short form), which includes four things:
A) Problem and Value Statement
B) Use Case
i) Current State
ii) Desired future state
C) Available Standards
D) Systems involved
Step 3: Teams present their proposal in 5-10 minutes, and answer any questions on them from the class. I also gently critique the proposals, explaining what might be done to make them a little better.
Step 4: The class votes on the proposals. Because of the timing, the class had dwindled a bit from 12 to 6 people (it was the last class on the last day of HL7). So, to keep the vote fair, I didn't let teams vote for their own proposal.
One proposal today got overwhelming support, and it was from the ophthalmologist who worked solo with a bit of help from me. So, I'll be flying that one past some friends in IHE Eyecare to see if I can find a supporter of it to present at the next opportunity.
The problem described was lack of consistency in data collection on cataract surgeries. This is apparently a very common surgery (in general, as well as in that specialty). The proposal was to develop either an openEHR Archetype or a CDA Document template that could be used to gather and report on the data pre- and post-surgery, with use of SNOMED CT or LOINC vocabulary, and restriction to appropriate units to report visual acuity. 20/20 in the US translates into 6/6 here in Australia, but there are also log scales.
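As an aside, the conversion between Snellen notation and the log scale is mechanical. A minimal sketch in javascript (my own illustration, not part of the proposal):

// Snellen acuity to logMAR: logMAR = log10(denominator / numerator).
// 20/20 (US) and 6/6 (metric) are the same ratio, so both give logMAR 0.
function snellenToLogMAR(numerator, denominator) {
  return Math.log10(denominator / numerator);
}
console.log(snellenToLogMAR(20, 20)); // 0
console.log(snellenToLogMAR(6, 6));   // 0
console.log(snellenToLogMAR(20, 40)); // ~0.301 (worse acuity, larger logMAR)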
I'll get the complete proposal from my student in my e-mail. We also looked at remote (home) monitoring, but that one didn't "win the prize". It had some valuable points also, and was well done, it just didn't have the same focus as the Cataract Surgery one. So, I'll take the output from that group and forward it to some folks in PCD next week also, and make sure that the team at least gets feedback on what is available.
The last proposal was for ePrescribing, and had participation from AU, NZ, and CZ. The challenge here is that there really are NO common standards available for electronic prescribing across these regions, so the proposal was not terribly feasible. Even so, I promised to point them to the work being done by epSOS as a possible starting point.
Everybody gained something.
Next week I'll be doing something similar, but with much more limited time. Students will identify problems, and use existing IHE profiles they've had described to them earlier in the day to design a solution to an interoperability problem. I won't have to provide as much background for them because they'll have been at the Connectathon conference and will have also already toured the floor.
-- Keith
Thursday, January 13, 2011
An overdue Ad Hoc Motorcycle Guy Harley Award ...
The Ad Hoc Harleys are headed into their second year. They were initiated on January 20th of 2010 (my birthday). The whole point of the Ad Hoc Harley is to recognize the contributions of someone with regard to standards who would otherwise be an unsung hero.
This particular award goes to one of those heroes, whom I first met in 2003 at an educational meeting in Chicago. Since then I've watched him over the years invest tremendous amounts of time and effort into ensuring that more than 1,000 products work with each other, using more than 200 different specifications. Unlike many others who get recognized here, this is part of his job, but he does that job well, and as a result of his efforts millions of lives have been made better by more interoperable healthcare products. It is not unusual for a computer to review hundreds of thousands of complex transactions, but it is the rare individual who can claim some responsibility for the same. This person can, and has done so for more than a decade.
In his role as "Mother" he has raised up a number of children through the complex process of ensuring systems work together. He is a bit stern, and expects his children to grow up rather quickly, but he also manages to ensure that they do. For his efforts, I award the next Ad Hoc Harley to:
This certifies that
Steve Moore of the Mallinckrodt Institute of Radiology
Has hereby been recognized for outstanding contributions to the forwarding of Healthcare Standardization
Congratulations Steve, and here's to another decade of testing fun. See you in a couple of days.
Wednesday, January 12, 2011
Convergence
Today's Q2 meeting in Templates with Structured Documents, Patient Care, Vocabulary and Tooling was a continuation of several discussions (of which my post on Triplets is one) that have occurred over the week regarding templates, detailed clinical models, and archetypes. The focus of the discussion was the refresh of the HL7 Templates Draft Standard for Trial Use (DSTU).
The Templates DSTU was both a great success and a great failure for HL7. It was successful in its ability to clarify what an HL7 Template is. There are now over 1,000 templates conforming to that definition, used in national and international programs all over the world, including the US, the EU, and Asia. In fact, I've participated in the development of a set of more than 100 templates through IHE and HL7 that have been reused in national programs in those regions. It's even the same set in those regions, which provides a remarkable amount of consistency in the clinical information found in CDA documents.
The failure of the Templates DSTU is a failure not in the details of what a template is, but rather in the information that is used to keep track of it, locate it, vet it, et cetera. The XML representation of that metadata has gone largely unused. When we reviewed the Templates DSTU for the Template Registry Requirements project, we found it lacking in several places.
We discussed working on some common definitions that would help us parse all of the terms that I mentioned in Triplets, and to define the concepts, identify super concepts, and describe the various differences. So you could imagine that we would have the concept of an Archetype, and that would be specialized to describe HL7 based archetypes and openEHR based archetypes. Similarly with Templates and/or Detailed Clinical Models.
The audience in the room was generally supportive of this idea, and there seems to be a general consensus that it would actually have value not just to HL7 but also to openEHR. Apparently the relationship between the two communities has shifted recently, spawned by nobody knows what.
A point which I made to the room: we, sitting in the room, are the people who care about these distinctions. To the average healthcare provider, they are meaningless. It is to ALL our benefit to have a common way to describe these things that we ALL understand, and to promote its use, because no matter what we call it, doctors just want it all to work together.
In that vein, I learned of a remarkable piece of work done by Heath Frankel. He created a solution which took information from an HL7 Version 2 message, stored it in an openEHR-based repository, mapped from the openEHR structure to the IHE Referral Summary, and then submitted it to an XDS registry. I've known about the technical feasibility of this for some time. I wrote a few months ago on converting from Version 2 to CDA. I've also generated CDA from several EHR database structures from several different vendors' systems in my career.
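To make the shape of that pipeline concrete, here's a minimal sketch; every function name here is a stub I'm inventing for illustration (none is a real library API), and the real work Heath did obviously involves far more mapping detail at each stage:

// Hypothetical skeleton of the V2 -> openEHR -> CDA -> XDS pipeline described above.
const parseV2Message = (msg) => ({ segments: msg.split('\r') });            // HL7 V2 -> structured data
const storeInOpenEHRRepository = (data) => ({ compositionId: 'c1', data }); // persist as a composition
const mapToIHEReferralSummary = (comp) =>                                   // openEHR -> CDA referral summary
  '<ClinicalDocument><!-- referral summary from composition ' + comp.compositionId + ' --></ClinicalDocument>';
const submitToXDSRegistry = (cda) => ({ status: 'Success', bytes: cda.length }); // XDS provide & register

// The pipeline itself: V2 message in, XDS submission out.
function shareReferral(hl7v2Message) {
  const data = parseV2Message(hl7v2Message);
  const composition = storeInOpenEHRRepository(data);
  const cda = mapToIHEReferralSummary(composition);
  return submitToXDSRegistry(cda);
}

console.log(shareReferral('MSH|^~\\&|LAB|MGH\rPID|1||12345'));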
I think about combining this idea with the notions that Robert Warden has about Neutral Mapping. I can envision a world where there exists a neutral mapping between the openEHR Information Model and the HL7 Reference Information Model. I believe this to be readily manageable for a core subset that has yet to be determined. If this were to exist, I can see ways in which existing openEHR tools could be used to benefit the development of HL7 Detailed Clinical Models and HL7 and IHE Templates that clinicians could use.
I've recently seen some demonstrations of tools which are already in use in this space, and after sitting down and putting all the pieces together, my brain just exploded.
I'm sure most of you by now are wondering if I'm headed to the Dark Side of openEHR. Fear not, I still remain a huge fan and strong supporter of HL7 and IHE. But what I see here are some possibilities and synergies in convergence that would greatly benefit all of healthcare IT. The real challenge will be whether there is a way to truly take the best of both worlds forward.
Expressions in HL7 Data Types R2 and Computable Clinical Guidelines
The HL7 Structured Documents Workgroup met with Clinical Decision Support today to discuss some of the issues with the HL7 Quality Measurement Format that would need to be addressed in the next release. Bob (Dolin) gave a quick update on the NQF status. Apparently the Meaningful Use measures have all been converted to HQMF, including all the value sets in all the Meaningful Use specified vocabularies (ICD-9-CM and SNOMED CT) and are in CMS hands. We heard that there may be some sort of comment / vetting process as a later phase.
The issue that Bob wanted to address is the way to represent an expression in a computational language in a measure. HL7 Data Types Release 2 includes the EXPR data type. This data type is an extension of any data type T, adding one new component: expression. An example representation is shown below:
<value xsi:type='EXPR_INT'>
  <expression mediaType='application/javascript'>
    foo.value.value - bar.value.value
  </expression>
</value>
Now, by itself, this isn't completely useful, but when you put it inside an Observation that you are defining, the expression can be used to define how the value is computed. There are a couple of other things that you need. One of these is a binding from the variables foo and bar above to specific classes.
The HL7 RIM has a way to create bindings for the derivationExpr component of the act class, but hasn't defined how to create bindings for EXPR_T. I'd stick with using the same mechanism for derivationExpr. What could be done is something like the following:
<observation moodCode='DEF'>
  <value xsi:type='EXPR_INT'>
    <expression mediaType='application/javascript'>
      foo.value.value - bar.value.value
    </expression>
  </value>
  <actRelationship typeCode='DRIV'>
    <localVariableName>foo</localVariableName>
    <observation>
      <value value='1'/>
    </observation>
  </actRelationship>
  <actRelationship typeCode='DRIV'>
    <localVariableName>bar</localVariableName>
    <observation>
      <value value='2'/>
    </observation>
  </actRelationship>
</observation>
What this essentially says is that the outermost definition of the observation has a value. That value is computed from information contained in two other named classes: foo and bar. These classes are then defined to be local variables representing the named observation classes.
So, why is this cool? Well, it's something only a geek could love. What it does is provide a mechanism whereby we can bind an HL7 class represented in XML to a programming language like javascript (Bob wanted to use GELLO, but I can hand you a book today on javascript if you really need it, and you can probably already figure out how to access the classes).
The next piece of this is that it allows certain computations to be defined based on the contents of other stuff.
What is missing from this is the binding rules that tell us how to evaluate the named portion of the expression. I cheated by using the binding rules for derivationExpr, which are very simple. Those rules state that the named variables are contained within derived acts. I could have used other binding rules, e.g., that the named variables are contained within some other set of named variables.
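To see how a consumer might act on those bindings, here's a minimal sketch, entirely my own illustration: it assumes the XML example above has already been parsed into a plain object, collects the localVariableName bindings into a scope, and evaluates the javascript expression against it. A real implementation would bind full RIM classes, not these bare value stubs.

// Evaluate an EXPR_INT against derivationExpr-style bindings (illustrative only).
const definition = {
  expression: 'foo.value.value - bar.value.value',
  bindings: [
    { localVariableName: 'foo', observation: { value: { value: 1 } } },
    { localVariableName: 'bar', observation: { value: { value: 2 } } },
  ],
};

function evaluateExpression(def) {
  // Build a scope: each named variable resolves to its bound observation.
  const scope = Object.fromEntries(
    def.bindings.map(b => [b.localVariableName, b.observation]));
  // Evaluate the expression with the scope's names in view.
  const fn = new Function(...Object.keys(scope), 'return ' + def.expression + ';');
  return fn(...Object.values(scope));
}

console.log(evaluateExpression(definition)); // -1, i.e., 1 - 2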
What I like about this is that it gives me the missing pieces needed to define a Structured Document for a Clinical Guideline. Those two pieces are what I call level 3 and level 4 of clinical guidelines.
The structured clinical guideline in my head has four levels. Level 1 contains a header comprised of metadata used to allow the guideline to be found in a repository of guidelines, plus human-readable content as an attachment, e.g., PDF or XHTML. Level 2 contains the information structured into sections, where each section is addressable, coded, and has additional metadata describing it, along with human-readable content in a format like XHTML. Level 3 contains the definitions of things that need to be tracked to manage the guideline (e.g., heart rate, ejection fraction, blood pressure, comorbidities, et cetera). Level 4 is a way to bind to ANY computational language, such as javascript (my preferred language, for reasons of reducing complexity).
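As a thumbnail of those four levels, here's a sketch of what such a guideline might carry at each level; the shape is entirely my own invention, just restating the prose above as data, not any existing HL7 artifact:

// Illustrative only: the four-level guideline structure, modeled as a plain object.
const guideline = {
  level1: { // findable metadata plus a human-readable attachment
    id: 'example-guideline', title: 'Example Guideline',
    attachment: { mediaType: 'application/pdf' },
  },
  level2: [ // addressable, coded sections with XHTML narrative
    { id: 'S1', code: 'assessment', text: '<div>...</div>' },
  ],
  level3: [ // definitions of the things the guideline tracks
    { name: 'heartRate', units: '/min' },
    { name: 'ejectionFraction', units: '%' },
  ],
  level4: [ // computable bindings in ANY expression language
    { mediaType: 'application/javascript', expression: 'heartRate.value > 100' },
  ],
};
console.log(guideline.level3.map(d => d.name)); // the tracked definitions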
So we never did get to discuss the idea of how to build this thing in clinical decision support like I had wanted to, because we could hardly get away from the discussion about how what Bob wanted was already in scope of VMR [sort of like swatting flies with a sledgehammer]. But now I know the pieces are there. It's time to start thinking more about how to put this together.
And see, I don't even need to worry about the GELLO, Arden, GLIF, ... debate because any mediaType will do as the computable language. The standard need not state a preference.
So, it looks like there might be enough to define a quality process that has measurement built in. One of these days, I'm just gonna have to take a class on that six-sigma thingy-ma-bob.
G'Night all.
Keith
P.S. They tell me that there's a foot of snow back home. I hope you all are enjoying it.