Monday, December 15, 2008
A Story About Quality of Care
A new story entered my repertoire recently. My step-father died last night, after a long struggle to recover from open-heart surgery over six weeks ago. This was his second open-heart surgery in two years. I'm still somewhere between the Shock and Denial and the Pain and Guilt stages of grieving. As I go through these stages, questions run through my head about what we could have done differently to ensure he survived.
Two questions gnaw at me:
Was he getting the best care in the area that he could have? I cannot answer that question as well as I would like to. Even so, my mother and I probably have better answers than most patients or their families would. I have a rather elaborate network of physician contacts located in that area and elsewhere. They gave me good anecdotal feedback on the quality of physician care provided by the hospital where his surgery was performed. My step-father had had heart problems for quite some time, and my mother was very engaged in making sure he had high quality care. Even so, I feel like the amount of information we all had was really insufficient. What would have been most helpful to us would have been a simple listing, by procedure, of the success rate and number of patients treated at area institutions, classified by the risk category of the patient. My step-father would have fallen into the high-risk category.
Was his cardiologist the best provider of care for him, or would another have been better? His most recent cardiologist determined that he had yet another valve failure, something his previous provider hadn't found. The newer provider could possibly have found that problem in the workup for the open heart surgery he had two years ago. Might that have resulted in his survival?
How would I have objectively compared his two cardiologists? I would like to have seen how a similar case mix of patients fared for each provider. I don't know exactly how to measure success or failure, but others do. I would also be interested in comparing the costs for those results.
A logical outcome of how our health system works is that information about quality of care needs to flow through the entire system. When I'm purchasing health coverage, I really want to know whether my payer is providing the best possible healthcare for me and my family. We need to extend the measure of quality of care not just to the providers of care, but also to those organizations that manage and negotiate the costs and quality of the care that we can obtain. Payers seem to be willing to pay for performance. They should also be willing to report on their own performance.
As for myself, I think I will measure results by the stories I hear. Tell me yours.
Tuesday, December 9, 2008
Book Review
The book itself is fairly short, about 200 pages. Organized into five parts, the book first introduces the crisis in healthcare, talks about what's been done to reform healthcare in the last century, and describes why these efforts failed. Next it describes the Senator's ideas for solving the problem. He closes by calling for change in the healthcare system that sticks. I found the first three parts of the book to be of little interest personally. However, I understand why they are included in the book.
The key focus of the book is the notion of a Federal Health Board, an independent body modeled after the Federal Reserve. The members of the board would be experts in healthcare appointed by the President and approved by Congress. The board would set policy on how private insurers participate in Federal health programs. It would recommend coverage of proven drugs, procedures and therapies for the treatment of specific diseases. What makes the idea of the FHB work is the expansion of the Federal employee health benefits and Medicare programs to include plans that would allow members of the public to participate. It might also unify federal coverage so that members of the military, the Federal government, and those obtaining care under Medicare obtain similar benefits for similar costs.
Other key objectives of the FHB made in the book include:
- Focusing on prevention
- Ensuring universal coverage
- Equity not just for general healthcare, but also dental and mental health coverage
Daschle speaks somewhat about the use of electronic health records in the book, but devotes only a handful of pages to the topic. He notes that the US is woefully behind the rest of the world in use of healthcare information technology, and that we could save as much as five percent of our total healthcare spending (some $1.66 trillion in 2003) by implementation of a fully electronic healthcare system. He suggests tax breaks, loans or loan-guarantees to health-care institutions to enable them to upgrade their health IT systems.
This isn't a great book, but it does have some interesting insights. If you will be dealing with healthcare policy issues in the US, I'd recommend reading it.
Wednesday, December 3, 2008
The Right Tools
I spent about two days reading the DICOM specification, spent an hour talking to a DICOM expert (which if I had done sooner would have cut my DICOM reading in half), and a half day reading the XDS-I specification. Then I hit the web searching for DICOM toolkits written in Java. I found several, and after some detailed review, I picked a fairly reputable one to work with.
After adding 15 lines of Java code and modifying about 50 lines of a 600-line XSLT stylesheet, I had XDS-I-enabled the application (it already understood XDS). The code and stylesheet modifications took about two hours to write.
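For the curious, the Java glue for something like this is mostly plumbing. The sketch below only illustrates the general shape, not the actual code: readDicomAttribute is a hypothetical helper standing in for whatever your chosen DICOM toolkit provides (it is not a call from any particular library), and the file and parameter names are made up. The XSLT handling is plain JAXP, which ships with the JDK.

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.File;

    public class XdsiGlue {
        public static void main(String[] args) throws Exception {
            File dicomFile = new File(args[0]);

            // Pull the handful of DICOM attributes the XDS-I metadata needs.
            // readDicomAttribute is a placeholder for a DICOM toolkit call.
            String studyUid = readDicomAttribute(dicomFile, "StudyInstanceUID");
            String sopUid = readDicomAttribute(dicomFile, "SOPInstanceUID");

            // Plain JAXP: pass those values to the existing XDS stylesheet
            // as parameters and let it produce the XDS-I metadata.
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("xds-metadata.xsl")));
            t.setParameter("studyInstanceUID", studyUid);
            t.setParameter("sopInstanceUID", sopUid);
            t.transform(new StreamSource(new File("request.xml")),
                    new StreamResult(new File("xdsi-metadata.xml")));
        }

        // Placeholder: in practice this delegates to the DICOM toolkit's parser.
        private static String readDicomAttribute(File file, String attribute) {
            throw new UnsupportedOperationException("delegate to your DICOM toolkit");
        }
    }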
An IHE profile in half a week; that's worth bragging about. But I can go one better: last Saturday night I used the XDS-I implementation that I had developed and the same toolkit to implement the PDI profile. It took me about six hours. Once again, the key was having the right tools.
Most of what I work with are open source or freely available tools. I love Java, Tomcat, Xerces and Xalan. I use Eclipse as my Java IDE. One of the benefits of clearly written standards and integration profiles is that others are implementing them for me. That allows me to focus time and effort on improving other things. I don't need to do what others have done just to show I can do it better. I'd rather save my time and effort to work on things that others haven't done before.
I can no longer say that I know nothing about DICOM, but I can still honestly say that I didn't write one lick of code dealing with it. So, this year I've been playing the role of a patient at the RSNA IHE demonstration, and maybe next year I can play the radiologist.
Wednesday, November 26, 2008
Engage With Grace
Currently my step-father is in an ICU with a feeding tube and ventilator. He's comatose and my mother and his doctors are unsure why. I know that my mother and he have talked about his healthcare, and that she understands his wishes for care. She has his healthcare power of attorney.
My mother-in-law, seven states away, is also in a hospital room, recovering from a serious infection. She's in her 90's. Her five children and their family members are all engaged in her healthcare and know her desires. She has a living will enabling her children to make decisions on her behalf should she become unable to. They've posted a sign in her room that tells the nurses and physicians to discuss her blood test results (a daily occurrence because of long-term ongoing chemotherapy) with family members who are present.
My best friend's mother is dying. She is also in a serious situation, and the family has decided to withdraw life-support. They too are engaged and able to make decisions where they need to.
In all three of these cases, the family is completely engaged in the process, and knows the wishes of their loved ones. They started talking long ago.
Have you had discussions with your family members about how you would like to be treated? There's a simple way to start. Engage With Grace is a project designed to help you have this discussion. They provide one slide with five questions on it that you can discuss with your family members. You can download a copy of this resource here.
I hope you will look at it. I know that I'll be having this discussion with my family today in the car as we head off for the holidays to see my mother.
Friday, November 7, 2008
Firehoses
Monday
- Fly in
- Dinner with the organizers of the trip
Tuesday
- 2 hours on internal calls
- 2 hours meeting with the IT Director at Primary Children's Hospital
- 1 hour presenting the HL7 CDA and CCD to a Seminar of about 60 Informatics students and Staff at the University of Utah
- 3 hours meeting with a smaller student group working on various Public Health projects in the department
Wednesday
- 2 hours meeting with Informaticists at IHC
- 1 hour meeting with various people from GE Healthcare
- 2 hours on a HITSP call taken in a cab, hotel room, and walking to my next meeting
- 1 hour meeting with a project team connecting researchers to the data available from Intermountain Healthcare, the University Health System, and the VA
- 1 hour getting an overview of a cool desktop device that uses PCR to simultaneously identify multiple pathogens in about an hour, and actually understanding what it was doing! Thank you Scott, Molly and Kevin. Without your help this summer I would have been totally lost.
Thursday
- 2 hours on rounds at Intermountain Healthcare's new hospital
- 1 hour speaking on Standards Adoption
- 2 hours discussing clinical decision support, terminology and modelling with others from GE Healthcare
- Dinner with Stan Huff
Friday
- 3.5 hours meeting with a group of Informaticists at the VA
- Then I fly home.
My days started at 6:30 in the morning, and ended, well, just look at the time of this post. Salt Lake City is beautiful, and the mountains are covered in fresh snow that arrived Wednesday. While I wish I could have spent the weekend skiing, I haven't seen my family in a week, nor talked to my children (they have a schedule almost like mine this week, prepping for their appearance in a musical revue in two weeks, and the time difference means that I constantly miss them when I call home).
I've been learning a great deal about public health and research this week, and been in the company of some extremely bright and educated people. I've also made a lot of new connections and spent time with a number of people that I don't usually see except at standards meetings. It's very interesting to see how these same folk take their standards expertise back into their day jobs.
I spent a good deal of time thinking about why standards are as hard as they are, and I'll share that in a subsequent post.
I'd like to thank Grant Wood of Intermountain Healthcare, Dr. Julio Facelli of the University of Utah, and Peter Haug of Intermountain Healthcare for making this trip possible. In addition, I'd like to thank Stan Huff, Joe Hales, Kathryn Kuttler, Susan Matney, Catherine Staes and Brett South, all of whom I found to be excellent hosts.
Wednesday, November 5, 2008
Not Huxley, but Shakespeare
I woke up this morning, and the world had seemingly changed overnight. I look forward to a great deal of that change, especially the investments in Healthcare IT that have been described elsewhere by the Obama campaign (see Presidential Politics and Healthcare IT).
I also look forward to change occurring over the next year inside HITSP. My sense is that much of that change will come from inside rather than be imposed from outside, but the uncertainty in the change of administration makes me mildly anxious. I am somewhat heartened this morning by John Halamka's thoughts on what may occur (see Healthcare IT in the Early Obama Administration). The US National program is one of the most brilliant things I think the Bush administration has executed in the last eight years, and coming from me, that's a concession.
- HITSP has hundreds of organizational members, from a variety of perspectives, and a significant number of volunteers contributing to the development of a realizable and rational health information network.
- The NHINs have made significant progress in the development of the backbone for our health information.
- CCHIT has contributed greatly to improve the capabilities of healthcare IT.
These programs need to continue. They all need fine tuning, but I'd hate to see any one of them go through great upheaval. My advice to President-elect Obama, his coming administration, and the new Congress with respect to the activities of these organizations would be the following:
- Continue to support the work of the Office of the National Coordinator.
- See what HITSP, CCHIT and NHIN are doing before initiating any great changes.
- Engage with Healthcare IT leaders from those organizations before doing so.
When you compare the investment in our National program to what other nations have spent, it is a paltry sum. But the return on that investment is already huge. For every $1 ONC has committed to spending (some not even spent yet) on healthcare IT initiatives, states and regional initiatives have multiplied it nearly ten-fold.
The next year promises to be very interesting indeed. As Miranda said:
O, wonder! How many goodly creatures are there here!
How beauteous mankind is! O brave new world, that has such people in't!
-- Shakespeare, The Tempest, Act V, Scene 1
Wednesday, October 29, 2008
Random thoughts on Vocabulary
Here are some of the random observations related to the topic that I've made over the course of the last few weeks:
1. Implementation guides for financial transactions seem to include all vocabulary choices necessary for implementation. Clearly if $ are involved, specifications need to be implementable, and a few organizations have figured out how to ensure that.
2. This simple idea could be very useful: Instead of "creating and maintaining" specific Federal vocabularies that are simply flat or simple hierarchical value sets, why wouldn't Federal agencies:
a. Work with appropriate SDO's to ensure that vocabulary terms necessary for Federal specifications are present with appropriate hierarchies in these vocabularies
b. Manage value sets from those vocabularies.
The benefit could be rather large, especially on the HIT side, where vocabulary maintenance from umpteen sources is extremely tedious and expensive.
3. The US may be a world economic leader, but its economic impact often works against it in International standards efforts, especially in the area of vocabulary standards. Often the US just chooses to invent its own lists of terms, instead of working with other organizations.
4. Paying for what is done and diagnosed seems to make sense, even to representatives of payers that I've talked to over the last few months, but few seem to know how to implement it if it doesn't begin with the letters ICD. They need to come up to speed on vocabulary; it's not just a list of terms any more.
5. Everyone seems to think that it's important to define terms, but few are willing to work with someone else's definition.
6. We can easily spend $100,000 and hours of a gold-ribbon panel's time to redefine four terms, but try to find funding to fix the way hundreds of pages of documents are produced, in part to list out the terms needed for interoperable ELECTRONIC health records...
7. "All the good words are taken", An engineering complaint often heard during design when trying to identify a new object, also applies to standards.
8. We need a new way of norming (see Forming-storming-norming-performing) that includes informing. That way, when we start to use and define terms, we can be consistent with what others have learned before (and avoid some of the same mistakes). As someone recently said to me: "Research is not the first instinct of the terminally innovative". I would add, "but it should be."
9. One of the greatest barriers in interoperability to overcome is the "Not Invented Here" syndrome. We need, as developers and users of standards, to avoid that syndrome in our own thinking.
10. You say potato, I say potato. It's all in the inflection. Often it's not what is said, but how it is said, that has the impact. In other words, the actual words aren't important.
So, smile when you say that.
Tuesday, September 30, 2008
HITSP Public Comment Period Begins
Date: September 29, 2008
TO: Healthcare Information Technology Standards Panel (HITSP) and
Public Stakeholders - - FOR REVIEW AND ACTION
FROM: Michelle Maas Deane
HITSP Secretariat
American National Standards Institute
RE: Public Comment Period Begins for Personalized Healthcare, Consultations and Transfers of Care, Immunizations and Response Management, Public Health Case Reporting, Patient-Provider Secure Messaging and Remote Monitoring documents.
The Healthcare Information Technology Standards Panel (HITSP) announces the opening of a public comment period for the following HITSP documents:
- Personalized Healthcare Interoperability Specification (IS08) and referenced constructs
- Consultations and Transfers of Care Interoperability Specification (IS09) and referenced constructs
- Immunizations and Response Management Interoperability Specification (IS10) and referenced constructs
- Public Health Case Reporting Interoperability Specification (IS11) and referenced constructs
- Patient-Provider Secure Messaging Interoperability Specification (IS12) and referenced constructs
- Remote Monitoring Interoperability Specification (IS77) and referenced constructs
The public comment period will be open from today, Monday September 29, 2008 until Close of Business, Friday, October 24, 2008. HITSP members and public stakeholders are encouraged to review these documents and provide comments through the HITSP comment tracking system. The documents and the HITSP comment tracking system are accessible through http://www.hitsp.org/
All Panel and public comments received on these documents will be reviewed and dispositioned by the HITSP Technical Committees (TCs) in preparation for Panel approval in December.
HITSP members and public stakeholders are encouraged to work with the Technical Committees as they continue the process of standards selection and construct development. If your organization is a HITSP member and you are not currently signed up as a Technical Committee member, but would like to participate in this process, please contact jkant@himss.org
Friday, September 19, 2008
In humor there is Truth
Tuesday, September 16, 2008
Presidential Politics and Healthcare IT
Speaking as a volunteer for the McCain campaign was Stephen Parente, PhD, MPH and MSc. He presented the McCain plan for healthcare in four points.
- The first incentive is a $2,500 per person, $5,000 per family tax credit. The credit would be paid for by adjusting (removing?) the tax exemption provided on healthcare benefits. It wasn't clear to me whether that meant for individuals or corporations, but I'm sure you can read the McCain plan elsewhere on the web. Stephen asserted that this would be a break-even prospect over 10 years, in part because the tax credit would be adjusted upward based on the general rate of inflation, rather than the rate of healthcare cost inflation. In later discussion on this topic, the question of "single payer" came up, and Stephen responded that culturally the US was not ready for that step yet.
- Common to both plans was a guarantee of access to healthcare for all. I was unable to determine from Stephen's presentation much more beyond that. He did indicate that this would be a fairly large investment, atypical of past Republican initiatives.
- Stephen spent some time making the point that the cost of the "same" health insurance plan varies between states, by as much as 100%. This is due to legislation passed in 1945, before the internet and the mobile populations that we have today. Large corporations have an ERISA exemption to this act. So, this incentive would make it possible for patients to purchase health insurance across state lines.
- The last incentive was unclear to me.
In discussions of the opportunities for Healthcare IT related to this plan, Stephen mentioned that the tax incentive could provide some opportunity. He did make the point that spending by consumers based on the tax credit would still be subject to consumer choices. He thought that one opportunity would be to develop a health card that would enable the exchange of clinical information. He also discussed the use and sharing of data available to payers through attachments, such as labs, with other providers, possibly enabled in some way through health cards and the authentication technologies available in them.
When I asked a question about how the McCain plan would impact the ongoing work of ONC, including AHIC, HITSP, CCHIT, NHIN, and HISPC, Stephen responded by saying that this "is an open discussion that needs to happen."
Speaking as a volunteer for the Obama campaign was Blackford Middleton, MD, MPH, MSc. Blackford's presentation included highlighting of the three points of the Obama campaign. However, he started by first describing some of the problems. Many of us in the standards space have seen this data before, but using it seemed to show some awareness of the audience.
The three key points he touched on included:
- Access for all; in presenting data on patient satisfaction with the current system, he made the point that US patients are ready for change.
- Modernization of the healthcare system. Included in this part of the presentation were some studies reporting where some of the costs are and where the benefits of EHR use go (most of them going to others than the providers). As part of this point, he discussed the investment of $10B in healthcare IT over 5 years, an investment on par with some of the topics presented by other countries in that session.
- Lastly, Blackford discussed connecting healthcare IT providers and public health, focusing on more wellness, instead of illness.
Blackford, when asked the same question about the role of ONC et al., felt that the role of these organizations would be strengthened under the Obama plan. The opportunities for healthcare IT were discussed under point #2 above.
I found some interesting points in both presentations, but am far from an impartial observer, as I'm pretty well known as a liberal Democrat with regard to healthcare issues. I'll repeat the admonition that opinions mentioned in this blog are my own, and not those of my employer, or any of the organizations that I volunteer with. Some of my own observations follow:
Stephen needed to be introduced, as he's not necessarily well known in HL7 circles. His discussion of Healthcare IT, health cards as an opportunity for fixing the problems, and use of payer data showed me a disconnect from the work of HL7. He did use the "Attachments" keyword, but that's only one part of a much bigger picture. When I asked the question about ONC and the alphabet soup, I felt his response was a little like a deer caught in the headlights. I found myself strongly questioning how a $2500 / adult tax credit could "trickle up" into investment in Healthcare IT. I also found myself further questioning how a tax credit that simply shifts money from an employer tax exemption to my pocket would change my overall healthcare costs. Those additional expenses need to be paid for in some way, and that will either come out of my benefits or salary increases, or will impact employment.
Blackford is already well known in Healthcare IT circles. He connected with the audience first by reporting on some of the reasons that we need to invest in Healthcare IT. His slides included geeky references to Star Trek and Dilbert. He had a much better story on the opportunities for Healthcare IT for this audience, and I personally think, for patients as well. Blackford is helping to create AHIC 2.0, and is well aware of the role and work of ONC, HITSP, HISPC and CCHIT in the US. Blackford did not discuss the details of the Obama plan; however, a complete document from the Obama campaign describing those details was present in the program materials. I wish the McCain campaign had been smart enough to do the same.
Overall, I enjoyed the discussion of the US campaigns' approaches to healthcare, but also have to question whether this was an appropriate use of HL7 members' time. If this had been a meeting of the HL7 US Affiliate (an imaginary body), I could see where this would fit. Given that this is the Plenary of HL7, I have to question the approach. Next time I'd actually like to see a presentation from what I still like to call the US National program, which would be much more comparable to what we saw from other HL7 member delegations.
Thursday, September 4, 2008
Competition
It used to be the case that market forces would eventually work this out. This leads to a win/lose zero-sum game, as I've mentioned in the past. One standard (and the vendors and providers who've adopted that standard) wins, and the others lose (as do the patients of those providers).
Who benefits from this competition? One can argue that we get better standards that way, just like we get better products from competition, but history shows this isn't always the case.
If we go back a decade or more, can we really say that VHS was technically any better than BetaMax? I've seen a number of technical arguments for the Beta format, but what really drove the success of VHS was that the suppliers of that technology had better marketing and penetration in the end.
Can anyone who purchased a Blu-Ray format DVD say that they've benefited from competition? Folks who purchased the Blu-Ray format may have benefited, but actually, while the competition was going on, they really lost out on a wider selection. How about HD DVD player vendors and owners? They all lost out.
My current windmill tilt has to do with competing standards for medical device communication as used in the home. Two organizations that I work with are pushing different approaches. One organization is looking at an approach that would utilize standards already used elsewhere in healthcare for medical device communication to apply them to home health. Another organization is looking at applying some new technologies and existing standards in a way that hasn't been done before in a new market segment. There are benefits and disadvantages to either, few of which seem to be related to technical capabilities.
From a technical perspective, it appears that either solution communicates the information needed. My hope is that the two organizations will go work it out with each other for a while. They agreed today to do just that, after two hours of lengthy discussion. We'll see what they come up with.
Thursday, August 21, 2008
HITSP Webinar
August 21, 2008 (Thursday)
2:00-3:30PM/EDT
Webinar 7
Security, Privacy and Infrastructure
Audio
1.877.238.4697 (toll free)
1.719.785.5596 (toll)
Participant Code: 957195
Monday, August 18, 2008
AHIC Use Cases Available
Friday, August 15, 2008
Public Feedback on AHIC Use Cases Solicited
Today I received an e-mail sent by John Loonsk regarding new AHIC use cases, and extensions to existing use cases, summarized below:
Between March of 2006 and March of 2008, the American Health Information Community (AHIC) published 13 use cases. In April of 2008, the AHIC began the process of identifying 2009 priorities to serve as focus areas for standards harmonization and other national HIT agenda activities. During the June 2008 and July 2008 AHIC meetings, there was approval for development of 1 new “Use Case” and 13 “Extensions/Gaps”. The 14 documents approved by AHIC for development include:
- General Laboratory Orders
- Order Sets
- Clinical Encounter Notes
- Medication Gaps
- Common Device Connectivity
- Scheduling
- Consumer Preferences
- Common Data Transport
- Newborn Screening*
- Medical Home: Co-Morbidity & Registries
- Maternal & Child Health
- Long Term Care – Assessment
- Consumer AE Reporting
- Prior-Authorization in Support of Treatment, Payment, & Operations
(* - Denotes that this document will be a Use Case)
In the upcoming week, the first 5 Extension/Gap documents will be made available for public feedback. There will be 1 round of public feedback lasting 4 weeks. ONC intends to send out an email announcing the availability of the documents. These documents should be posted to the HHS website no later than Friday, August 22nd, 2008. Please continue to check the use case website for updates: http://www.hhs.gov/healthit/usecases/
I'll post again when these use cases and extensions become available. Many organizations, including ANSI/HITSP and EHRVA will be organizing feedback on this work, and I expect professional societies such as ACP, HIMSS and others will be doing so as well. I urge you all to comment on these use cases and extensions, and to participate in the feedback process to AHIC.
Please post comments to this post if your organization will be providing feedback to AHIC, and you would like members to participate.
Tuesday, August 12, 2008
The making of sausages and standards
Laws are like sausages, it is better not to see them being made. -- Otto von Bismarck.
I sometimes feel the same way about standards. My best friend and I have been in software for more than 25 years. For more than half of that time, we've been working together, at four separate companies (and not just name changes or purchases; counting those, it would have to be 7 or 8). He's always been the one to see two sides of any argument, to drive consensus, and to adeptly manage people (including me). I've always been, well, as he describes it, "A Cowboy", or as I do, a tilter at windmills. I can tell you that I've knocked down a windmill or two, but that many others have won. Today, my friend is writing code for his own company for too many hours a day at his kitchen table, and I'm deeply involved in standards, and much to both our amazement, "Politics" (the word is often said from one side of the mouth, and then you have to spit afterwards).
Those who understand the strategic importance of standards to their organizations have involved their best leaders in these activities. That sometimes makes standards development and selection a political process, and it has more than occasional skirmishes. I can think of about 4 or 5 offhand going on right now, without even blinking. Some of these are fairly mild, others are real battles, and at least one is on the verge of war. Often these battles are held in areas populated by a number of rather innocent, and sometimes even totally unaware, bystanders.
Slightly more than a decade ago, I participated in my "first" standards activity. I was a member of the interest group for the W3C DOM2 (Document Object Model). The company I worked for had many people involved in W3C activities, including the product manager for the product team I led. She used to regale me with stories of the battles between two of the major players on how DOM2 would go, and which of their implementation feature sets would become part of the new standard. When two 500 pound gorillas have a dominance fight, everyone had better clear some space. Her descriptions of the (dare I call them) discussions were reminiscent of scenes from Bloodsport, although not at all physical.
There are three ways of dealing with difference: domination, compromise, and integration. By domination only one side gets what it wants; by compromise neither side gets what it wants; by integration we find a way by which both sides may get what they wish. -- Mary Parker Follett
Often, one way to resolve these sorts of disputes is to make sure that neither of the gorillas win. Both are often willing to agree to a "draw", as neither gains any advantage from the outcome of the battle. But what about those not directly in conflict? Like most bystanders, they more than likely lose, at the very least valuable time, and in some cases, a chance to make a truly good choice.
Another way to resolve these sorts of disputes is to look very clearly at both sides of the issue, and try to craft a solution using the best aspects from both sides. When this does occur, the collaborative process really shines. This synthesis produces something advantageous to all, and we are no longer playing a zero-sum game.
What I learned from those days was that standards discussions that are motivated by financial incentive and not based on technical merit are best avoided. If they cannot be avoided, then at the very least, financial motives should be very clearly laid on the table. In many organizations, there is a taboo on discussing practical realities such as cost. However, these considerations have a significant impact on the utility and uptake of the standard. We need to find ways to discuss the topic of implementation costs openly, and not hide debates related to that topic in some other guise.
Some questions I use to tackle this topic follow:
- Can it be implemented with open source tools?
- Can it be implemented with commercially available tools?
- Is implementation of this a senior project, a doctoral thesis, or a massively funded effort?
- Can I hire software engineers from any field to build it, or do they need to be specially trained?
- Can I buy a book on it?
- Can I take a class on it?
- Does it work with existing infrastructures, or will I have to rip and replace?
Later in my career, I entered healthcare and the realm of healthcare standards. I found myself unwittingly not just in the middle of another battle, but as the field general in a war. This was a highly political disagreement between a pair of standards organizations that I was a member of. Tactically, I believe that battle to have been concluded successfully, and the peace treaty seems to be holding. Strategically, I think it's still being fought, but as a rather meager guerrilla action. I know I'm not paying much attention to it these days. I learned a great deal from that war.
- Pick your battles.
- Stick to your principles.
- Respect the opposition.
- Religious battles are never won through logic.
- Persistence Pays
Pick your Battles
The technical work of standards development is time consuming enough, as anyone participating in it well knows. Fighting political battles takes time away from those activities at the very least, and often you wind up with nothing to show for it. In a world where you see the same people involved in many places, making an enemy in one place is something that can follow you around to many others. You don't become a target alone; anyone else associated with you becomes one as well.
Stick to your Principles
Make sure that your issues are openly aired and debated. If there is something that you are unwilling to talk about, consider your motives. You never want to be put into the position of being embarrassed by having your motives either questioned or exposed. If you cannot be shamed by that, then you can never lose from embarrassment.
Secondly, fight clean. Stick to technical merits. Avoid fallacious arguments, ad hominem attacks and other dirty tactics. Standards development is a long process. What you lose from fighting dirty (even if you win) will stick with you for a long time, and can make future debates even more difficult.
Respect your Opponent
Act as if your opponent in any debate has good reasons for their position. Try to see those reasons. Remember what I said about trying to make sure that you aren't playing a zero-sum game. Learn how to agree to disagree, and then go eat sushi (or have a beer). Standards work is best done with people that can respect others with differing opinions. Mutual respect often allows all parties to find win-win solutions, and we can all benefit from that result.
Religious battles are never won through logic
Never try to teach a pig to sing; it wastes your time and it annoys the pig.
-- Robert Heinlein
Just about everyone I know who is involved in healthcare standards has a deep passion for their work. When that passion overrides intellect, useful discussion often ceases. Be aware, and don't engage yourself in the same mistake. If you find yourself trying to preach to the unconverted, and aren't succeeding, stop. You might very well be wasting the time of both parties, and worse yet everyone else around you.
In this particular field, everyone I run into is an expert, with credentials out the ying-yang. I've discovered that's really meaningless. Don't trust your gut. Try it out. What really counts is what happens when the paper hits the code. In your own arguments, stick to technical merits, not your opinions; others can usually discern fact from opinion.
Failing that, if you encounter a religious debate, there are at least two useful stratagems:
- Identify the topic as being one that is "unresolvable" and work on other aspects (agree to disagree).
- Identify the topic as a pre-established "religious" debate that experts have been arguing about for X or more years, without any resolution. This tactic works only if your opponent is unwilling to move at all, but you are.
You can usually identify a religious debate by noting two polarities (x vs. y, or black vs. white), where neither has yet dominated, and where expert opinion may still be divided on the topic. In these issues, the beliefs of both sides are often well documented. The document vs. message debate in computing existed long before HL7 ever showed up on the scene. The element vs. attribute debate in markup languages has been around for 20 or more years. No solution to either of these debates has yet presented itself. Most experts these days agree that there are places for each. Putting yourself in the position of espousing "one true way", when in fact there are many, is simply asking for failure. Who, after all, wants to be associated with a radical?
Persistence Pays
In the end, being persistent counts.
One man scorned and covered with scars still strove with his last ounce of courage to reach the unreachable stars; and the world was better for this.
-- Don Quixote
P.S. I'll return to the topic of Genomics and Structured Family Histories soon.
Monday, July 14, 2008
Reporting Genetic Test Results
This is the second part of a three part series that started with Understanding Genetics. In this article, I will identify and describe, at a high level, the standards that are needed to obtain genetic test results.
But first, a little segue. A colleague reminded me recently that what best helps us deliver is having a real understanding of why our customer needs something. Let's see if we can create a little fantasy that might help.
Imagine that you are in Tier 3 technical support at your company. Assume for the time being that all of your company's computers (and those of companies like yours) are the same; it's just the software that's different. A new technique in computer diagnostics now allows technicians like you to actually read the stored programs inside the computer (work with me here). A few years ago, your company and hundreds of others like it came together to work on a major project. They took one of the computers at random, and cataloged every bit in its memory, all 750 megabytes of it. Within this vast amount of data were somewhere between 65 and 80 thousand little subprograms, each of them anywhere from 10 to 15 thousand microinstructions in length. We know only a little bit about how the processor works. We can understand start and stop instructions, can interpret some of the sequences of microinstructions that make up larger operational instructions, and have some basic understanding of some of these programs, but we are still learning more every day.
Your job, given a particular computer malfunction, is this: based on a particular set of symptoms, and other random information that comes your way about where the computer has been and what subsystems it was built from, you need to figure out which of those subprograms might be responsible.
Just to complicate matters, the computer that was selected at random is known to have a few wonky sub-programs installed on it that aren't quite right either. Also, addressing a particular memory location is not an exact science. It's more an art form, and the way that you access it is by looking for sequences that you know typically precede or follow the particular memory address you want. It's more like associative memory than RAM.
By the way, you have a budget to work with. You can read out vast sections of memory, but it is very expensive (like disassembly), or you can look for known problem causing sequences (like a virus scanner), which is faster and cheaper, but doesn't find everything.
Add to this that the information you have to work with and understand to identify a problem is not only growing at tremendous rates (see the second paragraph in Clinical Decision Support), but also being changed. What you knew yesterday might be different tomorrow. It may be that one of those wonky subprograms has now been replaced by a better sample.
This is just a small sample of the complexity that faces the clinical geneticist. Hopefully this little segue into an analogical fantasy might help you understand a little bit about how genetic testing works.
Now, back in the real world, we will start simple. A genetic test is, at its core, a laboratory test. This simplifies matters for us, because we can make use of the same standards used in ordering and reporting for laboratory tests. The most commonly used standard for ordering laboratory tests and reporting on results is HL7 Version 2. There are many different releases of HL7 Version 2 (we could call them variants, but that would just be too confusing), including 2.2, 2.3, 2.3.1, 2.4, 2.5, 2.5.1 and 2.6, and coming soon Version 2.7 (it isn't clear whether these would be alleles or mutations).
Various organizations have selected different releases of HL7 Version 2 messages for laboratory orders and results, including:
- HL7 Version 2.4: Used in the original ELINCS implementation guide developed initially by the California Healthcare Foundation. This guide is now being completed by HL7 using HL7 Version 2.5.1.
- HL7 Version 2.5: Used in the Laboratory Technical Framework from Integrating the Healthcare Enterprise.
- HL7 Version 2.5.1: Selected by ANSI/HITSP, and recognized by Secretary Leavitt of Health and Human Services, for use in the US for reporting laboratory results. ANSI/HITSP selected this version because it supports the conveyance of information required by CLIA regulations.
While HL7 Version 3 does support laboratory test orders and results, this is still a work in progress.
HL7 CDA Release 2.0 (this is another gene altogether) has also been selected by ANSI/HITSP and recognized by Secretary Leavitt for reporting laboratory results in a clinical document. ANSI/HITSP's selection of this standard is constrained by the IHE XD-LAB profile found in the IHE Laboratory Technical Framework. The XD-LAB integration profile also conforms to the HL7 Laboratory Claims Attachments Implementation Guide.
Finally, results reported in a laboratory report often use Logical Observation Identifiers Names and Codes (LOINC®) to identify the observations (all of the examples above use LOINC).
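To make that a little more concrete, here is a rough sketch of what a fragment of an HL7 Version 2.5.1 ORU^R01 result message carrying LOINC-coded observations might look like. The codes, identifiers and values below are placeholders invented for illustration; they are not drawn from any HITSP specification or from the LOINC or SNOMED CT databases.

    MSH|^~\&|GENLAB|EXAMPLE LAB|EHR|EXAMPLE CLINIC|20080714120000||ORU^R01^ORU_R01|MSG0001|P|2.5.1
    PID|1||MRN12345^^^EXAMPLE MRN||DOE^JANE
    OBR|1||LAB-2008-0001|XXXXX-X^Genetic analysis report panel^LN
    OBX|1|CWE|XXXXX-X^Genetic disease assessed^LN||YYYYYYY^Example condition^SCT||||||F
    OBX|2|CWE|XXXXX-X^Genetic analysis overall interpretation^LN||LA-XXXX^Positive^LN||||||F

The point of the sketch is simply that a genetic result rides in the same OBX segments as any other laboratory observation, with LOINC identifying what was observed and a coded value carrying the finding.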
Ordering the Test
A quick definition before going further: sequencing (or re-sequencing) refers to reading off the nucleotides (A, C, G and T) of the gene sequence directly. This is usually more expensive, but also the most accurate way to obtain a gene sequence. Some researchers use the term re-sequencing because the human genome has already been sequenced once.
Clinical Question
So, in looking at why, what are the questions that are typically being asked? In genetic testing, there are six common clinical questions. Half of these are related to specific genetically related diseases, and the other half to medications used for treatment. The "question" being asked by the clinician needs to be described in the order.
Tests on Genetic Conditions
Tests that identify variants associated with genetic conditions can assist the provider in determining if a patient:
- has a genetic condition,
- is at (increased) risk of contracting a genetically related disease, or
- carries a particular genetic variant and can potentially pass it on to their children.
Tests on Medications
Pharmacogenomic tests can tell a provider:
- Whether a particular medication will be effective or not in treatment,
- How quickly a particular medication will be metabolized by the patient, or
- How toxic a medication may be to the patient.
LOINC vocabulary terms have been proposed to represent each of these different kinds of test results in panels. The SNOMED CT and RxNorm terminologies have been proposed to represent disease conditions and medications respectively. Some experts have noted that SNOMED CT does not provide great coverage for family-related diseases (i.e., genetic conditions), but feel that it is more important to use a common reference vocabulary than it is to introduce vocabularies that are not yet used in healthcare. Use of these vocabularies will enable linkage of genetic data with other clinical data in the health record. I find myself in agreement with them.
Describing the Specimen
The specimen is the source of the DNA examined, as well as the eventual source of the variant identified. Genetic material in a tumor specimen can have somatic or germline variations. A somatic variation occurs after cells have been formed, for example from UV damage to skin cells after too much sun exposure. A majority of cancers occur due to somatic changes. A germline variation is one that is incorporated into every cell. The last classification is for specimens of fetal tissue (prenatal). A proposed classification system for specimens thus uses the terms somatic, germline, or prenatal to describe the specimen.
Reporting the results
When reporting the results of genetic tests, it is important to include in the report the information a healthcare provider needs to interpret them. The first step in reporting the results is to repeat everything in the order that was stated or clarified later during the ordering process. The reason for this is to allow subsequent reviewers of the result to understand the original intent of the provider ordering the test. Ordering a genetic test may be an iterative process; in reporting the test results it should be necessary only to report what was finally agreed upon.
Region of Interest
Once a genetic test is selected, the testing laboratory can specify more detail about the region of the gene that was examined. The human genome includes on the order of 3 billion base pairs (which fits into about 750 MB). At present, it isn't practical to sequence a single person's entire genome, and it would take quite a long time (although that may change).
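Where does the 750 MB figure come from? A quick back-of-the-envelope calculation, assuming each of the four bases can be stored in two bits:

    3 × 10^9 base pairs × 2 bits per base = 6 × 10^9 bits ≈ 7.5 × 10^8 bytes ≈ 750 MB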
So, the testing laboratory determines where on the genome the test will focus. This region of interest can be described by identifying the genomic or transcriptional reference sequence (to align the region with the genome), the starting and ending nucleotides in the sequence (using the numeric portion of HGVS nomenclature), and a specific gene using the HGNC nomenclature (remember that "associative memory access", here it is). Much of this information is, or can be tied together in appropriate knowledge bases (and these are continuously being updated).
Interpretation
The next step is to report the interpretation. Each of the test types described in the previous sections will require different values to interpret the results. Vocabulary has also been proposed in LOINC using LOINC Answer Codes, but the LOINC documentation does not presently describe how to relate LOINC Answer Codes to the supplied data. Certainly any set of values used for these interpretations will also need to be mapped into SNOMED CT, to allow for their eventual use in clinical decision support systems that rely on SNOMED CT. Note that some interpretations will remain "inconclusive" (have you ever finished a technical support call only to get no solution to your problem?).
Details
Because sequencing and genotyping are so expensive, they shouldn't be repeated unnecessarily. That means that enough detailed information should be conveyed in the result that future re-interpretation is possible. The average gene contains from 10 to 15 thousand base pairs (think of these as the microinstructions), but this can vary dramatically, with some genes using millions of base pairs. This information is maintained by the testing laboratory and is absolutely essential in the initial interpretation of the results. When reported, these findings are summarized using the recommended standards. This will enable linkage of the genetic data to clinical genetic knowledge bases, so that interpretations can be maintained in a manner similar to other laboratory tests.
Different kinds of tests will require differing detailed results. A test that is attempting to identify a particular DNA marker or allele will need to describe what was found. Again, this identification can be performed using HGNC to describe the gene, NCBI Nucleotide Reference sequence identifiers, and HGVS nomenclature to describe the variations.
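As a purely illustrative sketch of how those three pieces fit together (the gene and reference sequence are real identifiers, but the variant string is invented for the example, not an actual finding), a reported variant might look like this:

    Gene (HGNC symbol):   BRCA1
    Reference sequence:   NM_007294 (an NCBI nucleotide reference sequence for the BRCA1 transcript)
    Variant (HGVS):       c.123A>G (at position 123 of the coding sequence, A is replaced by G)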
Analysis
Details are not sufficient. The final component of the report should include interpretation of the results performed by a geneticist. Genetics is so complex that providers will need that expertise to understand the results. This analysis may include references to research, educational materials, suggested treatments or additional testing.
The Final Step
For most clinical uses, once the provider's question about the patient's genetics is answered, many more questions are asked by the provider and by the patient. Some of these involve how to communicate the information to the patient and/or their relatives; others involve what the next steps should be in management of the patient's health. These fall outside of the domain of Healthcare Standards, so I will not dwell upon them. However, the American Society of Clinical Oncology published a policy statement that addresses some of these issues. I recommend reading it.
References:
American Society of Clinical Oncology Policy Statement Update: Genetic Testing for Cancer Susceptibility, Journal of Clinical Oncology, Vol 21, No 12 (Jun 15), 2003: pp 2397-2406 available on the web from http://jco.ascopubs.org/cgi/reprint/21/12/2397
This is an excellent article that describes many of the issues surrounding the need for, and appropriate use of, genetic testing. While its principal audience is clinical oncologists, the explanations given for the ASCO positions are very clear, and address many issues that need to be considered with respect to genetic testing.
Acknowledgements
Thanks to Sandy Aronson, Director of IT, and Mollie Ullman-Cullere, both of HPCGG, for arranging a tour of their genetic testing laboratory and answering my many questions on genetic testing. Thanks also to Mollie, and to Scott Bolte of GE Healthcare, for their reviews of an early draft of this article. Any accuracy is due to them; any errors are, of course, my own.
IHE Webinars
IHE North America Connectathon Webinar Series
The next two sessions in the Webinar series take place Tuesday, July 15 and Wednesday, July 16:
IT Infrastructure: Profiles for Health Information Exchange Tuesday, July 15, 9:00 - 11:00 AM (all times CDT)
- XDS Affinity Domain Profiles
(XDS.a, XDS.b, Merge, Web Services, XCA, XDR, XDM, NAV, PIX/PDQ, PIX/PDQ V3)
- Bill Majurski, National Institute of Standards and Technology
Session 11: IT Infrastructure: Security and Privacy Tuesday, July 15, 1:00 - 3:00 PM (all times CDT)
- Security and Privacy (ATNA, EUA, XUA, BPPC, DSG)
- John Moehrke, GE Healthcare
Patient Care Coordination: Medical Document Content Profiles, Wednesday, July 16 (all times CDT)
Part 1:
9:00 -10:00 AM PCC Content Profiles for Health Information Exchange
- Keith Boone, GE Healthcare
10:00 - 11:00 PCC Integration Profiles for Care Management and Query
- Keith Boone, GE Healthcare and Laura Bright, Bell Canada
Part 2:
1:00 - 1:30 PM Content Profiles for Prenatal Care
- Tone Southerland, Greenway Medical Technologies
1:30 - 2:00 Content Profile for Cancer Registries Pathology Reporting
- Wendy Scharber, Registry Widgets
2:00 - 2:30 Content Profile for Immunization Content
- Alean Kirnak, Software Partners LLC
2:30 - 3:00 Content Profile for Functional Status Assessment
- Marcia Veenstra, CPM Resource Center and Audrey Dickerson, HIMSS
The entire Webinar series is free, but participants are required to register in advance. Further information and a link to registration (via Webex) are available at http://www.ihe.net/north_america/connectathon2009.cfm.
Monday, July 7, 2008
Understanding Genetics
You can't do anything about your genetic background or your family history, but you can do something about the medicine you put in your mouth.
-- Dr. Andrew Glass of the Center for Health Research in Portland
My interest in genetics and family history started with the development of the Family History section of the HL7 Continuity of Care Document. It was recently expanded by the introduction of the Personalized Healthcare use case into the 2008 work cycle for ANSI/HITSP. We are all fairly well aware by now that increasing a provider's knowledge of a patient's genetics and family history will allow them to better select effective management. Over the course of the last six months, many of us have been getting a crash course in genetics, genomics and genetic testing, and the need for structured family histories. What follows below is what I have learned over the last six months about genetics. Given that this material is as new to me as it may be to you, I've had it reviewed by an expert for accuracy.
This is the first part of a three part series on Genetics and Family History, describing enough genetics for healthcare IT implementers who need to review and/or implement specifications produced by ANSI/HITSP for the Personalized Healthcare use case.
Part 1: Understanding Genetics follows below.
Part 2: Reporting Genetic Results will describe the standards needed to exchange the information described in Part 1.
Part 3: Family History and Risk Assessment will describe the necessary information to communicate in family histories, and the importance of this information in assessing risk, and determining the need for genetic testing or additional treatment.
Understanding Genetics
Genomic and family history data is an excellent source of information on health risks for a variety of conditions, both chronic and acute. By using family history or genetic testing to identify patients at high risk, the medical system is better able to predict the risk of disease, allowing patients and providers to make better care plans to address those risks, and ensure earlier detection and better preventative efforts.
Genetic information can also help predict how effective a medication will be, providing for better care by reducing side effects, avoiding toxicity and unnecessary therapies.
However, before we can begin to incorporate genetic testing data into EHR systems, we need to understand enough clinical genomics to correctly incorporate these results into healthcare IT systems.
I'm certain that most of you understand what DNA and chromosomes are, and that most humans have 22 pairs of chromosomes plus a pair of sex chromosomes (XX for females or XY for males). Half (23 chromosomes) come from each parent, for 46 altogether. However, there is a great deal of specialized vocabulary that goes beyond chromosomes that we all need to understand. I've translated this very specialized vocabulary into language that engineers can understand (recall that I consider myself to be in this category).
Most of you recognize a picture of a chromosome as a vaguely X shaped object. You can think of it as two identical lengths or strands of rope, called chromatids, effectively tied in a knot together at the middle, called the centromere. The strands in what is usually depicted as the top part of the chromosome are the short arms, and those at the bottom are the long arms.
Genes and Nucleotides
Along each arm are sequences of nucleotides, typically represented using the letters A, C, G and T, making up the DNA. DNA and RNA are known as nucleic acids because they are made up of nucleotides. A gene is a distinct DNA sequence that provides instructions for producing a single protein, which in turn produces a single trait, such as eye color (actually, there are several genes controlling eye color, and a single protein can cause multiple effects, but let's keep it simple for the moment). The set of genes belonging to a person makes up their genotype. A person normally has two full sets of genes, one from each set of chromosomes (thus one set from each parent).
Alleles and Genotypes
Genes can have variations, known as alleles. Two commonly known alleles are those for eye color: the brown allele and the blue allele. Just because you have the blue allele in the gene controlling eye color doesn't mean your eyes are blue. You might also have the allele for brown eyes in your second copy of that gene. In that case your eyes will be brown, because the brown allele is dominant and the blue allele is recessive. A dominant allele will be expressed when there is only one copy, whereas a recessive allele will be expressed only when it is present in both copies of the gene controlling that trait. There are other patterns besides the most commonly known dominant and recessive, but we probably don't need to go into that level of detail. In this example, your genotype would be Blue/Brown, indicating that you have the alleles for blue eyes and for brown eyes in the gene controlling eye color.
Phenotype
The fact that you have brown eyes is known as your phenotype (for eye color), and basically amounts to which alleles in your genotype are being expressed. Other alleles can affect how the body metabolizes (or fails to metabolize) a particular drug, or increase or decrease the risk of a particular disease. Having a particular allele doesn't necessarily mean that you will have a particular disease or react to a particular treatment in a certain way.
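For the engineers among us, the dominant/recessive rule is easy to express in code. The sketch below is an illustration of the logic only, with made-up allele names; real eye color genetics is considerably messier, as noted above.

# Illustrative only: real eye color involves several genes and many alleles.
DOMINANT = {"Brown"}   # alleles expressed with a single copy; anything else is treated as recessive here

def phenotype(genotype):
    """genotype is a pair of alleles, e.g. ("Blue", "Brown")."""
    a, b = genotype
    if a == b:
        return a                  # two copies of the same allele are always expressed
    for allele in (a, b):
        if allele in DOMINANT:
            return allele         # one dominant copy is enough
    return None                   # co-dominance and other patterns are not modeled in this sketch

print(phenotype(("Blue", "Brown")))   # Brown: the dominant allele is expressed
print(phenotype(("Blue", "Blue")))    # Blue: recessive, but present in both copies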
Haplotype/Haplogroup
Sometimes more than one gene in a group of closely linked genes tends to be inherited as a group; such a group is known as a haplotype, and collections of related haplotypes are known as haplogroups.
HUGO and Gene Names
Most (if not all) genes relevant to genetic testing have already been identified, along with their clinical significance. These genes have a name and identifier issued by the Human Genome Organization, otherwise known as HUGO. HUGO maintains a database of gene names known as the HGNC (Human Gene Nomenclature Committee) Database. These identifiers serve as the codes in the ontology of human genes.
Locus
Each gene occurs at a particular location on the chromosome, known as its locus; in some cases a gene may have multiple loci. Geneticists have special notations to represent loci, which basically involve recording the distance up (or down) the strand of DNA in the chromosome.
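For example, a locus is often written in a compact cytogenetic form such as 17q21: chromosome 17, long (q) arm, band 21. A minimal sketch of pulling such a name apart (the notation here is simplified; real cytogenetic names can also carry sub-bands and ranges):

import re

# A minimal parser for cytogenetic locus names such as "17q21" or "Xp22.3".
LOCUS = re.compile(r"^(?P<chromosome>\d{1,2}|X|Y)(?P<arm>[pq])(?P<band>[\d.]+)$")

def parse_locus(name):
    m = LOCUS.match(name)
    if not m:
        raise ValueError("not a cytogenetic locus: " + name)
    return m.groupdict()

print(parse_locus("17q21"))    # {'chromosome': '17', 'arm': 'q', 'band': '21'}
print(parse_locus("Xp22.3"))   # {'chromosome': 'X', 'arm': 'p', 'band': '22.3'}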
Mutations and Polymorphism
Something that is polymorphic has more than one (poly) form (morph). Many genes are polymorphic. Each form of a gene is known as an allele, as described above. Alleles that are common in the population are known as polymorphisms.
A genetic mutation is a permanent alteration in the form of a gene. Some of these alterations are detrimental, others are advantageous, but many have no significant impact on the organism. A polymorphism is a variation that occurs in more than 1% of the population and does not cause disease. We tend to think of mutations as detrimental.
Mutations can be caused in many different ways, including exposure to radiation or mutagenic chemicals, or simple accidents during replication. Mutations cannot be passed to offspring unless they occur in reproductive (sperm and egg) cells. It's unlikely that exposure to radiation or chemicals would ever produce a mutation like those of Spiderman or the Incredible Hulk, but it does make for fun reading.
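A minimal sketch of that 1% rule of thumb, with invented counts; real classification of variants is considerably more nuanced:

# Illustration of the 1% rule of thumb described above; the counts are invented.
def classify_variation(carriers, population, causes_disease):
    frequency = carriers / population
    if causes_disease:
        return "mutation of clinical significance"
    return "polymorphism" if frequency > 0.01 else "rare variant"

print(classify_variation(carriers=4200, population=100000, causes_disease=False))   # polymorphism
print(classify_variation(carriers=12, population=100000, causes_disease=False))     # rare variant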
Describing Genetic Variation
Just as you can "diff" two pieces of source code and build a script to turn one into the other, you can also compare two gene sequences and explain how one differs from the other. In clinical genomics, these "edit scripts" have a standard form, and can be used to describe a particular alteration that hasn't previously been identified. A nomenclature for describing these alterations has been recommended by the Human Genome Variation Society (HGVS) and can be found here: http://www.hgvs.org/mutnomen/recs-DNA.html. The use of a standardized nomenclature for describing gene alterations allows for subsequent review and analysis when new genetic research results become available.
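To make the "diff" analogy concrete, here is a minimal sketch that compares a reference sequence with an observed one and reports substitutions in an HGVS-like style (e.g., c.4T>C for "position 4, T replaced by C"). The real nomenclature also covers insertions, deletions, duplications and more; this sketch handles only same-length substitutions, and the sequences are invented.

def substitutions(reference, observed, prefix="c."):
    """Compare two equal-length sequences and report HGVS-style substitutions."""
    if len(reference) != len(observed):
        raise ValueError("this sketch only handles same-length sequences")
    edits = []
    for position, (ref, obs) in enumerate(zip(reference, observed), start=1):
        if ref != obs:
            edits.append("%s%d%s>%s" % (prefix, position, ref, obs))
    return edits

print(substitutions("GATTACA", "GATCACA"))   # ['c.4T>C']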
Gene Sequences and DNA Markers
In order to identify genes, their alleles, and polymorphisms, we need reference sequences of nucleotides (A, C, G and T), known as DNA markers. A collection of reference sequences has been put together in the GenBank® database maintained by the National Center for Biotechnology Information (NCBI).
SNP
Different kinds of polymorphisms have different names. The simplest is the change of a single nucleotide from one type to another, known as a SNP (for single nucleotide polymorphism) and pronounced "snip". These changes tend to occur in the DNA between genes, in areas that are not functional. They are useful as DNA markers for identifying individuals or related individuals. NCBI maintains a database of SNPs known as dbSNP.
Resources:
Talking Glossary of Genetics, National Human Genome Research Institute, January 1999, available on the web at http://www.genome.gov/glossary.cfm
This is an excellent resource containing simple definitions of genetic terms, and a number of freely available images, including the two used in this article. The chromosome image was modified for this article.
Acknowledgements:
Thanks to Dr. Kevin Hughes of Massachusetts General Hospital for his review and comments on an early draft of this article, and to Scott Bolte (also of GE Healthcare) and Mollie Ullman-Cullere of Partners Healthcare for educating me and providing excellent reference material for this series.
Wednesday, July 2, 2008
Healthcare Revolutions
Wipe out ICD for billing
Why is it that experts in the field of healthcare standards routinely comment on the fact that billing codes are not suitable for providing data useful for clinical care, and yet we are required to report the care provided using billing codes? If we really want to improve healthcare, would it not make sense to use the same measures on both the clinical and billing side? One of the principles of Six Sigma (and similar process improvement initiatives) is that you need to be able to accurately measure the inputs and outputs of a process in order to improve it. Furthermore, having appropriately calibrated measurements is vital to the success of these efforts.
Why have we invested so much time and effort in the US National program1 to promote reference terminologies like SNOMED CT, and yet require the use of a vocabulary originally designed for reporting mortality statistics (and an outdated version at that) so that providers can get paid? Wouldn't it make sense to require that billing be done using clinical codes? Why do we need to spend so much time dealing with two different coding systems? Why should providers be the ones who have to make the conversion from one to the other?
Here's a radical idea. Why don't we require that the values used for billing codes come from a clinical reference vocabulary like SNOMED CT? Furthermore, we could select a reasonable value set from SNOMED CT that would allow clinical users of that vocabulary to roll their SNOMED CT codes up into the billing value set automatically. If, for some arcane reason, you have an absolute need to be able to map to a vocabulary such as ICD-10, then create the billing value set in such a way that the mapping to ICD-10 is also automatable.
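To illustrate the roll-up idea, here is a minimal sketch using an invented is-a hierarchy and invented code values, not actual SNOMED CT content; a real implementation would walk the SNOMED CT relationship tables against a curated billing value set.

# A sketch of "rolling up" clinical codes to a coarser billing value set by
# walking an is-a hierarchy. The hierarchy and code values are invented.
IS_A = {
    "child-code-1": "intermediate-code",
    "intermediate-code": "billing-code-A",
    "child-code-2": "billing-code-B",
}
BILLING_VALUE_SET = {"billing-code-A", "billing-code-B"}

def roll_up(code):
    """Climb the is-a hierarchy until we reach a code in the billing value set."""
    while code is not None:
        if code in BILLING_VALUE_SET:
            return code
        code = IS_A.get(code)
    return None   # no billable ancestor; flag for manual review

print(roll_up("child-code-1"))   # billing-code-A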
One of the advantages of this revolution would be to accelerate the adoption of clinical reference vocabularies, as recommended by the NCVHS, the Consolidated Health Informatics Initiative, and ANSI/HITSP. Another potential advantage would be to increase the value of claims data to providers. A third rationale is that the "instruments" used to measure the practice of care and the cost of care would be calibrated on the same scale.
Create a Healthcare Price Index
We note that consumers do not have a good way to understand the costs of healthcare, either direct or indirect. Yet we do have a way to compare fuel economy for different automobiles, and have had ways to compare the cost of living in two different cities for many years. Why can't we create a basket of healthcare goods and services that meet the needs of various healthcare constituencies, and use that as a standard measure?
Different healthcare providers could report their costs for each of the items in that basket of goods, and different insurers could also describe what the consumer's payroll deductions and out of pocket costs would be for goods and services. We would be able to easily determine which plans provided better value based on our own needs for items in that basket, and be able to compare the value given by various healthcare providers.
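To show how such an index might work in practice, here is a minimal sketch with an invented basket and invented prices; the real work, of course, is agreeing on the basket and collecting honest prices.

# A sketch of the "basket of healthcare goods" idea: the items, weights, and
# prices are all invented for illustration.
BASKET = {"office visit": 4, "generic prescription": 12, "screening lab panel": 2}

def basket_cost(prices, basket=BASKET):
    """Total annual cost of the basket at a given provider's or plan's prices."""
    return sum(prices[item] * quantity for item, quantity in basket.items())

provider_a = {"office visit": 95.0, "generic prescription": 12.0, "screening lab panel": 140.0}
provider_b = {"office visit": 120.0, "generic prescription": 8.0, "screening lab panel": 110.0}

print(basket_cost(provider_a))   # 804.0
print(basket_cost(provider_b))   # 796.0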
This may be radical, but it shouldn't be revolutionary. It just applies some of the same principles we've been applying to the economy to the economics of healthcare. I'll bet we could hire a blue ribbon panel to develop the basket of goods for the cost of say, two or three useful new terminology definitions.
Determine which jobs we need to eliminate
This breakdown shows that most of the costs in healthcare are labor. The most productive way to take costs out of the healthcare system would be to cut labor costs. Obviously it makes more sense to cut the most expensive labor costs before the lesser costs. So, the question to answer becomes, who are we going to get rid of, and what are we going to replace them with?
This question is so radical that I'd like to hear your own revolutionary thoughts...
1 You'll see me describe the ONC/AHIC/HITSP/CCHIT/NHIN/HISPC activities as the US National Program from time to time, usually when I've recently had to describe it to someone from outside the US.
Monday, June 30, 2008
Public Comment Period Begins for HITSP Documents
The Healthcare Information Technology Standards Panel (HITSP) announces the opening of a public comment period for the following HITSP documents:
- HITSP/RDSS56 – HITSP Remote Monitoring Use Case Requirements, Design and Standards Selection
- HITSP/RDSS57 – HITSP Patient-Provider Secure Messaging Use Case Requirements, Design and Standards Selection
- HITSP/RDSS58 – HITSP Personalized Health Care Use Case Requirements, Design and Standards Selection
- HITSP/RDSS59 – HITSP Consultation and Transfers of Care Use Case Requirements, Design and Standards Selection
- HITSP/RDSS60 – HITSP Immunizations and Response Management Use Case Requirements, Design and Standards Selection
- HITSP/RDSS61 – HITSP Public Health Case Reporting Use Case Requirements, Design and Standards Selection
The public comment period will be open from Friday June 27th until Close of Business, Friday, July 25th. HITSP members and public stakeholders are encouraged to review these documents and provide comments through the HITSP comment tracking system. The documents and the HITSP comment tracking system are accessible through http://www.hitsp.org/ by clicking on the “Public Review and Comment” link on the right side of the screen.
All Panel and public comments received on these documents will be reviewed and dispositioned by the HITSP Technical Committees. Comments will be considered in the preparation of the Interoperability Specifications and associated constructs.
Friday, June 27, 2008
Clinical Decision Support
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. – Laurence J. Peter
Clinical decision support is of great interest right now in the United States. In September of 2007, the American Health Information Community released prototypes of six use cases, almost all of which call for clinical decision support in some way. Earlier this week, I spent about 45 minutes on the phone explaining the current state of clinical decision support standardization to a contractor working for ONC. It was an important call, even though unplanned, and so I gave up about half of the unscheduled time I had left in the day to the topic.
One reason for the interest in clinical decision support is that a number of activities can be streamlined through the use of these tools, improving care and reducing healthcare spending. These tools enable clinicians to access relevant information to provide safe and effective care. Having access to relevant information is critical. The volume of knowledge necessary to provide safe and effective care to patients is such that no human can know all that is necessary, and this knowledge is growing at an extremely rapid pace. MEDLINE, a database of clinical research citations provided by the US National Library of Medicine, adds hundreds of thousands of citations annually.
Another reason for interest in clinical decision support is that these tools might enable automation of mandated reporting under various State and Federal regulations. A great deal of paper flows inside healthcare facilities in order to keep up with these reporting requirements. It takes a great deal of manual effort to keep this information flowing. Even when the reporting is done electronically, the information does not often originate from electronic sources, and must be manually gathered.
As the opening quote suggests, clinical decision support is an incredibly complex problem. Before I dive into the details, I'd first like to describe what clinical decision support is. A common interpretation is that it involves invoking an application (or interface or service), providing it with data, and having it invoke some action such as alerting a provider or returning a treatment plan or suggestion for care (as in the case of drug interaction tools). This is just one of many different ways that clinical decision support is implemented.
Clinical decision support encompasses a variety of tools that assist healthcare providers in providing safe and effective care to patients. These tools take a variety of forms, including flowsheets, assessment instruments, drug interaction knowledge bases, vaccine forecasting tools, genetic risk assessment tools, chronic disease management solutions, population stratification tools, and quality measures.
- Flowsheets are designed to make the most relevant information available to healthcare providers quickly and easily to enable clinical decision-making.
- Assessment instruments support clinical decision-making by gathering information supplied by a provider or the patient, and using that information to compute an overall score that helps providers implement appropriate care plans (a minimal sketch of such a score follows this list).
- Drug interaction databases are probably the most common example of clinical decision support tools. These databases keep track of interactions between a drug and other drugs, allergies or conditions, and report potential difficulties when providers are ordering medications.
- Vaccine forecasting tools take information about a patient's current allergies, conditions, medications, and vaccination history, and propose plans for which immunizations should be received and when.
- Chronic disease management solutions collect data on patients enrolled in a chronic disease management program, either through remote monitoring, telephone interviews, or electronic submissions from portals or provider EHR systems. These solutions review the gathered information and suggest actions to facilitate management of the disease, including the scheduling of appointments, patient contacts, et cetera.
- Genetic risk assessment tools gather information about a patient, including their medical history, family history, and genetic test results, and assess the patient's risk for genetically related diseases.
- Other tools allow populations to be ranked for disease risk to identify groups that may need alternate levels of care.
- Quality measures are clinical decision support tools that can be used to identify areas that need attention in a clinical practice.
All of these tools provide clinical decision support. Some are applied in the context of a single patient, while others apply to populations.
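Here is the assessment instrument sketch promised above. The questions, weights, and cut-off are invented for illustration; real instruments (fall-risk scores, depression screens, and the like) publish their own scoring rules.

# An invented three-question assessment; real instruments define their own
# questions, scoring weights, and cut-off values.
QUESTIONS = {
    "unsteady walking": 2,
    "prior fall in the last year": 3,
    "takes four or more medications": 1,
}

def assessment_score(answers):
    """answers maps each question to True/False as supplied by provider or patient."""
    return sum(weight for question, weight in QUESTIONS.items() if answers.get(question))

def suggested_plan(score):
    return "refer for full evaluation" if score >= 4 else "routine follow-up"

answers = {"unsteady walking": True, "prior fall in the last year": True}
score = assessment_score(answers)
print(score, suggested_plan(score))   # 5 refer for full evaluation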
No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be. – Isaac Asimov
As the body that is responsible for selecting standards for the AHIC use cases, HITSP needs to take into account:
- existing standards,
- current works in progress,
- the impact of selected standards within and across use cases.
In March of this year, the HITSP Population Perspective TC and Care Management and Health Records TC hosted a Clinical Decision Support day at our face-to-face meeting. During that day, we heard from several experts in the area of clinical decision support, including developers of some of the clinical decision support standards described below. As a result of these discussions we learned a great deal about clinical decision support, including some of the information already provided in this blog.
In our model, we viewed clinical decision support as a black box into which flow three different kinds of inputs and out of which come several different types of outputs. The inputs and outputs are of interest to HITSP because they represent areas where standards are needed to support interoperability. The three different inputs are:
- Algorithms, or knowledge about how to make inferences or assertions based on existing instance or world knowledge.
- Instance data describing the specific case that is being addressed by the clinical decision support application.
- Ontological or "world knowledge", representing facts about the world, such as what drugs interact badly, or how body parts are related, or the relationships between genes and diseases.
Standards exist for each of these three areas, but more work is needed to make them implementable in the healthcare environment. The puzzle is still missing a few pieces.
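A minimal sketch of that black-box view, in Python rather than in any of the standards discussed below; the interaction table and patient data are invented.

# A sketch of the "black box": three kinds of inputs, one set of outputs.
# All names here are illustrative, not from any of the standards below.
def clinical_decision_support(algorithm, instance_data, world_knowledge):
    """algorithm: callable encoding the decision logic
       instance_data: facts about the specific patient or case
       world_knowledge: reference facts such as drug-drug interactions"""
    return algorithm(instance_data, world_knowledge)

# A trivial algorithm: flag any ordered drug that interacts with a current one.
def interaction_check(instance_data, world_knowledge):
    alerts = []
    for drug in instance_data["ordered"]:
        for current in instance_data["current"]:
            if frozenset((drug, current)) in world_knowledge["interactions"]:
                alerts.append("%s interacts with %s" % (drug, current))
    return alerts

facts = {"interactions": {frozenset(("warfarin", "aspirin"))}}
case = {"ordered": ["aspirin"], "current": ["warfarin", "lisinopril"]}
print(clinical_decision_support(interaction_check, case, facts))   # ['aspirin interacts with warfarin']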
Algorithms
Several standards exist to support clinical decision support algorithms that need to produce yes/no answers. More work is needed in this area to integrate those standards into a set of coherent tools that can be used together.
Arden Syntax
Arden Syntax is perhaps the oldest and most well known standard used for clinical decision support. Arden Syntax originated in 1989 at the Arden Homestead. In 1992, version 1.0 became an ASTM standard. It transitioned to HL7 and version 2.0 became an ANSI and HL7 standard in 1999. The HL7 Clinical Decision Support workgroup is responsible for maintaining the standard. The current version of Arden Syntax is 2.7, which passed HL7 Ballot in May of this year.
Arden Syntax is designed for use by clinicians with little or no formal training in programming. The benefit of this design is that it makes it easy for clinicians to verify the clinical accuracy of a particular medical logic module that is written in that language. Hundreds of medical logic modules have been created for Arden Syntax and are available from the CPMC Medical Logic Module Library.
A key problem in Arden Syntax is known in clinical decision support standards circles as the "curly brace" problem. In short, Arden does not define how an implementation of the language integrates with the healthcare application using it. Arden Syntax leaves that integration to statements that appear within pairs of curly braces { and } inside a medical logic module.
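One way to picture the problem: everything inside the curly braces is institution-specific, so each site must supply its own binding from those references to its local data store. The sketch below is not Arden Syntax; it just mimics that binding layer in Python, with an invented reference name and an invented record layout.

# Not Arden Syntax -- just an illustration of why the "curly brace" content is
# the non-portable part of a medical logic module. The reference string and
# the local lookup below are both invented.
LOCAL_BINDINGS = {
    # What "{last serum potassium}" means differs at every institution.
    "last serum potassium": lambda record: record["labs"]["K"][-1],
}

def resolve(reference, record):
    return LOCAL_BINDINGS[reference](record)

record = {"labs": {"K": [4.1, 3.2]}}
if resolve("last serum potassium", record) < 3.5:
    print("alert: low potassium")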
GELLO
GELLO has its ancestry in other expression languages and tools for clinical decision support, including GLIF and GLEE. GELLO is an object-oriented language designed to:
- Extract information from electronic health records,
- Compute decision criteria,
- and abstract or derive summary information.
GELLO is based on the Object Constraint Language developed originally by IBM and now part of the UML Standard. GELLO is not a proper subset of OCL, nor is it a pure extension as it explicitly leaves out some capabilities of OCL. However, developers familiar with OCL should find GELLO syntax readily accessible.
GELLO, like OCL, is a declarative rather than a procedural language. The simplest way to explain a declarative language is by example. PROLOG and SQL are declarative languages: you describe the result you want to the SQL or PROLOG interpreter, and it decides the best way to obtain that result. The same is true in OCL and GELLO. Declarative programming requires a different way of thinking when expressing a decision problem; asked to describe how to make a decision, most people express the decision-making process procedurally.
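To illustrate the difference in plain Python (neither GELLO nor OCL), compare the two styles below; the patient list is invented.

# The same question asked two ways. The patient list is invented.
patients = [
    {"name": "A", "age": 67, "on_anticoagulant": True},
    {"name": "B", "age": 45, "on_anticoagulant": False},
    {"name": "C", "age": 72, "on_anticoagulant": True},
]

# Procedural: spell out *how* to build the answer, step by step.
result = []
for p in patients:
    if p["age"] >= 65 and p["on_anticoagulant"]:
        result.append(p["name"])

# Declarative (in spirit): describe *what* you want and let the language work
# out the steps -- the style OCL, GELLO, SQL and PROLOG encourage.
result_declarative = [p["name"] for p in patients if p["age"] >= 65 and p["on_anticoagulant"]]

print(result == result_declarative)   # True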
The GELLO language is sufficiently different from OCL that it requires a grammar to be specified. HL7 previously published a grammar for the GELLO language. However, that grammar has several defects that need to be corrected, and ambiguities in either the published grammar or the GELLO language specification have resulted in differing implementations of the language.
The HL7 Clinical Decision Support workgroup is presently working on a new release of the grammar intended to correct these deficiencies. In this context there are several discussions within that work group on whether GELLO should be defined as a subset of OCL, or defined in a way that would allow it to be mapped to OCL. One benefit of this approach might be an increase in the availability of GELLO language implementations, since several implementations of OCL already exist, including some that are open source.
There is also an open source project developing GELLO tools (see http://www.gello.org/); however, no applications or software are available for download as of today.
Like Arden Syntax, GELLO does not explicitly define how GELLO is integrated with an electronic health record. However, GELLO can easily be integrated with any HL7 RIM-based data model due to the object-oriented nature of the language. A simple data model is given as an example in Annex D of the HL7 GELLO language specification. GELLO is intended to be used with something called the vMR or Virtual Medical Record, which is described in more detail in the next section.
Instance Data
With regard to instance data, one of the problems that needs to be resolved is how to represent the information needed to support clinical decision support applications, and how to exchange that information between applications needing to provide clinical decision support services.
Virtual Medical Record
In 2001, Johnson, Tu, Musen and Purves published a paper titled "A virtual medical record for guideline-based decision support", which can be found in the Proceedings of the AMIA Symposium. This often-referenced paper describes the concept of the Virtual Medical Record, or vMR, that can be used to support guideline-based decision support. The concept of the virtual medical record is referenced so often in clinical decision support circles that I found it surprising that HL7 hadn't already defined it as a standard. However, in January of this year, the HL7 Clinical Decision Support work group submitted a project to create an implementation guide for the virtual medical record, and expects to complete this work in the months ahead. This project will recommend a data model to use with the vMR, based on existing HL7 standards. One proposal for this model has already been made and recommended by several members of HL7 and co-chairs of various HL7 working groups. That recommendation is to use the HL7 Care Record and Care Record Query draft standards for trial use (a DSTU is essentially a "prototype" standard, released for implementation testing).
The Patient Care Coordination domain of Integrating the Healthcare Enterprise recently released an update to the Query for Existing Data (QED) profile, which makes use of the aforementioned standards, and a new profile known as the Care Management (CM) profile, which uses those same standards.
HL7 Continuity of Care Document
One of the things we've learned recently about the HL7 Continuity of Care Document, or CCD, is that templates can be extremely powerful. Having defined a few dozen templates in the CCD, we now find those same templates appearing in numerous other standards and implementation guides from HL7, IHE, Continua, and the CDA for Common Document Types projects. The benefit to clinical decision support of this proliferation of CCD templates is that the same templates are being reused over and over again to record the same kinds of information. That degree of consistency is absolutely necessary when you want to apply decision support to information that may be coming from different healthcare IT applications deployed all over the country.
The IHE Patient Care Coordination Technical framework uses the CCD templates as the basis for much of its material. Where the CCD has defined a particular template for use, IHE has either used it directly, or used and further constrained it in its technical framework. One profile that makes surprising use of CCD templates is the Query for Existing Data profile described next. The Care Management and Health Records TC in ANSI/HITSP is looking at a similar approach in the HITSP component specifications, as that committee is already responsible for four specifications that make use of the IHE PCC profiles to exchange content.
Query for Existing Data
The IHE Query for Existing Data integration profile explains how to use the HL7 Version 3.0 Care Record and Care Record Query DSTUs to query an EMR for clinical information. This profile also makes use of the Continuity of Care Document templates, enforcing the rules of those templates on the clinical statements returned by the query transaction. The QED profile gives a clinical decision support system inside an enterprise the ability to query that enterprise's EHR system for specific clinical data.
Care Management
The IHE Care Management (CM) integration profile uses the same standards as QED, but for a different purpose. In this integration profile, the HL7 Version 3.0 Care Record messages play an extremely important role. They allow systems that IHE describes as "Guideline Managers" to publish parts of a guideline: specifically, the list of clinical data that downstream systems are interested in receiving. These descriptions are given in enough detail that EHR applications can automatically determine what information updates need to be sent and when. These information updates can also be provided using existing HL7 Version 2 messages to support legacy applications.
HL7 Decision Support Service
The HL7 Service Oriented Architectures workgroup is presently working on implementations of the Decision Support Service (DSS) Functional Model DSTU. One possible implementation of this service would use the HL7 Care Record DSTU, mentioned several times above, to generate a vaccine forecast for a patient. The requester would publish a Care Record Event containing the information necessary to generate a vaccine forecast. The response from the DSS would be another Care Record Event containing two pieces of information:
- A validated list of immunizations
- A care plan containing a proposed vaccination schedule for the patient.
This follows the general structure needed for many clinical decision support problems (e.g., drug interaction checking). The care plans produced by the service can describe different treatment alternatives, and provide the reasons for the suggested treatments and the goals and risks associated with each.
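As an illustration of the request/response pattern (not of the actual Care Record message structure), a forecast for a hypothetical three-dose series might be computed like this; the series definition, interval, and dates are all invented.

from datetime import date, timedelta

# An invented three-dose series with a minimum interval between doses; real
# vaccine forecasting rules are far more involved (ages, contraindications,
# live-vaccine spacing, and so on).
SERIES = {"doses": 3, "minimum_interval_days": 28}

def vaccine_forecast(doses_given, today):
    """doses_given: dates of doses already received (the 'validated list').
       Returns the proposed care plan: the dates remaining doses are due."""
    plan = []
    due = max(doses_given) + timedelta(days=SERIES["minimum_interval_days"]) if doses_given else today
    for _ in range(SERIES["doses"] - len(doses_given)):
        plan.append(max(due, today))
        due = plan[-1] + timedelta(days=SERIES["minimum_interval_days"])
    return plan

print(vaccine_forecast([date(2008, 5, 1)], today=date(2008, 6, 27)))   # doses due 2008-06-27 and 2008-07-25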
World Knowledge
Ontological or world knowledge often appears in terminology systems, such as SNOMED CT and RxNORM, which require frequent updates as new world knowledge becomes available. Standards are needed to support the exchange and update of this information on a routine rather than exceptional basis.
IHE ITI Sharing Value Sets
Enter the Sharing Value Sets (SVS) profile supplement published recently by IHE for public comment. This profile is intended to provide a mechanism for applications needing value sets to obtain them from the repositories that maintain them, allowing common value sets to be maintained in central locations. One use for value sets in clinical decision support is to identify specific sets of values that describe an aggregate concept that a clinical decision support system needs to act upon. An example would be the set of values used to identify reportable or notifiable conditions in a reference vocabulary such as SNOMED CT. In this example, a limited subset of SNOMED CT codes could be maintained as a value set identifying the conditions that require reports to be made to state or Federal public health officials. The benefit to interoperability is that these value sets can then be accessed by a large number of application systems as needed.
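A minimal sketch of how a decision support system might use such a shared value set; the value set identifier, the repository lookup, and the codes are all invented rather than real SNOMED CT content or the actual SVS transaction.

# A sketch of value set use in decision support; identifiers and codes are invented.
def retrieve_value_set(value_set_id):
    """Stand-in for the repository lookup an SVS-style profile would provide."""
    repository = {
        "reportable-conditions": {"12345678", "23456789", "34567890"},
    }
    return repository[value_set_id]

def is_reportable(condition_code):
    return condition_code in retrieve_value_set("reportable-conditions")

if is_reportable("23456789"):
    print("generate a public health case report")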
Summary
There is a great deal of activity going on across various standards organizations in the area of clinical decision support. Not all of the pieces are ready today, but it is no accident that many of the pieces of the puzzle are starting to fit into place. Those of us who spend a great deal of time in healthcare standards are paying attention, and making sure that the relevant standards bodies do as well.
There is still a great deal of work that needs to be done in the area of Clinical Decision Support. Future discussions on this topic within AHIC and ONC should continue to include those of us who have been working on these standards.
Let's get coordinated!
-- Keith
P.S. In the interest of full disclosure, I've been intimately involved with many of the aforementioned standards activities in IHE and ANSI/HITSP, and to some degree also in HL7. I am the principal editor of the IHE Query for Existing Data and Care Management profiles, have been involved in many of the discussions on GELLO and the Virtual Medical Record within HL7, and co-chair the CMHR TC in ANSI/HITSP.