One of the benefits of working with other experts is the ability to cross-link the information they generate with the information that you generate, saving time and effort. Today, John Moehrke writes on his blog about why you shouldn't try to use audit logs as disclosure logs. I'm going to save myself a longer post and refer you there, because what he has to say about ATNA and Accounting of Disclosures is important before you read further.
This is another case of the "If I had a hammer" syndrome that I posted on 18 months ago, but in this case the tool is a crescent wrench and the object it is being pounded on is a Phillips screw. If you really compare the requirements of an Audit Log and of a Disclosure Log, you will see that they share almost none of the same business requirements, and only about a 60-70% overlap in the information requirements. Yes, there is a common core there, but that doesn't make one equivalent to, or a superset of, the other. Some of the requirements also conflict with each other. An Audit Log almost certainly includes more details used for forensic investigations that would never be released in a Disclosure Log.
So, what we have identified here is two different use cases with some overlapping requirements. This is a pretty common phenomenon in computer architecture. The occurrence of an overlap may point to a common ancestor in the analysis and design, but it does not imply equivalence or supersetting of requirements, nor does it need to. Sometimes overlaps are interesting and useful, and this one certainly is, but not nearly so much as some would expect.
Monday, November 30, 2009
Addresses in CDA
Writing a book about a standard is different from implementing it. When you are implementing, you can get away with ignoring parts of the standard that aren't of concern to you. However, when writing a book about it, you need to cover details that wouldn't normally concern you, at least to explain to your audience why they do or don't matter, and when those rubrics apply. Somewhere in the middle of that is writing implementation guides. Because I'm now writing a book, I'm rereading the standard, and discovering things I didn't know. I'll be reporting these discoveries from time to time.
Over the weekend, I discovered something about addresses in CDA that I didn't previously know. Now I have an even deeper understanding (and perhaps some remaining confusion) about the AD data type.
There are about 27 different kinds of information that can appear in an address data type. They are all different kinds of address parts (ADXP) in the Version 3 Data Types standard. I knew that.
What I didn't know was the difference between <streetAddressLine> and <deliveryAddressLine>, and some fine details about the XML representation of these parts. You've probably never seen <deliveryAddressLine> referenced in any implementation guide, but it should have been. The difference between a <streetAddressLine> and a <deliveryAddressLine> is the difference between a physical street address and a PO Box, rural route, or other sort of postal delivery address. A dearth of examples probably contributes to my lack of knowledge.
The Version 3 data types standard (both release 1 and 2) represents the address data type (AD) as a list of address parts (ADXP), which can repeat any number of times. Practically, some of these should only appear once (e.g., postal code, city, state, country or county), while others could appear multiple times (e.g., streetAddressLine or deliveryAddressLine). There is also a hierarchy of address part types, which seems to imply a whole/part relationship between elements of the hierarchy. For example, you would imagine that a <streetAddressLine> could contain a <streetName> element. However, the data types schema doesn't allow the content of a <streetAddressLine> to contain a <streetName> element. If you are going to parse any portion of the street address in detail, you cannot wrap the parsed elements in a <streetAddressLine> element.
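If you've never seen the two side by side, here's a minimal sketch (in Python, using the standard library's ElementTree) that builds one AD with parsed street parts and one using <deliveryAddressLine> for a PO Box. The address values are invented for illustration, and note how the parsed parts sit as siblings rather than inside a <streetAddressLine>:

```python
import xml.etree.ElementTree as ET

NS = "urn:hl7-org:v3"
ET.register_namespace("", NS)

def adxp(parent, part, text):
    """Append one address part (ADXP) element to an AD element."""
    el = ET.SubElement(parent, f"{{{NS}}}{part}")
    el.text = text
    return el

# A physical street address: the parsed parts (houseNumber, streetName)
# are siblings of city/state/postalCode -- the schema does not allow them
# to be nested inside a <streetAddressLine>.
physical = ET.Element(f"{{{NS}}}addr")
adxp(physical, "houseNumber", "17")
adxp(physical, "streetName", "Daws Rd.")
adxp(physical, "city", "Blue Bell")
adxp(physical, "state", "MA")
adxp(physical, "postalCode", "02368")

# A postal delivery address: a PO Box or rural route belongs in
# <deliveryAddressLine>, not <streetAddressLine>.
postal = ET.Element(f"{{{NS}}}addr")
adxp(postal, "deliveryAddressLine", "PO Box 123")
adxp(postal, "city", "Blue Bell")
adxp(postal, "state", "MA")
adxp(postal, "postalCode", "02368")

print(ET.tostring(physical, encoding="unicode"))
print(ET.tostring(postal, encoding="unicode"))
```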
The long and short of it is that I learned something new, and now, hopefully, you have too.
Wednesday, November 25, 2009
Turducken
Thinking about what it takes to get healthcare information communicated securely from one point to another reminded me of a Thanksgiving feast I still haven't tried yet.
A Turducken is a chicken, stuffed inside of a duck, stuffed inside of a turkey, and then cooked for a good long time. I'm told by people who've had them that they are:
- Wonderful to eat
- Complicated to prepare
My friends' third observation on Turducken was that it was worth the effort, and that parallels my own experience with standards based health information exchanges.
Have a happy Thanksgiving for those of you in the US, and for those of you who are not, have a good week.
UPDATE: My wife pointed out to me after having heard about this morning's post that you can purchase a Turducken already prepared. The same is true for the multiple standards put together by HITSP. See OHF, IPF and CONNECT for some examples. Another excellent comparison.
Tuesday, November 24, 2009
Looking for CDA Stories
Writing on "The CDA Book" has started. It is amazing the amount of non-writing writing that you have to do when you start a book. The amount of reading you have to do is also pretty mind-numbing. Thanks to the web, at least some of that material is readily accessible, even 10 years after the fact.
I'm looking for CDA stories right now, especially stories of the early days of CDA, PRA, or KEG. If you have any of these you'd like to share, please let me know. I'm also interested in real-world, "today" CDA stories. If you have good (or bad) stories about CDA implementations today, I'd like to hear them. Finally, I'm curious as to whether I need to spend any time in the book on "The great SDO debacle of 2006". What are your thoughts?
Friday, November 20, 2009
Simplification
My how times change. In 2005, the largest problem facing healthcare IT interoperability in the US was the harmonization of standards. ANSI/HITSP was formed in October of 2005 with the goal of addressing this particular issue. It is now four years (and a month) since HITSP was created, and we no longer talk about the "Harmonization" problem. The problem facing us now is the "Simplification" of standards. I'm glad to see that we are moving on to the next problem, but I hope that in so doing, we don't "unsolve" the previous one.
HITSP's great success in achieving its contract objectives goes largely unnoticed in this new phase because its objectives no longer address the most pressing issue. I've heard several complaints about how the HITSP specifications are too complex, too long, and not directive enough for implementors. These are valid complaints if you are trying to use these specifications as "implementation guides". They were designed to specify just enough to address the harmonized use of standards; implementation guides are much more than that. Many of us involved in HITSP have argued that we need to find a better way to communicate. However, the development of implementation guides requires a great deal more resources than were made available to HITSP under the ONC contract for harmonization, and the scope of the harmonization contract did not include that requirement.
To put this all into perspective, I did a little bit of research into what we (US taxpayers) are paying for with respect to implementation guides for healthcare standards. See http://www.fedspending.org/ for one place where you can dig up some of this data for yourself. My own survey was anything but scientific, but based on my findings, I will assert that a 50-100 page implementation guide seems to cover 2-4 transactions, costs anywhere from $175,000 to $350,000 to develop, and is usually done in anywhere from 6 to 15 months. The costs don't seem to scale up linearly with complexity either: twice as much complexity results in more than twice the cost.
So, how do we move forward from here? The next step needs to be forward, towards simplification and education, yet not reject the harmonization that we just spent four-plus years and tens of millions of dollars of both public and private funds addressing.
There are some concrete actions that we can take:
- Make simplification of standards an important topic for SDOs and profiling organizations to address.
The ebXML reference information model and the HL7 RIM are great information models (of meaning), but what we are hearing from the "Internet" crowd is that we need to be closer to how the information is modeled for use (see my ramblings on Synthesis). So, how can we simply go from meaning to use and back again?
- Develop tools to make simplification easier (tool development is cheaper in the long run than just throwing more labor at the problem).
The HL7 Templates workgroup is in the process of starting a project to build a templates registry. Imagine having an information resource that would pull together all the templates that you need to implement a HITSP construct in one place. One could readily use that resource to more quickly develop complete implementation guides that wouldn't have some of the challenges that our current HITSP specifications face.
- Move away from linear documents for specifications.
We are living in the information age; it's about time we moved away from linear documents and the constraints that they place upon us. Developers want richly linked media to help them find what they are looking for. The HL7 V3 Ballot site is an overdone example of what I'm talking about, but it is surely better than a 100 page document at delivering the necessary content. One of the biggest challenges that HITSP faces is how to take the content that we have now in documents and turn it into something that would allow us to put together a real implementation guide.
- Figure out how to move away from single-SDO based interchange and vocabulary models
One thing that the world wide web and XML have taught us is the power of structured information that can be easily transformed. We have 5-6 different key standards for communication in healthcare, only two of which are in XML. We have some 7-9 different key vocabularies, with very similar high level models, yet no common terminology interchange format. Can we provide a common model across the space of healthcare for both interchange and terminology standards?
Wednesday, November 18, 2009
ISO+ to UCUM Mapping Table
Please note the disclaimer: I am not an expert on ISO+, UCUM or laboratory units in general. Therefore, you should validate this data before using it in any clinical applications.
ISO+ Units needing a Mapping
ISO+ | UCUM | ISO+ | UCUM | ISO+ | UCUM | ISO+ | UCUM |
---|---|---|---|---|---|---|---|
(arb_u) | [arb'U] | 10.un.s/(cm5.m2) | dyn.s/(cm5.m2) | iu/mL | [iU]/mL | mL/hr | mL/h |
(bdsk_u) | [bdsk'U] | 10.un.s/cm5 | dyn.s/cm5 | k/watt | K/W | mm(hg) | mm[Hg] |
(bsa) | {bsa} | cm_h20 | cm[H20] | kg(body_wt) | kg{body_wt} | mm/hr | mm/h |
(cal) | cal | cm_h20.s/L | cm[H20].s/L | kg/ms | kg/m2 | mmol/(8.hr.kg) | mmol/(8.h.kg) |
(cfu) | {cfu} | cm_h20/(s.m) | cm[H20]/(s.m) | kh/h | kg/h | mmol/(8hr) | mmol/(8.h) |
(drop) | [drp] | dba | dB[SPL] | L/(8.hr) | L/(8.h) | mmol/(kg.hr) | mmol/(kg.h) |
(ka_u) | [ka'U] | dm2/s2 | REM | L/hr | L/h | mmol/hr | mmol/h |
(kcal) | kcal | g(creat) | g{creat} | lb | [lb_av] | ng/(8.hr) | ng/(8.h) |
(kcal)/(8.hr) | kcal/(8.h) | g(hgb) | g{hgb} | ng/(8.hr.kg) | ng/(8.h.kg) | ||
(kcal)/d | kcal/d | g(tot_nit) | g{tit_nit} | m/s | ms/s | ng/(kg.hr) | ng/(kg.h) |
(kcal)/hr | kcal/h | g(tot_prot) | g{tot_prot} | mas | Ms | ng/hr | ng/h |
(knk_u) | [knk'U] | g(wet_tis) | g{wet_tis} | meq/(8.hr) | meq/(8.h) | osmol | osm |
(mclg_u) | [mclg'U] | g.m/((hb).m2) | g.m/{hb}m2 | meq/(8.hr.kg) | meq/(8.h.kg) | osmol/kg | osm/kg |
(od) | {od} | g.m/(hb) | g.m/{hb} | meq/(kg.hr) | meq/(kg.h) | osmol/L | osm/L |
(ph) | pH | g/(8.hr) | g/(8.h) | meq/hr | meq/h | pa | pA |
(ppb) | [ppb] | g/(8.kg.hr) | g/(8.kg.h) | mg/(8.hr) | mg/(8.h) | pal | Pa |
(ppm) | [ppm] | g/(kg.hr) | g/(kg.h) | mg/(8.hr.kg) | mg/(8.h.kg) | sec | '' |
(ppt) | [pptr] | g/hr | g/h | mg/(kg.hr) | mg/(kg.h) | sie | S |
(ppth) | [ppth] | in | [in_us] | mg/hr | mg/h | ug(8hr) | ug(8.h) |
(th_u) | [todd'U] | in_hg | [in_i'Hg] | miu/mL | m[iU]/mL | ug/(8.hr.kg) | ug/(8.h.kg) |
/(arb_u) | /[arb'U] | iu | [iU] | mL/((hb).m2) | mL/{hb}.m2 | ug/(kg.hr) | ug/(kg.h) |
/(hpf) | [HPF] | iu/d | [iU]/d | mL/(8.hr) | mL/(8.h) | ug/hr | ug/h |
/(tot) | /{tot} | iu/hr | [iU]/h | mL/(8.hr.kg) | mL/(8.h.kg) | uiu | u[iU] |
/iu | /[iU] | iu/kg | [iU]/kg | mL/(hb) | mL/{hb} | ||
10*3(rbc) | 10*3{rbc} | iu/L | [iU]/L | mL/(kg.hr) | mL/(kg.h) | ||
10.L | 10.L/(min.m2) | iu/min | [iU]/min | mL/cm_h20 | mL/cm[H20] |
Units that are the same in ISO+ and UCUM | |||||||
---|---|---|---|---|---|---|---|
% | bar | g/L | L.s | mg | mmol/(kg.d) | ng/L | ueq |
/kg | Bq | g/m2 | L/(min.m2) | mg/(kg.d) | mmol/(kg.min) | ng/m2 | ug |
/L | cel | g/min | L/d | mg/(kg.min) | mmol/kg | ng/min | ug/(kg.d) |
/m3 | Cm | Gy | L/kg | mg/d | mmol/L | ng/mL | ug/(kg.min) |
/min | cm2/s | h | L/min | mg/dL | mmol/m2 | ng/s | ug/d |
/m3 | d | hL | L/s | mg/kg | mmol/min | nkat | ug/dL |
/min | dB | J/L | lm | mg/L | mol/(kg.s) | nm | ug/g |
/mL | deg | kat | m | mg/m2 | mol/kg | nmol/s | ug/kg |
1/mL | eq | kat/kg | m/s2 | mg/m3 | mol/L | ns | ug/L |
10*12/L | eV | kat/L | m2 | mg/min | mol/m3 | Ohm | ug/m2 |
10*3/L | kg | m2/s | mL | mol/s | Ohm.m | ug/min | |
10*3/mL | fg | kg.m/s | m3/s | mL/(kg.d) | mosm/L | pg | ukat |
10*3/mm3 | fL | kg/(s.m2) | mbar | mL/(kg.min) | ms | pg/L | um |
10*6/L | fmol | kg/L | mbar.s/L | mL/(min.m2) | mV | pg/mL | umol |
10*6/mL | g | kg/m3 | meq | mL/d | pkat | umol/d | |
10*6/mm3 | g.m | kg/min | meq/(kg.d) | mL/kg | pm | umol/L | |
10*9/L | g/(kg.d) | kg/mol | meq/(kg.min) | mL/m2 | ng | pmol | umol/min |
10*9/mL | g/(kg.min) | kg/s | meq/d | mL/mbar | ng/(kg.d) | ps | us |
10*9/mm3 | g/d | kPa | meq/kg | mL/min | ng/(kg.min) | pt | uV |
10.L/min | g/dL | ks | meq/L | mL/s | ng/d | Sv | V |
a/m | g/kg | L | meq/min | mm | ng/kg | t | Wb |
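For what it's worth, applying a mapping table like this in code amounts to a simple lookup: translate the units that need a mapping, and pass through the ones that are already the same in both systems. Here's a minimal sketch in Python with just a handful of the rows above hard-coded; a real implementation would load the full table from a file (and validate it first, per the disclaimer above):

```python
# A tiny excerpt of the ISO+ -> UCUM mapping table above; load the full
# table in real use.
ISO_TO_UCUM = {
    "mm(hg)": "mm[Hg]",
    "iu/mL": "[iU]/mL",
    "mL/hr": "mL/h",
    "g(creat)": "g{creat}",
    "(ph)": "pH",
}

def to_ucum(iso_unit: str) -> str:
    """Map an ISO+ unit string to UCUM; units that are the same in both
    (see the second table) pass through unchanged."""
    return ISO_TO_UCUM.get(iso_unit, iso_unit)

assert to_ucum("mm(hg)") == "mm[Hg]"   # needs a mapping
assert to_ucum("mg/dL") == "mg/dL"     # already valid UCUM
```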
If you have corrections or comments, please post a response below. I will make every attempt to keep these tables up to date and accurate.
Updated November 19, 2009:
- Deleted mapping from ISO+ mm to UCUM to (transformation error)
- Corrected mapping from ISO+ sec to UCUM '' (was ')

Updated October 20, 2010:
- Corrected mm(hg) to mm[Hg]
- Changed oms/kg to osm/kg
- Verified mapping for mas (megasecond, according to HL7 V2) to Ms
- Deleted mapping from lm/m2 to lm
- Still trying to resolve issues around (hb)
- Verified dm2/s2 = REM

Updated October 23, 2020:
- Fixed a number of problems with capitalization: changed wb to Wb, v to V, uv to uV, sv to Sv, pAmp to pA, ohm to Ohm, mv to mV, kpa to kPa, gy to Gy, cel to Cel, bq to Bq, and 8h to 8.h
- Removed f and n because, while these prefixes are the same, they aren't units by themselves
- Removed n.s because the proper form is ns
Breast Cancer Screening
Recent news discusses the controversy over new recommendations on breast cancer screening. While the headlines focus on the changed recommendations, I'd rather pay attention to how interoperable healthcare IT can help to identify at-risk patients and enable them to receive the necessary care. I've asked my colleague Scott Bolte to cover the details for us, since this is one of his areas of both expertise and passion. Here's Scott on the topic:
Everyone hates feeling helpless and wants to be able to control their life. Rarely is that more true than when facing cancer, or trying to avoid cancer in the first place. It isn't surprising, then, that changes in breast cancer screening recommendations have provoked passionate debate by clinicians, clinical associations, patient advocacy groups, and patients themselves.
Earlier this week the US Preventive Services Task Force (USPSTF) changed the recommended age for routine screening for breast cancer from age 40 to 50. There were other changes too, but that's the most dramatic one. The USPSTF is the premier body in the United States for screening recommendations. It greatly influences clinical practice and reimbursement guidelines. However, their recommendation for change is not accepted by other authoritative organizations like the American Cancer Society (ACS).
I will not second-guess either the USPSTF or the ACS. What I will do is point out that these recommendations are for large groups of individuals with average risk. If you go beyond the headlines, you will find that the USPSTF guidelines still explicitly recognize some people have a genetic predisposition to develop cancer and that the new guidelines do not apply to them.
Most people have heard of "cancer genes" like BRCA1 and BRCA2. Actually, everyone has the BRCA1 and BRCA2 genes. All genes naturally come in a variety of forms called alleles. Some alleles are harmless. They lead to no measurable difference, or to differences in cosmetic features like the color of your eyes. Other alleles are more significant, determining how your body processes drugs, for example. But the alleles that we're worried about here are those of the BRCA1/2 genes, and how they change the chance that you develop a disease like breast cancer.
The BRCA1/2 genes tend to dominate discussions about breast cancer, but they account for less than 10% of breast cancers. Just because someone has breast cancer, even at an early age, they may not have a genetic predisposition. There are effective tools to identify risks of developing breast cancer, and one of the most useful is a detailed family history.
The advantage of a family history is that it reflects both genetics and environment. It can capture factors such as where you live (with corresponding exposure to environmental pollution), the family dinner table, exercise habits, and other components that increase or decrease risk. It is the interplay of genetics and environment that ultimately determines if you develop cancer.
If the new guidelines leave you feeling exposed, especially if you have a sister or aunt who has had breast cancer, I strongly recommend you assemble a detailed family history. Use a free web tool such as My Family Health Portrait from the US Surgeon General to survey not just breast cancer, but all other cancers, since they are often interrelated. That will capture not only the extent of all cancers in your family, but also critical details like maternal vs. paternal relations, and the age of onset of the disease.
With the family history in hand, you and a trusted clinician can determine if additional genetic testing is appropriate. Whether it is or not, the open conversation about clinical risk - in the context of your personal tolerance for more or less testing - will determine if the new USPSTF guidelines are appropriate for you. Having an ongoing dialog with the clinician, and trusting them when they determine if your risk is high, low, or average, puts you back in control.
To follow up on Scott's posting, I'll add that in 2008, ANSI/HITSP developed the IS08 Personalized Healthcare Interoperability Specification for the communication of detailed family histories in response to the Personalized Healthcare Use Case. This specification includes the necessary detail for communicating these family histories in a wide variety of clinical documents. Healthcare providers with access to this information can thus readily identify patients who are at high risk, and act accordingly.
Monday, November 16, 2009
Two books
I just bought myself a netbook. For the past three years the company notebook and the three or four computers in my house have been sufficient, but now I need a "real" computer of my own, one that can also travel when I do, and that doesn't need to be wrestled from my wife.
The primary reason is that I'm now seriously considering writing "The" book, and it needs to be done on my own equipment. "The" book will of course be "The CDA" book, but looking over my outline, there's no way I can produce "The CDA" book that I want, so it will have to start off by being "The LITTLE CDA Book" that contains most of what you need to know. So, it won't go into detail on the inner workings of the IHE PCC Technical Framework, the ANSI/HITSP specifications, or the CCD. I won't spend a lot of time on CDA history (which I find fun, but most of you may not), but it hopefully will get you up to speed enough on CDA.
I haven't figured out all of the details about it yet. I'm not sure about how to approach the content, but I have one or two working outlines. I haven't lined up a publisher. I don't even know where I'll find the time to do it (probably between the hours of crazy o'clock and insane thirty, with occasional stretches to o'dark hundred). But I've been convinced for quite some time that it is needed, and I was recently arm-twisted into thinking that I could do it at the last working group meeting.
Why am I telling you this? I'm setting myself up to succeed by telling you that I'm going to do it. I'm also looking for your input. What's needed in the "little CDA book"? The big one?
Post your feedback here, or e-mail me (see my e-mail address on the HL7 Structured Documents page).
Moving Forward
Those who cannot learn from history are condemned to repeat it. -- George Santayana
The resurrection of the "CCR" versus "CDA" debate of four years ago seems to ignore all that has occurred since then. If we are not careful, we are doomed to repeat our mistakes, and even if we are careful, it would appear that we are at least condemned to repeat the labor leading from our successes.
Do you remember all the hullabaloo in early 2007 celebrating the harmonization of the CCR and CDA into CCD? As one of the 14 editors of that specification who worked on it with members of HL7 and ASTM for more than a year, I certainly do. At the time, it was celebrated as being one of the great successes of harmonization. Most of us, having achieved the success of CCD, moved on. We built on that information model to support a truly interoperable exchange for healthcare. Only now there are some who wish to see that work discarded because "it's not internet friendly".
Lest we forget, a lot more than agreement on CCD was needed to ensure interoperability. There are some 80 different value sets from more than 25 different vocabularies that have been incorporated into the standards for the selected use cases. There's also the necessity to secure the transport of that information through a variety of different topologies.
That took some three years of effort AFTER we resolved the CDA vs. CCR debate with the "both AND" of CCD. If your definition of BOTH AND has changed (and apparently it has for some), then more work is needed on the CCR half. We would need to bring CCR up to the same level of interoperability that we did with CCD, and that will require yet more effort. Frankly, I'd rather spend that time working on making the existing standards better by taking the learnings from the internet crowd and the health informatics crowd back into the healthcare standards organizations. That's a BOTH AND that is a step in the right direction, instead of a step backwards.
Thursday, November 12, 2009
Educating the Healthcare Professionals of the Future
One of my favorite activities is engaging with students who are learning about health informatics, and teaching them about what is going on in the healthcare standards space. Over the past two years I've been fortunate enough to speak at Harvard, the University of Utah, Northeastern University, and to undergraduate students at Stonehill College. These opportunities also tickle my funny bone, because many of these organizations would consider me underqualified as a student in their graduate health informatics programs. I also teach HL7 standards at HL7 Working Group meetings, and have given seminars in person and online on IHE profiles and HITSP specifications. I love to teach, and I've been told by professional educators that I'm pretty good at it.
Recently, an educator prompted me to talk about the educational needs of health information professionals.
What do health information professionals need to know? I recently spoke on that very topic to a class of undergraduates heading into the health information field at a local college near my home. Healthcare information professionals need to be able to navigate amongst a complex set of issues, including:
- Technology -- EHR, EMR, HIS, LIS, RIS, PACS and PHR
- Law and Regulation -- ARRA/HITECH, MMA, CLIA, HIPAA, ICD-10/5010 Regulations
- Federal, State and Local Agencies
- Quality and Policy Setting Organizations -- Joint Commission, HIT Federal Advisory Committees, NCQA and NQF
- Standards, Terminology and Standards Development Organizations
Take a little test: which among these 56 acronyms do you recognize? Do you know what each stands for, or how it is defined? Where would you go to get more information about it? If you get all of them without reference to external resources, I'll be impressed, but I expect you'll be able to figure them all out pretty rapidly through the web.
ANSI | HCPCS | IHE | PHR | |
ARRA | HIE | ITI | RHIO | |
ASTM | HHS | ISO | RIS | |
CDA | HIMSS | JCAHO | SCRIPT | |
CCD | HIPAA | LIS | SNOMED CT | |
CDC | HIS | LOINC | TC-215 | |
C32 | HITECH | NCPDP | TPO | |
C83 | HITPC | NQF | V2 | |
CMS | HITSC | NCQA | V3 | |
CLIA | HITSP | MLLP | WEDI | |
CPT | HL7 | NEMA | X12N | |
DICOM | ICD-9-CM | OASIS | XDS | |
EHR | ICD-10-CM | ONC | 4010 | |
EMR | IETF | PACS | 5010 |
The importance of knowing how to find out was highlighted to me several times today in another classroom setting. A question came up in the class on how one would roll up codes used to represent race and ethnicity. I pointed out to the students that there is a) Federal policy in this country for representing (and rolling up) this information, and b) excellent reference terminologies that would enable them to correctly answer the question. I wouldn't expect these information professionals to know that, but I did point out that they need to ask the question "has someone else already addressed this issue?"
Not too long ago, my home state set policy about tracking race information to help determine racial disparities in the delivery of care. They wanted more detail than the OMB 6 categories, for which I applaud them. However, after a little digging I learned that the policy had not been informed by some of the existing work being recommended or already adopted at the national level. They didn't ask the question.
Why not? I'm not sure, but I know that those health information professionals (and policy makers) are already overwhelmed with information, and the important stuff, while all on the web, gets lost in the noise.
So, the first skill that health information professionals need to learn is not about any particular set of standards, agencies, policies, et cetera, but rather critical skills in information retrieval. How can I find out what is important? Where are good sources of information? How do I develop reliable sources of my own? I was asked 3 questions today about HL7 Version 2 that I didn't know the answer to, but I had the answers within an hour. They need to be able to demonstrate that skill.
Being able to read critically, and identify salient points quickly, is a crucial skill that we simply don't teach people. In addition to finding relevant information, they also need to learn how to plow through it. In one week I read three different versions of HITECH. It wasn't fun, but it was necessary. Many of you have slogged through HITECH, HIPAA, ARRA, MMA or other legislation or regulations that impact our field, or waded through some recent medical research, or read and commented on recently published specifications. Ask yourself, did anyone ever teach you how to go through those documents? There are education programs that teach these skills, but not any I've ever encountered in my educational experience. Can you spend an hour with a 100 page document and identify the top 10 issues that you need to be aware of? Health information professionals need to be able to demonstrate that skill as well.
Finally, the last skill is being able to communicate clearly and simply. So much of what health information professionals need to do involves teams of people with a variety of very different and complex skills. These teams range from doctors dealing with highly specialized medical knowledge, to medical researchers dealing with new drug pathways and a complicated array of regulations for clinical trials, to billing specialists dealing with financial transactions among a half dozen different agencies all responsible for some portion of a patient's bill, to IT staff dealing with a spaghetti network of technologies that all need to interconnect. I can dive down into gobbledygook with the best of the geeks out there, but where I show real skill is when I've been able to explain some of this gobbledygook to a C-level executive. Teaching Simple English to healthcare professionals is a pretty good idea. I think it should be a required course for anyone who has to write specifications, policy, contracts, proposals, requests for proposals, laws or regulations on any topic (and it would go a long way towards making all of our lives easier).
There are many other skills that a health information professional needs, and they are important. But those three skills (information retrieval, critical reading, and simple communications) are fundamental for any information professional. This is especially true for those working in healthcare.
Wednesday, November 11, 2009
IHE PCC Selects Profiles for 2010-2011 Season
IHE Patient Care Coordination met today and yesterday to review profile proposals that will be developed for the 2010-2011 season. We selected five work items to move forward, with a sixth likely to be committed to in the next three weeks:
- Nursing Summary/Perioperative Plan of Care
- Perinatal Workflow
- Post-Partum Visit Summary
- Newborn Discharge Summary
- Completion of the APR/LDR profiles
- Chronic Care Coordination (under review, final decision in 3 weeks)
We see in these proposals several exciting events for Patient Care Coordination. While we had some communication bobbles on the Chronic Care Coordination profile, we are excited about the fact that this work comes to us from Australia, with the support of others in the UK. Clearly we'll have some time zone challenges, but this will certainly strengthen our domain.
Several new members joined us based on their interest in the Nursing activities, and in the Newborn Discharge Summary. We are very excited to be moving into the pediatric realm. Finally, we expect this year to finalize the work we have been engaged in over the past three years in perinatal care. I see the culmination of that work appearing in our first "workflow" profile, which joins many pieces together from multiple IHE domains into a workflow that supports the process of caring for a mother-to-be.
Finally, this was my first technical meeting where I wasn't a TC cochair. I must admit I did enjoy being able to sit back and watch others put their stamp on PCC, and help us develop new and better processes for communicating amongst ourselves, and with our audiences.
We also confirmed our schedule for the year, below:
- February 1-4, 2010 (Decide Technical Direction of Profiles)
- April 26-29, 2010 (Prepare profiles for public comments)
- July 12-15, 2010 (Prepare profiles for trial implementation)
Finally, we hope to hold an open PCC planning meeting for HIMSS 2010 attendees, where we can tell more providers about what we are doing, and solicit their input. I'll provide more details on that meeting as they solidify, and I hope to see you there.
Keith
Taking cost out of the system
A lot of the work I've been doing is focused on taking costs out of healthcare. One of the principles I try to apply rigorously is to ensure that the largest cost burdens are borne by the systems at the center. Imagine that you have 100 systems connecting to one central hub. Imagine further that some complex processing needs to take place during communications between those systems and the hub. Where do you put the expensive node? At the center, of course. Similarly, you avoid trying to change workflows at the edges, because those changes also incur costs.
Yet when we talk about quality reporting, most of the quality reporting initiatives put the burden at the edge, and everyone reports nicely computed measures to the center. Instead of incurring costs at a few centralized hubs, providers at the edge are incurring pretty substantial costs (see Cost to Primary Care Practices of Responding to Payer Requests for Quality and Performance Data in the Annals of Family Medicine).
What if, instead of reporting the measures, we reported the values that went into the measurement, using existing workflows? What if the centralized hubs were responsible for computing the measures based on the "raw data" received? Yes, the centralized hubs would need to do a lot more work, BUT, even if that work were two or three orders of magnitude larger than it is today, the number of edge systems is 5 to 6 orders of magnitude larger than the number of central hubs. If you have 100,000 systems communicating with you, it's certainly in your best interest to make "your job" simpler and easier and reduce your costs. But if you are a centralized system, and "your job" also includes paying for 60% of healthcare costs, then you have a different economy to consider. The costs incurred at the edge don't impact you today, but they will indirectly impact your bottom line tomorrow.
The HL7 QRDA specification goes a long way towards relating the data used to compute quality measures back to the data used in Electronic Medical Record systems. However, it still requires more effort at the edge than some other approaches, as it still requires computation at the edge. It also needs to be built upon a foundation that is designed for quality reporting rather than clinical documentation.
The HL7 eQMF specification strikes at the problem from a different angle and takes a slightly different approach. This specification should be able to:
a) Define the raw data needed to compute measures,
b) Specify how the measures themselves are computed.
If it performs both of these functions, then electronic medical record systems should be able to report the "data they have" to systems that can compute quality measures. This should result in a far lower implementation burden than trying to get thousands of different organizations to implement and report on these computations, and it will also help to stabilize the measures. The measures will all be computed the same way based on the raw data. Variations in how the measure is interpreted are eliminated or dramatically reduced. This should result in even better (or at least more consistent) measures.
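To make the shift in burden concrete, here's a minimal sketch of what the hub-side computation might look like. The measure (diabetic patients whose latest HbA1c result is under 7%), the record format, and the sample data are all invented for illustration; the point is only that the edge systems report observations they already have, and the hub applies the measure definition:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    patient_id: str
    code: str      # e.g., a LOINC code identifying the result
    value: float

HBA1C = "4548-4"   # LOINC code for Hemoglobin A1c (illustrative choice)

def compute_measure(diabetic_patients: set[str],
                    observations: list[Observation]) -> float:
    """Hub-side computation: proportion of diabetic patients whose most
    recently reported HbA1c is below 7%."""
    latest: dict[str, float] = {}
    for obs in observations:
        if obs.code == HBA1C and obs.patient_id in diabetic_patients:
            latest[obs.patient_id] = obs.value  # assumes chronological order
    numerator = sum(1 for v in latest.values() if v < 7.0)
    return numerator / len(diabetic_patients) if diabetic_patients else 0.0

# Raw data reported from two edge systems using existing workflows.
obs = [Observation("p1", HBA1C, 6.5), Observation("p2", HBA1C, 8.2)]
print(compute_measure({"p1", "p2"}, obs))   # 0.5
```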
IHE has developed a profile for Care Management that could readily support the reporting of the raw data (ok, so it is HL7 Version 3, SOAP and Web Services based, but that IS another discussion). The missing specification in that profile is the one that tells it what data needs to be reported. That could easily be eQMF. I live in hope.
Monday, November 9, 2009
HITSP ANNOUNCES Public Comment Period on 41 Specifications
The Healthcare Information Technology Standards Panel (HITSP) announces the opening of the public comment period for the following Interoperability Specifications (IS), Capabilities (CAP), Requirements Design and Standards Selection (RDSS) and other construct documents (see below). The public comment period on these documents will be open from Monday, November 9th until Close of Business, Friday, December 4th. HITSP members and public stakeholders are encouraged to review these documents and provide comments through the HITSP comment tracking system at http://www.hitsp.org/.
- RDSS157 - Medical Home
- IS06 - Quality
- IS92 - Newborn Screening
- IS158 - Clinical Research
- CAP99 - Communicate Lab Order Message
- CAP117 - Communicate Ambulatory and Long Term Care Prescription
- CAP118 - Communicate Hospital Prescription
- CAP119 - Communicate Structured Document
- CAP120 - Communicate Unstructured Document
- CAP121 - Communicate Clinical Referral Request
- CAP122 - Retrieve Medical Knowledge
- CAP123 - Retrieve Existing Data Related Constructs
- CAP126 - Communicate Lab Results Message
- CAP127 - Communicate Lab Results
- CAP128 - Communicate Imaging Reports
- CAP129 - Communicate Quality Measure Data
- CAP130 - Communicate Quality Measure Specification
- CAP135 - Retrieve Pre-Populated Form for Data Capture
- CAP138 - Retrieve Pseudonym
- CAP140 - Communicate Benefits and Eligibility
- CAP141 - Communicate Referral Authorization
- CAP142 - Retrieve Communications Recipient
- CAP143 - Consumer Preferences and Consent Management
- TP13 - Manage Sharing of Documents
- TP20 - Access Control
- TP50 - Retrieve Form for Data Capture
- T68 - Patient Health Plan Authorization Request and Response
- TP22 - Patient ID Cross-Referencing
- T23 - Patient Demographics Query
- C34 - Patient Level Quality Data Message
- C80 - Clinical Document and Message Terminology
- C83 - CDA Content Modules
- C105 - Patient Level Quality Data Using HL7 Quality Reporting Document Architecture (QRDA)
- C106 - Measurement Criteria Document
- C151 - Clinical Research Document
- C152 - Labor and Delivery Report
- C154 - Data Dictionary
- C156 - Clinical Research Workflow
- C161 - Antepartum Record
- C163 - Laboratory Order Message
- C164 - Anonymize Newborn Screening Results
HITSP members and public stakeholders are encouraged to work with the Technical Committees/Tiger Teams as they continue the process of standards selection and construct development. If your organization is a HITSP member and you are not currently signed up as a Tiger Team or Technical Committee member, but would like to participate in this process, please register here: http://www.hitsp.org/membership.aspx.
Thursday, November 5, 2009
Synthesis
‘When I use a word,’ Humpty Dumpty said, in a rather scornful tone, ‘it means just what I choose it to mean, neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master – that’s all.’
-- Lewis Carroll, Through the Looking-Glass

It's been interesting reading the shift in discussions around REST vs. SOAP in the blogosphere this week, now moving towards HTTP and HTML, or device-based connectivity. See blog posts from John Halamka, Sean Nolan, and Wes Rishel. My head exploded with insight -- and the sleep that I promised myself has gone by the wayside.
I'm a web, HTML and XML geek from way back. In 2001 I claimed 7 years of experience with XML (a test my employer passed). I've got dog-eared copies of the HTTP specifications (as well as HTML and XML specs) sitting on my shelf that are rather aged. In the thirty years since the development of the OSI seven layer model, we've now seen a shift in how we view HTTP. Most mappings of the web stack refer to HTTP as an "application layer" protocol, but SOAP, REST, Web Services and Web 2.0 seem to have driven it down the stack to "transport" by layering yet more on top of it.
The complexity of what has been identified as "SOAP" in all these discussions is not SOAP at all, but rather the information models in SOAP. There's an important difference between the information models that SOAP and RESTful implementations offer that needs to be considered. These models by the way, are not demanded of SOAP and REST, they just happen to be broadly adopted models that are often associated with these different protocols.
What REST implementations typically offer that SOAP implementations typically do not is something that HL7 geeks will recognize as a "model of use". Models of use offer up business-friendly names and representations for sometimes fairly complex semantic constructs (and they do it compactly). The business concepts map closely to the Business Viewpoint of the HL7 SAEAF model.
What ebXML, HL7 Version 3, and similar protocol specifications offer up through SOAP that REST does not is a model of meaning. The model of meaning maps closely to the Information Viewpoint represented in the HL7 SAEAF model. Models of meaning are more complex, and contain a lot more explicit information, but they are bigger and harder to understand. They become a language in which one must express the meaning of "simple" business concepts (although I note those concepts are not really all that simple).
Models of use are easy for people to understand and to perform simple, often very useful computations with (e.g., a pretty UI). Models of meaning make it easy to perform complex, often revealing computations (e.g., clinical decision support). Geeks like me who've been immersed in various models of meaning don't have much trouble speaking those languages and crossing between them, but trying to teach people new languages is rather hard after a certain age. I seem to have a knack for computer languages that I just wish applied to spoken ones.
The benefits of models of use are conciseness and direct applicability to business processes, but crossing "model of use" boundaries often requires a great deal more translation (e.g., from clinical to financial). That's because the concepts communicated in a model of use assume a great deal of implicit domain knowledge. The domain knowledge hidden inside the model of use is what makes those translations hard.
The benefits of models of meaning are explicit representations of domain knowledge using a controlled information model. All the possible semantic relationships are explicitly stated and controlled. This simplifies translation between different models of meaning because one can work at the more atomic level of the controlled information model. This is why (computer or human) language translators build "parse trees" first, and translate from those "models of meaning". Models of meaning are also more readily marshalled into data storage systems.
The importance of models of meaning in healthcare IT comes into play when we start talking about clinical decision support. I illustrated one of these examples in Gozinta and Gosouta back in August. In short, the "model of use" described in the guideline needs to be translated into a "model of meaning" representation in order to compute the guideline through a decision support rule.
So, I think I've successfully convinced myself that we need both models of use and models of meaning in the HIT standards space. The simple, business-oriented representations are needed to make implementations easier for engineers. The more complex information models are needed to compute with.
I think I see a way through the muddle, but it will take some time. The right solution will not just adopt the first model of use that comes to us. We will need to put some thought into it. I believe that we can make some motion towards an answer that could begin to be used in 2013 (or earlier) and would be easily adaptable to solutions deployed for 2011.
But if we move towards a model of use in communication patterns, we run into a translation problem that someone has to address.
In a nutshell, WE need to fully specify (in a normative way) translations from model of use to model of meaning and back. The former is easy (with a common model of meaning), the latter more difficult. Compilers are easy (use to meaning), but decompilers are hard (meaning to use). When I say WE, I've got all my big hats on: HL7, IHE and HITSP. And, we need to agree on a common model of meaning (and this we is the SCO, for which I have no hat). The HL7 RIM is a really good start for a reference information model in healthcare (Wes and I both know that you can say almost anything in HL7 V3, and I have the V3 model to prove it.)
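To make the compiler analogy concrete, here is a minimal and purely hypothetical Java sketch of the easy direction: expanding a business-friendly "model of use" record into a coded, model-of-meaning style observation. The SystolicBP and CodedObservation classes are my own illustrative inventions, not anything defined by HL7, IHE or HITSP; the LOINC code and UCUM unit are simply the ones I'd reach for in this example.
// Hypothetical sketch only -- these classes are not an HL7, IHE or HITSP API.
public class UseToMeaning {
    // "Model of use": compact, business-friendly, with the semantics left implicit.
    public static class SystolicBP {
        public final int mmHg;
        public SystolicBP(int mmHg) { this.mmHg = mmHg; }
    }

    // "Model of meaning": the implicit concept made explicit with controlled codes.
    public static class CodedObservation {
        public final String codeSystem, code, unit;
        public final double value;
        public CodedObservation(String codeSystem, String code, double value, String unit) {
            this.codeSystem = codeSystem; this.code = code; this.value = value; this.unit = unit;
        }
    }

    // The "compiler" direction (use to meaning) is a mechanical expansion.
    public static CodedObservation compile(SystolicBP bp) {
        // LOINC 8480-6 (systolic blood pressure), UCUM unit mm[Hg] -- illustrative choices.
        return new CodedObservation("2.16.840.1.113883.6.1", "8480-6", bp.mmHg, "mm[Hg]");
    }
}
Going the other way, deciding which of the many possible coded observations should collapse back into which business concept, is the decompiler problem, and that's where the hard specification work lies.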
Having a common reference model provides the interlingua that will truly allow for interoperable healthcare standards. If all models of use can be expressed in one (and nearly only one) model of meaning based on a common reference model, then translation between the models of use becomes a real possibility. I know I can translate the "transports" that we've all been talking about into a model of use that would make a number of nay-sayers really happy.
There's also a way to use the same WSDL to enable either SOAP or RESTful transports, which makes interfacing a lot easier and more negotiable. The last problem is how to secure all of this RESTfully, which I'm somewhat unsure of. I'm not sure it's safe to leave that in the hands of the giants that gave us SOAP and WS-* (and insisted on XDS.b), but maybe they've learned their lesson.
There's a lot more engineering needed to really make this work, and this blog posting is already too long to go into all the details. The solution isn't simple (making hard problems easy never is), and it needs to address a lot of different business considerations. There's also a need to address migration issues for the current installed base: at least 10 different HIEs in the US are using the HITSP protocols, many in production, and that doesn't count the Federal agencies; a heck of a lot more internationally have been using the IHE specifications upon which the HITSP protocols are built for even longer.
My main concerns about all of this discussion are CHURN and disenfranchisement. Over the past five years we've taken huge steps forward, and this seems like a big step backwards. It may be a step backwards that prepares for a huge leap ahead, and because of that, I'm willing to engage. I get what REST can do (this blog and my whole standards communications campaign are built on RESTful protocols). The concern about disenfranchisement is the suggestion that a group of uber architects could do this quickly and outside the bounds of a governance model that organizations like IHE, HITSP and HL7 impose. If this is to work, it needs the buy-in of those organizations and their constituencies. It needs to have two key goals: simplicity and compatibility with the industry investments of the last five years. XML was a three-year-long project that replaced SGML and changed the world. It had those same two key goals.
If we can synthesize models of meaning and models of use together, we will truly have a model of meaningful use.
I'll probably get a heap of flak for this post tomorrow (or at least for the pun), but what can I say?
Wednesday, November 4, 2009
Hard vs. Easy and Real Metrics
I've been following the discussions on SOAP vs. REST and the HITSP-selected transports. Some of the details are well documented on John Halamka's blog, and in other articles I've written here in response. In a way, it seems to me to be more of a public dialog about the perceived simplicity of REST as opposed to the perceived difficulty of implementing the HITSP-selected transport (XDR).
I'm going to relate some numbers about XDS. If you understand XDS and XDR, you understand that the outbound Provide and Register transaction is EXACTLY the same whether it goes to an XDS Repository or to an XDR Document Recipient. You did say reuse and simplicity were important, right? What could be easier than that? Processing the XDR inbound transaction is actually easier than the outbound transaction.
If you are trying to implement the transport protocol yourself, you've already attacked the wrong problem, and it's a waste of your time. Even though it's a hard problem, it's NOT that hard -- I should know, having done it thrice. Let me tell you a little bit about my experiences here:
From Scratch is The Hard Way
Raw XDS (without audits, TLS or CT) cost me about six weeks of effort six years ago (the first year) to "build from scratch" in Java (in about 4000 lines of code). That INCLUDES connectathon testing. There's another 3000 lines of code that dealt with CDA stuff that was product specific that also went into the effort. The XDS part was the easy bit.
A couple of years ago, I rebuilt the XDS transactions using a Java ESB and XSLT transforms over the CDA document. I did that in four weeks of effort, INCLUDING connectathon testing. The magic is all in about 5000 lines of XSLT, about 2000 of which I wrote. About 200 lines of XSLT are a code generator, and 3000 are machine-generated XSLTs produced from data contained within the IHE PCC Technical Framework. Of the remaining: 800 are hand-tuned XSLT for the most common PCC entries, 400 are CDA generation utilities, and another 600 are a converter that takes a CDA document and turns it into an XDS.b Provide and Register transaction. There's a little bit of custom Java glue in the ESB. This is my main toolkit for testing IHE profiles, and I've used it with four different profiles and three different sources of outbound data in two different connectathons. I routinely test the IHE profiles I help author at connectathon because it's one way for me to prove that they work and that IHE is not headed into the stratosphere.
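For readers who haven't driven XSLT from Java before, the plumbing really is small. This is just a generic JAXP sketch, not the toolkit described above; the stylesheet and file names are made up for illustration.
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CdaTransform {
    public static void main(String[] args) throws Exception {
        // Hypothetical stylesheet that pulls submission metadata out of a CDA header.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("cda-to-pnr-metadata.xsl"));
        // Apply it to a CDA instance; the output would feed a Provide and Register transaction.
        t.transform(new StreamSource("example-cda.xml"), new StreamResult("pnr-metadata.xml"));
    }
}
All of the interesting work lives in the stylesheet; the Java side stays the same no matter which profile you're testing.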
TLS
If your issue is with TLS, I also feel your pain, but rather than make you go through it, I want you to learn from mine. The first time I dealt with TLS (five years ago), I spent a good deal of time (four weeks) getting it right, and I wrote a FAQ on it that is fairly well known in IHE circles. It includes source code to make TLS work with the IHE ATNA profile (again in Java). The code base for this is VERY small (500 lines); the documentation in the FAQ is MUCH more important and hard-won knowledge. The audit trail code was more slogging than craft, and was about 1500 lines (that's not in the FAQ). Last year, another engineer with a similar build to mine and long hair wondered why everyone was asking him ATNA questions (they were looking for me) -- but he'd read the FAQ and had all the answers. Maybe this year it'll be you.
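If it helps set expectations, most of the mutually authenticated TLS work in Java boils down to loading the right key and trust stores into an SSLContext. Here's a bare JSSE sketch under assumed store names and passwords -- it is not the code from the FAQ, and the FAQ is where the genuinely hard-won parts (certificates, cipher suites, debugging handshakes) live.
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class MutualTls {
    // Build a socket factory for mutually authenticated TLS; store names and passwords are placeholders.
    public static SSLSocketFactory socketFactory() throws Exception {
        KeyStore identity = KeyStore.getInstance("JKS");
        identity.load(new FileInputStream("node-identity.jks"), "changeit".toCharArray());
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(identity, "changeit".toCharArray());

        KeyStore trusted = KeyStore.getInstance("JKS");
        trusted.load(new FileInputStream("trusted-peers.jks"), "changeit".toCharArray());
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trusted);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx.getSocketFactory();
    }
}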
In the overall scheme of things, 8-10 weeks to build and test a secure transport protocol from scratch is really a drop in the bucket, but I love my ONE day experience. I implemented XDS edge system transactions again, and got it working in ONE day using open source tools (including TLS and Auditing). While that time obviously doesn't include connectathon testing -- those same tools have been through three years of connectathon tests by numerous vendors. I challenge you to do that in a day the REST way.
Open Source is The Easy Way
Other people and organizations have addressed the Document Source side of the "provide and register" transaction repeatedly, and have provided freely available open source Java solutions. I can count six in my head, but Open Health Tools is probably the most well known in the US, and there's also the Federally sponsored CONNECT project. If you are a C#/.Net geek, dig a little into the Microsoft open source registry project. You'll find some good documentation and sample C#/.Net code to support the provide and register transaction there as well. It's a little less refined, but you should be able to make it workable.
As for the inbound side, if all you want is the document attachments, nothing could be easier than the following few lines of Java (ripped off from a connectathon tested implementation):
import java.util.Iterator;
import javax.xml.soap.AttachmentPart;
import javax.xml.soap.SOAPMessage;

// Walk the MIME attachments on the inbound message and hand each one to the visitor.
public void visitAttachments(SOAPMessage m, Visitor v)
{
    Iterator i = m.getAttachments();
    if (i == null) return;
    while (i.hasNext())               // next() throws when exhausted, so check hasNext() first
    {
        AttachmentPart part = (AttachmentPart) i.next();
        v.visit(part);
    }
}
I'm sure a similar C# implementation exists; I'm just not a C# coder and don't know much about WCF. Look to the documentation on the Microsoft open source site for implementing the Document Consumer. Some of those same patterns will work for the receiver of the SOAP message to unpack the attachments.
If you want to do more with the inbound metadata associated with those parts, you can readily do that. The metadata elements are only a single XPath query away from the XML body of the SOAP message. These queries are about two lines of code in Java or C#, and any relatively experienced XML geek will know how to write them. If you aren't that experienced, there are a dozen or so books at your local bookstore targeting your favorite programming language.
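To give a flavor of what that looks like, here's a hedged Java sketch that pulls one value out of the ebXML metadata in the SOAP body. The slot name I query ("creationTime") is just one example of XDS document metadata, and I've used local-name() in the XPath so the sketch doesn't need any namespace-context setup.
import javax.xml.soap.SOAPMessage;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

public class MetadataPeek {
    // Pull one metadata value out of the ebXML body of an inbound provide and register message.
    public static String creationTime(SOAPMessage m) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // local-name() keeps the example free of namespace-context plumbing.
        String expr = "//*[local-name()='Slot'][@name='creationTime']"
                    + "/*[local-name()='ValueList']/*[local-name()='Value']/text()";
        return xpath.evaluate(expr, m.getSOAPBody());
    }
}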
Using open source does not involve overly complex code. The key is being willing to read through and understand a little bit of what someone else did, and learn from it. If you have to write more than 200 lines of code to make an open source based XDR implementation work (not finished, just working), I'd be very surprised. If it takes you more than two days to get both halves working, I'd also be surprised. If you want some pointers, drop me a line.
It was fun once, but now I've been there and done that. Writing transports myself is something that I've learned I'd rather let someone else do. That way, I can focus on the real issues. So let's give this debate a rest.
Keith
NOTE 1: I'm a big fan of open source. However, please don't take my mention of these tools as any endorsement by me or my employer for these specific tools. You must evaluate the suitability and fitness of any software you use for your own purposes.
NOTE 2: I count my lines of code raw, and have about a 33% comment-to-code ratio (in Java), mostly in javadoc. And yes, I remember these numbers... I've been gathering personal metrics for years.
Monday, November 2, 2009
Laboratory Orders
Today several members of the HITSP Care Management and Health Records TC met at the National Library of Medicine to discuss the development of a value set for creating an interoperable set of laboratory order codes. Present at this meeting was an unprecedented collaboration of people representing healthcare providers, laboratory vendors, HIT vendors, HIE developers and payors. Many of those participating were also involved in testimony before the HIT Policy Committee's information exchange workgroup, and are experts in the field. You can read some of that testimony on the HIT Policy Committee meetings web site. One of the common themes of that meeting was the need to make it easier to deliver a working laboratory interface with a delivered EMR system, and the desire to work with standardized codes.
This meeting was put together by Dr. Clem McDonald and his staff as a result of his work for HITSP a couple of months ago. Using data from several sources, including the Indiana HIE, United Healthcare and a few others, Clem and his team were able to identify a set of about 300 LOINC order codes that cover about 98-99% of the most common laboratory orders.
We are fairly close to a resolution based on the results of the meeting today. At this stage, it appears that the significant discussions are no longer about whether there is a need for common laboratory orders, but rather, how to maintain such a set, and what should be included in it. I attribute the success we've had thus far to an appropriate scoping of the problem. Some of the more complex topics are panels, reflex testing, and custom laboratory order codes.
We've addressed these complexities in several ways:
1. The approach is intended to address common laboratory orders, rather than to boil the ocean. My own success criterion is being able to address 80% of the most common orders. This reduces the laboratory interfacing problem to mapping order codes for the 20% remaining in the tail (a minimal sketch of that kind of mapping follows this list). This could readily reduce the implementation effort for laboratory interfaces to 1/3 or less of the current effort. Certainly in this era of healthcare interoperability, there are more valuable things to be doing than mapping codes.
2. We've scoped out certain levels of complexity, so that if we cannot address a particular topic (e.g., complex panels or reflex testing), we'll remove it from the problem space we are trying to address. This simplifies the problem and gives us something to work on later as we gain experience with the simpler solutions. We can see what works and later try to address the more complicated issues.
3. Finally, the solution is not intended to replace existing functional interfaces, replace the ability of providers and laboratories to develop codes for custom orders, or agree on codes for tests not in the value set. We want systems that support the laboratory order capability to demonstrate the ability to deal with this code set without forcing change on what already works (if it ain't broke, don't fix it).
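Here's the kind of mapping I mean, as a minimal Java sketch. The local code, its LOINC target, and the stand-in value set are placeholders I made up for illustration; they are not entries from the HITSP value set.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class OrderCodeMap {
    // Placeholder stand-in for the common order value set (NOT the real HITSP/LOINC content).
    private static final Set<String> COMMON_ORDERS = new HashSet<String>(Arrays.asList("2345-7"));

    // Illustrative site-specific mapping for the long tail of local order codes.
    private static final Map<String, String> LOCAL_TO_LOINC = new HashMap<String, String>();
    static {
        LOCAL_TO_LOINC.put("LAB1234", "2345-7"); // made-up local code for a glucose order
    }

    // Orders already coded from the common value set pass straight through; only the tail needs mapping.
    public static String toOrderCode(String code) {
        if (COMMON_ORDERS.contains(code)) return code;
        String loinc = LOCAL_TO_LOINC.get(code);
        if (loinc == null) throw new IllegalArgumentException("Unmapped local order code: " + code);
        return loinc;
    }
}
With the common value set covering the bulk of orders, a table like this is all that's left for the site-specific tail.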
In reporting the results of this meeting to the HITSP leadership this afternoon, we received several accolades for having addressed what has been a long-standing problem in laboratory orders. The problem isn't solved yet, but I will agree that we've made significant progress. In the coming weeks, ANSI/HITSP will be publishing the initial value set of laboratory order codes in the HITSP C80 Clinical Document and Message Terminology specification, as well as specifications of laboratory messages (HITSP C163 Lab Order Message) and a new capability (HITSP Capability 99 Laboratory Orders) that is intended to address some of these issues.
The important next steps coming out of this meeting are:
1. Reviewing the work of HITSP during the 30 day public comment period.
2. Development of standards to support the exchange of an order compendium between a laboratory and HIT system. The American Clinical Laboratory Association is presently working on a framework to support this effort.
3. Establishing a home for this value set, and a model of governance to maintain it.
This is phenomenal progress, and I'd like to thank everyone who has participated. We may not always see eye to eye, but at this point, we are all looking at the same problem, and working to develop a consensus on how to resolve it.