The Boston Metro region is suffering from a > 100-year event. The flooding this week and two weeks ago has broken all records for rainfall in the region for the month. Earlier today my basement started flooding (it's better now). Fortunately I have a disaster plan, but I forgot to update it when I moved my office downstairs, and have also discovered a few things that need to change (like getting power strips above ground level).
Looking at how a mild or even major disaster could affect me gives me a lot of insight into the necessity of a disaster plan for single physician offices.
Flooding: I have a sump pump which was effective in reducing water levels around the areas that were causing the flooding in the first place. I've also used it to eliminate water from the hot water heater.
Fire: There are three fire extinguishers in the house (kitchen, garage and workshop), and we live two minutes from the fire department. Beyond that, I have a safety deposit box where important papers are kept.
Power: A loss of power would mean no computing (after three hours), no lights, and no refrigeration or climate control (heat or cooling). I have a backup generator to supply power to the essential storage equipment (the refrigerator and freezer, and if necessary the oil furnace), and oil lamps and flashlights with plenty of batteries. I bought the backup generator because it cost less than the freezer full of food I was expecting to lose one winter when the power went out, and it's paid for itself about 5 times.
Connectivity: Even if the power stays on, I could lose cable connectivity, which would mean no internet or telephone. There are three cell phones in the house, and at least two of them support internet sharing, so if cable goes down, I've got a backup network connection, and if my IP connection is down, my VOIP provider will forward calls to our cell phones.
Hardware: My children and I now have duplicate netbooks; if one fails, we can transfer data to the other. I also have a garage full of spare parts and readily available computer expertise to fix problems.
Backups: Some of those spare parts have been recycled into external hard drives which we use for backup storage. We don't have any offsite backups, but then, I'm not really making enough use of personal computing to warrant that, and the critical files I have on my office laptop are backed up to servers at the downtown office. The super critical files are on my person just about all the time, since they are attached to my key chain. Those are my backups for the CDA book.
What's your plan?
Tuesday, March 30, 2010
Monday, March 29, 2010
Claims Attachments Redux
"The time has come," the walrus said,
"To talk of many things:
The HIPAA law enacted by Congress in 1996 included provisions requiring HHS to develop regulation around the standards used for electronic claims attachments.
In September of 2005, nearly 10 years later, HHS published a notice of proposed rulemaking for claims attachments.
The Healthcare Reform legislation recently signed into law, 14 years later, includes provisions requiring HHS to develop regulation around the standards used for electronic claims attachments.
Of shoes--
The proposed rule would have named the X12 275 and 277 transactions and ...
and ships--
the HL7 Implementation guides using CDA Release 1.0.
and sealing-wax--
Since then, new legislation and regulation under ARRA and HITECH pave the way for information exchange between healthcare providers and patients using...
Of cabbages--
the ASTM CCR...
and kings--
and the HL7 CDA Release 2.0 standard using the HL7 CCD Implementation Guide.
And why the sea is boiling hot--
In this time of patient and provider engagement, we (HL7) should consider updating these implementation guides to make use of the work that has been done over the last five years to take advantage of the CCD templates.
And whether pigs have wings."
This won't happen without your help. Discussions are starting in HL7 as to whether these guides should be updated to make use of the work that has been done in IHE, HL7 and ANSI/HITSP. Please engage and use your voice and expertise.
For more information on what I'm proposing, see what I wrote late last year regarding moving attachments forward.
P.S. Apologies to Lewis Carroll
"To talk of many things:
The HIPAA law enacted by Congress in 1996 included provisions requiring HHS to develop regulation around the standards used for electronic claims attachments.
In September of 2005, nearly 10 years later, HHS published a notice of proposed rulemaking for claims attachments.
The Healthcare Reform legislation recently signed into law 14 years later includes provisions requiring HHS to develop regulation around the standards used for electronic claims attachments.
Of shoes--
The proposed rule would have named the X12 275 and 277 transactions and ...
and ships--
the HL7 Implementation guides using CDA Release 1.0.
and sealing-wax--
Since then, new legislation and regulation under ARRA and HITECH pave the way for information exchange between healthcare providers and patients using...
Of cabbages--
the ASTM CCR...
and kings--
and the HL7 CDA Release 2.0 standard using the HL7 CCD Implementation guide..
And why the sea is boiling hot--
In this time of patient and provider engagement, we (HL7) should consider updating these implementation guides to make use of the work that has been done over the last five years to take advantage of the CCD templates.
And whether pigs have wings."
This won't happen without your help. Discussions are starting in HL7 as to whether these guides should be updated to make use of the work that has been done in IHE, HL7 and ANSI/HITSP. Please engage and use your voice and expertise.
For more information on what I'm proposing, see what I wrote late last year regarding moving attachments forwards.
P.S. Applogies to Lewis Carrol
Friday, March 26, 2010
New England HIMSS 5th Annual Public Policy Forum
Next week I'll be attending the 5th Annual Public Policy Forum being put together by the New England Chapter of HIMSS.
If you happen to be in the New England (especially Boston) area, it should be an interesting event. Speakers include leading elected officials from the New England area, as well as C-levels from a number of healthcare organizations, insurers and officials from public health agencies.
A quick review of the agenda follows:
- The National HIMSS Healthcare Reform Perspective
- A Look at the Dynamics of Public and Private Initiatives
- A Review of Existing Health Policy Reform at the State Level: What Works / What Does Not
- A Panel Discussion on ARRA and the Impact on Providers
- Health Plans and their 2010 Challenges
- The National Perspective from the United States Senate
Serenity
"God, grant me the serenity to accept the things I cannot change; the courage to change the things that I can; and the wisdom to know the difference."
In ballot and review comments on standards, I routinely see reviewers do one of two things:
- Reject the scope of the effort out of hand, or attempt to broaden or replace it with a different scope, usually to address some perceived need of the writer
- Reject a particular definition or conformance requirement, or attempt to rewrite it according to a scope that is different from what is stated in the standard.
I've learned that dealing with these sorts of issues is simply part and parcel of the effort. I wish it were easier, but I don't have the wisdom to know whether this is something I can change or not. Lacking that, I will try through education.
Recently I've been developing a short class on standards for software engineers. Not just healthcare standards, but IT standards in general. In it, I make the point that the time to address questions of scope is when discussions of scope are going on, and the time to comment on the standard is when it is being voted on, and not vice versa. The point is that often what you can change is what is presently under discussion, not the preconditions already agreed to previously as a waypoint to getting to this stage.
I've seen the same thing in comments on regulation. Effective comments understand what can be changed; ineffective ones rail against the law or process that requires it. These are again questions of scope and/or process, and the time to raise and get those addressed is not when the regulation is being proposed, but when the law that requires it is.
Not too long ago someone on the HISTalk pages commented that "NIST is disregarding usability" in its testing programs (see comment from All Hat No Cattle). Does the writer understand that this testing program has specific certification criteria already identified in regulation (see 45 CFR Part 170), and that NIST cannot pick and choose what it wants to do? Do they also understand that NIST is working separately on usability?
I don't want to pick on this particular example too hard. The point I want to make is that these sorts of comments aren't effective. You do have to understand the scope of the work being done (and what is being done elsewhere). Failing that, you run into the danger of having the bozo bit flipped on you.
On the other hand, as a commenter on standards activities, regulation and law, I also wish that the writers and editors of these were willing to compromise on those parts that could be changed, were unstinting only on those parts that couldn't be, and had a clue about which was which.
So, in closing, I offer the writers of standards, profiles, regulation and policy this little prayer:
"Grant me the serenity to accept the changes that can be made, the courage to retain that which should not be changed, and the wisdom to know the difference."
As one of those authors, I intend to meditate on it myself.
Thursday, March 25, 2010
Additional Comments on the Certification Proposed Rule
The Certification Proposed Rule assigns to NIST the responsibility of developing a testing framework, and subsequently to NVLAP the responsibility for the laboratory accreditation program.
In its role as a developer of a testing framework, NIST is developing tools that will be used to test electronic medical record systems with regard to standards compliance and certification criteria. These systems are coming under increased scrutiny by the FDA as medical devices. As a maker of a medical device, a vendor is responsible for "validating" the tools they use to test the proper function of the medical devices.
Validation is a specialized term, but basically it means: Make sure all is well and good with the tool, that it is appropriate for the use, has been tested thoroughly, has good design, is under version control, and/or that the supplier has adequate quality processes, et cetera (Please Note, I AM NOT questioning the quality of the NIST work).
I expect that vendors will be using the NIST testing framework to verify their implementations of the standards selected under 45 CFR Part 170 and also under the certification criteria contained therein. So I think we need more than just a statement that NIST will be responsible for this testing framework and development in the Certification Rule. I'm not sure what that statement needs to be, but when I do figure it out, I will report it here.
I'd also like to see more being done to make it easier and less costly for vendors to use the NIST tools in verifying product conformance. That means that there needs to be more visibility into the processes that NIST uses to create and test these tools. I also think that vendors can help contribute in this area.
I've been bugging the people I know in NIST about this for a while, but its probably time to bug them some more.
Wednesday, March 24, 2010
Implementing Standards
I've spent a number of years (probably 15 or more), implementing various IT and healthcare standards. Over that time I've run into many issues implementing standards.
- Finding the right standard for a problem often requires an understanding of what a standard does and how it works. It's a circular problem like trying to find the right spelling for something: How can you look something up in the dictionary if you don't know how to spell it?
- Standards are the end product of a design process. If you haven't been through the design process yourself, understanding why the standard is designed a particular way can be difficult.
- [Good] Standards don't usually tell you how to design an application, rather they tell you what your application needs to do.
Finding the right standard can be difficult. The best way I know to do it is to ask an expert (or better yet, several experts). Fortunately, I happen to know a lot of experts, but that isn't true for others. There are a couple of places where you can find experts on standards. One of these is the IHE implementer Google groups.
For this, Google is your friend, or Bing, or Ask or any other search engine.
The key thing you need to be able to do is find the right set of keywords for your search. A few good ways to find keywords are to use a thesaurus or to find terms from other documents you find in your search. You need to be willing to spend some time searching, at least an hour or two before you give up. The less you know, the more time you need to be willing to spend searching.
Understanding the Standard
Having found a standard (or being appropriately directed to it), the next problem that you may encounter is in understanding it. Reading the content of a standard is different from reading just about anything else you will find. It's often the hardest reading that an engineer will have to do, especially if you've never dealt with the publisher of the specification before. That's because you have to be able to enter the mind set of the standard developer after the fact, rather than participate in the same journey that he or she did in writing the standard. Getting there is more than half the battle, but the standard doesn't describe the path that its developers took to reach the conclusions they did.
There are a few techniques that can be helpful here.
The first is to inform yourself about the particular development process used, and the structure of standards produced by a given SDO. Anyone who is familiar with Internet RFCs or W3C standards should be able to spot familiar patterns in these standards. Recognizing these patterns in a standard is an important aspect of being able to understand them. Usually you should be able to find the publishing guidelines for a particular SDO. These can be very helpful in understanding the overall structure of their publications. They will also help by telling you which parts of the standard are formulaic (prescribed by the publishing guidelines) vs. which are novel, and what kind of content appears in each section of the document.
Another method is to obtain an understanding of the terms AS THEY ARE USED by the SDO or the standard itself. Most standards will have a definitions section which defines key terms as they are used by the standard. The SDO may also have a glossary defining terms that it uses in several standards. Common terms that you may already understand can readily become confusing when an SDO defines them in a way that is different from what you think you know. Dictionary definitions and SDO definitions are two different things. The dictionary definition tells you all the ways a term can be used and what it means. An SDO definition is meant to provide a precise meaning for a term as it is used in its standards. You may not like a particular definition given by a standard, but that's beside the point. You need to be able to understand the terminology of the SDO in the way it is used. There is some common terminology around conformance terms that is used by many SDOs; for that, familiarize yourself with RFC 2119.
Something else that helps is reading the back story behind a standard. In some cases this can be difficult to find, but in others it is easier. Some SDOs produce other publications before they produce a standard. For example, the W3C often develops requirements documents before it develops the standard. These documents will often help explain why the standard works a particular way. In other cases, you can find mailing list archives, wikis or meeting minutes that will explain a particular decision. For example, you can check Google Groups for various IHE committee lists and discussions, or the HL7 web site or wiki for Working Group minutes and its mailing lists, or the ANSI/HITSP list server.
Finally, some SDOs require one or more trial or reference implementations (e.g., OASIS) before making something a standard. There is a huge benefit here because these implementations can be very informative to someone who has to implement a standard. A reference implementation can be used as a model for your own implementation, or it can be used directly to implement a particular standard.
Implement vs. Use
One question that you should really ask is whether you really need to implement the standard at all. You don't need to implement a standard to use it effectively. In fact, the most effective users often don't implement a standard at all; instead they rely on the work of others (see The Right Tools for an example). One of the very beneficial developments around standards is that you can usually find implementations that have already been written for you. Your job becomes one of finding the best implementation to use, rather than developing the implementation yourself. This is a win scenario for you, because you get to move on to the fun stuff, and your customers get the benefits of your use of the standard.
Tuesday, March 23, 2010
NHIN-CONNECT Code-a-Thon Challenge Announcement
Posting this announcement for a friend ...
The Health Information Technology Initiative at Florida International University
is pleased to announce the
NHIN-CONNECT Code-a-Thon Challenge
April 28 thru April 29, 2010, Miami, Florida
Co-Hosted by:
- National Science Foundation FIU-FAU Industry/University Cooperative Research Center, Center for Advanced Knowledge Enablement (I/UCRC-CAKE) Director - Naphtali David Rishe, Ph.D.
- Open Health Tools (OHT) Executive Director – Skip McGaughey
- American Academy of Family Physicians National Research Network (AAFP NRN) Director – Wilson Pace, MD
The Challenge
The “challenge” is to take a well-known example of the HL7 CCD, the John Halamka CCD published by HITSP, and create innovative stylesheets for display of the CCD contents to a primary care physician taking calls from patients after office hours. The challenge is to improve the functionality of the CDA stylesheet produced by HL7 for use in NHIN-CONNECT Universal Client applications that display CCD for different modalities and form factors - smartphones, netbooks, and full size displays.
The key considerations for qualifying entries include, but are not limited to: an error-free demonstration, a clear presentation of the improved value for the primary care physician working outside of office hours, an attractive and appealing GUI display, an efficient use of the physician's time, and improved physician decision making with innovative data display capabilities illustrated with data from the Halamka CCD.
Participation is open to students, university faculty, and professionals who dedicate individual effort. The Open Health Tools Academic Outreach Project strongly encourages students to participate in the challenge. Further details may be found at http://hit.fiu.edu and http://www.openhealthtools.org/
All participants will be required to donate the stylesheet copyrights to the NHIN-CONNECT Open Source Community.
Related Documents & References:
[1] John Halamka Stylesheet – http://services.bidmc.org/geekdoctor/johnhalamkaccddocument.xml
http://geekdoctor.blogspot.com/2007/12/standards-for-personal-health-records.html
[2] Alschuler Associates – http://www.alschulerassociates.com/library/?topic=presentations
[3] CDA and CCD Specifications – http://www.hl7.org/
[4] CONNECT Community Portal – http://www.connectopensource.org/about/events/code-a-thon-april-2010
Contacts
Tom M. Gomez | Florida International University | 917-304-7149 | gomezt@cs.fiu.edu |
Dan Russler, MD | Open Health Tools | 404-276-1718 | dan.russler@oracle.com |
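If you want to experiment before the event, here is a minimal sketch of the basic plumbing, assuming Python with lxml; the file names are placeholders, and the challenge itself is about improving the stylesheet rather than this step.

```python
# A minimal sketch (assumed setup, not part of the announcement): apply an XSLT
# stylesheet to a CCD document with lxml. File names are placeholders; use the
# sample CCD from reference [1] and whatever CDA stylesheet you are improving.
from lxml import etree

ccd = etree.parse("johnhalamkaccddocument.xml")   # sample CCD (see reference [1])
stylesheet = etree.parse("cda.xsl")               # the stylesheet under development
transform = etree.XSLT(stylesheet)

html = transform(ccd)                             # render the CCD
with open("ccd.html", "wb") as out:
    out.write(etree.tostring(html, pretty_print=True))
```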
Monday, March 22, 2010
Friday, March 19, 2010
Where to learn more about the Healthcare Reform Bill
This is one of those topics in my day job that gets very personal. Two days ago I received an e-mail from a family member expressing concern over something she read about the healthcare bill on the web. I spent a good half hour or so researching the topic and sent her back an e-mail describing why what she had read was not worth reading.
I just finished reading an excellent post by Timothy Jost on the Health Affairs blog titled The Health Care Reform Reconciliation Bill. I wish I'd had that resource two days ago. It's an excellent high level summary of what is happening in congress right now and how we got here, and what to expect next.
A friend posted his own support for the bill on the Free Beer Party Blog here. I happen to echo many of the sentiments he expresses.
I'll get back to healthcare standards next week, hopefully with a more cheerful attitude after this weekend's vote on a different kind of standard for healthcare.
Thursday, March 18, 2010
More on UCUM
The Unified Code for Units of Measure, or UCUM, is a code system maintained by the Regenstrief Institute. It was principally developed by Drs. Gunther Schadow and Clem McDonald. UCUM supports the representation of units of measure with less ambiguity than the existing combinations of ISO and ANSI units of measure used in HL7 Version 2.
UCUM is an interesting code system because:
A. It is infinite in size.
B. It describes a grammar for creating codes
C. There are multiple codes for representing effectively identical concepts.
D. There are reductions to canonical forms.
Note: The same things can be said for unit expressions using either the ISO or ANSI unit standards, so UCUM isn't unique in this regard.
Infinite Size
We tend in information processing systems to like complete lists, but units have mathematical properties that defy such list making. Units are compositional. Any unit can be multiplied or divided by another unit, or raised to a power, or be multiplied (or divided by) a constant to produce another unit expression.
UCUM allows you to compose two unit expressions with . or / to perform multiplication or division, or append an exponent to a unit expression to raise it to an arbitrary power (e.g., m3 for cubic meters). You can multiply or divide by integer constants (e.g., s = min/60). There are shorthand representations for constant powers of 10 (e.g., 10 to the third power is 10*3). Even better, common metric prefixes can be used with metric units to generate, for example, kg or mg from g.
A Grammar
Computer geeks like me learn about grammars for languages pretty early on in our education. The grammar for a programming language can run on for pages. The grammar for UCUM is remarkably short, containing about 10 productions.
The key things to remember are that:
1. Parentheses override precedence rules,
2. Exponentiation has higher precedence than multiplication (e.g., 10.m2 is 10 square meters, not 100 square meters) or division,
3. Constants may only be positive integers when used with multiplication or division, but may be positive or negative when exponentiating.
4. Anything inside {} is an annotation with an effective unit value of 1.
5. Metric prefixes may only be used with metric units. You cannot say "kilo-inch" in UCUM.
Multiple Representations and Canonical Forms
From an engineering perspective you can say kilometer a bunch of different ways in UCUM:
1000.m
10*3.m
km
This may or may not be a "defect"; I won't even bother arguing the point. What is important is that UCUM tells you how you can determine that these are identical, because it gives you all the information you need to rationalize these expressions into a canonical form. The one thing that UCUM doesn't do is define a canonical form, but one can be readily developed algorithmically. I leave that exercise to the reader.
But just in case you need to use UCUM units before you work that exercise out, you can check out some worked examples.
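To make that concrete, here is a minimal sketch of such a reduction in Python. It is an illustration only, not a UCUM implementation: the unit and prefix tables and the parsing shortcuts below are assumptions, and it understands just enough of the grammar (the . and / operators, integer factors, the 10*n notation, trailing exponents, and a couple of metric prefixes) to show that the three spellings of kilometer above reduce to the same canonical (factor, base units) pair.

```python
import re

# A toy canonicalizer for a tiny subset of UCUM expressions. The prefix and
# base unit tables are deliberately incomplete; a real application should use
# a proper UCUM library.
PREFIXES = {"k": 1000.0, "c": 0.01, "m": 0.001}
BASE_UNITS = {"m", "g", "s"}

def canonical(expr):
    """Reduce an expression like 'km', '1000.m' or '10*3.m' to (factor, dims)."""
    factor, dims, sign = 1.0, {}, 1
    for token in re.split(r"([./])", expr):
        if not token or token == ".":
            continue
        if token == "/":
            sign = -1                      # the next component is a divisor
            continue
        if re.fullmatch(r"\d+", token):    # plain integer factor, e.g. 1000
            factor *= int(token) ** sign
        elif (m := re.fullmatch(r"10\*(-?\d+)", token)):   # power of ten, e.g. 10*3
            factor *= 10.0 ** (sign * int(m.group(1)))
        else:                              # prefixed unit with optional exponent
            m = re.fullmatch(r"([kcm]?)([a-zA-Z]+?)(-?\d*)", token)
            prefix, unit, exp = m.groups()
            exp = int(exp) if exp else 1
            if prefix and unit in BASE_UNITS:
                factor *= PREFIXES[prefix] ** (sign * exp)
            else:
                unit = prefix + unit       # it wasn't a prefix after all
            dims[unit] = dims.get(unit, 0) + sign * exp
        sign = 1                           # division only applies to one component
    return factor, {u: e for u, e in dims.items() if e}

# All three spellings of "kilometer" reduce to the same canonical form:
print(canonical("km"))        # (1000.0, {'m': 1})
print(canonical("1000.m"))    # (1000.0, {'m': 1})
print(canonical("10*3.m"))    # (1000.0, {'m': 1})
```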
P.S. Thanks Paul for the inspiration...
Wednesday, March 17, 2010
The Deluge
Being out sick for a couple of days meant that I spent today wading through a mountain of e-mails, tweets and RSS feeds. The most common topic in my twitterverse was comments on the meaningful use IFR and NPRM.
Thus far, I've briefly skimmed comments from:
- Chillmark Research (I love the editorial comic at the top of this one)
- HL7
- WEDI
- Congress
- HIMSS
- eHealth Initiative
- Premier
- EHR Association
- HIT Policy Committee
- Consumers
- Markle
- HFMA
- CCHIT
There's some remarkable agreement among all of these comments:
1. Quality Measures need to be streamlined.
2. An incremental approach to incentives should be used.
3. The bar is set too high in some places (e.g., CPOE).
Of course, my survey is anything but scientific.
If you care to, you can read all the feedback on the proposed rule for incentives (use keyword: CMS-2009-0117) and the interim final rule on standards and certification (use keyword: HHS-OS-2010-0001) at http://www.regulations.gov/.
Next comes the interminable silence while CMS and ONC cycle through the thousands of pages of these comments and we see how they decide to take these comments into account later this spring.
Thursday, March 11, 2010
Healthcare e-mail -- NOT an Option
I was asked the other day why we cannot communicate health information via e-mail. We certainly communicate enough other information over e-mail and even use it in part to establish banking, health and other vendor account access through that channel.
I want you to follow along with me in a little experiment.
- Send e-mail from one e-mail account you have to another.
- Open up your e-mail client where you received the e-mail you sent to yourself.
- Use whatever option is necessary to view your e-mail headers (for Outlook and G-mail, see the postscript below if you don't know how to do this).
- Count the number of times you see the string "Received:"
- Make a list of the IP addresses and host names you see on each line starting with "Received:"
- Identify who manages each of the servers identified on these Received lines.
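If you would rather script steps 4 and 5 than eyeball the headers, here is a minimal sketch using Python's standard email module; the message below is a made-up placeholder, so paste in your own raw headers.

```python
# A minimal sketch: count the Received: headers in a raw e-mail and list the
# hosts they mention. The sample message is a placeholder; replace it with the
# full headers (or the whole raw message) of the e-mail you sent to yourself.
import email
import re

raw_message = """\
Received: from mail-relay.example.org (mail-relay.example.org [192.0.2.10])
    by mx.example.com with ESMTP id abc123; Thu, 11 Mar 2010 08:00:02 -0500
Received: from smtp.example.net (smtp.example.net [198.51.100.5])
    by mail-relay.example.org with SMTP id def456; Thu, 11 Mar 2010 08:00:01 -0500
From: me@example.net
To: me@example.com
Subject: test

hello
"""

msg = email.message_from_string(raw_message)
received = msg.get_all("Received") or []

print(f"Number of Received: headers (hops): {len(received)}")
for i, header in enumerate(received, 1):
    hosts = [a or b for a, b in re.findall(r"from\s+(\S+)|by\s+(\S+)", header)]
    print(f"  hop {i}: {', '.join(hosts)}")
```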
Each of the Received: headers in the e-mail is placed there by an SMTP server. Therefore, there are six separate servers where my e-mail to myself could have been stored before being subsequently forwarded to the next one in the chain. Each of the servers has at least one, and possibly several, connections to an outside or internal network. They may also be backed up periodically, storing a permanent copy of the e-mail that was in transit.
In only two cases was the communication between the servers reported to be protected by encryption, but other cases may have been encrypted, or physically secured within a data center. In at least one case, the encryption was the weak RC4 cipher, reported to be "unsafe at any key size".
If I were to send a credit card number through this path, there are six different servers that could be attacked to get that number, six different networks that could be sniffed to pick out the SMTP traffic that would contain it, and possibly a backup or two that might also contain information that hadn't been forwarded. I have no clue who manages these servers, networks and their backups, or whether they have the latest patches, et cetera (but I can certainly identify the product and in some cases even the version number in use, which makes attacking them much easier). Would I send a credit card number through this channel? No way.
So as a model to communicate healthcare information, e-mail by itself leaves quite a bit to be desired. Something more is needed. Additional security can be provided by S/MIME, but then you need to obtain certificates, and S/MIME isn't ubiquitously available on all e-mail clients. There is an additional human factor to consider. An e-mail message is very easily forwarded from one person to another. The e-mail tools make it very simple to do this, and they don't make it simple to ensure that when the message is forwarded, it is transmitted securely. So even though a single e-mail itself can be secured, it's just not the right way to do it, because there are many other risks that make the content of the communication difficult to protect adequately.
There've been some attempts to develop new standards for e-mail that would make it safer, but that's a lot of infrastructure to update, and so those attempts haven't made much headway.
So why is it safe to use e-mail to establish accounts? Well, it really isn't, but: the amount of information that a vendor sends you in an e-mail to establish an account is really just to confirm the e-mail is correctly linked to your identity, which you provided them over a secured (https:) channel. Usually it includes a confirmation code that can only be used once, and having been used once cannot be used again. The time between signing up and receiving that e-mail communication (and making use of it) is also very short in most cases, which leaves little opportunity for someone to use it against you. If I do happen to be given login credentials over an e-mail channel, I immediately use them to log in and change my password over the secure website. That way I'm sure my password isn't sitting on some server somewhere where who knows who can read it.
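To make the one-time, short-lived nature of those confirmation codes concrete, here is a minimal sketch; the function names, in-memory storage, and 15-minute lifetime are all assumptions for illustration, not any particular vendor's implementation.

```python
# A toy illustration (not any real vendor's code) of why an e-mailed account
# confirmation code is a small target: it is random, expires quickly, and can
# only be redeemed once.
import secrets
import time

CODES = {}                     # code -> [expires_at, already_used]
TTL_SECONDS = 15 * 60          # assume codes are only good for 15 minutes

def issue_code():
    code = secrets.token_urlsafe(16)                # unguessable random token
    CODES[code] = [time.time() + TTL_SECONDS, False]
    return code                                     # this is what gets e-mailed

def redeem_code(code):
    entry = CODES.get(code)
    if entry is None:
        return False
    expires_at, already_used = entry
    if already_used or time.time() > expires_at:    # single use, time limited
        return False
    entry[1] = True                                 # burn the code
    return True

code = issue_code()
print(redeem_code(code))   # True  - the legitimate first use succeeds
print(redeem_code(code))   # False - anyone replaying the code later fails
```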
P.S. In Outlook, open an e-mail fully, then click on the View Menu and select Options. In the dialog that is displayed, look at Internet Headers.
In G-mail, open the mail. Then click on the down arrow next to the Reply button in the upper right hand corner of the e-mail. Then select "Show Original".
Wednesday, March 10, 2010
On Governance for Developing a Policy Framework and Other Updates
One of the things I love about HIMSS is the social interaction that I get with people that I don't get from many other venues. I had an interesting discussion with someone about how his organization put together a policy framework for health information exchange in a few short months.
The information exchange included a number of governmental agencies and participating healthcare organizations in a large city. I believe the total number of agencies and organizations exceeded two dozen. There were a few unique attributes to the governance model used to put together this framework that he expressed to me.
- They put together more than 100 use cases for information exchange that were based on real business cases. In and of itself, this was probably a monumental task, however, because they had these business cases, they were able to explain the reasons that they needed particular abilities in the exchange. This came into play frequently later.
- The legal end included lawyers who were tasked with figuring out how to set up the appropriate policy to accomplish the goals of the exchange. Most of the time it seems that lawyers are tasked with protecting an organization from liability, rather than figuring out how to accomplish a task. Yes, protecting the organization (and the patients) was one of their responsibilities, but they were also tasked with getting the work done.
- The various workgroups met on a frequent (but not extreme) basis to discuss the policy framework. Silence implied consent to the framework that was proposed, so you had to show up to voice your concerns.
- Blocking problems were addressed by returning to the business case and explaining the need. "Do you really want to prevent us from X?" was the question that was raised, where X was the goal of the business case. That was often enough to restir interest in figuring out how to solve the problem. But just in case...
- There was also very strong top down support from the local leadership that was able to provide a forcing function. When problems came up that seemed unresolvable, the leader was able to provide pressure to break the log-jam.
- The organization was very strong technically. The person I spoke with had a very sound understanding of how to implement access control mechanisms to support the policy framework using standards-based technology. I've run into only a few people outside of the security specialty in healthcare standards who understand this well. This guy could probably have written the IHE Whitepaper on access controls in his sleep.
- They had enough resources to accomplish the task, not only from the policy development side, but also from the technical end.
This was another one of those meetings where both the party I was talking to and I had run out of business cards. I never did get over to the CONNECT booth where he was, because I had other duties at the Interoperability Showcase that day.
P.S. Oh, and on that other "secret" issue I reported on earlier this week: the standard that the Federal Government wishes to harmonize into the healthcare space is NIEM, the National Information Exchange Model. The feedback I've gotten from at least two sources indicates that I was "right on" in what I reported two days ago here. HL7 has started investigating NIEM. The initial feedback from their investigation didn't set off any major firestorms, so there may be some promise here. I've been looking at it, skimming mostly, but expect to pay more attention soon. This site looked interesting.
I imagine we'll hear something about the new harmonization effort in the next week or so, although who knows whether this will be a big splash or a stealth program. I suspect the former even though it started as the latter.
I also expect that this harmonization activity will tie in somehow with the NHIN Direct activities. At the moment, that web site seems to be just a sequence of blogs from one person. If NHIN Direct really wants to accomplish a draft by very early May, real discussions and work need to get started a lot quicker. The time frames they suggest are rather aggressive for drafting specifications from a volunteer base. I'm wondering how they plan to crack that nut, and how much time they expect members to put into this effort. I've been through crazy deadlines before on ONC projects (remember the HITSP 90 day diversion?), and I'm not sure I want to do it again. But I am interested... and waiting to hear more.
Monday, March 8, 2010
Certification NPRM Comments
I started reading the Certification NPRM this morning around 10:00. The first thing I did was bookmark the rule, which you can find here. I finished that around 10:50. It's now 11:40 and I'm done reading the regulation text starting on page 146 of the NPRM. I can see the fingerprints of NIST on this document. It is remarkably clear and I have only a few comments on the entire proposed rule:
1. Kudos to the government for using standards! The new rule uses international standards: ISO/IEC Guide 65:1996 (General requirements for bodies operating product certification systems) and ISO/IEC 17025:2005 (General requirements for the competence of testing and calibration laboratories) to describe the requirements of certification and testing organizations. But shouldn't these documents be incorporated into the rule by reference along with the location where they can be obtained?
2. As proposed, the sunset of the temporary certification program begins as soon as the first permanent certification body is announced by ONC. I would prefer to see a set end date for the temporary certification program. Deadlines produce results, and a clear end date would provide incentives for organizations to be ready for the permanent program.
3. I'm not terribly sure about the value of two accreditation processes, one for testing bodies and the other for certification bodies. I understand the distinction between testing and certification, but wonder if it is necessary to have two separate accredited bodies as part of the certification process. There is a great deal of overlap in the requirements to be accredited for either, and an expectation that at least some organizations will do both. It seems as if having two accrediting processes could increase certification costs without providing much additional value.
4. This is really just a minor nit: in two places the rule talks about the effect of revocation of status for a certifying body on the certifications that it issued. This can be found in sections §170.470 and §170.570, both titled: Effect of revocation on the certifications issued to complete EHRs and EHR modules. I'd like to see text in the rule that says that if a certification is called into question, the organization that received it is entitled to a refund of fees paid for certification. After all, if you paid for something and got a defective item, you should be entitled to get your money back.
5. NVLAP (National Voluntary Laboratory Accreditation Program) is never defined or expanded in the regulation text itself.
This is one I'm not going to worry about. If the rule goes final as written, I'll still be happy with it. Thanks again NIST!
P.S. NIST is publishing the test methods that are cited in the NPRM and is seeking public comment on those also. Take a look at the first wave.
P.P.S. The PDF I bookmarked was the "corrected" rule that Brian Ahier mentions on his blog.
P.P.P.S. Updated to fix broken links.
Thursday, March 4, 2010
The title of this post is a secret.
I really enjoy what this administration is doing with open government. I really appreciate the FACA blog and the Recovery web sites. I even enjoy seeing frequent communications from Dr. Blumenthal sitting in my inbox. But every now and then something happens that gives me heartburn.
Without much fanfare on February 19th our Federal Government announced Task Order 2697 entitled Recovery - Harmonization of Standards and Interoperability. The body of the announcement included this short blurb describing the request:
The purpose of this requirement is to obtain Contractor support services to harmonize standards and interoperability specifications to achieve ubiquitous implementation of standards, promote wider use of standards, and increased level of interoperability across the nation in health information technology (HIT). The overall purpose of the Office of the National Coordinator for Health Information Technology (ONC) programs is to facilitate and expand the secure, electronic movement and use of health information among organizations according to nationally recognized standards.
The response is due in 7 days (March 11th). The announcement concludes with the following notice:
Interested vendors with a CIO-SP2i task order contract may find this solicitation on the NITAAC website at http://nitaac.nih.gov/ under Task Order 2697 titled, “Harmonization of Standards and Interoperability Specifications” (RFP 10-233-SOL-00072).
The contractor listing for CIO-SP2i task orders can be found on the NITAAC web site. Because I don’t work for one of the companies that hold a CIO-SP2i Task Order contract with the NIH, I cannot read the Task Order. Nor, apparently, may any other member of the public that may be impacted by these activities. As I read the brief description of the Task Order in this announcement, it signals the way that the HITSP Standards Harmonization process is being replaced. I certainly hope the task order includes a mechanism to engage with the members of the previous harmonization process.
I’m curious about a number of things. What is in this contract? Why was this method of contracting chosen? Who wrote the Task Order? What are its objectives? I’m sure I’ll learn more in 10 days, but why must I wait?
I have to admit a great deal of concern about this particular task order. The way that it has been done is not an open process. There’s no way to disengage rumor from facts. Even inside the ONC it is apparently difficult to find out what’s going on with this one.
All of what I’ve heard below are unsubstantiated rumors that have been reported to me over the last week. This is simply an example of what happens when things are hidden; I don't know whether any one of them is true or not:
- I’ve heard this process was needed because it was the only way to act quickly.
- I’ve heard this process was used to shut out one contractor or another.
- I’ve heard this process will provide ONC with much more control over the activities that are to be performed.
- I’ve heard that the task order contains requirements to utilize a standard developed by the Department of Justice to augment and/or replace work already done in ANSI/HITSP.
- I've heard that subcontractors are under NDA with regard to the content of the task order.
I could do without the heartburn and lack of sleep this is causing me right now. ONC has known since the inception of the HITSP contract when it would expire. They’ve had plenty of time to put out an open request for proposal through a normal bidding process to keep it going or to replace it. There’s no reason to keep this a secret. Starting what should be an open process this way is no way to engender any trust.
Oh, and the Post title is "Heartburn".
Wednesday, March 3, 2010
Happy New Year
HIMSS is like New Year’s for me. It’s the culmination of a year or more of work at IHE, HL7 and HITSP, and the start of many new things afterwards. I’ve had several interesting conversations this week at HIMSS, and between spending all of my time on the show floor, talking to customers, talking to other vendors, and talking to government officials, I’ve had precious little time to write.
The Interoperability showcase is going extremely well. We have about an acre of space on the show floor, larger than any other booth on the floor. This is an incredible accomplishment. The IHE Showcase covered less than 2500 square feet five years ago. This year it is more than 10 times that size, and for good reasons (see There’s reason for that).
Today, I’ll simply tell you the kinds of things that I have learned so far. Later this week and next I’ll tell you a lot more about what I learned:
- There is an underserved population of healthcare providers who need access to clinical information.
- There are some things I can do differently in my blog that I found out at the Meet the Blogger’s session in the Social Media center. I’ll be putting some of those into action here later this week.
- I heard more about what could replace HITSP in the future. I expect that it will be much differently organized, and will have more focus on the NHIN work. I also learned that I couldn’t read the recent tender for harmonization because I’m not employed by one of the contractors who can respond to it. I find that extremely frustrating on several levels.
- You can successfully put together a policy framework and supporting technology across several dozen public and private agencies in a short period of time.
- The new Certification proposed rule (that I won’t even read until some time next week) is now available. I have a lot more faith in this proposed regulation than I do in others simply because I trust science more than I do art. I’m sure that I will still have feedback on it, but the results I’ve already heard thus far indicate that this one will be a breath of fresh air compared to the prior two. Thank you NIST.
- There are at least 10 different health exchanges using IHE profiles that aren’t yet on the map but soon will be.