Friday, November 30, 2012

How do I tell you this? It may be what you do, not your software

One of the biggest challenges of a technologist like myself is trying to explain to a physician that there may be a better way to do things.  This came up on the #HITsm tweetchat today in Question 3:
And my response was: Sometimes you need to understand the workflows just to tell providers what is wrong with them.

A perfect example of this showed up the other day as I listened to members of a House Subcommittee talk about Health IT and the Meaningful Use program.

If you listen to this video around 1:11:35, you can hear an anesthesiologist explain how he cannot do med/med/allergy interaction checking on his patients in the operating room.

I'm not a Doctor, but I do know that most surgeries are scheduled well ahead of time.  I also know that the anesthesiologist has a pretty good idea of what medications he or she may be using.  It isn't Health IT that is broken here.  It's the workflow that's broken, and perhaps even the culture. If you can identify the problem, surely, you can figure out a solution.  The first thing that comes to my mind is to check for interactions using the EHR pre-operatively.

When your workflow doesn't work with your technology, perhaps it's not your technology, but rather your workflow that's broken.


Thursday, November 29, 2012

Standards REMIX

In response to a comment on this morning's post elsewhere, I just HAD to do this REMIX of a popular XKCD strip:


Thanks to Randall Munroe (creator of XKCD) for being so free (and so clever) with his content.

Changing the Way Standards are Developed

ONC has been arguing for some time that we need to change the way that standards are developed.  From their perspective, the process takes too long, and doesn't result in sufficient implementation.  I have to agree that standards development processes that JUST focus on producing the standards aren't as successful as those which produce working code.

Most standards bodies recognize this.  For example, W3C requires at least one and prefers two implementations of each feature to advance to publication as a W3C Recommendation (standard).  OASIS requires three statements of use before advancement to an OASIS standard.  IETF requires two implementations to move to draft standard level.  IHE requires successful Connectathon testing of three separate implementations, in two different regions, with tests covering all actors at at least one Connectathon.  HL7 has a requirement to identify at least two implementers for projects on the Standards track, but doesn't have a formal process for evaluating implementations that is consistently applied by work groups before advancing to DSTU or Standard.

The Direct Project started by ONC was intended to change that model, and became the pattern for what the S&I Framework does today.  There's a lot more focus on working code, pilots and implementation at the same time as specifications are being developed. But there are some limitations to these initiatives as well.

Neither the Direct Project, nor the S&I Framework is a Standards Development Organization.  These are simply ONC funded and coordinated projects.  To be successful, they need to work with organizations like HL7, IHE, IETF, WEDI and X12 to further develop the specifications as Voluntary Consensus Standards.  While Direct worked with IHE (see Support for Metadata Limited Document Sources) and used some IHE specifications (XDR and XDM), the core Direct Applicability Statement still needs to go through the IETF process (which it was targeted for) to ensure continued maintenance, and validate the consensus achieved.  At present, the "owner" of that document is still the "Direct Project" as far as I can tell, and I've heard nothing about advancement of it through IETF as was intended.

In the S&I Framework, there has been more cooperation with SDOs, with key specifications like the Consolidated CDA (which was rolled into the Transitions of Care initiative), HQMF Release 2 and QRDA Release 2 (used by Query Health), Health eDecisions, Laboratory Ordering and Reporting, and others being coordinated with an HL7 ballot.  Other initiatives, such as the ABBI project, have yet to develop any sort of formalized relationship with IHE or HL7.  Members of both of those organizations have done significant work and are planning more which could advance the goals of the ABBI project.

The much-celebrated success of the Direct Project is often overstated, both in terms of time and number of implementations.  Overall, the Direct Project was "completed" in the same time frame as an IHE profile, or an HL7 DSTU.  What it did produce, which IHE and HL7 don't always succeed in producing, was several implementations very quickly: at least one open-source offering, and several commercial ones.

What is missing from the celebrated successes are two factors.  The first is the time ONC spent in advance of the project setting things up.  Because most of us never saw it, ONC gets away with not reporting it when they talk about timelines.  Add 3-6 months to each ONC project before it ever gets off the ground in a public way, and you'll have a better idea about time spent.  This same kind of time is also spent in HL7 and IHE advancing new projects.  We have a much better record of what happens, because it is all done openly and transparently.  Yes, you do have to be a member to see all of the detail, but much of it is freely available to the public in both IHE and HL7.  To be part of starting a project in S&I, you need to be invited to the table (or a White House Meeting), and even then, the agenda is often pretty well established before you arrive.

With regard to implementations, well, ONC has a few levers that SDOs don't.  The first of these is that it is a regulatory agency.  Anyone who is paying attention is aware that standards that go through the S&I Process are likely to be cited in regulation.  And once cited, well, that pretty much creates the demand and a market for implementations.  The other lever is money.  ONC had plenty of money invested in State HIEs (more than $500M).  Through these they were able to control WHAT the state would implement with respect to HIE technology.  Several Directors of State HIE programs told me about the pressure that ONC was placing on them to implement the Direct Project specifications, to the exclusion even of other plans, some of which had substantial development investments.

The S&I Framework itself evolved out of something like 11 different contracts.  Given the time and staffing involved, I estimate that ONC spent between $10M and $20M over two years, and I've probably undershot.  Some of that was spent on development, implementation and testing resources.  When you have that kind of leverage, it makes for a very responsive market with respect to implementations.

The "big money" [the initial $2B given to ONC] ran out in September of this year, so any continuing work on S&I Framework comes from ONC's operating budget.  We'll see how well S&I succeeds in the coming year since it's no longer possible for ONC to trade money for time.  There has been lots of great work developed through S&I, and I even include Direct in that, but there are some things where it could be greatly improved.

  1. More transparency and openness in the governance of what projects are done, and how they are selected and initiated.  This is ONE of the key failings of S&I (and Direct) in openness.  SDOs have an open and transparent process NOT just to create standards, but also in selecting what standards to go forward with.
  2. Greater collaboration with MORE standards bodies.  I love some of the activity that is going on with HL7, but ONC has yet to establish a relationship with IHE International, which has the lowest-cost membership model of any of the SDOs they've worked with thus far (it's free).  My hope is that ONC will join us next week (they've been invited) to discuss profiling of OAuth 2.0 for Healthcare uses.
  3. Documentation of project procedures and greater consistency across initiatives.  There's too little documentation of process, and too much inconsistency across projects.  As someone who's been involved in numerous S&I Projects, I'm still confused about process when I join a new project.

As a final note, just in case you think that the outputs of the S&I Framework are standards (without the assistance of an SDO), or that it is itself an SDO, the Federal Government wouldn't agree.  See OMB Circular A-119 for the definition of Voluntary Consensus Standards.  ONC is a Federal agency, not a private sector organization.  And the process they use, while it includes consensus in some parts, doesn't in the selection of projects to move forward.

S&I Framework will need the ongoing assistance of SDOs like HL7, IHE, WEDI and others to continue to move forward in creating standards.  Without them, what ONC will create is what Direct is today, "A Government Unique Standard".

  -- Keith

Wednesday, November 28, 2012

Ack, Nak and MAC

ACK is short for Acknowledgment.  There's even an ASCII code for it (6).  Along with ACKs, there are also NAKs (ASCII 21), or Negative Acknowledgments.  Related to ACKs and NAKs are MACs.  ACK and NAK are used in messaging to communicate understanding (or lack of it) of messages that have been sent.

Message Authentication Codes (MACs) are used to ensure that the message sent between two systems is communicated ungarbled.  A MAC is usually a computation over the message that produces a short code.  When the same computation is performed over the data on the receiver side, if they don't get the same code, they know the message didn't come over correctly.  The simplest of these is an XOR over all data bytes.  Other more complex computations include CRCs and cryptographic hash functions such as SHA-1.  These all operate pretty much at the syntactic or even lower levels of granularity in the transmission.
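Here's a minimal sketch of the two kinds of checks mentioned above, in Python; the message bytes are just a placeholder, and zlib.crc32 stands in for the "more complex" CRC option:

import zlib

def xor_mac(data):
    """Simplest possible check: XOR every byte of the message into one code byte."""
    code = 0
    for b in data:
        code ^= b
    return code

message = b"any old byte stream"
print(hex(xor_mac(message)))      # one-byte XOR check
print(hex(zlib.crc32(message)))   # a stronger CRC over the same bytes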

When messages are sent by computer, the MAC is computed and sent by the originator.  If the message is good from the receiver's perspective, it sends back an ACK.  If not, it sends back a NAK.  In early communications protocols, these messages used the ACK and NAK ASCII characters, but these days they are a bit more complex.
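A rough sketch of that exchange, reusing xor_mac from the sketch above.  The frame layout (payload plus a trailing check byte) is invented for illustration, not taken from any particular wire protocol:

ACK, NAK = b"\x06", b"\x15"        # the ASCII ACK and NAK characters

def frame_message(payload):
    """Sender side: append the one-byte XOR check the receiver will verify."""
    return payload + bytes([xor_mac(payload)])

def check_and_ack(frame):
    """Receiver side: recompute the check over the payload, then answer ACK or NAK."""
    payload, received = frame[:-1], frame[-1]
    return ACK if xor_mac(payload) == received else NAK

good = frame_message(b"hello")
garbled = good[:-1] + b"\x00"              # simulate a corrupted check byte
print(check_and_ack(good) == ACK)          # True -- message arrived intact
print(check_and_ack(garbled) == NAK)       # True -- receiver asks for a resend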

Humans use ACKs, NAKs, and MACs too, but differently.  It's fairly common in various leadership training classes to hear experts talk about listening skills.  One skill that is often taught is reflecting: Responding back to the speaker in your own words your understanding of what they just said.  It's also a skill taught to messengers and other communicators.  While it requires some level of practice, this is still pretty easy to do between humans.

The protocol goes something like this:
MSG: Speaker: "..."
ACK: Responder: So, [MAC = essential points from ... above], right?
ACK: Speaker: Right.

OR

MSG 1: Speaker: "..."
ACK: Responder: So, [MAC = essential points from ... above], right?
NAK: Speaker: Wrong.

Communication could stop here, but more typically, we fall back to a different level of communication:

MSG 2: Speaker: re-explains ... in a different way
ACK: Responder: Ah, so [MAC = essential points from ... WITH correction].
ACK: Speaker: Right.

The point of reflective listening is that the essential semantics are conveyed back to the speaker, but the definition of essential is focused on the original speaker's point of view.  The MAC is never communicated, because we leave it up to the responder to figure it out, and the speaker to evaluate whether they got it right.

This isn't very easy at all to do between computers.  IEEE originally defined interoperability as:
... the ability of two or more systems or components to exchange information and to use the information that has been exchanged.
(NOTE: The new definition is slightly different):

A key point in this definition is that the information being used by the receiver may not be what the sender considers to be essential.  [This leads to an interesting correspondence with the term "secondary use", which describes cases where the receiver uses or interprets the data differently from how the sender was designed to use or communicate it.]

It is fairly common in messaging environments to reflect back the original content received, perhaps (in a few cases) removing the content that wasn't stored or acted upon by the receiver.  This becomes a "Semantic MAC", indicating what parts of the message were successfully communicated.  But often we don't ever get that feedback, or, as many implementations do, we just copy back the content received without any evaluation of it.
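A toy illustration of that idea (the field names and the order itself are made up): the receiver echoes back only the fields it actually stored, and the sender can diff that "semantic MAC" against what it considered essential.

def semantic_mac(received, stored_fields):
    """Echo back only what the receiver kept; anything it dropped becomes visible to the sender."""
    return {k: v for k, v in received.items() if k in stored_fields}

order = {"drug": "exampleDrug", "route": "oral", "dose": "one tablet daily"}
echo = semantic_mac(order, stored_fields={"drug", "dose"})   # receiver ignored the route

missing = set(order) - set(echo)
print(missing)   # {'route'} -- the sender learns that one of its details was lost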

Of what use would a semantic MAC be in Healthcare IT?  Consider my youngest daughter's story (starting at 1'25'')  about getting her ear drops.  If the receiver of the e-prescribing message had sent back a "semantic MAC" that indicated what would be given to my daughter, the sender would have known that it didn't understand what ear drops should be given.

Doing this would be very complicated.  The problem is that the final receiver of the message is often several systems removed from the sender, and the content of the message wouldn't even be inspected until after the sender had moved on to other things (possibly long after).  The original message sender may not even be accessible to the final receiver (and vice versa).  This is why HL7 supports both message acknowledgments and application acknowledgments to messages.  The former validates syntax, the latter semantics.
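In HL7 Version 2 terms, those two layers might look roughly like the hand-built sketch below (illustrative only, not output from any real interface engine): a commit-level acknowledgment (MSA code CA) says "I received the message intact," while an application-level acknowledgment (MSA code AA) says "I understood and acted on it."

def build_ack(msa_code, original_control_id, ack_id):
    """A minimal two-segment HL7 v2 acknowledgment; segments are separated by carriage returns."""
    msh = "MSH|^~\\&|RCV_APP|RCV_FAC|SND_APP|SND_FAC|20121128120000||ACK|" + ack_id + "|P|2.5.1"
    msa = "MSA|" + msa_code + "|" + original_control_id
    return msh + "\r" + msa

print(build_ack("CA", "MSG00001", "ACK00001"))   # commit (message) acknowledgment: syntax
print(build_ack("AA", "MSG00001", "ACK00002"))   # application acknowledgment: semantics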

The challenge with application acknowledgment is that in order for it to be effective, it has to be communicated to the originating system, not to the message deliverer, and the sooner the better to avoid the need to retain the sender's context.

When you receive a nice gift through the mail or UPS, do you thank the driver or mail carrier?  No, you write a thank you letter (or e-mail) to the original message sender.  And if you receive a package you didn't expect, the most effective thing to do is contact the sender to ask what it is and why you are receiving it.  Asking the delivery driver why it showed up won't help you.  If you receive a package at 5:00pm on the west coast from a business on the east coast that closes at 6:00pm (EST), trying to call them to find out what happened won't succeed until the next day, when they open up again.  This reflects another challenge of application acknowledgments, which is that the message originator may not always be available when the receiver is ready to acknowledge the message.


You might think it would be easier if every system was always on and always accessible over the interwebs so that we could avoid the need for store and forward.  But few store and forward systems do just that.  Often they add value to the communications, reducing costs and increasing effectiveness in other ways.


There is no guarantee of immediacy in the communication between the message originator and the final receiver of the message.  To resolve this issue, we take additional steps to ensure that communications are not garbled semantically when the standard is developed.  When a standard is designed, we decide which parts of the content are essential by marking them as being required, recommended or optional.  These terms describe the "essentialness" of the elements in the communication.


I get numerous questions about whether a given component of a CDA document, or XDS message, or other standard HAS to be present, or if it can be null.  There's no amount of documentation in the standards that will eliminate these questions.  The reason is that even though these standards are developed through consensus, some systems just won't be able to support them without being changed.  Change is hard, and has costs, and many would like to avoid it.

If you are a message sender, try to put yourself in the receiver's shoes.  They need to be able to clearly understand what you've communicated.  If you omit stuff because it is hard, you make the job of the receiver even harder.  Most of the Health IT communications today are not really dialogues, but rather a sequence of monologues.  So you have to make sure your communication is clear, the first time.

Tuesday, November 27, 2012

Hashtag Soup: Relating QDM, HQMF, eMeasures, QueryHealth, QRDA, SIFramework and MeaningfulUse Stage2

This showed up in my inbox yesterday:
Hello Keith, 
I am trying to figure out how the following abbreviations are connected - NQF's QDM, HQMF, eMeasures, Query Health, QRDA, QDM based QRDAs, others.
Rather than individual definitions, I am a bit in the dark around how each of these are interconnected.

Appreciate your help.
Warm Regards, Shyam 
It's a question worthy of a full post, rather than a brief answer, so here goes.


QDM is the National Quality Forum's (NQF) Quality Data Model.  It is an information model representing the essential data needed to generate quality measures.  Because it is an information model, it doesn't necessarily go into the level of detail needed in an implementation, but it certainly describes the high level structures that an implementation needs to compute quality measures.

eMeasures is a term describing the electronic representation of quality measures.  In common use, it often refers to the electronic measures that NQF developed to represent the quality measures required under the ONC & CMS Meaningful Use regulations.  It is also used to refer to the HL7 HQMF.

HQMF stands for Health Quality Measure Format.  This is an HL7 Draft Standard for Trial Use (DSTU).  The DSTU is presently being reballoted by HL7 for a second release.  This is an electronic format for the representation of quality measures.  Release 1 is currently used by NQF to deliver eMeasures for Meaningful Use.  Release 2 was developed in large part based on pilot work being developed by Query Health.

Query Health is an ONC Standards and Interoperability Framework project whose purpose is to develop standards to enable sending the questions to the data.  Its key goal is to enable clinical research.  We used HQMF in Query Health because the kinds of questions that Quality Measures need answers to are often the same kinds of questions that show up in Clinical Research.  HQMF is a declarative format for expressing those questions.  We revised and prototyped a new schema for HQMF that is simpler, easier to read, and able to be computed in a variety of programming environments.  I've written quite a bit about Query Health on this blog.

QRDA stands for Quality Reporting Data Architecture.  If HQMF/Query Health/eMeasures represent the question, then QRDA represents the answers.  QRDA is an HL7 implementation guide on CDA Release 2 that describes the format for reporting quality data on a single patient (Category I), or aggregate results on multiple patients (Category III).  The former is a DSTU, the latter nearly so.  There is also an implementation guide showing how data modeled using the QDM can be represented in a QRDA.  Both Category I and Category III specifications have been identified as being required standard formats for reporting quality measures under the Meaningful Use 2014 Certification Criteria.

MAT is the Measure Authoring Tool.  This is a tool for creating eMeasures currently being maintained by NQF, but which will be transitioned to a new maintainer in early 2013.

VSAC is the NLM Value Set Authority Center, where value sets used for eMeasures and other standards used in Meaningful Use regulation are published.

If you want a poster-sized PDF of the content, you can get it via Google Drive:


Monday, November 26, 2012

What-duhl?

WADL stands for Web Application Description Language.  It is a member submission to the W3C by Sun Microsystems (now Oracle), written by Marc Hadley (now at MITRE).  WADL is to REST as WSDL is to SOAP, which, interestingly enough, makes it both a useful documentation and code generation tool for RESTful web service development, and anathema to the RESTful in-crowd (of which I'm apparently not a member).  I'd also note that you can use WSDL for the same purpose (which is apparently even worse).

WADL isn't a standard, but it was certainly meant to fill a gap in standards for RESTful web services.  I've been playing around with WADL a bit to define and document the ABBI protocol.  What I found useful about WADL is that it makes me think about (and document) all the necessary aspects of services that an implementer needs.  The advantages of using WADL are pretty significant.  I can get a lot of documentation out of WADL that would be difficult to create in the same way using a word processor.
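To give a flavor of what a WADL description captures, here's a tiny, hypothetical resource description parsed with the Python standard library.  The base URL, resource path and query parameter are invented for illustration; the namespace is the one from the 2009 member submission.

import xml.etree.ElementTree as ET

WADL_NS = "{http://wadl.dev.java.net/2009/02}"

wadl = """<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="https://example.org/abbi/">
    <resource path="documents">
      <method name="GET" id="listDocuments">
        <request>
          <param name="after" style="query" type="xsd:dateTime" required="false"/>
        </request>
        <response status="200"/>
      </method>
    </resource>
  </resources>
</application>"""

root = ET.fromstring(wadl)
for resource in root.iter(WADL_NS + "resource"):
    for method in resource.iter(WADL_NS + "method"):
        # prints: GET documents (listDocuments)
        print(method.get("name"), resource.get("path"), "(" + method.get("id") + ")")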

There are several tools available that will take a WADL description and turn it into documentation of a RESTful API, others that will turn it into code, and yet others that will generate WADL from code.  All in all, useful stuff when you have a small team (as I do).  Sure, I don't need any tools to do this by hand, but why not use them if they make my job easier?  One of the things I like about WADL is that it helps me to order my thinking, and to make it easier to understand the API.  I can still read the XML and understand what it is doing.

One of my experiments in playing with WADL was to restructure the OAuth 2.0 RFC in WADL form, since we are also talking about using OAuth 2.0 for ABBI.  OAuth 2.0 doesn't define the resource URLs, it just defines what they need to do, which makes them a <resource_type> in WADL instead of a <resource> proper.  The tool I was using to generate documentation didn't support <resource_type> so I fixed it to do so (and made some other tweaks to it).

I haven't finished documenting either the ABBI API or the OAuth 2.0 specification in WADL, but the results are promising enough that I've posted them both to my prototype implementation site.  You can find them at the links above.









Wednesday, November 21, 2012

Happy Thanksgiving and a Reminder to Engage with Grace

For those of you in the US, have a great Thanksgiving holiday.  Starting in 2008, it's become somewhat traditional in the Blogosphere to talk about Engage with Grace around this time, especially as families gather for this traditional US holiday.



On a lighter note, I discovered today that my daughter has December 11th off from school, so she'll likely be coming with me to the S&I Framework meeting to hear about her namesake.


Tuesday, November 20, 2012

Sneak Peek at the IHE North American Connectathon Conference 2013


January 30, 2013 at Hyatt Regency Chicago, IL. Register today!

IHE USA is proud to announce the IHE Connectathon Conference 2013, Wed. January 30, 2013 at the Hyatt Regency in Chicago, IL. The conference is the cornerstone of the North American Connectathon. Join us as we discuss the many ways that technology and IHE are enabling the achievement and sustainability of the meaningful use of healthcare information technology. Read more about the conference and register today.


Achieving and Sustaining Meaningful Use: The Role of Standards and Integration

The goals of delivering meaningful use of healthcare IT focus on a wide body of stakeholders and quality measurements, making this achievement epic in the industry. However, none of these goals would be possible without the efficient use of interoperable systems that enable quality care. IHE is the grandfather of interoperability and the foundation for new technology that enables the seamless transfer of data across the healthcare continuum.

Learn more at the IHE Connectathon Conference and educational sessions as we highlight the unique goals required to achieve meaningful use and IHE’s role in their development, including:
  • Connecting clinicians, patients, and their families with the tools and resources needed to enable care in a seamless, meaningful, transparent way. 
  • Achieving quality and efficiency of data as related to the delivery of care. 
  • Empowering patients at home and beyond. 
  • Building an mHealth ecosystem that extends access and connectivity to individuals delivering care.

Register today for a full day of exciting and dynamic educational sessions focused on the role of achieving meaningful use through interoperability and IHE. 

Optimized for Who?

I was reading this article on the data that patients need over at e-patient.net this morning.  Included was information on medications (or other treatments) that didn't work.

I let my mind wander (it does that on its own if I'm not careful), and it roamed back to my days as a service department manager.  When a computer was brought in for service, we'd ask customers about the problem, and what they attempted to do to fix it.  Failed attempts were often even more informative (and time saving) than the symptoms.  The more symptoms you had, the more diagnostic pathways it opened up.  Rarely did a confluence of symptoms point to a single cause, because of the interconnected nature of the various components.  But a failed attempt to solve the problem could rule out a whole subsystem.

The franchise I worked for (ComputerLand) and various manufacturers would supply us with diagnostic maps providing diagnostic and repair procedures to resolve various problems.  Initially, my technicians would follow these procedures to the letter.  Over time though, we often found "shortcuts".  As computer service technicians, my team was measured primarily by how many computers, printers and monitors they fixed (but I also monitored revenue and margin).  So, they would optimize the procedures to allow them to complete more repairs.

You can optimize a diagnostic map and repair procedure by several measures: time spent, cost of repair, revenue generated, and customer satisfaction.  For example, the simplest and fastest way to resolve just about any problem with a hard drive was to replace it.  But that is also the most expensive solution, and the least satisfactory to the customer.  The warranty repair procedures provided by manufacturers were clearly optimized to reduce their part costs.  They'd take a cheaper path first, even if it was less likely to resolve the problem.  They'd rather replace a drive cable or cheap IDE controller, than the more expensive drive.

General repair procedures, even though it wasn't obvious, were optimized to increase profit.  They'd replace an expensive part when a simpler repair could have been used.  But they were also optimized to ensure repair and reduce technician time.  That was probably a much more complicated optimization problem.  My techs' shortcuts invariably re-optimized procedures to reduce their time, and then I'd have to remind them that time was not our only interest; so were income and customer satisfaction.

One particular problem I recall was for a squeaky hard drive.  Back in the days of the full size (5¼"x8"x3½") drives, many high-quality/high-capacity drives had a static discharge tab containing a graphite contact that touched the drive spindle at the base of the drive.  The purpose of this tab was to ensure that any static build-up was discharged to ground rather than affecting the drive electronics.  Over time, the graphite contact would wear away, and when the spindle hit metal instead of graphite, the drive would emit a truly annoying squeal.

There were three solutions to this problem.  The "overkill" solution was to replace the hard drive.  We agreed we would offer this solution last.  Most often, customers taking this solution were looking for a bigger/smaller/faster drive anyway.  The "recommended" solution was to remove and replace the graphite tab.  This was about a half-hour bench job, and would cost about $65 (fifteen years ago), plus parts ($5 for the tab, for which our markup was 1000%).  This left the drive at "spec" and solved the problem.  We didn't always keep these parts on hand, because they were infrequently needed, and easily obtained (1-day turn-around at a local electronics supply warehouse).  But sometimes they were out too, and it could take a week if ordered from somewhere else.

The simplest option took five minutes, and involved removing the tab with a pair of scissors.  The tab itself wasn't necessary to the functioning of the hard drive.  It served a protective purpose that wasn't truly necessary most of the time (kind of like an appendix).  I never heard of a drive where we removed this tab failing because of the removal.  This was a five minute bench job (for which $35 was my minimum charge).

We'd offer our customers these choices in cost order from lowest to highest, and recommend the middle solution (replace the graphite contact) as the "best" choice.  My "higher-end", more knowledgeable customers would take the first choice (cut it off), and the next time they had the problem, would do it themselves.  My mid-range customers would take option 2 or 3 (depending on whether they wanted a new hard drive or not).  Option 1 became more favorable when we didn't have the parts on hand, and our supplier was out too.

Making these choices clear to the customer was really the key to optimizing for customer satisfaction.

Back then, the machine learning algorithms for optimizing decision trees like those in diagnostic maps weren't part of the basic curriculum.  These days, they most certainly are.  It would be interesting to see how different healthcare treatment choices optimize from the patient, provider and payer perspectives.

I recall my wife's knee surgery.  She was advised to try Physical Therapy first, and she took that option.  After three years, she finally got fed up and had the surgery, which worked great for her.  My insurer paid more for the three years of off-and-on PT than they did for the knee surgery.

They were optimizing for a short-term horizon, and they lost out.  That particular problem represents a prisoner's dilemma.  If all payers provide the most cost-effective treatment for patients, everyone wins.  But if one payer defects (taking the short-term win), others lose.  The challenge is that eventually everyone defects, and we all wind up losing.  We wind up paying more, because the cheaper treatment gets the patient off the rolls, rather than healed.

I realize that not all cases are as treatable with surgery as my wife's was.  That situation simply provides an example that I imagine occurs often enough that it's worth looking at in more detail.

Monday, November 19, 2012

CQM Value Set Challenges in MeaningfulUse Stage2

I cannot take credit for finding this issue.  One of the teams I work with discovered this particular challenge in working with the value sets for clinical quality measures.

It seems that several value sets in the Clinical Quality Measures have been used to identify appropriate medications for treatment, but have also been reused to identify patient allergies.

What is worrisome about this is how it impacts reporting.  Think this through:

  • A patient has been seen by a provider in the last year.  That makes them part of an initial population for a measure.
  • They've been diagnosed with a particular disease.  That would make them appear in the denominator.  
  • If they've been given a particular medication formulation, they also fall into the numerator.
  • The measure makes allowances for patients who are allergic to the treatment regimen.  But in the implementation, the measure exclusion or exception criteria REUSES the medication value set as a value set describing a medication allergy.

This is where the problem shows up, and it has two consequences.
  1. In the EHR, the substance that the patient is allergic to is recorded, not a specific medication formulation.  
  2. In reporting exceptions / exclusions, the EHR would report the substance, but this wouldn't match what the conformance tools check for, so the EHR would need to implement a work-around.

Formulation vs. Allergen

You cannot go from medication formulations to allergens and still preserve the meaning of the value set.  Unfortunately, the value set for treatment only lists drug formulations.  It doesn't explain which ingredients are relevant for treatment in the quality measure.  

While it is quite possible to go from formulations to active ingredients in RxNORM, and also possible to match medication allergens encoded in RxNORM to drugs that contain that active ingredient, it doesn't help.  All that does is identify one or more medications in the treatment value set as being ones the patient shouldn't be given.  It doesn't necessarily tell you whether there are other acceptable alternatives, nor whether there is an intention to provide an exception or exclusion if those alternatives are available.
Some formulations have two (or more) active ingredients, one of which could be the reason it is included in a value set for treatment of a particular condition, yet the patient could be allergic to another of the ingredients.  So, you'd avoid that formulation for treatment, but it wouldn't excuse the provider from finding another formulation that did contain the necessary ingredient for treatment, but didn't contain the allergen.

In other cases, related drugs are similar enough that if a patient is intolerant of one, it is sufficient to rule out others (e.g., an allergy to penicillin might also rule out amoxicillin). 

Work-Arounds

There is a work-around, but it isn't pretty.  As I mentioned above, you can determine that a patient has an allergy to an active ingredient in a medication using RxNORM.  So if the treatment value set includes medications A and B, where A contains X, and B contains X + Y, you can make a list of ingredients: X, Y.  Then any patient who is allergic to X or Y can be identified as being allergic to at least one of the medications in the treatment value set.  You can even select an appropriate "proxy" medication, by ensuring that the medication you report an allergy to includes the ingredient that the patient is allergic to.  So if you have two patients, P1 allergic to X and P2 allergic to Y, you might report medication A for P1 and medication B for P2 as the best proxies for these allergies.
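A small sketch of that bookkeeping, with made-up codes standing in for RxNORM concepts (a real implementation would walk the ingredient relationships in RxNORM rather than use a hand-built table):

# Hypothetical treatment value set: formulation -> active ingredients
treatment_value_set = {
    "MedicationA": {"X"},        # contains ingredient X only
    "MedicationB": {"X", "Y"},   # contains ingredients X and Y
}

# The derived "allergen" list: every ingredient appearing anywhere in the value set
all_ingredients = set().union(*treatment_value_set.values())
print(sorted(all_ingredients))    # ['X', 'Y']

def proxy_medication(patient_allergens):
    """Pick a formulation from the value set containing an ingredient the patient is allergic to."""
    for med, ingredients in treatment_value_set.items():
        if ingredients & set(patient_allergens):
            return med            # reported as the exclusion "allergy" under the work-around
    return None                   # patient isn't allergic to anything in the value set

print(proxy_medication({"X"}))    # MedicationA -- P1's case
print(proxy_medication({"Y"}))    # MedicationB -- P2's case (medication A might still work)
print(proxy_medication({"Z"}))    # None -- no exclusion applies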

The failure here is that a provider should possibly still have considered medication A for that patient (P2 in the example above), as that might have been an effective treatment.  While it's not possible to figure out what the allergen value sets must be via an algorithm, it is very easy to generate all possible solutions and have someone choose (using clinical judgement) which solution is appropriate in each of the affected cases.

What should be done?

I'm told that most of the data necessary to correct this issue is both readily available, and has been offered to the PTB (powers that be) to resolve this issue.  The challenge is that these value sets are based on measures which have been vetted by measure authorizing bodies (like NQF and Joint Commission), and changing the value sets (perhaps) changes the meanings of the measures (but not the intent).  So, while most technical folks would think (like I do) that the easy answer is to publish the correct value sets, there are some challenges to that solution that impact provenance of the measures.

I've heard a couple of solutions offered to resolve the issue.

  1. Address the issue, publish a work-around (in detail), and provide folks with enough time and/or freedom to implement measures appropriately.  "Freedom" here might include loosening some of the validation criteria for conformance tests, so that the measures could be implemented using correct data.
  2. Put a push on to fix the value sets in time for Stage 2 implementation.  This is the technically easy, but organizationally difficult solution.
Of course, I'd prefer to do it right the first time, so #2 is my preferred solution.  I've been on the other end of an ONC hurry-up to finish things.  I know it sucks to be on that end of things, and that it's also risky.  But is it better to fix it now, or spend a year gathering quality measure data that won't be comparable to anything else when we finally do fix the problem?

Either way it gets resolved, my hope is that it is fixed and the PTB let us know what the plans are to get a permanent fix together.

 -- Keith

P.S.  This isn't the only problem with value sets, just the most critical one to be solved.


Update: December 2, 2012

Apparently, fixes are in the works.  This communication showed up in my inbox over the weekend.
CMS and ONC are working to release a public-facing tool to allow reporting and tracking of potential issues or bugs identified during the implementation process for the 2014 eCQMs released in October 2012. We encourage you to report these to the EHR Incentive Program Information Center at 1-888-734-6433 or email HIT_Quality_Measurement@cms.hhs.gov

Thank you to those who have reported issues to this point; CMS, ONC and NLM are working to resolve and will contact the reporter when a solution is agreed upon.
An update to the Value Set Authority Center (https://vsac.nlm.nih.gov) at NLM is anticipated in the near future which will include removal of the label “provisional” from value sets and codes that have been added to their respective terminologies since the measure release as well as correction of value sets related to medications and allergies. Please look to further communications from CMS, ONC, and NLM as to when this update will be released.


Friday, November 16, 2012

On MeaningfulUse Stage3 Proposals

This document (docx) containing proposals for Meaningful Use Stage 3 has been circulating since the November HIT Policy Committee meeting.  I have at least a half-dozen copies and links in my in-box already, and you probably have at least one link in yours.  The document is divided into several chunks:

  1. A list of proposed Stage 3 objectives in table form, comparing Stage 2 final rule with the Stage 3 proposal, including several new proposals, and a suggestion to provide objectives focused on a single disease area (Cardiovascular disease)
  2. A section containing questions about processes for development and implementation of Quality Measures.  These are NOT questions about which quality measures to include in Stage 3, but rather more fundamental questions about the use, creation and architecture for quality measures in stage 3.
  3. A section on security, containing questions in three areas (7 questions total), three questions on user (provider) authentication, one on HIPAA and risks, and three questions on accounting for disclosures.
  4. Finally, ONC added 13 questions addressing various areas.
The final RFC will likely be published in the Federal Register sometime in the next two weeks.  I have some initial thoughts on the draft proposed objectives that were discussed at the HIT Policy Committee meeting:
  1. Consider that a mandate to use an interface (e.g., for lab ordering) will be very hard for providers to meet if there is no mandate to implement the other side.  Most of the Meaningful Use requirements must be met by providers by using the "certified capabilities" of the EHR (there is only one exception to this called out by CMS in Stage 2, and none in Stage 1).  Objectives like this should follow the "where available" pattern used for much of public health reporting, or other incentives should be provided to ensure that the other side is available first.
  2. Questions about "what barriers would there be to..." would be best addressed by statistics on numbers of providers able to meet the MU Stage 2 objective before tightening it further.  In other words, it is too soon to tell.
  3. Consider the pre-requisites for technology implementation and deployment before setting objectives using that technology. Some objectives rely heavily on decision support (e.g., maintain medication and problem lists), but necessary standards have yet to be adopted (c.f., ONC Health eDecisions Project). 
  4. Consider cross-referencing.  The first objective discusses drug-drug-interactions (using DDI as an acronym without defining it), but later objectives note that standards work on contraindications is still necessary.  If standards are needed for contraindications, that is also true for DDI, and they should be similar in structure.
  5. Define your acronyms (e.g.,  DDI,  DECAF, et cetera).
  6. Capability to send... needs transport and standards defined.

I'll be reviewing these in much greater detail, and will also be reviewing the remaining sections in the coming weeks.  It's a shame that this is being released during a season where a) many are taking time off, and b) most affected parties not taking time off are heads-down implementing the Stage 2 rules.  At this early point in development of Stage 3, I see no reason why the proposals couldn't be given a 90-day comment period.


Thursday, November 15, 2012

Why we don't need more HealthIT Standards Development

I spent a couple of hours yesterday listening to testimony given to the House Subcommittee on Technology and Innovation.  Testifying were Farzad Mostashari, Dr. Charles Romine of NIST, Mark Probst of Intermountain Healthcare, Rebecca Little of Medicity, and Professor Willa Fields, incoming chair of HIMSS.  Brian Ahier summarizes the hearing rather well.

One of the challenges for me in listening to this hearing was the comments from Mr. Probst.  His concern is the lack of core, foundational standards, which he believes have yet to be produced.  I've heard this complaint before, and given that I'm deep into standards development, it always concerns me.  It concerns me even further that such a statement could be made when there's been SO much standards work over the last five years, as compared to the five years before that.

Why this is challenging should be clear when you read through what is currently going on after his list of 7 core standards below.  When he fails to point out these activities to lawmakers in his testimony, Mr. Probst fails to educate them on what the industry is doing.  Instead of hearing, "yes we are making progress" as he stated in his testimony, the message that gets communicated is that we haven't done enough.  There is a big difference between not finished, and not started.

I went back over his testimony and his answers to questions, and transcribed several portions of it which don't appear in the written record.  This first part comes from his verbal testimony starting 24'47" in.
Probst: I simply do not believe that the current voluntary approaches to standards definition work.  In my opinion what is needed is a mandate to:
  1. Define the set of information system related standards which will be applied to healthcare.
  2. Ensure accountability, to appropriately develop the standards and document the standards
  3. Set a time frame to define and document the standards, measured in months, not years.
  4. Establish a realistic time frame in which the HIT community must adopt a federally supported set of standards, say 10 to 15 years.
I've been working on standards for more than a decade, including some of the "foundational, core standards" of the Internet.  Does XML, XPath, XSLT or DOM2 ring any bells?  These were NOT defined in months, but rather years.  And yes, they were adopted over time, but over not nearly so tremendous a timeframe as, say, a decade or more.  Yes, as Mr. Probst states, "adoption of standards" is hard, but it surely is not something that need take the HIT community as long as 10-15 years.  But we get only months for standards development?  He should come down into the trenches with the rest of us working on these standards.

This next part comes from the subcommittee chair's questions probing on Mr. Probst's remarks.  This discussion starts around 42'15" in the hearing.

Rep Ben Quayle (subcommittee chair): Mr. Probst, I want to get to your testimony, because in your testimony you stated that Voluntary Consensus Built Standards Don't Work in the Healthcare Industry. In previous hearings, we've had NIST here a lot, and one of the main things with NIST, is that it is very consensus driven with the stakeholders, and it has worked very well. Why do you not think that in the healthcare industry, that is the best way to go, and instead come up with a set of standards from a top down approach rather than from a voluntary consensus approach. I just want to get your take on that.
Mark Probst: Well, I think the very fact that we are having this conversation suggests that it hasn't worked.  We've been doing it for a very long time.  That is not to slam HL7 or DICOM or any other groups that have been working on those standards.  There are varying incentives in those groups, the people that form those groups have different rationale for why they want standards, or what standards that they might like.
But again I think the fact that we haven't come to some basic standards like the gauge of rail that they did in Australia.  We're dealing with all the discussions around health information exchange and what kind of contraptions we can put together to move data from one system to another, that loses fidelity and costs time.  I just think that history is a good educator for the future.  And I don't see how we are going to get to standards without some basic direction on some basic core standards.
Quayle: And if we are going to have that direction, how in your estimation do we set those standards so that we can still have the flexibility for technological innovation going forward.  Since that seems to be from past testimony on the consensus building, where we have had some really good innovation, but in the way that you are kind of seeing this and the outlook, how do we leave that flexibility in place so that the innovation can continue to progress.
Probst: What we don't want is standards that suggest everything that we have to do.  But we do need standards, and I listed several of them in my written testimony.  Basic, core, foundational IT standards put in place.  If those are put into place, then innovation happens.  Then you have Internet kinds of innovation that can occur, ubiquitously across large groups of people.  That's the gist of my testimony.
Now, let's have a look at those "Basic, Core, Foundational Standards" that he referred to, that we need to work on.  My definition of "foundational" is that which you build upon.


  1. Standard terminologies.
  2. Detailed clinical models.
  3. Standard clinical data query language based on the models and terminology.
  4. Standards for security (standard roles and standards for naming of types of protected data).
  5. Standard Application Program Interfaces.
  6. Standards for expressing clinical decision support algorithms.
  7. Patient identifiers.
Are they foundational?  Standard terminologies and patient identifiers certainly meet my definition.  What I find interesting is that he doesn't reference transport at all.  Surely, if you were to continue the analogy to the Internet, there needs to be an "HTTP" over which the "HTML" was carried.  Yes, we have Direct and Exchange.  Perhaps Mr. Probst believes these are good enough.

We do have standard terminologies for problems (SNOMED CT), labs (LOINC), medications (RxNORM) and medication allergies (RxNORM again), and they have been adopted in Meaningful Use.  Yes, we are missing comprehensive standard terminologies for non-medication allergies, and that clearly needs work.

We don't have standards for patient identifiers in the US.  But standards for patient identifiers do exist.  It's not an issue for standards developers, but rather one for Congress (search for "unique health identifier").  That law is still on the books.

The rest don't fit my definition of foundational.  Even so, much of this is work either already done, or in flight.  And [all too] often, driven by the government agency (ONC) that has a mandate to select the standards.

HL7, ISO and CIMI have been working on Detailed Clinical Models for several years.  The ISO work goes back to 2009 and earlier.  Will a mandate increase the speed with which this work will be completed?  I think not.  Any DCM I have ever seen builds on top of a framework of existing standards including a reference information model (either HL7's RIM, or OpenEHR's Reference Model), and terminology.  Yes, this is important work.  But foundational?  This is like saying that RDF, OWL and the Dublin Core are foundational to the Internet.

Standard clinical data query language based on the models and terminology?  We have quite a few of these standards already.  Some use OWL, others GELLO, others AQL (OpenEHR), and HL7 is developing HQMF based on V3 and the RIM.  I've written about HQMF quite a bit as it applies to the ONC Query Health activity.  I agree that this is necessary work.  Will a mandate increase the speed in which it is done?  We are now piloting Query Health after several months of consensus building.  I expect these specifications to be ready for consideration in Stage 3.

On standards for security, e.g., roles and naming types of protected data, I would refer Mr. Probst to the HL7 Role Based Access Control Permission Catalog (circa 2010).  Yes, more work is needed here, but probably more so in the case of PKI deployment.

With regard to standard programming interfaces, perhaps VMR or CTS2 might excite him.  The ONC ABBI project is building a RESTful API to provide patients with access to their health information.  HL7 has also developed FHIR which might meet some of what he is looking for.  I expect the ABBI specifications to be ready for consideration in Stage 3.

Clinical Decision Support languages are something that I've written about before.  Frankly, I'm much happier using the programming language tool that fits the algorithm that needs to be implemented.  Even so, the ONC Health eDecisions project and HL7 are working on this.  I expect the standards to be ready for consideration in Stage 3.

I'm a little tired of hearing that standards aren't moving fast enough, or that we haven't done enough.  Few people in the Health IT industry have the luxury of spending their full time on standards development.  Even I have responsibilities outside of my standards work.  And I know from experience IN OTHER INDUSTRIES that no other industry moves any faster with respect to the development of standards.  I'll probably get hammered by some of my Health IT colleagues for saying this, but it isn't standards development where the pace needs to increase, but rather adoption and, more importantly, DEPLOYMENT.  I disagree that the HIT community needs 10 years or more to deploy the standards that we have today.

But to be fair, we also need balance.  HHS needs to realize that in order for there to be innovation, as Rep. Quayle so strongly emphasized, there also needs to be time to innovate with the standards we are adopting.  Without that time, the innovation that HHS and our lawmakers are looking for won't happen until after the Meaningful Use program is over, and that would be a shame.



Tuesday, November 13, 2012

Those who remember the past are condemned to watch others repeat it.

This tweet (and also the title of this post) from a college buddy had me rolling on the floor laughing this morning.

Now that we are past the elections, I suppose it's time that I jump on the post-election post bandwagon.  Having worked on Healthcare IT through several administrations, I find myself somewhat amused by all the pre-election discussion of what would happen to Healthcare IT depending on who gets elected.

What is clear in the Health IT space is that there will be change. The change I see in the Health IT space is a shift towards business intelligence, mobile technology and patient engagement.  Organizations which adopt patient enabling technology, or more cost-effective mobile computing technology or which are more aware of what their information systems know and can tell them will continue to advance.  Those that don't will eventually find themselves in decline.  The changes are driven by consumers, shaped by experts and championed by politicians.

Yes, a change in leadership will have an impact, and it's not irrelevant.  A new leader can advance or slow a trend, but will hardly ever reverse it.  Once the boulder is rolling downhill, the only successful strategy for a leader is to get behind it.  If you are left in front of it trying to hold it back, your choices are to be run over, or to dodge.

One way to understand where we are going is to understand where we have been.  There are good and very obvious reasons for doing that, but with respect to changing administrations, one reason is a bit more subtle.  Every administration will inevitably revisit some decisions of the previous one, retracing many of the steps.  What I'm very glad for right now is that we have a few more years before we have to rethink what we thought through before only to come to largely the same conclusions.

Friday, November 9, 2012

An expression language for use in HQMF


What appears below is what I just finished writing as a suggested addition to the HQMF Release 2 DSTU that is being balloted until next Wednesday.  It's a more formal definition of what I called Simple Math previously.  But it goes beyond that because it also defines the binding of variables, and defines how classes and attributes are named in the context of HQMF.
I'm not fully satisfied by this solution because it still requires quite a bit of mapping from RIM semantics to implementation detail, but I think it's the best choice to resolve the issues of computation.  For those of you who are going to say again that I should be using JavaScript, GELLO or XPath, or your other favorite language, PLEASE READ the second paragraph below.  I already am.

Appendix B: An expression language for use in HQMF
The HQMF specification does not describe an expression language for use in the <expression> element of values found in MeasureObservationDefinition, nor in the <value> element of the joinCondition.  HQMF implementations may be based on a number of different programming languages and execution environments.  Several prototype HQMF interpreters have been created.  Some use JavaScript, others SQL, and others XQuery.  Thus, a single choice for the expression language is not obvious.  This presents a challenge for implementers, as the lack of a platform neutral expression language means that there is no single expression of a quality measure that could be implemented on multiple platforms.  

The goal of the language described below is to offer a solution to this challenge.  It provides a way to include computable expressions in HQMF.  The language is designed in a way that simple regular expression substitutions might be all that is necessary to turn the expression into executable code in a variety of platforms.  This language is not intended to create a new language to supersede C, C++, GELLO, Java, JavaScript, Perl, Ruby or XQuery.  In fact, the expressions allowed by this language should all be legal expressions in each of these languages, with identical evaluations given an appropriate binding mechanism.  Thus, it becomes a (nearly) common subset for writing expressions that can be translated to a variety of implementation platforms.

While there are many implementations of JavaScript, GELLO, and other programming languages available, it is not always feasible to integrate these implementations into existing execution environments. The feasibility is not just based on technical capability.  For example, while JavaScript interpreters are widely available, and many can be used in conjunction with, or have already been integrated into SQL databases, some data centers would object to installations or use of software that has not undergone stringent testing.  However, the same data center may allow use of existing SQL language capabilities to achieve similar goals.

This appendix demonstrates the feasibility of defining an expression language that is a subset of many common programming languages.  This can be done in such a way as to allow implementations to simply reuse expressions found inside an HQMF instance to execute inside their programming environment.
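As a rough illustration of the "regular expression substitution" claim, here is a sketch that translates one expression from this subset into both Python and SQL forms; the variable names and bindings are invented for the example.

import re

expr = "NOT(ageInYears >= 65) AND dischargedToHome == 1"

# Toward Python: only the logical keywords differ in this subset
python_expr = re.sub(r"\bNOT\(", "not (",
              re.sub(r"\bAND\b", "and",
              re.sub(r"\bOR\b", "or", expr)))

# Toward SQL: == becomes = and != becomes <> (see the note in section B.3.3)
sql_expr = re.sub(r"!=", "<>", re.sub(r"==", "=", expr))

print(python_expr)   # not (ageInYears >= 65) and dischargedToHome == 1
print(sql_expr)      # NOT(ageInYears >= 65) AND dischargedToHome = 1
print(eval(python_expr, {"ageInYears": 70, "dischargedToHome": 1}))   # False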

B.1 Identifiers
Identifiers in the language start with an alphabetic character, and may be followed by alphabetic characters, numeric characters, and the _ symbol.  Identifiers are used to reference bound variables, class members and functions.

Implementations are required to recognize the ASCII alphabetic characters (A-Z and a-z), Arabic numerals (0-9), and the _ character.  The alphabetic characters A and a must be distinct (no case folding).

While some SQL implementations may case fold identifiers used in tables and columns, it is possible to quote these identifiers to ensure exact matches.

There is no length limit on identifiers.  It is up to an implementation to address any language-specific length limitations when translating identifiers in an HQMF expression to an appropriate value.

identifier ::=  [a-zA-Z][a-zA-Z0-9_]*
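
As an illustration only (not part of the proposed text), the identifier production above translates directly into a regular expression check:

  // Sketch: validating a candidate identifier against the production above.
  const identifierPattern = /^[a-zA-Z][a-zA-Z0-9_]*$/;
  identifierPattern.test("effectiveTime_low");   // true
  identifierPattern.test("2ndValue");            // false: must start with a letter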

B.2 Literal Constants
Literal constants can be integers, real numbers, or time stamps.  There are no string or character constants in this language subset.  While strings are useful in a general programming context, they are not needed in the use cases where expression evaluation is necessary for HQMF.

literal ::= integer | real | timestamp

B.2.1 Integers
Integers are represented using an optional sign (-), followed by a sequence of digits.  The sequence of digits must be either a single 0, or a non-zero digit followed by zero or more additional digits.

integer ::= (-)? (0|[1-9][0-9]*)

Implementations must support at least 32-bit precision for integers.

B.2.2 Real Numbers
Real numbers are expressed using an optional negative sign, an integer component, a decimal point, and a decimal component, followed by an optional exponent.

real ::= (-)? (0|[1-9][0-9]*).[0-9]+ ((e|E)(+|-)[1-9][0-9]*)?

Implementations must support at least IEEE double-precision real numbers.

B.2.3 Time Stamps
Time Stamps are represented in ISO 8601 notation without punctuation (as used in HL7 Version 3 rather than in W3C Schema), and between quotes.  Thus, 6:30 AM Eastern Time on January 20th, 1965 would appear as "196501200630-0500".

timestamp ::= " [0-9]{1,12}(.[0-9]{1,6})?((+|-)[0-9]{1,4})? "
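
As an illustration only, here is a sketch of how an implementation might convert the literal above into a native date/time value; it assumes at least the YYYYMMDDHHMM digits are present and that any offset is given as +/-HHMM:

  // Sketch: converting an HL7-style timestamp literal into a JavaScript Date.
  function parseTimestamp(ts: string): Date {
    const m = /^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})?(?:([+-])(\d{2})(\d{2}))?$/
      .exec(ts.replace(/"/g, ""));
    if (!m) throw new Error("not a timestamp literal: " + ts);
    const [, y, mo, d, h, mi, s, sign, oh, om] = m;
    const iso = y + "-" + mo + "-" + d + "T" + h + ":" + mi + ":" + (s ?? "00") +
      (sign ? sign + oh + ":" + om : "Z");
    return new Date(iso);   // "196501200630-0500" becomes 1965-01-20T06:30:00-05:00
  }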

B.3 Operators
B.3.1 Arithmetic Operators
Arithmetic operators include +, -, * and /, supporting addition, subtraction (and negation), multiplication and division.  Operator precedence is negation first, then multiplication and division, then addition and subtraction.  The parentheses characters ( and ) are used to override the order of operations.
Implementations must support these operators, and are permitted to support other arithmetic operators.

add-op                    => '+' | '-'
mult-op                   => '*' | '/'

B.3.2 Logical Operators
The logical operators are AND, OR and NOT().  NOT is a unary operator and has higher precedence than the other operators.  AND has higher precedence than OR.

B.3.3 Comparison Operators
Comparison operators include <, >, >=, <=, ==, and !=.  Of these, == and != have higher precedence than <, >, <= and >=.  Comparison operators have lower precedence than arithmetic operators.

Note:  == and != were chosen rather than = and <> to simplify substitution.  Replacing a lone = with == is harder to do correctly than replacing == with =.  Given that == is used for equality, != becomes the natural symbol (from the C and Java languages) for inequality.

eq-op                     => '==' | '!='
rel-op                    => '<=' | '>=' | '<' | '>'

B.4 Grammar
The intent of the grammar specification below is not to enable implementors to “parse” expressions using this language, but to express the intended effects of evaluations of expressions used in this language.

expression                => or-expression
or-expression             => and-expression (OR and-expression)*
and-expression            => not-expression (AND not-expression)*
not-expression            => NOT ( not-expression )
                          |  relational-expression
relational-expression     => equality-expression (rel-op equality-expression)*
equality-expression       => addition-expression (eq-op addition-expression)*
addition-expression       => multiplication-expression (add-op multiplication-expression)*
multiplication-expression => primary-expression (mult-op primary-expression)*
primary-expression        => literal
                          |  - primary-expression
                          |  ( expression )
                          |  identifier-expression ( arg-list )
                          |  identifier-expression
arg-list                  => expression (, expression)*
identifier-expression     => identifier (. identifier)*
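
To illustrate what these productions imply for precedence, consider the expression waitTime > 30 + 15 AND NOT(admitted).  Under the grammar it groups as (waitTime > (30 + 15)) AND (NOT(admitted)).  The same grouping, written out in TypeScript with hypothetical bindings (an illustration, not part of the proposal):

  // Sketch: the grouping implied by the grammar, made explicit.
  const waitTime = 50;
  const admitted = false;
  const result = (waitTime > (30 + 15)) && (!admitted);   // true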

B.5 Language Binding
Variables are bound to data in the implementation model through use of the localVariableName element in data criteria.  The scope of localVariableName values is the entire document, so all localVariableName values must be unique.

Each local variable in an HQMF expression represents an object whose structure is based on the RIM class from which the criterion is derived.  Thus, the language provides access to the RIM and to the navigational capabilities of the RIM.  Implementations must map these accesses into appropriate references in their implementation model.

B.5.1 Class Attributes
Attributes of a class are accessed using the . operator.  For example, to access the Act effectiveTime attribute of a class using the “myLocalAct” local variable name, you would reference it as myLocalAct.effectiveTime.  Data type properties are accessed in a similar fashion.  To obtain the low value of the effectiveTime, one would write myLocalAct.effectiveTime.low.  An implementation would then map references to myLocalAct.effectiveTime.low to the appropriate language and implementation specific representation.

The use of references to models in the various act criteria enables implementations to provide implementation-specific models for different kinds of information references.  Thus, an implementation could map encounter.effectiveTime.low into a reference to the admitDate column of a visit table, but would map procedure.effectiveTime.low into the startTime column of the procedure table.
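
A sketch (illustrative only; all names are hypothetical) of the kind of path-to-storage mapping such an implementation might maintain:

  // Hypothetical mapping from expression paths to columns in a SQL-backed store.
  const pathMap: Record<string, string> = {
    "encounter.effectiveTime.low":  "visit.admitDate",
    "encounter.effectiveTime.high": "visit.dischargeDate",
    "procedure.effectiveTime.low":  "procedure.startTime",
  };
  // Rewriting a reference such as EDVisit.effectiveTime.low would combine the
  // local variable's criteria type (here, encounter) with an entry in this map.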

B.5.2 Associations
While RIM attributes and the properties of data types are uniquely named, a further challenge is accessing information from an associated RIM class while another RIM class is in scope.  For example, consider the case of computing the average waiting time for a patient during an ED visit.  Suppose that two observations are captured in the EHR, one being the patient arrival time, and the other being the time that they were first seen by a healthcare provider.  These data of interest could be represented as shown below in the DataCriteriaSection.

<entry>
  <localVariableName>Arrival</localVariableName>
  <actCriteria>
    ...
     <code code="441968004" codeSystem="2.16.840.1.113883.6.96"
      displayName="time of arrival at healthcare facility" />
  </actCriteria>
</entry>
<entry>
  <localVariableName>Seen</localVariableName>
  <actCriteria>
    ...
    <code code="308930007" codeSystem="2.16.840.1.113883.6.96"
      displayName="seen by health professional" />
  </actCriteria>
</entry>

In order to compute a quality measure which reports the average wait time, the measureObservationDefinition could be defined as follows:
<measureObservationDefinition>
  ...
  <code code='AGGREGATE' codeSystem='2.16.840.1.113883.5.4'/>
  <value><expression>Seen.effectiveTime - Arrival.effectiveTime</expression></value>
  <methodCode code='AVERAGE' codeSystem='2.16.840.1.113883.5.84' />
  <precondition>
    <joinCondition>
      <value>Seen.componentOf.encounter.id == Arrival.componentOf.encounter.id</value>
    </joinCondition>
  </precondition>
  ...
</measureObservationDefinition>
The joinCondition ensures that the Seen and Arrival effective times being compared come from the same encounter.  To do so, however, the expression must reference the encounter identifier, which is found in a different class than the observations.  In the XML representation of the RIM classes, these are reached via the component act relationship.  The ActRelationship class in the RIM has a typeCode attribute drawn from the ActRelationshipType vocabulary; when given the value “COMP”, it identifies the related act as a component.  However, COMP is neither convenient nor memorable when used in an expression.
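
As an illustration only (not part of the proposed text), here is how a prototype might realize the expression, the joinCondition, and the AVERAGE methodCode over in-memory data, with hypothetical record shapes and effectiveTime reduced to a simple number for brevity:

  // Hypothetical shape for records matching the Seen and Arrival criteria.
  interface Obs {
    effectiveTime: number;                          // e.g., minutes since midnight
    componentOf: { encounter: { id: string } };
  }

  function averageWaitTime(seen: Obs[], arrival: Obs[]): number {
    const samples: number[] = [];
    for (const Seen of seen) {
      for (const Arrival of arrival) {
        // joinCondition: only compare observations from the same encounter
        if (Seen.componentOf.encounter.id === Arrival.componentOf.encounter.id) {
          // value/expression: Seen.effectiveTime - Arrival.effectiveTime
          samples.push(Seen.effectiveTime - Arrival.effectiveTime);
        }
      }
    }
    // methodCode AVERAGE
    return samples.reduce((a, b) => a + b, 0) / samples.length;
  }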

This issue was resolved in the HL7 R-MIM Designer by creating a set of rules for naming relationships used in an R-MIM diagram.  This language proposes to use a subset of those names to access related classes from their parents or children.

There are four associations for which names are needed.  Each type of association relates a source class and a target class.  The name of the association depends upon the direction in which the association is made.  For example, to relate a document to one of its addenda, you would say document.addendum, but to go from the addendum to the parent document, you would say document.addendumOf.

B.5.2.1 Act Relationship Association Members
Table 1 below lists the names to use to reference from one act to another via the Act Relationship association class.  The first column of this table provides the ActRelationshipType.  The second column provides the name of this association in the usual traversal direction, and the third column, when the association is read in the reverse direction.

The RIM Act Relationship class has members of its own.  These are accessed via the name of the relationship.  To access components of the associated act class, use the appropriate RIM act class name (act, encounter, observation, procedure, substanceAdministration, or supply).  While not strictly necessary to disambiguate between the relationship and the target (or source) act, use of the RIM class name helps to clarify the relationship, enabling appropriate mapping in the implementation environment.

Table 1 Act Relationship Member Names
ActRelationshipType   Source->Target          Target->Source
APND                  addendum                addendumOf
ARR                   arrivedBy               arrivalFor
AUTH                  authorization           authorizationOf
CAUS                  causeOf                 cause
CHRG                  charge                  chargeFor
CIND                  contraindication        contraindicationFor
COMP                  component               componentOf
COST                  cost                    costOf
COVBY                 coverage                coverageOf
CREDIT                credit                  creditTo
CTRLV                 controlVariable         controlVariableFor
DEBIT                 debit                   debitTo
DEP                   departedBy              departureFor
DOC                   documentationOf         documentation
DRIV                  derivedFrom             derivation
ELNK                  links                   linkedBy
EXPL                  explanation             explanationFor
FLFS                  inFulfillmentOf         fulfillment
GEN                   generalization          specialization
GEVL                  evaluationOf            evaluation
GOAL                  goal                    goalOf
INST                  definition              instantiation
ITEMSLOC              itemStorage             itemStorageFor
LIMIT                 limitation              limitationOf
MFST                  manifestationOf         manifestation
MITGT                 mitigates               mitigatedBy
MTCH                  matchOf                 match
NAME                  conditionNamed          assignedConditionName
OBJC                  maintenanceGoal         maintenanceGoalOf
OBJF                  finalGoal               finalGoalOf
OCCR                  occurrenceOf            occurrence
OPTN                  option                  optionFor
OREF                  referencedOrder         referencedBy
OUTC                  outcome                 outcomeOf
PERT                  pertinentInformation    pertainsTo
PRCN                  precondition            preconditionFor
PREV                  predecessor             successor
REFR                  reference               referencedBy
REFV                  referenceRange          referenceRangeFor
REV                   reversalOf              reversal
RISK                  risk                    riskOf
RPLC                  replacementOf           replacement
RSON                  reason                  reasonOf
SAS                   startsAfterStartOf      startsBeforeStartOf
SCH                   scheduleRequest         requestedBy
SEQL                  sequelTo                sequel
SPRT                  support                 supportOf
SPRTBND               boundedSupport          boundedSupportOf
SUBJ                  subject                 subjectOf
SUCC                  predecessor             successor
SUMM                  summary                 summaryOf
TRIG                  trigger                 triggerFor
UPDT                  updateOf                update
VRXCRPT               verbatimExcerptFrom     verbatimExcerpt
XCRPT                 excerptFrom             excerpt
XFRM                  transformationOf        transformation

B.5.2.2 Participation Association Members
Participations are associations between an Act and a Role.  Using the wait time example, suppose that instead of capturing the “time seen by a provider” in an observation, this information was (more correctly) modeled using the participation time of the encounter performer and the participation time of the service delivery location.  In this case, the expression to be computed would be:
  <value>
    <expression>EDVisit.performer.time.low - EDVisit.location.time.low</expression>
  </value>

Table 2 below shows the names used to reference the participation class from the act in the second column, or from the role in the third column based on the participation type in the first column.
Unlike Act Relationships, participations are typically traversed in the direction from act to participation.

Table 2 Participation Type
ParticipationType   Act->Participant              Role->Participation
ADM                 admitter                      admission
ATND                attender                      attenderOf
AUT                 author                        origination
AUTHEN              authenticator                 authenticated
BBY                 baby                          babyOf
BEN                 beneficiary                   beneficiaryOf
CALLBCK             callBackContact               callBackAvailability
CON                 consultant                    consultation
COV                 coveredParty                  coveredPartyOf
CSM                 consumable                    consumedIn
CST                 custodian                     custodianship
DEV                 device                        deviceOf
DIR                 directTarget                  directTargetOf
DIS                 discharger                    discharge
DIST                distributer                   distributed
DON                 organDonor                    organDonation
DST                 destination                   destinationOf
ELOC                dataEntryLocation             dataEntryLocationOf
ENT                 dataEnterer                   dataEntry
ESC                 escort                        escort
HLD                 holder                        contractHeld
IND                 indirectTarget                indirectTargetOf
INF                 informant                     informationGiven
IRCP                informationRecipient          informationReceived
LA                  legalAuthenticator            legallyAuthenticated
LOC                 location                      locationOf
NOT                 notificationContact           contactFor
NRD                 nonReusableDevice             nonReusableDeviceOf
ORG                 origin                        originOf
PPRF                primaryPerformer              performance
PRCP                primaryInformationRecipient   informationReceived
PRD                 product                       productOf
PRF                 performer                     performance
RCT                 recordTarget                  recordTargetOf
RCV                 receiver                      receiverOf
RDV                 reusableDevice                reusableDeviceOf
REF                 referrer                      referral
REFB                subjectReferrer               subjectReferral
REFT                subjectReferredTo             referral
RESP                responsibleParty              responsibleFor
RML                 remoteLocation                remoteLocationOf
SBJ                 subject                       subjectOf
SPC                 specimen                      specimenOf
SPRF                secondaryPerformer            performance
TRANS               transcriber                   transcription
TRC                 tracker                       tracking
VIA                 via                           viaOf
VRF                 verifier                      verification
WIT                 witness                       witness

B.5.2.3 Role Link Association Members
Like Act Relationships, Role links are associations between two classes of the same type.  They appear infrequently in HL7 Version 3 models.  Table 3 below provides the names of role links from source to target in the second column, and from target to source in the third column, based on the role link type found in the first column.

Table 3 Role Link Association Names
Role Link Type   Source->Target          Target->Source
BACKUP           backupFor               backup
DIRAUTH          directAuthorityOver     directAuthority
INDAUTH          indirectAuthorityOver   indirectAuthority
PART             part                    partOf
REL              relatedTo               related
REPL             replacementOf           replacedBy

B.5.2.4 Player and Scoper Associations
A role is associated with two entities.  The first entity, known as the “player” of the role, is a person, place, organization or thing which performs or participates in the act.  The second entity, known as the “scoper”, defines the scope in which the player acts.  The role class, and whether the entity plays or scopes the role, determines the name of the association.

Table 4 below provides the names of role relationships to the playing entity in the second column, or the scoping entity in the third column, based on the role class found in the first column.

Table 4 Player and Scoper Associations
Role Class   Playing Entity         Scoping Entity
ACCESS       access                 accessed
ACTI         activeIngredient       activeIngredientOf
ACTM         activeMoiety           moietyOf
ADMM         product                administeringParty
ADTV         additive               additiveOf
AFFL         affiliate              affiliator
AGNT         agent                  representedEntity
ALQT         aliquot                aliquotSource
ASSIGNED     assignedPerson         representedOrganization
BASE         base                   baseOf
BIRTHPL      birthplace             birthplaceFor
CAREGIVER    careGiver              careGiverOf
CASEBJ       caseSubject            caseMonitor
CASESBJ      caseSubject            caseReporter
CERT         certifiedParty         certifyingParty
CHILD        child                  parent
CIT          citizenPerson          politicalEntity
COLR         color                  colorAdditiveOf
COMPAR       commissioningParty     commissionedParty
CON          contactParty           representedParty
CONT         content                container
COVPTY       coveredParty           underwriter
CRED         credentialedPerson     credentialIssuer
CRINV        investigator           sponsoringOrganization
CRSPNSR      researchSponsor        researchAuthorizer
DEPO         deposited              location
DST          distributedProduct     distributor
ECON         emergencyContact       representedParty
EMP          employee               employer
EXPR         exposedParty           exposingParty
FLVR         flavor                 flavorAdditiveOf
GEN          specializedKind        generalizedKind
GRIC         specializedKind        genericKind
GUAR         guarantor              promisor
GUARD        guardian               ward
HCFAC        healthcareFacility     identifyingAuthority
HLD          held                   holder
HLTHCHRT     healthChart            subjectPerson
IACT         inactiveIngredient     inactiveIngredientOf
IDENT        identifiedEntity       identifyingAuthority
INGR         ingredient             ingredientOf
INST         instance               kindOf
INVSBJ       investigativeSubject   investigatorSponsor
ISLT         isolate                source
LIC          licensedPerson         licenseIssuer
LOCE         locatedEntity          location
MANU         manufacturedProduct    manufacturer
MBR          member                 group
MIL          militaryPerson         militaryServiceOrganization
MNT          maintainedEntity       maintainer
NOK          nextOfKinContact       representedParty
NOT          notaryPublic           politicalEntity
OWN          ownedEntity            owner
PART         part                   whole
PAT          patient                provider
PAYEE        payee                  invoicingParty
PAYOR        invoicePayor           underwriter
POLHOLD      policyHolder           underwriter
PROV         healthcareProvider     identifyingAuthority
PRS          relationshipHolder     personalRelationshipWith
PRSN         presentingPerson       location
PRSV         preservative           preservativeOf
QUAL         qualifiedEntity        qualificationGrantingEntity
RESBJ        researchSubject        researchSponsor
RET          retailedProduct        retailer
RGPR         regulatedProduct       regulator
ROL          player                 scoper
SCHED        schedulableEntity      schedulingEntity
SCHOOL       educationalFacility    identifyingAuthority
SDLOC        location               serviceProvider
SGNOFF       signingAuthority       representedParty
SPEC         specimen               specimenSource
SPNSR        sponsor                underwriter
STAK         stakeholder            entityInWhichStakeIsHeld
STBL         stabilizer             stabilizerOf
STD          student                school
STOR         storedEntity           storageLocation
SUBS         prevailingEntity       subsumedEntity
SUBY         subsumingEntity        subsumedEntity
TERR         territory              governingEntity
THER         manufacturedProduct    manufacturer
UNDWRT       underwriter            underwrittenParty
WRTE         warranteedProduct      warrantingEntity

B.6 Language Runtime
TBD
See Simple Math for a proposal.  Basically, steal JavaScript's Math package, and support a few constants (e.g., PI, E).
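
A minimal sketch of what that runtime surface might look like (illustrative only; the final list of functions and constants would be settled during ballot reconciliation):

  // Hypothetical minimal runtime borrowed from JavaScript's Math object.
  const runtime = {
    PI: Math.PI, E: Math.E,
    abs: Math.abs, floor: Math.floor, ceil: Math.ceil, round: Math.round,
    min: Math.min, max: Math.max, sqrt: Math.sqrt, pow: Math.pow,
  };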

B.7 Extensibility
This language is intended to be extended to support implementation capabilities not specified in this appendix.  Implementations are free to provide additional runtime capabilities (e.g., specialized functions), additional language syntax (e.g., octal representation of numeric constants), or additional semantic constructs (e.g., a ternary conditional operator, such as the C/Java ?: operator).  However, such extensions should not be expected to be interoperable across implementation platforms.


B.8 Notes on Syntax 

  • Whitespace is allowed where you would expect it to be (e.g., between operators and operands).  
  • There are no multi-step expressions. Line breaks have no syntactic meaning since each expression is expected to result in one value in the context of HQMF where these expressions are used.  
  • There are no comments because comments can be included in the XML where the expressions appear using existing XML commenting capabilities.
  • Extra syntactic sugar (such as an optional + sign before a numeric literal to indicate that it is positive) has been eliminated to ensure compatibility across the widest variety of implementation platforms.
  • There are no bit-wise operators because these were not necessary in the HQMF use cases expressed in developing this specification.
  • There are no strings because these were similarly not necessary.
  • There are no unsigned integers because integers sufficed for the computations in the HQMF use cases explored.
  • There was no need to distinguish between float and double, so only one "real" type is specified.
  • Float, unsigned, short and similar variations on numeric types are language optimizations which only further complicate things, so they were dropped.