
Thursday, October 13, 2011

On the Xerox Expert Review Usability Criteria

This is my last post for a little while on the NIST Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records (NISTIR 7804) report.  In this post I'm going to look at the criteria in Appendix B in detail.  Appendix B is the form for expert review which is part of the process described in section 6 of the report.

So, if I'm talking about the NIST report, why is Xerox in the title?  As it turns out, a lot of this material comes from Xerox work developed back in the mid-90's.  Back then, Xerox PARC was a bit past its heyday as a powerhouse in this space, having essentially invented the GUI back in the early 80's, but failing to capitalize on it.

But back to the main point:

These are the various subsections of the appendix, with my overall comments on each (as I previously mentioned, there are 198 items to be reviewed in this appendix).  If you want the executive summary, skip to the end.
  1. Use Errors
    1. Patient Identification
      1.A.3 When a second user opens a patient's chart, is the first user logged out?  It isn't clear what is meant by this: a second user logging in at the same workstation, or at a different one?  Clearly there need to be cases where two users are logged in at different workstations doing different things with the same patient, but this criterion doesn't make clear which scenario it is talking about.
      1.A.5 When an application (e.g., imaging) is opened from within the EHR, does the display have a title with an accurate, unique patient identifier?  This one is dicey.  The EHR as a platform has to have a way to launch secondary applications.  The content of secondary applications is NOT under control of the EHR.  It may or may not be patient specific.  This is a good test, but only when both apps are under common control and integration is tight instead of loose.
      1.A.6 If an action will cause data to be destructively overwritten with another patient's data, is the user alerted?  In EVERY EHR I've ever seen, this is NOT possible.
      1.A.7 If there are other patient records with highly similar identities, (e.g., Jr., Multiple birth patient, same first and last name), is the user alerted?  OK, I must not get this.  In ALL EHRs I've ever seen, to select a patient by name, you will get a list of patients with similar names, so sure.  But when selecting by patient id/MRN, you won't ever be asked if you meant another patient with a similar name (or ID or MRN for that matter).
      1.A.8 If multiple records for a patient have been merged, are users provided feedback about this action?  What does this mean?  Is it being suggested that after a merge of two patient IDs, the record should somehow display a flag indicating a merge was performed?  A "merge" is a correction process that addresses an existing mistake (a patient not matched correctly).  But that mistake was not addressed in the "patient matching" step to begin with.
      1.A.9 If information is copied from the record of one patient and pasted to another, is feedback provided to anyone viewing the record that the information was pasted from the record of a different patient?  OK, what problem are you trying to solve with this one?  I don't know of a system that makes this easy in any way, and if someone actually takes the steps necessary to do it, they probably have a very good reason.
      1.A.10 If information is copied from a previous note of the same patient, is feedback provided to anyone viewing the record what information was pasted without further editing?  This is such a common occurrence that I'm certain the mitigation is worse from a usability perspective than the danger it might be trying to prevent.  Every time your provider asks you "has there been any change to your family history", and based on your response of "no" the family history is copied from the current family history to the current note, there would be an indicator.  Perhaps a (no change) indicator would be sufficient, but even there, I wonder if it is necessary.  What is the risk that "feedback is provided to anyone viewing the record that the information was pasted without further editing" is trying to mitigate?  That isn't clear, and without clearly identifying the risk, the mitigation seems problematic.  At the very least, some users will find it annoying, and that goes against other principles of usability.

      In general, it seems that there is a concern in this section that computers make it easy to commit copy-and-paste errors, but in the context of an EHR, I don't see this happening easily in any way across patients.  In fact, copy/paste could easily result in incorrect data from one field being placed in another for the same patient.  There are some mitigation strategies identified in this set of criteria that seem less than ideal (e.g., flagging data copied from another record).  A simple mitigation strategy would be to clear copy/paste buffers between patient selections, so that you can ensure that copy/paste of data from one patient to another cannot occur easily.  The separation of "mitigation strategy" from "risk" is important.  A single risk can have several mitigation strategies, and different strategies may be appropriate for different situations and UI designs.
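      As a sketch of that buffer-clearing mitigation (the class and method names here are mine, not from the report): an application-level clipboard scoped to the current patient is wiped whenever the patient context changes, so cross-patient paste becomes impossible by construction rather than something to flag after the fact.

```python
class PatientContext:
    """Hypothetical application-level clipboard scoped to one patient.

    Clearing the buffer on every patient switch prevents cross-patient
    copy/paste by construction, instead of warning about it later.
    """

    def __init__(self):
        self._patient_id = None
        self._clipboard = None

    def select_patient(self, patient_id):
        if patient_id != self._patient_id:
            self._clipboard = None  # mitigation: wipe buffer on switch
            self._patient_id = patient_id

    def copy(self, text):
        self._clipboard = text

    def paste(self):
        return self._clipboard
```

      Within one patient's chart, copy/paste still works normally; only a change of patient empties the buffer.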


      One common problem in patient identification is when names change between registration events, either because of an initial entry error, or a legal name change, or even something as simple as using a common vs. legal name.  I had a problem with this recently with an ePrescription because what I call my daughter is not how my insurer knows her.  This is a "process issue" during patient registration, where the registrar needs to be sure to get the legal name, and there may be mitigation strategies to address this "use error" that use name and other demographic matching to identify already registered patients who might be the same person (e.g., same name part, address and birth date).

      On patient merge, I think there are three issues: first, when two patients are merged, there should be a clear way for the user performing the merge to see what the impacts of the pending merge will be; second, there should be a way to "undo" that potentially destructive operation (a general guideline for usability); and third, there should be some way to determine the various identities under which a record was previously known.  I'm not sure that "flagging" a merged record is the best way to accomplish these goals, because I'm not actually clear on what the potential "problem" is that the flag is trying to mitigate.
    2. Mode Errors
      1.B.7 Are test actions separated from production actions (e.g., test accounts used rather than test modes for new functionality)?
      I don't see this as a usability issue, but as a deployment, configuration management and process issue.  There are test environments, test users and test patients in many organizations, and each has a different purpose.  You don't want to mess with real patient data to verify a new deployment into the production environment, but you do have to be able to verify a functional production environment.  Appropriate use of test patients also allows new users in a production environment to verify that they can indeed access that environment with their new account.
      1.B.8 Are special modes (e.g., view only, demonstration, training) clearly displayed and protected against any patient care activities being conducted in them on actual patients?
      Here, I would put the ? after "clearly displayed" and stop.  With regard to view only/demonstration/training systems being protected against any patient care activities being conducted on actual patients, this is really a deployment issue.  View/demonstration/training systems shouldn't be readily accessible from environments where patient care is being performed.   But, that relies on the organization deploying these environments to address these issues, not the EHR.  The EHR has NO WAY to tell whether there is a real doctor and patient at the other end, or a trainer and new users.
    3. Data Accuracy
      1.C.8 If numbers are truncated to a smaller number of digits, is feedback provided immediately to the user and to others who view the information at a later time?
      The first question that comes to mind is whether it is appropriate for the EHR to limit the precision of an entered result.  E.g., if the nurse enters a temperature of 98.65 and the system rounds it to 98.7, and the temperature-measuring equipment is only precise to a tenth of a degree, this seems appropriate.  So, immediate feedback is easy.  The next question is what is recorded when the nurse saves the record: 98.7 or 98.65?  If 98.7, how would the system know (and would it even be appropriate for the system to report on truncated precision), and is this even a good idea?
      From a usability perspective, data should only be able to be entered to the precision that the system can save.  I don't like "automatic" precision reduction or rounding when I enter a number.  Basically, the system just told me I wasted my time entering that extra digit.  BTW: you'll note that I use the term precision, when the criterion talks about truncation.  A system had better not truncate 1000 to 100, so I assume that this criterion is talking about precision adjustments.  A better way to state this criterion is:
      1.C.8 If the precision of an entered value is adjusted by the system, is this adjustment appropriate, and if so, is it shown to the user before the information is saved?
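      A minimal sketch of that restated criterion (the function name and message text are illustrative, not from the report): detect when an entered value carries more precision than the system can store, and surface the adjustment before saving instead of rounding silently.

```python
from decimal import Decimal, ROUND_HALF_UP

def check_precision(entered: str, stored_places: int):
    """Return (stored_value, warning) for a numeric entry.

    If the entered value has more decimal places than the system can
    store, the rounded value is returned along with a warning string
    to show the user *before* the record is saved.
    """
    value = Decimal(entered)
    quantum = Decimal(1).scaleb(-stored_places)  # e.g. 0.1 for one place
    stored = value.quantize(quantum, rounding=ROUND_HALF_UP)
    if stored != value:
        return stored, f"Entered {entered} will be stored as {stored}"
    return stored, None
```

      For the temperature example above, entering 98.65 into a system that stores one decimal place would yield 98.7 plus a warning the user can confirm or correct before saving.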
      1.C.9 If outdated orders are automatically removed from the display, is feedback provided to the user?
      This one clearly came out of someone's experience where this particular situation occurred.   What is the purpose of this display?  What is the purpose of removing the outdated order from it?  Given this data, is it appropriate to remove information?  If so, is it appropriate to alert the user?  I don't know because the risk really isn't identified, only a mitigation.
      1.C.10 If a user enters the date of birth of a patient instead of the current date, is the user alerted?
      1.C.11 If a user enters the current date as the patient's birth date, is the user alerted?
      I'd combine these two into one criterion:  Are dates checked to ensure that they are reasonable values for the given situation, and if not, is the user alerted?  In a labor/delivery system, quite often the DOB and today are the same.
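      A sketch of that combined reasonableness check (the function and field names are my own illustration): rather than hard-coding the two DOB cases, validate each date field against bounds that make sense for its context.

```python
from datetime import date, timedelta

def check_reasonable_date(value: date, earliest: date, latest: date,
                          field: str):
    """Return a warning if a date falls outside the plausible range
    for its context, else None.

    The range is context-specific: an encounter date is usually
    within the last few days, while a birth date equal to today is
    perfectly reasonable in a labor/delivery setting.
    """
    if not (earliest <= value <= latest):
        return f"{field} {value.isoformat()} is outside the expected range"
    return None
```

      Both 1.C.10 and 1.C.11 fall out of this single check: a birth date entered where today's date was expected (or vice versa) lands outside the configured window, while DOB equal to today passes whenever the window allows it.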
    4. Data Availability
      1.D.3 If the content of an unsigned note is only available to the provider who wrote the note, are other providers aware that there is an unsigned note?
      Sure, and in some systems, they can even read it too, but it's clear that the note is not yet signed, and the chart is still open.  This allows two providers to confer on the same patient from different locations (+1 for usability), or another provider to finish the note started by a first provider (e.g., in cases where the first provider is no longer available to complete the care) (another +1 for usability), or for an attending to review the work of another provider (another +1 for usability).  And in other cases, organizations have policies that say, nope, these are completely hidden.  It depends on how the system is used, and what the policies of an organization are on this.
      I'd rewrite this as:
      1.D.3 Are the contents of unsigned notes clearly identified as being notes in progress, and only accessible to appropriate users according to organizational policy?
    5. Interpretation
      1.E.4 Are generic or brand names of medications used consistently?
      The list of medications used by an organization is almost always configured according to that organization's requirements, and can also be influenced by requirements of the payers and PBMs they work with.  That may mean consistent or inconsistent use of generic/brand names in the formulary that is completely outside of the control of the EHR vendor.  Yes, it is a good thing to look for, but that's not an EHR usability issue, it's one of how medicine is practiced.
      1.E.5 Is terminology consistent and organized consistently (e.g., clinical reminders use What/When/Who structure)?
      Lost me there.  What/When/Who structure is clearly a UI guideline I should know about somewhere, what's the reference?  On terminology, uhm, that's Federally mandated for a lot of stuff:  NDC, ICD, CPT, SNOMED CT, RxNORM, et cetera.  EHR vendors DO NOT control those terminologies, the organizations that maintain them do.  
      In other cases, user interface terminology is under control of the organization using the EHR (see EHR is a platform)
      1.E.6 Are negative structures avoided (e.g., Do you not want to quit?)
      Who built that form?  Again, see EHR is a platform.
      1.E.7 Are areas of the interface that are not intended for use by certain categories of users clearly indicated?
      Nope.  There is no indication at all that this area of the interface even exists if you aren't intended to use it.  But wait, see also: EHR is a platform 
    6. Recall
      Is auto-fill avoided where there is more than one auto-fill option that matches?
      The risk as I can best determine it, is that automatic fill could be the source of possible errors which could be patient safety risks.  But it is also a way to speed the user through the system.  Once again, from a usability perspective, what is the risk this mitigates?
      1.F.6 Are attempts to save a patient record under a different name warned of the risk of destruction of patient data?
      Really?  I've never seen a system that supports that possibility.  I don't doubt they might exist, but I don't usually see "File Save As..." in EHR Menus.
    7. Feedback
      1.G.1 Do automated changes to medication types, doses and routes trigger feedback to the user and the ability to easily undo?
      1.G.4 Do automated changes to medication orders (to match the formulary or for economic reasons) trigger feedback to the user?
      I would generalize these two.  1.G.1: Are user-entered fields changed by the system, and if so, are the normalizations of field values appropriate, and does the user have the opportunity to see the changes before the information is saved?  This doesn't apply just to medications, although that might be a common example.  It can also apply to other entry fields, including test and procedure orders, diagnoses, et cetera.
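      A sketch of that generalized criterion (the function and the example normalizers are hypothetical): collect every system-applied change as a visible diff for the user to confirm before the record is saved, rather than altering values silently.

```python
def normalize_fields(entry: dict, normalizers: dict):
    """Apply per-field normalizers and report what changed.

    Returns the normalized entry plus a list of (field, before, after)
    tuples so the UI can show the user every system-applied change
    before saving, instead of altering values silently.
    """
    normalized, changes = {}, []
    for field, value in entry.items():
        fn = normalizers.get(field)
        new_value = fn(value) if fn else value
        if new_value != value:
            changes.append((field, value, new_value))
        normalized[field] = new_value
    return normalized, changes
```

      The same pattern covers formulary substitutions, dose unit standardization, or diagnosis code normalization: any non-empty change list triggers a review step before the save completes.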
      1.G.3 Are automated merges of patient record data done with sufficient feedback and active confirmation from the user?
      I had to stop and think about what they meant by automated merges here.  I presume they mean cases where the system automates the process of merging two records selected by the authorized user.  This also seems to duplicate 1.A.8.
    8. Data Integrity
      1.H.3 Is it possible for corrupted backup data to update accurate patient information permanently?
      It depends on whose backup hardware/software that the organization uses.  If your backup is copying /dev/hd1 to /dev/tape, and your restore is the reverse, you got what you paid for, and corrupted backup data could update patient data.  Now, some EHR systems come with database and backup solutions built into the cost, and others do not, and some vendors offer solutions either way.  How is this a usability issue?  What is the EHR expected to do? 
  2. Visibility of System Status
    No comments worth reporting.
  3. System/Real World Match
    3.9 Have uncommon letter sequences been avoided whenever possible?
    Say what?  I'm not understanding what this is trying to address.
    3.10 Does the system automatically enter leading or trailing spaces to align decimal points?
    3.11 Does the system automatically enter commas in numeric values greater than 9999?
    This goes back to 1.G.1.  This is data normalization, and one needs to determine whether it is appropriate or not given the context.
    3.13 Has the system been designed so that keys with similar names do not perform opposite (and potentially dangerous) actions?
    This one had me scratching my head.  Page Up/Page Dn are the most similarly named keys on my keyboard, and they do the exact opposite things.  I think this might have come from medical devices with custom keyboards. 
  4. User Control and Freedom
    4.6 Can users reduce data entry time by copying and modifying existing data?
    See comments at the end of section A above.
    4.12 Can users set their own system, session, file and screen defaults?
    Perhaps, but there are competing issues here with respect to consistency of interface, user training, et cetera.  I once tried to do tech support with an Excel user who used Lotus 1-2-3 interface keys.  What a nightmare.  Too much user customization can be a real "usability" problem from other perspectives (see my Dr. Bob/Dr. Smith story in EHR is a Platform; names changed to protect the guilty).
  5. Consistency and Standards
    5.11 Are menu items left-justified, with the item number or mnemonic preceding the name?
    While they are left-justified, the mnemonic is underlined, as defined in a 1987 specification for common user access to computer applications.  Is that bad?
  6. Recognition, Diagnosis and Recovery
  7. Prevention
    7.1 If the database includes groups of data, can users enter more than one group on a single screen?
    First, I'm not sure why this belongs in the "prevention category".  Second, "if the database includes groups of data" should be, "if there are logical groups of data".  You really don't know what the "database" contains.
    7.2 Have dots or underscores been used to indicate field length?
    Uh, not since glass CRT days.
    7.8 Does the system intelligently inter variations in user commands?
    I don't know what that means, but I suspect that if 6.7 had been followed, it would have said "interpret".
  8. Recognition Rather than Recall
    8.6 Are long columnar fields broken up into groups of five, separated by a blank line?
    Uh, not since glass CRT days.
  9. Flexibility and Minimalist Design
    9.3 Do expert users have the option of entering multiple commands in a single string?
    Uh, not since glass CRT days. 
  10. Aesthetic and Minimalist Design
  11. Help/Documentation
    11.4 Is the help function visible; for example, a key labeled HELP or a special menu?
  12. Pleasurable/Respectful Interaction
    12.8 Do the selected input device(s) match environmental constraints?
    12.9 If the system uses multiple input devices, has hand and eye movement between input devices been minimized?
    12.10 If the system supports graphical tasks, has an alternative pointing device been provided?
    12.11 Is the numeric keypad located to the right of the alpha key area?
    12.12 Are the most frequently used function keys in the most accessible positions?
    Uh, who supplied this hardware?  Most often it is the organization using the EHR.  This really all needs to be reworded.
    12.8 Is the system designed to support appropriate input hardware for the tasks being performed?
    One of the sites I visited a while back ONLY used a numeric keypad and appropriate training for input.  I was told that from a usability perspective, the investment in SIMPLE hardware, and well designed interface with it was a GREAT improvement in usability, user satisfaction, and error rates.
  13. Privacy
    13.1 Are protected areas completely inaccessible?
    13.2 Can protected or confidential areas be accessed with a certain password?
    Putting these two right next to each other makes it clear that answering "yes" to both isn't consistent.  Add "under normal circumstances" to the end of the first question.  13.2 needs to be rewritten too, because it assumes "password protection", when many other mechanisms could also be appropriate:
    13.2 Are there mechanisms that allow protected or confidential areas to be accessed when necessary?
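    One such mechanism (a sketch; the "break-glass" pattern and all names here are my example, not the report's) is emergency access that succeeds but is audited for after-the-fact review, so protected areas are inaccessible "under normal circumstances" yet reachable when necessary.

```python
import logging

audit_log = logging.getLogger("audit")

class User:
    """Minimal stand-in for a user with area-level authorizations."""
    def __init__(self, name, areas):
        self.name = name
        self.areas = set(areas)

    def is_authorized_for(self, area):
        return area in self.areas

def access_protected_area(user, area, emergency=False):
    """Gate access to a protected or confidential area.

    Under normal circumstances only authorized users get in; an
    emergency ("break-glass") override grants access but writes an
    audit entry for after-the-fact review.
    """
    if user.is_authorized_for(area):
        return True
    if emergency:
        audit_log.warning("break-glass access by %s to %s",
                          user.name, area)
        return True
    return False
```

    Whether a password, role check, or some other credential backs `is_authorized_for` is then an implementation detail, which is exactly the generality the rewritten 13.2 asks for.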
Overall, I found a number of issues within this appendix.

The two that made me want to scream were:
  • 5.11 Are menu items left-justified, with the item number or mnemonic preceding the name?
  • 7.2 Have dots or underscores been used to indicate field length?
I've not seen UI guidelines like these for more than two decades, and they were applicable to very different kinds of hardware and operating systems than those in use today.

Some recommendations:
  1. Find some updated materials to reuse.  Content copyrighted in 1995 is just not recent enough. 
  2. Discuss distribution of usability items to hardware, platform and applications in the Expert Review.
  3. Reference common UI guidelines for various platforms, and suggest that one of the expert review criteria be:  Is there a UI guideline for A) the platform, B) the EHR, and is it consistently followed. 
  4. Take a more risk based approach.  Describe the usability issues (risks) that need to be considered, instead of specific mitigations which may or may not be appropriate depending on platform for those items that must be addressed by the EHR.
An expert review of usability features for particularly common hardware and platforms would be especially beneficial to industry once we have a good set of review criteria.  It would help identify issues that need to be addressed by the EHR because they are not well addressed by the platform upon which it runs (e.g., multiple overlapping windows on the iPad).

I won't get into some other feedback I've seen on problems with Expert reviews.  Better minds than mine follow that sort of research.  I've done my bit with this series, and will be happy to get back to my other duties.

Note:  I've hammered this report hard.  I know it.  Initially I planned on doing a quick summary like I usually do for these things, and a more detailed summary to follow up.  The deeper I dug, the digger I deeped.

The Top 10
For those who've not been following all my posts on the NIST Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records (NISTIR 7804) report, let me go back and summarize my top 10 points:
  1. The EHR is a platform that enables providers to practice medicine.  It can be customized to address a wide variety of workflows.  Many of the customizations can introduce usability issues, but when you let people write code, it's dang near impossible to control the outcome.
  2. The rationale for usability testing of EHRs because "usability" is the primary barrier to adoption is not supported by the available evidence.  Adoption of EHRs outside the US is far in advance of that in the US, and either Europe and Asia have huge advantages over the US with respect to usability, or something else is at work.
  3. Usability is still an important factor, because failures in usability are a key indicator of other problems which can contribute to EHR implementation failures, just as in any other IT deployment project.
  4. When identifying adverse events, a good taxonomy is needed, such as that suggested by JCAHO.
  5. Validation testing through user studies can be very costly.
  6. Usability for safety and usability for market adoption are different things.  Government experience is in the former; some parts of private industry (e.g., Apple) do well with the latter.
  7. The expert review criteria, while less expensive to apply, cannot rely on 15-year-old guidelines (this post).
  8. Review guidelines should address risks, rather than detailed best practices, and existing guidelines should be reused where possible (this post).
  9. Segmenting issues to hardware, platform and application can make expert review scale (this post).
  10. There are other issues with expert review that I'm not an expert on. 
