Monday, December 13, 2010

Prioritizing ONC Initiatives for HealthIT and MeaningfulUse

Recently ONC published a request for feedback on the Standards and Interoperability Framework Prioritization process and proposed initiatives. An overview of the S&I framework can be found here (pdf).

They ask two key questions on the prioritization criteria (xls):
  • Are the current criteria appropriate and sufficient to evaluate Initiatives?
  • Are there additional criteria within the four categories that should be included?
And then they ask you to assess the initiatives (ppt) based on your own weights using the spreadsheet.

The S&I Prioritization framework is a good start. It provides some process on making decisions about how initiatives are to be prioritized, but doesn’t get into several details that are of interest to the healthcare industry. Notably absent from this spreadsheet is who gets to participate in these evaluations and how. A prioritization process that does not include affected stakeholders is of limited value.  There needs to be more detail added to the prioritization framework to determine how stakeholder input is provided.

The prioritization spreadsheet represents a decision-making workflow that should be addressed in stages. Relevance should not be weighed against or alongside feasibility. If the project isn't relevant, then feasibility doesn't matter, and vice versa.

The scoring and weighting are completely unspecified in the priorities. It would have been better to provide some guidance in this framework so that it could be spit on or applauded; having nothing at all is not the best way to get feedback. As for having both scores and weights, I'd simply drop one or the other altogether. The scores ought to be used to assess a SINGLE initiative, not to compare two or more against each other.  Scoring should be relatively straightforward and something that the committee can reach consensus on.  A five-point scale (Low, Low-Medium, Medium, Medium-High, High), or similar, is easiest to use to reach consensus.  The scores are used to facilitate final decision making, not to "make" the decision for you.  This leaves room for individuals to weight the importance of the activities themselves.

These initiatives cannot really be compared to each other; they are part of a portfolio of initiatives. An initiative that completely nails ONE goal, at low cost, with a high chance of success, and with few resources SHOULD be strongly considered. But one that hits ten goals, costs five times as much, and has a 50% chance of success could score as equivalent to the former under a linear weighting system, yet should be INTENSELY reviewed before adoption.
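To make the pitfall concrete, here is a minimal sketch with made-up numbers and weights (none of them come from the ONC framework): a linear weighted sum can rate the small, safe initiative and the large, risky one identically, even though they call for very different levels of review.

```python
# Hypothetical illustration: a linear weighted score hides the difference
# between a small, safe initiative and a large, risky one.

def linear_score(goals_met, success_prob, cost,
                 w_goals=1.0, w_success=10.0, w_cost=1.25):
    """Weighted-sum score; the weights are illustrative, not from ONC."""
    return w_goals * goals_met + w_success * success_prob - w_cost * cost

# Initiative A: nails ONE goal, cheap, 90% chance of success.
a = linear_score(goals_met=1, success_prob=0.9, cost=1)
# Initiative B: hits ten goals, costs 5x as much, 50% chance of success.
b = linear_score(goals_met=10, success_prob=0.5, cost=5)

print(a, b)  # both come out to 8.75 -- the model cannot tell them apart
```

The point is not these particular weights: for almost any linear weighting, some pair of very different initiatives will tie, which is why scores should assess one initiative at a time rather than rank the portfolio.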

The first stage of the evaluation should assess the relevance of the initiative. Initiatives that are not relevant need no further evaluation. Relevance of an initiative should also account for subsequent initiatives which it enables. In IHE, relevance of profile proposals is addressed by the planning committee. For the S&I framework, these might be assessed by the HIT Policy Committee.

The next stage of evaluation should address feasibility. Initiatives which cannot be done because they are not yet feasible should be postponed until such time as they become feasible, and a reassessment of relevance should be done at that time also. For initiatives which are not feasible, a question that should be asked is what could make this initiative feasible. That question may identify enabling initiatives which should be done first. In IHE, the feasibility is assessed by the technical committee. In the same vein, feasibility for S&I would be assessed by the HIT Standards Committee.

Relevance and feasibility feed into a third decision-making step. That step compares the costs of proceeding with the project (which can be assessed during the feasibility phase) to the expected benefits (which can be assessed during the relevance phase). There really isn't any tab in the ONC S&I Prioritization Framework that addresses cost/benefit or return on investment. The costs should also look at opportunity cost. The prioritization framework must assume finite resources to complete initiatives. Executing one initiative may consume resources needed for another (opportunity cost). It should also consider what other initiatives might be enabled (benefits) by a project. There needs to be some guidance in the framework to determine the extent of resources available for execution. Cost/benefit and ROI decisions should be jointly assessed based on relevance and feasibility.  Aligned with cost/benefit or ROI evaluation is a determination of the likelihood of effectiveness. Some of that is described in the usability/accountability tab. If the question of effectiveness cannot be answered, then research or pilots should be done first, prior to starting a large project.
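The staged workflow argued for above can be sketched as a simple gate sequence. This is a hypothetical illustration only: the function, thresholds, and field names are mine, not part of the ONC framework, and the committee assignments follow the IHE analogy in the text.

```python
# Hypothetical sketch of staged evaluation: relevance gates feasibility,
# and both feed the final cost/benefit decision. Thresholds on the
# five-point scale (Low .. High) are illustrative.

def evaluate(initiative):
    # Stage 1: relevance (by analogy, the HIT Policy Committee).
    # Initiatives that are not relevant need no further evaluation.
    if initiative["relevance"] in ("Low", "Low-Medium"):
        return "rejected: not relevant"
    # Stage 2: feasibility (by analogy, the HIT Standards Committee).
    # Not-yet-feasible work is postponed, and enabling work identified.
    if initiative["feasibility"] in ("Low", "Low-Medium"):
        return "postponed: revisit when feasible; identify enabling initiatives"
    # Stage 3: cost/benefit, using inputs produced by the first two stages.
    if initiative["benefit"] <= initiative["cost"]:
        return "deferred: benefits do not justify cost"
    return "approved"

print(evaluate({"relevance": "High", "feasibility": "Medium",
                "cost": 3, "benefit": 5}))  # approved
```

The design point is the short-circuiting: feasibility is never debated for an irrelevant initiative, and cost/benefit is only weighed once both earlier gates pass.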

This is essentially the same thing that any good organization goes through in prioritizing projects to be completed. Will this project meet the needs of our organizational stakeholders and their mission? Can it be done? Are the benefits worth the cost (what is the ROI)? Will it be useful and effective?

If you have looked at the spreadsheet supplied by ONC, you’ll note that I’ve addressed three of their four areas, and did not address Evidence-Based Medicine and Research Support. These are simply questions focused on relevance of the initiative, as focused by existing EBM and Research initiatives. I see no need to call these out separately.

So, here are the changes I recommend to the Prioritization Framework:
  1. Group everything on Relevance in one place (including applicability to EBM and Research Goals)
  2. Add a section on costs and potential savings / benefits. It may be hard, but any good organization estimates costs and benefits before initiating projects.
  3. Develop a way to determine available resources, and ensure that each project / initiative specifies resource needed for success, INCLUDING volunteer resources.
  4. Ensure that the prioritization process includes adequate industry input from providers, payers, vendors and consumers. Each project should have input from affected industry stakeholders, not just assessments from HIT FACAs.
  5. Drop weights, and use simpler scoring criteria, recognizing that weights are subjective and that initiatives cannot be compared to each other.

There are two things that the prioritization process also needs to account for.  One is that there must be an opportunity to say NO to an initiative.  HITSP never had that opportunity and was expected to "scale up" 100% year over year.  If NO cannot be said to a proposed S&I initiative, then the same problem will appear there.  Also, S&I needs to account for, and allow, initiatives to fail.  Failures teach as much as, or more than, successes.  If we aren't failing, we aren't trying hard enough.

Tomorrow (or Wednesday if I run out of time), I'll post my assessment of the initial proposals...


  1. There needs to be at least one feedback loop to judge progress against plan, budget, resources, and stakeholder engagement. Breakdowns in those areas should cause reevaluation and plan modification, if not cancellation. Without such feedback there is no incentive to make accurate assessments up-front.

  2. I'm reviewing the proposed initiatives now. I note that the spreadsheet also asks for the applicable stage (1-3). That's a scheduling question, not a question of relevance ... if you agree to do it, that could be input to planning for when.