
Monday, June 1, 2015

Precision Research vs. Precision Care

One of the topics that shows up repeatedly in discussions of Comparative Effectiveness Research is the mismatch between the data quality required for research and the data quality of what EHRs presently capture for patient care, as noted, for example, in this article.

What, then, are the impacts of research on care if the data gathered for care is not as precise as the data used for research?

Because of these differences:

  1. Patients who qualify for an intervention according to EHR data may include patients who wouldn't qualify according to the guideline derived from the research.
  2. Patients who should have qualified under the guideline might be missed because the EHR does not capture the data the guideline requires.
  3. Interventions recorded in the EHR as provided may not be the same interventions the guideline specifies.
  4. Interventions that were appropriate according to the guideline might not be captured by the EHR in a way that allows them to be recognized as appropriate.
  5. We very likely aren't capturing the outcomes in either case, and where we are, that capture likely suffers from the same challenges.
So we have noise that introduces variability in who gets treated, and in accurately capturing who was treated and what their outcomes were; the sketch below illustrates the first two effects.
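
To make the first two effects concrete, here is a minimal simulation sketch. Every number in it (population size, eligibility rate, the EHR's sensitivity and specificity) is a made-up assumption for illustration, not a figure from any study:

```python
import random

random.seed(42)

N = 100_000          # simulated patient population (hypothetical)
P_ELIGIBLE = 0.10    # true fraction who qualify under the guideline (assumed)
SENSITIVITY = 0.80   # chance the EHR correctly flags a truly eligible patient (assumed)
SPECIFICITY = 0.95   # chance the EHR correctly clears an ineligible patient (assumed)

truly_eligible = [random.random() < P_ELIGIBLE for _ in range(N)]

# The EHR's view of eligibility: eligible patients are flagged with probability
# SENSITIVITY; ineligible patients are wrongly flagged with probability 1 - SPECIFICITY.
flagged_by_ehr = [
    (random.random() < SENSITIVITY) if eligible else (random.random() > SPECIFICITY)
    for eligible in truly_eligible
]

missed = sum(e and not f for e, f in zip(truly_eligible, flagged_by_ehr))
wrongly_included = sum(f and not e for e, f in zip(truly_eligible, flagged_by_ehr))

print(f"Truly eligible but missed by the EHR: {missed}")
print(f"Ineligible but flagged by the EHR:    {wrongly_included}")
```

With these hypothetical rates, roughly 2,000 of the 10,000 truly eligible patients are missed, about 4,500 ineligible patients are flagged, and more than a third of the EHR-flagged population doesn't actually qualify.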

My question is: if research indicates the number needed to treat (NNT) is, say, 50, what is it really, given the differences between theory and practice? Is the promise of all of this precision in medicine real? If not, what needs to happen to make it so?
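
As a back-of-the-envelope continuation of the sketch above (again, every rate is an assumption, not a measured value): NNT is the reciprocal of the absolute risk reduction, so anything that dilutes the measurable effect inflates the apparent NNT.

```python
TRUE_NNT = 50              # NNT reported by the research, among patients who truly qualify
ELIGIBLE_FRACTION = 0.64   # fraction of EHR-flagged (treated) patients who actually
                           # qualify (hypothetical; roughly what the sketch above yields)
OUTCOME_CAPTURE = 0.80     # fraction of outcome events the EHR records (assumed)

true_arr = 1 / TRUE_NNT    # absolute risk reduction among truly eligible patients

# If the intervention does nothing for misidentified patients, the risk reduction
# averaged over everyone treated is diluted by the eligible fraction...
diluted_arr = true_arr * ELIGIBLE_FRACTION

# ...and if outcome events go unrecorded at random, the measurable reduction
# shrinks by the capture rate as well.
measured_arr = diluted_arr * OUTCOME_CAPTURE

print(f"Published NNT:            {TRUE_NNT}")
print(f"Apparent NNT in practice: {1 / measured_arr:.0f}")   # about 98
```

Under these assumed rates, the published NNT of 50 behaves like an NNT of roughly 98 in practice, and that is before points 3 and 4 above add further noise.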

1 comment:

  1. Somewhere between one and ten billion, of course. The quality of the statistical work I've seen published ranges from staggeringly bad to pretty good. Given what you've said, the question cannot be answered.

    The magnitude of the impact of measurement variance depends profoundly upon the statistical basis of the underlying experiments and conclusions. So again, it's unanswerable in the absence of some clues.
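
    One textbook instance of that dependence (a standard errors-in-variables result, supplied here as an illustration rather than anything specific to the studies in question): if the quantity you care about is x but you observe x + u, with u independent measurement noise, the estimated regression slope converges to

    $$\hat{\beta} \to \beta \cdot \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2}$$

    so when the noise variance is half the signal variance, the estimated effect shrinks by a third before any design flaw even enters.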

    A lot of this stems from three major problems, and the result is cargo cult science.
    1) Most clinicians went into medicine in part because they hate math. They have no feel for it. And their exposure to statistical education worsens the hatred. Most statistical education is based on the incorrect assumption that the student wants to acquire the skills needed to design new statistical methods. So they teach things like deriving the maths behind regression analysis. Those courses merely confirm that statistics is deserving of hatred.
    2) Medical schools have not created courses in "how to use statistical methods". Many clinicians might continue to hate statistical methods, but teaching what the methods do and how to use them need not be an exercise in mental torture like deriving proofs. Books like Mastering Metrics by Angrist and Pischke show how it can be less painful. The absence of such courses means that only the exceptional clinician understands statistics.
    3) Journals and reviewers do not demand that the base information needed to determine statistical strength, etc., be published. So there is no reward for even making an effort; it doesn't improve the odds of getting published.

    The result is that most published reports are similar to cargo cult airfields: they create the appearance of statistics without any of the substance.
