I've been thinking about precision medicine and standards a lot lately for my day job. One of the things I have to work out is how to prioritize what to work on, and how to develop a framework for that prioritization.
I like to work by evidence (objective criteria) rather than eminence (subjective criteria), and so I need some numbers. Which means that I need measures. In any process improvement effort, there are three fundamental kinds of measures that you can apply.
- Structure: measures that demonstrate appropriate systems, structures, and processes are in place to support the outcome.
- Process: measures that demonstrate the processes supporting the outcome are being executed.
- Outcome: measures of the outcome itself.
But, in my world, they also rank by time over which they are implemented. You have to first have the infrastructure deployed (which means it must be available with support for the needed technology), and then the workflow designed, and the processes implemented and executed before you will see changes in outcomes.
Let me give you an example. If you want to measure the impacts of Sexual Orientation and Gender Identity (SOGI) on health outcomes, you need:

1. the standards to record that information readily available (structure),
2. processes designed to support the capture of that information (also a structure measure),
3. those processes implemented and executed (process measures),
4. standards designed to exchange that data (structure),
5. the software available to perform that exchange (structure),
6. that software configured to perform the exchange (process), and
7. exchange of data that includes SOGI actually occurring (process).

And then you can get to outcomes.
This is a pipeline with 7 segments that need to be completed before we can use SOGI data in research (the desired outcome). Work on any of these segments can proceed in parallel, BUT that's challenging to coordinate. Some of the work has already been done at the federal level to promote use of the standards in federal reporting for HRSA Health Center grantees (a.k.a. Federally Qualified Health Centers), but it hasn't been included in requirements for other healthcare providers. For example, birth sex (a signal that SOGI data is surely relevant) is part of the Common Clinical Data Set (CCDS, now known as USCDI) that MUST be able to be exchanged by EHR systems, but the SOGI data itself is NOT part of that definition. So while SOGI data may be available in an EHR, neither the processes to capture it nor the C-CDA documents being exchanged may actually do anything with it.
There are two missing segments in this seven-segment pipeline. The necessary standards are there, the mechanism of exchange is there, and the process for exchange exists, but implementing the workflow to capture the data isn't incentivized in any way by current regulations (which focus on USCDI exchange), nor is exchange of that data mandated.
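To make the missing-segment picture concrete, here's a minimal sketch in Python. The segment names, their structure/process classification, and the done flags are my own reading of the example above, not an official model:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    kind: str   # "structure" or "process"
    done: bool

# Hypothetical status of the seven-segment SOGI pipeline.
sogi_pipeline = [
    Segment("recording standards available", "structure", True),
    Segment("capture workflow designed", "structure", True),
    Segment("capture workflow executed", "process", False),   # not incentivized
    Segment("exchange standards available", "structure", True),
    Segment("exchange software available", "structure", True),
    Segment("exchange software configured", "process", True),
    Segment("SOGI data actually exchanged", "process", False), # not mandated
]

def missing(pipeline):
    """Segments still blocking the outcome."""
    return [s.name for s in pipeline if not s.done]

def outcome_reachable(pipeline):
    """Outcomes only become measurable once every segment is complete."""
    return all(s.done for s in pipeline)
```

With these flags, `missing(sogi_pipeline)` names the two broken links, and `outcome_reachable(sogi_pipeline)` stays false until both are fixed.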
So I can identify where the work needs to be done, and I can even assess (measure) how much work that is.
Now, compare this work to the effort associated with capturing data from wearables (e.g., blood pressure, heart rate, physical activity, blood glucose, sleep cycles, et cetera). There are a lot more missing segments here. I know what that work is, and I can also estimate how much of it there is.
Now, suppose for the sake of argument that I needed to choose just one of these to work on. How would I justify one over the other? NOTE: This is completely for the sake of argument, and I've set up an arbitrary A/B scenario. There's a reason for this. If I can build a framework for making that decision, I can extend that framework in ways that allow me to make decisions about how to SPLIT the work and how much to invest in each. That's just how math works.
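That extension from an either/or choice to a split can be sketched directly. The scores below are placeholders; the point is only that proportional allocation generalizes the binary decision:

```python
def allocate(scores, budget=1.0):
    """Split a fixed budget across options in proportion to their scores.

    An A/B choice is just the degenerate case where one score dominates;
    proportional allocation generalizes it to a split of the investment.
    """
    total = sum(scores.values())
    if total <= 0:
        raise ValueError("at least one option must have a positive score")
    return {name: budget * s / total for name, s in scores.items()}

# Hypothetical scores in arbitrary units; only the ratio matters.
split = allocate({"SOGI": 60.0, "wearables": 40.0})
```

With these invented scores, SOGI gets 60% of the budget and wearables 40%; change the scores and the split follows.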
So, how to get to prioritization? I need a different sort of evaluation, and to measure in a different way. What can I measure? I can measure the quantity of research that might be enabled. I could take a crack at estimating the impacts of that research on patient care or cost, but that would at best be a guess. I could look at the impacts of the diseases that research affects.
And then there's the missing link problem. If 2 of 7 links are missing before I can get to outcomes, what's the value of working on broken link 1 versus broken link 2?
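One way to frame an answer, a sketch under the assumption that every link must hold before the outcome is realized: treat each broken link as having some probability of getting fixed anyway, and value working on a link by how much it moves the expected outcome. All of the numbers below are hypothetical:

```python
def expected_outcome_value(p_links, outcome_value):
    """Expected value of the outcome. The outcome is only realized when
    every link in the chain holds, so multiply the per-link
    completion probabilities together."""
    p = 1.0
    for p_link in p_links:
        p *= p_link
    return p * outcome_value

# Two broken links with hypothetical completion probabilities;
# working on a link raises its probability to 1.0.
baseline = expected_outcome_value([0.3, 0.4], 100.0)  # neither fixed
fix_1 = expected_outcome_value([1.0, 0.4], 100.0)     # we fix link 1
fix_2 = expected_outcome_value([0.3, 1.0], 100.0)     # we fix link 2

value_of_fixing_1 = fix_1 - baseline
value_of_fixing_2 = fix_2 - baseline
```

Under this toy model, the less likely a link is to get fixed on its own, the more valuable it is to work on it yourself.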
This is risk assessment turned on its head; as my friend Gila once put it, it's opportunity assessment. But the framework for risk assessment is still valid in this case; it's just that what you measure is different.
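As a sketch of that reuse: a risk register scores likelihood times impact; flipping the sign of the question, score the likelihood that the enabled research materializes times the benefit it would deliver. The likelihood and benefit figures below are invented placeholders, not estimates:

```python
def opportunity_score(likelihood, benefit):
    """Risk scoring (likelihood x impact) with the question flipped:
    the likelihood the work pays off times the benefit if it does."""
    return likelihood * benefit

# Hypothetical inputs; the structure mirrors a risk register,
# but the quantities measured are opportunities, not harms.
opportunities = {
    "SOGI": {"likelihood": 0.7, "benefit": 50.0},
    "wearables": {"likelihood": 0.4, "benefit": 80.0},
}

ranked = sorted(
    opportunities.items(),
    key=lambda kv: opportunity_score(**kv[1]),
    reverse=True,
)
```

Note the trade this surfaces: with these made-up numbers, the smaller but likelier SOGI opportunity outranks the bigger but riskier wearables one, which is exactly the kind of comparison the framework needs to support.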
And now, I think I have the start of a sketch for the framework for my answer.
Time to do some more reading.
Keith