Tuesday, February 26, 2019

A Brief Summary of my IHE ACDC Profile and A4R Whitepaper Proposals

This week I'm at IHE meetings, where I'm submitting the first (and second) out-of-cycle proposals to IHE workgroups under the continuous development process adopted by IHE PCC and IT Infrastructure (and under consideration in QRPH).

The first of these is ACDC (the Assessment Curation and Data Collection profile), which advances assessments in a new way that hopefully addresses the challenge that there are literally tens of thousands of assessment instruments we'd like to be able to exchange in an interoperable manner.  The goal here is to decouple the capture of assessment data from the encoding of assessment question and answer concepts in standardized vocabularies such as LOINC and SNOMED CT (which can take some time), and to handle that encoding task in volume, at scale, using a mechanism that still needs some R&D.
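
To make that decoupling concrete, here's a rough sketch of my own (not text from the proposal) of a FHIR R4 Questionnaire item that carries only a made-up, instrument-local code at publication time.  Because responses are keyed to the linkId, a standardized coding (e.g., from LOINC) can be attached later by the curation and encoding process without invalidating data that has already been collected:

    # Illustration only: the code system URL and linkId are made up.
    apgar_heart_rate_item = {
        "linkId": "apgar-heart-rate",
        "text": "Heart rate",
        "type": "choice",
        "code": [
            # Instrument-local concept, available as soon as the instrument is published
            {"system": "http://example.org/apgar", "code": "heart-rate"},
            # A standardized coding (e.g., LOINC) could be appended here later,
            # once the encoding work has been done, without changing the linkId.
        ],
    }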

The profile addresses two connected use cases: the process of assessment instrument acquisition (e.g., by a provider or vendor from an assessment instrument provider or curator), and the process of data collection through a single common resource, identified by its Questionnaire canonical URL (basically, a web-accessible URL that also acts as a unique identifier).
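
For readers less familiar with FHIR canonical URLs: assuming a hypothetical FHIR R4 endpoint, the canonical URL resolves to the Questionnaire instance through a standard search on the url parameter, something like this:

    import requests

    # Hypothetical endpoints, for illustration only.
    FHIR_BASE = "https://fhir.example.org/r4"
    CANONICAL = "http://example.org/Questionnaire/apgar"

    # Standard FHIR search by canonical url; the result is a searchset Bundle
    # whose entries (if any) contain the matching Questionnaire.
    bundle = requests.get(
        f"{FHIR_BASE}/Questionnaire",
        params={"url": CANONICAL},
        headers={"Accept": "application/fhir+json"},
    ).json()
    questionnaire = bundle["entry"][0]["resource"] if bundle.get("entry") else None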

The first use case allows the assessment instrument curator to make it possible for an assessment instrument acquirer to search for and explore available instruments and their metadata, get back enough information to make an acquisition decision, and, after taking the necessary steps to obtain access (e.g., licensing, click-through, or whatever), to then acquire the executable content (the full Questionnaire resource).
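
The discovery step might look something like the sketch below, against a hypothetical curator endpoint.  The search parameters shown are standard FHIR; exactly which ones ACDC requires, and how access control is layered on top, is precisely what the profile has to work out:

    import requests

    CURATOR_BASE = "https://curator.example.org/fhir"  # hypothetical

    # Explore available instruments without pulling full content:
    # _elements limits each returned Questionnaire to identifying metadata.
    search = requests.get(
        f"{CURATOR_BASE}/Questionnaire",
        params={
            "title:contains": "alcohol",
            "status": "active",
            "_elements": "url,version,title,description,publisher",
        },
        headers={"Accept": "application/fhir+json"},
    ).json()

    for entry in search.get("entry", []):
        q = entry["resource"]
        print(q.get("title"), q.get("publisher"), q.get("url"))

    # After licensing or click-through is handled (out of band here), the full
    # executable Questionnaire can be retrieved by its canonical url, as shown earlier.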

The second use case allows a system that has access to an executable assessment instrument to ask an assessor application to gather the essential data to make the assessment (it could be through a user interface that simply asks the provider or patient to answer some questions, or it could be more complex, answering questions from available EHR data or providing adaptive responses based on other data).  The response to this inquiry would be something like a Bundle containing a) the QuestionnaireResponse, b) any new resources that may have been created as a result of processing the Questionnaire, and c) perhaps some ClinicalImpression resources that provide the assessment evaluation.
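
To give that response a concrete shape, here is a rough sketch (again mine, not profile text) of what such a Bundle might contain, written as a Python dict for brevity; the Bundle type and the exact resources involved are still to be defined:

    # Illustration only; linkIds, references, and Bundle type are placeholders.
    assessment_result = {
        "resourceType": "Bundle",
        "type": "collection",
        "entry": [
            {   # a) the filled-in instrument
                "resource": {
                    "resourceType": "QuestionnaireResponse",
                    "questionnaire": "http://example.org/Questionnaire/apgar",
                    "status": "completed",
                    "subject": {"reference": "Patient/example"},
                    "item": [
                        {"linkId": "apgar-heart-rate", "answer": [{"valueInteger": 2}]},
                        # ... answers to the remaining questions ...
                    ],
                }
            },
            {   # c) an evaluation derived from the answers
                "resource": {
                    "resourceType": "ClinicalImpression",
                    "status": "completed",
                    "subject": {"reference": "Patient/example"},
                    "summary": "Overall APGAR score: 9 of 10",
                }
            },
            # b) ... plus any other resources created while processing the Questionnaire ...
        ],
    }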

For example, a questionnaire implementing the APGAR score might result in one QuestionnaireResponse resource and six ClinicalImpression resources: one for each of the scores associated with the five components of the APGAR score (respiration, heart rate, muscle tone, reflex response and color), and one for the overall APGAR result (the sum of all the component scores).
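
Since each component is scored from 0 to 2, the overall score ranges from 0 to 10.  A minimal sketch of how an assessor might turn the component scores into those six ClinicalImpression resources (field choices here are placeholders, not profile requirements):

    # Each APGAR component is scored 0, 1, or 2; the total is their sum (0-10).
    components = {
        "respiration": 2,
        "heart rate": 2,
        "muscle tone": 2,
        "reflex response": 1,
        "color": 2,
    }
    total = sum(components.values())  # 9 in this example

    def impression(summary_text):
        # Placeholder ClinicalImpression with only the fields needed for illustration.
        return {
            "resourceType": "ClinicalImpression",
            "status": "completed",
            "subject": {"reference": "Patient/example"},
            "summary": summary_text,
        }

    impressions = [impression(f"APGAR {name} score: {score}")
                   for name, score in components.items()]
    impressions.append(impression(f"Overall APGAR score: {total} of 10"))
    # One QuestionnaireResponse (not shown) plus these six ClinicalImpressions.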

Assessment instruments are commonly used in clinical research to collect essential data about a patient's current cognitive or functional status, or about social determinants of health, at the start of a research program or during the course of research (to determine patient progress).  They are also used to collect data about patient reported outcomes, most often at the end of the research intervention, but perhaps also periodically (again to assess progress).

As we undertake the R&D effort to address the scaling problem, we'll keep track of which approaches we attempt and how well they succeed, and report those findings in the Assessments for Research (A4R) white paper.  The white paper gives us the opportunity to explore different approaches and publicize our findings in a way that helps drive forward progress on encoding assessment instruments in an interoperable manner.

Some thoughts as we've discussed this profile thus far:

  1. It's important to classify assessment data elements by a set of categories that might be important for research.  The hierarchical structure of the Questionnaire allows for groups, which can be used to record classification codes.  For example, if one is performing research on alcohol use, one might be interested in seeking a wide variety of data from multiple assessment instruments.  Enabling use of classification codes in the Questionnaire will allow groups of related questions from multiple instruments to be identified (see the first sketch after this list).
  2. How should automated pre-population of responses be addressed?  For example, in cases where there is sufficient data in an EHR system to answer common questions (age, gender, et cetera), how might this be enabled in the encoding of the Questionnaire resource?  The second sketch after this list shows one possible approach.
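
On the first point, a group item can carry a classification code directly; the code system below is made up purely to illustrate the idea:

    # Sketch: a group that carries a hypothetical research classification code,
    # so that related questions can be found across many different instruments.
    alcohol_use_group = {
        "linkId": "alcohol-use",
        "type": "group",
        "code": [
            {"system": "http://example.org/research-categories", "code": "alcohol-use"}
        ],
        "item": [
            {"linkId": "audit-c-1",
             "text": "How often do you have a drink containing alcohol?",
             "type": "choice"},
            # ... further questions classified under the same category ...
        ],
    }

On the second point, the linkIds, mapping table, and helper below are entirely hypothetical; HL7's Structured Data Capture work defines expression-based pre-population mechanisms that the profile might lean on instead:

    from datetime import date

    def compute_age(birth_date_str):
        # FHIR birthDate may be YYYY, YYYY-MM, or YYYY-MM-DD; assume a full date here.
        born = date.fromisoformat(birth_date_str)
        today = date.today()
        return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

    # Hypothetical linkIds; a real instrument would define its own.
    PREPOPULATE = {
        "age": lambda p: {"valueInteger": compute_age(p["birthDate"])},
        "gender": lambda p: {"valueString": p["gender"]},
    }

    def prepopulate(patient, canonical_url):
        """Build a partially filled QuestionnaireResponse from available EHR data."""
        items = []
        for link_id, extract in PREPOPULATE.items():
            try:
                items.append({"linkId": link_id, "answer": [extract(patient)]})
            except (KeyError, ValueError):
                pass  # leave unanswered; the assessor can ask the user instead
        return {
            "resourceType": "QuestionnaireResponse",
            "questionnaire": canonical_url,
            "status": "in-progress",
            "subject": {"reference": f"Patient/{patient['id']}"},
            "item": items,
        }
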
A lot of what goes into these proposals is based on work on assessments and automated data capture for research that started in IHE in 2006 with the RFD profile, continued in 2007 with the Assessments work done by Patient Care Coordination and through ONC efforts on Structured Data Capture in 2012 (which greatly influenced the FHIR Questionnaire and QuestionnaireResponse resources), and carries on in current work being done on Patient Reported Outcomes in FHIR by HL7.

As John Moehrke said, this work won't be "Done Dirt Cheap", which means it likely also won't be a dirty deed that doesn't get the job done either.  I think we're about to rock on assessments.

Update: Both proposals were accepted today. More later as the work progresses.
