Tuesday, October 29, 2019

ML and AI in HealthIT ... things to think about

A very long time ago (at the beginning of my career in HealthIT), I got to do some really cool work on the front end and processing infrastructure for a set of machine learning and linguistic services that would automatically extract problems, medications, allergies and procedures, and for problems, even code the diseases into a subset of SNOMED CT.  This was before most people had even heard of SNOMED CT, so that took some significant effort.  We also did some work in the ICD-9-CM coding space, based on a software product that the company I worked for had purchased, which was essentially the life's work of a physician / informaticist.

The product worked remarkably well from a technical perspective, and had a pretty decent precision/recall curve.  In fact, as configured, the system was shown to do as well as professional ICD-9-CM coders.
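If you haven't worked with precision and recall before, here's a minimal sketch of how a coding system gets scored against a gold standard (the codes below are made up for illustration, not from the product):

```python
# Score predicted ICD-9-CM codes against a gold standard.
# These code sets are illustrative only.
gold = {"250.00", "401.9", "493.90"}       # codes a professional coder assigned
predicted = {"250.00", "401.9", "786.50"}  # codes the system proposed

true_positives = len(gold & predicted)
precision = true_positives / len(predicted)  # how many proposals were right
recall = true_positives / len(gold)          # how many right answers were found

print(f"precision={precision:.2f}, recall={recall:.2f}")  # both 0.67 here
```

Sweeping the system's confidence threshold and re-scoring at each point is what traces out the precision/recall curve.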

The biggest challenge it had was two-fold:

  1. It needed to incorporate expert feedback to refine future results (we simply hadn't had the time to develop that feature).
  2. The original product couldn't explain how it got to a particular result, although subsequent ones could do a bit better.
To put it simply, it couldn't argue for itself or accept any corrections.

ML and AI are often "black boxes".  Most of what people are talking about with regard to AI today are implementations of some form of neural network.  Can anyone really explain what the weights and connections in a neural net mean?  That's a hard AI problem.  Machine learning algorithms have their own sets of "hidden parameters" that drive their outputs, and those cannot always be easily explained.
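To make that concrete, here's a minimal sketch (using scikit-learn, which is my choice for illustration, not something the original work used) that trains a tiny neural net and dumps its learned weights.  None of those numbers maps to a concept a reviewer could point at:

```python
# Train a tiny neural net on a toy problem, then inspect its weights.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

# coefs_ holds one weight matrix per layer; they are just arrays of floats.
for i, layer in enumerate(net.coefs_):
    print(f"layer {i} weights, shape {layer.shape}:")
    print(layer.round(2))
# Every prediction flows through these numbers, yet no individual weight
# "means" anything a human could audit or argue with.
```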

Yet, we're expected to trust these things.  And they are applied to hard problems that humans only solve correctly 90% of the time, where the machines do slightly better.  How do you develop trust in something that's wrong in 1 out of every 20 cases?  And yet we can trust a human, because a human can explain their reasoning, even when the result turned out to be wrong.

Interestingly enough, even when computers only do as well as humans, the two together often do better than either alone, because the humans may understand nuance that the computer misses, and vice versa.
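One common way to capture that combined benefit is to let the machine keep the cases it is confident about and route the rest to a human reviewer.  Here's a minimal sketch of that pattern; the classify() stub and the 0.85 threshold are hypothetical placeholders, not from any product mentioned above:

```python
# Triage pattern: auto-code high-confidence cases, queue the rest for review.
CONFIDENCE_THRESHOLD = 0.85  # placeholder value; tune against real data

def classify(note: str) -> tuple[str, float]:
    """Stand-in for a real coding model; returns (code, confidence)."""
    return ("401.9", 0.72)  # dummy result for illustration

def triage(note: str, human_queue: list[str]) -> str | None:
    code, confidence = classify(note)
    if confidence >= CONFIDENCE_THRESHOLD:
        return code            # machine is confident enough to auto-code
    human_queue.append(note)   # otherwise a coder reviews it
    return None

queue: list[str] = []
result = triage("Patient with longstanding hypertension...", queue)
print(result, queue)  # None here, and the note lands in the review queue
```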

If you are trying to implement AI (or ML), consider:
  1. How do you work user feedback about the goodness of the proposed solutions back into the system?
  2. How will you explain to the user why a given result is good?  (A minimal sketch touching on both questions follows.)
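Here's one hedged sketch of both ideas, built on a simple linear model where the top-weighted words serve as the rationale and expert corrections get logged for later retraining.  The notes, codes, and helper names are all hypothetical, not from the product described above:

```python
# A linear text classifier whose weights double as an explanation,
# plus a feedback log for expert corrections.  All data is illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

notes = [
    "shortness of breath and wheezing",
    "elevated blood pressure on exam",
    "wheezing responsive to albuterol",
    "blood pressure 160/95, hypertension",
]
codes = ["493.90", "401.9", "493.90", "401.9"]  # toy ICD-9-CM labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(notes)
model = LogisticRegression().fit(X, codes)

feedback_log = []  # (note, predicted, corrected) tuples for future retraining

def predict_with_rationale(note, top_n=3):
    """Return a code plus the words that argued hardest for it."""
    x = vectorizer.transform([note])
    code = model.predict(x)[0]
    # With two classes, sklearn stores one coefficient row whose sign
    # points toward classes_[1]; flip it when the prediction is classes_[0].
    weights = model.coef_[0] if code == model.classes_[1] else -model.coef_[0]
    present = x.toarray()[0] > 0
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms[present], weights[present]),
                    key=lambda t: -t[1])[:top_n]
    return code, ranked

def record_correction(note, predicted, corrected):
    """Capture expert feedback so the next model version can learn from it."""
    feedback_log.append((note, predicted, corrected))

code, rationale = predict_with_rationale("audible wheezing today")
print(code, rationale)  # the rationale is the system "arguing for itself"
record_correction("audible wheezing today", code, "493.90")
```

The rationale gives the user something to argue with, and the feedback log gives the next model version something to learn from, which is exactly what our original product lacked.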
