Thursday, June 25, 2020

Zoombie Jamborie

A camp song for 2020. If you've never heard Zombie Jamboree, on which it's based, here's one of the many recordings.

Well, now, back to back, telly to telly,
well, I don't give a damn 'cause I've done three already,
back to back and telly to telly at the zoombie jamboree
(Now hear the chorus)
back to back and telly to telly at the zoombie jamboree

zoombie jamboree took place with an at home functionary (Why?)
zoombie jamboree took place to be revolutionary
zoombies from all parts of the island. (Where?)
Some of them are great calypsonians. (Some.)
Since the season was COVID time they got together in groups of nine.

Oh, what ya' doin'?
Well, now, back to back, telly to telly,
well, I don't give a damn 'cause I've done six already,
back to back and telly to telly at the zoombie jamboree
(You can feel that)
back to back and telly to telly at the zoombie jamboree

One zoombie's phone wouldn't behave
Her echo's a painful sonic wave
In the one hand she's holding a quart of wine,
  in the other she's pointing she can't hear mine
I says, "Oh, no, my turtle-dove, I think you've got a bug."
Well, believe me folks, I had to run. (Why?)
four hours of a zoombie ain't no fun! (Don't knock it!)

Oh, what you doin'?
Well, now, back to back, telly to telly, well,
I don't give a damn 'cause I've done ten already,
back to back and telly to telly at the zoombie jamboree
(Oh, what a good game)
back to back and telly to telly at the zoombie jamboree

Right then and there she chats her tweet
"I'm a-going to try again, my sweet
I'm gonna call again and then retry."
Then says "Ok, I'm back, you guys!"
"I may be lyin' but you should see (What?)
My slides on this here zoombie." (Blah!)
Well, I never seen those slides in Life, I
crashed my zoombie without WiFi? (Yes!)

Well, now, back to back, telly to telly, well,
I don't give a damn 'cause I've done scores already,
back to back, and telly to telly at the zoombie jamboree (You're all alone, you know)
back to back, and telly to telly at the zoombie jamboree

Wednesday, June 17, 2020

Interpreting a Negative, part 2


[Figure: likelihood ratio nomogram]
In Interpreting a Negative, I talked about my lack of success in interpreting my negative COVID-19 test result.  I've made a bit more progress, although I haven't yet gotten a response from my provider to my two questions.  For what it's worth, I learned a lot about this in my MBI degree program, and I know the math (this is a great article that can help you with the math in this post), but since I don't deal with test ordering or interpretation on a routine basis, I haven't had any need to apply it since graduate school.

You generally hear about the accuracy of laboratory tests used in the US in terms of reported sensitivity and specificity.  These values help providers evaluate the likelihood of a true positive or true negative.  They aren't generally included in the laboratory result, but you can often get to them by knowing who did the test (the lab) and what test equipment they used, or by looking up the LOINC codes (if you know where to find them) and tracing what they tell you back to the laboratory equipment.

You might also hear about the positive and negative predictive value (or agreement), abbreviated PPV/PPA and  NPV/NPA respectively.  This is what the COVID-19 test used on me reports to the FDA.  It compares the results from the Aptima test to those of another test (Panther Fusion) made by the same manufacturer (which could introduce another source of error, but according to the manufacturer's testing, that test is perfect).

That's based on the manufacturer's testing results, and doesn't necessarily account for real world implementation.  Variations in workflow, quality, et cetera, and assumptions under which the test is performed can have an impact on "accuracy".  In the case of COVID-19 laboratory tests, you can find the results of others' evaluations (e.g., one done by Northwell Health Laboratories).  For the Aptima test, there's one of those in the second row of the table found at the link.  FWIW: That same lab also analyzed the reference test (Hologic Panther Fusion) used in the Hologic report on the Aptima.

As a patient, the first question I have from a test result is "How should this result affect my behavior?"

  • For a positive, do I seek treatment, further testing, et cetera?
  • For a negative, does that mean I don't have a problem, or should I seek further testing later (and if so, when)?

I won't go into the first issue for positives in detail.  I will say that both my wife and I actually decline certain diagnostics because false positive rates are high enough, and the therapeutic value of a true positive result is limited at our ages.

There are four different kinds of results that a laboratory test can produce:
True Positive: A positive result when, in fact, you actually have the disease.
False Positive: A positive result when, in fact, you do not actually have the disease.
True Negative: A negative result when, in fact, you do not actually have the disease.
False Negative: A negative result when, in fact, you actually have the disease.

You can generally find these values in the tables I referenced.  For those two tables, the values are:

Result Type       Hologic Aptima Reported   Northwell Health Laboratories
True Positive     50                        71
False Positive    1                         0
True Negative     54                        75
False Negative    0                         4

I can use these numbers to compute two other numbers, called the positive likelihood ratio (LR+), and negative likelihood ratio (LR-) using the following formulae:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
LR+ = Sensitivity / (1 - Specificity) = (TP / (TP + FN)) / (FP / (TN + FP)) 
LR- = (1 - Sensitivity) / Specificity = (FN / (TP + FN)) / (TN / (TN + FP)) 

Measure       Hologic Aptima Reported   Northwell Health Laboratories   Both
Sensitivity   50 / 50 = 100%            71 / 75 = 94.7%                 121 / 125 = 96.8%
Specificity   54 / 55 = 98.2%           75 / 75 = 100%                  129 / 130 = 99.2%
LR+           100% / 1.8% = 55.6        94.7% / 0% = ∞                  96.8% / 0.8% = 121
LR-           0% / 98.2% = 0            5.3% / 100% = 0.053             3.2% / 99.2% = 0.032

As you can see, I also combined both evaluations into a composite result.
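If you want to check the arithmetic, here's a small Python sketch that reproduces these calculations from the raw counts.  The numbers are just the ones from the tables above; nothing here is specific to any particular lab's systems.

def lr_from_counts(tp, fp, tn, fn):
    """Sensitivity, specificity, LR+ and LR- from raw counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_plus = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    lr_minus = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_plus, lr_minus

# (tp, fp, tn, fn) per the tables above
counts = {
    "Hologic Aptima Reported": (50, 1, 54, 0),
    "Northwell Health Laboratories": (71, 0, 75, 4),
    "Both": (121, 1, 129, 4),
}
for label, (tp, fp, tn, fn) in counts.items():
    sens, spec, lr_plus, lr_minus = lr_from_counts(tp, fp, tn, fn)
    print(f"{label}: sensitivity={sens:.1%}, specificity={spec:.1%}, "
          f"LR+={lr_plus:.1f}, LR-={lr_minus:.3f}")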

With the LR- value, I can now estimate the probability that my negative result is correct, but I need ONE more number.  That's the pre-test probability I had COVID-19.  There are a lot of different ways that I could come up with that number.  The most obvious one is to assess it based on the prevalence of disease in my area.

OK, so now let's think about this: Do I consider my country?  My state?  My county?  My town?  My region?  Where would I even find this data?  I might start with my state's dashboard.  But that doesn't really say anything about disease prevalence, just a report of new cases per 100,000 (and that data lags the actual prevalence, b/c COVID has an incubation period of about 5-14 days).

So back searching I go, and I find this paper on estimating prevalence, and it references a few others.  Since I live in Massachusetts, but shop in Rhode Island (b/c it has the closest grocery stores), I might want to consider both regions.   I can read off results that tell me I need to look at values for prevalence somewhere between 2 and 4%.  Because this paper reflects a novel method (i.e., untested), I should go look elsewhere too. An alternative model suggests multiplying the reported case rate by 10.  That would give me 14.5% (100158 * 10 / 6.893M) for my state, or about the same for my county.
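That arithmetic, spelled out (the case count and population are the figures quoted above):

reported_cases = 100_158
population = 6_893_000
estimated_prevalence = reported_cases * 10 / population   # the "multiply by 10" model
print(f"{estimated_prevalence:.1%}")                      # about 14.5%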

Now, let's plug those numbers in and math some more using these equations:
Pretest Odds = Pretest Probability / (1 - Pretest Probability)
Post-test Odds = Pretest Odds * LR-
Post-test Probability = Post-test Odds / (1 + Post-test Odds)

Pretest Probability   Pretest Odds   Post-test Odds            Post-test Probability
2%                    0.020          0.053 * 0.020 = 0.0010    0.1%
4%                    0.042          0.053 * 0.042 = 0.0022    0.2%
14.5%                 0.170          0.053 * 0.170 = 0.0091    0.9%
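In code, that's just a couple of lines (a sketch using the Northwell LR- of 0.053 from the table above):

def post_test_probability(pretest_probability, likelihood_ratio):
    """Convert a pre-test probability and a likelihood ratio into a post-test probability."""
    pretest_odds = pretest_probability / (1 - pretest_probability)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

for p in (0.02, 0.04, 0.145):
    print(f"pretest {p:.1%} -> post-test {post_test_probability(p, 0.053):.1%}")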

You'll note I didn't bother computing the results based on the Hologic reporting, because according to the manufacturer, it doesn't produce false negatives, and so I'd just get 0% anyway.  I also didn't bother computing the results based on both, because the Northwell Health Laboratories results give me an upper bound.

What this tells me is, based on whatever prevalence data I believe in (more likely the higher number), I have less than 1 chance in 100 of it being wrong.  That's what I wanted to know in the first place.

Without the pretest probability, the lab cannot possibly report the likelihood of the result being incorrect.  Other variations in testing might affect this particular lab's "accuracy" in reporting on the test, and of course, I don't have any way of knowing that information.  But using this math, I could say that even if the test's performance had twice the false negative rate of the Northwell reported results, the chance that my test result was a false negative would still be less than 1 in 50.
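Using the sketches above, that's a one-liner (assuming doubling the false negative rate simply doubles LR-):

print(f"{post_test_probability(0.145, 2 * 0.053):.1%}")   # about 1.8%, i.e., less than 1 in 50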

I'm pretty good with that.  Applying what else I know (including other influenza-like but not COVID-like symptoms), I can pretty reliably assume that I'm not COVID-19 positive.

Why spend all of this time figuring out what others might just assume?
There are three reasons:

1. Because it's me, and I want to know.  Is there a reason I shouldn't?
2. Because I'm curious.
3. Because I understand that these tests have been released for use without the same level of testing that happens under normal circumstances, and because some of the tests (e.g., the Abbott rapid test) have come under fire b/c of their accuracy, I want to understand what risks I'm taking, not just on behalf of myself but also my family, based on how I behave after getting this result.

   Keith

P.S. There's a thing called a nomogram, depicted above, that can help you do all this math geometrically.  All you need to know is the pretest probability and the likelihood ratio; with a ruler it will give you the post-test probability.  I had one somewhere (it came with one of my informatics textbooks), but you can also print one out.  First, though, you need to know how to get the key value, the likelihood ratio.  Using that approach with the likelihood ratio for a positive result (55-121), if that had been my result, my post-test probability would be somewhere between 70 and 100%.
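For a concrete check, the same post_test_probability sketch from above gives the positive-result version of that calculation (using the 14.5% pretest probability and the two LR+ values computed earlier):

print(f"{post_test_probability(0.145, 55.6):.0%}")   # about 90%
print(f"{post_test_probability(0.145, 121):.0%}")    # about 95%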






Monday, June 15, 2020

Interpreting a Negative

No, this is not a recap about reporting negatives in CDA (or in FHIR for that matter); instead, this is about how to interpret (or fail to interpret, or fail to explain how to interpret) a negative test result.

If you haven't been on a call with me recently, it might be because I've had a flu.  What type of flu remains to be determined, though I'm fairly sure it's NOT COVID-19.  How do I know?  Well, I got tested.  Why am I not sure?  Because a negative test result doesn't necessarily mean I don't have COVID-19; it just means that what I have is not detectable as SARS-CoV-2.

Wednesday morning I woke up fuzzy, feeling feverish (but no temp), and generally out of sorts, after having trouble sleeping.  I contacted my doctor (to apologize for missing a work related meeting), and he suggested I get tested (even though my symptoms were not specific to COVID-19).  So, I went and got tested at a drive up testing site at a nearby mall. Here's what it looks like.


It takes all of five minutes to take the sample.  It's not pleasant to have a swab that far back in your nasal cavity, but it's not really that painful either, just uncomfortable.  I wouldn't do it for fun.  I also got about 10 printed (full color) pages of stuff about the test, COVID-19, what to do if positive, et cetera, in three languages (looked like Spanish and Portuguese to me), reproducing two different information packets from CDC with overlapping information.  Stuff which I've already seen a dozen times or more.

I got a very nice recorded phone call the next morning after my test, telling me the result came back negative, and how I should treat that information.  But it was the usual, extremely digested baby-food level of information that is normally given to patients.

What I basically wanted to understand was, given the test reported that I was negative, what was the likelihood that result was wrong (a false negative).  So I went looking for more information.

So after the phone call, I looked at my personal e-mail.  I had received an e-mail from Quest (the lab) telling me that I had a new result that morning.  The e-mail showed up at 5am, about 4 hours before the phone call.  I checked the lab result on Quest's portal, and felt not much more educated.
It could have been from one of two different testing systems, Aptima and Panther (both from Hologic).  There were four different links (one for physicians and one for patients, for each testing system) to data about the testing systems.  It was a typical reproduction of what is reported to FDA, so NOT that useful.  And the reported test result in the portal was, again, standard pap (as in food for children).

Of course, being done by Quest, and having signed up, it was also available in my iPhone's Apple Health app, in FHIR format. The sad thing there is that the only value given for code was a string (SARS COV 2 RNA(COVID 19), QUALITATIVE NAAT), no LOINC code, nothing telling me much about the test itself.
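To make the complaint concrete, here's roughly the difference, sketched as Python dicts.  These are hypothetical fragments of a FHIR R4 Observation.code, not the actual payload I received, and the LOINC values are placeholders rather than the real codes for this assay.

# What I got: text only, no coding
code_as_received = {
    "text": "SARS COV 2 RNA(COVID 19), QUALITATIVE NAAT",
}

# What would have been useful: a coding that identifies the specific assay
code_with_loinc = {
    "coding": [{
        "system": "http://loinc.org",
        "code": "<LOINC code for the specific assay>",      # placeholder; depends on which test was run
        "display": "<the test's LOINC long common name>",   # placeholder
    }],
    "text": "SARS COV 2 RNA(COVID 19), QUALITATIVE NAAT",
}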

Eventually, I determined from Quest (by zip code and lab) what the right LOINC code might be (I think the test was the Aptima), but I'm still uncertain, because the code depends on where the test was sent, and nothing in the lab report tells me where it was performed (it could have been MA or NY).

There's data also available in MyChart, with actual links.  Though I cannot copy and paste links from MyChart, nor can I click through them (it's a stupid s**t safety restriction that makes sense ONLY when you don't know how to implement a whitelist for websites).

So, next up I start looking at studies around false negatives for COVID-19, and actual sensitivity/specificity values for the test equipment in use based on real-world testing.  And honestly, I'm still feeling uninformed.  What I really want to know is the NPV: given that I have a negative result, what's the likelihood it's a true negative?  You cannot actually compute the NPV from sensitivity or specificity alone; you have to have the raw data (or an estimate of disease prevalence) to get that.  Here's a good diagram showing all the details.
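To make the dependency on prevalence explicit, here's the relationship in a few lines of Python (a sketch with purely illustrative numbers, not the actual performance of the test I took):

def npv(sensitivity, specificity, prevalence):
    """Negative predictive value from sensitivity, specificity, and disease prevalence."""
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

# The same test looks very different depending on the assumed prevalence.
for prevalence in (0.02, 0.10, 0.30):
    print(f"prevalence {prevalence:.0%} -> NPV {npv(0.95, 0.98, prevalence):.1%}")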


Unfortunately, none of the studies I dig into can really give me these details at a level that makes me feel comfortable interpreting an answer.

So, I asked my doctor the following two questions:
A) What's the NPV for this particular test (Hologic Aptima)?
B) What's your assessment of my pre-test probability, and how did you come up with that value?

We'll see how he answers and where I go from here.  Basically, my assumption based on symptoms is that I had a gastro-intestinal sort of flu infection rather than COVID-19.  Oh yeah, I'm still feeling icky, so trying to figure this stuff out while unwell is not my favorite pastime, nor is it the best time for me to figure it out.


   Keith





Tuesday, June 2, 2020

Towards a Common Lexicon for Requirements

Interoperability requirements are notoriously unbalanced, and the explanation for that can be readily found in the oft-quoted Postel's law.

Senders (clients) should take care in what they send; receivers (servers) should be forgiving in what they accept.  This creates a non-uniformity in specifying requirements within an interchange.

Those of us who work in this space are quite commonly familiar with the SHALL/SHOULD/MAY requirements language of RFC-2119.  Some attempts have been made to get that to address the reality of interoperability (see RFC-6919), but humor aside, there's a real challenge here for interoperability experts.

We want to be able to simply express for an exchange the following notions:

  1. The server must support this parameter/field, the client is free to use it as it wishes.
  2. The client must/is recommended/may send this field, the server is free to use it if need be.
  3. The server requires this of the client, it must comply.
  4. The server must support this to address deficient clients, but clients claiming conformance must do it differently if they expect their claims to be taken seriously.
  5. If the client has this data, it must be sent in this field to the server.  If the client doesn't have it, it can be omitted (or marked as not available, null, or in some other way).
Various attempts have been made to address this:
HL7 Version 3 uses the terms mandatory, required, and optional.  IHE has defined R2 (required if known), but this terminology is neither ubiquitous nor readily understood.  FHIR includes "must support", but requires an implementation guide to define what it means in practice.

Data sent in these transmissions have different permanence.  In some cases, a sent field is simply acted upon (or not) by the server, depending on its function, and is subsequently discarded; in others, it is forever indelibly scarred into the ether as an anomaly in the space-time continuum that a forward observer can reliably detect (i.e., written to storage).  And there are in-between cases as well.

Cardinality constraints have often been looked at as a way to address this issue.  When the minimum cardinality is 0, an item is optional; when it is greater than 0, required.  But that fails the "required if known" case: sometimes I must send it (because I know the value), but when I don't know it, I don't have to send it.  The data has utility, and use of it can affect outcomes significantly.

The value of defining such a common lexicon for describing interchange requirements is that the requirements for a data element could be readily determined from the perspective of either the client or the server.

I don't have the words for these concepts, but to explain their value: I have about 300 lines of CapabilityStatement generated for the SANER client, and another 300 lines for the server.  There are 30 lines of differences between the two, and these are all about SHALL/SHOULD/MAY sorts of requirements, addressing this disparity in viewpoints.

Which leads me to an exercise: take the 5x5 grid of SHALL/SHOULD/MAY/SHOULD NOT/SHALL NOT requirements for a client against those of a server (and the "if known" variants), determine how many of these are meaningful, and from there, determine what those meanings actually are.
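As a trivial starting point for that exercise, here's a sketch that just enumerates the grid.  Nothing SANER-specific here; deciding which combinations are meaningful is the actual work.

from itertools import product

# RFC-2119 conformance verbs, considered separately for each side of the exchange
VERBS = ["SHALL", "SHOULD", "MAY", "SHOULD NOT", "SHALL NOT"]

# 25 combinations of (what the client must do, what the server must do).
# Some are obviously sensible (client SHALL send, server SHALL support);
# others are obviously dysfunctional (client SHALL send, server SHALL NOT support).
for client_verb, server_verb in product(VERBS, VERBS):
    print(f"client {client_verb:<10} send  /  server {server_verb:<10} support")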

It may well be that, as for time, there's an algebra to these client-server relationships, and a way of describing them meaningfully.  And as in all relationships, I'm sure I'll find some cases that are simply dysfunctional.

  Keith






Friday, May 8, 2020

Am I crazy, or just in SANER?

What does it mean to be in SANER?


  1. Are you overwhelmed by calls and crises and yet have a burning desire to help?
  2. Do you believe that something could be done differently even in the midst of a crisis?
  3. Do you think we could actually roll something out before the end of the year?  Before the next flu season?
  4. Do you think technology might have something to offer against COVID-19?
  5. Do you want to set digital paper on FHIR?
  6. Do you have public health portalitis (inflammation and irritation caused by too many portals calling themselves connectivity solutions)?

Then it may well be that you are in SANER too.





Tuesday, May 5, 2020

It's not always about the EHR

This Fortune article was an interesting read.  But it doesn't tell the whole story, and honestly, I don't think everyone knows the whole story, and most don't even know half of it.

Before SARS Coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19:

  1. There were no codes for COVID-19 (the disease) or SARS-COV-2 (the Virus), 
  2. There were no device or test codes for what are now more than a dozen tests to detect the disease or the virus, 
  3. Nor were there result codes to encode the virus, 
  4. Nor did value sets exist to identify symptoms of the disease.

Once there were codes, there were also hospital systems under a lot of stress, so even if an EHR vendor had a vocabulary or system update, hospital operations managers weren't going to authorize a configuration change to the hospital's critical systems in a time of emergency.  And the health information management and technology professionals?  They were inundated with critical priorities to do things like enable tele-medicine applications, and develop processes for manual communications with public health. It wasn't about the EHR, it was about lack of preparation.

Collecting data for a research protocol is an involved process.  It's not just "give me the data over an API", it's a much more formal Extract/Transform/Load (ETL).  Batch APIs for FHIR exist, and are designed in fact to facilitate such efforts, but are honestly less than a year old, and haven't been widely deployed in existing systems. Where they are available in newer versions, see above notes on "not going to authorize a change in critical Health IT systems DURING a crisis".
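For what it's worth, here's roughly what kicking off a FHIR Bulk Data (Flat FHIR) export looks like.  This is a sketch: the server URL, group id, and token are placeholders, and a real deployment would use SMART Backend Services authorization.

import requests

base = "https://fhir.example.org/r4"           # hypothetical server
resp = requests.get(
    f"{base}/Group/covid-cohort/$export",       # hypothetical patient group
    headers={
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",
        "Authorization": "Bearer <token>",
    },
)
# The server replies 202 Accepted with a Content-Location URL to poll for the export status.
print(resp.status_code, resp.headers.get("Content-Location"))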

Some systems already have the capacity to normalize and collect data for research.  The "All of Us Research" program from NIH has been going on for more than a year; numerous health systems are sending patient data to a centralized, NATIONAL research repository housed at Vanderbilt University (or more accurately, in a cloud they control).  But this is a research protocol, there's a consent process, and not everyone joins.  To date, about 1 in 1000 people in the US have agreed to participate, and about 3/4 of them have completed the initial program steps (some of which require an in-person visit and have been suspended until the crisis abates).  That's a lot of data, but today 4 times as many people have tested positive for COVID-19 as are participating in that research program.  Research is important, treatment for COVID-19 is important, but one doesn't just throw down a research protocol and tell patients they have to participate without consent, and one doesn't start planning for it in the middle of a crisis. The time to plan for a crisis is before it happens.  It's not that the EHR cannot send data to research, it's that we didn't plan for rolling out research programs in response to a crisis.

We all do crisis planning for our information systems, some better than others.  But we think about crisis planning most often in terms of how to maintain stability within a crisis, NOT how we refocus our efforts on system XYZ to help abate this crisis.  About the only people who do think this way are those who are thoroughly engaged in disaster preparedness and response scenarios.  And for many of those, it's 95% waiting for something to happen, and then 5% run like hell to respond.  They do simulations, and drills, and play "war games".  When's the last time your program even considered that?  When's the last time your IT department ran a generator test, and did they consider testing for water leakage (remember that from Hurricane Sandy)?  Does your disaster response plan for your EHR system include "updating to latest version and patches" to support novel issues in a disaster?  How about updating to latest vocabulary?  Can you add vocabulary terms in near real time to your EHR and have everyone take advantage of them?  Do you have plans for rolling out new workflows in your facility to address how to code differently than you have been to address a disaster?

If you answered yes to all of these questions, I really want to talk to you and find out where you got the support to do this.  If you aren't thinking about how to do this now, you should be.  Yes, we are building the plane as we are flying it, and we don't yet know where we are going because we DON'T have all the answers, but at least we know which direction is North, and we have some idea about our current heading, and where we want to go.  And we'll know more as time progresses.

This isn't a failing of existing programs, because in all the work that's been done with EHR systems, public health was barely an afterthought, and the agencies supporting these efforts have limited mandates with respect to public health.  They can only work within that mandate.  Public health has long needed funding and support to address a disaster of the magnitude we are now experiencing, and while experience is the best teacher, it's also the hardest.  And basically, we got what we paid for.  But, now we are learning.  I'm sure getting those mandates will be a bit easier now (and in fact, some of them have already been issued).  It wasn't a failure of the EHR program or its mandate, it was a failure to supply an appropriate mandate for public health.

Things aren't going to get better overnight.  But because we took action to mitigate the impacts (not fast enough, but enough to get by if we continue them), we have some time to prepare for the next wave.  I'm determined to make sure that some of that infrastructure that we should have had will be in place before that hits.  It's not about what the EHR can do. It's about what I can do, and what others can do.

     Keith

Wednesday, April 29, 2020

Local First .. A SANER Approach

As I think about various models for communicating data to public health agencies, I keep thinking about a federated model, where hospitals push to their local public health network, and the local public health authorities then push data upwards to state and federal agencies.  There's a good reason for this, based on my own experience.  I live fairly close to Boston, and lived even closer in 2013, the year of the Boston Marathon Bombing.

Boston emergency management officials knew immediately, when the bombs struck, what state the EDs in the area were in, and were able to mostly route patients appropriately and coordinate efforts.  While that same article notes that the number of available operating rooms and ICUs was not known, it also mentions practice and drill, which very likely made it possible for hospitals to quickly clear and prepare operating rooms to treat incoming patients.

I think also about what's happening in the City of Chicago right now, with Rush Medical coordinating efforts to capture data for the City's public health department, and then local public health passing that same data on to federal agencies on the hospitals' behalf, and it just makes sense.  It certainly makes a lot more sense than what I've heard elsewhere, where hospital staff are having to collect data, log into different portals and send data to local or state public health, and then also to two different federal agencies, all the while a slightly different data feed containing similar data is silently being sent to the state department of health from a past program intended to meet the very same need.

I can't and won't argue the point that FEMA and CDC both need the data that is being requested.  But I will say that there should be a local public health network that supports this sort of communication without placing additional burdens on hospital staff.  Let the locals push to the state, and the state to the federal government as needed, and when needed (e.g., in cases of a declared emergency).  Don't make 6000+ hospitals do the same thing twice or thrice (even if with different data sets), when 50-odd state agencies could do it more efficiently and in bulk with better quality control.  Oh, and maybe fund that (or use existing funds that have already been allocated for that very kind of thing).

And when the emergency is over, the state or local public health agencies should still keep getting what they need to address local disaster response, much like what Boston had during the Marathon bombing.  It's too late after the disaster happens to "turn it on", and in fact, the switch might not even be accessible if you wait that long.

Compare the Boston stories to Dirk Stanley's story about being at the epicenter of 9/11, and you'll see that we've come a long way in handling local disasters, but still we can do better.  Even with Boston's amazing response, there are notes in some of my reading about it regarding the lack of information about operating rooms and ICUs.

For me, The SANER Project might have been inspired by COVID-19, and one nurse informaticist's complaint to me about the craziness she was experiencing in trying to get data where it needed to go, but I've spent the last decade and then some looking at the challenges public health has been facing since AHIC first offered ANSI/HITSP what some of us still call "The Bird Flu Use Case", which was preceded by the "Hurricane Katrina" use case, and before that, the "Anthrax Use Case".  All of these were about public health and emergency response.  The standards we wanted weren't ready then, but they are now.  And so am I.  Let's get it right this time.