Thursday, June 25, 2020

Zoombie Jamborie

A camp song for 2020. If you've never heard Zombie Jamboree, on which it's based, here's one of the many recordings.

Well, now, back to back, telly to telly,
well, I don't give a damn 'cause I've done three already,
back to back and telly to telly at the zoombie jamboree
(Now hear the chorus)
back to back and telly to telly at the zoombie jamboree

zoombie jamboree took place with an at home functionary (Why?)
zoombie jamboree took place to be revolutionary
zoombies from all parts of the island. (Where?)
Some of them are great calypsonians. (Some.)
Since the season was COVID time they got together in groups of nine.

Oh, what ya' doin'?
Well, now, back to back, telly to telly,
well, I don't give a damn 'cause I've done six already,
back to back and telly to telly at the zoombie jamboree
(You can feel that)
back to back and telly to telly at the zoombie jamboree

One zoombie's phone wouldn't behave
Her echo's a painful sonic wave
In the one hand she's holding a quart of wine,
  in the other she's pointing she can't hear mine
I says, "Oh, no, my turtle-dove, I think you've got a bug."
Well, believe me folks, I had to run. (Why?)
four hours of a zoombie ain't no fun! (Don't knock it!)

Oh, what you doin'?
Well, now, back to back, telly to telly, well,
I don't give a damn 'cause I've done ten already,
back to back and telly to telly at the zoombie jamboree
(Oh, what a good game)
back to back and telly to telly at the zoombie jamboree

Right then and there she chats her tweet
"I'm a-going to try again, my sweet
I'm gonna call again and then retry."
Then says "Ok, I'm back, you guys!"
"I may be lyin' but you should see (What?)
My slides on this here zoombie." (Blah!)
Well, I never seen those slides in Life, I
crashed my zoombie without WiFi? (Yes!)

Well, now, back to back, telly to telly, well,
I don't give a damn 'cause I've done scores already,
back to back, and telly to telly at the zoombie jamboree (You're all alone, you know)
back to back, and telly to telly at the zoombie jamboree

Wednesday, June 17, 2020

Interpreting a Negative, part 2


[Image: likelihood ratio nomogram]
In Interpreting a Negative, I talked about my lack of success in interpreting my negative COVID-19 test result.  I've made a bit more progress, although I haven't yet gotten a response from my provider on my two questions.  For what it's worth, I learned a lot about this in my MBI degree program; but since I don't deal with test ordering or interpretation on a routine basis, I know the math (this is a great article that can help you with the math in this post), but haven't had any need to apply it since graduate school.

You generally hear about the accuracy of laboratory tests used in the US based on reporting sensitivity and specificity.  These values help providers evaluate the likelihood of a true positive or true negative.  These values aren't generally included in the laboratory result, but you can often get to them by knowing who did the test (the lab), and what test equipment they used, or by looking for the LOINC codes (if you know where to find them), and traversing what that tells you back to the laboratory equipment.

You might also hear about the positive and negative predictive value (or agreement), abbreviated PPV/PPA and  NPV/NPA respectively.  This is what the COVID-19 test used on me reports to the FDA.  It compares the results from the Aptima test to those of another test (Panther Fusion) made by the same manufacturer (which could introduce another source of error, but according to the manufacturer's testing, that test is perfect).

That's based on the manufacturer's testing results, and doesn't necessarily account for real-world implementation.  Variations in workflow, quality, et cetera, and the assumptions under which the test is performed can have an impact on "accuracy".  In the case of COVID-19 laboratory tests, you can find the results of others' evaluations (e.g., one done by Northwell Health Laboratories).  For the Aptima test, there's one of those in the second row of the table found at the link.  FWIW: That same lab also analyzed the reference test (Hologic Panther Fusion) used in the Hologic report on the Aptima.

As a patient, the first question I have from a test result is "How should this result affect my behavior?"

  • For a positive, do I seek treatment, further testing, et cetera.
  • For a negative, does that mean I don't have a problem, or should I seek further testing later (and if so, when)?

I won't go into the first issue for positives in detail.  I will say that both my wife and I actually decline certain diagnostics, because their false positive rates are high enough, and the therapeutic benefit of a true positive result is limited at our ages.

There are four different kinds of results that a laboratory test can produce:
True Positive: A positive result when in fact, you actually have a disease
False Positive: A positive result when in fact, you do not actually have the disease.
True Negative: A negative result when in fact, you do not actually have the disease.
False Negative: A negative result when in fact, you actually have the disease.

You can generally find these values in the tables I referenced.  For the two evaluations, the values I have are:

Result Type       Hologic Aptima Reported   Northwell Health Laboratories
True Positive     50                        71
False Positive    1                         0
True Negative     54                        75
False Negative    0                         4

I can use these numbers to compute two other numbers, called the positive likelihood ratio (LR+), and negative likelihood ratio (LR-) using the following formulae:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
LR+ = Sensitivity / (1 - Specificity) = (TP / (TP + FN)) / (FP / (TN + FP)) 
LR- = (1 - Sensitivity) / Specificity = (FN / (TP + FN)) / (TN / (TN + FP)) 

Result Type   Hologic Aptima Reported   Northwell Health Laboratories   Both
Sensitivity   50 / 50 = 100%            71 / 75 = 94.7%                 121 / 125 = 96.8%
Specificity   54 / 55 = 98.2%           75 / 75 = 100%                  129 / 130 = 99.2%
LR+           100% / 1.8% = 55.6        94.7% / 0% = ∞                  96.8% / 0.8% = 121
LR-           0% / 98.2% = 0            5.3% / 100% = 0.053             3.2% / 99.2% = 0.032

As you can see, I also combined both evaluations into a composite result.
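As a sketch, the table above can be reproduced directly from the 2x2 counts.  Here it is using the Northwell counts (TP=71, FP=0, TN=75, FN=4); the function and variable names are mine, not anything from the reports:

```python
import math

def likelihood_ratios(tp, fp, tn, fn):
    """Compute sensitivity, specificity, LR+ and LR- from 2x2 counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # LR+ divides by the false positive rate (1 - specificity),
    # which is zero when there are no false positives.
    lr_plus = sensitivity / (1 - specificity) if specificity < 1 else math.inf
    lr_minus = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_plus, lr_minus

# Northwell Health Laboratories counts from the table above.
se, sp, lrp, lrm = likelihood_ratios(tp=71, fp=0, tn=75, fn=4)
print(f"Se={se:.1%} Sp={sp:.1%} LR+={lrp} LR-={lrm:.3f}")
# Se=94.7% Sp=100.0% LR+=inf LR-=0.053
```

Swapping in the Hologic counts (50, 1, 54, 0) gives the first column of the table the same way.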

With the LR- value, I can now estimate the probability that my negative result is correct, but I need ONE more number.  That's the pre-test probability I had COVID-19.  There are a lot of different ways that I could come up with that number.  The most obvious one is to assess it based on the prevalence of disease in my area.

OK, so now let's think about this: Do I consider my country?  My state?  My county?  My town?  My region?  Where would I even find this data?  I might start with my state's dashboard.  But that doesn't really say anything about disease prevalence, just a report of increasing cases per 100,000 (and that data lags the actual prevalence, b/c COVID has an incubation period of about 5-14 days).

So back searching I go, and I find this paper on estimating prevalence, and it references a few others.  Since I live in Massachusetts, but shop in Rhode Island (b/c it has the closest grocery stores), I might want to consider both regions.  I can read off results that tell me to look at values for prevalence somewhere between 2 and 4%.  Because this paper reflects a novel (i.e., untested) method, I should go look elsewhere too.  An alternative model suggests multiplying the reported case rate by 10.  That would give me 14.5% (100158 * 10 / 6.893M) for my state, or about the same for my county.

Now, let's plug those numbers in and math some more using these equations:
Pretest Odds = Pretest Probability / (1 - Pretest Probability)
Post-test Odds = Pretest Odds * LR-
Post-test Probability = Post-test Odds / (1 + Post-test Odds)

Pretest Probability   Pretest Odds   Post-test Odds           Post-test Probability
2%                    0.020          0.053 * 0.020 = 0.0010   0.1%
4%                    0.042          0.053 * 0.042 = 0.0022   0.2%
14.5%                 0.170          0.053 * 0.170 = 0.0091   0.9%
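Those three rows come out of the same odds arithmetic; here's a minimal sketch of it, using the Northwell LR- of 4/75 ≈ 0.053 (the function name is mine):

```python
def post_test_probability(pretest_prob, lr):
    """Convert a pretest probability to odds, apply the likelihood
    ratio, and convert the post-test odds back to a probability."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

LR_MINUS = 4 / 75  # Northwell: (1 - sensitivity) / specificity

for prevalence in (0.02, 0.04, 0.145):
    p = post_test_probability(prevalence, LR_MINUS)
    print(f"pretest {prevalence:.1%} -> post-test {p:.1%}")
# pretest 2.0% -> post-test 0.1%
# pretest 4.0% -> post-test 0.2%
# pretest 14.5% -> post-test 0.9%
```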

You'll note I didn't bother computing the results based on the Hologic reporting: according to the manufacturer, the test produces no false negatives, so I'd just get 0% anyway.  I also didn't bother computing results based on both, because the Northwell Health Laboratories results give me an upper bound.

What this tells me is that, based on whatever prevalence data I believe (more likely the higher number), there's less than 1 chance in 100 of the result being wrong.  That's what I wanted to know in the first place.

Without the pretest probability, the lab cannot possibly report the likelihood of the result being incorrect.  Other variations in testing might affect this particular lab's "accuracy" in reporting on the test, and of course, I don't have any way of knowing that information.  But using this math, I can say that even if the test's performance had twice the false negative rate of the Northwell reported results, the chance that my test result was a false negative would still be less than 1 in 50.

I'm pretty good with that.  Applying what else I know (including other influenza-like but not COVID-like symptoms), I can pretty reliably assume that I'm not COVID-19 positive.

Why spend all of this time figuring out what others might just assume?
There are three reasons:

1. Because it's me, and I want to know.  Is there a reason I shouldn't?
2. Because I'm curious.
3. Because I understand that these tests have been released for use without the same level of testing that happens under normal circumstances, and because some of the tests (e.g., the Abbott rapid test) have come under fire b/c of their accuracy.  As a result, I want to understand what risks I'm taking, not just on behalf of myself, but also my family, based on how I behave after getting this result.

   Keith

P.S. There's a thing called a nomogram depicted above that can help you do all this math geometrically.  All you need to know is the pretest probability and the likelihood ratio, and a ruler will compute the post-test probability for you.  I had one somewhere (it came with one of my informatics textbooks), but you can also print one out.  First, though, you need to know how to get the key value, the likelihood ratio.  Using the likelihood ratio for a positive result (55-121), if that had been my result, my post-test probability would be somewhere between 70 and 100%.






Monday, June 15, 2020

Interpreting a Negative

No, this is not a recap about reporting negatives in CDA (or in FHIR for that matter); instead, this is about how to interpret (or fail to interpret, or explain how to interpret) a negative test result.

If you haven't been on a call with me recently, it might be because I've had a flu.  What type of flu remains to be determined, though I'm fairly sure it's NOT COVID-19.  How do I know?  Well, I got tested.  Why am I not sure?  Because a negative test result doesn't necessarily mean I don't have COVID-19; it just means that what I have is not detectable as SARS-CoV-2.

Wednesday morning I woke up fuzzy, feeling feverish (but no temp), and generally out of sorts, after having trouble sleeping.  I contacted my doctor (to apologize for missing a work related meeting), and he suggested I get tested (even though my symptoms were not specific to COVID-19).  So, I went and got tested at a drive up testing site at a nearby mall. Here's what it looks like.


It takes all of five minutes to take the sample.  It's not pleasant to have a swab that far back in your nasal cavity, but it's not really that painful either, just uncomfortable.  I wouldn't do it for fun.  I also got about 10 printed (full color) pages of stuff about the test, COVID-19, what to do if positive, et cetera, in three languages (looked like Spanish and Portuguese to me), reproducing two different information packets from CDC with overlapping information.  Stuff which I've already seen a dozen times or more.

I got a very nice recorded phone call the next morning after my test, telling me the result came back negative, and how I should treat that information.  But it was the usual, extremely digested baby-food level of information that is normally given to patients.

What I basically wanted to understand was, given the test reported that I was negative, what was the likelihood that result was wrong (a false negative).  So I went looking for more information.

So after the phone call, I looked at my personal e-mail.  I had received an e-mail from Quest (the lab) telling me that I had a new result that morning.  The e-mail showed up at 5am, about 4 hours before the phone call.  I checked the lab result, on Quest's portal, and felt not much more educated.  
It could have been from one of two different testing systems, Aptima and Panther (both from Hologic).  There were four different links (one each for physicians and patients, for each system) to data about the testing systems.  It was a typical reproduction of what is reported to the FDA, so NOT that useful.  And the reported test result in the portal was, again, standard pap (as in food for children).

Of course, being done by Quest, and having signed up, it was also available in my iPhone's Apple Health app, in FHIR format. The sad thing there is that the only value given for code was a string (SARS COV 2 RNA(COVID 19), QUALITATIVE NAAT), no LOINC code, nothing telling me much about the test itself.

Eventually, I determined from Quest (by zip code and lab) what the right LOINC code might be (I think the test was from Aptima), but am still uncertain, because it's reported based on where the test was sent, and I honestly am not certain (it could have been performed in MA or NY), because nothing in the lab report tells me that.  

There's data also available in MyChart, with actual links.  Though I cannot copy and paste links from MyChart, nor can I click through them (it's a stupid s**t safety restriction that makes sense ONLY when you don't know how to implement a whitelist for websites).

So, next up I start looking at studies around false negatives for COVID-19, and actual sensitivity/specificity values for the test equipment in use based on real-world testing.  And honestly, I'm still feeling uninformed.  What I really want to know is the NPV: given that I have a negative result, what's the likelihood it's a true negative?  You cannot actually compute the NPV from sensitivity or specificity alone; you need the raw data (or an estimate of disease prevalence).  Here's a good diagram showing all the details.
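With a prevalence estimate in hand, the standard Bayes-style formula gets you the NPV from sensitivity and specificity.  A minimal sketch; the Se/Sp/prevalence numbers here are illustrative assumptions, not the Aptima's published figures:

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value: P(no disease | negative test)."""
    # Negative results come from true negatives among the healthy
    # plus false negatives among the diseased.
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Assumed values for illustration only: Se=95%, Sp=98%, prevalence 5%.
print(f"NPV = {npv(0.95, 0.98, 0.05):.1%}")
# NPV = 99.7%
```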


Unfortunately, none of the studies I dig into can really give me these details at a level that makes me feel comfortable interpreting an answer.

So, I asked my doctor the following two questions:
A) What's the NPV for this particular test (Hologic Aptima)?
B) What's your assessment of my pre-test probability, and how did you come up with that value?

We'll see how he answers and where I go from here.  Basically, my assumption based on symptoms is that I had a gastro-intestinal sort of flu infection rather than COVID-19.  Oh yeah, I'm still feeling icky, so trying to figure this stuff out while unwell is not my favorite pastime, nor the best time for me to figure it out.


   Keith





Tuesday, June 2, 2020

Towards a Common Lexicon for Requirements

Interoperability requirements are notoriously unbalanced, and the explanation for that can be readily found in the oft-quoted Postel's law.

Senders (clients) should take care in what they send; receivers (servers) should be forgiving in what they accept.  This creates a non-uniformity in specifying requirements within an interchange.

Those of us who work in this space are quite commonly familiar with the SHALL/SHOULD/MAY requirements language of RFC-2119.  Some attempts have been made to get that to address the reality of interoperability (see RFC-6919), but humor aside, there's a real challenge here for interoperability experts.

We want to be able to simply express for an exchange the following notions:

  1. The server must support this parameter/field, the client is free to use it as it wishes.
  2. The client must/is recommended/may send this field; the server is free to use it if need be.
  3. The server requires this of the client, it must comply.
  4. The server must support this to address deficient clients, but clients claiming conformance must do it differently if they expect their claims to be taken seriously.
  5. If the client has this data, it must be sent in this field to the server.  If the client doesn't have it, it can be omitted (or marked as not available, null, or in some other way).
Various attempts have been made to address this.  HL7 Version 3 uses the terms mandatory, required, and optional.  IHE has defined R2 (required if known), but this terminology is neither ubiquitous nor readily understood.  FHIR includes "must support", but requires an implementation guide to define what it means in practice.

Data sent in these transmissions have different permanence.  In some cases, a sent field is simply acted upon (or not) by the server, depending on its function, and is subsequently discarded; in others, it is forever indelibly scarred into the ether as an anomaly in the space-time continuum that a forward observer can reliably detect (i.e., written to storage).  And there are the in-between cases of these.

Cardinality constraints have often been looked at as a way to address this issue.  When the minimum cardinality is 0, an item is optional; when greater than 0, required.  But that fails the case of "sometimes I must send it (because I know it), but when I don't know, I don't have to send it."  The data has utility, and use of it can affect outcomes significantly.

The value of defining such a common lexicon for describing interchange requirements is that it would enable the requirements for a data element to be readily determined from the perspective of either the client or the server.

I don't have the words for these concepts, but to explain the value for them, I have about 300 lines of CapabilityStatement generated for the SANER client, and another 300 lines for the server.  There are 30 lines of differences between the two, and these are all about SHALL/SHOULD/MAY sort of requirements, addressing this disparity in viewpoints.

Which leads me to an exercise to be performed: take the 5x5 grid of SHALL/SHOULD/MAY/SHOULD NOT/SHALL NOT requirements for a client against those of a server (and the "if known" variants), determine how many of these are meaningful, and from there, determine what those meanings actually say.
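The enumeration step of that exercise is mechanical; a minimal sketch of it below.  Which pairings count as "meaningful" is exactly the open question, so the contradiction rule here is only an illustrative first-cut assumption, not a conclusion:

```python
from itertools import product

# The five conformance verbs from RFC-2119 (ignoring "if known" variants).
VERBS = ["SHALL", "SHOULD", "MAY", "SHOULD NOT", "SHALL NOT"]

# Every (client requirement, server requirement) pairing in the 5x5 grid.
grid = list(product(VERBS, repeat=2))

# Illustrative first-cut filter: drop pairings that are plainly
# contradictory, e.g. the client SHALL send what the server SHALL NOT
# support.  Refining this rule is the exercise itself.
def contradictory(client, server):
    return {client, server} == {"SHALL", "SHALL NOT"}

candidates = [(c, s) for c, s in grid if not contradictory(c, s)]
print(len(grid), len(candidates))  # 25 pairings, 23 after the filter
```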

It may well be that, as for time, there's an algebra to these client-server relationships, and a way of describing them meaningfully.  And as in all relationships, I'm sure I'll find some cases that are simply dysfunctional.

  Keith