Poll: Accuracy of Test Interpretation

Now, to take this on to where it needs to go, can you see the point I'm making, which is perhaps more interesting?

The figure quoted is correct if Wrath's clinical probability of being affected is that of the general population. Actually, that is difficult to imagine unless the condition in question is symptom-free in the vast majority of cases. Most interesting diseases do show some clinical signs at some stage.

Now, if Wrath is clinically symptom-free, then his probability of being affected is the probability that someone showing no symptoms is affected. If only half of all sufferers show some clinical signs, this is already down to 0.05%, as only 1 in 2,000 of the asymptomatic population is affected.

The probability that the doctor is right is in fact only 4.72%, in this situation.

On the other hand, if the reason his doctor wanted to test him is that he came in demonstrating clear clinical signs suggestive of the condition in question, then his probability of being affected is the probability that anyone showing these clinical signs has the condition. This depends a lot on how pathognomonic the clinical signs are for the disease. But let's say he was a very typical case, and that 80% of people with these presenting signs actually have the condition.

Now look at the graph, and what it does over at the right-hand side, at the 80% probability of infection level (hint: it's the line that is almost indistinguishable from the 100% line at that level).

There is a 99.75% probability that the doctor is right.
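
For anyone who wants to check these figures, here is a minimal Python sketch (my addition, not from the thread), applying Bayes' theorem and assuming the 99% sensitivity and 99% specificity everyone has been working with:

```python
# Minimal sketch (assumes the test is 99% sensitive and 99% specific,
# as in the poll question): positive predictive value via Bayes' theorem.

def ppv(prior, sensitivity=0.99, specificity=0.99):
    """Probability that a positive result is correct, given the
    clinical (pre-test) probability that the patient is affected."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

print(f"{ppv(0.0005):.2%}")  # asymptomatic patient    -> 4.72%
print(f"{ppv(0.001):.2%}")   # general population      -> 9.02%
print(f"{ppv(0.8):.2%}")     # strong clinical signs   -> 99.75%
```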

This explains why it is vital, when interpreting tests like this, to take into consideration the real likelihood that the patient you are looking at is affected. That is, the conclusions you have come to from your clinical examination and history-taking. Otherwise, if you use a figure for incidence in the general population regardless of the individual's own circumstances, positive results are always judged to be very probably wrong and negative results to be very probably right.

Not much point doing the test if that's how you think.

In fact, it's a good illustration that it's statistically valid to say that if the test gives you the answer you were expecting, it's probably right, but if it gives you a result you weren't expecting, be very cautious. In practice, the unexpected result has to be re-checked by a reference method.

If you are screening well people, it will be the positive results you regard with suspicion, but if you are testing on strong clinical evidence the positive result is pretty safe to accept, and you may well want to check a negative (depending on how suspicious you were in the first place; refer to the graph again).
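
To put numbers on that asymmetry, here is a companion sketch (again my addition, same assumed 99%/99% test) for the reliability of a negative result:

```python
# Sketch of negative predictive value, assuming 99% sensitivity and
# 99% specificity as before: how trustworthy is a negative result?

def npv(prior, sensitivity=0.99, specificity=0.99):
    """Probability that a negative result is correct."""
    true_neg = (1 - prior) * specificity
    false_neg = prior * (1 - sensitivity)
    return true_neg / (true_neg + false_neg)

print(f"{npv(0.001):.4%}")  # screening setting: negatives ~99.999% reliable
print(f"{npv(0.8):.2%}")    # strong suspicion: negatives only ~96.12% reliable
```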

Rolfe.
 
Now that the answer's been given, can someone tell me: is the answer exactly 10% or just close to 10%?
 
Rolfe said:
The thing is, to get the predictive value of a test (which is what Wrath is asking), you need to know the incidence of the condition in the population representative of the individual being tested. This is obviously higher if that population is "sick people with clinical signs typical of the disease in question". In fact, the relevant figure is the clinical probability that this individual is affected.
Actually, this isn't even correct.

Rolfe is confusing the utility of clinical indications with test accuracy. The significance of the test's results to the diagnosis depends upon the proportion of the population that actually has the condition. Its accuracy does not depend on that value.
 
JamesM said:
Now that the answer's been given, can someone tell me: is the answer exactly 10% or just close to 10%?
The answer is about 9%. Since there were a limited number of poll options, 10% was the closest available answer.
 
JamesM said:
Now that the answer's been given, can someone tell me: is the answer exactly 10% or just close to 10%?
9.02% with the number of "decimal places" set to 2.

Rolfe.
 
Wrath of the Swarm said:
The accuracy covers both true positives and true negatives. If I had specified the rate of alpha error only, you would have needed to know the beta error. But since I didn't, you didn't.

But by playing around with the false positive/negative numbers I can keep the total accuracy at 99% while getting a number of different answers to your question. Particularly since you put the "about" in, I can get the chances up to 100% (10,000 tested, wrong 9 times, all false negatives). (OK, that's quite a bit higher than 99% accuracy, unless you have very big error bounds.)
 
Obviously, performing more than one test increases the chance of getting the correct result significantly.

But Rolfe can't distinguish between the accuracy of the test and the usefulness of combining it with another selection procedure. She's also ignoring the important point that many conditions (like certain kinds of cancer) don't have obvious symptoms.

I'm just glad she's a vet in another country instead of a doctor here. She wouldn't have made it through med school, of course, so I suppose the actual risk she would pose is minimal.
 
geni said:
But by playing around with the false positive/negative numbers I can keep the total accuracy at 99% while getting a number of different answers to your question (particularly since you put the "about" in, I can get the chances up to 100%: 10,000 tested, wrong 9 times, all false negatives)
Since I didn't specify different values for alpha and beta error, the single value I have for the accuracy holds for both.

Thanks for trying, though.
 
Wrath of the Swarm said:
Since I didn't specify different values for alpha and beta error, the single value I have for the accuracy holds for both.

Thanks for trying, though.

Simplify; it's two years since I did stats. I fail to see why it should hold for both rather than the sum of both.
 
Because I gave you an overall accuracy. Given any particular input, the test has a 99% chance of giving the correct answer. That holds whether the person has the disease or not.

In reality, tests don't always have equal chances of false positives and false negatives. That's not the case for the hypothetical test, though.
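
A quick Monte Carlo sketch (my own illustration, not from the thread) makes the symmetry concrete: flip the result 1% of the time regardless of the person's true status, and the ~9% predictive value drops out.

```python
# Simulation: a test that is wrong 1% of the time whether or not the
# person has the disease, applied to a 1-in-1,000 prevalence population.
import random

random.seed(1)
positives = true_positives = 0
for _ in range(1_000_000):
    diseased = random.random() < 0.001
    correct = random.random() < 0.99          # 99% accurate either way
    tested_positive = diseased if correct else not diseased
    if tested_positive:
        positives += 1
        true_positives += diseased
print(f"PPV ~ {true_positives / positives:.1%}")  # ~ 9%
```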
 
Wrath of the Swarm said:
The significance of the test's results to the diagnosis depends upon the proportion of the population that actually has the condition. Its accuracy does not depend on that value.
NO, no and thrice no.

This is the whole point. This is the mistake most likely to be made by young graduates who have been brainwashed by statistics of the sort Wrath is peddling.

Now do you see why I went for the answer early and widened the question?

Predictive value of a test depends on the sensitivity, the specificity, and the prevalence of the condition in a population representative of the patient in question. ("Accuracy" is a meaningless term in the context of a test of this nature.)

The prevalence of the condition in a population representative of the patient in question means the prevalence of the condition in a population presenting clinically exactly as the patient in question presents. That is, the clinical probability that the patient in question has the disease.

For example, people often quote the incidence of FeLV (feline leukaemia virus) as 1% in the population as a whole. But if you confine your FeLV testing to cats presented chalk white with a lymphocyte count of > 20 × 10⁹/l, the proportion of wrong positive results you get will be a hell of a lot less than the 66.89% the sums Wrath would like you to do might suggest. Let us assume that the specificity of the FeLV test is about 98% (as it is). Since these cats are perhaps 70% likely to be infected (that is, if the only cats you test are in that group, 70% of the cats you test will be infected), only 0.87% of your positive results will be wrong.

Conversely, if you spend all your time screening healthy pedigree cats from tested-negative households (people do do this, prior to breeding), where there is maybe only a 1 in 10,000 chance that a cat has slipped the net and become infected, 99.51% of your positives will be wrong.
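
A sketch reproducing Rolfe's FeLV figures (the 98% specificity is stated above; the 98% sensitivity is my assumption, chosen because it reproduces the quoted numbers):

```python
# Fraction of positive results that are false positives, at various
# prevalences. Assumes 98% sensitivity and 98% specificity.

def wrong_positive_fraction(prevalence, sensitivity=0.98, specificity=0.98):
    """What proportion of positives are wrong at this prevalence?"""
    false_pos = (1 - prevalence) * (1 - specificity)
    true_pos = prevalence * sensitivity
    return false_pos / (false_pos + true_pos)

print(f"{wrong_positive_fraction(0.01):.2%}")    # general population      -> 66.89%
print(f"{wrong_positive_fraction(0.70):.2%}")    # strongly suggestive signs -> 0.87%
print(f"{wrong_positive_fraction(0.0001):.2%}")  # screened-negative households -> 99.51%
```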

Experienced clinicians understand this. People who've just blindly read a rather superficial statistical explanation of predictive value don't, and routinely underestimate the reliability of a positive result from a clinically sick individual with suggestive clinical signs.

I know it's hard to get your brain round, Wrath, but do try.

Rolfe.
 
Wrath of the Swarm said:
Most medical students (and a significant fraction of doctors) get the question wrong.

Link please.

Nothing you say can be trusted at all. Nobody should believe a word you say unless you provide links and evidence backing up your claims.
 
Well, since this example has been ruined by that whore Rolfe, let's go over the math.

p = fraction of people who have the condition
x = error rate of the test

(1-p)x = fraction of false positives
p(1-x) = fraction of true positives

When are these values equal?

x - xp = p - xp
x = p

When the error rate of the test is equal to the proportion of the population that has the condition, any positive result has a 50% chance of being correct. The calculation becomes more complicated if we presume different rates for false positives and false negatives, of course.
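
A numeric check of that algebra (my addition, not part of the post):

```python
# When the error rate x equals the prevalence p, a positive result
# has exactly a 50% chance of being correct.

def ppv(p, x):
    """PPV with prevalence p and a symmetric error rate x."""
    return p * (1 - x) / (p * (1 - x) + (1 - p) * x)

print(ppv(0.01, 0.01))    # 0.5
print(ppv(0.001, 0.001))  # 0.5
```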
 
This thread has been reported but I see no breaking of the forum's rules. No action will be taken.
Replying to this modbox in thread will be off topic. Posted By: Luciana

Civility, however, is always desirable...
 
Wrath of the Swarm said:
Because I gave you an overall accuracy. Given any particular input, the test has a 99% chance of giving the correct answer. That holds whether the person has the disease or not.

In reality, tests don't always have equal chances of false positives and false negatives. That's not the case for the hypothetical test, though.
This is meaningless. You didn't say that the test was both 99% sensitive and 99% specific. I had to assume it before I could even begin.

Suppose the test was 100% specific and only 98% sensitive (bloody good test if it managed that). Would you, by that reasoning, still call that "99% accurate"? However, in that case, all positive results are correct, so the doctor knows he's right a priori.
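
A quick numeric illustration of that point (mine, not Rolfe's): with 100% specificity the false-positive term in Bayes' theorem vanishes, so every positive result is correct regardless of prevalence.

```python
# With perfect specificity there are no false positives, so the PPV
# is 1.0 no matter how rare the condition is.

def ppv(prior, sensitivity, specificity):
    tp = prior * sensitivity
    fp = (1 - prior) * (1 - specificity)
    return tp / (tp + fp)

print(ppv(0.001, 0.98, 1.0))     # 1.0
print(ppv(0.000001, 0.98, 1.0))  # 1.0
```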

Rolfe.
 
Rolfe said:
NO, no and thrice no.

This is the whole point. This is the mistake most likely to be made by young graduates who have been brainwashed by statistics of the sort Wrath is peddling.

Now do you see why I went for the answer early and widened the question?

Predictive value of a test depends on the sensitivity, the specificity, and the prevelance of the condition in a population representative of the patient in question.
Wrong.

If there are diagnostic criteria that must be met before a test is performed, that's performing two different tests. One just isn't done in a laboratory. We then must consider the error rate of the initial screening by symptoms. After all, surely it's not fallible.

What Rolfe describes (winnowing the population before lab tests are performed) is good medicine, but she's incorrectly describing what she's doing. We're talking about whether the test is correct or not; she's talking about whether clinical judgments based on its result are correct, and that's a completely different issue.
 
Rolfe said:
This is meaningless. You didn't say that the test was both 99% sensitive and 99% specific. I had to assume it before I could even begin.
I did say that. I said the test is 99% accurate. That sets both values. If I said that the test would correctly identify a person with the condition 99% of the time, then there wouldn't be enough information for anyone to answer the question - you'd know the false negative rate, but not the false positive rate. But that isn't what I said.

It's a good thing you can look up the answers on a chart, because you sure as hell can't handle the concepts involved.
 
Hmmm. Seeing as the cat is already out of the bag :) let's work this out using a population of 1,000,000 people. As the disease affects one out of every 1,000 people, we know that 1,000 people will be infected.

Of the one thousand people that are infected, 990 will be told they have the disease and 10 will test negative.

Of the remaining 999,000 people who don't have the disease, 1% (i.e. 9,990) will be told they tested positive and the remaining 989,010 will be told they tested clear.

So a total of 990 + 9,990 = 10,980 people will be told they tested positive and of those only 990 people will really be ill.

So if you are told you tested positive for the disease, the chances that you actually have it are:

990 / 10,980 = 9.016393443 %
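
For anyone who wants to reproduce this in code, here is the same natural-frequency working (my addition):

```python
# The post's working in code: 1,000,000 people, 1-in-1,000 prevalence,
# 99% sensitivity and 99% specificity.
population = 1_000_000
infected = population // 1_000        # 1,000 infected
true_pos = round(infected * 0.99)     # 990 correctly told positive
false_neg = infected - true_pos       # 10 missed
healthy = population - infected       # 999,000 uninfected
false_pos = round(healthy * 0.01)     # 9,990 wrongly told positive
all_pos = true_pos + false_pos        # 10,980 positives in total
print(f"{true_pos / all_pos:.9%}")    # 9.016393443%
```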
 
Wrath said:
It must be nice to be able to psychically determine what someone's position is before they state it and tear apart the holes in the arguments they haven't made yet.
You mean there wasn't a hidden agenda here? Wow, fooled me, too.

~~ Paul
 
Correct.

This is why people shouldn't be overly concerned about screening tests that return positive results. One positive HIV test doesn't mean very much - which is why when someone is found to be HIV positive, a second round of testing commences with a more expensive but higher-quality test that's less likely to give the wrong answer.

Of course, it's always possible that some unlucky person will get a false positive for multiple tests... but that's not as bad as the poor saps who get a false negative and never go on for more testing.

Anyway, it has been shown that a very large number of medical students have problems with this question - and even doctors interpreting the results of things like mammograms, PSAs, and HIV tests. A lot of research has gone into ways of presenting test data that are less likely to cause people to reach the wrong conclusions. When results are returned in terms of population frequency, people are much less likely to misunderstand what the tests mean.
 
