The human brain struggles to comprehend risk. We find it difficult to translate the mathematical fact of probability into an accurate assessment of danger, and this is especially true in medicine, where emotion frequently clouds rational thinking.
In one study, Gerd Gigerenzer and his colleagues asked doctors in Germany and the United States to estimate the probability that a woman with a positive mammogram actually has breast cancer, even though she’s in a low-risk group: 40 to 50 years old, with no symptoms or family history of breast cancer. To make the question concrete, the doctors were given the following statistics, couched in percentages and probabilities, about the prevalence of breast cancer among women in this cohort and about the mammogram’s sensitivity and false-positive rate:
The probability that one of these women has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
The trick is to think in terms of “natural frequencies” — simple counts of events — rather than the more abstract notions of percentages, odds, or probabilities. As soon as you make this mental shift, the fog lifts.
This is the central lesson of “Calculated Risks,” a fascinating book by Gigerenzer, a cognitive psychologist at the Max Planck Institute for Human Development in Berlin.
In a series of studies about medical and legal issues ranging from AIDS counseling to the interpretation of DNA fingerprinting, Gigerenzer explores how people miscalculate risk and uncertainty. But rather than scold or bemoan human frailty, he tells us how to do better — how to avoid “clouded thinking” by recasting conditional probability problems in terms of natural frequencies.
Back to the mammogram problem: the correct answer is roughly 9 percent.
How can it be so low? Gigerenzer’s point is that the analysis becomes almost transparent if we translate the original information from percentages and probabilities into natural frequencies:
Eight out of every 1,000 women have breast cancer. Of these 8 women with breast cancer, 7 will have a positive mammogram. Of the remaining 992 women who don’t have breast cancer, some 70 will still have a positive mammogram.
Imagine a sample of women who have positive mammograms in screening. How many of these women actually have breast cancer?
Since a total of 7 + 70 = 77 women have positive mammograms, and only 7 of them truly have breast cancer, the probability of having breast cancer given a positive mammogram is 7 out of 77, which is 1 in 11, or about 9 percent.
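For readers who want to double-check this against the original percentages, the standard Bayes’ theorem calculation (not part of Gigerenzer’s frequency-based presentation, but equivalent to it) gives the unrounded answer:

\[
P(\text{cancer} \mid \text{positive})
= \frac{P(\text{positive} \mid \text{cancer})\, P(\text{cancer})}{P(\text{positive})}
= \frac{0.90 \times 0.008}{0.90 \times 0.008 + 0.07 \times 0.992}
= \frac{0.0072}{0.07664} \approx 0.094
\]

The exact figure, about 9.4 percent, differs slightly from the 7-out-of-77 count only because of the rounding discussed next.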
Notice two simplifications in the frequency calculation. First, we rounded off decimals to whole numbers.
That happened in a couple of places, as when we said, “Of these 8 women with breast cancer, 7 will have a positive mammogram.” Really we should have said that 90 percent of 8 women, or 7.2 women, will have a positive mammogram. So we sacrificed a little precision for a lot of clarity.
Second, we assumed that everything happens exactly as frequently as its probability suggests. For instance, since the probability of breast cancer is 0.8 percent, exactly 8 women out of 1,000 in our hypothetical sample were assumed to have it. In reality, this wouldn’t necessarily be true.
Things don’t have to follow their probabilities; a coin flipped 1,000 times doesn’t always come up heads 500 times. But pretending that it does gives the right answer in problems like this.
Although reformulating the data in terms of natural frequencies is a huge help, conditional probability problems can still be perplexing for other reasons. It’s easy to ask the wrong question, or to calculate a probability that’s correct but misleading. For instance, the mammogram’s 90 percent sensitivity, the probability of a positive result given cancer, is all too easily mistaken for the roughly 9 percent probability of cancer given a positive result.