Bayesianity: How Scientists Think About Evidence

Most people don’t understand conditional probability and Bayes’s Theorem, which are the scientifically correct tools for reasoning with probabilities. I am going to give a simple example that I guarantee most people will understand AFTER they see the answer, and that I guarantee most people will NOT understand BEFORE they see it.

If you get this wrong, and then understand the answer, you might feel stupid because the answer is not difficult. You shouldn’t feel stupid. Instead you should feel SMARTER! This kind of reasoning should be taught in high school but it usually isn’t. There’s no shame in not having learned it — although to some people it is truly common sense, most people’s brains do not use this logic naturally and need to be taught.

Here’s the situation (the numbers are realistic but rounded off to make the math simpler). Women are recommended to get their first mammogram when they reach 40, to test for breast cancer. The following facts are known about breast cancer and mammograms for 40-year-old women who haven’t yet been tested or diagnosed:

1) 1% of these women have breast cancer
2) If they have breast cancer, the mammogram has an 80% chance of detecting it and returning a “positive” result, and a 20% chance of missing it and returning an incorrect “negative” result.
3) If they don’t have breast cancer, the mammogram has a 90% chance of correctly saying “negative” and a 10% chance of falsely saying “positive”.

In other words, the test is accurate but not perfect, and if you get a positive result you have to get further, more expensive testing to confirm or contradict it.

Here is the key question, which very few people know how to answer: if you go in and get tested and the result is positive, what is the chance you actually have breast cancer, based on this information?

Obviously it’s now more than 1%, because it was 1% before you took the test and you now have new evidence that increases the chance you have it, but it’s less than 100% because the test sometimes gives a wrong answer.

Please answer in the comments so I can get a good-sized statistical sample and we can learn how good at scientific thinking people here are. Each time a comment arrives I will hide it temporarily so as not to give the answer away too soon.



4 Responses to Bayesianity: How Scientists Think About Evidence

  1. Jehu says:

    Let’s say 1000 women go in to get tested. That’s a nice big number convenient for mental arithmetic.
    10 of them really have breast cancer. Of those 10, 8 are told they drew the black card and 2 are given a fake white card. Now, of the 990 women who DON’T have a real black card, 900 of them are told truthfully that they have a white card, 90 have a fake black card.
    So there are 98 black cards. Of those, 8 are legit. So your chances of actually having breast cancer given a positive test are 8 in 98, which is a touch more than 8%. Damn that’s a crappy test.
    There are 902 white cards. Given that you have a white card, your chances of having breast cancer are 2 in 902, which is a touch more than 0.2%.
    With false positives this high, this test is of dubious utility.

  2. indpndnt says:

    7.4766%

  3. Polymath says:

    Slight error: 99 out of 990 are false positives, not 90 out of 990, so the probability is 8/107, not 8/98.

  4. libertascap says:

    Given a positive result and a known 10% false positive rate, the probability that this patient has breast cancer is now 90%.
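The corrected arithmetic in the thread above (8 true positives out of 107 total positives) can be checked directly with Bayes's Theorem. Here is a short Python sketch using only the numbers given in the post; the variable names are my own labels for those quantities:

```python
# Bayes's Theorem check for the mammogram example in the post.
# All three inputs come straight from the post's setup:
prior = 0.01            # P(cancer): 1% of 40-year-old women
sensitivity = 0.80      # P(positive | cancer): 80% detection rate
false_positive = 0.10   # P(positive | no cancer): 10% false-positive rate

# Total probability of a positive test (true positives + false positives):
p_positive = prior * sensitivity + (1 - prior) * false_positive

# Bayes's Theorem: P(cancer | positive)
posterior = prior * sensitivity / p_positive

print(f"{posterior:.4%}")  # prints 7.4766%
```

This agrees with indpndnt's 7.4766% and with the corrected count of 8/107: out of 1000 women, 8 true positives and 99 false positives give 107 positive tests in all, of which only 8 reflect actual cancer.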
