[Interactive simulator: set the prevalence and test-accuracy parameters, then click "Generate Results" to see the four-category outcome table (flagged vs. not flagged, by true condition) and a results summary.]

How This Works

This tool corrects for base rate neglect (also called base rate bias or the base rate fallacy) - a common cognitive bias in which people underweight the prevalence (base rate) of a condition when interpreting test results.

The simulator applies Bayes' theorem to show you four outcome categories instead of a single accuracy number. It displays results in frequency format (actual counts of people) rather than percentages, because natural frequencies are easier for most people to reason about than conditional probabilities.
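As a sketch, the four-category split can be computed directly. The parameter names below (`sensitivity`, `specificity`) are illustrative assumptions, not the tool's actual internals:

```python
def outcome_counts(population, prevalence, sensitivity, specificity):
    """Split a population into the four test-outcome categories.

    Illustrative sketch; the simulator's internals may differ.
    """
    affected = population * prevalence
    unaffected = population - affected
    true_positives = affected * sensitivity            # flagged, actually affected
    false_negatives = affected - true_positives        # missed cases
    false_positives = unaffected * (1 - specificity)   # flagged, not affected
    true_negatives = unaffected - false_positives      # correctly cleared
    return true_positives, false_positives, false_negatives, true_negatives
```

The four counts always sum back to the population, which is what makes the frequency table easy to sanity-check.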

Even highly accurate tests can produce more false positives than true positives when screening for rare conditions. This has profound implications for mass screening programs in health, security, and education.
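To see why, consider a hypothetical test that is 99% sensitive and 99% specific, screening for a condition affecting 1 in 1,000 people (illustrative numbers, not drawn from the tool):

```python
population = 100_000
prevalence = 1 / 1_000                 # 1 in 1,000 people affected (hypothetical)
sensitivity = specificity = 0.99       # a "99% accurate" test (hypothetical)

affected = population * prevalence     # 100 people
unaffected = population - affected     # 99,900 people

true_positives = affected * sensitivity             # 99 correctly flagged
false_positives = unaffected * (1 - specificity)    # 999 wrongly flagged

# Probability that a flagged person is actually affected (positive predictive value):
ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} true vs {false_positives:.0f} false positives; PPV = {ppv:.1%}")
```

False positives outnumber true positives roughly ten to one, so about nine in ten flagged people are unaffected, even though the test is "99% accurate."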

Real-World Examples

Mass screenings for low-prevalence problems appear across many domains:

Note: The mammography and PSA examples use parameters extrapolated from the Harding Center's long-term screening data to match their published false-alarm rates. The Harding fact boxes provide more comprehensive outcomes (e.g., mortality, overdiagnosis, quality-of-life effects), whereas this tool focuses solely on test-classification accuracy. The purpose is illustrative, not a substitute for full cohort data.

Important considerations: These outcome estimates assume the stated accuracy rates are correct. In reality, accuracy may be inflated (tests work better in labs than in the field), false-negative rates may be understated (dedicated attackers game the system), and secondary screenings may cause unintended harm. This tool calculates hypothetical effect estimates for screenings of this structure along only one of four causal pathways: classification. It does not compute net effects, which would also account for strategic behavior, information effects, and resource reallocation.