Bayes' theorem calculator

The Basics in Plain Language

Bayes’ Theorem helps you adjust your beliefs based on new information. Think of it like a math tool for answering: “How likely is my guess now that I’ve seen the evidence?”

Imagine you’re trying to figure out if it’ll rain today. Bayes’ Theorem uses three key pieces of information:

  1. Your initial guess (e.g., 20% chance of rain).
  2. How likely the evidence is if your guess is true (e.g., 90% chance of dark clouds when it rains).
  3. How often the evidence happens in general (e.g., 28% chance of dark clouds on any day).

The formula combines these to give you an updated probability:

$$\text{Updated Belief} = \frac{\text{Initial Guess} \times \text{Evidence Likelihood}}{\text{Total Chance of Evidence}}$$
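
In code, the update is a single division. Here is a minimal Python sketch (the function name and percentage-based inputs are illustrative, not the calculator's actual implementation):

```python
def bayes_update(prior_pct: float, likelihood_pct: float, evidence_pct: float) -> float:
    """Return the posterior P(H|E) as a percentage, given all inputs in percent."""
    prior = prior_pct / 100            # P(H): your initial guess
    likelihood = likelihood_pct / 100  # P(E|H): chance of the evidence if the guess is true
    evidence = evidence_pct / 100      # P(E): total chance of the evidence
    return 100 * prior * likelihood / evidence
```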

Try the Calculator

This tool lets you solve for any missing value. Just fill in three percentages (0–100%) and select what to calculate:

| Field | What It Means | Example (Rain Forecast) |
| --- | --- | --- |
| P(H): Prior | Your starting belief before evidence | 20% chance of rain today |
| P(E\|H): Likelihood | Chance of seeing the evidence if your guess is true | 90% chance of dark clouds if it rains |
| P(E): Total Evidence | How common the evidence is overall | 28% of days have dark clouds |
| P(H\|E): Posterior | Your updated belief after the evidence | The calculator solves this! |

Example:
If you see dark clouds (evidence), the calculator might tell you the rain chance jumps from 20% to 64%.
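
With the bayes_update sketch from above, the same numbers give:

```python
print(round(bayes_update(prior_pct=20, likelihood_pct=90, evidence_pct=28), 1))  # 64.3
```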

Real-Life Examples

1. Medical Tests: Why “95% Accurate” Can Mislead

  • Prior: Only 1% of people have Disease X.
  • Likelihood: The test correctly detects 95% of sick patients.
  • False Alarms: 5% of healthy people still test positive.
  • Total Evidence:
    $(95\% \times 1\%) + (5\% \times 99\%) = 5.9\%$
  • Updated Belief:
    $\frac{95\% \times 1\%}{5.9\%} \approx 16\%$
    A positive test means just 16% risk, not 95%!
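
As a quick check, here is the same arithmetic in Python; posterior_from_rates is a hypothetical helper, not part of the calculator:

```python
def posterior_from_rates(prior: float, true_pos: float, false_pos: float) -> float:
    """P(H|E) when P(E) is built from a true-positive and a false-positive rate."""
    evidence = true_pos * prior + false_pos * (1 - prior)  # total chance of a positive result
    return true_pos * prior / evidence

print(round(posterior_from_rates(prior=0.01, true_pos=0.95, false_pos=0.05), 3))  # 0.161 -> about 16%
```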

2. Spam Emails: How “Free” Triggers Filters

  • Prior: 2% of emails are spam.
  • Likelihood: 80% of spam emails say “free.”
  • False Alarms: 0.1% of real emails say “free.”
  • Updated Belief:
    $\frac{80\% \times 2\%}{(80\% \times 2\%) + (0.1\% \times 98\%)} \approx 94\%$
    An email with “free” has a 94% spam chance.
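
Reusing the posterior_from_rates sketch from the medical example:

```python
print(round(posterior_from_rates(prior=0.02, true_pos=0.80, false_pos=0.001), 2))  # 0.94
```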

Step-by-Step Calculator Guide

Scenario: You want to know the chance of having a rare allergy (1% prior) after testing positive (the test detects 90% of true cases and has an 8% false-positive rate).

  1. Input Prior: 1% (how common the allergy is).
  2. Input Likelihood: 90% (test accuracy if you’re allergic).
  3. Input Total Evidence:
    $(90\% \times 1\%) + (8\% \times 99\%) = 8.82\%$
  4. Calculate Posterior:
    $\frac{90\% \times 1\%}{8.82\%} \approx 10.2\%$
    Result: A positive test means only a 10% chance you actually have it!
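
The same four steps in Python (illustrative variable names, not the calculator's code), showing the intermediate total evidence:

```python
prior, true_pos, false_pos = 0.01, 0.90, 0.08    # steps 1-2 plus the false-positive rate
evidence = true_pos * prior + false_pos * (1 - prior)
print(round(evidence, 4))                        # step 3: 0.0882 -> 8.82%
print(round(true_pos * prior / evidence, 3))     # step 4: 0.102 -> about 10%
```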

Common Mistakes to Avoid

  1. Ignoring the Base Rate: Don’t forget the starting probability (e.g., rare diseases stay rare even with positive tests).
  2. Confusing “Accuracy”: A test’s “95% accuracy” doesn’t mean a 95% chance you’re sick—it depends on how common the disease is.
  3. Forgetting False Positives: Always ask, “How often does this evidence happen by accident?”

Why Bayes’ Theorem Matters Today

  • AI & Netflix Recommendations: Updates predictions based on what you watch.
  • Self-Driving Cars: Adjusts decisions using real-time sensor data.
  • COVID Testing: Helps interpret results in low-risk vs. high-risk groups.

FAQ

Can I use percentages instead of decimals?

Yes! The calculator works with 0–100% inputs (there's no need to convert 5% to 0.05).

What if I don’t know “Total Evidence”?

Select “Calculate P(E)” in the tool. It uses:
$$P(E) = (P(E|H) \times P(H)) + (\text{False Positive Rate} \times (100\% - P(H)))$$
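
In code, with everything in percent (total_evidence_pct is a hypothetical name for illustration):

```python
def total_evidence_pct(prior_pct: float, likelihood_pct: float, false_pos_pct: float) -> float:
    """P(E) in percent, from the prior, the likelihood, and the false-positive rate."""
    return (likelihood_pct * prior_pct + false_pos_pct * (100 - prior_pct)) / 100

print(total_evidence_pct(prior_pct=1, likelihood_pct=90, false_pos_pct=8))  # 8.82
```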

Does Bayes’ Theorem work for multiple updates?

Absolutely! Use the posterior (updated belief) as your new prior for the next piece of evidence.
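
A minimal sketch of chaining updates, where each posterior becomes the prior for the next piece of evidence (the test rates here are made-up illustration values):

```python
def update(prior: float, true_pos: float, false_pos: float) -> float:
    evidence = true_pos * prior + false_pos * (1 - prior)
    return true_pos * prior / evidence

belief = 0.01                                              # start from a 1% prior
for true_pos, false_pos in [(0.90, 0.08), (0.90, 0.08)]:   # two independent positive tests
    belief = update(belief, true_pos, false_pos)           # posterior becomes the new prior
print(round(belief, 2))                                    # about 0.56 after two positives
```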