Conditional Probability & Independence
Sometimes knowing that one thing happened changes the probability of another. If it’s cloudy, rain is more likely. If you drew an ace from a deck, the chance of drawing another ace changes. This idea — how one event affects another — is called conditional probability.
Part 1: What Is Conditional Probability?
The probability of event A happening given that B has already happened is written P(A|B), and is defined as:

P(A|B) = P(A and B) / P(B)
Think of it this way: once we know B happened, our entire universe shrinks to just the outcomes where B is true. We then ask: of those outcomes, how many also have A?
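The shrinking-universe idea is easy to make concrete by enumerating a small sample space. Here is a sketch using a hypothetical two-dice example (the dice and events are ours, not part of the lesson's sliders):

```python
from fractions import Fraction

# Toy sample space: two fair six-sided dice, all 36 outcomes equally likely.
# Event A: the sum is 8.  Event B: the first die shows 4 or more.
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

B = [o for o in outcomes if o[0] >= 4]       # the "shrunken universe"
A_and_B = [o for o in B if sum(o) == 8]      # outcomes of A that survive inside B

p_A_given_B = Fraction(len(A_and_B), len(B))
print(p_A_given_B)  # 3 of the 18 outcomes in B -> 1/6
```

Counting only within B is exactly the division P(A and B) / P(B): both numerator and denominator shrink to the new universe.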
Let’s model this with two overlapping bell curves representing events A and B. The overlap region represents outcomes where both happen:
Experiment with the sliders:
- When the overlap is large relative to P(B), P(A|B) is high — knowing B happened makes A very likely
- When the overlap is small relative to P(B), P(A|B) is low — B happening doesn’t help A much
- Keep the overlap less than or equal to the smaller of P(A) and P(B) to stay valid!
Part 2: Visualizing the Shrinking Universe
Here’s another way to think about it. The full distribution represents all possible outcomes. When we condition on B, we zoom in on just the B region:
As you increase the zoom, you’re focusing more on the B region. The ratio of the green area (A within B) to the red area (all of B) gives you P(A|B).
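The area-ratio picture can also be checked by simulation. Below is a minimal Monte Carlo sketch with made-up events on the unit interval (the regions are our assumption, chosen so A sits entirely inside B):

```python
import random

random.seed(0)  # deterministic for reproducibility

# Hypothetical events for a uniform point x in [0, 1):
#   B: x < 0.5   (so P(B) = 0.5, the "red" region)
#   A: x < 0.2   (so P(A) = 0.2, the "green" region, here fully inside B)
n = 100_000
in_B = 0
in_A_and_B = 0
for _ in range(n):
    x = random.random()
    if x < 0.5:
        in_B += 1
        if x < 0.2:
            in_A_and_B += 1

estimate = in_A_and_B / in_B  # ratio of green area to red area
print(round(estimate, 2))     # close to 0.2 / 0.5 = 0.4
```

Conditioning in a simulation is just throwing away every sample where B did not happen, then measuring A among the survivors.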
Part 3: Independence — When Knowing Doesn’t Help
Two events are independent if knowing one happened tells you nothing about the other. Mathematically:

P(A|B) = P(A)

This happens exactly when:

P(A and B) = P(A) * P(B)
Let’s test this. Set P(A), P(B), and the overlap. When the overlap equals the product P(A) * P(B), the events are independent:
Try to make the events independent! Adjust P(A and B) until it equals P(A) * P(B). When you succeed, notice that P(A|B) equals P(A) — conditioning on B doesn’t change the probability of A at all.
For example: if P(A) = 0.4 and P(B) = 0.5, then independence requires P(A and B) = 0.2.
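A sketch of that check in code, using a small hypothetical helper (the tolerance guards against floating-point noise):

```python
def is_independent(p_a: float, p_b: float, p_ab: float, tol: float = 1e-9) -> bool:
    """Product rule: A and B are independent iff P(A and B) = P(A) * P(B)."""
    return abs(p_ab - p_a * p_b) < tol

# The example from the text: P(A) = 0.4, P(B) = 0.5.
print(is_independent(0.4, 0.5, 0.20))  # True  -> independent
print(is_independent(0.4, 0.5, 0.35))  # False -> dependent

# And when independent, conditioning changes nothing:
print(0.20 / 0.5)  # P(A|B) = 0.4, the same as P(A)
```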
Part 4: Dependent Events — When Conditioning Matters
When events are dependent, P(A|B) differs from P(A). The bigger the difference, the stronger the dependence.
Real-world examples:
- Dependent: Drawing cards without replacement — the first draw changes what’s left
- Independent: Flipping a coin twice — the first flip doesn’t affect the second
- Dependent: Weather today and tomorrow — sunny today makes sunny tomorrow more likely
- Independent: Your birthday and your favorite color — no connection
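The first two examples above can be worked out exactly. A short sketch with `fractions` (standard deck and fair coin, the usual assumptions):

```python
from fractions import Fraction

# Dependent: drawing aces without replacement from a 52-card deck.
p_ace_first = Fraction(4, 52)                 # 1/13
p_second_ace_given_first = Fraction(3, 51)    # the deck shrank: 1/17
print(p_ace_first, p_second_ace_given_first)
# P(ace) changed after conditioning, so the draws are dependent.

# Independent: two fair coin flips.
p_heads = Fraction(1, 2)
p_both_heads = p_heads * p_heads              # product rule holds: 1/4
print(p_both_heads)
```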
Part 5: Bayes’ Theorem — Reversing the Condition
What if you know P(B|A) but need P(A|B)? Bayes' Theorem lets you flip the condition:

P(A|B) = P(B|A) * P(A) / P(B)
Medical Test Problem: A disease affects 1% of the population (P(disease) = 0.01). A test correctly detects the disease 95% of the time (P(positive | disease) = 0.95). The test has a 5% false positive rate, so by the law of total probability, P(positive) = 0.95 * 0.01 + 0.05 * 0.99 ≈ 0.059.
Use Bayes’ Theorem: if you test positive, what’s the actual probability you have the disease?
Set the sliders: prior = 0.01, likelihood = 0.95, evidence = 0.059. The answer might surprise you!
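The slider settings above can be reproduced in a few lines. This sketch computes the evidence from the numbers given in the problem rather than taking 0.059 on faith:

```python
def bayes(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence

prior = 0.01        # P(disease)
likelihood = 0.95   # P(positive | disease)
false_pos = 0.05    # P(positive | no disease)

# Law of total probability: positives come from the sick AND the healthy.
evidence = likelihood * prior + false_pos * (1 - prior)
posterior = bayes(prior, likelihood, evidence)

print(round(evidence, 3))   # 0.059
print(round(posterior, 3))  # 0.161 -- only about a 16% chance of disease
```

Despite the "95% accurate" test, a positive result means roughly a 1-in-6 chance of disease, because healthy people vastly outnumber sick ones and so generate most of the positives.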
Wrapping Up
| Concept | Key Formula |
|---|---|
| Conditional Probability | P(A|B) = P(A and B) / P(B) |
| Independence | P(A and B) = P(A) * P(B) |
| Independence test | P(A|B) = P(A) means independent |
| Bayes’ Theorem | P(A|B) = P(B|A) * P(A) / P(B) |
Conditional probability is the foundation of statistical reasoning. Every time you update your beliefs based on new evidence, you're applying Bayes' Theorem — whether you realize it or not.