Statistics

Conditional Probability & Independence

Sometimes knowing that one thing happened changes the probability of another. If it’s cloudy, rain is more likely. If you drew an ace from a deck, the chance of drawing another ace changes. This idea — how one event affects another — is called conditional probability.


Part 1: What Is Conditional Probability?

The probability of event A happening given that B has already happened is written:

P(A \mid B) = \frac{P(A \cap B)}{P(B)}

Think of it this way: once we know B happened, our entire universe shrinks to just the outcomes where B is true. We then ask: of those outcomes, how many also have A?
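The intro's ace example makes this concrete, and it's small enough to check exactly by enumerating every ordered two-card draw (a minimal Python sketch):

```python
# The intro's ace example: once the first ace is drawn, the universe
# shrinks to 51 cards, only 3 of which are aces.
from itertools import permutations

deck = ["ace"] * 4 + ["other"] * 48
draws = list(permutations(range(52), 2))       # all ordered pairs of distinct cards

b = [d for d in draws if deck[d[0]] == "ace"]  # B: first card is an ace
ab = [d for d in b if deck[d[1]] == "ace"]     # A and B: both cards are aces

p_a_given_b = len(ab) / len(b)                 # = P(A and B) / P(B)
print(p_a_given_b)  # 3/51 ≈ 0.0588
```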

Let’s model this with two overlapping bell curves representing events A and B. The overlap region represents outcomes where both happen:

[Interactive demo: sliders control P(A) (0.1–0.9, default 0.5), P(B) (0.1–0.9, default 0.5), and the overlap P(A and B) (0.05–0.5, default 0.2). The plot shows the two bell curves for events A and B and their overlap region; with the defaults, P(A | B) = P(A and B) / P(B) = 0.2 / 0.5 = 0.4.]
Try This

Experiment with the sliders:

  • When the overlap is large relative to P(B), P(A|B) is high — knowing B happened makes A very likely
  • When the overlap is small relative to P(B), P(A|B) is low — B happening doesn’t help A much
  • Keep the overlap less than or equal to the smaller of P(A) and P(B) to stay valid!

Part 2: Visualizing the Shrinking Universe

Here’s another way to think about it. The full distribution represents all possible outcomes. When we condition on B, we zoom in on just the B region:

[Interactive demo: a zoom slider (1× to 3×, default 1×) focuses the plot on the B region, highlighting all outcomes, the P(B) region, and A within B.]

As you increase the zoom, you’re focusing more on the B region. The ratio of the green area (A within B) to the red area (all of B) gives you P(A|B).
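This filtering view translates directly into code: draw samples from a distribution, keep only the ones where B holds, and measure how often A holds among the survivors. The events below (x > 0 and x > 1) are illustrative choices of mine, not the page's actual curves:

```python
import random

# Conditioning by filtering: the "universe" shrinks to the samples where B holds.
random.seed(1)
samples = [random.gauss(0.0, 2.0) for _ in range(200_000)]

in_b = [x for x in samples if x > 0]   # B: x > 0 (illustrative event)
in_ab = [x for x in in_b if x > 1]     # A within B: x > 1

p_a_given_b = len(in_ab) / len(in_b)   # green area / red area, approximately
print(round(p_a_given_b, 2))
```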


Part 3: Independence — When Knowing Doesn’t Help

Two events are independent if knowing one happened tells you nothing about the other. Mathematically:

\text{A and B are independent} \iff P(A \mid B) = P(A)

This happens exactly when:

P(A \cap B) = P(A) \times P(B)

Let’s test this. Set P(A), P(B), and the overlap. When the overlap equals the product P(A) * P(B), the events are independent:

[Interactive demo: sliders control P(A) (0.1–0.8, default 0.4), P(B) (0.1–0.8, default 0.5), and P(A and B) (0.01–0.6, default 0.2). With the defaults, P(A) × P(B) = 0.4 × 0.5 = 0.2 = P(A and B), and P(A | B) = 0.2 / 0.5 = 0.4 = P(A), so A and B are independent.]
Try This

Try to make the events independent! Adjust P(A and B) until it equals P(A) * P(B). When you succeed, notice that P(A|B) equals P(A) — conditioning on B doesn’t change the probability of A at all.

For example: if P(A) = 0.4 and P(B) = 0.5, then independence requires P(A and B) = 0.2.
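That worked example can be verified in a few lines (the `is_independent` helper is my own, just for this sketch):

```python
# Independence means P(A and B) = P(A) * P(B), which forces P(A|B) = P(A).
def is_independent(p_a, p_b, p_ab, tol=1e-9):
    return abs(p_ab - p_a * p_b) < tol

p_a, p_b, p_ab = 0.4, 0.5, 0.2         # the example from the text
p_a_given_b = p_ab / p_b

print(is_independent(p_a, p_b, p_ab))  # True
print(abs(p_a_given_b - p_a) < 1e-12)  # True: conditioning on B changes nothing
```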


Part 4: Dependent Events — When Conditioning Matters

When events are dependent, P(A|B) differs from P(A). The bigger the difference, the stronger the dependence.

[Interactive demo: sliders shift A relative to B (−3 to 3, default 0) and set the overlap width (0.5 to 3, default 1.5); the plot shows both curves and their joint region.]
Connection

Real-world examples:

  • Dependent: Drawing cards without replacement — the first draw changes what’s left
  • Independent: Flipping a coin twice — the first flip doesn’t affect the second
  • Dependent: Weather today and tomorrow — sunny today makes sunny tomorrow more likely
  • Independent: Your birthday and your favorite color — no connection
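The first bullet can be simulated directly. Drawing without replacement shifts the conditional probability (3/51 after an ace is gone), while the unconditional chance of an ace on the second draw stays 4/52:

```python
import random

# Dependence in action: two cards drawn without replacement.
random.seed(2)
deck = ["ace"] * 4 + ["other"] * 48
trials = 100_000

first_ace = second_ace = both_ace = 0
for _ in range(trials):
    a, b = random.sample(deck, 2)   # an ordered draw without replacement
    first_ace += (a == "ace")
    second_ace += (b == "ace")
    both_ace += (a == "ace" and b == "ace")

p_second = second_ace / trials               # unconditional: ≈ 4/52 ≈ 0.077
p_second_given_first = both_ace / first_ace  # conditional:   ≈ 3/51 ≈ 0.059
print(round(p_second, 3), round(p_second_given_first, 3))
```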

Part 5: Bayes’ Theorem — Reversing the Condition

What if you know P(B|A) but need P(A|B)? Bayes’ Theorem lets you flip the condition:

P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}
[Interactive demo: sliders set the prior P(A) (0.01–0.5, default 0.1), the likelihood P(B|A) (0.1–1, default 0.8), and the evidence P(B) (0.1–0.8, default 0.3); the plot compares the prior, posterior, and evidence. With the defaults, P(A | B) = (0.8 × 0.1) / 0.3 ≈ 0.27.]
Challenge

Medical Test Problem: A disease affects 1% of the population (P(disease) = 0.01). A test correctly detects the disease 95% of the time (P(positive | disease) = 0.95), and it has a 5% false positive rate (P(positive | no disease) = 0.05). By total probability, P(positive) = 0.95 × 0.01 + 0.05 × 0.99 ≈ 0.059.

Use Bayes’ Theorem: if you test positive, what’s the actual probability you have the disease?

Set the sliders: prior = 0.01, likelihood = 0.95, evidence = 0.059. The answer might surprise you!
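If you want to check your slider answer, here is the same calculation in a few lines of Python (variable names are mine; values come from the problem statement):

```python
# Bayes' Theorem on the medical test problem.
p_disease = 0.01            # prior P(A)
p_pos_given_disease = 0.95  # likelihood P(B|A)
p_pos_given_healthy = 0.05  # false positive rate P(B | not A)

# Evidence by total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

posterior = p_pos_given_disease * p_disease / p_pos
print(round(p_pos, 3))      # 0.059, matching the problem statement
print(round(posterior, 3))  # 0.161: a positive test means only ~16% chance of disease
```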


Wrapping Up

Concept                     Key Formula
Conditional Probability     P(A|B) = P(A and B) / P(B)
Independence                P(A and B) = P(A) * P(B)
Independence test           P(A|B) = P(A) means independent
Bayes' Theorem              P(A|B) = P(B|A) * P(A) / P(B)

Conditional probability is the foundation of statistical reasoning. Every time you update your beliefs based on new evidence, you’re doing Bayes’ Theorem — whether you realize it or not.

Take the Quiz