Bayesian Probability: Updating Belief with Evidence

How priors, likelihoods, and evidence interact in rational belief revision

Students learn the logic of Bayesian reasoning, including priors, likelihoods, posterior updating, base rates, and the disciplined use of probabilistic evidence in inquiry and decision making. The unit emphasizes qualitative Bayesian discipline first and then builds toward simple quantitative updates.

Inductive · Advanced · 300 minutes

Study Flow

How to work through this unit without getting overwhelmed

1. Read the model first

Each lesson opens with a guided explanation so the learner sees what the core move is before any saved response is required.

2. Study an example on purpose

The examples are there to show what strong reasoning looks like and where the structure becomes clearer than ordinary language.

3. Practice with a target in mind

Activities work best when the learner already knows what the answer needs to show, what rule applies, and what mistake would make the response weak.

Lesson Sequence

What you will work through

Lesson 1

Priors, Evidence, and Posterior Belief

Introduces the central components of Bayesian reasoning and why evidence updates rather than replaces prior belief.

Start with a short reading sequence, study one worked example, then use the 15 practice activities to test whether the distinction is actually clear.

Guided reading · 1 worked example · 15 practice activities
Concept · 15 activities · 1 example
Lesson 2

Base Rates and Conditional Probability

Shows how to structure a probabilistic problem so that base rates and conditionals are not confused, and how to perform simple quantitative Bayesian updates using a natural-frequency format.

Read for structure first, inspect how the example turns ordinary language into cleaner form, then complete 15 formalization exercises yourself.

Guided reading · 2 worked examples · 15 practice activities · translation support
Formalization · 15 activities · 2 examples
Lesson 3

Bayesian Comparison of Rival Hypotheses

Connects Bayesian updating to comparative reasoning between competing hypotheses using the Bayes factor and qualitative Bayesian comparison.

Use the reading and examples to learn what the standards demand, then practice applying those standards explicitly across the 15 activities.

Guided reading · 2 worked examples · 15 practice activities · standards focus
Rules · 15 activities · 2 examples
Lesson 4

Capstone: Bayesian Judgment on Real Evidence

An integrative lesson that asks students to apply Bayesian updating to mixed evidence scenarios: identify priors, compute likelihoods under rival hypotheses, update to a posterior, and communicate the result in calibrated language.

The lesson opens with guided reading, then moves through a worked example and 2 practice activities so you are not dropped into the task cold.

Guided reading · 1 worked example · 2 practice activities
Capstone · 2 activities · 1 example

Rules And Standards

What counts as good reasoning here

Respect Base Rates

A probabilistic judgment should not ignore background prevalence or prior probability when the context makes it relevant.

Common failures

  • A striking test result is treated as if it overrides the base rate automatically.
  • Rare-event contexts are assessed as though all hypotheses started equally likely.

Distinguish P(E | H) from P(H | E)

A likelihood is not the same thing as a posterior probability. Swapping them is the 'prosecutor's fallacy'.

Common failures

  • The probability of evidence given a hypothesis is mistaken for the probability of the hypothesis given the evidence.
  • Diagnostic accuracy is confused with posterior certainty.
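
A small worked example makes the gap concrete, and also shows why the base-rate rule above matters (the numbers are illustrative, not drawn from the unit). Suppose a condition has a 1% base rate, a test detects it 90% of the time, and it falsely flags 9% of unaffected cases. In a natural-frequency frame of 1,000 people: about 10 have the condition, of whom 9 test positive; about 990 do not, of whom roughly 89 also test positive. So P(E | H) = 0.90, yet P(H | E) ≈ 9 / (9 + 89) ≈ 0.09. The likelihood is high while the posterior stays low, because the base rate keeps the false positives numerous.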

Update Proportionately to Evidence

Belief revision should reflect both prior plausibility and the relative explanatory weight of the evidence — not the vividness or novelty of the evidence.

Common failures

  • A small piece of evidence causes an excessive revision.
  • Strong contrary evidence produces almost no change in confidence.
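
A quick numeric check of proportionality (illustrative figures): evidence with a Bayes factor of 2 moves even odds (1:1) only to 2:1, that is, from 50% to about 67% confidence. If the same evidence is allowed to push a belief from 50% to 95%, the update has outrun the evidence.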

Compare Evidence Under All Rival Hypotheses

The weight of evidence depends not only on how well it fits the favored hypothesis, but also on how well it fits the rivals.

Common failures

  • Asking only whether the evidence fits H and ignoring whether it fits not-H equally well.
  • Treating evidence as strong because it 'supports' H without checking whether it also supports rival hypotheses.

Formalization Patterns

How arguments get translated into structure

Bayesian Update Schema

Input form

evidence_assessment_problem

Output form

prior_likelihood_posterior_analysis

Steps

  • State the hypothesis under evaluation.
  • Identify the relevant prior probability or base rate.
  • State how likely the evidence would be if the hypothesis were true (the likelihood).
  • State how likely the evidence would be if the hypothesis were false (the false positive rate).
  • Compute or estimate the posterior proportionately.

Common errors

  • Skipping the prior entirely.
  • Using only one-sided likelihood information.
  • Treating the posterior as certainty rather than a revised degree of support.
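
A minimal Python sketch of the schema, reusing the illustrative numbers from the worked example earlier (the function name and figures are assumptions for demonstration, not part of the unit):

  def posterior_from_rates(prior, p_e_given_h, p_e_given_not_h):
      """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
      # The denominator P(E) sums over both H and not-H; dropping the
      # not-H term is the one-sided-likelihood error listed above.
      p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
      return p_e_given_h * prior / p_e

  # Illustrative: 1% base rate, 90% detection rate, 9% false positive rate.
  print(posterior_from_rates(0.01, 0.90, 0.09))  # ~0.092

Note that the output is a revised degree of support, not a verdict: the posterior stays below 10% despite the strong positive likelihood.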

Qualitative Bayesian Comparison

Input form

competing_hypotheses_with_evidence

Output form

relative_support_judgment

Steps

  • State the competing hypotheses.
  • Compare their priors qualitatively (which was more plausible before the evidence?).
  • Compare how strongly each predicts the evidence.
  • Multiply qualitatively: a higher prior and a higher likelihood both push the posterior up.
  • State the posterior ranking with appropriate caution.

Common errors

  • Assuming the hypothesis with the most vivid story automatically gets the higher posterior.
  • Ignoring that weaker priors can sometimes be overcome by much stronger evidence.
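
The odds form of Bayes' rule makes the qualitative multiplication explicit. A minimal Python sketch (the scenario and numbers are illustrative assumptions):

  def posterior_odds(prior_odds, bayes_factor):
      """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
      return prior_odds * bayes_factor

  # Illustrative: H1 starts four times less plausible than H2 (prior odds 1:4),
  # but predicts the evidence twenty times better (Bayes factor 20).
  odds = posterior_odds(1 / 4, 20)   # 5.0, i.e. 5:1 in favor of H1
  print(odds / (1 + odds))           # posterior probability of H1, ~0.83

This is the second common error run in reverse: a weaker prior overcome by much stronger evidence, with the size of the reversal fixed by the numbers rather than by the vividness of either story.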

Concept Map

Key ideas in the unit

Prior Probability

The degree of confidence assigned to a hypothesis before the new evidence is taken into account.

Likelihood

The probability of the observed evidence on the assumption that a given hypothesis is true, written P(E | H).

Posterior Probability

The revised degree of confidence in a hypothesis after incorporating new evidence, written P(H | E).

Base Rate

The background prevalence or prior frequency relevant to the hypothesis being assessed.

Bayes Factor

The ratio of likelihoods under two rival hypotheses, P(E | H1) / P(E | H2), which captures how strongly evidence favors one over the other.
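
For instance (illustrative figures): if P(E | H1) = 0.8 and P(E | H2) = 0.2, the Bayes factor is 0.8 / 0.2 = 4, meaning the evidence is four times more expected under H1 than under H2.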

False Positive Rate

The probability that a test or indicator yields a positive result when the hypothesis is false, written P(E | not-H).

Calibration

The property of having stated confidence match long-run accuracy — 70%-confident predictions should come true about 70% of the time.
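
Calibration can be checked mechanically by grouping past judgments by stated confidence and comparing each group against its observed hit rate. A minimal Python sketch (the prediction data is an illustrative assumption):

  # Each pair is (stated confidence, whether the claim came true).
  predictions = [(0.7, True), (0.7, True), (0.7, False),
                 (0.9, True), (0.9, True), (0.9, True), (0.9, False)]

  # A calibrated judge's 70%-confident claims come true about 70% of the time.
  for level in sorted({c for c, _ in predictions}):
      outcomes = [ok for c, ok in predictions if c == level]
      rate = sum(outcomes) / len(outcomes)
      print(f"stated {level:.0%} -> observed {rate:.0%} over {len(outcomes)} claims")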

Assessment

How to judge your own work

Assessment advice

Each lesson pairs self-check questions with pitfalls to avoid.

Lesson 1: Priors, Evidence, and Posterior Belief

  • What was the prior before the new evidence arrived?
  • How strongly would this evidence have been expected under the hypothesis?
  • How strongly would this evidence have been expected under the negation of the hypothesis?
  • Pitfall: collapsing the distinction between prior support and posterior support.
  • Pitfall: ignoring the denominator of the posterior calculation.

Lesson 2: Base Rates and Conditional Probability

  • What is the background prevalence?
  • Am I mixing up P(E | H) with P(H | E)?
  • Did I account for the false positive rate?
  • Pitfall: treating a high likelihood as the same thing as certainty about the hypothesis.
  • Pitfall: forgetting to include the non-H cases in the denominator.

Lesson 3: Bayesian Comparison of Rival Hypotheses

  • How plausible were the hypotheses before the evidence?
  • Which hypothesis predicted the evidence better?
  • Does the Bayes factor justify the size of the update I'm making?
  • Pitfall: treating Bayesian comparison as if it required certainty rather than differential support.
  • Pitfall: ignoring the prior because the evidence feels overwhelming.

Lesson 4 (Capstone): Bayesian Judgment on Real Evidence

  • Did I write out the base rate before computing anything?
  • Did I compute likelihoods for every rival?
  • Is my verdict expressed in calibrated language?
  • Pitfall: letting the test's sensitivity alone decide the verdict.
  • Pitfall: converting a natural-frequency answer back into a probability without keeping the denominator visible.

Mastery requirements

  • Identify Priors and Likelihoods · 6 successful analyses
  • Avoid Base Rate Neglect · 80% consistent
  • Compute Posterior from Natural Frequencies · 6 successful calculations
  • Compare Posteriors Across Hypotheses · 4 successful comparisons

History Links

How earlier logicians shaped modern tools

Thomas Bayes

In 'An Essay Towards Solving a Problem in the Doctrine of Chances' (published posthumously in 1763), Bayes laid the foundation for probabilistic belief updating in light of evidence.

Legacy: Bayesian inference, posterior updating, and evidence-sensitive probability judgments.

Pierre-Simon Laplace

In his Théorie analytique des probabilités (1812), Laplace extended probabilistic reasoning and helped turn Bayesian ideas into a general inferential framework applicable to science, astronomy, and decision-making.

Legacy: applied probability, model comparison, and systematic uncertainty reasoning.

Amos Tversky and Daniel Kahneman

Documented systematic failures in intuitive probability reasoning, including base-rate neglect, the conjunction fallacy, and representativeness-driven errors.

Legacy: the modern understanding of where and why intuitive Bayesian judgment goes wrong.