Rigorous Reasoning

Bayesian Probability

Base Rates and Conditional Probability


Inductive Formalization, Lesson 2

Start Here

What this lesson is helping you do

This lesson shows how to structure a probabilistic problem so that base rates and conditionals are not confused, and how to perform simple quantitative Bayesian updates in a natural-frequency format. The practice depends on understanding Likelihood, Posterior Probability, Base Rate, and False Positive Rate, and on correctly applying tools such as Respect Base Rates and Distinguish P(E | H) from P(H | E).

How to approach it

Read for structure, not just vocabulary. The goal is to learn how natural-language claims are converted into a cleaner formal shape.

What the practice is building

You will put the explanation to work through evaluation, quiz, diagnosis, analysis, rapid-identification, and argument-building activities, so the goal is not just to recognize the idea but to use it under your own control.

What success should let you do

Analyze 6 Bayesian cases with explicit base rate, likelihood, false positive rate, and natural-frequency calculation.

Reading Path

Move through the lesson in this order

The page is designed to teach before it tests. Use this sequence to keep the reading, examples, and practice in the right relationship.

Read

Build the mental model

Move through the guided explanation first so the central distinction and purpose are clear before you evaluate your own work.

Study

Watch the move in context

Use the worked examples to see how the reasoning behaves when someone else performs it carefully.

Do

Practice with a standard

Only then move into the activities, using the pause-and-check prompts as a final checkpoint before you submit.

Guided Explanation

Read this before you try the activity

These sections give you a usable mental model first, so the practice feels like application rather than guesswork.

Core idea

The base rate sets the stage

The base rate is the background prevalence of the hypothesis in the population you're considering. If you're testing for a disease that affects 1% of people, the base rate is 1%. The base rate is your prior when you have no individual information about the person or case in front of you.

The base rate matters because it sets the scale of the problem. A very rare condition means most positive tests will be false positives, even when the test is quite accurate. A very common condition means most positive tests will be real. Without the base rate, you cannot interpret a test result correctly — you only know part of the picture.

What to look for

  • Ask what fraction of cases in this population the hypothesis applies to.
  • Treat the base rate as the prior when no individual evidence is available.
  • Use the base rate to set the scale for the rest of the problem.
The base rate is the prior in disguise — never skip it.

Key technique

Natural frequencies make Bayesian problems manageable

Instead of working with percentages or decimal probabilities, it is often easier to imagine a population of 1,000 or 10,000 and count how many fall into each category. This 'natural frequency' approach turns confusing probability problems into simple counting.

For example, in a population of 10,000: if the base rate is 1%, 100 have the disease. If the test catches 99% of true cases, 99 of those test positive. Of the 9,900 who don't have the disease, 1% — or 99 — will test positive anyway. So out of 198 positive tests, 99 are real. The posterior P(disease | positive) is 99/198 = 50%. Counting people instead of multiplying probabilities makes the structure visible.

What to look for

  • Pick a round population size.
  • Count cases in each category (has disease, tests positive, both).
  • Compute the posterior by dividing true positives by all positives.
Natural frequencies turn Bayesian calculation into counting, which is much easier to do correctly.
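The counting recipe above can be sketched in a few lines of Python. This is only a sketch of the technique, using the 1%-base-rate example from this section; the variable names are mine.

```python
# Natural-frequency count: population 10,000, 1% base rate,
# 99% sensitivity, 1% false positive rate.
population = 10_000
diseased = population * 0.01            # 100 people have the disease
healthy = population - diseased         # 9,900 do not
true_positives = diseased * 0.99        # 99 sick people test positive
false_positives = healthy * 0.01        # 99 healthy people test positive anyway
posterior = true_positives / (true_positives + false_positives)
print(posterior)                        # 0.5
```

Note that every line is a count of people, not a probability; the division at the end is "true positives over all positives," exactly as in the worked numbers above.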

Critical distinction

P(E | H) is not P(H | E)

The probability of evidence given a hypothesis and the probability of a hypothesis given evidence are two different quantities. They point in opposite directions. Swapping them is called the 'prosecutor's fallacy' because it has shown up in real criminal trials, where the probability of matching DNA given innocence was confused with the probability of innocence given matching DNA.

These quantities can be very far apart. P(positive test | disease) might be 99%, while P(disease | positive test) can be 9% — depending entirely on the base rate and the false positive rate. A high likelihood does not by itself imply a high posterior. Any Bayesian argument that moves from P(E | H) directly to P(H | E) without considering the prior and the false positive rate is broken.

What to look for

  • Write out which direction of the conditional you have.
  • Check whether the argument silently swapped directions.
  • Never treat a high likelihood as direct evidence of a high posterior.
P(E | H) and P(H | E) are different — swapping them is one of the most consequential errors in applied probability.

Assembly

The full Bayesian update

Putting the pieces together: the posterior depends on the prior (base rate), the likelihood under H, and the likelihood under not-H (the false positive rate). In natural-frequency form: take the expected number of true positives and divide by the expected number of positive tests overall.

Bayes' theorem is the formal statement of this idea: P(H | E) = [P(E | H) × P(H)] / P(E), where P(E) = P(E | H) × P(H) + P(E | not-H) × P(not-H). The formula looks complicated, but natural frequencies let you compute the same quantity without ever writing it out. Use whichever form feels clearer for the problem at hand.

What to look for

  • Identify the prior, the likelihood under H, and the false positive rate.
  • Use natural frequencies or Bayes' theorem as convenient.
  • Double-check that your answer is between 0 and 1 and moves in the right direction.
Bayes' theorem is just bookkeeping for the natural-frequency picture.
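The formula version can be written as a small function and checked against the natural-frequency example. This is a sketch; the function and parameter names are my own.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E),
    where P(E) = P(E|H) P(H) + P(E|not-H) P(not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Same numbers as the natural-frequency example:
# 1% base rate, 99% sensitivity, 1% false positive rate.
print(posterior(0.01, 0.99, 0.01))   # 0.5, agreeing with the counting version
```

The denominator is the bookkeeping step: it adds up all the ways the evidence could occur, under H and under not-H.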

Core Ideas

The main concepts to keep in view

Use these as anchors while you read the example and draft your response. If the concepts blur together, the practice usually blurs too.

Likelihood

The probability of the observed evidence on the assumption that a given hypothesis is true, written P(E | H).

Why it matters: Likelihood measures how well a hypothesis predicts the evidence — not how probable the hypothesis is.

Posterior Probability

The revised degree of confidence in a hypothesis after incorporating new evidence, written P(H | E).

Why it matters: The posterior is the result of rational belief updating — what you should believe now.

Base Rate

The background prevalence or prior frequency relevant to the hypothesis being assessed.

Why it matters: Ignoring the base rate is the most common and most costly Bayesian error.

False Positive Rate

The probability that a test or indicator yields a positive result when the hypothesis is false, written P(E | not-H).

Why it matters: A positive indicator is only meaningful when the false positive rate is small relative to the true positive rate.

Reference

Open these only when you need the extra structure

How the lesson is meant to unfold

Concept Intro

The core idea is defined and separated from nearby confusions.

Formalization Demo

The lesson shows how the same reasoning looks once its structure is made explicit.

Worked Example

A complete example demonstrates what correct reasoning looks like in context.

Guided Practice

You apply the idea with scaffolding still visible.

Independent Practice

You work more freely, with less support, to prove the idea is sticking.

Assessment Advice

Use these prompts to judge whether your reasoning meets the standard.

Mastery Check

The final target tells you what successful understanding should enable you to do.

Reasoning tools and formal patterns

Rules and standards

These are the criteria the unit uses to judge whether your reasoning is actually sound.

Respect Base Rates

A probabilistic judgment should not ignore background prevalence or prior probability when the context makes it relevant.

Common failures

  • A striking test result is treated as if it overrides the base rate automatically.
  • Rare-event contexts are assessed as though all hypotheses started equally likely.

Distinguish P(E | H) from P(H | E)

A likelihood is not the same thing as a posterior probability. Swapping them is the 'prosecutor's fallacy'.

Common failures

  • The probability of evidence given a hypothesis is mistaken for the probability of the hypothesis given the evidence.
  • Diagnostic accuracy is confused with posterior certainty.

Update Proportionately to Evidence

Belief revision should reflect both prior plausibility and the relative explanatory weight of the evidence — not the vividness or novelty of the evidence.

Common failures

  • A small piece of evidence causes an excessive revision.
  • Strong contrary evidence produces almost no change in confidence.

Compare Evidence Under All Rival Hypotheses

The weight of evidence depends not only on how well it fits the favored hypothesis, but also on how well it fits the rivals.

Common failures

  • Asking only whether the evidence fits H and ignoring whether it fits not-H equally well.
  • Treating evidence as strong because it 'supports' H without checking whether it also supports rival hypotheses.

Patterns

Use these when you need to turn a messy passage into a cleaner logical structure before evaluating it.

Bayesian Update Schema

Input form

evidence_assessment_problem

Output form

prior_likelihood_posterior_analysis

Steps

  • State the hypothesis under evaluation.
  • Identify the relevant prior probability or base rate.
  • State how likely the evidence would be if the hypothesis were true (the likelihood).
  • State how likely the evidence would be if the hypothesis were false (the false positive rate).
  • Compute or estimate the posterior proportionately.

Watch for

  • Skipping the prior entirely.
  • Using only one-sided likelihood information.
  • Treating the posterior as certainty rather than a revised degree of support.
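The steps of the schema can be sketched as a single function that makes each required quantity explicit. This is a hypothetical sketch (the dictionary keys are my own labels), run here on the 1-in-1,000 screening example from this lesson.

```python
def bayesian_update(prior, likelihood, false_positive_rate):
    """Apply the schema: prior, likelihood under H, likelihood under not-H."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return {
        "prior": prior,
        "likelihood": likelihood,
        "false_positive_rate": false_positive_rate,
        "posterior": likelihood * prior / p_evidence,
    }

# 1-in-1,000 condition, 99% true positive rate, 1% false positive rate.
result = bayesian_update(prior=0.001, likelihood=0.99, false_positive_rate=0.01)
print(round(result["posterior"], 2))   # 0.09
```

Forcing yourself to supply all three inputs is the point: the function cannot run if you skip the prior or the false positive rate.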

Qualitative Bayesian Comparison

Input form

competing_hypotheses_with_evidence

Output form

relative_support_judgment

Steps

  • State the competing hypotheses.
  • Compare their priors qualitatively (which was more plausible before the evidence?).
  • Compare how strongly each predicts the evidence.
  • Multiply qualitatively: a higher prior and a higher likelihood both push the posterior up.
  • State the posterior ranking with appropriate caution.

Watch for

  • Assuming the hypothesis with the most vivid story automatically gets the higher posterior.
  • Ignoring that weaker priors can sometimes be overcome by much stronger evidence.

Worked Through

Examples that model the standard before you try it

Do not skim these. A worked example earns its place when you can point to the exact move it is modeling and the mistake it is trying to prevent.

Worked Example

Spam Detection

Even with a high base rate and a very accurate filter, the posterior is not 100%. It is high enough for routine filtering but not high enough to auto-delete without review.

Prior

Base rate: 40% of messages are spam.

True Positive Rate

The filter catches 95% of actual spam.

False Positive Rate

The filter mistakenly flags 5% of non-spam.

Natural Frequency Analysis

In 1,000 messages: 400 are spam, 600 are not. 95% of 400 = 380 true positives. 5% of 600 = 30 false positives. Total flagged: 410. Posterior P(spam | flagged) = 380/410 ≈ 93%.
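The count above translates directly into Python. A sketch only; the variable names are mine.

```python
# Spam filter: 1,000 messages, 40% spam,
# 95% true positive rate, 5% false positive rate.
messages = 1_000
spam = messages * 0.40              # 400 spam messages
ham = messages - spam               # 600 legitimate messages
flagged_spam = spam * 0.95          # 380 true positives
flagged_ham = ham * 0.05            # 30 false positives
posterior = flagged_spam / (flagged_spam + flagged_ham)
print(round(posterior, 2))          # 0.93
```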

Worked Example

Rare-condition screening (standard case)

Even a 99%-accurate test applied to a 1-in-1,000 condition produces a posterior near 9%, not 99%. Base rate dominates.

Prior

Base rate: 1 in 1,000 has condition.

True Positive Rate

99%.

False Positive Rate

1%.

Natural Frequency Analysis

In 10,000 people: 10 have the condition, 9,990 don't. Of the 10, essentially all test positive (9.9 on average). Of the 9,990, 1% is about 100, and they test positive anyway. Total positives: ~110. Posterior P(condition | positive) ≈ 10/110 ≈ 9%.

Pause and Check

Questions to use before you move into practice

Self-check questions

  • What is the background prevalence?
  • Am I mixing up P(E | H) with P(H | E)?
  • Did I account for the false positive rate?

Practice

Now apply the idea yourself

Move into practice only after you can name the standard you are using and the structure you are trying to preserve or evaluate.

Evaluation Practice

Inductive

Structure the Bayesian Update

For each case, identify the base rate, the likelihood under the hypothesis, and the false positive rate. Then compute the posterior using natural frequencies.

Four Bayesian update problems

Show your work using a population of 10,000. Then state the posterior and a one-sentence interpretation.

Case 1 — Rare disease test

A disease affects 1 in 10,000 people. A screening test is 99% accurate (99% true positive rate and 99% specificity, i.e., 1% false positive rate). A randomly screened person tests positive. What is P(disease | positive)?

Notice how the rarity of the disease dominates the posterior.

Case 2 — Common virus test

A common virus affects 30% of people in flu season. A test is 90% accurate in both directions. A random person tests positive. What is P(virus | positive)?

With a high base rate, the posterior is much closer to the likelihood.

Case 3 — Spam filter

In a given inbox, 40% of incoming messages are spam. A spam filter flags 95% of actual spam and mistakenly flags 5% of non-spam. A message is flagged. What is P(spam | flagged)?

What's the posterior, and how confident should the user be in the flag?

Case 4 — Security alert

Of all login attempts from new devices on a secure system, 0.1% are malicious. A detection model catches 99% of malicious logins and flags 2% of benign ones. A login is flagged. What is P(malicious | flagged)?

Does the detector's accuracy justify blocking the login, or is further verification needed?
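If you want to double-check your natural-frequency arithmetic after working each case by hand, a small helper along these lines may be useful. This is hypothetical; the function and argument names are my own, and it is verified here only against the spam worked example (Case 3), whose answer the lesson already gives.

```python
def case_posterior(base_rate, true_positive_rate, false_positive_rate,
                   population=10_000):
    """Natural-frequency posterior: true positives over all positives."""
    has = population * base_rate
    lacks = population - has
    tp = has * true_positive_rate
    fp = lacks * false_positive_rate
    return tp / (tp + fp)

# Sanity check against the spam worked example (Case 3):
print(round(case_posterior(0.40, 0.95, 0.05), 2))   # 0.93
```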

Use one of the cases above, identify the evidence base, and judge how strong the conclusion is once you account for rival factors.


Quiz

Inductive

Scenario Check: Base Rates and Conditional Probability

Each question presents a scenario or challenge. Answer in two to four sentences. Focus on showing that you can use what you learned, not just recall it.

Scenario questions

Work through each scenario. Precise, specific answers are better than long vague ones.

Question 1 — Diagnose

A student makes the following mistake: "Ignoring the base rate when the event is rare." Explain specifically what is wrong with this reasoning and what the student should have done instead.

Can the student identify the flaw and articulate the correction?

Question 2 — Apply

You encounter a new argument that you have never seen before. Walk through exactly how you would compute a posterior from a base rate, starting from scratch. Be specific about each step and explain why the order matters.

Can the student transfer the skill of computing a posterior from a base rate to a genuinely new case?

Question 3 — Distinguish

Someone confuses base rate with likelihood. Write a short explanation that would help them see the difference, and give one example where getting them confused leads to a concrete mistake.

Does the student understand the boundary between the two concepts?

Question 4 — Transfer

The worked example "Spam Detection" showed one way to handle a specific case. Describe a situation where the same method would need to be adjusted, and explain what you would change and why.

Can the student adapt the demonstrated method to a variation?


Evaluation Practice

Inductive

Strength Ranking: Base Rates and Conditional Probability

Rank these inductive arguments from strongest to weakest. Explain what makes one stronger than another.

Practice scenarios

Work through each scenario carefully. Apply the concepts from this lesson.

Argument 1

In a survey of 10,000 patients across 15 hospitals, the new treatment showed a 40% improvement over the control group.

Argument 2

My three friends who tried the supplement said they felt better, so the supplement probably works.

Argument 3

In every chemistry experiment conducted over 200 years, mixing sodium and chlorine has produced table salt.


Diagnosis Practice

Inductive

Sample Critique: Base Rates and Conditional Probability

Evaluate the sampling method in each scenario. Identify potential biases and suggest improvements.

Practice scenarios

Work through each scenario carefully. Apply the concepts from this lesson.

Study A

To learn about national reading habits, researchers surveyed visitors at a book festival and found that 95% read more than 10 books per year.

Study B

A tech company surveyed its own users about smartphone satisfaction and concluded that 88% of Americans are satisfied with their phones.

Study C

Researchers randomly selected 5,000 households from every state and conducted in-person interviews about dietary habits.


Analysis Practice

Inductive

Analogy Builder: Base Rates and Conditional Probability

Assess the strength of each analogical argument. Identify relevant similarities and differences, then explain whether the analogy supports the conclusion.

Practice scenarios

Work through each scenario carefully. Apply the concepts from this lesson.

Analogy 1

The human brain is like a computer. Computers can be reprogrammed. Therefore, human habits can be reprogrammed.

Analogy 2

A company is like a ship. A ship needs a captain. Therefore, a company needs a strong CEO.

Analogy 3

Earth and Mars are both rocky planets with atmospheres. Earth supports life. Therefore, Mars might support life.


Evaluation Practice

Inductive

Deep Practice: Base Rates and Conditional Probability

Evaluate the inductive strength of each argument. Consider sample size, representativeness, and alternative explanations.

Complex inductive arguments

Rate each argument's strength on a scale of 1-5 and justify your rating with specific criteria.

Argument 1

A pharmaceutical company tested its new pain reliever on 200 adults aged 18-65 and found 78% reported reduced pain. They conclude the drug is effective for all adults.

Argument 2

Over 30 years of weather data from 50 stations show that average temperatures in the region have increased by 1.5 degrees Celsius. Scientists project this trend will continue.

Argument 3

A survey of 5,000 randomly selected voters across all states found 52% favor the policy. The margin of error is 1.4%. Political analysts predict the referendum will pass.

Argument 4

Every iPhone model released in the past 10 years has been more expensive than the last. Therefore, the next iPhone will be even more expensive.


Evaluation Practice

Inductive

Real-World Transfer: Base Rates and Conditional Probability

Evaluate real-world inductive arguments from media, science, and daily life. Apply the criteria you have learned to assess their strength.

Induction in practice

Evaluate each real-world argument. Identify the type of induction and assess its strength.

News claim

A news article reports: 'Based on polling data from 1,200 likely voters in swing states, the candidate leads by 3 points with a margin of error of 2.8 points.' How strong is the inductive basis for predicting the election outcome?

Consumer reasoning

A product has 4.8 stars from 15,000 reviews on Amazon. A friend says: 'With that many positive reviews, the product must be excellent.' Evaluate this reasoning, considering potential biases in online reviews.

Scientific claim

A nutrition study followed 50,000 people for 20 years and found that those who ate fish twice weekly had 25% fewer heart attacks. The researchers conclude fish consumption reduces heart attack risk. What would strengthen or weaken this conclusion?


Rapid Identification

Inductive

Timed Drill: Base Rates and Conditional Probability

Quickly classify each argument's inductive type (enumerative, analogical, statistical, causal) and rate its strength on a 1-5 scale. Speed and accuracy both matter.

Rapid inductive classification

Classify the inductive type and rate the strength (1-5) for each item. Target: under 45 seconds per item.

Item 1

The last 20 volcanic eruptions on this island occurred between March and June. The next eruption will likely occur between March and June.

Item 2

A clinical trial with 8,000 participants found the drug reduced symptoms by 35% compared to placebo, with p < 0.001.

Item 3

My neighbor's golden retriever is friendly. My cousin's golden retriever is friendly. Therefore, the golden retriever I meet at the park will probably be friendly.

Item 4

Every time the factory increased shifts, accident rates went up within two weeks. Adding a third shift will likely increase accidents.

Item 5

In a poll of 150 college students at one university, 73% supported the policy. Therefore, most college students nationwide support it.

Item 6

Countries that invested heavily in renewable energy in the 2010s now have lower energy costs. Investing in renewables lowers energy costs.


Evaluation Practice

Inductive

Peer Review: Base Rates and Conditional Probability

Below are sample student evaluations of inductive arguments. Assess each student's analysis: Did they correctly identify the argument type? Did they properly evaluate its strength? What did they miss?

Evaluate student analyses

Each student evaluated an inductive argument. Assess their work and identify what they got right and wrong.

Student A's analysis

Original argument: 'A survey of 200 Twitter users found 80% support the policy.' Student A wrote: 'This is a strong statistical argument because the sample size of 200 is large enough for reliable results.'

Student B's analysis

Original argument: 'The sun has risen every day for billions of years, so it will rise tomorrow.' Student B wrote: 'This is a weak inductive argument because past observations cannot guarantee future events. The sample is biased toward observed sunrises.'

Student C's analysis

Original argument: 'Rats given the chemical developed tumors. Therefore, the chemical likely causes cancer in humans.' Student C wrote: 'This is a strong analogical argument. Rats and humans share 85% of their genes, so results should transfer directly.'

Student D's analysis

Original argument: 'Five out of five mechanics I consulted said the transmission needs replacing.' Student D wrote: 'Strong inductive argument. Five independent experts agree, and mechanics have domain expertise. The only weakness is the small number of mechanics consulted.'


Argument Building

Inductive

Construction Challenge: Base Rates and Conditional Probability

Build strong inductive arguments from scratch. You are given a conclusion to support. Construct the best evidence, explain your sampling, and address potential weaknesses.

Build inductive arguments

For each conclusion, construct the strongest possible inductive support. Specify your evidence and methodology.

Task 1

Build an inductive argument supporting: 'Bilingual children develop stronger executive function skills.' Describe what study you would design, your sample, and why your evidence would be convincing.

Task 2

Construct an analogical argument that compares managing a sports team to managing a software development team. Make the analogy as strong as possible by identifying at least four relevant similarities.

Task 3

Build a causal inductive argument supporting: 'Reducing class sizes improves student performance.' Specify what data you would need and how you would rule out confounding variables.

Task 4

Create a strong statistical argument about voter turnout among young adults. Describe your sampling method, sample size, and why your approach avoids common biases.


Diagnosis Practice

Inductive

Counterexample Challenge: Base Rates and Conditional Probability

For each inductive generalization, find or construct a counterexample that weakens the argument. Explain how your counterexample undermines the conclusion and what it reveals about the argument's limits.

Counterexamples to inductive generalizations

Each generalization seems reasonable. Find cases that challenge or refute it.

Generalization 1

Every tech startup that received Series A funding has gone on to achieve profitability. Therefore, receiving Series A funding leads to profitability.

Generalization 2

In every observed case, countries with higher education spending have higher GDP per capita. Therefore, increasing education spending will raise GDP per capita.

Generalization 3

All mammals observed so far give live birth. Therefore, all mammals give live birth.

Generalization 4

Every patient in the trial who received the drug recovered within a week. Therefore, the drug is an effective treatment.


Analysis Practice

Inductive

Integration Exercise: Base Rates and Conditional Probability

These exercises combine inductive reasoning with deductive logic, explanation assessment, or problem-solving. Apply multiple reasoning tools to reach well-supported conclusions.

Cross-topic inductive exercises

Each scenario requires inductive reasoning plus at least one other reasoning type.

Scenario 1

A study of 10,000 workers found that those who take regular breaks are 23% more productive. A company policy states: 'If a practice is shown to increase productivity by more than 15%, it shall be adopted.' Evaluate the inductive strength of the study, then apply the deductive rule to determine what the policy requires.

Scenario 2

Historical data shows that all five previous product launches in Q4 outperformed Q1 launches. Marketing proposes launching the next product in Q4. However, the market conditions have changed significantly due to new competitors. Evaluate the inductive argument and explain (abductively) why past patterns might not hold.

Scenario 3

A nutrition study found that people who eat breakfast perform better on cognitive tests. A school is considering a mandatory breakfast program. Evaluate the causal inference, identify confounders, and design a problem-solving approach to determine whether the program would work.


Diagnosis Practice

Inductive

Misconception Clinic: Base Rates and Conditional Probability

Each item presents a common misconception about inductive reasoning or statistics. Identify the error, explain why it is wrong, and describe how the reasoning should actually work.

Common inductive misconceptions

Diagnose and correct each misconception about inductive reasoning.

Misconception 1

A student says: 'A larger sample size always makes an inductive argument stronger, regardless of how the sample was collected.'

Misconception 2

A student claims: 'Correlation proves causation as long as the correlation is strong enough. A 0.95 correlation coefficient means X definitely causes Y.'

Misconception 3

A student writes: 'An inductive argument with true premises and a true conclusion is a strong argument.'

Misconception 4

A student argues: 'Since inductive arguments can never be certain, they are all equally unreliable. You might as well flip a coin.'

Misconception 5

A student says: 'A single counterexample completely destroys an inductive generalization, just as it destroys a deductive argument.'


Argument Building

Inductive

Scaffolded Argument: Base Rates and Conditional Probability

Build inductive arguments in stages. Each task provides some evidence and a partial analysis. Complete the analysis, identify gaps, and strengthen the argument step by step.

Step-by-step argument strengthening

Complete each partial analysis and improve the argument at each stage.

Scaffold 1

Claim: Mediterranean diets reduce heart disease risk. Stage 1: You have observational data from 5 countries. Describe what this evidence establishes. Stage 2: You add a randomized trial with 7,000 participants. How does this change the argument? Stage 3: A meta-analysis combines 15 studies. What does the full evidence base now support?

Scaffold 2

Claim: Later school start times improve teen academic performance. Stage 1: One school district changed start times and saw GPA increase by 0.2 points. Evaluate this evidence alone. Stage 2: Three more districts replicated the result. How does this change your assessment? Stage 3: A nationwide study with controls for socioeconomic factors confirms the pattern. What is the argument strength now?

Scaffold 3

Claim: Urban green spaces reduce crime rates. Stage 1: You have a correlation between park density and lower crime in 10 cities. What can and cannot be concluded? Stage 2: A natural experiment -- a city builds parks in high-crime areas and crime drops. How much stronger is the argument? Stage 3: Multiple cities replicate with randomized neighborhood selection. Evaluate the full argument.


Evaluation Practice

Inductive

Synthesis Review: Base Rates and Conditional Probability

These exercises combine all aspects of inductive reasoning: sampling, generalization, analogy, causal reasoning, and statistical evaluation. Each task requires integrating multiple skills.

Comprehensive inductive review

Apply all your inductive reasoning skills together.

Comprehensive 1

A government report claims: 'Based on a longitudinal study of 25,000 households across 50 cities over 10 years, households that adopted solar panels reduced their energy costs by an average of 40% and increased their property values by 8%.' Evaluate: (a) the sampling methodology, (b) the causal claim about cost reduction, (c) the causal claim about property values, (d) whether an analogical argument from these households to commercial buildings would be strong.

Comprehensive 2

Design a study to test whether flexible work hours improve employee well-being. Specify: (a) your sampling method and why it avoids bias, (b) what you would measure, (c) how you would control for confounders, (d) what conclusion different results would support, and (e) the limits of your study's generalizability.



Further Support

Open these only if you need extra help or context

Mistakes to avoid before submitting

  • Treating a high likelihood as the same thing as certainty about the hypothesis.
  • Forgetting to include the non-H cases in the denominator.

Where students usually go wrong

Ignoring the base rate when the event is rare.

Treating P(evidence | hypothesis) as if it were already the posterior.

Using decimal multiplication in a way that makes the structure invisible.

Historical context for this way of reasoning

Gerd Gigerenzer

Gigerenzer's research showed that natural-frequency formats dramatically reduce base-rate errors in both students and trained professionals. Rewriting Bayesian problems with natural frequencies is not just a teaching trick — it's a cognitive upgrade.