Rigorous Reasoning

Decision And Rational Choice

Decision Theory: Choosing Under Uncertainty

How rational agents choose when outcomes depend on chance

Students learn the logic of rational decision making under risk and uncertainty, including expected value, utility, decision matrices, dominance, and the systematic biases that cause real decisions to depart from the normative ideal. The unit builds from qualitative preference reasoning through quantitative expected-utility analysis to the integrated evaluation of complex real-world choices.

Integrated · Advanced · 300 minutes · 5 lessons

Study Flow

How to work through this unit without overwhelm

1. Read the model first

Each lesson opens with a guided explanation so the learner sees what the core move is before any saved response is required.

2. Study an example on purpose

The examples are there to show what strong reasoning looks like and where the structure becomes clearer than ordinary language.

3. Practice with a target in mind

Activities work best when the learner already knows what the answer needs to show, what rule applies, and what mistake would make the response weak.

Lesson Sequence

What you will work through

Lesson 1

What Makes a Decision Rational?

Introduces the central idea of decision theory: that the quality of a decision is determined by the reasoning used to make it, not by the outcome that happens to occur. Establishes the core vocabulary of options, states, consequences, and preferences.

Start with a short reading sequence, study 2 worked examples, then use 15 practice activities to test whether the distinction is actually clear.

Guided reading · 2 worked examples · 15 practice activities
Concept · 15 activities · 2 examples
Lesson 2

Expected Value and Expected Utility

Teaches the mechanics of computing expected value across outcomes, introduces utility functions that capture risk aversion through diminishing marginal utility, and walks through the St Petersburg paradox as motivation for why utility must sometimes replace money in the calculation.

Read for structure first, inspect how the example turns ordinary language into cleaner form, then complete 15 formalization exercises yourself.

Guided reading · 3 worked examples · 15 practice activities · translation support
Formalization · 15 activities · 3 examples
Lesson 3

Decision Matrices and Dominance

Teaches students to build decision matrices that lay out options against states of the world, identify dominant and dominated strategies, and apply expected-value reasoning where probabilities are known while recognizing where genuine uncertainty calls for different tools.

Use the reading and examples to learn what the standards demand, then practice applying those standards explicitly in 15 activities.

Guided reading · 3 worked examples · 15 practice activities · standards focus
Rules · 15 activities · 3 examples
Lesson 4

Sunk Costs, Opportunity Costs, and Framing

Introduces the most common decision errors — sunk-cost thinking, opportunity-cost neglect, loss aversion, framing effects, and anchoring — and positions prospect theory as the descriptive counterpart to normative expected-utility theory. Students learn to detect these errors in their own and others' reasoning.

Use the reading and examples to learn what the standards demand, then practice applying those standards explicitly in 15 activities.

Guided reading · 3 worked examples · 15 practice activities · standards focus
Rules · 15 activities · 3 examples
Lesson 5

Capstone: Decisions Under Real Uncertainty

An integrative lesson that applies the full toolkit of decision theory — expected utility, dominance, opportunity cost, framing correction, and descriptive bias awareness — to complex real-world decisions in career choice, investment, public policy, and ethical tradeoffs. Cases are designed to require multiple concepts working together.

The lesson opens with guided reading, then moves through examples and 2 practice activities so you are not dropped into the task cold.

Guided reading · 2 worked examples · 2 practice activities
Capstone · 2 activities · 2 examples

Rules And Standards

What counts as good reasoning here

Maximize Expected Utility

When probabilities are known and preferences are represented by a utility function, a rational agent should choose the option with the highest expected utility.

Common failures

  • Choosing the option with the highest possible payoff without weighing how likely it is.
  • Substituting the most likely outcome for the expected value and ignoring the remaining possibilities.
  • Treating expected utility as a guarantee of the preferred outcome rather than as a long-run average over repeated choices.
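
A minimal sketch of this rule in Python, with hypothetical options, probabilities, and payoffs; the `expected_utility` helper and the square-root utility are illustrative assumptions. The point is that the option with the largest possible payoff is not necessarily the one with the highest expected utility.

```python
import math

def expected_utility(outcomes, utility=math.sqrt):
    """outcomes: list of (probability, payoff) pairs; utility: a concave function."""
    return sum(p * utility(x) for p, x in outcomes)

# Hypothetical options: "risky" has the largest possible payoff,
# "steady" has the higher expected utility.
options = {
    "risky":  [(0.10, 10_000), (0.90, 0)],
    "steady": [(1.00, 900)],
}

for name, outcomes in options.items():
    print(name, round(expected_utility(outcomes), 2))
print("choose:", max(options, key=lambda name: expected_utility(options[name])))
```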

Transitivity of Preferences

If an agent prefers A to B and B to C, then the agent should prefer A to C; cycles of preference are irrational and expose the agent to exploitation.

Common failures

  • Preferring A to B in one framing and B to A in another because the choice context changed the salience of attributes.
  • Holding cyclical preferences (A over B, B over C, C over A) that can be pumped for arbitrary losses.
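
A small money-pump sketch, assuming a hypothetical agent with the cyclical preferences A over B, B over C, and C over A: a trader who charges a small fee per swap can walk the agent around the cycle indefinitely.

```python
# Hypothetical cyclical preferences: the agent prefers A to B, B to C, and C to A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}     # (preferred, over) pairs; note the cycle

holding, cash, fee = "C", 0.0, 1.0
for offered in ["B", "A", "C", "B", "A", "C"]:     # a trader walks the cycle twice
    if (offered, holding) in prefers:              # the agent strictly prefers the offer...
        holding, cash = offered, cash - fee        # ...so it pays a small fee to swap
print(holding, cash)                               # back to "C", six fee units poorer
```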

Ignore Sunk Costs

A rational decision is determined by the future consequences of available options; past investments that cannot be recovered should play no role in the choice.

Common failures

  • Continuing a failing project because of the money and time already spent on it.
  • Refusing to abandon a plan that clearly will not succeed because doing so would 'waste' prior effort.
  • Letting the size of a past commitment substitute for an analysis of current expected value.
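
A minimal sketch of the rule with hypothetical figures: the amount already spent is named in a variable and then deliberately never used, so only future costs and benefits drive the comparison.

```python
# All figures are hypothetical. The 40,000 already spent is sunk: it is named here
# only to make the point that it never appears in the forward-looking comparison.
sunk_cost_already_spent = 40_000

future = {
    "continue": {"extra_cost": 30_000, "expected_revenue": 20_000},
    "abandon":  {"extra_cost": 0,      "expected_revenue": 5_000},   # e.g. salvage value
}

def net_future_value(option):
    return future[option]["expected_revenue"] - future[option]["extra_cost"]

print({option: net_future_value(option) for option in future})
print("choose:", max(future, key=net_future_value))   # abandon, despite the sunk 40,000
```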

Dominance Principle

A rational agent should never choose an option that is weakly dominated, and should always prefer an option that strictly dominates its alternatives.

Common failures

  • Selecting a dominated option because it is familiar, vivid, or emotionally salient.
  • Missing a dominance relation because the decision matrix was not laid out explicitly.
  • Treating dominance as a tiebreaker rather than as the most powerful decision rule available.
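
A minimal dominance check in Python over a hypothetical payoff matrix; the option and state names are illustrative. Laying the matrix out explicitly is what makes the dominance relation visible.

```python
# Hypothetical payoff matrix: rows are options, columns are states of the world.
payoffs = {
    "umbrella":    {"rain": 5, "sun": 3},
    "no_umbrella": {"rain": 0, "sun": 4},
    "raincoat":    {"rain": 4, "sun": 3},   # never better than "umbrella" in any state
}
states = ["rain", "sun"]

def weakly_dominates(a, b):
    # a is at least as good as b in every state and strictly better in at least one
    return (all(payoffs[a][s] >= payoffs[b][s] for s in states)
            and any(payoffs[a][s] > payoffs[b][s] for s in states))

for a in payoffs:
    for b in payoffs:
        if a != b and weakly_dominates(a, b):
            print(f"{a} dominates {b}: eliminate {b} before any further analysis")
```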

Independence Axiom

If an agent prefers A to B, then the agent should prefer any mixture (A with probability p, some outcome X with probability 1-p) to (B with probability p, X with probability 1-p); the presence of a common outcome should not flip the preference.

Common failures

  • Allais-style reversals where the same underlying comparison flips depending on whether a sure thing is framed into the choice.
  • Letting certainty (a sure outcome) dominate analysis in a way that contradicts the agent's ordering over the non-sure parts.
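
A worked sketch using the stylized figures commonly used to present the Allais comparison. For any fixed utility function, the difference between 1A and 1B equals the difference between 2A and 2B, because the two problems differ only by a common 0.89 chance of the same outcome; so a single expected-utility maximizer cannot prefer 1A in the first problem and 2B in the second.

```python
import math

def expected_utility(gamble, u):
    return sum(p * u(x) for p, x in gamble)

# Stylized Allais gambles (payoffs in dollars).
gambles = {
    "1A": [(1.00, 1_000_000)],
    "1B": [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)],
    "2A": [(0.11, 1_000_000), (0.89, 0)],
    "2B": [(0.10, 5_000_000), (0.90, 0)],
}

u = math.sqrt                     # any fixed utility function gives the same conclusion
diff_1 = expected_utility(gambles["1A"], u) - expected_utility(gambles["1B"], u)
diff_2 = expected_utility(gambles["2A"], u) - expected_utility(gambles["2B"], u)
print(round(diff_1, 6), round(diff_2, 6))
# The two differences are identical, so preferring 1A in the first problem and 2B
# in the second cannot both follow from maximizing any single expected utility.
```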

Respect Base Rates in Decision Analysis

When decisions depend on probabilities, those probabilities must reflect background base rates and not just vivid or recent information; decision analysis inherits the base-rate discipline of Bayesian inference.

Common failures

  • Inflating the probability of a dramatic outcome because it is easy to imagine (availability bias).
  • Using a recent anecdote as if it were a reliable estimate of the underlying frequency.
  • Ignoring the actual prevalence of failures when evaluating an optimistic business forecast.
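
A minimal Bayes-rule sketch with hypothetical numbers: a forecast that sounds 80% reliable, applied in a category where only 10% of comparable projects succeed, still leaves the posterior probability of success well below 80%.

```python
# Hypothetical numbers: the forecast is optimistic for 80% of projects that succeed
# and for 30% of projects that go on to fail, in a category where only 10% succeed.
base_rate_success = 0.10
p_forecast_given_success = 0.80
p_forecast_given_failure = 0.30

p_forecast = (p_forecast_given_success * base_rate_success
              + p_forecast_given_failure * (1 - base_rate_success))
posterior_success = p_forecast_given_success * base_rate_success / p_forecast
print(round(posterior_success, 3))   # about 0.229: far below the 0.80 "reliability" figure
```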

Weigh Opportunity Cost Explicitly

An option is only as good as the best alternative it displaces; a good choice must be compared against its next-best alternative, not evaluated in isolation.

Common failures

  • Accepting an option because it looks attractive on its own without asking what is being given up.
  • Treating a small positive expected value as a clear win without asking whether a better option was available for the same resources.
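
A minimal sketch with hypothetical expected returns: each option is scored net of the best alternative it displaces, rather than on its stand-alone value.

```python
# Hypothetical expected returns for three uses of the same resources.
expected_returns = {"project_a": 12_000, "project_b": 15_000, "keep_cash": 9_000}

def net_of_opportunity_cost(option):
    best_alternative = max(v for k, v in expected_returns.items() if k != option)
    return expected_returns[option] - best_alternative

for option in expected_returns:
    print(option, net_of_opportunity_cost(option))
# Only project_b scores positive: the others look attractive in isolation but are
# worse than the alternative they would displace.
```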

Formalization Patterns

How arguments get translated into structure

Decision Matrix

Input form

practical_choice_with_uncertainty

Output form

options_by_states_table_with_payoffs

Steps

  • List the available options as rows.
  • List the possible states of the world as columns.
  • Fill in the payoff or utility for each option-state pair.
  • If probabilities are known, add a row for state probabilities.
  • Check for dominance relations first.
  • Compute expected utility for each non-dominated row.
  • Identify the option with the highest expected utility as the recommended choice, noting any assumptions made about probabilities or utility.

Common errors

  • Omitting a state of the world and thereby biasing the calculation.
  • Filling in outcomes by intuition without actually asking what would happen in each state.
  • Computing expected value across dominated options and missing that the dominance check could have eliminated them immediately.
  • Treating the chosen option as guaranteed to produce the payoff that went into the expected-value calculation.
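
A minimal sketch of the steps above with hypothetical options, states, probabilities, and payoffs; the dominance check runs before any expected values are computed.

```python
# Hypothetical decision: options as rows, states as columns, known state probabilities.
options = ["launch_now", "delay", "cancel"]
states = ["strong_demand", "weak_demand"]
prob = {"strong_demand": 0.6, "weak_demand": 0.4}
payoff = {
    "launch_now": {"strong_demand": 100, "weak_demand": -40},
    "delay":      {"strong_demand": 60,  "weak_demand": 10},
    "cancel":     {"strong_demand": 0,   "weak_demand": 0},
}

def dominated(b):
    # b is dominated if some option is at least as good in every state and better in one
    return any(all(payoff[a][s] >= payoff[b][s] for s in states)
               and any(payoff[a][s] > payoff[b][s] for s in states)
               for a in options if a != b)

live = [o for o in options if not dominated(o)]                 # dominance check first
expected = {o: sum(prob[s] * payoff[o][s] for s in states) for o in live}
print("eliminated as dominated:", [o for o in options if o not in live])
print("expected values:", expected, "-> recommend", max(expected, key=expected.get))
```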

Expected Value Calculation

Input form

option_with_probabilistic_outcomes

Output form

numerical_expected_value

Steps

  • List every possible outcome that results from the option.
  • Assign a probability to each outcome, ensuring the probabilities sum to 1.
  • Assign a payoff (in dollars, utility units, or another common scale) to each outcome.
  • Multiply each probability by its payoff.
  • Sum the products to obtain the expected value.
  • Compare the expected value against the expected values of alternative options and against any relevant reference point (the current status quo, a safe alternative).

Common errors

  • Using probabilities that do not sum to 1 because one outcome was forgotten.
  • Mixing dollar payoffs with utility values in the same calculation.
  • Reading the computed expected value as a likely outcome rather than as a long-run average.
  • Ignoring variance and tail risk when the stakes are high enough that a bad outcome would be unrecoverable.
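
A minimal sketch of the steps above with hypothetical outcomes; the assertion that probabilities sum to 1 guards against the forgotten-outcome error.

```python
# Hypothetical outcomes for one option: (probability, payoff in dollars).
outcomes = [
    (0.50,  2_000),
    (0.30,    500),
    (0.20, -1_000),
]

total_probability = sum(p for p, _ in outcomes)
assert abs(total_probability - 1.0) < 1e-9, "probabilities must sum to 1; check for a missing outcome"

expected_value = sum(p * x for p, x in outcomes)
safe_alternative = 800                       # a hypothetical sure payoff as the reference point
print(expected_value, "vs safe alternative", safe_alternative)
# 950.0 is a long-run average over repeated choices, not what any single trial will pay.
```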

Utility Function Application

Input form

monetary_gamble_or_prospect

Output form

expected_utility_score

Steps

  • Identify the decision maker's wealth or baseline reference level.
  • Transform each dollar payoff into a utility value using a concave function such as the square root or logarithm when risk aversion is appropriate.
  • Multiply each utility value by the probability of the corresponding outcome.
  • Sum the products to obtain expected utility.
  • Compare expected utility across options, remembering that the utility scale is meaningful only up to positive linear transformations.

Common errors

  • Using a linear utility function and then wondering why the analysis recommends obviously reckless gambles.
  • Switching utility functions between options in the same comparison.
  • Confusing utility units with dollars when reporting the conclusion.
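
A minimal sketch of the steps above, assuming a hypothetical baseline wealth and a square-root utility: the gamble has zero expected monetary value relative to baseline but lower expected utility than standing pat, which is what risk aversion amounts to here.

```python
import math

baseline_wealth = 10_000                     # hypothetical reference level
utility = math.sqrt                          # concave: diminishing marginal utility of wealth

def expected_utility(prospect):
    """prospect: list of (probability, dollar change relative to baseline) pairs."""
    return sum(p * utility(baseline_wealth + change) for p, change in prospect)

gamble    = [(0.5, 6_000), (0.5, -6_000)]    # fair coin: win or lose 6,000
stand_pat = [(1.0, 0)]                       # decline the gamble

print(round(expected_utility(gamble), 2), round(expected_utility(stand_pat), 2))
# The gamble's expected utility (about 94.87) is below the sure option's 100.0,
# even though its expected monetary value is the same as standing pat.
```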

Concept Map

Key ideas in the unit

Expected Value

The probability-weighted average of an action's possible outcomes, computed as EV = sum over outcomes of probability times payoff.

Utility

A numerical measure of how much an outcome is worth to a particular agent, calibrated so that higher numbers always correspond to more preferred outcomes.

Risk versus Uncertainty

Risk refers to situations where the probabilities of outcomes are known or estimable; uncertainty refers to situations where those probabilities themselves are unknown or contested.

Dominance

Option A dominates option B when A yields an outcome at least as good as B in every possible state of the world, and strictly better in at least one state.

Preference Ordering

A ranking of alternatives that a decision maker holds, ideally satisfying completeness (every pair is comparable) and transitivity (if A is preferred to B and B to C, then A is preferred to C).

Opportunity Cost

The value of the best alternative you give up when you choose one option over another.

Sunk Cost

A cost that has already been incurred and cannot be recovered regardless of future action.

Marginal Reasoning

Evaluating decisions by asking about the incremental costs and benefits of small changes rather than about total averages.

Diminishing Marginal Utility

The property that successive units of a good produce smaller and smaller increases in utility; the tenth dollar gained matters less to you than the first.

Assessment

How to judge your own work

Assessment advice

  • Am I evaluating this decision by the reasoning used or by the outcome that happened to occur?
  • Have I written down all the relevant options, states, and consequences, or am I focused on just one?
  • Are my preferences over the outcomes coherent, or would I flip my ranking if someone described them differently?
  • Do not treat a lucky outcome as evidence that the underlying reasoning was sound.
  • Do not treat an unlucky outcome as evidence that the underlying reasoning was flawed.
  • Do the probabilities I used sum to exactly 1, and did I include every possible outcome?
  • Am I computing expected value of money or expected utility, and is that choice appropriate for the stakes?
  • If I changed from a risk-neutral to a risk-averse utility function, would the recommendation change, and does that change match my intuition about the case?
  • Do not report the expected value as the amount you will actually receive on a single trial.
  • Do not use a linear utility function when the stakes are large enough that losing everything would be ruinous.
  • Have I listed every option, including the status quo, and every relevant state of the world?
  • Did I check for dominance before computing expected values?
  • Am I using expected value under genuine risk (known probabilities) or am I smuggling confidence into an uncertainty problem?
  • For high-stakes one-shot decisions, does my recommended option have a worst case I can live with?
  • Do not skip the dominance check on the grounds that it is 'obvious' — explicit checks catch errors intuition misses.
  • Do not apply expected-value selection rules to situations where the probabilities themselves are the contested part of the problem.
  • Am I letting past investments influence a forward-looking decision?
  • Have I identified what I am giving up by choosing this option?
  • Would I make the same choice if the outcome were described in the opposite framing?
  • Is an anchor or a vivid recent event affecting my numerical judgments?
  • Do not confuse the feeling of commitment to a past investment with a rational reason to continue.
  • Do not evaluate an option in isolation without asking what better alternative you might be displacing.
  • Have I structured the decision with options, states, and consequences before running any numbers?
  • Have I removed sunk costs from the analysis?
  • Have I identified the opportunity cost of each option explicitly?
  • Would my preference flip if I reframed the options, and what does that tell me about the framing's hold on my reasoning?
  • Can I state the conditions under which I would revise this decision?
  • Do not confuse a thorough calculation with a good decision — the calculation is only as good as the structure it rests on.
  • Do not treat integrated analysis as a license to ignore any one tool when its use is inconvenient; skipping the framing check is not neutrality, it is capitulation to whatever framing you started with.

Mastery requirements

  • Distinguish Decision Quality From Outcome Quality · 80% consistent
  • Compute Expected Value And Expected Utility · 6 correct calculations
  • Build And Analyze Decision Matrices · 5 correct matrices with dominance and expected value
  • Diagnose Decision Errors · 6 correct diagnoses
  • Integrate Decision Tools In Real Cases · 3 successful integrated analyses

History Links

How earlier logicians shaped modern tools

Blaise Pascal

In the Pensées, argued that belief in God could be defended by a decision-theoretic calculation comparing infinite potential gain against finite loss — the first recorded use of expected-value reasoning applied to an existential choice.

Pascal's wager is a canonical early example of expected-value reasoning and is still used to illustrate how infinite utilities strain the ordinary framework.

Daniel Bernoulli

In 'Specimen Theoriae Novae de Mensura Sortis' (1738), resolved the St Petersburg paradox by proposing that rational agents maximize expected utility rather than expected money, and that utility is a concave function of wealth (diminishing marginal utility).

The foundation for utility theory, risk aversion, and the modern view that the logarithmic or square-root relationship between money and satisfaction explains why people do not pay infinite amounts for infinite-expected-value gambles.

John von Neumann and Oskar Morgenstern

In Theory of Games and Economic Behavior (1944), proved the expected utility theorem: any agent whose preferences over risky prospects satisfy a small set of axioms (completeness, transitivity, continuity, independence) must behave as if maximizing the expected value of some utility function.

The axiomatic foundation of modern decision theory, game theory, and the normative interpretation of rational choice under risk.

Herbert Simon

Argued that real decision makers operate under cognitive and informational limits, and that they typically 'satisfice' (pick a good-enough option) rather than 'optimize' (find the absolute best). Introduced the concept of bounded rationality as a descriptive counterweight to idealized expected-utility theory.

Bounded rationality explains why expected-utility maximization is a normative ideal rather than an empirical description, and motivates the use of heuristics and structured procedures in practical decision analysis.

Daniel Kahneman and Amos Tversky

Developed prospect theory, which describes how people actually choose under risk: they evaluate outcomes relative to a reference point, exhibit loss aversion (losses loom larger than equivalent gains), overweight small probabilities, and are sensitive to how options are framed.

Prospect theory is the descriptive counterpart to normative expected utility theory. It explains the sunk-cost fallacy, the endowment effect, framing effects, and the systematic shape of real decision errors that decision theory students need to anticipate.

Leonard Savage

In The Foundations of Statistics (1954), extended expected-utility theory to decisions under subjective uncertainty, showing that a rational agent's probabilities and utilities can be derived jointly from preferences over acts.

Savage's framework is the standard model for personalist (subjective) probability and underlies modern Bayesian decision analysis.