Decision And Rational Choice
How rational agents choose when outcomes depend on chance
Students learn the logic of rational decision making under risk and uncertainty, including expected value, utility, decision matrices, dominance, and the systematic biases that cause real decisions to depart from the normative ideal. The unit builds from qualitative preference reasoning through quantitative expected-utility analysis to the integrated evaluation of complex real-world choices.
Study Flow
1. Read the model first
Each lesson opens with a guided explanation so the learner sees what the core move is before any saved response is required.
2. Study an example on purpose
The examples are there to show what strong reasoning looks like and where the structure becomes clearer than ordinary language.
3. Practice with a target in mind
Activities work best when the learner already knows what the answer needs to show, what rule applies, and what mistake would make the response weak.
Lesson Sequence
Introduces the central idea of decision theory: that the quality of a decision is determined by the reasoning used to make it, not by the outcome that happens to occur. Establishes the core vocabulary of options, states, consequences, and preferences.
Start with a short reading sequence, study 2 worked examples, then use 15 practice activities to test whether the distinction is actually clear.
Teaches the mechanics of computing expected value across outcomes, introduces utility functions that capture risk aversion through diminishing marginal utility, and walks through the St Petersburg paradox as motivation for why utility must sometimes replace money in the calculation.
Read for structure first, inspect how the example turns ordinary language into cleaner form, then complete 15 formalization exercises yourself.
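The contrast this lesson draws between expected money and expected utility can be sketched in a few lines of Python. The sketch below is not part of the course materials; it truncates the St Petersburg gamble at a maximum number of rounds purely to show the trend: the expected monetary value grows without bound (each extra round adds exactly 1), while expected utility under a concave logarithmic utility function converges.

```python
import math

def st_petersburg_ev(max_rounds):
    # Payoff 2**k with probability 2**-k for k = 1..max_rounds.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_rounds + 1))

def st_petersburg_eu(max_rounds, utility=math.log):
    # Expected utility of the same gamble under a concave (log) utility.
    return sum((0.5 ** k) * utility(2 ** k) for k in range(1, max_rounds + 1))

# Expected value diverges: each additional round contributes exactly 1.
print(st_petersburg_ev(10))   # 10.0
print(st_petersburg_ev(40))   # 40.0

# Expected log-utility converges (toward 2*ln 2, about 1.386).
print(round(st_petersburg_eu(40), 3))
```

The truncation at `max_rounds` is only a device for displaying the trend; the untruncated expected-value series has no finite sum, which is exactly Bernoulli's motivation for replacing money with utility.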
Teaches students to build decision matrices that lay out options against states of the world, identify dominant and dominated strategies, and apply expected-value reasoning where probabilities are known while recognizing where genuine uncertainty calls for different tools.
Use the reading and examples to learn what the standards demand, then practice applying those standards explicitly in 15 activities.
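A decision matrix and a dominance check can be sketched as follows. The options, states, and payoffs here are hypothetical numbers chosen only to illustrate the definitions, not examples taken from the lesson.

```python
# Decision matrix: rows are options, columns are states of the world.
# Payoffs are arbitrary utility units invented for this illustration.
matrix = {
    "umbrella":    {"rain": 5, "sun": 3},
    "no_umbrella": {"rain": 0, "sun": 4},
    "heavy_coat":  {"rain": 4, "sun": 1},
}

def dominates(a, b, matrix):
    """True if option a dominates b: at least as good in every state
    of the world, and strictly better in at least one."""
    states = matrix[a].keys()
    return (all(matrix[a][s] >= matrix[b][s] for s in states)
            and any(matrix[a][s] > matrix[b][s] for s in states))

# An option is dominated if some other option dominates it.
dominated = [b for b in matrix
             for a in matrix if a != b and dominates(a, b, matrix)]
print(dominated)  # ['heavy_coat']
```

Note that `umbrella` does not dominate `no_umbrella` (it loses in the `sun` state), so dominance alone cannot settle that comparison; that is where expected-value reasoning with known probabilities takes over.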
Introduces the most common decision errors — sunk-cost thinking, opportunity-cost neglect, loss aversion, framing effects, and anchoring — and positions prospect theory as the descriptive counterpart to normative expected-utility theory. Students learn to detect these errors in their own and others' reasoning.
Use the reading and examples to learn what the standards demand, then practice applying those standards explicitly in 15 activities.
An integrative lesson that applies the full toolkit of decision theory — expected utility, dominance, opportunity cost, framing correction, and descriptive bias awareness — to complex real-world decisions in career choice, investment, public policy, and ethical tradeoffs. Cases are designed to require multiple concepts working together.
Each lesson now opens with guided reading, then moves through examples and 2 practice activities so you are not dropped into the task cold.
Rules And Standards
When probabilities are known and preferences are represented by a utility function, a rational agent should choose the option with the highest expected utility.
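As a minimal illustration of this rule, the sketch below compares a sure payment against a gamble with a higher expected monetary value; under a concave (square-root) utility function the risk-averse agent still takes the sure thing. The payoffs and the utility function are assumptions invented for the example.

```python
import math

def expected_utility(prospect, utility):
    # prospect: list of (probability, payoff) pairs summing to 1.
    return sum(p * utility(x) for p, x in prospect)

# Hypothetical choice: a sure $100 vs a 50/50 gamble on $0 or $250.
sure_thing = [(1.0, 100)]
gamble = [(0.5, 0), (0.5, 250)]

u = math.sqrt  # concave utility: a risk-averse agent
best = max([("sure_thing", sure_thing), ("gamble", gamble)],
           key=lambda kv: expected_utility(kv[1], u))
print(best[0])  # sure_thing
```

The gamble has the higher expected monetary value ($125 vs $100), so a risk-neutral agent (identity utility) would choose it; the concave utility is what makes the sure option rational here.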
Common failures
If an agent prefers A to B and B to C, then the agent should prefer A to C; cycles of preference are irrational and expose the agent to exploitation.
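The exploitation this rule warns about is the classic money pump, which can be simulated in a few lines. The options, trade sequence, and fee below are hypothetical; the point is only that cyclic preferences let a trader lead the agent in a circle while collecting a fee at every step.

```python
# A money-pump sketch: an agent with cyclic preferences A > B > C > A
# pays a small fee for each "upgrade" and ends up back where it started.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred)

def pump(agent_holds, trades, fee=1):
    paid = 0
    for offered in trades:
        if (offered, agent_holds) in prefers:  # agent prefers the offer
            agent_holds = offered              # so it accepts the swap
            paid += fee                        # and pays the fee to switch
    return agent_holds, paid

holding, total_fees = pump("A", ["C", "B", "A", "C", "B", "A"])
print(holding, total_fees)  # A 6 -- back where it started, 6 fees poorer
```

Every individual trade looks rational to the agent, yet the sequence leaves it strictly worse off, which is exactly why cyclic preferences count as irrational.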
Common failures
A rational decision is determined by the future consequences of available options; past investments that cannot be recovered should play no role in the choice.
Common failures
A rational agent should never choose an option that is weakly dominated, and should always prefer an option that strictly dominates its alternatives.
Common failures
If an agent prefers A to B, then the agent should prefer any mixture (A with probability p, some outcome X with probability 1-p) to (B with probability p, X with probability 1-p); the presence of a common outcome should not flip the preference.
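For the special case where preferences track expected value, the mixture claim can be checked directly. The prospects A, B, and X below, and the mixing probability, are assumptions invented for the illustration; the mixed-in common outcome shifts both expected values by the same amount and so cannot flip the ranking.

```python
def expected_value(prospect):
    # prospect: list of (probability, payoff) pairs summing to 1.
    return sum(p * x for p, x in prospect)

def mix(option, common, p):
    """Compound lottery: option with probability p, the common
    outcome with probability 1 - p."""
    return ([(p * q, x) for q, x in option]
            + [((1 - p) * q, x) for q, x in common])

A = [(1.0, 100)]            # a sure $100
B = [(0.5, 0), (0.5, 180)]  # EV $90, so A is preferred to B
X = [(1.0, 0)]              # a common outcome mixed into both

p = 0.25
# The common outcome cannot flip an expected-value ranking.
print(expected_value(mix(A, X, p)) > expected_value(mix(B, X, p)))  # True
```

This is only the expected-value special case; the independence axiom asserts the same invariance for any preferences satisfying the expected-utility axioms, and the Allais-style failures covered under common failures are precisely cases where real choosers violate it.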
Common failures
When decisions depend on probabilities, those probabilities must reflect background base rates and not just vivid or recent information; decision analysis inherits the base-rate discipline of Bayesian inference.
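The base-rate discipline in question is Bayes' rule. The sketch below uses invented screening numbers (a 1% base rate, 95% sensitivity, 10% false-positive rate) to show how a small prior survives even a fairly diagnostic, vivid piece of evidence.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: probability of the hypothesis after the evidence."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical screening case: despite a "positive test", the low base
# rate keeps the posterior under 9%.
print(round(posterior(0.01, 0.95, 0.10), 3))  # 0.088
```

A decision analysis that plugged in the vivid 95% figure instead of this posterior would badly misprice the options, which is the sense in which decision theory inherits Bayesian base-rate discipline.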
Common failures
An option is only as good as the best alternative it displaces; a good choice must be compared against its next-best alternative, not evaluated in isolation.
Common failures
Formalization Patterns
Input form
practical_choice_with_uncertainty
Output form
options_by_states_table_with_payoffs
Steps
Common errors
Input form
option_with_probabilistic_outcomes
Output form
numerical_expected_value
Steps
Common errors
Input form
monetary_gamble_or_prospect
Output form
expected_utility_score
Steps
Common errors
Concept Map
The probability-weighted average of an action's possible outcomes, computed as EV = sum over outcomes of probability times payoff.
A numerical measure of how much an outcome is worth to a particular agent, calibrated so that higher numbers always correspond to more preferred outcomes.
Risk refers to situations where the probabilities of outcomes are known or estimable; uncertainty refers to situations where those probabilities themselves are unknown or contested.
Option A dominates option B when A yields an outcome at least as good as B in every possible state of the world, and strictly better in at least one state.
A ranking of alternatives that a decision maker holds, ideally satisfying completeness (every pair is comparable) and transitivity (if A is preferred to B and B to C, then A is preferred to C).
The value of the best alternative you give up when you choose one option over another.
A cost that has already been incurred and cannot be recovered regardless of future action.
Evaluating decisions by asking about the incremental costs and benefits of small changes rather than about total averages.
The property that successive units of a good produce smaller and smaller increases in utility; the tenth dollar gained matters less to you than the first.
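Diminishing marginal utility is easy to see with any concave utility function; the sketch below uses the square root, an assumption made purely for illustration, and prints the utility gained from one extra dollar at increasing wealth levels.

```python
import math

u = math.sqrt  # a concave utility function over wealth

# Marginal utility of one extra dollar shrinks as wealth grows.
marginal = [u(w + 1) - u(w) for w in (0, 1, 9, 99)]
print([round(m, 3) for m in marginal])  # [1.0, 0.414, 0.162, 0.05]
```

The strictly decreasing sequence is the diminishing-marginal-utility property, and it is the same concavity that produced risk aversion in the expected-utility examples above.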
Assessment
Assessment advice
Mastery requirements
History Links
In the Pensées, Blaise Pascal argued that belief in God could be defended by a decision-theoretic calculation comparing infinite potential gain against finite loss — the first recorded use of expected-value reasoning applied to an existential choice.
In 'Specimen Theoriae Novae de Mensura Sortis' (1738), Daniel Bernoulli resolved the St Petersburg paradox by proposing that rational agents maximize expected utility rather than expected money, and that utility is a concave function of wealth (diminishing marginal utility).
In Theory of Games and Economic Behavior (1944), John von Neumann and Oskar Morgenstern proved the expected utility theorem: any agent whose preferences over risky prospects satisfy a small set of axioms (completeness, transitivity, continuity, independence) must behave as if maximizing the expected value of some utility function.
Herbert Simon argued that real decision makers operate under cognitive and informational limits, and that they typically 'satisfice' (pick a good-enough option) rather than 'optimize' (find the absolute best). He introduced the concept of bounded rationality as a descriptive counterweight to idealized expected-utility theory.
Daniel Kahneman and Amos Tversky developed prospect theory, which describes how people actually choose under risk: they evaluate outcomes relative to a reference point, exhibit loss aversion (losses loom larger than equivalent gains), overweight small probabilities, and are sensitive to how options are framed.
In The Foundations of Statistics (1954), Leonard Savage extended expected-utility theory to decisions under subjective uncertainty, showing that a rational agent's probabilities and utilities can be derived jointly from preferences over acts.