Rigorous Reasoning

Decision And Rational Choice

Capstone: Decisions Under Real Uncertainty

An integrative lesson that applies the full toolkit of decision theory — expected utility, dominance, opportunity cost, framing correction, and descriptive bias awareness — to complex real-world decisions in career choice, investment, public policy, and ethical tradeoffs. Cases are designed to require multiple concepts working together.

Read the explanation sections first, then use the activities to test whether you can apply the idea under pressure.


Start Here

What this lesson is helping you do

This lesson applies the full toolkit of decision theory to complex real-world decisions in career choice, investment, public policy, and ethical tradeoffs, with cases designed to require multiple concepts working together. The practice depends on understanding Expected Value, Utility, Dominance, and Opportunity Cost and on applying tools such as Maximize Expected Utility and Transitivity of Preferences correctly.

How to approach it

Read the explanation sections first, then use the activities to test whether you can apply the idea under pressure.

What the practice is building

You will put the explanation to work through guided problem solving and quiz activities, so the goal is not just to recognize the idea but to use it under your own control.

What success should let you do

Complete a structured decision analysis for 3 of the 5 capstone cases, applying at least four distinct tools (e.g., dominance, expected utility, opportunity cost, framing check) in each analysis and justifying the final recommendation with explicit tradeoffs and revision triggers.

Reading Path

Move through the lesson in this order

The page is designed to teach before it tests. Use this sequence to keep the reading, examples, and practice in the right relationship.

Read

Build the mental model

Move through the guided explanation first so the central distinction and purpose are clear before you evaluate your own work.

Study

Watch the move in context

Use the worked examples to see how the reasoning behaves when someone else performs it carefully.

Do

Practice with a standard

Only then move into the activities, using the pause-and-check prompts as a final checkpoint before you submit.

Guided Explanation

Read this before you try the activity

These sections give the learner a usable mental model first, so the practice feels like application rather than guesswork.

Integration

Real decisions require multiple tools at once

The earlier lessons introduced decision tools one at a time: the decision matrix, dominance, expected value, utility, sunk cost, opportunity cost, and framing. Real decisions almost never fit into a single tool. A career change involves expected value (the salaries in question), utility (how you weigh financial gain against other goods), dominance checks (some options may be Pareto-worse than others across all states), opportunity cost (what you give up by taking one path), and framing corrections (reference points shift depending on how you describe the move). Using one tool in isolation would miss the structure of the problem.

The capstone skill of decision theory is the ability to bring these tools together in the right order without getting lost. Start by structuring the problem — options, states, consequences — and checking for dominance. Then compute expected utility where probabilities are reliable, and apply scenario or robustness thinking where they are not. Identify opportunity costs explicitly by naming the best alternative. Remove sunk costs from the analysis. Check the framing by reversing gain-loss language and confirming that the preference does not flip. Only then should you commit to a recommendation. This routine takes longer than an intuitive judgment but produces decisions you can defend to yourself and others.

What to look for

  • Structure the decision first: options, states, consequences.
  • Check for dominance before computing anything.
  • Compute expected utility where probabilities are reliable.
  • Identify opportunity costs explicitly.
  • Eliminate sunk costs from the analysis.
  • Reframe the options to verify that the preference is not driven by description.
Integrative decision making is a routine: structure, eliminate, compute, frame-check, then recommend.
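The "structure, eliminate, compute" part of the routine can be sketched in a few lines of Python. The payoffs below are invented for illustration and are not one of the lesson's cases:

```python
def dominates(a, b):
    """a weakly dominates b: at least as good in every state, better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def analyze(options, probs):
    """Structure, eliminate, compute: drop dominated rows, then rank by EU."""
    live = {name: pay for name, pay in options.items()
            if not any(dominates(other, pay)
                       for o, other in options.items() if o != name)}
    eu = {name: sum(p * x for p, x in zip(probs, pay))
          for name, pay in live.items()}
    return max(eu, key=eu.get), eu

# Three options over two states (payoffs in utility units).
# C is dominated by A and drops out before any probability is used.
options = {"A": [10, 5], "B": [2, 12], "C": [8, 4]}
best, eu = analyze(options, probs=[0.6, 0.4])
print(best, eu)  # A wins on expected utility; C was eliminated for free
```

Note the order: the dominance filter uses no probabilities, so C is eliminated even if the 0.6/0.4 estimates turn out to be wrong.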

Application domain

Career decisions and long-horizon uncertainty

Career choices stretch the tools of decision theory because the horizons are long, the probabilities are fragile, and the utility function involves things other than money. A software engineer considering whether to leave a stable job for a startup should compute the expected value in dollar terms but should not stop there. She also needs to consider the utility of the work itself, the value of learning, the opportunity cost of the stable salary she is giving up, and the distribution of outcomes in case the startup fails. The decision is not 'which option has the higher expected salary' but 'which option has the higher expected utility over my full life plan, given my risk tolerance and the uncertainty of the estimates.'

Under long-horizon uncertainty, robustness matters more than point estimates. An option that performs well across many plausible scenarios is usually better than one that performs brilliantly in the expected case but catastrophically in plausible alternatives. This is why career advisors tell people to consider what they would do if the startup failed, or if demand for the field they are studying declined. The question is not 'what is the single most likely outcome' but 'which option leaves me with acceptable futures across the range of plausible outcomes'.

What to look for

  • Translate career payoffs into utility, not just money.
  • Use robustness thinking for long-horizon outcomes where probabilities are fragile.
  • Ask what the worst plausible scenario looks like for each option, and whether you can live with it.
  • Identify the opportunity cost of each path explicitly.
Career decisions combine expected-utility reasoning with robustness thinking because the probabilities over long horizons are fragile and the utility function is broad.
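A robustness check of this kind is easy to make concrete. The scenario payoffs and the acceptability floor below are invented utility numbers, not data:

```python
# Hypothetical five-year payoffs in utility units, across scenarios whose
# probabilities are too fragile to trust. The numbers are invented.
payoffs = {
    "stable_job": {"boom": 6, "normal": 6, "downturn": 5},
    "startup":    {"boom": 10, "normal": 4, "downturn": 1},
}

# Robustness check: compare worst plausible outcomes, not point estimates.
worst = {opt: min(states.values()) for opt, states in payoffs.items()}
acceptable_floor = 3  # assumed level below which a future is not livable

robust = [opt for opt, w in worst.items() if w >= acceptable_floor]
print(worst)   # {'stable_job': 5, 'startup': 1}
print(robust)  # ['stable_job']: only the stable job clears the floor
```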

Application domain

Public policy and moral tradeoffs

Public policy decisions stress-test decision theory in a different way. Many policies involve tradeoffs between goods that are hard to compare: lives, money, liberty, equality, environmental quality. Expected-value calculations still apply — a policy that prevents 10,000 deaths per year is worth more than one that prevents 100 in expected terms, all else equal — but the utility function that converts between goods is contested rather than given. Different stakeholders have different utility functions, and the decision analyst must make those differences explicit rather than pretending a single function applies to everyone.

Policy decisions also raise the issue of whose utility counts and how to aggregate across people. Cost-benefit analysis typically sums monetary-equivalent costs and benefits across a population, which treats a dollar to a wealthy person as equal to a dollar to a poor person — a choice that is not morally neutral. Decision analysts can use weighted sums, expected-lives-saved criteria, or maximin rules (protect the worst-off) to reflect different ethical frameworks. The decision-theoretic contribution is not to settle the ethics but to make the normative choices visible and the tradeoffs computable once those choices have been made.

What to look for

  • Make the utility function explicit, especially when tradeoffs involve non-monetary goods.
  • Identify whose welfare is being counted and how it is being aggregated.
  • Use weighted sums or maximin rules to reflect different ethical frameworks rather than pretending the choice of framework is neutral.
  • Separate the factual analysis (which option produces what outcomes) from the normative question (how to weigh the outcomes).
Policy decisions need decision theory for the structure and ethics for the weights; the analyst's job is to make the weights explicit rather than to resolve them by calculation.
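A small sketch shows how the aggregation rule, not the facts, can drive the recommendation. The welfare numbers are hypothetical:

```python
# Two hypothetical policies and their welfare effects on two groups,
# in arbitrary utility units. The numbers are illustrative only.
policies = {
    "policy_A": {"well_off": 10, "worst_off": 1},
    "policy_B": {"well_off": 5,  "worst_off": 4},
}

# Utilitarian sum: add welfare across groups.
total = {p: sum(groups.values()) for p, groups in policies.items()}
# Maximin: judge each policy by its worst-off group.
floor = {p: min(groups.values()) for p, groups in policies.items()}

print(max(total, key=total.get))  # policy_A: wins on the sum (11 vs 9)
print(max(floor, key=floor.get))  # policy_B: wins on maximin (4 vs 1)
```

The facts (the payoff table) are identical in both calculations; only the normative weighting changes, which is exactly the point the section makes.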

Application domain

Investment, insurance, and personal finance

Personal financial decisions are one of the best testing grounds for decision theory because the numbers are usually available and the stakes are real. Insurance is valuable to a risk-averse agent even when it has negative expected value, because the utility loss from a catastrophic uninsured loss is larger than the utility cost of steady premium payments. Diversification is valuable because spreading exposure across independent or weakly correlated investments reduces variance without reducing expected return, which is a strict improvement for a risk-averse agent. Index funds are valuable because they capture the average market return at low cost, and most active strategies do not beat that average after fees once opportunity costs are accounted for.
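The insurance claim can be verified with a toy calculation. The wealth, loss, and premium figures are assumed for illustration, and log utility stands in for "risk averse":

```python
import math

# Assumed figures: wealth 100,000; a 1% chance of a 90,000 loss; full
# coverage costs a 1,500 premium. The expected loss is only 900, so the
# policy has negative expected monetary value.
wealth, loss, p_loss, premium = 100_000.0, 90_000.0, 0.01, 1_500.0

u = math.log  # concave utility: a standard model of risk aversion

eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
eu_insured = u(wealth - premium)  # the insured outcome is certain

print(wealth - premium < wealth - p_loss * loss)  # True: worse in expected money
print(eu_insured > eu_uninsured)                  # True: better in expected utility
```

The concavity does the work: the utility drop from losing 90,000 is far larger than 60 times the utility drop from paying the 1,500 premium.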

The typical personal-finance errors are familiar from earlier lessons. The sunk-cost fallacy leads people to hold onto losing investments longer than they should. Loss aversion leads them to sell winners too early to 'lock in' gains. Framing leads them to prefer options described as 'saving' over identical options described as 'spending.' Anchoring leads them to buy when a price looks low relative to a recent high. Rigorous personal finance is mostly about having a written plan that constrains these biases in advance — set a target allocation, rebalance on schedule rather than on emotion, and ignore the salience of any individual month's performance.

What to look for

  • Recognize that insurance against catastrophic loss is rational under risk aversion even with negative expected money value.
  • Use diversification to reduce variance without sacrificing expected return.
  • Make financial decisions on written rules rather than on day-to-day feeling.
  • Audit your portfolio decisions for sunk-cost, loss-aversion, framing, and anchoring errors.
Personal finance rewards the disciplined application of decision theory and punishes every bias the earlier lessons described.

Core Ideas

The main concepts to keep in view

Use these as anchors while you read the example and draft your response. If the concepts blur together, the practice usually blurs too.

Expected Value

The probability-weighted average of an action's possible outcomes, computed as EV = sum over outcomes of probability times payoff.

Why it matters: Expected value is the starting point for evaluating any choice whose outcome depends on chance, and it generalizes straightforwardly to expected utility.

Utility

A numerical measure of how much an outcome is worth to a particular agent, calibrated so that higher numbers always correspond to more preferred outcomes.

Why it matters: Utility converts money, time, health, and other goods into a single scale that captures what actually matters to the decision maker, including attitudes toward risk.

Dominance

Option A dominates option B when A yields an outcome at least as good as B in every possible state of the world, and strictly better in at least one state.

Why it matters: Dominance is the most powerful decision rule available because it lets you eliminate options without knowing any probabilities, and rational agents never choose a dominated option.

Opportunity Cost

The value of the best alternative you give up when you choose one option over another.

Why it matters: Every decision has an opportunity cost, and ignoring it leads people to accept options that look good in isolation but are dominated by alternatives they failed to consider.

Sunk Cost

A cost that has already been incurred and cannot be recovered regardless of future action.

Why it matters: Rational decision making is forward-looking, which means sunk costs should never influence current choices; the sunk-cost fallacy is the tendency to let them do so anyway.

Reference

Open these only when you need the extra structure

How the lesson is meant to unfold

Concept Intro

The core idea is defined and separated from nearby confusions.

Worked Example

A complete example demonstrates what correct reasoning looks like in context.

Guided Problem Solving

This step supports the lesson by moving from explanation toward application.

Independent Practice

You work more freely, with less support, to prove the idea is sticking.

Assessment Advice

Use these prompts to judge whether your reasoning meets the standard.

Mastery Check

The final target tells you what successful understanding should enable you to do.

Reasoning tools and formal patterns

Rules and standards

These are the criteria the unit uses to judge whether your reasoning is actually sound.

Maximize Expected Utility

When probabilities are known and preferences are represented by a utility function, a rational agent should choose the option with the highest expected utility.

Common failures

  • Choosing the option with the highest possible payoff without weighing how likely it is.
  • Substituting the most likely outcome for the expected value and ignoring the remaining possibilities.
  • Treating expected utility as a guarantee of the preferred outcome rather than as a long-run average over repeated choices.

Transitivity of Preferences

If an agent prefers A to B and B to C, then the agent should prefer A to C; cycles of preference are irrational and expose the agent to exploitation.

Common failures

  • Preferring A to B in one framing and B to A in another because the choice context changed the salience of attributes.
  • Holding cyclical preferences (A over B, B over C, C over A) that can be pumped for arbitrary losses.

Ignore Sunk Costs

A rational decision is determined by the future consequences of available options; past investments that cannot be recovered should play no role in the choice.

Common failures

  • Continuing a failing project because of the money and time already spent on it.
  • Refusing to abandon a plan that clearly will not succeed because doing so would 'waste' prior effort.
  • Letting the size of a past commitment substitute for an analysis of current expected value.

Dominance Principle

A rational agent should never choose an option that is weakly dominated, and should always prefer an option that strictly dominates its alternatives.

Common failures

  • Selecting a dominated option because it is familiar, vivid, or emotionally salient.
  • Missing a dominance relation because the decision matrix was not laid out explicitly.
  • Treating dominance as a tiebreaker rather than as the most powerful decision rule available.

Independence Axiom

If an agent prefers A to B, then the agent should prefer any mixture (A with probability p, some outcome X with probability 1-p) to (B with probability p, X with probability 1-p); the presence of a common outcome should not flip the preference.

Common failures

  • Allais-style reversals (after Maurice Allais's paradox), where the same underlying comparison flips depending on whether a sure thing is framed into the choice.
  • Letting certainty (a sure outcome) dominate analysis in a way that contradicts the agent's ordering over the non-sure parts.

Respect Base Rates in Decision Analysis

When decisions depend on probabilities, those probabilities must reflect background base rates and not just vivid or recent information; decision analysis inherits the base-rate discipline of Bayesian inference.

Common failures

  • Inflating the probability of a dramatic outcome because it is easy to imagine (availability bias).
  • Using a recent anecdote as if it were a reliable estimate of the underlying frequency.
  • Ignoring the actual prevalence of failures when evaluating an optimistic business forecast.

Weigh Opportunity Cost Explicitly

An option is only as good as the best alternative it displaces; a good choice must be compared against its next-best alternative, not evaluated in isolation.

Common failures

  • Accepting an option because it looks attractive on its own without asking what is being given up.
  • Treating a small positive expected value as a clear win without asking whether a better option was available for the same resources.

Patterns

Use these when you need to turn a messy passage into a cleaner logical structure before evaluating it.

Decision Matrix

Input form

A practical choice under uncertainty

Output form

An options-by-states table with payoffs

Steps

  • List the available options as rows.
  • List the possible states of the world as columns.
  • Fill in the payoff or utility for each option-state pair.
  • If probabilities are known, add a row for state probabilities.
  • Check for dominance relations first.
  • Compute expected utility for each non-dominated row.
  • Identify the option with the highest expected utility as the recommended choice, noting any assumptions made about probabilities or utility.

Watch for

  • Omitting a state of the world and thereby biasing the calculation.
  • Filling in outcomes by intuition without actually asking what would happen in each state.
  • Computing expected value across dominated options and missing that the dominance check could have eliminated them immediately.
  • Treating the chosen option as guaranteed to produce the payoff that went into the expected-value calculation.

Expected Value Calculation

Input form

An option with probabilistic outcomes

Output form

A numerical expected value

Steps

  • List every possible outcome that results from the option.
  • Assign a probability to each outcome, ensuring the probabilities sum to 1.
  • Assign a payoff (in dollars, utility units, or another common scale) to each outcome.
  • Multiply each probability by its payoff.
  • Sum the products to obtain the expected value.
  • Compare the expected value against the expected values of alternative options and against any relevant reference point (the current status quo, a safe alternative).

Watch for

  • Using probabilities that do not sum to 1 because one outcome was forgotten.
  • Mixing dollar payoffs with utility values in the same calculation.
  • Reading the computed expected value as a likely outcome rather than as a long-run average.
  • Ignoring variance and tail risk when the stakes are high enough that a bad outcome would be unrecoverable.
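The steps above, together with the first "watch for" item, can be folded into a small helper. A sketch, not a prescribed implementation:

```python
def expected_value(outcomes):
    """outcomes: (probability, payoff) pairs, all payoffs on one scale."""
    total_p = sum(p for p, _ in outcomes)
    # Guard against the most common bookkeeping error: a forgotten outcome.
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError(f"probabilities sum to {total_p}, not 1")
    return sum(p * x for p, x in outcomes)

# A gamble: 10% chance of 5,000, 90% chance of nothing.
print(expected_value([(0.10, 5_000), (0.90, 0)]))  # 500.0
```

Passing `[(0.10, 5_000)]` alone raises the error, which is the point: the check catches a missing outcome before it silently biases the answer.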

Utility Function Application

Input form

A monetary gamble or prospect

Output form

An expected-utility score

Steps

  • Identify the decision maker's wealth or baseline reference level.
  • Transform each dollar payoff into a utility value using a concave function such as the square root or logarithm when risk aversion is appropriate.
  • Multiply each utility value by the probability of the corresponding outcome.
  • Sum the products to obtain expected utility.
  • Compare expected utility across options, remembering that the utility scale is meaningful only up to positive linear transformations.

Watch for

  • Using a linear utility function and then wondering why the analysis recommends obviously reckless gambles.
  • Switching utility functions between options in the same comparison.
  • Confusing utility units with dollars when reporting the conclusion.
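A short sketch of the transformation, using square-root utility as the assumed concave function and invented payoffs:

```python
import math

# A safe option versus a gamble with a higher expected money value.
safe = [(1.0, 900.0)]
gamble = [(0.5, 2_500.0), (0.5, 0.0)]  # EV = 1,250 > 900

def eu(prospect, u):
    """Expected utility of a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in prospect)

linear = lambda x: x  # risk-neutral: utility is money
concave = math.sqrt   # one standard risk-averse choice

print(eu(gamble, linear) > eu(safe, linear))    # True: the gamble wins on money
print(eu(gamble, concave) > eu(safe, concave))  # False: the safe option wins in utility
```

Same prospects, same probabilities; only the utility function changed, which is why switching functions mid-comparison is listed as a failure mode.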

Worked Through

Examples that model the standard before you try it

Do not skim these. A worked example earns its place when you can point to the exact move it is modeling and the mistake it is trying to prevent.

Worked Example

The Job Offer Fully Analyzed

The fully worked analysis shows how the tools combine. Expected money favors the startup; expected utility under moderate risk aversion favors the stable job. The recommendation depends on the decision maker's actual utility function, and the analysis makes that dependence explicit rather than hiding it behind a single number.

Content

  • Options: stay at the current job (180,000 dollars per year, stable), or join the startup (120,000 dollars per year salary plus equity with expected value 0.10 × 5,000,000 + 0.20 × 500,000 + 0.70 × 0 = 500,000 + 100,000 + 0 = 600,000 dollars over five years, i.e., 120,000 dollars per year in expected equity).
  • Expected-money comparison: Startup offers expected 120,000 + 120,000 = 240,000 per year. Stable job offers 180,000. On money alone, the startup wins by 60,000 dollars per year in expectation.
  • Risk check: Startup variance is huge. In 70% of futures, the equity is worth 0 and total compensation is 120,000 versus 180,000 in the stable job — a 60,000-per-year loss for five years.
  • Utility adjustment: A risk-averse agent (U concave in annual compensation) will weight the 70% downside more heavily than the 10% upside. The expected-utility winner is often the stable job, even though the expected-money winner is the startup.
  • Opportunity cost: Leaving the stable job means giving up the guaranteed salary differential of 60,000 dollars per year (300,000 dollars over five years), plus the colleagues and stability that come with it.
  • Framing check: Reframe the startup as 'trade 300,000 of guaranteed income plus stability for a 10% chance at 5 million dollars.' Still attractive? Reframe the stable job as 'turn down a 10% chance at 5 million to keep a predictable paycheck.' Still attractive? If preferences flip, framing is driving the response.
  • Recommendation: Depends on risk tolerance and life situation. A younger agent with no dependents and high upside tolerance might reasonably take the startup. A mortgaged agent with risk aversion should likely stay.
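The example's arithmetic can be reproduced directly. The log utility function below is an assumption chosen to illustrate fairly strong risk aversion; a milder function such as square root flips the answer back to the startup, which is exactly the dependence on risk tolerance the example highlights:

```python
import math

# Equity expected value from the case: 10% of 5M, 20% of 500K, 70% of 0.
equity_ev = 0.10 * 5_000_000 + 0.20 * 500_000 + 0.70 * 0  # 600,000

# Five-year totals per equity outcome (five years of salary plus equity).
stable_total = 5 * 180_000                                # 900,000 guaranteed
startup = [(0.10, 5 * 120_000 + 5_000_000),
           (0.20, 5 * 120_000 + 500_000),
           (0.70, 5 * 120_000 + 0)]

ev_startup = sum(p * x for p, x in startup)               # 1,200,000

eu_stable = math.log(stable_total)
eu_startup = sum(p * math.log(x) for p, x in startup)

print(ev_startup > stable_total)  # True: the startup wins on expected money
print(eu_stable > eu_startup)     # True: the stable job wins under log utility
```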

Worked Example

The Sunk R&D Project Analyzed

Removing sunk costs from the analysis turns an emotionally difficult decision into an arithmetically obvious one. The discipline of separating past from future costs is most valuable exactly when the past costs are large enough to feel intolerable to abandon.

Content

  • Sunk cost: 12 million dollars already spent. Remove from analysis.
  • Forward-looking analysis: Continuing costs 4 million and produces up to 3 million in revenue, for a net 1-million-dollar loss on the remaining work.
  • Opportunity cost: The 4 million dollars could fund an alternative project with expected return 6 million, for a net 2-million-dollar gain.
  • Rational recommendation: Stop the current project and redirect resources to the alternative. The choice is a 1-million-dollar loss versus a 2-million-dollar gain — a 3-million-dollar swing that is entirely independent of the 12 million already spent.
  • Counter to the sunk-cost instinct: Management's reluctance ('we've come so far') is an emotional response to the size of the sunk cost, not a forward-looking analysis. The 12 million is gone whether the project continues or stops.
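The case's numbers can be laid out in a few lines, with the sunk 12 million deliberately absent:

```python
# Forward-looking payoffs in millions of dollars; the 12 already spent
# never appears in either line.
continue_project = -4 + 3  # spend 4 more, earn at most 3: net -1
alternative      = -4 + 6  # the same 4 funds the new project: net +2

swing = alternative - continue_project
print(swing)  # 3 (million dollars), independent of the sunk 12
```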

Pause and Check

Questions to use before you move into practice

Self-check questions

  • Have I structured the decision with options, states, and consequences before running any numbers?
  • Did I check for dominance before computing expected values?
  • Have I removed sunk costs from the analysis?
  • Have I identified the opportunity cost of each option explicitly?
  • Would my preference flip if I reframed the options, and what does that tell me about the framing's hold on my reasoning?
  • Can I state the conditions under which I would revise this decision?

Practice

Now apply the idea yourself

Move into practice only after you can name the standard you are using and the structure you are trying to preserve or evaluate.

Guided Problem Solving

Integrated

Integrated Decision Analysis

For each case, structure the decision (options, states, consequences), check for dominance, compute expected utility where probabilities are available, identify opportunity costs, remove any sunk costs, check the framing, and then state a recommendation with its justification and revision triggers. Your analysis should make all the tools of the unit visible where they apply.

Five integrated decision cases

Each case is designed to require multiple tools from the unit. Show your structured analysis, not just a final recommendation. When numerical data are provided, carry out the computations explicitly.

Case 1 — The startup offer

Nadia is a senior engineer with a stable job paying 180,000 dollars a year plus benefits. She receives an offer from an early-stage startup: 120,000 dollar salary plus equity that has a 10 percent chance of being worth 5 million dollars in five years, a 20 percent chance of being worth 500,000 dollars, and a 70 percent chance of being worth zero. She values stability, likes her current colleagues, and has a mortgage. Structure her decision, compute expected value of the equity, identify opportunity costs, and discuss whether she should take the offer under different risk-aversion assumptions.

Compare expected-money-value, expected-utility under mild risk aversion, and robustness considerations. Identify the opportunity cost of leaving the stable job.

Case 2 — The drug approval tradeoff

A regulatory agency is considering approving a new drug. Clinical trials suggest the drug prevents a serious complication in 30 percent of patients with a particular condition (affecting 100,000 patients per year). It also causes severe side effects in 2 percent of users. Alternative treatments are less effective but have fewer side effects. The agency can approve, reject, or require further trials. Structure the decision, identify the relevant utilities, discuss whose welfare is being counted, and recommend an approach. Note which framing biases might affect public perception regardless of the analytical result.

Make the utility function and aggregation choice explicit. Discuss minimax-style reasoning for the worst case.

Case 3 — The sunk R&D project

A manufacturing firm has spent 12 million dollars over three years on an R&D project whose remaining work will cost another 4 million dollars. Current market analysis suggests the resulting product will earn at most 3 million dollars in revenue. Senior management is reluctant to cancel the project because 'we've come so far.' The 4 million dollars could alternatively fund a much smaller new project with an expected return of 6 million dollars. Structure the decision, identify the sunk costs, compute the forward-looking expected values, and recommend.

Explicitly separate sunk costs from forward-looking costs. Compute the opportunity cost of continuing.

Case 4 — The climate adaptation policy

A coastal city must decide among three adaptation strategies: build a seawall (cost 500 million dollars, protects against 1-in-100-year storms but not worse), retreat from the most vulnerable areas (cost 300 million dollars, protects all inhabitants but disrupts 10,000 households), or take no action (cost 0 dollars upfront but potential damages from severe storms could reach 2 billion dollars with uncertain probability). Probabilities of severe storms are contested among climate models. Structure the decision and explain which selection rule (expected value under best estimates, minimax, robustness across scenarios) best fits this decision context.

Use both expected-value reasoning and minimax reasoning, and explain why the choice of rule matters in the presence of contested probabilities.

Case 5 — The rental versus purchase choice

A young professional can either rent an apartment (1,800 dollars per month) or buy a condo (30,000 dollar down payment, 2,100 dollar monthly payment including mortgage and taxes, expected appreciation of 2 percent per year but with uncertainty). He plans to stay in the city for at least five years but might take a job elsewhere. Structure the decision, identify the opportunity cost of the 30,000 dollar down payment, compute expected value under different scenarios for move timing and appreciation, and discuss how framing effects (e.g., 'throwing away money on rent') might distort the intuitive comparison.

Compute the opportunity cost of the down payment explicitly. Note the framing error in 'throwing away money on rent.'


Quiz

Integrated

Capstone Concept Check

Answer each question concisely, drawing on the full toolkit of the unit.

Short-answer capstone check

Each question tests the integration of multiple concepts from the unit.

Q1

Describe the correct order of operations when analyzing a complex decision: dominance check, expected-value computation, opportunity cost, sunk cost, and framing check. Why does the order matter?

Dominance first (no probabilities needed), then expected value, then sanity checks.

Q2

Explain why an insurance policy with negative expected monetary value can still be rational for a risk-averse agent. Use the utility function framework.

Mention that concave utility means large losses hurt more in utility than large gains help.

Q3

A friend is trying to decide whether to continue a Ph.D. program she no longer enjoys. She has spent three years and considers them wasted if she leaves. How would you help her analyze the decision, and which decision errors should you be alert to in her reasoning?

Sunk-cost fallacy, opportunity cost, framing of loss vs gain.

Q4

Under what conditions is expected-value maximization the right decision rule, and under what conditions should it be supplemented or replaced by other rules?

Risk vs uncertainty, one-shot vs repeated, catastrophic downside.

Q5

Explain the normative-descriptive distinction between expected utility theory and prospect theory, and explain why a rigorous decision analyst needs both.

Expected utility is the standard to aim at; prospect theory catalogs the predictable deviations.



Further Support

Open these only if you need extra help or context

Mistakes to avoid before submitting
  • Do not confuse a thorough calculation with a good decision — the calculation is only as good as the structure it rests on.
  • Do not treat integrated analysis as a license to ignore any one tool when its use is inconvenient; skipping the framing check is not neutrality, but capitulation to whatever framing you started with.
Where students usually go wrong

Running an expected-value calculation before checking for dominance, missing free eliminations.

Reporting the expected-value-maximizing option as the recommendation without considering risk aversion or worst-case outcomes.

Failing to identify the opportunity cost of the recommended option — what is being given up by choosing it.

Letting the size of a past investment rather than the forward expected value drive a continuation decision.

Accepting a choice because its description frames it as a gain, without asking whether the same outcomes would be accepted if framed as a loss.

Invoking uncertainty to abandon the analysis entirely rather than switching to robustness or minimax reasoning.

Historical context for this way of reasoning

Herbert Simon

Simon's work on bounded rationality is the right lens for the capstone. Real agents cannot optimize perfectly in the face of complex uncertainty, so they must rely on structured procedures that approximate rationality without demanding impossible cognitive feats. The integrated routine in this lesson — structure, eliminate, compute, frame-check, recommend — is exactly the kind of bounded-rational procedure Simon recommended.