Why single-line projections mislead

Every standard financial plan contains a chart that looks roughly the same: a single line ascending from left to right, inflected upward by the assumed rate of return, terminating at some target wealth figure in retirement. The line is clean, legible, and almost entirely misleading: it communicates a false sense of precision, presenting one deterministic trajectory when the space of plausible outcomes spans an enormous range.

The root of the problem is the arithmetic versus geometric return distinction — one of the most consequential and consistently misunderstood concepts in quantitative finance. Arithmetic average returns and geometric (compound) returns are not the same thing. Volatility drives them apart, and the gap widens with time. The relationship is approximated as: geometric return ≈ arithmetic return − (volatility² / 2). A portfolio with an arithmetic average return of 7% and annual volatility of 15% has an expected geometric return of roughly 5.9%. Over 30 years, the compounding difference between those two numbers is enormous.
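The drag approximation can be checked in a few lines; a minimal sketch using the figures from the text (7% arithmetic mean, 15% annual volatility):

```python
# Volatility drag: geometric ≈ arithmetic − volatility² / 2.
arith, vol = 0.07, 0.15
geo_approx = arith - vol**2 / 2      # 0.07 − 0.01125 = 0.05875, roughly 5.9%

# Over 30 years the seemingly small gap compounds into a large
# difference in terminal wealth multiples.
growth_arith = (1 + arith) ** 30     # what a single-line projection implies
growth_geo = (1 + geo_approx) ** 30  # what a volatile portfolio actually compounds to
```

The 1.1-point annual gap translates into roughly a quarter of terminal wealth over three decades, which is the sense in which the compounding difference is "enormous".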

The arithmetic trap: A portfolio that returns +50% then −33⅓% has an arithmetic average of +8⅓% per year, but a geometric return of exactly 0%. The terminal value is identical to the starting value. Path matters — not just the average.
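The trap is easy to verify directly (using −33⅓% so the round trip is exact):

```python
# +50% then −33⅓%: positive arithmetic mean, zero compound growth.
returns = [0.50, -1 / 3]
arithmetic_mean = sum(returns) / len(returns)   # (0.50 − 0.3333…)/2 = +8.33% per year

terminal = 1.0
for r in returns:
    terminal *= 1 + r                           # 1.5 × (2/3) = 1.0: back to the start

geometric = terminal ** (1 / len(returns)) - 1  # compound annual growth: 0%
```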

Whenever an advisor or planning tool feeds a single "expected return" figure into a deterministic projection, it implicitly assumes that the return is achieved smoothly, every year, without volatility. The real world delivers returns in a lumpy, path-dependent sequence that may have the same average but a radically different terminal value. A single-line projection does not capture the probability of falling 40% in year three of retirement; it does not show what a portfolio looks like at the 10th percentile of outcomes; it presents one expected-case number that is, by construction, an arithmetic extrapolation conflating the mean of a distribution with a likely outcome. Monte Carlo simulation is the correction to this structural flaw.

What Monte Carlo simulation actually does

The term "Monte Carlo" refers to a broad class of computational methods that use repeated random sampling to obtain numerical results. In the context of portfolio planning, the methodology is conceptually direct: define a return distribution for each asset class in the portfolio, specify the correlations, draw random return sequences for each period across a chosen time horizon, and compute the resulting portfolio value. Repeat this process thousands of times — 10,000 paths is a common standard — and collect the distribution of terminal outcomes.
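The procedure can be sketched in a few lines; the distribution parameters and starting balance below are illustrative assumptions, not the output of any particular tool:

```python
import numpy as np

# Parametric Monte Carlo sketch: 10,000 thirty-year paths for a portfolio
# with an assumed 7% arithmetic mean and 15% annual volatility.
rng = np.random.default_rng(42)
n_paths, n_years = 10_000, 30
mu, sigma, start = 0.07, 0.15, 1_000_000

annual_returns = rng.normal(mu, sigma, size=(n_paths, n_years))
growth = np.cumprod(1 + annual_returns, axis=1)   # compounded wealth multiple per path
terminal = start * growth[:, -1]                  # distribution of terminal values

p10, p50, p90 = np.percentile(terminal, [10, 50, 90])
```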

The return distributions can be constructed in two principal ways. The first is parametric: assume that returns follow a known distribution (most commonly normal, or Gaussian) characterised by a mean and standard deviation estimated from historical data or set as forward-looking assumptions. The second is historical bootstrapping: draw actual historical return sequences at random, with replacement, preserving the correlation structure across asset classes in each sampled period. Each approach has trade-offs. Parametric simulation is clean and controllable but depends critically on distributional assumptions — a point covered in the limitations section below. Historical bootstrapping preserves empirical fat tails and cross-asset relationships but is constrained by the length of the historical record.
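The bootstrapping variant can be sketched as follows, with a placeholder return history and an assumed 60/40 weighting standing in for real inputs; the key detail is that whole periods (rows) are resampled, so cross-asset correlation within each period survives:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for a real history of annual returns: 40 years × 2 asset classes.
hist = rng.normal([0.07, 0.03], [0.16, 0.05], size=(40, 2))

n_paths, n_years = 5_000, 30
idx = rng.integers(0, len(hist), size=(n_paths, n_years))  # sample years with replacement
sampled = hist[idx]                    # shape (paths, years, assets): rows kept intact

weights = np.array([0.6, 0.4])         # illustrative 60/40 portfolio
port_returns = sampled @ weights       # portfolio return in each simulated year
terminal = np.cumprod(1 + port_returns, axis=1)[:, -1]
```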

The output of 10,000 simulated paths is visualised as a fan chart: a band of possible portfolio trajectories that narrows near inception (all paths start from the same point) and widens with time as the paths diverge. The fan is shaded to show percentile bands — the 10th through 90th percentile corridor, the median line, and typically the 25th and 75th percentile inner band. This visual representation communicates something that a single line fundamentally cannot: the outcome is genuinely uncertain, the uncertainty compounds over time, and planning should account for the full distribution rather than a single expected value. The covariance structure underlying these simulations benefits enormously from rigorous estimation via Ledoit-Wolf shrinkage, which produces better-conditioned correlation and volatility inputs than the raw sample covariance matrix.
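The percentile bands behind a fan chart reduce to one array operation over the simulated wealth paths; this sketch uses illustrative normal returns:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.07, 0.15, size=(10_000, 30))
paths = np.cumprod(1 + returns, axis=1)     # every path starts from the same 1.0

# Percentile of wealth at each year: the 10/25/50/75/90 bands of the fan.
bands = np.percentile(paths, [10, 25, 50, 75, 90], axis=0)   # shape (5, 30)
spread = bands[-1] - bands[0]               # width of the 10th–90th corridor per year
# The corridor widens with horizon: uncertainty compounds over time.
```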

Sequence-of-returns risk: why order matters as much as average

Of all the concepts that Monte Carlo simulation makes legible, sequence-of-returns risk is perhaps the most important for investors approaching or in retirement. The arithmetic seems counterintuitive at first: how can two investors with identical 20-year average annual returns end up with dramatically different terminal wealth? The answer is that in the presence of ongoing withdrawals, the order in which returns are received materially affects the outcome in a way that the average return does not capture.

The asymmetry is structural. During the accumulation phase — when an investor is adding contributions each month — poor early returns are partially mitigated by the ability to purchase additional units at depressed prices. A severe drawdown in year two of a 30-year accumulation horizon is painful but partially self-correcting. The distribution phase reverses this entirely. Once an investor begins withdrawing from a portfolio — in retirement, or during any structured drawdown — poor early returns force the sale of units at depressed prices to fund spending. Those units are gone for good: they cannot participate in the subsequent recovery, and the portfolio is permanently impaired.

Worked Example — Sequence of Returns

Investor A retires with $1,000,000 and withdraws $50,000 at the start of each year. Returns over four years: −25%, −15%, +30%, +25%. Four-year arithmetic average: +3.75%.

Investor B retires with the same $1,000,000 and withdraws the same $50,000 per year. Returns in reverse order: +25%, +30%, −15%, −25%. Four-year arithmetic average: also +3.75%.

Investor A ends year four with approximately $771,000. Investor B ends year four with approximately $873,000 — a $102,000 difference attributable entirely to sequence. Extend this asymmetry across a 20–30 year retirement and the outcomes diverge catastrophically at the lower end of the distribution.
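Under a start-of-year withdrawal convention, the worked example reduces to a four-iteration loop:

```python
# Same returns, opposite order, identical arithmetic average — materially
# different outcomes once withdrawals begin. Withdrawals are taken at the
# start of each year, before that year's return accrues.
def simulate(start, withdrawal, returns):
    value = start
    for r in returns:
        value = (value - withdrawal) * (1 + r)
    return value

seq_a = [-0.25, -0.15, 0.30, 0.25]     # Investor A: bad years first
seq_b = list(reversed(seq_a))          # Investor B: good years first

a = simulate(1_000_000, 50_000, seq_a)
b = simulate(1_000_000, 50_000, seq_b)
gap = b - a                            # attributable entirely to sequence
```

Without withdrawals, both sequences compound to the same terminal value; the gap exists only because spending forces sales along the way.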

Monte Carlo simulation captures this risk precisely because it generates thousands of distinct return sequences, including sequences where severe drawdowns cluster early in retirement. A deterministic projection using the average return will always show the same optimistic terminal value. The simulation shows the distribution of outcomes and, critically, how many of those thousands of paths ended in portfolio depletion before the planned end date. That number — the probability of ruin — is one of the most consequential pieces of information available to a retiree, and it is invisible in any single-line projection. This is also why tail risk measurement via CVaR is a natural complement to Monte Carlo analysis: where the simulation tells you the frequency of bad outcomes, CVaR tells you the severity.

How to read a Monte Carlo output

The output of a properly constructed Monte Carlo simulation should be read as a probability distribution, not as a prediction. When 10,000 simulations are run for a portfolio and 2,200 of those paths end in ruin, the result is not a precise prediction of a 22% ruin probability. The result says that under the specified return and volatility assumptions, and under the specified spending rate, 22% of the simulated return sequences lead to portfolio depletion. This is qualitatively different from a deterministic forecast, and it should be used differently.

The three numbers to focus on are the 10th percentile planning floor, the 50th percentile central case, and the 90th percentile optimistic scenario. The 10th percentile is the most important of the three: it represents the portfolio value at which only 10% of simulated paths performed worse — it is, in practical terms, the floor against which one should stress-test spending plans. A retirement that remains solvent at the 10th percentile is reasonably robust to adverse sequences. A retirement that runs into difficulty at the 30th percentile is fragile.

The "success rate" figure that Monte Carlo tools report — "your plan has a 78% success rate" — means precisely that 78 of 100 simulated paths achieved the stated goal (most commonly, maintaining a positive portfolio balance through a specified retirement age). A success rate of 95% indicates a robust plan. A success rate of 60% requires adjustment — reduced spending, increased savings, extended working years, or some combination. The simulation does not specify which lever to pull; it specifies how hard the lever needs to be pulled.

One practical point: the success rate figure is sensitive to the assumed time horizon and spending rate. A 4% withdrawal rate on a 30-year horizon produces a materially lower success rate than a 3.5% rate on the same horizon, and the relationship is nonlinear in the tail. Small reductions in the withdrawal rate produce disproportionate improvements in the probability of success because they reduce the frequency of worst-case compounding failures. Modest spending flexibility has outsized impact on tail outcomes — one of the most practically valuable results that Monte Carlo analysis produces.
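The sensitivity can be illustrated with a small experiment under assumed parameters (normal returns, 7% mean, 15% volatility, 30-year horizon, withdrawals at the start of each year); the specific success rates are artefacts of these assumptions, but the shape of the relationship is the point:

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_years = 10_000, 30
returns = rng.normal(0.07, 0.15, size=(n_paths, n_years))

def success_rate(withdrawal_rate):
    # Fixed dollar spend, expressed per unit of starting wealth.
    # A depleted portfolio stays at zero: ruin is absorbing.
    value = np.full(n_paths, 1.0)
    for t in range(n_years):
        value = np.maximum(value - withdrawal_rate, 0.0) * (1 + returns[:, t])
    return np.mean(value > 0)

# Evaluate the same return paths at three spending levels.
rates = {w: success_rate(w) for w in (0.035, 0.040, 0.045)}
```

Because all three spending levels are evaluated on identical return paths, the success rate is monotone in the withdrawal rate by construction, and the improvement from trimming spending shows up disproportionately in the tail.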

Practical inputs and their impact on simulation quality

The quality of a Monte Carlo analysis is entirely determined by the quality of its inputs. The most consequential input is the return assumption. Forward-looking return estimates are preferable to simply extrapolating historical averages. Current valuations, yield environments, and structural conditions in markets embed meaningful information about prospective returns that the historical average ignores. Using the historical equity premium mechanically in an environment of elevated valuations and compressed credit spreads overstates the expected return and systematically understates the probability of poor outcomes.

Volatility and correlation inputs are the second most important determinant of simulation quality. The covariance matrix is estimated using Ledoit-Wolf shrinkage, a regularisation technique that improves on the sample covariance matrix by shrinking the sample matrix toward a structured constant-correlation target. The sample covariance matrix — computed directly from historical returns — is notoriously noisy when the number of assets is large relative to the number of observations. Ledoit-Wolf shrinkage produces a better-conditioned estimate that reduces the impact of estimation error on simulation outcomes. The improvement in covariance estimation translates directly to more reliable tail risk estimates in the Monte Carlo output.
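The structure of shrinkage toward a constant-correlation target can be sketched as follows. A fixed shrinkage intensity is assumed purely for illustration; the actual Ledoit-Wolf estimator derives the optimal intensity analytically from the data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Placeholder return history: 120 months × 5 assets.
returns = rng.normal(0.0, 0.01, size=(120, 5))

sample = np.cov(returns, rowvar=False)       # noisy sample covariance
std = np.sqrt(np.diag(sample))
corr = sample / np.outer(std, std)

# Constant-correlation target: the average off-diagonal correlation
# everywhere, with the sample variances kept on the diagonal.
n = corr.shape[0]
avg_corr = (corr.sum() - n) / (n * (n - 1))
target_corr = np.full_like(corr, avg_corr)
np.fill_diagonal(target_corr, 1.0)
target = target_corr * np.outer(std, std)

delta = 0.3                                  # assumed intensity, for illustration only
shrunk = delta * target + (1 - delta) * sample
```

Note that the diagonal (the variances) is unchanged by the shrinkage; only the noisy off-diagonal correlations are pulled toward the structured target.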

Spending rules and withdrawal rates interact with the simulation in important and nonlinear ways. A fixed dollar withdrawal (e.g., €80,000 per year regardless of portfolio value) produces different and generally worse tail outcomes than a variable spending rule (e.g., withdrawing a fixed percentage of current portfolio value, or using a floor-and-ceiling guardrail approach). Variable spending rules effectively build a form of automatic adjustment into the plan — when the portfolio underperforms, spending adjusts downward, reducing the probability of ruin at the cost of spending variability. The choice of spending rule is as important as the return assumption in determining success rates.
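A sketch comparing the two rule types on identical simulated paths, with an assumed 4.5% spending level; the point is structural rather than numerical — percentage spending cannot deplete the portfolio, at the cost of variable income:

```python
import numpy as np

rng = np.random.default_rng(11)
returns = rng.normal(0.07, 0.15, size=(10_000, 30))

fixed = np.full(10_000, 1.0)
variable = np.full(10_000, 1.0)
for t in range(30):
    # Fixed dollar rule: 4.5% of *starting* wealth every year, ruin is absorbing.
    fixed = np.maximum(fixed - 0.045, 0.0) * (1 + returns[:, t])
    # Variable rule: 4.5% of *current* wealth — spending falls when markets fall.
    variable = variable * (1 - 0.045) * (1 + returns[:, t])

fixed_ruin = np.mean(fixed == 0)       # some fixed-spend paths deplete entirely
variable_ruin = np.mean(variable == 0) # structurally zero for the percentage rule
```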

Finally, time horizon sensitivity deserves explicit attention. Extending the planning horizon by five years does not increase risk linearly — it increases the probability of encountering a sustained adverse sequence and amplifies the compounding effect of early drawdowns. Planning to the median life expectancy is, by construction, planning to fail half the time.

What Monte Carlo tells you that a financial plan does not

A traditional financial plan — even a sophisticated one — is a deterministic document. Monte Carlo simulation replaces the single projected path with a distribution, and in doing so surfaces four categories of insight that the deterministic plan cannot produce.

First, Monte Carlo simulation quantifies tail scenarios and ruin probability. The probability that a portfolio is depleted before the end of the planning horizon is a direct measure of how much buffer the plan contains. A plan showing 95% success has very different characteristics from one showing 75% success, even if both show the same median terminal value. The difference is entirely in the distribution of outcomes below the median.

Second, Monte Carlo simulation reveals the optimal spending rate as a function of desired confidence level. Rather than applying a rule-of-thumb withdrawal rate, the simulation allows explicit calibration: what withdrawal rate corresponds to a 90% success rate, a 95% rate, or a 99% rate? The answer depends on the specific asset allocation, return assumptions, and time horizon — it is not a universal constant. The Asset Lens tool is designed to support this analysis across different portfolio compositions.

Third, Monte Carlo simulation makes explicit the value of spending flexibility. A client who is willing to reduce spending by 10% in response to a poor sequence of returns dramatically improves the probability of success. The simulation quantifies this: spending flexibility is a form of risk management, and it can be traded directly against asset allocation risk. A more conservative portfolio paired with flexible spending often outperforms an aggressive portfolio with rigid spending at the tail of the distribution.

Fourth, Monte Carlo simulation provides the analytical foundation for comparing the impact of different risk management strategies — such as incorporating alternative allocations, adjusting asset class weights, or evaluating structured products — on the full distribution of outcomes rather than just the expected return. This connects naturally to the Black-Litterman framework for expressing return views within a disciplined optimisation, where the optimised portfolio is subsequently stress-tested through simulation.

Limitations: model risk and what Monte Carlo cannot capture

Monte Carlo simulation is a planning tool, not a forecast. Every simulation result is conditional on its assumptions, and those assumptions introduce model risk that can be as consequential as the market risk being modelled.

The most significant limitation is the distributional assumption. Most Monte Carlo implementations assume that returns are drawn from a normal (Gaussian) distribution. Empirical asset returns are not: they exhibit fat tails — extreme events occur far more frequently than a normal distribution predicts. The 2008 financial crisis, the March 2020 drawdown, and the 1987 crash all involved return realisations to which a normal distribution assigns near-zero probability. A simulation built on normality systematically underestimates the frequency and severity of tail events. Historical bootstrapping preserves some of this tail behaviour, but it is constrained by the historical record, which may not contain analogues for future tail events.
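The scale of the understatement can be illustrated by comparing a normal distribution against a fat-tailed Student-t scaled to the same variance; the degrees-of-freedom choice here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
normal_draws = rng.normal(0, 1, n)
# Student-t with 3 degrees of freedom has variance df/(df−2) = 3;
# rescale so both distributions have unit variance.
t_draws = rng.standard_t(df=3, size=n) / np.sqrt(3)

# Frequency of moves beyond 4 standard deviations under each model.
normal_tail = np.mean(np.abs(normal_draws) > 4)
t_tail = np.mean(np.abs(t_draws) > 4)
# The fat-tailed model produces 4-sigma events orders of magnitude more often.
```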

The second limitation is parameter estimation error. The return and volatility assumptions that feed the simulation are themselves estimates, subject to substantial uncertainty. A 1% error in the assumed real return has enormous consequences for 30-year projections. Monte Carlo success rates should be treated as order-of-magnitude indicators — the difference between 92% and 94% is not meaningful; the difference between 70% and 90% is. Precision in the output does not reflect precision in the inputs.

Third, Monte Carlo simulation as typically implemented does not capture structural breaks and regime changes. The simulation draws from a single stationary distribution across the entire horizon. A simulation calibrated on post-1990 U.S. data will not produce return sequences resembling the 1970s stagflation environment or Japan's post-1990 deflation. Regime-switching models address this partially, but they introduce additional layers of parameter uncertainty. The appropriate response at the advisory level is scenario analysis alongside simulation — complementing the probabilistic output with deterministic stress tests of specific adverse regimes.

None of these limitations diminish the value of Monte Carlo simulation relative to deterministic projections. Used with appropriate humility about its assumptions, and paired with scenario analysis for events outside those assumptions, Monte Carlo simulation transforms financial planning from a false-precision exercise into an honest accounting of risk and probability.