Do a hundred wrongs make one right?
How a bet over lunch changed behavioral economics forever

I want to tell you a story of a famous bet. It’s a tale of how a single lunch conversation changed the course of behavioral economics.
Imagine I propose the following. First, I will flip a fair coin. Then:
If the coin lands on heads, you win $200;
If the coin lands on tails, you lose $100.
Do you take this bet?
OK, now, let’s change things up a bit. Instead of a single bet, I’ll offer you a string of 100 such bets.
Would you take the offer now?
Take a moment to decide before you read on.
1. The origin story
This bet comes from a famous paper that Paul Samuelson wrote in the 1960s.

In it, Samuelson described a lunch conversation with a colleague at MIT. When he offered the “win $200, lose $100” bet, the colleague replied:
“I won’t bet because I would feel the $100 loss more than the $200 gain. But I’ll take you on if you promise to let me make 100 such bets.”
The colleague’s thought process was probably something like this: “If I take a single bet, there’s a 50% chance I’ll lose money. However, this isn’t a bad bet, with a positive expected value of $50. If I took 100 bets, I would win $50 per bet on average, and the probability of losing money should be very small by the Law of Large Numbers.”
Below you can see the probability distribution of gains for different values of n, the number of times the gamble is repeated (link to Colab notebook). The distribution does get better with n. With 100 bets, the probability of losing any money is less than 0.04%, while the expected gain is $5,000![1]
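If you don’t feel like opening the notebook, the loss probability and expected gain can be computed exactly from the binomial distribution. Here’s a minimal sketch in Python (my own, using only the standard library; the function name is just illustrative):

```python
from math import comb

def repeated_bet_stats(n, win=200, lose=100, p=0.5):
    """Exact loss probability and expected gain for n independent 50:50 bets."""
    total_prob_loss = 0.0
    expected_gain = 0.0
    for k in range(n + 1):  # k = number of winning flips out of n
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        gain = k * win - (n - k) * lose
        expected_gain += prob * gain
        if gain < 0:
            total_prob_loss += prob
    return total_prob_loss, expected_gain

p_loss, e_gain = repeated_bet_stats(100)
print(f"P(lose money) = {p_loss:.6f}, E[gain] = ${e_gain:,.0f}")
```

For n = 1 this returns a 50% chance of losing and a $50 expected gain; for n = 100 the loss probability drops below a tenth of a percent while the expected gain grows to $5,000.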

So, smart answer by Samuelson’s colleague, right? Well, not quite.
In his paper, Samuelson proved that his MIT colleague was irrational.
If you’re rational and you reject a single bet, then you should also reject 100 repeated bets. If you don’t want to take a single bite of a chili pepper, you probably shouldn’t take a hundred bites. Same thing with bets.
2. And this matters… why?
I know what you’re thinking: “Fun story. Economists are so petty they write papers to prove their colleagues are stupid.”
Uhm, maybe. But the story goes way beyond that.
Let’s say you’re considering how to invest your savings. You could put them into something safe (let’s say government bonds) or something risky (let’s say the stock market).
The standard advice is to buy more stocks when you’re young and more bonds when you’re older.
The reasoning is based on the same “Law of Large Numbers” intuition. Suppose you park a lot of your money in stocks. Stocks are very volatile. Hence, if you need your money a year from now, there’s a big chance you’ll have lost some of it. On average, though, stocks tend to go up. So, if you have many years ahead of you, it makes sense to tilt your portfolio towards stocks.
For instance, here’s the recommended asset allocation from Vanguard:
As you get older, the advice is to buy fewer stocks.
But if you take Samuelson’s argument seriously, then Vanguard’s advice is… wrong?[2]
3. The paradox
All in all, we have a paradox:
On the one hand, it appears to make sense to:
Accept Samuelson’s bet when it’s repeated 100 times;
Invest more in stocks when you’re young.
On the other hand, you know, math. 🤷
How do we resolve the paradox?
3.1. Samuelson’s argument
Let’s first better understand Samuelson’s argument.
Samuelson’s definition of rationality was expected utility maximization. So, let’s assume you have a function u(w) that gives the utility of wealth w. You act to maximize the expected value of this function.
Samuelson proved the following theorem:
Suppose that an expected-utility maximizer rejects a single bet for all initial wealth levels in the relevant range. Then, the expected-utility maximizer will also reject the same bet repeated n>1 times.
I will spare you the full proof (you can take a look at Samuelson’s paper—it’s honestly a fun read). However, it’s very helpful to understand the main intuition.
Suppose you start with $1,000. Let’s assume that—just like Samuelson’s colleague—you’d never take a single round of Samuelson’s bet. However, should you ever take two rounds?
First, draw your potential end-states in a decision tree:
In this tree, you start with $1,000. If the first flip comes up heads (probability 1/2), you move to $1,200; if it comes up tails, you drop to $900. Suppose you won the first round and play again: if you lose the second flip, you’re left with $1,100; if you win, you end up at $1,400.
We assume that, like Samuelson’s colleague, you’d never accept a single bet. Mathematically, this means that

\(u(w) > \frac{1}{2} u(w+200) + \frac{1}{2} u(w-100)\)

for all wealth levels w in the relevant range.
Samuelson’s insight was to work backwards. Suppose that, after playing the bet once, you end up with $1,200:
Since accepting a single bet is never optimal, your expected utility at this stage is less than u(1200)—i.e., the utility of rejecting the gamble. Similarly, if you end up at the node with $900, your expected utility is less than u(900).
Therefore, your expected utility at the beginning of the game is less than

\(\frac{1}{2} u(1200) + \frac{1}{2} u(900)\)
But that’s just the expected utility from accepting a single “win $200, lose $100” gamble at the initial wealth of $1,000! So, you’d never take two rounds of the bet.[3]
That’s, in a nutshell, the intuition behind Samuelson’s argument: Repeating the gamble n times does not reduce risk. If a single bet causes you pain, then 100 repetitions should only make your pain worse.
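The backward-induction logic is easy to check numerically. The sketch below is my own illustration, not Samuelson’s: it assumes a CRRA utility with γ = 20, an arbitrary value high enough to reject the single bet at every wealth level in the relevant range, and then confirms by dynamic programming that rejecting the two-round version is optimal too:

```python
def u(w, gamma=20):
    """CRRA utility; gamma=20 is an illustrative value that rejects the single bet."""
    return w ** (1 - gamma) / (1 - gamma)

def value(w, rounds_left):
    """Best expected utility when you may take or decline each remaining bet."""
    if rounds_left == 0:
        return u(w)
    take = 0.5 * value(w + 200, rounds_left - 1) + 0.5 * value(w - 100, rounds_left - 1)
    return max(u(w), take)  # declining ends the game at utility u(w)

# The single bet is rejected at w = $1,000...
assert 0.5 * u(1200) + 0.5 * u(900) < u(1000)
# ...and the optimal value of the two-round game is just u(1000): reject again.
print(value(1000, 2) == u(1000))  # prints True
```

The same recursion extends to any number of rounds, mirroring the induction step in Samuelson’s proof.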
3.2. Resolutions to the paradox
There are essentially three ways to resolve the paradox:
Samuelson’s colleague was rational. Samuelson was making some hidden assumptions. If you relax these assumptions, it’s all consistent with expected utility;
Samuelson’s colleague was irrational. The irrationality was to accept the repeated gamble;
Samuelson’s colleague was irrational. The irrationality was to reject the single gamble.
Let’s discuss each possibility in turn.
3.2.1. Resolution #1: Actually, it’s all rational
Samuelson’s argument is very convincing. However, there’s an assumption that’s kind of hidden.
The assumption in Samuelson’s theorem is that you reject the single bet for all initial wealth levels (or at least all wealth levels in some relevant range).[4] This turns out to be a restrictive assumption. In our example, perhaps you would reject the bet at a wealth level of $1,000 but accept it at $1,200. If that’s the case, you may reject a single bet but accept a string of two bets.
I don’t find this argument persuasive. In our two-bet example, why would a difference of $200 in your wealth have any impact on your risk attitudes? For an expected utility maximizer, the relevant concept of wealth is something like “lifetime wealth,” and for that, $200 is a rounding error.
3.2.2. Resolution #2: Irrational, don’t take repeated bets
If we reject resolution #1, we must conclude that Samuelson’s colleague wasn't an expected-utility maximizer.
We are left with two possibilities. Either the irrationality was to accept the repeated gamble or to reject the single bet.
Based on my reading of the paper, I think Samuelson thought the mistake was to take the repeated gamble. To Samuelson, the decision to reject the single gamble revealed a strong aversion to risk. It’s foolish to compound the risk by repeating the bet multiple times.
Samuelson gave two intuitive reasons why the colleague made the mistake:
Misunderstanding the Law of Large Numbers. The colleague was applying the Law of Large Numbers incorrectly. Let’s say that your gain from bet i is S_i. The Law of Large Numbers guarantees that, for large n, the average gain is close to $50:

\(\bar{S} \equiv \frac{1}{n} \left( S_1 + S_2 + \dots + S_n \right) \approx \$50\)

However, what you should care about is not the average gain but your total gain:

\(S_1 + S_2 + \dots + S_n\)

This sum does not converge to any fixed number. On the contrary, the variance of the sum increases with n, as we already saw in the graph above.
Ignoring small-probability huge-loss outcomes. If you repeat the gamble 100 times, then the worst-case outcome is losing $10,000. If it hurts to lose $100, then it should really hurt to lose $10,000. The probability of this happening is tiny but it’s there. (“Epsilon ain’t zero,” wrote Samuelson.)
Again, I do not find these takes very convincing.
Yes, the Law of Large Numbers doesn’t mean that total gains converge to a fixed number. However, the repeated gamble does have a much better risk-return profile, even if its variance is higher.
As for ignoring huge losses, if anything, people attribute excessive weight to small-probability events. So, that doesn’t seem like a likely explanation. This intuition is also at odds with Samuelson’s proof. In the proof, even if Samuelson’s colleague had won 99 bets in a row, they would still reject the 100th bet. It’s not really about low-probability huge-loss states.
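The distinction between the average and the sum—and why the repeated gamble can still look attractive—can be checked in a few lines. This sketch (mine, not Samuelson’s) reports the standard deviation of the total, the standard deviation of the per-bet average, and the exact binomial loss probability:

```python
from math import comb, sqrt

def stats(n, win=200, lose=100):
    """Per-bet sd is $150; the sum's sd grows like sqrt(n), the average's shrinks."""
    sd_single = 150.0
    sd_sum = sd_single * sqrt(n)
    sd_avg = sd_single / sqrt(n)
    # Total gain is negative iff wins * $200 < losses * $100.
    p_loss = sum(comb(n, k) for k in range(n + 1)
                 if k * win < (n - k) * lose) / 2 ** n
    return sd_sum, sd_avg, p_loss

for n in (1, 10, 100):
    sd_sum, sd_avg, p_loss = stats(n)
    print(f"n={n:3d}: sd(sum)=${sd_sum:7.0f}, sd(avg)=${sd_avg:5.1f}, P(loss)={p_loss:.4f}")
```

The sum’s standard deviation grows tenfold from n = 1 to n = 100, yet the probability of ending up in the red falls from 50% to a fraction of a percent: more variance, better risk-return profile.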
3.2.3. Resolution #3: Irrational, don’t reject attractive single bets
That leaves us with the last possibility: Samuelson’s colleague was irrational to reject the single bet.
There’s a strong case for this. Let’s say we can approximate the colleague’s utility with a constant relative risk aversion (CRRA) utility function:

\(u(w) = \frac{w^{1-\gamma}}{1-\gamma}\)

Here, γ > 0 is the coefficient of relative risk aversion (with u(w) = ln w in the limiting case γ = 1).
A quick calculation reveals something surprising: If you reject the 50:50 bet of “win $200, lose $100” at a wealth level of $50,000, then your relative risk aversion must be higher than 240! That’s extremely high, exceeding empirical estimates by 100-200x.
Such extreme risk aversion leads to wild predictions. If your relative risk aversion is 240, then you should also reject a 50:50 gamble of “win $X, lose $200” for any $X! So, for instance, you’d reject the gamble of “win $10 million, lose $200” at 50:50 odds. That seems absolutely counterfactual.[5]
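Both claims can be verified with a short calculation. The sketch below (my own) normalizes wealth by $50,000 so that the huge exponents don’t underflow, and bisects for the γ at which a CRRA agent starts rejecting the bet—the threshold it finds lands near 240:

```python
def accepts(gamma, w=50_000.0, win=200.0, lose=100.0):
    """True if a CRRA agent with coefficient gamma accepts the 50:50 bet."""
    a = 1.0 - gamma
    # Utility normalized by w**(1-gamma) so huge exponents don't underflow.
    eu_bet = 0.5 * ((1 + win / w) ** a + (1 - lose / w) ** a) / a
    return eu_bet > 1.0 / a  # compare with the utility of staying at w

# Bisect for the threshold gamma at which the bet is first rejected.
lo, hi = 1.5, 1000.0
for _ in range(100):
    mid = (lo + hi) / 2
    if accepts(mid):
        lo = mid
    else:
        hi = mid
print(f"rejection threshold: gamma = {hi:.1f}")

# At gamma = 240, even "win $10 million, lose $200" is turned down.
print(accepts(240, win=10_000_000, lose=200))  # prints False
```

The second check confirms the wild prediction: at this level of risk aversion, no upside whatsoever compensates for a 50% chance of losing $200.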
This argument is due to Matthew Rabin. In his famous “calibration theorem,” Rabin showed that the essence of the argument doesn’t hinge on any parametric assumptions. For example, if you reject the 50:50 bet of “win $200, lose $100,” then—no matter what the utility function—you’d also turn down the 50:50 bet of “win $20,000, lose $200.”
To me, that’s a convincing case that rejecting Samuelson’s original bet is inconsistent with expected utility.
3.3. Loss aversion explains it all
OK, fine, but still: It doesn’t seem wildly irrational to reject a single round but accept a string of 100 bets. If that type of behavior is inconsistent with expected utility, what’s the alternative?
The modern answer is loss aversion.
It’s tough to explain the idea of loss aversion without math. For one thing, expected utility also has a kind of “loss aversion”. The key idea, though, is that when people evaluate bets, they think about potential gains and losses relative to some reference point (such as your initial wealth). Then, there’s a kink in how you evaluate bets at the reference point, with losses having a disproportionately negative impact.
The graph below illustrates what the value function—which takes the place of the utility function—looks like under loss aversion:
With loss aversion, you may well reject a single bet but accept multiple repeated bets. Left as an exercise for the reader, as they say.[6]
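As a quick check of the exercise, take a piecewise-linear value function with a loss aversion parameter of 2.5 (an illustrative value close to empirical estimates) and evaluate the two gambles relative to initial wealth:

```python
def v(x, lam=2.5):
    """Kinked value function: gains count one-for-one, losses are scaled by lam."""
    return x if x >= 0 else lam * x

# One bet: outcomes +$200 or -$100, each with probability 1/2.
one_bet = 0.5 * v(200) + 0.5 * v(-100)  # 100 - 125 = -25 -> reject

# Two bets, evaluated as one aggregated gamble:
# +$400 (prob 1/4), +$100 (prob 1/2), -$200 (prob 1/4).
two_bets = 0.25 * v(400) + 0.5 * v(100) + 0.25 * v(-200)  # 100 + 50 - 125 = 25 -> accept

print(one_bet, two_bets)  # prints -25.0 25.0
```

The repetition helps because the intermediate outcome +$100 replaces probability mass that a single bet puts on the painful loss side of the kink.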
By the way, in this post, I've equated rationality with the maximization of expected utility. Not everyone agrees. There may be some sense in which loss aversion is rational (e.g., maybe it makes you a better negotiator). It’s also not crazy to speculate that loss aversion may have also had some evolutionary advantage. But that’s a topic for another day.
4. Conclusion
Let’s conclude.
If you’re an expected-utility maximizer, then you should never repeat a gamble that you wouldn’t accept once.
Many people, though, find repeated gambles attractive, and that’s part of the standard financial advice of “stocks for the long run.”
The most compelling resolution to this paradox, I think, is loss aversion. We really hate losing money. Repeating a positive expected-value gamble reduces the probability of loss. That’s what we like about repetition.
A different takeaway is that you should almost always accept a positive expected-value bet at modest stakes. Otherwise, you’re probably excessively risk-averse. Also, as one of my PhD instructors said, never buy insurance for your bike or smartphone. Such insurance has negative expected value at small stakes, where you should be approximately risk-neutral.
Anyway, who would have thought that a casual remark over lunch would inspire hundreds of academic papers? The next time you’re offered a bet, think twice about your reply.
Your reply may just spark the next research wave.
[1] The linked Colab notebook calculates the probability exactly using numerical methods. However, a quick way to check this number is to use the standard normal approximation.
[2] The standard portfolio allocation model in economics does say that you should invest a constant share of your wealth in stocks, independent of your age.
[3] By induction, you can extend this argument to any number of repetitions.
[4] In our example above, you need to reject the single bet for all w in [$900, $1,200].
[5] This discussion doesn’t account for inflation. Accounting for inflation, a bet of “lose $100, win $200” in 1960 would translate to a bet of roughly “lose $800, win $1,600” today. If you reject a 50:50 bet of “lose $800, win $1,600,” then your relative risk aversion must be greater than 30. At this level of risk aversion, you’d reject a 50:50 gamble of “win $X, lose $1,500” for any $X. For example, you’d reject a 50:50 gamble of “win $10 million, lose $1,500.” That’s still insane.
[6] A little help: Suppose that the value function is v(x) = x for x > 0 and v(x) = 2.5x for x < 0, where x denotes the gain relative to the initial wealth. Then, you’d reject a single bet but accept a string of two bets. A loss aversion parameter of 2.5 isn’t wild; it’s close to existing estimates (which lie around 2).
"But if you take Samuelson’s argument seriously, then Vanguard’s advice is… wrong?"
No, the key difference is that the results of the coin flips are independent of each other, while the performance of stocks show high degrees of temporal correlation. If there is a recession right before your retirement, it doesn't matter if you hold 100 different stocks (or an index fund) - it's going to severely derail your retirement plans.
That being said, I agree with the rest of the essay - you should take the bet for one coin flip if you would take it for 100 coin flips given the positive expected value, unless if $100 represents a huge fraction of your overall wealth such that the diminishing marginal returns to wealth result in negative expected utility for the bet.
>In his paper, Samuelson proved that his MIT colleague was irrational.
ROTFL. Samuelsons math does not take into account diminishing marginal utility. Nor actuarials science: https://en.wikipedia.org/wiki/Ruin_theory