Newcomb's Problem remains one of philosophy's most debated thought experiments, and a recent discussion thread has revived a pragmatic argument for resolving it: empirical outcomes. The core puzzle presents a scenario where a highly accurate predictor decides in advance whether a sealed box contains $1 million (if it predicts you will take only that box) or nothing (if it predicts you will also take a transparent second box containing $1,000). The predictor has demonstrated 99 percent accuracy in previous cases. The question is whether one should take one box or both.
The original poster argues that one-boxers walk away with more money on average than two-boxers, and therefore one should want to be a one-boxer. This framing shifts the debate from abstract decision theory to empirically observable outcomes. The argument suggests that if an experiment were run with a sufficiently accurate predictor—perhaps an AI trained on stated preferences of thousands of internet users, or a real-world scenario funded by a billionaire—the data would demonstrate that one-boxers consistently achieve better financial results.
The Case for One-Boxing
Proponents of one-boxing in this discussion emphasize the straightforward empirical logic: groups of one-boxers systematically outperform groups of two-boxers when the predictor maintains high accuracy. If the predictor correctly anticipates 99 percent of decisions, then 99 percent of one-boxers will find $1 million in their box, while 99 percent of two-boxers will find the sealed box empty and walk away with only the $1,000 from the second box. The average one-boxer receives approximately $990,000, while the average two-boxer receives roughly $11,000, almost all of it contributed by the lucky 1 percent whom the predictor misjudged. From a consequentialist perspective focused on actual monetary outcomes, becoming a one-boxer is the rational choice.
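The arithmetic behind these averages can be made explicit. A minimal sketch, assuming the canonical payoffs ($1,000,000 in the sealed box, $1,000 in the second box; the figures and function names here are illustrative, not from the discussion itself):

```python
# Illustrative expected-value calculation for Newcomb's Problem,
# assuming the canonical payoffs: $1,000,000 in the sealed box if
# one-boxing is predicted, $0 otherwise; $1,000 always in the second box.
MILLION = 1_000_000
THOUSAND = 1_000

def expected_value(strategy: str, accuracy: float) -> float:
    """Average payoff for agents following `strategy`, given a
    predictor that is correct with probability `accuracy`."""
    if strategy == "one-box":
        # Correct prediction: sealed box holds $1M. Wrong: it is empty.
        return accuracy * MILLION
    # "two-box": correct prediction leaves only the $1,000;
    # a wrong prediction yields both boxes, $1,001,000.
    return accuracy * THOUSAND + (1 - accuracy) * (MILLION + THOUSAND)

print(round(expected_value("one-box", 0.99)))   # average one-boxer
print(round(expected_value("two-box", 0.99)))   # average two-boxer
```

Note that the two-boxers' average is dominated by the rare cases where the predictor errs in their favor; the typical two-boxer still takes home only $1,000.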
This argument has intuitive appeal because it grounds philosophical reasoning in observable reality. Rather than debating the metaphysics of causation or decision-theoretic frameworks, one-boxers in this discussion point to what would actually happen if the scenario were replicated. The predictor's accuracy is the fulcrum: with sufficient predictive power, the strategy that produces better real-world results is the one agents should adopt.
Objections and Complications
Two-boxing defenders raise several objections to this argument. The most fundamental contention concerns the relevance of group averages to individual decision-making. Even if one-boxers collectively leave with more money, a two-boxer who is aware of the predictor's accuracy and their own intended two-boxing choice might reason that the predictor has already made its decision. The box is already filled or empty before the choice moment arrives. Two-boxing thus becomes a way to secure both boxes regardless of the predictor's judgment. From a causal perspective, a two-boxer's current action cannot change what the predictor did in the past.
Critics also question whether the empirical argument truly settles the underlying decision theory. Some argue that appealing to group outcomes sidesteps the fundamental issue: what should a rational individual do given that the predictor has already acted? They contend that average outcomes across populations and correct individual reasoning in a specific instance are distinct questions. A two-boxer might acknowledge that one-boxers do better on average while still maintaining that two-boxing is rational for them personally at the moment of choice.
Additionally, participants debate the realism of achieving a 99 percent accurate predictor outside the thought experiment. Real-world predictive accuracy might be substantially lower, which could invert the monetary advantage. The original poster acknowledges that an AI predictor might achieve only 80 percent accuracy in practice, yet argues this would still favor one-boxing. Whether empirical experiments could ever resolve the philosophical puzzle—or would merely relocate the disagreement—remains contested.
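The accuracy objection can also be checked directly. Under the same assumed canonical payoffs, one-boxing has the higher expected value whenever p × $1,000,000 exceeds p × $1,000 + (1 − p) × $1,001,000, which holds for any accuracy p above roughly 50.05 percent. A quick sketch (the payoffs and helper name are assumptions for illustration):

```python
# Break-even predictor accuracy under the assumed canonical payoffs.
# One-boxing wins whenever p*1_000_000 > p*1_000 + (1-p)*1_001_000.
def one_boxing_wins(p: float) -> bool:
    ev_one = p * 1_000_000
    ev_two = p * 1_000 + (1 - p) * 1_001_000
    return ev_one > ev_two

# Setting the two expected values equal:
#   1_000_000*p = 1_001_000 - 1_000_000*p  =>  p = 1_001_000 / 2_000_000
break_even = 1_001_000 / 2_000_000
print(break_even)            # ~0.5005
print(one_boxing_wins(0.80)) # an 80%-accurate predictor still favors one-boxing
```

So, on this expected-value framing, even the original poster's conceded 80 percent figure leaves one-boxing comfortably ahead; only a predictor barely better than chance would erase the advantage.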
Broader Implications
The discussion highlights a recurring tension in decision theory: the gap between actions that produce good outcomes for groups and actions that seem rational from an individual agent's perspective facing a fixed past. It also raises questions about what counts as sufficient justification for rational choice. If empirical evidence shows one outcome dominates, does that settle the matter, or do underlying causal considerations remain philosophically relevant even if they lead to worse results?
Source: reddit.com/r/changemyview