Useless Thought Experiments
Posted by Michael Dickens on April 12, 2013
Philosophers often use thought experiments in an attempt to refute some theory. In the particular case of ethical thought experiments, philosophers’ arguments tend to take this form:
1. Consider some unlikely situation.
2. In this situation, moral philosophy X says you should do Y.
3. Y is clearly immoral.
4. Therefore, X cannot be true.
In response, people who believe X often try to refute (2)—the idea that Y follows from X. In many cases, this is a mistake. Typically, the weakest point here is (3)—the assumption that Y is immoral. Even if we intuitively feel that Y must be immoral, our intuitions often misguide us; if we want to think clearly, we must apply rationality to our judgments whenever possible. We cannot reject a moral philosophy because of a thought experiment.
Intuitional and Rational Judgments
When making ethical judgments, people tend to rely heavily on intuition. An ethical system must contain some sort of intuition as the basis for a first principle; but it is important to select the right intuition. Most people who have not studied ethics (and even many who have) tend to act on whatever intuition they happen to feel in the moment, even if it contradicts some previous feeling.
When establishing an ethical theory, one should choose a few basic intuitions and then use rational principles to develop a consistent philosophy.
Once one selects some ethical framework, one can no longer make statements such as “Y is clearly immoral.” It must be proven immoral within the framework, not by our limited and often-inconsistent instincts.
Below, I discuss three thought experiments that have been used to reject utilitarianism, and why they fail at this purpose.
The Repugnant Conclusion
The Repugnant Conclusion is stated as follows:
Consider two worlds, A and B. In world A, there exists a small number of people who are all very happy. In world B, there exist more people who are each less happy, but the total utility is still greater than that of world A. According to classical utilitarianism, B is a better world than A.
Now consider a world C that has more people than B, each of whom is less happy, but where the total utility is greater than that of B. Supposedly, C is better than B.
Keep descending like this until we reach a world with many people whose lives are barely worth living. This world is “clearly” (i.e. intuitively) worse than world A.
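The descent can be made concrete with a toy calculation. The populations and happiness levels below are my own illustrative numbers, chosen only to satisfy the stated conditions; Parfit's argument does not depend on any particular values.

```python
# Illustrative (population, average happiness) pairs for the descent
# described above. The numbers are invented; all that matters is that
# each world has more people, lower average happiness, and a greater
# total than the one before it.
worlds = {
    "A": (1_000, 100.0),        # few people, all very happy
    "B": (10_000, 11.0),        # each less happy, but greater total than A
    "C": (100_000, 1.2),        # less happy still, greater total than B
    "Z": (10_000_000, 0.013),   # many lives barely worth living
}

for name, (pop, avg) in worlds.items():
    print(f"World {name}: total utility = {pop * avg:,.0f}")

# Total utility rises at every step, so classical (total) utilitarianism
# ranks each successive world above the last: Z > C > B > A.
```

Each step looks innocuous in isolation; only the comparison between the endpoints A and Z triggers the intuitive revulsion.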
For a somewhat more detailed description and some diagrams, see “Mere Addition Paradox”.
This is an interesting thought experiment because it effectively outlines numerous reasons why we cannot rely on intuition.
Most total utilitarians I have seen do not believe this is a problem, although many others try to create strange and convoluted alternatives to total utilitarianism to avoid the repugnant conclusion. (Some even go so far as to reject transitivity of happiness comparisons, which has to be one of the most absurd ethical positions ever espoused. If you’re rejecting mathematical axioms to make your ethical system work, you’re probably doing something wrong.)
It often surprises me how far some philosophers—ostensibly, seekers of truth—will go to avoid a little emotional uneasiness with their own theories. The whole point of creating an ethical framework is to find answers to moral questions in situations where our intuitions leave us blind. If we always try to mold our framework to fit every little intuition, then why bother with a framework at all? Why not just do whatever feels right, and give no regard to any abstract philosophical theory? Why not just live with the cognitive dissonance of arbitrary intuitions and forget about trying to behave consistently?
If we accept a few basic premises, we must accept that the best world is the world with the most happiness and least suffering. Therefore, it must be better to create a world with more people who are each less happy as long as the total amount of happiness increases. One should not be willing to reject the basic premises of utilitarianism just because one feels a little uncomfortable about the so-called Repugnant Conclusion.
The argument against the Repugnant Conclusion is not purely theoretical. There exists strong scientific evidence that our intuitions mislead us in this case.
It is a well-documented fact that humans do not deal well with changes in scope. This scope insensitivity causes us to misunderstand the significance of large numbers. One cannot coherently discuss the Repugnant Conclusion without understanding scope insensitivity. (If you have not read the linked article, please do so—here it is again.)
My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living. In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow – otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we’re obliged to take care of them in a way that we wouldn’t be obliged to create them in the first place – and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to hear the news that such a person existed, we shouldn’t kill them, but we should not voluntarily create such a person in an otherwise happy world. So each time we voluntarily add another person to Parfit’s world, we have a little celebration and say with honest joy “Whoopee!”, not, “Damn, now it’s too late to uncreate them.”
And then the rest of the Repugnant Conclusion – that it’s better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating – is just “repugnant” because of standard scope insensitivity. The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations.
From The Lifespan Dilemma.
See also: Torture vs. Dust Specks.
People even fail to properly account for scope when only considering their own personal well-being: see The Lifespan Dilemma.
The Utility Monster
Another favorite is the Utility Monster:
Suppose there exists a being called the Utility Monster that gets more pleasure per unit of resources than anyone else, and does not suffer from diminishing marginal utility. According to utilitarianism, we should give all our resources to the Utility Monster and spend our entire lives as slaves to it.
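A toy calculation shows why the allocation tips entirely toward the monster. The utility functions below are invented for illustration: ordinary people get logarithmic (diminishing) returns from resources, while the monster's returns are linear and steep.

```python
import math

def human_utility(resources):
    # Diminishing marginal utility: each extra unit is worth less
    # than the one before (marginal utility is at most 1, at zero).
    return math.log(1 + resources)

def monster_utility(resources):
    # Constant, steep marginal utility: every unit is worth 100,
    # no matter how much the monster already has.
    return 100 * resources

budget = 10.0

# Option 1: split the budget evenly among ten humans.
split_among_humans = 10 * human_utility(budget / 10)

# Option 2: give everything to the monster.
all_to_monster = monster_utility(budget)

print(split_among_humans, all_to_monster)
# Because the monster's marginal utility (100) always exceeds any
# human's (at most 1), a total-utility maximizer gives it every unit.
```

The conclusion does not depend on these particular functions: as long as the monster's marginal utility exceeds everyone else's at every allocation, maximizing the total means giving it everything.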
The Utility Monster is illustrated by this webcomic.
The concept of the Utility Monster is supposed to be a reductio ad absurdum of utilitarianism, but I have no problem with the concept. If such a being exists, and utilitarianism says we should devote our lives to improving its welfare, then I absolutely agree that we should.
Surely this thought produces a negative visceral reaction. It used to have that effect on me, too, but I’ve gotten used to it. The fundamental principles of utilitarianism are too important to throw out the window just because a thought experiment makes me feel uncomfortable.
If we create an extreme thought experiment that we cannot hope to have an intuition about, and it conflicts with our intuitions, this proves what exactly? . . . If we consider utilitarianism as a suggested morality, then it doesn’t matter at all what our intuition is at the conclusion. Most people are born with an intuition that heavy objects will fall faster than lighter ones, and that the sun goes around the Earth once a day. Intuition simply doesn’t lead to truth. -rudster
Our intuitions properly guide us most of the time, but they do not help us much when we encounter novel situations. Our intuitional understanding of how objects move makes quantum mechanics harder to comprehend, and our perception of time makes relativity seem counterintuitive. If we existed as very tiny or very fast-moving beings, our intuitions would be suited to these situations; similarly, if we existed in a universe where beings do not experience diminishing marginal utility, our intuitions would better help us understand the Utility Monster.
Additionally, I find that I cannot grok the idea of a being that does not have diminishing marginal utility. Every time I try to imagine what it would be like to be the Utility Monster, I think, “Well, after Bill Gates gives me all his money, I won’t really find any value in getting money from Jane down the street.” It seems practically impossible not to think this way. But that’s not how the Utility Monster works.
And what is it like to get more pleasure per unit of resources than anyone else? This is even harder to comprehend than a being with no diminishing marginal utility. In short, we cannot rightly make the intuitional judgment that the Utility Monster is wrong if we cannot even properly conceive of it.
I’m surprised though that when discussing the [Repugnant Conclusion] and [Utility Monster] alongside, you don’t point out how closely related they are – each is conceptually a reductio of rejection of the other. Think having loads of just-barely-happy people sounds horrible? Then you must support condensing them into fewer more-happy-people. One entity getting all the utility is unjust? Then you presumably prefer it if we divide it into numerous proportionately-less-happy entities.
If our intuitions reject a world where all the utility becomes concentrated in a single individual, and they also reject a world where utility is spread out thinly among many individuals, this demonstrates as clearly as anything why we should not rely on them so heavily. When we consider the Repugnant Conclusion and the Utility Monster in unison, it becomes painfully obvious that our intuitions are not internally consistent.
Kill One to Save Many
There are many examples of moral dilemmas in which you must choose between killing one and letting many die. One of the most striking such scenarios is the organ transplant dilemma:
You are a transplant surgeon with five patients who each need a different organ: one needs a heart, one needs a lung, one a pancreas, one a kidney, and one a liver. You have no organ donors, and each of these people is on the verge of death. You are in your office, trying to figure out what to do, when a healthy man walks in for a checkup. You could kill him while he’s sleeping and harvest his organs, saving your five dying patients. Should you do it?
This was the hardest dilemma for me to come to terms with, until I remembered that we don’t base ethical decisions on our emotional reactions to thought experiments.
If this scenario exists in isolation, utilitarianism clearly dictates that you should kill the patient to save the five. But in reality, things are quite different. You would almost certainly be charged with murder and be thrown in jail for the rest of your life. More importantly, you would be unable to continue your role as a doctor, unable to help those who most need it. Even if you weren’t convicted of murder, no one would trust you anymore and they’d refuse treatment out of fear that you’d kill them. Knowledge of this event would spread, causing many to become fearful of doctors even when they most need medical treatment. In the long run, killing the one man to save the five would have greater negative consequences than positive.
Furthermore—and this point is worth repeating—you should choose a school of moral philosophy based on its foundations, not based on how well it accords with all of your intuitions. There is no moral philosophy that fully aligns with intuition, because intuition is internally inconsistent.
Moral instinct is grounded in reality. Thought experiments where you have to kill one person to save many—such as the organ transplant dilemma—seem somewhat absurd because they have so little connection to reality. A better thought experiment would be, “If you could save dozens of animals in factory farms for less than $100, would it be good to do so?” Such a thought experiment reflects an actual choice that exists in the real world—I suppose this makes it less of a thought experiment and more of a reality experiment. However, it is still enlightening to consider because many people simply gloss over questions such as these. Real-life ethical questions such as this one do not encounter the same failures that many thought experiments do.
So let’s stop trying to use thought experiments to argue against an ethical theory. Instead, let’s consider the theory’s principles and its applications to the real world.
 To many, it may not be immediately obvious that moral intuitions are inconsistent. Here are some examples:
1. Sometimes morality is based on rules (murder is always wrong, no matter what), and sometimes it is based on consequences (lying is wrong, unless of course you have a really good reason). It cannot be both.
2. Morality lives in the world, which means there is no reason a priori to distinguish between killing and letting die (it may not be clear that the latter assertion follows from the former, but I have no room to justify it here and almost any consequentialist would agree). But killing is wrong and letting die is not.
3. The Repugnant Conclusion is bad and the Utility Monster is also bad (this essay explains these, and why our intuitions contradict each other in this case).
A thoughtful reader can probably come up with more examples.
 Interestingly, once I accept that my intuitions are wrong, they tend to fade. I used to feel a strong negative reaction to the thought of the Utility Monster being real, but now I just accept it in the same way that I accept The Monty Hall Problem—it used to seem unintuitive, but my intuitions have adapted to better model reality.
 It may appear that utilitarianism is also inconsistent if it endorses both the Utility Monster and the Repugnant Conclusion, but this is not the case. In both hypothetical scenarios, the option that our intuitions tend to reject is the option of greatest utility. For the Utility Monster, a single individual has more happiness than everyone else combined; for the Repugnant Conclusion, many individuals have more happiness than a few individuals.
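The footnote's point can be checked with a pair of toy calculations. The numbers are invented, but in each scenario the option our intuitions reject is the one with the greater total utility, so utilitarianism's two verdicts are consistent with each other.

```python
# Utility Monster: one being holds nearly all the happiness.
monster_world = 1_000_000 + 100 * 1   # monster plus 100 barely-happy people
egalitarian_world = 101 * 5_000       # same population, modest equal shares

# Repugnant Conclusion: many people with lives barely worth living.
crowded_world = 10_000_000 * 0.013    # huge population, tiny average happiness
small_happy_world = 1_000 * 100.0     # few people, all very happy

# In both cases, the intuitively repugnant option has the larger total.
print(monster_world > egalitarian_world)
print(crowded_world > small_happy_world)
```

Both comparisons favor the intuitively rejected world, so a total utilitarian endorses both conclusions for the same single reason: greater total utility.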