There’s this odd tendency in folk-morality and folk-reasoning to conflate individuals and collections of individuals.
Buying lottery tickets is, for the most part, irrational[1]. If you press people on why they play when it’s a losing proposition—well, the real answer, I think, is that they get some intangible benefit from being able to hope they get rich for a short while, but the answer they tend to give is that someone has to win, so why not them?
Obviously this is junk as a probability analysis, and it’s an example of the sort of thing I’m talking about. They’re conflating themselves as someone—who has an extremely low probability of winning—with someone as an existential operator on the population of lottery ticket buyers.
Proper analysis: there exists some person (in the population of lottery ticket buyers) such that that person wins the lottery, and I am some person, but these need not have a high probability of being the same, and the probability is in fact very low—too low, generally, to justify the costs.
Bad analysis: there exists some person such that that person wins the lottery, and I am some person, ergo—something? I have a high probability of winning? Buying lottery tickets is a good value proposition?
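To make the proper analysis concrete, here’s a quick expected-value sketch. The ticket price, jackpot, and odds below are made up for illustration (roughly lottery-shaped, but not any real game’s numbers):

```python
# Hypothetical lottery: $2 ticket, $100 million jackpot,
# 1-in-300-million odds. All figures are illustrative.
ticket_price = 2.00
jackpot = 100_000_000
p_win = 1 / 300_000_000

# Expected value of one ticket: the tiny chance of the jackpot,
# minus the certain cost of the ticket.
expected_value = p_win * jackpot - ticket_price
print(f"{expected_value:.2f}")  # prints -1.67: a loss on average
```

Yes, *someone* wins, but the expected value of *your* ticket is still negative; the existential claim about the population never transfers to the individual.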
Another example is that once, when someone found out I don’t vote[2], they tried to convince me to with an argument like this: what if everyone did that?
Well, I don’t know what, if anything, the Constitution prescribes in the event that no one votes—presumably it would create some kind of crisis—but who cares? That’s never going to happen.
We can interpret this argument slightly more charitably:
You have some non-zero preference for one major party over the other.
It would be bad if all the voters from the party you prefer didn’t vote.
If some action A would be bad if everyone in some relevant demographic did it, and you are in that demographic, you shouldn’t do A.
Ergo, you should vote.
But still, who cares? I don’t buy that “If everyone did [thing you’re doing], that would be bad” implies “[thing you’re doing] is bad.” That’s only true if you conflate individuals with collectives.
Like, sure. I’ll concede that if I were a master hypnotist who had millions of people under my thrall, doing exactly what I do, then I’d have to take into account how millions of other people will behave in response to what I do. Or if I were a celebrity, or if there were some kind of social signalling issue here.
But I’m just one insubstantial guy. Whether or not I vote won’t affect whether millions of other people do the same thing. Put another way, if they were going to not vote, they were going to not vote regardless of what I did; if they were going to vote, they were going to vote regardless of what I did. I am completely powerless to change the outcome.
I’ve argued before that utilitarians ought to eat meat (if they like eating meat). I think that the utilitarian’s immediate skepticism toward this argument is based, in part, on the tendency to conflate people with collectives. If every utilitarian ate meat, it might prove a substantial boon for the factory farms, which might slaughter more animals in response to the increased demand. If one utilitarian eats meat, I don’t see how the massive factory farm industry will be able to discern any difference in demand[3].
Even if you think there’s a signalling problem, and that I shouldn’t broadcast this argument to my audience for fear of tipping the scales (which would only matter if I were Scott Alexander), my point is that the same argument goes through in each audience member’s private moral calculus, the one they run all the time without articulating it.
You ever play the video game Pikmin? It’s a lot of fun. Your character, Olimar, crash lands his spaceship onto a hostile planet where he meets sentient carrots he names Pikmin that follow him around and do his bidding as he tries to reconstruct his ship and escape.
But people aren’t Pikmin. Nobody—nobody I know, anyway—has a hundred sentient carrots following them around mimicking them. To the extent that someone like Scott Alexander might actually have people who mimic what he says and does just like Pikmin, yes, I concede he ought to take that into account.
But for the rest of us average schlubs, doing a purely private moral calculus, we don’t need to worry about Pikmin.
I don’t deny that our ethical systems ought to be universalizable. Otherwise, they’re not really deserving of the name. If you’re a utilitarian, you ought to wonder what the world would be like if everyone were a utilitarian. If you’re a deontologist or virtue ethicist or whatever, same.
I also don’t deny that if your individual action does have some non-zero effect on morally salient outcomes, you should consider that in your private moral calculus.
I do deny that individual actions need to be universalizable. Imagine how silly that would be, if we required it!
“Being a CEO is wrong. If everyone were a CEO, we wouldn’t have any workers.”
“Being a worker is wrong. If everyone were a worker, we wouldn’t have any CEOs.”
And it would license things like driving on the left side of the road in the USA because, after all, if everyone drove on the left side of the road there’d be no problems!
“An action A is acceptable if and only if it would be acceptable if everyone did A,” is obviously a bad metric.
A more charitable interpretation of this take on universalizability might be something like the following: an action A is acceptable in situation S if, given any situation S’ with an analogous fact pattern to S, action A would also be acceptable in S’. But that just seems like a restatement of something we take to be trivially true in ethics, i.e., that underlying moral principles ought to apply universally even when contexts change.
Let’s analyze an example situation.
Yes, if everyone feeds the bear at the zoo, as one of the propaganda examples from a children’s book goes, he might get sick. Here’s how to analyze that situation privately in a sort of flow-chart method:
Question 1: will my throwing the bear some food impact whether he gets sick or not?
If the answer is yes, do not throw the food. If not, go to Question 2.
Question 2: will my throwing the bear some food have any negative impact at all on his health that he wouldn’t already get from other people throwing food?
If the answer is yes, do not throw the food. If not, go to Question 3.
Question 3: is there a signalling problem? In other words, if (and only if) I throw the bear food that might not hurt him on its own, will other people join in and throw enough food to hurt him (or hurt him further)?
If the answer is yes, do not throw the food. If the answer is no, throw the food.
In other words, I claim your individual action is acceptable as long as it passes the following three tests: you haven’t caused harm yourself, you haven’t made any existing harm worse, and your action hasn’t resulted in other people causing harm. It’s hard to see how such an individual action could be bad without conflating individuals and collectives. I realize that’s a bit much for a children’s book, but just because the truth is complicated doesn’t mean we should propagandize kids into thinking the wrong thing.
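The three-question flow chart above can be sketched as a small decision function. The parameter names are mine, just restating the questions:

```python
def may_throw_food(causes_sickness: bool,
                   adds_marginal_harm: bool,
                   triggers_others: bool) -> bool:
    """Private moral calculus for feeding the zoo bear.

    Question 1: will my throwing food impact whether he gets sick?
    Question 2: will it add any harm he wouldn't already get from
                other people throwing food?
    Question 3: is there a signalling problem, i.e. will my throwing
                food lead others to join in and hurt him?
    Throwing the food is acceptable only if all three answers are no.
    """
    if causes_sickness:
        return False  # Question 1: you'd cause the harm yourself
    if adds_marginal_harm:
        return False  # Question 2: you'd make existing harm worse
    if triggers_others:
        return False  # Question 3: you'd cause harm via imitators
    return True

# One schlub whose scrap neither sickens the bear, adds nothing to
# the harm already done, nor inspires imitators may throw the food.
print(may_throw_food(False, False, False))  # True
print(may_throw_food(False, False, True))   # False: a Pikmin problem
```

Note that the collective outcome (a sick bear) never enters the function directly; only the marginal effects of your own action do.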
[1] There may be rare occasions when the expected value of playing the lottery is positive.
[2] On an expected value basis. The odds of any one individual’s vote mattering to the outcome are so low as to make voting not worth the time and effort, particularly if, as I do, you live in a solidly blue or red state. For a contra view on this, read the excellent paper Why You Should Vote to Change the Outcome.
[3] Even if you don’t buy into this argument, utilitarians ought to steal meat and eat it given the opportunity to do so without getting caught. I don’t see how a utilitarian can wiggle out of that one.
This seems like saying that any free action that directly affects another person's freedom must be justified by reasons. The golden rule and universalizability are just guides for examining the reasonableness of our actions.
Universalizable principles can be justified if they are specific enough, or at least given a reasonable interpretation. It's principles that need to be universalizable, not actions. You need principles; you can't judge actions on their own.
For instance, with voting, you might say you have a duty to vote if you have a non-zero preference for one choice over another, but no such duty if the chance of your vote affecting the election is lower than the chance that you'll die on the way to the polls. This principle may still be universalizable without licensing extreme outcomes like no one voting.