EA Concepts: Sophisticated Consequentialism and EA
In many conversations, we’ve seen people ask whether EA “implies [insert idea that would sound bizarre to most people]”. This could be meant as a criticism of EA, or as a call to action by people who think the EA community should do the bizarre-sounding thing.
(Examples of things that might fit in the blank: Donating until you have only enough money to survive, caring about the welfare of wild animals, setting up a surveillance state to reduce existential risk, banning certain research areas that are rife with infohazards, etc.)
Defining Effective Altruism
Sometimes, these bizarre-sounding implications of EA are actually implications of consequentialism (the ethical theory where the right action is always the action that leads to the best consequences). Organisers may wish to provide a definition of effective altruism to highlight the ways in which it differs from consequentialism.
We like Will MacAskill’s definition of effective altruism, which is
“(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.”
Will’s article explains each part of this definition, and highlights the “misconception” that “effective altruism is just utilitarianism”:
“Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent. Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.”
This reply also applies to consequentialism more broadly and highlights that EA can be consistent with a variety of ethical positions.
Naive vs Sophisticated Consequentialism
In other cases, bizarre-sounding ideas seem justified because their immediate consequences seem good. But careful consideration of all their consequences (sophisticated consequentialism) might lead us to reject them. For example, if people in EA typically donated so much money that they could barely afford to survive, the community would probably fall apart: people would struggle to stay motivated, and we wouldn’t have the resources to share our ideas or grow the community further.
One useful role organisers can play here is to point out the indirect effects of actions that may at first seem justified, by considering questions such as: “What would happen if a large number of people acted on this principle? What would happen if EA became known for this kind of action?”
Considering Considerateness discusses possible indirect effects of the actions of the EA community, and argues that communities of altruists should be cooperative.
“A classic example is a utilitarian who lies to further altruistic ends, while failing to pay heed to negative indirect effects [...] If you behave dishonestly, people will increase their credence that others are similarly dishonest. You may also inspire others to be dishonest, through the bandwagon effect. Through both of these mechanisms, you will be undermining trust in your community [...] This means that for communities of people striving to do good, such as the effective altruism community, considerateness should be a surprisingly high priority. It could be that, in order to do the most good, they should be considerably more considerate than commonsense morality requires.”
Bizarre-sounding ideas might be excellent ideas
Some of these ideas are, in fact, very good. Kelsey Piper explains why it is crucial for our community to be open to unusual ideas in her blog post On “fringe” ideas:
“I want us to be open to the idea that our society is very wrong about important things, I want us to be supportive of efforts to care about more, and I want us to be casting a really wide net for ways we could be going wrong. Finally, to make sure all of this work stays grounded enough that it can actually help people, I want all of the above to happen only in conjunction with growth in the resources we allocate to concrete priorities.”