
I think it’d be interesting to discuss the relationship between effective altruism and utilitarianism (or consequentialism more broadly). These are my initial reflections on the topic, intended as a launching point for discussion rather than a comprehensive survey. (Despite the five years I spent studying philosophy - before deciding that this didn’t actually help anyone! - this post is focused on practicalities rather than precise analysis, though I’d enjoy hearing that too.)

A classification of EA views

EAs often say that their views don’t presuppose consequentialism (in its classic form, the view that an act is right if and only if it results in at least as much expected good as any other available act). And this is true for a wide range of characteristically EA views, such as that giving large sums (say 10% of your income) to charities ranked as highly cost-effective is a good thing. However it is not true for other views which some regard as part of EA, such as that small chances of astronomical effects on the sorts of lives that are brought about in the future can overwhelm any effects your actions have on people who exist now. This may not logically presuppose consequentialism, but it is generally based on it. On many moral views people who don’t exist - and especially people who wouldn’t exist but for your actions - don’t matter morally in the same way as people who do. So it is helpful to divide EA beliefs into these three categories:

  1. Those that are probably true even if consequentialism is false
  2. Those that are probably false - sometimes even repugnant! - on non-consequentialist views
  3. Those that fall into neither of the above categories (it will be a toss-up whether these beliefs are true even if consequentialism is false).

Which category a belief falls into is important. One uncontroversial reason for this is that many people are not consequentialists and that we want to convince them. Beliefs in category 1 will be the easiest sell, followed by those in category 3; beliefs in category 2 will be a tough sell.

Another reason is that consequentialism may be false. The importance of this possibility depends upon the probability we assign to it, but it must carry some weight unless it can be rejected absolutely, which is only plausible on the most extreme forms of moral subjectivism. I do not find these views credible, but going into this would be a digression, so I’ll simply flag that moral subjectivists will have a different perspective on this. I’ve also found that some other anti-realists are extremely confident (though not certain) that consequentialism is true, though it’s an open question how often this is reasonable.

When we’re concerned with convincing non-consequentialists, we will focus on the particular non-consequentialist positions that they hold, which will generally be those that are most popular. When we’re concerned with the possibility that consequentialism is false, by contrast, we should really care about whether the EA views at issue are true or false on the non-consequentialist theories that we find most plausible rather than on the theories that are most popular. For instance, if you think that you might owe special duties to family members then this is relevant regardless of how popular that position is. (That said, its very popularity may make you find it more plausible, as you may wish to avoid overconfidently rejecting moral claims that many thoughtful people accept.)

Which categories do EA views fall into?

The answer to this question depends on what non-consequentialist positions we are talking about. I’ll consider a few in turn.

First, take the position that people who don’t exist have less moral weight. There are several versions of this position, each with different implications. On one version, only people who exist matter at all; this would make far future oriented charities less promising. On another, people who don’t yet exist matter less; the implications of this depend on how much less, but in some cases effects on non-existent people won’t alter which charities are most effective. On yet another, certain sorts of people matter less - for example, those who won’t exist because you acted a certain way. This example would affect our evaluation of existential risk charities.

Second, there is a wide variety of positions which directly or indirectly reduce the relative moral weight of animals, or of people who don’t currently exist. Consequentialism (and in particular classical utilitarianism, or some aspects thereof) is plausibly the moral theory that is friendliest to these groups. For example, when it focuses on pleasure and pain it puts concern for animals on the strongest possible ground, since it is in their capacity to feel pain that animals are closest to us. So we should expect animals’ moral weight to decrease if we give some credence to these positions.

A third sort of non-consequentialist position is that we should not act wrongly in certain ways even if the results of doing so appear positive in a purely consequentialist calculus. On this position we should not treat our ends as justifying absolutely any means. Examples of prohibited means could be any of the behaviours commonly associated with wrongdoing: dishonesty, unfairness, cruelty, theft, et cetera. This view has strong intuitive force. And even if we don’t straightforwardly accept it, it’s hard not to think that a sensitivity to the badness of this sort of behaviour is a good thing, as is a rule of thumb prohibiting it - something that many consequentialists accept.

It would be naive to suppose that effective altruists are immune to acting in these wrong ways - after all, their motivation is not always that they are unusually nice or moral people. Indeed, effective altruism makes some people more likely to act like this by providing ready-made rationalisations which treat them as working towards overwhelmingly important ends, and indeed as vastly important figures whose productivity must be bolstered at all costs. I’ve seen prominent EAs use these justifications for actions that would shock people in more normal circles. I shouldn’t give specific examples that are not already in the public domain. But some of you will remember a Facebook controversy about something (allegedly and contestedly) said at the 2013 EA Summit, though I think it’d be fairest not to describe it in the comments. And there are also attitudes that are sufficiently common to not be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people.

A fourth and final non-consequentialist position to consider is that you owe special duties to family members or others who are close to you, and perhaps also to those with whom you have looser relations, such as your fellow citizens. This may limit the resources that you should devote to effective altruism, depending on how these duties are to be weighed. It may give you permission to favour your near and dear. However, it seems implausible that it generally makes it wrong to donate 10% of your income, though a non-consequentialist friend did once argue this to me (saying that taking aunts out for fancy meals should take priority).

An important question about this position is how the special duties it refers to are to be weighed against EA actions. It may be that the case for these actions is so overwhelming - because of contingent facts about the way the world happens to be, with severe poverty that can be alleviated astonishingly cheaply - that they significantly reduce the call of these duties. The sheer scale of the good that we can do does seem to provide unusually strong reasons for action. However to assume that this scale is decisive would be to fail to take seriously the possibility that consequentialism is false, because non-consequentialists are not concerned only with scale.

Taking a step back, it’s worth noting that in the above I’ve focused on the ways in which some effective altruists could go wrong if consequentialism is false. The outlook for effective altruism in general is still quite positive in this case. On most non-consequentialist views, effective altruist actions generally range from being supererogatory (good, but beyond the call of duty) to being morally neutral. These views generally consider giving to charity good and would consider taking this to EA lengths at worst misplaced, not seriously morally wrong (unless you seriously neglect those for whom you are responsible). They would hardly consider concern for animals’ wellbeing morally wrong either, even if animals have a significantly lower moral status than humans.

Ironically, the worst these views would generally say about effective altruism is that it suffers from high opportunity costs. Having mistaken what matters, an effective altruist would not pursue it effectively. But these views generally consider that supererogatory, so again the picture is not so bad. (I owe these points to Gregory Lewis.)

What’s your take?

I’d love to hear people’s take on these issues, and on the relationship between effective altruism and consequentialism more broadly, which I certainly haven’t covered comprehensively. Which of the non-consequentialist positions above do you find most plausible, and what are the implications of this? And are there other non-consequentialist positions that would have implications for effective altruists?

(I would like to thank Theron Pummer, Gregory Lewis and Jonas Müller for helpful comments.)

Comments

I don't consider myself a consequentialist, but I do support effective altruism. I don't believe a set of ethics, e.g., consequentialism as a whole, has a truth-value, because I don't believe ethics corresponds to truth. It lacks truth-value because it lacks truth-function; to ask if consequentialism is 'true or false' is a category error. That's my perspective. I used to think this was moral anti-realism, but apparently some moral anti-realists also believe consequentialism could be true. That confuses me. Anyway, I allow the possibility that moral realism might be true, and hence that consequentialism, or another normative model of the world, could also be "true". While I'm open to changing my mind on this in the future, I literally can't fathom what that would mean, or what believing it would feel like. Note that I respect positions holding that ethics or morality can be a function of truth, but I'm not willing to debate them in these comments. I'd be at a loss for words defending my position, and I doubt others could change my mind. Practically, I'll only change my mind by learning more on my own, which I intend to do.

On the other hand, I have in the past intuited about the foundations of morality more deeply than I would expect most others uneducated in philosophy to have done. I lack any formal education in philosophy. I have several friends who study philosophy formally or informally, and have received my knowledge of philosophy exclusively from Wikipedia, friends, LessWrong, and the Stanford Encyclopedia of Philosophy. Anyway, I realized that at my core I feel it's unacceptable for there to be a different morality for different people. That is, ideally, everyone would share the same morals. In practice, both out of shame and actual humility, I tend not to claim among others that my morals are superior. I let others live with their values as I live with mine. A lot of this behavior on my part may have been engendered and normalized by being raised in a pluralistic, secular, Western, democratic, and politically correct culture.

My thoughts were requested, so here's my input. I expect my perspective on ethics is weird among supporters of effective altruism, and also the world at large. So, I'm an outlier among outliers whose opinion isn't likely worth putting much weight on.

Hey Evan, your position is called non-cognitivism.

I have a good friend who is a thorough-going hedonistic act utilitarian and a moral anti-realist (I might come to accept this conjunction myself). He's a Humean about the truth of utilitarianism. That is, he thinks that utilitarianism is what an infinite number of perfectly rational agents would converge upon given an infinite period of time. Basically, he thinks that it's the most rational way to act, because it's basically a universalization of what everyone wants.

Hi Evan,

I study philosophy and would identify as a moral anti-realist. Like you, I am generally inclined to regard attempts to refer to moral statements as true or false as (in some cases) category mistakes, though in other cases I think they are better translated as cognitive but false (i.e. some moral discourse is captured by one or more error theories), and in other cases moral claims are both coherent and true, but trivial - for instance, a self-conscious subjectivist who deliberately uses moral terms to convey their preferences. Unfortunately, I think matters are messier than this, in that I don't even think ordinary moral language has any determinate commitment, much of the time, to any particular metaethical stance, so there is no uniform, definitive way of stating what moral terms even mean - because they don't mean one thing, and often simply have nothing to do with the sorts of meanings philosophers want to extract out of them. This position is known as metaethical variability/indeterminacy.

Even though I reject that morality is about anything determinate and coherent, I also endorse utilitarianism insofar as I take it to be an accurate statement of my own values/preferences.

So, I suppose you can add at least one person to the list of people who are EAs that share something roughly in line with your metaethical views.

David Moss brings up the question of why EAs are disproportionately consequentialist in the Facebook thread:

"This kinda begs the question of what consequentialism is good for and why it seems to have an affinity for EA. A couple of suggestions: consequentialism is great for i) mandating (currently) counter-intuitive approaches (like becoming really rich to help reduce poverty) and ii) being really demanding relative to (currently) standard levels of demandingess (i.e. give away money until it stops being useful; not give away £5 a month if that doesn't really detract from your happiness in any way). These benefits to consequentialism are overturned in cases where i) your desired moral outcome is not counter-intuitive (if people are already inclined to think you should never harm innocent creatures or should always be a good ally, then consequentialism just makes people have to shoulder a, potentially very difficult, burden of proof, to show that their preferred action is actually helpful in this case), ii) if people were inclined to think that something is something that you should never do, as a rule, then consequentialism just makes people more open to potentially trading-off and doing things they otherwise would never do, in the right circumstances."

These two factors may partly explain why EAs are disproportionately consequentialist, but I'm not convinced they're the main explanation. I don't know what that explanation is, but I think other factors include that:

a) consequentialism is a contrarian, counter-intuitive moral position, and EA can be too

b) consequentialism goes along with a quantitative mindset

c) many EAs were recruited through people's social circles, and the seeds for these were often consequentialist or philosophical (studying philosophy being a risk factor for consequentialism)

I agree that the core EA tenets make sense also according to most non-consequentialist views. But consequentialism might be better at activating people because it has very concrete implications. It seems to me that non-consequentialist positions are often vague when it comes to practical application, which makes it easy for adherents to not really do much. In addition, adherence to consequentialism correlates with analytical thinking skills and mindware such as expected utility theory, which is central to understanding/internalizing the EA concept of cost-effectiveness. Finally, there's a tension between agent-relative positions and cause neutrality, so consequentialism selects for people who are more likely to be on board with that.

I agree that the core EA tenets make sense also according to most non-consequentialist views.

Like which ones?

[anonymous]

Helping other people more rather than less and, consequently, the instrumental rationality of charitable giving?

This. Another core EA tenet might be that non-human animals count (if they are sentient).

Kantianism has positive duties and Kant's "realm of ends" to me sounds very much like taking into account "the instrumental rationality of charitable giving". Kant himself didn't grant babies or non-human animals intrinsic moral status, but some Kantian philosophers, most notably Korsgaard, have given good arguments as to why sentientism should follow from the categorical imperative.

Virtue ethics can be made to fit almost anything, so it seems easy to argue for the basic tenets of EA within that framework.

Some forms of contractualism do not have positive rights, so these forms would be in conflict with EA. But if you ground contractualism in reasoning from behind the veil of ignorance, as did Rawls, then EA principles, perhaps in more modest application (even though it is unclear to me why the veil of ignorance approach shouldn't output utilitarianism), will definitely follow from the theory. Contractualism that puts weight on reciprocity would not take non-human animals into consideration, but there, too, you have contractualists arguing in favor of sentientism, e.g. Mark Rowlands.

[anonymous]

I was mostly referring to the vast majority of people who are disposed, for natural and extra-rational reasons, to generally want to help people. I'm rather sceptical of subsuming the gamut of the history of moral philosophy into EA. I suppose, and it's increasingly so right now, such concerns might be incorporated into neo-Kantianism and virtue ethics; but then that's a rather wide remit, one can do almost anything with a theoretical body if one does not care for the source material. The big change is ethical partialism: until now, very few thought their moral obligations to hold equivalently across those inside and outside one's society. Even the history of cosmopolitanism, namely in Stoic and late eighteenth century debates in Germany, refuses as much: grounding particularistic duties, pragmatically or otherwise, as much as ethical impartialism.

Kant, for example, wrote barely anything on distributive justice, leaving historians to piece together rather lowly accounts, and absolutely nothing on international distributive justice (although he had an account of cosmopolitan right, namely of a right to hospitality, that is, to being able to request interaction with others who may decline except when such would ensure their demise - anticipating refugee rights, but nothing more). The most radical reading of Kant's account of distributive justice (and many reputable thinkers have concluded him to be a proto-Nozick) is that a condition of the perpetuation of republican co-legislation, itself demanded by external freedom, is the perpetuation of its constituent citizenship. The premise for which is obviously domestic. It seems that Kant did advocate a world state, at which time the justification would cross over to the global; prior to which, however, on even this most radical account, he appears to deny international distributive justice flatly.

As for Rawls, his global distributive minimalism is well-known, but probably contingently justifies altruism to his so-called burdened societies. That the veil of ignorance (which is basically the sum of its parts, and is thus superfluous to the justification, being expressly a mere contrivance to make visible its conditions) yields the two principles of justice, and not utilitarianism, is rather fundamental to it: in such a situation self-interested representative agents would not elect principles which might, given the contingent and thus unknown balance of welfare in a system, license their indigence, abuse or execution. When the conditions of justice hold, namely an economic capacity to ensure relatively decent lives for a society, then liberty is of foremost concern to persons conceived as rational and reasonable, as they are by Rawls.

[anonymous]

I suspect consequentialism and EA correlate heavily because of EA's focus on helping others, instead of making oneself or one's own actions more "moral". Focusing on helping others necessarily leads to caring about the consequences of one's actions instead of caring about how the actions reflect upon one's moral character or how moral the actions themselves are.

This "other-centeredness" is at least the reason why my own values are consequentialist.

These two factors may partly explain why EAs are disproportionately consequentialist, but I'm not convinced they're the main explanation. I don't know what that explanation is...

I would guess the simpler explanation is that (virtually all actually supported) forms of consequentialism imply EA, whereas other moral theories, if they imply anything relevant at all, tend to imply it's optional.

One exception to consequentialisms implying EA is, e.g., Randian Objectivism. And I doubt it's a coincidence that the EA movement contains a very small number of (I know of 0) Randian Objectivists ;)

I was thinking today that what self-identified consequentialists within effective altruism and the rationality movement consider common-sense conclusions of consequentialism might differ significantly from what "mainstream" consequentialists think. What I mean is that large portions of effective altruism and the rationality movement are fed from LessWrong, which has a disdain of academic philosophy that may have bled into ethics specifically.

For example, on LessWrong, concern for astronomical waste, countless future generations, and their value, is taken as obviously correct. In turn, mitigating existential risk being the right global priority is taken as a given. However, I don't recall LessWrong keeping track of how its conclusions on consequentialism compare to those of circles of consequentialist intellectuals in philosophy or political science. If I took, I don't know, a class on practical ethics, or the history of utilitarianism, at university, I'd be very surprised if professors or textbooks mentioned the looming importance of astronomical waste and existential risk reduction.

LessWrong has a slew of other mental habits, tropes, and beliefs that inform its consequentialism, which consequentialists in academic circles don't share. Intellectuals lacking the "proper" skills of rationality was the impetus for LessWrong in the first place. All this might matter because when someone cites the Future of Humanity Institute as utilitarian in its mission, this could be very confusing to other students of philosophy. Consequentialist philosophers within effective altruism will most likely appeal to other students of philosophy, consequentialist or not. Calling beliefs 'consequentialist' which aren't shared by most other 'consequentialists' would be a communication mistake.

"And there are also attitudes that are sufficiently common to not be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people." can you think about editing this please - its a view I'm worried doesn't deserve platform. It doesn't seem to be the result of consequentialist thinking, just vanity.

Explanation: If "important" were defined more precisely around specific questions, such as instrumental value to other people's welfare, it might be a way of thinking about how useful it is to spend time supporting current EA people compared to time supporting others (but even then, that's a dumb calculation because you want to be looking at a specific EA and a specific way of supporting them compared to the best available alternative). But as it stands I can't see how that's a useful thought - enlighten me if I'm wrong.

I agree that the life of an EA isn't going to be more important, even if saving that EA has greater value than saving someone who isn't an EA.

And if we're giving animals any moral weight at all (as we obviously should), the same can be said about people who are vegan.

Edited (after Tom A's comment): Maybe part of the problem is we're not clear here about what we mean by "a life". In my mind, a life is more or less important depending on whether it contains more or less intrinsic goods. The fact that an EA might do more good than a non-EA doesn't make their life more valuable - it doesn't obviously add any intrinsic goods to it - it just makes saving them more valuable. On the other hand, if we mean to include all of someone's actions and the effects of these actions in someone's "life", then the way it's worded is unproblematic.

This is nit-picky, but I think it's right. Is this what you're getting at, Tom S?

1). The way it reads, it sounds like you're talking about intrinsic value to someone not used to these discussions.

I'm not endorsing the view, just giving it as an example of one some people actually hold! At least in the cases I've had some exposure to, they're thinking of instrumental value, and of the worth of lives all things considered, not just who you should spend time supporting.

2). Doing a calc of the instrumental value of saving an individual from a group is not actually morally useful: you want to do it for an individual when it's relevant if you're thinking about instrumental value? (instrumental value can totally be a vanity thing - when would you have to save an EA's life anyway?)

I think that how we represent these arguments in writing is important to our brand as a movement; that's the thrust of my comment. It's obvious that you don't endorse it - but you are giving it a platform. You're also saying it's held by n>3 people. I think there's a cost to this and I can't see the benefit. Put me right and I'll delete this thread. :)

What about the possibility of there being EA beliefs that are false even if consequentialism is true? This is the part of the relationship between them that I find the most interesting.

One view like that is the view of people who think that no one or almost no one has a life worth living - the suffering outweighs all the good. I'm pretty sympathetic to this view, more than I am sympathetic to any of the views outlined above. And this would make EA quite wrong (when we are saving lives and not merely making them better).

Interesting view - how did you come to it? What do you say to the millions/billions that report being very happy/satisfied with life?

I didn't mean to sound like I'm committed to the view. I'm merely sympathetic to it, in the sense that I think reasonable people could disagree about this. I don't yet know if I think it's right.

Have you seen any of the empirical psychology literature suggesting that humans have evolved to be highly optimistic and to evaluate their lives as better than they actually are? That literature, combined with more common worries about evaluating happiness (I'm a hedonist), makes me worried that most people don't have lives that are good on the whole.

Thought I'd just chime in with a relevant reference, in case anyone was curious:

Diener, E., Kanazawa, S., Suh, E. M., & Oishi, S. (2014). Why People Are in a Generally Good Mood. Personality and Social Psychology Review. doi: 10.1177/1088868314544467

"Evidence shows that people feel mild positive moods when no strong emotional events are occurring, a phenomenon known as positive mood offset. We offer an evolutionary explanation of this characteristic, showing that it improves fertility, fecundity, and health, and abets other characteristics that were critical to reproductive success. We review research showing that positive mood offset is virtually universal in the nations of the world, even among people who live in extremely difficult circumstances. Positive moods increase the likelihood of the types of adaptive behaviors that likely characterized our Paleolithic ancestors, such as creativity, planning, mating, and sociality. Because of the ubiquity and apparent advantages of positive moods, it is a reasonable hypothesis that humans were selected for positivity offset in our evolutionary past. We outline additional evidence that is needed to help confirm that positive mood offset is an evolutionary adaptation in humans and we explore the research questions that the hypothesis generates."

http://psr.sagepub.com/content/early/2014/09/09/1088868314544467.abstract

Thanks for sharing! That's good to know.

Certainly chimes more with my intuition. For the curious ;-): 'adaptive functioning', or the ability to handle strong emotional events, seems to be amenable to change through practice (but there might be a selection effect worth mulling over) http://self-compassion.org/UTserver/pubs/baermeditators.pdf (don't worry, journal of Cognitive Psychotherapy, just published here to get past paywall)

Long term meditators also report feeling positive background moods that are quite dramatic https://www.youtube.com/watch?v=L_30JzRGDHI

Which doesn't necessarily change much about the debate.

[This comment is no longer endorsed by its author]

I have seen such literature, but you can get around some of the looking-back bias problems by recording how you feel in the moment (provided you aren't pressured to answer dishonestly). I am sure a lot of people have miserable lives, but I do think that when I believe I have been fairly happy for the past 4 years, it is very unlikely the belief is false (because other people also thought I was happy too).

I do think the concern about accuracy of beliefs about experience warrants finding a better way to evaluate people's happiness in general though. I think such analysis could change the way people set up surveys to measure things like QALYs. I think it is quite likely that the value of years lived disabled or with old age is better than people think.

Yeah, I think you're all-around right. I'm less sure that my life over the past two years has been very good (my memory doesn't go back much farther than that), and I'm very privileged and have a career that I enjoy. But that gives me little if any reason to doubt your own testimony.

It only makes sense to be a "selfish gene utilitarian". EA makes an error in advocating actual altruism/charity; this is irrational.

https://medium.com/effective-economics/ethics-9c74a524b6e1

Here are several more recent resources addressing the differences between effective altruism and utilitarianism/consequentialism:

"EAs often say that their views don’t presuppose consequentialism" This is interesting, because some people believe that all ethical theories can be "consequentialised". If so, any EA who thinks their view presupposes moral realism actually could be say to think their view presupposes consequentialism.

Is anyone familiar with the philosophical literature on that? My understanding is that it's controversial.

Separately, what's the connection to moral realism?

Indeed, effective altruism makes some people more likely to act like this by providing ready-made rationalisations which treat them as working towards overwhelmingly important ends, and indeed as vastly important figures whose productivity must be bolstered at all costs. I’ve seen prominent EAs use these justifications for actions that would shock people in more normal circles.

I, and others, noticed myself making such rationalizations. Also, a friend of mine who interned at an effective altruist organization in Oxford reported several anecdotes of our allies there doing the same. I consider myself equally part of effective altruism and of 'more normal circles'. I was shocked at myself, and others. Then, I read more LessWrong, and made an effort to learn more about moral psychology and ethics. I've concluded that if consequentialism as practised by humans invariably leads to, e.g., repugnant conclusions and unintended consequences, then, while consequentialism might be true, humans aren't equipped for it. So, in practice, we're going to fail our ideals.

Impressed upon me from LessWrong is that humans, including rationalists and effective altruists, can and will rationalize everything, including their values. I realized it's especially crucial for us to protect against such rationalizations because we might be more prone to them, due to being in an intellectual bubble of over-confidence and self-aggrandizement. Also, if we betray our own values, our hypocrisy revealed to ourselves feels much more damning to us than hypocrisy feels to others. Aspiring to effective altruism seems to contain a kernel of integrity and commitment that spoils the movement as a whole if we make false 'special exceptions' for our own (otherwise bad) behavior.

In practice, this has made me not want to identify as consequentialist[1]. At the very least, in practice, I'd want supporters of effective altruism to personally adhere to a form of act or rule utilitarianism. I'd want there to be a moratorium on making special exceptions for themselves (or ourselves, including me, whatever) to "bolster our productivity at all costs". I really believe that rationalizing otherwise reprehensible behavior on consequentialist grounds, combined with the over-confidence that could come with effective altruism, is a slope posing costs too high if we start rationalizing ever worse behavior.

Outside of circles around LessWrong and effective altruism, I don't call myself a consequentialist as much, because I don't want to be mistaken by explicitly non-consequentialist peers as totally in agreement with, e.g., utilitarianism. Inside these circles, I feel more comfortable expressing my actual sympathy for consequentialism. I'm not confident consequentialism should be an ultimate means for determining our moral actions. However, it seems to me a good set of heuristics, along with other moral traditions, for determining what actions are "right" in the face of otherwise inadequate moral intuition, or dogma.

[anonymous]

Meta-note:

I would advise consistently saying "humans that don't exist yet" rather than "humans that don't exist", otherwise the distinction becomes completely absurd for humans who are not already familiar with the distinction.

Read this through the eyes of a newcomling:

"First, take the position that people who don’t exist have less moral weight."

Of course beings that don't exist have no moral weight! The question is just whether a being's location in time matters for its moral importance. Change it to "exist yet" and the newcomling too will understand the difference being talked about here.

Hi Tom,

Thx for starting a discussion on moral philosophy: I find it interesting and important!

It seems to me that you're wrong when you say that assigning special importance to people closer to oneself makes one a non-consequentialist. One can measure actions by their consequences and measure the consequences in ways that are asymmetric with respect to different people.

Personally I believe that ethics is a property of the human brain and as such it

  1. Has high Kolmogorov complexity (complexity of value). In particular it is not just "maximize pleasure - pain" or something like that (even though pleasure might be a complex concept in itself).
  2. Varies from person to person and between different moments in the life of the same person.
  3. Is unlikely to assign equal value to all people, since that doesn't make much evolutionary sense. Yes, I know we are adaptation executors rather than fitness optimizers. Nevertheless, the thing we do optimize (which is not evolutionary fitness) came about through evolution and I see no reason it would be symmetrical with respect to permutations of people.

Btw, the last point doesn't mean you shouldn't give to people you don't know. It just means you shouldn't reach the point where your own family is at subsistence level.

Another reason is that consequentialism may be false. The importance of this possibility depends upon the probability we assign to it, but it must carry some weight unless it can be rejected absolutely, which is only plausible on the most extreme forms of moral subjectivism.

I don't think this is true. It's perfectly possible to find some views (e.g. 'the set of all nonconsequentialist moral views') incoherent enough as to be impossible to consider (or at least, no more so than the negation of various axioms of applied maths and logic would be), but some others to be conceivable.

I basically adhere to that (in fact thinking it of the albeit poorly defined set of 'non utilitarian moral views'); I don't know (or much care) if people would describe me as a moral realist, but I doubt anyone would accuse me of being an extreme moral subjectivist!

Btw, I'm glad to see this post, and sad that it hasn't been upvoted more. I have nothing against the more emotion-oriented content that seems to dominate the top-voted page on this forum, but it's of little interest to me. I hope we begin to see more posts examining the logic and science behind EA.

I’ve also found that some other anti-realists are extremely confident (though not certain) that consequentialism is true, though it’s an open question how often this is reasonable.

I don't understand this. To quote a guy from LessWrong:

"While I guess this could be logically possible, anyone who is not a moral realist needs to provide some kind of explanation for what exactly a normative theory is supposed to be doing and what it means to assert one if there are no moral facts."

Also, I think positions one, two, and four are in fact compatible with consequentialism. That said, your post is still useful since, whatever terminology we may use to describe them, these issues happen to be important.

Anti-realism isn't the position that there are no moral facts; that's non-cognitivism.

Tom, that isn't the only way the term "moral anti-realism" is used. Sometimes it is used to refer to any metaethical position which denies substantive moral realism. This can include noncognitivism, error theory, and various forms of subjectivism/constructivism. This is typically how I use it.

For one thing, since I endorse metaethical variability/indeterminacy, I do not believe traditional descriptive metaethical analyses provide accurate accounts of ordinary moral language anyway. I think error theory works best in some cases, noncognitivism (perhaps, though not plausibly) in others, and various forms of relativism in others. What this amounts to is that I think all moral claims are either (a) false (b) nonsense or (c) trivial; in the latter sense, by "trivial" I mean they lack objective prescriptivity, "practical oomph" (as Richard Joyce would put it) or otherwise compel or provide reasons for action independent of an agent's goals or interests. In other words, I deny that there are any mind-independent moral facts. I'm honestly not sure why moral realism is taken very seriously. I'd be curious to hear explanations of why.

In other words, I deny that there are any mind-independent moral facts. I'm honestly not sure why moral realism is taken very seriously. I'd be curious to hear explanations of why.

I think we might get to something like moral realism as the result of acausal trade between possible agents.

I'm an emotivist-- I believe that "x is immoral" isn't a proposition, but, rather, is just another way of saying "boo for x". This didn't keep me from becoming an EA, though; I would feel hugely guilty if I didn't end up supporting GiveWell and other similar organizations once I have an income, and being charitable just feels nice anyways.

[anonymous]

I'm unsure on what grounds the plausibility of non-consequentialist theories is to be judged. Insofar as they affirm distinct premises they are thus incommensurable, and consequently hold value only insofar as you indeed affirm those premises. If we hypothetically assume those premises we can judge internal coherence: do the conclusions deductively follow. Historically, a non-negligible number of evaluative theories are plausible beyond their affirmation being unreasonable; they are internally coherent and somehow move us. They cannot be satisfactorily rejected independently of rejecting their major premises, but neither can any theory of the historical set. Further, many highly influential candidates for judgement reject this moral rationalism: communitarianism, the later Rawls, Marxism, Habermasian discourse ethics, post-modernism, Rortyian liberal ironism, the political realism of Bernard Williams and Raymond Geuss, the Hellenistic sceptics, emotivism, early German Romanticism, and so forth. What are we to make of this? I am rather doubtful that one can, qua utilitarian, pass independent judgement as to the relative plausibility of the constituents of the history of moral and political philosophy.

This is borne out by your, with respect, exceedingly narrow list of plausible respects in which consequentialism might be false: nearly all of which are questions internal to utilitarian theory, of the scope and weight of the levers across which aggregative value is to be distributed, notwithstanding the mild opposition of your third point. In view of the fact that utilitarianism has markedly receded in post-Rawlsian anglophone political philosophy, that most philosophy and social theory since the linguistic turn rejects its basic structure, and that for most it fails on its own intuitionism, I would like to think there are more fundamental questions to ask than 'is agent type X a candidate for inclusion in aggregative valuation'.

To be frank, I lament the extent to which EA's ostensible ecumenicism, facilitating charitable giving without presupposing any particular normative or other grounding, quickly falls apart as soon as one interacts with the community: nearly all of whom are utilitarians, and take possession of the movement as such. That so large a proportion of discussions on this forum are ruminations on utilitarianism is indicative; but it seeps into and infects the entire identity of the movement. I think this is probably tremendously self-limiting as a social movement, and it certainly profoundly alienates me. Sometimes it seems like EA has become the latest play-thing of Oxfordian moral philosophy.

Have you had a chance to read my post of a few days back? http://effective-altruism.com/ea/e8/dorothea_brooke_an_alternative_origin_story_for/ This was a deliberate attempt to explicitly engage a broader range of philosophical backgrounds. (I'm not sure if Tom has read it or not, or if this piece was related).

I'm not sure if Tom has read it or not, or if this piece was related

I haven't had time to finish it yet, but I look forward to doing so! I'll try to comment on the relationship when I have.

[anonymous]

Yes, I read, appreciated and indeed commented upon it! I thought it was a welcome contribution to what is mostly a stagnant diversity in EA, and certainly not a humanistic one.

So you did! Sorry, that's what I get for replying late at night on my phone!

To answer my own question, I personally assign some weight to all of these positions. I find the fourth - that I owe special duties to my near and dear - particularly plausible. However I don’t find it plausible that I owe special duties to my fellow citizens (at least not to an extent that should stop me donating everything over a certain amount to the global poor). I also think that we should take the third sort of position extra seriously, and avoid taking actions that are actively wrong on popular non-consequentialist theories. An additional reason for this is that there are often good but subtle consequentialist grounds for avoiding these actions, and in my experience some consequentialists are insufficiently sensitive to them.

One thing to consider about (2) is that there are also non-consequentialist reasons to treat non-human animals better than we treat humans (relative to their interests). As one example, because humans have long treated animals unjustly, reasons of reciprocity require us to discount human interests relative to theirs. So that might push in the opposite direction from discounting animal interests due to moral uncertainty.

David Moss also mentions this in the Facebook thread:

" I think it's quite plausible that common non-consequentialist positions would support much stronger stances on non-human animals, for example, because they object to acts that constitute active harm and oppression of innocent victims etc. It's at least partly for this reason that some animal advocates have taken to self-consciously employing deontological criticisms of non-human animal suffering, that they ostensibly don't themselves believe to be true, as I understand it. "

In some cases "special duties" to family can be derived as a heuristic for utilitarianism. As a family member, you probably aren't replaceable, families tend to expect help from their members, and families are predisposed to reciprocate altruism: for many people there is a large chance of high negative utility both to yourself and family if you ignore your family. The consequences to you could be negative enough to make you less effective as an altruist in general.

For example, if you are a college student interested in EA and your parents stop paying for your degree, now you will have much less money to donate, and much less time to study if you have to pick up a job to pay your way through school.

Taking care of parents when they get older might also seem fairly non-consequentialist, but if there is a large inheritance at stake it could be the case that taking good care of your family is the highest utility thing for you to do.

As for kids, in some cases it may be possible that raising them to become effective altruists is the highest leverage thing to do. John Stuart Mill for example was raised in this manner... though I am sure he may have been quite miserable in the process:

http://en.wikipedia.org/wiki/John_Stuart_Mill#Biography

True, but I'd assume you'd agree that non-consequentialists who allow for special duties have different, and potentially stronger and more overriding, reasons.

John Stuart Mill for example was raised in this manner... though I am sure he may have been quite miserable in the process

Indeed, he had a breakdown which he put down to his upbringing, though I don't know if it was primarily due to the utilitarian aspects of this. If I recall correctly, the (deeply uncharitable) parody of such an upbringing in Dickens' Hard Times was based on Mill.

[anonymous]

This strikes me as a highly wishful and ad hoc adaptation of utilitarianism to pre-given moral dispositions, and personally, as something of a reductio.

Are you honestly suggesting the following as an inter-personal or intra-personal justification?:

"Taking care of parents when they get older might also seem fairly non-consequentialist, but if there is a large inheritance at stake it could be the case that taking good care of your family is the highest utility thing for you to do."

It follows, I suppose, if there is no inheritance at stake, that you should let them rot.

How do you justify utilitarianism? I can only hope not via intuitionism.

These are heuristics for specialized cases. In most cases you can do far more good elsewhere than you can do for your family. The case with Mill is a case where you are developing a child to help many more than you could, the case with parents is likewise a case where you are helping them to help many others via donating more than you could on your own. If we are being Kantian about this, the parents still aren't being used merely as a means because their own happiness matters and is a part of the consideration.

In cases where helping your parents helps only your parents, why not help someone else who you could help more effectively? There are more appalling counter-factual cases than letting parents rot, such as letting 10 times as many non-related people rot.

I think a fairly small set of axioms can be used to justify utilitarianism. This should get you pretty close:

-Only consequences matter.

-The only consequences that matter are experiences.

-Experiences that are preferred by beings are positive in value.

-Experiences that are avoided by beings are negative in value.

-VNM axioms

It is certainly possible to disagree with these statements though, and those who agree with them might justify them based on intuitions coming from thought experiments.
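For readers to whom the last item in the list is opaque, here is a minimal sketch of the standard von Neumann-Morgenstern result it gestures at (the exact statement of the axioms varies between presentations): if an agent's preferences over lotteries satisfy completeness, transitivity, continuity, and independence, then there is a utility function $u$ such that

$$L \succsim M \iff \sum_i p_i\, u(x_i) \;\ge\; \sum_j q_j\, u(y_j),$$

where lottery $L$ gives outcome $x_i$ with probability $p_i$ and $M$ gives $y_j$ with probability $q_j$ - that is, the preferences can be represented as maximising expected utility. The other axioms listed then say what the outcomes are (experiences) and how $u$ should score them.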

[anonymous]

Most think that one's reason for action should be one's actual reason for action, rather than a sophistic rationalisation of a pre-given reason. There's no reason to adopt those 'axioms' independent of adopting those axioms; they certainly, as stated, have no impersonal and objective force. Insofar as that reason is mere intuition, which I see no reason for respecting, then clearly your axioms are insufficient with regard to any normal person - indeed, the entire post-Rawlsian establishment of Anglophone political theory is based exactly on the comparatively moving intuition of placing the right prior to the good.

"In cases where helping your parents helps only your parents, why not help someone else who you could help more effectively?"

That rhetorically begs the question of the evaluative content of help, or that helping persons is of especial value.

Does anything have impersonal and objective force? I am rather confused as to what you are comparing to that is better. If you are just talking about forcing people to believe things, that doesn't necessarily have anything to do with what it true. If you were just comparing to Rawls, why should I accept Rawls' formulation of the right as being prior or independent from the good? You can use Rawls' veil of ignorance thought experiment to support utilitarianism (1), so I don't see how Rawls can really be a counter objection, or specifically how Rawls' arguments don't rely on evoking intuitions. I may be misunderstanding the last sentence of your first paragraph though, so I do think it is possible that you have an argument which will change my mind.

I haven't seen someone attack the VNM axioms as there are plenty of non-consequentialists who think there are good reasons for believing them. I have a feeling you are really attacking the other presented axioms, but not these.

"sophistic rationalisation of a pre-given reason" This is a pretty uncharitable jump to accusation. The statements I listed above above are not immune to attack, when convinced to drop an axiom or to adopt a different one, the ethics system advocated will change. I had different values before I became utilitarian, and my beliefs about what was of utility changed based on changes in the axioms I used to derive it.

When I was a preference utilitarian, I came across a thought experiment about imagining a preference which has no consequences in terms of experience when it is satisfied. It didn't seem like such preferences could matter: therefore I was no longer a preference utilitarian. There wasn't a pre-given reason, though intuitions were used.

If you do think there is a way to derive a good ethical theory which does not rely on appealing to intuitions at some point in the argument, I would be very interested in hearing it. =)

(note from earlier) (1) Consider what world a self-benefiting being would make from behind the veil of ignorance. The most rational thing based on its goals is to maximize expected benefit: which will align exactly with what some utilitarians will argue for.
