
Ben Auer

78 karma · Joined

Bio


President of the Effective Altruism group at the University of Melbourne.

Currently studying a BSc & Concurrent Diploma (Pure Math & Neuroscience). Hoping to work on AI alignment in the future.

Comments
17

I think this is a great idea. Just wanted to flag that we've done this with other clubs at the University of Melbourne in the past. To give some concrete examples of how this can achieve quite a lot without a huge amount of time and effort:

  • We successfully diverted $500 to GiveDirectly on one occasion, from the annual revenue of a club that raises money for charity, simply by attending their AGM and giving a presentation.
  • On another occasion, we joined as a co-host of a charity fundraiser event with several other clubs, and were allowed to select high-impact, EA-aligned charities as the recipients for the event, which ended up raising close to $1,200 in total.

I would definitely encourage EA groups at other universities to try similar things. There could be a lot of low-hanging fruit, e.g. clubs that simply haven't thought that carefully about their choice of charities before.

My understanding is that the self-effacing utilitarian is not strictly an 'ex-utilitarian', in that they are still using the same types of rightness criteria as a utilitarian (at least with respect to world-states). Although they may try to deceive themselves into actually believing another theory, since this would better achieve their rightness criterion, that is not the same as abandoning utilitarianism on the basis that it was somehow refuted by certain events. In other words, as you say, they're switching theories "on consequentialist grounds". Hence they're still a consequentialist in the sense that is philosophically important here.

Brilliant post. Thanks for writing it. I just want to add to what you said about ethics. It seems that evaluating whether an action or event is good or bad itself presupposes an ethical theory.[1] Hence I think a lot of the claims being made can be described as either (a) this event shows vividly how strongly utilitarianism can conflict with 'common-sense morality' (or our intuitions)[2] or (b) trying to follow[3] utilitarianism tends to lead to outcomes which are bad by the lights of utilitarianism (or perhaps some other theory). The first of these seems not particularly interesting to me, as suggested in your post, and the second is a separate point entirely, though it is nonetheless often presented as a criticism of utilitarianism.

  1. ^

    Someone else made this point before me in another post but I can’t find their comment.

  2. ^

    But note that this applies mostly to naive act utilitarianism.

  3. ^

    By which I mean 'act in accordance with', though it's worth noting that this is pretty underdetermined. For instance, doing EV calculations is not the only way to act in accordance with utilitarianism.

I believe the ‘walls of text’ that Adrian is referring to are mine. I'd just like to clarify that I was not trying to collapse the distinction between a decision procedure and the rightness criterion of utilitarianism. I was merely arguing that the concept of expected value can be used both to decide what action should be taken (at least in certain circumstances)[1] and to assess whether an action is or was morally right (arguably in all circumstances); indeed, this is a popular formulation of utilitarianism, which I've sketched below the footnote. I was also trying to point out that whether an action is good, ex ante, is not necessarily identical to whether the consequences of that action are good, ex post. Anyone who wants more detail can view my comments here.

  1. ^

    Although usually other decision procedures, like following general rules, are more advisable, even if one maintains the same rightness criterion.
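To make the dual role concrete, here is a minimal sketch in my own notation (an illustration, not a quotation from any source). Writing $P(o \mid a)$ for the probability of outcome $o$ given action $a$, and $V(o)$ for the aggregate value of outcome $o$:

$$\mathrm{EV}(a) \;=\; \sum_{o} P(o \mid a)\, V(o)$$

Read as a decision procedure, this says: choose the action $a$ that maximizes $\mathrm{EV}(a)$. Read as a rightness criterion, it says: $a$ is (or was) right iff no available alternative had higher $\mathrm{EV}$. The same quantity fills both roles, which is all I was claiming.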

I'm not sure I agree with this. As far as I can tell, the EA community has always been quite focused on being inclusive, kind and welcoming; see for instance this and this post from CEA, both of which are years old. I'm very sorry to hear about the OP's experiences, of course, and honestly surprised, since my own experience has been a lot more positive. However, this doesn't automatically imply to me that we need a whole new community or something to that effect.

I would see this more as an opportunity to improve our culture and address any failures that are currently happening despite the efforts of a lot of community leaders. I don't think there's a 'fundamental flaw' in how the EA community is trying to operate in that respect. Also, it seems to me that distancing the EA brand in the way you're suggesting would potentially incentivize it to become even less human and amiable, because it would then be distinguished by being the 'weird, rationalist / philosophical community'. (Not to mention that it would seemingly decrease opportunities for collaboration with the 'other community' and create confusion for those looking to get involved in EA.)

Edit: Just to be clear, I'm not making any general claims here about how successful the EA community has been in implementing the ideals I mentioned above. Obviously this post points to updating against that.

No worries. It is interesting, though, that you think my comment is a great example when it was meant to be a rebuttal. What I'm trying to say is that I wouldn't really identify as a 'utilitarian' myself, so I don't think I have a vested interest in this debate. Nonetheless, I don't think utilitarianism 'breaks down' in this scenario, as you seem to be suggesting. I think very poorly formulated versions do, but those are not commonly defended, and with some adjustments utilitarianism can accommodate most of our intuitions very well (including the ones that are relevant here). I'm also not sure what the basis is for the suggestion that utilitarianism works worse when a situation is more unique and there is more context to factor in.

To reiterate, I think the right move is (progressive) adjustment of a theory, plus moral uncertainty (where relevant), both of which seem significantly more rational than particularism. It's very unclear to me how we can know that it's 'impossible or unworkable' to find a system that would guide our thinking in all situations; indeed, some versions of moral uncertainty already seem to do this pretty well. I would also object to classifying moral uncertainty as an 'ad-hoc patch'. It wasn't initially developed to better accommodate our intuitions, but simply because, as a matter of fact, we find ourselves uncertain about which moral theory is correct (or 'preferable'), just as with empirical uncertainty.

I can't speak for others, but this isn't the reason I'm defending utilitarianism. I'd be more than happy to fall back on other types of consequentialism, or moral uncertainty, if necessary (in fact I lean much more towards these than utilitarianism in general). I'm defending it simply because I don't think that the criticisms being raised are valid for most forms of utilitarianism. See my comments below for more detail on that.

That being said, I do think it's perfectly reasonable to want a coherent ethical theory that can be used universally. Indeed the alternative is generally considered irrational and can lead to various reductios.

Hmm perhaps. I did try to address your points quite directly in my last comment though (e.g. by arguing that EV can be both a decision procedure and a rightness criterion). Could you please explain how I'm talking past you?

> No. I meant 'metaethical framework.' It is a standard term in moral philosophy. See: https://plato.stanford.edu/entries/metaethics/

I'm aware of the term. I said that because utilitarianism is not a metaethical framework, so I'm not really sure what you are referring to. A metaethical framework would be something like moral naturalism or error theory.

> Again, we do not need to bring decision theory into this. I am talking about metaethics here. So I am talking about what makes certain things morally good and certain things morally bad. In the case of utilitarianism, this is defined purely in terms of utility. And expected utility != value.

Metaethics is about questions like what would make a moral statement true, or whether such statements can even be true. It is not about whether a 'thing' is morally good or bad: that is normative ethics. And again, I am talking about normative ethics, not decision theory. As I’ve tried to say, expected value is often used as a criterion of rightness, not only as a decision procedure. That’s why the term ‘expectational’ or ‘expectable’ utilitarianism exists; it is described in various sources, including the IEP. I have to say, though, that at this point I am a little tired of restating this without receiving a substantive response.

> Compare: we can define wealth as having a high net-worth, and we can say that some actions are better at generating a high net worth. But we need not include these actions in our definitions of the term 'wealth'. Because being rich != getting rich. The same is true for utilitarianism. What is moral value is nonidentical to any decision procedure.

Yes, the rightness criterion is not necessarily identical to the decision procedure. But many utilitarians believe that actions should be morally judged on the basis of their reasonable EV, and it may turn out that this is in fact identical to the decision procedure (used or recommended). That does not mean it can’t be a rightness criterion. And let me reiterate: I am talking about whether an action is good or bad, which is different from whether a world-state is good or bad. Utilitarianism can judge multiple types of things.

Also, as I've said before, if you in fact wanted to completely discard EV as a rightness criterion, then you would probably want to adjust your decision procedure as well, e.g. to be more risk-averse. The two tend to go hand in hand. I think a lot of the substance of the dilemma you're presenting comes from rejecting a rightness criterion while maintaining the associated decision procedure, which doesn't necessarily work well with other rightness criteria.

> This is not a controversial point, or a matter of opinion. It is simply a matter of fact that, according to utilitarianism, a state of affairs with high utility is morally good.

I agree with that. What I dispute is whether it entails that the action that produced that state of affairs was also morally good. This seems to me very non-obvious. Let me give you an extreme example to stress the point:

Imagine a sadist pushes someone onto the road in front of traffic, just for fun (with the expectation that they'll be hit). Fortunately, the car that was going to hit them stops just in time. The driver of that car happens to be a terrorist who was (counterfactually) going to detonate a bomb in a crowded space later that day, but changes their mind because of the shocking experience (unbeknownst to the sadist). As a result, the terrorist is arrested by the police before they can cause any harm. This is a major counterfactual improvement in the resulting state of affairs. However, it would seem absurd to me to say that it was therefore good, ex ante, to push the person into oncoming traffic.


I'm guessing you mean 'normative ethical framework', not 'meta-ethical framework'. That aside, what I was trying to say in my comment is that EV theory is not only a criterion for a rational decision, though it can be one;[1] it is often also considered a criterion for what is morally good on utilitarian grounds. See, for instance, this IEP page.

I think your comment addresses something more like objective (or ‘plain’ or ‘actual’) utilitarianism, where all that matters is whether the outcome of an action was in fact net positive ex post, within some particular timeframe, as opposed to whether the EV of the outcome was reasonably deemed net positive ex ante. The former is somewhat of a minority view, to my knowledge, and is subject to serious criticisms. (Not least that it is impossible to know with certainty what the actual consequences of a given action will be.[2])[3]

That being said, I agree that the consequences ex post are still very relevant. Personally, I find a ‘dual’ or ‘hybrid’ view like the one described here most plausible, which attempts to reconcile the two dichotomous views. Such a view does not entail that it is morally acceptable to commit an action which is, in reasonable expectation, net negative; it simply accepts that positive consequences could in fact result from such an action, despite our expectation, and that these consequences themselves would be good, and we would be glad about them. That does not mean that we should do the action in the first place, or be glad that it occurred.[4]

  1. ^

    Actually, I don’t think that’s quite right either. The rationality criterion for decisions is expected utility theory, which is not necessarily the same as expected value in the context of consequentialism. The former concerns the utility (or 'value') with respect to the individual, whereas the latter concerns the value aggregated over all morally relevant individuals affected in a given scenario. (I sketch the contrast after these footnotes.)

  2. ^

    Also, in a scenario where someone reduced existential risk but extinction did in fact occur, objective utilitarianism would state that their actions were morally neutral / irrelevant. This is one of many possible examples that seem highly counterintuitive to me.

  3. ^

    Also, if you were an objective consequentialist, it seems you would want to be more risk-averse and less inclined to use raw EV as your decision procedure anyway.

  4. ^

    I am not intending to raise the question of ‘fitting attitudes’ with this language, but merely to describe my point about rightness in a more salient way.
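To illustrate the contrast drawn in footnote 1, here is a rough sketch in my own notation (an illustration, not taken from any source). Expected utility theory scores an action by the utility function $u_i$ of a single agent $i$, whereas the consequentialist's expected value aggregates over all morally relevant individuals $j$:

$$\mathrm{EU}_i(a) = \sum_{o} P(o \mid a)\, u_i(o), \qquad \mathrm{EV}(a) = \sum_{o} P(o \mid a) \sum_{j} u_j(o)$$

The two coincide only in the special case where agent $i$'s utility function already encodes everyone's welfare (or where $i$ is the only morally relevant individual affected).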
