Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
🔸10% Pledge #54


Norms = social expectations = psychological pressure. If you don't want any social pressure to take the 10% pledge (even among EAs), what you're saying is that you don't want it to be a norm.

Now, I don't think the pressure should be too intense or anything: some may well have good reasons for not taking the pledge. The pressure/encouragement from a username icon is pretty tame, as far as social pressures go. (Nobody is proposing a "walk of shame" where we all throw rotten fruit and denounce the non-pledgers in our midst!) But I think the optimal level of social pressure/norminess is non-zero, because I expect that most EAs on the margins would do better to take the pledge (that belief is precisely why I do want it to become more of a norm -- if I already trusted that the social environment was well-calibrated for optimal decisions here, we wouldn't need to change social norms).

So that's why I think it's good, on the Forum and elsewhere, to use the diamond to promote the 10% pledge.

To be clear:

(1) I don't think the audience "being familiar" with the pledge undercuts the reasons to want it to be more of a norm among EAs (and others).

(2) The possibility that something "might not be the right decision" for some people does not show that it shouldn't be a norm. You need to compare the risks of over-pledging (in the presence of a norm) to the risks of under-pledging (in the absence of a norm). I think we should be more worried about the latter. But if someone wants to make the comparative argument that the former is the greater risk, that would be interesting to hear!

I think that's kind of the whole point of Giving What We Can? It's trying to change social norms in a more generous direction, which requires public signaling from people who support (and follow) the proposed 10% norm. (Impact doesn't just come from sharing abstract info - as if anyone were strictly unaware that it would be possible for them to donate 10% - but also from social conformity, wanting to be more like people we like and respect, etc.) I think the diamond icon is great for this purpose.

Sometimes people use "virtue signal" in a derogatory sense, meaning a kind of insincere signal of pseudo-virtue: prioritizing looking good over doing good. But it doesn't have to be like that. Some norms are genuinely good -- I think this is one -- and signaling your support for those norms is a genuinely good thing!

Fair point - updated accordingly. (The core point remains.)

re: "being an actual cause", is there an easy way to bracket the (otherwise decisive-seeming) vainglory objection that MacAskill raises in DGB of the person who pushes a paramedic aside so that he can instead be the actual (albeit less competent) cause of saving a life?

we had several completely different vaccines ready within just a single year.

Possibly worth flagging: we had the Moderna vaccine within two days of genome sequencing - a month or so before the first confirmed COVID death in the US. Waiting a whole year to release it to the public was a policy choice, not a scientific constraint. (Which is not to say that scaling up production would have been instant. Just that it could have been done a lot faster, if the policy will was there.)

My impressions: I was very struck by how intellectually incurious and closed-minded Alice Crary was about EA (though this wasn't surprising given her written work on the topic). She would respond to Peter's points by saying things like, "That all sounds very reasonable, so you just must not really be an EA, as I use the term." I had the strong impression she'd never actually spoken to an EA before.

Her overarching framing took the form of a dilemma: either EA is incapable of considering any evidence beyond RCTs (this seemed to be her core definition of EA), or else there is nothing distinctive about EA. Her underlying reasoning, as emerged at a few points, was that EA doesn't tend to fund the (self-evidently good) social justice advocacy of her political allies. The only possible explanation is that EA is blinded by an RCT-obsessed methodology. (Extrapolating a bit from her written work: Demands for evidence constitute moral corruption because proper moral sensitivity lets you just see that her friends' work ought to be funded.) EA is grievously harmful (again, by definition), because it shifts attention and resources (incl. the moral passions of the smartest college students) away from social justice activists. As such, it ought to be "abolished".

In my question, I tried to press her on whether she saw any "moral risks" to her opposition to EA. (In particular, since less effectiveness-focus would predictably lead to fewer donations to anti-malarial charities, is she at all concerned that her advocacy could result in more children dying of malaria?) She offered a politician-style non-response, one that in no way acknowledged that trade-offs are real, or that there could be any possible downsides to abolishing EA. I was not impressed.

Fortunately, Peter did a great job of pushing back against all this, clarifying that:

  • RCTs are great, but obviously not the only kind of evidence. EA is about evidence, not just about RCTs. (Some projects can be quite speculative. Peter stressed that expected value reasoning can be quite open to "moonshots".) Still, it is important to do follow-ups and be guided by evidence of some sort, because otherwise you risk overinvesting in debacles like PlayPumps.
  • If there's evidence that justice-oriented groups are doing work that really does a lot of good, then he'd expect EA orgs to be open to assessing and funding that.
  • Before GiveWell came along, charities weren't really evaluated for effectiveness. Charity Navigator used financial metrics like overhead ratios, which are entirely disconnected from the actual impact of a charity's programs. Insofar as others are now starting to follow GiveWell's lead and consider effectiveness, EA deserves credit for that.

You might like my 'Nietzschean Challenge to Effective Altruism':

The upshot: I’ll argue that there’s some (limited) overlap between the practical recommendations of Effective Altruism (EA) and Nietzschean perfectionism, or what we might call Effective Aesthetics (EÆ). To the extent that you give Nietzschean perfectionism some credence, this may motivate (i) prioritizing global talent scouting over mere health interventions alone, (ii) giving less priority to purely suffering-focused causes, such as animal welfare, (iii) wariness towards traditional EA rhetoric that’s very dismissive of funding for art museums and opera houses, and (iv) greater support for longtermism, but with a strong emphasis on futures that continue to build human capacities and excellences, and concern to avoid hedonistic traps like “wireheading”.

P.S. I think you mean to talk about 'ethical theory'. 'Metaethics' is a different philosophical subfield entirely.

To be clear, I'm all in favor of aiming higher! Just suggesting that you needn't feel bad about yourself if/when you fall short of those more ambitious goals (in part, for the epistemic benefits of being more willing to admit when this is so).

I agree with all this. If any Forum moderators are reading this, perhaps they could share instructions for how to update our display names? (Bizarrely, I can't find any way to do this when I go to edit my profile.)

That's an interesting case! I am tempted to deny that this (putative unconscious desire to be near the ocean) is really a mental state at all. I get that it can be explanatorily convenient to model it as such, using folk (belief-desire) psychology, but the same is true of computer chess programs. I'd want to draw a pretty sharp distinction between the usefulness of psychological modelling, on the one hand, and grounds for attributing real mental states, on the other. And I think it's pretty natural (at least from a perspective like mine) to take consciousness to be the mark of the mental, such that any unconscious state is best understood as mere information-processing, not meaningful mentality.

That's an initial thought, anyway. It may be completely wrong-headed!
