JDLC
Hey Daria! 3 questions from me:

  1. Why do you think this is the most effective thing people can donate funds to right now? (Why do you think it’s more effective than these charities, for example: https://www.givewell.org/charities/top-charities)
  2. What data can you provide to back this up? (Ideally numerical data/stats)
  3. How much funding would each of the organisations linked be able to use effectively?

(These are the sort of questions that readers of this forum tend to care about most, so the fact that your post doesn’t address them much is probably some/most of the reason it’s been downvoted, in case you were confused)

Answer by JDLC

I received a DM from someone who wishes to remain anonymous, but made the following points in answer to the question:

  • TLDR: The Gates funding increase is likely a large counterfactual funding increase but hardly any funding increase in absolute terms
  • The foundation currently spends ~$9bn per year. This is the outcome of a (public) decision ~3 years ago, to grow spending from ~$6bn p.a. at the time to $9bn, as steady state annual expenditure, over a period of 2-3 years
  • This new update is only a very small increase in grants. ($200bn over 20 years = $10bn p.a., an increase of only $1bn, i.e. one ninth.)
  • Since the $9bn decision, Warren Buffett withdrew his future contributions (also all public). It became clear through reporting around that time that the majority of Foundation contributions to date had actually been Buffett money, not Gates money. So one should have expected a pretty meaningful drop from the $9bn off the back of that, or for Gates to significantly step up his giving.
  • So it’s fair to say that this is a very meaningful counterfactual increase vs a world where the Foundation had dropped back down to $4-6bn.
  • It is not a meaningful increase in what the world of global health will see at all (especially once you compare the $1bn increase to the many billions of reduced spending from the US, UK, Germany, Switzerland, Belgium, etc.)
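For concreteness, the arithmetic in the points above can be checked in a few lines. (The figures are the anonymous commenter's approximations, not official Foundation data.)

```python
# Sketch of why a $200bn / 20-year pledge is a small *absolute* increase
# relative to the Foundation's current ~$9bn annual spend.
pledge_total = 200e9      # $200bn pledged (approximate, per the points above)
pledge_years = 20
current_annual = 9e9      # ~$9bn current steady-state annual spend

new_annual = pledge_total / pledge_years            # $10bn per year
absolute_increase = new_annual - current_annual     # $1bn per year
relative_increase = absolute_increase / current_annual  # ~1/9, about 11%

print(f"New annual spend: ${new_annual / 1e9:.0f}bn")
print(f"Increase: ${absolute_increase / 1e9:.0f}bn p.a. "
      f"(~{relative_increase:.0%} of current spending)")
```

The counterfactual comparison is against a world where spending fell back towards $4-6bn p.a. after the Buffett withdrawal, which is why the same $1bn reads as large counterfactually but small in absolute terms.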

Considered writing a similar post about the impact of anti-realism in EA, but I’m going to write here instead. In short, I think accepting anti-realism is a bit worse/weirder for ‘EA as it currently exists’ than you think:

Impartiality 

It broadly seems like the best version of morality available under anti-realism is contractualism. If so, this probably significantly weakens the core EA value of impartiality, in favour of only those with whom you have a ‘contract’. It might rule out spatially distant people; it might rule out temporally distant people (unless you have an ‘asymmetrical contract’ whereby we are obligated to future generations because past generations were obligated to us); and it probably rules out impartiality towards animals and other non-agents/morally incapable beings.

‘Evangelism’

EA generally seems to think that we should put resources into convincing others of our views (bad phrasing, but the gist is there). This seems much less compelling under anti-realism, because your views are literally no more correct than anyone else’s. You could counter that ‘we’ have thought more and therefore can help people who are less clear. You could counter that other people have inconsistent views (“Suffering is really bad but factory farms are fine”), but there’s nothing compellingly bad about inconsistency on an anti-realist view either.

Demandingness

Broadly, turning morality into conditionals means a lot of the ‘driving force’ behind doing good is lost. It’s very easy to say “if I want to do good I should do X”, but then say “wow, X is hard, maybe I don’t really want to do good after all”. I imagine this affects a bunch of things that EA would like people to do, and makes it much harder in practice to cause change if you outright accept it’s all conditional.

Note: I’m using Draft Amnesty rules for this comment, I reckon on a few hours of reflection I might disagree with some/all of these.

This is the most downvoted post I’ve seen on the forum so far. Why?

One key concern: the ideas all seem good, but it’s unclear to me whether any/all of them are attention hazards / opportunity costs. Even if they are good, is the resource investment counterfactually harmful?

Not sure to what extent you considered this, or what breadth of expert views/consensus this doc got in order to account for it.

(Sorry for negativity on what is a cool idea :-) )

Thanks for writing this post, currently reading as part of OSP syllabus. My thoughts below:

Epistemic Status: Pure armchair Philosophy, informed by 2 years within a uni group as participant. Will be involved with group running this year, interested to see if/how this updates any of the below.

Backchaining: This seems excellent.

Goals: On an individual level, SMART goals are amazing. I'm concerned that, on the group level, SMART goals are over-specific and counterproductive. More specifically (pun intended), the SMART framework will (almost) inevitably lead to Goodharting due to the specificity/measurability requirements.

Outsourcing: Excellent. A possible risk: signposting too many people towards specific groups/opportunities and overwhelming them.

Personal Development: I love the sentiment of "you should treat yourself like one of your members that you are responsible for helping", but disagree that practically acting towards yourself in the same manner as towards another group member is the best idea. Partly because you can't be sufficiently objective, and partly because an external/second perspective gives a lot of value. It seems to me that asking a co-leader / experienced exec to take (some) responsibility for your (the leader's) personal development is a much better way to do this.

Safeguarding Values: Love this idea. It should be a forum post if it isn't already, and I want the link if it already is!

Opportunity vs Obligation: I strongly prefer (and feel more motivated by) an opportunity framing. BUT I don't know if this is a general reaction or personal one. Perhaps both are required, and some people are much more likely to put the obligation onto themselves, whilst others need more external 'pressure' on this. Unsure if there is any research on this (quantitative or qualitative).

Socials and Development: Great. One line that struck me is "We’ll then often have a social straight after". I suspect that separating the social and the development session, but having them very close by (spatially and temporally), is significantly better than having them on different nights (say). Mainly because it helps balance the twin considerations of a social dynamic and an action focus. I don't know if this is true.

Resources: All look super useful.

What’s the best (i.e. the one that influenced you the most) criticism or development of your ‘key ideas’?

Specific papers/references/links would be ideal!

(By ‘key ideas’ I’m thinking of things like speciesism, your concept of persons, or the drowning child argument, but answer based on whatever you would yourself put in this category.)