
80,000 Hours just released an analysis of a survey we conducted at the EA Leaders Forum in August 2017. It is likely to be of significant interest to people here:

What are the most important talent gaps in the effective altruism community, and other survey questions

To keep all the comments in one place, it would be helpful if you could comment on the post itself!

Hope you find this useful,

The 80,000 Hours team.


[Disclaimer: Rob, 80k's Director of Research, and I briefly chatted about this on Facebook, but I want to make a comment here because that post is gone and more people will see it here. Also, as a potential conflict-of-interest, I took the survey and work at an organization that's between the animal and far future cause areas.]

This is overall really interesting, and I'm glad the survey was done. But I'm not sure how representative of EA community leaders it really is. I'd take the cause selection section in particular with a big grain of salt, and I wish it were more heavily qualified and discussed in different language. Counting the organizations surveyed and the number of respondents per organization, my personal tally is that 14 respondents were at meta organizations, 12.5 at far future organizations, 3 at poverty organizations, and 1.5 at animal organizations. My guess is that a similar distribution holds for the 5 unaffiliated respondents. So it should be no surprise to readers that meta and far future work were most prioritized.* **

I think we shouldn't call this a general survey of EA leadership (as, e.g., the title of the post does) when it's so disproportionate. The inclusion of more meta organizations makes sense, but there are poverty groups like the Against Malaria Foundation and Schistosomiasis Control Initiative, as well as animal groups like The Good Food Institute and The Humane League, that seem to meet the same bar for EA-ness as far future groups that were included, such as CSER and MIRI.

Focusing heavily on far future organizations might be partly due to selecting only organizations founded after the EA community coalesced. While that seems like a reasonable criterion (among several possibilities), it also seems biased towards far future work, both because that's a newer field and because it's the criterion that happens to sync up conveniently with 80k's cause prioritization views. Also, the ACE-recommended charity GFI was founded explicitly on the principles of effective altruism after EA coalesced: their team says so quite frequently, and as far as I know, the leadership all identifies as EA. Perhaps you're using a criterion more like social ties to other EA leaders, but that's exactly the sort of bias I'm worried about here.

Also, the EA community as a whole doesn't seem to hold this cause prioritization view (http://effective-altruism.com/ea/1e5/ea_survey_2017_series_cause_area_preferences/). Leadership can of course deviate from the broad community, but this is just another reason to be cautious in weighing these results.

I think your note about this selection is fair:

  • "the group surveyed included many of the most clever, informed and long-involved people in the movement,"

and I appreciate that you looked a little at cause prioritization for relatively unbiased subsets:

  • "Views were similar among people whose main research work is to prioritise different causes – none of whom rated Global Development as the most effective,"
  • "on the other hand, many people not working in long-term focussed organisations nonetheless rated it as most effective"

but it's still important to note that you (Rob and 80k) personally favor these two areas strongly, which seems to create a big potential bias. We should also be very cautious of groupthink in a community like ours, where updating based on the views of EA leaders is highly prized and recommended. I know the latter is a harder concern to get around with a survey, but I think it should have been noted in the report, ideally in the Key Figures section. And as I mentioned at the beginning, I don't think this should be discussed as a general survey of EA leaders, at least not when it comes to cause prioritization.

This post certainly made me more worried personally that my prioritization of the far future could be more due to groupthink than I previously thought.


Here's the categorization I'm using for organizations. It might be off, but it's at least pretty close. ff = far future

  • 80,000 Hours (3): meta
  • AI Impacts (1): ff
  • Animal Charity Evaluators (1): animal
  • Center for Applied Rationality (2): ff
  • Centre for Effective Altruism (3): meta
  • Centre for the Study of Existential Risk (1): ff
  • Charity Science: Health (1): poverty
  • DeepMind (1): ff
  • Foundational Research Institute (2): ff
  • Future of Humanity Institute (3): ff
  • GiveWell (2): poverty
  • Global Priorities Institute (1): meta
  • Leverage Research (1): meta
  • Machine Intelligence Research Institute (2): ff
  • Open Philanthropy Project (5): meta
  • Rethink Charity (1): meta
  • Sentience Institute (1): animal/ff
  • Unaffiliated (5)
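
For transparency, here's a minimal Python sketch of how I arrive at the 14 / 12.5 / 3 / 1.5 tally above; it assumes a split label like "animal/ff" counts half toward each area, and it leaves the 5 unaffiliated respondents out of the category totals:

```python
from collections import defaultdict

# (respondents, categories) per organization; my categorization, not the survey's.
orgs = [
    ("80,000 Hours", 3, ["meta"]),
    ("AI Impacts", 1, ["ff"]),
    ("Animal Charity Evaluators", 1, ["animal"]),
    ("Center for Applied Rationality", 2, ["ff"]),
    ("Centre for Effective Altruism", 3, ["meta"]),
    ("Centre for the Study of Existential Risk", 1, ["ff"]),
    ("Charity Science: Health", 1, ["poverty"]),
    ("DeepMind", 1, ["ff"]),
    ("Foundational Research Institute", 2, ["ff"]),
    ("Future of Humanity Institute", 3, ["ff"]),
    ("GiveWell", 2, ["poverty"]),
    ("Global Priorities Institute", 1, ["meta"]),
    ("Leverage Research", 1, ["meta"]),
    ("Machine Intelligence Research Institute", 2, ["ff"]),
    ("Open Philanthropy Project", 5, ["meta"]),
    ("Rethink Charity", 1, ["meta"]),
    ("Sentience Institute", 1, ["animal", "ff"]),
]

totals = defaultdict(float)
for _name, respondents, categories in orgs:
    for category in categories:
        # An org with a split label contributes half its respondents to each area.
        totals[category] += respondents / len(categories)

print(dict(totals))
# {'meta': 14.0, 'ff': 12.5, 'animal': 1.5, 'poverty': 3.0}
```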

*The 80k post notes that not everyone filled out all the survey answers, e.g. GiveWell only had one person fill out the cause selection section.

**Assuming the reader has already seen other evidence, e.g. that CFAR only recently adopted a far future mission, or that people like Rob went from other cause areas towards a focus on the far future.

Hey Jacy, thanks for the detailed comment. With EA Global London on this weekend, I'll have to be brief! :)

One partial response is that even if you don't think this is fully representative of the set of all organisations you'd like to have seen surveyed, it's still informative about the groups that were included. We list the orgs that were surveyed and point out who wasn't near the start of the article, so people understand who the answers represent:

"The reader should keep in mind this sample does not include some direct work organisations that some in the community donate to, including the Against Malaria Foundation, Mercy for Animals or the Center for Human-Compatible AI at UC Berkeley."

You can take this information for whatever it's worth!

As for who I chose to sample: on any definition there's always going to be some grey area, orgs that almost meet that definition but don't quite. I tried to find all the organisations with full-time staff that i) were a founding part of the EA movement, or ii) were founded by people who identify strongly as part of the EA community, or iii) are now mostly led by people who identify more strongly as part of the EA movement than any other community. I think that's a natural grouping, and I don't view AMF, MfA or CHAI as meeting that definition (though I'd be happy to be corrected if there's a group that does meet it whose leadership I'm not personally familiar with).

The main problem with that question, in my mind, is the underrepresentation of GiveWell, which has a huge budget and is clearly a central EA organisation. The participants from GiveWell gave me one vote to work with but didn't provide quantitative answers, as they didn't have a strong or clear enough view. More generally, people in the sample who specialise in one cause were more inclined to say they didn't have a view on which fund was most effective, and so didn't answer the question (which is reasonable, but could bias the answers).

Personally, like you, I give more weight to the views of specialist cause priorities researchers working at cause-neutral organisations. They were more likely to answer the question and are singled out in the table with individual votes. Interestingly, their results were quite similar to the full sample's.

I agree we should be cautious about all piling on to the same causes and falling for an 'information cascade'. That said, if the views in that table are a surprise to someone, it's a reason to update in their direction, even if they don't act on that information yet.

I'd be very keen to get more answers to this question, including from folks at direct work orgs, and also to increase the sample at some organisations that were included in the survey but where few people answered that question (GiveWell most notably). With a larger sample we'll be able to break the answers down more finely to see how they vary by subgroup, and to weight them by organisation size without giving single data points huge leverage over the result.

I'll try to do that in the next week or two once EAG London is over!

Thanks for the response. My main general thought here is just that we shouldn't demand so much of the reader. Most people, even most thoughtful EAs, won't read the post in full and come up with all the qualifications on their own, so it's important for article writers to include those themselves, and to put them front and center in their articles.

If you wanted to spend a lot of time on "what causes do EA leadership favor," one project I see as potentially really valuable is compiling a list of arguments/evidence and getting EA leaders to vote on their weights: sort of a combination of 80k's quantitative cause assessment and this survey. I think this is a more ideal form of peer-belief aggregation because it reduces the effects of dependence. For example, if Rob and Jacy both prioritize the far future entirely because of Bostrom's calculation of how many beings could exist in it, then we'd come up with that single argument having a high weight, rather than with two people highly favoring the far future. We might try this approach at Sentience Institute at some point, though right now we're more focused on just coming up with the lists of arguments/evidence in the field of moral circle expansion, so instead we did something more like your 2017 survey of researchers in this field. (Specifically, we would have researchers rate the pieces of evidence listed on this page: https://www.sentienceinstitute.org/foundational-questions-summaries)

That's probably not the best approach, but I'd like a survey approach that somehow tries to minimize the dependence effect. A simpler version would be to just ask for people's opinions but then have them rate how much they're basing their views on the views of their peers, or to ask for their view and confidence as if they'd never heard peer views, though that sort of approach seems more vulnerable to bias than the evidence-rating method.
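
To make the dependence point concrete, here's a hypothetical Python sketch of what the evidence-rating aggregation could look like; the raters, argument names, and weights are entirely made up for illustration:

```python
from collections import defaultdict

# Hypothetical ratings: each rater assigns a weight (0-1) to each argument.
# Names and numbers are invented purely to illustrate the aggregation.
ratings = {
    "rater_a": {"bostrom_future_beings": 0.9, "givewell_cost_effectiveness": 0.4},
    "rater_b": {"bostrom_future_beings": 0.8, "givewell_cost_effectiveness": 0.5},
}

# Naive aggregation would count two headline "far future" votes. Rating the
# arguments instead reveals that both votes rest on one shared argument,
# which shows up as a single heavily weighted item rather than two opinions.
sums, counts = defaultdict(float), defaultdict(int)
for weights in ratings.values():
    for argument, weight in weights.items():
        sums[argument] += weight
        counts[argument] += 1

averages = {arg: sums[arg] / counts[arg] for arg in sums}
print(averages)
# {'bostrom_future_beings': 0.85, 'givewell_cost_effectiveness': 0.45}
```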

Anyway, have fun at EAG London! Curious if anything that happens there really surprises you.
