
(This post was written by Kerry Vaughan and Larissa Hesketh-Rowe with contributions from other members of the CEA staff)

There has been discussion recently about how to approach building the EA community, in light of last weekend’s EA Summit and this post on problems with EA representativeness and how to solve it. We at CEA thought it would be helpful to share some of our thinking on community building and representativeness in EA.

This post comprises four sections:

  1. Why work to build the EA community? - why we prioritize building the EA community and think this is a promising area for people to work in.
  2. The challenge of prioritization - how prioritizing some activities can present challenges for community building and representativeness.
  3. CEA supports other community builders - how we can do better by working with other organizations and individuals.
  4. Our views on representativeness in EA - why we believe EA should be cause-impartial, but CEA’s work should be mostly cause-general, and involve more description of community priorities as they are.

Why work to build the EA community?

Ultimately, CEA wants to improve the world as much as possible. This means we want to do things that evidence and reason suggest are particularly high impact.

In order to make progress on understanding the world, or in solving any of the world’s most pressing problems, we are going to need dedicated, altruistic people who are thinking carefully about how to act. Those people can have a much higher impact if they are guided by and can add to cutting-edge ideas, have access to the necessary resources (e.g. money) and can coordinate with one another.

Due to this need, we think one way we can have significant impact is by building a global community of people who have made helping others a core part of their lives, and who use evidence and reason to figure out how to do so as effectively as possible.

This is why we consider working on building the EA community a priority path for those looking to have an impact with their career. Work done to bring people or resources into the community, or to help build our ideas and coordination capacity, can multiply our impact several-fold even if we later change our minds about which problems are most pressing.

(You can see some considerations against working on EA community building here.)

The challenge of prioritization

CEA’s challenge is prioritization. Given that we have a finite amount of money, staff and management capacity, we have to choose where to focus our efforts. CEA cannot do everything that the EA community needs alone.

This year, we’ve been primarily focusing on people who have already engaged a lot with the ideas and community associated with effective altruism, so that we can better understand what those people need and help them put their knowledge and dedication to good use. We think of this as analogous to focusing at the bottom of a marketing funnel and getting to know our “core users”.

In practice, this has meant focusing on projects like running smaller retreats for people who are already highly engaged with EA and putting more attention on a smaller number of local groups, rather than trying to provide broad support to many.

Our plan has been to get these projects up and running and reliably doing valuable work before expanding our support further up the funnel. At this point, however, we are starting preparations to do more higher up the funnel, including running a broader range of events, funding more projects, and supporting more local groups. To achieve these new goals, we've recently been looking to hire community specialists, events specialists, and an EA Grants evaluator.

Inevitably, focusing on one area means deprioritizing other things that would also add a lot of value to the EA community. We try to mitigate some of the costs of prioritization by helping other groups provide that support instead.

CEA supports other community builders

We generally encourage members of the EA community to get involved in building the EA community, especially in areas that are valuable but currently not prioritized by CEA. Because CEA is currently management and staff-constrained, the easiest way for us to support others is with funding, branding, and expertise.

Some actions we’ve taken (or plan to take) to support the work of others include:

  • Providing more than $650,000 to groups and individuals doing local community building (in progress).
  • Re-launching EA Grants applications to the public with a £2,000,000 budget and a rolling application process (to be launched by the end of October 2018).
  • Helping groups run EAGx conferences in their local areas by providing the brand, funding (both for the event and a stipend to organizers), and advice (this year we supported events in Australia, Boston, and the Netherlands).
  • Supporting Rethink Charity’s work on LEAN with a $50,000 grant (grant provided).
  • Supporting Charity Entrepreneurship’s work to build new EA charities with a $100,000 grant (grant currently being finalized).
  • Supporting the LessWrong 2.0 team with a $75,000 grant.
  • Supporting the EA Summit with a $10,000 grant.

There’s certainly more we can do to support the work others are doing, and we’ll be on the lookout for more opportunities in the future.

The EA Summit

A recent example of our support for non-CEA community building efforts is the EA Summit, which took place last weekend. The EA Summit was a small conference for EA community builders, incubated by Paradigm Academy with participation from CEA, Charity Science, and the Local Effective Altruism Network (LEAN), a project of Rethink Charity.

In late June, Peter Buckley and Mindy McTeigue approached Kerry and Larissa to discuss their concerns around a growing bias towards inaction in the EA community and a slowdown in efforts to build a robust, thriving EA community. We decided that these were important problems and that the EA Summit was a good mechanism for addressing them, so we were happy to support the project.

The largest consideration against supporting the Summit was that it was incubated by Paradigm Academy, which is closely connected to Leverage Research. We concluded that this was not a compelling reason to avoid supporting the conference: the EA Summit was a transparent project of clear value to the EA community.

Three CEA staff members attended the conference, with Kerry delivering the closing keynote. Our impression was that the conference was a success. Despite being organized on short notice, the event had over 100 attendees, was well run, and ended with an excellent party. Attendees seemed to come away with the message that there are useful projects they can work on that CEA would support, and overall had overwhelmingly positive things to say about the conference.

However, the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community. We will address this in a separate post in the near future. [Edit: We decided not to work on this post at this time.]

EA and Representativeness

One area the EA Summit aimed to address was concern about representativeness in EA, most recently raised by Joey Savoie. The question of how CEA should represent the EA community is one we’ve thought about and discussed internally for some time. We plan to write a separate post on this, but here is an outline of our thinking so far. We believe the EA Forum should be a place for everyone to share and build upon ideas and models, so we’d love to see discussion of this here.

On representativeness, our current view is that:

  1. The EA community should be cause-impartial, but not cause-agnostic.
  2. CEA’s work should be broadly cause-general.
  3. Some of CEA’s work should be descriptive of what is happening in the community, but some of our work should also be prescriptive, meaning that it is based on our best guess as to what will have the largest impact.
  4. We’re unsure who our work should be representative of.
  5. While we took some steps to address representativeness prior to Joey’s post, we welcome suggestions on how we can improve.

The EA community should be cause-impartial:

EA is about figuring out how to do the most good and then doing it. This means we don't favor any particular beneficiaries, approaches or cause areas from the start, but instead select causes based on an impartial calculation of impact (cause-impartiality). This in turn means we should be both seeking to reduce our uncertainty about the relative impact of different causes and seeking to find new areas that could potentially be even more important (see Three Heuristics for Finding Cause X for some ideas on how this might be done).

Success for the EA community should include a strong possibility that we learn more, change our minds, and therefore no longer work on causes that we once thought were important.

CEA’s work should be broadly cause-general:

The reasons we have an EA community, instead of separate communities focused on specific causes, are:

  1. We don’t know for certain what causes are most important and we may discover a new Cause X in the future.
  2. We don’t know for certain which approaches to existing causes are most important and we may discover new approaches in the future.
  3. Despite our uncertainty, we can take actions that are useful across many causes.

CEA’s work should be broadly beneficial regardless of one’s views on the relative importance of different causes. This is why our mission is to build the EA community. We believe our comparative advantage lies in finding and coordinating with people who can work on important problems.

CEA’s work as both descriptive and prescriptive:

While most of our work is cause-general, there will be cases where we have opportunities to support work in particular cause areas that we currently believe are likely to have the highest impact.

We think it is therefore helpful to make a distinction between aspects of CEA’s work that are descriptive and those that are more prescriptive.

Descriptive work aims to reflect what is actually happening in the EA community; the kinds of projects people are working on and issues people are thinking about. The EA Newsletter is a clear example of this because it includes updates from around the community and from a variety of EA and EA-adjacent organizations.

Other aspects of CEA’s work should be prescriptive, meaning that they involve taking a view on where the community should be headed or on what causes are likely to be most important. For example, CEA’s Individual Outreach team does things like help connect members of the community with jobs we consider high-impact.

In forums where CEA is providing a resource to the entire EA community (for example, the EA Forum, Effective Altruism Funds, or events like EA Global), our work should tend towards being more descriptive.

We’re unsure who our work should be representative of:

One challenge is that it's unclear what reference class we should use when trying to make our work more representative.

On one extreme, we could use all self-identifying EAs as the reference class. This has the downside of potentially requiring that our work address issues that expert consensus indicates are not particularly important.

On another extreme, we could use the consensus of community leaders as the relevant reference class. This has the downside of potentially requiring that our work not address the issues that the overwhelming majority of community members actually care about.

The best solution is likely some hybrid approach, but it’s unclear precisely how such an approach might work.
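
To make "hybrid approach" slightly more concrete, here is a minimal illustrative sketch (our own toy illustration, not a proposal; every cause label and number below is hypothetical): represent each reference class as a distribution of attention across causes, and blend the two with a tunable weight.

```python
# Toy sketch of a "hybrid" reference class: blend the cause-priority
# distribution of all self-identifying EAs with that of community leaders
# using a tunable weight. All causes and numbers are hypothetical.

def blend(all_eas, leaders, leader_weight):
    """Convex combination of two cause-priority distributions."""
    causes = sorted(set(all_eas) | set(leaders))
    return {
        c: (1 - leader_weight) * all_eas.get(c, 0.0)
           + leader_weight * leaders.get(c, 0.0)
        for c in causes
    }

# Hypothetical survey shares (fractions of attention, summing to 1).
all_eas = {"global poverty": 0.45, "animal welfare": 0.25, "long-term future": 0.30}
leaders = {"global poverty": 0.20, "animal welfare": 0.15, "long-term future": 0.65}

# leader_weight = 0.5 gives both groups equal say.
print(blend(all_eas, leaders, leader_weight=0.5))
# e.g. {'animal welfare': 0.2, 'global poverty': 0.325, 'long-term future': 0.475}
# (values may show small floating-point noise)
```

The hard part, of course, is choosing the weight: that choice is itself a judgment call about whose views should count for how much.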

Soliciting a wider range of viewpoints:

Although we think we should do more to address representativeness concerns, we had already taken some steps on this front prior to Joey's post.

These included:

  • Consulting ~25 advisors from different fields about EA Global content (already in place).
  • Changing the EA handbook to be more representative (in progress).
  • Selecting new managers for the Long-Term Future and EA Community Funds (in progress).

We do, however, recognize that when consulting others it’s easy to end up selecting for people with similar views, and that this can leave us with blind spots in particular areas. We are thinking about how to expand the range of people we get advice from. While we cannot promise to enact all suggestions, we would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Comments

We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Here is my two cents. I hope it is constructive:


1.

The policy is excellent but the challenge lies in implementation.

Firstly I want to say that this post is fantastic. I think you have got the policy correct: that CEA should be cause-impartial, but not cause-agnostic and CEA’s work should be cause-general.

However I do not think it looks, from the outside, like CEA is following this policy. Some examples:

  • EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.

  • You explicitly say on your website: "We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview."[1]

  • The example you give of the new EA handbook

  • There is a close association with 80,000 Hours, which is explicitly focusing much of its effort on the far future.

These are all quite subtle things, but collectively they give an impression that CEA is not cause-impartial (that it is x-risk focused). Of course this is a difficult thing to get correct. It is difficult to strike the balance between saying 'our staff members believe cause ___ is important' (a useful fact that should definitely be said) and putting across a strong front of cause impartiality.


2.

Suggestion: CEA should actively champion cause impartiality

If you genuinely want to be cause impartial I think most of the solutions to this are around being super vigilant about how CEA comes across. Eg:

  • Have a clear internal style guide that sets out to staff good and bad ways to talk about causes

  • Have 'cause impartiality' as a staff value

  • If you do an action that does not look cause impartial (say EA Grants mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

  • Public posts like this one setting out what CEA believes

  • If you want to do lots of "prescriptive" actions, split them off into a sub-project or a separate institution.

  • Apply the above retroactively (remove lines from your website that make it look like you are only future focused)

Beyond that, if you really want to champion cause impartiality you may also consider extra things like:

  • More focus on cause prioritisation research.

  • Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.


3.

Being representative is about making people feel listened to.

Your section on representativeness feels like you are trying to pin down a way of finding an exact number so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Things like the EA handbook should (as a lower bound) have enough of a diversity of causes mentioned that the broader EA community does not feel misrepresented but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (Eg. with the EA handbook both groups should feel comfortable handing this book to a friend.) Although I do feel a bit like I have just typed 'just do the thing that makes everyone happy' which is easier said than done.

I also think that "representativeness" is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The % of content on different topics is only part of that. The other parts of the solution are:

  • Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.

  • Listening -- ie. consulting publicly (or with trusted parties) wherever possible.

If anything getting these two things correct is more important than getting the exact percentage of your work to be representative.


Sam :-)


[1] https://www.centreforeffectivealtruism.org/a-three-factor-model-of-community-building

[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.

[anonymous]:

Thanks Sam! This is really helpful. I'd be interested in talking on Skype about this sometime soon (just emailed you about it). Some thoughts below:

Is longtermism a cause?

One idea I've been thinking about is whether it makes sense to treat longtermism/the long-term future as a cause.

Longtermism is the view that most of the value of our actions lies in what happens in the future. You can hold that view and also hold the view that we are so uncertain about what will happen in the future that doing things with clear positive short-term effects is the best thing to do. Peter Hurford explains this view nicely here.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement. Yet, I also think there are substantial and reasonable disagreements about what that means practically speaking. I'd be in favor of us working to ensure that people entering the community understand the details of that disagreement.

My guess is that while CEA is very positive on longtermism, we aren't anywhere near as positive on the cause/intervention combinations that longtermism typically suggests. For example, personally speaking, if it turned out that recruiting ML PhDs to do technical AI safety work didn't have a huge impact I would be surprised but not very surprised.

Threading the needle

My feeling as I've been thinking about representativeness is that getting this right requires threading a very difficult needle because we need to optimize against a large number of constraints and considerations. Some of the constraints include:

  • Cause areas shouldn't be tribes -- I think cause area allegiance is operating as a kind of tribal signal in the movement currently. You're either in the global poverty tribe or the X-risk tribe or the animal welfare tribe, and then people tend to defend the views of the tribe they happen to be associated with. I think this needs to stop if we want to build a community that can actually figure out how to do the most good and then do it. Focusing on cause areas as the unit of analysis for representativeness entrenches the tribal concern, but it's hard to get away from because it's an easy-to-understand unit of analysis.
  • We shouldn't entrench existing cause areas -- we should be aiming for an EA that has the ability to shift its consensus on the most pressing problems as we learn more. Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder.
  • Cause-impartiality can include having a view -- cause impartiality means that you do an impartial calculation of impact to determine what to work on. Such a calculation should lead to developing views on what causes are most important. Intellectual progress probably includes decreasing our uncertainty and having stronger views.
  • The view of CEA staff should inform, but not determine our work -- I don't think it's realistic or plausible for CEA to take actions as if we have no view on the relative importance of different problems, but it's also the case that our views shouldn't substantially determine what happens.
  • CEA should sometimes exercise leadership in the community -- I don't think that social movements automatically become excellent. Excellence typically has to be achieved on purpose by dedicated, skilled actors. I think CEA will often do work that represents the community, but will sometimes want to lead the community on important issues. The allocation of resources across causes could be one such area for leadership although I'm not certain.

There are also some other considerations around methods of improving representativeness. For example, consulting established EA orgs on representativeness concerns has the effect of entrenching the current systems of power in a way that may be bad, but that gives you a sense of the consideration space.

CEA and cause-impartiality

Suggestion: CEA should actively champion cause impartiality

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial. Cause-impartiality means that you do an impartial calculation of the impact of the cause and act on the basis of that. This is certainly what we think we've done when coming to views on specific causes although there's obviously room for reasonable disagreement.

I would find it quite odd if major organizations in EA (even movement building organizations) had no view on what causes are most important. I think CEA should be aspiring to have detailed, nuanced views that take into account our wide uncertainty, not no views on the question.

Making people feel listened to

I broadly agree with your points here. Regularly talking to and listening to more people in the community is something that I'm personally committed to doing.

Your section on representativeness feels like you are trying to pin down a way of finding an exact number so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Just to clarify, I also don't think trying to find a number that defines representativeness is the right approach, but I also don't want this to be a purely philosophical conversation. I want it to drive action.

Disclosure: I copyedited a draft of this post, and do contract work for CEA more generally

I don't think that longtermism is a consensus view in the movement.

The 2017 EA Survey results had more people saying poverty was the top priority than AI and non-AI far future work combined. Similarly, AMF and GiveWell got by far the most donations in 2016, according to that same survey. While I agree that someone can be a longtermist and think that practicality concerns prioritize near-term good work for now anyway, I don't think this is a very compelling explanation for these survey results.

As a first pass heuristic, I think EA leadership would guess correctly about community-held views more often if they held the belief "the modal EA-identifying person cares most about solving suffering that is happening in the world right now."

[anonymous]:

I agree that I might be wrong about this, but it's worth noting that I wasn't trying to make a claim about the modal EA. When talking about the emerging consensus I was implicitly referring to the influence-weighted opinion of EAs or something like that. This could be an area where I don't have access to a representative sample of influential EAs which would make it likely that the claim is false.

Longtermism is the view that most of the value of our actions lies in what happens in the future.

You mean 'in the far future', correct? Unless you believe in backwards causality, and excluding the value that occurs at the same moment you act, all the value of our actions is in the future. I presume by 'far future' you would mean actions affecting future people, as contrasted with presently existing people.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement

Cards on the table, I am not a long-termist; I am sympathetic to person-affecting views in population ethics. Given the power CEA has in shaping the community, I think it's the case that any view CEA advocated would eventually become the consensus view: anyone who didn't find it appealing would eventually leave EA.

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial.

I don't think this can be true. If you're a longtermist, you can't also hold person-affecting views in population ethics (at least, narrow, symmetric person-affecting views), so taking the longtermist position requires ruling such views out of consideration. You might think you should rule out, as obviously false, such views in population ethics, but you should concede you are doing that. To be more accurate you could perhaps call it something like "possibilism cause impartiality - selecting causes based on impartial estimates of impact assuming we account for the welfare of everyone who might possibly exist" but then it would seem almost trivially true that long-termism ought to follow (this might not be the right name, but I couldn't think of a better restatement off-hand).

Hi Kerry, Some more thoughts prior to having a chat.

-

Is longtermism a cause?

Yes and no. The term is used in multiple ways.

A: Consideration of the long-term future.

It is a core part of cause prioritisation to avoid availability biases: to consider the plights of those we cannot so easily be aware of, such as animals, people in other countries and people in the future. As such, in my view, it is imperative that CEA and EA community leaders promote this.

B: The long-term cause area.

Some people will conclude that the optimal use of their limited resources should be putting them towards shaping the far future. But not everyone, even after full rational consideration, will reach this view. Nor should we expect such unanimity of conclusions. As such, in my view, CEA and EA community leaders can recommend that people consider this cause area, but should not tell people this is the answer.

-

Threading the needle

I agree with the 6 points you make here.

(Although interestingly I personally do not have evidence that “area allegiance is operating as a kind of tribal signal in the movement currently”)

-

CEA and cause-impartiality

I think CEA should be careful about how to express a view. Doing this in the wrong way could make it look like CEA is not cause-impartial or not representative.

My view is to give recommendations and tools but not answers. This is similar to how we would not expect 80K to have a view on what the best job is (as it depends on an individual and their skills and needs) but we would expect 80K to have recommendations and to have advice on how to choose.

I think this approach is also useful because:

  • People are more likely to trust decisions they reach through their own thinking rather than conclusions they are pushed towards.

  • It handles the fact that everyone is different. The advice or reasoning that works for one person may well not make sense for someone else.

I think (as Khorton says) it is perfectly reasonable for an organisation to not have a conclusion.

-

(One other thought: examples of actions by CEA or other movement-building organisations that would concern me include expressing certainty about an area (in internal policy or externally), basing impact measurement solely on a single cause area, hiring staff for cause-general roles based on their views of which causes are most important, attempting to push as many people as possible towards a specific cause area, etc.)

Hi Kerry, Thank you for the call. I wrote up a short summary of what we discussed. It has been a while since we talked, so this is not perfect. Please correct anything I have misremembered.

~

1.

~ ~ Setting the scene ~ ~

  • CEA should champion cause prioritisation. We want people who are willing to pick a new cause based on evidence and research, and a community that continues to work out how to do the most good. (We both agreed on this.)
  • There is a difference between “cause impartiality”, as defined above, and “actual impartiality”, not having a view on what causes are most important. (There was some confusion but we got through it)
  • There is a difference between long-termism as a methodology, where one considers long-run future impacts (which CEA should 100% promote), and long-termism as a conclusion, that the most important thing to focus on right now is shaping the long-term future of humanity. (I asserted this, not sure you expressed a view.)
  • A rational EA decision maker could go through a process of cause prioritisation and very legitimately reach different conclusions as to what causes are most important. They may have different skills to apply or different ethics (and we are far away from solving ethics if such a thing is possible). (I asserted this, not sure you expressed a view.)

~

2.

~ ~ Create space, build trust, express a view, do not be perfect ~ ~

  • The EA community needs to create the right kind of space so that people can reach their own decision about what causes are most important. This can be a physical space (a local community) or an online space. People should feel empowered to make their own decisions about causes. This means that they will be more adept at cause prioritisation, more likely to believe the conclusions reached, and more likely to come to the correct answer for themselves, and EA is more likely to come to correct answers overall. To do this they need good tools and resources, and to feel that the space they are in is neutral. This needs trust...

  • Creating that space requires trust. People need to trust the tools that are guiding and advising them. If people feel they are being subtly pushed in a direction, they will reject the resources and tools being offered. Any sign of a breakdown of trust between people reading CEA's resources and CEA should be taken very seriously.

  • Creating that space does not mean you cannot also express a view. You just want to distinguish when you are doing this. You can create cause prioritisation resources and tools that are truly neutral but still have a separate section on what answers CEA staff reach, or on what CEA's answer is.

  • Perfection is not required as long as there is trust and the system is not breaking down.

  • For example, providing policy advice: I gave the example of a civil servant writing advice to a Government Minister on a controversial political issue. The first ~85% of this imaginary advice is an impartial summary of the background and the problem, followed by a series of suggested actions with evaluations of their impact. The final ~15% is a recommended action based on the civil servant's view of the matter. The important thing here is that there is generally trust between the Minister and the Department that advice will be neutral, and that in this case the Minister trusts that the section/space setting out the background and possible actions is neutral enough for them to make a good decision. It doesn't need to be perfect; in fact the Minister will be aware that there is likely some amount of bias, but as long as there is sufficient trust that does not matter. And there is a recommendation, which the Minister can choose to follow or not. In many cases the Minister will follow the recommendation.

~

3.

~ ~ How this goes wrong ~ ~

  • Imagine someone who has identified a super important cause X comes across the EA community. You do not want the community to be so focused on one cause that this person is either put off or is persuaded that the current EA cause is more important and forgets about cause X.

  • I mentioned some of the things that damage trust (see the foot of my previous comment).

  • You mentioned you had seen signs of tribalism in the EA community.

~

4.

~ ~ Conclusion ~ ~

  • You said that you saw more value in CEA creating a space that was “actual impartial” as opposed to “cause impartial” than you had done previously.

~

5.

~ ~ Addendum: Some thoughts on evidence ~ ~

Not discussed but I have some extra thoughts on evidence.

There are two areas of my life where much of what I have learned points towards the views above being true.

  • Coaching. In coaching you need to make sure the coachee feels like you are there to help them, not to pursue an agenda of your own (one that is different from theirs).

  • Policy. In policy making you need trust and neutrality between Minister and civil servant.

There is value in following perceived wisdom on a topic. That said, I have been looking out for any strong evidence that these things are true (eg. that coaching goes badly if the coachee thinks you are subtly biased one way or another) and I have yet to find anything particularly persuasive. (Counterpoint: I know one friend who knows their therapist is overly biased towards pushing them to have additional sessions, but this does not put them off attending or mean they find it less useful.) Perhaps this deserves further study.

Also worth bearing in mind there may be dissimilarities between what CEA does and the fields of coaching and policy.

Also worth flagging that the example of policy advice given above is somewhat artificial; some policy advice (especially where controversial) is like that, but much of it is just: “please approve action x”.

In conclusion my views on this are based on very little evidence and a lot of gut feeling. My intuitions on this are strongly guided by my time doing coaching and doing policy advice.

I would find it quite odd if major organizations in EA (even movement building organizations) had no view on what causes are most important.

I would definitely find it odd if individuals within an organization didn't have views on which causes are most important. I wouldn't find it that strange if CEA didn't have a formally stated view on which causes are most important, although I expect some views will be implied through your communication.

"Cause areas shouldn't be tribes" "We shouldn't entrench existing cause areas" "Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder."

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc.? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

[anonymous]:

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc.? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

I don't think I would be keen on this as stated. I would be keen on a system by which CEA talks to more people with a wider variety of views, but entrenching particular people or particular causes seems likely to be harmful to the long-term growth of the community.

Just wanted to say I loved how specific and detailed the feedback is here - thank you!

If you do an action that does not look cause impartial (say EA Funds mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

Do you mean EA Grants? The allocation of EA Funds across cause areas is outside of CEA's control since there's a separate fund for each cause area.

Yes thanks. Edited.

Just wanted to chip in on this. Although I do not think this addresses all the concerns I have with representativeness, I do think CEA has been making a more concerted and genuine effort at considering how to deal with these issues (not just this blog post, but also in some of the more recent conversations they have been having with a wider range of people in the EA movement). I think it's a tricky issue to get right (how to build a cause neutral EA movement when you think some causes are higher impact than others) and there is still a lot of thought to be done on the issue, but I am glad steps are happening in the right direction.

The "what causes should CEA represent?" issue seems especially tricky because the current canonical EA cause areas have very different metrics underpinning them.

Global development & animal welfare usually use GiveWell-style cost-effectiveness analysis to determine what's effective.

X-risk usually uses theoretical argument & back-of-the-envelope estimates to determine effectiveness.

I'm not sure what movement building uses – probably theory and back-of-the-envelope as well?

Anyway, point is that there's not a meta-metric that the current cause areas use to compare against each other.

So when considering a new cause area, should we use the x-risk standard of effectiveness? Or the global development one? (rhetorical)

Seems tricky – I'm glad CEA is thinking about this.

I really like the Open Philanthropy Project's way of thinking about this problem:

https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy

The short version (in my understanding):

  1. Split assumptions about the world/target metrics into distinct "buckets".
  2. Do allocation as a two-step process: intra-bucket on that bucket's metric, and inter-bucket separately using other sorts of heuristics.
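
To illustrate the two-step structure, here is a toy sketch (my own, not Open Phil's actual process; every bucket name, option, score, and weight below is made up):

```python
# Toy sketch of two-step allocation: an inter-bucket split chosen by
# worldview-level heuristics, then an intra-bucket split using that
# bucket's own metric. All names, weights, and scores are hypothetical.

# Inter-bucket weights: judgment calls, not derived from a shared metric.
bucket_weights = {"near-term": 0.5, "long-term": 0.3, "meta": 0.2}

# Options scored on each bucket's own metric, e.g. a cost-effectiveness
# estimate for near-term work vs. a rough x-risk-reduction guess.
options = {
    "near-term": [("bednets", 9.0), ("deworming", 7.5)],
    "long-term": [("ai-safety-fellowship", 8.0), ("biosecurity", 6.0)],
    "meta": [("community-building", 5.0)],
}

def allocate(total_budget):
    """Split the budget across buckets, then within each bucket."""
    grants = {}
    for bucket, weight in bucket_weights.items():
        budget = total_budget * weight  # inter-bucket split
        score_sum = sum(score for _, score in options[bucket])
        for name, score in options[bucket]:
            # intra-bucket split, proportional to the bucket's own metric
            grants[name] = budget * score / score_sum
    return grants

print(allocate(1_000_000))
```

Proportional splitting within a bucket is just a stand-in for "rank options on the bucket's own metric"; the point is that no single meta-metric appears anywhere in the allocation.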

(If you like watching videos rather than reading blog posts, Holden also discussed this approach in his fireside chat at EAG 2018: San Francisco.)

Sure, but I don't think that framework gives a decision procedure for what buckets are worth considering. (Haven't read it closely recently, so maybe I missed this.)

For example, I'm pretty sure a Christian who's interested in EA principles wouldn't be able to convince EA decision-makers that a Christian missionary intervention was effective, even if it was very cost-effective & had a track record of success.

The Christian wouldn't be able to make the case for their missionary intervention because "spreading the word of God" isn't a goal that EA considers worthwhile. As far as I know, EA doesn't have a strong case for why this kind of thing isn't worthwhile, it's just one of the "deep judgment calls" that Holden talks about in that post.

Not caring about Christian missionary work is in the cultural DNA of EA. It's not a particularly justified position; rather, it's an artifact of the worldview assumptions that a quorum of EAs brought to the community at a certain point in time.

(To be super-duper clear, I'm not advocating for Christian interventions to be included in EA; it's just an illustrative example.)

What are some open questions that you’d like to get input on here (preferably of course from people who have enough background knowledge)?

This post reads to me like an explanation of why your current approach makes sense (which I find mostly convincing). I’d be interested in what assumptions you think should be tested the most here.

[anonymous]:

The biggest open questions are:

1. In general, how can we build a community that is both cause impartial and also representative?
2. If we want to aim for representativeness, what reference class should we target?

In terms of representation, my own opinion in relation to the animal welfare cause area is that it could relate to moral theory. At present the dominant ideology (rational pragmatism) favoured by many utilitarians has functioned as a way for people to associate with one another, and offers a fairly easy way to become part of EAA through adopting certain organisations and ideas. This is an ideology which, in my view, has been dismissive of rights-based approaches by diminishing their value / relevance to effectiveness thinking.

To address this issue I believe rights-based thinking ought to be valued and represented at various levels rather than dismissed in favour of the preferred ideology. This isn't to say anything about which organisations or approaches are "most" effective, but dismissing moral theory in favour of an ideology seems to be weak on both representativeness and integrity (particularly where it hasn't been agreed upon but is instead unilateral).

I tend to think that addressing issues of representation within cause areas will have better follow-on results in the community at large (informed from below rather than from above). However, the problem here is that unrepresentative cause areas are likely to be resistant to representation, because they will tend to gravitate toward the unrepresentative norm rather than away from it unless significant efforts are made, particularly where it has become institutionalised. It is also unclear whether some EAA leaders would consider a lack of representativeness (as I am describing it) or plurality to be a bad or concerning thing anyway, since it can instead be associated with increasing utility, particularly through simplifying the cause area.

EA Grants seems like it should be in between in terms of being prescriptive vs. descriptive. If I had to pull a number out of a hat, then perhaps half the grants could be in the areas CEA considers most important and the other half could be more open.

it’s unclear what reference class we should be using when making our work more representative... The best solution is likely some hybrid approach, but it’s unclear precisely how such an approach might work.

Could you say more about what CEA is planning to do to get more clarity about who it should represent?

[anonymous]:

At the moment our mainline plan is this post with a request for feedback.

I've been talking with Joey Savoie and Tee Barnett about the issue. I intend to consult others as well, but I don't have a concrete plan for who to contact.

We do however recognize that when consulting others it’s easy to end up selecting for people with similar views and this can leave us with blind spots in particular areas. We are thinking about how to expand the range of people we get advice from. While we cannot promise to enact all suggestions, we would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

It seems like you currently only consult people for EA Global content. Do you want to get advice on how to have a wider range of consultants for EA Global content, or are you asking for something else?

[anonymous]:

We're asking for feedback on who we should consult with in general, not just for EA Global.

In particular, the usual process of seeking advice from people we know and trust is probably producing a distortion where we aren't hearing from a true cross-section of the community, so figuring out a different process might be useful.

To add a little more background: we're always glad to get ideas from the community about EA Global on our content/speaker suggestion form.

We also get feedback on major decisions that will affect the community from an advisory panel, chosen because they had given us especially useful criticism in the past. However, we'd like to get more frequent, informal feedback as well.

Is there a process for joining & leaving the advisory panel, or is that handled informally?

Also, could you say a little more about how & when the panel is engaged for feedback?

(Speaking as a member of the panel, but not in any way as a representative of CEA).

It’s worth noting the panel hasn’t been consulted on anything in the last 12 months. I don’t think there’s anything necessarily wrong with this, especially since it was set up partly in response to the Intentional Insights affair and AFAIK there has been no similar event in that time, but I have a vague feeling that someone reading Julia’s posts would think it was more common, which I guess was part of the ‘question behind your question’, if that makes sense :)

Sorry, I think we must have had a miscommunication within CEA - I had the understanding that we'd written to the panel last week about something, but apparently that didn't happen yet. In general, though, it's true that we've only asked the panel for input rarely.

That's interesting background, thanks :-)

We've had this panel for a little more than a year, and haven't yet had turnover. If looking for a new member, we'd look for someone who had given us helpful outside perspective / criticism in the past.

We've asked the panel for feedback primarily when making decisions where CEA's view of its proper role in the community is especially likely to differ from others' view of CEA's proper role. One example is around whether CEA should express views on which other organizations are EA organizations.
