The Meta Coordination Forum (MCF) is a place where EA leaders are polled on matters of EA community strategy. I thought it could be fun (and interesting) to run these same polls on EAs at large.
Note: I link to the corresponding MCF results throughout this post, but I recommend readers don’t look at those until after voting themselves, to avoid anchoring.
Edit (May 3rd): Looks like all but the first two polls are now closed. I thought I’d set them to be open for longer, but clearly I messed up. Sorry about that!
There is a leadership vacuum in the EA community that someone needs to fill
(MCF results)
EA thought leaders and orgs should be more transparent/communicative than they currently are
(MCF results)
We should focus more on building particular fields (AI safety, effective global health, etc.) than building EA
(MCF results)
Assuming there will continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI-risk and a third one on GHW/FAW)
(MCF results)
We should promote AI safety ideas more than other EA ideas
(MCF results; see also the AIS field-building survey results)
Most AI safety outreach should be done without presenting EA ideas or assuming EA frameworks
(MCF results; see also the AIS field-building survey results)
We should try to make some EA sentiments and principles (e.g., scope sensitivity, thinking hard about ethics) a core part of the AI safety field
(MCF results; see also the AIS field-building survey results)
We should be trying to accelerate the EA community and brand’s current trajectory (i.e., ‘rowing’) versus trying to course-correct the current trajectory (i.e., ‘steering’)
(MCF results)
The case for doing EA community building hinges on having significant probability on ‘long’ (>2040) AI timelines
(MCF results)
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
(MCF results)
Some invitees to the Meta Coordination Forum (maybe like 3 out of the ~30) should be ‘independent’ EAs
I’m sneaking in this meta-level poll to close. For previous discussion, see this thread; I’m defining ‘independent EAs’ as non-Open Phil umbrella EAs / EAs who fall outside the existing invitation criteria.
The idea, in my mind, is that these independent EAs would be invited for having a track record of contributing (e.g., on this forum) to EA community discussion. The selection could be based on karma, or made by a panel, or (probably my favourite; h/t @Joseph_Chu) via election by the EA Forum community, in a kinda similar way to how we vote in debate weeks.
Possible candidates:
We have factually wrong beliefs about the outcome of some sort of process of major political change (communism? anarchism? world government?)
None of these strike me as super likely, but combining them all you still get an okay chance.
I'll straight up say I think figureheads and public leaders can be huge for movement growth, even though there are risks. When Greta was front and center of the climate movement I felt the momentum was huge, and even when she decided to step back I think the momentum stall was really noticeable.
I liked having Will MacAskill to look to as a leader and high-profile example with his giving style.
I see Rutger Bregman and the attention he is getting in the media.
It might not be a comfortable thing, but I think movements can benefit greatly from figureheads, although obviously there are risks for them, and for the movement, if they fail/fall for whatever reason.
Is there any data to back up the environmental movement growing and stalling around those times? It may have got a lot of media attention, but it seems like the real gains on climate change were made by people who have been working in clean tech for decades and politicians who were already lobbying for various policies in the 2000s/2010s.
I would say the whole climate movement received a huge boost through Greta leading youth protests and being super visible including
Of course I think we can only attribute a tiny percentage of climate gains in that period to her being a figurehead front and center, but I think things have become harder since without an obvious person to rally behind.
And yes this is super subjective, just my opinion and no, I doubt there's any data to back that up unfortunately.
70% ➔ 50% agree — This is an interesting idea that I've never heard articulated before. Seems good in principle to have some people with fewer (or at least different to looking-after-their-org) vested interests.
Independent as in not affiliated with any org? If that's what it means then I probably agree
>2040, no. >2030, yes.
but no, I don't know what it is (or have a clear and viable plan for finding it)
My top picks for small causes that should maybe receive >20% of resources:
My guess is that pesticides' impact on insect welfare probably falls into this category.
I thought of insect farming, but this is definitely one too!
AI Safety work is likely to be extremely important, but "other EA ideas" is too broad for me to agree. It would mean, for example, that it's more important than the "three radical ideas" and I have trouble agreeing with that.
On a literal interpretation of this statement, I disagree, because I don't think trying to inject those principles will be cost-effective. But I do think people should adopt those principles in AI safety (and also in every other cause area).
Some but not all should be replaced (low confidence)
I don't have very specific arguments. EA community-building seems valuable, but I do think that work on specific causes can be interesting and scalable (for example, Hive, AI for Animals, or the Estivales de la question animale in France all concretely seem like good ways to draw new individuals into the EA/EA-adjacent community).
Agree "on principle", clueless (and concerned) on consequences.
From my superficial understanding of the current psychological research on EA (by Caviola and Althaus), a lot of core EA ideas are unlikely to really resonate with the majority of individuals, while the case for building safer AI seems to have broader appeal. Nonetheless, I do worry that AI safety outreach lacking EA ideas is more likely to favor an ethics of survival over a welfarist ethic and is unlikely to take S-risks / digital sentience into account, so it also seems possible that scaling in that way could have very negative outcomes.
Not a very developed objection, but "steering" seems to lack tractability to me, so I'd rather see the EA community scale to an extent, even though it isn't perfect. Things like GWWC aiming to increase the number of pledge takers, or CEA organizing more medium-scale summits, seem more tractable to me, and potentially quite good.
Not sure it's okay to say this, but I simply agree with Michael Dickens on this. If we expect to have AGI by 2038, or even, say, 2033 (8 years from now!), it seems like EA community building could be very important. I know people who went full-time into AI safety / governance work less than a year after discovering the issue through EA.
Agree depending on what counts as "little attention". Wild animal welfare, perhaps S-risks, but neither of those are completely neglected.
I'd also be tempted to say "limiting the development of insect farming" as it seems likely to be very cost-effective, but I don't think the field could currently absorb that much funding.
90% ➔ 100% disagree — AGI is probably a long time away. No one knows when AGI will be created. No one knows how to create AGI. AGI safety is such a vague, theoretical concept that there's essentially nothing you can do about it today or in the near future.
Is this not already the case? I.e. don't the major EAGs already focus on specific cause areas?
Iterated Amplification AI to lead somehow
At least, like, communicate in an easier style