EA encourages us to base our entire careers around a particular ideology. I think this is correct, and I'm grateful for it, having benefited personally from the work of e.g. 80,000 Hours, but I also think it means we should be much warier of overconfidence in our beliefs.

The closest social movements I can think of that try to exert this level of influence over their communities are fundamentalist religious sects. To be clear, I think encouraging people to spend their careers more impactfully is quite a bit less prescriptive than a typical fundamentalist sect, but it is also a lot more prescriptive than a typical political movement, or even a liberal church.

Questions of "importance" and "impactfulness" also overlap non-trivially with questions of "meaning" and "purpose." I think the history of organized religion is good evidence that human brains tend towards overconfidence when dealing with these questions. This effect is compounded given that EA often tries to recruit undergraduates—I think this is an unusual time of life that comes with a particularly strong psychological desire for meaning and confidence. In this context, a pitch promising a solid basis for spending your life impactfully is going to be even more appealing.

I don't believe EAs are some kind of sinister cabal—I am one. I also think EAs in their early 20s are more than smart enough to make their own social/political/life decisions. But I still think a movement with the above ingredients is inevitably at risk of overconfidence, and I'd like to see this acknowledged more in our actions.

As a concrete example, I think we could more explicitly discuss the trade-offs of "EA methodologies." I've read multiple publications from EA orgs that go something like

"We set out to identify small-N maximally impactful opportunities. To do this, we considered some large N of possibilities for a few hours each, then incrementally narrowed our list, spending more hours on research in each phase, until eventually we came up with the following suggestions."

I think this is a very reasonable way of trying to recommend public policy/funding opportunities, but as with almost all methodologies in this space, it's still inevitably flawed and somewhat arbitrary.

For example, the breadth of the opportunities they consider means a researcher will inevitably be an expert in at best a tiny percentage of them. One alternative, which makes more intuitive sense to me, would be to informally encourage community members to amass a diverse range of experience and expertise, then provide a space for interdisciplinary discussion of the opportunities this helps them spot. In turn, this proposal can be critiqued as both narrower and (at least at first) much more resource-intensive than the narrowing-down approach quoted above.

80,000 Hours' advice on how to read their advice includes the disclaimer that it's "hard to ever be more than about 70% confident" in their positions. Talking more, and more explicitly, about the trade-offs and uncertainties in our approaches would be one way of modelling (as well as giving) that advice.

I'm less confident in this final point, but I think a particularly scary example of the above dangers is OpenAI. This is an org that's sufficiently EA-aligned for me to have met several employees at EAGs, but I can't think of a company—past or present—that's been more insular and secretive, and the basic self-contradictions in its public messaging seem like a huge red flag for groupthink. I don't know how much (or whether) EA is to blame for that organization's current culture, but I think we should explicitly work to avoid anything like it moving forwards.

Comments

"This is an org that's sufficiently EA-aligned for me to have met several employees at EAGs"

This seems like a very poor metric. I would not say OpenAI is an EA-aligned company; the standard EA view is that it is a spectacularly destructive company that people would prefer stopped pushing the capabilities frontier.

"EA-aligned" was probably a poor choice of words, maybe "EA-influenced" would be better. I agree that e.g. the EA forum's attitude to OpenAI is strongly negative.
