I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
"With that being said, if and when having a positive impact on the world and satisfying community members does come apart, we want to keep our focus on the broader mission."
I understand the primary concern posed in this comment to be more about balancing the views of donors, staff, and the community about having a positive impact on the world, rather than trading off between altruism and community self-interest. To my ears, some phrases in the following discussion make it sound like the community's concerns are primarily self-interested: "trying to optimize for community satisfaction," "just plain helping the community," "make our events less of a pleasant experience (e.g. cutting back on meals and snack variety)," and "don’t optimize for making the community happy" (for EAG admissions).
I don't doubt that y'all get a fair number of seemingly self-interested complaints from dissatisfied community members, of course! But I think modeling the community's concerns here as self-interested would be closer to a strawman than a steelman approach.
On point 4:
I'm pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There's no clear and unbiased way to decide which of those individuals and groups could be the target of "philosophical questions" about the desirability of murdering them and which could not. Unless we're going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. Asking "would it be ethical to get rid of this meddlesome priest?" seems like it should be suspendable or worse (except that the meddlesome priest in question has been dead for over eight hundred years).
And I think drawing the line at "we're not going to allow hypotheticals about murdering discernable people"[1] is better (and poses less risk of viewpoint suppression) than expecting the mods to somehow devise a rule for when that content will be allowed and consistently apply it. I think the effect of a bright-line no-murder-talk rule on expression of ideas is modest because (1) posters can get much of the same result by posing non-violent scenarios (e.g., leaving someone to drown in a pond is neither an act of violence nor generally illegal in the United States) and (2) there are other places to have discussions if the murder content is actually important to the philosophical point.[2]
By "discernable people," I mean those with some sort of salient real-world characteristic as opposed to being 99-100% generic abstractions (especially if in a clearly unrealistic scenario, like the people in the trolley problem).
I am not expressing an opinion about whether there are philosophical points for which murder content actually is important.
I'd probably give somewhat more credence to this if Washington didn't own 124 slaves at the time of his death. People in Virginia were emancipating their slaves; Washington could have done so during his lifetime but did not. That suggests his actions were not merely constrained by what was possible for a politician to accomplish at the time.
Lincoln was pretty willing to enshrine slavery into the Constitution forever to save the Union (https://en.m.wikipedia.org/wiki/Corwin_Amendment), so I find his anti-slavery reputation overstated.
Thanks for writing this!
A few general points:
On cause areas:
One such point of doctrine is eschatology. Those who think the Second Coming is sure or very likely to happen within decades would reject the concept of a prolonged future for humanity and hence longtermism. This kind of eschatological expectation is common among the more conservative Protestants.
In the current meta, where longtermism is practically synonymous with x-risk reduction, any confident belief in the Second Coming may be sufficient to foreclose significant engagement with longtermism for many Christians. The Second Coming doesn't really work if there are no people left because the AI killed them all! I suspect similar rationales would be present in many other religions, either because they have their own eschatologies or because human extinction would seem in tension with a foundational belief in a deity who is at least mostly benevolent, at least nearly omnipotent, and interested in human welfare.
Even beyond that, other subfields in longtermism don't mesh as well with common Christian theological concepts. Transhumanism, digital minds, and similar concepts are likely to be non-starters for many Christians. In most Christian theologies, human beings are uniquely made[2] in the image of God, and their creations would not share in that nature at all. Furthermore, EA thinking about the future may be seen as techno-utopian, which is in tension with Christian theologies that identify sin (~ a religious version of evil or wrongdoing) as the fundamental cause of problems in the world. So EA thinking can come off as seeking mostly technological solutions to a spiritual problem.
Depending on their beliefs about soteriology, a Christian with longtermist tendencies might also focus on evangelism, theorizing that eternity is forever and that what happens in the life to come is far more important than what happens on earth.
"Some Christians might perceive working on animal welfare as misdirected and reject EA because they see animal welfare being a prominent cause area in the movement."
My guess is that EA reasoning about cause prio, rather than beliefs about the need to reduce animal suffering per se, would be the major stumbling block here. After all, companion-animal charities have long been popular in the US, and I don't have any reason to think that US Christians were shunning them. But (e.g.) trying to quantify the moral weight of a chicken's welfare in comparison to that of a human is probably more likely to upset someone coming from a distinctly Christian point of view than (say) the median adult in a developed country. Suggesting that the resulting number is in the single digits, or that the meat-eater problem is relevant to deciding whether to donate to global health charities, is even more likely to be perceived as off-putting.[3] Cf. the discussion of humans as being made in the image of God above.
"Characteristic to both of these stances is that they lead to a rejection of only a particular cause area within EA. This would leave room to engage with the other parts."
Yes, although we don't know what EA content the hypothetical person would find first (or early). If the first content they happen to see is about (e.g.) the meat-eater problem, they may not come back for a second helping even though they would have resonated with GH&D work. With GH&D declining in the meta, this may be a bigger issue than it would have been years ago.
Also, I think many people -- Christian or not -- would be less likely to engage with a community if a significant portion of community effort and energy was devoted to something they found silly, inappropriate, or inconsistent with their deeply-held values.[4]
"Full community" is not the greatest term. I mean something significantly more than an affinity group, but not necessarily something insular from other groups practicing EA-I. A full community can stand on its own two feet, as it were. To use a Christian metaphor, a church would ordinarily be a full community. One can receive the sacraments/ordinances, learn and study, obtain spiritual support and guidance, serve those who are less privileged, and get what we might consider the other key functions of a communal Christian life through a church. I'm less clear in my own mind on the key functions of a community practicing EA-I.
There are, of course, many different views about what "made" means here!
I do not mean to express an opinion on the merits of these topics, or suggest that discussion of them should be avoided.
Again, I am not expressing endorsement of a norm that we shouldn't talk about or do certain things because some group of people would object to them.
We praise people to hold them up as examples to emulate (even though all people are imperfect and thus all emulation should be partial). Holding people who committed large-scale crimes up for emulation has a lot of downsides. Moreover, the effectiveness of historical figures is often context-dependent and difficult to apply to greatly different circumstances. Finally, I'm not convinced that praise of effective leaders like Washington, Madison, and Churchill is neglected in at least American public education and discourse (but this may have changed since my childhood).
"EAs are often happy funding things on the basis of weaker evidence than RCTs."
Yes, but that is often in cases where (1) there are few or no interventions in the cause area amenable to RCTs or other high-reliability ways of assessing results (e.g., AI safety), or (2) the intervention has some added benefit that compensates for the less solid evidentiary base (e.g., if it works, foreign aid policy work would be massively more cost-effective than traditional GiveWell-style work). So I'd expect many EAs to consider weaker-evidence programs only if more weaknesses were identified in the evidence base for corporate campaigns, and/or if vegan outreach interventions were shown to have benefits that compensate for their weaker evidentiary basis.
Also, one can think that x-risk work is generally effective in mitigating near-x-risk as well (e.g., a pandemic that "only" kills 99% of us). Particularly given the existence of the Genesis flood narrative, I expect most Christians would accept the possibility of a mass catastrophe that killed billions but less than everyone.