Currently working for Mieux Donner. I do a bit of everything, but I mostly write content.
Background in cognitive science. I run a workshop aimed at teaching methods for managing strong disagreements (open to non-EA people as well). Also community building.
Interested in cyborgism and AIS via debate.
https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4
I often get tremendous amounts of help from people who know how to program and are enthusiastic about helping out over an evening.
Thanks for posting this. It made me think, "Oh, someone is finally mentioning this".
Observation: I think your model rests on the kind of hypotheses I would expect someone from Silicon Valley to suggest, built on Silicon-Valley-originating observations. I don't think of Silicon Valley as 'the place' for politics, even less so for epistemically accurate politics (not evidence against your model, of course, but my inner simulator points at this feature as a potential source of confusion).
We might very well need a better approach than our usual tools for thinking about this. I'm not even sure current EAs are better at this than the few bottom-lined social science teachers I met in the past: being truth-seeking is one thing, knowing the common pitfalls of (non-socially-reflexive) truth-seeking in political thinking is another.
For reasons I won't expand on, I think people working on hierarchical agency are really worth talking to on this topic, and they tend to avoid the sort of issues 'Bayesian' rationalists fall into.
I think I can confidently state that:
1- Some people will be heavily reluctant to attend BlueDot because it is an online course. Others likewise have their needs better met by alternatives (whether in terms of pedagogical style, UX, or information bandwidth).
2- Opening an AIS class at a university can unlock a surprising amount of respectability.
Thank you for writing this! I've been trying to find a good example of "translating between philosophical traditions" for some time, one that is both epistemically correct and well executed. This one is really good!
What I take away from this is the idea of making additional distinctions: acknowledging that EA (or whichever cause area one wants to defend) really is different from the initial "style", while being able to explain this difference with a shared vocabulary.
[This does not represent the opinion of my employer]
I currently mostly write content for an Effective Giving Initiative, and I think it would be somewhat misleading to write that we recommend animal charities that defend animal rights; people would misconstrue what we're talking about. Avoided suffering is what we think about when explaining which charities "made it" to the home page: it's part of the methodology, and my estimates ultimately weigh in on that. It's also the methodology of the evaluators who do all the hard work.
My guess would be that EA has a vast majority of consequentialists, whose success criterion is wellbeing and whose methodology is [feasible because it is] welfare-focused (e.g. animal-adjusted QALYs per dollar spent). This probably sedimented early, and people plausibly haven't questioned it much since. EA-aligned rights-focused interventions exist, but they're ultimately measured by their gains in terms of welfare.
For my part, I think it's hard enough as it is to select cost-effective charities with a consequentialist framework (and sell it to people!), and "rights" add a lot of further distinctions (e.g. rights as means vs. as ends) that are hard to operationalize. I can write an article about why we recommend animal welfare charity X in terms of counterfactual suffering avoided, but I'd be clueless if I had to recommend it in terms of rights infringements avoided, because that's harder to measure and I'm not even sure what I'd be talking about.
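To make the contrast concrete, here is a minimal, purely hypothetical sketch (invented numbers, an invented welfare_per_dollar helper, not our actual evaluation methodology or any evaluator's) of why a welfare-focused metric is easier to operationalize than a rights-focused one:

```python
# Purely hypothetical sketch with invented figures; it only illustrates the shape of a
# welfare-focused comparison, not any real charity's numbers or any evaluator's method.

def welfare_per_dollar(animals_affected: float, qaly_gain_per_animal: float, cost_usd: float) -> float:
    """Rough cost-effectiveness: welfare-adjusted life-years gained per dollar spent."""
    return animals_affected * qaly_gain_per_animal / cost_usd

# Two made-up interventions, compared on the same welfare scale.
cage_free_campaign = welfare_per_dollar(animals_affected=10_000, qaly_gain_per_animal=0.02, cost_usd=50_000)
stunning_reform = welfare_per_dollar(animals_affected=200_000, qaly_gain_per_animal=0.001, cost_usd=40_000)

print(f"Cage-free campaign: {cage_free_campaign:.4f} welfare-QALYs per dollar")
print(f"Stunning reform:    {stunning_reform:.4f} welfare-QALYs per dollar")

# A rights-based analogue ("infringements avoided per dollar") has no comparably
# agreed-upon unit or conversion factors, which is what makes it hard to operationalize.
```

Even this toy version hides a lot of uncertainty, but at least the unit is well defined.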
I'd be happy to see people from other positions give their opinion; this is a strictly personal view.
Five new Effective Giving Initiatives were created in 2024, expanding the reach of effective giving across countries and linguistic communities, with some of them already enjoying a fruitful giving season!
Re: agency of the community itself, I've been trying to get to this "pure" form of EA in my university group, and to be honest, it felt extremely hard.
-People who want to learn about EA often feel confused and suspicious until you get to object-level examples: "OK, an impactful career, but concretely, where would that get me? Can you give me an example?" I've faced real resistance when trying to stay abstract.
-It's hard to keep people's attention without object-level examples, even when teaching abstract concepts. It gets even harder once you reach the "projects" phase of the year.
-People then anchor hard on specific object-level examples. "Oh, EA? The malaria thing?" (even though my go-to examples included things as diverse as shrimp welfare and pandemic preparedness).
-When it's not an object-level example, it's usually "utilitarianism" or "Peter Singer", which often act as thought stoppers and have an "eek" vibe for many people.
-People who care about non-typical causes actually have a hard time finding data and making estimates.
-In addition to that, the agency needed to actually make estimates is hard to build up. One member I knew thought his most impactful career option was potentially working on nuclear fusion. I suggested he estimate its Importance-Tractability-Neglectedness (even rough orders of magnitude) to compare it against another option he had, as well as against more traditional ones. I can't remember him giving any numbers even months later. When he simply said he felt sure about the difference, I didn't feel comfortable challenging the robustness of his justification. It's a tough balance to strike between respecting preferences and probing reasons.
-A lot of it comes down to career 1:1s. Completing the ~8 or so parts is already demanding. You have to provide estimates that are nowhere to be found if your area of interest is "niche" in EA. You then have to find academic and professional opportunities, as well as contacts, that are not referenced anywhere in the EA community (I had to reach out to the big brother of a primary-school friend I had lost track of to find a fusion engineer he could talk to!). If you need funding, even if your idea is promising, you need excellent communication skills to write a convincing blog post, plausibly enough research skills to produce non-air-plucked estimates for an ITN / cost-effectiveness analysis, and the willingness to go to EAGs and convince people who might simply not care. Moreover, a lot of people expressly limit themselves to their own country or continent. It's often easier to stick to the usual topics (I get calls for applications for AIS fellowships almost every month; I never get any for niche topics).
-Another point about career 1:1s: the initial list of options to compare is hard to negotiate. Some people will neglect non-EA options, others will neglect EA options, and I've had trouble artificially adding options to help them make a genuine comparison.
-One more point: some people barely have the time to attend a few sessions. It's hard to get them to actually rely, during career 1:1s, on methodological tools they haven't learned.
-A good way to cope with all of this is to encourage students to start things themselves: to create an org rather than join one. But not everyone has the necessary motivation for this.
I'm still happy to have started the year with epistemics, rationality, ethics, and meta-ethics, and to have run other sessions on intervention and policy evaluation, suffering and consciousness, and population ethics. I didn't desperately need sessions on GHD / Animal Welfare / AI Safety, though they're definitely "in demand".
Thanks for posting this.
First off, I want to acknowledge that discussing these issues is indeed very difficult. I'm glad you made it through whatever you had to go through (I could try to characterize that experience, but I expect any attempt on my side to fall short of being helpful), and I'm immensely sorry that you had to face all these different issues, for lack of a better term. I also want to pre-emptively say that I share some of your critiques and don't want to come across as judging your experience.
However, I have some questions on my mind. I'll just leave one here, in the hope that it doesn't come off as insensitive.
I'd be curious to see how switching from QALYs to something else would re-order EA priorities. What would your guess be? Would SWD plausibly rank above e.g. malaria prevention?
I'm not requesting anything extremely specific or committed, but I think it would help me paint a more complete picture of the critique, and potentially identify clearer points of disagreement.
As an ex-group organiser, I feel that my fallback plans have just been described with extreme precision.