Currently building a workshop aimed at teaching methods for managing strong disagreements (including with non-EA people). Also doing community building.
Background in cognitive science.
Interested in cyborgism and AIS via debate.
https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4
I often get tremendous amounts of help from people who know how to program and are enthusiastic about helping out over an evening.
Thanks for posting this.
First off, I want to acknowledge that discussing these issues is indeed very difficult. I'm happy that you made it through whatever you had to go through (I could try to qualify this experience, but I expect any effort on my side to fall short of being helpful), and I'm immensely sorry that you had to face all these different issues, for lack of a better term. I also want to pre-emptively say that I share some of your critiques and don't want to come off as judging your experience.
However, I have some questions on my mind. I'll just leave one here, in the hope that it doesn't come off as insensitive.
I'd be curious to see how switching from QALYs to something else would re-order EA priorities. What would your guess be? Would SWD plausibly rank above e.g. malaria prevention?
I'm not requesting anything extremely specific or committed, but I think it would help me paint a more complete picture of the critique, and potentially identify clearer points of disagreement.
[note: my donation is currently paused for financial reasons, but reflects what is written below]
Put simply, if living with a huge number of suffering animals is maybe wrong, then living with a rising number of suffering animals is certainly morally careless.
It's due to general trends: humans globally seem to do better and better at helping one another, with fewer and fewer child deaths overall, people seemingly more concerned about the suffering of other human beings, GDP rising, etc. I expect the ball to keep rolling.
Animals, by contrast, suffer in ever greater numbers, and we don't seem to have found a real solution to reverse this trend. Animal rights advocacy might end up being a "phase" in retrospect, and this is a worrying prospect. My hope is that my donations help build up momentum to eventually reach a tipping point.
I really liked this post and would be extremely happy to see more of those, especially if there is substantial disagreement.
To push back a little on policy orgs that seem to produce "vacuous" reports: when I once complained about the same thing coming from some anti-safety advocates, someone quoted me this exact passage from Harry Potter and the Order of the Phoenix (vanilla, not the fanfic):
[Umbridge makes an abstract sounding speech]
[Hermione said] 'It explained a lot.'
'Did it?' said Harry in surprise. 'Sounded like a load of waffle to me.'
'There was some important stuff hidden in the waffle,' said Hermione grimly.
'Was there?' said Ron blankly.
'How about: "progress for progress's sake must be discouraged"? How about: "pruning wherever we find practices that ought to be prohibited"?'
'Well, what does that mean?' said Ron impatiently.
'I'll tell you what it means,' said Hermione through gritted teeth. 'It means the Ministry's interfering at Hogwarts.'
I've talked with people in "big institutions" and they pretty much confirmed my suspicion. Officials who play the "real game" have two channels of communication. There is the "face", often a mandated role that they haven't chosen, meant to represent interests that can (and sometimes do) conflict with their own. Then there is the "real person", with their political opinions and nationality.
The two are held together by careful word selection and PR management. Except behind closed doors, actors will not give the "person's" reasons for doing something, as this could lead to serious trouble (the media alone is enough of a threat). They will instead generate rationalizations in line with the "face" that, in some instances, may suspiciously align with the "person's" reasons, and in other instances may serve as dogwhistles. However, their interlocutor is usually aware that these are rationalizations, and will push back with other rationalizations. There is, to some extent, a real person-to-person exchange, and I expect orgs that are good at this game to appear vacuous from the outside.
There are exceptions to this strategy, of course (think Donald Trump, Mr Rogers, or, for a very French example, Elise Lucet). Yet even those exceptions are not naive and take for granted that their counterpart is displaying some degree of hypocrisy.
It might be that most communication on X-risk really is happening; it's just happening in Umbridgese. This may be a factor you've already taken into consideration, however.
Thank you for this post! For not-so-technically inclined people, it really helps to have an external reading with surrounding arguments, as opposed to reading Sutskever himself. I really appreciate this kind of post and think it's useful for forum users.
I hope related AI Safety efforts have made / will make plans to make the best of this situation, if it happens.
Workshops:
https://deepcanvass.org/ organizes regular introductions to Deep Canvassing. My personal take is that the workshop is great, but I don't find it entirely aligned with a truth-seeking attitude (it's not appalling either), and I would suggest rationalists bring their own twist to it.
https://www.joinsmart.org/ also organizes workshops, whose themes often vary. Same remark as above.
There is a Discord server accessible from https://streetepistemology.com/; they organize regular practice sessions.
Motivational Interviewing and Principled Negotiation are common enough that you can probably find a workshop near where you live.
There's also the elephant in the room -my own eclectic workshop, which mostly synthesizes all of the above with (I believe) a more rationalist orientation and stricter ethics.
Someone told me about people in the US who trained on "The Art of Difficult Conversations"; I'd be happy to have someone leave a reference here! If you're someone who's used to coaching people on managing disagreements, feel free to drop your services below as well.
My position is quite the opposite: I put the symbol on my LinkedIn profile (and removed it from the URL) and WhatsApp profile.
I never dared to start a discussion about effective giving myself, but thanks to this, people around me started the discussion for me ("Oh, what does this emoji mean btw? What's the 10% pledge?"). I've been impressed by how curious, supportive and positive people were, and I didn't feel like I was proselytizing while doing so, merely answering their curiosity. And I'm speaking as someone who, up until that point, went as far as hiding the fact that I'd signed the pledge from my non-EA surroundings.
I don't think anyone on the EA Forum would get interested in effective giving through this, and I actually don't support targeting EAs first -I'd consider it a better outcome if people outside the community see the emoji rather than people within it. I think that EA has to be very outward-facing, or it will fail.
The default trajectory for animal welfare looks grim, extremely grim, and does not seem about to reach a tipping point anytime soon. I do believe that a pig that shrieks is in pain, and that inflicting this pain is immoral.
I am more uncertain when it comes to tractability. I also favor pluralism and tend to adjudicate my moral uncertainties with something like an inner preferential voting system.
At least in principle, different species may all be conscious, and all have the same range of capacities for hedonic intensity, but have very differently sized experiences. If so, they ought to be weighted accordingly. We should be indifferent between putting two individuals of a given species in the ice bath and putting one individual of a species that is very similar to the first but whose experiences are twice as large.
(Trigger warning: scenario involving non-hearing humans)
-If I think about a fish vs a fly, this makes some sense.
-If I think about a deaf person vs a hearing person, this starts to make less sense -empirically, I'd wager that there's no difference.
-If I think about a deafblind person vs a hearing-and-sighted person, then my intuition is the opposite: I actually care about the deafblind person slightly more, because their tactile phenomenal space has much higher definition than that of the hearing-and-sighted person.
All else being equal, the only thing that matters is the aggregated intensity, no matter the size.
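To make the contrast concrete (the notation is mine, not the article's), write each individual's suffering as hedonic intensity $I$ times experience size $s$:

$$\text{size-weighted view (as I read the article):}\quad 2 \cdot (I \cdot s) = 1 \cdot (I \cdot 2s) \;\Rightarrow\; \text{indifference}$$

$$\text{intensity-only view (mine):}\quad 2 \cdot I > 1 \cdot I \;\Rightarrow\; \text{the two individuals count for more}$$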
Expanding on this, and less on-topic:
-I've met a lot of people who had preferences about the size of their experience (typically, deaf people who want to stay deaf, hearing people who want to become deaf, etc.)
-Humans with a restricted field of experience seem to experience the rest more intensely. This intensity seems to matter to me.
-I also think that someone who is human-like except for having additional senses does not necessarily merit more moral consideration -only if such senses lead them to suffer; in terms of potential happiness, it does not move me.
-I also feel that people with fewer modalities and a preference for keeping them that way should be included in an inclusive society, not forced to acquire the "missing" modalities -much like I'm not interested, at the moment, in additional modalities -such as feeling sexually attracted to animals (it is, after all, something I have truly never felt).
I'm confused about how this fares under your perspective, and maybe your answer could help me recover the main distinctions you were trying to draw in this article?
Please note that I'm not accusing you of discriminating between modal fields among humans; I'm genuinely curious about the implications of your view. I already wrote a post on something related (my views might have changed since) and I understand that we disagree, but I'm not sure.
Re: agency of the community itself, I've been trying to get to this "pure" form of EA in my university group, and to be honest, it has felt extremely hard.
-People who want to learn about EA often feel confused and suspicious until you get to object-level examples. "Ok, impactful career, but concretely, where would that get me? Can you give me an example?". I've faced real resistance when trying to stay abstract.
-It's hard to keep people's attention without talking about object-level examples, even when teaching abstract concepts. It's even harder once you get to the "projects" phase of the year.
-People anchor hard on some specific object-level examples after that. "Oh, EA? The malaria thing?" (Even though my go-to examples included things as diverse as shrimp welfare and pandemic preparedness.)
-When it's not an object-level example, it's usually "utilitarianism" or "Peter Singer", which act a lot as thought stoppers and have an "eek" vibe for many people.
-People who care about non-typical causes actually have a hard time finding data and making estimates.
-In addition to that, the agency to really make estimates is hard to build up. One member I knew thought the most impactful career choice he had was potentially working on nuclear fusion. I suggested he work out the Impact-Tractability-Neglectedness of it (even as rough orders of magnitude) to compare it with another option he had as well as more traditional ones; a sketch of the kind of comparison I had in mind appears after this list. I can't remember him giving any numbers even months later. When he simply said he felt sure about the difference, I didn't feel comfortable arguing about the robustness of his justification. It's a tough balance to strike between respecting preferences and probing reasons.
-A lot of it comes down to career 1:1s. Completing the ~8 or so parts is already demanding. You have to provide estimates that are nowhere to be found if your center of interest is "niche" in EA. You then have to find academic and professional opportunities, as well as contacts, that are not referenced anywhere in the EA community (I had to reach out to the big brother of a primary-school friend I had lost track of to find a fusion engineer he could talk to!). If you need funding, even if your idea is promising, you need excellent communication skills to write a convincing blog post, plausibly enough research skills to produce ITN / cost-effectiveness estimates that aren't plucked from thin air, and a willingness to go to EAGs and convince people who might simply not care. Moreover, a lot of people expressly limit themselves to their own country or continent. It's often easier to stick to the usual topics (I get calls for applications for AIS fellowships almost every month; of course, I've never had one about niche topics).
-Another point about career 1:1s: the initial list of options to compare is hard to negotiate. Some people will neglect non-EA options, others will neglect EA options, and I had trouble artificially adding options to help them truly compare.
-Yet another point: some people barely have the time to come to even a few sessions. It's hard to get them to actually rely on methodological tools they haven't learned about in order to compare their options during career 1:1s.
-A good way to cope with all of this is to encourage students to start something themselves -to create an org rather than joining one. But not everyone has the necessary motivation for this.
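For reference, here is the kind of back-of-the-envelope comparison I had in mind in the fusion example above; it is roughly the standard ITN decomposition, and only one-order-of-magnitude guesses per factor are needed:

$$\frac{\text{good done}}{\text{extra resource}} \;\approx\; \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{impact / scale}} \times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resource}}}_{\text{neglectedness}}$$

Filling this in for each career option, even with very rough round numbers, and comparing the products is the whole exercise; it surfaces where the options actually differ rather than leaving the comparison to gut feeling.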
I'm still happy with having started the year with epistemics, rationality, ethics and meta-ethics, and with having done other sessions on intervention and policy evaluation, suffering and consciousness, and population ethics. I didn't desperately need to have sessions on GHD / Animal Welfare / AI Safety, though they're definitely "in demand".