I live for a high disagree-to-upvote ratio
My hobby horse around these parts has been that EA should be less scared of reaching out to the left (where I’m politically rooted) and of thinking about what commonalities we have. This is something I have already seen in the animal welfare movement, where EAs are unafraid to work with existing vegan activism, and have done a good job of pitching philanthropic funding to them despite large differences of opinion on the margins.
As you note, it’s not unreasonable that EA looks very far left from some perspectives. GiveDirectly is about direct empowerment, and I would argue that a lot of global development work, especially economic development, can be anti-imperialist and generally accords with Marxist ideas of the Internationale. Some better outreach and PR management in these communities would go a long way, in the same way that it has for the political centre-left, who seem to get a lot more attention from EA.
I strongly agree, and would add that this is a big concern of mine in direct intervention delivery too. Kaya Guides is fortunate enough to have a team that understands digital marketing, and we decided early on to recruit participants for our intervention using Meta ads. When I joined, I implemented direct conversion tracking from our intervention. Together, these have reduced recruitment costs to around US$1 per participant, which is substantially cheaper than forming partnerships in the early years of an organisation, and much more flexible.
I have been advocating for this where I can within AIM, for interventions that support it (and sometimes go a step further, advocating for selecting interventions that are well suited to digital delivery).
Let me know if there are ways I can help advocate for better growth marketing within EA interventions; I am very passionate about this!
A few points:
Small drive-by question for you: in your opinion, if C. elegans is conscious and has some moral significance, and we could hypothetically train artificial neural networks to simulate a C. elegans, would the resulting simulation have moral significance?
If so, what other consequences flow from this—do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?
If not, what special sauce does C. elegans have that an artificial neural network does not? (If you’re not sure, where do you think the difference might lie?)
(Asking out of genuine curiosity—haven’t had a lot of time to interface with this stuff)
I guess I don’t find your conclusion intuitive. I’m sure there are a range of preference questions you could ask these extreme sufferers. For example, whether they, at a 5/10 life satisfaction, would trade places with someone in a low-income country with a life satisfaction of 2/10 who does not have their condition.
My hunch is that the former is true: that there is something you can elicit from these people that isn’t being captured by the Cantril Ladder. (In my work, we’ve found the Cantril Ladder to be unreliable in other ways.) But on the other side of this, I do worry about rejecting people’s own accounts of their experiences: it may literally be true that these people are somewhat happy with their lives, and that we should focus our resources on those who report that they aren’t!
However, I would strongly wager that the majority of this sample does not believe in the three ideological points you outlined around authoritarianism, terrorist attacks, and Stalin & Mao (I think it is also quite unlikely that the people viewing the TikTok in question would believe these things either). Those latter beliefs are extremely fringe.