I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
I believe it was AI-generated. This author runs a service that appears to be a custom LLM for writing medical articles, and they've posted a dozen AI-generated articles to the EA Forum, none of which were relevant or good. The profile picture appears to be AI-generated as well. They should probably be banned, IMO; I just reported the profile.
Thanks for giving your perspective, Holly. This is useful since you're one of the few EAs who's organizing protests full-time.
I really think my model is better and the evidence in these studies should only tweak it.
Related to this, I said I'm 90% confident that protests worked. But I'm less than 90% confident in the results of my meta-analysis (maybe only 75% confident). A good chunk of my confidence comes from other, weaker evidence.
I wish there were studies that could tell me how to do this better, but there aren't.
The studies do suggest one thing about how to do protests: nonviolence is better than violence. But you're already not doing violent protests.
My review of this summary:
[^1] Technically Wasow (2020) covered both violent and nonviolent protests, but the part about nonviolent protests was purely observational (no natural experiment).
[^2] Where "moving" includes both persuading someone to turn out, and persuading someone to change who they vote for. (Those two things are not equivalent when you're looking at absolute vote count, but they're equivalent with respect to vote share.)
I don't know much about the mechanisms, but based on the evidence I reviewed, I can say a few things:
Edit: Social Change Lab also has a review on what types of protests are most effective. I haven't reviewed the evidence in detail, but my sense is that it's mostly weak; still, it's better than no evidence.
I think you are one of the few people who disregards x-risk while having a well-considered probability estimate low enough that disregarding it makes sense. (Modulo some debate around how to handle tiny probabilities of enormous outcomes.)
I was more intending to critique the sort of people who say "AI risk isn't a concern" without having any particular P(doom) in mind, which in my experience is almost all such people.
If you use a standard expected-value-like method for determining preferences, you still get that insect suffering is very important. Say (for simplicity) you have a 50% credence that aggregate insect suffering is 10,000x more important than aggregate human suffering, and a 50% credence that it's 0x as important. In expectation, it is 5,000x more important.
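To make the arithmetic explicit, here's a minimal sketch using the illustrative 50/50 credences above (the numbers are placeholders for the example, not empirical estimates):

```python
# Expected importance of aggregate insect suffering relative to aggregate
# human suffering, given two equally weighted moral hypotheses.
credences = {
    10_000: 0.5,  # insect suffering is 10,000x as important as human suffering
    0:      0.5,  # insect suffering doesn't matter at all
}

# Standard expected value: sum of (outcome * probability) over hypotheses.
expected_ratio = sum(ratio * p for ratio, p in credences.items())
print(expected_ratio)  # 5000.0 -- insect suffering dominates in expectation
```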
If you reject expected value reasoning, then it's not clear how you can form consistent preferences. Perhaps under a "moral parliament" view, you could allocate 50% of your charitable resources to insects and 50% to humans. IIRC there are some issues with moral parliaments (I think Toby Ord had a paper on it) but there might be some way to make it work.
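For contrast, here's a crude sketch of the proportional-allocation reading of the parliament idea described above (the dollar budget is hypothetical, and real moral-parliament proposals involve bargaining between views rather than a simple proportional split):

```python
# Two ways of turning the same 50/50 credences into a budget allocation.
credence_insects_matter = 0.5
budget = 100_000  # hypothetical charitable budget in dollars

# Expected-value reasoning: the 10,000x hypothesis dominates the expectation,
# so essentially the whole budget goes to insect welfare.
ev_allocation = {"insects": budget, "humans": 0}

# Proportional "parliament"-style split: each moral view controls a share of
# the budget equal to your credence in it.
parliament_allocation = {
    "insects": budget * credence_insects_matter,
    "humans": budget * (1 - credence_insects_matter),
}

print(ev_allocation)          # {'insects': 100000, 'humans': 0}
print(parliament_allocation)  # {'insects': 50000.0, 'humans': 50000.0}
```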