I was going through Animal Charity Evaluators' reasoning behind which countries to prioritize (https://animalcharityevaluators.org/charity-review/the-humane-league/#prioritizing-countries) and I notice they judge countries with a higher GNI per capita as more tractable. This goes against my intuition, because my guess is your money goes further in countries that are poorer. And also because I've heard animal rights work in Latin America and Asia is more cost-effective nowadays. Does anyone have any hypotheses/arguments? This quick take isn't meant as criticism, I'm just informing myself as I'm trying to choose an animal welfare org to fundraise for this week (small, low stakes).
When I have more time I'd be happy to do more research and contact ACE myself with these questions, but right now I'm just looking for some quick thoughts.
Hey Jeroen! I'm a researcher at ACE and have been doing some work on our country prioritization model. This is a helpful question and one that we've been thinking about ourselves.
The general argument is that strong economic performance tends to correlate with liberalism, democracy, and progressive values, which themselves seem to correlate with progressive attitudes towards, and legislation for, animals. This is why it’s included in Mercy For Animals’ Farmed Animal Opportunity Index (FAOI), which we previously used for our evaluations and which our current country prioritization model is still loosely based on.
The relevance of this factor depends on the type of intervention being used - e.g., economic performance is likely to be particularly relevant for programs that depend on securing large amounts of government funding. For a lot of programs it won’t be very relevant, and for some a similar but more relevant indicator of tractability could be the percentage of income not spent on food (which we also use), as countries are probably more likely to allocate resources to animal advocacy if their money and mental bandwidth aren’t spent on securing essential needs. (Because of these kinds of considerations, this year we took a more bespoke approach when considering the likely tractability of each charity's work, relying less on the quantitative outputs of the country prioritization framework.)
Your intuition about money going further in poorer countries (everything else being equal) makes sense. We seek to capture this where possible on a charity-by-charity basis in our Cost-Effectiveness Assessments. For country prioritization more broadly, in theory it’s possible to account for this using indices like the OECD’s Purchasing Power Parities (PPP) Index. Various issues have been raised with the validity of PPP measurements (some examples here), which is one of the reasons we haven’t included it to date in our prioritization model, but for next year we plan to explore those issues in more detail and what the trade-offs are.
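To make the purchasing-power point a bit more concrete, here's a minimal toy sketch (the numbers and the function are hypothetical illustrations, not our actual model) of how a PPP-style price-level adjustment could change a naive cost-effectiveness comparison between two otherwise identical countries:

```python
# Illustrative only: hypothetical numbers, not ACE's actual model.

def adjusted_cost_effectiveness(animals_helped_per_dollar: float,
                                price_level_ratio: float) -> float:
    """Scale a nominal cost-effectiveness estimate by a PPP-style
    price-level ratio (local price level relative to a reference country).
    A ratio below 1 means a dollar buys more locally, so the adjusted
    estimate goes up."""
    return animals_helped_per_dollar / price_level_ratio

# Two hypothetical countries with identical nominal estimates:
# country A is wealthier (higher price level), country B is poorer.
country_a = adjusted_cost_effectiveness(animals_helped_per_dollar=10, price_level_ratio=1.0)
country_b = adjusted_cost_effectiveness(animals_helped_per_dollar=10, price_level_ratio=0.4)

print(country_a)  # 10.0 animals helped per dollar-equivalent
print(country_b)  # 25.0 animals helped per dollar-equivalent, purely from lower local prices
```

Any real adjustment would of course interact with the measurement issues mentioned above, which is part of what we want to explore next year.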
Hope that helps!
Thank you so much for this elaborate and insightful response, Max! I understand the argument much better now.
Glad this question-and-answer happened!
A meta note that sometimes people post questions aimed at an organization but don't flag it to the actual org. I think it's a good practice to flag questions to the org, otherwise you risk:
- someone not at the org answers the question, often with information that's incorrect or out of date
- the org never sees the question and looks out-of-touch for not answering
- comms staff at the org feel they need to comb public spaces for questions and comments about them, lest they look like they're ignoring people
(This doesn't mean you can't ask questions in public places, but do also email the org with a link to your question!)
Thanks for pointing this out! I wasn't really sure where my question fell on the axis from "general EA animal welfare knowledge" (e.g. prioritizing chickens over cows) to "specific details about how ACE evaluates charities". By posting a quick take on the forum, I was hoping it was closer to the former, that I was just missing something obvious and that ACE wouldn't even have to be bothered. I shouldn't have overlooked the possibility that it might be more complicated!
Don't forget to go to http://www.projectforawesome.com today and vote for videos promoting effective charities like Against Malaria Foundation, The Humane League, GiveDirectly, Good Food Institute, ProVeg, GiveWell and Fish Welfare Initiative!
How does one vote? (Sorry if this is super obvious and I'm just missing it!)
+1. I went to the Effective Altruism Barcelona GiveDirectly video, and the voting link just took me to the GiveWell homepage.
In case you're interested in supporting my EA-aligned YouTube channel A Happier World:
I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn't reached, you won't get your money back directly; instead it will go back into your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the goal.
Manifund fundraising page
EA Forum post announcement
At this point, I'd be willing to buy out the credit of anyone who obtains credit on Manifund and applies it to this project, in case the project doesn't end up getting funded. Hopefully Manifund will find a more elegant solution for this kind of issue (there was a discussion on Discord last week), but this should work as a stopgap.
(Offer limited to $240, which is the current funding gap between current offers and the $2500 minimum.)
I'd love to do more weekly coworkings with people! If you're interested in coworking with me, you can book a session here: https://app.reclaim.ai/m/jwillems/coworking
We can try it out and then decide if we want to do it weekly or not.
More about me: I run the YouTube channel A Happier World (youtube.com/ahappierworldyt) so I'll most likely be working on that during our sessions.
Today we celebrate Petrov Day: the day Stanislav Petrov potentially saved the world from a nuclear war, 40 years ago now.
I made a quick YouTube Short / TikTok about it: https://www.youtube.com/shorts/Y8bnqxAbMNg https://www.tiktok.com/@ahappierworldyt/video/7283112331121347873
An unpolished attempt at moral philosophy
Summary: I propose a view combining classic utilitarianism with a rule that says not to end streams of consciousness.
Under classic utilitarianism, the only things that matter are hedonic experiences.
People with a person-affecting view object to this, but that view comes with issues of its own.
To solve the tension between these two philosophies, I propose a view that adds a rule to classic utilitarianism disallowing directly ending streams of consciousness (SOC).
This is a way to bridge the gap between the person-affecting view and the 'personal identity doesn't exist' view, and it tries to solve some population ethics issues.
I like the simplicity of classic utilitarianism. But I have a strong intuition that a stream of consciousness is intrinsically valuable, meaning it shouldn't be stopped or destroyed. Creating a new stream of consciousness isn't intrinsically valuable (except for the utility it creates).
A SOC isn't infinitely valuable. Here are some exceptions (a toy formalization follows this list):
1. When not ending a SOC would result in more SOCs ending (see the trolley problem): basically, you want to break the rule as little as possible
2. The SOC experiences negative utility and there are no signs it will become positive (see euthanasia)
3. Ending the SOC will create at least 10x its utility (or some other critical level)
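Here's a minimal sketch of how these exceptions could be formalized as a single decision rule (the function, its inputs, and the way the 10x critical level is encoded are just my own illustration, not a settled formulation):

```python
# Toy formalization of the three exceptions; names and threshold are placeholders.

CRITICAL_LEVEL = 10  # exception 3: utility created must be at least this multiple

def may_end_soc(socs_ended_if_we_refrain: int,
                expected_utility_of_soc: float,
                utility_created_by_ending: float) -> bool:
    """Return True if ending one stream of consciousness (SOC)
    falls under one of the three exceptions."""
    # Exception 1: refraining would end even more SOCs (trolley-style cases).
    if socs_ended_if_we_refrain > 1:
        return True
    # Exception 2: the SOC has negative utility with no sign of improvement.
    if expected_utility_of_soc < 0:
        return True
    # Exception 3: ending it creates at least CRITICAL_LEVEL times its utility.
    if utility_created_by_ending >= CRITICAL_LEVEL * expected_utility_of_soc:
        return True
    return False

# Example: a trolley-style case where refraining would end five SOCs.
print(may_end_soc(socs_ended_if_we_refrain=5,
                  expected_utility_of_soc=100,
                  utility_created_by_ending=0))  # True
```

Real cases obviously won't reduce to three numbers, but writing it out this way highlights that the rule only constrains acts that directly end a SOC; everything else stays ordinary expected-value reasoning.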
I believe this is compatible with the non-identity problem (it's still unclear who counts as 'you' if you're duplicated or if you're 20 years older).
But I've never felt comfortable with the teleportation argument, and this intuition explains why (as a SOC is being ended).
So generally this means: making the current population happier (or making sure fewer people die) > increasing the number of people
Future people don't have SOCs as they don't exist yet, but it's still important to make their lives go well.
Say we live in a simulation. If our simulation gets turned off and gets replaced by a different one of equal value (pain/pleasure wise), there still seems to be something of incredible value lost.
Still, if the simulation gets replaced by a sufficiently more valuable one, it could still be good, hence exception number 3. The exception also allows you to kill someone in order to prevent future people from never coming into existence (for example: someone is about to spread a virus that makes everyone incapable of reproducing).
I don't think adding this rule changes the EV calculations regarding increasing pain/pleasure of present and future beings when it doesn't involve ending streams of consciousness (I could be wrong though).
This rule doesn't solve the repugnant conclusion, but I don't find it repugnant in the first place. I think my bar for a life worth living is higher than most people's.
How I came to this: I really liked this forum post arguing "Making current population happier > increasing amount of people". But if I agree with it, that means there's something of value besides pure pleasure/pain. This is my attempt at finding what it is.
One possible major objection: if you give birth, you're essentially causing a new SOC to eventually be ended (as long as aging isn't solved). Perhaps this is solved by saying you can't directly end a stream of consciousness but can ignore second- and third-order effects (though I'm not sure how to make sense of that).
I'd love to hear your thoughts on these ideas. I don't think these thoughts are good enough or polished enough to deserve a full forum post. I wouldn't be surprised if the first comment under this shortform completely shattered this idea.
Reason why I call it a "stream of consciousness": Streams change over time. Conscious beings do too. They can also split, multiply or grow bigger.
One thing I worry about though: Does your consciousness end when sleeping? Does it end when under anesthesia? These thoughts frighten me.
EA-aligned video content creators
I made a spreadsheet of all EA-aligned video content creators that I'm aware of. This doesn't mean they make EA content necessarily, just that they share EA values. If I've missed anyone, let me know!
https://docs.google.com/spreadsheets/d/1ukTCN4ADCkTLw9onQO-sTeDQfZQBz3bn_vjFVw6rqTQ/edit?usp=sharing
This was a really cool thing to do!
In case you feel like adding another feature, it might be nice to include an example or two of each channel's EA-related content in another column. It's easy to tell how Rational Animations is EA-focused, but I wasn't sure which content I should look at for e.g. the person whose TikTok account was largely focused on juggling.
Like I said, they don't necessarily make EA content. I think I'll add a column specifying whether they do or not.
Responding as per Samuel Shadrach's suggestion:
Neil Halloran seems like a good addition.
He doesn't seem to be an EA, yet he writes rigorously on some EA-aligned topics.
https://www.youtube.com/channel/UCtbym4p03AxE1vF9QB4wB5A
See here: https://forum.effectivealtruism.org/posts/matte7zzExKaZiTNo/charles-he-s-shortform?commentId=WPnGpLGr88afdsjyc
Added :)
I added the transcription of my newest video on sentientism and moral circle expansion to the EA Forum post :) https://forum.effectivealtruism.org/posts/2kNeKoCcHAHQRjRRH/new-a-happier-world-video-on-sentientism-and-moral-circle