This is a piece written by someone who is EA-adjacent, who has started making donations to EA-endorsed charities, and who believes that Effective Altruism is currently probably the most important, most intellectually honest, and most benevolent intellectual movement in the world.
I think Scott Alexander put it perfectly in the following article:
https://www.astralcodexten.com/p/in-continued-defense-of-effective
His article lists a number of enormously valuable contributions the EA movement has made. I fully agree with him. I think this movement really rocks on so many levels. And for this reason I feel quite bad about myself when I feel compelled to offer a more critical perspective on some things.
So, as some sort of disclaimer: I am fully with EAs when it comes to practice. I think 10% pledges are a fantastic idea. I think people should donate to human charities, animal charities, and longtermist charities as well.
I think longtermism, especially when it comes to preventing existential disaster, is not crazy at all. And in many cases such concerns aren't even that long-term: many existential risks are things we're facing right now, in the coming years and decades, and attention to them is quite urgent.
I think it's incredibly important to ensure that AI development goes well.
All that being said, I do have some personal disagreements with the philosophical approach that EA typically takes. Namely, I'm not fully convinced by some forms of utilitarianism.
I think consequentialism itself is probably a great framework. Focusing ethics on maximizing positive consequences and minimizing negative ones makes a lot of sense; it's a simple and straightforward way to evaluate the goodness of actions. But here's where I deviate from standard EA views:
- I think there are other things that matter and that might be ethically important besides just consequences. Those other things include loyalty, love, relationships, social contracts, reciprocity, fairness, etc.
- When it comes to consequences themselves: I think pleasure and pain aren't the only things that have value and that matter. Positive consequences should also include the value of life itself, and perhaps the value of flourishing. Negative consequences should also include things like betrayal, unfairness, etc. Consequentialism, IMO, should try to maximize all positive consequences, not just pleasure, and this includes, as I said, flourishing, intelligence, skills, health, meaning, life itself, etc.
So this is a framework from which I operate. That being said, let's proceed to the main part of the article.
Also, a thing to note: I'm quite high on Neuroticism (especially the Anxiety facet), so unlike calmer people, I'm more likely to detect situations in which things could go wrong, and potential dangers.
Love, rationalization, and risks of pure hedonistic utilitarianism
There is a popular line of thinking that goes like this:
1. Due to their enormous numbers, wild animals (especially insects, or even soil nematodes) should dominate our moral concerns even if they are probably unconscious. The probability of their being conscious is still high enough that, combined with their numbers, it makes all our other moral concerns, such as those for people or factory farm animals, unimportant in comparison.
2. Most of these creatures have net negative lives.
3. People, through agriculture and other activities, reduce the habitats of these creatures, reducing their numbers. This prevents a lot of their suffering.
4. Therefore supporting human charities that save lives is the best thing we can do to reduce the total amount of suffering (which mostly consists of wild animal suffering).
When people arrive at point 4, they feel kind of happy and satisfied. They can keep donating to their beloved human charities while having rock-solid logic to support it.
But what if reasoning steps 1-4 are a rationalization we use to support what we wanted to do anyway - that is, to donate to human charities, which we do out of love and care for human beings, and not out of a desire to maximize abstract utility?
What if our reasoning pointed us in the opposite direction, and said that those insects and nematodes have great lives, so supporting humans would be a terrible thing to do?
The meat eater problem already does something similar.
Following logic like that is incredibly risky, because it seems likely that at some point it will lead us to a conclusion that is utterly unacceptable.
But this all matters only if we're utterly impartial hedonistic utilitarians.
Reintroducing Some Partiality (and Love) into the Equation?
Maybe we shouldn't be utterly impartial hedonistic utilitarians. Maybe there are reasons why this is wrong, even if we can't at this point logically find out why.
But if we operated from a different framework, a framework of genuine love and care, we would not even need to rationalize and justify our donating to human charities. It would be something we do unashamedly out of our love and concern for the welfare of human beings.
I'm not saying we should ditch effectiveness calculations or be totally partial.
But perhaps we could keep some level of partiality, while fostering and developing impartiality?
Perhaps we can use effectiveness calculations to find the most effective ways to help the causes we care about (like humans, or farm animals, or wild animals, etc., depending on what you actually care about). So we would be finding the most effective ways to help within a single domain of our choice.
This would keep a level of partiality, but it would also perhaps be a bit more honest (as we wouldn't need to resort to potential rationalizations and justifications) and a less risky approach (as we would avoid having to deal with conclusions telling us to do something we otherwise find terrible, like letting people die because they are meat eaters).
And for those who value love and find it one of the foremost virtues, it would also keep love relevant in our lives. We help those we love, but we can't love everybody.
Moral Circle Expansion
What about moral circle expansion in this framework?
I do support moral circle expansion. But I would rather call it moral "circles".
I think there might be non-consequentialist but still valid reasons to favor those in our innermost circles more than those in outer circles. This is not to say that we shouldn't care about those in outer circles. But perhaps we should prioritize those in inner circles.
Let's say you have to choose whom to save: a human child or 3 young elephants. Most people would cringe at the idea of letting a child die to save elephants. But 3 elephants might matter more from a utilitarian viewpoint: elephants typically live long, are near the top of the food chain, and have extremely large brains with more total neurons than humans. So sacrificing a child to save 3 young elephants might not be too wrong from a consequentialist viewpoint.
But we intuitively know that saving humans matters more, and anyone who would let a child die in such a situation would probably be viewed unfavorably.
So if we can agree that saving a child is better in this situation, then this could form a basis for including only humans in our innermost circle.
The second circle would include farm animals, as we are directly responsible for their existence; they wouldn't exist without us at all. The second circle would also include AIs, at least as long as we're not sure whether they are sentient. If we prove beyond reasonable doubt that they are sentient, intelligent beings whom we created, then they should be granted the status of persons and go straight to the first circle together with humans.
The third circle would include wild animals. We didn't bring them into existence. They also lack certain qualities that humans have and that we deem valuable, which makes them a lower priority.
First of all, we should underline that we care about everyone, in all circles. We can't afford to say we don't care even about the nematodes in the outermost circle. We should strive to benefit everyone eventually.
But as long as we don't have the capacity for this, we should prioritize those in the inner circles. So perhaps those who claim that donating to animal charities is in bad taste while there are still starving children, or children dying of horrible diseases, are really onto something. Maybe they are right. (And those who care about nematodes would agree, but for a different reason.)
The reasons to prioritize humans are numerous:
- To foster a positive image of Effective Altruism as a philanthropic (that is: human-loving) movement that cares about the well-being of people and the interests of humanity.
- Because only humans can expect to be helped by charitable organizations and feel betrayed if they are not helped.
- For reciprocity reasons: humans (and now some AIs) are the only creatures that can understand us, cooperate with us, and be willing to help us too. So we should try to help them first.
- Humans are the only creatures so far capable of being moral agents that can help us do good; they are not just moral patients, but also agents. They can be quite useful.
- Only humans so far can understand and participate in a social contract. You can't rely on a lion not eating you. But you can rely on most humans (unless crazy or psychopathic) not hurting you.
- Only humans can experience and participate in certain things that we consider very valuable, like making art, appreciating music, doing philosophy, or discussing effective altruism. Dismissing such values as unimportant, biased, or speciesist means that we would value a wireheaded creature or a pile of hedonium more than a heroic composer like Beethoven, who had his struggles but also produced so much sublime beauty and meaning.
- Religious argument: only humans might be endowed with souls, and only humans have been created in God's likeness.
- Humans flourish more than other animals, and flourishing is important. (Will be explained later)
- Because we, as humans, love humans more than others, and we don't want to dismiss the importance of love.
- Dismissing human interests, or giving them the same weight as the interests of, say, insects, could be a serious strategic mistake or miscalculation. Imagine we build a utilitarian super AI, and it concludes that animals have net positive lives, that humans are destroying their habitats or harming them, and that the best way to improve total welfare is to eliminate humans. Such an AI would clearly NOT be aligned with human interests. And I still think we should try to align AIs with our interests more than we should try to make them perfectly moral. First of all, we DON'T know what a perfect morality would entail; we don't even agree on whether there is an objective morality; and even then, we would prefer to be in charge and not defer to the moral decisions of AIs, at least not until we're sure of their sentience, benevolence, wisdom, and prudence. Even if you want to be a good utilitarian and eventually increase total welfare in the universe as much as you can, to be able to do this we must survive as a civilization. Humanity must survive. If we're wiped out for the sake of insects, we can't be sure that the AIs that remain in charge will be moral and keep increasing welfare in the universe. If they decided to wipe us out instead of taking more constructive steps, like inventing lab-grown meat, brainwashing us into becoming vegans, or any other cleverer and less violent solution, then such an AI is probably not as benevolent as it seems. Or maybe it would be benevolent in its own way (by turning the universe into hedonium, creating wireheaded creatures, etc.), but maybe that is not exactly our idea of a positive future. Likely such an AI would wipe out all existing living beings if their atoms could be rearranged into a more hedonically valuable configuration.
- This restates the first point above, but whereas that point emphasized the positive consequences of the EA movement being pro-humanity, here I will emphasize the very negative consequences of the EA movement being perceived as indifferent (or by some as even hostile) to human interests and to life itself. In short, focusing our attention on invertebrates, or saying that 99.99% of the goodness of helping people derives from its effects on invertebrates (by lowering their numbers) and not from its direct effects on people, can be seen as extremely cold and detached, bordering on psychopathic. Both parts of that sentence are problematic. The part that says helping humans for the sake of humans is unimportant in the big scheme of things is problematic because it's dismissive of human interests. The part that says we're "helping" invertebrates by lowering their numbers is problematic because it's basically an anti-life stance that sees the value of life as pretty much bad, and it can be the start of an extremely problematic slippery slope. You could say I'm falling victim to the naturalistic fallacy, deeming life "good" just because it's natural. But I'm not sure it's a fallacy at all. If life is the ONLY thing you (and everyone else, including insects, nematodes, etc.) can ever get, you can accept it or reject it. Rejecting it is basically rejecting reality. Life may not be some random quirk of nature; it could be either part of a divine plan or an inevitable consequence of physical laws. It may be a physical tendency: matter, in certain favorable conditions, tends to self-organize into units that lower entropy inside by exporting it outside, and this is just the way it is. Most people accept life and don't kill themselves even when facing intense adversity. Most people would prefer a life sentence, even in cruel prison conditions, to a death sentence.
Maybe the same nature or physics that organizes matter into self-replicating, entropy-reducing units also makes sure those units "want" to be alive. So if insects want to be alive, perhaps we should simply let them be and not try to reduce their numbers for the sake of reducing their suffering. But more importantly, I think we should be on the side of humans. Being indifferent to human interests, or weighing them the same as insects', can be perceived as a gross failure to read the room, something that would likely alienate a lot of people, especially when the situation in the world is not exactly optimal. This assessment of the current situation deserves its own chapter.
But before that, I need to add: many of the points that favor humans could very quickly also apply to AIs, and some of them already do. So, IMO, it makes sense to include AIs in the first circle of our moral concern together with humans rather soon. And our responsibility for bringing them into existence is even greater than in the case of farm animals.
Polycrisis
Right now there are a lot of bad things going on in the world. And people - normal, good, well-meaning people - are worried about it. We've seen a resurgence of war in many parts of the world: Ukraine, Gaza, Sudan. Climate change threatens civilization and might cause the extinction of numerous species and damage to ecosystems, and to humans, in many places in the world. We're witnessing the erosion of democracy in many Western countries and increased polarization. People are worried about the sustainability of our economies, the accumulation of debt, the depletion of resources, etc. The capitalist way of life and capitalist values might be incompatible with reproduction: in most first world countries fertility has dropped below replacement level, and we don't know any easy way to get it back above 2.1. At the same time there is enormous progress in AI which, if unaligned, could wipe out humanity (and the rest of the biosphere), and this could happen rather quickly. There is more. We see the price of gold rising and numerous countries in Europe building up their armies. Talk of global power confrontation is heard more often. The US has just decided to resume nuclear tests.
Now imagine that in such a situation, at such a delicate moment in history, when the world is pretty much burning, we decide to focus on invertebrates, or to deem human charities good only because they "help" insects. What an error of reading the room that would be! Is this really the way to go about things?
Antipopulism as dangerous as populism
So, most of the significant movements that changed the world were rather popular. They appealed to things people really care about, and for this reason it was easy for them to obtain popular support.
Climate activism is popular because people care about the environment and don't want the world to be destroyed by runaway climate change.
Pro-democracy movements are popular because people don't want to live under dictatorships and consider it unfair.
The Live Aid concerts in 1985 were extremely popular because people viscerally felt sick in their stomachs realizing that children in Ethiopia were dying of hunger.
Suffragettes were popular because women honestly wanted equal rights and the ability to influence politics.
Black activism is popular because black people don't want to be discriminated against on the basis of their race, and most whites nowadays agree.
But how can you really sell to people the idea that helping humans is good mostly because it prevents insect lives from coming into existence?
First of all, is it even true? In a narrow hedonistic utilitarian calculus it might be. But perhaps this is a reason to question that type of calculus, or to modify and improve it to include different types of values. Is Beethoven's greatest contribution preventing some insects from existing, and not all the music he composed? How should we really think of ourselves? What's our legacy?
I mean, yes, there are some highly impactful utilitarians who will indeed make great utilitarian contributions, and this will be their main contribution in life, more important than preventing insects from existing.
But what about regular people, who do their honest work and try to leave the world in a slightly better state without explicitly going through that utilitarian framework? Is a good teacher good because they educated many students, or because they caused fewer insects to exist? Is a politician good because they made some positive impact on policy, or because they prevented some insects from existing? Is a doctor good because they saved patients, or because they allowed those patients to keep preventing insects from coming to life? If our impact on insect welfare dominates everything else, should we be indifferent about other, more conventional measures of our legacy? Are our dreams irrational illusions?
Are conventional measures of our impact meaningless?
If I donate to Against Malaria Foundation, should I feel happy about myself because I helped some people avoid potentially deadly malaria, or because, in expectation, I caused the number of insects in the wild to decrease?
If it's the latter, maybe I should have spent the money on things that more effectively reduce insect populations.
If we're serious about this insect-minimizing priority, why don't we destroy as much nature as we can?
My personal take is that this way of thinking is mistaken, and that the value and legacy of each human is NOT derived mainly from their impact on insect populations, but from all the other conventional measures of human value and contribution to the world. If you're an asshole in life, you're an asshole, regardless of how many insects you prevented from existing. And if you made some meaningful impact on the world, you're a great person regardless of how much or how little you contributed to reducing insect numbers.
On flourishing
What if flourishing is what matters, and not just enjoyment? Building a cathedral is a form of flourishing, composing a symphony is a form of flourishing, enjoying a good novel is flourishing, and it often matters to people more than purely hedonistic pleasures like sex or chocolate. Just being alive and healthy is also a form of flourishing. The way a lion chases its prey is a display of its health and flourishing. The way prey sometimes successfully escapes a predator is another display of flourishing.
So the reason we find rain forests and wildlife beautiful is that they involve a lot of flourishing, and we find flourishing good. This flourishing is often ignored in utilitarian calculations.
So my take is: on top of its hedonistic value (which might be negative, though I'm not entirely sure about that either), wildlife is good because it involves flourishing, so perhaps for this reason we shouldn't try to eliminate it.
At the same time, humans create even more flourishing. There's more flourishing in Nadia Comaneci's gymnastic performances than in a lion chasing animals on the savanna. There's more flourishing in a Mozart symphony and its execution by a philharmonic orchestra than in a nightingale's song. There's more flourishing in our creation of the World Wide Web than in a spider's creation of its web. So that's another reason to prioritize humans.
But the important thing to remember is: animals flourish too. Even though it's less magnificent than the Internet, a spider's web is a fantastic form of flourishing. Even though it's less monumental than a Mozart symphony, a nightingale's song is a fantastic example of flourishing.
If we disregard the value of all that flourishing in both humans and animals, we can end up with a stance in which the predominant value of humans is their successful contribution to reducing the number of wild animals, and this could be wrong in two ways:
- Wrong because it ignores the other ways in which humans are good and important (flourishing).
- Wrong because it ignores the value of insect life and flourishing and judges it exclusively from a hedonistic perspective, ultimately deeming insect lives net negative. If flourishing and the value of life itself were included in the calculations, their lives could easily turn out to be positive.
I'm not saying it is surely wrong. I'm saying it might be wrong. I'm saying we should be less confident about it being right.
And I'm definitely saying that getting popular support for a movement that mainly values humans for their contribution to decreasing the number of insects, and sees most of wildlife as net negative, will be extremely difficult.
Practical implementation of my idea of moral circles
So I have outlined 3 circles of moral concern, and I've explained why the first circle is reserved for humans and sentient AIs. But how would this prioritization function in practice?
My intuition is roughly the following:
As long as there are those in the nth circle who are suffering terribly or are in danger of untimely death, directing our resources to those in the (n+1)th circle might be in bad taste.
For example, as long as there are children dying of horrible, preventable diseases, prioritizing animals over them could be considered in bad taste.
But once everyone in the 1st circle is doing reasonably well (even if this includes some tolerable suffering), we should shift our focus to those in the second circle, and so on.
So first make sure all people and sentient AIs are doing well, then focus on farm animals; once the problem of their welfare has been solved in a satisfactory way, we can focus on wild animals, and so on.
This is NOT to say that we should not donate to farm and wild animal charities at all as long as human problems remain unsolved. But it is to say (and this is entirely my personal opinion) that our serious, non-negotiable financial commitments, if and when we decide to make them (like the 10% pledge for those who took it), should go to human charities and to existential risk alleviation (everyone is free to pick a ratio or vary it), which concerns everyone in all the circles.
But after fulfilling this serious commitment, everyone is free to do whatever they want with the rest of their money, be it going on vacation, eating in restaurants, or donating to animal charities.
Of course donating to animal charities is laudable and should be encouraged, and it is a much better way to spend money than on some useless gadget you'll discard in 3 days. But spending on animal charities should come not from the part of income that makes up a serious pledge, but from our normal spending money.
Or, if you're so inclined, you can make 2 pledges: 10% to humans and averting existential risks (which concerns everyone in all the circles), which should be important and non-negotiable, and as much as you wish to animals. It can be 5%, another 10%, or even 50%. But even if you donate 50% of your income to animal charities, you should still donate 10% to human charities (and/or existential risk prevention). That 10% should be seen as a more serious obligation, while the 50% that goes to animals should be seen as your exercise of free will to do what you please with your money, and as a way to express your LOVE for animals, which you can also unashamedly cultivate and display.
Very important added nuance (1+1/2+1/4)
EDIT: Perhaps both of my suggestions about prioritization have been too narrowly focused on human interests. That's how I feel after engaging in discussion with some commenters. I still feel we should prioritize humans, and that our main focus should be on the 1st circle until its problems are resolved, then the 2nd circle, then the 3rd circle.
But perhaps we could start caring about all 3 circles right now, just with different level of financial commitment.
So the idea is to give X dollars to the first circle (humans and existential risk prevention), 1/2 X dollars to the second circle (factory farm animals), and 1/4 X dollars to the third circle, that is, wild animals.
Translated into percentages it would go like this:
Human charities & existential risk prevention: 57.14% of donation money.
Factory farm animal welfare: 28.57% of donation money.
Wild animal welfare: 14.29% of donation money.
I'm still not sure if those numbers make any sense. But it seems like a way to care about all circles while still maintaining focus on priorities.
I believe that even 28.57% and 14.29% of donation money going to the 2nd and 3rd circles, respectively, could make a huge difference, as those charities can be extremely effective.
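For anyone wondering where those percentages come from, they are just the geometric weights 1, 1/2, 1/4 normalized so they sum to 100%. A minimal sketch of the arithmetic (the variable names are mine, purely illustrative):

```python
# Geometric weights for the three moral circles:
# humans & x-risk, factory farm animals, wild animals.
weights = [1.0, 0.5, 0.25]
total = sum(weights)  # 1.75

# Normalize each weight into a percentage of the donation budget.
shares = [round(100 * w / total, 2) for w in weights]
print(shares)  # [57.14, 28.57, 14.29]
```

The same recipe works for any other weighting you prefer (say 1, 1/3, 1/9): pick the weights, divide each by their sum, and you have a split that always adds up to the whole donation budget.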
That is, of course, just my personal opinion, and it concerns those who have already made a pledge, or plan to do so in the future, when their finances allow. This is how, right now, I feel inclined to think about it, but everyone chooses for themselves.
Due to my financial situation (currently unemployed), I still haven't made an official pledge, but I have already donated to some charities (GiveWell recommended charities and even the Recommended Charity Fund by Animal Charity Evaluators), starting early this year.
Capacity Building
One of the principal reasons for this multi-circle approach lies in capacity building. I feel that, in order to do anything good and meaningful, we must be a strong, resilient, and harmonious civilization that takes good care of itself and of all of its members. Some EAs argue that we shouldn't feel guilty about spending on ourselves and trying to feel good, because this increases our capacity to endure, to avoid burnout, and to eventually do more good. I think the same is true if we focus on human civilization as a whole. By taking care of all of its members we make it more harmonious, more resilient, stronger, and more capable of doing good.
It is the focus on our own human interests and economic development that allowed us to even get to this point. We're now a much more powerful and capable civilization than we ever were, and more capable of doing good as well.
It should be noted that focusing too much on those in the outermost circles too early (like insects or nematodes) could undermine our capacity building. It could invite people's distrust, it could let certain structural weaknesses of our civilization persist, and in doing so we could be, in some way, shooting ourselves in the foot.
My philosophy is: first create a utopia for humans, then for farm animals and other animals directly affected by humans (like insects killed by pesticides), and then for the rest of wildlife. Perhaps at some point we will be able to engineer ecosystems in such a way that they are full of life, thriving and flourishing, but with very little or no suffering.
Seems like science fiction right now, but maybe this dream can come true?
Conclusion
This is, of course, just my view; that's how I feel about this topic, and I fully understand that there are opposing viewpoints (like that there's no place for partiality, or that we shouldn't prioritize humans, or that all the circles should be dealt with at the same time and with the same priority, etc.).
But I tried to defend it from various angles, and I'm curious if it will resonate with any of you. The upside of my approach is that it might be more honest, more practical and less risky. The downside is that it might neglect those in outer circles for too long.
But are we really trying to help those in the outer circles directly anyway? Or are most of our interventions focused on beneficiaries in the inner circles, with only claimed indirect effects on those in the outermost circles? We donate to human charities and believe, or hope, that the effects are also good for insects.
But if we took our stated concern for those in the outermost circles more seriously, maybe we would be advocating far more radical approaches to reducing their suffering, like intentionally destroying nature.
I DO NOT endorse that; I'm not even operating from a framework that would suggest it. That was the point of my writing all this: to try to refute this way of thinking (that humans only matter inasmuch as they reduce insect populations), which is getting more popular and could push us in such a direction.
I might be completely wrong about all this, but if I am, I would really appreciate thoughtful comments that would help me identify where I am wrong and why this way of thinking might be bad.
Especially regarding suggestions about where one should donate: feel free to ignore them. It's just my personal opinion, derived from my own philosophy. I think the original idea of EA was to try our best to prevent children from dying in metaphorical shallow ponds. But if our pledged money doesn't go in that direction, we've basically stopped doing what was the core idea around which the entire movement was built.

Hello - first of all, I think you verbalised a bunch of very interesting and useful ideas about EA, its role, and its strategy. However, as someone who currently donates 10% of my salary to farmed animal welfare, I have some criticisms of your conclusions. I know you're not ruling out donating to animal charities, but requiring people to donate >10% of their salaries to charity sets the bar insanely high for the vast, vast majority of people. So in effect your proposal means ceasing support for farmed animal welfare in favour of global poverty focused charities.
One of the issues with this argument, to my mind, is that the same basic form can be made compatible with nationalistic rhetoric: 'Before we donate a single dollar/pound of aid, we need to make sure no child is hungry in our own country,' etc. If we accept an argument for partiality towards some strangers over other strangers (beyond questions of effectiveness), why draw the line to contain all humans rather than humans of a specific nationality, ethnicity, eye colour, etc.?
I completely get the ‘optics’ rationale for not prioritising nematode welfare, but I think saying that we need to solve all major causes of human sufffering before addressing factory farming is too conservative. Quite a lot of people are against factory farming in a way which is not true about wild animal suffering (or farmed invertebrates suffering for that matter). After all it is fear of public opinion which makes farmed animal welfare charity campaigns so unreasonably effective (particularly caged hen corporate campaigns). This is why factory farming and wild invertebrate suffering are in different leagues as far as optics are concerned. In essence - I agree that emphasising certain ‘far out’ aspects of EA can be off-putting, but I don’t think that factory farming is so beyond the Overton window.
Also - the Overton window is malleable. Many ideas (abolitionism, women's suffrage, AI safety) sounded completely nutty when they were first floated - not to mention 'immoral'. One of the historic missions that EA is currently fulfilling is pushing this circle outward, not by solving all issues for people within the circle first, but by challenging where most people draw the boundary in the first place. It can't be done all at once (we're not going to convince most people about shrimp anytime soon), but we can move the line inch by inch over decades - which is pretty much how all moral progress has worked up to this point. I'm fairly confident it will continue working this way (barring future existential catastrophes).
I outlined a lot of reasons for prioritizing humans; some, but not all, of them are based on emotions and gut feelings. Many others are based on rational considerations. I'm not sure if these rational considerations are correct or if they might be misguided.
You're right that asking people to donate more than 10% is too much. But here's the thing. Animal charities are way more effective than human charities. So giving just another 1 or 2 percent to animal charities can be incredibly effective.
For this reason I love animal charities, they are extremely cheap ways of doing good.
But can we call ourselves philanthropists if we don't donate anything to people?
According to Wikipedia: The word philanthropy comes from Ancient Greek φιλανθρωπία (philanthrōpía) 'love of humanity', from philo- 'to love, be fond of' and anthrōpos 'humankind, mankind'.
Everyone can decide about percentages for themselves. My idea of preserving 10% for humans + existential risks is just how I feel about it, perhaps to ensure we're not losing our focus and not forgetting why we're doing this in the first place.
So perhaps we can donate 10% to human charities and existential risk prevention, and another 2% to animal charities. (12% total)
Or if you really think that animal welfare is extremely important, perhaps you can donate 5% to human charities and 5% to animal charities.
I think by allocating less than 50% of donation money to humans and existential risks (which is 5% if you donate 10% in total) we risk losing focus.
Donations for existential risk prevention can be counted in the same category as donations to human charities, because they help everyone: humans, animals, and the whole planet.
So 50% of donation money to humans + X-risk is in my entirely subjective opinion, a minimum.
Another thing I would like to add is that even in my framework, farm animals are in the second circle, that is, right next to humans. They are not in the same category as insects or soil nematodes. And they do indeed live in horrendous conditions. I think every effective altruist should allocate some money to them.
My intention was to keep those 10% for humans sacred, to prevent value drift and trains to crazy town. I made a case for it. Am I right? I don't know.
I am quite confident about the priorities thing. But perhaps there shouldn't be such a harsh cutoff.
Perhaps we can do it like this: donate 1 unit to the first circle, 1/2 to the second circle, and 1/4 to the third circle.
Translated into percentages it would be roughly 57% of donation money to the 1st circle (including X-risks), 29% to the second circle (farm animals) and 14% to the third circle (wild animals).
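The geometric weighting above can be sketched as a quick calculation; the circle labels here are just illustrative names for the three groups described in this thread:

```python
# Each successive moral circle gets half the weight of the previous one;
# normalizing the weights gives the percentage split quoted above.
weights = {"humans + x-risk": 1.0, "farm animals": 0.5, "wild animals": 0.25}
total = sum(weights.values())  # 1.75
shares = {circle: round(100 * w / total) for circle, w in weights.items()}
print(shares)  # {'humans + x-risk': 57, 'farm animals': 29, 'wild animals': 14}
```

Any other halving-style scheme (e.g. adding a fourth circle at 1/8) normalizes the same way.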
Expanding the moral circle is only possible by developing empathetic awareness in the broadest possible social sphere, and the problem with prioritizing animal welfare over human welfare is that it could jeopardize this process of awareness. Many people may interpret an interest in animal welfare as a manifestation of misanthropy: humanity is hateful, that's why I love non-human animals.
The expansion of the moral circle is a consequence of a process of moral evolution. Fostering this process of moral evolution should be the priority.
Altruism is merely the economic manifestation of moral improvement, which actually takes place at a deeper level of the individual's psychology, the source of behavior. Moral improvement implies, above all, controlling aggression and expanding empathy and benevolence.
If this moral improvement manifests in one million more individuals, it will almost certainly turn them all into vegans and opponents of animal mistreatment... although they may not make any financial donations to anti-animal cruelty organizations because they will consider the fight against human suffering to be a priority. However, they will indirectly contribute much more to animal welfare than five or ten thousand animal welfare activists (and financial contributors) whose capacity to drive moral evolution will be much lower, if only because numerically they will be much fewer.
Of course, this viewpoint will only be shared by those who believe we can actively promote moral evolution. In this forum, the discussion tends to focus on maximizing the benefits of existing altruistic action (a consequence of previous moral evolution) rather than developing strategies to increase the number of individuals motivated to act altruistically (promoting moral evolution).
This is a very interesting take. Sometimes I wonder to what extent I have undergone such moral evolution myself, and to what extent my own thinking about all these things is virtuous.
By the standards of this forum, I sometimes feel like I'm not virtuous enough. Like I haven't yet gone through this mental shift that would allow me to bite certain bullets.
Prioritizing humans might seem backwards or speciesist, but that's how I still feel on a gut level. I tried to elaborate on why.
Prioritizing humans over non-humans is yet another ethical dilemma, among many others. If you cure one AIDS patient, you might be condemning five malaria patients to death.
Virtue is something that has to do with emotions and beliefs. In everyday life, many people go to therapy to help them feel better and be consistent with their beliefs. That is, we act in accordance with our nature, recognizing our aspirations, our weaknesses, and our needs.
If our belief is altruism, we should act similarly, developing strategies to improve our behavior in the direction of altruistic action. Ideally, altruistic action would provide us with immediate emotional rewards (which would have a "zero economic cost"), but this doesn't seem very attainable in daily life.
It occurs to me, based on some historical precedents, that altruism can be necessarily associated with behaviors of "moral excellence," which are those that make an individual worthy of the utmost trust. A human environment of maximum trust can be emotionally attractive as a personal aspiration for many individuals... even if this requires making certain unavoidable sacrifices.
What if the AIDS patient keeps donating 10% of their income to AMF?
But more seriously, this particular ethical dilemma is so horrible that it makes me sick to even think about it.
My take is that within each country, we must ensure, through the healthcare system and insurance, that EVERYONE who is sick receives therapy, no exceptions. It doesn't matter how expensive their treatment is or how many children in Africa could be saved if the money were directed their way. No one should feel guilty for receiving expensive therapies.
Healthcare should be viewed separately from charity. When we're giving to charity, we should give to the most effective charities abroad, like AMF or others from GiveWell's list.
But when we're talking about improving the healthcare system, we should make sure that every single person receives treatment and that we don't let anyone down. This is the basis of human dignity: how a society treats its members.
Such an attitude towards sickness would give everyone peace of mind, knowing that if they themselves get sick, they too will be taken care of.
So I think paying taxes that would be spent on healthcare is a great thing to do. I support high taxes and Universal Free Healthcare.
Now of course, I think this should be standard everywhere, in every single country, so that eventually there will be no need to make donations to AMF and the like. Everyone who is sick would receive free healthcare from their own country's healthcare system. Governments themselves would provide abundant bednets to everyone, and this would be seen as something as basic as having clean water and electricity... which unfortunately many countries still don't have.
Executive summary: The author, broadly supportive of Effective Altruism, argues that strict impartial hedonistic utilitarianism risks absurd or alienating conclusions and proposes a partial, multi-circle ethics that prioritizes humans (and possibly sentient AIs) while still caring about animals, grounding value in flourishing as well as pleasure.