Am I wrong that EAs working in AI (safety, policy, etc.) and who are now earning really well (easily top 1%) are less likely to donate to charity?
At least in my circles, I get the strong impression that this is the case, which I find kind of baffling (and a bit upsetting, honestly). I have some just-so stories for why this might be the case, but I'd rather hear others' impressions, especially if they contradict mine (I might be falling prey to confirmation bias here since the prior should be that salary correlates positively with likelihood of donating among EAs regardless of sector).
Most EAs want to be rich and close to power. Or at least they are way more into the "effective" optimization part than the altruism. They talk a big game but getting in early on a rising power (AI companies) is not altruistic. Especially not when you end up getting millions in compensation due to very rapid valuation increases.
I made a large amount of money in the 2021 crypto boom. I made a much smaller, though large for me, amount in the 2017 crash. I have never had a high-paying job. Often I have had no job at all. My long-term partner has really bad health. So I'm perhaps unusually able to justify holding onto windfalls. I still gave away 50% pre-tax both times.
Most EAs are simply not the real deal.
There are at least three common justifications for not donating, each of which can be quite reasonable:
1. Immediate financial stability and personal runway come first.
2. Cost of living (and personal spending needs) leaves little truly discretionary income.
3. One's direct work is more impactful than what marginal donations would buy.
I don't donate to charity other than animal product offsets; this is mainly due to 1 and 2. As for 1, I'm still early career enough that immediate financial stability is a concern. Also for me, forgoing luxuries like restaurant food and travel makes me demotivated enough that I have difficulty working. I have tried to solve this in the past but have basically given up and now treat these luxuries as partially needs rather than wants.
For people just above the top-1% threshold of $65,000, justifications 3 and 2 very likely apply. $65,000 is roughly the rate paid to marginal AI safety researchers, so donating 20% buys back only about 20% of one's own career impact, even if the grantmakers find an opportunity as good as the donor. For those in a HCOL area, justification 2 is very likely: in San Francisco the average rent for a one-bedroom is $2,962/month, and an individual making less than $104,000 qualifies for public housing assistance!
But shouldn't I have more dedication to the cause and donate anyway? I would prefer to instead spend more effort on getting better at my job (since I'm nowhere near the extremely high skill ceiling of AI safety research) and working more hours. I actually do care about saving for retirement, and finding a higher-paying job on a lab safety team just so I can donate is probably counterproductive, because trying to split one's effort between two theories of change while compromising on both is generally bad (see the multipliers post). If I happened to get an equally impactful job that paid double, I would probably start donating after about a year, or sooner if donations were urgent and I expected high job security.
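(A minimal back-of-the-envelope sketch of the 20% arithmetic above, in Python. The cost-per-researcher figure is the comment's own; the assumption that donated dollars fund safety work at exactly the marginal-researcher rate is illustrative, not a claim about real grantmaking.)

```python
# Sketch of justification 3: donating a slice of a marginal researcher's
# salary buys, at best, the same slice of one more researcher's impact.
# Assumes (illustratively) that grantmakers can turn donations into
# research at the quoted marginal rate, and no better.

salary = 65_000                        # ~ marginal AI safety researcher rate
donation_rate = 0.20                   # donating 20% of salary
cost_per_marginal_researcher = 65_000  # same figure, per the comment

extra_careers_funded = (salary * donation_rate) / cost_per_marginal_researcher
print(f"Donation adds ~{extra_careers_funded:.0%} of one career's impact")
# -> ~20%. On this model, improving one's own research output by more
#    than 20% dominates the donation.
```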
My experience from the church is that salary doesn't correlate well with likelihood of donating, although it does of course correlate with donating larger amounts of money.
If EAs working in AI policy and safety were serious about AI Doom being a near-term possibility, I would expect them to donate huge amounts towards that cause. A clear case where revealed preferences matter, not just stated ones.
I think I was assuming people working in highly paid AI jobs were donating larger percentages of their income, but I haven't seen data in either direction?
> My experience from the church is that salary doesn't correlate well with likelihood of donating, although it does of course correlate with donating larger amounts of money.
Yes, though I thought maybe among EAs there would be some correlation. 🤷
> I think I was assuming people working in highly paid AI jobs were donating larger percentages of their income, but I haven't seen data in either direction?
Yeah, me neither (and, again, the assumption is probably true in general; just not in my circles).
I'd consider this a question that doesn't benefit from public speculation because every individual might have a different financial situation.
Truth be told, "earning really well" is a very ambiguous category. Obviously, if someone were financially stable, e.g. consistently earning high five-figure or six-figure sums (dollars/euros/pounds/francs) or more annually, holding non-trivial savings, and owning a loan-free house, their spending would almost always reflect discretionary interests and personal opinions (like "do I donate to charity or not").
For everyone not financially stable, 'donating to charity' may not:
(a) be a discretionary decision and
(b) be a simple decision - that is, increasing charitable donations too soon comes at the expense of investing in one's personal ability to weather volatility, which has knock-on qualitative effects on career progression (especially to senior management roles), future earnings potential, and lifetime charitable contributions. Additionally, not getting one's personal finances in order early on contributes directly to great personal and family stress, which then has knock-on effects on everything else.
tl;dr: when you're broke, money allocation is a high-risk, high-stress headache. The long-term solution is to prioritize becoming the opposite of broke, i.e., financially stable, first.
also see: Julia Wise's post on the logistics of giving.
That analysis would be more compelling if the focus of the question were on a specific individual or small group. But, at least as I read it, the question is about the giving patterns of a moderately numerous subclass of EAs (working in AI + "earning really well") relative to the larger group of EAs.
I'm not aware of any reason the dynamics you describe would be more present in this subclass than in the broader population. So a question asking about subgroup differences seems appropriate to me.
Edit: I see your point. Still, I'll leave the below comment as-is, because from my (3rd world, generational financial instability, no health insurance, filial obligations etc.) point of view I think the perspective of a broke person ought to be represented.
But what counts as "numerous", though? How many EAs are actually working in AI - fifty people? A hundred people? Who's collecting data on this subgroup versus the larger EA group?
I agree that the question itself is appropriate and there's nothing wrong with it. I was saying this question doesn't benefit from public speculation, because, for one thing, there isn't any reliable data for objective analysis, and for another, the logistics of an individual's personal finance are a bigger factor in how or how much a person donates, at the non-millionaire level (in this subclass and the broader population).
It's relatively common (I don't know about rates) for such people to take pay cuts rather than directly donate that percentage. I know some who could be making millions a year who are actually making hundreds of thousands. It makes sense that they don't feel the need to donate anything additional on top of that!
It's not clear to me whether you're talking about people who (a) do a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit so now earn much less than their hypothetical maximum earning potential.
In case (a), yes, their salary sacrifice should count towards their real donations.
But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable. So I don't think people who do (b) (which includes myself) should get to say that doing (b) liberates them from the same obligation to donate that would attend to a person in the same material circumstances with worse outside options.
I disagree and think that (b) is actually totally sufficient justification. I'm taking as an assumption that we're using an ethical theory that says people do not have an unbounded ethical obligation to give everything up to subsistence, and that it is fine to set some kind of boundary on the fraction of your total budget of resources that you spend on altruistic purposes. Many people in well-paying altruistic careers (e.g. technical AI safety) could earn dramatically more money, at least twice as much, if they were optimising for the highest-paying career they could get. I'm fairly sure I could be earning a lot more than I currently am if that were my main goal. But I consider the value of my labour from an altruistic perspective to exceed the additional money I could be donating, and therefore do not see myself as having a significant additional ethical obligation to donate (though I do donate a fraction of my income anyway because I want to).
By foregoing a large amount of income for altruistic reasons, such people are spending a large amount of their resource budget on altruistic purposes. If they still have an obligation to donate more money, then people in higher-paying careers should be obliged to donate far more. That is a consistent position, but not one I hold.
I don't want to argue about anyone's specific case, but I don't think it's universally true, or even true the majority of the time, that those working in AI could make more elsewhere. It sounds nice to say, but I think people are often earning more in AI jobs than they would elsewhere.
My reasoning was roughly that the machine learning skill set is also extremely employable in finance, which tends to pay better. Though OpenAI salaries do get pretty high nowadays, and if you value OpenAI and Anthropic equity at notably above its current market value, then plausibly they're higher-paying. Definitely agreed it's not universal.
Sure. But the average person working in AI is not at Jane Street level like you, and yes, OpenAI/Anthropic comp is extremely high.
I would also say that people still have a moral obligation. People don't choose to be smart enough to do ML work.
My point is that "other people in the income bracket AFTER taking a lower paying job" is the wrong reference class.
Let's say someone is earning $10M/year in finance. I totally think they should donate some large fraction of their income. But I'm pretty reluctant to argue that they should donate more than 99% of it. So it seems completely fine to have a post-donation income above $100K, likely far above.
If this person quits to take a job in AI safety that pays $100K/year because they think this is more impactful than their donations, I think it would be unreasonable to argue that they need to donate some of their reduced salary, because then their "maximum acceptable post-donation salary" has gone down, even though they're (hopefully) having more impact than if they had donated everything above $100K.
I'm picking fairly extreme numbers to illustrate the point, but the key point is that choosing to do direct work should not reduce your "maximum acceptable salary post donations", and at least according to my values, that max post-donation salary is often above what people get paid in their new direct roles.
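(A toy sketch of those numbers, under the comment's own deliberately extreme 99% bound; nothing here is a recommendation, just the accounting.)

```python
# The pure earner: even donating an extreme 99% leaves $100K post-donation.
finance_salary = 10_000_000
post_donation = finance_salary * (1 - 0.99)
print(post_donation)  # 100000.0

# The same person after switching to direct work at $100K/year:
safety_salary = 100_000
# Requiring further donations out of this salary would push their
# post-donation income *below* $100K, i.e. below the maximum acceptable
# post-donation income we just granted the pure earner. That is the
# asymmetry the comment calls unreasonable.
```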
I understand this. Good analogy.
I suppose what it comes down to is that I actually DO think it is morally better for the person earning $10M/year to donate $9.9M/year than $9M/year, about $900K/year better.
I want to achieve two things (which I expect you will agree with).
I think it's also reasonable for people to set limits for how much they are willing to do.
My point is that "other people in the income bracket AFTER taking a lower paying job" is the wrong reference class.
Is there a single appropriate reference class here, as opposed to looking at multiple reference classes and weighting the results in some manner?
I agree that a similarly situated person who decided to take a very high-paying job is a relevant reference class and should get some weight. However, it doesn't follow that a person with a similar income working a non-impactful job is an irrelevant reference class or should get zero weight.
As Marcus notes, "[p]eople don't choose to be smart enough to do ML work." I would add that people don't choose other factors that promote or inhibit their ability to secure a very high-paying job and/or a high-impact job (e.g., location and circumstances of birth, health, family obligations, etc.). In a pair of persons who are similarly situated economically, giving the more advantaged person a total pass on the moral obligation to donate money seems problematic to me. In this frame of reference, their advantages allowed them to land a more impactful job at the same salary as the less advantaged person; in a sense we would be excusing them from a moral obligation because they are advantaged. (Giving the more privileged person a big break is also going to make it rather hard to establish substantial giving as a norm in the broader community, but that's probably not in the scope of the question here.)
I don't have a clear opinion on how to weight the two reference classes beyond an intuition that both classes should get perceptible weight. (It also seems plausible there are other reference classes to weigh as well, although I haven't thought about what they might be.)
My point is that, even though there's a moral obligation, unless you think that high-earning people in finance should be donating a very large fraction of their salary (so that their post-donation pay is less than the pay in AI safety), their de facto moral obligation has been increased by the choice to do direct work, which is unreasonable to my eyes.
I would also guess that at least most people doing safety work at industry labs could get a very well-paying role at a top-tier finance firm? The talent bar is really high nowadays.
I think I want to give (b) partial credit here in general. There may not be much practical difference between partial and full credit where the financial delta between a more altruistic job and a higher-salary job is high enough. But there are circumstances in which it might make a difference.
Without commenting on any specific person's job or counterfactuals, I think it is often true that the person working a lower-paid but more meaningful job secures non-financial benefits not available from the maximum-salary job and/or avoids non-financial sacrifices associated with the maximum-salary job. Depending on the field, these could include lower stress, more free time, more pleasant colleagues, more warm fuzzies / psychological satisfaction, and so on. If Worker A earns 100 currency units doing psychologically meaningful, low to optimal stress work but similarly situated Worker B earns 200 units doing unpleasant work with little in the way of non-monetary benefits, treating the entire 100 units Worker A forewent as spent out of their resource budget on altruistic purposes does not strike a fair balance between Worker A and Worker B.
I think the right stance here is a question of "should EA be praising such people, or get annoyed they're not giving up more, if it wants to keep a sufficient filter for who it calls true believers"; and the answer is obviously that both groups are great and true believers, and it seems dumb to get annoyed at either.
The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.
I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think that "shadow donations" (the delta between what you could earn if you were income-maximizing and what you're actually earning in your direct-work job) are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).
Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safety Org, making $75,000/year. Belinda goes to work for OpenMind doing safety-neutral work, making $1M per year in total compensation. Belinda learns more marketable skills; she could make at least $1M/year indefinitely. Anna, on the other hand, has studiously plugged away at AI safety work, but since her work is niche, she can't easily transfer these skills to do something that pays better.
Then imagine that, after three years, Belinda joins Anna at Safety Org. Belinda was not fired; she could have stayed at OpenMind and made $1M per year indefinitely. At this point, Anna has gotten a few raises, is making $100,000, and is donating 3% of her salary. Belinda gets the same job on the same pay scale and does equally good work, but donates nothing. Belinda reasons that, because she could still be making $1M per year, she has "really" donated $900,000 of labor to Safety Org, and so has sacrificed roughly 90% of her income.
Anna, on the other hand, thinks that it is an immense privilege to be able to have a comfortable job where she can use her skills to do good, while still earning more than 99% of all people in the world. She knows that, if she had made different choices in life, she probably could have a higher earning potential. But that has never been her goal in life. Anna knows that the average person in her income bracket donates around 3% regardless of their outside job options, so it seems reasonable for her to at least match that.
Is Belinda more altruistic than Anna? Which attitude should EAs aspire to?
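(For concreteness, a small sketch of the two accounting methods in the thought experiment; all numbers come from the story above and are purely illustrative.)

```python
# Belinda's "shadow donation" accounting: measure against counterfactual pay.
counterfactual_income = 1_000_000          # what OpenMind would still pay her
actual_salary = 100_000                    # what Safety Org pays
shadow_donation = counterfactual_income - actual_salary
print(shadow_donation / counterfactual_income)  # 0.9 -> "90% sacrificed"

# Anna's accounting: measure against actual material resources.
anna_donation = 0.03 * actual_salary
print(anna_donation)  # 3000.0 donated per year, on top of the direct work
```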
To give some more color on my general view:
I don't really think there's a first-order fact of the matter as to who of these two (or anyone) is "more altruistic," or what one's "obligations" are. At bottom, there are just worlds with more or less value in them.
My view mostly comes from a practical view of how the EA community and project can be most impactful, credible, and healthy. I think the best attitude is closer to Anna's than Belinda's.
Donating also has other virtues over salary reductions, since it is concrete, measurable, and helps create a more diversified funding ecosystem.
To be clear, I think it's great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don't think the particular mindset of "well I have really sacrificed a lot because if I was purely selfish I could have made a lot more money" is one that we ought to recognize as particularly good or healthy.
I will note that my comment made no reference to who is “more altruistic”. I don’t know what that term means personally, and I’d rather not get into a semantics argument.
If you give the definition you have in mind, then we can argue over whether it's smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.
I will also note that, given the situation, it's not clear to me that Anna's proper counterfactual here isn't making $1M and getting nice marketable skills, since she and Belinda are twins, and so have the same work capacity and aptitudes.
> To be clear, I think it's great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don't think the particular mindset of "well I have really sacrificed a lot because if I was purely selfish I could have made a lot more money" is one that we ought to recognize as particularly good or healthy.
I think this is the crux personally. This seems very healthy to me, in particular because it creates strong boundaries between the relevant person and EA. Note that burnout & overwork is not uncommon in EA circles! EAs are not healthy, and (imo) already give too much of themselves!
Why do you think it's unhealthy? This seems to imply negative effects on the person reasoning in the relevant way, which seems pretty unlikely to me.
Suppose they're triplets, and Charlotte, also initially identical, earns $1M/year just like Belinda, but can't/doesn't want to switch to safety. How much of Charlotte's income should she donate in your worldview? What is the best attitude for the EA community?
I didn't read Cullen's comment as about 10%, and I think almost all of us would agree that this isn't a magic number. Most would probably agree that it is too demanding for some and not demanding enough for others. I also don't see anything in Cullen's response about whether we should throw shade at people for not being generous enough or label them as not "true believers."
Rather, Cullen commented on "donation expectations" grounded in "a practical moral philosophy." They wrote about measuring an "obligation to donate."
You may think that's "bad moral philosophy," but there's no evidence of it being a post hoc rationalization of a 10% or other community giving norm here.
I feel quite confused about the case where someone earns much less than their earning potential in another altruistically motivated but less impactful career doing work that uses a similar skillset (e.g. joining a think tank after working on policy at an AI company). This seems somewhere between A and B.
> But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable.
It's complicated, I think. Based on your distinguishing (a) and (b), I am reading "salary sacrifice" as voluntarily taking less salary than was offered for the position you encumber (as discussed in, e.g., this post). While I agree that should count, I'm not sure (b) is not relevant.
The fundamental question to me is about the appropriate distribution of the fruits of one's labors ("fruits") between altruism and non-altruism. (Fruits is an imperfect metaphor, because I mean to include (e.g.) passive income from inherited wealth, but I'll stick with it.)
We generally seem to accept that the more fruit one produces, the more (in absolute terms) it is okay to keep for oneself. Stated differently -- at least for those who are not super-wealthy -- we seem to accept that the marginal altruism expectation for additional fruits one produces is less than 100%. I'll call this the "non-100 principle." I'm not specifically defending that principle in this comment, but it seems to be assumed in EA discourse.
If we accept this principle, then consider someone who was working full-time in a "normal" job and earning a salary of 200 apples per year. They decide to go down to half-time (100-apple salary) and spend the other half of their working hours producing 100 charitable pears, for which they receive no financial benefit.[1] The non-100 principle suggests that it's appropriate for this person to keep more of their apples than a person who works full-time to produce 100 apples (and zero pears). Their total production is twice as high, so they aren't similarly situated to the full-time worker who produces the same number of apples. The decision to take a significantly less well-paid job seems analogous to splitting one's time between remunerative and non-remunerative work: one gives up the opportunity to earn more salary in exchange for greater benefits that flow to others by non-donation means.
I am not putting too much weight on this thought experiment, but it does make me think that either the non-100 principle is wrong, or that the foregone salary counts for something in many circumstances even when it is not a salary sacrifice in the narrower sense.
[1] How to measure pear output is tricky. The market rate for similar work in the for-profit sector may be the least bad estimate here.
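(A toy numeric rendering of the apples-and-pears argument; the 50% marginal altruism expectation is my illustrative assumption, since the comment only assumes the rate is below 100%.)

```python
MARGINAL_ALTRUISM_RATE = 0.5  # illustrative; any rate < 1.0 gives the same shape

def acceptable_keep(total_fruits):
    # Non-100 principle: one may keep some fraction of everything produced.
    return total_fruits * (1 - MARGINAL_ALTRUISM_RATE)

apples_only = 100          # full-timer: 100 apples, 0 pears
half_and_half = 100 + 100  # half-timer: 100 apples (salary) + 100 pears (charity)

print(acceptable_keep(apples_only))    # 50.0 -> keeps 50 of 100 apples
print(acceptable_keep(half_and_half))  # 100.0 -> may keep all 100 apples,
                                       # since the 100 pears already went out
```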