This is a special post for quick takes by Alfredo Parra 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Am I wrong that EAs working in AI (safety, policy, etc.) who are now earning really well (easily top 1%) are less likely to donate to charity?

At least in my circles, I get the strong impression that this is the case, which I find kind of baffling (and a bit upsetting, honestly). I have some just-so stories for why this might be the case, but I'd rather hear others' impressions, especially if they contradict mine (I might be falling prey to confirmation bias here since the prior should be that salary correlates positively with likelihood of donating among EAs regardless of sector).

Most EAs want to be rich and close to power. Or at least they are way more into the "effective" optimization part than the altruism. They talk a big game but getting in early on a rising power (AI companies) is not altruistic. Especially not when you end up getting millions in compensation due to very rapid valuation increases. 

I made a large amount of money in the 2021 crypto boom. I made a much smaller, though large for me, amount in the 2017 crash. I have never had a high-paying job. Often I have had no job at all. My long-term partner has really bad health. So I'm perhaps unusually able to justify holding onto windfalls. I still gave away 50% pre-tax both times.

Most EAs are simply not the real deal.

My experience from the church is that salary doesn't correlate well with likelihood of donating, although it does of course correlate with donating larger amounts of money.

If EAs working in AI policy and safety were serious about AI Doom being a near-term possibility, I would expect they would donate huge amounts towards that cause. A clear case of "revealed preferences" not just stated ones. 

I think I was assuming people working in highly paid AI jobs were donating larger percentages of their income, but I haven't seen data in either direction?

> My experience from the church is that salary doesn't correlate well with likelihood of donating, although it does of course correlate with donating larger amounts of money.

Yes, though I thought maybe among EAs there would be some correlation. 🤷

> I think I was assuming people working in highly paid AI jobs were donating larger percentages of their income, but I haven't seen data in either direction?

Yeah, me neither (which, again, is probably true; just not in my circles).

There are at least three common justifications for not donating, each of which can be quite reasonable:

  1. A high standard of living and saving up money are important selfish wants for EAs in AI, just as they are in broader society.
  2. EAs in AI have needs (either career or personal) that require lots of money.
  3. Donations are much lower impact than one's career.

I don't donate to charity other than animal product offsets; this is mainly due to 1 and 2. As for 1, I'm still early career enough that immediate financial stability is a concern. Also for me, forgoing luxuries like restaurant food and travel makes me demotivated enough that I have difficulty working. I have tried to solve this in the past but have basically given up and now treat these luxuries as partially needs rather than wants.

For people just above the top-1% threshold of $65,000, 3 and 2 are very likely. $65,000 is roughly the rate paid to marginal AI safety researchers, so donating 20% buys only about 20% of one's own career impact, even if the grantmakers find an opportunity as good as the donor's own work. If they also live in a HCOL area, 2 is very likely: in San Francisco the average rent for a one-bedroom is $2,962/month, and an individual making less than $104,000 qualifies for public housing assistance!
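Here is a minimal back-of-the-envelope sketch of that tradeoff (assuming, purely for illustration, that a donated dollar funds marginal safety research at the same $65,000 rate and that the funded researcher is roughly as impactful as the donor):

```python
# Illustrative back-of-the-envelope numbers, not real impact figures.
salary = 65_000                     # assumed annual salary, roughly the top-1% threshold cited above
donation_rate = 0.20                # hypothetical 20% donation
marginal_researcher_cost = 65_000   # assumed cost of funding one marginal AI safety researcher-year

donation = salary * donation_rate
researcher_years_funded = donation / marginal_researcher_cost

print(f"Donation: ${donation:,.0f}")
print(f"Marginal researcher-years funded: {researcher_years_funded:.0%}")
# If the funded researcher is (optimistically) about as impactful as the donor,
# the donation adds only ~20% of the donor's own direct impact.
```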

But shouldn't I have more dedication to the cause and donate anyway? I would prefer to instead spend more effort on getting better at my job (since I'm nowhere near the extremely high skill cap of AI safety research) and working more hours (possibly in ways that funge with donations, e.g. by helping out grantmakers). I actually do care about saving for retirement, and finding a higher-paying job at a lab safety team just so I can donate is probably counterproductive, because trying to split one's effort between two theories of change while compromising on both is generally bad (see the multipliers post). If I happened to get an equally impactful job that paid double, I would probably start donating after about a year, or sooner if donations were urgent and I expected high job security.

I'd consider this a question that doesn't benefit from public speculation because every individual might have a different financial situation.

Truth be told, "earning really well" is a very ambiguous category. Obviously, if someone were financially stable, e.g. consistently earning high five-figure or six-figure dollars/euros/pounds/francs or more annually (and having non-trivial savings) and having a loan-free house, their spending would almost always reflect discretionary interests and personal opinions (like "do I donate to charity or not").

For everyone not financially stable, 'donating to charity' may not:

(a) be a discretionary decision and 

(b) be a simple decision - that is, increasing charitable donations too soon comes at the expense of not investing in one's personal ability to weather volatility, which has knock-on qualitative effects on career progression (especially to senior management roles), future earnings potential, and lifetime charitable contributions. Additionally, not getting one's personal finances in order early on contributes directly to great personal and family stress, which then has knock-on effects on everything else.

tl;dr: when you're broke, money allocation is a high-risk, high-stress headache. The long-term solution is to prioritize becoming the opposite of broke, i.e. financially stable, first.

Also see: Julia Wise's post on the logistics of giving.

That analysis would be more compelling if the focus of the question were on a specific individual or small group. But, at least as I read it, the question is about the giving patterns of a moderately numerous subclass of EAs (working in AI + "earning really well") relative to the larger group of EAs. 

I'm not aware of any reason the dynamics you describe would be more present in this subclass than in the broader population. So a question asking about subgroup differences seems appropriate to me.

Edit: I see your point. Still, I'll leave the below comment as-is, because from my (3rd world, generational financial instability, no health insurance, filial obligations, etc.) point of view I think the perspective of a broke person ought to be represented.

But what counts as "numerous", though? How many EAs are actually working in AI - fifty people? A hundred people? Who's collecting data on this subgroup versus the larger EA group?

I agree that the question itself is appropriate and there's nothing wrong with it. I was saying this question doesn't benefit from public speculation, because, for one thing, there isn't any reliable data for objective analysis, and for another, the logistics of an individual's personal finance are a bigger factor in how or how much a person donates, at the non-millionaire level (in this subclass and the broader population).

It's relatively common (I don't know about rates) for such people to take pay cuts rather than directly donate that percentage. I know some who could be making millions a year who are actually making hundreds. It makes sense they don't feel the need to donate anything additional on top of that!

It's not clear to me whether you're talking about people who (a) do a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit so now earn much less than their hypothetical maximum earning potential.

In case (a), yes, their salary sacrifice should count towards their real donations.

But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable. So I don't think people who do (b) (which includes myself) should get to say that doing (b) liberates them from the same obligation to donate that would attend to a person in the same material circumstances with worse outside options.

I disagree and think that (b) is actually totally sufficient justification. I'm taking as an assumption that we're using an ethical theory that says people do not have an unbounded ethical obligation to give everything up to subsistence, and that it is fine to set some kind of boundary on the fraction of your total resource budget that you spend on altruistic purposes. Many people in well-paying altruistic careers (e.g. technical AI safety careers) could earn dramatically more money, e.g. at least twice as much, if they were optimising for the highest-paying career they could get. I'm fairly sure I could be earning a lot more than I currently am if that was my main goal. But I consider the value of my labour from an altruistic perspective to exceed the additional money I could be donating, and therefore do not see myself as having a significant additional ethical obligation to donate (though I do donate a fraction of my income anyway, because I want to).

By foregoing a large amount of income for altruistic reasons, I think such people are spending a large amount of their resource budget on altruistic purposes, and that if they still have an obligation to donate more money, then people in higher-paying careers should be obliged to donate far more. Which is a consistent position, but not one I hold.

I don't want to argue in anyone's specific case, but I don't think it's universally true at all, or even true the majority of the time, that those working in AI could make more elsewhere. It sounds nice to say, but I think people are often earning more in AI jobs than they would elsewhere.

My reasoning was roughly that the machine learning skill set is also extremely employable in finance, which tends to pay better. Though OpenAI salaries do get pretty high nowadays, and if you value OpenAI and Anthropic equity at notably above their current market value, then plausibly they're higher paying. Definitely agreed it's not universal.

Sure. But the average person working in AI is not at Jane St level like you and yes, OpenAI/Anthropic comp is extremely high.

I would also say that people still have a moral obligation. People don't choose to be smart enough to do ML work.

I also want to point out that having better outside income-maximizing options makes you more financially secure than other people in your income bracket, all else equal, which pro tanto would give you more reason to donate than them.

My point is that "other people in the income bracket AFTER taking a lower paying job" is the wrong reference class.

Let's say someone is earning $10m/year in finance. I totally think they should donate some large fraction of their income. But I'm pretty reluctant to argue that they should donate more than 99% of it. So it seems completely fine to have a post-donation income above $100K, likely far above.

If this person quits to take a job in AI Safety that pays $100K/year, because they think this is more impactful than their donations, I think it would be unreasonable to argue that they need to donate some of their reduced salary, because then their "maximum acceptable post-donation salary" has gone down, even though they're (hopefully) having more impact than if they donated everything above $100K.

I'm picking fairly extreme numbers to illustrate the point, but the key point is that choosing to do direct work should not reduce your "maximum acceptable post-donation salary", and that, at least according to my values, that maximum post-donation salary is often above what they get paid in their new direct role.

I understand this. Good analogy.

I suppose what it comes down to is that I actually DO think it is morally better for the person earning $10m/year to donate $9.9m/year than $9m/year, about $900k/year better.

I want to achieve two things (which I expect you will agree with).

  1. I want to "capture" the good done by anyone and everyone willing to contribute, and I want them welcomed, accepted and appreciated by the EA community. This means that if a person who could earn $10m/year in finance is "only" willing to contribute $1m/year (10%) to effective causes, I don't want them turned away.
  2. I want to encourage, inspire, motivate and push people to do better than they currently are (insofar as it's possible). I think that includes an Anthropic employee earning $500k/year doing mech interp, a quant trader earning $10m/year, a new grad deciding what to do with their career, and a 65-year-old who just heard of EA.

I think it's also reasonable for people to set limits for how much they are willing to do. 

This is reasonable. I think the key point that I want to defend is that it seems wrong to say that choosing a more impactful job should mean you ought to have a lower post-donation salary.

I personally think of it in terms of having some minimum obligation to do your part (which I set at 10% by default), plus encouragement (but not obligation) to do significantly more good if you want to.

> My point is that "other people in the income bracket AFTER taking a lower paying job" is the wrong reference class.

Is there a single appropriate reference class here, as opposed to looking at multiple reference classes and weighting the results in some manner?

I agree that a similarly situated person who decided to take a very high-paying job is a relevant reference class and should get some weight. However, it doesn't follow that a person with a similar income working a non-impactful job is an irrelevant reference class or should get zero weight.

As Marcus notes, "[p]eople don't choose to be smart enough to do ML work." I would add that people don't choose other factors that promote or inhibit their ability to choose a very high-paying job and/or a high-impact job (e.g., location and circumstances of birth, health, family obligations, etc.). In a pair of persons who are similarly situated economically, giving the more advantaged person a total pass on the moral obligation to donate money seems problematic to me. In this frame of reference, their advantages allowed them to land a more impactful job at the same salary as the less advantaged person, and in a sense we would be excusing them from a moral obligation because they are advantaged. (Giving the more privileged person a big break is also going to make it rather hard to establish substantial giving as a norm in the broader community, but that's probably not in the scope of the question here.)

I don't have a clear opinion on how to weight the two reference classes beyond an intuition that both classes should get perceptible weight. (It also seems plausible there are other reference classes to weigh as well, although I haven't thought about what they might be.)

My argument is essentially that "similar income, non-impactful job" is as relevant a reference class for the "similar income, impactful job" person as it is for the "high income, non-impactful job" person. I also personally think reference classes are the wrong way to think about it. If taking a more impactful job also makes someone obliged to accept a lower post-donation salary (when they don't have to), I feel like something has gone wrong, and the incentives are not aligned with doing the most good.

My point is that, even though there's a moral obligation, unless you think that high-earning people in finance should be donating a very large fraction of their salary (so their post-donation pay is less than the pay in AI safety), their de facto moral obligation has increased by the choice to do direct work, which is unreasonable to my eyes.

I would also guess that at least most people doing safety work at industry labs could get a very well paying role at a top tier finance firm? The talent bar is really high nowadays

I think I want to give (b) partial credit here in general. There may not be much practical difference between partial and full credit where the financial delta between a more altruistic job and a higher-salary job is high enough. But there are circumstances in which it might make a difference.

Without commenting on any specific person's job or counterfactuals, I think it is often true that the person working a lower-paid but more meaningful job secures non-financial benefits not available from the maximum-salary job and/or avoids non-financial sacrifices associated with the maximum-salary job. Depending on the field, these could include lower stress, more free time, more pleasant colleagues, more warm fuzzies / psychological satisfaction, and so on. If Worker A earns 100 currency units doing psychologically meaningful, low to optimal stress work but similarly situated Worker B earns 200 units doing unpleasant work with little in the way of non-monetary benefits, treating the entire 100 units Worker A forewent as spent out of their resource budget on altruistic purposes does not strike a fair balance between Worker A and Worker B.

Yeah, this is fair.

I feel quite confused about the case where someone earns much less than their earning potential in another altruistically motivated but less impactful career doing work that uses a similar skillset (e.g. joining a think tank after working on policy at an AI company). This seems somewhere between A and B.

> But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable.

It's complicated, I think. Based on your distinguishing (a) and (b), I am reading "salary sacrifice" as voluntarily taking less salary than was offered for the position you encumber (as discussed in, e.g., this post). While I agree that should count, I'm not sure (b) is not relevant.

The fundamental question to me is about the appropriate distribution of the fruits of one's labors ("fruits") between altruism and non-altruism. (Fruits is an imperfect metaphor, because I mean to include (e.g.) passive income from inherited wealth, but I'll stick with it.) 

We generally seem to accept that the more fruit one produces, the more (in absolute terms) it is okay to keep for oneself. Stated differently, at least for those who are not super-wealthy, we seem to accept that the marginal altruism expectation for additional fruits one produces is less than 100%. I'll call this the "non-100 principle." I'm not specifically defending that principle in this comment, but it seems to be assumed in EA discourse.

If we accept this principle, then consider someone who was working full-time in a "normal" job and earned a salary of 200 apples per year. They decide to go down to half-time (100-apple salary) and spend the other half of their working hours producing 100 charitable pears for which they receive no financial benefit.[1] The non-100 principle suggests that it's appropriate for this person to keep more of their apples than a person who works full-time to produce 100 apples (and zero pears). Their total production is twice as high, so they aren't similarly situated to the full-time worker who produces the same number of apples. The decision to take a significantly less well-paid job seems analogous to splitting one's time between remunerative and non-remunerative work. One gives up the opportunity to earn more salary in exchange for greater benefits that flow to others by non-donation means.

I am not putting too much weight on this thought experiment, but it does make me think that either the non-100 principle is wrong, or that the foregone salary counts for something in many circumstances even when it is not a salary sacrifice in the narrower sense.

  1. ^

    How to measure pear output is tricky. The market rate for similar work in the for-profit sector may be the least bad estimate here.

I think the right stance here is a question of “should EA be praising such people, or getting annoyed that they’re not giving up more, if it wants to keep a sufficient filter for who it calls true believers”, and the answer there is obviously that both groups are great & true believers, and it seems dumb to get annoyed at either.

The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.

I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think that "shadow donations"—the delta between what you could earn if you were income-maximizing, and what you're actually earning in your direct work job—are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).

Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safety Org, making $75,000/year. Belinda goes to work for OpenMind doing safety-neutral work, making $1M per year in total compensation. Belinda learns more marketable skills; she could make at least $1M/year indefinitely. Anna, on the other hand, has studiously plugged away at AI safety work, but since her work is niche, she can't easily transfer these skills to do something that pays better.

Then imagine that, after three years, Belinda joins Anna at Safety Org. Belinda was not fired; she could have stayed at OpenMind and made $1M per year indefinitely. At this point, Anna has gotten a few raises and is making $100,000, and donating 3% of her salary. Belinda gets the same job on the same pay scale, and does equally good work, but donates nothing. Belinda reasons that, because she could still be making $1M per year, she has "really" donated $900,000 of labor to Safety Org, and so has sacrificed roughly 90% of her income.

Anna, on the other hand, thinks that it is an immense privilege to be able to have a comfortable job where she can use her skills to do good, while still earning more than 99% of all people in the world. She knows that, if she had made different choices in life, she probably could have a higher earning potential. But that has never been her goal in life. Anna knows that the average person in her income bracket donates around 3% regardless of their outside job options, so it seems reasonable for her to at least match that.

Is Belinda more altruistic than Anna? Which attitude should EAs aspire to?
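A minimal sketch of the two accounting frames in the thought experiment above, using its hypothetical figures ("shadow donation" here just means forgone earnings counted as if they were donations; the helper names are purely illustrative):

```python
# Hypothetical figures from the Anna/Belinda thought experiment; purely illustrative.
def shadow_frame(outside_offer: float, salary: float, cash_donation: float) -> float:
    """Fraction 'given' if forgone earnings plus cash donations both count (Belinda's frame)."""
    return ((outside_offer - salary) + cash_donation) / outside_offer

def cash_frame(salary: float, cash_donation: float) -> float:
    """Fraction given counting only cash donations out of actual salary (Anna's frame)."""
    return cash_donation / salary

# Belinda: could earn $1M at OpenMind, earns $100k at Safety Org, donates nothing.
print(f"Belinda, shadow-donation frame: {shadow_frame(1_000_000, 100_000, 0):.0%}")  # 90%
print(f"Belinda, cash frame:            {cash_frame(100_000, 0):.0%}")               # 0%

# Anna: earns $100k at Safety Org and donates 3% of it.
print(f"Anna, cash frame:               {cash_frame(100_000, 3_000):.0%}")            # 3%
```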


To give some more color on my general view:

I don't really think there's a first-order fact of the matter as to who of these two (or anyone) is "more altruistic," or what one's "obligations" are. At bottom, there are just worlds with more or less value in them.

My view mostly comes from a practical view of how the EA community and project can be most impactful, credible, and healthy. I think the best attitude is closer to Anna's than Belinda's.

Donating also has other virtues over salary reductions, since it is concrete, measurable, and helps create a more diversified funding ecosystem.

To be clear, I think it's great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don't think the particular mindset of "well I have really sacrificed a lot because if I was purely selfish I could have made a lot more money" is one that we ought to recognize as particularly good or healthy.

I will note that my comment made no reference to who is “more altruistic”. I don’t know what that term means personally, and I’d rather not get into a semantics argument.

If you give the definition you have in mind, then we can argue over whether it's smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.

I will also note that, given the situation, it's not clear to me that Anna's proper counterfactual here isn't making $1M and getting nice marketable skills, since she and Belinda are twins, and so have the same work capacity & aptitudes.

> To be clear, I think it’s great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don’t think the particular mindset of “well I have really sacrificed a lot because if I was purely selfish I could have made a lot more money” is one that we ought to recognize as particularly good or healthy.

I think this is the crux personally. This seems very healthy to me, in particular because it creates strong boundaries between the relevant person and EA. Note that burnout & overwork is not uncommon in EA circles! EAs are not healthy, and (imo) already give too much of themselves!

Why do you think it's unhealthy? This seems to imply negative effects on the person reasoning in the relevant way, which seems pretty unlikely to me.

Suppose they're triplets, and Charlotte, also initially identical, earns $1M/year just like Belinda, but can't/doesn't want to switch to safety. How much of Charlotte's income should she donate in your worldview? What is the best attitude for the EA community?

I didn't read Cullen's comment as about 10%, and I think almost all of us would agree that this isn't a magic number. Most would probably agree that it is too demanding for some and not demanding enough for others. I also don't see anything in Cullen's response about whether we should throw shade at people for not being generous enough or label them as not "true believers."

Rather, Cullen commented on "donation expectations" grounded in "a practical moral philosophy." They wrote about measuring an "obligation to donate." 

You may think that's "bad moral philosophy," but there's no evidence of it being a post hoc rationalization of a 10% or other community giving norm here.

Should the EA Forum facilitate donation swaps? 🤔 Judging from the number of upvotes on this recent swap ask and the fact that the old donation swap platform has retired, maybe there's some unmet demand here? I myself would like to swap donations later this year. Maybe even a low-effort solution (like an open thread) could go a long way?

There used to be a website to try to coordinate this; not sure what ever happened to it.

[This comment is no longer endorsed by its author]

I assume it's the one I linked in my original post? Catherine announced it was discontinued. :/

Ah sorry, I read your post too quickly :-)
