NB: I think EA spending is probably a very good thing overall and I’m not confident my concerns necessarily warrant changing much. But I think it's important to be aware of the specific ways this can go wrong and hopefully identify mitigations. Thanks to Marka Ellertson, Joe Benton, Andrew Garber, Dewi Erwan, Joshua Monrad and Jake Mendel for their input.
Summary
- The influx of EA funding is brilliant news, but it has also left many EAs feeling uncomfortable. I share this discomfort and set out two concrete concerns which I have recently come across.
- Optics: EA spending is often perceived as wasteful and self-serving, creating a problematic image which could lead to external criticism, outreach issues and selection effects.
- Epistemics: Generous funding has provided extrinsic incentives for being EA/longtermist which are exciting but also significantly increase the risks of motivated reasoning and make the movement more reliant on the judgement of a small number of grantmakers.
- I don’t really know what to do about this (especially since it’s overall very positive), so I give a few uncertain suggestions but mainly hope that others will have ideas and that this will at least serve as a call to vigilance in the midst of funding excitement.
Introduction
In recent years, the EA movement has received an influx of funding. Most notably, Dustin Moskovitz, Cari Tuna and Sam Bankman-Fried have each pledged billions of dollars, such that funding is more widely available and deployed.
This influx of funding has completely changed the game. First and foremost, it is wonderful news for those of us who care deeply about doing the most good and tackling the huge problems which we have been discussing for years. It should accelerate our progress significantly and I am very grateful that this is the case. But it has also had a drastic effect on the culture of the movement which may have unfortunate consequences.
A few years ago, I remember EA meet-ups where we’d be united by our discomfort towards spending money in fancy restaurants because of the difference it could make if donated to effective charities. Now, EA chapters will pay for weekly restaurant dinners to incentivise discussion and engagement. Many of my early EA friends also found it difficult to spend money on holidays. Now, we are told that one of the most impactful things university groups can do is host an all-expenses-paid retreat for their students.
I should emphasise here that I think these expenditures are probably good ideas which can be justified by the counterfactual engagement which they facilitate. These should probably continue to happen, however uncomfortable they make us feel.
But the fact that these decisions can be justified on one level doesn’t mean that they don’t also cause concrete problems which we should think about and mitigate.
Big Spending as an Optics Issue
Over the past few months, I’ve heard critical comments about a range of spending decisions. Several people asked me whether it was really a good use of EA money to pay for my transatlantic flights for EAG. Others challenged whether EAs seriously claim that the most effective way to spend money is to send privileged university students to an Airbnb for the weekend. And that’s before they hear about the Bahamas visitor programme…
In fact, I have recently found myself responding to spending objections more often than the standard substantive ones (e.g. ‘what about my favourite charity?’, ‘can you really compare charities with each other?’, ‘what about systemic issues?’).
I am not contesting here whether these programmes are worth the money. My own view is that most of them probably are and I try to lay this out to those who ask. But it is the perceptions which I find most concerning: many people see the current state of the movement and intuitively conclude that lots of EA spending is not only wasteful but also self-serving, straying far from what you’d expect the principles of an ‘effective altruism’ movement to be. Given the optics issues which have hindered the progress of EA in the past, we should be wary of this dynamic.
Importantly, I’ve heard this claim not only from critics of EA, but also from committed group members and an aligned student who might otherwise be more involved. This suggests that aside from opening us up to external criticism from people who don’t like EA anyway, spending optics may also hinder outreach and lead to selection effects, whereby proto-EAs who are uncomfortable with how money is spent are put off the movement and less likely to get involved. (I am grateful to Marka Ellertson and Joshua Monrad, who both raised versions of this valuable point.)
Longtermism vs Neartermism
One especially problematic framing concerns the apparent discrepancy between longtermist and neartermist funding. Many people find it understandably confusing to hear that ‘EA currently has more money than it can spend effectively’ whilst also noticing that problems like malaria and extreme poverty still exist, especially given how much EA focuses on how cheap it is to save a life and how important it is to practise what we preach.
I don’t claim that more money should necessarily go to neartermist areas, but I fear that excellent people who initially come to EA through a global health or animal welfare route may be put off by this dynamic and leave the movement entirely, especially if it isn’t explained with nuance and sensitivity. This is a comment which I have heard repeatedly over recent months and I am concerned that it could become a significant obstacle to EA movement-building, including for future longtermists.
Coordination and the Unilateralist’s Curse
Longtermists often mention the unilateralist’s curse as a problem associated with various x-risks. Even if the vast majority of altruistic actors behave sensibly, it only takes one reaching a different decision to the group to cause the catastrophe. It seems to me that similar dynamics exist with EA spending. Even if most funders are careful with regard to the optics, it only takes one misstep to attract headlines and stick in people’s heads. Given past experience with ‘earning to give’, this should be especially concerning for the movement.
Financial Incentives as an Epistemics Issue
Several years ago, before the increase in funding, it didn’t pay to be EA. In fact, it was rather costly: financially costly because it usually involved a commitment to give away a lot of one’s resources, and socially costly because most people have an intuitive aversion to EA principles. As a result, most people around EA were probably there because they had thought hard and were really convinced that it was morally right.
In 2022, this is no longer necessarily the case. Suddenly, being an EA is exciting for a bunch of extrinsic reasons. College-age EAs have the chance to be flown around the world to conferences, invited to all-expenses-paid retreats and offered free dinners as an incentive for engaging with the community and the content.
As stated before, this is very exciting and a great thing. Generous funding gives us the chance to set ambitious visions to make EA huge on campuses around the world and get the best talent working on the biggest problems. Moreover, it can improve our diversity by making careers such as community-building accessible to people from different socioeconomic backgrounds. But it also risks clouding our judgement as individuals and as a movement.
Consider the case of a college freshman. You read your free copy of Doing Good Better and become intrigued. You explore how you can get involved. You find out that if you build a longtermist group in your university, EA orgs will pay you for your time, fly you to conferences and hubs around the world and give you all the resources you could possibly make use of. This is basically the best deal that any student society can currently offer. Given this, how much time are you going to spend critically evaluating the core claims of longtermism? And how likely are you to walk away if you’re not quite sure? Anecdotally, I’ve spoken to several organisers who aren’t convinced of longtermism but default to following the money nevertheless. I’ve even heard (joking?) conversations about whether it’s worth 'pretending' to be EA for the free trip.
When my friends in finance (not earning to give) tell me they’re working at Goldman to improve the world, I am normally sceptical. Psychology literature on motivated reasoning and confirmation bias suggests that we are excellent at finding views which justify whatever is in our interests. For example, one study shows that our moral judgements can be significantly altered by financial incentives; another shows that we naturally strengthen our existing views by holding confirming and disconfirming evidence to different standards.
Fortunately, unlike with finance careers, I think that longtermist careers are likely to be among the most impactful available to us. But given the financial incentives, I would expect it to be very difficult to notice if either longtermism as a whole or specific spending decisions turned out to be wrong. Research suggests that when a lot of money is on the line, our judgement becomes less clear. It really matters that the judgement of EAs is clear, so having a lot of money on the line should be cause for concern.
This is especially problematic given the nature of longtermism, simultaneously the best-funded area of EA and also the area with the most complex philosophy and weakest feedback loops for interventions.
Maybe this risk is mitigated by the fact that grantmakers in EA set these incentives by deciding where the money goes, and their judgements are careful and well-calibrated from years of experience, evaluation and excellent in-house research. This seems plausible to me. But if strong incentives are shifting our epistemic confidence from the movement as a whole to a small number of grantmakers, this is something we should at least notice.
What can we do differently?
I’m really not sure what the answer to this is, especially because I think most of these funding opportunities seem very good, so we shouldn’t stop them. I’m mainly putting this out there to start a conversation because I’m not sure how aware we are of these dynamics (I wasn’t until recently and others seem to think it is a concern which isn’t discussed enough, perhaps for some of the reasons stated above).
A few initial thoughts, not proposed with particular confidence:
- Can we create better resources for how to talk about the spending when it comes up, just like we have for substantive objections to EA? For example, accessible posts on why retreats / conferences / free dinners are considered good value for money under rigorous evaluative frameworks.
- (From Andrew) Along these lines, it could be valuable for university groups to conduct and publish some rough cost-benefit analyses on major programs (e.g. running a retreat, budgeting for socials, book and cookie giveaways, deciding whether to get an office). This is probably a good exercise for general EA thinking, but it might also help reduce some wastefulness by making EA groups think more about how they use money.
- A counter would be that this process takes time which could be spent on directly valuable activities - though for the reasons stated above, we should perhaps be sceptical of arguments which justify spending without thinking.
- It would be helpful to lay out clearly what money is available to which parts of the EA movement and what it can and can’t do. This would help clarify questions such as: “if EA has more money than it can spend effectively, why isn’t it giving more to AMF / why is it still encouraging people to donate to AMF / why can’t it just solve biorisk through brute financial force”. This post is a great start.
- We should be careful with how we advertise EA funding. For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive.
- Given the unilateralist’s curse, perhaps there should be some central forum for EA funders to coordinate / agree upon policies with an optics perspective in mind. Maybe this is already happening - I am certainly not well-placed to assess the ecosystem.
- (From Joe) Where appropriate, it should be made clear that grants aren’t conditional on agreement with the community. Funding criticism is a great start, but many people receiving grants (e.g. for travel) may still feel that there’s an implicit expectation for them to agree with the funder’s view, and we should make it clearer to people when this is not the case.
- Note, for example, that people who receive EA funding may find it more difficult to publish a critical piece like this, given the benefits which they derive from the status quo, perceptions of hypocrisy and feelings of betrayal towards the people funding them. As more EAs come to benefit from EA funding, this problem may grow.
- In this vein and if we think this is a big enough concern, perhaps we should encourage more criticism specifically relating to how funding is deployed?
- Should we re-emphasise the norm of significant giving? Money donated to top global health / animal welfare charities can still do a huge amount of good and taking this seriously as a community would help us avoid the mindset whereby the most impactful things we can do involve taking money rather than giving.
- A counter is that this may distract from other longtermist priorities which are much more valuable, but it might help with both optics and epistemics.
- (From Joe) At the very least, we should make the opportunity cost of funding more salient. EA was predicated on recognising the trade-offs inherent to altruistic decisions, and we shouldn’t forget that every ~$5,000 spent on speculative longtermist initiatives statistically costs a life in the short term. This is a significant responsibility which we shouldn't take lightly, yet current free-spending norms point the other way.
- Although we should often be willing to accept time-money trade-offs, there are some cases where norm shifts could go a long way, such as putting students up in cheaper hotels, booking flights further in advance, or selecting cheaper flights where inconvenience is minimal (rather than treating money as no object).
- While this wouldn’t necessarily change our actions significantly, having a culture where this is collectively acknowledged would reduce the problematic impression that we’ve stopped appreciating the value of money.
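As a toy illustration of the rough cost-benefit exercise suggested above for university groups, here is a minimal BOTEC sketch. Every number in it (attendee count, costs, engagement probability, value estimates) is a made-up placeholder rather than a recommendation; the $5,000 cost-per-life figure is the GiveWell-style benchmark referred to elsewhere in this post.

```python
# Rough, illustrative BOTEC for an EA group retreat.
# All inputs are invented placeholders; the point is the structure,
# not the conclusion. Plug in your group's real figures.

def retreat_botec(
    attendees=25,
    cost_per_attendee=400,            # travel + accommodation + food (assumed)
    p_counterfactual_engagement=0.2,  # chance an attendee engages much more as a result
    value_of_engaged_member=5000,     # very uncertain proxy for an engaged member's impact
    cost_per_life_saved=5000,         # GiveWell-style benchmark mentioned in the post
):
    total_cost = attendees * cost_per_attendee
    expected_benefit = attendees * p_counterfactual_engagement * value_of_engaged_member
    return {
        "total_cost": total_cost,
        "expected_benefit": expected_benefit,
        "benefit_cost_ratio": expected_benefit / total_cost,
        "opportunity_cost_in_lives": total_cost / cost_per_life_saved,
    }

print(retreat_botec())
```

Even a crude model like this forces the opportunity cost (the last line of the output) to be stated explicitly, which is most of the point of the exercise.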
Do you agree with the problems I've raised? If so, how do you think we can mitigate them?
One thing that bugged me when I first got involved with EA was the extent to which the community seemed hesitant to spend lots of money on stuff like retreats, student groups, dinners, compensation, etc. despite the cost-benefit analysis seeming to favor doing so pretty strongly. I know that, from my perspective, I felt like this was some evidence that many EAs didn't take their stated ideals as seriously as I had hoped—e.g. that many people might just be trying to act in the way that they think an altruistic person should rather than really carefully thinking through what an altruistic person should actually do.
This is in direct contrast to the point you make that spending money like this might make people think we take our ideals less seriously—at least in my experience, had I witnessed an EA community that was more willing to spend money on projects like this, I would have been more rather than less convinced that EA was the real deal. I don't currently have any strong beliefs about which of these reactions is more likely/concerning, but I think it's at least worth pointing out that there is definitely an effect in the opposite direction to the one that you point out as well.
Precisely. Also, the frugality of past EA creates a selection effect, so probably there is a larger fraction of anti-frugal people outside the community (and among people who might be interested) than we would expect from looking inside it.
My anecdotal experience hiring is that I get many more prospective candidates saying something like "if this is so important why isn't your salary way above market rates?" than "if you really care about impact, why are you offering so much money?" (Though both sometimes happen.)
I agree that it’s possible to be unthinkingly frugal. It’s also possible to be unthinkingly spendy. Both seem bad, because they are unthinking. A solution would be to encourage EA groups to practice good thinking together, and to showcase careful thinking on these topics.
I like the idea of having early EA intro materials and university groups that teach BOTECs, cost-benefit analysis, and grappling carefully with spending decisions.
This kind of training, however, trades off against time spent learning about e.g. AI safety and biosecurity.
Great point! I think each spending strategy has its pitfalls related to signalling.
I think this correlates somewhat with people's knowledge of and engagement with economics, and with political lean. "Frugal altruism" will probably attract more left-leaning people, while "spending altruism" probably attracts more right-leaning people.
1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
I'm not sure that's an entirely bad thing, because frugality seems mixed as a virtue, e.g. it can lead to: […]
However, we need new hard-to-fake signals of seriousness. […]
Agree.
Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do:
- Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
- Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to expect, is a bad person primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case for being heavily risk-averse here and avoiding hiring or funding such people. […]
Random, but in the early days of YC they said they used to have a "no assholes" rule, which meant they'd try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.
Part of me is a bit sad that community building is now a comfortable and status-y option. The previous generation of community builders had a really high proportion of people who cared deeply about these ideas, were willing to take weird ideas seriously and often take a substantial financial/career security hit.
I don't think this applies to most of the current generation of community builders to the same degree; it just seems like much more of a mixed bag people-wise. To be clear, I still think this is good on the margin, I just trust the median new community builder a lot less (by default).
Something I like about "Doing high upside things even if there's a good chance they might not work out and seem unconventional" as a mark of seriousness is that it's its own form of sacrifice: being willing to look weird and fail and give up on full security and job comfort and do something hard because it's positive EV.
Of the new hard-to-fake signals of seriousness in your list, this is the one I like most.
I think that this is underrated: as a community, we overemphasise actually achieving things in the real world, meaning that if you want to get ahead within EA it often pays to do the medium-value but reasonable thing over the super-high-EV thing, since the weird super-high-EV thing probably won't work.
I'm much more excited when I meet young people who keep trying a bunch of things that seem plausibly very high-value and give them lots of information, relative to people who did some okay-ish things that let them build a track record/status. FWIW, I think that some senior EAs do track these high-EV, high-risk things really well, but maybe the general perception of what people ought to do is too close to that of the non-EA world.
Thanks, I thought this was the best-written and most carefully argued of the recent posts on this theme.
Extra ideas for the idea list:
Also, for what it is worth, I was really impressed by the post. I thought it was a very well written, clear, and transparent discussion of this topic, with clear actions to take.
I would love frugality options!
+1, the frugality options seem like a nice way to "make the opportunity cost of funding more salient" without necessarily requiring huge changes from event organizers.
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
I worry that it'd feel pretty fake for people who actually care about counterfactual impact. Money goes from EA sources to EA sources both ways.
Most EAs I've met over the years don't seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.
That said, I like options and norms that discourage fancy options that don't come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn't make sense to pay extra for a nice room.
I think I agree with this. I think if I look historically at my mistakes in spending money, there was very likely substantially more utility lost from spending too little money rather than spending too much money.
To be more precise, most of my historical mistakes do not come from consciously thinking about time-money tradeoffs and choosing money instead of time ("oh I can Uber or take the bus to this event but Uber is expensive so I should take the bus instead") but from some money-expensive options not being in my explicit option set to prioritize in the first place ("oh taking the bus will take four hours total so I probably shouldn't attend the event") .
As I get in the habit of explicitly valuing my time often and trying to consider ways to buy time, I notice more and more options that my younger (and poorer) self would not even consider to be in the option set (e.g. international flights to conferences, cleaners, ordering food, paying money to alleviate bureaucracy hurdles, etc). Admittedly this coincided with the EA movement generally being much more spendthrift (and also there being far more resources now on time-money tradeoffs for people in my reference class) so it's plausible younger EAs don't have to go through the same mental evolutions to get the same effect.
I'm going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and would in hindsight clearly have had higher impact, e.g. buying uni textbooks so I could study with less friction and get better grades.
I know this isn't the only thing to track here, but it's worth noting that funding to GiveWell-recommended charities is also increasing fast, both from Open Philanthropy and from other donors. Enough so that last year GiveWell had more money to direct than room for more funding at the charities that meet their bar (which is "8x better than cash transfers", though of course money could be donated to things less effective than that). They're aiming to move 1 billion annually by 2025.
True, but GiveWell doesn't expect funding to grow at the same rate as top quality funding opportunities, so that $1bn/year is going to need further donors. Unless we believe GiveWell's top programmes/charities will never have a funding shortfall again, the point about where EA prioritises its funding still seems relevant.
Donating to AMF still seems like a good benchmark for cost effectiveness. Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities.
In the spirit of trying to really engage with the question and figure out ground truth, maybe it's worth making a quick CBA or guesstimate model based on your general views for "Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities" and then we can debate specifics and maybe come to better heuristics about this kind of thing. I'd be excited to see what numbers your intuition puts on things.
I've seen the time-money tradeoff reach some pretty extreme, scope-insensitive conclusions. People correctly recognize that it's not worth 30 minutes of time at a multi-organizer meeting to try to shave $10 off a food order, but they extrapolate this to it not being worth a few hours of solo organizer time to save thousands of dollars. I think people should probably adopt some kind of heuristic about how many EA dollars their EA time is worth and stick to it, even when it produces the unpleasant/unflattering conclusion that you should spend time to save money.
Also want to highlight "For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive" as what I think is the most clearly correct and actionable suggestion here.
I agree we should be careful with the "spend money to save time" guideline. It can be self-serving because spending time to save money can be unpleasant.
Also, there is the danger that you get used to the luxury of spending money to save time. If your situation changes, or you need to update your estimate of the value of your time downwards, you should be willing to spend the time and not the money! (I hope this does not happen to you, but it may, e.g. if you need to move to your career plan B/C/Z.)
This also applies to other luxuries.
Man, I find it so difficult (on, like, an emotional level) to think clearly about the dollar value of an hour of my time (I feel like it is overvalued?? because so many people make so much less money than me, a North American???) but I agree that adopting some kind of clear heuristic here is good, and that I should more frequently be doing explicit trades of "I will spend up to 2 hours on trying to find a cheaper option, because I think in expectation that's worth $60".
You might be aware of this but for others reading - there's a calculator to help you work out the value of your time.
I think it's worth doing once (and repeating when your circumstances change, e.g. new job), then just using that as a general heuristic to make time-money tradeoffs, rather than deliberating every time.
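Once the calculator (or your own estimate) has produced a figure, applying it can be as mechanical as this sketch; the hourly value and the two scenarios below are invented placeholders.

```python
# Sketch of the "value your time once, then apply it mechanically" heuristic.
# The hourly figure is whatever your one-off estimate gives you; $30/h and the
# two example scenarios below are made up for illustration.

HOURLY_VALUE = 30.0  # example: an organiser values their EA time at $30/h

def worth_spending_time(hours_spent, dollars_saved, hourly_value=HOURLY_VALUE):
    """Return True if spending `hours_spent` to save `dollars_saved` is worth it."""
    return dollars_saved > hours_spent * hourly_value

# Shaving $10 off a food order across a 30-minute, 6-person meeting: not worth it.
print(worth_spending_time(0.5 * 6, 10))   # False
# A few hours of solo organiser time to save thousands: worth it.
print(worth_spending_time(3, 2000))       # True
```

The value of a fixed rule like this is precisely that it also fires in the unflattering direction: sometimes it tells you to spend tedious hours to save money.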
If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.
One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.
EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstrated intellectual ability. […]
This is a very interesting point that, for me, reinforces the importance of keeping effective giving prominent in EA. It is both a good thing in itself and a defence against accusations of self-serving wastefulness if a lot of people in the community are voluntarily sacrificing some portion of their income (with the usual caveats about having actual disposable income).
GWWC, OFTW etc. may be doing EA an increasing favour by enlisting a decent proportion of the community to be altruistic.
It's also noticeable that giving seems to be least popular with longtermists, who also seem to be doing the most lavish spending.
Many people prominent in EA still donate very large percentages: Julia Wise (featured in Strangers Drowning) and Jeff Kaufman give 50%, Will MacAskill at least 50%, and probably the same for Peter Singer and Toby Ord.
I was at an EA party this year where there was definitely an overspend of hundreds of pounds of EA money on food which was mostly wasted. As someone who was there, at the time, this was very clearly avoidable.
It remains true that this money could have changed lives if donated to EA charities instead (or even used less wastefully towards EA community building!) and I think we should view things like this as a serious community failure which we want to avoid repeating.
At the time, I felt extremely uncomfortable / disappointed with the way the money was used.
I think if this happened very early into my time affiliated with EA, it would have made me a lot less likely to stay involved - the optics were literally "rich kids who claim to be improving the world in the best way possible and tell everyone to donate lots of money to poor people are wasting hundreds of pounds on food that they were obviously never going to eat".
I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don't think the obligations are any weaker than they were - we should just have a slightly lower cost effectiveness bar for funding things than before.
I had exactly the same thought in an identical-sounding situation. I felt incredibly uncomfortable, and someone at the party pointed out to me that these kinds of spending habits really alienate young EAs from less privileged backgrounds who aren’t used to ordering pricey food deliveries whenever they feel like it.
I think that it is worth separating out two different potential problems here.
1. It is bad that we wasted money that could have directly helped people.
2. It is bad that we alienated people by spending money.
I am much more sympathetic to (2) than (1).
Maybe it depends on the cause area but the price I'm willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn't seem worth factoring in things like a few hundred pounds wasted on food.
I think another framing here is that:
1) wasting hundreds of pounds of money on food is multiple orders of magnitude away from the biggest misallocation of money within EA community building,
2) all misallocations of money within EA community building are smaller than the misallocations caused by donations to less effective cause areas (for context, Open Phil spent ~$200M on criminal justice reform, more than all of its EA CB spending to date), and
3) it's pretty plausible that we burned much more utility from failure to donate/spend enough rather than via donating too much to wasteful things, so looking at the "visible" waste is ignoring the biggest source of resource misallocation.
For what it's worth, even though I prioritize longtermist causes, reading
made me fairly uncomfortable, even though I don't disagree with the substance of the comment, as well as
I agree that it's important to not let the perfect be the enemy of the good, and it'd be bad to not criticize X just because X isn't the single biggest issue in the movement. But otoh some sense of scale is valuable (at least if we're considering the object level of resource misallocation and not just/primarily optics).
Like if 30 EAs are at a party, and their time is conservatively valued at $100/h, the party is already burning >$50/minute, just as another example. Hopefully that time is worth it.
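To make the arithmetic explicit, here is a minimal sketch of the calculation in the comment above (the headcount and the $100/h valuation are the commenter's assumptions, not measured values):

```python
# Back-of-the-envelope cost of attendees' time at an event.
# Both inputs are illustrative assumptions from the comment above.
attendees = 30
value_per_hour = 100  # USD, assumed value of one attendee-hour

burn_per_hour = attendees * value_per_hour  # 30 * $100 = $3000/h
burn_per_minute = burn_per_hour / 60        # $3000 / 60 min = $50/min

print(f"${burn_per_minute:.0f}/minute")  # prints "$50/minute"
```

The figure is only as meaningful as the counterfactual behind it, which is exactly what the reply below disputes.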
This is probably a bit of an aside, but I don't think that is a valid way to argue about the value of time for people: It seems quite unlikely to me that instead of going to an EA party those people would actually have done productive work with a value of $100/h. You only have so many hours that you can actually do productive work and the counterfactual of going to this party would more likely be those people going to a (non-EA) party, going for dinner with friends, spending time with family, relaxing, etc than actually doing productive work.
Also with regards to longtermist stuff in particular, I think there’s a risk of falling into “the value of x-risk prevention is basically infinite, so the expected value of any action taken to try and reduce x-risk is also +infinity” reasoning.
I think this kind of reasoning risks obscuring differences in cost-effectiveness between x-risk mitigation initiatives which do exist and which we should take seriously because of other counterfactual uses of the money and because we don’t have unlimited resources.
(There’s a chance I’m badly rephrasing complicated philosophy debates around fanaticism, Pascal's mugging, etc. here, but I’m not sure.)
To me, the most important issue that this (and other comments here) raises is that, as a community, we don't yet have a good model of how an altruist who (rationally/altruistically) places a very high value on their time should actually act. Or, for that matter, how they shouldn't.
Thanks for this clear write-up; like many others, I definitely share some of your worries. I liked that you wrote that the extra influx of money could make community-building positions accessible to people from different socioeconomic backgrounds, since this point seems to be a bit neglected in EA discussions.
I think it is true for many other impactful career paths that decent wages and/or some financial security (e.g. smoothing career transitions with stipends) could help to widen the pool of potential applicants, e.g. to more people from less fortunate socioeconomic backgrounds. Don't forget that many people in the lower and lower-middle income classes are raised with the idea that it is important to take care of your own financial security. I have plenty of anecdotes from people in that group who didn't pursue an EA career in the past, because the wage gap and the worries about financial insecurity were just too large. I see multiple advantages coming from widening the pool to people from lower / lower-middle socioeconomic classes:
- Given that there is also a lot of talent in lower / lower-middle socioeconomic classes, you will finally be able to attract more of them. This will increase t…
Adding on: Increasing EA spending in certain areas could certainly support diversity, but it could have the opposite effect elsewhere.
I’m concerned that focusing community-building efforts at elite universities only increases inequality. I’m guessing that university groups do much of the recruiting for all-expenses-paid activities. In practice, then, students at elite universities will benefit, while students at state schools and community colleges won’t even hear about these opportunities. So the current EA community-building system ends up quite accurately selecting privileged students as the recipients of this money.
Curious about any work to change this pattern!
Thanks for writing this! Especially agree with: "We should be careful with how we advertise EA funding. For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive."
I've had a good experience with framing decisions around (reasonable) costs not getting in the way of high-impact work — not only from the perspective of optics, but also as a heuristic for where to draw boundaries (e.g. where to draw the line on what salaries to offer).
I think a lot of points in this post are very valid and concerning to me. I hope they will be taken seriously.
I guess mainly FTX, Open Philanthropy, EA Funds, and CEA. I've shared the article with relevant people in all of those.
Concrete example affecting me right now: this summer I’m considering internships in mental health, x-risk or global health cause prioritisation, and I’m also considering just doing a bunch of Coursera courses and working on a start up.
I think ideally I would be choosing entirely based on what offers more career capital / is more impactful, but it’s difficult not to be influenced by the fact that one of the internships would pay me £11k more than the other 3.
You should keep in mind that high-earning positions enable a large amount of donations! Money is a lot more flexible in which cause you can deploy it to. In light of current salaries, one could even work on x-risks as a global poverty EtG strategy.
You should be influenced by that! It is evidence for donors thinking that org is more important, and that org thinking you are more important. Prices transmit valuable information.
I think for difficult questions it is helpful to form both an inside view (what do I think) and an outside view (what does everyone else think). Pay is an indicator of the outside view. In an altruistic market how good an indicator it is depends on how much you trust a few big grantmakers to be making good decisions.
Ok, yes, but I think it’s a little more complicated than that, or we would all be working at Goldman or Google, who are also able to deploy altruistic narratives.
Hm, this might violate US antitrust law?
Thank you very much for this post. I thought it was well-written and that the topic may be important, especially when it comes to epistemics.
I want to echo the comments that cost-effectiveness should still be considered. I have noticed people (especially Bay Area longtermists) acting like almost anything that saves time or is at all connected to longtermism is a good use of money. As a result, money gets wasted because cheaper ways of creating the same impact are missed. For example, one time an EA offered to pay $140 of EA money (I think) for me for two long Uber rides so that we could meet up, since there wasn't a fast public transport link. The conversation turned out to be a 30-minute data-gathering task with set questions that worked fine when we did it on Zoom instead.
Something can have a very high value but a low price. I would pay a lot for potable liquid if I had to, but thanks to tap water that's not required, so I would be foolish to do so. In the example above, even if the value of the data were $140, the price of getting it was lower than that. After taking into account the value of time spent finding cheaper alternatives, EAs should capture the surplus whe…
I'm worried that in some cases it might be the case that grant makers and grant receivers are friends who actively socialize with each other, and that might corrupt the grantmaking process.
Being friends with someone is also a great way of learning about their capabilities, motivations and reliability, so I think it could be rational for rich funders to be giving grants to their friends moreso than strangers.
I disagree with you here. I think being friends with someone makes you quite likely to overestimate their capabilities / reliability etc. If there’s psychology research available on how we evaluate people we know vs strangers, I’d love to read it.
Wait what? Predictive validity of CVs is minimal for most jobs; one might naively guess that they ought to be even less predictive for funding entrepreneurial projects than for jobs.
Why do you think companies rely on referrals more than on CVs?
There are lots of ways to accurately predict a job applicant’s future success. See the meta-analysis linked below, which finds general mental ability tests, work trials, and structured interviews all to be more predictive of future overall job performance than unstructured interviews, peer ratings, or reference checks.
I’m not a grantmaker and there are certainly benefits to informal networking-based grants, but on the whole I wish EA grantmaking relied less on social connections to grantmakers and more on these kinds of objective evaluations.
Meta-analysis (>6000 citations): https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.1733&rep=rep1&type=pdf
I'm not sure if this perspective is helpful but this issue reminds me of a somewhat analogous situation in the Financial Independence Retire Early (FIRE) movement. Originally the focus was on drastically limiting spending, increasing the savings rate to as high as possible, and retiring shockingly young. Then, as time passed some people realized they didn't want to live in such austerity. Other people found that they could move things along faster by focusing on earning more, instead of spending less. Then there were people who didn't really want to retire but more like get enough income to be comfortable and then downshift their lifestyles. There were folks who just focused on making as much money as possible and remained in the community even though they were just about getting rich. Then some people sort of stumbled into the movement having made a ton of money on cryptocurrency or Tesla options or whatever...they never really applied any of the principles but still retired early.
With all these changes in the demographics and mindsets of the community I've noticed that the subjects discussed and the behavior encouraged has notably changed…
I think what you’re describing is drift due to size (well in this case of FIRE, it actually might be drift due to experiences/values/maturity but let’s say due to size). The FIRE movement is “wide”. Maybe more appropriately, a subreddit like r/antiwork or r/superstonk is “wide”.
These “wide” movements have a lot of people. They often have momentum and can coordinate. But it’s unclear what resources and actions they can take, beyond buying stock or something. Also, as you point out, they have the tendency to drift or break apart.
But EA can do something else, which is getting “tall”. $50B of funding is just the beginning and this money is the least interesting resource EA has. EA can accumulate other things of great value. I think it's hard to write out exactly what these resources are (because it's hard to know in advance or because I’m dumb) but they are probably related to institutions and talent. One example would be a powerful applied math group that solves ELK.
A few thoughts on how we could mitigate some of these risks:
Consider the analogy with food production and food waste in relation to global hunger. We can grow enough food to feed the planet. Our ability to solve world hunger is not constrained by food production, but, in my understanding, by logistical issues involving waste, transportation, warfare, and governance problems.
Likewise, in EA, our ability to address the problems with which we are concerned may be increasingly unconstrained by funding. Instead, it's bottlenecked by similar logistics problems: waste, governance, coordination within and between organizations, the challenges of vetting grants, finding talent, building new organizations, and, as you are pointing out, optics. Can't blame lack of funding for your failures when you're no longer bottlenecked by funding!
It's important to understand that these optics and logistical problems are not a fluke, or the consequence of something we did wrong, but a natural consequence of growing to a certain size. It's just the next set of problems for us to solve.
Going forward, I would advocate for basing perceptions issues on legible evidence. I have no problem with this post, which does a good job of furthering a meaningful conversation. I n…
With the caveat that this is obviously flawed data because the sample is "people who came to an all-expenses-paid retreat," I think it's useful to provide some actual data Harvard EA collected at our spring retreat. I was slightly concerned that the spending would rub people the wrong way, so I included as one of our anonymous feedback questions, "How much did the spending of money at this retreat make you feel uncomfortable [on a scale of 1 to 10]?" All 18 survey respondents answered. Mean: 3.1. Median: 3. Mode: 1. High: 9.
I think it's also worth noting that in response to the first question, "What did you think of the retreat overall?", nobody mentioned money, including the person who answered 9 (who said "Excellent arrangements, well thought out, meticulous planning"). On the question "Imagine you're on the team planning the next retreat, and it's the first meeting. Fill in the blank: "One thing I think we could improve from the last retreat is ____"," nobody volunteered spending less money; several suggestions involved adding things that would cost more money, including the person who answered 9, who suggested adding daily rapid tests. The question "Did participating in…
apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. been thinking of 1 as the maximally uncomfortable score?
Thanks for writing this post, this is an area I've also sometimes felt concerned about so it's great to see some serious discussion.
A related point that I haven't seen called out explicitly is that monetary costs are often correlated with other more significant, but less visible, costs such as staff time. While I think the substantial longtermist funding overhang really does mean we should spend more money, I think it's still very important that we scrutinize where that money is being spent. One example that I've seen crop up a few times is retreats or other events being organized at very short notice (e.g. less than two weeks). In most of these cases there's not been a clear reason why it needs to happen right now, and can't wait a month or so. There's a monetary cost to doing things last minute (e.g. more expensive flights and hotel rooms) but the biggest cost is that the event will be less effective than if the organizers and attendees had more time to plan for it.
More generally I'm concerned that too much funding can have a detrimental effect on organisational culture. It's often possible to make a problem temporarily go away just by throwing money at it. Sometimes that's the right ... (read more)
My thoughts on this:
This post clearly articulates a lot of the related thoughts I've been having and discussing with other organizers; well done. I will add my quickly dashed off thoughts, coming in particular from the perspective of a EA group organizer:
1. The time/money trade-off is real, particularly for mostly volunteer-led groups where volunteer capacity is our main bottleneck. Nonetheless, in my view being cognizant of trade-offs when allocating resources is core to EA, and it is a real loss when we just vaguely gesture at the time/money trade-off and spend money without really thinking deeply about its best use. I advocate taking a rule utilitarian approach to this -- even if in any given situation it might take more time than it is "worth" to really think hard about whether spending funds on something is the best use of those funds--even within a more narrow framework like a group's overall goals--it is still worth doing as a rule. This also reinforces the norms of talking explicitly about trade-offs, cause prioritization, and thinking strategically.
2. This is anecdotal of course, but I have directly seen people express discomfort when our group spends money on, e.g., paying f... (read more)
Because of Evan's comment, I think that the signaling consideration here is another example of the following pattern:
Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).
The other primary example of this I can think of is with veganism and the signaling benefits (and usually unrecognized costs).
A solution is that when you find yourself saying “X will put off audience Y” to ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”
Warren Buffett called his private jet 'The Indefensible' — then renamed it 'The Indispensable' after realizing it was worth the money.
Source
Following the academic research closely as EAs often do produces many perspectives that are surprising to traditional activists. I'm a student at University of California Davis. Here my frugality is essential to getting my peers to take my perspectives on effectiveness seriously. If it wasn't for the frugality, they would dismiss me as not altruistic because I'm a moderate democrat instead of a socialist. I'm frugal because I believe it's the right thing to do (for me at least), not because of the optics. I don't know what the best answer is overall, but believe we should be particularly cautious about abandoning frugality in very left wing environments. Perhaps very different levels of frugality will be best in different communities.
Even before a cost-benefit analysis, I'd like to see an ordinal ranking of priorities. For organizations like the CEA, what would they do with a 20% budget increase? What would they cut if they had to reduce their budget by 20%? Same thing for specific events, like EAGs. For a student campus club, what would they do with $500 in funding? $2,000? $10,000? I think this type of analysis would be helpful for determining if some of the spending that appears more frivolous is actually the least important.
My suggestion would be that more people interested in Effective Altruism infrastructure donate to Giving What We Can instead of the E.A. Infrastructure Fund or CEA Community Building Fund. A community organized around effective giving is 1) better for optics; 2) better for us; 3) anecdotally, I was inducted into E.A. through global poverty, and then later got into longtermism and animal welfare by extension. Without good infrastructure and a strong culture of effective giving, E.A. will cease to be an excited and exciting (and growing) community working to solve the world's biggest problems, and will become simply a few eccentric billionaires' weird AI-risk pet project.
FWIW, I think it'd be pretty hard (practically and emotionally) to fake a project plan that EA funders would be willing to throw money at. So my prior is that cheating is rare and an acceptable cost to being a high-risk funder. EA is not about minimising crime, it's about maximising impact, and before we crack down on funding we should check our motivations. I don't want anyone to change their high-risk strategy based on hearsay, but I do want our top funders to be on the lookout so that they might catch a possible problem before it becomes rampant.
I like the culture-aligning suggestions for other reasons, though. I think the long-term future will benefit from the EA community remaining aligned with actually caring about people.
With Asana's stock down 82% in the past six months, Meta down 43%, and SBF's net worth cut in half in the past month, maybe the bigger worry should be a period of austerity and cutbacks?
I'm not sure if there's any data on this, but I think EAs do actually tend to come from well-off backgrounds.
Because of that, I think a share (I'd guess like 15%?) of EA funding for career building for students and recent graduates doesn't actually have counterfactual impact and just provides funding for people to do stuff which they would have spent their own money on anyway. More money in EA will mean more money being used in this way.
Obviously, this wasted money is bad, because it's still important for us to be cost-effective and the counterfactual use is still AMF.
So I think we'd benefit from a strong norm against using EA funding for career building activities which people would have spent their own money on anyway.
I don't think we should retire the "do you think this would be a better use of money than giving it to AMF?" type thinking, we should keep it alongside "actually, yes, flow through effects could mean that this is a better use of money than giving it to AMF".
There's also probably a case for experimenting with means-testing for grants, which a lot of social initiatives use to focus their money on people who need it the most, which improves counterfactual cost-effectiveness.
This is a great post, and I'm glad these points are being raised. I share a lot of the same concerns (basically, what happens to EA long term when it's just a good deal to join it?).
A big and small personal win from these changes in funding:
But it's easy to get into self-serving territory where you value your time so highly that you can justify almost any expense (or don't think of cheaper ways to meet the same goals). This can also move us into territory where, to do ostensibly altruistic work, we don't give anything up, and, in fact, argue that others should give things to us.
This feels fundamentally different from the movement that attracted me 5 years ago (though the reasoning is very consistent, and may well be right).
Like others, I really appreciate these thoughts, and it resonates with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. I think the "EA needs megaprojects" thing has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there's a good chance EA is around for a while and quite large and powerful. We need to make sure we put these tools to good use and retain the right values.
It's interesting here how far this is from the original version of EA and its criticisms; e.g. that EA was an unrealistic standard that involved sacrificing one's identity and sense of companionship for an ascetic universalism.
I think the old perception is likely still more common, but it's probably a matter of time (which means there's likely still time to change it). And I think you described the tensions brilliantly.
Congrats on having the most upvoted EA Forum post of all time!
Free food and free conferences are things that are somewhat standard among various non-EA university groups. It's easy to object to whether they're an effective use of money, but I don't think they're excessive except under the EA lens of maximizing cost-effectiveness. I think if we reframe EA university groups as being about empowering students to tackle pressing global issues through their careers, and avoid mentioning effective donations and free food in the same breath, then it's less confusing why there is free stuff being offered. (Besides apparently being more appealing to students, I also genuinely think high-impact careers should be the focus of EA university groups.)
I'm in favor of making EA events and accommodation feel less fancy.
There are other expenses that I'd be more concerned about from an optics perspective than free food and conferences.
Maybe I missed this in a previous comment (or even the text itself, I just ctrl+f'ed it after skimming it) but one thing I think it could be worth spending more on is better working conditions (I think several EA orgs already do this well, but I would be surprised if there are no "laggards"). Think staffing projects properly so there is no burn-out, paid parental leave for both parents, childcare facilities near bigger offices, properly paid internships, etc. Burn-out plagues the "making the world better" industry and I think we can attract a lot of the…
A lot of good points here.
A few thoughts on the benefits of a frugal community:
Not sure if this is in any way a valid perspective of looking at it:
I wonder how the big spending looks from the perspective of a small donor. Say, a person with a median income within a rich country who gives 1-10 percent of their salary away.
I used to "earn-to-give" with an after-tax salary of 11 euros/hour. That's a lot compared to the global average! This was enough to donate >10 percent. But an hour of my past self's work could fund maybe a few minutes (?) of a researcher's time (I don't know what EA researchers earn) - and it might have been…
Thank you for writing this post; I know these take a lot of time and I think this was a really valuable contribution to the discourse/resonated strongly with me.
I find it helpful to get clearer about who the audience is in any given circumstance, what they most want/value and how money might help/hurt in reaching them. When you have a lot of money, it's tempting to use it as an incentive without noticing it's not what your audience actually most values. (And creates the danger of attracting the audience that does most value money, which we obviously don…
I think the point has been made in a few places that more money means lower barrier to entry and is an opportunity to reduce elitism in EA and I just wanted to add some nuance:
You point out it's difficult to control for "unilateralism". There isn't just one major funder but several, and each has many different areas and projects.
One thing that is more manageable and visible are "institutions" and culture around leadership:
- I think there is a genuine culture of good leadership ("servant leadership"?) in older and more established EA institutions/funders
- A lot of people right now in leadership and younger leader positions, seem to have given up higher income opportunities to be where they are
- A lot of people are selected not just bec…
Great post! This resonates a lot with me, and I'm happy the post has gotten a fair bit of attention. Anecdotally, this has increasingly become the part of EA I feel I have to answer for the most to outsiders these days.
A slightly related idea I've seen some success with — both in EA and elsewhere — is what I've come to think of as the reverse free lunch effect: when people get something fancy or expensive for free, they tend to become aware they are being incentivized to be there. After all, there is no such thing as a free lunch and there might be an implicatio…
Thank you so much for this post. It eloquently captures concerns that I've increasingly heard from group members (e.g., I know a fairly-aligned member who wondered whether a retreat we were running was a "waste of CEA's money"). While I agree that the funding situation is a boon to the movement, I also agree that we should carefully consider its impact on optics/epistemics. I also think all your suggestions sound reasonable and I'd be really excited to see, for example,
- a 'go-to' justification (ideally including a BOTEC) for spending money on events
- more M&a…
I wonder if it might be possible to get volunteers to help find some opportunities to save money, in the genre of
I am not confident that this is true, because coordinating with volunteers is a lot of work and coordination-time is limited, but I could imagine a world where you could be like "here is my BATNA for booking flights for these speakers, if someone can improve upon this in the next 12 hours, I will donate the difference in money to the charity of their choice".
I suspect that this will be more of an issue for the global poverty part of the movement and less of an issue for the long-termist component of the movement.
FWIW, Chris didn't say what you seem to be claiming he said
Maybe I'm misunderstanding this but I disagree. I think the average person thinks spending tons of money on global health and poverty is good, particularly because it has concrete, visible outcomes that show whether or not the work is worthwhile (and these quick feedback loops mean the money can usually be spent on projects we have stronger confidence in).
But I think that spending lots of money on people who might have a .000001% chance of saving the world (in ways that are often seen as absurd to the average person) is pretty bad optics. A lot of non-EAs don't think we can realistically make traction on existential risk because they haven't seen any evidence of traction. Plus, longtermists/x-risk people can come across as having an unfounded sense of grandiosity - because there are a whole bunch of people out there who think their various projects will drastically transform the world, and most people won't assume that the longtermist approach is the only one that'll actually work.
Yes, you should be influenced by it, in proportion to the extent you give credence to their worldview and agree with their values.
I don't have anything smart or worthwhile to comment, but I want to say that I am glad you wrote this.
I'm quite uncomfortable with the idea that the best use of money is to give it to inexperienced young people from wealthy families who went to expensive schools. Helping privileged people get access to more privileged doesn't rank high on my personal list of cause areas, and I'm glad that someone is speaking out against this trend.
Very strongly agree with you here. I also agree that the positives tend to outweigh the negatives, and I hope that this leads to more careful, but not less giving.
Thanks for writing this up!
This post does resonate with me, as when I was first introduced to EA, I was sceptical about the idea of "discussing the best ways to do good". This was because I wanted to volunteer rather than just talk about doing good (this was before I realised how much more impact I could have with my career/donations) and I think I would’ve been even more deterred if I’d heard that donated funds were being spent on my dinners.
However, it sounds like my attitude might have been quite different to others, reading the comments here. Also, I suspect I would’ve ended up becoming involved in EA either way as long as I heard about the core ideas.
I think a giga-donation ($1B+) or two to GiveDirectly will go a long way to improving optics (and - let us not forget - millions of lives!). In general, extravagant spending should be matched with such donations.
There should be some “optimal” allocation of funding or best effort to find one.
If there are extravagances (wasteful high spending that is ex ante bad) we should reveal that here publicly and analyze and take actions so that it doesn’t happen again.
It doesn’t make sense to reallocate vast amounts of money to offset another bad act.
I strongly agree that one should focus on impact, not on offsetting. See Claire Zabel's post against offsetting.
This post is excellent - thank you for writing and sharing. ❤️
Regarding this suggestion:
"Given the unilateralist’s curse, perhaps there should be some central forum for EA funders to coordinate / agree upon policies with an optics perspective in mind."
I think this would be hugely helpful, and that such a forum should be open and accessible to the rest of the EA community. I agree that SBF and Dustin+Cari have made amazing strides and are funding generally awesome things, but there's something unsettling about them being able to unilaterally move the needle…
A core issue with “voting” is that it’s not hard to change the voting pool (this is a whole other side to the coin no one has stirred everyone up with a post about, because I guess it’s less visceral than being infiltrated by stealthy predators). The incentives to change the voting pool would be so vast, and the institutional demands to regulate it are so large and don’t exist, that the system will collapse almost immediately.