
Assumptions:

A. Many EA positions have a large surplus of people applying to them (e.g., research).

B. Everyone applying for EA jobs aims to do good.

C. Most people in high-earning professions will not be earning to give.

Here I take "EA jobs" to mean any job directly contributing to an EA cause.

Take two people, person 1 and person 2, where person 1 is marginally more skilled than person 2 at everything.
So person 1 is the job applicant who gets hired, and person 2 is the next best option.
When working directly for EA organizations, the added value of person 1 working there compared to person 2 would be fractional.
When working in a high-paying profession and aiming to earn to give, person 1 will donate much more money to good causes than person 2, who, per assumption C, would be a non-EA applicant donating little.

Following this, for areas with many applicants, would earning to give not be the more advantageous option?

 

For contributing directly:

Added direct value = (value of person 1's direct work) − (value of person 2's direct work) ≈ skill gap × value of the hire

For earning to give:

Value of donations = person 1's annual donations (by assumption C, person 2 would be a non-EA donating little)

So a direct contribution is worth it if:

Added direct value > Value of donations

Taking the median value from the 2018 80,000 Hours survey on how much a new hire is worth in donations, this comes out to $1,000,000 per year.
Assuming a 5% skill gap, that would make the added direct value:

Added direct value ≈ 0.05 × $1,000,000 = $50,000 per year

Looking at rough guesstimate salaries for earning to give, along with a significant dose of eyeballing, it seems that this value can quite easily be surpassed by donations.
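The comparison above can be sketched numerically. The $1,000,000 hire value is the survey's median; the 5% skill gap, the salary, and the donation rate are illustrative assumptions, not researched figures:

```python
# Sketch of the direct-work vs. earning-to-give comparison.
# hire_value is the survey's median donation-equivalent of a new hire;
# all other numbers are illustrative assumptions.

def added_direct_value(hire_value_per_year, skill_gap):
    """Counterfactual value of person 1 over person 2 in the direct role."""
    return hire_value_per_year * skill_gap

def donation_value(salary_per_year, donation_fraction):
    """Annual donations from earning to give (person 2 donates ~0 by assumption C)."""
    return salary_per_year * donation_fraction

direct = added_direct_value(1_000_000, 0.05)  # 5% skill gap -> $50,000/year
etg = donation_value(250_000, 0.30)           # assumed salary and donation rate

print(f"Added direct value: ${direct:,.0f}/year")
print(f"Earning-to-give donations: ${etg:,.0f}/year")
print("Earning to give wins" if etg > direct else "Direct work wins")
```

With these (assumed) numbers, earning to give comes out ahead; the conclusion is sensitive to the skill gap and donation figures chosen.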

 

Discussion:

It seems to me, then, that direct contribution can be beneficial in those roles where assumption A does not hold. And at that point, it would follow the normal career-finding advice available on 80,000 Hours.
From the 2019 survey, these would mostly be operations/management positions. Oddly enough, the same survey also shows that the EA community needs more researchers, which doesn't jibe with the perception of a PhD surplus.

Assumption B seems guaranteed either through the structure of the job or through the application process.
The counterargument here is that hiring person 1 frees person 2 up to do some other important job, but while assumption A holds, this effect wouldn't be significant.

Assumption C is mostly based on anecdotal observation but seems to hold true.

A lot of the numbers used in my findings are anecdotal rather than extensively researched.

 

My personal shortcomings:

I admit that I'm quite ignorant of the inner workings of the research world, and of how to write philosophical argumentation.
On top of that, there is quite a spread among the values in 80k's research which, due to my lack of skill in working with uncertainties, I did not account for.

Please argue with my points regardless of the sloppy style that they are delivered in.

Advice on writing is also appreciated but possibly better delivered in a method other than a reply.
I aim to improve my overall skills and abilities, and if there is something here that seems particularly faulty and could use more work, I would be happy to hear it!

 

Sources:

80k's research into donation value compared to direct work (the 2018 survey is the most recent version in which I found the relevant table):

https://80000hours.org/2018/10/2018-talent-gaps-survey/#half-would-give-up-two-suitable-hires-in-two-years-time-in-exchange-for-their-last-hire

2019 EA leaders survey

https://forum.effectivealtruism.org/posts/TpoeJ9A2G5Sipxfit/ea-leaders-forum-survey-on-ea-priorities-data-and-analysis#Organizational_constraints 


Answers:

> So person 1 is the job applicant who gets hired, and person 2 is the next best option. When working directly for EA organizations, the added value of person 1 working there compared to person 2 would be fractional.
> When working in a high paying profession and aiming to earn to give, person 1 will donate much more money to good causes than person 2, who according to assumption C would be a non EA applicant.

It looks like you're comparing a situation where an EA applies to an EA organization (competing against other EAs) to a situation where the EA applies for earning to give, competing against non-EAs. You argue that the counterfactual difference is larger if the EA gets the high-earning job instead of a non-EA because for the EA role, the next-best candidate would also do something impactful if they get the role. 

This is true when you look at it very narrowly (only look at the impact difference for that one specific job that people applied to, their first job application). However, consider what happens in each case after the other person gets rejected. The non-EA who gets rejected for the high-earning job will do something else where they presumably won't have an outsized impact, either. By contrast, the other EA person who also applied to the direct work role will likely continue to apply to impactful roles. (They might even consider earning to give as a fallback option.)

So, once you consider further effects (second job applications, etc.), it becomes clear that the consideration you highlight loses most of its relevance. (It only applies to the degree that you getting the EA job slows down other EAs' career trajectories or adds some chance that they give up on impactful roles altogether, being discouraged.)

See also this article.

> When taking the median value from the 2018 80k hours survey on how much a new hire is worth in donations it comes out to $1,000,000 per year.
> Assuming a 5% skill gap that would make the Added direct value:

I could imagine that the organizations here were asked to compare the person they actually hired to the next-best candidate. So, there's probably no discounting – the impact is estimated to be $1 million per year in donation/grantmaking equivalents.

The reason the values can be so high is because earning to give is only impactful if there are shovel-ready interventions. To get shovel-ready interventions, you need people doing direct work. To convert money into direct work, you need more direct work (e.g., grantmakers or headhunters or senior staff running hiring rounds and doing onboarding). In a funding landscape where organizations never have to neglect core priorities in order to fundraise, it isn't easy to replace direct work with money. Eventually, there have to be enough people to do all that direct work.


 
