This post is for orgs that have the common problem of not getting enough candidates to apply.
What I observe
About once a week, I talk to an EA who says “I won’t apply to this org; better-fit people will probably apply” or “I won’t apply to this org; I’m probably only slightly better than the alternative, so it’s a tiny counterfactual.”
I hypothesize the “next best” is often also not applying
I hear this so often that I suspect the “next best” candidate is frequently not applying either.
I hypothesize job ads often make this worse
Job ads often seem to forget that many EAs have impostor syndrome. They write things like “we only hire the best; even getting the 80th-percentile candidate instead of the 90th-percentile one would be so much worse for our org, which is why we aim so high!” [this is not an actual quote]
What do I recommend employers do?
- Look at your own job ad or hiring pitch and ask yourself how a candidate with impostor syndrome would read it.
- Share this problem with your candidates; they probably don’t know about it.
- Encourage candidates to apply regardless of very subjective assessments of their own skill. Here’s an example of a CEA job ad that tries hard to avoid this whole problem.
Does this happen with people who I think would get the job?
Yes, totally. It doesn’t seem correlated with skill as far as I can tell.
Is this the only reason EAs don’t apply?
No, but it’s one of the big ones.
Candidates who are reading this: Am I saying you should apply even if you think you're not a good fit?
It’s a long story. I hope this or this can help.
Summary
I hope this post uncovers a situation that is hard to see from the perspective of any individual employer or candidate, and that it lets you improve your hiring.
Have a low bar for reaching out
These considerations have certainly kept me from applying for any EA jobs for many years!
(I have my own EA startup now, which is probably the best of both worlds anyway, but that just means that this topic will become important for me from the other perspective.)
I’ve written about my worries here.
Basically, I feel like we’re back in 2006 before there was any EA or GiveWell, and someone gives me the advice that World Vision does good stuff and I should donate to them. It has about zero information content for me and leaves me just as ignorant about the best use of my resources as it found me. What are they trying to achieve and how? What are the alternatives? How do I compare them to each other? What criteria are important or irrelevant? How fungible are my contributions and what other activities am I leveraging?
Likewise with jobs, I’m at a complete loss. How reliable are our interview processes? What are the probability distributions around the projected-performance scores that they produce? How can a top candidate know how much their distribution overlaps with that of the runner-up and by what factor they are ahead? Maybe even more importantly: How can a top candidate know what other options the runner-up will have, so that the top candidate can decline the offer if the runner-up would otherwise go into AI capabilities (or is only an expected 2.5 rejections away from the AI capabilities fallback), or if the runner-up would otherwise have to give up on altruistic things because they’d run out of financial runway?
80,000 Hours has several insightful posts on the differences between the best and the second-best candidate, the reliability of interview processes, the trajectory of the expected added value of additional applicants, etc. Those are super interesting and (too) often reassuring, but deciding where to work for the next 5+ years is about as big a decision for me as deciding where to donate half a million dollars or so. So these blog posts don’t quite cut it for me. Nor do I typically know enough about an organization to be sure that I’m applying the insights correctly.
What I would find more reassuring are adversarial collaborations, research from parties that don’t have any particular stake in the hiring situation, attempts to red-team the “Hiring is largely solved” kind of view, and really strong coordination between orgs. (Here’s a starting point.)
Questionnaires tell me that I have a serious case of impostor syndrome, so I don’t trust my intuitions on these things, and I don’t want to write a red-teaming attempt for fear it might be infohazardous if I’m wrong on balance. Then again, I thought I must somehow be wrong about optimizing for impartial impact rather than warm fuzzies in my charity activism before I found out about EA, and now I regret not being open about that earlier.
One thing I have going for me is that I’m not particularly charismatic, so if I did end up as the supposed top candidate, I could be fairly sure that I got there by skill, by chance, or by some unknown factor. So I feel like the riskiness of jobs forms a Laffer curve for me: jobs with no other applicants are trivially safe, and jobs with hundreds of good applicants are safe again because the chance factor is really unlikely. In between be dragons.
Imma suggested reserving a large pot of donation money (along with time for volunteering and coaching, a promise to keep applying for jobs for a year, not to work on AI capabilities, etc.) and then signaling that I’ll donate this pot according to the preferences of the organizations I’m applying to if they all reject me. I can’t make the pot large enough to be really meaningful, but maybe it can serve as a tiebreaker.
[I feel like I don’t approach this topic as dispassionately as I usually do with epistemics, so please bear in mind this “epistemic status.”]
Indeed! I imagine that trades off against personal biases. When you feel like you’ve won a 1-in-1,000 lottery for your dream job while being worried about your finances, it’s hard to think objectively about whether taking the job is really the best thing impartially considered.