This post is for orgs with the common problem of not getting enough applicants.
What I observe
About once a week, I talk to an EA who says “I won’t apply to this org; better-fit people will probably apply” or “I won’t apply to this org; I’m probably only slightly better than the alternative, so it’s only a tiny counterfactual difference”.
I hypothesize the “next best” is often also not applying
I hypothesize this because I hear the reasoning above so often: if that many people are talking themselves out of applying, the “next best” candidate is probably among them and not applying either.
I hypothesize job ads often make this worse
Job ads often seem to forget that many EAs have impostor syndrome. They write things like “we only hire the best; even getting the 80th percentile instead of the 90th percentile would be so much less good for our org, which is why we aim so high!” [this is not an actual quote]
What do I recommend employers do?
- Look at your own job ad or hiring pitch and ask yourself how a candidate with impostor syndrome would read it.
- Share this problem with your candidates; they probably don’t know about it.
- Encourage candidates to apply regardless of very subjective assessments of their own skill. Here’s an example of a CEA job ad which tries hard to avoid this whole problem.
Does this happen with people who I think would get the job?
Yes, totally. It doesn’t seem correlated with skill as far as I can tell.
Is this the only reason EAs don’t apply?
No, but it’s one of the big ones.
Candidates who are reading this: Am I saying you should apply even if you think you're not a good fit?
Long story. I hope this or this can help.
Summary
I hope this post uncovers a situation that is hard to see from the perspective of any individual employer or candidate, and that it lets you improve your hiring.
Have a low bar for reaching out
[I feel like I don’t approach this topic as dispassionately as I usually do with epistemics, so please bear in mind this “epistemic status.”]
Indeed! I imagine that trades off against personal biases. When you feel like you’ve won a 1:1,000 lottery for your dream job while being worried about your finances, it’s hard to think objectively about whether taking the job is really the best thing impartially considered. I’d much rather stand on the shoulders of a crowd of people who are biased in many different directions and have homed in on some framework that I can just apply when I have to make such a decision.
Oh, sorry, not explicitly, but when I run into an important, opaque, confusing problem and most other people act like it’s not there, my mind goes to, “They must understand something that makes this a solved problem or non-issue that I don’t understand.” But of course there’s also the explanation that they’ve all concluded that someone else should solve it or that it’s too hard to solve.
Back in the day before EA, orgs that I was in touch with were also like, “The library, the clinic, and the cat yoga are all important projects, so we should split our funds evenly between them,” and I was secretly like, “Why? What about all the other projects besides these three? How do you know they’re not at least equally important? Are those three things really equally important? How do they know that? If I ask, will they hate me and our org for it? Or is it an infohazard, and if I ask, they’ll think about it and it’ll cause anomie and infighting that has much worse effects than any misallocation, especially if I’m wrong about the misallocation?”
It’s hard to say in retrospect, but I think my credence was split roughly like “30% they know something I don’t; 30% it’s an infohazard; 30% something else is going on; and 10% I’m right.” I failed to take into account that the “10% I’m right” should get much more weight because of how important it would be if it turned out to be true despite the low probability, even though, conversely, I was very concerned about the dire effects of spreading a viral infohazard.
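Spelling out the correction with made-up numbers: if being right would have meant, say, a 20x better allocation, while raising the question wrongly would have cost only half a unit of goodwill, then speaking up had positive expected value even at 10%:

$$0.1 \times 20 \;-\; 0.9 \times 0.5 \;=\; 1.55 > 0$$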
(After 1–2 years I started talking about it in private with close friends, and after 4 years, in 2014, I was completely out of the closet, when I realized that Peter Singer had beaten me to the realization by a few decades and hadn’t destroyed civilization with it.)
Now I feel like the situation is vaguely similar, and I want to at least talk about it to not repeat that mistake.
I just need to try to find them again.
https://80000hours.org/2021/05/how-much-do-people-differ-in-productivity/
If productivity is really power-law distributed, that’d be a strong reason not to worry much about this, because the top candidate is probably easy to identify. But without having engaged much with the research, I’m worried that seeming outliers are often carried by luck and circumstance as much as by their own abilities.
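As a toy illustration of why the tail shape matters (my own sketch with arbitrary parameters, not something from the 80k article): the simulation below compares how far the best of 100 applicants stands out from the runner-up under a thin-tailed vs. a heavy-tailed distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_applicants = 10_000, 100

def mean_top_two_ratio(draws):
    """Average ratio of the best to the second-best applicant across trials."""
    top2 = np.sort(draws, axis=1)[:, -2:]
    return (top2[:, 1] / top2[:, 0]).mean()

normal = rng.normal(100, 15, (n_trials, n_applicants))    # thin-tailed, IQ-like
pareto = rng.pareto(1.5, (n_trials, n_applicants)) + 1.0  # heavy-tailed power law

print("normal:", mean_top_two_ratio(normal))  # ~1.04: top two nearly identical
print("pareto:", mean_top_two_ratio(pareto))  # ~3: a lone outlier is common
```

If the real distribution is closer to the Pareto case, the top candidate is easy to spot; if it’s closer to the normal case, the top two are nearly interchangeable.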
What makes things worse, or harder to study, is that there are probably always many necessary conditions for outsized success, some of which stem from the candidate and others from the position or people the candidate ends up working with. These need to be teased apart somehow.
Brian has this interesting article about the differences in expected cost-effectiveness among the top 50% of charities. It contains a lot of very general considerations that limit the differences that we can reasonably expect between charities. Maybe a similar set of considerations applies to candidates so that it’s unlikely that there are even 10x differences between the top 50% of candidates in subjective expectation.
https://80000hours.org/2013/05/intelligence-matters-more-than-you-think-for-career-success/
With Less Wrong in 2013 maybe having an average and median IQ almost three standard deviations above the population average, and given the overlap between LW and EA, it’s easy for all but about 1 in 300 people to conclude that they probably don’t need to apply for most jobs. (Not that they’d be right not to – that’s an open question imo.) Whatever crystallized skills they have that are unique and relevant to the job, the IQ 150+ people can probably pick them all up within a year. That’s an oversimplification, since there are smart people who will just refuse to learn something or otherwise adapt to the requirements of the situation, but it feels like it’ll apply by and large.
An org I know did an IQ test as part of the application process, but one that was only calibrated up to IQ 130. That could be an interesting data point since my model would predict that some majority (don’t know how to calculate it) of serious applicants must’ve maxed out the score on it if the average IQ among them is in the 140 area. (By “serious” I mean to exclude ones who only want to fill some quota of applications to keep receiving unemployment benefits and similar sources of noise.)
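For what it’s worth, that majority is straightforward to estimate under a normality assumption (the mean of 140 and SD of 15 below are my placeholder numbers, not the org’s data):

```python
from scipy.stats import norm

mean_iq, sd, test_ceiling = 140, 15, 130
frac_maxed = norm.sf(test_ceiling, loc=mean_iq, scale=sd)  # P(IQ > 130)
print(f"{frac_maxed:.0%} of serious applicants would max out the test")  # ~75%
```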
What’s ironic is that as a kid I thought my 135 score was unlikely to be accurate, because a score that high has only about a 1:100 probability, so it seemed more likely that I had gotten lucky during the test in some fashion or that it was badly calibrated. (Related to the Optimizer’s Curse, but I didn’t know that term at the time.) Now, among people whose average IQ is 140ish, it seems perfectly plausible. Plus, several more tests all came out at 133 or 135. Yay, reference class tennis! Doesn’t reduce my confusion about what to do, though.
Update: One of my worries has been that LW surveys from 2013–15ish found that the average IQ on LW was 140–143, even after quite a bit of statistical sanitization. If that carries over to EA, it makes me below-average smart in that crowd, so I should expect to be among the top candidates for a job almost never (only when it demands very rare, specific skills that I happen to have). There are complicating effects – e.g., this subgroup is itself probably not normally distributed (though idk in which direction that pushes), and the smartest people are perhaps already employed – but that’s all a bit unconvincing.
Since then, Scott has had more data and ideas and found that the average IQ among the LW 2015 crowd is probably closer to 128. That makes me above-average smart and sign-flips this whole consideration for me.
https://80000hours.org/2019/08/how-replaceable-are-top-candidates-in-large-hiring-rounds/
This seems like a very interesting model that has made me much less worried about applications to very large hiring rounds in fields where the applicants’ plan Z is unlikely to be extremely harmful.
I wanted to play around with the models, reimplement them in Squiggle or Causal, and understand their sensitivity to the inputs better, but I never got around to it.
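In the meantime, here’s a much cruder stand-in (my own toy model in Python, not a reimplementation of the 80k one): simulate applicant pools of different sizes and look at the expected quality gap between the best applicant and the runner-up, which is the quantity the replaceability argument turns on.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Assumption: applicant quality is standard-normal; real pools may differ.
for n_applicants in (5, 20, 100, 500):
    quality = rng.normal(0, 1, (n_trials, n_applicants))
    top2 = np.sort(quality, axis=1)[:, -2:]
    gap = (top2[:, 1] - top2[:, 0]).mean()  # best minus runner-up, in SDs
    print(f"{n_applicants:4d} applicants: mean gap ≈ {gap:.2f} SD")
```

The gap shrinks as the pool grows, which matches the article’s intuition that in very large hiring rounds the runner-up is almost as good as the winner.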
https://80000hours.org/articles/coordination/
It’s been too long since I read this article, but it also seems very relevant!
https://80000hours.org/career-guide/personal-fit/#performance-is-hard-to-predict-ahead-of-time
This is another article (section) I referenced.
Another important input is the so-called “value drift,” which, in my experience, has nothing to do with value drift and is mostly people running out of financial runway and ending up in dead-end industry jobs that eat up all of their time until they burn out. (Sorry for the hyperbole, but I dislike the term a lot.)
More recent research indicates that it’s lower than I would’ve expected. But I haven’t checked whether I trust the data to be untainted by things like survivorship bias.