I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into too much detail publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, I in fact heard about the AIS job from 80k; if by working at 80k I ensure that even two (additional) people hear about AIS jobs, isn't it possible that going to 80k could be even better for AIS than taking the job myself?
In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:
This take is getting less and less quick, so I think I'll post it, mull it over for a while, and then decide whether to write more or edit this one.
I think there's a big difference between "more effective" and "most effective". One of the most important and counterintuitive principles of EA is that trying to find the best option, rather than just a good option, can make a huge difference to how much good you do. We have to prioritise between different goods, and this is painful to do (hence easy to avoid) but really important.
FWIW I clicked on "What is the admissions bar for EA Global?" expecting it to be a post asking that question, rather than answering it. Maybe I'd simply call this "Admissions bar for EA Global" or something.
(Equally, don't overweight this just because I happened to comment on it, but maybe the agree/disagree votes will be useful)
I'm writing a comment rather than an answer because I think this also doesn't meet your criteria (too long, I'd guess), but I thought I'd mention It Looks Like You’re Trying To Take Over The World, a short story by Gwern on this theme.
This is a nice summary, and I agree with your theory about potential causes. I added the Impostor Syndrome tag to this post, which you might find useful to browse for more ideas from other people.
(In case anyone else was wondering, the dictionaries I checked accept both impostor and imposter as variants of the same word. It looks like existing posts are split about evenly between the two :) )
It feels like when I compare the person who does object-level work to the person whose meta-level work leads to (say) 2 people doing object-level work, the latter really does seem better, all things equal. But the intuition that calls this model naive is driven by a sense that it will turn out not to "actually" be 2 additional people: the additionality will be lower than you think, the costs of getting that result higher than you think, and so on.
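To make the shape of that discounting concrete, here's a minimal expected-value sketch in Python. Every parameter name (additionality, relative_fit, delay) and every number in it is a made-up assumption for illustration, not an estimate of the real quantities.

```python
# Toy expected-value model of the leverage argument.
# All numbers below are illustrative assumptions, not estimates.

def object_level_value(fit: float) -> float:
    """Value of one person doing object-level AIS work, scaled by fit."""
    return fit

def naive_meta_value(recruits: float, my_fit: float = 1.0) -> float:
    """The naive model: each recruit counts as a full extra me."""
    return recruits * object_level_value(my_fit)

def discounted_meta_value(
    recruits: float = 2.0,       # people I cause to hear about AIS jobs
    additionality: float = 0.4,  # fraction who wouldn't have got there anyway
    relative_fit: float = 0.7,   # recruits' fit for the roles, relative to mine
    delay: float = 0.9,          # discount for them starting later than I would
) -> float:
    """A less naive model: discount for counterfactuals, fit, and delay."""
    return recruits * additionality * relative_fit * delay * object_level_value(1.0)

if __name__ == "__main__":
    print(f"direct object-level work: {object_level_value(1.0):.2f}")
    print(f"naive meta (2 recruits):  {naive_meta_value(2.0):.2f}")
    print(f"discounted meta:          {discounted_meta_value():.2f}")
    # With these made-up discounts, 2 "recruits" are worth ~0.50 of one
    # direct hire, so the meta role only wins if the multiplier is much
    # larger or the discounts much milder than assumed here.
```

The point isn't these particular numbers; it's that the comparison can flip on discounts which the naive version simply omits.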
But this intuition is not as clear as I'd like about what the extra costs and reduced benefits actually are, or how big a deal they are. Here are the first ones I can think of: