It would be amazing if we always knew ahead of time which of the people pursuing a fat-tailed career path would end up on the fat end of that tail...
If you limit your impact considerations to AI risks (rather than staying cause-neutral), a simple heuristic would be to ask orgs how much more valuable their recent top hire is to them than the second-best candidate would have been (there are some 80k articles on this; let me know if you can't find them yourself). Additionally, AI risk nonprofits usually have a total employee cost of more than 80k/year per person, so you can assume that a great fit devoting their time is more valuable than donating that sum instead.
I'm sorry you've had a hard time applying! Your BOTEC misses the costs for candidates, which are also important to EA orgs (e.g. I appreciate that most work tests are paid).
Many jobs get 100+ initial applications. These usually have form questions that take less than 20 minutes to fill in - this is very much by design, as some of the most promising applicants are not full-time job hunting but working, often outside the impact space. On the other hand, the shortest work tests I've encountered so far were capped at 1h. Assuming 50% of 100 applicants are filtered out at the initial stage, a 1h work test would waste 50h of applicant time, versus ~17h for a 20-minute form. (This conservatively leaves out the 49 people who will be eliminated at later stages.)
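The back-of-the-envelope comparison above can be sketched as follows; the applicant counts, filter rate, and durations are the assumptions stated in this comment, not real hiring data:

```python
# Hypothetical BOTEC: hours of applicant time lost at the first filter stage,
# under the assumptions in this comment (100 applicants, 50% rejected early).

def wasted_hours(n_rejected: int, minutes_per_applicant: float) -> float:
    """Total hours spent by applicants who are filtered out at this stage."""
    return n_rejected * minutes_per_applicant / 60

n_applicants = 100
n_rejected_first_stage = n_applicants // 2  # assume 50% filtered out early

# Option A: a 1-hour work test for everyone at the initial stage
work_test = wasted_hours(n_rejected_first_stage, 60)

# Option B: a short application form taking ~20 minutes
short_form = wasted_hours(n_rejected_first_stage, 20)

print(f"1h work test: {work_test:.0f}h wasted; 20-min form: ~{short_form:.0f}h")
# → 1h work test: 50h wasted; 20-min form: ~17h
```

The same function could be extended to later stages to count the remaining 49 eventually-rejected applicants, which the comment conservatively leaves out.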
Adding: you might be right that the reason is your lack of social proximity. But it could also be that you express yourself suboptimally in the written questions, or that your CV doesn't present your skills well enough. One way closer proximity to the community could help is by finding someone with hiring experience in the space who can give you feedback. :)
Thanks for writing this!
Another way to think about it: it's much more likely that a boring thing is neglected than something that sounds sexy.
A potential meta-skill I've been somewhat successful at building is to approach these superficially bland topics like a photographer: [If it's boring, you're not close enough](https://www.theartnewspaper.com/2021/07/29/if-your-pictures-arent-good-enough-youre-not-close-enough-vintage-prints-by-war-photographer-robert-capa-to-headline-photo-london). For example, you can challenge yourself to give a 5-minute lightning talk that gets others excited after 1h of research on such a topic!
I was also really surprised by how easy it was to get experts on product certification (CE testing) and standardisation on the phone. They don't seem used to talking to an overly enthusiastic person in their twenties, and they have been insanely helpful.
Have you considered cutting down on EAG attendance overall by reducing the proportion of AI safety participants, and instead hosting (or supporting others who host) large AI-safety-only conferences?
These could in turn be subsidized by industry - yes, this can be a huge conflict of interest, but given the huge cost on the one hand and the revenue in AI on the other, it could be worth considering.
Thanks for your comment!
On what is lacking: it was written for reading groups, which are already a softly gatekept space. It doesn't provide guidance for other communication channels: what people could write blogs or tweets about, what is safe to discuss with LLMs, what about Google Docs, etc. Indeed, I was concerned about infinitely abstract galaxy-brain infohazard potential from this very post.
On dissent:
Open Philanthropy has biosecurity scholarships which have also funded career transitions in the past. Applications have previously opened around September.
Thanks for writing this up! Just a few rough thoughts:
Regarding the absorbency of the AI Safety Researcher path: I have heard people in the movement toss around the idea that 1/6th of the AI landscape (funding, people) being devoted to safety would be worth aspiring to. That would be a lot of roles to fill (most of which, to be fair, don't exist yet), though I haven't crunched the numbers. The main difference from working in policy is that the required profile/background is much narrower. On the other hand, many of those roles may not fit what you mean by "researcher", and realistically won't be filled by EAs.
I'm also wondering if you're arguing against promoting the "hits-based approach" to careers to a general audience; I find that hard to disentangle here. There's probably high absorbency for policy careers, but only a few of the people succeeding on that path will have an extraordinarily high impact. I'm trying to point at some sort of 2x2 matrix of absorbency and risk aversion: we might eventually fall short on people taking risks in low-absorbency career paths, because we need a lot of people to try and fail in order to get the impact we'd like.
I wonder why this particular question seems to be your crux, however. The most urgent question for you appears to be which major to choose, and dentistry doesn't seem like the strongest earning-to-give option for the vast majority of people (nor is it a decision you could delay by four years). I'd encourage you to brainstorm more options and choose paths that let you learn more about your skill set while staying flexible - employment is likely going to look quite different in four years.