
Nadia Montazeri

293 karma · Working (0-5 years) · Schäfersee, 13407 Berlin, Germany

Comments (13)

  1. We cannot infer from knowing it's a fat-tailed distribution who's going to be in the impactful fat tail and who's going to be average (or do I misunderstand you here?). We need lots of people making informed bets, and we likely need an ecosystem. We can, however, give recommendations based on heuristics - e.g., if you have an easy time taking advanced ML classes, you're more likely to have an impact in a technical field than someone who doesn't - those are cheap tests. I recommend applying to speak with 80,000 Hours' advising team if you haven't!
  2. I think it's reasonable to use past numbers as a heuristic for future hires. I agree many impactful opportunities will be outside of EA orgs, but my hunch is that most people who'll be very impactful in those roles (e.g. as a civil servant) would also have been quite successful inside an EA org, depending on personal fit and on the relative levels of "absorbency" between those paths at a given time (see Joey's post). Another consideration is how abundant funding in that cause is - does everything reasonable get a grant anyway, or are grants competitive? Again, this matters if you want to do cross-cause comparison.

I wonder why this particular question seems to be your crux, however - the most urgent question for you appears to be which major to choose, and for that, dentistry doesn't seem like the strongest earning-to-give option for the vast majority of people (or is this even a decision you could delay by 4 years?). I'd encourage you to brainstorm more options and choose paths that let you learn more about your skill set while staying flexible - employment is likely to look quite different in 4 years.

It would be amazing if we always knew ahead of time which of the people pursuing a fat-tailed career path would end up on the fat end of that tail... 

If you limit your impact considerations to AI risks (rather than staying cause-neutral), a simple heuristic would be to ask orgs how valuable their most recent hire is to them - the top candidate vs the second-best (there are some 80k articles on this; let me know if you can't find them yourself). Additionally, AI risk nonprofits usually have a total employee cost of more than $80k/year per person, so you can assume that a great fit devoting their time is more valuable than receiving that sum in donations.

I imagine the research-adjacent roles are just as competitive, if not more so (lots of people want to contribute to this field but exclude research because they don't come from a technical background). Got any numbers on how competitive those roles are? 

I'm sorry you've had a hard time applying! Your BOTEC misses the costs to candidates, which also matter for EA orgs (e.g. I appreciate that most work tests are paid).

Many jobs get 100+ initial applications. They usually have form questions that take less than 20 minutes to fill in - this is very much by design, as some of the most promising applicants are indeed not full-time job hunting but working, often outside the impact space. On the other hand, the shortest work tests I've had so far were capped at 1h. Assuming 50% of 100 applicants are filtered out at the initial stage, a 1h work test as the first stage would waste 50h of rejected applicants' time, vs ~17h for the 20-minute forms (conservatively not counting the 49 people who will be eliminated later).
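To make that back-of-the-envelope arithmetic explicit, here's a minimal sketch under the assumptions above (100 applicants, a 20-minute form, a 1-hour work test, 50% rejected at the first stage):

```python
# Rough BOTEC: applicant time wasted on candidates rejected at the first stage.
applicants = 100
rejected_at_stage_one = applicants * 0.5  # 50 people cut at stage one

form_minutes = 20      # short application form (assumed, per the comment above)
work_test_hours = 1.0  # shortest work test I've seen

wasted_with_form = rejected_at_stage_one * form_minutes / 60  # ~16.7 h
wasted_with_test = rejected_at_stage_one * work_test_hours    # 50 h

print(f"Form-first: ~{wasted_with_form:.0f}h wasted; test-first: {wasted_with_test:.0f}h")
```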

Adding: you might be right that the reason is your lack of social proximity. But it could also be that you express yourself suboptimally in the written questions, or your CV doesn't present your skills well enough. One way more proximity to the community could help is by finding someone with hiring experience in the space who can give you feedback. :) 

Thanks for writing this!

Have you considered cutting down on EAG attendees overall by reducing the proportion of AI safety participants, and instead hosting (or supporting others who host) large AI-safety-only conferences?

These in turn could be subsidized by industry - yes, this can be a huge conflict of interest, but given the huge cost on the one hand and the revenue in AI on the other, it could be worth considering.

Do you think the PPE/PAPR example is part of that very small subset? It just happens to be the area I started working on by deference, and I might've gotten unlucky.

Or is the crux here response vs prevention?

Thanks for your comment!

On what is lacking: it was written for reading groups, which is already a softly gatekept space. It doesn't provide guidance on other communication channels: what people could write blogs or tweets about, what is safe to talk to LLMs about, what about Google Docs, etc. Indeed, I was concerned about infinitely abstract galaxy-brain infohazard potential from this very post.

On dissent:

  1. I wanted to double down on the message in the document itself that it is preliminary and not the be-all and end-all.
  2. I have reached out to one person I have in mind within EA biosecurity who pushed back on the infohazard guidance document, to give them the option to share their disagreement, potentially anonymously.

Open Philanthropy has biosecurity scholarships which have also funded career transitions. In previous years, applications opened around September.

Thanks for writing this up! Just a few rough thoughts:

Regarding the absorbency of the AI Safety Researcher path: I have heard people in the movement toss around the idea that devoting 1/6 of the AI landscape (funding, people) to safety would be worth aspiring to. That would be a lot of roles to fill (most of which, to be fair, don't exist yet), though I haven't crunched the numbers. The main difference from working in policy would be that the required profile/background is a lot narrower. On the other hand, many of those roles may not fit what you mean by "researcher", and realistically won't be filled by EAs.

I'm also wondering whether you're arguing against promoting the "hits-based approach" to careers to a general audience - I find it hard to disentangle that here. There's probably high absorbency for policy careers, but only a few of the people succeeding on that path will have an extraordinarily high impact. I'm trying to point at some sort of 2x2 matrix of absorbency and risk aversion, where we might eventually fall short of people willing to take risks on low-absorbency career paths, because we need a lot of people to try and fail in order to get the impact we'd like.
