
I'm deciding between double majoring in CS and dentistry (8 years total) or majoring only in CS (4 years). Although dentistry isn't useful for reducing AI risks and doesn't particularly interest me, its main appeal is adding another earning-to-give route.

However, I'm not asking whether I should pursue dentistry. I'd like to isolate only one key sub-question here:

If the fat-tailed distribution of impact holds true (as in the picture below), an average direct worker's contribution may be negligible compared to that of the most talented (though I'm uncertain). So, if my ability at direct AI risk work turns out to be average compared to other EAs in the future, how would you compare an average direct AI risk worker's contribution to a dentist who donates an extra $80,000 per year?
Rather than asking which is better, I'd ask: how do you personally evaluate this trade-off?

As a 19 y/o who's spent 300 hours thinking about this alone, I'm hitting diminishing returns and have definitely missed some aspects by thinking in isolation. So any outside perspective would be genuinely valuable. I also think this topic is somewhat neglected in the community.

Please DON'T aim for a perfect or rigorous answer. Quantity-over-quality brainstorming is better: I'd prefer one-minute half-baked thoughts, or even scattered biases, over silence. Even replies as short as "I think the main crux is X" or "You may be underestimating Y" would be extremely helpful.

(Feel free to DM me if you prefer not to answer publicly)

3 Answers

It would be amazing if we always knew ahead of time which of the people pursuing a fat-tailed career path would end up on the fat end of that tail... 

If you limit your impact considerations to AI risks (rather than being cause-neutral), a simple heuristic would be to ask orgs how valuable their recent hires are to them, comparing the top candidate to the second best (there are some 80k articles on this; let me know if you can't find them yourself). Additionally, AI risk nonprofits usually have a total employee cost per person higher than $80k/year, so you can assume that a great fit devoting their time is more valuable than receiving this sum in donations.

Thanks a lot for your answer.

1. Yes, of course we don't know completely. However, 80,000 Hours has written in their research that even the ex-ante expected distribution of people's impact is probably still fat-tailed. Therefore, it's possible we can often tell who's likely to end up in the fat tail and who probably isn't.

2. I've heard of this heuristic. However, in my case I have to predict in advance (I can't work at a nonprofit now since I'm only 19). Also, you could plausibly reduce AI risks from outside the EA world; in that case, your marginal impact isn't the gap between you and the second-best applicant for the job.

Nadia Montazeri
1. We cannot infer from knowing it's a fat-tailed distribution who's going to be in the impactful fat tail and who's going to be average (or do I misunderstand you here?). We need lots of people making informed bets, and we likely need an ecosystem. We can, however, give recommendations based on heuristics - e.g., if you have an easy time taking advanced ML classes, you're more likely to have an impact in a technical field than someone who doesn't; those are cheap tests. I recommend applying to speak with the 80,000 Hours advising team if you haven't!

2. I think it's reasonable to use past numbers as a heuristic for future hires. I agree many impactful opportunities will be outside of EA orgs, but my hunch is that most people who'll be very impactful in those roles (e.g. as a civil servant) would also have been quite successful inside an EA org (depending on the different levels of "absorbency" between those at a given time - see Joey's post), and depending on personal fit. Another consideration is how abundant funding in that cause area is: does everything reasonable get a grant anyway, or are grants competitive? Again, this matters if you want to do cross-cause comparison.

I wonder why this particular question seems to be your crux, however. The most urgent question for you is which major to choose, and for that, dentistry doesn't seem like the strongest earning-to-give option for the vast majority of people (or is that even a decision you could delay by four years?). I'd encourage you to brainstorm more options and choose paths that let you learn more about your skill set while staying flexible - employment is likely going to look quite different in four years.

These are pretty half-baked but:

  1. Is there any way to test out whether you are more or less likely to be on the fat end of the tail? Doing smaller projects, volunteering (not necessarily working) with orgs in your intended field, etc?
  2. If your goal is to leave room open for earning to give, are there other routes that offer either more flexibility or higher earning potential? Off the top of my head (and with no evidence whatsoever), I would think degrees in business, computer science, etc. also leave the door open for earning to give while providing skills that might be useful in direct work, giving you more flexibility later, once you have the information you need to make that decision.

IMO, you need to factor in the timeline on which you think AI safety is critical. While dentists may earn more, you would forgo four extra years of income before you even start earning. You need to determine the rough break-even point at which you would cumulatively have earned more as a dentist, and therefore been able to give more.

If the critical phase of AI safety research precedes that date or is near it, then you may ironically contribute less in terms of marginal value of your giving.
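To make the break-even logic above concrete, here's a minimal sketch. All salary and donation figures are hypothetical assumptions chosen for illustration (a CS graduate donating $30k/year after 4 years of study vs. a dentist donating $80k/year after 8), not data from the post:

```python
# Hypothetical break-even sketch: in which year do cumulative dentist
# donations overtake cumulative CS donations? All figures are assumptions.

def cumulative_donations(years_of_study, annual_donation, horizon):
    """Total donated by `horizon` years from now, donating nothing while studying."""
    earning_years = max(0, horizon - years_of_study)
    return earning_years * annual_donation

def breakeven_year(horizon_max=40):
    # Assumed: CS grad donates $30k/yr after 4 years of study;
    # dentist donates $80k/yr after 8 years of study.
    for year in range(1, horizon_max + 1):
        cs = cumulative_donations(4, 30_000, year)
        dentist = cumulative_donations(8, 80_000, year)
        if dentist > cs:
            return year
    return None

print(breakeven_year())  # with these assumed numbers, dentistry pulls ahead in year 11
```

Under these made-up numbers, donations via dentistry only exceed the CS path more than a decade out, which is why the timing of the critical period matters so much.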

Here's how I see things:

1. If AI advances so quickly that the earning power of CS collapses before you graduate, it is likely that the same will happen to dentists before you graduate from that program. But maybe the latter isn't true, in which case it could be reasonable to pursue dentistry.
2. If AI advances at a moderate pace, the breakeven logic I mentioned above probably means that you will have more impact by getting a moderately well-paying job sooner so that you can give during the critical period of AI development, since your giving would be largely deferred until after AGI if you went into dentistry.
3. If AI advances at a slow pace, then perhaps going into dentistry will ultimately allow you to contribute more.

One possibility you didn't mention, probably because it is unappealing to you—could you just major in dentistry? Then you would get the earning power and reduce the breakeven problem.

You are young. If I were you, out of these two options, I would just major in what I was interested in and test out my talents. I would major in CS. If I performed exceptionally at AI safety work, I would try my hand at a direct career in it. If I didn't, I would focus on getting some other software job with high earning potential.

You are certainly correct that earning-to-give is the rational move when you consider that the constraints are often on resources to fund our goals, rather than on candidates willing to work on them professionally.

Comments

Some additional thoughts: We often talk about personal fit, but would my comparative advantage/personal fit be earning to give as a dentist in the future? If I end up only average at direct work, while dentistry would let me donate $80,000 per year, then I could fund one independent researcher who failed to get an EA grant. If they're more talented than me, donating may have more impact.

Also, if you think this question is not meaningful, feel free to tell me why.
