Hi, I'm an 18-year-old starting college in a week. I'm studying computer engineering and mathematics. Since I have a technical bent and AGI has a much higher probability of ending humanity this century (1/10, I think) than the causes I'd rather work on (e.g., biorisk at 1/10,000), would the utility-positive thing to do be to force myself into an ML-alignment-focused PhD and become a researcher?
I'm at a mid-tier university. I think I could force myself to do AI alignment since I have some interest in it, but not as much as the average EA, so I wouldn't find it as engaging. I also have an interest in starting a for-profit company, which most likely couldn't happen in AGI alignment. I would rather work on a hardware/software combination for virus detection (biorisk), climate change, products for the developing world, other current problems, or problems that emerge in the future.
Is it certain enough that AI alignment is so much more important that I should forgo what I think I'll be good at and enjoy in order to pursue it?
Edit: my original wording confused some people into thinking I saw a false dichotomy between "pursuing my passion" and doing AI alignment. I've removed that comment.
Hi Isaac, I agree with many other replies here. I would just add this:
I think AI alignment research could benefit from a broader range of expertise than the 'AI/CS experts + moral philosophers' model that seems typical of EA approaches.
Lots of non-AI topics in computer science seem relevant to specific AI risks, such as crypto/blockchain, autonomous agents/robotics, cybersecurity, military/defense applications, computational biology, big data/privacy, social media algorithms, etc. I think getting some training in those -- especially the topics best aligned with your for-profit business interests -- would position you to make more distinctive and valuable contributions to AI safety discussions. In other words, focus on the CS topics relevant to AI safety that are neglected, and not just important and tractable.
Even further afield, I think a case could be made that studying cognitive science, evolutionary psychology, animal behavior, evolutionary game theory, behavioral economics, political science, etc. could contribute very helpful insights to AI safety -- but these fields aren't very well integrated into mainstream AI safety discussions yet.