TL;DR:
I’m a 19-year-old freshman in Taiwan double-majoring in CS and Medicine. I’m considering dropping the medicine major to focus solely on CS, which would save about four years. I am already deeply committed to AI safety and read widely, but I lack senior people who can sanity-check my reasoning. I am looking for someone willing to talk these questions through with me occasionally online, even asynchronously, with no time pressure at all.
Hello everyone,
My name is Jack; I’m a 19-year-old freshman double majoring in Computer Science and Medicine at a Taiwanese university. (The medical program takes eight years: in Taiwan, medicine is an undergraduate-entry degree, and students receive a medical license only after completing the eight-year curriculum.)
I identify most with negative utilitarianism, so I hope to devote my career to s-risk reduction, especially AI-related s-risks.
I am currently deciding whether to continue the double major in medicine or drop it and major only in CS (four years instead of eight). I am not worried about burnout, but the time cost is significant. At present I lean around 70% toward switching to CS only, because medical training seems much less relevant to AI-related s-risks, but I am still genuinely unsure.
Below I summarize my main reasoning (I also have longer decision documents with more detailed frameworks, if anyone would like to read them).
1. How likely and important are AI-related s-risk scenarios?
Unlike many EAs, I am not fully confident that AI s-risks are more important than global health (reducing human suffering from disease). As a freshman, I still lack deep expertise and intuition in AI. Despite reading nearly all the relevant material from 80,000 Hours, BlueDot Impact’s AI alignment course, the EA Forum, LessWrong, CLR, CRS, and Tomasik/Baumann/Vinding, and spending hundreds of hours studying, thinking, and discussing with different LLMs, I am only around 80% confident (not 90–100%) that AI s-risks outweigh global health work.
My main uncertainty is whether the probability of major AI s-risk scenarios (e.g., AI consciousness or malevolent powerful AI systems) is clearly non-negligible, or whether I should remain essentially agnostic about it. For example, can I justifiably say the probability is at least 0.01%, or is it simply unknowable?
2. How feasible is it to work on s-risk while employed outside EA-aligned organizations?
This question feels even more crucial.
Even if I'm confident in the moral importance of AI s-risk reduction or wild animal suffering (WAS) work relative to human health, I still need to consider:
- job accessibility and income stability
- the likelihood of being able to do genuinely altruistic work while employed outside EA
Although it is commonly said in EA that “talent is the bottleneck, not funding,” in reality it is quite hard to get a full-time position or an independent research grant in EA unless one is truly exceptional. I may therefore have to work at non-EA organizations for most of my career.
My concern is that in such settings, I may not be able to do meaningfully altruistic work:
- For WAS, almost no non-EA jobs involve directly reducing wild-animal suffering.
- Academia might allow it, but becoming a professor is extremely competitive.
- For AI safety, many corporate AI positions are still not closely connected to reducing s-risk (e.g., preventing digital suffering).
My worry is a scenario where I major in CS, graduate, struggle to find EA jobs or grants, and end up working in non-EA positions until retirement without ever contributing meaningfully to the problems I care about.
An alternative strategy would be: become a doctor or dentist for around 10 years, save most of my income (perhaps $150,000 a year; medical school tuition in Taiwan is also very cheap, so I would graduate without debt), and build financial security. After that, I could use those savings to self-fund as an independent EA researcher, sustaining myself through years when altruistic job opportunities or EA research funding prove hard to secure.
But this approach costs four or more additional years of training and delays meaningful contribution until after 2033 rather than 2029, which is a large opportunity cost. The ideal would be to work in the non-EA world while still contributing altruistically.
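As a rough illustration of the runway the medical path could buy, here is a minimal back-of-envelope sketch; every figure in it is a placeholder assumption for illustration, not a number from my actual situation:

```python
# Back-of-envelope for the "practice medicine, then self-fund" path.
# All figures below are illustrative assumptions.

annual_savings = 100_000  # assumed savings out of ~$150k/year income (USD)
earning_years = 10        # assumed years practicing before switching to research
living_cost = 30_000      # assumed annual cost of living as an independent researcher

total_saved = annual_savings * earning_years  # $1,000,000
runway_years = total_saved / living_cost      # ~33 years of runway

print(f"Saved: ${total_saved:,}; runway: {runway_years:.0f} years")
```

Even under much more conservative assumptions (say, half the savings rate and higher living costs), a decade of medical income could plausibly fund ten or more years of self-funded research, which is what makes this path tempting despite the delay.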
Why I am seeking discussion partners
Although I'm having a really tough time deciding whether to keep the double major, I'll keep working on the decision with patience and persistence. It's a crucial decision for my life, because double majoring in medicine carries a four-year opportunity cost (double majoring in most other subjects would not add four years, but in medicine it definitely would).
There are basically no active EA groups in Taiwan right now, so I lack people who can provide experienced feedback. I am hoping to find someone willing to talk with me occasionally (not necessarily an expert in AI s-risk specifically; opinions about general career decisions would also be valuable).
Although I lack deep expertise, I can contribute thoughtful reasoning and outside-view perspectives. I have had in-depth conversations with a few experts before, and most of them found the exchange beneficial for both sides, but many of those people are currently too busy to continue.
Final request
If anyone is willing to talk (by text, voice, or any platform you prefer), I would be extremely grateful. Any level of commitment is welcome. Even a single 10-minute message every few weeks would already help a lot. There is absolutely no obligation; stopping the discussion at any time is completely fine. I am perfectly comfortable with slow, asynchronous conversation whenever you happen to have time.
If you're open to talking, please either comment below, message me on EA Forum, or email me at: carlosgpt500@gmail.com
Thank you very much for reading.