23 years old, studying a Philosophy and Business Administration double major, graduating in 2026.
Co-leading EA Turkey; Uni Group Mentor (OSP).
Excited and terrified about digital sentience; I hope to do field-building work for it.
My Career Aptitude Interests:
Please write to me, especially if you're in Turkey and just starting to get interested in EA!!
Great post! Thanks for writing it.
I believe many interventions in this area could be cost-effective. However, given the vastness of the space, each project's cost-effectiveness and expected value should be carefully assessed and compared against the most effective charities in EA.
I wanted to underline this because it is also quite a "sweet" idea, spreading EA ideas to the world, and because of that sweetness even the EA community sometimes forgets the fat-tailed nature of impact. The bar for effectiveness should be kept the same, and if the math doesn't work out we should stop doing this type of intervention.
I'm honestly sad that the first/primary motivation you'd think of for writing a post like this would be to "stand out, act cutesy to hirers, and get a job". I wrote this as the first post in a series on how the current broken view of an impactful EA career could be revised. I find the first part of your comment rude, and it assumes bad intent.
Apart from this, I agree with you on the impact point. I'm very skeptical of the counterfactual impact of beating the 2nd or 3rd candidates for a role. I (generally) believe that optimising for higher-absorbency, less EA-crowded paths is more impactful than landing a position on the 80K job board.
But I disagree about the cause of this. I don't think this is about "looking professional" at all. The problem isn't the individual orgs doing the hiring: an org opens a role, runs interviews, and hires whoever it thinks fits the role best. The cause is the disproportion between the number of jobs at classic EA orgs and the number of smart, altruistic people who want to work at one.
We need to change the belief that the best way to create impact with your career is by working at classic EA orgs, so that people are more motivated to earn to give, skill-build, found new orgs, try different projects, test new ideas, etc. All of these career options should have higher status in the community.
Thanks for the amazing post. I'm adding it to the syllabus of EA Turkey's career mentorship program for inspiration.
I don't think you misunderstood me.
If you are not assuming long-term effects, the intervals below cover only short-term-ish effects. (This is what I got from your comment; please correct me if it's wrong.)
H = [100, 200] × [−300, 50] × [−50, 50]
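Reading H as the per-group effect intervals for three target groups, a quick sketch (assuming, for illustration, that total impact is just the sum of per-group effects) shows why the aggregate sign is indeterminate:

```python
# Per-group effect intervals, as in H = [100,200] x [-300,50] x [-50,50]
intervals = [(100, 200), (-300, 50), (-50, 50)]

# If total impact is the sum of per-group effects, the achievable total
# lies between the sum of the lower bounds and the sum of the upper bounds.
total_low = sum(lo for lo, _ in intervals)
total_high = sum(hi for _, hi in intervals)

print(total_low, total_high)  # -250 300
```

Since the resulting interval [−250, 300] straddles zero, the short-term analysis alone cannot tell us whether the intervention is net positive or net negative in aggregate.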
Isn't one of the main problems raised by deep moral uncertainty precisely whether an intervention is beneficial for the target group at all?
An intervention affects the far future, and far-future effects are generally more important than short-term effects. So even if our short-term analysis shows the intervention was beneficial, that doesn't mean it is net positive in aggregate. Thus we don't know the impact of the intervention.
I believe this was one of the main problems of moral uncertainty/cluelessness. What I don't get is how the model can solve moral uncertainty if it does not take far long-term effects into account.
I guess I didn't specify how far these sets of targets go into the future, so you could assume I'm ignoring far future moral patients, or we can extend the example to include them, with interventions positive in expectation for some sets of far future targets.
Is the bolded part even possible? Are there interventions that are highly likely to be positive for the target group in the very far future?
P.S.: Thank you for responding to my comment this fast even though the post is 5 years old :)
In the calculation below, it is assumed that we know at least one intervention for each cause area that has no negatives, so that we can use our funds to compensate for the negatives of other interventions with this net positive.
However, isn't that hard to assume? I personally can't think of an intervention that is definitely positive in the long term. It feels like there should be question marks instead of numbers, due to the complexity of everything. Am I missing something?
Thanks a lot James! Glad you found it useful.
Budget: We didn't have a set budget but were experimenting. We stopped after spending around $250 on Meta ads without getting any applicants.
Process: Our call-to-action directed people straight to the application form. In hindsight, your approach with instant forms and automatic follow-up emails sounds really smart. I hadn't considered that before, and it makes sense because the Instagram scrolling mindset probably isn't the best fit for filling out a full application right away.