guneyulasturker 🔸

Co-director @ EA Turkey

Bio


23 years old, studying Philosophy with a double major in Business Administration, graduating in 2026.

Co-leading EA Turkey; Uni Group Mentor (OSP).

Excited and terrified about digital sentience; I hope to do field-building work on it.

 

My Career Aptitude Interests:

  • Organization building, running, and boosting aptitudes
  • Communicator aptitudes
  • Entrepreneur aptitude
  • Community building aptitudes

 

Especially if you're in Turkey and just starting to get interested in EA, please write to me!!
 

Comments
16

Great post! Thanks for writing it.

I believe many interventions in this area could be cost-effective. However, given the vastness of the space, each project’s cost-effectiveness and expected value should be carefully assessed and compared against the most effective charities in EA.

I wanted to underline this because it is also quite a "sweet" idea (spreading EA ideas to the world), and because of that sweetness, sometimes even the EA community forgets the fat-tailed nature of impact. Our bar for effectiveness should stay the same, and if the math doesn't work out, we should stop doing this type of intervention.

Thanks for flagging this. Someone else DMed me about this, but it worked on my device and a friend's. What device and browser are you using?

Thanks for the comment! Which parts sounded the most like an LLM? I'm surprised because I only used AI (Grammarly) to catch grammar/spelling mistakes. 

I'm honestly sad that the first/primary motivation you'd think of for writing a post like this would be to "stand out, act cutesy to hirers, and get a job". I wrote this as the first post in a series on how the current, broken view of an impactful EA career can be revised. I find the first part of your comment rude, and it assumes bad intent.


Apart from this, I agree with you on the impact point. I'm very skeptical of your counterfactual impact when a role has strong 2nd or 3rd candidates. I (generally) believe that optimising for different higher-absorbency, less EA-crowded paths is more impactful than landing a position on the 80K job board.


But I disagree about the cause of this. I don't think this is about "looking professional" at all. The problem isn't with the individual orgs doing the hiring: an org opens a role, does interviews, and hires whoever they think fits the role best. The cause is the disproportion between the number of jobs at classic EA orgs and the number of smart, altruistic people who want to work at one.


We need to change the belief that the best way to create impact with your career is "working at a classic EA org", so that people are more motivated to earn to give, skill-build, found new orgs, try different projects, test new ideas, etc. All of these career options should have higher status in the community.

Thanks for the amazing post. I'm adding it to the syllabus of EA Turkey's career mentorship program for inspiration 🙌

Are these disagreements representative of the general disagreements between people with long and short AI timelines?

I think you didn't misunderstand me. 

If you are not assuming long-term effects, the intervals below are only for short-term-ish effects. (This is what I got from your comment; please correct me if it's wrong.)

H=[100,200]×[−300,50]×[−50,50]

Isn't one of the main problems raised by deep moral uncertainty whether an intervention is itself beneficial for the target group or not?

An intervention affects the far future, and far-future effects are generally more important than short-term effects. So even if our short-term analysis shows the intervention was beneficial, that doesn't mean it is net positive in aggregate. Thus we don't know the impact of the intervention.
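To make concrete what I mean, here is a toy sketch with made-up numbers (the wide far-future interval is purely my own assumption, not something taken from your model):

```python
# Toy illustration: a short-term effect we can estimate fairly tightly,
# plus a far-future effect that is much larger in magnitude and unsigned.
short_term = (100, 200)          # lower and upper bound (made-up numbers)
far_future = (-10_000, 10_000)   # assumption: huge and could go either way

# Aggregate the intervals by adding lower bounds and upper bounds.
total = (short_term[0] + far_future[0], short_term[1] + far_future[1])
print(total)  # (-9900, 10200): the aggregate could be strongly negative or
              # strongly positive, even though the short-term view looked good.
```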

I believe this was one of the main problems of moral uncertainty/cluelessness. What I don't get is how the model can solve moral uncertainty if it does not take far-future effects into account.

I guess I didn't specify how far these sets of targets go into the future, so you could assume I'm ignoring far future moral patients, or we can extend the example to include them, with interventions positive in expectation for some sets of far future targets.

Is the bolded part even possible? Are there interventions that are highly likely to be positive for the target group in the very far future?

 

P.S.: Thank you for responding to my comment this fast even though the post is 5 years old :)

In the calculation below, it is assumed that we know at least one intervention for each cause area that has no negatives, so that we can use our funds to compensate for the negatives of other interventions with this net positive.

 

However, isn't this a hard assumption to make? I personally can't think of an intervention that is definitely positive in the long term. It feels like there should be question marks instead of numbers, due to the complexity of everything. Am I missing something?
