
TL;DR

  • Experience in math, data science, and small non-profit management
  • I care about people and suffering
  • Prefer intervention through removing bad actors
  • Not sure how to do that. Lobbying?
  • Please send me potential jobs and internships

Introduction

Hi everyone, thank you for taking the time to read & help! 

I’m very new to EA and to looking for third-sector jobs, so please feel free to send information I didn’t explicitly request & give constructive criticism :)

Short CV

Professional work

  • Bachelor of Mathematics from TAU
  • Two years total of data science (internships)
  • One year of firmware optimization (internship)
  • Two years of cybersecurity systems integration (pre-university)

Volunteer work

  • Five years on board of directors of the Israeli Go Association (four years as head)
  • One year of volunteer work for the Association for LGBTQ equality in Israel

Other skills

  • Fluent Hebrew and English, competent Dutch
  • Research, writing, and debating

Example areas I care about

  1. Direct systemic violence (war, genocide, slavery)
  2. Global warming
  3. Global health

Why direct systemic violence is first on the list

I simply put a huge weight on the mental suffering that accompanies such violence.

What I mean by “bad actors”

Any person or organization that creates harm for their own gain. These could be political leaders, religious leaders, or companies.

Some unordered notable examples

  • Ethiopian leaders in the Tigray war
  • Chinese leaders in the context of Xinjiang
  • Owners of some private prisons in the US
  • Nestlé in the context of child labor
  • ExxonMobil in the context of climate change denial and policy manipulation

Why focus on removing bad actors?

Personal POV

The frustration I feel with bad actors is, to me, a much stronger motivator than the pain and suffering alone.

I’ll refrain from considering why here ;)

EA POV

Bad actors divert attention from important problems through misinformation and lobbying. This pulls a great deal of financial support from non-EA members away from EA causes.

A clear example here is ExxonMobil’s influence on mainstream climate change conversations and policy.

Considering the amount of resources non-EA members bring, it seems worthwhile to spend effort on where those resources go.

If you have any research about the negative impact of such actors, I would love to read it ;)

How to remove bad actors?

I don’t know. I would love to see more research on any of the following points.

My current understanding and thoughts:

  • The case of states
    • Sanctions seem mostly ineffective and are often harmful
    • Peaceful protest seems mostly ineffective
    • Violent protest seems to be a coin flip between effective and harmful
    • Lobbying seems effective but requires a lot of pre-existing power
  • The case of companies
    • Bad press seems to be a coin flip in the current political climate
    • Current laws don’t seem to be doing much

What would I prefer to do?

Work with people for at least a large part of the day. That means less programming and mathematical research, and more management or anything else that involves a lot of conversation (right now my only idea is lobbying; other ideas are very welcome).

I do realize this is a tall order given my CV, so anything that would combine my strong suits with what I want to do, or would help me transition, is more than welcome.

Geographic limitations:

  • I will be moving from Israel to South Korea in two months and will probably stay there for 1–2 years. I’m guessing that means I’ll have to work remotely; I do realize how this clashes with what I want.
  • I don’t have authorization to work in the US, which seems to be a requirement for a lot of (even remote) jobs.

What did I try?

I’ve tried going through organization lists to see which have openings, and I’ve tried the 80,000 Hours job board.

I’ve found that I simply don’t qualify for most jobs, and that I have a hard time judging how relevant different jobs are to me.

How can you help?

  • Send jobs/internships my way.
  • Send research relevant to the points above.
  • Give me feedback on anything! My reasoning, the style of this post, etc.

 

Thank you so much for taking the time to read! 





 


Comments (3)



I don’t have authorization to work in the US which seems to be a requirement for a lot of (even remote) jobs.

Rethink Priorities (my company) hires remotely from nearly anywhere and does not require US work authorization, so it could be a good option for you. We will soon have many open roles that you may be interested in.

One caveat is that we do not work much at all on "direct systemic violence" or "bad actors". We do work on global warming and global health, though.

While we're working on opening the roles, we have an expression of interest form for fellowships, as well as a general expression of interest form for a job at Rethink Priorities.

If you sign up for our newsletter or follow us on social media (Twitter, Facebook, LinkedIn) you can get notified about all our job openings as they happen.

Thank you! I've read up on Rethink and it definitely seems interesting, I'll do a formal expression of interest soon ^^

[anonymous]

Maybe reach out to newer institutions, possibly via LinkedIn, even if they don't have job listings; they might find that making those listings would take too long, be too busy, or not have listed them for other reasons. This also likely heavily reduces your competition.
