Hello, my name's Bella Forristal. I work at 80,000 Hours as the director of growth.
I'm interested in AI safety, animal advocacy, and ethics / metaethics.
Previously, I worked in community building with the Global Challenges Project and EA Oxford, and interned at Charity Entrepreneurship.
Please feel free to email me to connect at bellaforristal@gmail.com, or leave anonymous feedback at https://www.admonymous.co/bellaforristal :)
I now get an error for the link at the top of the post; here's another link I found which currently works: https://www.sciencedirect.com/science/article/abs/pii/S0065280622000170
if someone doesn’t believe themselves to be a good enough fit, perhaps they’re best-placed to know that about themselves
I disagree — I think some people are just naturally under-confident, in a way that doesn't correlate particularly well with their actual skill. For example, see these seven stories written up by my lovely colleague Luisa :)
I’d like to know if any of the paid jobs advertised on 80,000 Hours receive very low or zero applications.
Yeah, I sadly don't have that data, since it sits with all the different orgs running those rounds. I've run 5 hiring rounds at 80,000 Hours myself, and the numbers of applicants were 110, 91, 137, 112, and 107 — so, all around 100 :)
Two very quick thoughts:
As an intuition pump: there are currently 715 jobs on our job board. How many of those meet your bar for 'EA-aligned'? I think there are roughly 5-10k people who consider themselves EAs. So even if a very high percentage of them are currently job searching, there's no way that all of these roles have hundreds of EA applicants (see the quick sketch below).
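Here's a very rough back-of-envelope version of that claim, assuming (purely for illustration, not from real data) that every role attracted ~100 EA applicants:

```python
# Back-of-envelope: if all 715 job-board roles really had ~100 EA applicants,
# how many applications would each self-identified EA have to submit?
num_roles = 715                # roles currently on the 80,000 Hours job board
applicants_per_role = 100      # hypothetical floor for "hundreds of EA applicants"
total_applications = num_roles * applicants_per_role  # 71,500

for ea_population in (5_000, 10_000):  # rough range of self-identified EAs
    per_person = total_applications / ea_population
    print(f"{ea_population:,} EAs -> ~{per_person:.0f} applications each")
# 5,000 EAs -> ~14 applications each
# 10,000 EAs -> ~7 applications each
```

Even at the top of that population range, every single EA would have to be applying to ~7 of these roles at once, which seems clearly implausible.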
if I ever find myself hiring, I might be tempted to say 'if you’re not confident in your fit, save yourself the trouble; our inbox will be full by lunch.'
The reasons I wouldn't do this are:

a) It's very hard to be well-calibrated on whether you're likely to be a good fit. I think some people (certain personality types; women; people from ethnic minorities) are much more likely to "count themselves out," even if they might be a great fit.

b) For jobs I've hired for in the past, I'm actually more excited about candidates with excellent transferable skills (high personal effectiveness, organisation, agency, social skills, prioritisation ability, taste, judgement, etc.) than about candidates with role-specific skills. But role-specific skills are much more concrete and easier to write about in a job ad, so I think language like this might deter some of my favourite candidates!
Thanks for this post!
Something it looks like you didn't consider, and that I'd be interested in your views on, is the set of arguments raised by this post.
Basically, the view I've come to in recent years is that we are almost totally in the dark about the overall sign of the impact of eating wild-caught fish.
I still stick with veganism for some of the reasons you raise in the 'moral progress' section, but given current technology and welfare science, I think it's very hard to feel confident in a conclusion either way.
Morality is Objective
When I tried very hard to do this in 2021-22, I was unable to come up with a grounding for moral realism that didn't feel assertive or arbitrary.
My vote isn't further towards anti-realism, because of:
I'd be pretty excited about 80k trying to do something useful here; unsure if it'd work, but I think we could be well-placed. Would you be up for talking with me about it? Seems like you have relevant context about these folks. Please email me if so at bella@80000hours.org :D
I strongly agree with this part:
[T]he specifics of factory farming feel particularly clarifying here. Even strong-identity vegans push the horrors of factory farming out of their heads most of the time for lack of ability to bear it. It strikes me as good epistemic practice for someone claiming that their project most helps the world to periodically stare these real-and-certain horrors in the face and explain why their project matters more – I suspect it cuts away a lot of the more speculative arguments and clarifies various fuzzy assumptions underlying AI safety work to have to weigh it up against something so visceral. It also forces you to be less ambiguous about how your AI project cashes out in reduced existential risk or something equivalently important.
I think it's quite hard to watch slaughterhouse footage and then feel happy doing something where you haven't, like, tried hard to make sure it's among the most morally important things you could be doing.
I'm not saying everyone should have to do this — vegan circles have litigated this debate a billion times — but if you feel like you might be in the position Matt describes, watch Earthlings or Dominion or Land of Hope and Glory.
I think it'd help if you spelled out in more detail how you think these views contrast. They seem obviously consistent to me (if you have totalist views in population ethics, you think less suffering would be good).