Bella

Director of Growth @ 80,000 Hours
2261 karma · Joined · Working (0-5 years) · Bethnal Green, London, UK

Bio

Hello, my name's Bella Forristal. I work at 80,000 Hours as the Director of Growth.

I'm interested in AI safety, animal advocacy, and ethics / metaethics. 

Previously, I worked in community building with the Global Challenges Project and EA Oxford, and interned at Charity Entrepreneurship.

Please feel free to email me to connect at bellaforristal@gmail.com, or leave anonymous feedback at https://www.admonymous.co/bellaforristal :)

Comments (159)

These guys absolutely worked their butts off to make this video, and I think the results show it :') Thanks Chana, Aric, Phoebe, Sam, and everyone for making something I'm so so so excited for the world to see!!

Thanks for this post!

One thing it looks like you didn't consider, and that I'd be interested in your views on, is the set of arguments raised by this post.

Basically, the view I've come to in recent years is that we are almost totally in the dark about the overall sign of eating wild-caught fish.

I still stick with veganism for some of the reasons you raise in the 'moral progress' section, but I think given current tech / welfare science, it's very hard to feel confident in a conclusion either way.

50% disagree: Morality is Objective

I was unable to come up with a non-assertive, non-arbitrary-feeling grounding for moral realism when I tried very hard to do this in 2021-22. 

 

My vote isn't further towards anti-realism, for two reasons:

  • Some uncertainty about what people might think I mean by 'objective' (I do think I have specific, unchanging moral reasons to do particular things)
  • I was a committed realist before 2021, so maybe I'll be convinced the other way again! But I guess not.

IIUC, polls this far out from an election aren't generally trustworthy, so I don't currently think it's particularly likely they'll win.

I'd be pretty excited about 80k trying to do something useful here; unsure if it'd work, but I think we could be well-placed. Would you be up for talking with me about it? Seems like you have relevant context about these folks. Please email me if so at bella@80000hours.org :D

I strongly agree with this part:

[T]he specifics of factory farming feel particularly clarifying here. Even strong-identity vegans push the horrors of factory farming out of their heads most of the time for lack of ability to bear it. It strikes me as good epistemic practice for someone claiming that their project most helps the world to periodically stare these real-and-certain horrors in the face and explain why their project matters more – I suspect it cuts away a lot of the more speculative arguments and clarifies various fuzzy assumptions underlying AI safety work to have to weigh it up against something so visceral. It also forces you to be less ambiguous about how your AI project cashes out in reduced existential risk or something equivalently important.

I think it's quite hard to watch slaughterhouse footage and then feel happy doing something where you haven't, like, tried hard to make sure it's among the most morally important things you could be doing.

I'm not saying everyone should have to do this — vegan circles have litigated this debate a billion times — but if you feel like you might be in the position Matt describes, watch Earthlings or Dominion or Land of Hope and Glory.

I think this is just Matt's style (I like it, but it might not be everyone's taste!). I think the SummaryBot comment does a pretty great job here, so maybe read that if you'd like to get the TL;DR of the post.

More anonymous questions!

How much weight is given to location? It seems that UK/US-based organisations within EA often claim to be open to remote candidates around the world but seldom actually make offers to these candidates (at least from what I’ve seen/heard over the years)

I think it would count quite a bit against a candidate if they were never able to visit the office. But if someone lived overseas and could, e.g., spend a couple of weeks here every 3-6 months, it's not a big downside.

I'm not sure which organisations specifically you're talking about, but speaking about 80k here:

  • Until 2023, our policy was that "primary staff" hires must be in-person. Then we changed it so that only managers/team leads needed to be in person, and later we dropped that requirement too — so we're relatively new to being fully open to remote staff.
  • That said, a lot of our staff are remote.
  • Scanning through our org chart, 13 primary staff are "fully remote", and a further 3 are "mostly remote" (visit the office 1-2 days a week). That's out of 32 total primary staff.
  • So, my overall impression is 80k is "genuinely open" to remote staff :)

If a remote candidate did make it to the trial round, would it be a remote or in-person trial?

In-person. We can pay for (and book, if you like) flights and accommodation. Unfortunately, we can't pay for your time unless you have the right to work in the UK (but if you do, we'll pay for your time as well!)

How much quantitative work is involved in this role – e.g. calculating cost-effectiveness, etc?

A fair amount!

I'd say the person in this role needs to have the quantitative skills to answer moderately complex data-related questions, but they do not need to have a quantitative degree (though that could be helpful). I think "was reasonably good at high school maths," plus the willingness to learn a few key concepts (such as cost-effectiveness, and diminishing marginal returns) would be sufficient :)

The application form contains a quantitative question for this reason. I think if you get this question right without too much trouble, you'll be fine :)

I agree with the substance but not the valence of this post.

I think it's true that EAs have made many mistakes, including me, some of which I've discussed with you :)

But I think that this post is an example of "counting down" when we should also remember the frame of "counting up."

That is: EAs are doing badly in the areas you mentioned because humans are very bad at those areas. I don't know of any group that has actually-correct incentives, reliably drives after truth, and gets big, complicated, messy questions like cross-cause prioritisation right. Like, holy heck, that is a tall order!!

So you're right in substance, but I think your post has a valence of "EAs should feel embarrassed by their failures on this front", which I strongly disagree with. I think EAs should feel damn proud that they're trying.

Strongly agree with this well-articulated point.

Sometimes friends ask me why I work so hard, and I don't know how to get them to understand that it's because I believe that it matters — and the fact that they don't believe that about their work is maybe a sign they should do something else.
