Bio

Participation

Currently looking for my next step in animal welfare. Reasonably clueless about what interventions are impartially good. 

"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik

How I can help others

Happy to give feedback on projects, or get on a call about anything to give advice and share contacts.

Sequences

Animals in AI-transformed futures

Comments

Abraham has mentioned the new charity briefly on the Hive Slack.

"What exactly do you mean by moving goalposts?"

Quick answer: it's a reference to this paragraph.

Jo_🔸

Reading a (free) book by Magnus Vinding two years ago is what got me into effective altruism, and I now work full-time on (hopefully) improving the lives of others. I'm very excited by this book, and have loved the chapter drafts that have been released early. 

Fun fact: the draft chapter "healthy habits" from this new book convinced me to switch from being a night owl to keeping a regular bedtime, which is the single most transformative lifestyle change I've ever made. I'm awake for 2 more hours per day on average, and am less sleepy.

Really appreciate this sort of cluster thinking exercise, thanks for sharing! ~20% dying from parasites intuitively shocked me upon reading, even though it checks out, given the abundance of parasites in the wild.

"I don’t think there is very much in the way of “this forecasting happened, and now we have made demonstrably better decisions regarding this terminal goal that we care about”."

I assume some people disagree with this strong claim. One example I've heard is AGI timelines and their influence on AI safety field priorities - though I guess one could answer that certain reports or expert opinions were disproportionately more useful than prediction markets.

On a different point, I appreciated Eli Lifland's past comment on many intellectual activities (such as grantmaking) being forms of forecasting.

Agree with the post and the bottom line, though I don't think it justifies focusing on AI safety, because of a disanalogy.

In your analogy, we assume that when we give the money to the mugger, they either make the coin more likely to land heads, or do nothing.

Meanwhile, in AI safety, small chances of averting doom come with small chances of causing doom - and it seems most people who work in the field believe that some well-regarded interventions actually increase P(Doom). They just disagree on which interventions are the doom-increasing ones.

"EA isn't drawing the same talent as it used to"
I'm surprised by this claim: do you mean it's getting fewer talented newcomers on a yearly basis than before, or that the incoming talent is different? (different profiles / skillsets)
I understood it as the former claim, but that would be surprising to me. I've heard a few orgs say that they've been able to raise the bar for who to hire in the past years, because the EA-aligned talent pool has been getting bigger, with more senior professionals and exceptionally competent people. Also, generally, that EAs are less young on average than ten years ago, and that this has benefits for hiring.

I can actually think of one example in animal welfare: the EA Animal Welfare Fund forecasts grant outcomes. 

I appreciate the initiative! AI Safety is rich with disagreements, and it's nice to have an opportunity to easily map out the range of existing views. Thanks for sharing!

Fatebook has both made me better at calibration and provided an easy way to track my initial beliefs on topics before gaining more information. I'd recommend using it for sure!
