I'm planning to spend time on the afternoon (UK time) of Wednesday 2nd September answering questions here (though I may get to some sooner). Ask me anything!
A little about me:
- I work at the Future of Humanity Institute, where I run the Research Scholars Programme, a 2-year programme giving junior researchers (or prospective researchers) space to explore, or to get deep into something
- (Applications currently open! Last full day we're accepting them is 13th September)
- I've been thinking about EA/longtermist strategy for the better part of a decade
- A lot of my research has approached the question of how we can make good decisions under deep uncertainty; this ranges from the individual to the collective, and the theoretical to the pragmatic
- e.g. A bargaining-theoretic approach to moral uncertainty; Underprotection of unpredictable statistical lives compared to predictable ones; or Defence in depth against human extinction
- Recently I've been thinking around the themes of how we try to avoid catastrophic behaviour from humans (and how that might relate to efforts with AI); how informational updates propagate through systems; and the roles of things like 'aesthetics' and 'agency' in social systems
- I think my intellectual contributions have often involved clarifying or helping build more coherent versions of ideas/plans/questions
- I predict that I'll typically have more to say to relatively precise questions (where broad questions are more likely to get a view like "it depends")
I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!
I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA. If something is a good thing, and provided it doesn't carry a real opportunity cost, then it seems to me that a consequentialist EA should do it however small the benefit.
To illustrate my point: one might say it's a good thing to donate to a seeing-eye dog charity. In a sense it is, but an EA would say it isn't, because there's an opportunity cost: you could instead donate to, say, the Against Malaria Foundation, which is more effective. So donating to a seeing-eye dog charity isn't really a good thing to do.
Choosing to follow a ve*an diet usually doesn't have an opportunity cost. You have to eat anyway; you're just choosing to eat something different, and it doesn't stop you doing anything else. So even if it realises only a small benefit, it seems worth it (and for the record, I don't think the benefit is small).
Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals. From a utilitarian view I'd imagine this is unlikely to be true. I happen to think avoiding the suffering of even one animal is significant, just as we think it would be highly significant to save even one human life. And following a ve*an diet for a while will benefit far more than one animal anyway.