
arvomm

889 karma · arvomm.com

Bio

I am a researcher on Rethink Priorities' Worldview Investigations Team, and I also do work for Oxford's Global Priorities Institute. Previously, I was a research analyst at the Forethought Foundation for Global Priorities Research, a role I took after completing the MPhil in Economics at Oxford. Before that, I studied Mathematics and Philosophy at the University of St Andrews.

Find out more about me here.

Comments (23) · Topic contributions (1)

That's fair. The main thought that came to mind, which might not be useful, is developing patience (eagerness to reach conclusions is often incompatible with the work required) and choosing your battles early. As you say, it can be hard and time-consuming, so people in the community asking narrower questions and focusing on just one or two is probably the way to go.

Thanks for looking through our work and for your comment, Deborah. We recognise that different parts of our models are often interrelated in practice. In particular, we’re concerned about the problem of correlations between interventions too, as we flag here. This is an important area for further work. That being said, it isn’t clear that the cases you have in mind are problems for our tools. If you think, for instance, that environmental interventions are particularly good because they have additional (quantifiable or non-quantifiable) benefits, you can update the tool inputs (including the cause or project name) to reflect that and increase the estimated impact of that particular cause area. We certainly don't mean to imply that climate change is an unimportant issue.

I think another common pitfall is not working through things from first principles. I appreciate that this is challenging and that any model is unrealistic. Still, BOTECs, pre-established boundaries between cause areas/worldviews, and our first instincts more broadly are likely to (and often do) lead us astray. Separately, I'm glad EA is so self-aware and invested in healthier epistemics, but I think we could do more to guard against echo-chamber thinking.

I was personally struck by how sensitive portfolios are to even modest levels of risk aversion. I don't know what the "correct" level of risk aversion is, or what the optimal decision procedure is in practice (even though most of my theoretical sympathies lie with expected value maximisation). Even so, seeing that introducing even modest risk aversion, with parameters relatively generous towards x-risk, still points towards spending most resources on animals (and sometimes global health) has led me to believe that that kind of work is robustly better than I used to think. There are many uncertainties, and I don't think EA should be reduced to any one of its cause areas, but, especially given this update, I would be sad to see the animal space shrink in relative size any more than it has.
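
To illustrate the kind of sensitivity I mean, here is a toy sketch (made-up numbers and a simple concave-utility transform; not the actual model behind our tools) of how even modest risk aversion can flip the ranking between a long-shot, high-payoff intervention and a reliable, modest one:

```python
# Toy illustration with hypothetical numbers; not the tools' actual model.

# (probability, payoff) pairs for two stylised interventions
xrisk  = [(0.001, 10_000.0), (0.999, 0.0)]   # long shot, huge payoff
animal = [(0.9, 10.0), (0.1, 0.0)]           # reliable, modest payoff

def ev(outcomes):
    """Plain expected value: sum of probability * payoff."""
    return sum(p * v for p, v in outcomes)

def concave_ev(outcomes, eta=0.5):
    """Expected utility under the concave transform u(v) = v**(1 - eta);
    eta in (0, 1) encodes the degree of risk aversion."""
    return sum(p * v ** (1 - eta) for p, v in outcomes)

print(ev(xrisk), ev(animal))                  # 10.0 vs 9.0  -> x-risk wins
print(concave_ev(xrisk), concave_ev(animal))  # 0.1 vs ~2.85 -> animals win
```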

Thanks for the question, Carter! Would you mind saying a bit more about the kind of empirical work you have in mind? Are you thinking about empirical research into the inputs to the tools? Or are you thinking about using the tools to conduct research on people’s views about cause prioritization? Do you have any concrete empirical projects you’d like to see WIT do?

Thanks for your question, Chris. We hear you about the importance of making the content accessible. We’ve aimed to include the main takeaways in intro and conclusion posts that can be easily skimmed. We also provide an executive summary at the beginning of each post. We hope that these help, but we take the point that it may not be obvious that we’ve taken these steps, and we’ll revisit this suggestion in future sequences to make sure the purposes of those posts and introductory materials are clear. It may also be useful for us to consider more visual summaries of some of our results, as we provided for our discussion of human extinction. Do you have any concrete suggestions given the approach we’ve adopted so far?

Thank you for your kind words, Ben. A substantial amount of in-house software work went into both tools. We used React and Vite to build the frontends, and Python for the server running the maths behind the scenes. If the interest in this type of work and the value added are high enough, we'd likely want to do more of it.
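
For a concrete picture of that split, here is a minimal sketch of a Python server exposing the maths to a React/Vite frontend over JSON; the framework (Flask), the route, and the payload shape are illustrative assumptions rather than our tools' actual API:

```python
# Illustrative only: framework, route, and payload shape are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/evaluate")
def evaluate():
    """Receive model inputs from the frontend and return computed results."""
    params = request.get_json()
    # Placeholder computation standing in for the real model maths.
    score = params.get("cost_effectiveness", 0) * params.get("scale", 1)
    return jsonify({"score": score})

if __name__ == "__main__":
    app.run(port=8000)
```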

On your last point, we haven't done open-source development from scratch for the projects we've done so far, but it might be a good strategy for future ones. That said, for transparency, we've made all our code accessible here.

Thank you for your comment, Lukas. We agree that this tool, and this approach more generally, could be useful even in that case, when all considerations are known. The ideas we built on and the language we used come from the literature on moral parliaments as an approach to better understanding and tackling moral uncertainty, hence our borrowing of that framing.

Thank you for adding various threads to the conversation, Arepo! I don't disagree with what I take to be your main point: benign AI and interstellar travel are likely to have a big impact. I will say, though, that while success on those fronts might significantly reduce risk, and for a long time, any given intervention is unlikely to make major progress towards them. Hence, at the intervention level, I'm tempted to remain sceptical that interventions that dramatically reduce risk for a long time are abundant.

Thank you for deeply engaging with our work and for laying out your thoughts on what you think are the most promising paths forward, like searching for contingent and persistent interventions, applying a medium-term lens to global health and animal welfare, or investigating fanaticism. I thought your post was well-written, to the point and enjoyable.
