Bob Fischer

Senior Researcher @ Rethink Priorities
3725 karma · Working (15+ years) · Rochester, NY, USA · bobfischer.net

Bio

I'm a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.

Sequences (3)

Rethink Priorities' CRAFT Sequence
The CURVE Sequence
The Moral Weight Project Sequence

Comments (103)

I'm encouraged by your principles-first focus, Zach, and I'm glad you're at the helm of CEA. Thanks for all you're doing. 

Thanks for your question, Nathan. We were making programmatic remarks and there's obviously a lot to be said to defend those claims in any detail. Moreover, we don't mean to endorse every claim in any of the articles we linked. However, we do think that the worries we mentioned are reasonable ones to have; lots of EAs can probably think of their own examples of people engaging in motivated reasoning or being wary about what evidence they share for social reasons. So, we hope that's enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.

Thanks, Deborah. Derek Shiller offered an answer to your question here.

Good question! Re: the Moral Weight Project, perhaps the biggest area of impact has been on animal welfare economics, where having a method for making interspecies comparisons is crucial for benefit-cost analysis. Many individuals and organizations have also reported to us that our work updated them on the importance of animals, and of invertebrates specifically. We’ve seen something similar with the CCM tool, with responses ranging from positive feedback and enthusiasm to more concrete changes in people’s decisions. There’s more we can say privately than publicly, however, so please feel free to get in touch if you’d like to chat!

  • What are selfish lifestyle reasons to work on the WIT team?
    • It’s fun to talk to smart people! Remote work is great. It’s a privilege to be able to think about big problems that are both philosophically complicated and practically important. 
  • Is it fair to say the work WIT does is unusual outside of academia? What are closely related organizations that tackle similar problems?
    • Yes, what we do is very unusual outside of academia—and inside it too. Re: other groups that do global priorities research, the most prominent ones are GPI, PWI, and the cause prio teams at OP.
  • How does your team define "good enough" for a sequence? What adjustments do you make when you fall behind schedule? Cutting individual posts? Shortening posts? Spending more time?
    • That’s a hard one and we’re still trying to figure it out. There are a lot of variables here, many of which are linked to whether we have the funding to linger on a particular project. In general, however, our job isn’t to produce academic research: it’s to inform decisions. So, if we think we’ve done enough to help people who need to make decisions, then that’s a good sign that we should wrap up the project soon.
  • How much does the direction of a sequence change as you're writing it? It seems like you have a vision in mind when starting out, but you also mention being surprised by some results.
    • The general structure tends not to change much—we plan out posts together and have a general sense of the research we want to do—but the narrative certainly evolves as we learn more about the topic we’re investigating. The conclusions definitely aren’t set from the beginning!
  • Can you tell us more about the structure of research meetings? How frequently do individual authors chat with each other and for what reason? In particular, the CURVE sequence feels very intentionally like a celebration of different "EA methodologies". Most of the posts feel individual before converging on a big cost-effectiveness analysis.
    • We’re in touch all the time, brainstorming new ideas, reviewing drafts, and figuring out solutions to problems. The whole team meets once or twice a week and then we individually hop on 1-1 calls more frequently to discuss specific aspects of our projects. Most of the research still has a lead who’s driving it forward, but everyone’s fingerprints tend to be on everything. 
  • Much of your work feels like numerical simulation over discrete choices. Have there been attempts to define "closed-form" analytical equations for your work? What are reasons to allocate resources to this versus not?
    • This ties back to your earlier question about when a sequence is "good enough." We think analytical equations can be valuable: they’re often tidier, they speed up computational work, and they can give clearer insight into sensitivity analyses. For example, they’re a natural next step for our human extinction post, as we flagged in its conclusion, and we’ve already done some work in that direction, though it isn’t yet polished enough to share. As for when a piece of research is good enough to wrap up: we don’t know for sure, but we’ve found that computational simulations we’re sufficiently confident in give approximations that are perfectly suitable for learning about the models we’re interested in. We hear you: closed-form solutions are mathematically satisfying. But once we’ve learned the main headlines, it’s hard to justify spending the extra time to work through closed-form solutions for everything, especially for the more complex models with several moving parts. (There’s a toy illustration of the simulation-versus-closed-form comparison after this list.)
  • What are the main constraints the WIT team faces?
    • The standard ones: we’re funding- and capacity-constrained. We could do a lot more with additional resources!
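
To make the simulation-versus-closed-form comparison above concrete, here is a toy sketch in Python. The lognormal quantity and its parameters are made up for illustration and aren’t drawn from any of our actual models; the point is just that a Monte Carlo estimate can be checked against a known analytical answer.

```python
# Toy illustration only: a Monte Carlo estimate vs. a known closed form.
# The "model output" here is just the mean of a hypothetical lognormal
# distribution; mu and sigma are made-up parameters.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
samples = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

mc_estimate = samples.mean()             # simulation answer
closed_form = np.exp(mu + sigma**2 / 2)  # analytical answer

print(f"Monte Carlo estimate: {mc_estimate:.4f}")
print(f"Closed-form answer:   {closed_form:.4f}")
```

For a model this simple the two agree to a couple of decimal places; the harder judgment call is whether deriving analytical forms is worth the time for models with many interacting parts.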

Great (and difficult!) question, Jordan. I (Bob) am responding to this one for myself and not for the team; others can chime in as they see fit. The biggest issue I see in EA cause prioritization is overconfidence. It’s easy to think that because there are some prominent arguments for expected value maximization, we don’t need to run the numbers to see what happens if we have a modest level of risk aversion. It’s easy to think that because the future could be long and positive, the EV calculation is going to favor x-risk work. Etc. I’m not anti-EV; I’m not anti-x-risk. However, I think these are clear areas where people have been too quick to assume that they don’t need to run the numbers because it's obvious how they'll come out.
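
To illustrate what I mean by running the numbers, here’s a toy comparison. The payoffs are made up, and square-root utility is a crude stand-in for risk aversion rather than the formalism we actually use; the point is only that a ranking that looks obvious under expected value maximization can flip under even a simple form of risk aversion.

```python
# Toy numbers only: a certain payoff vs. a long shot with higher expected value.
# Square-root utility is a crude stand-in for risk aversion.
from math import sqrt

sure_thing = [(1.0, 100)]                   # (probability, payoff) pairs
long_shot = [(0.001, 200_000), (0.999, 0)]

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def risk_averse_value(lottery):
    return sum(p * sqrt(x) for p, x in lottery)

print(expected_value(sure_thing), expected_value(long_shot))        # 100.0 vs 200.0
print(risk_averse_value(sure_thing), risk_averse_value(long_shot))  # 10.0 vs ~0.45
```

The long shot wins comfortably on pure expected value and loses badly once the concave utility function is applied. Whether that’s the right way to model risk aversion is exactly the kind of question I think we need to work through rather than assume away.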

I’m a “chickens and children” EA, having come to the movement through Singer’s arguments about animals and global poverty. I still find EA most compelling, both philosophically and emotionally, when it focuses on areas where it’s clear that we can make a difference. However, the more I grapple with the many uncertainties associated with resource allocation, the more sympathetic I become to diversification, including significant resources for work that doesn’t appeal to me personally at all. So you probably won’t catch me pivoting to AI governance anytime soon, but I’m glad others are doing it.

Thanks for these questions, Toby!

  • Re: fitting products to our audience, that’s one reason we release them on the Forum! All our tools are in beta; the feedback we receive here is one of the important ways we identify necessary refinements. As time and funding permit, we hope to improve our tools so that they better serve individuals and organizations trying to do as much good as they can. That being said, we also did a lot of user testing in advance, soliciting feedback on many iterations of each tool to improve usability and accessibility.
  • Re: how we hope people will use the Moral Parliament Tool, we have two main goals. First, we hope that people will use it to have more transparent conversations about their disagreements. For instance, when people are debating the merits of a particular intervention, is the crux the probability that an intervention will backfire, how bad they think backfiring will be, their relative aversions to backfiring, or the way they think they should navigate uncertainty given all their other commitments? The tool forces people to make these kinds of differences explicit and think through their implications. Second, we hope that people will use the Moral Parliament Tool to explore the implications of even modest levels of uncertainty. The tool makes it obvious that changes to parameter values, credences in moral theories, and aggregation methods have big consequences for overall allocations!
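
As a schematic illustration of that second point, here are hypothetical credences, cause areas, and theory-level allocations (not the tool’s actual worldviews or aggregation methods) run through two deliberately simple aggregation rules; the choice of rule alone changes the portfolio substantially.

```python
# Hypothetical inputs: credences in three stand-in moral theories and each
# theory's preferred allocation across three stand-in cause areas.
credences = {"theory_A": 0.5, "theory_B": 0.3, "theory_C": 0.2}
allocations = {
    "theory_A": {"global_health": 0.8, "animals": 0.1, "x_risk": 0.1},
    "theory_B": {"global_health": 0.1, "animals": 0.8, "x_risk": 0.1},
    "theory_C": {"global_health": 0.1, "animals": 0.1, "x_risk": 0.8},
}

# Rule 1: "my favorite theory" -- follow the highest-credence theory outright.
favorite = max(credences, key=credences.get)
rule_1 = allocations[favorite]

# Rule 2: credence-weighted average of the theories' preferred allocations.
rule_2 = {
    cause: sum(credences[t] * allocations[t][cause] for t in credences)
    for cause in rule_1
}

print(rule_1)  # {'global_health': 0.8, 'animals': 0.1, 'x_risk': 0.1}
print(rule_2)  # roughly {'global_health': 0.45, 'animals': 0.31, 'x_risk': 0.24}
```

Nothing about the inputs changed between the two rules, yet the resulting allocations differ a lot; that’s the kind of sensitivity the tool is meant to make visible.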

Excellent question, Ian! At a high level, I’d say that moral uncertainty has made me much more inclined to care about having an overlapping consensus of reasons for any important decision. Equivalently, I want a diverse set of considerations to point in the same direction before I’m inclined to make a big change. That’s how I got into animal work in the first place. It’s good for the animals, good for human health, good for long-term food security, good for the environment, etc. There are probably lots of other impacts too, but that’s the first one that comes to mind!
