Ariel Simnegar 🔸

Quantitative Researcher @ Quantic/Walleye Capital
2842 karma · Joined · Working (0-5 years) · Boston, MA, USA

Bio

I'm earning to give as a Quant Researcher at the Quantic group at Walleye Capital, a hedge fund. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.

I'm also on LessWrong and have a Substack blog.

How I can help others

Reach out to me if you're interested in earning to give in quant trading!

Comments
214

I'd be doing less good with my life if I hadn't heard of effective altruism

My donations to effective charities are by far the most impactful thing I've ever done in my life, and that could not have happened without EA.

Organisations using Rethink Priorities’ mainline welfare ranges should consider effects on soil nematodes, mites, and springtails.

The only argument I can think of against this would be optics. To be appealing to the public and a broad donor base, orgs might want to get off of the train to crazytown before this stop. (I assume this is why GiveWell ignores animal effects when assessing their interventions’ impact, even though those swamp the effects on humans.) Even then, it would make sense to share these analyses with the community, even if they wouldn’t be included in public-facing materials.

I think most views where nonhumans are moral patients imply these tiny animals could matter. Like most people, I find the implications of this incredibly unintuitive, but I don’t think that’s an actual argument against the view. I think our intuitions about interspecies tradeoffs, like our intuitions about partiality towards friends and family, can be explained by evolutionary pressures on social animals such as ourselves, so we shouldn’t accord them much weight.

Hi guys, thanks for doing this sprint! I'm planning on making most of my donations to AI for Animals this year, and would appreciate your thoughts on these follow-up questions:

  1. You write that "We also think some interventions that aren’t explicitly focused on animals (or on non-human beings) may be more promising for improving animal welfare in the longer-run future than any of the animal-focused projects we considered". Which interventions, and for which reasons?
  2. Would your tentative opinion be more bullish on AI for Animals' movement-building activities than on work like AnimalHarmBench? Is there anything you think AI for Animals should be doing differently from what they're currently doing?
  3. Do you know of anyone working (or interested in working) on the movement strategy research questions you discuss?
  4. Do you have any tentative thoughts on how animal/digital mind advocates should think about allocating resources between (a) influencing the "transformed" post-shift world as discussed in your post and (b) ensuring AI is aligned to human values today?

Depopulation is Bad

Assuming utilitarian-ish ethics and that the average person lives a good life, this follows.
The question gets much more uncertain once you account for wild animal effects, but it seems likely to me that the average wild animal lives a bad life, and human activity reduces wild animal populations, which supports the same conclusion.

This year I donated to the Arthropoda Foundation!

One reason to perhaps wait before offsetting your lifetime impact all at once could be to preserve your capital’s optionality. Cultivated meat could in the future become common and affordable, or your dietary preferences could otherwise change such that $10k was too much to spend.

Your moral views on offsetting could also change. For example, you might decide that the $10k would be better spent on longtermist causes, or that it’d be strictly better to donate the $10k to the most cost-effective animal charity rather than offsetting.

I basically never eat chicken

That’s awesome. That probably gets you 90% of the way there already, even if there were no offset!

I think that's a great point! Theoretically, we should count all of those foundations and more, since they're all parts of "the portfolio of everyone's actions". (Though this would simply further cement the takeaway that global health is overfunded.)

Some reasons for focusing our optimization on "EA's portfolio" specifically:

  • Believing that non-EA-aligned actions have negligible effect compared to EA-aligned actions.
  • Since we wouldn't have planned to donate to ineffective interventions/cause areas anyway, it's unclear what effect including those in the portfolio would have on our decision-making, which is one reason why they may be safely ignorable.
  • It's far more tractable to derive EA's portfolio than the portfolio of everyone's actions, or even the portfolio of everyone's charitable giving.

But I agree that these reasons aren't necessarily decisive. I just think there are enough reasons for focusing on EA's portfolio, and the assumption has enough simplifying power, that for me it's worth making.

Thanks for this research! Do you know whether any BOTECs have been done where an intervention can be said to create X vegan-years per dollar? I've been considering writing an essay pointing meat eaters to cost-effective charitable offsets for meat consumption. So far, I haven't found any rigorous estimates online.

(I think farmed animal welfare interventions are likely even more cost-effective and have a higher probability of being net positive. But it seems really difficult to know how to trade off the moral value of chickens taken out of cages / shrimp stunned versus averting some number of years of meat consumption.)

I don't think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, avoiding worst case scenarios, and reducing ambiguity.
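The divergence between expected value and survival in the repeated gamble above can be made concrete with a little arithmetic. A minimal sketch (the 51%/49% payoffs are from the example above; the function name and iteration counts are illustrative assumptions):

```python
# Repeated independent gambles, each with a 51% chance of doubling total
# value and a 49% chance of destroying everything. Expected value grows
# every round, yet the probability that anything survives shrinks to zero.

P_WIN = 0.51

def after_n_gambles(n):
    """Return (expected value multiplier, survival probability) after n
    gambles, starting from value 1."""
    expected = (2 * P_WIN) ** n  # each round multiplies EV by 2 * 0.51 = 1.02
    survival = P_WIN ** n        # survival requires winning every single round
    return expected, survival

for n in (1, 10, 100):
    ev, surv = after_n_gambles(n)
    print(f"n={n}: expected value multiplier {ev:.4g}, survival probability {surv:.4g}")
```

After 100 rounds the expected value multiplier exceeds 7, while the survival probability is around 10^-30: maximizing expected value almost surely destroys everything, which is the intuition behind the "almost surely destroyed" implication.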

I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.

So long as small orgs apply to large grantmakers like OP, and so long as one is confident that OP is trying to maximize expected value, I'd actually expect that OP's full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.

Does portfolio theory apply better at the individual level than the community level?

I think the individual level applies if you have risk aversion on a personal level. For example, I care about having personally made a difference, which biases me towards certain individually less risky ideas.

is this "k-level 2" aggregate portfolio a 'better' aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios?

I think it's a tough situation because k=2 includes these unsavory implications Jeff and I discuss. But as I wrote, I think k=2 is just what happens when people think about everyone's donations game-theoretically. If everyone else is thinking in k=2 mode but you're thinking in k=1 mode, you're going to get funged such that your value system's expression in the portfolio could end up being much less than what is "fair". It's a bit like how the Nash equilibrium in the Prisoner's Dilemma is "defect-defect".
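The Prisoner's Dilemma point can be checked mechanically. A minimal sketch (the payoff numbers are the standard textbook values, not anything from the discussion above) verifying that "defect" is each player's best response no matter what the other does, so (defect, defect) is the Nash equilibrium:

```python
# Standard Prisoner's Dilemma payoffs: payoffs[(my_move, their_move)] = my payoff.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff given the other player's move."""
    return max(("cooperate", "defect"), key=lambda m: payoffs[(m, their_move)])

# Defect dominates: it is the best response to either move the other player makes.
for their_move in ("cooperate", "defect"):
    print(f"best response to {their_move}: {best_response(their_move)}")
```

Since defecting strictly dominates, both players defect even though mutual cooperation would leave both better off, which mirrors how individually optimal k=1 thinking can be exploited by k=2 thinkers.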

At some point what matters is specific projects...?

I agree with this. My post frames the discussion in terms of cause areas for simplicity and since the lessons generalize to more people, but I think your point is correct.
