
Note: This is not an FTX post, and I don't think its content hinges on current events. Also - though this is probably obvious - I'm speaking in a strictly personal capacity.

Formal optimization problems tend to admit a single solution - there can be multiple optima, but by default a given problem setup tends to have one optimum, and the highest-expected-value move is just to dump everything into it.

As a community, we tend to enjoy framing things as formal optimization problems. This is pretty good! But the thing about formal problem setups is that they encode lots of assumptions, and those assumptions can have several degrees of freedom. Sometimes the assumptions are just plain qualitative, and quantifying them misses the point; the key isn't to add yet another order-of-magnitude (or three) variable to express uncertainty. Rather, the key is to adopt a portfolio approach, so that you're hitting optima or near-optima under a variety of plausible assumptions, even mutually exclusive ones.
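
To make that contrast concrete, here's a toy sketch in Python. The worldviews, causes, and value-per-dollar numbers are entirely made up for illustration - none of them come from this post or from any real cost-effectiveness estimate. The point is just structural: under any single worldview, expected-value maximization dumps the whole budget into that worldview's favorite cause (leaving rival worldviews with nothing), while even a naive split scores respectably under all of them at once.

```python
# Toy sketch: single-worldview optimization vs. a simple portfolio.
# All names and numbers below are made up purely for illustration.

worldviews = {
    # hypothetical value-per-dollar each worldview assigns to each cause
    "longtermist":         {"x_risk": 10.0, "global_health": 1.0, "animals": 0.5},
    "longtermism_skeptic": {"x_risk": 0.0,  "global_health": 1.0, "animals": 0.5},
    "animal_focused":      {"x_risk": 0.5,  "global_health": 1.0, "animals": 10.0},
}

budget = 100.0

def value(allocation, weights):
    """Total value of an allocation, as scored by one worldview's weights."""
    return sum(allocation[cause] * weights[cause] for cause in allocation)

# Single-worldview optimum: dump the entire budget into that view's best cause.
for name, weights in worldviews.items():
    best_cause = max(weights, key=weights.get)
    print(f"{name} optimum: all ${budget:.0f} to {best_cause}")

# A naive portfolio: split the budget evenly across causes.
causes = ["x_risk", "global_health", "animals"]
portfolio = {cause: budget / len(causes) for cause in causes}

# The portfolio is never any single worldview's optimum, but it never scores
# zero under any worldview either - every plausible story gets some resources.
for name, weights in worldviews.items():
    print(f"{name} scores the portfolio at {value(portfolio, weights):.1f}")
```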

This isn't a new idea. In various guises and on various scales, it's called a moral parliament, buckets, cluster thinking, or just plain hedging. As a community, to our credit, we already do a lot of this.

But I think we could do more, and be more confident and happy about it.

Case study: me

I do (or have done) the following things, all of them likely EA-related:

  1. Every month, I donate 10% of my pre-tax income to the Against Malaria Foundation.
  2. I also donate $100 a month to Compassion in World Farming, mostly because I feel bad about eating meat.
  3. In my spare time, I provide editing services to various organizations as a contractor. The content I edit is often informed by a longtermist perspective, and the modal topic is probably AI safety.
  4. I was once awarded (part of) an LTFF grant (the EA Funds one, not FTX) to edit writeups on current cutting-edge AI safety research and researchers.

Case study from a causes perspective

On a typical longtermist view, my financial donations don't make that much sense - they're morally fine, but it'd be dramatically better in expectation to donate toward reducing x-risk.

On a longtermist-skeptical view, the bulk of my editing doesn't accomplish much for altruistic purposes. It's morally fine, but it'd be better to polish general outreach communications for the more legible global poverty and health sector.

And depending on how you feel about farmed animals, that smaller piece of the pie could dwarf everything else (even just the $100 a month is plausibly saving more chickens from bad lives than my AMF donations save human lives), or be irrelevant (if you basically don't care about chicken welfare at all).

I much prefer my situation to a more "aligned" one, where all my efforts go in a single direction.

It's totally plausible to me that work being done right now on AI safety makes a really big difference for how well things go in the next couple decades. It's also plausible to me that none of it matters, either because we're doomed in any case or because our current trajectory is just basically fine. 

Similarly, it's plausible to me (though I think unlikely) that I'll learn that AMF's numbers are super inflated somehow, or that its effectiveness collapsed and nobody bothered to check. And it's plausible that in 20 years we will have made enough progress on global poverty and health that the space no longer offers donation opportunities as high-leverage as the ones available right now, which would make now a really important time to give.

So I'm really happy to just do both. I don't have quantitative credences here, though I'm normally a huge fan of those. I just don't think they work that well for the outside view of the portfolio approach - I've aggregated various stories about what matters, and the ones that smell the best get some of my resources. If I'm maximizing anything, it's maybe the odds that, looking back in 20 years - with different social incentives, different peers, and different information about the state of the world - I'll think "yeah, some of that was really good."

Case study from an activities perspective

Another thing I want to highlight is that I'm really happy to be doing both some direct work (though not as a full-time job, nor do I currently want it to be) and some donating.

Qualitatively, it seems like there are huge, hard- or impossible-to-remove biases around anything that is one's entire livelihood. My mental health was noticeably worse when I derived most of my income from EA-adjacent sources. The biases aren't necessarily all in one direction, to be clear; I think I was probably more critical of EA when my career was caught up in it to a significant degree, because career stuff going badly is just really painful, and sometimes things went badly.

The difference in my cognition between then and now is very difficult to properly describe - it's just that huge. I feel like my thinking is much clearer with a job that has nothing to do with EA than it was before. I regularly see claims about fear of professional retaliation or losing out on opportunities, and it's wonderful to just... basically not feel that stuff at all. In general, I think really great advice for people is "If you're trying to break into a crowded, competitive field, and it goes badly and starts making you unhappy, just stop."

And on the other side, it feels good to actually spend some time engaging with EA ideas rather than simply donating money on autopilot. Being a (somewhat) active participant in the professional landscape is nice, and the arguments that lending one's expertise is (on average) as valuable as or more valuable than donating money are, indeed, pretty good.

I am not (currently) at all interested in either:

  1. Paying my bills with EA money
  2. Channeling all my involvement with EA through donations

I'm not sure how many people are in equilibria like mine, but I highly recommend it, particularly if you feel like you can't get all the way into the EA inner circle and it's frustrating you. You don't have to! You can just help out a little on the margin, and have a job you enjoy that has nothing to do with this stuff. And it's great!

Broader, movement level notes

I'm really happy GiveWell and Giving What We Can exist. I live in a small city. If I talk about stuff like AI safety, people reliably go into "oh, what an interesting boondoggle to talk about while tipsy" mode. Even smart people. It's just not part of the culture here. I don't mind this.

In my local social scene, stuff like "hey, saving lives by donating money is actually doable; the stuff you've heard about crooked charities is true to a degree, but it isn't all of them, and some people do verifiably great work" is legible fair play. Giving What We Can is a totally fine thing to bring up occasionally as an option for people doing financially okay or better. And if things get really dodgy on the cutting edge, these organizations are still around, reliably generating the kind of value we can be super confident we'll look back on in 20 years (assuming the world hasn't ended, which, okay, for some of you that might be a big if) and say "yeah, that stuff was good."

My status-seeking brain has the impulse to say "let's give GWWC more attention and status," but honestly... I don't know. Maybe it should be a small-ish outfit with a moderate degree of outreach, one that interested people who read Peter Singer in high school or college can organically find. That's what happened with me. Maybe that's just about ideal - I certainly don't think it's mismanaged, and being way more ambitious might be a mistake. (I have a post brewing about this more specific question, too.) But I'm really glad GWWC exists in any case, and grateful to the people who run it. The top 20% or so (by effectiveness) of the global poverty and health space is just really great, and I hope EA continues attracting people (and some of their disposable incomes) to it.

I'm also really happy LessWrong exists, and that major AI labs now have safety divisions concerned with things more foundational than algorithmic bias. It's great that we're actually making inroads into elite, hard-to-access fields. I worry there's too much of that relative to broad-based EA 101 content for ordinary people, but hey, I would, wouldn't I? I didn't go to an Ivy League school or whatever, so I have a bias against awarding mega-status to people near the very top of credential/prestige pyramids. C'est la vie. On the whole, I'm glad we've made progress along both the elite and non-elite dimensions over time, regardless of where social incentives make me want more glory on the margin.

I guess basically my feeling boils down to...

Conclusion

A lot of official communications from top-level community figures laud or defend a portfolio approach. Will's big post on (among other things) what we've accomplished, written even while he was writing and promoting a longtermism book, touched on AMF; as a long-time donor, I appreciated that. Open Philanthropy does their bucket thing. Holden has shared really valuable models about cluster thinking, and the idea of a moral parliament (from Toby Newberry and Toby Ord) has been floating around for a while. There's no shortage of calls for diversification from the very top levels.

But I think, maybe, we could take that a little more to heart? Like, it's really fine. You can care a little about a bunch of stuff. Your philanthropic interest can wander, even into territory that on certain models has lower expected value than the alternatives. You can be partly motivated by local social gradients - just don't take it too far. You can give up if you keep shooting for the stars and getting burned. There's a lot of room on the outskirts. And having outskirts - having people, causes, and organizations that aren't trying to be at the very tip of the spear or the top of the pyramid - is actually, I think, super ecologically important.

And maybe more fun, too!

Comments

As someone in a somewhat similar position myself (donating to GiveWell, vegetarian, exploring AI safety work), I found this nice to read. Diversifying is a good way to hedge against uncertainty and to exploit differing comparative advantages in different aspects of one's life.

As an AI safety person, I tend to believe the community should move more toward existential risk (though I'm not claiming AI safety maximalism). On the other hand, even if existential risk is an individual's top priority, your diversification strategy may still be optimal for them if AI safety is too abstract to fully engage their motivation.

In fact, I was considering doing some unrelated, non-EA volunteering so that I would have some more concrete impact as well, but I decided that I didn't actually have time. I may end up doing this at some point, but I'm all-in with AI Safety for now.

This is a really nice perspective. It's a great reminder that working on EA dollars is not the only way to do good; there's a whole world of ways to have an impact outside of the community itself.

Ah shucks, I am sorry to hear it. Good luck to you!
