Hey there~ I'm Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
I'm a bit disappointed, if not surprised, with the community response here. I understand veganism is something of a sacred cow (apologies) in these parts, but that's precisely why Ben's post deserves a careful treatment -- it's the arguments you least agree with that you should extend the most charity to. While this post didn't cause me to reconsider my vegetarianism, historically Ben's posts have had an outsized impact on the way I see things, and I'm grateful for his thoughts here.
Ben's response to point 2 was especially interesting:
> If factory farming seems like a bad thing, you should do something about the version happening to you first.
And I agree about the significance of human fertility decline. I expect that this comparison, of factory farming to modern human lives, will be a useful metaphor when thinking about how to improve the structures around us.
It's a good point about how it applies to founders specifically - under the old terms (3:1 match up to 50% of stock grant) it would imply a maximum extra cost from Anthropic of 1.5x whatever the founders currently hold. That's a lot!
Those bottom-line figures don't seem crazy optimistic to me, though - like, my guess is a bunch of folks at Anthropic expect AGI within 4 years, and Anthropic is the go-to example of "founded by EAs". I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.
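To make the cap arithmetic concrete, here's a minimal sketch - the 3:1 ratio and 50% cap are the old terms mentioned above, the 1:1/25% terms are from the current careers page, and the function name is mine:

```python
# Maximum extra cost of the donation match to Anthropic, expressed as a
# multiple of an employee's equity grant (worst case: employee donates
# right up to the cap, and every donated dollar is matched).
def max_match_multiple(match_ratio: float, cap_fraction: float) -> float:
    return match_ratio * cap_fraction

old_terms = max_match_multiple(3.0, 0.5)    # 3:1 up to 50% -> 1.5x the grant
new_terms = max_match_multiple(1.0, 0.25)   # 1:1 up to 25% -> 0.25x the grant
```

So the move from the old terms to the new ones cuts the company's maximum match exposure per employee by a factor of six.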
Anthropic's donation program seems to have been recently pared down? I recalled it as 3:1; see eg this comment from Feb 2023. But right now on https://www.anthropic.com/careers:
> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant
Curious if anyone knows the rationale for this -- I'm thinking through how to structure Manifund's own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration.
I'm also wondering whether existing Anthropic employees still get the 3:1 terms, or whether the program has been changed for everyone going forward. Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of EA giving, so the precise mechanics of the giving program could change funding considerations by a lot.
One (conservative imo) ballpark:
$60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y. And the difference between a 1:1 and a 3:1 match is the difference between $180m/y of giving and $360m/y.
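Spelling out that one-liner (the four multipliers are left unlabeled, as above; all figures in dollars):

```python
# Ballpark for annual charitable giving out of Anthropic equity.
valuation = 60e9                  # rumored raise valuation
discounts = [0.3, 0.5, 0.2, 0.2]  # the four haircut factors from the one-liner
years = 4

donated_equity = valuation
for d in discounts:
    donated_equity *= d           # $360m of equity donated in total

per_year = donated_equity / years  # $90m/y of employee donations

giving_1to1 = per_year * (1 + 1)   # $180m/y including a 1:1 match
giving_3to1 = per_year * (1 + 3)   # $360m/y including a 3:1 match
```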
Thanks for the recommendation, Benjamin! We think donating to Manifund's AI Safety regranting program is especially good if you don't have a strong inside view among the different orgs in the space but trust our existing regrantors and the projects they've funded; or if you're excited about providing pre-seed/seed funding for new initiatives or individuals, rather than later-stage funding for more established charities (our regrantors act something like "angel investors for AI safety").
If you're a large donor (eg giving >$50k/year), we're also happy to work with you to sponsor new AI safety regrantors, or suggest to you folks who are particularly aligned with your interests or values. Reach out to me at austin@manifund.org!
This makes sense to me; I'd be excited to fund research or especially startups working to operationalize AI freedoms and rights.
FWIW, my current guess is that the proper unit to extend legal rights is not a base LLM like "Claude Sonnet 3.5" but rather a corporation-like entity with a specific charter, context/history, economic relationships, and accounts. Its cognition could be powered by LLMs (the way eg McDonald's cognition is powered by humans), but it fundamentally is a different entity due to its structure/scaffolding.
I'm not aware of any projects that aim to advise what we might call "Small Major Donors": people giving away perhaps $20k-$100k annually.
We don't advertise very much, but my org (Manifund) does try to fill this gap:
I agree that the post is not well defended (partly due to brevity & assuming context); and also that some of the claims seem wrong. But I think the things that are valuable in this post are still worth learning from.
(I'm reminded of a Tyler Cowen quote I can't find atm, something like: "When I read the typical economics paper, I think 'that seems right' and immediately forget about it. When I read a paper by Hanson, I think 'What? No way!' and then think about it for the rest of my life." Ben strikes me as the latter kind of writer.)
Similar to the way Big Ag farms chickens for their meat, you could view governments and corporations as farming humans for their productivity. I think this has been true throughout history, but accelerated recently by more financialization/consumerism and software/smartphones. Both are entities that care about a particular kind of output from the animals they manage, with some reasons to care about their welfare but also some reasons to operate in an extractive way. And when these entities can find a substitute (eg plant-based meat, or AI for intellectual labor), the outcomes may not be ideal for the animals.