Austin

Cofounder @ Manifund & Manifold
3789 karma · San Francisco, CA, USA

Bio

Hey there~ I'm Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold!

Comments (215)

Really appreciate this post! I think it's really important to try new things, and also to have the courage to notice when things are not working and stop them. As a person who habitually starts projects, I often struggle with the latter myself, haha.

(speaking of new projects, Manifund might be interested in hosting donor lotteries or something similar in the future -- lmk if there's interest in continuity there!)

Hey! Thanks for the thoughts. I'm unfortunately very busy these days (including preparing for Manifest 2025!) so can't guarantee I'll be able to address everything thoroughly, but a few quick points, written hastily and without strong conviction:

  • re: non sequitur, I'm not sure if you've been on a podcast before, but one tends to just, like, say stuff that comes to mind; it's not an all-things-considered take. I agree that Hanania denouncing his past self is a great and probably more central example of growth; I just didn't reference it because the SWP stuff was more top of mind (interesting, unexpected).
  • I know approximately nothing about HBD, fwiw; I'm not even super sure what the term refers to (my guess without checking: the controversial idea that certain populations/races have higher IQs?). It's not the case that I looked a bunch into HBD and decided to invite these 6 speakers because of their HBD beliefs; I outlined the specific reasons I invited them, namely that they each had an interesting topic to talk about (none of which were HBD, afaik). You could accuse me of dereliction of duty wrt researching the downstream effects of inviting controversial speakers? idk, maybe, I'm open to that criticism; it's just that there's a lot of stuff to juggle, and it feels a bit like an isolated demand on my time.
  • I agree that racism directly harms people, beyond being offensive, and this can be very bad. It's not obvious to me where and how racism is happening in my local community (broadly construed, ie the spaces I spend time in IRL and online), or what specific bad things are caused by it. Like, I think my general view of racism is that it's an important cause area, alongside many other important causes to work on like AI safety, animal welfare, GHD, climate change, progress, etc -- but it happens to not be very neglected or tractable for me personally to address.

No updates on ACX Grants to share atm; stay tuned!

Thank you Caleb, I appreciate the endorsement!

And yeah, I was very surprised by the dearth of strong community efforts in SF. Some guesses as to why:

  • Berkeley and Oakland have historically been the nexus for EA and rationality, with a rich-get-richer effect where people migrating to the Bay choose the East Bay
  • In SF, there's much more competition for talent: people can go work at startups, AI labs, FAANG, or VC firms
  • And also competition for mindshare: SF's higher population and density mean there are many other communities (eg climbing, biking, improv, yimby, partying)

Some are! Check out each project in the post; some have links to source code.

(I do wish we'd gotten source code for all of them; next time we might consider an open-source hackathon!)

Thanks Angelina! It was indeed fun, hope to have you join in some future version of this~

And yeah, definitely great to highlight that list of projects: many juicy, still-unexplored ideas in there for any aspiring epistemics hacker. (I think it might be good for @Owen Cotton-Barratt et al to just post that as a standalone article!)

I agree that the post is not well defended (partly due to brevity and assumed context), and also that some of the claims seem wrong. But I think the things that are valuable in this post are still worth learning from.

(I'm reminded of a Tyler Cowen quote I can't find atm, something like: "When I read the typical economics paper, I think 'that seems right' and immediately forget about it. When I read a paper by Hanson, I think 'What? No way!' and then think about it for the rest of my life." Ben strikes me as the latter kind of writer.)

Similar to the way Big Ag farms chickens for their meat, you could view governments and corporations as farming humans for their productivity. I think this has been true throughout history, but it has accelerated recently with more financialization/consumerism and software/smartphones. Both are entities that care about a particular kind of output from the animals they manage, with some reasons to care about their welfare but also some reasons to operate in an extractive way. And when these entities can find a substitute (eg plant-based meat, or AI for intellectual labor), the outcomes may not be ideal for the animals.

I'm a bit disappointed, if not surprised, with the community response here. I understand veganism is something of a sacred cow (apologies) in these parts, but that's precisely why Ben's post deserves a careful treatment -- it's the arguments you least agree with that you should extend the most charity to. While this post didn't cause me to reconsider my vegetarianism, historically Ben's posts have had an outsized impact on the way I see things, and I'm grateful for his thoughts here.

Ben's response to point 2 was especially interesting:

> If factory farming seems like a bad thing, you should do something about the version happening to you first.

And I agree about the significance of human fertility decline. I expect that this comparison, of factory farming to modern human lives, will be a useful metaphor when thinking about how to improve the structures around us.

It's a good point about how it applies to founders specifically: under the old terms (a 3:1 match on up to 50% of the stock grant), it would imply a maximum extra cost to Anthropic of 1.5x whatever the founders currently hold (3 x 50% = 150%). That's a lot!
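(For concreteness, a tiny Python sketch of that cap arithmetic, with normalized, purely illustrative numbers rather than actual figures:)

```python
# Sketch of the max extra cost to Anthropic under the old match terms.
# Assumes the founder donates the full matchable 50% of their holdings;
# all numbers are normalized, not real.
founder_holdings = 1.0                  # normalize the founder's holdings to 1.0
max_matchable = 0.5 * founder_holdings  # old cap: donations matched up to 50% of the grant
match_ratio = 3.0                       # old terms: 3:1 match

max_match_cost = match_ratio * max_matchable
print(max_match_cost)  # 1.5, ie 1.5x whatever the founder currently holds
```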

Those bottom-line figures don't seem crazy optimistic to me, though. Like, my guess is a bunch of folks at Anthropic expect AGI inside of 4 years, and Anthropic is the go-to example of "founded by EAs". I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

Anthropic's donation program seems to have been recently pared down? I recalled it as 3:1; see eg this comment from Feb 2023. But right now on https://www.anthropic.com/careers:
> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant

Curious if anyone knows the rationale for this -- I'm thinking through how to structure Manifund's own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration.

I'm also wondering whether existing Anthropic employees still get the 3:1 terms, or the program has been changed for everyone going forward. Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of EA giving, so the precise mechanics of the matching program could change funding considerations by a lot.

One (conservative imo) ballpark:

  • If founders + employees broadly own 30% of outstanding equity
  • 50% of that has been assigned and vested
  • 20% of employees will donate
  • 20% of their equity within the next 4 years

then $60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y in base donations. And the difference between a 1:1 and a 3:1 match is the difference between $180m/y and $360m/y of total giving (donations plus match).
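(Same ballpark as a small Python sketch; every input below is just one of the guesses above, so tweak freely:)

```python
# Ballpark of annual giving from Anthropic equity donations.
# All inputs are rough guesses from the comment above, not real figures.
valuation = 60e9        # rumored raise valuation, $60b
employee_share = 0.30   # founders + employees own ~30% of outstanding equity
vested_fraction = 0.50  # ~50% of that assigned and vested
donor_fraction = 0.20   # ~20% of employees donate
equity_donated = 0.20   # ~20% of their equity, over the next 4 years
years = 4

base = (valuation * employee_share * vested_fraction
        * donor_fraction * equity_donated) / years
print(f"base donations: ${base / 1e6:.0f}m/y")  # $90m/y

for ratio in (1, 3):  # 1:1 vs 3:1 match
    total = base * (1 + ratio)
    print(f"{ratio}:1 match -> ${total / 1e6:.0f}m/y total")  # $180m/y vs $360m/y
```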
