Thinking about AI. Trying to build a Rat, EA, TPOT meetup scene in Asheville, NC
Hey Sophie, great post, I fully agree.
I'm working with a few other people (including a funder) to build infrastructure that helps new individual donors make smaller, quicker, and more decorrelated AI safety grantmaking bets.
I think it'd be great to chat to see what you're working towards - I saw this on your profile: "I'm in the early stages of co-founding a donor advisory initiative focused on neglected areas within AI safety, and would appreciate connections with anyone in the grantmaking/fundraising space!"
Which metric would you use to compare welfare across species?
I don't think we know enough about consciousness/qualia/etc. to say anything with conviction about what it's like to be a nematode. And operationally, I don't think you'll be able to convince enough people/funders to take real action on soil animals, because it's just too epistemically unsound and doesn't fit into people's natural worldviews.
When I say net negative, I don't mean if you try to help soil animals you somehow hurt more animals on the whole.
I mean that you will turn people away from the theory of animal suffering, because advocating for soil animals will make them think the field/study of animal suffering as a whole is less epistemically sound, or less common sense, than they previously thought.
I'm going to write a post about this next week, but consider the backlash on Twitter regarding Bentham's Bulldog's post about bees and honey. More people came out in force against him than for him. I think that post, for instance, reduced the appetite for animal suffering discussion/action.
Thank you!
On the code sharing: yes, I thought about it, but it would take us a bit of effort to pull it all together and publish it online, and I didn't want to spend that effort if no one was going to get value from it. So far, no one has mustered the courage (and 3 seconds of effort) to leave a comment asking for the data/code (or, more likely, people just don't want to spend the time wading through it).
On nematodes, I think "169x the total number of neurons compared to humans" is a poor/confused way to attempt to measure total welfare. And I think the second-order effects of trying to convince people they should care about nematodes (unless they are already diehard EA) are likely net negative for the animal suffering cause at large.
From Rob (waiting for his comment to be approved):
Thanks for trying Winnow! My guess is that you were redirected to the homepage after logging in and created a fresh document (no reviewers included by default). Now that you're logged in, try creating a document directly from this page and it should work: https://www.winnow.sh/templates/ea-rationalist-writing-assistant
On the Egregore / religion part
I agree! Egregore is occult so definitely religion-adjacent. But I also believe EA as a concept/community is religion-adjacent (not necessarily in a bad way).
It's a community with a shared ethical belief system, suggested tithing, a sense of purpose/meaning, etc.
Funny - I don't think it reads as written by a critic, but definitely as a pointed, somewhat neutral third-party outsider analysis.
I do expect the Egregore report to trigger some people (in good and bad ways, see the comment below about feeling heard). The purpose is to make things known that are pushed into the shadows, the good and the bad. Usually things are pushed into the shadows because people don't want to or can't talk about them openly.
Hey Chris, thanks for commenting!
Do you mean downsides of building this platform at all? Like, making the ecosystem more legible could make it easier for people to help as well as attack?
Or more like "some orgs will not want to be listed in the database for particular reasons"
Because that seems fine: if an individual, org, or project wants to be excluded, we'll keep them out.
I think one of the major bottlenecks in AI safety is that we don't have enough grantmakers to route resources (capital and compute), especially to smaller orgs / projects, so anything that helps that flow could be super high leverage.
I guess potential downsides include:
the reviews / signal in the database are low quality or actively harmful in a way that makes it more likely that net negative projects get funding
public criticism could harm people and projects (we haven't yet figured out how we want to handle negative comments / disendorsements. Seems like it could be good signal, but is somewhat risky. Probably pretty easy to set up an AI reviewer for comments, or make them semi-private, only shown to verified funders or something. Not sure yet; this is something we're going to think more about and maybe test out a few things.)
incentivizes Goodharting / gaming the platform / popularity contests
These all seem manageable with effort and iteration, though.
Curious what you're thinking, though!