Thinking about AI. Trying to build a Rat, EA, TPOT meetup scene in Asheville, NC
Which metric would you use to compare welfare across species?
I don't think we know enough about consciousness/qualia/etc. to say anything with conviction about what it's like to be a nematode. And operationally, I don't think you'll be able to convince enough people/funders to take real action on soil animals, because it's just too epistemically unsound and doesn't fit into people's natural worldviews.
When I say net negative, I don't mean that if you try to help soil animals you somehow hurt more animals on the whole.
I mean that you will turn people away from the theory of animal suffering, because advocating for soil animals will make them think the field/study of animal suffering as a whole is less epistemically sound, or less common sense, than they previously thought.
I'm going to write a post next week about this, but consider the backlash on Twitter regarding Bentham's Bulldog's post about bees and honey. More people came out in force against him than for him. I think that post, for instance, reduced the appetite for animal suffering discussion/action.
Thank you!
On the code sharing: yes, I thought about it, but it would take us a bit of effort to pull it all together and publish it online, and I didn't want to spend that effort if no one was going to get value from it. So far, no one has found the courage and 3 seconds of effort to leave a comment asking for the data/code (or, more likely, people just don't want to spend the time wading through the code/data).
On nematodes: I think "169x the total number of neurons compared to humans" is a poor/confused way to attempt to measure total welfare. And I think the second-order effects of trying to convince people they should care about nematodes (unless they are already diehard EAs) are likely net negative for the animal suffering cause at large.
From Rob (waiting for his comment to be approved):
Thanks for trying Winnow! My guess is that you were redirected to the homepage after logging in and created a fresh document (no reviewers included by default). Now that you're logged in, try creating a document directly from this page and it should work: https://www.winnow.sh/templates/ea-rationalist-writing-assistant
On the Egregore / religion part
I agree! Egregore is occult so definitely religion-adjacent. But I also believe EA as a concept/community is religion-adjacent (not necessarily in a bad way).
It's a community with a shared ethical belief system, suggested tithing, a sense of purpose/meaning, etc.
Funny - I don't think it reads as written by a critic, but definitely as a pointed outsider (somewhat neutral?) third-party analysis.
I do expect the Egregore report to trigger some people (in good and bad ways, see the comment below about feeling heard). The purpose is to make things known that are pushed into the shadows, the good and the bad. Usually things are pushed into the shadows because people don't want to or can't talk about them openly.
I'll let @alejbo take this question - I think it's a good one
Although at a high level I somewhat disagree with "I don't think chatbots are very good at epistemology" (my guess would be they're better than you think), I agree they're not perfect or amazing.
But as you admit, most humans aren't either, so it's already a low bar
I'd ask you to consider: when have you ever taken action because of a 0.1–1 minute video?
I think basically no one takes action from any video, even 30 min high quality YouTube videos.
But what you get when you have someone watch your 1 min video is that their feed will steer in that direction and they will see more videos from your channel and other AI Safety-aligned channels.
I think this might be where a lot of the value is.
If you can get 20M people to watch a few 45 second videos, you are making the idea more salient in their minds for future videos/discussions and are bending their feed in a good way.
If someone watches a 30 min YouTube video, it's because they're already bought into the idea and just want to stay abreast.
I would rather The Inside View get 20M 25 second views (8.3×10^6 mins) than The Cognitive Revolution get 500k 30 minute views (1.5×10^7 mins) because I think a lot of Cognitive Revolution viewers are already taking action or just enjoy the entertainment, etc.
On TikTok you might be the only AI Safety content someone sees (a huge boost from literally zero awareness for millions of people), while the marginal Cognitive Revolution video might expand the concept to maybe a thousand new people.
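(A quick sketch of the watch-minute arithmetic behind the comparison above, using the hypothetical view counts and durations from my example rather than real channel stats:)

```python
# Rough check of the watch-minute comparison (hypothetical numbers from the comment).
short_views = 20_000_000        # e.g. The Inside View: 20M short-form views
short_minutes = 25 / 60         # 25-second views, converted to minutes
long_views = 500_000            # e.g. The Cognitive Revolution: 500k views
long_minutes = 30               # 30-minute views

short_total = short_views * short_minutes   # ~8.3 x 10^6 watch-minutes
long_total = long_views * long_minutes      # = 1.5 x 10^7 watch-minutes

print(f"short-form total: {short_total:.2e} minutes")   # ~8.33e+06
print(f"long-form total:  {long_total:.2e} minutes")    # 1.50e+07
```

So the long-form channel actually wins on raw watch-time by roughly 2x; the case for preferring the short-form views rests on the per-viewer counterfactual (reaching people at zero awareness and steering their feeds), not on total minutes.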
whoops, fixed!