Hello! I'm Toby. I'm a Content Strategist at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter, curating Forum Digests, making moderator comments and decisions, and more.
Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects within the EA space. Recently I helped run the Amplify Creative Grants program, to encourage more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog, and my podcast feed.
Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.
Planning to post the announcement today. Currently a little confused about whether to refer to transformative AI or artificial general intelligence. Transformative AI assumes a certain worldview, or at least assumes that AI will be transformative in some fairly radical way. If we discuss AGI, we might just be talking about a slightly more productive humanity and whether that will be good for animals, which feels like a much less interesting question in comparison.
There have been a lot of posts over the last couple of weeks, and when I've been putting together the Digests, I've seen several which seem criminally underrated.
I'm quick-taking to remind you of the 'customize feed' feature. The link is at the top of the frontpage - click it to decide how your frontpage weights posts on different topics. If Forum readers used this more, there would be fewer underrated posts (I think!).
I'm interested in finding ways the EA Forum could better promote cross-cause (as well as within-cause) prioritisation, for example through events, or even commissioning posts. I'd be very happy to hear ideas from Forum readers or the authors of this post.
I'd also be interested in hearing if you think the EA Forum isn't (and/or could not be) suited for this purpose. FWIW my guess is that the Forum would be better at surfacing ideas, critiques and considerations, and then someone generally has to be paid for very sustained focus on a prioritisation question (though with AI progress, who knows).
I wasn't sure at first because it seemed so simple - but that formula seems to work really well.
One possible downside is that its simplicity means the audience has to make more inferential leaps themselves to understand what we are getting at with the statement. But that's not necessarily a bad thing - it's good if the audience has to be a bit engaged in order to vote.
Thanks Andrew. Most of what I can say is this:
As with most internal wranglings, there isn't much that it's rational for me to share here, so I probably can't answer your further questions.
I'd like to make it very clear that Sarah and I think it is important that the EA movement has these conversations (and that we both favour transparency where possible). As of now, all I can say is that:
- We may be able to run one of the top two voted debates later in the year. I'll let you know if this becomes possible.
- Our policy on politics on the EA Forum remains this.
Just to add - this is frustrating to us too, and I am hopeful that we will be able to host this debate week later in the year. This was not an arbitrary decision - and I would prefer if it made sense for us to share more, but it does not.
I really appreciate this Seth :) The norms that Mo links below are the official answer to this; we use them when we moderate. We're working on a version which is shorter and catchier, so it can appear in some form for new users.
So far I've been sending people messages similar to your gentle counselling. Perhaps I should consider doing them as public comments - it looks like yours have gone down well.
I think the question is still worth discussing if you believe that AI progress will be much more gradual, or will stall out at humanish levels of intelligence.
Interesting. I was imagining that the question would have to be about some sort of locked-in superintelligence. If we are talking about AGI systems which aren't drastically affecting the priorities that humanity has for itself, the question seems like a very obvious no (in other words - no, AGI won't be good for animals, or bad for them).
And then there's the typical question of what "aligned" means: aligned to whom, or to what?
You're right - it'd be frustrating if we just ended up having this debate for a week. That's what tempts me about "AGI which doesn't cause human extinction or disempowerment" (though those terms are ambiguous too, of course).
Thanks Edo!
I like that debate topics aren't overly operationalized.
I agree, but there are better and worse ambiguities to spend our time discussing. For example, "What is AGI?" is a rabbit hole, but ultimately not that interesting or action-relevant.
"Sentient beings" - Here I think the discussion should be contained to nonhuman animals because the other case seemed to be handled in the previous AI welfare debate.
I'm definitely leaning this way too.
I think that an operationalization which is too close to people's actual decisions may cause more people to defend their existing views or to take a stance based on what's more salient.
Yes, my ideal would always be that someone discusses a crux, arrives at an answer, and only then realises that it should influence their cause prioritisation.
I'm curating this because it strikes me as an important piece of collective knowledge, and I'd like to further incentivise people to share takes like this.