Hello! I'm Toby. I'm Content Strategist at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more.
Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects within the EA space. Recently I helped run the Amplify Creative Grants program, to encourage more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog, and my podcast feed.
Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.
Thanks for writing this, Nick - the comments/post graph is really useful to me, and definitely dismaying to see. I wonder what the trends look like for other cause areas (I might try using Claude Code to figure out an answer to that myself).
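For anyone curious what that analysis might look like, here's a minimal sketch of the comments-per-post calculation on made-up data (the topic names and numbers below are purely hypothetical; in practice you'd substitute an export of real Forum post data):

```python
from collections import defaultdict

# Hypothetical export: (year, topic, n_posts, n_comments) rows.
rows = [
    (2022, "global-health", 120, 1500),
    (2023, "global-health", 90, 900),
    (2022, "ai-safety", 200, 3000),
    (2023, "ai-safety", 260, 4100),
]

# Comments-per-post ratio, grouped by topic and year.
ratio = defaultdict(dict)
for year, topic, posts, comments in rows:
    ratio[topic][year] = comments / posts

for topic, by_year in sorted(ratio.items()):
    print(topic, {year: round(r, 2) for year, r in sorted(by_year.items())})
```

With real data you'd want per-month granularity and a plot rather than printed ratios, but the grouping logic is the same.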
I'd personally be sad to see an end to GHD conversation on the Forum, and I'm keen to hear more ideas for re-igniting it. For example: events we could run (debate weeks, theme weeks), writers we should cross-post to the Forum (in the hope of enticing them on to comment), etc.
FWIW, there will be a week-long GHD-related event in April which should prompt some discussion. More details coming soon, once it's confirmed.
Thanks Zach - would it also avoid the issue if I just removed the 'probably'? Then a weaker agree-vote would more naturally indicate uncertainty.
(Options that don't require more coding are preferred).
Edit: this'll be my plan until I hear otherwise
If you work in an office with other EAs / interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations and, even better, some counterfactual Forum posts.
PS: I'm aware this looks a bit like 'people selling mirrors'.
Thanks for taking this so well Anna!
I felt a similar way - I see so much AI text online that I now struggle to read it when I come across it. However, I can also see that a lot of other readers don't have this reaction, so take this with a pinch of salt.
If you're looking for tips at all, I'd recommend:
a) Taking a post like this as your penultimate draft, and then writing a much shorter post in your own words based on it. OR...
b) Making sure your system prompt contains a distilled version of this page as a 'what not to do'. This is the quickest way to ensure your text doesn't come across as too AI-written.
Also - thanks for disclosing the LLM use, that made me trust the content much more.
Planning to post the announcement today. Currently a little confused about whether to refer to transformative AI or artificial general intelligence. Transformative AI assumes a certain worldview, or at least assumes that AI will be transformative in some fairly radical way. If we discuss AGI, we might just be talking about a slightly more productive humanity and whether that will be good for animals, which feels like a much less interesting question in comparison.
There have been a lot of posts over the last couple of weeks, and when I've been putting together the Digests, I've seen several which seem criminally underrated.
I'm quick-taking to remind you of the 'customize feed' feature. The link is at the top of the frontpage - click it to decide how your frontpage weights posts on different topics. If Forum readers used this more, there would be fewer underrated posts (I think!).
"Does this require one to believe that sentience is limited to/substantially more likely in biological neurons than silicon?"
"more likely" yes, "limited to" no.
I'd also add that this is a reasonable stance even for people who put a lot of credence in physicalist/functionalist theories. Whatever your theoretical commitments, we know with more certainty than almost anything else that human brains can support consciousness, so it makes sense to be particularly worried here.