Update, 12/7/21: As an experiment, we're trying out a longer-running Open Thread that isn't refreshed each month. We've set this thread to display new comments first by default, rather than high-karma comments.
If you're new to the EA Forum, consider using this thread to introduce yourself!
You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
(You can also put this info into your Forum bio.)
If you have something to share that doesn't feel like a full post, add it here!
(You can also create a Shortform post.)
Open threads are also a place to share good news, big or small. See this post for ideas.
I find it a bit frustrating that most critiques of AI safety work, or of longtermism in general, seem to start by constructing a strawman of the movement. I've read a ton of writing by self-described longtermists, and would consider myself one, and I don't think I've ever heard anyone seriously propose reducing existential risk by 0.0000001 percent instead of lifting a billion people out of poverty. I'm sure someone has, but it's certainly not a mainstream view in the community.
And as others have rightly pointed out, there's a strong case for caring about AI safety, engineered pandemics, and nuclear war even if all you care about is the people alive today.
The critique also engages in guilt by association, trying to discredit the movement by linking it to figures the author knows are unpopular with their audience.