Anonymous feedback form: https://www.admonymous.co/kuhanj
To add a bit of context in terms of on-the-ground community building: I've been working on EA and AI safety community building at MIT and Harvard for most of the last two years (including now), though I have been more focused on AI safety field-building. I've also helped out with advising for university EA groups and with workshops/retreats for uni group organizers (both EA and AI safety), organized residencies at a few universities to support beginning-of-year EA outreach in 2021 and 2022, and worked on other miscellaneous EA CB projects (e.g. working with the CEA events team last year).
I do agree though that my experience is pretty different from that of regional/city/national group organizers.
I would guess the ratio is pretty skewed in the safety direction (since uni AIS CB is generally not counterfactually getting people interested in AI when they previously weren't; if anything, EA outreach might have more of that effect), so maybe something in the 1:10 - 1:50 range (1:20ish point estimate for the ratio of median capabilities research contribution to median safety research contribution from AIS CB)?
I don't really trust my numbers though. This ratio is also more favorable now than I would have estimated a few months/years ago, when contribution to AGI hype from AIS CB would have seemed much more counterfactual (but also AIS CB seems less counterfactual now that AI x-risk is getting a lot of mainstream coverage).
Thank you for all your encouragement over the past few years for students and newer community members to post on the forum, and for actually making it easier and less scary to do so. I definitely would not have felt anywhere near as comfortable getting started without your encouragement and post editing offers. I've replaced Facebook binging with EA Forum binging since I both enjoyed it so much and found it really valuable for my learning. You will be missed, and incredibly hard to replace. Thank you for all your hard work!
Hi Michael, thanks for writing this up! These are important topics, and I'd love to see more discussion of them. Just want to clarify two potential misconceptions. First, I don't think it's no longer hard to get a direct work job, although I do feel reasonably confident that it isn't as hard to get funding to do direct work as it was a few years ago (whether through employment or grants, though I would probably still stand by this statement if we were only considering employment). Secondly, on this part:
Kuhan mentioned that it's not easy to get an EA job if you're not willing to work that hard, both working hard during the job and preparing to get the job.
Is it the case that if you're hard-working and motivated and aligned with the values of the organizations you're applying for, then it's not that hard to get a job that works on a top cause?
There may have been some miscommunication in our conversation - I didn't mean to imply that just being willing to work hard is enough to get a direct work job, or that people who aren't able to get direct work positions fall short because of their work ethic. What I meant to communicate is that I've found that individuals who have a strong understanding of EA ideas, take actions (especially career planning) based on those ideas, and have a strong work ethic have had a lot of success finding direct work opportunities (through applying to jobs at EA orgs, applying for grants to run projects/do research/etc., and starting new organizations).
Seems worth trying! I'd be interested in reading a write-up if you decide to run it.