It's great to hear that being on the front foot and reaching out to people with specific offers has worked for you.
I actually want to push back on your advice for many readers here. I think for many people who aren't getting jobs, the reason is not that the jobs are too competitive, but that they're not meeting the bar for the role. This seems more common for EAs with little professional experience, as many employers want applicants who have already been trained. In AI Safety, it also seems that, for some parts of the problem, an exceptional level of talent or skill is needed to meaningfully contribute.
In addition to applying for more jobs or reaching out to people directly, I'd also recommend:
I realise short timelines make this all much harder, but I do think many people early in their careers do their best work in the environment of an organisation, team, manager, etc.
Hi David, if I've understood you correctly, I agree that a reason to return home can be other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, which Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine
I do think the messaging is a little gentler than it used to be; for example, the 80k content and a few forum posts emphasise that there are a lot of reasons to make life choices besides impact, and that that is ok. This is hard to get right with written content aimed at a broad audience, because some people probably need to hear the message to sacrifice a little more, and some a little less.
I think the moment you try to compare charities across causes, especially ones that rest on harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how shaky any seemingly solid numbers are, how much they rest on uncertain philosophical assumptions, and how wide the error margins are. I think at that point you're either left with worldview diversification or some incredibly complex, as-yet-not-very-well-settled cause prioritisation.
My understanding is that all of the EA high-net-worth donor advisors, like Longview, GiveWell, Coefficient Giving, Senterra Funders (the org I work at), and many others, are able to pitch their various offers to folks at Anthropic.
What has been missing is a recommended cause prio split and/or resources, though some orgs are starting to work on this now.
I think that any attempt to systematise this, where you complete a quiz and it gives you an answer, is too superficial to be useful. High-net-worth funders need to decide for themselves whether they trust specific grant makers, beyond whether those grant makers are aligned with their values on paper.