
ElliotTep

1396 karma · Joined

Comments (46)

Oh, this is nice to read, as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP's moral weights project).

Some rough thoughts: It's when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan a task (although some have tried). I also think here the uncertainty is so large that it's harder to defer to experts in the way that one can defer to GiveWell if they care about helping the world's poorest people alive today.

But I do agree that people need a way to decide, and Anthropic staff are incredibly time-poor and some of these interventions are very time sensitive if you have short timelines, so that raises the question: if I'm recommending worldview diversification, which cause areas get attention and how do we split among them?

I am legitimately very interested in thoughtful quantitative ways of going about this (my job involves a non-zero amount of advising Anthropic folks). Right now, it seems like Rethink Priorities is the only group doing this in public (e.g. here). To be honest, I find their work has gone over my head, and while I don't want to speak for them, my understanding is they might be doing more in this space soon.

I think the moment you try to compare charities across causes, especially for the ones that rest on harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how shaky any solid-looking numbers are, how much they rest on uncertain philosophical assumptions, and how wide the error margins are. I think at that point you're either left with worldview diversification or some incredibly complex, as-yet-unsettled cause prioritisation.

My understanding is that all of the EA high net worth donor advisors, like Longview, GiveWell, Coefficient Giving, Senterra Funders (the org I work at), and many others, are able to pitch their various offers to folks at Anthropic.

What has been missing is a recommended cause prio split and/or resources, though some orgs are starting to work on this now.

I think that any way to systematise this, where you complete a quiz and it gives you an answer, is too superficial to be useful. High net worth funders need to decide for themselves whether or not they trust specific grant makers beyond whether or not those grant makers are aligned with their values on paper. 

It's great to hear that being on the front foot and reaching out to people with specific offers has worked for you.

I actually want to push back on your advice for many readers here. I think for many people who aren't getting jobs, the reason is not that the jobs are too competitive but that they're not meeting the bar for the role. This seems more common for EAs with little professional experience, as many employers want applicants who have already been trained. In AI safety, it also seems like for some parts of the problem, an exceptional level of talent or skill is needed to meaningfully contribute.

In addition to applying for more jobs or reaching out to people directly, I'd also recommend:

  1. Broadening your search to a wider array of roles.
  2. Applying to impactful work that is not on the 80k job board; most impactful jobs aren't at orgs where most people are EAs.
  3. Getting a few years of training under your belt and coming back to these jobs with, I think, a much higher chance of success (see my post here).

I realise short timelines make this all much harder, but I do think many people early in their career do their best work in the environment of an organisation, team, manager, etc.

As someone who participated in a name change recently, I can assure you the pros and cons of this name versus other contenders were probably discussed ad nauseam by the team involved, and they decided on this name despite the nerdy and clunky vibe.

Answer by ElliotTep

Approximately how much absorbency/room for more funding is there in each cause area? How many good additional opportunities are there beyond what is currently being funded? How steep are the diminishing returns for an additional $10m, $50m, $100m, $500m?

Thanks for writing this. As someone who feels more at home in EA spaces, I do sometimes feel that EAs are pretty critical of rationalist sub-culture (often reasonably) but take for granted the valuable things rationalism has contributed to EA ideas and norms.

Hi David, if I've understood you correctly, I agree that a reason to return home was for other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, which Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine

Ah man I feel you. To be honest I've been avoiding the abyss recently with some recent career vs family dilemmas. Lemme know if you want to have a chat sometime.

For sure. I think Chana does a good job of talking about some of the downsides of living in a hub similar to what you mention: https://forum.effectivealtruism.org/posts/ZRZHJ3qSitXQ6NGez/about-going-to-a-hub-1

Wow, that's gotta be one of the fastest forum-post-to-plan-change turnarounds on record. I'm glad to hear this resolved what sounds like a big and tough question in your life. As I mentioned in the post, I do think stints in hubs can be a great experience.
