
I'm currently helping assess 80,000 Hours' impact over the past 2 years.

One part of our impact is ways we influence the direction of the EA community.

By "direction of the EA community," I mean a variety of things like:

  • What messages seem prevalent in the community
  • What ideas gain prominence or become less prominent
  • What community members are interested in

To get a better understanding of this, I'm gathering thoughts from community members on how they perceive 80,000 Hours to have influenced the direction of the EA community, if at all.

If you have ideas, please share them using this short form!

Note we are interested to hear about both positive and negative influences.

This isn’t supposed to be a rigorous or thorough survey -- but we think we should have a low bar for rapid-fire surveys of people in the community that could be helpful for giving us ideas or things to investigate.

Thank you! Arden

Comments (21)



The deprioritization of non-longtermist issues, orgs, and paths. To me this was unwarranted and didn't necessarily reflect the views of the EA community. I suspect it led to some division, and to people interested in global health, suffering, animals, etc. feeling pushed around and devalued. This may have driven the movement to become more longtermist-dominated as those people engage less.

I agree with the increased focus on longtermism, but I would also like 80,000 Hours to minimise any feelings of division/devaluing, as long as they can do so whilst remaining true to what they believe.

Thanks David - it seems like an important harm to consider if we've caused people who'd otherwise be doing valuable work in global health / animal welfare / other issues to leave the EA community, or not do as valuable work.

I didn't know the job board did this! That is pretty terrible.

Well, the job board still lists the “neartermist” jobs, but the top orgs and pressing problems lists do impose these two separate tiers.

https://80000hours.org/problem-profiles/

Animal welfare is not even in the first or second tier.

Like, literally nanotech is beating it out, as well as "malevolent actors", and "improving governance of public goods".

*Opens "top recommended organizations"*

*Pauses*

*Breathes deeply*

A bit to unpack here, but yeah, forecasting is beating out global health and wellbeing for top priority.

The smells are large, e.g. shipping the org chart, etc.

This way of breaking things down is very confusing to me. It seems weird to have some of the listed areas be role types and some be cause areas. E.g. the grantmaker and EA org employee sections also include GiveWell.

Thanks Rebecca, I see how that's a confusing way to organise things -- will pass on this feedback.

I was also going to say that it's pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?

  1. The rise and fall of earning-to-give (although over a longer time period than the past two years)
  2. Climate change being important but not a top priority (I know this view is held more generally but the 80k problem profile is always the linked-to source for it)
  3. I would speculate the job board is probably very important for matching impactful orgs with EA job seekers, but I have no evidence of that

I would speculate the job board is probably very important for matching impactful orgs with EA job seekers, but I have no evidence of that

The 80K job board is the second most important referral source of applicants to Rethink Priorities, behind only personal referrals (e.g., inviting specific people to apply and asking people to refer candidates for us to invite to apply). It also leads the third-best source by a large margin; I think that's Twitter, but I'm not sure.

Thanks! This is helpful.

I'd suggest changing the first short answer question on the google form to a longer response. Right now I can only see part of the sentence I'm typing.

Fixed - thank you!

(I submitted this to your form, figured I could also write it here for further discussion):

I think 80k has provided a clear and easy-to-consume career guide, which has influenced the conversation. There are a set of careers/cause areas which are legibly high priority and thus approved by 80k. This has the effect of nudging people into the approved careers and discouraging them from everything else.

I suspect this has both positive and negative effects. The positive ones are the first-order effects: hopefully people actually make better career choices and this helps the world. I have some fears about the second-order effects though. Mainly, I worry that (mainly through social dynamics) some people are pushed out of careers where they would actually have more impact, by moving into careers where they can't thrive as well, and thus grow less and end up making less of a difference in the world. It's hard to judge this impact on those people's lives, since it shows up slowly and over time.

Thanks! Agree about there being tradeoffs here. Curious if you have more to say on this:

Mainly, I worry that (mainly through social dynamics) some people are pushed out of careers where they would actually have more impact, by moving into careers where they can't thrive as well

Am I right in thinking that the worry is that, by raising the status of some careers, 80k creates social pressure to do those rather than the one you have greater personal fit for?

(Do you think there’s a (reasonable) amount of emphasis on personal fit we could present which would mostly ameliorate your worries on this?)

I think 80k has tried to emphasize personal fit in the content, but something about the presentation seems to dominate the content, and I think that is somehow related to social dynamics. Something seems to get in the way of the "personal fit" message coming through; I think it is related to having "top recommended career paths". I don't know how to ameliorate this, or I would suggest it directly.

I'm sure this is frustrating to you too, since like 90% of the guide is dedicated to making the point that personal fit is important; and people seem to gloss over that.

One thing that could help would be eliminating the "top recommended career paths" part of the website entirely. That will be very unsatisfying to some readers, and possibly reduce the 'virality' of the entire project, so may be a net bad idea; but it would help with this particular problem. I am afraid I don't have any better ideas.

I agree this is an important point, but also think identifying top-ranked paths and problems is one of 80K's core added values, so don't want to throw out the baby with the bathwater here.

One less extreme intervention that could help would be to keep the list of top recommendations, but not rank them. Instead 80K could list them as "particularly promising pathways" or something like that, emphasizing in the first paragraphs of text that personal fit should be a large part of the decision of choosing a career and that the identification of a top tier of careers is intended to help the reader judge where they might fit.

Another possibility, I don't know if you all have thought of this, would be to offer something that's almost like a wizard interface where a user inputs or checks boxes relating to various strengths/weaknesses they have, where they're authorized to work, core beliefs or moral preferences, etc., and then the program spits back a few options of "you might want to consider careers x, y, and z -- for more, sign up for a session with one of our advisors." Then promote that as the primary draw for the website more than the career guides. Just a thought?
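To make the idea concrete, here's a rough sketch (in Python, with entirely made-up career names, tags, and scoring) of the simplest matching logic such a wizard could run behind the checkboxes; a real version would obviously need 80k's own data and something much more considered than tag overlap.

```python
# Hypothetical sketch of the matching logic such a wizard might use. All career
# names, tags, and the tag-overlap scoring are invented for illustration; they
# are not 80,000 Hours data or recommendations.
from dataclasses import dataclass, field


@dataclass
class CareerOption:
    name: str
    tags: set = field(default_factory=set)  # strengths, locations, beliefs it fits


# Invented catalogue entries for the sketch.
CAREERS = [
    CareerOption("Policy analysis", {"writing", "us_work_authorisation", "longtermist"}),
    CareerOption("Global health research", {"quantitative", "neartermist"}),
    CareerOption("Operations at an EA org", {"organised", "generalist"}),
]


def suggest_careers(user_tags: set, top_n: int = 3) -> list[str]:
    """Return up to top_n careers whose tags overlap most with the user's checkboxes."""
    scored = sorted(CAREERS, key=lambda c: len(c.tags & user_tags), reverse=True)
    return [c.name for c in scored[:top_n] if c.tags & user_tags]


# Example: a user ticks "quantitative" and "neartermist".
print(suggest_careers({"quantitative", "neartermist"}))
# -> ['Global health research']  (then: "sign up for a session with one of our advisors")
```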

I think more emphasis on what makes a fulfilling career, as distinct from personal fit (which I take to mean 'chance of being excellent at this'), would help ameliorate this and similar worries. This could just mean signal-boosting more of your research on what makes a fulfilling career.

One element of personal fit that’s not mentioned is the choice to have kids / become a primary caregiver for someone — see bessieodell’s great post. Current impact calculations don’t include this by default, which I think creates a cultural undercurrent of “real EAs don’t factor caregiving into their careers.”

Post here: https://forum.effectivealtruism.org/posts/ahne8S7JdmjmjHieu/does-effective-altruism-cater-to-women
