
I wrote this post for my personal Facebook and it was well received, so I thought it could be useful to people here on the EA Forum as well.


My impression is that many people whose top career goal is 'improve the long-term future of humanity' are overly focused on working at a handful of explicitly EA/longtermist/AI-related organisations.

Some of those projects are great but it would be both crazy and impossible to try to cram thousands of people into them any time soon.

They're also not the natural place for most people to start their career, even if they might want to work at them later on.

The world is big, and opportunities to improve humanity's long-term prospects are not likely to be concentrated in just a handful of places we're already very familiar with.

Folks want to work on these projects mostly because they are solid opportunities to do good, but where does the narrow focus on them come from? I'm not sure, but some drivers might include:

  • They mostly publish and promote what they do, making them especially visible online.
  • It's fun to work with colleagues you already know, or who share your worldview.
  • They don't require people to pioneer their own unique path, which can be intimidating and just outright difficult.
  • They feel low-risk and legitimate. People you meet can easily tell you're doing something they think is cool. And you might feel more secure that you're likely doing something useful or at least sensible.
  • 80,000 Hours and others have talked about them more in the past.

For a while we've been encouraging readers/listeners to broaden the options they consider beyond the immediately obvious options associated with the effective altruism community. But I'm not always sure that message has cut through enough, or been enough to overcome the factors above.

I worry the end result is i) too little innovation or independent thinking, ii) some people not finding impactful jobs as they keep applying for a tiny number of positions they aren't so likely to get or which aren't even a good fit, and iii) people building less career capital than they otherwise might have.

Additional problems

First, to give readers some ideas, 80,000 Hours recently put up this list of problems which might be as good to work on as the 'classics' we've written about the most:

  • Measures to reduce the chance of ‘great power’ conflicts
  • Efforts to improve global governance
  • Voting reform
  • Improving individual reasoning
  • Pioneering new ways to provide global public goods
  • Research into surveillance
  • Shaping the development of atomic scale manufacturing
  • Broadly promoting positive values
  • Measures to improve the resilience of civilization
  • Reduction of s-risks
  • Research into whole brain emulation
  • Measures to reduce the risk of stable totalitarianism
  • Safeguarding liberal democracy
  • Research into human enhancement
  • Designing recommender systems at top tech firms
  • Space governance
  • Investing for the future.

The write-up on each is brief, but might be enough to get you started doing further research.

Additional career paths

Second, there's a new list of other career paths we don't know a tonne about, or which are a bit vague, but which we expect at least a few readers should take on:

  • Become a historian focusing on large societal trends, inflection points, progress, or collapse
  • Become a specialist on Russia or India
  • Become an expert in AI hardware
  • Information security
  • Become a public intellectual
  • Journalism
  • Policy careers that are promising from a longtermist perspective
  • Be research manager or a PA for someone doing really valuable work
  • Become an expert on formal verification
  • Use your skills to meet a need in the effective altruism community
  • Nonprofit entrepreneurship
  • Non-technical roles in leading AI labs
  • Create or manage a long-term philanthropic fund

There must be other things that should go on these lists — and some that should come off as well — but at least they're a start.

Again the description of each is brief, but hopefully serves as a launching pad for people to do more investigation.

(Credit goes to Arden Koehler for doing most of the work on the above.)

Additional jobs

Third, I don't know what fraction of people have noticed how many positions on our job board are at places they haven't heard of or don't know much about, and which have nothing to do with EA.

Some are great for directly doing good, others are more about positioning you to do something awesome later. But anyway, right now there are:

  • 131 on AI technical and policy work
  • 66 on biosecurity and pandemic preparedness
  • 11 on institutional decision-making
  • 95 on international coordination
  • 34 on nuclear stuff
  • 37 on other random longtermist-flavoured stuff

We've only got one person working on the board at the moment, so it's scarcely likely we've exhausted everything that could be listed either.

If nothing there is your bag, maybe you'd consider graduate study in econ, public policy, security studies, stats, public health, biodefence, law, political science, or whatever.

Alternatively, you could develop expertise on some aspect of China, or get a job with promotion possibilities in the civil service, etc, etc.

Which also reminds me of this list of ~50 longtermist-flavoured policy changes and research projects which naturally lead to lots of idiosyncratic career and study ideas.

Anyway, I'm not saying if you can get a job at DeepMind or Open Philanthropy that you shouldn't take it — you probably should — just that the world of work obviously doesn't start and end with being a Research Scientist at DeepMind or a Grant-maker at Open Phil.

There are ~4 billion jobs in the world, and more that could exist if the right person rocked up to fill them. So it's crazy to limit our collective horizons to, like, 5 at a time.

As I mention above, some of these paths can feel riskier and harder going than just working where your friends already are. So to help counter that, I suggest paying a bit more respect to the courage or initiative shown by those who choose to figure out their own unique path or otherwise do something different than those around them.

———

P.S. There's also a bunch of problems that some other people think are neat ways to improve our long-term trajectory about which I'm personally more skeptical — but maybe you agree with them not me:

  • More research into and implementation of policies for economic growth
  • Improving science policy and infrastructure
  • Reducing migration restrictions
  • Research to radically slow aging
  • Improving institutions to promote development
  • Research into space settlement and terraforming
  • Shaping lie detection technology
  • Finding ways to improve the welfare of wild animals
Comments (6)



Another potential cause of the narrow focus, I think, is some people in fact expecting the vast majority of impact to be from a small group of orgs they mostly already know about. Curious whether you disagree with that expectation (i.e., you think the impact distribution of orgs is flatter than that), or whether you're just claiming that e.g. the distribution of applicants should be flatter regardless?

It could also be the case that the impact distribution of orgs is not flat, yet we've only discovered a subset of the high-impact ones so far (speculatively, some of the highest-impact orgs may not even exist yet). So if the distribution of applicants is flatter, they are still likely to satisfy the needs of the known high-impact orgs, and others might end up finding or founding orgs that we later recognise to be high impact.

This is a great post, thanks for writing this up! 

I agree with the main point, and 80,000 Hours' webpage does make it clear that their top career recommendations (and the specific jobs in these areas that are highly concentrated in a few organizations) are pretty competitive, and most people in the EA movement are not going to be able to get into one of those. When planning my career, I factor in this possibility, but one problem I face is that I don't feel I know enough about these other possibilities, so there is a lot of uncertainty when I think about what I should do outside of the top career paths and top organizations.

I don't think the solution to this problem is for 80,000 Hours to try to discuss other problem areas and mention other EA-aligned organizations in more detail, because that would take a lot of effort. One thing that could be helpful, though, is to put more emphasis on the process people should go through when planning their careers, with more guidance on how to tackle problem areas that haven't been explored in much detail, how to explore areas that an EA thinks might be relevant but which haven't been explored at all, how to find organizations to work for in the problem areas they are interested in, and what to do if you can't get a job at an organization you really want to work for in the long term.

I believe it would also help to share the trajectories of people in the EA community who have done some innovative work, or people who managed to find jobs at EA-aligned organizations that the movement was previously unaware of, emphasizing how they approached the task. Facilitating networking between people in a certain problem area could also prove really helpful.

I'm not saying there isn't any content on these topics, just that in my experience writing up and improving my own career plan over a few years, I found it much easier to find EA material on why I should take a certain career path than on how to do it more concretely (besides working in top career paths and organizations). Based on my experience, I believe emphasizing these aspects more could go a long way toward helping people structure better career plans.

Hey, if anyone is interested or already immersed in engineering physical goods or supply chain/logistics as their skillset, I want to be your buddy. DM me!

“Designing recommender systems at top tech firms”

Semi-related and somewhat off-topic, so forgive me for following that different track – but I recently thought about how one of the major benefits of EAGx Virtual for me was that it worked as a recommender system of sorts, in the form of "people reading my Grip profile (or public messages in Slack) and letting me know of other people and projects that I might be interested in". A lot of "oh you're interested in X? Have you heard of Y and Z?" which often enough led me to new interesting discoveries.

I'm curious if there may be a better approach to this, rather than "have a bunch of people get together and spontaneously connect each other with facts/ideas/people/projects based on mostly random interactions". This current way seems to work quite well, but it is also pretty non-systematic, luck-based, and doesn't scale that well (it kind of does, but only in the "more people participate and invest time -> more people benefit" kind of way).

(That all being said, conferences obviously have a lot of other benefits than this recommender system aspect; so I'm not really asking whether there are ways to improve conferences, but rather whether there are different/separate approaches to connecting people with the information most relevant to them)
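
To make the idea concrete, here is a minimal sketch of what a more systematic version of that matching could look like: score pairs of attendees by the overlap of their stated interest tags, and suggest the highest-scoring introductions. Everything here (the names, the tags, the choice of Jaccard overlap as the score) is invented purely for illustration, not a description of any real conference tool:

```python
# Purely illustrative sketch: suggest introductions between attendees
# whose stated interest tags overlap. All profile data is made up.
from itertools import combinations

profiles = {
    "Alice": {"AI policy", "forecasting", "biosecurity"},
    "Bo":    {"biosecurity", "supply chains"},
    "Chen":  {"forecasting", "AI policy", "space governance"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Rank every pair of attendees by interest overlap, highest first.
pairs = sorted(
    ((jaccard(profiles[x], profiles[y]), x, y)
     for x, y in combinations(profiles, 2)),
    reverse=True,
)
for score, x, y in pairs:
    if score > 0:
        print(f"Suggest introducing {x} and {y} (overlap {score:.2f})")
```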

What about jobs in the field of education? I feel like there is a lack of discussion on teaching/working at public schools and its impact... I work at an elementary school, and oftentimes I feel (and understand to some extent) that these types of jobs appear rather “unattractive” to the public. Even I ask myself: is my role really necessary? My role might soon be completely redundant, especially in our technological age where all the information is available at our fingertips. And it seems like it's more and more going this way: remote teaching, home-schooling, smaller learning groups, etc. I am very interested in how the education system will evolve.
