80,000 Hours has a lot of great research on promising career fields for effective altruists. But one thing I've discovered while doing my own career planning is that the variation between opportunities within a single field seems to matter just as much as the variation between fields. Opportunity-level analysis of job prospects is a great complement to field-level overviews, and I think it can significantly improve career decisions.

As a case study, consider someone deciding between software engineering and academic mathematics. If you look at the typical person going into each of these fields, software engineering seems like a much more desirable choice from an EA perspective. As an industry, software pays better, is far less competitive, has more opportunities to do directly impactful work, and grants more career capital for the same level of ability. So you'd probably advise most people to pick software engineering over math academia.

But looking only at the typical cases throws away a lot of information. The average opportunity for someone with a math degree isn't necessarily the opportunity you're most likely to take. I know a number of academic mathematicians with EA tendencies, and most of them help run SPARC, a CFAR-funded summer program that introduces mathematically talented high-schoolers to a range of ideas, many of them EA-related. (For various reasons, it seems to me that SPARC probably wouldn't have succeeded without some academic mathematicians on board.) I think SPARC is an extremely good idea, and I would be pretty sad if its instructors had all gone into software development instead. In other words, what matters isn't just the average opportunity in a field, but also the opportunities that you in particular are likely to be able to find.

Furthermore, the quality of the individual opportunities available in a given field seems to vary wildly, and randomly, from person to person. For instance, when I was looking around for software jobs last year, the job that I took looked easily twice as good as my next-best alternative. Someone with skills similar to mine who wasn't as lucky while looking for software jobs might not have found it--and if so, they would probably have found better opportunities in a completely different field. Similarly, a number of my friends have applied for academic research positions, and I've repeatedly seen one person find opportunities that seem much better than those found by another person with virtually the same aptitudes.

All this means that basing your career decision purely on field-level analysis seems likely to miss some potentially great career paths. To sum up, here are some pieces of career choice advice that I think are currently underrated:

  • When looking for a job, it's worth searching very hard for better opportunities. If there's a large random component to the opportunities available to you, then simply looking at a larger sample of opportunities is likely to surface better ones (see the sketch after this list). To find my current job, I not only asked all my friends if they knew companies that would be good opportunities--I asked some of their friends and their friends' friends too. It was only at the third degree out that I found the place I ended up working.

  • If you're an unusually good fit for a field that isn't big with the EA crowd, look into it anyway. Even if the average opportunity in a field doesn't look that great, being really awesome at something tends to bring up cool opportunities almost regardless of what the thing is. Just like the mathematicians were able to start SPARC, or like Toby Ord and Will MacAskill have used their academic positions well, there seem to be a lot of benefits to excelling in almost any field.

  • Skills that expand the set of opportunities available to you are extra-valuable. For instance, in college, I might have been better served by taking fewer super-advanced math classes and more classes that would give me enough grounding in other fields to add some value in them. There's a balance to be struck here--it wouldn't be helpful to be a total dilettante, so it's probably better to branch out into neighboring fields than into something different altogether. For instance, as a math and computer science major I probably wouldn't have gotten very much out of taking a couple of journalism classes, but I might have tried out robotics, computational biology, or economics.
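To illustrate the statistical point in the first bullet, here's a minimal simulation sketch in Python. All the numbers are made up: I'm modeling offer quality as a heavy-tailed lognormal draw, which is just one plausible stand-in for the random variation described above. It shows how the expected quality of the best offer you see grows as you examine more opportunities:

```python
import random
import statistics

random.seed(0)  # reproducible results

def best_of(n, trials=10_000):
    """Average quality of the best offer after examining n offers.

    Offer "quality" is drawn from a lognormal distribution--a made-up
    stand-in for the large, random variation in opportunity quality
    described in the post.
    """
    return statistics.mean(
        max(random.lognormvariate(0, 1) for _ in range(n))
        for _ in range(trials)
    )

for n in (1, 3, 10, 30):
    print(f"best of {n:2d} offers: {best_of(n):.2f}")
```

Under these assumptions, examining ten offers instead of one roughly triples the expected quality of the offer you end up taking. The exact numbers don't matter; the point is that taking the best of a larger sample helps a lot when variation is large and heavy-tailed.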

Comments



The main impact of early choices in a career may be determining what skills you develop, who you know, what you are respected for, and so forth. The value of these resources depends on how useful they will be down the line, i.e. on what opportunities you will have in 10 years. This seems to be an important consideration in favor of thinking about what area to be in.

Great post, Ben--this seems like a really good point to make clear. I think there's a general point here that it's much easier, and often better, to choose between specific options than between general categories of options.

Generally, when I think about career choice, I find it useful to begin by narrowing down to a few fields that seem best for impact and fit, and then to seek out concrete opportunities within those fields - ultimately the decision will come down to how good the opportunities are, not to a comparison between the fields themselves. But you've still narrowed by field initially. This seems especially true when the fields you're comparing look roughly as good as each other, or when each has different advantages.

I like the suggestion of putting a lot of effort into looking for really good opportunities, too - I imagine this is often neglected. A side point: this is more worth doing in some fields than in others, because some fields have higher variance than others in how good their opportunities are. E.g. I'd imagine there's higher variance among software jobs than among certain academic ones.

This strongly fits with my experience. Even on a pure earnings basis, as I've researched various job opportunities I've found that two of them can differ by a shocking amount, much more than I initially anticipated based on a naive view of what a competitive job market looks like. And the differences often pop up in non-obvious ways.

Actual examples that might happen to you: one finance firm turns out to be much better about paying bonuses to new employees who make big contributions right away. Or maybe Google only offers you slightly more cash than the startup you interviewed with—but the startup is giving you so little equity that it won't be worth much even if the company gets acquired for $100 million, whereas Google offers you RSUs worth half again your base salary.
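To make the equity comparison concrete, here's a rough back-of-the-envelope sketch. All numbers are hypothetical, and I'm reading "RSUs worth half again your base salary" as a total grant of 1.5x base vesting over four years--one possible reading:

```python
# All numbers are hypothetical, for illustration only.
base_salary = 150_000  # annual base salary at either offer

# Startup: a small equity stake (say 0.05%), vesting over 4 years,
# assuming the company is eventually acquired for $100 million.
equity_fraction = 0.0005
exit_value = 100_000_000
startup_equity_per_year = equity_fraction * exit_value / 4

# Google-style offer: RSU grant worth 1.5x base, vesting over 4 years.
rsu_grant = 1.5 * base_salary
rsus_per_year = rsu_grant / 4

print(f"startup equity: ${startup_equity_per_year:>9,.0f}/year")  # $12,500/year
print(f"RSUs:           ${rsus_per_year:>9,.0f}/year")            # $56,250/year
```

Even when the cash components look similar, under these assumptions the equity components differ by more than a factor of four--exactly the kind of non-obvious variation described above.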

Do you have any thoughts about how to juggle timing when different opportunities will arise at different times? For example, if applying for jobs & university places at the same time, the response times will be very different.

The obvious strategy is to delay the decision as long as possible, but it's hard to know how to trade off confirmed options that will expire against potential options you haven't heard from yet.

One EA friend I talked to about this said he tried to do this, then found that when it came down to it he couldn't bear to let an opportunity slide while waiting for others, so just took the first thing he got.

I haven't had this problem in the past, probably because software companies are frequently so desperate for engineers that once they offer you a job they're OK being strung along for quite a while. Plus I've never applied for things as disparate as graduate programs and non-academic jobs at the same time. So my experience is limited!

However, I do think that careful negotiation can help with this problem in high-skill non-software fields as well. If a company thinks you're good enough to hire, they probably think you're good enough to wait a little while for (unless they're REALLY strapped for time). An exploding offer is often just them using Dark Arts to try to get people to accept before they can find better options, like what happened to your friend.

Between that, timing your job applications correctly, and investigating opportunities you haven't officially been offered yet to see whether you really want them, it's hopefully possible to smooth out many of the synchronization issues.

I completely agree with this. This is why we put so much emphasis on our general framework and our how-to-choose process. Finding options that do well according to the framework is what ultimately matters, not the specific career path you're in.

I agree that your framework and process can apply to opportunity-level decisions as well as field-level decisions--I just think that it isn't emphasized in proportion to how useful I found it.

For instance, to me it looks like those pages are framed almost completely in terms of choosing broad career paths rather than choosing between individual opportunities. E.g., the heading on the framework page reads:

Our career evaluation framework helps you compare between different specific career options, like whether to go into consulting or grad school straight out of university; or whether to continue at your current for-profit job or leave to work for a non-profit.

To me this seems to emphasize the field-level use case for the framework but not the opportunity-level one.

Ah ok, I had 'specific career options' in mind, but then I see the examples don't give the right impression. I'll change this.
