
The project: 

I am creating a database to give students insight into different career paths, with information on what each career is like and how they might test their fit. Here is an example of what such a profile might look like: AI Safety technical research first steps.

I think that when young EAs don't know their fit for various careers, they rule out options too early or procrastinate on making progress in their career plans because it seems too difficult.

I also think this is something of a blind spot in existing EA resources. 80k usefully described the idea of a “ladder of tests” here, but I think it could be supplemented well with concrete examples of what those tests might look like for different career paths.

What you can do to help: 

You can fill in this form to give me input on how others can test their fit for your career path. All questions are optional; feel free to answer only those you have immediate thoughts on. https://forms.gle/8zPAzZy9fC7Yvz3s9

Alternatively you can email me here to offer any thoughts or arrange a meeting: callum.evans@sjc.ox.ac.uk 

You can also share this post with any colleagues or contacts who you think might be interested in contributing.

Getting input and insight from people in high-impact careers is crucial to creating useful and accurate guidance on fit. I am looking for contributions from anyone here about their career path - I want to cover a diverse range of careers, so if you think any young EAs might be interested in pursuing yours, it will be useful to include.

I think this project has a lot of potential value in guiding those considering different career paths and improving the pipeline to different positions. Any help would be greatly appreciated. I am also open to meta-advice on how this project might best be done, either for your specific career or in general, especially if you have tried something similar before.


 

Comments

I think this sounds like it could be a useful resource :)

I previously made a collection of Notes on EA-related research, writing, testing fit, learning, and the Forum, which might be helpful for this project or for some of this project's intended beneficiaries. 

(I know this isn't exactly what you're after, and I also shared it with you earlier, but someone suggested I share it in a comment on this post.)

I didn't see this referenced in your post, though there's a good chance you've seen it: Holden Karnofsky has ideas for how people can tell whether they're “on track” for different career paths. It might be more suited to people who are a bit further into testing a path, but it's probably worth citing for these profiles nonetheless.

[anonymous]

This seems very useful. Personally, I would also be interested in: 

  1. Rate of improvement: What level of skill or advancement would be considered poor, mediocre, or exceptional after X months/hours? This would be especially valuable for careers involving soft skills, where it is often hard to know how to measure your performance or what a good rate of improvement looks like. (For AI research, it could be something like, "after X hours of learning this concept, it would be considered poor/fine/great to score in the Yth percentile of this machine learning competition".)
  2. Related careers: If you mostly enjoy a career except for one or two specific components, what are other similar careers that may be a good fit?

Thanks for your comment. On your first point, I agree that in an ideal world benchmarks for improvement would be useful, but I would be hesitant for a few reasons.

Firstly, you risk putting people off a certain career when you don't really have the certainty to give that advice (especially since I am not a specialist in the field), which could be quite damaging and not that useful. Secondly, how good X amount of progress is in Y amount of time is generally very context-specific. E.g., for your example, it could depend on pre-existing technical background, the amount of guidance and support received while learning, etc. - I think this would be hard to quantify in a useful way.

Your second point is a really good one, I think, and something I would like to include - if I reach the point of creating a more comprehensive collection, it should be easier to cross-reference between profiles.

This seems like it could be a very valuable resource, and I will totally use it.

Agreed! Most of my EA networking is geared towards answering this question.

I think this could be an extremely useful resource. In 80k's job satisfaction post, they make a very convincing case that it is useful to explore your strengths through side projects and work assignments (after first doing cheaper tests like informational interviews), but they do not go into much detail about how to actually identify these projects/assignments.
In practice, I think it's actually very hard to identify work opportunities for trying out different careers.

I’m a software engineer 2 years out of college who is in the process of exploring other career paths.  I’ve spent a lot of time researching online how one can find opportunities to test out different skills.  I think two promising options are: 

  • Do a corporate cross-functional rotational program. These are commonly offered to new grads by old-school American Fortune 500 companies (e.g. Ford, CVS), but I found several programs within the tech industry:
    • Axon LDP (https://www.axon.com/careers/bdp)
      • Lets you rotate across business roles for 6 months at a time: product management, marketing, sales, business development, finance
    • Yext Upward Program (https://www.yext.com/careers/upward)
      • Lets you rotate across business roles for 6 months - similar roles to Axon
    • Bookbub Rotational Program (https://www.bookbub.com/careers/open-positions/3766842)
      • Lets you rotate across product management, marketing, analytics roles for 4-6 months per role
    • I've found several other opportunities, but I think these are the most promising.  I actually made it to the final round of the Axon + Bookbub programs, but was ultimately rejected.  I wish there were more cross-functional rotational programs in the tech industry!
  • Do management consulting
    • I don't know as much about this option, but I've heard from my cousin, who worked in this field, that it offers great opportunities to identify your strengths. Assuming you are performing well, you have the flexibility to choose from a range of projects, and this variety lets you test out different skills.
    • I gather that you can do projects that focus more on analyzing data (testing analytical skills) and others that focus more on drafting presentations and speaking with clients (testing communication and interpersonal skills). You can also work on projects that improve internal software tools - sort of like product management.
    • Also, I've heard management consulting has a strong feedback culture, with plenty of opportunities to hear from coworkers about your performance, which I imagine is very helpful for identifying strengths.
    • Finally, management consulting's excellent exit options give you better potential than most roles to move into a position that aligns better with your strengths.

Updates on this?
