
Full-time, remote


Initial deadline: June 8th. Currently: rolling applications.

If your ideal job would be leading an impact-driven organization, being your own boss and pushing for a safer future with AI, you might be a great fit for co-founding Catalyze Impact!

Below, you will find out more about Catalyze’s mission and focus, why co-founding this org would be high-impact, how to tell if you’re a good fit, and how to apply.

In short, Catalyze will 1) help people become independent technical AI Safety researchers, and  2) deliver key support to independent AI Safety researchers so they can do their best work.



We would highly appreciate it if you could share this post with people who you think might be interested in this role.

You can ask questions, register interest in potentially funding us, working with us, or making use of our services in the future, and share information here.

Why support independent AI Safety researchers?

Lots of people want to do AI Safety (AIS) research and are trying to get into a position where they can, yet only around 100-300 people worldwide are actually doing research in this crucial area. Why? Because there are almost no AIS researcher jobs available: AIS research organizations face difficult constraints on scaling up. Luckily, there is another way to grow the research field: having more people do independent research (where a self-employed individual gets a grant, usually from a fund).

There is, however, a key problem: becoming and being a good independent AIS researcher is currently very difficult. It requires many qualities that have nothing to do with being able to do good research: you have to be proactive, pragmatic, social, good enough at fundraising, very good at self-management, and willing to take major career risks. Catalyze Impact will take away a large part of the difficulties that come with being an independent researcher, making it a suitable option for more people and empowering them to do good AIS research.

How will we help? 

This is the current design of the pilot - but you will help shape this further!

1. Fundraising support

       -> help promising individuals get funded to do research

2. Peer support networks & mentor-matching

       -> get feedback, receive mentorship, find collaborators, brainstorm and stay motivated rather than falling into isolation

3. Accountability and coaching 

       -> have structure, stay motivated and productive

4.  Fiscal sponsorship: hiring funded independent researchers as ‘employees’ 

       -> take away operational tasks which distract from research & help them build better career capital through institutional affiliation 


In what ways would this be impactful?

Alleviating a bottleneck for scaling the AIS research field by making independent research suitable for more people: it seems that we need many more people working on solving alignment. However, talented individuals who have invested in upskilling to do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions. This is often not because they are incapable of doing the research, but because there are simply too few positions available (see footnote). As a result, many of these talented individuals are left with a few sub-optimal options:

1) try to do research/a PhD in a different academic field in hopes that it will make them a better AIS researcher in the future

2) take a job working on AI capabilities (!)

3) try to become an independent AIS researcher

For many people, independent research (i.e. without this incubator) is not a good and viable option: being an independent researcher brings a lot of difficulties with it, and arranging to become one requires specific skills. This drives these potential AIS researchers out of the field, delays or decreases their impact, and may even incentivize them to work on capabilities research instead of contributing to AI Safety research.

Other ways in which this incubator could be impactful include:

• Increasing independent researchers’ productivity by offering them helpful services and centralizing certain operational tasks.

• Helping potential independent researchers get to work sooner by reducing the friction around fundraising.

• Increasing the number of research bets: additional independent research might increase the number of research directions being pursued. After all, as independent researchers individuals have more agency over deciding which research agendas to pursue. Pursuing more research bets could be very beneficial in this pre-paradigmatic field. 

• Improving alignment research orgs’ applicant pool: independent researchers supported by us will arguably gain better research experience than they would through the alternative options they have. This could make alignment research organizations’ applicant pool more skilled, leading to better hires for them in the future.

Note: it seems unlikely that people will forgo roles at (or starting) research organizations because Catalyze’s help makes independent research too appealing. However, we will keep an eye out to make sure we do not have this effect.


Why you might want to found a non-profit

  • Have a big positive impact: future developments in AI will probably influence every other problem in the world. Steering towards a future where we can use this technology for good, while evading some terrifying x- or s-risk scenarios, is therefore crucial. Supporting the people who are figuring out how to do this and enabling their research is potentially super high-leverage, especially when you focus on alleviating a bottleneck.
  • Personal development: charity entrepreneurship constantly pushes you to grow. Every day presents fresh challenges and opportunities to improve.
  • Autonomy: you are not forced to follow ineffective courses of action just because your manager insists on it. You have the ability to decide and steer what you are working on.
  • Career capital: gain experience in entrepreneurship, leadership, recruiting, negotiation, marketing, decision-making, communication, management, and much more. No matter what you will do in the future, a lot of the new things you learn will be transferable to other roles.
  • Flexible work schedule: whether you are an early bird or a night owl, you can tailor your work schedule to suit your preferences or move your weekend days around.
  • Varied work: one day you will be talking to customers, another day you will be fundraising, interviewing potential employees or strategizing about how to maximize our impact.
  • Purposeful work: Numerous jobs feel trivial, like being a small part of a massive system designed to sell consumer goods. But as a charity founder, you have the chance to prioritize what you believe in.
  • Pride and fulfillment: show yourself that you can build something great from the ground up.
  • No unnecessary internal bureaucracy: you’re making the rules.
  • Pick your colleagues: as one of the founders, you get to influence hiring decisions. Who you work with will strongly influence how much you enjoy the work you are doing.
  • Tangible impact: while the biggest chunk of your impact in assisting the AIS research field will be indirect (and not as tangible), you will also have a direct effect on the individuals you will help do their work better. Observing this first-hand will make this job extra gratifying.

About you

It’s a plus but not a prerequisite for you to have experience in (charity) entrepreneurship, working at a start-up, EA/AIS organization, or any other somewhat relevant working experience. Above all, you largely recognize yourself in the following description:

  • You’re entrepreneurial & action-oriented: you find that thinking through your actions is important, but you are also excited to actually make things happen!
  • You deeply care about improving the world: it is easy to get off course so when you are steering the ship you need to be altruistic and truly care about maximizing your impact.
  • Optimizing is your thing: you’re always coming up with better ways to do things - from loading the dishwasher to changing the world.
  • You like challenging yourself: you’re not looking for a simple life but for one full of interesting puzzles to solve that push you to grow and achieve results.
  • You’re proactive: you don’t wait for others’ permission, you will undertake or solve something when you think it is important and someone should do it.
  • You’re very conscientious. You are organized and good at meeting deadlines. You may also like spreadsheets so much that your friends think you may get married to one at some point.
  • You can juggle several priorities at once: you enjoy the challenge of having many things going on at the same time and being able to switch between them.
  • You’re emotionally resilient & persistent: when starting an organization you will inevitably have to handle major setbacks, run into problems, and put out fires. If this is very challenging to deal with for you, this role might not be the best fit. You are someone who will change strategies when this is the better thing to do but who will not give up on their goals.


About your co-founder

Your co-founder would be Alexandra Bos, currently based in Amsterdam.

  • Passionate about contributing as much as possible to solving the world’s problems. Excited about non-profit entrepreneurship as a way to achieve this. Involved with EA for around 2.5 years.
  • Got to the last round (top ~3%) of Charity Entrepreneurship’s selection process for their past incubation round.
  • Relevant prior experience includes setting up and leading the TEDxLeidenUniversity organizing team for 1.5 years (a now self-sustaining organization). 
    • Experience with fundraising, hiring, project management, logistics, coaching, marketing & management (overseeing a team of 16 student organizers)
  • Recent graduate with a BSc in Governance, Economics and Development.
  • Main strengths: generalist, mission-driven, strategic, and a creative problem-solver.
  • A people-person, though with little technical background.

Should you apply?

A general piece of advice: when in doubt, always apply! Don’t let imposter syndrome get the better of you ;)

Candidates with all sorts of backgrounds are welcome to apply and the application should not take too much time.

Salary: dependent on your needs & fundraising outcomes.

Application process

To apply, please fill out this form. If you already have your CV ready, it should take around 10-25 mins to fill out. The second round will consist of an interview, followed by a third round (and possibly a fourth round) where we assess our fit for working together. The whole process should be finished around mid-June.

Application deadline: Thursday, June 8th (in your timezone) - but feel free to apply earlier, it will speed up the process.


You can ask questions, register interest in potentially funding us, working with us, or making use of our services in the future, and share information here.

Link to this post but in a Google Doc





Comments

However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions.

It would be interesting to see the actual numbers, I think Ryan Kidd should have them.

Great point! They are currently compiling their results on what people have been doing post-MATS; I'm also curious what the results are.

The things that the proposed startup is going to do seems to overlap in various ways with MATS, AI Safety Camp, Orthogonal (https://www.lesswrong.com/posts/b2xTk6BLJqJHd3ExE/orthogonal-a-new-agent-foundations-alignment-organization), European Network for AI Safety (ENAIS, https://forum.effectivealtruism.org/posts/92TAmcppCL7t54Ajn/announcing-the-european-network-for-ai-safety-enais), Nonlinear.org, and LTFF (if you plan to 'hire' researchers and pay them salary, i.e., effectively fund them, you basically plan to increase the total fundraising for AI safety, which is currently the LTFF's role).

Detailing similarities, differences, and partnerships with these projects and orgs would be useful

I understand it may look quite similar to different initiatives because I am only giving a very broad description in this post. Let me clarify a few things which will highlight differences with the other orgs/projects you mention:

-Catalyze's focus is on the post-SERI MATS part of the pipeline (so targeting people who have already done a lot of upskilling - e.g. already done AI Safety Camp/SERI MATS)

-The current plan is not to fund the researchers but to support already-funded researchers (the 'hiring' is just another way of saying their funding would not be paid out to them directly but would first go through an org with tax-deductibility benefits, e.g. a 501(c)(3), and then to them) - so no overlap with LTFF there. The one exception to supporting already-funded researchers is helping not-yet-funded researchers with the fundraising process.

I don't really see similarities with Nonlinear apart from both of us calling ourselves 'incubators'. Same for ENAIS, apart from them also connecting people.

In short, I agree these interventions are not new. I think packaging them up together, making a few additions, and thereby making them easily accessible to this specific target group is most of the added value here.

Re: Nonlinear, they directly do services that you plan to do as well:

The Nonlinear Network: Funders get access to AI safety deal flow similar to large EA funders. People working in AI safety can apply to >45 AI safety funders in one application. The Nonlinear Support Fund: Automatically qualify for mental health or productivity grants if you work full-time in AI safety.

(Note that both are targeted not only at AI safety founders, as it may seem from the website, but at independent researchers as well.)

Fair point, I understand what you meant now. I think these would be great resources to potentially connect the independent researchers we incubate with as well.

Interesting, have you had a chance to pilot or trial this with any researchers so far?

The current plan is to run a pilot starting in July

This seems like a great opportunity. It is now live on the EA Opportunity Board!

Amazing, thanks!

Cool! I do alignment research independently and it would be nice to find an online hub where other people do this. The commonality I'm looking for is something like "nobody is telling you what to do, you've got to figure it out for yourself."

Alas, I notice you don't have a Discord, Slack, or any such thing yet. Are there plans for a peer support network?

Also, what obligations come with being hired as an "'employee'"? What will be the constraints on the independence of the independent research?

Fiscal sponsorship: hiring funded independent researchers as ‘employees’

      -> take away operational tasks which distract from research & help them build better career capital through institutional affiliation

Hi Rime, I'm not aware of any designated online space for independent alignment researchers either. Peer support networks are a central part of the plan for Catalyze so hopefully we'll be able to help you out with that soon! I just created a channel on the AI Alignment slack called 'independent-research' for now (as Roman suggested).

As for the fiscal sponsorship, it should not place any constraints on the independence of the research. The benefits would be that fundraising can be easier, you can get administrative support, tax-exempt status, and increased credibility because you are affiliated with an organization (which probably sounds better than being independent, especially outside of EA circles). 

I currently don't see risks there that would restrict independent researchers' independence.

That's very kind of you, thanks so much.

I think it's better not to increase the number of distinct slack spaces without necessity. We can create a channel for independent researchers in the AI Alignment slack (see https://coda.io/@alignmentdev/alignmentecosystemdevelopment)


Although I think many distinct spaces for small groups lead to better research outcomes for network-epistemology reasons, as long as links between peripheral groups and central hubs are clear. It's the memetic equivalent of peripatric vs parapatric speciation. If there's nearly panmictic "meme flow" between all groups, then individual groups will have a hard time specialising towards the research niche they're ostensibly trying to research.

In bio, there's modelling (& some observation) suggesting that the range of a species can be limited by the rate at which peripheral populations mix with the centre.[1] Assuming that the territory changes the further out you go, the fitness of pioneering subpopulations will depend on how fast they can adapt to those changes. But if they're constantly mixing with the centroid, adaptive mutations are diluted and expansion slows down.

As you can imagine, this homogenisation gets stronger if fitness of individual selection units depend on network effects. Genes have this problem to a lesser degree, but memes are special because they nearly always show something like a strong Allee effect[2]--proliferation rate is proportional to prevalence, but is often negative below a threshold for prevalence.
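
To make the claim about a strong Allee effect concrete, here is a minimal sketch (my own illustration, not from the comment; parameter names and values are assumptions) using the standard strong-Allee growth model, where per-capita growth is negative below a threshold prevalence A and positive between A and the carrying capacity K:

```python
def allee_growth_rate(n, r=1.0, A=20.0, K=100.0):
    """dN/dt for the standard strong Allee model:
    dN/dt = r * N * (N/A - 1) * (1 - N/K).
    Negative below threshold A, positive between A and K, zero at K."""
    return r * n * (n / A - 1.0) * (1.0 - n / K)

# Below the threshold, prevalence declines toward extinction;
# above it, the idea/population grows toward carrying capacity.
assert allee_growth_rate(10) < 0            # below A: negative growth
assert allee_growth_rate(50) > 0            # between A and K: positive growth
assert abs(allee_growth_rate(100)) < 1e-9   # at K: equilibrium
```

This captures the point in the comment: a meme spreading below its "social license" threshold dies out even though the same meme would proliferate once it is sufficiently prevalent.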

Most people are usually reluctant to share or adopt new ideas (memes) unless they feel safe knowing their peers approve of it. Innovators who "oversell themselves" by being too novel too quickly, before they have the requisite "social status license", are labelled outcasts and associating with them is reputationally risky. And the conversation topics that end up spreading are usually very marginal contributions that people know how to cheaply evaluate.

By segmenting the market for ideas into a small-world network of tight-knit groups loosely connected by central hubs, you enable research groups to specialise to their niche while feeling less pressure to keep up with the global conversation. We don't need everybody to be correct; we want the community to explore broadly so that at least one group finds the next universally-verifiable great solution. If everybody else gets stuck in a variety of delusional echo-chambers, their impact is usually limited to themselves, so the potential upside seems greater. Imo. Maybe.

  1. ^

    H/T Holly. Also discussed with ChatGPT here.

  2. ^

• Increasing the number of research bets: additional independent research might increase the number of research directions being pursued. After all, as independent researchers individuals have more agency over deciding which research agendas to pursue. Pursuing more research bets could be very beneficial in this pre-paradigmatic field.

I somewhat disagree that increasing the number of "bets" is a good idea, where a "bet" is taken to be an idiosyncratic framework or theory. I explained this position here: https://www.alignmentforum.org/posts/FnwqLB7A9PenRdg4Z/for-alignment-we-should-simultaneously-use-multiple-theories#Creating_as_many_new_conceptual_approaches_to_alignment_as_possible__No and also touched upon it in discussion with Ryan Kidd in the comments on this post: https://www.lesswrong.com/posts/bRtP7Mub3hXAoo4vQ/an-open-letter-to-seri-mats-program-organisers.

But independent researchers are not obliged to craft their own theories, of course; they could work within existing established frameworks (and collaborate with other researchers in those frameworks) while being organisationally independent.

Thanks for sharing! I skimmed through the things you linked but will read them in more detail soon.