Jamie_Harris

Courses Project Lead @ Centre for Effective Altruism
3641 karma · Joined · Working (6-15 years) · London N19, UK

Bio

Participation
5

Jamie is the Courses Project Lead at the Centre for Effective Altruism, leading a team running online programmes that inspire and empower talented people to explore the best ways that they can help others. These courses and fellowships provide structured guidance, information, and support to help people take tailored next steps that set them up for high impact.

He has very light-touch involvement as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of the board at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, as co-founder and researcher at Animal Advocacy Careers (which helps people to maximise their positive impact for animals), and as a Program Associate at Macroscopic Ventures (grantmaking focused on s-risks).
 

Comments
419

Topic contributions
5

Thank you for digging into this and sharing your findings! Some of these seem like important insights if they're correct.

On observation 3:

Motivation exists, experience is scarce and takes years. There might be no shortage of junior, motivated, community-adjacent generalists seeking entry. The shortage is at the other end: senior professionals with years of accumulated judgment in operations, management, institutional engagement, communications, or policy who bring skills that cannot be acquired quickly. Several participants suggested that, for most generalist roles, a competent senior professional can get to a working level of AIS context in weeks, whereas it takes years, sometimes decades, to develop the professional judgment, external relationships, and organisational experience that senior roles require.

Seniority is a proxy for what is actually needed: diversity and outside-world fluency...

Several participants named this as an increasingly urgent gap. 

I'm somewhat surprised by this. It'd be good news for me if true, because I run a career bootcamp that happens to have relatively large numbers of experienced, accomplished professionals seeking to transition into high-impact cause areas, often interested in AI safety, and often generalists.

Would you be able to put me in touch with some of the people who expressed this need so we can understand the gap better and see if I can connect them to relevant talented people?

You can email me at Jamie.harris@centreforeffectivealtruism.org if easier!

I am currently the only Fund Manager at the EA Infrastructure Fund... and that needs to change!

I work full-time on something else within the Centre for Effective Altruism, and the EAIF needs a dedicated owner who will drive it forwards.

I think we're sitting on a big opportunity here. There's so much that the EA movement could achieve, and so much great work that could be enabled by EAIF.

Some indicators of promise here:

  • CEA is growing, but there's only so much that CEA can work on in-house. We need to fund and nurture great work that's happening elsewhere, too!
  • There are potential new sources of funding that EAIF could tap into; building a strong product here that donors are excited about is essential.
  • We have a mini roadmap laid out by recent successes within EA Funds.

Let me say more on that last one. I've been extremely impressed by what another EA Fund, the Animal Welfare Fund, has achieved over the past year or two, improving its evaluation quality, its staffing, and its available pool of resources. I think the EAIF has the potential for a similar rocketship trajectory; it needs the right person to come in and make that happen.

CEA is hiring for a new Head of the EA Infrastructure Fund: full job description and application form here, apply by 4th May.

Let me know if you have questions! I can't promise deep engagement with all potential candidates, but I'll help out with key/quick uncertainties if I can! Some additional thoughts from Loic, new Head of EA Funds, here.

Oh, nice, I hadn't seen that, thank you! 

Looks more careful/thorough.

Separately, here's Claude's direct reply to your specific points in case you're curious (sorry, I don't have enough of a developed inside-view take to respond myself!):

On "China don't have any frontier labs, only labs which distill other models": this is probably too strong. DeepSeek introduced genuine architectural innovations (Multi-head Latent Attention, fine-grained MoE) that Epoch AI characterises as real advances, not just distillation. That said, the distillation question is genuinely debated: OpenAI has alleged it, and Chinese labs scraped millions of Claude conversations. The picture is mixed rather than one-sided.

On "no evidence of an arms race": both governments explicitly frame AI as a strategic contest (both opted out of the Feb 2026 responsible AI military declaration), there's confirmed espionage (Linwei Ding convicted Jan 2026 for stealing Google TPU secrets), and $2.5B in chip smuggling. Whether this constitutes an "arms race" depends on your definition, but the competitive dynamic Leopold predicted is clearly present.

Your most interesting point is the last one: that distillation and open source might mean an arms race never materialises because intelligence becomes cheap and accessible. This connects directly to what I think is Leopold's most consequential error. He predicted open source would fade and proprietary algorithms would create a durable American moat. Instead, capable AI is diffusing faster than his framework assumed. You're right that this weakens the case that compute concentration equals geopolitical power, and it's a genuinely underexplored implication of how things have played out.

Thanks for reviewing and raising this! You're right that the US/China dynamics are central to Situational Awareness's thesis and we underemphasised them. We've now added a dedicated China/US section with its own tab and three expandable cards, evaluating his specific sub-predictions on infrastructure (7nm chips, power, Middle East), algorithms and open source, and strategic dynamics. Would value your review of the updated version if you have time!

True, the 3.5 rating seems a bit harsh! I've just tweaked the wording that you quoted directly.

Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)

I'm surprised by "I don't really want a grant" though. E.g. the usual process is basically: seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your way to success, or you're denied funding without good reason), but it's at least ideally fairly results-based.

(I'd be inclined to agree that ideally the founders/participants themselves would pay, but if you have evidence that they are "irrationally self-sacrificial" and will continue to underpay for the service relative to what they'd endorse themselves with hindsight etc, then that seems like a decent case for grant funding.)

I opened your profile and website and couldn't tell what this referred to? I'm intrigued, even if it's no longer accepting sign ups! 

This post prompted me to write up an idea I've had in the back of my mind for a while. Asya argues that people in or considering technical or policy roles at AI safety organizations could maybe have more impact doing capacity-building work.

One way to test if this could be a good fit for you: if you have domain expertise in an AI safety or governance topic, creating a structured course around it might be more feasible than you'd expect. AI tools, volunteer facilitators, and people like me with more experience in courses/products can handle a lot of the heavy lifting, so the main contribution is your knowledge and judgment about what matters.

I've written up a short proposal exploring how this could work in practice; I'd be keen to hear from anyone interested in trying it out.

Separately: the discussion/comments on the LessWrong cross-post are pretty interesting regarding the case for and against working on capacity building, so people reading here might like to check through those discussions too.
