Jamie_Harris

Courses Project Lead @ Centre for Effective Altruism
3528 karma · Joined · Working (6–15 years) · London N19, UK

Bio


Jamie is the Courses Project Lead at the Centre for Effective Altruism, leading a team running online programmes that inspire and empower talented people to explore the best ways that they can help others. These courses and fellowships provide structured guidance, information, and support to help people take tailored next steps that set them up for high impact.

He has very light-touch involvement as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of the board at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, as co-founder and researcher at Animal Advocacy Careers (which helps people to maximise their positive impact for animals), and as a Program Associate at Macroscopic Ventures (grantmaking focused on s-risks).

Comments (413) · Topic contributions (5)

Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)

I'm surprised by "I don't really want a grant" though. E.g. the usual process is basically: a seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your way to success, or you aren't funded despite good reasons), but it's at least ideally fairly results-based.

(I'd be inclined to agree that ideally the founders/participants themselves would pay, but if you have evidence that they are "irrationally self-sacrificial" and will continue to underpay for the service relative to what they'd endorse themselves with hindsight etc, then that seems like a decent case for grant funding.)

I opened your profile and website but couldn't tell what this referred to. I'm intrigued, even if it's no longer accepting sign-ups! 

This post prompted me to write up an idea I've had in the back of my mind for a while. Asya argues that people in or considering technical or policy roles at AI safety organizations could maybe have more impact doing capacity-building work.

One way to test if this could be a good fit for you: if you have domain expertise in an AI safety or governance topic, creating a structured course around it might be more feasible than you'd expect. AI tools, volunteer facilitators, and people like me with more experience in courses/products can handle a lot of the heavy lifting, so the main contribution is your knowledge and judgment about what matters.

I've written up a short proposal exploring how this could work in practice; I'd be keen to hear from anyone interested in trying it out.

Separately: the discussion/comments on the LessWrong cross-post are pretty interesting regarding the case for and against working on capacity building, so people reading here might like to check through those discussions too.

This post felt motivating plus personally reassuring to me given that I work in capacity building (albeit not solely focused on AI safety). 

A couple of updates (or at least: things that feel more salient to me) from the case studies/stories were around the value of personal connections and direct personal encouragement to consider working on [specific thing]. In the stories, that often seemed to come from workshops and in-person events, though I'm also wondering if I should lean even harder into ways to enable that in the online programmes I run.

Cool, makes sense. To be clear, I think contacting representatives is helpful! I wasn't trying to question that.

I don't know anything about the Congress authorisation so will defer on that. I'll just say that if the legality is in dispute rather than unambiguous/settled, then using the word "illegal" might be counterproductive/polarising, whereas "unprecedented" seems unambiguously true. 

Nice one for taking action!

What was the illegal part? Isn't it just unprecedented?

(Checking partly for my own knowledge and also because it seemed quite central to your call to action to the legislators)

Cool post! I thought it was well-structured and evidenced, while also recognising limitations and counterarguments etc. 

The organization running it would need to have sufficient credibility for the organizations using it to want to forego their own application processes. I think a random person starting it would have very low credibility. My company, which had run several dozen hiring rounds for many organizations had maybe 50% the credibility necessary. This seems like a hard bar.

 

I feel like a service that aspires to eventually be a common app could shift towards that incrementally by offering partly-vetted candidates. It's not a fully centralised common app, but it gets customers/sign-ups from orgs who just want access to another source of high-quality candidates.

That might reduce some of the value prop to initial candidates at first, if the service doesn't have many confirmed clients yet, but I suspect that (1) quite a lot would apply anyway, even without confirmed buy-in from orgs, if the pitch was done well, and (2) there might be other ways to make it appealing, e.g. finding ways to offer some (automated?) feedback.

I think this is an interesting question! You're right to point out some of the factors that influence it, including cause area and role type (and the payment norms for them). I also think organizational cultural norms affect it quite heavily. 

My guess is that if you had a large enough dataset and controlled for enough factors, salary would predict 'role leverage' quite well. But I don't expect it to be very useful when choosing between roles to apply for, because the correlation will be weak, your dataset is too small etc. Basically, there are too many predictors and too much noise for it to be very informative. I think you're better off just reading the descriptions or using other heuristics like cause area, job title etc if you're trying to filter quickly.
