
I occasionally get people referred to me. Those messages look something like this:

"From: [Person I respect and trust]

Hey, have you met [Someone]? linkedin.com/someone

They're interested in [some biosecurity thing] and have [some relevant qualifications]. They seem really promising, and I thought they might be a good fit for [something I'm doing].

Should I intro you?"

If I'm feeling particularly underwater and/or antisocial, I have a follow-up like this:

"Wow cool background. Have you worked with [Someone] on anything? Or do you know anyone who has?"

The answer is almost always no.

And I think this is bad.

Why?

  1. Matching people to high-impact work is really hard.
  2. Hiring by work trust network is the only cheat code I know.
  3. While working together on a very real project very extensively is most informative, working on a random thing is still probably informative.
  4. Working with people on some random thing is relatively easy.
  5. Time spent on the counterfactual (more casual, introduction-y conversations) is comparatively less valuable.

If you buy this, the prescription is simple. Find the coolest EAs you know, find some random thing to work on, and work on it together.

. . . .

Appendix 1. Community builders, consider spending more time working on projects with promising people.

I imagine there are lots of good reasons I haven't thought of that explain the prioritization of volume vs. depth in the community-building funnel, and I don't know what I'm talking about in general. But in my individual experience, I'd trade at least 5 high-quality introductions like the one above for a single intro from the same distribution, but where the [Person I respect and trust] has direct work experience with whoever they are introing. Say, experience on a project of similar scope to a final project in an intro CS class at a good university.

Appendix 2. Some quick justification, I guess

Matching people to high-impact work is really hard.

Something needs to explain 'talent bottleneck in EA orgs' but also 'loads of talented people seeking direct work'. Lots of posts are relevant to this, e.g. here: https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really

This is typically called "vetting", from what I know. I think it's more like matching, in the sense that vetting for one team and role should be very different from vetting for another.

Hiring by work trust network is the only cheat code I know.

Earlier this year I hired a lot of people over a period of several weeks. And I think they're pretty amazing folks. The only way this could work was by asking people I was working with, or had worked with in the past, to tell me who they had worked with who would be a great fit for our needs. That's what I've been calling work trust networks (almost certainly stolen from someone else, sorry): a sequence of direct work experiences.

This is a general competence filter. But more importantly, if you trust the person giving you their reference to be totally honest, it gives you an oracle to query about the future of a potential candidate at your company. Work experience doesn't just tell you "good / not good"; it tells you how someone likes using Slack, whether they tend to procrastinate on longform writing tasks, and how patient they are with a disorganized onboarding process. This information not only allows you to evaluate whether your team is a match, but also to find match(es) between the person and your most pressing needs as an organization.

Notably, when I've tried to hire through broader trust networks, e.g. allowing for connections by reputation or friendship instead of just work experience, it hasn't worked as well (purely anecdotal, but it also makes sense from first principles, IMO).

While working together on a very real project very extensively is most informative, working on a random thing is still probably informative.

Yeah, this is probably the weakest link. Most of my work-trust-network wins have involved multi-month, close-collaboration work experiences. OTOH, when I think back to college class projects, I feel like I learned a ton about my project partners. And I'm so desperate for any information that looks like a real experience of working together that my gut says even a random thing would make a big difference. I feel like I could change my mind quickly on this, though. Worth a try?

Working with people on some random thing is relatively easy.

I've got a computer-y background, so my rapid-fire "proof of lots of ideas" brainstorm is going to be skewed toward that. But here are some things that seem interesting to me, that you can work with people on, and that don't seem that hard to give a try:

  • reproduce an ML paper or the analysis in a comp-bio paper
  • host a logistically complicated event
  • start a new school club
  • build an app
  • enter a data science competition
  • take a challenging class together, ideally with a substantial project component
  • turn a small profit with some internet hustle
  • level up your local EA group
  • 3D print a prototype of your own custom PPE
  • buy a nanopore sequencer and see if you can assemble a DIY version of your own genome, cross-checking with a commercial service
  • join a startup together
  • become superforecasters together
  • scrape useful data from the internet and make a pretty, useful visualization
  • launch your own shitcoin
  • publish a peer-reviewed paper
  • make a top-rated restaurant on Tripadvisor (https://www.youtube.com/watch?v=bqPARIKHbN8)
  • compete in a memory competition

Shrug. What matters to me here is whether it feels like a toy (it should not), whether it seems like you'd learn something valuable even if the collaboration didn't pan out (you should), and how fun it sounds to work together vs. solo (together, obviously).
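To make the "scrape useful data" idea above feel concrete: a project like that usually starts with a small, boring extraction step you can pair on in an afternoon. Here's a minimal stdlib-only sketch of that first step (purely illustrative; the class and function names are my own, not from any particular project):

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect (text, href) pairs from anchor tags -- the kind of
    minimal extraction a scraping project typically starts from."""

    def __init__(self):
        super().__init__()
        self.links = []       # finished (text, href) pairs
        self._href = None     # href of the <a> tag we're inside, if any
        self._text = []       # text fragments seen inside that tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def extract_links(html: str):
    """Return a list of (link text, href) pairs found in an HTML string."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

From there, the "pretty, useful visualization" half of the project is where most of the interesting collaboration (and disagreement!) tends to happen.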

Time spent on the counterfactual (more casual, introduction-y conversations) is comparatively less valuable.

This basically comes down to my bit in Appendix 1. The tradeoff is that you probably need to spend more time working with people than having casual intro conversations. However, there also seem to be some globally better moves: e.g. finding things you have to do anyway, like classes, and doing them together with people.

Overall, I'd guess it would be worth experimenting in this direction as long as high-impact teams are reporting that a shortage of promising-seeming candidates is less of a problem than vetting the candidates they have (these are ultimately fungible because of role scoping, but whatever).

Comments (6)

Maybe EA could do with some more hackathon-type events? It seems like one of the easiest ways (from the individual's POV) to get a very intensive experience of working with many different people!

eca

This seems like a great idea! I actually woke up this morning realizing I'd left it off my list!

One part of my perspective possibly worth re-emphasizing: IMO, what you choose to work on together does not need to be highly optimized or particularly EA. At least to make initial progress in this direction, it seems plausible that you should be happy with anything challenging (without an existing playbook), collaborative, and "real" in the sense of requiring you to act like you would if you were solving a real problem instead of playing a toy game.

So in this case, while "EA should host hackathons" seems reasonable and exciting to me, especially as a downstream goal if working together turns out to be really useful, it doesn't need to block easier stuff. I don't think a shortage of good hackathon prompts or organizers should stop groups of EAs from voting on the most interesting local hackathon run by someone else, going together as a group, and teaming up to work on something (with an EA lens if you want). That's just extremely low cost to try out.

(I'm also noticing that "host an awesome EA hackathon" seems like the type of collaborative, challenging project a person could team up on!)

"I'd trade at least 5 high-quality introductions like the one above for a single intro from the same distribution."

Personally, when I'm recruiting for a role, I'm usually so hungry for more leads that I'm happy to follow up on very weak references. I would take 5 high-quality introductions, I would take one super-high-quality introduction; I would like all of the above. Yeah, it's great to hire people who have worked with a friend of yours before, but those will never be 100% of the good candidates.

This may very much depend on what sort of role you're hiring for, though. Most of my experience is in hiring software engineers, where hiring is almost always limited by how many candidates you can find who will even talk to you, rather than your ability to assess them.

One impression I could imagine having after reading this post for the first time is something like: "eca would prefer fewer connections to people and doesn't value that output of community building work" or even more scandalously, "eca thinks community builders are wasting their time".

I don't believe that, and would have edited the draft to make that more clear if I had taken a different approach to writing it.

A quick amendment to that vibe.

  1. Community building is mission critical. It's also complicated, and not something I expect to have good opinions about currently, overall, because of lack of context and careful thought, among other things.
  2. I have personally found these types of introductions enormously valuable, especially in other phases of my career, and it would make me very sad if people turned them off!
  3. Even if I didn't find them personally valuable, I'd guess that they were still very valuable overall because I expect this to be person and context dependent, and I see others get value.
  4. Even if more should be invested in work connections overall, it's not clear that the folks sending me intros (THANK YOU!! PLEASE DON'T STOP!) should be the ones doing the collaboration themselves ("Have you worked with [Someone] on anything? Or do you know anyone who has?"). Gains from specialization could imply that the folks making intro connections should focus on that, and others should do more of the deliberate working on stuff.
  5. Rather, my nebulous aim is some combo of A) sharing what my intuitions (narrowly trained) say the marginal effect of trading intros for work experience would be, for me, B) gesturing at an opportunity for even more value to be produced by community builders, if my experience generalizes, and C) hoping selfishly that someone will help me understand what's going on here, so I quit complaining about it to my dinner companions before impulsively writing a forum post.

Not sure how much of this is in my head, but that's a thing.

Meta note: this was an experiment in jotting something down. I've had a lot of writer's block on forum posts before and thought it would be good to try erring on the side of not worrying about the details.

As I'm rereading what I wrote late last night I'm seeing things I wish I could change. If I have time, I'll try writing these changes as comments rather than editing the post (except for minor errors).

(Curious for ideas/approaches/ recommendations for handling this!)

Thank you for writing this! I really like the idea -- working on things with other people is messy and difficult and fun, but also a great way to build useful skills. The skill of understanding what skills you possess and what complementary skills others you work closely with possess seems valuable to develop.
