

TL;DR

  • This post describes the experience of trying to contribute independently to AI governance and biosecurity projects. Coordination gaps made it difficult to identify what work was actually needed.
  • Proposals include having fellowships maintain updated summaries of open questions and facilitate connections among applicants who were not selected.

Trying to “Just Do the Work”: One Attempt at Independent Contribution

Many people in EA argue that applying for jobs is not a good strategy for getting hired in this environment. Instead, job seekers should identify a project that needs doing and start making an impact by working on it, developing relevant skills, a network, and a portfolio of projects that prove their talent along the way. Thinking this was a good idea, I decided to follow the advice in my final months at a contract position. With a background in policy and international relations, I thought biosecurity or AI governance could be a good starting point: I would identify a project that needed to be done and get started.

I reviewed project descriptions from fellowship programs such as SPAR and FIG, and reached out to the researchers working on AI governance and biosecurity whose projects aligned most with my background. I wanted to ask them about related gaps in the field and similar projects where independent contributions might be useful.

The researchers I spoke with were thoughtful, engaged, and genuinely willing to help. Eventually, I came across a list of research questions, and one at the intersection of AI and biosecurity governance caught my interest. I invested time defining the scope of my research, mapping the regulatory landscape, and laying the groundwork, all while remaining in conversation with various experts. By the time I had a complete view of how I would organise my research, I learned that someone at an EA-related organization was already working on a nearly identical topic. That work was not public, so none of the experts I had consulted could have told me I was duplicating it: each person could speak to their own projects, but no one had a view of the whole landscape. That is not a criticism of individuals. It reflects a coordination gap that no individual researcher can fix on their own.

I might simply have been unlucky. Another attempt to do research, in global health, resulted in a solid partnership with an organisation that has clear views on how to make my work impactful once it is finished. However, I do not believe my experience with independent research on AI governance and biorisk was uncommon. The advice to “just do the work” assumes you can identify what work is needed. That level of situational awareness grows out of professional networks built through conferences, institutional affiliations, and access to pre-publication research: the kind of access that comes with being a salaried researcher at an established organization, and that cannot be gained through casual networking. For mid-career professionals operating independently, this knowledge does not exist by default. Neither does financial runway: spending weeks having conversations, following leads, and doing actual research is itself unpaid professional work. The advice implicitly treats these resources as a given, when for many people in this transition, they are precisely what is missing.

Possible Entry Points for Improvement

What follows are proposals that I would be interested in exploring further.

Updated project lists

Researchers sometimes post lists of projects to investigate. For newcomers who do not yet know the relevant questions, these lists are invaluable. They provide direction when it is hard to find.

The problem is that these lists are sometimes outdated. A page from six months ago may no longer reflect the state of the field. Projects may have been completed, priorities may have shifted, and the researcher may have moved on. Since many newcomers rely on the same sources, the risk of duplication is high. Several people may independently begin work on a question that someone else has already answered.

I do not have a clean solution. The problem is easier for fellowships, which clearly mark which projects have a team selected and which are still up for grabs. For individual researchers, though, it would be unreasonable to expect them to maintain their lists: that would add a burden to people who are already overloaded and would only discourage them from drafting these useful lists in the first place. They cannot be expected to continuously track the field. Instead, we need systems that share the responsibility for maintenance. When someone selects a project from a list, they could add a comment with the date. Future readers can then contact that person directly to ask whether the work is ongoing, whether collaboration is welcome, or whether it has stalled. The system only works if people keep the community updated on their progress.

Connecting rejected applicants

A parallel gap exists on the supply side, concerning talent that has already been identified but lacks a structure to connect or collaborate. Fellowships and programs[1] receive far more qualified applicants than they can accept, and once rejected, those individuals disperse with no mechanism for connecting or collaborating.

The success of these research fellowships shows that there is great interest in the questions they study, but their infrastructure is insufficient to fully channel this enthusiasm. What if fellowships maintained more active waiting lists and facilitated connections among applicants who were not selected? These individuals could form working groups around proposed projects or questions that emerged from the application process. In some fellowships, project leaders have dropped out mid-program, and organizers encouraged teams to continue independently. Those teams already had a defined project, shared goals, and an established timeline, so it is unclear whether they would have succeeded had they lacked a leader from the start. Still, this sets a precedent: once the coordination problem has been solved, self-organized projects can work. I believe the missing piece is, for the most part, a structured way to initiate them.

I would like to hear from others who have tried to contribute independently. Did you find a path forward? Did you encounter similar coordination gaps? Do you have ideas I have not considered? If you have experiences to share, please reach out.

  1. ^

     These include SPAR, FIG, ERA, Pivotal, PIBBSS, MATS, CAISH, CHAI, IAPS, Tarbell, LASR, CLR, FLI, and AI Safety Camp. AI Safety Camp is an interesting example as projects are not necessarily proposed by experts. People motivated by a question select other motivated researchers to assemble a team to conduct novel research. This makes it a useful reference point for participant-driven coordination.


Comments

A couple of my thoughts, written quickly:

  • Sorry, it's a bummer to be scooped or even just feel scooped.
  • Most academic fields have, I think, less cross-org coordination than AI governance. I would be hesitant about trying to do generally more cross-org coordination in this space given that it's a departure from (what I view as) the norm in other fields.
  • As I am reading applications to hire AI governance researchers, one of my big questions is "has this person done relevant work before, successfully?". I don't think it would be much of a mark against that work if it was also similar to other work that was released at the same time, as long as it didn't seem like there was plagiarism and did seem like there were novel contributions.
  • Relatedly, multiple research teams taking independent stabs at the same question is often useful for reaching a higher quality of overall work, as they sometimes come up with different ideas/emphases/etc.
  • Some researchers have said (but I'm unsure where I land) that you almost never actually get scooped. Usually projects are a bit different in a way that is important and that you can emphasize in your output. Also you can bootstrap from that work to make your project even better (but again, be clear about your original contributions vs. others').

Thank you, Aaron. This is an important point. I agree that it is worth putting any work you have produced out there; that is exactly the spirit of draft amnesty week! In this case, though, I had only conducted preparatory work to delineate the project. Continuing as if the other paper did not exist was not a realistic path, and my contribution would not have been meaningfully distinct. When I start a research project independently, I do it without much mentorship or institutional feedback, which makes it genuinely hard to assess whether I am working on the right question, using the right methods, or producing something useful to the field.

As someone reviewing applications, how do you evaluate independent research produced outside of a fellowship or academic context? Is there a threshold of rigor or novelty below which it hurts more than it helps to include it? And do you have suggestions for how early-career people in this transition can get lightweight feedback on research directions before investing weeks into a project? I ask because the fellowship application process often requires demonstrating prior work to get the mentorship needed to produce prior work, and I am trying to figure out how to navigate this as efficiently as possible.

As someone reviewing applications, how do you evaluate independent research produced outside of a fellowship or academic context?

I'm not sure how to answer this. I try to evaluate all work based on its quality; whether a project was single-authored also matters a fair amount (and high quality single-author work is an especially strong signal).

Is there a threshold of rigor or novelty below which it hurts more than it helps to include it?

Maybe for rigor, probably not for novelty. Applicants and other researchers should of course be up front about what their contributions are.

And do you have suggestions for how early-career people in this transition can get lightweight feedback on research directions before investing weeks into a project?

Weeks sounds like it might be a lot. I encourage people to do Apart Research Sprints or other Hackathon-style things which are shorter. I'm not really sure about getting lightweight feedback. In my experience, when junior people ask me for feedback on a project idea, the project idea is usually too broad or vague for me to know if it's a good project, and they have usually put less than 30 min of effort into it. So maybe my advice there is something like "if you are going to ask a more established researcher for feedback on your project plan, you should have already put a couple hours into the project, including surveying the relevant literature, coming up with a detailed project plan, and doing a little bit of de-risking". I'm not sure, maybe that's more intense than I endorse. Fortunately, even without the goal of getting feedback from somebody else, these are useful steps to begin a project with.

I will also note that prior work does not always have to be extremely relevant. Academia exists and is by far the place where the most people learn research skills. 

I'm only in the beginning stages of trying to figure out how I may be able to contribute independently and haven't talked to a lot of individuals yet, but I agree on the difficulty of figuring out tractable, non-technical research questions and potential funding to pursue those questions. I submitted a proposal to IFP's Launch Sequence RFP and am noodling on one or two more. Kudos to you for taking the initiative!

Adapting a living literature review seems like one potential platform to host updated project lists. Since they already synthesize existing research, it'd be natural to look there to understand outstanding research areas. On your idea for folks to select from a list, it does strike me that there should be a little bit of friction for someone to demonstrate interest in the beginning and continued attention over time, like a requirement every three weeks that they provide a mini-update or the project will be considered abandoned.

I really like what you said about connecting rejected applicants. Inspired by @abrahamrowe's language in your Shared EA Operations Hiring Platform post, silver (finalists) and bronze (semi-finalists?) medalists could have a lot of impact. If someone hosted an RFP or was otherwise open to supporting exploration of new research streams, I'd say group applications from these candidates would merit serious review. To help make this happen: similar to how EA applications ask applicants whether they'd consent to having their information shared with other organizations, they could ask applicants whether they'd want to be connected with others who almost, but don't quite, make it to the finish line.

Btw, great back and forth with @Aaron_Scher! I had similar questions about independent research produced outside of a fellowship or academic context.
