Summary
Arkose is an early-stage AI safety fieldbuilding nonprofit focused on accelerating the involvement of experienced machine learning professionals in technical AI safety research through direct outreach, one-on-one calls, and public resources.
Between December 2023 and June 2025, we had one-on-one calls with 311 such professionals. 78% of those professionals said their initial call accelerated their involvement in AI safety[1].
Unfortunately, we’re closing due to a lack of funding. We remain excited about other attempts at direct outreach to this population, and think the right team could have an impact here.
Why are we closing?
Over the past year, we’ve applied for funding from all of the major funders interested in AI safety fieldbuilding work, and several minor funders. With a short funding runway and little to no feedback, rather than trying to massively change what we're doing to appeal to funders, we’re choosing to close down and pursue other options.
What were we doing? Why?
* Calls: we ran 1:1 calls with mid-career machine learning professionals. Calls lasted an average of 37 minutes (range: 10-79 minutes), and we had a single call with 96% of the professionals we spoke with (i.e. only 4% had a second or third call with us). On these calls, we focused on:
* Introducing existential and catastrophic risks from AI
* Discussing research directions in this field, and relating them to the professional’s areas of expertise.
* Discussing specific opportunities to get involved (e.g. funding, jobs, upskilling), especially ones that would be a good fit for the individual.
* Giving feedback on their existing plans to get involved in AI safety (if they had them).
* Connecting them with advisors to support their next steps in AI safety, if appropriate (see below).
* Supportive Activities:
* Accountability: after calls, we offered an accountability program where participants set goals for next steps and we checked in with them. 114 call participants set goals for check-ins.