I am writing this to reflect on my experience interning with the Fish Welfare Initiative, and to provide my thoughts on why more students looking to build EA experience should do something similar.
Back in October, I cold-emailed the Fish Welfare Initiative (FWI) with my resume and a short cover letter expressing interest in an unpaid in-person internship in the summer of 2025. I figured I had a better chance of getting an internship by building my own door than competing with hundreds of others to squeeze through an existing door, and the opportunity to travel to India carried strong appeal. Haven, the Executive Director of FWI, set up a call with me that mostly consisted of him listing all the challenges of living in rural India — 110° F temperatures, electricity outages, lack of entertainment… When I didn’t seem deterred, he offered me an internship.
I stayed with FWI for one month. By rotating through the different teams, I completed a wide range of tasks:
* Made ~20 visits to fish farms
* Wrote a recommendation on next steps for FWI’s stunning project
* Conducted data analysis in Python on the efficacy of the Alliance for Responsible Aquaculture’s corrective actions
* Received training in water quality testing methods
* Created charts in Tableau for a webinar presentation
* Brainstormed and implemented office improvements
I wasn’t able to drive myself around in India, so I commuted on the back of a coworker’s motorbike. FWI provided me with my own bedroom in a company-owned flat. Sometimes Haven and I would cook together at the residence, talking for hours over a chopping board and our metal plates about war, family, or effective altruism. Other times I would eat at restaurants or street food stalls with my Indian coworkers. Excluding flights, I spent less than $100 USD in total. I covered all costs, including international transportation, through the Summer in South Asia Fellowship, which provides funding for University of Michigan undergraduates.
There is often a clash between "alignment" and "capabilities": some say AI labs are pretending to do alignment while really doing capabilities work, while others say the two are so closely tied that it's impossible to do good alignment research without producing capability gains.
I'm not sure this discussion will be resolved anytime soon. But I think it's often misdirected.
I think what people are often really wondering is roughly, "is x a good person for doing this research?" Should it count as beneficial, EA-flavored research, or is it just being an employee at a corporate AI lab? The alignment-versus-capabilities discussion often seems secondary to this.
Instead, I think we should stick to a different notion: something is "pro-social" AI x-risk research (I'm not attached to the term) if it (1) has a shot at reducing x-risk from AI (rather than increasing it or doing nothing) and (2) is not incentivized enough by factors external to the lab, to pro-social motivation, and to EA (for example: the market, the government, the public, social status in Silicon Valley, etc.).
Note that (1) should include the risk that the intervention changes timelines in some negative way, and (2) does not mean the intervention isn't incentivized at all, just that it isn't incentivized enough.
This is broadly similar to the scale/tractability/neglectedness framework, but it (1) incorporates downside risk and (2) doesn't run into the problem of EAs wanting to do things "nobody else is doing" (including other EAs). EAs should simply do things that are underincentivized and good.
So, instead of asking things like "is OpenAI's alignment research real alignment?", ask "how likely is it to reduce x-risk?" and "is it incentivized enough by external factors?" That should be how we assess whether to praise the people there or advise people to go work there.
Thoughts?
Note: edited "external to EA" to "external to pro-social motivation and to EA"