I agree that we shouldn't use e2g as a shorthand for skillmaxing.
I am less optimistic about the 'fit' vs. raw competence point. It's not clear to me that a good fit for the position can easily be gleaned from work tests - a very competent person may acquire that 'fit' within a few weeks on the job, for example, once they have more context for the kind of work the organization wants. So even if two candidates look very different at the point of hiring, comparing them may be misleading unless we imagine both in an actual job context, having learned things they did not know at the time of hiring.
I am more broadly worried about 'fit' in EA hiring contexts because, as opposed to markers of raw competence, 'fit' provides a lot of flexibility for selecting on traits that are relatively tangential to work performance and often unreliable. For example, value-fit might select for likeminded folks who have read the same things the hiring manager has, reducing epistemic diversity. A fit in research interests likewise reduces epistemic diversity and locks in certain research agendas for a long time. A vibe-fit may simply select for friends and for those who have internalized in-group norms. A work test built around an explicitly EA project may select for those already familiar with EA, even if an outside candidate could easily pick up basic EA knowledge once on the job.
My impression is that, overall, EA has a noticeable and suboptimal tendency to hire likeminded folks and folks in overlapping social circles (friends and friends of friends). Insofar as 'fit' makes it easier to justify this tendency internally and externally, I worry that it will lead to suboptimal hiring. I acknowledge we may have very different kinds of 'fit' in mind here, but I do think the examples I give above occur in actual EA hiring decisions.
I haven't run hiring rounds for EA organizations, so I may be completely wrong - maybe your experience has been that after a few work tests it becomes abundantly clear who the right candidate is.
This is a cool list. I am unsure if this one is very useful:
* There aren't many salient examples of people doing direct work that I want to switch to e2g.
This is because I don't think we are able to evaluate which replacement candidate would have filled the role if the employed EA had instead done e2g. My understanding is that many extremely talented EAs have trouble finding jobs within EA, and that many of them are capable of work at the quality of current EA employees.
I think this reason cuts both ways:
* E2g is often less well optimised for learning useful object-level knowledge and skills than direct work.
My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, though my impression is that this is improving as EA organizations professionalize. For example, I wouldn't be surprised if, on average, a highly talented undergrad became a more effective employee of an EA organization by spending two years doing ETG at a non-EA corporation before starting direct work. And if we're lucky, such experiences outside EA would promote epistemic diversity and reduce the risk of groupthink in EA organizations.
My understanding is that competition for EA jobs is extremely high, and that posted roles attract plenty of outstanding candidates. This seems to me strong evidence that a fair share of people applying to EA jobs should consider ETG, unless they have reason to believe that they specifically outshine the other applicants (i.e., that the job would not otherwise be filled by an equally competent person).
Regarding skeptical optimism, how about:
* Cautious optimism
* Safety-conscious optimism
* Lighthearted skepticism
* Happy skepticism
* Happy worries
* Curious optimism
* Positive skepticism
* Worried optimism
* Careful optimism
* Vigilant optimism
* Vigilant enthusiasm
* Guarded optimism
* Guarded enthusiasm
* Mindful optimism
* Mindful enthusiasm
Just throwing a bunch of suggestions out in case one of them sounds good to your ear.
To AMF, as part of this yearly fundraiser I run: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
I love your blog; it reliably provides the highest-quality EA criticism I have come across, and it has shifted my view on a handful of issues.
It may be helpful for non-philosophy readers to know that the journals these papers are published in are very impressive. For example, Ethics (which published the 'Mistakes in Moral Math of Longtermism' paper) is the most well-regarded ethics journal I know of in our discipline - roughly what Science or Nature would be for a natural scientist.
I am somewhat disheartened that those papers did not gain visible uptake from key players in the EA space (e.g. 80K, Openphil), especially since they were published at a time when most EA organizations strike me as moving strongly towards longtermism/AI risk. My sense is that they were briefly acknowledged, then simply ignored. I don't think the same would have happened with, say, a Science or Nature paper.
To stick with the Mistakes in Moral Math paper, for example: I think it puts forward a very strong argument against the very few explicit numerical models of EV calculations for longtermist causes. A natural longtermist response would be to adjust existing models or present new ones, incorporating factors such as background risk that are currently left out. I have not seen any such models. Rather, I feel that longtermist pitches often get very handwavey when pressed on explicit EV models that compare their interventions to e.g. AMF or GiveDirectly. I take a central pitch of your paper to be that it is very bad that we have almost no explicit numerical models, and that those we have neglect crucial factors. To me, it seems that this very valid criticism went largely unheard: I have not seen new numerical EV calculations for longtermist causes since publication. This may of course be a me problem - please send me any such comparative analyses you know of!
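To illustrate the kind of adjustment I have in mind, here is a deliberately crude toy model with stylized numbers of my own (a sketch, not a model taken from the paper): suppose each century carries a constant background extinction risk $r$, and a century of existence is worth $v$. Then the expected value of the future is roughly

$$\sum_{t=0}^{\infty} v(1-r)^t = \frac{v}{r},$$

and cutting this century's risk by an absolute amount $f$ adds only about $f \cdot v/r$ in expectation. With $r = 0.01$ per century, the whole future is worth about $100v$, and even fully eliminating this century's risk gains roughly $v$ - one century's worth of value, not an astronomical amount. That, as I read it, is why omitting background risk so drastically inflates longtermist EV estimates.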
I don't want to end on such a gloomy note - even if I am right that these criticisms are valid and that EA fails to update on them, I am very happy that you do this work. Other critics often strike me as arguing in bad faith or being fundamentally misinformed - it is good to have a good-faith, high-quality critique to discuss with people. And in my EA-adjacent house, we often discuss your work over beers and food and greatly enjoy it, haha. Please keep it coming!
I am organizing a fundraising competition between Philosophy Departments for AMF.
You can find it here: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
Previous editions have netted (ba-dum-tss) roughly $40,000:
https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9189
Any contributions are very welcome, as is sharing the fundraiser. A more official-looking announcement is up on Daily Nous, a central blog in academic philosophy; people have found it ideal for sharing via e.g. department listservs:
https://dailynous.com/2024/12/02/philosophers-against-malaria-a-fundraising-competition/
These fundraisers are relatively low-effort to set up - I spend maybe 10-20 hours a year on them. If you are interested in setting up something similar for your discipline or social circles, feel very welcome to reach out for help.
Hey Emmannaemeka,
Thank you for writing this! I have little insight into which EA roles you might or might not be a good fit for, but I wanted to chime in on ways of fitting into the EA community, as opposed to EA orgs. I am in academia too, and do not myself strive for a job at an EA org. I do not think this makes me 'less EA': there are many really good ways to contribute to the overall EA project outside of EA organizations.
I find that one of the privileges of academia is teaching ambitious, talented students. Many students enter university with a burning zeal to change the world and bring about positive change. As teachers, we can have a real impact by guiding such students towards realizing their values and entering positions where they can effectively make the world a better place. I am naturally biased here, but I think it's plausible that teaching can have a bigger impact than direct work - it is a realistic aim to help multiple students grow into direct roles at EA-style organizations. I often think that many of these students are 'better fits' for such roles than I would be.
It strikes me that as a faculty member in a genuinely meaningful and important field, you'd be in a premier position to have impact through your teaching.