According to public reports, Dan Hendrycks has been influenced by EA since he was a freshman (https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/).
He did the 80,000 Hours program.
He worries about AI bringing about the end of humanity, if not the planet.
After getting his Ph.D., he started an AI safety organization instead of joining one of the many AI startups.
And he's taken $13M in donations from two EA organizations - Open Philanthropy and the FTX Foundation.
Yet when asked about it by the press, he denies being a member of the Effective Altruism movement. For instance, see this Bloomberg interview (https://www.bloomberg.com/news/newsletters/2024-06-27/an-up-and-coming-ai-safety-thinker-on-why-you-should-still-be-worried).
As an aside, Hendrycks is not alone in this. The founders of the Future of Life Institute have done the same thing (https://www.insidecyberwarfare.com/p/an-open-source-investigation-into).
I'm curious to know what others think about Hendrycks's attempts to dissociate himself from Effective Altruism.
Many people across EA strongly agree with you, across many of these dimensions, about the flaws of the Bay Area AI-risk EA position/orthodoxy. And I strongly disagree with the implication that, to count as an EA, you have to be a strong axiological longtermist, believe you have no special moral obligations to others, and live in the Bay while working on AI risk.
To the extent they gave you the impression that this is all EA is or was, I'm sorry. The same goes for any bad effects this had, explicitly or implicitly, on the direction of your work or on the future of AI Safety as a cause. And even if I viewed AI Safety as a more important cause than I currently do, I would still want EA to share the task of shaping a beneficial future of AI with the rest of the world, and to pursue more cooperative strategies rather than assuming it is the only movement that can or should be a part of that effort.
tl;dr - To me, you seem to be overindexing on a geographically concentrated, ideologically homogeneous group of people, institutions, and ideas as 'EA', when there's a lot more to EA than that.
I don't think Dan's statement implies that one must endorse those fairly specific beliefs to "count" as an EA. Given that there is no authoritative measure of who is or isn't an EA, being an EA is more akin to a social identity one can choose to embrace or reject.
It's common for an individual to decide not to identify with a certain community because of their aversion to a subpart or subgroup of that community. This remains true even where the subgroup is only a minority of the larger community, or the subpart only a minor-ish portion of the community's ideology.
My guess is that public identification as an EA is not a plus for the median established AI safety researcher, so there's no benefit for someone in that position to adopt an EA identity if they have any significant reservations.