According to public reports, Dan Hendrycks has been influenced by effective altruism (EA) since his freshman year (https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/).
He did the 80,000 Hours program.
He worries about AI bringing about the end of humanity, if not the planet.
After getting his Ph.D., he started an AI safety organization instead of joining one of the many AI startups.
And he's taken $13M in donations from two EA orgs: Open Philanthropy and the FTX Foundation.
Yet he denies being part of the Effective Altruism movement when asked about it by the press. See, for instance, this Bloomberg piece (https://www.bloomberg.com/news/newsletters/2024-06-27/an-up-and-coming-ai-safety-thinker-on-why-you-should-still-be-worried).
As an aside, Hendrycks is not alone in this. The founders of the Future of Life Institute have done the same thing (https://www.insidecyberwarfare.com/p/an-open-source-investigation-into).
I'm curious to know what others think about Hendrycks's attempts to disassociate himself from Effective Altruism.
I don't think Dan's statement implies there is some fairly specific set of beliefs one must endorse to "count" as an EA. Given that there is no authoritative measure of who is or isn't an EA, it is more akin to a social identity one can choose to embrace or reject.
It's common for an individual to decide not to identify with a community because of an aversion to a subgroup within it or to some part of its ideology. This remains true even when the subgroup is only a minority of the larger community, or the ideological component is a relatively minor one.
My guess is that public identification as an EA is not a plus for the median established AI safety researcher, so there's no benefit for someone in that position to adopt an EA identity if they have any significant reservations.