It's no secret, both within and outside the community, that EA played a significant role in building OpenAI. To name a few of the connections:
- There were at least two committed EAs on the OpenAI board prior to Altman's firing.
- In 2017, Open Phil made its largest grant to date, $30 million, to OpenAI. As part of this partnership, Holden Karnofsky, Open Phil's CEO, joined the OpenAI board. Karnofsky held this position until 2021, when he stepped down because of his wife's role in co-founding Anthropic.
- Many EAs have worked at OpenAI, and until recently EA orgs like 80,000 Hours recommended OpenAI as a high-impact place to work.
Regardless of the exact degree of affiliation, the EA movement has broadly supported OpenAI as an AI safety effort. However, OpenAI's reputation and actions as an AI safety organization have come into question over the last couple of years. I think many in the community would agree that OpenAI can no longer be taken seriously as an AI safety org, and it's arguable that OpenAI has had a significant negative impact on the world by accelerating AI development while neglecting safety. Despite this tension between OpenAI's actions and EA's values, I have yet to hear any strong voices clarifying the relationship between the two. Notably:
If EA is serious about preventing the harms of AI and AGI, holding companies like OpenAI accountable is an important step. The affiliation also reflects poorly on EA's brand, making the movement look hypocritical and irresponsible. And speaking up could be a strategic opportunity for EA to take a stand and reinvent its brand.
So, why hasn't EA denounced OpenAI?

What, concretely, would that involve? What are you proposing?
I'm not completely sure, which is why I posed it as a question. I posted this mostly because I didn't see much explicit discussion of this on the Forum and found that rather strange.
But some actions might look like individual EA orgs publicly taking a stance, or an open letter on the Forum.
"EA" isn't one single thing with a unified voice. Many EAs have indeed denounced OpenAI.
As an EA: I hereby denounce OpenAI. They have greatly increased AI extinction risk. The founding of OpenAI is a strong candidate for the worst thing to happen in history (time will tell whether this event leads to human extinction).
If you think it's merely 'arguable' that OpenAI has had a significant negative impact through acceleration, then I think you are significantly more positive about it than the median EA.
Hahaha, exactly. So then again the question: why hasn't the median EA done more to counteract the harm OpenAI is doing to the world? Or maybe they have and I just don't know about it?
I think near-term AGI is highly unlikely; specifically, I think there's significantly less than a 1 in 5,000 (0.02%) chance of AGI before the end of 2034. I also think that claims about existential risk from AGI are poorly supported. But my impression is that a lot of the people in EA who do think near-term AGI is likely, and who do think x-risk from AGI is significant, have negative views on OpenAI. The EA community doesn't have an organization that typically makes position statements on behalf of the community. The most straightforward way to get something like this going would be to post an open letter on the Forum and ask people to sign on. Probably some people would sign it. (I wouldn't.)
People in EA might be more reluctant to denounce Anthropic, though, given that Holden Karnofsky now works there, Dustin Moskovitz of Good Ventures and Coefficient Giving (formerly Open Philanthropy) is an investor, Joe Carlsmith (formerly of Coefficient Giving) now works there, Amanda Askell has worked there for a while, and so on. Also, some people see Anthropic as the white hat to OpenAI's black hat, even though there's basically no difference (i.e., both just make chatbots and everything's fine).
This has been brought up a few times before. Obviously EA isn't a monolith, but I personally might like the idea of making Sam Altman a "villain" even better than denouncing OpenAI in general. Either would do, though. I would love for some EA orgs (not just individuals), and even meta orgs, to take a step like this. Yes, it would be a risk, but I think it could have huge benefits in reassuring the public and EA doubters post-FTX, in addition to the obvious AI safety benefits. Many still associate EA with OpenAI, which is sad.
Interestingly, this sentiment seems to have been met with a little more disagreement than agreement in previous discussions.
I agree with @Saul Munn, though, that it could be helpful to spell out exactly who would do the denouncing and how it would happen.
I was thinking along similar lines as you about EA orgs taking stances, and I like the idea that Sam Altman or other specific actors might be easier to make "targets".
But yeah, I was mainly curious to get a sense of why I haven't seen more action from EA on OpenAI if people agree (with me) that it is so bad for AI safety. Maybe people just don't think OpenAI is bad.