NOTE: I will abbreviate "reaching out" + "right to reply" as R+R.
Appreciate the clarification. Do you have any advice for people like myself who have a very different perspective on the value of what you recommend (i.e. R+R)? The way you have described it, I would normally consider the decision of what to do to be within my discretion as a poster. As an analogy, I try to write well-reasoned arguments, but I understand that not too infrequently I will probably fail to do so. I might write something, think that I could refine the arguments if I took more time, but decide what I have is good enough. But R+R seems much more binary than "make well-reasoned arguments". It's hard for me to shake the feeling that failing to do R+R in certain cases would be perceived as doing something distinctly "wrong".
General disagreement/critical engagement with the ideas of an organisation could technically fall into this category, but is generally read as more collaborative than as an accusation of wrongdoing.
This seems like it could get awfully messy. I think strong disagreements tend to coincide with different views on the nature of the criticism, how accusatory it is, what an appropriate tone is, etc. It seems like the exact cases where some guidance is most needed are the ones where people will heavily contest these types of issues.
Related to that, one of my concerns is that focusing too much on R+R may predictably lead to what I consider unproductive discussions. I think back-and-forth among people who disagree has great value. I worry that focusing on R+R has a "going meta" problem: people will argue about whether the degree of R+R done by the critic was justified instead of focusing on the object-level merits of the criticism. The R+R debate can become a proxy war for people whose main crux is really the merits.
I also worry that expectations around R+R won't necessarily be applied consistently. R+R is in a sense a "regressive tax" on criticism: in practice it may advantage orgs with more influence over orgs/people with less influence. I also worry that there may be a "right targets" dynamic, where people criticizing "the right targets" may not be subject to the same expectations as people targeting well-liked EA orgs. This is why some of my questions above relate to "who" R+R applies to.
I think the Forum is naturally quite sceptical and won't let bad faith arguments stand for long
I agree with this, but the logic to me suggests that R+R might not really be needed. The OP raises the concern that orgs will have to scramble to answer criticism, but if they think people on the Forum might find the criticism warranted, doesn't that indicate that it would in fact be valuable for the org to respond? I personally think this would overall produce a better result without R+R, because orgs could in some (perhaps many) cases rely on commenters to do the work, and only feel the need to respond when a response provides information the community doesn't have but would find useful. The fact that they feel the need to respond is a signal that there is missing information they may uniquely have access to. Are you saying you think the Forum can identify bad faith but can't identify criticism quality as accurately?
I don't think it will matter if a bad faith response is published alongside a critique.
I agree, but I think similar reasoning applies to the initial criticism. It's obviously not good to have bad criticism, but it's not the end of the world, and I think it's often reasonably likely that the Forum will respond appropriately. To the extent possible, I think it would be good to have symmetry, where nothing specific is required just because a post is "criticism".
I agree people can downvote criticism based on whether the person reached out, according to their own judgement. I might assess any given case differently than the typical EA Forum voter, but people should be allowed to vote based on their own views.
I also agree that an organization has no obligation to respond to any given criticism, even if the critic did reach out in advance.
No one is having their posts deleted for not reaching out, so the choice is ultimately up to the critic.
I would distinguish between a few things:
I think you can downvote something that is sub-optimal but not norm-violating, although it's debatable exactly what the balance should be, so I can see an argument that 2 and 3 kind of bleed together.
On the other hand, I think it's pretty fair to want to distinguish 1 from 2/3, and to expect a reasonable degree of clarity on 1. It's reasonable to want to understand what the moderators consider a norm even if they won't remove posts for violating it. I understand moderators can't give 100% exact standards, because then people would abuse that by tip-toeing up to the line, but I believe my questions above go to pretty fundamental aspects of the issue; they aren't just random nitpicks.
I would also like to understand to what degree the norm in question respects some version of viewpoint neutrality. The OP seems to me to portray the ask as essentially viewpoint-neutral (within the category of "criticism", anyway). I'm not so sure that would be the case if we really ran down the answers to my questions above. I have no problem with people upvoting and downvoting based on non-viewpoint-neutral considerations (it would be kind of crazy to do otherwise). I think moderation being highly dependent on viewpoint could be more of an issue.
This post is about criticism of EA organizations, so it doesn’t apply to OpenAI or the U.S. government.
I take this to be the case as well, but I think it would be worth making this explicit.
I interpreted this post as mostly being about charities with a small number of employees and relatively small budgets that either actively associate themselves with EA or that fall into a cause area EA generally supports, such as animal welfare or global poverty.
I think this is a fairly reasonable heuristic; I personally find the concept of punching up vs. punching down helpful for calibrating criticism. But I don't think this means there should be a norm that one must reach out to orgs before criticizing, or that a right of reply is required. I think we can judge criticisms on their reasonableness, and the individual critic should be responsible for navigating these factors and others and deciding when these things (reaching out, allowing a reply) would be appropriate and make sense.
Most commentary I have read on the EA Forum about this includes what is essentially a bad faith exception: if you are worried about the org you are criticizing acting in bad faith, retaliating in some way, etc., you don't need to do these things. I think that probably applies to small orgs just as much as large orgs. This seems to suggest there is no general requirement to do these things for small orgs; maybe you should just have a lower bar in your reasonableness calculation for small orgs vs. large ones.
If you wanted to criticize Good Ventures, Open Philanthropy, GiveWell, GiveDirectly, or the Against Malaria Foundation, then I think you could send them a courtesy email if you wanted, but they have so much funding and — in the case of Open Philanthropy at least — a large staff. They’re also already the subject of media attention and public discourse.
Part of my interest here is in understanding what the actual norm is that people intend to apply. If the norm is that large orgs aren't included, I think it would be worth having that stated explicitly. I'm somewhat doubtful that is what is intended in the OP, but if so it would be good to know.
I'm already on record in this comment thread that I don't agree with the norms laid out above regarding reaching out to orgs and a right to reply. At the same time, I'm extremely worried that you all at VettedCauses will see the focus on those issues and assume that you have some amazing criticism that the community is ignoring simply because you didn't color within the lines on those issues. Although I haven't followed whatever is going on with you all, I very seriously doubt that is the case. It seems extremely likely to me that people have given valid push-back to your criticism, and you yourselves are now ignoring that push-back. You can't just criticize others but ignore criticism directed at yourselves!
Yarrow points out that this could be an existential issue for your organization; I think you all need to take that type of criticism seriously on board and think long and hard about it. I hope the fact that I disagree about things like right to reply but still share these concerns helps get it through to you all that the criticism you are receiving is not solely because you failed on those counts. I don't give a single fuck about whether you gave someone a right to reply or whatever, but I still think what you all have been doing has a very high chance of being inappropriate, and your criticisms probably have many serious issues on their merits.
Editing to add: Although I agree with what I said above, I've also thought about this whole situation a bit, and I'm sure it feels terrible for you all, and I'm sorry for that. I think you all probably made some mistakes, but so have all of us. You all aren't bad people or bad EAs or anything like that. Giving criticism is hard, and it's unfortunately an area where mistakes that wouldn't even get noticed elsewhere can earn you a lot of negative attention. Although I think there is important stuff for you all to learn from here, I hope you don't take this experience as an indictment of you all as people.
I strongly disagree with the idea that there is a general obligation to reach out to someone before you publicly criticize them, and I've been considering writing a post explaining my case. I'd like to ask some questions to better understand the positions that people on the forum/EA community hold on this topic.
You talk about practices you'd like to "encourage" but later speak of "these norms", which I take to mean the obligation to reach out and to offer a "right of reply". There are some things that are good to do, but where failing to do them does not violate a norm. If someone makes a post that criticizes someone on the Forum but does not reach out to the target of their criticism first, would you consider that to be violating a norm of the Forum, even if the violation won't result in any enforcement?
Some posts that express similar views focus on criticism directed at organizations (e.g. "run posts by orgs"). Does the entity at which criticism is directed impact what a critic is expected to do? For example, it would surprise me if I was expected to reach out to OpenAI, the DOJ, or Amazon prior to making a post criticizing one of those entities on the forum. Similarly, people sometimes make posts that respond to criticism of EA or EA institutions that is published in other venues. Those responses are sometimes critical of the authors of the original criticism. I would also be surprised if the expectation was that such posts offer a right of reply to the original critics.
Lizka previously wrote a post about why, how and when to share a critique with the subject of your criticism. I highly recommend reading that post — she also includes a helpful guide with template emails for critics.
Appendix 3 of this post mentions this:
Criticism of someone’s work is more likely than other kinds of critical writing (like disagreement with someone’s written arguments)
What is in scope for "criticism" in this context? People may reasonably disagree on whether a particular piece of critical writing is more about public arguments/evidence (and thus is like disagreement with someone's arguments) or not. This also seems to suggest that if an org does something and publishes its reasons, the critic might not need to reach out (though it's unclear to me what the standard is), while if the org simply states it is doing something without giving reasons, a critic would have to reach out.
The other appendices mention cases where the target of criticism is not expected to act in good faith, and the "run posts by orgs" post mentions a similar exception for when the person/org being criticized may behave badly if a critic reaches out. I think it's not uncommon that critics and their targets have major disagreements about whether these types of beliefs are reasonable. When can one invoke this type of reasoning for not reaching out?
My personal take is that there are pretty reasonable arguments that what we have seen in AI/ML since 2015 suggests AI will be a big deal. I like the way I have seen Yoshua Bengio talk about it: "over the next few years, or a few decades". I share the view that either of those possibilities is reasonable. People who are highly confident that something like AGI is going to arrive over the next few years are more confident in this than I am, but I think that view is within the bounds of reasonable interpretation of the evidence. I think the opposite view, that something like AGI is most likely further than a few years away, is also within those bounds.
Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)
I think this is a healthy attitude, and one worth appreciating. We may get answers to these questions over the next few years; that seems pretty positive to me. We will be able to resolve some of these disagreements productively by observing what happens. I hope people who have different views now keep this in mind, and that the environment remains a good place for people who disagree now to work together in the future if some of these disagreements get resolved.
I will offer the EA Forum internet-points equivalent of a fruit basket to anyone who would like one in the future if we disagree now and they are later proven right and I am proven wrong.
I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic.
Can you say what view it is you think is crazy? It seems quite reasonable to me to think that AI is going to be a massive deal and therefore that it would be highly useful to influence how it goes. On the other hand, I think people often overestimate the robustness of the arguments for any given strategy for actually doing that influencing. In other words, it's reasonable to prioritize AI, but people's AI takes are often very overconfident.
I appreciate your comment.
It seems clear that if Jaime had different views about the risk-reward of hypothetical 21st century AGI, nobody would be complaining about him loving his family.
I do think this is substantially correct, but I also want to acknowledge that these can be difficult subjects to navigate. I don't think anyone has done anything wrong here; I'm sure I myself have done something similar to this many times. But I do think it's worth trying to understand where the central points of disagreement lie, and I think this really is the central disagreement.
On the question of changing EA attitudes towards AI over the years: although I personally think AI will be a big deal, could be dangerous, and that those issues are worthy of significant attention, I can also certainly see reasons why people might disagree and why those people would have reasonable grievances with decisions by certain EA people and organizations.
An idea I have pondered for a while about EA is a theory about which "boundaries" a community emphasizes. Although I've only ever interacted with EA by reading related content online, my perception is that EA really emphasizes the boundary around the EA community itself, while de-emphasizing the boundaries around individual people or organizations. The issues around Epoch, I think, demonstrate this. The feeling of betrayal comes from viewing "the community" as central. I think a lot of other cultures that place more emphasis on those other boundaries might react differently. For example, at most companies I have worked at, although they would certainly never be happy to see an employee leave, they wouldn't view moving to another job as a betrayal, even if the employee went to work for a direct competitor. I personally think placing more emphasis on orgs/individuals rather than the community as a whole could have some benefits, such as with the issue you raise about how to navigate changing views on AI.
Although emphasizing "the community" might seem ideal for cooperation, I think it can actually harm cooperation in the presence of substantial disagreements, because it generates dynamics like what is going on here: people feel like they can't cooperate with people across the disagreement. We will probably see some of these disagreements resolved over the next few years as AI progresses. I for one hope that even if I am wrong I can take any necessary corrections on board and still work with people I disagreed with to make positive contributions. Likewise, I hope that if I am right, the people I disagreed with still feel like they can work with me despite that.
As a side note, it’s also strange to me that people are treating the founding of Mechanize as if it has a realistic chance to accelerate AGI progress more than a negligible amount — enough of a chance of enough of an acceleration to be genuinely concerning. AI startups are created all the time. Some of them state wildly ambitious goals, like Mechanize. They typically fail to achieve these goals. The startup Vicarious comes to mind.
I admit I had a similar thought, but I am of two minds about it. On the one hand, I think intentions do matter. I think it is reasonable to point out if you think someone is making a mistake, even if you think ultimately that mistake is unlikely to have a substantial impact because the person is unlikely to succeed in what they are trying to do.
On the other hand, I do think the degree of the reaction and the way that people are generalizing seems like people are almost pricing in the idea that the actions in question have already had a huge impact. So I do wonder if people are kind of over-updating on this specific case for similar reasons to what you mention.
Although I haven't thought deeply about the issue you raise, you could definitely be correct, and I think they are reasonable things to discuss. But I don't see their relevance to my arguments above. The quote you reference is itself discussing a quote from Sevilla that analyzes a specific hypothetical. I don't necessarily think Sevilla had the issues you raise in mind when he was addressing that hypothetical. I don't think his point was that, based on forecasts of life extension technology, he had determined that acceleration was the optimal approach in light of his weighing of 1-year-olds vs. 50-year-olds. I think his point is more similar to what I mention above about current vs. future people. I took a look at more of the X discussion, including the part where that quote comes from, and I think it is pretty consistent with this view (although of course others may disagree). Maybe he should factor in the things you mention, but to the extent his quote is being used to determine his views, I don't think the issues you raise are relevant unless he was considering them when he made the statement. On the other hand, I think discussing those things could be useful in other, more object-level discussions. That's kind of what I was getting at here:
I think, at bottom, the problem is that Sevilla made mistakes in his analysis and/or decision-making about AI. His statements aren't norm-violating, they are just incorrect (at least some of them are, in my opinion). I think it's worth having clarity about what the actual "problem" is.
I know I've been commenting here a lot, and I understand my style may seem confrontational and abrasive in some cases. I also don't want to ruin people's day with my self-important rants, so, having said my piece, I'll drop the discussion for now and let you get on with other things.
(Although if you would like to respond you are of course welcome; I just mean to say I won't continue the back-and-forth after, so as not to create pressure to keep responding.)
Prioritising young people often makes sense from an impartial welfare standpoint
Sure, I think you can make a reasonable argument for that, but if someone disagreed, would you say they lack impartiality? To me it seems like something that is up for debate, within the "margin of error" of what is meant by impartiality. Two EAs could come down on different sides of that issue and still be in good standing in the community, and wouldn't be considered to reject the general principle of impartiality. Likewise, I think we can interpret Jeff Kaufman's argument above as expressing a similar view about an individual's loved ones: it is within the "margin of error" of impartiality to have a higher degree of concern for loved ones, even if that might not live up to the platonic ideal of impartiality.
My point in bringing this up is that the exact reason why the statement in question is bad seems to be shifting a bit over the course of the conversation. Is the core reason Sevilla's statement is objectionable really that it might up-weight people in a certain age group?
For some reason I find this title delightful. I kind of wish I could have an "argues without warning" flair or something.
I agree with the arguments you present above and your conclusion about preferred norms. That said, I think people might have in mind certain types of cases that might justify the need for reaching out beyond the case of general "criticism". For example, imagine something like this:
Now, my view is, even if this is what happens, this is still a positive outcome, because, like you say:
Transparency has costs, but I think they are usually internal costs to the org, while transparency also has external benefits, and thus would be expected to be systematically under-supplied by orgs.
At the same time, I think most cases of criticism are realistically more mixed, with the critic making reasonable points but also some mistakes, and the org having some obvious corrections to the criticism but also some places where the back-and-forth is very enlightening. Requiring people to reach out I think risks losing a lot of the value that comes from such "debatable" cases for the reasons you mention.
Another set of cases that is worth separating out are allegations of intentional misconduct. I think there are particular reasons why it might make sense to have a higher expectation for critics to reach out to an org if they are accusing that org of intentional misconduct. I think this may also vary by whether the critic personally observed misconduct, in which case I think issues like a risk of retaliation or extreme difficulty for the critic may weigh in favor of not expecting the critic to reach out.