
Downvotes are evidence. They provide information. They can be interpreted, especially when they aren’t accompanied by arguments or reasons.

Downvotes can mean I struck a nerve. They can provide evidence of what a community is especially irrational about.

They could also mean I’m wrong. But with no arguments and no links or cites to arguments, there’s no way for me to change my mind. If I were posting some idea I had thought of recently, I could take the downvotes as a sign that I should think it over more. However, if it’s something I’ve done high-effort thinking about for years, and written tens of thousands of words about, then “reconsider” is not a useful action with no further information. I’ve already considered it as best I know how.

People can react in different ways to downvotes. If your initial reaction is to stop writing about whatever gets downvotes, that is evidence that you care a lot about social climbing and what other people think of you (possibly more than you value truth seeking). On the other hand, one can think “strong reactions can indicate something important” and write more about whatever got downvoted. Downvotes can be a sign that a topic is important to discuss further.

Downvotes can also be evidence that something is an outlier, which can be a good thing.

Downvoting Misquoting Criticism

One of the things that seems to have struck a nerve with some people, and has gotten me the most downvotes, is criticizing misquoting (examples one and two both got to around -10). The broader issue, I think, is my view that “small” or “pedantic” errors are (sometimes) important, and that raising intellectual standards would make a large overall difference to EA’s correctness and therefore its effectiveness.

I’ll clarify this belief more in future posts despite the cold reception and my expectation of getting negative rewards for my efforts. I think it’s important. It’s also clarified a lot in prior writing on my websites.

There are practical issues regarding how to deal with “small” errors in a time-efficient way. I have some answers to those issues but I don’t think they’re the main problem. In other words, I don’t think many people want to be able to pay attention to small errors, but are limited by time constraints and don’t know practical time-saving solutions. I don’t think it’s a goal they have that is blocked by practicality. I think people like something about being able to ignore “small” or “pedantic” errors, and practicality then serves as a convenient excuse to help hide the actual motivation.

Why do I think there’s any kind of hidden motivation? It’s not just the disinterest in practical solutions that would enable raising intellectual standards (which I’ve seen year after year in other communities as well, btw). Nor is it just the downvotes that are broadly not accompanied by explanations or arguments. It’s primarily the chronic ambiguity about whether people already agree with me and think misquotes are obviously bad, or disagree with me and think I’m horribly wrong. Getting a mix of responses including both ~“obviously you’re right, and you got a negative reaction because everyone already knows it and doesn’t need to hear it again” and ~“you’re wrong and horrible” is weird and unusual.

People generally seem unwilling to actually clearly state what their misquoting policies/attitudes are, but nevertheless say plenty of things that indicate clear disagreements with me (when they speak about it at all, which they often don’t but sometimes do). And this allows a bunch of other people to think there already are strong anti-misquoting norms, including people who do not actually personally have such a norm. In my experience, this is widespread and EA seems basically the same as most other places about it.

I’m not including examples of misquotes, or ambiguous defenses of misquotes, because I don’t want to make examples of people. If someone wants to claim they’re right and make public statements they stand behind, fine, I can use them as an example. But if someone merely posts on the forum a bit, I don’t think I should interpret that as opting in to being some kind of public intellectual who takes responsibility for what they say, claims it’s important, and is happy to be quoted and criticized. (People often don’t want to directly admit that they don’t think what they post is important, while also not wanting to claim that it is important. That’s another example of chronic ambiguity that I think is related to irrationality.) If someone says to me “This would convince me if only you had a few examples,” I’ll consider how to deal with that, but I don’t expect that reaction (and if you care that much, you can find two good examples by reviewing my EA posting history, and many examples of representative non-EA misquotes on my websites and forum).

Upvoting Downvoted Posts

There’s a pattern on Reddit, which I’ve also observed on EA, where people upvote posts that are at negative karma when they don’t think they deserve to be negative. They wouldn’t upvote the same posts if they had positive scores. You can tell because the upvoting stops once the score gets back to neutral (actually slightly below neutral on EA due to strong votes – people tend to stop at 1, not at the e.g. 4 karma an EA post might start with).

In a lot of ways I think this is a good norm. Some people are quite discouraged by downvotes and feel bad about being disliked. The lack of reasons accompanying downvotes makes that worse for some types of people (though others would only feel worse if they were told reasons). And some downvotes are unwarranted and unreasonable, so counteracting those is a sensible activity.

However, there’s a downside to upvoting stuff that’s undeservedly downvoted. It hides evidence. It makes it harder for people to know what kinds of things get how many downvotes. Downvotes can be important evidence about the community. Reddit is larger, and in many subreddits new posts tend to get a few downvotes that don’t reflect the community and might even come from bots. I’m not aware of EA having this problem. It’s stuff that is downvoted more than normal which provides useful evidence. On EA, a lot of posts get no votes, or just a few upvotes. I believe getting to -10 quickly isn’t normal and is useful evidence of something, rather than something that should just be ignored as meaningless. (Also, it only happens to a minority of my posts; the majority get upvotes, not downvotes.)

Comments

I didn't downvote either of your articles on misquoting. Skimming over the first article now, it seems reasonably well argued.

However, I agree with the following points made in this comment (which you also referred to in your second article):

  • There's too much to read, so people don't have time to engage extensively with everything. Try to be succinct.
    • One of your posts took 22 minutes to say that people shouldn't misquote. That's a rather obvious conclusion that can be stated in 3 minutes tops. I think some people read it as a rant.
  • Use examples (or even stories) showing why the topic is important. That lets readers link your arguments to something that exists.
    • You may be able to think in purely abstract terms, but most people aren't like that. A useful point to keep in mind is that you are not your audience. What works for you doesn't work for most other people, so adapting to other reasoning styles is useful.

From skimming your first misquoting article, I don't think you've made the case that misquoting is a particular problem within EA. I don't think there are any examples? In which case, some people might read it, get to the end and think "well that was a waste of 22 minutes and hardly seems relevant to EA, so I'll downvote it to deter others from spending time reading it".

What sort of examples do you want? Do you want me to call out specific individuals who misquoted and say that's bad? You could look through my comment history and find some examples if you want to, but I thought drawing attention to and shaming those people would be bad.

It's easier to discuss whether misquoting is very bad for truth seeking, and mistreats a victim, without simultaneously making it a discussion about whether particular individuals in the community are bad.

The deadnaming article has a one-paragraph summary near the start. It also has the text:

I think this norm [against deadnaming] is good. I think the same norm should be applied to misquoting for the same reasons. It currently isn’t (context).

The links clarify that EA does not have a strong norm against misquoting. What's the problem? Maybe you missed that part when skimming? It's in the introduction immediately before the article summary. The rest of the article does not attempt to argue this point; it's talking about something else which builds on this premise.

Why is this even controversial? If I point out a misquote or a poor cite in the sequences or some other literature you like, you aren't going to care much or start taking action on the problem (such as checking whether the same author made more errors of a similar nature), right? You don't believe that misquoting is like deadnaming someone and should have a similar norm against it because it's hurtful to the victim in addition to being poor scholarship, do you? Don't you disagree with me and know that you disagree with me? The norm I'm advocating is not normal or popular with any large group. So, fine, disagree with me – but I find it a really bizarre reaction for people who disagree with me to dismiss my arguments on the basis that I'm obviously right and this is a waste of time due to being uncontroversial common knowledge. Most people think stuff like "People are sloppy sometimes, which isn't a big deal," instead of thinking, "Being sloppy with quotes in particular isn't acceptable. Use copy/paste. If you must type something in, triple-check it. There's no real excuse for quotes to be inaccurate in tiny ways; that's really bad even if the wording changes don't substantively change the meaning."

I'd like to first establish that this issue matters, and only second, potentially point out some specific examples. As long as I don't think anyone considers misquoting to actually be very bad, I don't think it's a good idea to bring up examples of people doing it. Also, I don't think the problem is a few individuals behaving badly; it's a widespread problem of community attitudes and norms. The community simply doesn't value this kind of accuracy and is OK with misquotes; in that context, it's unfair to be very hard on individuals who get caught misquoting, so that's another reason not to name and shame anyone. If I give examples, people will just tell me that the misquote didn't change the conclusion in that case and therefore doesn't really matter (rather than agreeing with me), which is not the point. Misquotes mistreat the person quoted, like deadnaming, and, like other inaccuracies, they're bad for truth seeking whether or not they change the conclusion. These are not popular claims, but I think they're important, so I tried to argue for and explain them, and neither claim would be served well by examples because they're both about concepts, not concretes. And if people don't like conceptual articles, or struggle to understand them, or don't like long articles... fine, whatever, but saying that people agree with me, when they don't, is really weird.

What sort of examples do you want? Do you want me to call out specific individuals who misquoted and say that's bad? You could look through my comment history and find some examples if you want to, but I thought drawing attention to and shaming those people would be bad.


It's generally a good sentiment to not want to call out specific individuals, particularly if they are not repeat offenders. However, if this is a widespread issue that is worth the attention of the community, then providing lots of examples will help demonstrate the scale of the problem without it seeming like you're picking on one or two people. 

If it is only one or two people who are repeat offenders, and these are senior members of EA orgs (and/or regular posters on the EA Forum), then shaming them may be justified.

It's easier to discuss whether misquoting is very bad for truth seeking, and mistreats a victim, without simultaneously making it a discussion about whether particular individuals in the community are bad.

Without examples to demonstrate that it's a common issue in the EA community, you may find that the discussion is very short, as I suspect most people will just think "yeah, misquoting is indeed bad for truth seeking, which is why I don't do it". 

Do you believe misquoting violates consent similarly to deadnaming, and should have a similar norm against it? Yes or no?

I don't think I know enough about either to make that judgement. 

Also tbh right now I don't have the time or interest to debate this topic. I provided the above comments as possible reasons you received a few downvotes, rather than to indicate a desire to debate the topic itself.

Hmm, I posted stuff that got downvotes, and the few comments I received were along the lines of "provide examples" or "what are you talking about?"

You can hover over a post's karma number to see how many votes it received. In some cases, that will clarify whether there was a mix of upvotes and downvotes.

Nice post. I've been wondering a lot recently about why my comments are being downvoted. I would appreciate more information about them so I can get better at sharing information.

I'm also curious why my reductionist approach is being disliked, since in real-world scenarios it's very productive. I just hope people will explain further what they don't like, though I also understand the difficulty of explaining.
