
I argue that you shouldn't accuse your interlocutor of being insufficiently truth-seeking. This doesn't mean you can't internally model their level of truth-seeking and use that for your own decision-making. It just means you shouldn't come out and say "I think you are being insufficiently truth-seeking".

What you should say instead

Before I explain my reasoning, I'll start with what you should say instead:

"You're wrong"

People are wrong a lot. If you think they are wrong, just say so. You should have a strong default of going with this option.

"You're being intentional misleading"

For when you basically think they are lying, but they maybe technically aren't by some definitions of "lying".

What if they are being unintentionally misleading? That's usually just being wrong; you should probably just say they are wrong. But if you really think the distinction is important, you can say they are being unintentionally misleading.

"You're lying"

For when they are lying.

You can also add your own flair to any of these options to spice things up a bit.

Why you shouldn't accuse people of being insufficiently truth-seeking

Clarity

It's not clear what you are even accusing them of. "Insufficient truth-seeking" could arguably be any of the options I mentioned above. Just be specific. If you really think what you're saying is so important and nuanced and you just need to incorporate some deep insight about truth, use the "add your own flair" option to sneak that stuff in.

Achieving your purpose in the discussion

Here are the most common purposes you might have for engaging in the discussion, and why invoking "truth-seeking" doesn't help with any of them:

You want to discuss the object-level issue

You just fucked yourself because the discussion is immediately going to center on whether they actually are insufficiently truth-seeking and whether that accusation was justified. You're going to have to gather The Fellowship, take your argument to Mordor, and throw it into the fire of NEVER GO META before you're ever going to be able to discuss the object level again.

You want to discuss your interlocutor's misconduct

You again fucked yourself because:

  1. It's not clear what misconduct you are accusing them of.
  2. Because of the ambiguity they are going to try to make it seem like you're accusing them of more than what you intended, and therefore actually it's you who isn't being truth-seeking, and you're even accusing them of that in bad faith!
  3. Because your statement is about "truth-seeking" instead of the actual misconduct, observers who agree with your interlocutor on the object level but might be sympathetic to your misconduct allegation are going to find it harder to agree with you on the meta issue. You are muddying the object-meta waters instead of tackling the meta-level issue you want to address head-on.

Conclusion

Don't accuse your interlocutor of being insufficiently truth-seeking. Just say they are wrong instead.

Comments

Imo the biggest reason not to do this is that it's labeling the person or getting at their character. There's an implied threat that they will be dismissed out of hand because they are categorically in bad faith. It can be weaponized.

I agree. The OP is in some sense performance art on my part, where I take a proposition that I think people might generally justify with high-minded appeals to epistemology or community dynamics, and yet I give only selfish reasons for the conclusion.

At the same time, I do agree there are many altruistic reasons for the conclusion as well, such as yours. I think the specific issue with "truth-seeking" is that it has enough wiggle room that it might not necessarily be about someone's character (or at least less so than some of my alternatives). This means that, in the middle of a highly contentious discussion, people can convince themselves that using it is totally a great idea, more so than if they used something where the nature of the attack is more obvious.

To say someone is not "truthseeking" in Berkeley is like a righteous excommunication. It gets to be an epistemic issue.

"Truthseeking" is a strange piece of jargon. I'm not sure what purpose it serves. It seems like the meaning of "truthseeking" ambiguates between "practicing good epistemology" and "being intellectually honest", as you describe. So, why not use one of those terms instead?

One thing that annoys me about the EA Forum (which I previously wrote about here) is that there's way too much EA Forum-specific jargon. One negative effect of this is it makes it harder to understand what people are trying to say. Another negative effect is it elevates a lot of interesting conjecture to the level of conventional wisdom. If you have some interesting idea in a blog post or a forum post, and then people are quick to incorporate that into the lingo, you've made that idea part of the culture, part of the conventional wisdom. And it seems like people do this too easily.

If you see someone using the term "truthseeking" on the EA Forum, then:

  1. There is no clear definition of this term anywhere that you can easily Google or search on the forum. There is a vague definition on the Effective Altruism Australia website. There is no entry for "truthseeking" in the EA Forum Wiki. The Wikipedia page for truth-seeking says, "Truth-seeking processes allow societies to examine and come to grips with past crimes and atrocities and prevent their future repetition. Truth-seeking often occurs in societies emerging from a period of prolonged conflict or authoritarian rule.[1] The most famous example to date is the South African Truth and Reconciliation Commission, although many other examples also exist."

  2. To the extent EA Forum users even have a clear definition of this term in their heads, they may be bringing along their own quirky ideas about epistemology or intellectual honesty or whatever. And are those good ideas? Who knows? Probably some are and a lot aren't. Making "truthseeking" a fundamental value and then defining "truthseeking" in your own quirky way elevates something you read on an obscure blog last year to the level of an idea that has been scrutinized and debated by a diverse array of scholars across the world for decades and stood the test of time. That's a really silly, bad way to decide which ideas are true and which are false (or dubious, or promising, or a mixed bag, or whatever).

  3. Chances are the person is using it passive-aggressively, or with the implication that they're more truthseeking than someone else. I've never seen someone say, "I wasn't being truthseeking enough and changed my approach." This kinda makes it feel like the main purpose of the word is to be passive-aggressive and act superior.

So, is this jargon anything but a waste of time?

It seems like the meaning of "truthseeking" ambiguates between "practicing good epistemology" and "being intellectually honest"

Very accurate and succinct summary of the issue.

One thing that annoys me about the EA Forum (which I previously wrote about here) is that there's way too much EA Forum-specific jargon.

Good point. I think there is actually an entire class of related jargon to which something like the above applies. For example, I think it's often a bad idea to say stuff like:

  • "You're being uncharitable."
  • "You're strawmanning me."
  • "Can you please just steelman my position?"
  • "I don't think you could pass my ITT."
  • "You're argument is a committing the  motte-baily fallacy."
  • "You're committing the noncentral fallacy."

And other similar comments. I think the clarity issues around some types of jargon are related to your next point. People pick up on ideas that are intuitive but still very rough. This can often mean that the speaker feels super confident in their meaning, but it is confusing to the reader, who may interpret these rough ideas differently.

I also feel something similar to what you say, where people seem to jump on ideas rather quickly and run with them, whereas my reaction is: don't you want to stress test this a bit more before giving it the full send? I view this as a significant cultural/worldview difference between myself and a lot of EAs, which I sometimes think of as a "do-er" vs "debater" dichotomy. I think EA strongly emphasizes "doing", whereas I'm not going to be beating the "debater" allegations anytime soon. I think this worldview difference is upstream of my takes on the ongoing discussions around reaching out to orgs. I think the concept of "winning" expressed here is also related to a strong "doing over debating" view.

Making "truthseeking" a fundamental value

I think it's inherently challenging to think of truth-seeking as a terminal value. It's under-specified: truth-seeking about what? How quickly paint dries? I think it makes more sense to think about constraints requiring truthfulness. Following on from this, I think trying to "improve epistemics" by enforcing "high standards" can be counterproductive, because it gets in the way of the natural "marketplace of ideas" dynamic that often fuels and incentivizes good epistemics. I think the view of "truth-seeking" as a kind of quantitative thing that you want really high values of can cause confusion in this regard, making people think communities high in "truth-seeking" must therefore have "high standards".

Chances are the person is using it passive-aggressively, or with the implication that they're more truthseeking than someone else. I've never seen someone say, "I wasn't being truthseeking enough and changed my approach." This kinda makes it feel like the main purpose of the word is to be passive-aggressive and act superior.

I think this is often the case. Perhaps related to my more "debater" mentality, it seems to me like people in EA sometimes do something with their criticism where they think they are softening it, but they do so in a way that makes the actual claim insanely confusing. I think "truth-seeking" is partially downstream of this, because it's not straight-up saying "you're being bad faith here" and thus feels softer. I wish people would be more "all the way in or all the way out": either stick to just saying someone is wrong, or straight-up accuse them of whatever you think they are doing. I think on balance that might mean doing the second one more than people do now, but perhaps doing the ambiguous version less.

Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:

  • Say what you mean, as plainly as possible.
  • Try to use words and expressions that a general audience would understand.
  • Be more casual and less formal if you think that means more people are more likely to understand what you're trying to say.
  • To illustrate abstract concepts, give examples.
  • Where possible, try to let go of minor details that aren't important to the main point someone is trying to make. Everyone slightly misspeaks (or mis... writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you're engaging in nitpicking.
  • When you don't understand what someone is trying to say, just say that. (And be polite.)
  • Don't engage in passive-aggressiveness or code insults in jargon or formal language. If someone's behaviour is annoying you, tell them it's annoying you. (If you don't want to do that, then you probably shouldn't try to communicate the same idea in a coded or passive-aggressive way, either.)
  • If you're using an uncommon word or using a word that also has a more common definition in an unusual way (such as "truthseeking"), please define that word as you're using it and — if applicable — distinguish it from the more common way the word is used.
  • Err on the side of spelling out acronyms, abbreviations, and initialisms. You don't have to spell out "AI" as "artificial intelligence", but an obscure term like "full automation of labour" or "FAOL" that was made up for one paper should definitely be spelled out.
  • When referencing specific people or organizations, err on the side of giving a little more context, so that someone who isn't already in the know can more easily understand who or what you're talking about. For example, instead of just saying "MacAskill" or "Will", say "Will MacAskill" — just using the full name once per post or comment is plenty. You could also mention someone's profession (e.g. "philosopher", "economist") or the organization they're affiliated with (e.g. "Oxford University", "Anthropic"). For organizations, when it isn't already obvious in context, it might be helpful to give a brief description. Rather than saying, "I donated to New Harvest and still feel like this was a good choice", you could say "I donated to New Harvest (a charity focused on cell cultured meat and similar biotech) and still feel like this was a good choice". The point of all this is to make what you write easy for more people to understand without lots of prior knowledge or lots of Googling.
  • When in doubt, say it shorter.[1] In my experience, when I take something I've written that's long and try to cut it down to something short, I usually end up with something a lot clearer and easier to understand than what I originally wrote.
  • Kindness is fundamental. Maya Angelou said, “At the end of the day people won't remember what you said or did, they will remember how you made them feel.” Being kind is usually more important than whatever argument you're having. 

(Decided to also publish this as a quick take, since it's so generally applicable.)

  1. ^

    This advice comes from the psychologist Harriet Lerner's wonderful book Why Won't You Apologize? — given in the completely different context of close personal relationships. I think it also works here.
