JWS

3300 karma

Bio

Kinda pro-pluralist, kinda anti-Bay EA.

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts
6


Sequences
1

Criticism of EA Criticism

Comments
268

JWS

Thanks for responding David, and again I think that the survey work you've done is great :) We have many points of agreement:

  • Agreed that you basically note my points in the previous works (both in footnotes and in the main text)
  • Agreed that it's always a hard tradeoff when compressing detailed research findings into digestible summaries - I know from professional experience how hard that is!
  • Agreed that there is some structure which your previous factor analysis and general community discussions picked up on, which is worth highlighting and examining

I still think that the terminology is somewhat misguided. Perhaps the key part I disagree with is the claim that "Referring to these clusters of causes and ideas in terms of "longtermism" and "neartermism" is established terminology" - even if it has been established, I want to push back and un-establish it, because I think it's unhelpful and even harmful for community discussion and progress. I'm not sure what terms are better, though some alternatives I've seen have been:[1]

I guess, to state my point as clearly as possible, I don't think the current cluster names "carve nature at its joints", and I think the potential confusion/ambiguity in their use could lead to inaccurate negative perceptions becoming entrenched.

  1. ^

    Though I don't think any of them are perfect distillations

JWS

First off, thank you for this research and for sharing it with the community. My overall feeling on this work is extremely positive, and the below is one (maybe my only?) critical nitpick, but I think it is important to voice.

Causes classified as longtermist were Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist. 

Causes classified as neartermist were Mental health, Global poverty and Neartermist other.

Causes classified as Other were Animal Welfare, Cause Prioritization, EA movement building and Climate change.

I have to object to this. I don't think longtermism is best understood as a cause, or set of causes, but more as a justification for working on certain causes over others. e.g.:

  • Working on Nuclear Risk could be seen as near-termist. You can have a person-affecting view of morality and think that, given the track record of nuclear near-miss incidents, it's a high priority for the wellbeing of people alive today
  • We just lived through a global pandemic, there is active concern about H5N1 outbreaks right now, so it doesn't seem obvious to me that many people (EA or not) would count biosecurity in the 'longtermist' bucket
  • Similarly, many working on AI risk have short timelines that have only gotten shorter over the past few years.[1]
  • Climate Change could easily be seen through a 'longtermist' lens, and is often framed in the media as being an x-risk or affecting the lives of future generations
  • Approaching Global Poverty from a 'growth > randomista' perspective could easily be justified from a longtermist lens given the effects of compounding returns to economic growth for future generations
  • EA movement building has often been criticised as focusing on 'longtermist' causes above others, and that does seem to be where the money is focused
  • Those concerned about Animal Welfare also have concerns about how humanity might treat animals in the future, and if we might lock-in our poor moral treatment of other beings

(I'm sure everyone can think of their own counter-examples)

I know the groupings came out of some previous factor analysis you did, and you mention the cause/justification difference in the footnotes, and I know that there are differences in community cause prioritisation, but I fear that leading with this categorisation helps to reify and entrench those divisions instead of actually reflecting an underlying reality of the EA movement. I think it's important enough not to hide the details in footnotes because otherwise people will look at the 'longtermist' and 'neartermist' labels (like here) and make claims/inferences that might not correspond to what the numbers are really saying.

I think part of this is downstream of 'longtermism' being poorly defined/understood (as I said, it is a theory about justifications for causes rather than specific causes themselves), and the 'longtermist turn' having some negative effects on the community, so isn't a result of your survey. But yeah, I think we need to be really careful about labelling and reifying concepts beyond the empirical warrant we have, because that will in turn have causal effects of the community.

  1. ^

    In fact, I wonder, if AI was separated out from the other three 'longtermist' causes, what the others might look like. I think a lot of objections to 'longtermism' are actually objections to prioritising 'AI x-risk' work.

JWS

Hi Remmelt, thanks for your response. I'm currently travelling so have limited bandwidth to go into a full response, and suspect that it'd make more sense for us to pick this up in DMs again (or at EAG London if you'll be around?)

Some important points I think I should share my perspective on though:

  1. One can think that both Émile and 'Fuentes' behaved badly. I'm not trying to defend the latter here and they clearly aren't impartial. I'm less interested in defending Fuentes than trying to point out that Émile shouldn't be considered a good-faith critic of EA. I think your concerns about Andreas, for example, apply at least tenfold to Émile.
  2. I don't consider myself an "EA insider", and I don't consider myself having that weight in the Community. I haven't worked at an EA org, I haven't received any money from OpenPhil, I've never gone to the Co-ordination Forum etc. I think of A-E, the only one I'm claiming support for is D - if Émile is untrustworthy and often flagrantly wrong/biased/inaccurate then it is a bad sign to not recognise this. The crux then, is whether Émile is that wrong/biased/inaccurate, which is a matter on which we clearly disagree.[1] One can definitely support other critiques of EA, and it certainly doesn't mean EA is immune to criticism or that it shouldn't be open to hearing them.

I'll leave it at that for now. Perhaps we can pick this up again in DMs or a Calendly call :) And just want to clarify that I do admire you and your work even if I don't agree with your conclusions. I think you're a much better EA critic (to the extent you identify as one) than Émile is.

  1. ^

    I really don't want to have to be the person to step up and push against them, but it seems like nobody else is willing to do it

JWS

I do not trust your perspective on this saga Remmelt.

For observers, if you want to go down the twitter rabbit hole when this all kicked off, and get the evidence with your own eyes, start here: https://nitter.poast.org/RemmeltE/status/1627153200930508800#m and if you want read the various substack pieces linked in thread[1]

To me, it's clear that Émile is acting the worst of everyone on that thread. And I think you treat Andreas far too harshly as well. You said of him "I think you are being intentionally deceptional here, and not actively truth-seeking." which, to me, describes Émile's behaviour exactly. The fact that, over a year on, you don't seem to recognise this and (if anything) support Émile more against EA is a bad sign.

We even had a Forum DM discussion about this a while ago, and I provided even more public cases of bad behaviour by Émile,[2] and you don't seem to have updated much on it.

I applaud your other actions to seek alternative viewpoints on the world on issues that EA cares about (e.g. your collaborations with Forrest Landry and talking to Glen Weyl), but you are so far off the mark with Émile. I hope you can change your mind on this.

  1. ^

    I recommend not doing it, since you all have much more useful things to do with your life. I'd note that Émile doesn't really push back on many of the claims in the Fuentes article, and the stuff around Hillary Greaves and 'Alex Williams' alone seems enough to rule someone a bad-faith actor.

  2. ^

    Clarification - 'bad behaviour' as in, Émile should not be regarded as a trusted source on anything EA, and is acting in bad faith. Not that they're doing anything illegal afaik

Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community, and while I'll still be reading and engaging I do have a limited amount of time I want to spend on the Forum and I think it'd be better for me to move that focus to posts rather than comments for a bit.[1]

If you do want to get in touch about anything, please reach out and I'll try my very best to respond. Also, if you're going to be in London for EA Global, then I'll be around and very happy to catch up :)

  1. ^

    Though if it's a highly engaged/important discussion and there's an important viewpoint that I think is missing I may weigh in

JWS

Like others, I just want to say I'm so sorry that you had this experience. It isn't one I recognise from my own journey with EA, but this doesn't invalidate what you went through, and I'm glad you're moving in a direction that works for you as a person and your values. You are valuable, your life and perspective are valuable, and I wish you all the best in your future journey.

Indirectly, I'm going to second @Mjreard below - I think EA should be seen as beyond a core set of people and institutions. If you are still deeply driven by the ideals EA was inspired by, and are putting that into action outside of "the movement", then to me you are still "EA" rather than "EA Adjacent".[1] EA is a set of ideas, not a set of people or organisations, and I will stand by this point.

Regardless, I wish you all the best, and that if you want to re-engage you do so on your terms.

  1. ^

    though ofc you can identify however you like

JWS

Again, a fan of you and your approach David, but I think you underestimate just how hostile/toxic Émile has been toward all of EA. I think it's very fair to substitute one for the other, and it's the kind of thing we do all the time in real, social settings. In a way, you seem to be emulating a hardcore 'decoupling' mindset here.

Like, at risk of being inflammatory, an intuition pump from your perspective might be:

It is possible that many complaints about Trump are true and also that Trump raises important concerns. I would not like to see personal criticism of Trump become a substitute for engagement with criticism by Trump.

I think many EAs view 'engagement with criticism by Torres' in the same way that you'd see 'engagement with criticism by Trump', that the critic is just so toxic/bad-faith that nothing good can come of engagement.

JWS

I think the main thing is their astonishing success. Like, whatever else anyone wants to say about Émile, they are damn hard working and driven. It's just that in their case they are driven by fear and pure hatred of EA.

Approximately every major news media piece critical of EA (or covering EA with a critical lens, which has been basically the same thing over the last year and a half) seems to link to or quote Émile at some point as a reputable and credible source on EA.

Sure, those more familiar with EA might be able to see the hyperbole, but it's not imo out there to imagine that Émile's immensely negative presentation of EA being picked out by major outlets has contributed to the fall of EA's reputation over the last couple of years.

Like, I wish we could "collectively agree to make Émile irrelevant", but EA can't do that unilaterally given the influence their[1] ideas and arguments have had. Those are going to have to be challenged or confronted sooner or later.

  1. ^

    That is, Émile's

Answer by JWS

To answer your question very directly on the confidence of millions of years in the future, the answer I think is "no", because I don't think we can be reasonably confident and precise about any significant belief about the state of the universe millions of years into the future.[1] I'd note that the article you link isn't very convincing for someone who doesn't share the same premises, though I can see it leading to 'nagging thoughts', as you put it.

Other ways to answer the latter question about human extinction could be:

  • That humanity is positive (if human moral value is taken to be larger than the effect on animals)
  • That humanity is net-positive (if the total effect of humanity is positive, most likely because of belief that wild-animal suffering is even worse)
  • Option value, or the belief that humanity has the capacity to change (as others have stated)

In practice though, I think if you reach a point where you might consider it to be a moral course of action to make all of humanity extinct, perhaps consider this a modus tollens of the principles that brought you to that conclusion, rather than as a logical consequence that you ought to believe and act on. (I see David made a similar comment basically at the same time)

  1. ^

    Some exceptions for physics, especially outside of our lightcone, yada yada, but I think this holds for the class of beliefs similar to this question (I said significant beliefs)

JWS

I don't understand your lack of understanding. My point is that you're acting like a right arse.

When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that weren't following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn't have anything to back it up. 

I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining.

So I don't really have much interest in continuing this discussion for now, or much of an opinion at the moment on your behaviour or your 'integrity'.

  1. ^

    Like nobody was discussing CSER/CFI or Sean directly until you came in with it

  2. ^

    Even if you did think it was justified
