Kinda pro-pluralist, kinda anti-Bay EA.
I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.
(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)
First off, thank you for this research and for sharing it with the community. My overall feeling on this work is extremely positive, and the below is one (maybe my only?) critical nitpick, but I think it is important to voice.
Causes classified as longtermist were Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist.
Causes classified as neartermist were Mental health, Global poverty and Neartermist other.
Causes classified as Other were Animal Welfare, Cause Prioritization, EA movement building and Climate change.
I have to object to this. I don't think longtermism is best understood as a cause, or set of causes, but more as a justification for working on certain causes over others. e.g.:
(I'm sure everyone can think of their own counter-examples)
I know the groupings came out of some previous factor analysis you did, and you mention the cause/justification difference in the footnotes, and I know that there are differences in community cause prioritisation, but I fear that leading with this categorisation helps to reify and entrench those divisions instead of actually reflecting an underlying reality of the EA movement. I think it's important enough not to hide the details in footnotes because otherwise people will look at the 'longtermist' and 'neartermist' labels (like here) and make claims/inferences that might not correspond to what the numbers are really saying.
I think part of this is downstream of 'longtermism' being poorly defined/understood (as I said, it is a theory about justifications for causes rather than specific causes themselves), and the 'longtermist turn' having some negative effects on the community, so isn't a result of your survey. But yeah, I think we need to be really careful about labelling and reifying concepts beyond the empirical warrant we have, because that will in turn have causal effects on the community.
In fact, I wonder, if AI were separated out from the other 3 'longtermist' causes, what the others might look like. I think a lot of objections to 'longtermism' are actually objections to prioritising 'AI x-risk' work.
Hi Remmelt, thanks for your response. I'm currently travelling so have limited bandwidth to go into a full response, and suspect that it'd make more sense for us to pick this up in DMs again (or at EAG London if you'll be around?)
Some important points I think I should share my perspective on though:
I'll leave it at that for now. Perhaps we can pick this up again in DMs or a Calendly call :) And just want to clarify that I do admire you and your work even if I don't agree with your conclusions. I think you're a much better EA critic (to the extent you identify as one) than Émile is.
I really don't want to have to be the person to step up and push against them, but it seems like nobody else is willing to do it
I do not trust your perspective on this saga Remmelt.
For observers, if you want to go down the twitter rabbit hole when this all kicked off, and get the evidence with your own eyes, start here: https://nitter.poast.org/RemmeltE/status/1627153200930508800#m and if you want, read the various substack pieces linked in thread[1]
To me, it's clear that Émile is acting the worst of everyone on that thread. And I think you treat Andreas far too harshly as well. You said of him "I think you are being intentionally deceptional here, and not actively truth-seeking." which, to me, describes Émile's behaviour exactly. The fact that, over a year on, you don't seem to recognise this and (if anything) support Émile more against EA is a bad sign.
We even had a Forum DM discussion about this a while ago, and I provided even more public cases of bad behaviour by Émile,[2] and you don't seem to have updated much on it.
I applaud your other actions to seek alternative viewpoints on the world on issues that EA cares about (e.g. your collaborations with Forrest Landry and talking to Glen Weyl), but you are so far off the mark with Émile. I hope you can change your mind on this.
I recommend not doing it, since you all have much more useful things to do with your life. I'd note that Émile doesn't really push back on many of the claims in the Fuentes article, and the stuff around Hillary Greaves and 'Alex Williams' seems far enough to mark someone as a bad-faith actor.
Clarification - 'bad behaviour' as in, Émile should not be regarded as a trusted source on anything EA, and is acting in bad faith. Not that they're doing anything illegal afaik
Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community, and while I'll still be reading and engaging I do have a limited amount of time I want to spend on the Forum and I think it'd be better for me to move that focus to posts rather than comments for a bit.[1]
If you do want to get in touch about anything, please reach out and I'll try my very best to respond. Also, if you're going to be in London for EA Global, then I'll be around and very happy to catch up :)
Though if it's a highly engaged/important discussion and there's an important viewpoint that I think is missing I may weigh in
Like others, I just want to say I'm so sorry that you had this experience. It isn't one I recognise from my own journey with EA, but this doesn't invalidate what you went through, and I'm glad you're moving in a direction that works for you as a person and your values. You are valuable, your life and perspective are valuable, and I wish you all the best in your future journey.
Indirectly, I'm going to second @Mjreard below - I think EA should be seen as beyond a core set of people and institutions. If you are still deeply driven by the ideals EA was inspired by, and are putting that into action outside of "the movement", then to me you are still "EA" rather than "EA Adjacent".[1] EA is a set of ideas, not a set of people or organisations, and I will stand by this point.
Regardless, I wish you all the best, and that if you want to re-engage you do so on your terms.
though ofc you can identify however you like
Again, a fan of you and your approach David, but I think you underestimate just how hostile/toxic Émile has been toward all of EA. I think it's very fair to substitute one for the other, and it's the kind of thing we do all the time in real, social settings. In a way, you seem to be emulating a hardcore 'decoupling' mindset here.
Like, at risk of being inflammatory, an intuition pump from your perspective might be:
It is possible that many complaints about Trump are true and also that Trump raises important concerns. I would not like to see personal criticism of Trump become a substitute for engagement with criticism by Trump.
I think many EAs view 'engagement with criticism by Torres' in the same way that you'd see 'engagement with criticism by Trump', that the critic is just so toxic/bad-faith that nothing good can come of engagement.
I think the main thing is their astonishing success. Like, whatever else anyone wants to say about Émile, they are damn hard working and driven. It's just in their case they are driven by fear and pure hatred of EA.
Approximately every major news media piece critical of EA (or covering EA with a critical lens, which are basically the same thing over the last year and a half) seems to link to/quote Émile at some point as a reputable and credible source on EA.
Sure, those more familiar with EA might be able to see the hyperbole, but it's not imo out there to imagine that Émile's immensely negative presentation of EA being picked up by major outlets has contributed to the fall of EA's reputation over the last couple of years.
Like, I wish we could "collectively agree to make Émile irrelevant", but EA can't do that unilaterally given the influence their[1] ideas and arguments have had. Those are going to have to be challenged or confronted sooner or later.
That is, Émile's
To answer your question very directly on the confidence of millions of years in the future, the answer I think is "no", because I don't think we can be reasonably confident and precise about any significant belief about the state of the universe millions of years into the future.[1] I'd note that the article you link isn't very convincing for someone who doesn't share the same premises, though I can see it leading to 'nagging thoughts' as you put it.
Other ways to answer the latter question about human extinction could be:
In practice though, I think if you reach a point where you might consider it to be a moral course of action to make all of humanity extinct, perhaps consider this a modus tollens of the principles that brought you to that conclusion rather than as a logical consequence that you ought to believe and act on. (I see David made a similar comment basically at the same time)
Some exceptions for physics, especially outside of our lightcone, yada yada, but I think for the class of beliefs (I used significant beliefs) that are similar to this question this holds
I don't understand your lack of understanding. My point is that you're acting like a right arse.
When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that weren't following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn't have anything to back it up.
I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining.
So I don't really have much interest in continuing this discussion for now, or much opinion at the moment of your behaviour or your 'integrity'
Thanks for responding David, and again I think that the survey work you've done is great :) We have many points of agreement:
I still think that the terminology is somewhat misguided. Perhaps the key part I disagree with is that "Referring to these clusters of causes and ideas in terms of "longtermism" and "neartermism" is established terminology" - even if it has been established, I want to push back and un-establish it, because I think it's more unhelpful and even harmful for community discussion and progress. I'm not sure what terms are better, though some alternatives I've seen have been:[1]
I guess, to state my point as clearly as possible, I don't think the current cluster names "carve nature at its joints", and the potential confusion/ambiguity in their use could lead to negative perceptions that aren't accurate becoming entrenched
Though I don't think any of them are perfect distillations