Bio

Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.

Sequences
3

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments
347

Yeah I could have worded this better. What I mean to say is that I expect that the tags 'Criticism of EA' and 'Community' probably co-occur in posts a lot more than two randomly drawn tags, and probably rank quite high on the pairwise ranking. I don't mean to say that it's a necessary connection or should always be the case, but it does mean that downweighting Community posts will disproportionately downweight Criticism posts.

If I'm right, that is! I can probably scrape the data from 23-24 on the Forum to actually answer this question.
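For what it's worth, once the tag data were scraped, the co-occurrence check itself would be simple. A minimal sketch, using entirely hypothetical post/tag data as a stand-in for the real Forum data:

```python
from itertools import combinations
from collections import Counter

# Hypothetical data: each post is represented as the set of tags attached to it.
# The real data would need to be scraped from the Forum.
posts = [
    {"Community", "Criticism of EA"},
    {"Community", "Criticism of EA", "FTX collapse"},
    {"AI Safety", "Community"},
    {"AI Safety", "Global Health"},
]

# Count how often each unordered pair of tags co-occurs on a post.
pair_counts = Counter()
for tags in posts:
    for pair in combinations(sorted(tags), 2):
        pair_counts[pair] += 1

# Rank pairs by co-occurrence to see where the
# ('Community', 'Criticism of EA') pair sits.
for pair, n in pair_counts.most_common():
    print(pair, n)
```

If the ('Community', 'Criticism of EA') pair ranks near the top of that list on the real 2023-24 data, that would support the claim above.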

Just flagging this for readers' context: I think Habryka's position/reading makes more sense if you view it in the context of an ongoing Cold War between Good Ventures and Lightcone.[1]

Some evidence on the GV side:

To Habryka's credit, it's much easier to see what the 'Lightcone Ecosystem' thinks of OpenPhil!

  • He thinks that the actions of GV/OP were and currently are overall bad for the world.
  • I think the reason why is mostly given here by MichaelDickens on LW, Habryka adds some more concerns in the comments. My sense is that the LW commentariat is turning increasingly against OP but that's just a vibe I have when skim-reading.
  • Some of it also appears to be for reasons to do with the Lightcone aversion to "deception", broadly defined, which one can see from Habryka's reasoning in this post or his reply here to Luke Muehlhauser. This philosophy doesn't seem to be explained in one place; I've only gleaned what I can from various posts/comments, so if someone does have a clearer example then feel free to point me in that direction.
  • I think this great comment during the Nonlinear saga helps make a lot of the Lightcone v OP discourse make sense.

I was nervous about writing this because I don't want to start a massive flame war, but I think it's helpful for the EA Community to be aware that two powerful forces in it/adjacent to it[2] are essentially in a period of conflict. When you see comments from either side that seem to be more aggressive/hostile than you otherwise might think warranted, this may make the behaviour make more sense.

  1. ^

    Note: I don't personally know any of the people involved, and live half a world away, so expect this framing to be quite inaccurate. Still, it has helped me try to grasp behaviours and attitudes which otherwise seem hard to explain to me, as an outsider to the 'EA/LW in the Bay' scene.

  2. ^

    To my understanding, the Lightcone position on EA is that it 'should be disavowed and dismantled', but there's no denying that Lightcone is closer to EA than almost all other organisations in some sense.

First, I want to say thanks for this explanation. It was both timely and insightful (I had no idea about the LLM screening, for instance). So I wanted to give that a big 👍

I think something Jan is pointing to (and correct me if I'm wrong @Jan_Kulveit) is that because the default Community tag does downweight the visibility and coverage of a post, it could be implicitly used to deter engagement from certain posts. Indeed, my understanding was that this was pretty much exactly the case, and was driven by a desire to reduce Forum engagement on 'Community' issues in the wake of FTX. See for example:

Now, it is also true that I think the Forum was broadly supportive of this at the time. People were exhausted by FTX, it seemed like there was a new devastating EA scandal every week, and being able to downweight these discussions and focus on 'real' EA causes was understandably very popular.[1] So it wasn't even necessarily a nefarious change; it was responding to user demand.

Nevertheless I think, especially since criticisms of EA also come with the 'Community' tag attached,[2] it has also had the effect of somewhat reducing criticism and community sense-making. In retrospect, I still feel that the damage wrought by FTX hasn't had a full accounting, and the change to down-weight Community posts was treating the 'symptoms' rather than the underlying issues.

  1. ^

    I think reading the most popular comments on the linked posts supports this.

  2. ^

    Willing to change my mind on this if there's much less of an overlap between the two than between other major categories, for instance

Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:

  • I plan to add another post in my "EA EDA" sequence analysing Forum trends in 2024. My pre-registered prediction is that we'll see 2023 patterns continue: declining engagement (though possibly plateauing) and AI Safety further cementing its dominance across posts, karma, and comments.
  • I'll also try to do another end-of-year Forum awards post (see here for last year's) though with slightly different categories.
  • I'm working on an analysis of EA's post-FTX reputation using both quantitative metrics (Forum engagement, Wikipedia traffic) and qualitative evidence (public statements from influential figures inside and outside EA). The preliminary data suggests more serious reputational damage than the recent Pulse survey found. If that difference is meaningful (as opposed to methodological, or just a mistake on my part), I suspect it might highlight the difference between public and elite perception.
  • I recently finished reading former US General Stanley McChrystal's book: Team of Teams. Ostensibly it's a book about his command of JSOC in the Iraq War, but it's really about the concept of Auftragstaktik as a method of command, and there was more than one passage which I thought was relevant to Effective Altruism (especially for what "Third Wave" EA might mean). This one is a stretch though; I'm not sure how interested the Forum would be in this, or whether it would be the right place to post it.

My focus for 2025 will be to work towards developing my position on AI Safety, and to share it through a series of posts in my AI Safety sequence.[1] The concept of AGI went mainstream in 2024, and it does look like we will see significant technological and social disruption in the coming decades due to AI development. Nevertheless, I find myself increasingly skeptical of traditional narratives and arguments about what Alignment is, the likelihood of risk, and what ought to be done about it. Instead, I've come to view "Alignment" primarily as a political philosophy rather than a technical problem in computer science. I could very well be wrong on most or all of these ideas, and getting critical discussion from the community will, I think, be good both for myself and (I hope) for the Forum readership.[2]

As such, I'm considering doing a deep-dive on the Apollo o1 report given the controversial reception it's had.[3] I think this is the least likely one though, as I'd want to research it as thoroughly as I could, and time is at a premium since Christmas is around the corner, so this is definitely a "stretch goal".

Finally, I don't expect to devote much more time[4] to adding to the "Criticism of EA Criticism" sequence. I often finish the posts well after the initial discourse has died down, and I'm not sure what effect they really have.[5] Furthermore, I've started to notice my own views on a variety of topics diverging from "EA Orthodoxy", so I'm not really sure I'd make a good defender. This change may itself warrant a future post, though again I'm not committing to that yet.

  1. ^

    Which I will rename

  2. ^

    It may be more helpful for those without technical backgrounds who are concerned about AI, but I'm not sure. I also think having a somewhat AGI-sceptical perspective represented on the Forum might be useful for intellectual diversity purposes, but I don't want to overclaim that. I'm very uncertain about the future of AI and could easily see myself being convinced to change my mind.

  3. ^

    I'm slightly leaning towards the skeptical interpretation myself, as you might have guessed

  4. ^

    if any at all, unless an absolutely egregious but widely-shared example comes up

  5. ^

    Does Martin Sandbu read the EA Forum, for instance?

I think this is, to a significant extent, definitionally impossible with longtermist interventions, because the 'long-term' part excludes having an empirical feedback loop quick enough to update our models of the world.

For example, if I'm curious about whether malaria net distribution or vitamin A supplementation is more 'cost-effective' than another, I can fund interventions and run RCTs, and then model the resulting impact according to some metric like the DALY. This isn't cast-iron secure evidence, but it is at least causally connected to the result I care about.
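To make the contrast concrete, here is the kind of back-of-envelope comparison that this causal connection enables; every figure below is a made-up placeholder, not a real estimate from any RCT:

```python
# Illustrative cost-effectiveness comparison between two global health
# interventions. All numbers are hypothetical placeholders; real analysis
# would draw on RCT results and a proper DALY model.
interventions = {
    "malaria_nets": {"cost_usd": 100_000, "dalys_averted": 2_500},
    "vitamin_a":    {"cost_usd": 100_000, "dalys_averted": 3_100},
}

cost_per_daly = {
    name: d["cost_usd"] / d["dalys_averted"]
    for name, d in interventions.items()
}

for name, cpd in sorted(cost_per_daly.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cpd:.2f} per DALY averted")
```

The point is not the specific numbers but that each input is (in principle) measurable and updatable; for longtermist interventions, as the next paragraph argues, there is no analogous feedback loop to fill in the table.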

For interventions that target the long-run future of humanity, this is impossible. We can't run counterfactuals of the future or past, and I at least can't wait 1000 years to see the long-term impact of certain decisions on the civilizational trajectory of the world. Thus, longtermist interventions cannot really get empirical feedback on the parameters of action, and must mostly rely on subjective human judgement about them.

To their credit, the EA Long-Term Future Fund says as much on their own web page:

Unfortunately, there is no robust way of knowing whether succeeding on these proxy measures will cause an improvement to the long-term future.

For similar thoughts, see Laura Duffy's thread on empirical vs reason-driven EA.

One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.

I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that: a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact and that b) there is often an inverse relationship between tractability/neglectedness and importance/impact of a cause area and charity. Not saying you're wrong, but it's not necessarily a problem.

Furthermore, my anecdotal take from the voting patterns, as well as the comments on the discussion thread, seems to indicate that neglectedness is often high on the minds of voters - though I admit that commenters on that thread are a biased sample of all those voting in the election.

It can be a bit underwhelming if an experiment to try to get the crowd's takes on charities winds up determining to, "just let the current few experts figure it out." 

Is it underwhelming? I guess if you want the donation election to be about spurring lots of donations to small, spunky EA start-ups working in weirder cause areas, it might be, but that's not what I understand the intention of the experiment to be (though I could be wrong).

My take is that the election is an experiment in EA democratisation, where we get to see what the community values when we use a roughly 1-person-1-ballot system instead of the those-with-the-money-decide system which is how things work right now. The takeaways seem to be:

  • The broad EA community values Animal Welfare a lot more than the current major funders
  • The broad EA community sees value in all 3 of the 'big cause areas' with high-scoring charities in Animal Welfare, AI Safety, and Global Health & Development.
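The contrast between the two systems can be made concrete with a toy comparison; the causes are from the list above, but all numbers below are entirely hypothetical:

```python
# Toy contrast between one-person-one-vote preferences and
# funder-weighted allocation. All figures are hypothetical placeholders.
votes = {"Animal Welfare": 120, "AI Safety": 90, "Global Health": 100}
funder_allocation = {"Animal Welfare": 0.05, "AI Safety": 0.55, "Global Health": 0.40}

total_votes = sum(votes.values())
vote_share = {cause: n / total_votes for cause, n in votes.items()}

for cause in votes:
    print(f"{cause}: {vote_share[cause]:.0%} of votes "
          f"vs {funder_allocation[cause]:.0%} of funding")
```

Under numbers like these, the gap between vote share and funding share is exactly the kind of divergence the election surfaces.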

But you haven't provided any data 🤷

Like you could explain why you think so without de-anonymising yourself, e.g. sammy shouldn't put EA on his CV in US policy because:

  • Republicans are in control of most positions and they see EA as heavily democrat-coded and aren't willing to consider hiring people with it
  • The intelligentsia who hire for most US policy positions see EA as cult-like and/or disgraced after FTX
  • People won't understand what EA on a CV is, and will discount sammy's chances compared to putting down "ran a discussion group at university" or something like that
  • You think EA is doomed/likely to collapse and sammy should pre-emptively disassociate their career from it

Like I feel that would be interesting and useful to hear your perspective on, to the extent you can share information about it. Otherwise, just jumping in with strong (and controversial?) opinions from anonymous accounts on the Forum just serves to pollute the epistemic commons, in my opinion.

Right but I don't know who you are, or what your position in the US Policy Sphere is, if you have one at all. I have no way to verify your potential background or the veracity of the information you share, which is one of the major problems with anonymous accounts.

You may be correct (though again, the lack of explanation doesn't give detail or a mechanism for why, or help sammy much since, as you said, it depends on the sector), but that isn't really the point; the only data point you provide is "intentionally anonymous person on the EA Forum states opinion without supporting explanations", which is honestly pretty weak sauce.

I don't find comments like these helpful without explanations or evidence, especially from throwaway accounts

Yeah again I just think this depends on one's definition of EA, which is the point I was trying to make above.

Many people have turned away from EA, its beliefs, institutions, and community alike, in the aftermath of the FTX collapse. Even Ben Todd seems to not be an EA by some definitions any more, be that via association or identification. Who is to say Leopold is any different, or has not gone further? What then is the use of calling him EA, or using his views to represent the 'Third Wave' of EA?

I guess from my PoV what I'm saying is that I'm not sure there's much 'connective tissue' between Leopold and myself, so when people use phrases like "listen to us" or "How could we have done" I end up thinking "who the heck is we/us?"
