This is about how much I should defer to EA on which issues matter most. Is EA's turn to longtermism a good reason in itself for me to have turned to longtermism?
One story, the most flattering to EA, goes like this:
"EA is unusually good at 'epistemics' / thinking about things, because of its culture and/or who it selects for; and also the community isn't corrupted too badly by random founder effects and information cascades; and so the best ideas gradually won out among those who were well-known for being reasonable, and who spent tons of time thinking about the ideas. (E.g. Toby Ord convincing Will MacAskill, and a bit later Holden Karnofsky joining them.)"
Of course, there could be other stories to be told, to do with 'who worked in the same building as who' and 'what memes were rife in the populations that EA targeted outreach to' and 'what random contingent things happened, e.g. a big funder flipping from global health to animals and creating 10 new institutes' and 'who was on Felicifia back in the day' and 'did anyone actively try to steer EA this way'. Ideally, I'd like to run a natural experiment where we go back in time to 2008, have MacAskill and Ord and Bostrom all work in different countries rather than all in Oxford, and see what changes. (Possibly Peter Singer is a real-life instance of this natural experiment, akin to how Australia's marsupials and birds evolved in isolation from the rest of the world's fauna after the continent separated from Gondwana in the late Cretaceous and Paleogene. Not that Peter is that old.)
But maybe looking at leadership is the wrong way around, and it's the rank-and-file members who led the charge. I'd be very interested to know if so. (One thing I could look at is 'how much did the sentiment on this forum lag or lead the messaging from the big orgs?')
I understand EA had x-risk elements from the very beginning (e.g. Toby Ord), but it was only in the late 2010s that x-risk became the dominant strain. Most of us joined the movement while this longtermist turn was already well underway. (I took the GWWC pledge in 2014 but checked out of EA for a few years afterwards, returning in 2017 to find x-risk a lot more dominant and the movement 2 to 3 times bigger.) We have no direct experience of the shift, so we can only ask our elders how it happened, and from their answers decide 'to what degree was the shift caused by stuff that seems correlated with believing true things?'. It would be a shame if anecdata about the shift were lost to cultural memory, hence this question.
Came here to cite the same thing! :)
Note that Dustin Moskovitz says he's not a longtermist, and "Holden isn't even much of a longtermist."
So my intuition is that the two most important updates EA has undergone are "it's not that implausible that par-human AI is coming in the next couple of decades" and "the world is in fact dropping the ball on this quite badly, in the sense that maybe alignment isn't super hard, but to a first approximation no one in the field has checked."
(Which is both an effect and a cause of updates like "maybe we can figure stuff out in spaces where the data is more indirect and hard-to-interpret", "EA should be weirder", "EA should focus more on research and intellectual work and technical work", etc.)
But I work in AI x-risk and naturally pay more attention to that stuff, so maybe I'm missing other similarly-deep updates that have occurred. Like, maybe there was a big update at some point about the importance of biosecurity? My uninformed guess is that if we'd surveyed future EA leaders in 2007, they already would have been on board with making biosecurity a top global priority (if there are tractable ways to influence it), whereas I think this is a lot less true for AI alignment.