LessWrong dev & admin as of July 5th, 2022.
My claim is something closer to "experts in the field will correctly recognize them as obviously much smarter than +2 SD", rather than "they have impressive credentials" (which is missing the critically important part where the person is actually much smarter than +2 SD).
I don't think reputation has anything to do with titotal's original claim and wasn't trying to make any arguments in that direction.
Also... putting that aside, that is one bullet point from my list, and everyone else except Qiaochu has a Wikipedia entry, which is not a criterion I was tracking when I wrote the list but which I think decisively refutes the claim that the list includes many people who are not publicly-legible intellectual powerhouses. (And, sure, I could list Dan Hendrycks. I could probably come up with another twenty such names, even though I think they'd be worse at supporting the point I was trying to make.)
This still feels wrong to me: if they’re so smart, where are the Nobel laureates? The famous physicists?
I think expecting Nobel laureates is a bit much, especially given the demographics (these people are relatively young). But if you're looking for people who are publicly-legible intellectual powerhouses, I think you can find a reasonable number:
(Many more not listed, including non-central examples like Robin Hanson, Vitalik Buterin, Shane Legg, and Yoshua Bengio[2].)
And, like, idk, man. 130 is pretty smart but not "famous for their public intellectual output" level smart. There are a bunch of STEM PhDs, a bunch of software engineers, some successful entrepreneurs, and about the number of "really very smart" people you'd expect in a community of this size.
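(For a rough sense of the base rates, here's a back-of-envelope sketch; the community size below is an assumed placeholder for illustration, not a measured figure:)

```python
from scipy.stats import norm

# IQ is conventionally normed to mean 100, SD 15, so 130 sits at +2 SD.
tail = norm.sf(2)  # fraction of a normal population above +2 SD, ~2.3%

community_size = 10_000  # assumed size, purely illustrative
print(f"P(IQ > 130) ≈ {tail:.2%}")
print(f"Expected count above +2 SD in {community_size:,} people ≈ {community_size * tail:.0f}")
```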
He might disclaim any current affiliation, but for this purpose I think he obviously counts.
Who sure is working on AI x-risk and collaborating with much more central rats/EAs, but only came into it relatively recently, which is both evidence in favor of one of the core claims of the post but also evidence against what I read as the broader vibes.
first-hand accounts of people experiencing/overhearing racist exchanges
Sorry, I still can't seem to find any of these, can you link me to such an account? I have seen one report that might be a second-hand account, though it could have been a non-racial slur.
(I'm generally not a fan of this much meta, but I consider the fact that this was strong downvoted by someone to be egregious. Most of the comment is reasonable speculation that turned out to be right, and the last sentence is a totally normal opinion to have, which might justify a disagree vote at worst.)
And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists.
I think this is kind of funny because I (directionally) agree with a lot of your list, at least within the observed range of human cognitive ability, but think that strong decoupling norms are mostly agnostic to questions like trusting AI researchers who supported Lysenkoism when it was popular. Of course it's informative that they did so, but can be substantially screened off by examining the quality of their current research (and, if you must, its relationship to whatever the dominant paradigms in the current field are).
People who'd prefer not to have them platformed at an event somewhat connected to EA don't seem to think this is a trade-off.
Optimizing for X means optimizing against not-X. (Well, at the Pareto frontier, which we aren't at, but it's usually true for humans, anyways.) You will generate two different lists of people for two different values of X. Ergo, there is a trade-off.
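(A toy illustration, with names and scores invented purely to show the shape of the argument: rank the same pool under two objectives and you get two different shortlists, which is the trade-off in miniature.)

```python
# Invented scores; any two objectives that aren't perfectly correlated
# will produce different top-k lists from the same pool.
candidates = {
    "A": {"truth_seeking": 0.9, "respectability": 0.4},
    "B": {"truth_seeking": 0.7, "respectability": 0.8},
    "C": {"truth_seeking": 0.5, "respectability": 0.9},
    "D": {"truth_seeking": 0.8, "respectability": 0.6},
}

def top_k(pool, objective, k=2):
    return sorted(pool, key=lambda name: pool[name][objective], reverse=True)[:k]

print(top_k(candidates, "truth_seeking"))   # ['A', 'D']
print(top_k(candidates, "respectability"))  # ['C', 'B']
```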
Anecdotally, a major reason I created this post was because the number of very edgy people was significantly higher than the baseline for non-EA large events. I can't think of another event that I have attended where people would've felt comfortable saying the stuff that was being said.
Note that these two sentences are saying very different things. The first one is about the percentage of attendees that hold certain views, and I am pretty confident that it is false (except in a trivial sense, where people at non-EA events might have different "edgy" views). If you think that the percentage of the general population that holds views at least as backwards as "typical racism" is less than whatever it was at Manifest (where I would bet very large amounts of money the median attendee was much more egalitarian than average for their reference class)...
The second one is about what was said at the event, and so far I haven't seen anyone describe an explicit instance of racism or bigotry by an attendee (invited speaker or not). There were no sessions about "race science", so I am left at something of a loss to explain how that subject could keep coming up, unless someone happened to accidentally wander into multiple ongoing conversations about it. Absent affirmative confirmation of such an event, my current belief is that much more innocuous things are being lumped in under a much more disparaging label.
Your comment seems to be pretty straightforwardly advocating for optimizing for very traditional political considerations (appearance of respectability, relationships with particular interest groups, etc) by very traditional political means (disassociating with unfavorables). The more central this is to how "EA" operates, the more fair it is to call it a political project.
I agree that many rationalists have been alienated by wokeness/etc. I disagree that much of what's being discussed today is well-explained by a reactionary leaning-in to edginess, and think that the explanation offered - that various people were invited on the basis of their engagement with concepts central to Manifest, or for specific panels not related to their less popular views - is sufficient to explain their presence.
With that said, I think Austin is not enormously representative of the rationalist community, and it's pretty off-target to chalk this up as an epistemic win for the EA cultural scene over the rationalist cultural scene. Observe that it is here, on the EA forum, that a substantial fraction of commenters are calling for conference organizers to avoid inviting people for reasons that explicitly trade off against truth-seeking considerations. Notably, there are people who I wouldn't have invited, if I were running this kind of event, specifically because I think they either have very bad epistemics or are habitual liars, such that it would be an epistemic disservice to other attendees to give those people any additional prominence.
I think that if relevant swathes of the population avoid engaging with e.g. prediction markets on the basis of the people invited to Manifest, this will be substantially an own-goal, where people with second-order concerns (such as anticipated reputational risk) signal-boost this and cause the very problem they're worried about. (This is a contingent, empirical prediction, though unfortunately one that's hard to test.) Separately, if someone avoided attending Manifest because they anticipated unpleasantness stemming from the presence of these attendees, they either had wildly miscalibrated expectations about what Manifest would be like, or (frankly) they might benefit from asking themselves two questions: what is different about attending Manifest vs. any other similarly large social event (nearly all of which have invited people with similarly unpalatable views), and whether they endorse letting the mere physical presence of people they could choose to pretend don't exist stop them from going.
Perhaps it's missing from the summary, but there is trivially a much stronger argument that doesn't seem addressed here.
The general shape of Thorstad's argument doesn't really make it clear what sort of counterargument he would admit as valid. Like, yes, humans have not (yet) kicked off any process of obvious, rapid, recursive self-improvement. That is indeed evidence that it might take humans a few decades after they invent computing technology to do so. What evidence, short of us stumbling into the situation under discussion, would be convincing?
(Social and political bottlenecks do exist, but the technology is pretty straightforward.)
No easily summarizable comment on the rest of it, but as a LessWrong dev I do think the addition of Quick Takes to the front page of LW was very good - my sense is that it's counterfactually responsible for a pretty substantial amount of high quality discussion. (I haven't done any checking of ground-truth metrics, this is just my gestalt impression as a user of the site.)