Please people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims to have moderated his views, he is still very racist as far as I can tell.
Hanania called for trying to get rid of all non-white immigrants in the US and the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even post his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people' https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he's been invited to Manifold's events and put on Richard Yetter Chappell's blogroll.
DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which ... (read more)
I'd just like to clarify that my blogroll should not be taken as a list of "worthy figure[s] who [are] friend[s] of EA"! They're just blogs I find often interesting and worth reading. No broader moral endorsement implied!
fwiw, I found TracingWoodgrains' thoughts here fairly compelling. ETA, specifically:
I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I'd live in a lonely world, one that would exclude many my own circles approve of. And if you wonder whether I approve of something, I'm always happy to chat.
Just to expand on the above, I've written a new blog post - It's OK to Read Anyone - that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.
I find it so maddeningly short-sighted to praise a white supremacist for being "respectful". White supremacists are not respectful to non-white people! Expand your moral circle!
A recurring problem I find with replies to criticism of associating with white supremacist figures like Hanania is a complete failure to empathize with or understand (or perhaps to care?) why people are so bothered by white supremacy. Implied in white supremacy is the threat of violence against non-white people. Dehumanizing language is intimately tied to physical violence against the people being dehumanized.
White supremacist discourse is not merely part of some kind of entertaining parlour room conversation. It’s a bullet in a gun.
fyi, I weakly downvoted this because (i) you seem like you're trying to pick a fight and I don't think it's productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don't think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you're misrepresenting Trace. (iii) The "expand your moral circle" comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don't care about those harmed by their bad views.
I did not mean the reference to Trace to function as a conversation opener. (Quite the opposite!) I've now edited my original comment to clarify the relevant portion of the tweet. But if anyone wants to disagree with Trace, maybe start a new thread for that rather than replying to me. Thanks!
Yarrow
Now I wonder if you’re actually familiar with Hanania’s white supremacist views? (See here, for example.)
Your comment seems a bit light on citations, and didn't match my impression of Hanania after spending 10s of hours reading his stuff. I've certainly never seen him advocate for an authoritarian government as a means of enforcing a "natural" racial hierarchy. This claim stood out to me:
Hanania called for trying to get rid of all non-white immigrants in the US
Hanania wrote this post in 2023. It's the first hit on his substack search for "immigration". This apparent lack of fact-checking makes me doubt the veracity of your other claims.
It seems like this is your only specific citation:
a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people
This appears to be a falsified quote. [CORRECTION: The quote appears here on Hanania's Twitter. Thanks David. I'm leaving the rest of my comment as originally written, since I think it provides some valuable context.] Search for "we need more" on Wikipedia's second citation. The actual quote is as follows:
...actually solving our crime problem to any serious extent would take a revolution in our culture or system of government. Whether you want to focus on guns or th
I think the comments here are ignoring a perfectly sufficient reason to not, e.g., invite him to speak at an EA-adjacent conference. If I understand correctly, he consistently endorsed white supremacy for several years as a pseudonymous blogger.
Effective Altruism has grown fairly popular. We do not have a shortage of people who have heard of us and are willing to speak at conferences. We can afford to apply a few filtering criteria that exclude otherwise acceptable speakers.
"Zero articles endorsing white supremacy" is one such useful filter.
I predict that people considering joining or working with us would sometimes hear about speakers who'd once endorsed white supremacy, and be seriously concerned. I'd put not-insignificant odds that the number that back off because of this would reduce the growth of the movement by over 10%. We can and should prefer speakers who don't bring this potential problem.
A few clarifications follow:
-Nothing about this relies on his current views. He could be a wonderful fluffy bunny of a person today, and it would all still apply. Doesn't sound like the consensus in this thread, but it's not relev... (read more)
[This comment is no longer endorsed by its author]
Un-endorsed for two reasons.
Manifold invited people based on having advocated for prediction markets, which is a much stricter criterion than being a generic public speaker who feels positively about your organization. With a smaller pool of speakers, it is not trivially cheap to apply filters, so it is not as clear-cut as I claimed. (I could have found out this detail before writing, and I feel embarrassed that I didn't.)
Despite having an EA in a leadership role and ample EA-adjacent folks who associate with it, Manifold doesn't consider itself EA-aligned. It sucks that potential EAs will sometimes mistake non-EAs for EAs, but it is important to respect it when a group tells the wider EA community that we aren't their real dad and can't make requests. (This does not appear to have been common knowledge, so I feel less embarrassed about this one.)
https://twitter.com/RichardHanania/status/1657541010745081857?lang=en. There you go for the quote in the form Wikipedia gives it.
Ebenezer Dukakis
Thank you. Is your thought that "revolution in our culture or system of government" is supposed to be a call for some kind of fascist revolution? My take is that, like a lot of right-leaning people, Hanania sees progressive influence as deep and pervasive in almost all American institutions. From this perspective, a priority on fighting crime even when it means heavily disparate impact looks like a revolutionary change.
Hanania has been pretty explicit about his belief that liberal democracy is generally the best form of government -- see this post for example. If he was crypto-fash, I think he would just not publish posts like that.
BTW, I don't agree with Hanania on everything... for example, the "some humans are in a very deep sense better than other humans" line from the post I just linked sketches me out some -- it seems to conflate moral value with ability. I find Hanania interesting reading, but the idea that EA should distance itself from him on the margin seems like something a reasonable person could believe. I think it comes down to your position in the larger debate over whether EA should prioritize optics vs intellectual vibrancy.
Here is another recent post (titled "Shut up About Race and IQ") that I struggle to imagine a crypto-Nazi writing. E.g. these quotes:
David Mathers🔸
(Well, not quite: Wikipedia edits out "or our culture" as an alternative to "form of government".)
I have very mixed views on Richard Hanania.
On one hand, some of his past views were pretty terrible (even though I believe that you've exaggerated the extent of these views).
On the other hand, he is also one of the best critics of conservatives. Take for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. As someone quite far to the right, he's able to make these points far more credibly than a moderate or liberal ever could.
So I guess I feel he's kind of a necessary voice, at least at this particular point in time when there are few alternatives.
I think it's pretty unreasonable to call him a Nazi--he'd hate Nazis, because he loves Jews and generally dislikes dumb conservatives.
I agree that he seems pretty racist.
When someone makes the accusation that transhumanism or effective altruism or longtermism or worries about low birth rates is a form of thinly veiled covert racism, I generally think they don’t really understand the topic and are tilting at windmills.
But then I see people who are indeed super racist talking about these topics and I can’t really say the critics are fully wrong. Particularly if communities like the EA Forum or the broader online EA community don’t vigorously repudiate the racism.
I'd like to give some context for why I disagree.
Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he's admitted that "I truly sucked back then". However, I think EA causes are more important than political differences. It's valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we're being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.
I also think Hanania has excellent takes on most issues, and that's because he's the most intellectually honest blogger I've encountered. I think Hanania likes EA because he's willing to admit that he's imperfect, unlike EA's critics who would rather feel good about themselves than actually help others.
More broadly, I think we could be doing more to attract people who don't hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:
- In this era of political polarization, it would be a travesty for EA issues to become partisan.
- All else equal, political d... (read more)
Being "pretty racist" with a past history of being even worse is not a mere "political issue."
I don't see how the proposition that Hanania has agreeable views on some issues, like factory farming, contradicts David's position that we should not treat him "as some sort of worthy figure" and (impliedly) that we should not platform him at our events or on our blogrolls.
There is a wide gap between the proposition that EA should seek to attract more "people who don't hold typical Bay Area beliefs" (I agree) and that EA should seek to attract people by playing nice with those like Hanania.
Among other things, the fact is that you can't create a social movement that can encompass 100% of humanity. You can't both be welcoming to people who hold "pretty racist" views and to the targets of their racism. And if you start welcoming in the pretty-racist, you're at least risking the downward spiral of having more racism-intolerant people leave --> more openness to racism --> more departures from those intolerant of racism --> soon, you've got a whole lot of racism going on.
+1
If even some of the people defending this person start with "yes, he's pretty racist," that makes me think David Mathers is totally right.
Regarding cata's comment:
But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.
Why move from "wrong or heartless" to "unusual people with unusual views"? None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it). It would also be directly opposed to EA core principles (compassion, equal consideration of interests).
Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not because of their general moral character.
I think sufficiently shitty character should be disqualifying. I agree with you insofar as, if someone has ideas that seem worth discussing, I can imagine a stance of "we're talking to this person in a mo... (read more)
Why move from "wrong or heartless" to "unusual people with unusual views"?
I believe these two things:
A) People don't have very objective moral intuitions, so there isn't widespread agreement on what views are seriously wrong.
B) Unusual people typically come by their unusual views by thinking in some direction that is not socially typical, and then drawing conclusions that make sense to them.
So if you are a person who does B, you probably don't and shouldn't have confidence that many other people won't find your views to be seriously wrong. So a productive intellectual community that wants to hear things you have to say, should be prepared to tolerate views that seem seriously wrong, perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)
None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it).
I think this is absolutely false. A kind of obvious example (to many, since as above, people do not unanimously agree on what is hateful) is that famous Nick Bostrom... (read more)
I think this is just naive. People pay money and spend their precious time to go to these conferences. If you invite a racist, the effect will be twofold:
More racists will come to your conference.
More minorities, and people sympathetic to minorities, will stay home.
When this second group stays home (as is their right), they take their bold and unusual ideas with them.
By inviting a racist, you are not selecting for "bold and unusual ideas". You are selecting for racism.
And yes, a similar dynamic will play out with many controversial ideas. Which is why you need to exit the meta level, and make deliberate choices about which ideas you want to keep, and which groups of people you are okay with driving away. This also comes with a responsibility to treat said topics with appropriate levels of care and consideration, something that, for example, Bostrom failed horribly at.
I feel like you're trying to conflate "wrong or heartless" (or "heartless-and-prejudiced," as I called it elsewhere) with "socially provocative" or "causes outrage to a subset of readers."
That feels like misdirection.
I see two different issues here:
(1) Are some ideas that cause social backlash still valuable?
(2) Are some ideas shitty and worth condemning?
My answer is yes to both.
When someone expresses a view that belongs in (2), pointing at the existence of (1) isn't a good defense.
You may be saying that we should be humble and can't tell the difference, but I think we can. Moral relativism sucks.
FWIW, if I thought we couldn't tell the difference, then it wouldn't be obvious to me that we should go for "condemn pretty much nothing" as opposed to "condemn everything that causes controversy." Both of these seem equally extremely bad.
I see that you're not quite advocating for "condemn nothing" because you write this bit:
perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)
It depends on what you mean exactly, but I think this may not be going far enough. Some people don't cult... (read more)
It seems to me like there's no disagreement among people familiar with Hanania that his views were worse in the past. That's a red flag. Some people say he's changed his views. I'm not per se against giving people second chances, but it seems suspicious to me that someone who admits that they've had really shitty racist views in the past now continues to focus on issues where they – even according to other discussion participants here who defend him – still seem racist.
Agreed. I think the 2008-10 postings under the Hoste pseudonym are highly relevant insofar as they show a sustained pattern of bigotry during that time. They are just not consistent in my mind with having fallen into error despite even minimally good-faith, truth-seeking behavior combined with major errors in judgment. Sample quotations in this article. Once you get to that point, you may get a second chance at some future time, but I'm not inclined to give you the benefit of the doubt on your second chance:
A person who published statements like the Hoste statements over a period of time, but has reformed, should be on notice that there was something in them that led them to the point of glorifying white nationalism and
I agree with you when you said that we can know evil ideas when we see them and rightly condemn them. We don't have to adopt some sort of generic welcomingness to all ideas, including extremist hate ideologies.
I disagree with you about some of the examples of alleged racism or prejudice or hateful views attributed to people like Nick Bostrom and Scott Alexander. I definitely wouldn't wave these examples away by saying they "seem fine to me." I think one thing you're trying to say is that these examples are very different from someone being overtly and egregiously white supremacist in the worst way like Richard Hanania, and I agree. But I wouldn't say these examples are "fine".
It is okay to criticize the views and behaviour of figures perceived to be influential in EA. I think that's healthy.
cata
Appreciate the reply. I don't have a well-informed opinion about Hanania in particular, and I really don't care to read enough of his writing to try to get one, so I think I said everything I can say about the topic (e.g. I can't really speak to whether Hanania's views are specifically worse than all the examples I think of when I think of EA views that people may find outrageous.)
Yarrow
Wikipedia:
Yarrow
See this comment for a more detailed survey of Hanania's white supremacy.
I don't think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgement, but ideology is a different domain from most. I am honestly quite invested in something like 'moral progress'. It's a bit of a naive position to have to defend philosophically, but I think most altruists are too, at least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincerely changes their ways I'm happy to call them brother or sister. Have a party. Slaughter the uhhhhh fattest pumpkin and make vegan pumpkin pie.
However, Mr. Hanania is still quite racist. He may or may not still be more of a Nazi than he lets on, but even his professed views are quite bad. I'm not sure what the policy should be on cooperating with people with opposing value sets, or on Hanania himself. I just wanted to say something in support of being truly welcoming to anyone who real-deal rejects their past harmful ideology.
I have been extremely unimpressed with Richard Hanania and I don't understand why people find his writing interesting. But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.
Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not because of their general moral character. Especially not because of the moral character of their beliefs, rather than their actions. And really especially not because of the moral character of things they used to believe.
By not "shunning" (actual, serious) racists, you are indirectly "shunning" everybody they target.
Imagine if there was a guy whose "unusual idea" was that some random guy called Ben was the source of all the evils in the world. Furthermore, this is somehow a widespread belief, and Ben has to deal with widespread harassment and death threats despite doing literally nothing wrong. You invite, as a speaker at your conference, someone who previously said that Ben is a "demonic slut who needs to be sterilised".
Do you think Ben is going to show up to your conference?
And this can sometimes set into motion a "Nazi death spiral". You let a few Nazis into your community for "free speech" reasons. All the people uncomfortable with the presence of one or two Nazis leave, making the Nazis a larger percentage of the community, attracting more, which makes more people leave, until only Nazis and people who are comfortable with Nazis are left. This has literally happened on several occasions!
Shunning people for saying vile things is entirely fine and necessary for the health of a community. This is called "having standards".
I would add that it's shunning people for saying vile things with ill intent that seems necessary. This is what separates the case of Hanania from others. In most cases, punishing well-intentioned people is counterproductive. It drives them closer to those with ill intent, and suggests to well-intentioned bystanders that they need to choose to associate with the other sort of extremist to avoid being persecuted. I'm not an expert on history, but from my limited knowledge a similar dynamic might have existed in Germany in the 1920s/1930s; people were forced to choose between the far-left and the far-right.
The Germany argument works better the other way round: there were plenty of non-communist alternatives to Hitler (and the communists weren't capable of winning at the ballot box), but a lot of Germans who didn't share his race obsession thought he had some really good ideas worth listening to, and then many moderate rivals eventually concluded they were better off working with him.
I don't think it's "punishing" people not to give them keynote addresses and citations as allies. I doubt Leif Wenar is getting invitations to speak at EA events any time soon, not because he's an intolerable human being but simply because his core messaging is completely incompatible with what EA is trying to do...
titotal
I do not think the rise of Nazi Germany had much to do with social "shunning". It was more a case of the economy being in shambles, both the far-left and far-right wanting to overthrow the government, and them fighting physical battles in the street over it, until the right-wing won enough of the populace over. I guess there was left-wing infighting between the communists and the social democrats, but that was less over "shunning" than over murdering the other side's leaders.
I think intent should be a factor when thinking about whether to shun, but it should not be the only factor. If you somehow convinced me that a holocaust denier genuinely bore no ill intent, I still wouldn't want them in my community, because it would create a massively toxic atmosphere and hurt everybody else. I think it's good to reach out and try to help well-intentioned people see the errors of their ways, but it's not the responsibility of the EA movement to do so here.
Timothy Chan
Yes, a similar dynamic (relating to siding with another side to avoid persecution) might have existed in Germany in the 1920s/1930s (e.g. I imagine industrialists preferred Nazis to Communists). I agree it was not a major factor in the rise of Nazi Germany - which was one result of the political violence - and that there are differences.
jacobjacob
(I haven't read the full comment here and don't want to express opinions about all its claims. But for people who saw my comments on the other post, I want to state for the record that based on what I've seen of Richard Hanania's writing online, I think Manifest next year would be better without him. It's not my choice, but if I organised it, I wouldn't invite him. I don't think of him as a "friend of EA".)
Thomas Kwa
Given the Guardian piece, inviting Hanania to Manifest seems like an unforced error on the part of Manifold and possibly Lightcone. This does not change just because the article was a hit piece with many inaccuracies. I might have more to say later.
Given his past behavior, I think it's more likely than not that you're right about him. Even someone more skeptical should acknowledge that the views he expressed in the past and the views he now expresses likely stem from the same malevolent attitudes.
But about far-left politics being 'not racist', I think it's fair to say that far-left politics discriminates in favor of or against individuals on the basis of race. It's usually not the kind of malevolent racial discrimination of the far-right - which absolutely needs to be condemned and eliminated by society. The far left appears primarily motivated by benevolence towards racial groups perceived to be disadvantaged or that are in fact disadvantaged, but it is still racially discriminatory (and it sometimes turns into the hateful type of discrimination). If we want to treat individuals on their own merits, and not on the basis of race, that sort of discrimination must also be condemned.
Also, there is famously quite a lot of antisemitism on the left and far left. Sidestepping the academic debate on whether antisemitism is or is not technically a form of racism, it seems strange to me to claim that racism-and-adjacent only exists on the right.
(for avoidance of doubt, I agree with the OP that Hanania seems racist, and not a good ally for this community)
This is such a common-sense take that it worries me it needs writing. I assume this is happening over on Twitter (where I don't have an account)? The average non-EA would consider this take extremely obvious, which is partly why I think we should be concerned about the composition of the movement in general.
Jason
To clarify, I think when you say "sterilization of everyone under 90" you mean that he favored the "forcible sterilization of everyone with an IQ below 90" (quoting Wikipedia here)?
I'm working on a "who has funded what in AI safety" doc. Surprisingly, when I looked up Lightspeed Grants online (https://lightspeedgrants.org/) I couldn't find any list of what they funded. Does anyone know where I could find such a list?
Yep, the Lightspeed Grants table is part of the SFF table! I also think we should have published our own table, but it seemed lower priority after it was included in the SFF one.
We might also release a Lightspeed Grants retrospective soon.
Benevolent_Rain
Thanks for doing that, and I look forward to hopefully seeing your findings published. It would be valuable, at least to me, for the doc to show clearly, if you have time for that, whether there might be biases in funding - it might be as important what is not funded as what is funded. For example, if some collection of smaller donors put 40% of funding towards considering slowing down AI, while a larger donor spends less than 2%, that might be interesting at least as a pointer towards investigating such disparities in more detail (I noticed that Pause AI was a bit higher up in the donation election results, for example).
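If it helps, here is a minimal sketch of the kind of breakdown I mean. The file name and columns ('funder', 'category', 'amount_usd') are hypothetical placeholders for whatever the doc actually ends up containing:

```python
import pandas as pd

# Hypothetical grants table, one row per grant; file and column names are placeholders.
grants = pd.read_csv("ai_safety_grants.csv")   # columns: funder, category, amount_usd

# Each funder's share of its own giving per category (e.g. "pause/slowdown advocacy").
totals = grants.pivot_table(index="funder", columns="category",
                            values="amount_usd", aggfunc="sum", fill_value=0)
shares = totals.div(totals.sum(axis=1), axis=0)

print(shares.round(2))
```

A row-normalised table like this makes it easy to spot when one funder puts 40% of its money into a category that another funder gives less than 2% to.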
David Mathers🔸
Firstly, it's not really me you should be thanking; it's not my project, I am just helping with it a bit.
Secondly, it's just another version of this, so don't expect any info about funding beyond an update to the funding info in it: https://www.alignmentforum.org/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety
I feel like people haven't taken the "are mosquito nets bad because of overfishing" question seriously enough and that it might be time to stop funding mosquito nets because of it. (Or at least until we can find an org that only gives them out in places with very little opportunity for or reliance on fishing.) I think people just trust GiveWell on this, but I think that is a mistake: I can't find any attempt by them to actually do even a back of the envelope calculation of the scale of the harm through things like increased food insecurity (or indeed harm to fish I guess.) And also, it'd be so mega embarrassing for them if nets were net negative, that I don't really trust them to evaluate this fairly. (And actually that probably goes for any EA org, or to some extent public health people as a whole.) The last time this was discussed on the forum:
1) the scale seemed quite concerning (https://forum.effectivealtruism.org/posts/enH4qj5NzKakt5oyH/is-mosquito-net-fishing-really-net-positive)
2) No one seemed to have a quick disproof that it made nets net negative. (Plus we also care if it just pushes their net effect below GiveDirectly or other options.)
3) There was surprisin... (read more)
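To make the kind of estimate I have in mind concrete, here is a minimal back-of-envelope sketch. Every number in it is a placeholder invented purely to show the structure of the comparison; the real inputs (misuse rates, harm per misused net, malaria benefit) are exactly the things that would need actual data:

```python
# Back-of-envelope structure: could fishing-related harms plausibly rival the
# malaria benefits of a net distribution? All numbers are made-up placeholders.

nets_distributed = 1_000_000
lives_saved_per_1000_nets = 0.55         # placeholder malaria-mortality benefit
dalys_per_life_saved = 30                # placeholder

share_of_nets_used_for_fishing = 0.05    # placeholder; survey estimates vary widely
dalys_lost_per_fishing_net = 0.1         # placeholder: food insecurity / ecosystem harm

benefit_dalys = nets_distributed / 1000 * lives_saved_per_1000_nets * dalys_per_life_saved
harm_dalys = nets_distributed * share_of_nets_used_for_fishing * dalys_lost_per_fishing_net

print(f"Benefit: {benefit_dalys:,.0f} DALYs averted")
print(f"Harm:    {harm_dalys:,.0f} DALYs lost")
print(f"Harm as a share of benefit: {100 * harm_dalys / benefit_dalys:.0f}%")

# How bad would the average misused net have to be for harms to cancel benefits?
breakeven = benefit_dalys / (nets_distributed * share_of_nets_used_for_fishing)
print(f"Break-even harm per fishing net: {breakeven:.2f} DALYs")
```

With these made-up inputs the harm comes out as a sizeable fraction of the benefit, which is just a way of saying the answer hinges on numbers nobody here seems to have.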
"Fishing," said the old man "is at least as complicated as any other industry".
I was sitting in a meeting of representatives of the other end of the fishing industry: fleets of North Sea trawlers turning over >£1million each per year, fishing in probably the world's most studied at-risk fishing ecosystem. They were fuming because in the view of the scientists studying North Sea fish, cod stocks had reached dangerously low levels and their quotas needed reducing, but in the view of the fishermen actually catching the fish, cod stocks off the east coast of England were at such high levels they hit their month's cod quota in a day whilst actively trying to avoid catching cod. (I have no reason to believe that either view was uninformed or deceptive). "What they're probably not factoring in," he closed on, "is that cod populations in different regions are cyclical"
The point of that waffly anecdote is that factoring in the effects of mosquito nets on local fish ecosystems would actually be really hard, because an RCT in one area over one year really isn't going to tell you much about the ecosystems in other areas, or in other years. Even more so in isolated African watercourses... (read more)
There is a tension between different EA ideas here in my view. Early on, I recall, the emphasis was on how you need charity evaluators like GiveWell, and RCTs by randomista development economists, because you can't predict what interventions will work well, or even do more good than harm, on the basis of common-sense intuition. (I remember Will giving examples like "actually, having prisoners talk to children about how horrible being in prison is seems to make the children more likely to grow up to commit crimes.") But it turns out that when assessing interventions, there are always points where there just isn't high-quality data on whether some particular factor importantly reduces (or increases) the intervention's effectiveness. So at that point, we have to rely on common sense, mildly disciplined by back of the envelope calculations, possibly supplemented by poor-quality data if we're lucky. And then it feels unfair when this is criticized by outsiders (like the recent polemical anti-EA piece in Wired) because well, what else can you possibly do if high-quality studies aren't available and it's not feasible to do them yourself? But I guess from the outsider's perspective, it's easy to see why this looks like hypocrisy: they criticized other people for relying on their general hunches about how things work, but now the EAs are doing it themselves! I'm not really sure what the general solution (if any) to this is. But it does feel to me like there are a vast number of choice points in GiveWell's analyses where they are mostly guessing, and if those guesses are all biased in some direction rather than uncorrelated, assessments of interventions will be way off.
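To illustrate that last worry, here is a toy simulation with arbitrary numbers: if a cost-effectiveness estimate multiplies together ten judgment calls, independent errors mostly wash out, but a single shared bias applied at every choice point compounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n_choice_points = 10      # judgment calls multiplied into a cost-effectiveness estimate
n_trials = 10_000
noise_sd = 0.2            # ~20% log-scale error per judgment call (arbitrary)

# Independent errors at each choice point.
independent = np.exp(rng.normal(0, noise_sd, (n_trials, n_choice_points))).prod(axis=1)

# One shared directional bias (e.g. general optimism) applied at every choice point.
shared = rng.normal(0, noise_sd, (n_trials, 1))
correlated = np.exp(shared * np.ones(n_choice_points)).prod(axis=1)

for name, x in [("independent errors", independent), ("correlated errors", correlated)]:
    print(f"{name}: median x{np.median(x):.2f}, 95th percentile x{np.percentile(x, 95):.1f}")
```

With 20% noise per choice point, the independent case rarely strays beyond roughly a factor of three, while the correlated case is frequently several-fold and sometimes an order of magnitude off.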
David Mathers🔸
Thanks that is helpful. It's frustrating how hard it is to be sure about this.
Seth Ariel Green
there have been a few "EA" responses to this issue but TBF they can be a bit hard to find
https://www.cold-takes.com/minimal-trust-investigations/
https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/
Rebecca
The Wired article says that there’s been a bunch more research in recent years about the effects of bed nets on fish stocks, so I would consider the GiveWell response out of date
David Mathers🔸
I don't actually find either all THAT reassuring. The GW blogpost just says most nets are used for their intended purpose, but 30% being used otherwise is still a lot, not to mention they can be used for their intended purpose first and later used to fish. The Cold Takes blog post just cites the same data about most nets being used for their intended purpose.
David Mathers🔸
I had seen the second of these at some point I think, but not the first.
Wes Reisen
1. You do bring up an interesting point that this should be factored into where nets are distributed.
2. (This is a very draft-stage idea) maybe if the nets weren't that waterproof, this issue would be solved? (cons: flooding, rain, potential pollution if it dissipates in the water, and less durability)
3. Maybe mention this to someone at GiveWell? idk tho
Some very harsh criticism of Leopold Aschenbrenner's recent AGI forecasts in the comments on this Metaculus question. People who are following stuff more closely than me will be able to say whether or not they are reasonable:
I didn't read all the comments, but Order's are obvious nonsense, of the "(a+b^n)/n = x, therefore God exists" tier. E.g., take this comment:
This is obviously invalid. The existence of a theoretical complexity upper bound (which, incidentally, Order doesn't have numbers for) doesn't mean we are anywhere near it, numerically. Those aren't even the same level of abstraction! Furthermore, we have clear theoretical proofs for how fast sorting can get, without AFAIK any such theoretical limits for learning. "Algorithms cannot infinitely improve" is irrelevant here; it's the slightly more mathy way to say a deepity like "you can't have infinite growth on a finite planet," without actual relevant semantic meaning[1].
Numerical improvements happen all the time, sometimes by OOMs. No "new mathematics or physics" required.
Frankly, as a former active user of Metaculus, I feel pretty insulted by his comment. Does he really think no one on Metaculus took CS 101?
1. ^
It's probably true that every apparently "exponential" curve becomes a sigmoid eventually, but knowing this fact doesn't let you time the transition. You need actual object-level arguments and understanding, and even then it's very very hard (as people arguing against Moore's Law or for "you can't have infinite growth on a finite planet" found out).
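As a toy illustration of this point (my own numbers, chosen only for illustration): a logistic curve with a ceiling of 10^8 and an exponential growing 10x per year are numerically almost indistinguishable until shortly before the ceiling, so early data alone can't tell you when the bend comes:

```python
import numpy as np

t = np.arange(0, 10)            # years
rate = np.log(10)               # 10x per year early on
ceiling = 1e8                   # saturation level, unknown in advance

exponential = np.exp(rate * t)
logistic = ceiling / (1 + (ceiling - 1) * np.exp(-rate * t))

for year in (1, 3, 5, 7, 9):
    print(f"year {year}: logistic/exponential = {logistic[year] / exponential[year]:.3f}")
```

The ratio stays within about 0.1% of 1 through year 5 and only collapses in the last couple of years before saturation.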
Linch
To be clear I also have high error bars on whether traversing 5 OOMs of algorithmic efficiency in the next five years is possible, but that's because a) high error bars on diminishing returns to algorithmic gains, and b) a tentative model that most algorithmic gains in the past were driven by compute gains, rather than exogenous to it. Algorithmic improvements in ML seem much more driven by the "f-ck around and find out" paradigm than deep theoretical or conceptual breakthroughs; if we model experimentation gains as a function of quality-adjusted researchers multiplied by compute multiplied by time, it's obvious that the compute term is the one that's growing the fastest (and thus the thing that drives the most algorithmic progress).
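A toy version of that model, with illustrative growth rates made up for the sketch rather than estimated: if experimentation scales with quality-adjusted researchers times compute times time, the compute term dominates over a five-year window:

```python
# Illustrative growth assumptions over a five-year window (not estimates):
years = 5
researcher_growth_per_year = 1.2   # assumed ~20%/yr growth in quality-adjusted researchers
compute_growth_per_year = 4.0      # assumed ~4x/yr growth in experiment compute

factors = {
    "researchers": researcher_growth_per_year ** years,
    "compute": compute_growth_per_year ** years,
    "time": float(years),          # time enters roughly linearly
}

product = 1.0
for name, factor in factors.items():
    product *= factor
    print(f"{name:11s} grows x{factor:,.1f}")
print(f"experimentation proxy grows x{product:,.0f}")
```

Under these assumptions compute contributes a factor of roughly 1,000 while researchers and calendar time together contribute barely more than 10x, which is the sense in which compute "drives" the algorithmic progress in this model.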
Order
In the future I would recommend reading the full comment. Admitting your own lack of knowledge (not having read the comments) and then jumping to "obviously nonsense" and "insulting" and "Does he really think no one on Metaculus took CS 101?" is not an amazing first impression of EA. You selected the one snippet where I was discussing a complicated topic (ease of algorithmic improvements) instead of low hanging and obviously wrong topics like Aschenbrenner seemingly being unable to do basic math (3^3) using his own estimates for compute improvements. I consider this to be a large misrepresentation of my argument and I hope that you respond to this forthcoming comment in good faith.
Anyway, I am crossposting my response from Metaculus, since I responded there at length:
...there is a cavernous gap between:
- we don't know the lower bound computational complexity
versus
- 100,000x improvement is very much in the realm of possibilities, and
- if you extend this trendline on a log plot, it will happen by 2027, and we should take this seriously (aka there is nothing that makes [the usual fraught issues with extending trendlines](https://xkcd.com/605/) appear here)
I find myself in the former camp. If you question that a sigmoid curve is likely, there is no logical basis to believe that 100,000x improvement in LLM algorithm output speed at constant compute (Aschenbrenner's claim) is likely either.
Linch's evidence to suggest that 100,000x is likely is:
- Moore's Law happened [which was a hardware miniaturization problem, not strictly an algorithms problem, so doesn't directly map onto this. But it is evidence that humans are capable of log plot improvement sometimes]
- "You can't have infinite growth on a finite planet" is false [it is actually true, but we are not utilizing Earth anywhere near fully]
- "Numerical improvements happen all the time, sometimes by OOMs" [without cited evidence]
None of these directly show that 100,000x improvement in compute or spe
Linch
I appreciate that you replied! I'm sorry if I was rude. I think you're not engaging with what I actually said in my comment, which is pretty ironic. :)
(eg there are multiple misreadings. I've never interacted with you before so I don't really know if they're intentional)
Linch
(I replied more substantively on Metaculus)
JWS 🔸
The Metaculus timeline is already highly unreasonable given the resolution criteria,[1] and even these people think Aschenbrenner is unmoored from reality.
1. ^
Remind me to write this up soon
David Mathers🔸
No reason to assume an individual Metaculus commentator agrees with the Metaculus timeline, so I don't think that's very fair.
I actually think the two Metaculus questions are just bad questions. The detailed resolution criteria don't necessarily match what we intuitively think of as AGI or transformative AI, or obviously capture anything that important, and it is just unclear whether people are forecasting on the actual resolution criteria or on their own idea of what "AGI" is.
All the tasks in both AGI questions are quite short, so it's easy to imagine an AI beating all of them and yet not being able to replace most human knowledge workers, because it can't handle long-running tasks. It's also just not clear how performance on benchmark questions and the Turing test translates to competence with even short-term tasks in the real world. So even if you think AGI in the sense of "AI that can automate all knowledge work" (let alone all work) is far away, it might make sense to think we are only a few years from a system that can resolve these questions 'yes'.
On the other hand, resolving the questions 'yes' could conceivably lag the invention of some very powerful and significant systems, perhaps including some that some reasonable definition would count as AGI.
As someone points out in the comments of one of the questions: right now, any mainstream LLM will fail the Turing test, however smart, because if you ask "how do I make chemical weapons" it'll read you a stiff lecture about why it can't do that as it would violate its principles. In theory, that could remain true even if we reach AGI. (The questions only resolve 'yes' if a system that can pass the Turing test is actually constructed; it's not enough for this to be easy to do if OpenAI or whoever want to.) And the stronger of the two questions requires that a system can do a complex manual task. Fair enough, some reasonable definitions of "AGI" do require machines that can match humans at every manual dexteri
JWS 🔸
Which particular resolution criteria do you think it's unreasonable to believe will be met by 2027/2032 (depending on whether it's the weak AGI question or the strong one)?
Two of the four in particular stand out. First, the Turing Test one, exactly for the reason you mention - asking the model to violate the terms of service is surely an easy way to win. That's the resolution criterion, so unless Metaculus users think that'll be solved in 3 years[1], the estimates should be higher. Second, the SAT-passing criterion requires "having less than ten SAT exams as part of the training data", which is very unlikely in current frontier models, and labs probably aren't keen to share what exactly they have trained on.
it is just unclear whether people are forecasting on the actual resolution criteria or on their own idea of what "AGI" is.
No reason to assume an individual Metaculus commentator agrees with the Metaculus timeline, so I don't think that's very fair.
I don't know if it is unfair. This is Metaculus! Premier forecasting website! These people should be reading the resolution criteria and judging their predictions according to them. Just going off personal vibes on how much they 'feel the AGI' feels like a sign of epistemic rot to me. I know not every Metaculus user agrees with this, but it is shaped by the aggregate - 2027/2032 are very short timelines, and those are median community predictions. This is my main issue with the Metaculus timelines atm.
I actually think the two Metaculus questions are just bad questions.
I mean, I do agree with you in the sense that they don't fully match AGI, but that's partly because 'AGI' covers a bunch of different ideas and concepts. It might well be possible for a system to satisfy these conditions but not replace knowledge workers; perhaps a new market focusing on automation and employment might be better, but that also has its issues with operationalisation.
1. ^
On top of everything else needed to successfull
David Mathers🔸
What I meant to say was unfair was basing "even Metaculus users, who have short timelines, think Aschenbrenner's stuff is bad" off the reaction to Aschenbrenner of only one or two people.
David Mathers🔸
Which particular resolution criteria do you think it's unreasonable to believe will be met by 2027/2032 (depending on whether it's the weak AGI question or the strong one)?
Why move from "wrong or heartless" to "unusual people with unusual views"? None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it). It would also be directly opposed to EA core principles (compassion, equal consideration of interests).
I think sufficiently shitty character should be disqualifying. I agree with you insofar that, if someone has ideas that seem worth discussing, I can imagine a stance of "we're talking to this person in a mo... (read more)
I believe these two things:
A) People don't have very objective moral intuitions, so there isn't widespread agreement on what views are seriously wrong.
B) Unusual people typically come by their unusual views by thinking in some direction that is not socially typical, and then drawing conclusions that make sense to them.
So if you are a person who does B, you probably don't and shouldn't have confidence that many other people won't find your views to be seriously wrong. So a productive intellectual community that wants to hear things you have to say, should be prepared to tolerate views that seem seriously wrong, perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)
I think this is absolutely false. A kind of obvious example (to many, since as above, people do not unanimously agree on what is hateful) is that famous Nick Bostrom... (read more)
I think this is just naive. People pay money and spend their precious time to go to these conferences. If you invite a racist, the effect will be twofold:
When this second group stays home (as is their right), they take their bold and unusual ideas with them.
By inviting a racist, you are not selecting for "bold and unusual ideas". You are selecting for racism.
And yes, a similar dynamic will play out with many controversial ideas. Which is why you need to exit the meta level, and make deliberate choices about which ideas you want to keep, and which groups of people you are okay with driving away. This also comes with a responsibility to treat said topics with appropriate levels of care and consideration, something that, for example, Bostrom failed horribly at.
I feel like you're trying to equivocate "wrong or heartless" (or "heartless-and-prejudiced," as I called it elsewhere) with "socially provocative" or "causes outrage to a subset of readers."
That feels like misdirection.
I see two different issues here:
(1) Are some ideas that cause social backlash still valuable?
(2) Are some ideas shitty and worth condemning?
My answer is yes to both.
When someone expresses a view that belongs into (2), pointing at the existence of (1) isn't a good defense.
You may be saying that we should be humble and can't tell the difference, but I think we can. Moral relativism sucks.
FWIW, if I thought we couldn't tell the difference, then it wouldn't be obvious to me that we should go for "condemn pretty much nothing" as opposed to "condemn everything that causes controversy." Both of these seem equally extremely bad.
I see that you're not quite advocating for "condemn nothing" because you write this bit:
It depends on what you mean exactly, but I think this may not be going far enough. Some people don't cult... (read more)
Agreed. I think the 2008-10 postings under the Hoste pseudonym are highly relevant insofar as they show a sustained pattern of bigotry during that time. They are just not consistent in my mind with having fallen into error despite even minimally good-faith, truth-seeking behavior combined with major errors in judgment. Sample quotations in this article. Once you get to that point, you may get a second chance at some future time, but I'm not inclined to give you the benefit of the doubt on your second chance:
- A person who published statements like the Hoste statements over a period of time, but has reformed, should be on notice that there was something in them that led them to the point of glorifying white nationalism and
... (read more)I don't think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgement but ideology is a different domain from most. I am honestly quite invested in something like 'moral progress'. Its a bit of a naive position to have to defend philosophically but I think most altruists are too. At least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincerely changes their ways Im happy to call them brother or sister. Have a party. Slaughter the uhhhhh fattest pumpkin and make vegan pumpkin pie.
However mr Hanania is stil quite racist. He may or may not still be more of a Nazi than he lets on but even his professed views are quite bad. Im not sure what the policy should be on cooperating with people with opposing value sets. Or on Hanania himself. I just wanted to say something in support of being truly welcoming to anyone who real deal rejects their past harmful ideology.
I have been extremely unimpressed with Richard Hanania and I don't understand why people find his writing interesting. But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.
Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not because of their general moral character. Especially not because of the moral character of their beliefs, rather than their actions. And really especially not because of the moral character of things they used to believe.
By not "shunning" (actual, serious) racists, you are indirectly "shunning" everybody they target.
Imagine if there was a guy who's "unusual idea" was that some random guy called ben was the source of all the evils in the world. Furthermore, this is somehow a widespread belief, and he has to deal with widespread harrasment and death threats, despite doing literally nothing wrong. You invite, as speaker at your conference, someone who previously said that Ben is a "demonic slut who needs to be sterilised".
Do you think Ben is going to show up to your conference?
And this can sometimes set into motion a "nazi death spiral". You let a few nazis into your community for "free speech" reasons. All the people uncomfortable with the presence of one or two nazis leave, making the nazis a larger percentage of the community, attracting more, which makes more people leave, until only nazi's and people who are comfortable with nazis are left. This has literally happened on several occasions!
Shunning people for saying vile things is entirely fine and necessary for the health of a community. This is called "having standards".
I would add that it's shunning people for saying vile things with ill intent which seems necessary. This is what separates the case of Hanania from others. In most cases, punishing well-intentioned people is counterproductive. It drives them closer to those with ill intent, and suggests to well-intentioned bystanders that they need to choose to associate with the other sort of extremist to avoid being persecuted. I'm not an expert on history but from my limited knowledge a similar dynamic might have existed in Germany in the 1920s/1930s; people were forced to choose between the far-left and the far-right.
Given his past behavior, I think it's more likely than not that you're right about him. Even someone more skeptical should acknowledge that the views he expressed in the past and the views he now expresses likely stem from the same malevolent attitudes.
But about far-left politics being 'not racist', I think it's fair to say that far-left politics discriminates in favor or against individuals on the basis of race. It's usually not the kind of malevolent racial discrimination of the far-right - which absolutely needs to be condemned and eliminated by society. The far-left appear primarily motivated by benevolence towards racial groups perceived to be disadvantaged or are in fact disadvantaged, but it is still racially discriminatory (and it sometimes turns into the hateful type of discrimination). If we want to treat individuals on their own merits, and not on the basis of race, that sort of discrimination must also be condemned.
Also, there is famously quite a lot of antisemitism on the left and far left. Sidestepping the academic debate on whether antisemitism is or is not technically a form of racism, it seem strange to me to claim that racism-and-adjacent only exist on the right.
(for avoidance of doubt, I agree with the OP that Hanania seems racist, and not a good ally for this community)
I'm working on a "who has funded what in AI safety" doc. Surprisingly, when I looked up Lightspeed Grants online (https://lightspeedgrants.org/) I couldn't find any list of what they funded. Does anyone know where I could find such a list?