richard_ngo

7193 karma

Bio

Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com

Sequences
2

Replacing Fear
EA Archives Reading List

Comments
316

This seems like the wrong meta-level orientation to me. A meta-level orientation that seems better to me is something like "Truth and transparency have strong global benefits, but often don't happen enough because they're locally aversive. So assume that sharing information is useful even when you're not concretely sure how it'll help, and assume by default that power structures (including boards, social networks, etc) are creating negative externalities insofar as they erect barriers to you sharing information".

The specific tradeoff between causing drama and sharing useful information will of course be situation-dependent, but in this situation the magnitude of the issues involved feels like it should significantly outweigh concerns about "stirring up drama", at least if you make attempts to avoid phrasing the information in particularly-provocative or careless ways.

I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape American politics, and that Twitter is a leading indicator.

you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence

This seems true, but I'd also say that the people who do take AI risk seriously typically lack different important forms of competence. I don't think this is coincidental; instead I'd say that there's (usually) a tradeoff between "good at taking very abstract ideas seriously" and "good at operating in complex fast-moving environments". The former typically requires a thinking-first orientation to the world, the latter an action-first one. It's possible to cultivate both, but I'd say most people are naturally inclined to one or the other (or neither).

If Hasan had said that more recently or I was convinced he still thought that, then I would agree he should not be invited to Manifest.

My claim is that the Manifest organizers should have the right to invite him even if he'd said that more recently. But appreciate you giving your perspective, since I did ask for that (just clarifying the "agree" part).

Having said that, given that there is a very clear non-genocidal reading, I do not think it is a clear example of hate speech in quite the same sense as Hanania's animals remark

I have some object-level views about the relative badness but my main claim is more that this isn't a productive type of analysis for a community to end up doing, partly because it's so inherently subjective, so I support drawing lines that help us not need to do this analysis (like "organizers are allowed to invite you either way").

Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?

Of course this is all a spectrum, but I don't believe this implication, in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined towards being some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeking, it would probably still not do a great job of pushing the intellectual frontier, because it wouldn't be playing to its strengths (and meanwhile it would trade off a lot of its power-seeking ability). I think the converse is true for EA.

I broadly endorse Jeff's comment above. To put it another way, though: I think many (but not all) of the arguments from the Kolmogorov complicity essay apply whether the statements which are taboo to question are true or false. As per the quote at the top of the essay:

"A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it. Scientists go looking for trouble."

That is: good scientists will try to break a wide range of conventional wisdom. When the conventional wisdom is true, then they will fail. But the process of trying to break the conventional wisdom may well get them in trouble either way, e.g. because people assume they're pushing an agenda rather than "just asking questions".

The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.

I agree that extreme truth-seeking can be counterproductive. But in most worlds I don't think that EA's impact comes from arguing for highly controversial ideas; and I'm not advocating for extreme truth-seeking like, say, hosting public debates on the most controversial topics we can think of. Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.

One person I was thinking about when I wrote the post was Mehdi Hasan. According to Wikipedia:

During a sermon delivered in 2009, quoting a verse of the Quran, Hasan used the terms "cattle" and "people of no intelligence" to describe non-believers. In another sermon, he used the term "animals" to describe non-Muslims.

Hasan has spoken several times at the Oxford Union and also in a recent public debate on antisemitism, so clearly he's not beyond the pale for many.

I personally also think that the "from the river to the sea" chant is pretty analogous to, say, white nationalist slogans. It does seem to have a complicated history, but in the wake of the October 7 attacks its association with Hamas should I think put it beyond the pale. Nevertheless, it has been defended by Rashida Tlaib. In general I am in favor of people being able to make arguments like hers, but I suspect that if Hanania were to make an argument for why a white nationalist slogan should be interpreted positively, it would be counted as a strong point against him.

I expect that either Hasan or Tlaib, were they interested in prediction markets, would have been treated similarly to Hanania by the Manifest organizers.

I don't have more examples off the top of my head because I try not to follow this type of politics too much. I would be pretty surprised if an hour of searching didn't turn up a bunch more though.

I wasn't at Manifest, though I was at LessOnline beforehand. I strongly oppose attempts to police the attendee lists that conference organizers decide on. I think this type of policing makes it much harder to have a truth-seeking community. I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.

Why does enforcing deplatforming make truth-seeking so much harder? I think there are (at least) three important effects.

First is the one described in Scott's essay on Kolmogorov complicity. Selecting for people willing to always obey social taboos also selects hard against genuinely novel thinkers. But we don't need to take every idea a person has on board in order to get some value from them - we should rule thinkers in, not out.

Secondly, a point I made in this tweet: taboo topics tend to end up expanding, for structural reasons (you can easily appeal to taboos to win arguments). So over time it becomes more and more costly to quarantine specific topics.

Thirdly, it selects against people who are principled in defense of truth-seeking. My sense is that the people who organized Manifest are being very principled, and would also be willing to host left-wing people who have potentially-upsetting views. For example, there's been a lot of antisemitism from prominent left-wing thinkers lately. If one of them wanted to attend Manifest, I think it would be reasonable for Jews to be upset. But I also expect that they'd be treated pretty similarly to Hanania (e.g. allowed to come and host sessions, name used in promotional materials). I'm curious what critics of Manifest think should be done in these cases.

To be clear, I'm not saying all events should take a stance like Manifest's. I'm just saying that I strongly support their right to do so.

Eh, I personally think of some things in the top 10 as "nowhere near" the most important issues, because of how heavy-tailed cause prioritization tends to be.
