Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com
Ah, gotcha. Yep, that's a fair point, and worth me being more careful about in the future.
I do think we differ a bit on how disagreeable we think advocacy should be, though. For example, I recently retweeted this criticism of Abundance, which basically says that the authors over-optimized for the message to land well with the people who hear it.
And in general I think it's worth losing a bunch of listeners in order to convey things more deeply to the ones who remain (because my own models of movement failure have been informed by environmentalism etc., and it's hard to talk around them).
But in this particular case, yeah, probably a bit of an own goal to include the environmentalism stuff so strongly in an AI talk.
him being one of the few safetyists on the political right to not have capitulated to accelerationism-because-of-China (as most recently even Elon did).
Thanks for noticing this. I have a blog post coming out soon criticizing this exact capitulation.
every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs of all of it
I am torn between writing more about politics to clarify, and writing less about politics to focus on other stuff. I think I will compromise by trying to write about political dynamics more timelessly (e.g. as I did in this post, though I got a bit more object-level in the follow-up post).
I worry that your bounties are mostly just you paying people to say things you already believe about those topics
This is a fair complaint and roughly the reason I haven't put out the actual bounties yet—because I'm worried that they're a bit too skewed. I'm planning to think through this more carefully before I do; okay to DM you some questions?
I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already
It is not true that all people with these sorts of concerns care only about private power and not the state. Dislike of Palantir's nat sec ties is a big theme for a lot of these people, and many of them don't like the nat sec-y bits of the state very much either.
I definitely agree with you with regard to corporate power (and see dislike of Palantir as an extension of that). But a huge part of the conflict driving the last election was "insiders" versus "outsiders"—to the extent that even historically Republican insiders like the Cheneys backed Harris. And it's hard for insiders to effectively oppose the growth of state power. For instance, the "govt insider" AI governance people I talk to tend to be the ones most strongly on the "AI risk as anarchy" side of the divide, and I take them as indicative of where other insiders will go once they take AI risk seriously.
But I take your point that the future is uncertain and I should be tracking the possibility of change here.
(This is not a defense of the current administration; it is very unclear whether they are actually effectively opposing the growth of state power, seizing it for themselves, or just flailing.)
Thanks for the feedback!
FWIW a bunch of the polemical elements were deliberate. My sense is something like: "All of these points are kinda well-known, but somehow people don't... join the dots together? Like, they think of each of them as unfortunate accidents, when they actually demonstrate that the movement itself is deeply broken."
There's a kind of viewpoint flip from being like "yeah I keep hearing about individual cases that sure seem bad but probably they'll do better next time" to "oh man, this is systemic". And I don't really know how to induce the viewpoint shift except by being kinda intense about it.
Upon reflection, I actually take this exchange to be an example of what I'm trying to address. Like, I gave a talk that was, according to you, "so extreme that it is hard to take seriously", and your three criticisms were:
I imagine you have better criticisms to make, but ultimately (as you mention) we do agree on the core point, and so in some sense the message I'm getting is "yeah, listen, environmentalism has messed up a bunch of stuff really badly, but you're not allowed to be mad about it".
And I basically just disagree with that. I do think being mad about it (or maybe "outraged" is a better term) will have some negative effects on my personal epistemics (which I'm trying carefully to manage). But given the scale of the harms caused, this level of criticism seems like an acceptable and proportional discursive move. (Though note that I'd have done things differently if I felt like criticism that severe was already common within the political bubble of my audience—I think outrage is much worse when it bandwagons.)
EDIT: what do you mean by "how to get broad engagement on this"? Like, you don't see how this could be interesting to a wider audience? You don't know how to engage with it yourself? Something else?
Thanks for sharing this, it does seem good to have transparency into this stuff.
My gut reaction was "huh, I'm surprised by how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).
To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)
My point is not that the current EA forum would censor topics that were in fact important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections. Analogously, if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of development aid (controversial: you're attacking powerful organizations!), transhumanism (controversial, according to the people who say it's basically eugenics), etc.
Re "conversations can be had in more sensitive ways", I mostly disagree, because of the considerations laid out here: the people who are good at discussing topics sensitively are mostly not the ones who are good at coming up with important novel ideas.
For example, it seems plausible that genetic engineering for human intelligence enhancement is an important and highly neglected intervention. But you had to be pretty disagreeable to bring it into the public conversation a few years ago (I think it's now a bit more mainstream).
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it's cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).
Picturing some responses you might give to this:
But EA as a movement is interested in things like:
So this sort of debate does seem pretty relevant.
I think EA would've broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded).
The important point is that we didn't know in advance which kinds of discomfort were of crucial importance. The relevant baseline here is not early EAs moderating ourselves; it's something like "the rest of academic philosophy/society at large moderating EA", which seems much more likely to have stifled early EA's ability to identify important issues and interventions.
(I also think we've ended up at some of the wrong points on some of these issues, but that's a longer debate.)
No central place for all the sources, but the one you asked about is: https://www.sebjenseb.net/p/how-profitable-is-embryo-selection