I am the director of Tlön, a small org that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.
After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.
Every post, comment, or wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
I’d be interested in seeing you elaborate on the comments you make here, in response to Rob’s question, suggesting that some control methods, such as AI boxing, may be “a bit of a dick move”.
An additional reason EAs may not be playing the attention arms race is that they may be persuaded by the fidelity model of spreading ideas.
Nope, it was Yudkowsky in a Facebook group about AI x-risk around 2015 or 2016. He specifically said he didn't think deep learning was the royal road to AGI.
Would you be able to locate the post in question? If Yudkowsky did indeed say that, I would agree that it would constitute a relevant negative update about his overall prediction track record.
Gotcha, so to be clear, you're saying: it would be better for the current post to have the relevant quotes from the references, but it would be even better to have summaries of the explanations?
Yes, that’s what I’m saying.
(I tend to think this is a topic where summaries are especially likely to lose some important nuance, but not confident.)
I defer to you, since I am not familiar with this topic. My above assessment was “on priors”.
Sorry, my comment wasn’t addressed to you in particular. It should probably have been a top-level comment; I posted it as a reply only because your comment was an example (among many) of the phenomenon I was describing. I also oppose mass surveillance, and it makes zero difference to me whether or not the people surveilled comprise the tiny fraction of the world population that happens to be American.
I just find it frustrating that the critical comments directed at Anthropic often fail to grapple with the complexity of the situation and the hard tradeoffs they face.