titotal

Computational Physicist
6038 karma · Joined Jul 2022

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
529

Thank you for writing this post, I know how stressful writing something like this can be and I hope you give yourself a break!

I especially agree with your points about the lack of empathy. Empathy is the ability to understand and care about how other people are hurt or upset by your actions, no matter their background. This is an important part of moral reasoning, and is completely compatible with logical reasoning. One should not casually ignore harms in favour of utilitarian pursuits; that's how we got SBF (and, like, Stalinism). And if you do understand the harms, and realize that you have to take the action anyway, you should at least show the harmed parties that you understand why they are upset.

The OP was willing to write up their experience and explain why they left, but I wonder how many more people are leaving, fed up, in silence, not wanting to risk any backlash? How many only went to a few meetings but never went further because they sensed a toxic atmosphere? The costs of this kind of atmosphere are often hidden from view. 

In my experience, this forum seems kinda hostile to attempts at humour (outside of April Fools' Day). This might be a contributing factor to the relatively low population here!

I was referring to the account that told Torres to be careful or "someone will break your kneecaps", the person obsessively tweeting attacks, the people impersonating Torres and tagging his ex-wife, etc. I can't rule out Torres faking some of this, but I think it's more plausible that the attacks are by real people who dislike Torres.

I would guess you are not behind those, and that Torres is wrongly attributing them to you (they seem different in character to the post here). However, since you appear to be posting from a pseudonymous/throwaway account that has only ever discussed this one topic, I have no way to be sure.

I am fairly annoyed at the lack of good faith being given here, given the subject matter. 

As someone who is broadly on Torres's side of the fence, I find Torres's antics as described here to be extremely annoying and unhelpful. I despise Boghossian et al.'s politics, but the behavior here was clearly unjustified, and a lot of the other things here look like either deliberate dishonesty or a severe lack of reading comprehension. I think this sort of behavior just makes things harder for people who genuinely want to criticize the real flaws and harms in EA thinking.

In fairness, you should probably link Torres's response to the article (from part 3 onwards, although it doesn't actually address many of the accusations). Torres's account of receiving harassment and threats of violence seems plausible to me, although we can never know for sure (another reason not to use sockpuppets and other underhanded tactics).

Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.


I think this is an indication that the EA community may have had a hard time seeing through tech hype. I don't think this is a good sign now that we're dealing with AI companies who are also motivated to hype and spin.

The linked idea is very obviously unworkable. I am unsurprised that Elon rejected it and that no similar thing has taken off. First, as usual, it could be done cheaper and easier without a blockchain. Second, Twitter would be giving people a second place to see their content where they don't see Twitter's ads, thereby shooting themselves in the foot financially for no reason. Third, while Facebook and Twitter could maybe cooperate here, there is no point in an interchange between other sites like TikTok and Twitter, as they are fundamentally different formats. Fourth, there's already a way for people to share tweets on other social media sites: it's called "hyperlinks" and "screenshots". Fifth, how do you delete your bad tweets that are ruining your life if they remain permanently on the blockchain?

I think jailtime counts as social sanction! 

I want to remind people that there are severe downsides of having these race and eugenics discussions like the ones linked on the EA forum.

1. It makes the place uncomfortable for minorities and people concerned about racism, which could someday trigger a death spiral where non-racists leave, making the place more racist on average, causing more non-racists to leave, and so on.

2. It creates an acrimonious atmosphere in general, by starting heated discussions about deeply held personal topics. 

3. It spreads ideas that could potentially cause harm, and lead uninformed people down racist rabbitholes by linking to biased racist sources. 

4. It creates bad PR for EA in general, and provides easy ammunition for people who want to attack EA.

5. In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.

6. In my opinion, most forms of eugenics (and especially anything involving race) are extremely unlikely to be an actually effective cause area in the near future, given the backlash, unclear benefits, and the potential to create mass strife and inequality.

Now, this has to be balanced against a desire to entertain unusual ideas and to protect freedom of speech. But these views can still be discussed, debated, and refuted elsewhere. It seems like a clearly foolish move to host them on this forum. If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

I think any AI that is capable of wiping out humanity on Earth is likely to be capable of wiping it out on all the other planets in our solar system. Earth is far more habitable than those other planets, so colonies elsewhere would be correspondingly fragile and easier to take out. I don't think the distance would be much of an advantage either: a current-day spacecraft takes only about 10 years to reach Pluto, so no colony would be meaningfully out of reach.

I think your point about motivation is important, but it also applies within Earth. Why would an AI bother to kill off isolated Sentinelese islanders? A lot of the answers to that question (like it needs to turn all available resources into computing power) could also motivate it to attack an isolated Pluto colony. So if you do accept that AI is an existential threat on one planet, space settlement might not reduce that threat by very much on the motivation front.

I want to encourage more papers like this and more efforts to lay an entire argument for x-risk out.

That being said, the arguments are fairly unconvincing. For example, the argument for premise 1 completely skips the step where you sketch out an actual path for AI to disempower humanity if we don't voluntarily give up power. "AI will be very capable" is not the same thing as "AI will be capable of 100% guaranteed conquering all of humanity"; you need a joining argument in the middle.

Conferences are pretty great. In particular chatting to people in person gives you a way of finding the information that isn't optimised for in the journal publication system, such as the things someone tried that didn't work out, or didn't end up publishable. 

I like encouraging outsiders to go to conferences, but I would strongly caveat that you should be an outsider who at least has some related expertise. If you go to a chemistry conference with no knowledge of chemistry (or overlapping fields like physics and materials science), the vast majority of talks and posters will be incomprehensible to you, and you won't know enough to ask insightful questions. Even for an experienced insider, talks from a different subfield can be completely useless because you don't have the necessary background knowledge to make sense of them.

I find the most interesting/valuable talks/posters are the ones that are in my field and share a bit with my research, but are off in a different direction, so I'm being exposed to very new ideas, but still have the background to engage.  
