titotal

Computational Physicist
5759 karma · Joined Jul 2022

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
523

I want to remind people that there are severe downsides to hosting race and eugenics discussions like the ones linked on the EA forum.

1. It makes the place uncomfortable for minorities and people concerned about racism, which could someday trigger a death spiral where non-racists leave, making the place more racist on average, causing more non-racists to leave, and so on.

2. It creates an acrimonious atmosphere in general, by starting heated discussions about deeply held personal topics. 

3. It spreads ideas that could potentially cause harm, and lead uninformed people down racist rabbitholes by linking to biased racist sources. 

4. It creates bad PR for EA in general, and provides easy ammunition for people who want to attack EA.

5. In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.

6. In my opinion, most forms of eugenics (and especially anything involving race) are extremely unlikely to be an actually effective cause area in the near future, given the backlash, unclear benefit, potential to create mass strife and inequality, etc.

Now, this has to be balanced against a desire to entertain unusual ideas and to protect freedom of speech. But these views can still be discussed, debated, and refuted elsewhere. It seems like a clearly foolish move to host them on this forum. If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

I think any AI that is capable of wiping out humanity on Earth is likely to be capable of wiping us out on all the planets in our solar system. Earth is far more habitable than those other planets, so settlements elsewhere would be correspondingly more fragile and easier to take out. I don't think the distance would be much of an advantage either: a present-day spacecraft takes only about ten years to reach Pluto, so the rest of the solar system is not very far away.

I think your point about motivation is important, but it also applies within Earth. Why would an AI bother to kill off isolated Sentinelese islanders? A lot of the answers to that question (like needing to turn all available resources into computing power) could also motivate it to attack an isolated Pluto colony. So if you do accept that AI is an existential threat on one planet, space settlement might not reduce the threat by very much on the motivation front.

I want to encourage more papers like this and more efforts to lay an entire argument for x-risk out.

That being said, the arguments are fairly unconvincing. For example, the argument for premise 1 completely skips the step where you sketch out an actual path for AI to disempower humanity if we don't voluntarily give up. "AI will be very capable" is not the same thing as "AI will be capable of conquering all of humanity with 100% certainty"; you need a joining argument in the middle.

Conferences are pretty great. In particular, chatting with people in person gives you a way of finding the information that the journal publication system doesn't select for, such as the things someone tried that didn't work out, or didn't end up publishable.

I like encouraging outsiders to go to conferences, but I would strongly caveat that you should be an outsider who at least has some related expertise. If you go to a chemistry conference with no knowledge of chemistry (or of overlapping fields like physics and materials science), the vast majority of talks and posters will be incomprehensible to you, and you won't know enough to ask insightful questions. Even for an experienced insider, talks from a different subfield can be completely useless because you don't have the background knowledge needed to make sense of them.

I find the most interesting/valuable talks and posters are the ones that are in my field and overlap a bit with my research, but head off in a different direction, so I'm exposed to genuinely new ideas while still having the background to engage.

Interesting! I'm glad to see engagement with Thorstadt's work; this is an area where I've found myself less convinced.

Interstellar colonisation is insanely difficult and resource-intensive, so I expect any widespread dispersal of humanity beyond our solar system to be extremely far off in the future. If you think existential risk is high, there may be only an extremely small chance we survive to that point.

I'm also not sure about your point on "misaligned AIs". Firstly, this should be "extinctionist AIs" or something, as it seems very unlikely that all misaligned AIs would actively want to hunt down tiny remnants of humanity. But if they were out to kill us, why would they need a receiver? It's far easier to send an automated killer probe long distances than to send a human colony, so it seems they'd be able to hunt down colonies physically if they needed to.

If you don't think misalignment automatically equals extinction, then the argument doesn't work. The neutral world is now competing with "neutral world where the software fucks up and kills people sometimes", which seems to be worse. 

In the '90s and 2000s, many people, such as Eric Drexler, were extremely worried about nanotechnology and viewed it as an existential threat through the "gray goo" scenario. Yudkowsky predicted Drexler-style nanotech would arrive by 2010, using very similar language to what he is currently saying about AGI.

It turned out they were all absurdly overoptimistic about how soon the technology would arrive, and the whole Drexlerite nanotech project flamed out by the end of the 2000s and has barely progressed since. I think a similar dynamic playing out with AGI is less likely, but still very plausible.

A lot of people here donate to givedirectly.org, with the philosophy that we should let the world's poorest decide where money needs to be spent to improve their lives. Grassroots projects like this seem like a natural extension of that idea, where a community as a whole decides where it needs resources in order to uplift everyone. I'm no GHD expert, and I would encourage an in-depth analysis, but it's at least plausible that this could be more effective than GiveDirectly, as this project is too large to be paid for under that model.

Grassroots organising seems like a good idea in general: cutting most of the Westerners out of the process means the money goes into the third-world economy. We could also see knock-on effects: maybe altruistic philosophy becomes more popular throughout Uganda, making people more receptive to, say, animal rights later on in the country's development.

I think more cost-effectiveness estimates are a good idea, but EA has funded far more speculative and dubious projects in recent memory. I would encourage EA funders to give the proposal a fair shot.

I'm fine with CEAs; my problem is that this one seems to have been trotted out selectively in order to dismiss Anthony's proposal in particular, even though EA discusses and sometimes funds proposals that make the supposed "16 extra deaths" look like peanuts by comparison.

The Wytham Abbey project has been sold, so we know its overall impact was to throw something like a million pounds down the drain (once you factor in stamp duty, etc.). I think it's deeply unfair to frame Anthony's proposal as possibly letting 16 people die, while not doing the same for Wytham, which (in this framing) definitively let 180 people die.

Also, the cost-effectiveness analysis hasn't even been done yet! I find it kind of suspect that this is getting such a hostile response when EA insiders propose ineffective projects all the time with much less pushback. There are also other factors here worth considering, like helping EA build links with grassroots orgs, indirectly spreading EA ideas to organisers in the third world, and so on. EA spends plenty of money on "community building"; would this not count?

The HPMOR thing is a side note, but I vehemently disagree with your analysis and the initial grant, because the counterfactual in this case is not doing nothing: it's sending them a link to the website where HPMOR is hosted for free for everybody, which costs nothing. Plus, HPMOR only tangentially advocates for EA causes anyway! A huge number of people have read HPMOR, and only a small proportion have gone on to become EA members. Your numbers are absurdly overoptimistic.

Okay, that makes a lot more sense, thank you. 

I think the talk of transition risks and sail metaphors isn't actually that relevant to your argument here? Wouldn't a gradual and continuous decrease in state risk, like the Kuznets curve shown in Thorstadt's paper here, have the same effect?
