
Arepo

Answer by Arepo

Most of these aren't so much well-formed questions as research/methodological issues I would like to see more focus on:

  • Operationalisations of AI safety that don't exacerbate geopolitical tensions with China - or ideally that actively seek ways to collaborate with China on reducing the major risks.
  • Ways to materially incentivise good work and disincentivise bad work within nonprofit organisations, especially effectiveness-minded organisations
  • Looking for ways to do data-driven analyses of political work, especially advocacy. Correct me if I'm wrong, but recommendations in the EA space for political advocacy seem to necessarily boil down to a lot of gut-instincting about whether someone having successfully executed Project A gives their work on Project B high expected value
  • Research into the difficulty of becoming a successful civilisation after recovery from civilisational collapse (I wrote more about this here)
  • How much scope is there for more work or more funding in the nuclear safety space, and what is its current state? Last I heard, it had lost a bunch of funding, such that highly skilled/experienced diplomats in the space were having to find unrelated jobs. Is that still true? 

Thanks! The online courses page describes itself as a collection of 'some of the best courses'. Could you say more about what made you pick these? There are dozens or hundreds of online courses these days (especially on general subjects like data analysis), so the challenge is often filtering them convincingly.

Good luck with this! One minor irritation with the structure of the post is that I had to read halfway down to find out which 'nation' it referred to. I suggest editing the title to 'US-wide', so people can see at a glance whether it's relevant to them.

I remember him discussing person-affecting views in Reasons and Persons, but IIRC (though it's been a very long time since I read it) he doesn't particularly advocate for them. I use the phrase mainly because of the quoted passage, which appears (again IIRC) in both The Precipice and What We Owe the Future, as well as possibly some of Bostrom's earlier writing.

I think you could equally give Bostrom the title though, for writing to my knowledge the first whole paper on the subject.

Cool to see someone trying to think objectively about this. Inspired by this post, I had a quick look at the scores in the World Happiness Report to compare China to its ethnic cousins, and while there are many reasons to take this with a grain of salt, China does... ok. On 'life evaluation', which appears to be the all-things-considered metric (I didn't read the methodology, so correct me if I'm wrong), some key scores:

Taiwan: 6.669

Philippines: 6.107

South Korea: 6.038

Malaysia: 5.955

China: 5.921

Mongolia: 5.833

Indonesia: 5.617

Overall China is ranked 68th of 147 listed countries, and outscores several (though I think a minority of) LMIC democratic nations. One could attribute some of its distance from the top simply to lower GDP per capita, though one could also argue (as I'm sure many do) that its lower GDP per capita is itself a result of CCP control (though if that's true and is going to continue to be true, it's arguably incompatible with the idea that China has a realistic chance of winning an AI arms race and consequently dominating the global economy).

One view I wish people would take more seriously is that it can be true both that

  • the Chinese government is net worse for welfare standards than most liberal democracies; and
  • the expected harms of ratcheting up global tensions to prevent it from winning an AI arms race are nonetheless much higher than the expected benefits

Thanks :)

I don't think I covered any specific relationship between factors in that essay (except those that were formally modelled in it), where I was mainly trying to lay out a framework that would even allow you to ask the question. This essay is the first time I've spent meaningful effort on trying to answer it.

I think it's probably ok to treat the factors as a priori independent, since ultimately you have to run with your own priors. And for the sake of informing prioritisation decisions, you can decide case by case how much you imagine your counterfactual action changing each factor.

You don't need a very high credence in e.g. AI x-risk for it to be the most likely reason you and your family die

I think this is misleading, especially if you agree with the classic notion of x-risk as excluding events from which recovery is possible. My credence distribution over event fatality rates is concentrated at lower fatality rates, so I would expect far more deaths under the curve between 10% and 99% fatality than between 99% and 100%, and probably more area to the left even under a substantially more even partition of outcomes.
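To make the arithmetic behind this concrete, here's a toy sketch. All the numbers in it (band widths, credences, population) are hypothetical illustrations I've chosen, not estimates from the comment; the point is only that even a modest credence concentrated in the severe-but-recoverable band can dominate the expected-death calculation.

```python
# Toy model: credence over event fatality rates, with most mass at
# lower fatality rates. All figures are made up for illustration.
fatality_bands = [
    # (lower fatality rate, upper fatality rate, credence in this band)
    (0.10, 0.99, 0.30),  # severe but potentially recoverable catastrophes
    (0.99, 1.00, 0.02),  # near-total or total extinction events
]

world_pop = 8e9  # rough current world population

for lo, hi, credence in fatality_bands:
    midpoint = (lo + hi) / 2  # crude stand-in for the band's mean fatality rate
    expected_deaths = credence * midpoint * world_pop
    print(f"{lo:.0%}-{hi:.0%} band: {expected_deaths:.2e} expected deaths")
```

Under these made-up numbers the 10-99% band carries roughly 1.3e9 expected deaths against roughly 1.6e8 for the 99-100% band, i.e. most of the expected mortality sits below the extinction threshold.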

I fear we have yet to truly refute Robin Hanson’s claim that EA is primarily a youth movement.

FWIW my impression is that CEA have spent significantly more effort on recruiting people from universities than any other comparable subset of the population.

Somehow, despite 'Goodharting' now being a standard phrase, 'Badharting' is completely unrecognised by Google.

I suggest the following intuitive meaning: failing to reward a desired achievement because the proxy measure you used to represent it wasn't satisfied:

'No bonus for the staff this year: we didn't reach our 10% sales units growth target.'

'But we raised profits by 30% by selling more expensive products, you Badharting assholes!'
