Arepo
5,358 karma · Joined
Participation: 1
Sequences (4): EA advertisements · Courting Virgo · EA Gather Town · Improving EA tech work
Comments: 774 · Topic contributions: 18
Answer by Arepo

Most of these aren't so much well-formed questions as research/methodological issues I'd like to see more focus on:

  • Operationalisations of AI safety that don't exacerbate geopolitical tensions with China - or ideally that actively seek ways to collaborate with China on reducing the major risks.
  • Ways to materially incentivise good work and disincentivise bad work within nonprofit organisations, especially effectiveness-minded organisations
  • Looking for ways to do data-driven analyses of political work, especially advocacy; correct me if I'm wrong, but recommendations in the EA space for political advocacy seem to necessarily boil down to a lot of gut-instincting about whether someone having successfully executed Project A means their work on Project B has high expected value
  • Research into the difficulty of becoming a successful civilisation after recovery from civilisational collapse (I wrote more about this here)
  • How much scope is there for more work or more funding in the nuclear safety space, and what is its current state? Last I heard, it had lost a bunch of funding, such that highly skilled/experienced diplomats in the space were having to find unrelated jobs. Is that still true? 
Should our EA residential program prioritize structured programming or open-ended residencies?

70% agree

You can always host structured programs, perhaps on a regular cycle, but doing so to the exclusion of open-ended residencies seems to be giving up much of the counterfactual value the hotel provided. It seems like a strong overcommitment to a concern about AI doom in the next low-single-digit years, which remains (rightly IMO) a niche belief even in the EA world, despite heavy selection within the community for it.

Having said that, to some degree it sounds like you'll need to follow the funding and prioritise keeping operations running. If that funding is likely to be conditional on a short-term AI safety focus, then you can always shift focus if the world doesn't end in 2027 - though I would strive to avoid being locked long-term into that particular view.

[ETA] I'm not sure the poll is going to give you very meaningful results. I'm at approximately the opposite end of it from @Chris Leong, but his answer sounds largely consistent with mine, just with a different emotional focus.

Thanks for the extensive reply! Thoughts in order:

I would also note that #3 could be much worse than #2 if #3 entails spreading wild animal suffering.

I think this is fair, though if we're not fixing that issue then it seems problematic for any pro-longtermism view, since it implies the ideal outcome is probably destroying the biosphere. Fwiw I also find it hard to imagine humans populating the universe with anything resembling 'wild animals', given the level of control we'd have in such scenarios, and our incentives to exert it. That's not to say we couldn't wind up with something much worse, though (planetwide factory farms, or some digital fear-driven economy adjacent to Hanson's Age of Em).

I'm having a hard time wrapping my head around what the "1 unit of extinction" equation is supposed to represent.

It's whatever the cost of extinction today would be, in expected future value. That cost can be negative if wild animal suffering proliferates, and some trajectory changes could have a negative cost of more than 1 UoE if they make the potential future more than twice as good - and vice versa, a positive cost of more than 1 UoE if they flip the future's expected value from positive to negative.

But in most cases I think its use is to describe non-extinction catastrophes as having a cost C such that 0 < C < 1 UoE.
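To put the same thing slightly more formally (this is my own sketch of the definition, so take the exact form loosely, and assume the expected value of the future is positive): let $V$ be the expected value of the future as things stand, and $V_X$ the expected value of the future conditional on event $X$. Then

$$C(X) = \frac{V - V_X}{V} \ \text{UoE}$$

Extinction today sets $V_X = 0$, so $C = 1$ UoE; a catastrophe we'd probably recover from has $0 < C < 1$; a trajectory change that more than doubles $V$ has $C < -1$; and one that flips the future's expectation from positive to negative has $C > 1$.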

the parable of the apple tree is more about P(recovery) than it is about P(flourishing|recovery)

Good point. I might write a v2 of this essay at some stage, and I'll try and think of a way to fix that if so.

"Resources get used up, so getting back to a level of technology the 2nd time is harder than the 1st time."
...
"A higher probability of catastrophe means there's a higher chance that civilization keeps getting set back by catastrophes without ever expanding to the stars."

I'm not sure I follow your confusion here, unless it's a restatement of what you wrote in the previous bullet. The latter statement, if I understand it correctly, is closer to my primary thesis. The first statement could be true if:

a) Recovery is hard; or

b) Developing technology beyond 'recovery' is hard

I don't have a strong view on a), except that it worries me that so many people who've looked into it think it could be very hard, yet x-riskers still seem to write it off as trivial on long timelines without much argument.

b) is roughly a subset of my thesis, though one could believe that the main increase in friction would only come once society runs out of technological information left over from previous civilisations.

I'm not sure if I'm clearing anything up here...

"we might still have a greater expected loss of value from those catastrophes" - This seems unlikely to me, but I'd like to see some explicit modeling.

So would I, though modelling it sensibly is extremely hard. My previous sequence's model was too simple to capture this question, despite probably being too complicated for what most people would consider practical use. To answer the question of comparative value loss, you'd need to look at, at minimum:

  • Risk per year of non-AI catastrophes of various magnitudes
  • Difficulty of recovery from other catastrophes
  • Difficulty of flourishing given recovery from other catastrophes
  • Risk per year of AI catastrophes of various magnitudes
  • Effect of AI-catastrophe risk reduction on other catastrophes? E.g. does benign AI basically lock in a secure future, or would we retain the capacity and willingness to launch powerful weapons at each other?
  • How likely is it that the AI outcome is largely predetermined, such that developing benign AI once would be strong evidence that, if society subsequently collapsed and developed it again, it would be benign again?
  • The long-term nature of AI catastrophic risk. Is it a one-and-done problem if it goes well? Or does making a non-omnicidal AI just give us some breathing space until we create its successor, at which point we have to solve the problem all over again?
  • Effect of other catastrophe risk reduction on AI-catastrophe. E.g. does reducing global nuclear arsenals meaningfully reduce the risk that AI goes horribly wrong by accident? Or do we think most of the threat is from AI that deliberatively plans our destruction, and is smart enough not to need existing weaponry?
  • The long-term moral status of AI. Is a world where it replaces us as good or better than a world where we stick around on reasonable value systems?
  • Expected changes to human-descendant values given flourishing after other catastrophes

My old model didn't have much to say on any beyond the first three of these considerations.

Though if we return to the much simpler model and handwave a bit: if we suppose that annual non-extinction catastrophic risk is between 1 and 2%, then the risk over a 10-20-year period is roughly between 20 and 35%. If we also suppose that the chances of flourishing after collapse drop by 10 or more percentage points, that puts it in the realm of 'a substantially bigger threat than the more conservative AI x-riskers consider AI to be, but substantially smaller than the most pessimistic views of AI x-risk'.

It could be somewhat more important if the chances of flourishing after collapse drop by substantially more than that (as I think they do), and much more important if we could reduce catastrophic risk in a way that persists beyond the 10-20-year period (e.g. by moving towards stable global governance, or at least substantially reducing nuclear arsenals).
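For concreteness, here's a rough back-of-the-envelope script for the handwave above; the parameter values are purely illustrative, not estimates I'd defend:

```python
# Back-of-the-envelope version of the handwave above; all numbers are illustrative.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Chance of at least one catastrophe over the period, assuming independent years."""
    return 1 - (1 - annual_risk) ** years

flourishing_drop = 0.10  # suppose collapse cuts P(flourishing) by ~10 percentage points

for annual_risk in (0.01, 0.02):
    for years in (10, 20):
        p_catastrophe = cumulative_risk(annual_risk, years)
        # Expected loss in units of extinction (UoE), treating 'flourishing' as
        # carrying essentially all of the future's expected value.
        expected_loss = p_catastrophe * flourishing_drop
        print(f"{annual_risk:.0%}/yr over {years} years: "
              f"P(catastrophe) ~ {p_catastrophe:.0%}, expected loss ~ {expected_loss:.3f} UoE")
```

On these toy numbers the 20-year figures come out around 18-33% cumulative risk (i.e. roughly the 20-35% ballpark above), with a corresponding expected loss of roughly 0.02-0.03 UoE.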

Very helpful, thanks! A couple of thoughts:

  • EA grantmaking appears on a steady downward trend since 2022 / FTX.


It looks like this is driven entirely by the reduction in GiveWell/global health and development funding, and that the other fields have actually been stable or even expanding.

Also, in an ideal world we'd see funding from Longview and Founders Pledge. I also gather there's a new influx of money into the effective animal welfare space from some other funder, though I don't know their name.

Kudos to whoever wrote these summaries. They give a great sense of the contents, and at least with mine they capture the essence much more succinctly than I could!

Thanks! The online courses page describes itself as a collection of 'some of the best courses'. Could you say more about what made you pick these? There are dozens or hundreds of online courses these days (especially on general subjects like data analysis), so the challenge for anyone pursuing them is often one of filtering convincingly.

Good luck with this! One minor irritation with the structure of the post is that I had to read halfway down to find out which 'nation' it referred to. I'd suggest editing the title to 'US-wide', so people can see at a glance whether it's relevant to them.

I remember him discussing person-affecting views in Reasons and Persons, but IIRC (though it's been a very long time since I read it) he doesn't particularly advocate for them. I use the phrase mainly because of the quoted passage, which appears (again IIRC) in both The Precipice and What We Owe the Future, as well as possibly some of Bostrom's earlier writing.

I think you could equally give Bostrom the title though, for writing to my knowledge the first whole paper on the subject.

Cool to see someone trying to think objectively about this. Inspired by this post, I had a quick look at the scores in the World Happiness Report to compare China to its ethnic cousins, and while there are many reasons to take this with a grain of salt, China does... OK. On 'life evaluation', which appears to be the all-things-considered metric (I didn't read the methodology, so correct me if I'm wrong), some key scores:

Taiwan: 6.669

Philippines: 6.107

South Korea: 6.038

Malaysia: 5.955

China: 5.921

Mongolia: 5.833

Indonesia: 5.617

Overall it's ranked 68th of the 147 listed countries, and it outscores several (though I think a minority of) LMIC democratic nations. One could attribute some of its distance from the top simply to lower GDP per capita, though one could also argue (as I'm sure many do) that its lower GDP per capita is itself a result of CCP control (though if that's true and will remain true, it sits awkwardly with the idea that China has a realistic chance of winning an AI arms race and consequently dominating the global economy).

One view I wish people would take more seriously is the possibility that it can be true both that

  • the Chinese government is net worse for welfare standards than most liberal democracies; and
  • the expected harms of ratcheting up global tensions to avoid it winning an AI arms race are nonetheless much higher than the expected benefits.