
nonn

273 karma · Joined

Posts
1


Comments
31

I don't know what you mean? You can look at existing interventions that primarily help very young people (neonatal or childhood vitamin supplementation) vs. comparably effective interventions that target adults or older people (e.g. cash grants, schistosomiasis).

There are multiple GiveWell charities in both categories, so this is just saying you should weight toward the ones that target older folks by maybe a factor of 2x or more, vs. what GiveWell says (they assume the world won't change much).

Assuming two interventions are roughly equally effective in life-years saved, the intervention saving old lives must (necessarily) save more lives in the short run. E.g. saving 4 lives that each gain 10 life-years vs. saving 1 life that gains 40 life-years.
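
A minimal sketch of that arithmetic (illustrative only: the 20-year horizon, the 50% chance, and the helper function below are made-up assumptions for the example, not GiveWell figures):

```python
# Illustrative sketch only: made-up numbers, not GiveWell's estimates.
# Intervention A: saves 4 older adults, ~10 remaining life-years each.
# Intervention B: saves 1 child, ~40 remaining life-years.
# Both grant 40 life-years on a naive count.

HORIZON = 20      # assumed years until transformative AI plausibly upends things
P_UPENDED = 0.5   # assumed chance that life-years beyond the horizon aren't realized as planned

def expected_life_years(lives, years_per_life):
    near = min(years_per_life, HORIZON)     # years realized before the horizon
    far = max(years_per_life - HORIZON, 0)  # years realized after the horizon
    return lives * (near + (1 - P_UPENDED) * far)

a = expected_life_years(lives=4, years_per_life=10)  # 40.0 -- all value realized early
b = expected_life_years(lives=1, years_per_life=40)  # 30.0 -- later years get discounted
print(a, b, a / b)                                   # ratio ~1.33x in favor of intervention A
```

Under these made-up numbers the "older lives" intervention comes out ~1.3x ahead, and with a higher P_UPENDED the gap moves toward the ~2x weighting mentioned above.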

Some fraction of people who don't work on AI risk cite "wanting to have more certainty of impact" as their main reason. But I think many of them are running the same risk anyway: namely, that what they do won't matter because transformative AI will make their work irrelevant, or dramatically lower value.

This is especially obvious if they work on anything that primarily returns value after a number of years, e.g. building an academic career (or any career where most impact is realized later), working toward policy changes, some movement-building, etc.

But it also applies somewhat to things like nutrition or vaccination or even preventing deaths, where most value is realized later (by having better life outcomes, or living an extra 50 years). This category does still have certainty of impact; it's just that the amount of impact might be cut by whatever fraction of worlds are upended in some way by AI. And this might affect what they should prioritize, e.g. they should prefer saving old lives over young ones if the interventions are pretty close on naive effectiveness measures.

Feels like there's some line where your numbers are getting so tiny and speculative that many other considerations start dominating, like "are your numbers actually right?" E.g. I'd be pretty skeptical of many proposed ".000001% of huge number" interventions (especially skeptical on the .000001% side).

In practice, the line could be where "are your numbers actually right" starts becoming the dominant consideration. At that point, showing your numbers are plausible is the main challenge to overcome, and that's honestly where I suspect most people's anti-low-probability intuitions come from in the first place.
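
One toy way to formalize that (all numbers here are made up purely for illustration; `p_estimate_ok` is an assumed label, not anything standard):

```python
# Toy illustration: why "are your numbers actually right?" can dominate.
claimed_p = 1e-8       # the advertised ".000001%"-style probability of success
huge_value = 1e12      # the huge payoff claimed if it works
p_estimate_ok = 0.01   # assumed chance the tiny estimate is even roughly right;
                       # if it's badly wrong, treat the true probability as ~0

naive_ev = claimed_p * huge_value                     # 10000.0
adjusted_ev = p_estimate_ok * claimed_p * huge_value  # 100.0

# The adjusted figure is driven almost entirely by p_estimate_ok,
# so establishing that the estimate is plausible becomes the main challenge.
print(naive_ev, adjusted_ev)
```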

Very cool!

Random thought: you could include some of Yoshua Bengio's or Geoffrey Hinton's writings/talks on AI risk concerns in week 10 (& could include LeCun as a counterpoint to get all 3), since they're very well-cited academics & Turing Award winners for deep learning.

I haven't looked through their writings/talks to find the most directly relevant ones, but some examples: https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/ and https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/

My experience is that it's more that group leaders & other students in EA groups might reward poor epistemics in this way.

And that when people are being more casual, saying AI risk 'fits in', so people won't press for reasons as much in those contexts, but would push back if you said something unusual.

Agreed, my experience with senior EAs in the SF Bay was often the opposite: I was pressed to explain why I'm concerned about AI risk & to respond to various counterarguments.

No, though maybe you're using the word "intrinsically" differently? For the (majority) consequentialist part of my moral portfolio: the main intrinsic bad is suffering, and wellbeing (somewhat broader) is the main intrinsic good.

I think any argument about creating people/etc. is instrumental: will they or won't they increase wellbeing? New people can both experience suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future. This includes effects before they are born (e.g. on women's lives). TBH, given your above arguments, I'm confused about the focus on abortion: it seems like you should be just as opposed to people choosing not to have children, and focus on encouraging/supporting people having kids.

For now, I think the ~main thing that matters from a total-view longtermist perspective is making it through "the technological precipice", where permanent loss of sentient life/our values is somewhat likely, so other total-view longtermist arguments arguably flow through effects on this plus influence on our overall trajectory. Abortion access seems good for civilization's trajectory: women can have children when they want and don't have their lives & health derailed, more women involved in the development of powerful technology probably makes those fields more cautious/less rash, there are fewer 'unwanted children' [who probably have worse life outcomes], etc. So abortion access seems good.

Maybe related: in general when maximizing, I think it's probably best to find the most important 1-3 things, then focus on those. (E.g. for the temperature of my house, focus on the thermostat setting + the outside temperature + insulation quality, and ignore body heat & similar small things.)

I don't think near-term population is helpful for long-term population or wellbeing, e.g. >10,000 years from now. More likely a negative effect than a positive one imo, especially if the mechanism for trying to increase near-term population is to restrict abortion (this is not a random sample of lives!).

I also think it seems bad for the general civilization trajectory (partially norm-damaging, but mostly just direct effects on women & children), and probably bad for our ability to make investments in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion restriction is bad from a total-view longtermist perspective.

I guess I did mean aggregate in the 'total' well-being sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total well-being in the long run (or the short run, tbh).

For total-view longtermism, I think the most important things are roughly: civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world is lower-conflict, investments are made to improve resilience to large catastrophes, etc. Restricting abortion seems kinda bad for several of those things, and positive for none. So it seems like total-view longtermism, even ignoring all other reasons to think this, says abortion restriction is bad.

I guess part of this is a belief that in the long run, the number of morally valuable lives & total wellbeing (e.g. in 10 million years) are largely uncorrelated or anti-correlated with near-term world population. (Though I also think restricting abortion is one of the worst ways to go about increasing near-term population, even for those who do think near-term & very-long-term population are pretty positively correlated.)

"abortion is morally wrong is a direct logical extension of a longtermist view that highly values maximizing the number of people on the assumption that the average existing person's life will have positive value"

I'm a bit confused by this statement. Is a world where people don't have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me.

To be clear, I don't think it's worth discussing abortion at length, especially considering bruce's comment. But I really don't think the number of people currently existing says much about well-being in the very long run (it's arguably negatively correlated). And even if you wanted to increase near-term population, reducing access to abortion is a very bad way to do that, with lots of negative knock-on effects.
