Superforecaster, former philosophy PhD, Giving What We Can member since 2012. Currently trying to get into AI governance.
It seems to me like the really important thing is interpreting what "METR 80% time horizon goes to a year", or whatever endpoint you have in mind, actually means. It matters if that takes longer than AI2027 predicts, obviously, but it seems more crux-y to me whether getting to that point means transformative AI is near or not, since the difference between, say, "3 years and 7 years", while important, seems less important to me than the difference between "definitely in 7 years" and "who knows, could still be 20+ years away".
I think part of the issue here probably is that EAs mostly don't think biodiversity is good in itself, and instead believe that only the well-being of humans and animals is good, and that the impact on well-being of promoting biodiversity is complex, uncertain, and probably varies a lot with how and where biodiversity is being promoted. It's hard to direct biodiversity funding if you don't clearly agree with raising biodiversity as a goal.
Also, it's certainly not common sense that it is always better to have fewer beings with higher welfare. It's not common sense that a world with 10 incredibly happy people is better than one with a billion only very slightly less happy people.
And not every theory that avoids the repugnant conclusion delivers this result, either.
Those are fair points in themselves, but I don't think "fewer deer is fine, so long as they have a higher standard of living" has anything like the same commonsense standing as "we should protect people from malaria with insecticide even if the insecticide hurts insects".
And it's not clear to me that assuming fewer deer is fine in itself, even if their lives are good, is avoiding taking a stance on the intractable philosophical debate, rather than just implicitly taking one side of it.
"A potentially lower-risk example might be the warble fly (Hypoderma), which burrows under the skin of cattle and deer, causing great discomfort, yet rarely kills its host. The warble fly is small in biomass, host-specific (so doesn't greatly affect other species), and has more limited interactions beyond its host-parasite relationship. Although it does reduce the grazing and reproductive activity of hosts, these effects are comparatively minor and could be offset with non-invasive fertility control"
Remember that it's not uncontroversial that it is preferable to have fewer animals at a higher welfare level, rather than more animals at a lower welfare level. Where welfare is net positive either way, some population ethicists are going to say that having more animals at a lower level of welfare can be better than fewer at a higher level. See for example: https://www.cambridge.org/core/journals/utilitas/article/what-should-we-agree-on-about-the-repugnant-conclusion/EB52C686BAFEF490CE37043A0A3DD075 But also, even on critical-level views designed to BLOCK the repugnant conclusion, it can sometimes be better to have more welfare subjects at a lower but still positive level of welfare than fewer subjects at a higher level of welfare. So maybe it's better to have more deer, even when some of them have warble fly, than to have fewer deer, none of whom have warble fly.
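To illustrate the critical-level point with a toy calculation (the specific view and the numbers here are my own illustrative assumptions, not anything from the linked paper): on a critical-level total view with critical level $c$, overall value is

$$V = \sum_i (w_i - c),$$

so $n$ individuals each at welfare $w$ contribute $n(w - c)$. With $c = 1$, a herd of 1,000 deer at welfare 3 scores $1{,}000 \times (3 - 1) = 2{,}000$, while a herd of 100 deer at welfare 10 scores $100 \times (10 - 1) = 900$, so the larger, lower-welfare herd comes out better. Yet the same view blocks the repugnant conclusion, because lives with welfare between 0 and $c$ (barely worth living) count negatively.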
And it's not so much that I think I have zero evidence: I keep up with progress in AI to some degree, I have some idea of what the remaining gaps are to general intelligence, I've seen the speed at which capabilities have improved in recent years, etc. It's that how to evaluate that evidence is not obvious, so simply presenting a skeptic with it probably won't move them, especially as the skeptic (in this case you) probably already has most of the evidence I have anyway. If it were just some random person who had never heard of AI asking why I thought the chance of mildly-over-human-level AI in 10 years was not far under 1%, there are things I could say. It's just that you probably already know those things, so there's not much point in my saying them to you.
Yeah, I agree that in some sense saying "we should instantly reject a theory that recommends WD" doesn't combine super-well with belief in classical U, for the reasons you give. That's compatible with classical U's problems with WD being less bad than NU's problems with it, is all I'm saying.
Somewhat surprised to hear that people can successfully pull that off.