Ozzie Gooen

10053 karma · Joined · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences: 1
Ambitious Altruistic Software Efforts

Comments: 914
Topic contributions: 4

Kudos for bringing this up, I think it's an important area!

Do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?

There's a lot to this question.

I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting X and working with people to improve it.

You'll see some discussions of "growing the tent" - this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles". 

One question here is something like, "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" Arguably, it would take more dedicated effort just to highlight this question properly. I think we have relatively little work in this area now, compared to more object-level work.

Another factor seems to have been that FTX stained the reputation of EA and hurt CEA. After that, there was a period with less attention on EA as a whole and more on specific causes like AI safety.

In terms of "What should the EA community do", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has, in ways that aren't very aligned with these groups. 

All that said, I think it's easy for us to generally be positive towards people who apply the principles in ways that don't match the specific current conclusions.

I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.

Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."

I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint: "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue."

Happy to see more work here.

Minor question - but are you familiar with any experiments that might show which voices are the most understandable, especially at high speeds? It seems to me like some voices are much better than others at 2x+ speeds, so I assume it should be possible to optimize for this. This is probably the main thing I personally care about.

I imagine a longer analysis would include factors like:
1. If intense AI arrives in 10 to 50 years, it could do the relevant inventing afterwards.
2. I expect that a very narrow slice of the population will be responsible for scientific innovations here, if humans do it. Maybe instead of considering the policies [increase the population everywhere] or [decrease the population everywhere], we could consider more nuanced policies. Relatedly, if one wanted to help with animal welfare, I'd expect that [pro-natalism] would be an incredibly ineffective way of doing so, for the benefit of eventual scientific progress on animals.

I can't seem to find much EA discussion of [genetic modification of chickens to lessen suffering]. This naively seems like a promising area to me. I imagine others have investigated it and decided against further work; I'm curious why.

Yeah, I think I'd classify that as a different thing. I see alignment typically as a "mistake" issue, rather than as a "misuse" issue. I think others here often use the phrase similarly.

I think that the phrase ["unaligned" AI] is too vague for a lot of safety research work.

I prefer keywords like:
- scheming 
- naive
- deceptive
- overconfident
- uncooperative

I'm happy that the phrase "scheming" seems to have become popular recently; that's an issue that seems fairly specific to me. I have a much easier time imagining preventing an AI from successfully (intentionally) scheming than I do preventing it from being "unaligned."

That sounds exciting, thanks for the update. Good luck with team building and grantmaking!

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP. 

For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post

While a bunch of these salaries are on the high side, not all of them are.
