I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
Kudos for bringing this up, I think it's an important area!
Do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?
There's a lot to this question.
I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting X and working with people to improve it.
You'll see some discussions of "growing the tent" - this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles".
One question here is something like, "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" Arguably, answering that would itself require more dedicated effort to really highlight. I think we just don't have much work in this area now, compared to more object-level work.
Another factor seems to have been that FTX stained the reputation of EA and hurt CEA - after that, there was a period where there seemed to be less attention on EA as a whole, and more on specific causes like AI safety.
In terms of "What should the EA community do", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has, in ways that aren't very aligned with these groups.
All that said, I think it's easy for us to generally be positive towards people who apply the principles in ways that don't match the specific current conclusions.
I personally am on the side that thinks the current conclusions are probably overconfident and missing some very important considerations.
Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."
I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint: "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-starter."
Happy to see more work here.
Minor question - but are you familiar with any experiments that might show which voices are the most understandable, especially at high speeds? It seems to me like some voices are much better than others at 2x+ speeds, and I assume it should be possible to optimize for this. This is probably the main thing I personally care about.
I imagine a longer analysis would include factors like:
1. If intense AI happens in 10 to 50 years, it could do the inventing afterwards.
2. I expect that a very narrow slice of the population will be responsible for scientific innovations here, if humans do it. Maybe instead of considering the policies [increase the population everywhere] or [decrease the population everywhere], we could consider more nuanced policies. Relatedly, if one wanted to help animal welfare via eventual scientific progress on animals, I'd expect [pro-natalism] to be an incredibly ineffective way of doing so.
I think that the phrase ["unaligned" AI] is too vague for a lot of safety research work.
I prefer keywords like:
- scheming
- naive
- deceptive
- overconfident
- uncooperative
I'm happy that the phrase "scheming" seems to have become popular recently; that's an issue that seems fairly specific to me. I have a much easier time imagining preventing an AI from successfully (intentionally) scheming than I do preventing it from being "unaligned."
I'm also a broad fan of this sort of direction, but have come to prefer some alternatives. Some points:
1. I believe some of this is being done at OP. Some grantmakers make specific predictions, and some of those might be later evaluated. I think that these are mostly private. My impression is that people at OP believe they have critical information that can't be made public, and I also assume it might be awkward to make any of this public.
2. Personally, I'd flag that making and resolving custom questions for each specific grant can be a lot of work. In comparison, it can be great to have general-purpose questions, like, "How much will this organization grow over time?" or "Based on a public ranking of the value of each org, where will this org be?"
3. While OP doesn't seem to make public prediction market questions on specific grants, they do sponsor Metaculus questions and similar on key strategic questions. For example, there are tournaments on AI risk, bio, etc. I'm overall a fan of this.
4. In the future, AI forecasters could do interesting things. OP could take the best ones and have them make private forecasts on many elements of any program.
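To make points 2 and 4 a bit more concrete, here's a minimal Python sketch of what reusable, general-purpose questions plus a (stubbed) forecaster might look like. Everything here is hypothetical - `QuestionTemplate`, `forecast_stub`, and the example grantees are made-up names, and a real setup would plug in actual human or AI forecasters and resolution data.

```python
from dataclasses import dataclass

# Hypothetical sketch: a small library of general-purpose question
# templates that can be instantiated for every grant, rather than
# writing and resolving bespoke questions per grant.

@dataclass
class QuestionTemplate:
    text: str  # template with a {grantee} placeholder

    def for_grant(self, grantee: str) -> str:
        return self.text.format(grantee=grantee)

# Illustrative templates only (not real OP/Metaculus questions).
TEMPLATES = [
    QuestionTemplate("Will {grantee} at least double its headcount within 3 years?"),
    QuestionTemplate("Will {grantee} be in the top quartile of a public org ranking in 3 years?"),
]

def forecast_stub(question: str) -> float:
    """Placeholder for a forecaster (human or AI) returning P(yes).

    A real version would query forecasters or an AI forecasting system;
    this stub just returns a flat 50% so the sketch runs.
    """
    return 0.5

def forecasts_for_grant(grantee: str) -> dict[str, float]:
    """Instantiate every template for one grant and collect forecasts."""
    return {t.for_grant(grantee): forecast_stub(t.for_grant(grantee)) for t in TEMPLATES}

if __name__ == "__main__":
    for grantee in ["Org A", "Org B"]:  # hypothetical grantees
        for question, p in forecasts_for_grant(grantee).items():
            print(f"{p:.0%}  {question}")
```

The point of the sketch is just that once the templates and forecasters exist, the marginal cost of covering an additional grant is close to zero, which is the main advantage over bespoke per-grant questions.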