This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require less ongoing maintenance.
My suggestion would be to improve the AI Governance section of aisafety.info.
cc: @melissasamworth / @Søren Elverlin / @plex
To possibly strengthen the argument made, I'll point out that moving already-effective money to a more effective cause or donation has a smaller counterfactual impact, because those donors are already looking at the question and could easily come to the same conclusion on their own. Moving money in a "normie" foundation, on the other hand, can have knock-on effects of getting them to think about impact at all, and changing their trajectory.
I meant that I don't think it's obvious that most people in EA working on this would agree.
I do think it's obvious that most people overall would agree, though most would either disagree or be unsure about whether a simulation matters at all. It's also very unclear how to count person-experiences in general, as Johnston's Personite paper argues: https://www.jstor.org/stable/26631215 I'll also point to the general double-counting problem: https://link.springer.com/article/10.1007/s11098-020-01428-9 and suggest that it could apply here.
I need to write a far longer response to that paper, but I'll briefly respond (and flag to @Christian Tarsney) that my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondly, on axiological weights, the response that egalitarian views lead to rejecting different axiological weights seems to beg the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.
I recently discussed this on Twitter with @Jessica_Taylor, and I think there's a weird claim involved that collapses into either believing that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting-dominoes-in-half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n, which is a completely different claim!
I think the many-worlds interpretation confuses this by making it about causally separated beings, which are either, in my view, only a single being, or are different because they will diverge. And yes, different beings are obviously counted more than once, but that's explicitly ignoring the question. (As a reductio: if we ask "is 1 the same as 1?" the answer is yes, they are identical platonic numbers; but if we instead ask "is 1 the same as 1 plus 1?" the answer is no, they are different because the second is, by assumption, different!)
That's a fair point, and I agree that it leads to a very different universe.
At that point, however (assuming we embrace moral realism and an absolute moral value of some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.
> a PhD applicant could ask their prospective supervisor’s current grad students what it’s like to work with the supervisor. Yet, at least when I was applying to grad school, this was not very common.
I often advise doing this, albeit slightly differently: talk to their recently graduated former PhD students, who have a better perspective on what the process led to and how valuable it was in retrospect. I think similar advice plausibly applies in corresponding cases: talk to people who used to work somewhere, instead of current employees.
I still don't think that works out, given the energy cost of transmission and the distances involved.