I think for many people, positive comments would be much less meaningful if they were rewarded/quantified, because you would doubt that they're genuine. (Especially if you're prone to feeling like an imposter and readily seize on reasons to dismiss praise.)
I disagree with your recommendations despite agreeing that positive comments are undersupplied.
Given 3, a key question is: what can we do to increase P(optimonium | ¬AI doom)?
For example:
(More precisely, we should talk about the expected fraction of resources that are optimonium rather than the probability of optimonium, but probability might be a fine approximation.)
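To spell out that parenthetical in symbols (my notation, not the original): letting f be the fraction of future resources that are optimonium, the quantity of interest is roughly

$$\mathbb{E}[\,f \mid \neg\text{AI doom}\,],$$

and P(optimonium | ¬AI doom) is presumably a fine stand-in to the extent that futures either devote a substantial fraction of resources to optimonium or essentially none.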
One key question for the debate is: what can we do / what are the best ways to "increas[e] the value of futures where we survive"?
My guess is that it's better to spend most effort identifying the most promising ways to "increas[e] the value of futures where we survive" and arguing about how valuable they are, rather than arguing in the abstract about "reducing the chance of our extinction [vs] increasing the value of futures where we survive."
I want to make salient these propositions, which I consider very likely:
Considerations about just our solar system or value realized this century miss the point, by my lights. (Even if you reject 3.)
Call computronium optimized to produce maximum pleasure per unit of energy "hedonium," and that optimized to produce maximum pain per unit of energy "dolorium," as in "hedonistic" and "dolorous." Civilizations that colonized the galaxy and expended a nontrivial portion of their resources on the production of hedonium or dolorium would have immense impact on the hedonistic utilitarian calculus. Human and other animal life on Earth (or any terraformed planets) would be negligible in the calculation of the total. Even computronium optimized for other tasks would seem to be orders of magnitude less important.
So hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the "hedons per joule" or "hedons per computation" of hedonium (call this H), minus the expected production of dolorium, multiplied by "dolors per joule" or "dolors per computation" (call this D).
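In symbols, the approximation described in the quoted passage (H and D as defined there; expectations taken over possible colonization outcomes):

$$\text{net pleasure} \;\approx\; \mathbb{E}[\text{hedonium produced}] \cdot H \;-\; \mathbb{E}[\text{dolorium produced}] \cdot D$$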
This is circular. The principle is only compromised if (OP believes) the change decreases EV — but obviously OP doesn't believe that; OP is acting in accordance with the do-what-you-believe-maximizes-EV-after-accounting-for-second-order-effects principle.
Maybe you think people should put zero weight on avoiding looking weird/slimy (beyond how weird/slimy you actually are) to low-context observers (e.g. college students learning about the EA club). You haven't argued that here. (And if that's true, then OP made a normal mistake; it's not compromising principles.)
My impression is that CLTR mostly adds value via its private AI policy work. I agree its AI publications seem not super impressive, but maybe that's OK.
Probably the same for The Future Society and some others.
The thresholds are pretty meaningless without at least a high-level standard, no?