Joseph_Chu

480 karma · Ontario, Canada
jlcstudios.com

Bio


An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the name Darklight. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.

Comments (92)

Ah, good catch! Yeah, my flavour of moral realism is definitely naturalist, so that's a clear distinction between myself and Bentham, assuming you are correct about what he thinks.

I'll admit I kinda skimmed some of Bentham's arguments, and some of them do sound a bit like rhetoric that relies on intuition or emotional appeal rather than deep philosophical argument.

If I wanted to give a succinct explanation of my reasons for endorsing moral realism, it would be that morality has to do with what subjects/sentients/experiencers value. The things they value are subjective in the sense that they come from the perceptions and judgments of the subjects, but objective in the sense that those perceptions, and in particular the emotions or feelings experienced because of them, are true facts about the subjects' internal states (i.e. happiness and suffering, desires and aversions, etc.). These can be objectively aggregated together as the sum of all value in the universe from the perspective of an impartial observer of said universe.
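To sketch what I mean a bit more formally (this is just my own informal notation, not a standard result): if $u_i(t)$ is the net positive or negative valence that subject $i$ experiences at time $t$, the impartial observer's aggregate is simply

$$W = \sum_{i \,\in\, \text{subjects}} \int u_i(t)\, dt,$$

i.e. total value is just everyone's experienced value summed over all subjects and all times, with no privileged vantage point.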

Most of the galactic x-risks should be limited by the speed of light (because causality is limited by the speed of light), and would, if initiated, probably expand like a bubble from their source, propagating outward at the speed of light. Thus, assuming a reasonably random distribution of alien civilizations, there should be regions of the universe that are currently unaffected by any galactic x-risk initiated by one or more alien civilizations. We are most probably in such a region; otherwise we would not exist. So, yes, the Anthropic Principle applies in the sense that we eliminate one possibility (x-risk-causing aliens nearby), but we don't eliminate the other possibilities (being alone in the region, or non-x-risk-causing aliens nearby), which is what I mean. I should have explained that better.
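As a toy illustration of that selection effect (a minimal sketch with made-up parameters, not a serious cosmological model), you can scatter hostile civilizations at random places and times, let each one's x-risk bubble expand at the speed of light, and see how much of the region remains untouched:

```python
import random

# Toy model: hostile civilizations appear at random locations and times, and
# each triggers an x-risk "bubble" that expands at the speed of light.
# Observers can only find themselves at points no bubble has reached yet.
# Units are arbitrary (think Gly and Gyr with c = 1); all parameters are made up.

random.seed(0)
BOX = 100.0        # side length of the toy region
NOW = 13.8         # current age of the toy universe
N_HOSTILE = 20     # number of x-risk-initiating civilizations

hostiles = [(random.uniform(0, BOX), random.uniform(0, BOX),
             random.uniform(0, BOX), random.uniform(5.0, NOW))  # (x, y, z, start time)
            for _ in range(N_HOSTILE)]

def untouched(x, y, z):
    """True if no x-risk bubble has reached this point yet (bubble radius = c * elapsed time)."""
    return all(((x - hx)**2 + (y - hy)**2 + (z - hz)**2) ** 0.5 >= NOW - t0
               for hx, hy, hz, t0 in hostiles)

samples = [(random.uniform(0, BOX), random.uniform(0, BOX), random.uniform(0, BOX))
           for _ in range(10_000)]
frac = sum(untouched(*p) for p in samples) / len(samples)
print(f"Fraction of the toy region untouched by any x-risk bubble: {frac:.2f}")
```

With these arbitrary numbers almost all of the region is still untouched, which is the point: the anthropic update is just that we find ourselves in one of the untouched pockets, not that no x-risk-causing civilizations exist anywhere.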

Also, the reality is that our long-term future is limited by the eventual heat death of the universe anyway (we will eventually run out of usable energy), so there is no way for our civilization to last forever (short of some hypothetical time travel shenanigans). We can at best delay the inevitable, and maximize the flourishing that occurs over spacetime.

50% agree: Morality is Objective

I've been a moral realist for a very long time and generally agree with this post.

I will caveat though that there is a difference between moral realism (there are moral truths) and motivational internalism (people will always act according to those truths when they know them). I think the latter is much less clearly true, and conflating the two is one of the primary sources of confusion when people argue about moral realism and AI safety.

I also think that our grasp of moral truths is a form of knowledge, and we can never know things with 100% certainty. This means that even if there are moral truths out there in the world, it is very possible to still be wrong about what they are, and even a superintelligence may not necessarily figure them out. Like most things, we can develop models, but they will generally not be complete.

I'm not sure I agree that the Anthropic Principle applies here. It would apply if ALL alien civilizations were guaranteed to be hostile and expansionist (i.e. grabby aliens), but I think there's room in the universe for many possible kinds of alien civilizations, and so if we allow that some but not all aliens are hostile expansionists, then there might be pockets of the universe where an advanced alien civilization quietly stewards their region. You could call them the "Gardeners". It's possible that even if we can't exist in a region with Grabby Aliens, we could still exist either in an empty region with no aliens, or in a region with Gardeners.

Also, realistically, if you assume that the reach of an alien civilization spreads at the speed of light, but its effective expansion rate is much slower because it doesn't need new space until the space it already has is filled up with population and megastructures, it's very possible that we might be within the reach of advanced aliens who just haven't expanded that far yet. Naturally occurring life might be rare enough that they would see value in not destroying or colonizing such planets, say, seeing us as a scientifically valuable natural experiment, like the Galapagos were to Darwin.
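As a crude back-of-the-envelope sketch of that gap between reach and actual settlement (the numbers here are made up purely for illustration):

```python
# Toy comparison of a civilization's "reach" (its light cone) versus the volume
# it has actually settled, assuming a much slower effective settlement speed.
# All numbers are illustrative assumptions, not estimates.

c = 1.0                  # speed of light (normalized)
v_settle = 0.01 * c      # assumed effective settlement-front speed
t = 1.0e9                # years since the civilization began expanding

reach_radius = c * t             # how far probes/signals could have travelled
settled_radius = v_settle * t    # how far the settlement front actually extends

# The settled share of the reachable volume scales as (v_settle / c)**3.
settled_fraction = (settled_radius / reach_radius) ** 3
print(f"Settled fraction of the reachable volume: {settled_fraction:.0e}")  # ~1e-6
```

Under assumptions like these, nearly all of the volume an old civilization could in principle reach would still be unsettled, which is all this scenario needs.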

So, I think there are reasons why advanced aliens aren't necessarily mutually exclusive with our survival, as the Anthropic Principle argument would require.

Granted, I don't know which of empty space, Gardeners, or late expanders is more likely, and would hesitate to assign probabilities to them.

Thanks for the thoughts!

I do think the second one has more potential impact if it works out, but I also worry that it's too "out there" and speculative, and also dependent on the AGI being persuaded by an argument (which it could just reject), rather than something that more concretely ensures alignment. I also noticed that almost no one is working on the Game Theory angle, so maybe it's neglected, or maybe the smart people all agree it's not going to work.

The first project is probably more concrete and actually uses my prior skills as an AI/ML practitioner, but there are also a lot of people already working on mech interp stuff. In comparison, my knowledge of Game Theory is self-taught and not very rigorous.

I'm tempted to explore both to an extent. For the first one, I can probably do some exploratory experiments to test the basic idea and rule it out quickly if it doesn't work.

I more or less agree. It's not really a complaint from me. I probably was too provocative in my choice of wording earlier.

I want to clarify that I don't think ideas like the Orthogonality Thesis or Instrumental Convergence are wrong. They're strong predictive hypotheses that follow logically from very reasonable assumptions, and even the possibility that they could be correct is more than enough justification for AI safety work to be critical.

I was more just pointing out some examples of ideas that are very strongly held by the community and that happen to have been named and popularized by people like Bostrom and Yudkowsky, both of whom might be considered elites among us.

P.S. I'm always a bit surprised that the Neel Nanda of Google DeepMind has the time and desire to post so much on the EA Forums (and also Less Wrong). That probably says very good things about us, and also gives me some more hope that the folks at Google are actually serious about alignment. I really like your work, so it's an honour to be able to engage with you here (hope I'm not fanboying too much).

I mean, from the inside it would look like what you said: good ideas that are persuasive. And I'm not saying they aren't good ideas that are persuasive (I agree they are). I'm more just pointing out some examples of ideas that form core elements of our belief ecosystem and that have their source in the works of particular elites, in this case being named and popularized in Bostrom's book Superintelligence and further popularized by Yudkowsky in the Sequences. To the extent that this is elitism, it's very mild, and I don't mean to imply it's a problem or anything. It's natural for the more elite among us to be in a better position to come up with good ideas. I also think that to the extent there is deference here, it is usually well deserved, and also very tame compared to that in other intellectual communities.

The universe is already 13.8 billion years old. Assuming that our world is roughly representative of how long it takes for a civilization to spring up after a planet forms (about 4.5 billion years), there have been about 9 billion years during which other, more advanced civilizations could have developed. Assuming it takes something like 100 million years to colonize an entire galaxy, one would already expect to see aliens having colonized the Milky Way, or having initiated at least one of the existential risks that you describe. The fact that we are still here is anthropic evidence that either we are somehow alone in the galaxy, the existential risks are overblown, or, more likely, there are already some kind of benign aliens in our neighbourhood who for whatever reason are leaving Earth alone (to our knowledge anyway) and probably protecting the galaxy from those existential risks.

(Note: I'm aware of the Grabby Aliens theory, but even if we are early, I still think it's quite unlikely that we are the very first civilization out there.)
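To make the timescales explicit, here's the rough arithmetic behind that argument (same ballpark figures as above, nothing precise):

```python
# Rough timescale comparison using the ballpark figures from the text.
age_of_universe_gyr = 13.8
planet_to_civilization_gyr = 4.5     # time from Earth's formation to a civilization
head_start_window_gyr = age_of_universe_gyr - planet_to_civilization_gyr  # ~9.3 Gyr
galaxy_colonization_gyr = 0.1        # ~100 million years to colonize a galaxy

# How many full galaxy-colonization timescales fit in the head-start window:
print(head_start_window_gyr / galaxy_colonization_gyr)  # ~93
```

So any civilization that got even a ~1% head start on us within that window would have had time to colonize the galaxy many times over, which is why the silence seems to call for an explanation.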

Keep in mind, the most advanced aliens are likely BILLIONS of years ahead of us in development. They're likely unfathomably powerful. If we exist and they exist, they're probably also wise and benevolent in ways we don't understand (or else we wouldn't be here living what seem like net positive lives). Maybe there exist strong game-theoretic proofs, which we don't yet know of, for cooperation and benevolence, ensuring that any rational civilization or superintelligence has strong reasons to cooperate at a distance and not initiate galaxy-killing existential risks. Maybe those big voids between galaxies are where not-so-benign civilizations sprouted and galaxy-killing existential risks occurred.
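To gesture at the kind of result I have in mind (this is just the standard textbook repeated prisoner's dilemma condition, not any sort of proof about actual superintelligences): conditional cooperation is an equilibrium whenever the players care enough about the future.

```python
# Standard repeated prisoner's dilemma result (folk-theorem flavour):
# with one-shot payoffs T > R > P > S, a grim-trigger strategy sustains mutual
# cooperation as an equilibrium whenever the discount factor is at least
# (T - R) / (T - P). Purely illustrative; says nothing about real civilizations.

T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
delta_threshold = (T - R) / (T - P)
print(f"Cooperation is stable for discount factors >= {delta_threshold}")  # 0.5
```

The hand-wavy hope would be that extremely long-lived civilizations have discount factors close to 1, so results in this spirit would push them toward cooperation.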

Though, it could also be that time travellers / simulators / some other sci-fi-ish entities "govern" the galaxy. Like, perhaps humans are the first civilization to develop time travel and so use their temporal supremacy to ensure the galaxy is ripe for human civilization alone, which could explain the Fermi Paradox?

All this is, of course, wild speculation. These kinds of conjectures are very hard to ground in anything more solid.

Anyways, I also found your post very interesting, but I'm not sure any of these galactic-level existential risks are tractable in any meaningful way at our current level of development. Maybe we should take things one step at a time?
