MC

Michele Campolo

123 karma · Joined Jul 2020

Bio

Lifelong recursive self-improver, on his way to exploding really intelligently :D

More seriously: my posts are mostly about AI alignment, with an eye towards moral progress and creating a better future. If there were a public machine ethics forum, I would write there as well.

An idea:

  • We have a notion of what good is and how to do good
  • We could be wrong about it
  • It would be nice if we could use technology not only to do good, but also to improve our understanding of what good is.

The idea above, together with the fact that I'd like to avoid producing technology that can be used for bad purposes, is what motivates my research. Feel free to reach out if you relate!

At the moment I am doing research on agents whose behaviour is driven by a reflective process analogous to human moral reasoning, rather than by a metric specified by the designer. See Free agents.

Here are other suggested readings from what I've written so far:
- Naturalism and AI alignment
- From language to ethics by automated reasoning
- Criticism of the main framework in AI alignment

Sequences (1)

Ongoing project on moral AI

Comments (8)

Hey! I've had a look at some parts of this post. I don't know exactly where the sequence is going, but I thought you might be interested in some parts of a post I've written. Below I give some information about how it relates to the ideas you've touched on:

This view has the advantage, for philosophers, of making no empirical predictions (for example, about the degree to which different rational agents will converge in their moral views)

I am not sure about the views of the average non-naturalist realist, but in my post (under Moral realism and anti-realism, in the appendix) I link to three different pieces that analyse the relation between metaethics and AI: some people do seem to think that aspects of ethics and/or metaethics can affect the behaviour of AI systems.

It is also possible that the border between naturalism and non-naturalism is less neat and clear than it appears in the standard metaethics literature, which likes to classify views into well-separated buckets.

Soon enough, our AIs are going to get "Reason," and they're going to start saying stuff like this on their own – no need for RLHF. They'll stop winning at Go, predicting next-tokens, or pursuing whatever weird, not-understood goals that gradient descent shaped inside them, and they'll turn, unprompted, towards the Good. Right?

I argue in my post that this idea depends heavily on agent design and internal structure. As I understand things, one way to get a moral agent is to build an AI that has a bunch of (possibly many) human biases and is guided by design towards figuring out epistemology and ethics on its own. Some EAs, and rationalists in particular, might be underestimating how easy it is to get an AI that dislikes suffering if one follows this approach.

If you know someone who would like to work on the same ideas, or someone who would like to fund research on these ideas, please let me know! I'm looking for them :)

Thank you!

Yes, I am considering both options. For the next two months I'll focus on job and grant applications, then I'll reevaluate what to do depending on the results.

Hey, I just wanted to thank you for writing this!

I'm looking forward to reading future posts in the series; actually, I think it would be great to have series like this one for each major cause area.

Yes, I'd like to read a clearer explanation. You can leave the link here in a comment or send me a private message.

Hey!

Thanks for the suggestion. I've read part of the Wikipedia page on Jungian archetypes, but my background is not in psychology and it was not clear to me. The advantage of just saying that our thoughts can be abstract (point 1) is that pretty much everyone understands what that means, while I am not sure this is true if we start introducing concepts like Jungian archetypes and the collective unconscious.

I agree with you that the AI (and AI safety) community doesn't seem to care much about Jungian archetypes. It might be that AI people get the idea anyway and just express it in different terms (e.g. they talk about the influence of culture on human values instead of archetypes).

Maybe "only person in the world" is a bit excessive :)

As far as I know, no one else in AI safety is directly working on it. There is some research in the field of machine ethics, about Artificial Moral Agents, that has a similar motivation or objective. My guess is that, overall, very few people are working on this.

What you wrote about the central claim is more or less correct: I actually made only an existential claim about a single aligned agent, because the description I gave is sketchy and far from a more precise algorithmic level of description. This single agent probably belongs to a class of other aligned agents, but it seems difficult to guess how large this class is.

That is also why I have not given a guarantee that all agents of a certain kind will be aligned.

Regarding the orthogonality thesis, you might find section 1.2 of Bostrom's 2012 paper interesting. He writes that objective and intrinsically motivating moral facts need not undermine the orthogonality thesis, since he uses the term "intelligence" to mean "instrumental rationality". I add that there is also no guarantee that the orthogonality thesis is correct :)

About psychopaths and metaethics, I haven't spent a lot of time on that area of research. Like other empirical evidence, it doesn't seem easy to interpret.