Rafael Ruiz

PhD in Philosophy @ London School of Economics
203 karma · Joined Feb 2021 · Pursuing a doctoral degree (e.g. PhD) · Working (0-5 years) · London, UK
www.rafaelruizdelira.com/

Bio

PhD Student in Philosophy at the London School of Economics, researching Moral Progress, Moral Circle Expansion, and the causes that drive them.

Previously, I did an MA in Philosophy at King's College London and an MA in Political Philosophy at Pompeu Fabra University (Spain).

When I have the time, I also run https://futurosophia.com/, a website and nonprofit aimed at promoting the ideas of Effective Altruism in Spanish.

You might also know me from EA Twitter. :)

More information about me at my personal website: https://www.rafaelruizdelira.com/

Comments (22)

Fair! I agree with that, at least up to this point in time.

But I think there could come a time when we have picked most of the "social low-hanging fruit" (cases like the abolition of slavery, universal suffrage, universal education), so there isn't much easy social progress left to make. At that point, comparatively, investing in the "moral philosophy low-hanging fruit" will look more worthwhile.

Some important philosophical moral problems that might have great axiological importance, at least under consequentialism/utilitarianism, are population ethics (totalism vs. averagism), our duties towards wild animals, and the moral status of digital beings.

I think figuring them out could have great importance. Of course, if we just keep them as interesting philosophical thought experiments and never do anything to promote any outcomes, they might not matter that much. But I'm guessing people in the year 2100 might want to start implementing some of those ideas.

Same! It seems like a fascinating, although complicated, topic. You might enjoy Oded Galor's "The Journey of Humanity", if you haven't read it. :)

Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. Looked at under the lens of the moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument ("There are no morally relevant differences between Amy and Bob, so we should treat them equally").

In contrast, I think the fact that we are starting to develop cases like population ethics, infinite ethics, and complicated variations of thought experiments (there are infinitely many variations of the trolley problem we could conjure up) that really test the limits of our moral sense and moral intuitions hints that we might need a more systematic, perhaps computerized, approach to moral philosophy. I think the likely path is that most conceptual moral progress in the future (in the sense of figuring out new theories and thought experiments) will happen with the assistance of AI systems.

I can't point to anything very concrete, since I can't predict the future of moral philosophy in any concrete way, but I think philosophical ethics might become very conceptually advanced and depart heavily from common-sense morality. I think this gap has been widening since the Enlightenment: challenges to common-sense morality have been slowly increasing. We might be at the very beginning of that exponential takeoff.

Of course, we will consider many of the moral systems that AIs develop to be ridiculous. And some might be! But in other cases, we might be too backward, or too morally tied to our biologically and culturally shaped moral intuitions and taboos, to realize that something is in fact an advance. For example, the Repugnant Conclusion in population ethics might be true (or the optimal decision in some sense, if you're a moral anti-realist), even if it goes against many of our moral intuitions.

The effort will lie in separating the wheat from the chaff. And I'm not sure whether it will be AI or human moral philosophers doing the work of discriminating good ethical systems and concepts from bad ones.

Outside of Marxism and continental philosophy (particularly the Frankfurt School and some Foucault), I think this idea has lost a lot of its grip! It has actually become a minority view, or even dropped out of awareness, among current academic philosophers, particularly in the Anglosphere.

However, I think it's a very useful idea that should make us look at our social arrangements (institutions, beliefs, morality...) with some level of initial suspicion. Luckily, some similar arguments (often called "debunking arguments" or "genealogical arguments") are starting to gain traction within philosophy again. 

I hadn't! Thanks for bringing this to my attention; I will take a look in the coming months.

Good! I think I mostly agree with this and I should probably flag it somewhere in the main post. 

I do agree with you, and I think it also points to a central claim in the later parts of my thesis, where I will talk about empirical ideas rather than philosophical ones: that technologies (from shipbuilding, to the Industrial Revolution, to factory farming, to future AI) are more of a factor in moral progress or regress than ideologies are. So many moral philosophers might have the wrong focus.

(Although many of those things I would call "social" progress rather than "moral" progress strictly speaking, because they were triggered by external factors (economic and technological change) rather than moral reflection. It's not that we became more cruel to animals in terms of our intentions; it's that we gained more power over them.)

I agree with you that this is very important, and I'd like to see more work on it. Sadly, I don't have much that is concrete to say on this topic. The following is my opinion as a layman on AI:

I've found Toby Ord's framework here https://www.youtube.com/watch?v=jb7BoXYTWYI useful for thinking about these issues. I guess I'm an advocate for differential progress, like Ord: prioritizing safety advances relative to capability advances. Not stopping work on AI capabilities, but shifting the current balance from capabilities work to safety work right now, and then, in some years or decades once we have figured out alignment, shifting the focus back to capabilities.

My very rough take on things is that as long as we manage to develop advanced LLMs (e.g. GPT5, 6, 7... and Copilots) slowly and carefully before dangerous AGI, we should use those LLMs to help us with technical alignment work. I think technical alignment work is the current bottleneck of the whole situation. There are either not enough people or we're not smart enough to figure it out on our own (but maybe computers could help!).

So, to your points, I think right now (1) Runaway AI Risk is higher than (2) Malicious actors catching up. I don't know by how much, since I don't know how well Chinese labs are doing on AI, or whether they could reach big breakthroughs on their own. (And I don't know how to compare points (1) and (2) to (3) and (4).)

Perhaps somebody could do a BOTEC (back-of-the-envelope calculation) or a rough model with some very rough numbers to see what a good tradeoff looks like, and put it up for discussion. I'd like to see some work on this.
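To make that suggestion a bit more concrete, here is a minimal sketch of what such a BOTEC could look like in Python. Everything in it is an illustrative assumption rather than an estimate: the functional forms, the placeholder probabilities, and the crude move of just adding the two risks together. The point is only that once a structure like this is written down, people can argue about the numbers and the shape of the curves.

```python
# A minimal BOTEC sketch of the capabilities-vs-safety tradeoff discussed above.
# All parameter values are illustrative placeholders, not estimates; the point is
# only to show one way such a model could be structured and put up for discussion.

def p_runaway(safety_share: float) -> float:
    """Hypothetical probability of a runaway-AI catastrophe, assumed to fall
    as the fraction of total AI effort devoted to safety rises."""
    baseline = 0.20  # placeholder: risk if no effort at all goes to safety
    return baseline * (1.0 - safety_share) ** 2

def p_overtaken(safety_share: float) -> float:
    """Hypothetical probability that a less cautious actor reaches AGI first,
    assumed to rise as the leading labs slow capabilities work."""
    baseline = 0.05  # placeholder: risk with no slowdown at all
    return baseline + 0.15 * safety_share

def total_risk(safety_share: float) -> float:
    """Crudely treat the two risks as additive; a real model would weight
    them by severity and account for interactions between them."""
    return p_runaway(safety_share) + p_overtaken(safety_share)

if __name__ == "__main__":
    # Scan candidate allocations of effort and report the one minimizing total risk.
    shares = [i / 100 for i in range(0, 101, 5)]
    for s in shares:
        print(f"safety share {s:.2f}: runaway {p_runaway(s):.3f}, "
              f"overtaken {p_overtaken(s):.3f}, total {total_risk(s):.3f}")
    best = min(shares, key=total_risk)
    print(f"Risk-minimizing safety share under these placeholder numbers: {best:.2f}")
```

Under these made-up numbers the model spits out some "optimal" split, but of course the real value of the exercise would be in debating the inputs, not the output.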

Hi Jonas! Henrich's 2020 book is very ambitious, but I thought it was really interesting. It has lots of insights from various disciplines, attempting to explain why Europe became the dominant superpower from the Middle Ages (starting to take off around the 13th century) to modernity.

Regarding AI, I think it's currently beyond the scope of this project. Although I mention AI at some points regarding the future of progress, I don't develop anything in-depth. So sadly I don't have any new insights regarding AI alignment.

I do think theories of cultural evolution and processes of mutation and selection of ideas could play a key role in predicting and shaping the long-term future, whether for humans or for AI. So I'm excited for social scientists or computational modellers to try to apply this kind of work to making AI values dynamic and evolving (rather than static). But again, it's currently outside the scope of my work and area of expertise.

Hi Ulrik! I'm definitely aware of this issue, and it's a very ugly side of this debate, which is why some people might have moved away from the topic in the past.

The dangers of using moral progress to justify colonialism and imperialism will be a key point in my next post, and there is also a brief section on them in the first chapter of my thesis. It's definitely worth cautioning against imposing progress on other cultures. Political intervention is much more complicated than "my culture has progressed further, so we should enforce it upon the rest": it deals with difficult issues of epistemic bias, lack of political legitimacy, and political paternalism, aside from being a convenient excuse for the powerful to go into other countries and exploit people. It's probably not enough, but at least the explicit warning and section are there.

I try not to adhere to or rely on particularly "western" values in my view of progress. But there's controversy as to whether values such as moral impartiality and individual human rights are western. Amartya Sen, for example, wants to claim that they aren't, and he finds examples of such values in non-western contexts (in Development as Freedom, Chapter 10). But I am not fully convinced either way. It's probably a tricky issue that deserves a book-length treatment of its own.

If you know any other authors, academics, or philosophers who bring in non-western perspectives on values and progress, I'd love to take a look!

Hi Scott, glad I could motivate you to get Buchanan and Powell. It's a great book! It might feel a bit long if you're not a philosopher, but it's definitely a standout, solid reading with many insights on this topic.

On The Blank Slate and Moral Uncertainty, sure, let me add the following to my reviews:

I think those two books are really good with regard to their subject matter. They're both general overviews of their respective fields. Moral Uncertainty is much more technical, but it's basically required reading if you're getting into that topic for the first time. So it's 5/5 stars if you're interested in the topic, and probably the best starting point.

The Blank Slate is more replaceable with other introductions to cognitive science. It presents things like the view of the mind as a computer, along with debates like the Language of Thought, connectionism, and human nature. Pinker is more accessible than "textbooks" like Tim Crane's "The Mechanical Mind", which are more rigorous but technical and harder to read.

Pinker also presents his own opinionated side of things, siding with the evolutionary psychologists John Tooby and Leda Cosmides in analyzing humans as biological animals that inherit a lot of instincts. So he wants to argue that we are instinctually hard-wired towards certain behaviors, which explains things like our capacity for language.

At some points he rambles against "leftists" in academia who want to deny an instinctual human nature. He wants to detach ethics and human rights from any particular conception of human nature.

I take a bit of a middle position between Pinker and the "leftist who wants to deny human nature". I think the evolutionary psychology standpoint is definitely worth taking into consideration, but it might be more straightforward to apply to something like language learning, which happens in an individual mind, than to politics and societies, where things are less straightforward because new phenomena emerge when you're dealing with interactions among large groups of people. (I think Pinker would now agree with that, since in his later books, like Better Angels and Enlightenment Now, he is a defender of Enlightenment values as a way to overcome our "pre-wired" tribalism.) I'd say it's a 4/5 stars read.

I'll also edit the post to reflect this.
