I have just started a six-month residency at Newspeak House, a political technology college in London. It's going to be a period of upskilling, networking, and research; I'd also like to find an ethical career. I intend to research and upskill as follows:
Consistency in ethical systems
I hypothesise that internal consistency and agreement with our deepest moral intuitions are the two most important features of any ethical system. I'd like to hear about any other necessary and sufficient characteristics of a good ethical system. Does anyone have suggestions of books to read or thoughts to consider? A corollary is that bad ethical systems are those which are inconsistent with themselves or with our moral intuitions. Does anyone think they have a solid counterexample?
I am looking forward to trying to understand other people's ethical systems. Why do people make the decisions they do? What makes people change their minds? What allows people to ignore conflicting claims in their own beliefs?
A lot of rational, impactful people flow through Newspeak House, so I'm also curious to hear their criticisms of EA, since:
- If we are wrong about this movement, we should want to move to something better
- It is good for us to understand our flaws so we can grow
- It is good to learn how we can convince people to join EA, if it is the best choice for the future of consciousness
- There may be ways people perceive EA which are only skin deep but which nonetheless turn people off, e.g. a friend said "I don't think earning to give is good advice", even though most EAs today would agree with that statement.
Patterns in emerging communities of practice
This is Newspeak House's main interest: learning about growing communities and building a library of best practices. In London, many growing organisations are duplicating the same effort. By meeting the organisers and spotting common patterns, Newspeak Fellows can learn and share these commonalities, helping organisations grow faster. There is a risk that making all organisations better will also empower bad ones, but most people are trying to do good things, and if rationality can be empowered alongside this, it will create a better ecosystem for growing orgs. I intend to feed back here anything I think will be useful to EA, since I hope some of what I learn will be new to you all.
Machine learning
All hail our AI overlords. #Sarcasm.
Conclusion
I'm interested to hear your thoughts and suggestions. I would like to use this time well and open my future plans up to your reasoned, empathetic criticism.
Also, I'm looking for a bit of work, so if anyone needs any web dev doing, please get in touch. I intend to come to some EA meetups, so see you there. Hope you are well. Thanks for reading.
This is a pretty standard view in philosophical ethics: reflective equilibrium.
For a somewhat opposed approach, you might examine moral particularism (as opposed to moral generalism), which roughly holds that we should make moral judgements about particular cases without (necessarily) applying moral principles. So while the particularist might care about coherence in some sense (when responding to moral reasons), they needn't be concerned with ensuring coherence between moral principles, or between principles and our judgements about cases. You might separately wonder how much weight should be given to our judgements about particular cases versus our judgements about general principles, on a spectrum from hyper-particularism to hyper-methodism.
In terms of other characteristics of a good ethical system, I think it's worth considering that coherence doesn't necessarily get you very far. It seems possible, in principle, to have coherent views which are very bad (of course, this is controversial, and may depend in part on empirical facts about human moral psychology, alongside conceptual truths about morality). One might think that one needs an appropriate correspondence between one's (initial) moral views and the moral facts. Separately, one might think that it is more important to cultivate appropriate kinds of moral dispositions than to have coherent views.
Related to the last point, there is a long tradition, especially associated with Nietzsche, of viewing ethical theorising (and in particular attempts to reason about morality) sceptically: on this view, moral reasoning is more often rationalisation of more dubious impulses. In that case, again, one might be less concerned with trying to make one's moral views coherent and more with applying some other kind of procedure (e.g. a Critical or Negative one).
There is a lot of empirical moral psychology on these questions. I'm not sure specifically what you're interested in; otherwise I would be able to make more specific suggestions.
I think more applied messaging work, on how EA is received and how receptive people are to different messages, would also be valuable for exploring this. It would likely help reduce the risks EAs run when engaging in outreach or conducting activities that will be perceived a certain way by the world.
Could this be considered similar to the bias/variance tradeoff in machine learning?
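One way to read the analogy (roughly, and just my gloss): a moral theory that bends to fit every case-by-case intuition is like a high-variance model, while one that rigidly applies a simple principle regardless of intuitions is like a high-bias model. For anyone unfamiliar with the tradeoff being referenced, here is a minimal sketch in Python using scikit-learn; the noisy sine-curve dataset and the polynomial degrees are illustrative assumptions, not anything from the original post:

```python
# Minimal sketch of the bias/variance tradeoff: fit polynomials of
# increasing degree to noisy data and compare train vs. test error.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 40)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 40)

# Held-out points from the true (noise-free) function.
X_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # degree 1 underfits (high bias: large error on both sets);
    # degree 15 overfits (high variance: low train error, high test error).
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

The middle degree typically wins on test error, which is the tradeoff in miniature: neither maximal fit to the observations nor maximal simplicity is optimal.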