I am a sophomore at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.
I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu.
If anyone has opportunities to do impactful research in philosophy (or in applying philosophy to real-world problems or related fields), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!
I can help with philosophy stuff (maybe?) and with organizing school clubs (maybe?).
Thanks for the comment!
Yep, in the philosophical literature they are distinct. I was merely making the point that I'm not sure one of these positions (moral realism is true but not motivating) actually reflects what people want to be implying when they say moral realism is true. In what sense are we saying there is objective morality if it relies on some sentiments? One can claim that the rational thing to do, given some objective (i.e. morality), is to pursue that objective, but that doesn't seem very distinct from plain practical rationality. And if it's just practical rationality, we should call it that -- still, as stated in the post, I don't think we can make ought claims about practical rationality (though you can probably make conditional claims: given that you want x, and you should do what you want, you should take action y). Similarly, if one took this definition of realism seriously, they'd have to say that moral realism is true in the same way that gastronomical realism is true (i.e. there are true facts about which food I should eat because they follow from my preferences about food).
Also, I'm not sure I buy your last point. I think under the forms of realism that people typically want to talk about, there's a gradient: you become more moral as you become more rational (using your evidence well, acting in accordance with your goals, etc.). While you could say that morality, and motivation toward it, only cash out at the highest level of rationality (i.e. God or whatever), this seems weird and much harder to justify.
I don't think there are any normative facts, so you can finish that sentence, if you'd like. In other words, I don't think there's any objective feature of the world that tells you that you need to have x beliefs instead of y beliefs. If one did actually believe this, I'm curious how it would play out (e.g. should someone do a bunch of very simple math problems all the time because they could gain many true beliefs very quickly? Seems weird).
On just having true beliefs: given some ontology of how the world works, you'd expect evolution to give us truth-tracking beliefs and/or processes in many instances, because truth-tracking is actually useful for survival and reproduction. It would also give us some wrong beliefs, and we do see this -- e.g. we believe in concepts that don't REALLY carve reality, like chairs, because they're useful.
From a philosophy standpoint, I find incommensurability pretty implausible (at least to act upon) for a couple reasons:
Happy to chat more about this if you think you'd find that helpful.
We've talked about this, but I wanted to include my two counterarguments as a comment to this post:
Side note: this argument seems to rely on some ideas about astronomical waste that I won't discuss here (I also haven't done much thinking on the topic), but it may be worth framing the discussion around that debate.
I think this is going to be hard for university organizers (speaking as an organizer at UChicago EA).
At the end of our fellowship, we always ask participants to take some time to sign up for 1-1 career advice with 80k. This past quarter, the other organizers and I agreed that we felt somewhat uncomfortable doing this, given that we knew 80k was leaning heavily toward AI while we presented it as simply a very good source of advice on all types of EA careers. This shift will probably mean that we stop sending intro fellows to 80k for advice, and we will have to start outsourcing professional career advising to somewhere else (not sure where that will be yet).
Given this, I wanted to ask whether 80k (or anyone else) has recommendations on what EA university organizers in a similar position should do (aside from the linked resources like Probably Good).
I think I get the theory you're positing, and I think you should look into Constructivism (particularly Humean Constructivism, as opposed to Kantian) and tethered values.
On this comment: once you get agents with preferences, I'm not sure you can make claims about oughts. Sure, they can act on their preferences and want to satisfy them (and maybe there is some definition of rationality you can construct on which certain actions count as better relative to achieving one's aims), but in what sense ought they to act that way? In what sense is this objective?
I'm also not sure I understand what a neutral/impartial view means here, and I don't see why someone would care about what it says at all (aside from their mere sentiments, which gets back to my earlier comment about motivation).
Also, I don't understand how this relates to the principle of indifference, which says that, given some partition of the hypothesis space and no evidence favoring any hypothesis over the others, you should assign equal credence to each possibility so that the credences sum to one.
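For concreteness, the version I have in mind (my notation, not anything from your comment) is: for a partition \(\{H_1, \dots, H_n\}\) of the hypothesis space with no evidence favoring any one cell over the others,
\[
P(H_i) = \frac{1}{n} \quad \text{for each } i, \qquad \text{so that} \qquad \sum_{i=1}^{n} P(H_i) = 1.
\]
If that's the principle you mean, I'd like to understand what work it's doing in your argument.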