Noah Birnbaum

Sophomore @ University of Chicago
284 karma · Pursuing an undergraduate degree

Bio

I am a sophomore at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship. 

I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu. 

How others can help me

If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real life or a related field), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!

How I can help others

I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)

Comments (29)

I think I get the theory you're positing, and I think you should look into Constructivism (particularly Humean Constructivism, as opposed to Kantian) and tethered values. 

 

On this comment: Once you get agents with preferences, I'm not sure you can make claims about oughts. Sure, they can act on their preferences and want to satisfy them (and maybe there is some definition of rationality you can construct on which it would be better to act in certain ways relative to achieving one's aims), but in what sense ought they to? In what sense is this objective? 

I'm also not sure I understand what a neutral/impartial view means here, and I don't understand why someone would care about what it says at all (aside from their mere sentiments, which gets back to my last comment about motivation). 

Also, I don't understand how this relates to the principle of indifference, which states that given some partition over possible hypotheses in some sample space and no evidence in any direction, you should assign an equal credence to all possibilities such that their total sums to one. 
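For reference, the standard formal statement of that principle (just a restatement of the definition above, with my own notation for the partition) is:

```latex
% Principle of indifference: given a partition H_1, ..., H_n of the hypothesis space
% and no evidence favoring any particular cell, assign equal credence to each cell.
\[
  P(H_i) = \frac{1}{n} \quad \text{for } i = 1, \dots, n,
  \qquad \text{so that} \quad \sum_{i=1}^{n} P(H_i) = 1 .
\]
```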

Thanks for the comment! 

Yep, in the philosophical literature, they are distinct. I was merely making the point that I'm not sure one of these (moral realism is true but not motivating) actually reflects what people want to be implying when they say moral realism is true. In what sense are we saying that there is objective morality if it relies on some sentiments? I guess one can claim that the rational thing to do, given some objective (i.e., morality), is to pursue that objective, but that doesn't seem very distinct from plain practical rationality. If it's just practical rationality, we should call it just that. Still, as stated in the post, I don't think we can make ought claims about practical rationality (though you can probably make conditional claims: given that you want x, and you should do what you want, you should take action y). Similarly, if one took this definition of realism seriously, they'd have to say that moral realism is true in the same way that gastronomical realism is true (i.e., that there are true facts about what food I should eat because they follow from my preferences about food). 

Also, I'm not sure I buy your last point. I think under the forms of realism that people typically want to talk about, there's a gradient on which your morality increases as you increase rationality (using your evidence well, acting in accordance with your goals, etc.). While you could just say that morality and the motivation toward it only cash out at the highest level of rationality (i.e., God or whatever), this seems weird and much harder to justify. 

Just gonna have to write a reply post, probably 

I don't think there are any normative facts, so you can finish that sentence, if you'd like. In other words, I don't think there's any objective feature of the world that tells you that you need to have x beliefs instead of y beliefs. If one did actually believe this, I'm curious how it would play out (e.g., should someone do a bunch of very simple math equations all the time because they could gain many true beliefs very quickly? Seems weird). 

On just having true beliefs: I would say that once you give some ontology of how the world works, you'd expect evolution to give us truth-tracking beliefs and/or processes in many instances, because they are actually useful for survival and reproduction (though it would also give us some wrong beliefs, and we do see this -- e.g., we believe in concepts like chairs that don't really carve reality at the joints, because they're useful). 

Morality is Objective: 90% disagree

Evolutionary debunking arguments: we can explain the vast majority of moral beliefs without positing the existence of extra substances, so we shouldn't posit them! 

Thank you for doing this — this is super helpful from a university organizer perspective. 

One question: will you have the capacity to handle all the participants coming out of university intro fellowships (UChicago, for example, has around 13 per quarter, excluding summer)? 

From a philosophy standpoint, I find incommensurability pretty implausible (at least to act upon) for a couple reasons: 

  1. If two values are incommensurable, then for every action you take there is some probability that you are making a trade-off between them. Given that some version of expected value theory is correct (where a probability of realizing a value is treated as equivalent to some smaller amount of that value itself), this would mean that every action one takes is a choice between two incommensurable goods. This seems to lock you into a constant state of decision paralysis (since every action you take trades off two incommensurable goods), which, I believe, should make incommensurable goods a non-viable option. (See this paper for more discussion.)
  2. Imagine you have some credence in two things being incommensurable (thereby making it such that you have no reason to act either way). Even if this is the case, you should still have some non-zero credence in these values/actions being commensurable, since that is a contingent proposition. If the credence in incommensurability gives you no reason to act and the credence in commensurability does give you reason to act, then incommensurability is irrelevant: your actions should be informed entirely by the case conditional on commensurability (see the rough formalization after this list). 
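
Here is a minimal formalization of point 2. The notation (p, V, EC) is mine, and it builds in the assumption that "no verdict either way" can be modeled as contributing zero to an act's expected choiceworthiness:

```latex
% p     = credence that the two values are commensurable
% V(a)  = value of act a conditional on commensurability
% EC(a) = expected choiceworthiness of act a
% Modeling assumption: conditional on incommensurability there is no reason for or
% against any act, so that branch contributes nothing.
\[
  EC(a) = p \cdot V(a) + (1 - p) \cdot 0 = p \, V(a).
\]
% For any p > 0, ranking acts by EC(a) is the same as ranking them by V(a),
% so the incommensurability hypothesis does no work in deciding what to do.
```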

Happy to chat more about this if you think you'd find it helpful. 

We've talked about this, but I wanted to include my two counterarguments as a comment to this post: 

  1. It seems like there's a good likelihood that we face semi-Malthusian constraints nowadays. While I would admit that one should be skeptical of total Malthusianism (i.e., for every person who dies another one lives, because we are at maximum carrying capacity), I think it is much more reasonable to think that carrying-capacity constraints actually do exist, and that maybe it's something like for every death you get 0.2 lives back (see the rough arithmetic after this list). If this is true, I think this argument weakens a bunch.
  2. This argument only works if, conditional on existential risk not happening, we don't hit Malthusian constraints at any point in the future, which seems quite implausible. If we don't get existential risk and the pie just keeps growing, it seems like we would get super-abundance, and the only thing holding people back would be Malthusian physical constraints on creating happy people. Therefore, we just need some people to live past that time of super-abundance to get massive growth. Additionally, even if you think those people wouldn't have kids (which I find pretty implausible, as one person's preference for children would lead to many kids given abundance), you could point to those lives being extremely happy, which holds most of the weight. This also 
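
To spell out the arithmetic in point 1 (the 0.2 is the illustrative figure above, not an empirical estimate): if a fraction r of each death is replaced because of partial carrying-capacity constraints, then the net long-run population change per death is

```latex
% r = fraction of each death that is "replaced" under partial Malthusian constraints
% (0.2 is the illustrative figure from point 1, not an estimate)
\[
  \text{net lives lost per death} = 1 - r, \qquad r = 0.2 \;\Rightarrow\; 1 - 0.2 = 0.8 ,
\]
% so the argument's force scales down by roughly a factor of (1 - r)
% rather than disappearing entirely.
```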

Side note: this argument seems to rely on some ideas about astronomical waste that I won't discuss here (I also haven't done much thinking on the topic), but it might be worth framing the post around that debate. 

Speaking as an organizer at UChicago EA, I think this is going to be hard for university organizers. 

At the end of our fellowship, we always ask the participants to take some time to sign up for 1-1 career advice with 80k, and this past quarter the other organizers and I agreed that we felt somewhat uncomfortable doing this, given that we knew 80k was leaning heavily toward AI while we presented it as simply being very good for getting advice on all types of EA careers. This shift will probably mean that we stop sending intro fellows to 80k for advice, and we will have to start outsourcing professional career advising somewhere else (not sure where that will be yet). 

Given this, I wanted to know if 80k (or anyone else) has any recommendations on what EA University Organizers in a similar position should do (aside from the linked resources like Probably Good). 
