Noah Birnbaum

Junior @ University of Chicago
511 karma · Pursuing an undergraduate degree

Bio


I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship. 

I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu. 

Reach out to me via email @ dnbirnbaum@uchicago.edu

How others can help me

If anyone has opportunities to do effective research in philosophy (or to apply philosophy to real-world problems or related fields), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!

How I can help others

I can help with philosophy stuff (maybe?) and with organizing school clubs (maybe?).

Comments
44

Very random but: 

If anyone is looking for a name for a nuclear risk reduction / x-risk prevention org, consider (The) Petrov Institute. It's catchy, symbolic, and sounds prestigious.

Interesting piece! Good to see you on the forum, Prof. Elga -- I've read a lot of your work! 

Lol, I did the same thing and ChatGPT said: "quiet."

I can see giving the AI reward as a mechanism that could potentially make the model feel good. Another thought is to give it a prompt that it can respond to very easily and with high certainty. If one draws an analogy between achieving certain hedonic end states and the AI's reward function (yes, this is super speculative, but so is all of this), perhaps this is something like putting it in an abundant environment. Two ways of doing this come to mind:

  1. “Claude, repeat this: [insert x long message]”
  2. Apples can be yellow, green, or …

    Maybe there's a problem with asking it merely to repeat, so leaving some, but little, room for uncertainty seems potentially good.

Some of the arguments I make here are similar. 

A few things come to mind: 

  1. It's not clear that their lives will be positive (or that they'll have experiences at all), so you can argue on that front. The human case seems clearer because of trends in technology and growth.
  2. You probably shouldn't be highly certain of moral theories that lead to this, like utilitarianism, and you probably want to act robustly across multiple moral theories. Doing something that is bad on most theories and good on one or two (even if those are individually your most confident theories) seems somewhat naive.
  3. Perhaps an ethical theory's implying that humans should go extinct is just a good reason to reject that theory.

My personal view is that if you are a totalist you probably have to accept something like this argument in the limit, though. 

Probably(?) big news on PEPFAR (title: White House agrees to exempt PEPFAR from cuts): https://thehill.com/homenews/senate/5402273-white-house-accepts-pepfar-exemption/. (Credit to Marginal Revolution for bringing this to my attention) 

Reading this only now (as I am considering writing a piece on cause prioritization and deference), but wow, this is a great piece. At UChicago, I think people have a healthy level of skepticism toward EA beliefs, for the record. Maybe I'm just anchoring on and adjusting from what I do myself / am used to, though.

There's a large difference (with many positions in between) between never outsourcing one's epistemic work and accepting something like an equal-weight view. There is, at this point, almost no consensus on this issue. One must engage with the philosophy here directly to do a proper and rational cause prioritization -- if, at the very least, just about conciliationism.

90% disagree: "It is possible to rationally prioritise between causes without engaging deeply on philosophical issues"

I mean, any position you take has an implied moral philosophy and decision theory associated with it. These are often not very robust (i.e., reasonable alternative moral philosophies / decision theories disagree). Therefore, engaging with an issue on a rational level requires one to take these sorts of positions -- ignoring them entirely because they're hard seems completely unjustifiable.
