
Noah Birnbaum

Sophomore @ University of Chicago
182 karma · Pursuing an undergraduate degree

Bio


I am a sophomore at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship. 

I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu. 

How others can help me

If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real life and related fields), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!

How I can help others

I can help with philosophy questions (maybe?) and with organizing school clubs (maybe?).

Comments (21)

As an organizer at UChicago EA, I think this is going to be hard for university organizers.

At the end of our fellowship, we always ask participants to take some time to sign up for 1-1 career advice with 80k. This past quarter, the other organizers and I agreed that we felt somewhat uncomfortable doing this, given that we knew 80k was leaning heavily into AI while we presented it as simply being very good for advice on all types of EA careers. This shift will probably mean we stop sending intro fellows to 80k for advice, and we will have to start outsourcing professional career advising somewhere else (not sure where this will be yet).

Given this, I wanted to ask whether 80k (or anyone else) has any recommendations on what EA university organizers in a similar position should do (aside from the linked resources like Probably Good).

Note: I'm really unsure what I believe about the following comment, but I'm interested in hearing what others have to say about it. 

Whenever we add an additional condition on the kind of people we want (say, diversity), we sacrifice some amount of the terminal aim (getting the best people). While there are good reasons to care about diversity (optics, founder effects, making people feel more comfortable), there are also more controversial ones (for instance, in cases like grant-making, using diversity of sex or race as a proxy for a "more diverse outlook" on a particular subject). Let's call the optics/founder-effects kind "instrumental diversity" and the more-diverse-outlook kind simply "diversity." Given this framing, I think two points are important:

Note: I understand that this framing is a bit odd, because diversity of knowledge/experience is also said to be good instrumentally. I still wanted to make a separate conceptual category for it because 1) it's more controversial and 2) some considerations may apply to it that don't apply to other constraints.

  1. Some argue that diversity is a powerful meme that becomes hard to resist once you accept some of its premises -- this sort of thing seems particularly apt for value drift. Perhaps this means EAs should be more hesitant to factor diversity (as opposed to instrumental diversity) into decisions such as hiring.
  2. Conditional on someone deciding to give some weight to diversity, I think it should be made clearer when a consideration is a diversity point rather than an instrumental diversity point, as the former is more controversial.

I'm interested in hearing what others have to say about this -- especially if you think this comment overestimates how much EAs care about diversity (vs. instrumental diversity). I'm also interested in hearing about reasons diversity might be important that I'm missing.

Good point. Will change this when it’s not midnight. Thanks! 

Thanks for the nice comment. Yeah, I think this was more about laying out the option space.

All very interesting points! 

Enjoyed this article a lot, and I think the framing of the "root problem objection" is an underrated one!  

Thanks for the response. 

The part I'm still stuck on is that this last point about the implicit tradeoff in one's offset seems crucial. The degree of offsetting is based entirely on that tradeoff (maybe with some risk aversion under different moral theories), but if you put that much into offsetting, then it seems you have a major moral or epistemic disagreement with those who are donating in the first place. If that is the case, something has to give: either they shouldn't offset nearly this much, or they shouldn't donate to AMF at all.

While I'm here, I also wanted to thank you for writing this post. Super interesting and thoughtful, and I've already shared it with a bunch of people!

Perhaps I’m misunderstanding something, so please correct me if I’m wrong: 

If one accepts all these assumptions, why would the best course of action be to offset AMF donations rather than to avoid donating to AMF in the first place? 

If ITNs cause vastly more harm to mosquitoes than they do good for humans, wouldn't this imply that AMF is not just a weak investment but actually a net-negative intervention? It seems these numbers, if taken seriously, suggest AMF should be deprioritized rather than merely balanced with shrimp welfare donations.

I assume this is mostly about hedging against uncertainty under different moral theories, but accepting this tradeoff (offsetting rather than counterfactually giving more to AMF) seems to imply an exchange rate on which you should never make the initial donation.

I'm confused about what sort of epistemic/moral uncertainty theory someone would need in order to offset the way you propose. To be honest, I've somewhat confused myself with this comment, but I hope it's helpful(?)
