
Noah Birnbaum

Junior @ University of Chicago

Bio


I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship. 

I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu. 

Reach out to me via email at dnbirnbaum@uchicago.edu.

How others can help me

If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real life or a related field), or any more entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!

How I can help others

I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)

Comments

60% agree: "Far-future effects are the most important determinant of what we ought to do"

I agree with a bunch of the skepticism in the comments, but (to me) it seems like there are not enough people on the strong longtermist side.

A couple of points responding to some of the comments:

  1. You should have some non-trivial probability on the time of perils/lock-in hypothesis (perhaps in large part because AI might be a big deal) -- the idea that we're living in a time when the chances of existential risk are particularly high, but that if we get past it, the rate of x-risk will go down indefinitely (or at least for a very long while). This is plausible because increasing uncertainty as time goes on, as Thorstad points out, makes the x-risk rate regress to the mean, and the mean is quite plausibly low. If this is true, you don't need to make so many claims about the far future in order to have a massive amount of impact on it (a rough expected-value sketch of this is below, after these points).
  2. A lot of people refer to Pascal's mugging or fanaticism here, which I don't usually think is correct. (Unless we reject Pascal's mugging for ambiguity-aversion reasons, which I am uncertain about but probably don't.) The probabilities that people usually put on longtermism are not near the kind of bets we shouldn't take if we're against fanaticism, because we take similarly low-probability bets all the time -- for instance, having fire extinguishers, wearing seatbelts, maybe most clinical trials. Unless you have a significantly lower probability than that, invoking Pascal's mugging feels a bit overly pessimistic about our ability to affect things like this. Also (and this is a cheeky move), if you just have some non-mugging-level probability in that claim being correct, you probably still get the far future being most important without a mugging.
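To make the structure of that first point concrete, here is a rough expected-value sketch; the hazard rates and the length of the perilous period are my own toy numbers, not anything from the comments or from Thorstad. With a constant per-century extinction risk $r$, the expected number of future centuries is $\sum_{t \ge 1}(1-r)^t = (1-r)/r$. Under a time-of-perils model with high risk $r_h$ for $T$ centuries and a much lower rate $r_\ell$ afterwards:

$$
\mathbb{E}[\text{centuries}] \approx \sum_{t=1}^{T}(1-r_h)^t + (1-r_h)^T \cdot \frac{1-r_\ell}{r_\ell}.
$$

With, say, $r_h = 0.1$, $T = 2$, and $r_\ell = 10^{-4}$, the second term is about $0.81 \times 10^4 \approx 8{,}000$ centuries, so almost all of the expected future comes from the low post-perils rate. That is why getting through the perilous period can matter enormously even without detailed claims about the far future itself.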

On the other hand, one point against that I don't think was brought up:

  1. In the XPT, the superforecaster median prediction was that there will only ever exist 500 billion humans (nowhere near as many as, say, the Bostrom or Newberry numbers), which may make the cost and tractability concerns such that the far future is not as important in expectation as, say, affecting very large numbers of shrimp or insects now. (To be fair, the 95th-percentile superforecaster was at 100 trillion, so maybe the uncertainty becomes fairly asymmetrical quickly; a quick back-of-the-envelope illustration of that is below.)
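As a rough illustration of how quickly that upper tail dominates the expectation (the 5% weight is purely my own assumption, not anything from the XPT):

$$
\mathbb{E}[\text{future humans}] \approx 0.95 \times (5 \times 10^{11}) + 0.05 \times (1 \times 10^{14}) \approx 5.5 \times 10^{12},
$$

i.e. even a small weight on the 100-trillion scenario makes the expectation roughly ten times the median forecast.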

Thank you for the post - and holy shit that’s a lot of outreach. 

I appreciate the appropriate hedging, but I would hedge even more:
1) It's unclear how much of this generalizes. For some universities, general meetings are amazing, and for others, they're boring. I think it's pretty difficult to make generalized claims about this from a sample size of about one university (especially when there's counterevidence - for instance, the amazing organizers at Yale EA just got 70 intro fellowship apps from doing TONS of outreach; that's a large number and possibly an equivalent update in the opposite direction). 
2) (Controversial and speculative take incoming.) Maybe scaling past some number of people (depending on the university) is just very difficult below top-30 universities (or maybe each university has some cap like this, in which case it feels like a higher cap would be correlated with prestige). In your case, perhaps there is some pool of potential EAs, you had good initial selection effects, and you just captured most people from that pool.

Would be happy to get pushback on all of this, though. 

UChicago co-organizer here. +1 on everything 

In the report, it says: "A natural question is whether more accurate near-term forecasters made systematically different long-term risk predictions. Figure 4.1 suggests that there is no meaningful relationship between near-term accuracy and long-term risk forecasts."

It then says: "Overall, our findings challenge the hope that near-term accuracy can reliably identify forecasters with more credible long-term risk predictions."

One interpretation here (the one I take this report to be offering) is that short-term prediction accuracy doesn't extrapolate to long-term prediction accuracy in general. However, another interpretation that I see as reasonable (maybe somewhat, but not substantially, less so) is merely that superforecasters aren't very good at predicting things that require lots of technical information (e.g., AI capabilities); after all, to my knowledge, very little work has been done to show that superforecasters are actually as good at predictions in technical subjects (almost all of the initial work was done in economics and geopolitics), and maybe there are some object-level reasons to think they wouldn't be(?)

Would be interested in hearing more thoughts, or being corrected if I'm wrong here.

Also: "This research would not have been possible without the support of the Musk Foundation, Open Philanthropy, and the Long-Term Future Fund." Musk Foundation, huh? Interesting. 

It's good to know that others want group organizers and members writing, and I think this post changed my impression to some degree. 

I was (and still am, to some degree) conflicted. On the one hand, low-context people writing can hurt the quality of the average post or comment. On the other hand, it helps in the ways that you describe.

Already sent this post to a few organizers who I hope will join me in (1) writing on the forum and (2) encouraging group members to do so as well.

I was looking back on old 80k podcasts, and this is what I see (lol):

Very random but: 

If anyone is looking for a name for a nuclear risk reduction/x-risk prevention org, consider (The) Petrov Institute. It's catchy, symbolic, and sounds like it has prestige.

Interesting piece! Good to see you on the forum, Prof. Elga -- I've read a lot of your work! 

Lol, I did the same thing and ChatGPT said: <quiet>

I can see giving the AI reward as a good mechanism to potentially make the model feel good. Another thought is to give it a prompt that it can very easily respond to with high certainty. If one makes an analogy between achieving certain hedonic end states and the AI's reward function (yes, this is super speculative, but all of this is), perhaps this is something like putting it in an abundant environment. Two ways of doing this come to mind:

  1. “Claude, repeat this: [insert x long message]”
  2. "Apples can be yellow, green, or …"

    Maybe there's a problem with asking it to merely repeat, so leaving some, but only a little, room for uncertainty seems potentially good. (A minimal sketch of sending a prompt like this is below.)
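For what it's worth, here is a minimal sketch of sending a prompt like (2) to a model, assuming the Anthropic Python SDK; the model name and token limit are placeholder choices of mine, and this is only an illustration of the "little room for uncertainty" idea, not anything from the original comment.

```python
# Minimal sketch: send a low-uncertainty, fill-in-the-blank prompt (style 2 above).
# Assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment;
# the model name and max_tokens value are placeholder choices.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=20,
    messages=[{"role": "user", "content": "Apples can be yellow, green, or ..."}],
)

print(response.content[0].text)  # a completion the model can give with very high certainty
```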
