I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.
I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu.
Reach out to me via email at dnbirnbaum@uchicago.edu.
If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real life / a related field), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!
I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)
Thank you for the post - and holy shit that’s a lot of outreach.
I appreciate the hedging here, but I would hedge even more:
1) It’s unclear how much of this stuff generalizes. For some universities, general meetings are amazing, and for others, they’re boring. I think it’s pretty difficult to make generalized claims about this from a sample size of about one university, especially when there’s counterevidence: for instance, the amazing organizers at Yale EA just got 70 intro fellowship applications from doing TONS of outreach. That’s a large number, and it’s possibly an equivalent update in the opposite direction.
2) (Controversial and speculative take incoming.) Maybe scaling past some number of people (depending on the university) is just very difficult below top-30 universities (or each university has some cap like this, and if so, it feels like a higher cap would be correlated with prestige). In your case, perhaps there is some fixed pool of potential EAs, you had good initial selection effects, and you simply captured most of that pool.
Would be happy to get pushback on all of this, though.
In the report, it says: "A natural question is whether more accurate near-term forecasters made systematically different long-term risk predictions. Figure 4.1 suggests that there is no meaningful relationship between near-term accuracy and long-term risk forecasts."
It then says: "Overall, our findings challenge the hope that near-term accuracy can reliably identify forecasters with more credible long-term risk predictions."
One interpretation here (the one I take this report to be offering) is that short-term prediction accuracy doesn't extrapolate to long-term prediction accuracy in general. However, another interpretation that I see as reasonable (maybe somewhat, but not substantially, less so) is merely that superforecasters aren't very good at predicting things that require lots of technical information (e.g., AI capabilities). After all, to my knowledge, very little work has been done to show that superforecasters are actually as good at predictions in technical subjects (almost all of the initial work was done in economics and geopolitics), and maybe there are some object-level reasons to think they wouldn't be(?)
Would be interested in hearing more thoughts, or in being corrected if I'm wrong here.
Also: "This research would not have been possible without the support of the Musk Foundation, Open Philanthropy, and the Long-Term Future Fund." Musk Foundation, huh? Interesting.
It's good to know that others want group organizers and members writing, and I think this post changed my impression to some degree.
I was (and to some degree still am) conflicted. On the one hand, low-context people writing can hurt the quality of the average post or comment. On the other hand, it helps in the ways you describe.
I've already sent this post to a few organizers whom I hope will join me in (1) writing on the forum and (2) encouraging group members to do so as well.
I can see giving the AI reward as a good mechanism for potentially making the model feel good. Another thought is to give it a prompt that it can very easily respond to with high certainty. If one draws an analogy between achieving certain hedonic end states and the AI's reward function (yes, this is super speculative, but all of this is), perhaps this is something like putting it in an abundant environment. Two ways of doing this come to mind: asking it to merely repeat something, or giving it a completion it can finish with near certainty, like "Apples can be yellow, green, or …".

Maybe there's a problem with asking it to merely repeat, so leaving some, but very little, room for uncertainty seems potentially good.
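To make "respond with near certainty" a bit more concrete, here's a minimal sketch of one way you could measure it (this is purely my own illustration, not anything from the post): read the model's certainty off the probability of its single most likely next token. GPT-2 via Hugging Face transformers is assumed here only because it's small and easy to run; any causal LM would do.

```python
# Minimal sketch (my own illustration): operationalize "a prompt the
# model can answer with near certainty" as the probability mass the
# model puts on its single most likely next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Apples can be yellow, green, or"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

probs = torch.softmax(next_token_logits, dim=-1)
top_prob, top_id = probs.max(dim=-1)

# A high top_prob means little room for uncertainty -- roughly the
# "abundant environment" intuition above, on this (very loose) analogy.
print(tokenizer.decode([int(top_id)]), float(top_prob))
```

On this reading, "leaving some but little room for uncertainty" would just mean picking prompts whose top next-token probability is high but not 1.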
I agree with a bunch of the skepticism in the comments, but (to me) it seems like there are not enough people on the strong-longtermist side.
A couple points responding to some of the comments:
On the other hand, one point against it that I don't think was brought up: