I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.
I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu.
Reach out to me via email: dnbirnbaum@uchicago.edu
If anyone has opportunities to do effective research in philosophy (or in applying philosophy to real life or a related field), or any more entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!
I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)
I can see giving the AI reward as a plausible mechanism for making the model feel good. Another thought is to give it a prompt that it can respond to very easily and with high certainty. If one draws an analogy between achieving certain hedonic end states and the AI's reward function (yes, this is super speculative, but so is all of this), perhaps this is something like putting it in an abundant environment. Two ways of doing this come to mind:
Apples can be yellow, green, or …
Maybe there's a problem with asking it merely to repeat, so leaving some, but little, room for uncertainty seems potentially good.
A few things come to mind:
My personal view, though, is that if you are a totalist, you probably have to accept something like this argument in the limit.
Probably(?) big news on PEPFAR (title: White House agrees to exempt PEPFAR from cuts): https://thehill.com/homenews/senate/5402273-white-house-accepts-pepfar-exemption/. (Credit to Marginal Revolution for bringing this to my attention)
Reading this only now (as I am considering writing a piece on cause prioritization and deference), but wow, this is a great piece. At UChicago, I think people have a healthy level of skepticism toward EA beliefs, for the record. Maybe I'm just anchoring and adjusting from what I do myself / am used to, though.
There's a large difference (with many positions in between) between never outsourcing one's epistemic work and accepting something like an equal-weight view. There is, at this point, almost no consensus on this issue. To do a proper and rational cause prioritization, one must engage with the philosophy here directly -- at the very least, on conciliationism.
It is possible to rationally prioritise between causes without engaging deeply on philosophical issues
I mean, any position you take has an implied moral philosophy and decision theory associated with it. These are often not very robust (i.e., other reasonable moral philosophies and decision theories disagree). To engage with an issue on a rational level therefore requires taking these sorts of positions -- ignoring them entirely because they're hard seems completely unjustifiable.
Very random but:
If anyone is looking for a name for a nuclear risk reduction / x-risk prevention org, consider (The) Petrov Institute. It's catchy, symbolic, and sounds like it has prestige.