Saul Munn

@ Manifest, Manifund, OPTIC
670 karma · Joined · Pursuing an undergraduate degree · Working (0-5 years)
saulmunn.com

Comments

(also — thanks for taking the time to write this out & share it. these sorts of announcement posts don't just magically happen!)

How will this change affect university groups currently supported by Open Philanthropy that are neither under the banner of AI safety nor EA? The category on my mind is university forecasting clubs, but I'd also be keen to get a better sense of this for e.g. biosecurity clubs, rationality clubs, etc.

[epistemic status: i've spent about 5-20 hours thinking by myself and talking with rai about my thoughts below. however, i spent fairly little time actually writing this, so the literal text below might not track my views as closely as my other comments do.]

IMO, Sentinel is one of the most impactful uses of marginal forecasting money.

some specific things i like about the team & the org thus far:

  • nuno's blog is absolutely fantastic — deeply excellent, and there are few blogs i'd recommend more highly
  • rai is responsive (both in terms of time and in terms of feedback) and extremely well-calibrated across a variety of interpersonal domains
  • samotsvety is, far and away, the best forecasting team in the world
  • sentinel's weekly newsletter is my ~only news source
    • why would i seek anything but takes from the best forecasters in the world?
    • i think i'd be willing to pay at least $5/week for this, though i expect many folks in the EA community would be happy to pay 5x-10x that. the newsletter is currently free (!!)
    • i'd recommend skimming their latest newsletter to get a sense of the content/scope/etc
  • linch's piece sums up my thoughts around strategy pretty well

i have the highest crux-uncertainty and -elasticity around the following, in (extremely rough) order of impact on my thought process:

  • do i have higher-order philosophical commitments that swamp whatever Sentinel does? (for ex: short timelines, animal suffering, etc)
  • will Sentinel be able to successfully scale up?
  • conditional on Sentinel successfully forecasting a relevant GCR, will Sentinel successfully prevent or mitigate the GCR?
  • will Sentinel be able to successfully forecast a relevant GCR?
  • how likely is the category of GCRs that sentinel might mitigate to actually come about? (vs no GCRs, or GCRs that are totally unpredictable/unmitigable)

i’ll add $250, with exactly the same commentary as austin :)

to the extent that others are also interested in contributing to the prize pool, you might consider making a manifund page. if you’re not sure how to do this or just want help getting started, let me (or austin/rachel) know!

also, you might adjust the “prize pool” amount at the top of the metaculus page — it currently reads “$0.”

epistemic status: extremely quickly written thoughts, haven't thought these through deeply, these are mostly vibes. i spent 10 minutes writing this out. i do not cite sources.

  • seems like non-human animals are suffering much more than humans, both in quantity of beings suffering & extent of suffering per being
    • it might be that non-human animals are less morally valuable than humans — i think i buy into this to some extent, but you'd have to buy into it to a ridiculously extreme extent to think that humans are suffering more than non-human animals in aggregate (see the rough sketch below this list)
  • seems like animal welfare has been pretty tractable — in particular e.g. shrimp or insect welfare, where cost-effectiveness differences of orders of magnitude seem plausible
  • it seems like there's currently substantially more of a global focus (in terms of $ for sure, but also in terms of general vibes) on global health than on animal welfare, even holding suffering between the two groups constant
  • i generally feel pretty cautious about expanding into new(er) causes, for epistemic modesty reasons (both empirical & moral uncertainty)
    • this is particularly true for the sub-cause-areas within animal welfare that seem most promising, like shrimp & insect welfare as well as wild animal welfare
    • this is what's preventing me from moving the dial ~all the way to the right
  • some things this question doesn't take into account:
    • within each of these areas, how is the $100mm being spent?
    • how would other funders react to this? would e.g. some other funder pull out of [cause] because $100mm just appeared?
    • etc — though i don't think that these questions are particularly relevant to the debate
  • some cruxes around which i have the most uncertainty:
    • extent to which there continue to be tractable interventions in AW (compared to GH)
    • extent to which i believe that non-human lives have moral significance
    • probably some others that i'm not thinking of
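
(to make the "ridiculously extreme extent" point above concrete, here's a quick BOTEC sketch in python. the population figures (~8e10 farmed land animals/yr, ~4e11 farmed shrimp/yr) are loose illustrative assumptions of mine, not claims from the debate:)

```python
# rough BOTEC: how hard would you have to discount animals' moral weight
# before aggregate human suffering exceeds aggregate animal suffering?
# all figures below are loose illustrative assumptions.

HUMANS = 8e9                # approx. global human population
FARMED_LAND_ANIMALS = 8e10  # assumed land animals farmed per year
FARMED_SHRIMP = 4e11        # assumed shrimp farmed per year

def breakeven_discount(animal_count, human_count=HUMANS, suffering_ratio=1.0):
    """Moral-weight discount at which aggregate animal suffering
    merely equals aggregate human suffering.

    suffering_ratio = (per-animal suffering) / (per-human suffering).
    You'd have to discount animals by MORE than this factor for humans
    to come out ahead in aggregate.
    """
    return (animal_count * suffering_ratio) / human_count

print(breakeven_discount(FARMED_LAND_ANIMALS))          # 10.0 (land animals alone)
print(breakeven_discount(FARMED_LAND_ANIMALS + FARMED_SHRIMP,
                         suffering_ratio=0.1))          # 6.0 (incl. shrimp, at 1/10th the suffering)
```

i.e. even granting each animal only a tenth of a human's per-being suffering, you'd still have to value animals at less than ~1/6 of a human before humans suffer more in aggregate, under these (very rough) numbers.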

i'd be curious to see the results of e.g. focus groups on this — i'm just now realizing how awful a name "lab grown meat" is, re: the connotations.

There has been a lot of discussion of this, and some studies were done on different names.

could you link to a few of the discussions & studies?
