Saul Munn

@ Manifest, Manifund, OPTIC
892 karma · Pursuing an undergraduate degree · Working (0–5 years)
saulmunn.com

Comments (92)

hey, thanks for commenting!

this was a retreat for west coast ea uni students — we focused on LA & Bay Area schools, but would’ve been happy to have e.g. oregon schools there too.

re: filtering, our main reflection here was that, given an existing pool of high-context folks, retreats can be a useful way for those folks to coordinate & get to know each other better in ways that are hard to replicate elsewhere. and although it’s possible to use retreats to help newcomers get acquainted with EA, we thought other pathways (intro fellowships, 1:1s with organizers, reading intro material online, etc) would be more effective/scalable for newcomers, while avoiding the problem of making the whole ambient atmosphere lower context.

so — we definitely don’t think all retreats should filter out newcomers, just that retreats which do some filtering will be able to provide benefits to attendees that retreats which don’t do filtering won’t be able to provide.

maybe a relevant analogy here is EAGs vs EAGx’s: the former has a higher bar, but both are super useful at what they’re aiming for.

Also interested in this question! CC @Nathan Young who might know

UC Berkeley EA is hosting a west coast uni student EA retreat on april 10-12, with ~50 attendees from Berkeley, Stanford, UCLA, UCI, UCSD, & more, as well as special guests like Matt Reardon, Jake McKinnon, Jesse Gilbert, Julie Steele, Adam Khoja, Richard Ren, & more...

...but we only know to reach out to people who're involved with their uni's clubs. so: if you're interested in attending, book a 5-10 minute chat with alex or aiden :)

some examples of gaps in our outreach:

  • unis that don't have an EA club
  • students who haven't joined their uni's EA club
  • transfers to west-coast unis
  • students who're on leave from their uni and presently living on the west coast
  • high-schoolers who'll soon be starting at west coast unis

we won't be able to take everyone, but reading the ea forum is a pretty positive indicator that you'd be a good fit!

some further & updated thoughts, written in ~30 min, are below. canonical version lives here.


Here’s a frame I’ve found helpful for thinking about effective altruism:

  • When I look inside myself, I notice that I care about a lot of things.
    • You could also reasonably replace “caring” with “wanting,” “preferring,” “valuing,” “desiring,” “having goals,” etc. I’m okay being loose.
    • Some examples of things I care about:
      • I want my sister to have an excellent career.
      • I’m hungry, and want some food.
      • I want to be valued by people I respect.
      • I want my dogs to have enjoyable lives.
      • (And many, many more).
    • (It’s often useful to be introspective/clear-eyed about what you care about, what that ontology looks like, which values are instrumental to which other values, etc., but I won’t be doing that here, and indeed I think it might be anti-helpful in this particular frame at this particular time. Stay with me until the end.)
  • Sort-of by definition, I want more of the things I care about. I see my life as a difficult, high-level optimization problem aimed at making decisions which, given my resources at various times, increase my values across time.
  • Some of the things I care about — like wanting food because I’m hungry — are fundamentally oriented at myself. And I take actions to do better along these axes.
    • Some examples of actions:
      • Reading a book on tax strategies
      • Learning how to cook
      • Asking people for feedback on my sartorial choices
      • etc
    • And in general, I try to be effective at getting what I want, here — that is, I aim to achieve these kinds of goals/values/preferences to as great a degree as possible.
  • But other things I care about — like wanting my sister to have an excellent career, or my dogs to have enjoyable lives — are fundamentally oriented at others-by-their-lights. And I take actions to do better along these axes, too.
    • These motivations often look starkly different across a lot of different situations.
    • For some of these altruistic motivations, it just so happens that some lovely dynamics have coalesced such that there’s an existing group of people / infrastructure / etc who have worked & are working quite hard toward helping me get what I want w/r/t some of those things I care about that are oriented at others-by-their-lights. In particular, I haven’t found any community which is more effective at helping me achieve the things I care about that are oriented at others-by-their-lights than this one.

Why do I like this frame?

  • Because it’s apparent that I care about quite a few things. It becomes evident quickly that totalizing stances toward EA are just not worth it; a bad trade; just getting less of what I want.
    • In particular, I think this kind of frame can be validating toward folks who’ve gone quite far into EA and repressed the values that they in fact have in other areas of their life. (I think I was in this camp ~two years ago.)
  • There are interesting subproblems that come into clearer view, e.g.:
    • On the margin, when should my resources go toward the different things that I care about?
    • What actions would get me more access to the things that I want with greater robustness (i.e. getting me closer to many different things I want, all at once)?
    • etc

Started something sorta similar about a month ago: https://saul-munn.notion.site/A-collection-of-content-resources-on-digital-minds-AI-welfare-29f667c7aef380949e4efec04b3637e9?pvs=74

What, concretely, would that involve? / What, concretely, are you proposing?

I think affecting P(things go really well | no AI takeover) is pretty tractable!

What interventions are you most excited about? Why? What are they bottlenecked on?

PurpleAir collects data from a network of private air quality sensors. Looks interesting, and possibly useful for tracking rapid changes in air quality (e.g. from a wildfire).
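for the curious, here's a minimal sketch of how you might build a query against PurpleAir's sensor API — the v1 endpoint, the `X-API-Key` header, and the field names below are assumptions on my part, so check PurpleAir's current API docs before relying on them:

```python
# Sketch: build a PurpleAir API request URL for PM2.5 readings in a region.
# Assumed details (verify against PurpleAir's docs): the /v1/sensors
# endpoint, the bounding-box params, and the "pm2.5_atm" field name.
from urllib.parse import urlencode

API_BASE = "https://api.purpleair.com/v1/sensors"

def build_sensors_url(fields, nwlat=None, nwlng=None, selat=None, selng=None):
    """Build a GET URL for sensor readings, optionally within a bounding box."""
    params = {"fields": ",".join(fields)}
    if None not in (nwlat, nwlng, selat, selng):
        # Restrict to a region, e.g. around an active wildfire.
        params.update({"nwlat": nwlat, "nwlng": nwlng,
                       "selat": selat, "selng": selng})
    return API_BASE + "?" + urlencode(params)

# Hypothetical bounding box roughly covering the SF Bay Area.
url = build_sensors_url(["name", "pm2.5_atm"], 38.0, -122.6, 37.6, -122.2)
print(url)
```

to actually fetch data you'd send a GET request to that URL with your API key in an `X-API-Key` header, and poll periodically to watch for rapid changes.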

(written v quickly, sorry for informal tone/etc)

i think that a happy medium is small-group conversations (that are useful, effective, etc) of 3–4 people. this includes 1-1s, but the vibe of a Formal, Thirty Minute One on One is very different from the vibe of floating through 10–15 conversations of 3–4 people in a day, each lasting a varying amount of time.

  • much more information can flow with 3-4 ppl than with just 2 ppl
  • people can dip in and out of small conversations more than they can with 1-1s
  • more-organic time blocks means that particularly unhelpful conversations can end after 5-10m, and particularly helpful ones can last the duration that would be good for them to last (even many hours!)
  • 3-4 person conversations naturally select for a good 1-1. once 1-2 people have left a 3-4 person conversation, the conversation is then just a 1-1 of the two people who've engaged in the conversation longest — which seems like some evidence of their being a good match for a 1-1.

however, i think that this is operationally much harder to do for organizers than just 1-1s. my understanding is that this is much of the reason EAGs (& other conferences) do 1-1s, instead of small group conversations.

  • i think Writehaven did a mediocre job of this at LessOnline this past year (but, tbc, it did vastly better than any other piece of software i've encountered).
  • i think Lighthaven as a venue forces this sort of thing to happen, since there are so so so many nooks for 2-4 people to sit and chat, and the space is set up to make 10+ person conversations less likely to happen.

i know that The Curve (from @Rachel Weinberg) created some "Curated Conversations": they manually selected people to have predetermined conversations for some set amount of time. iirc this was typically 3-6 people for ~1h, but i could be wrong on the details. rachel: how did these end up going, relative to the cost of putting them together?

[srs unconf at lighthaven this sunday 9/21]

Memoria is a one-day festival/unconference for spaced repetition, incremental reading, and memory systems. It’s hosted at Lighthaven in Berkeley, CA, on September 21st, from 10am through the afternoon/evening.

Michael Nielsen, Andy Matuschak, Soren Bjornstad, Martin Schneider, and about 90–110 others will be there — if you use & tinker with memory systems like Anki, SuperMemo, Remnote, MathAcademy, etc, then maybe you should come!

Tickets are $80 and include lunch & dinner. More info at memoria.day.
