Building out research fellowships and public-facing educational programming for lawyers
Tons of overlap between how the vlogbrothers think about their impact and how EA does. Great to see.
In particular, there was one episode of their podcast recently (I think it was "The Green Brothers are Often Wrong") where they got comically close to describing themselves as EA, remarking that John was the heart (caring a lot about people and "what was most important") and Hank was the head (really concerned with science, truth, progress, and reasoning).
They are of course aware of EA via large EA participation in their PFA donation drive, but I believe they have a distant, caricatured view of the community itself. I heard of a livestream where they were asked about it and John said something to the effect of "there's a lot of harm you can do if you only think of people as objects of analysis for you to intervene on when you should be dealing with them directly and empowering them in the ways that they decide they want to be empowered."
My view is that it's worth it, because there is a danger of people just jumping into jobs that have "AI" or even "AI security/safety" in the name, without grappling with tough questions around what it actually means to help AGI go well or prioritising between options based on expected impact.
I appreciate the dilemma and don't want to imply this is an easy call.
For me the central question in all of this is whether you foreground process (EA) or conclusion (AGI go well). It seems like the whole space is uniformly rushing to foreground the conclusion. It's especially costly when 80k – the paragon of process discourse – decides to foreground the conclusion too. Who's left as a source of wisdom foregrounding process?
I know you're trying to do both. I guess you can call me pessimistic that even you (amazing Arden, my total fav) can pull it off.
Thanks Vanessa, I completely agree on the meta level. No one owes "EA" any allegiance just because they might have benefitted from it in the past or from its intellectual progeny, and people are of course generally entitled to change their minds and endorse new premises.
Your comment *is* a very meta comment, though, and leaves open the possibility that you're post hoc rationalizing following a trend that I see as starting with Claire Zabel's post "EA and Longtermism: Not Cruxes for Saving the World," which I see as pretty paradigmatic of "the particular ideas that got us here (AI X-safety) no longer [are/feel] necessary, and seem inconvenient to where we are now in some ways, so let's dispense with them."
There could be fine object-level reasons for changing your mind on which premises matter, of course, and I'm extremely interested to hear them. In the absence of those object-level reasons, though, I worry!
I'm still trying to direct the non-selfish part of myself towards scope-sensitive welfarism in a rationalist-y way. For me that's EA. Others, including maybe you, seem to construe it as something narrower than that, and I wonder both what that narrower conception is and whether it's fair to the public meaning of the term "Effective Altruism."
Some combination of not having a clean thesis I'm arguing for, not actually holding a highly legible position on the issues discussed, and being a newbie writer. Not trying to spare people's feelings. More just expressing some sentiments, pointing at some things, and letting others take from that what they will.
If there were a neat thesis, it'd be:
Agree on most of this too. I wrote too categorically about the risk of "defunding." You will be on a shorter leash if you take your 20-30% independent-view discount. I was mostly saying that funding wouldn't go to zero and crash your org.
I further agree on cognitive dissonance + selection effects.
Maybe the main disagreement is over whether OP is ~a fixed monolith. I know people there. They're quite EA in my accounting, much like many leaders at grantee orgs. There's room in these joints. I think current trends are driven by "deference to the vibe" on both sides of the grant-making arrangement. Everyone perceives plain speaking about values and motivations as cringe and counterproductive, and it thereby becomes the reality.
I'm sure org leaders and I have disagreements along these lines, but I think they'd also concede they're doing some substantial amount of deliberate de-emphasis of what they regard as their terminal goals in service of something more instrumental. They probably do disagree with me that undoing this is best all-things-considered, but I wrote the post to convince them!