Building out research fellowships and public-facing educational programming for lawyers
Some combination of not having a clean thesis I'm arguing for, not actually holding a highly legible position on the issues discussed, and being a newbie writer. Not trying to spare people's feelings. More just expressing some sentiments, pointing at some things, and letting others take from that what they will.
If there were a neat thesis, it'd be:
Agree on most of this too. I wrote too categorically about the risk of "defunding." You will be on a shorter leash if you take your 20-30% independent-view discount. I was mostly saying that funding wouldn't go to zero and crash your org.
I further agree on cognitive dissonance + selection effects.
Maybe the main disagreement is whether OP is ~a fixed monolith. I know people there. They're quite EA in my accounting, much like many of the leaders at grantees. There's room in these joints. I think current trends are driven by "deference to the vibe" on both sides of the grant-making arrangement. Everyone perceives plain speaking about values and motivations as cringe and counterproductive, and it thereby becomes the reality.
I'm sure org leaders and I have disagreements along these lines, but I think they'd also concede that they're deliberately de-emphasizing, to a substantial degree, what they regard as their terminal goals in service of something more instrumental. They probably do disagree with me that it's best all-things-considered to undo this, but I wrote the post to convince them!
I agree with all of this.
My wish here is that specific people running orgs and projects were made of tougher stuff re following funding incentives. For example, it doesn't seem like your project is at serious risk of defunding if you're 20-30% more explicit about the risks you care about or what personally motivates you to do this work.
There are probably only about 200 people on Earth with the context x competence for OP to enthusiastically fund to lead on this work; they have bargaining power to frame their projects differently. Yet on this telling, they bow to incentives to be the brightest star by OP's standard so they can scale up and get more funding. I would just make the trade-off the other way: be smaller and more focused on things that matter.
I think social feedback loops might bend back around to OP as well if it had fewer options. Indeed, this might have been the case before FTX. The point of the piece is that I see the inverse happening; I just might be more agnostic about whether the source is OP or specific project leaders. Either or both can correct if they buy my story.
I hope my post was clear enough that distance itself is totally fine (and you give compelling reasons for that here). It's the ~implicit denial of present knowledge or past involvement in order to get that distance that seems bad for all concerned. The speaker looks shifty and EA looks like something toxic you want to dodge.
Responding to a direct question by saying "We've had some overlap and it's a nice philosophy for the most part, but it's not a guiding light of what we're doing here" seems like it strictly dominates.
An implicit claim I'm making here is that "I don't do labels" is kind of a bullshit non-response in a world where some labels are more or less descriptively useful and speakers have the freedom to qualify the extent to which the label applies.
Like I notice no one responds to the question "what's your relationship to Nazism?" with "I don't do labels." People are rightly suspicious when someone gives that answer, and there just doesn't seem to be a need for it. You can defer to the question-asker a tiny bit and give an answer that reflects your knowledge of the label, if nothing else.
Yeah one thing I failed to articulate is how not-deliberate most of this behavior is. There's just a norm/trend of "be scared/cagey/distant" or "try [too] hard to manage perceptions about your relationship to EA" when you're asked about EA in any quasi-public setting.
It's genuinely hard for me to understand what's going on here. Like, there are vastly worse ~student groups people have been part of that, viewed from their current professional outlook, don't induce this much panic. It seems like an EA cultural tic.
If your AI work doesn't ground out in reducing the risk of extinction, I think animal welfare work quickly becomes more impactful than anything AI. X-risk reduction can run through more indirect channels, of course, though indirectness generally increases the speculativeness of the x-risk story.