AI safety + community health
I wonder if measurability bias is present here. Encouragement and accountability are two crucial drivers of group success, and the most successful group leaders I know provide both for their members even though it doesn't scale. I can't say this isn't a good decision, because I don't know what the team's time is trading off against, but I'd expect this might lead to reduced group quality and growth over time.
Great news, and excited to see more effective careers organizations start and scale!
Question: why non-renewable by default? Funder diversity is obviously the ideal, but that can trade off with value alignment (especially as projects scale). Are you anticipating building longer-term partnerships with organizations that outperform other grantees?
[No expectation to respond here, but wanted to ask in case]
At this point, it’s been more than 24 hours, and CEA’s leadership team still hasn’t responded (on the EA Forum, which they run!).
I’d like to explore the idea that the CEA leader(s) involved in mishandling this case should step down. The gap between the organization’s stated goals and the choices made here is wide enough to strain the imagination. I’d like someone who has not made these catastrophic judgement errors to have an opportunity to steward community resources and growth.
It’s possible that I am overreacting, but I’m not confident that’s the case. Frances, again, thank you for your courage. Hope that you are safe and well.
A small story: I worked for several years at one of the top professional services firms and encountered tremendous political opposition to forecasting, as had some of my more senior and accomplished peers. I was surprised by this. I shouldn't have been, given that I was de facto asking a hierarchical power structure to at least partially reallocate decision-making authority. Most leaders barely tolerate manipulable data, so the threat of additional accountability is a tough sell. Possible, but tough!