[epistemic status: articulation of a position I kind of believe and think is under-articulated, but am unsure of the strength of]
I think EA has a lot of great ideas. I wish more people in the world deeply understood them, and took ~EA principles seriously. I'm very into people studying the bodies of knowledge that EA has produced, and finding friends and mentors in the ecosystem.
But I also think that EA is still a tiny corner of the world, and that there are a lot of important networks and bodies of knowledge beyond it. When I think about the optimal allocation of people who are bought into EA, I want quite a lot of them to go out and interact with different systems in the world and different peer groups: to learn from them and make connections.
In principle this should be pretty accessible. But I worry that our implicit social structures send the message "all the cool people hang around the centrally EA spaces" in a way that doesn't really support people in actually making these exploring moves while staying engaged with, and encouraged by, EA.
I think that this is one of the most important problems (if not the most important) to fix in EA messaging / status-granting.[1] Note that I don't think we want to slow down people coming into the EA bubble -- I think it's often healthy and good for people to get up to speed on a lot of stuff, to give them better context for subsequent decisions. So the challenge is to encourage people to graduate to exploring without making exploring itself so high-status that people jump directly there without first learning the cool stuff that EA has to offer.
What could we do about it? Some options:
- Encourage a narrative something like "when your EA learning slows down, that's often the time to dive back into the wider world"
- Celebrate people who follow this trajectory
- Make sure that community support structures are helpful and functional for people who have a lot of EA knowledge but are now exploring rather than working as "full-time EA professionals"
I'd be keen to see fleshed out versions of these, or other ideas.
Absent good fixes here, I'm inclined to celebrate a certain amount of EA disillusionment: it seems important that a fraction of super talented people go and explore different areas, and if that's easier to access given disillusionment with EA, then so much the worse for people's good opinions of EA. But this route seems worse than one where something else works: it creates bad feeling, and it makes it harder for people to leave exploring mode and start working with the core of the community when that's the right move.
N.B. I'm making a directional claim here. Of course it's quite possible to imagine getting to a stage where too many people go and explore, evaporating the pool of people trying to work on the most crucial things. What would be too much exploration? My guess is that in equilibrium the ideal might be for between 10% and 20% of the people who are sufficiently skilled up to do really important work in the core to be exploring instead. And a larger group around them, who can't yet find crucial work in the core (but hope to someday), should also do this. But I don't put that much stock in my numbers; I'm interested in takes from people who would go higher or lower.
[1] Another candidate: wanting people who can think for themselves, but granting social status to people who appear to come to the same conclusions as leadership.
I worry less about EAs conforming. I think it's mostly lower-competence people who are tempted to irrationally conform to grab status, while higher-competence people are tempted to irrationally diverge to grab status.[1]
I'm wary of pushing the "go out and explore" angle too much without adequately communicating just how much wisdom you can find in EA/LW. Of course there's value in exploring far and wide, but there's something uniquely valuable that you can get here, and I don't know where else you could cost-effectively get it. I also want to push against the idea that people should prioritise reading papers and technical literature before they read a bunch of wishy-washy LW posts. Don't Goodhart on the impressive and legible; just keep reading whatever you notice speeds you through the search tree the fastest (minding that you're still following the optimal tree-search strategy).
Umm, maybe think of them as "decoy prestige"? It could be useful to have downwards-legibly competent[2] people who get a lot of attention, because they'll attract the attention of people at competence levels below them, and this lets higher-competence people congregate without interference from below. Higher-competence people have an easier time discerning true competence around their own level, so decoy prestige won't dazzle them. And it's crucial for higher-competence people to find other higher-competence people to congregate with, since this fine-tunes their prestige-seeking impulses to optimise more cleanly for what's good and true.[3]
I suspect a lot of high-competence alignment researchers are suitably embarrassed that their cause has gone mainstream in EA. I'm not of their competence, but I sometimes feel like apologising for prioritising AI simply because it's so mainstream now. ("If the AI cause is mainstream now, surely I'm competent enough to beat the mainstream and find something better?")
[2] That is, their competence is legible to people way below that competence level. So even people with very low competence can tell that this person is more competent than they are.
[3] Case in point: if I were surrounded by people I judged to be roughly as competent as me at what I do,[4] I wouldn't be babbling such blatant balderdash in public. Well, I would, because I'm incorrigible, but that's beside the point.
"competent at what I do" just means they have a particular kind of crazy epistemology that I endorse.[5]
[5] The third level of meta is where all the cool footnotes hang out.
Yes. Definitely. Full agreement there. At the risk of seeming inconsistent, let me quote a former mentor of mine.
(God, I hate the rest of that poem though, haha!)