Update one year later
It's June 2023 and some people are reading this ahead of the summer, so I feel I should point out how I now think this post is suboptimal.
I still think the Bay Area is great and agree directionally with my points below. I think this post does a fine job at listing many positive sides of visiting.
However, 4 things this post doesn't do well are:
- It doesn't express much uncertainty, since I was generally overconfident a year ago.
- It doesn't list reasons why one might not benefit from going. E.g., if one doesn't have an easy "way in" into the Berkeley EA/rat scene and is not very extraverted either.
- Relatedly, I didn't realize a year ago how much of the value I got from going to the Bay was contingent on (1) the specific things I did day-to-day, (2) the specific spaces that were available to me, and (3) my personality. Other people may go to the Bay but talk to people less/not have access to a great office space/not click so well with Bay culture/etc. and get substantially less out of it as a result. I've realized since that this is actually quite a common experience.
- The post doesn't contextualize how going to the Bay compares to going to the Oxford/London/Cambridge EA hubs, which are pretty good alternatives in many ways. (I didn't have this context a year ago. I still don't have context on the Washington DC and Boston EA hubs.)
Additionally, here are 2 particular downsides of going to the Bay that I want to highlight:
- Empirically, many people dislike it (maybe as many as people who do like it). A few common reasons are:
- Everyone is an "extremely EA/rationalist" alignment bro and they don't like this kind of person/it's just not diverse enough
- Gender ratio is pretty messed up
- Concerns about it being an echo chamber/people holding extreme beliefs.
- I share the concern about echo chambers, and more broadly about epistemics in the Bay, although I'm not sure the Bay is really worse than other places in this respect. I've written about my concerns here; the Bay Area is mentioned in items 2 & 3.
Original Post
Some EAs from the bay area like to invite people they find especially promising “to come to the bay over the summer” and “learn stuff and skill up”. It’s often very unclear to the other person what exactly the summer activity being referred to is, and why exactly it has to be done in the bay. It was very unclear to me. I’ve come to the bay area now and it has had tremendous benefits for me. So now I’ll try to lay out a theory of change for “coming to the bay over the summer” that helps other people assess whether this is something they want to do.
I’ve specifically come to Berkeley, which is the general EA hotspot around here, and this is where my points are most applicable. Also, the effects of coming to the bay are pretty diffuse and hard to nail down, which is why I’m going to list a lot of factors.
1. All-year EAG
There’s lots of cool and impressive EAs around, especially alignment researchers. You can reach out to them and ask for a 1-1 chat, like at EAGs.
Coming to Berkeley also has three advantages over EAGs:
- Cool and impressive people are usually booked out at EAGs.
- Coming to Berkeley and, e.g., running into someone impressive at an office space already establishes a certain level of trust since they know you aren’t some random person (you’ve come through all the filters from being a random EA to being at the office space).
- If you’re in Berkeley for a while you can also build up more signals that you are worth people’s time. E.g., be involved in EA projects, hang around cool EAs.
2. Great people
Besides impressive and well-known EAs, the bay also has an incredibly high concentration of amazing but less-well-known EAs. Sometimes you simply chat with someone and it turns out to be hugely valuable even though you don’t know the person and wouldn’t have known to reach out to them. The average person you interact with here is probably smarter and has better models of EA than the average person where you are based. This means you get better input from the outside in all sorts of ways, including:
- Knowledge embedded implicitly in EA culture. E.g., I learnt a lot about building my own models, considering weird ideas, and decreasing coordination costs with high trust through implicit EA culture
- Better ideas in general because smart people have smart ideas (often)
- Better input on your career plans and projects (which is where high context on EA is especially useful). Generally, if you have thoughts or ideas related to EA, talking to lots of people about them should likely be your immediate next step. You will rapidly improve, adjust, or discard them, and not talking to people will just slow you down a lot.
- Spicier takes that require a high level of EA buy-in. (Depends on you whether you think this is good.)
Caveat: Berkeley EA is a subculture and like every subculture it’s an echo chamber. There’s implicit knowledge in this culture, but there’s also cultural baggage that has somehow gotten caught in the echo chamber. Not every quirk of Berkeley EA is especially smart or rational. Some may be harmful. However, I maintain it’s a better-than-usual echo chamber with better-than-usual quirks.
3. Networking
Going to office spaces, dinners, parties etc. in Berkeley will give you a lot of networking. Networking is sort of the precursor to the last point, great people who give you great input. However, that’s not the only thing networking affords you and hence I put it here as a separate point. Being networked also affords you favors, lets you exert influence on important decisions and if/how things are done in Berkeley EA projects, and by extension EA as a whole. Being networked also leads to serendipitous encounters with even more people (which is easier if you already know a lot of people).
Meeting in-person in Berkeley lends itself much better to developing any sort of connection to people and assessing their potential than doing it virtually. (You want people to be able to assess your potential so they will want to give you cool opportunities.)
4. Shifting deprecated defaults and intuitions
If you have intellectually changed your mind, but not emotionally, you will not be very effective at acting in accordance with your new view. If you’ve updated from valuing your time at $20/h to $100/h, but you still feel like your time is cheap, you’ll often intuitively make bad tradeoffs. Surrounding yourself with people who act in accordance with the correct defaults and intuitions will shift you more towards those emotionally.
Some especially important defaults and intuitions are:
- An intuition for the value of your time
- Defaulting to viewing things in the light of EA. When you want to start a new project, buy a new thing, move to a new place, EA might not immediately occur to you as a consideration to take into account in your decision. Maybe you’d like to have this default though.
- Defaulting to maximising impact instead of satisficing. Asking “What is the most impactful EA project I could start?” instead of trying to get some EA job
- An intuition for the gravity of x-risk. Feeling this emotionally, not just knowing it intellectually
5. Information flows
There’s some information that’s hard to get quickly and in a distilled fashion the further you are from EA hubs. I’m not entirely clear on what the mechanisms are that make this so. I will just describe some of the information of this kind.
Landscape knowledge: Information on the landscape of EA, or the field of alignment, or biosecurity etc. Including: The relevant people and what they do, the existing projects/orgs and how they think about the problem, the bottlenecks of the space as a whole, disagreements between people or projects/orgs, recent developments/the trajectory the space is on as a whole, hot takes floating around.
Landscape knowledge is insanely useful. Without knowing the whole space, you can’t make informed decisions about what sub-space you want to work in (Alignment agent foundations? Empirical research?). Without knowing the bottlenecks of the space, you don’t know what’s most urgent to work on. Without knowing about relevant infrastructure, you don’t know of all the support available in different places.
I want to put special emphasis on knowing about disagreements in the field because I personally was uninformed about those for a long time within alignment. I was planning to become a technical alignment researcher, but hadn’t realised how different the premises are that different alignment efforts (MIRI, ARC, Anthropic, …) are built upon, making work on the wrong agenda effectively useless on some worldviews. I could’ve easily ended up digging into, say, MIRI’s research, only to realise very late that I actually think their approach is hopeless.
How to get hiring ready: For some organisations this is clearer than for others. Either way, it helps to talk to several people who have recently been hired by the organisation in question or who are on a good track to get there.
Opportunities: My experience has been that I’ve learned about more opportunities the closer I’ve been to EA hotspots physically and socially. So, so many fellowships, grants, scholarships, jobs, internships, retreats, summits, contests, invitations to just go and stay somewhere for a while…
6. Space for the important stuff
In normal life, you can never give space and time to Effective Altruism according to its importance. Things come up, work, friends, interests, easier ways to spend your free time. Coming to Berkeley is helpful because it sets aside weeks or months for thinking about important stuff. And the environment here pushes you towards thinking about important stuff, not away from it.
The idea of making good money-time tradeoffs, for example, was just some idea from the internet for me back in my home environment. Some weird idea as well, that felt unintuitive and socially unacceptable with the people around me. The incentive landscape was just not favourable at all to me engaging with this idea seriously and giving it space to unfold. In Berkeley on the other hand, people will bring it up to you, or it will come up because of the money-time tradeoffs people make all the time, and people want to hear your takes, and the space is made for you to think about this idea automatically. Not to mention that, whether this is a good thing or not, hearing an idea from a person is just much more engaging psychologically than reading it on a forum.
Usually all this space is also used in above-average ways since (related to 2.) the smart and highly engaged EAs around act as a filter for ideas, such that the highest quality ideas get amplified the most in Berkeley (roughly).
7. Moving towards doing ambitious EA work
(This is technically a child node of Information flows + Shifting deprecated defaults and intuitions, but it’s so important that it warrants a separate heading.)
It’s unclear from the outside:
- How desperately in need of people every EA project is, and how many projects never get started because there’s no one around to own them
- How easy it is to start a project and how secure this is relative to starting ambitious things outside of EA. Funding, advisors, a high-trust community, and social prestige are available
- How close you personally are to doing EA work / starting an EA project. People tend to overestimate how competent/experienced other people are who get cool jobs or start cool projects. Meeting these people in Berkeley helps you internalise this: everyone’s clueless
- What’s possible. Looking at what scale EA projects in the bay operate at disperses false notions of limits and helps shoot for the correct level of ambition
Even once you know these things intellectually, it’s hard to act in accordance with them before knowing them viscerally, e.g., viscerally feel secure in starting an ambitious project. Coming to Berkeley really helps with that.
8. Motivational effects
Interacting with passionate, value-aligned people on a daily basis feels very motivating and nourishing. Being able to talk about your work with others and get excited together is nice. I personally have never worked so much in my entire life and it’s by choice. Working from an EA office is ideal for these things, and also increases your productivity by taking care of meals, environmental design, various charger types, and other logistical hassle. The fact that you get social approval for being impactful also doesn’t hurt.
9. An amazing community
I personally really love the EA and rationalist community here. There’s an amazing concentration of smart and interesting people. There’s people geeking out about the concept of agency or value-alignment or consciousness, people discovering emotional work together, people doing weird shit like ecstatic dance, and lots of other types of people as well. Being part of this community has been incredibly valuable to me and has made me even more committed to EA. I’ve made many great friends here.
Concluding remarks
Possibly, some of these benefits could come from just talking to EAs influenced by the bay area, and not travelling there yourself. Probably less than 50% of the benefits though.
If this theory of change speaks to your current needs/bottlenecks, and you’ve been convinced to try coming to the bay, please contact me explaining where you are currently at and how Berkeley might help you. You can also apply for a call with Akash to speak about your plans.
I. It might be worth reflecting upon how large a part of this seems tied to something like "climbing the EA social ladder".
E.g. just from the first part, emphasis mine
Replace "EA" by some other environment with prestige gradients, and you have something like a highly generic social climbing guide. Seek cool kids, hang around them, go to exclusive parties, get good at signalling.
II. This isn't to say this is bad. Climbing the ladder to some extent could be instrumentally useful, or even necessary, for an ability to do some interesting things, sometimes.
III. But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agency.
I don't think this has any clear bottom line - I do agree that for many people caring about EA topics it's useful to come to the Bay from time to time. Compared to the original post I would probably mainly suggest also consulting virtue ethics and thinking about what sort of person you are changing yourself into: whether you, for example, most want to become "a highly cool and well networked EA" or e.g. "do things which need to be done", which are different goals.
(strongly upvoted because I think this is a clean explanation of what I think is an underrated point at the current stage, particularly among younger EAs).
Yeah, it would probably be good if people redirected this energy to climbing ladders in the government/civil service/military or important powerful corporate institutions. But I guess these ladders underpay you in terms of social credit/inner ringing within EA. Should we praise people aiming for 15y-to-high-impact careers more?
To support your point, Holden signal-boosted this in his aptitudes over paths post:
We should praise the class of worker in general but leave the individuals alone.
This feels like a surprisingly generic counterargument, after the (interesting) point about ladder climbing. "This could have opportunity costs" could be written under every piece of advice for how to spend time.
In fact, it applies less to this post than to most advice on how to spend time, since the OP claimed that the environment caused them to work harder.
(A hidden cost that's more tied to ladder climbing is Chana's point that some of this can be at least somewhat zero-sum.)
I agree with you, being "a highly cool and well networked EA" and "do things which need to be done" are different goals. This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, being "a highly cool and well networked EA" and "do things which need to be done" are pretty similar. If I wasn't so sociable and network-y, I'd probably still be running my EA reading group with ~6 participants, which is nice but not "doing things which need to be done". For technical alignment researchers, this is probably less the case, though still much more than I would've expected.
Even though these two goals may lead to similar instrumental actions (e.g. doing important work), I think these two goals grow different motivational structures inside of you. I recently wrote:
Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder. (e.g., maybe some of these folks desperate to find good people to own valuable projects that otherwise wouldn't get done should be putting more effort into recruiting outside of the Bay.)
I feel like that's a good argument for why hanging around the cool, smart people can be good for "skilling up". But a lot of the value of meeting cool, smart people seems to come from developing good models! and surely it's possible to build good models of e.g community building, AI safety by doing self-directed study, and occasionally reaching out with specific questions as they arise. I think it's important to split up the value of meeting cool, smart people into A) networking and social signalling, and B) building better models. And maybe we should be focusing on B.
Yeah I also had a strong sense of this from reading this post. It reminded me of this short piece by C. S. Lewis called The Inner Ring, which I highly recommend. Here is a sentence from it that sums it up pretty well I think:
In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?
I think the most costly hidden impact is the perception of gatekeeping that occurs with such a system as this. Gatekeeping happens in two ways: for one, those who are less able to travel for reasons such as their having to provide for their family or even their being homesick are put at a disadvantage. And two, those who are less able to schmooze (fun word!) and climb that ladder are also put at a disadvantage.
I agree, I think this is a problem, but I am not sure whether the cost of solving it (i.e. replacing the system) is too high. Much like grades in undergraduate institutions: whether one agrees with their ethicality or not, they are a fairly accurate assessment of how one might do in graduate school because the two are so similar in nature. Setting aside the argument as to whether grades should be used in either, what I am trying to say is that the social ladder within EA exists because the skills required to climb it are skills that are valued within EA. Thus, I do not think we need to care so much about the system, because I think it is actually solving the efficiency problem addressed above.
You specifically brought up the opportunity cost; the essay above said that there are a million projects going on always and not enough people to staff them. I think this opportunity cost is apt in order to weed out the people who aren’t that serious about an idea or who just aren’t yet skilled enough. Furthermore, even if this wasn’t the case, I do think EA people are pretty productive when motivated enough. From experience I can say (I could be wrong on this in general, but for me at least) all you really need is to know one well-connected EA in order to have access to 100 more — and even then you can get access to many many more at online or in-person events. You may call this time-consuming schmoozing, but if approached impactfully and effectively (qualities EA wants) I maintain that this could be done in one weekend.
As for the negative perception it creates: I think it would only really do this for people who are already in EA, because otherwise people just wouldn’t see the culture at hand. At the point of their seeing it, a negative perception might still occur, but by then I would hope they weigh it against the ideals of EA and conclude that this may be the most effective culture, as we have been discussing in this thread.
Please let me know if there is something that I missed!
I think competence-sorted social strata are incredibly important for aligning higher-competence people with actually doing what's good and seeking what's actually true. If you use the simple model that people will behave according to what they predict other people will judge as good behaviour,[1] and what they will judge as good behaviour (at least in EA, ideally) is that which they predict will help others the most... then for people who are really good at figuring out what behaviour actually help others the most, they will only be motivated to do those things insofar as they're surrounded by judgers who also are able to see that those behaviours are good.
So if you have no competence-sorted social strata, the most competent people will be surrounded by people of much lower competence, which in turn means that the high-competence people won't be motivated to use their competence in order to figure out ways of being good that they know their judgers can't see. On this oversimplified model, you only really start to harvest the social benefit from extreme competence once you have all the extremely competent people frequently mingling with each other.
This is why I personally am in favour of EAs trying more (on the current margin) to hang out with people similar to them, and worry less (on the current margin) about groupthink. We're already sufficiently paranoid of groupthink, and a few bubbles getting stuck on crazy is worth the few bubbles bootstrapping themselves to greater heights.
I think it goes one meta-level up from this, but let's not needlessly complicate things, and this level is predictive enough.
For people who consider taking or end up taking this advice, some things I might say if we were having a 1:1 coffee about it:
We all want (I claim) EA to be a high trust, truth-seeking, impact-oriented professional community and social space. Help it be those things. Blurt truth (but be mostly nice), have integrity, try to avoid status and social games, make shit happen.
This comment is great, and resonates with a lot of the stuff I found hard when I was first immersed in the community at an EA hub.
I think there's a lot of truth to the points made in this post.
I also think it's worth flagging that several of them: networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces - are at least somewhat zero sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly on net if it starts to overwhelm. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)
Second flag is that I don't know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go knowing that they take that as a risk and to decide whether it's worth it for them.
On the other side, people taking this advice can do a lot of great networking and creating a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.
Third flag is I have an un-fleshed out worry that this advice funges against doing things outside Berkeley/SF that are more valuable career capital in the future for ever doing EA things outside of EA or bringing valuable skills and knowledge to EA (like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects rather than a resume full of EA things?). This concern will need to be fleshed out empirically and will vary a lot in applicability by person.
(I work on CEA's community health team but am not making this post on behalf of that team)
Plausible that this post's comments section is not the optimal place for some of this. One may argue that each heading should be its own comment, but I'm slightly uncertain what mods prefer.
Skilling up
I don't think the meatspace value prop you outlined constitutes skilling up. That needs more justification.
Aligning some social rewards with one's desire to believe true things or to help fix broken stuff seems plausibly critical for some people to keep their eye on the ball, or even to just take themselves seriously. Skilling up is not really related to this, except in the sense that you have to get whatever emotional structure that powers your motivation behind the grind.
But I've been basically worried that the emphasis on giving students quickly legible levers early in their exposure to the movement is sort of looting scholarship from the effective altruism of the future. The way this post evoked "skilling up" to talk about what it was interested in talking about, which I see as boiling down to networking, really triggered me.
I'd like to register a vote for an extremely boring notion of skilling up, which is: yes, textbooks. Yes, exercises. No, nodding along to a talk 3-10x more advanced than you and following the high level but not able to reproduce it at a low level. No, figuring out how to pattern match enough to flatter sr researchers at parties. Yes (often, or usually), starting hard projects quickly before an outside view would think you've done enough homework to be able to do it well, but the emphasis is on hard. Gaining friends and status may even be distracting!
Obviously there's a debate to be had about a scholarship/entrepreneurship tradeoff-- my sense is that people think we used to swing too far to the scholarship side, and now there will probably be an overcorrection. (One should also register that few people think getting kickass at stuff looks like locking oneself in a monastery with textbooks, but a longer treatment of the nuances there is out of scope for this comment).
But no, respectfully, I'm sorry but skilling up was not described in the post. Could you elaborate?
I think in Star Wars 8 there was this "sacred texts" situation; you may recall that after listening to Luke rant about the importance of the sacred texts, Yoda challenged whether he had even read them. Luke says "well...", implying that he'd flipped through them but wasn't intimately familiar with what they were saying. I'm personally bottlenecked in my current goals by not having knowledge that is sitting in books and papers I haven't read yet! Which says nothing of doing avant garde or state of the art things. I think this post risks entering a kind of genre of message for young EAs, which is "you get kickass at stuff by exposing yourself to people who are kicking ass", and I think that's auxiliary -- yes you need to construct the emotional substructure that keeps your eye on the ball (which socialization helps with), and yes you need advice from your elders from time to time -- but no, you get kickass at stuff by grinding.
Landscape knowledge
The bay is not necessary for landscape knowledge.
I (gave an 8 month college try at alignment) managed to form a map of and opinions about different threatmodels and research agendas online. I was at EA Hotel but there was only one other alignment person there and he was doing his own thing, not really interfacing with the funded cells of the movement. The discord/slack clusters are really good! AI Safety Camp is great! One widely cited, high status alignment researcher responded to a cold email with a calendly link. The internet remains an awesome place.
Clout and reputation
I'm not gonna say clout and reputation are amoral distractions-- I think they're tools, instruments that help everyone do compression and filtering, oneself included. I roughly model grantmakers as smart enough to know that clout is a proxy for plausible future impact at best, so I'm not gonna come here and say status games are disaligning the movement from its goals.
But jan_kulveit is 1000% right and it warrants repeating: networking and kicking ass are different things. What do I think goodharting on clout looks like at a low level? It looks like the social rewards for simply agreeing with the ideology leaving people satisfied, then they end up not doing hard projects.
Keeping one's eye on the ball
I conjecture that some people
I think the bay can be counterfactually very good by increasing the impact of these people!
But I want the Minneapolis EA chapter to be powerful enough to support people who fail the university entrance exam. I don't want to leave billion dollar bills (or a billion human lives) on the sidewalk because someone wasn't legible or well connected at the right time. Keeping all our eggs in one basket seems bad in a myriad of ways.
People who can keep their eye on the ball, and grow as asskickers, without the bay should probably resist the bay, so that we build multipolar power. One of the arguments for this I have not advanced here is to do with correlated error, the idea that if we all lived together we may make homogenous mistakes, but perhaps another post.
Networks and caution
We should be cautious about allowing a set of standards to emerge where being good at parties (in the right city) correlates with generating opportunities.
I won't go into too much detail here, but FWIW I lived in the Bay Area for ~2.5 years and found it somewhat difficult to network or get into various EA/rationalist social scenes (I think I was something of an outlier, but not extremely so). If you don't have a clear pathway to meeting people (such as being invited to work out of an EA co-working space for the summer, or having friends already living out there) you might have a more difficult experience networking/socializing than the post describes.
That said, I think for many EAs, visiting the Bay Area for at least some period of time is a great idea.
You got a lot of flak for this post, and I think many of the dissenting comments were good (I strongly upvoted the top one). I also think some specific points could be better argued, and it'd be quite valuable to have a deeper understanding of the downside risks and where the bottom-line advice is not applicable.
Nonetheless, I think I should mention publicly that I broadly agree with this post. I think the post advances a largely correct bottom-line conclusion, and for the right reasons. I think many EAs in positions to do so, for example undergrads/grad students over the summer, people with remote jobs, and people who can afford to take a break in between jobs, should seriously consider spending time in EA hubs.* I further think that many of the main reasons to do so are covered in this post, so this post is broadly right for the right reasons.
(If you read the dissenting comments carefully, none of them really contradicts this message. However I'm worried that the "vibe" of those comments will end up dissuading many EAs from seriously considering this in their option set).
* As far as I understand it, the SF Bay Area (Berkeley specifically) is the best hub for some important EA subfields, most notably AI alignment work and longtermist community building.
I was going to comment pretty much exactly the same thing, thanks for doing the hard work for me :)
I think part of what is missing here for me is a bit of the context beforehand.
This probably speaks to my own biases and perspectives, but my initial reaction to "come to the bay over the summer" involves several thoughts:
Maybe according to strict utilitarianism I should abandon them, but in terms of virtue that seems horrendous. I don't want to be the kind of person who breaks his word whenever it is convenient; I want to have a really high bar for breaking my word.
I like this comment because it does a great job of illustrating how socioeconomic status influences the risks one can take. Consider the juxtaposition of these two statements:
(from the comment)
(from the OP)
Let's say that for a typical motivated early-career EA, there's a 60% chance that moving to the Bay will result in desirable full-time employment within one month. (I have no idea if that's the correct number, just taking a wild guess.) From an expected-value standpoint, that seems like a great deal! Of course you would do that! But for someone who's resource-constrained, that 40% combined with the high living costs are really big red flags. What happens if things don't work out? What happens is that you've now blown all your savings and are up shit creek, and if you didn't embed yourself in the community well enough during that time to get a job, you probably don't have enough good friends to help you out of a financial hole either. So do you make the leap? Without a safety net or an upfront commitment, it's so much harder to opt for high-upside but riskier pathways, and that in turn ends up impacting the composition of the community.
In theory, there is funding specifically to cover exactly the scenarios you are worried about (“40%”), for promising AI safety people going to the Bay Area.
If there is a systemic gap, the funders would very much like to know, and people should comment (or PM, and concerns can be referred on if appropriate).
"The Bay Area" or "The Bay" is I think a totally normal abbreviation for the "San Francisco Bay Area", see e.g. this wikitravel article. I've used this abbreviation with lots of people outside of the EA/Rationality community, and it seems to be commonly understood.
I'd expect it to be understood among people who live in the coastal US. I may have heard of "the Bay Area" before I got involved in EA, but I definitely also wondered "what bay?" when people abbreviated it to "the Bay". (In Canada, The Bay is a department store, so spending your summer interning in the Bay would have very different connotations!)
Definitely agree it's not an "ingroup" thing though, I think this is more of a certain class of American thing.
I think you're understating how gatekept the inner ring offices are.
I'm happy to host couchsurfers.
I love that you wrote this because I grappled with a slightly bigger version of this, which was 'move to the Bay,' and I wasn't able to get a detailed theory of change from the people who were recommending this to me.
I think point 4 is especially interesting and something that motivated my decision to move (essentially, 'experience Berkeley EA culture'). Ironically, most people focused on the first three points (network effects). I'm unsure whether point 4 (specifically, the shift towards maximization, which feels related to totalising EA) is a net positive. Though perhaps by "theory of change" you really just meant the effect of coming to Berkeley, and weren't claiming that coming to Berkeley is net positive for one's impact?
Adding a comment from an exchange Luise and I had on this post.
The benefits of going to the Bay are probably also highly cause-area contingent, imo. For example, I would imagine someone working on animal welfare benefits significantly less than someone doing AI technical or community-building work. If the original post seems very alien to your understanding of the Bay, it may be because the benefits (especially 1, 2, and 3) simply do not exist at the same scale for your cause area. I'm fairly confident in this statement.
Now that people are leaving the Bay, I'd be pretty interested to see an "exit survey" of what people found most valuable about their stay and whether they think the visit was worth it (in terms of time, money, etc.) ex post.
pls make one!