
Basically what it says on the tin. I have this psychological need to find a really intense structured organization to help me accomplish what I want in life (most importantly, saving the world), and EA organizations are natural candidates for this. However, most of the large ones I've found display too much "performative normalcy" and aren't really willing to be as hardcore as I want and need.

Any recommendations on where to find a hardcore totalizing community that can inject more structure into my life so I'm better equipped to save the world? I'm living in Boston for the next two years or so, so anything that requires moving somewhere else won't work, but other than that, all kinds of ideas are welcome.

3 Answers

  • EA group house?
  • Tech startup incubator?
  • Research bootcamp, e.g. MATS?

Thanks for the advice. I was more wondering if there was some specific organization that was known to give that sort of environment and was fairly universally recognized as e.g. “the Navy SEALs of EA” in terms of intensity, but this broader advice sounds good too.

I think this is a joke, but for those who have less-explicit feelings in this direction:

I strongly encourage you to not join a totalizing community. Totalizing communities are often quite harmful to members and being in one makes it hard to reason well. Insofar as an EA org is a hardcore totalizing community, it is doing something wrong.

This was semi-serious, and maybe “totalizing” was the wrong word for what I was trying to say. Maybe the word I more meant was “intense” or “serious.”

CLARIFICATION: My broader sentiment was serious, but my phrasing was somewhat exaggerated to get my point across.

What you're asking for sounds risky; see here for a reflection from a former "hardcore" EA. I also imagine there aren't many really hardcore segments after the fall of Leverage Research, but I have no particular insight into that.

Thanks for the reflection.

I’ve read about Leverage, and it seems like people are unfairly hard on it. They’re the ones who basically started EA Global, and people don’t give them enough credit for that. And honestly, even after what I’ve read about them, their work environment still sounds better to me than a supposedly “normal” one.

RyanCarey
Yes, they were involved in the first, small iteration of EAG, but their contributions were small compared to the human capital that they consumed. More importantly, they were a high-demand group that caused a lot of people serious psychological damage. For many, it has taken years to recover a sense of normality. They staged a partial takeover of some major EA institutions. They also gaslit the EA community about what they were doing, which confused and distracted decent-sized subsections of the community for years. I watched The Master a couple of months ago and found it to be a simultaneously compelling and moving description of the experience of cult membership, which I would recommend.
Habryka [Deactivated]
I agree with the broad gist of this comment, but I think this specific sentence heavily undersells Leverage's involvement. They ran the first two EA Summits and were also heavily involved with the first two full EA Globals (which I was officially in charge of, so I would know).
Comments (14)

Sorry, I know you said you're stuck in Boston, but tbh you're most likely to find like-minded people in the Bay Area[1]. Even if you're stuck in Boston for now, perhaps it'd be possible for you to visit at some point?

Just to echo other commenters: This is something to be very careful with. Even if you're certain that you want an intense environment, other people who say they want the same may not actually be the kind of people who thrive in such an environment.

  1. ^

    I've heard that EAs in the Bay Area are more intense than EAs elsewhere. I suspect this effect is a result of people who are serious about AI Safety moving to the Bay Area, which probably affects the culture in general.

Thanks for the advice. To be clear, I'm not certain that a hardcore environment would be the best environment for me either, but it seems worth a shot. And judging by how people tend to change in their involvement in EA as they get older, I'll probably only be as hardcore as this for like ten years.

Additionally, I wonder why there hasn't been an effort to start a more "intense" EA hub somewhere outside the Bay to save on rent and office costs. Seems like we've been writing about coordination problems for quite some time; let's go and solve one.

There is an "EA Hotel", which is decently-sized, very intensely EA, and very cheap.

Occasionally it makes sense for people to accept very low cost-of-living situations. But a person's impact is usually a lot higher than their salary. Suppose that a person's salary is x, their impact 10x, and their impact is 1.1 times higher when they live in SF, due to proximity to funders and AI companies. Then you would have to cut costs by 90% to make it worthwhile to live elsewhere. Otherwise, you would essentially be stepping over dollars to pick up dimes.
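
To make that arithmetic concrete, here's a minimal sketch of one way to read those numbers (my own illustration, not the commenter's; it assumes the relevant cost is the full salary x and that "1.1 times higher" means impact elsewhere is 10x / 1.1):

```python
# Back-of-the-envelope comparison, under the stated assumptions:
# salary = cost of living = x, impact = 10x in SF, impact / 1.1 elsewhere.

salary = 1.0                        # x, normalized
impact_sf = 10 * salary             # impact while living in SF
impact_elsewhere = impact_sf / 1.1  # ~9.09x without the SF proximity bonus

net_sf = impact_sf - salary         # net value of staying in SF: 9x

# Living elsewhere is only worthwhile if impact_elsewhere - cost >= net_sf,
# so the break-even cost of living elsewhere is:
break_even_cost = impact_elsewhere - net_sf   # ~0.09x
required_cut = 1 - break_even_cost / salary   # ~0.91

print(f"Break-even cost elsewhere: {break_even_cost:.2f}x")
print(f"Required cost reduction:   {required_cut:.0%}")
```

Under those assumptions, the break-even cost of living elsewhere comes out to roughly a tenth of the salary, which is where the ~90% figure comes from.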

One advantage of the EA Hotel, compared to a grant, for example, is that selection effects for it are surprisingly strong. This can help resolve some of the challenges of evaluation.

There have been attempts:

Coordination of Rationality/EA/SSC Housing Projects
New EA Hub Search and Planning

Unfortunately, they haven't gotten anywhere. If you think you can solve the problem, then go for it! But keep in mind that people have tried this in the past and failed.

How many FTEs are working on this problem?

Like none.

Seems like the kind of thing that should have at least one FTE on it. Is there a reason no one has really put a lot of time into it (e.g. a specific compelling argument that this isn't the right call), or is it just that no one has gotten to it?

Funding would be hard to come by.

Some folks in EA are pretty nervous about projects where a bunch of folks live together. Part of this is due to what happened at Leverage. Part of this is that when people live together, there is often drama, and there are potential PR risks.

And what I'm describing isn't an individual project full of people who live together; it's coordinating a bunch of people who work on many different projects to move to the same general area. And even if I were describing an individual project full of people who live together, every single failure of such a project within EA is a rounding error compared to the Manhattan Project, for better or worse.

And one more thing: if some people are nervous, wouldn't it be possible to get funded from people who are enthusiastic?

Well, if you think you can pull it off, feel free to go for it and see if you can find interested funders.

I thought the whole point of EA was that we based our grantmaking decisions on rigorous analyses rather than hunches and anecdotes.
