(Sorry; I forgot to cross-post when I made this post)

Having recognized that I have asked these same questions repeatedly across a wide range of channels and have never gotten satisfying answers for them, I'm compiling them here so that they can be discussed by a wide range of people in an ongoing way.

  1. Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it? In my non-expert assessment, there are pros and cons to each approach; what made EV conclude that the balance favored decentralization?
  2. Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
  3. Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
  4. Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
  5. Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
  6. Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
  7. Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
  8. Why is there a pattern of EA organizations renaming themselves (e.g. Effective Altruism MIT renaming to Impact@MIT)? What were seen as the pros and cons, and why did these organizations decide that the pros outweighed the cons?
  9. When they did rename, why did they choose relatively "boring" names that are potentially worse for SEO than names that more clearly reference Effective Altruism?
  10. Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
  11. When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?
  12. Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?

I'm sorry if this is a bit disorganized, but I wanted to have them all in one place, as many of them seem related to each other.

Comments

I'm worried that a lot of these "questions" are really you trying to push a belief, phrased as a question in order to get out of actually providing evidence for said belief.

Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?

First, AI safety people here tend to think that super-AI is imminent within a decade or so, so none of this stuff would kick in in time. Second, this stuff is a form of eugenics, which has a fairly bad reputation and raises thorny ethical issues even divorced from its traditional role in murder and genocide. Third, it's all untested and based on questionable science, and I suspect it wouldn't actually work very well, if at all.

Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?

Have you considered that the rest of EA is incentivised to pretend there aren't problems in EA, for reputational reasons? If so, why shouldn't community health be expanded instead of reduced? 

This question is basically just a baseless accusation rephrased into a question in order to get away with it. I can't think of a major scandal in EA that was first raised by the community health team. 

Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?

Because this is a dumb and baseless parallel? There's a lot more to antisemitic conspiracy theories than "powerful people controlling things". In fact, the general accusation used by Torres is to associate TESCREAL with white supremacist eugenicists, which feels kinda like the opposite end of the scale.

Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.

Because this is a terrible idea, and on multiple occasions has already led to harmful cult-like organisations. AI safety people have already spilled a lot of ink about why a maximising AI would be extremely dangerous, so why the hell would you want to do maximising yourself?

First off, I specifically spoke to the LessWrong moderation team in advance of writing this, with the intention of rephrasing my questions so they didn't sound like I was trying to make a point. I'm sorry if I failed in that, but making particular points was not my intention. Second of all, you seem to be taking a very adversarial tone to my post when it was not my intention to take an adversarial tone.

Now, on to my thoughts on your particular points.

I have in fact considered that the rest of EA is incentivized to pretend that there aren't problems. In fact, I'd assume that most of EA has. I'm not accusing the Community Health team of causing any particular scandal; just of broadly introducing an atmosphere where comparatively minor incidents may potentially get blown out of proportion.

There seem to be clear and relevant parallels here. Seven of the fifteen people named as TESCREALists in the First Monday paper are Jewish, and many stereotypes attributed to TESCREALists in this conspiracy theory (victimhood complex, manipulating our genomes, ignoring the suffering of Palestinians) line up with antisemitic stereotypes and go far beyond just "powerful people controlling things."

I want to do maximizing myself because I was under the impression that EA is about maximizing. In my mind, if you just wanted to do a lot of good, you'd work in just about any nonprofit. In contrast, EA is about doing the most good that you can do.

I understand that it's perilous, but so is donating a kidney, and a large number of EAs have done that anyway.

I downvoted this post because I think it's really hard for a list of 12 somewhat-related questions, and particularly for the comment threads answering them, to be useful to a broader audience than just the original author. I also feel like these questions really could do more to explain what your thinking is on them, because as it is I feel like you're asking for people to put in work you haven't put in yourself.

If I had these questions, I think the main avenues I'd consider to getting them answered would be:

  • post each one in its own Quick Take (= shortform), which would help separate the comment threads without dominating the frontpage with 12 posts at once,
  • pick one (or more, if they're obviously related), and expand a little more on what motivates the question and what thoughts you already have, and make that a post on its own,
  • consider other venues with smaller audiences (in-person or online social meetups, etc.)

You said:

I wanted to have them all in one place, as many of them seem related to each other

3 and 4 are obviously related, as are 8 and 9. I don't see the relations between the others; I think if you're really making the pitch that this post is one topic, I need more explanation of what that topic is.

I agree for the most part with Michael's answers to your questions on LW, so I'll just go over some slight differences.
 

1. This movement should not be centralized at all, IMO. EA should be a library. Also, it's pretty gross that it's centralized but there is no political system beyond a token donation election. I'm pretty sure Nick Beckstead, Will MacAskill, etc. would have been fired into the moon after FTX if there were a democratic voting process for leaders.

 https://forum.effectivealtruism.org/posts/8wWYmHsnqPvQEnapu/?commentId=6JduGBwGxbpCMXymd
https://forum.effectivealtruism.org/posts/MjTB4MvtedbLjgyja/?commentId=iKGHCrYTvyLrFit2W
 
3. I agree with why the team is the way it is, but they have more of an obligation to correct this than your average HR department (conditional on the team's demographics actually being an important dimension of success; that's believable but not a no-brainer). My experience working in a corporate job is that HR works for the man; don't trust them at all. CEA's community health team is actually trying to solve problems to help all members of the community, not just the top dogs (well, at least you would hope).

5. Agree with Michael that they are. However, you're picking up on a real thread of arrogance, and often a smug unwillingness to engage with non-top-5 cause areas, despite the flow-through effects possibly getting more money to the causes they want. I think local EA groups should focus more on fixing issues in their own cities: not because that is as important, but because I think they would gain a lot of recognition and could leverage that to fundraise more for their causes down the line. Likewise, orgs should be more willing to compromise on their work if that means getting way more money. A few years ago my parents asked me to help them research which homeless shelters in Chicago to donate to, and I told them they should give the money to (insert EA flavor-of-the-month charity). They got super triggered, and I think if I had just answered their question I would have more sway over other donations they made.

8. I found this post, though I'll say I find the concept of an EA club not having EA in its name bizarre. I dislike the name Effective Altruism, but that is the name of the movement, so yeah, I would say they overcooked here.

Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.

There was an attempt at that in rationalism, Dragon Army, though it didn't ultimately succeed; you can find the postmortem at https://medium.com/@ThingMaker/dragon-army-retrospective-597faf182e50.

Yeah, I heard about that. As far as I can tell, it failed for reasons specific to that particular implementation, not because of the broader idea of running a project like this. In addition, Duncan has on multiple occasions expressed support for the idea of running a similar project that learns from the mistakes made there. So my question is, why haven't more organizations like that been started?

I'll take a crack at some of these.

On 3, I basically don't think this matters. I hadn't considered it, largely because it seems super irrelevant. It matters far more whether any individual people shouldn't be there, or whether some individuals should be there who aren't. AFAICT without much digging, they all seem to be doing a fine job, and I don't see the need for a man or a person of color on the team, though feel free to point out a reason. I think nearly nobody who has a problem to report decides, upon finding out that they'd be reporting to a white woman, that they can no longer do so. I would really hate to see EA become a place where we are constantly fretting over and questioning the demographic makeup of small EA organizations to make sure they have enough of all the traits. It's a giant waste of time, energy, and other resources.

On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp, or how likely shrimp are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It's always a question worth asking, but usually a concern would come with something more concrete, or a statistic. For example, the charity Will MacAskill talks about in the UK that helps a certain kind of Englishperson who is already statistically ahead (though I can't remember if this is Scots or Irishmen or another group).

On 7, university groups are very limited in resources. Group organizing is almost always done part-time while managing a full-time courseload and working on one's own development, among other things, so organizers focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource the training to other places (80k, MATS, etc.).

On 10, good point, I would like to see some movement within EA to increase the intensity.

On 11, another good point. I'd love to read more about this.

On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then develop a lot of gravity. It happened to start in the Bay Area, and for individual actors it nearly always makes sense to move there, and then there is a feedback loop.

One last thing: if the reason you want to join a totalizing community is to gain structure, you don't need to join an EA cult to do this!

- Join more groups unrelated to EA. Make sure to maintain a connection to this physical world and remember how beautiful it is. Friendship, community and love are extremely motivating. 

- I say this as a non-spiritual lifelong atheist: you may also consider adding some faith practice like Hinduism or Buddhism. I find a lot of Hindu texts and songs to be extremely beautiful, and although I don't believe in any of the magic stuff, the idea of reincarnation and karma and the accompanying art and rituals can be motivating to me to do the best I can for this world.

Feel free to DM me if you want.

Thanks for the advice. I was saying that this type of community might be good, not just because I would benefit, but because I know a lot of other people who also would. And that due to a lot of arbitrary-seeming concerns, it's likely highly neglected.

Can you try to paint me a picture of how you specifically would benefit?
