This is a special post for quick takes by jackva. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.

There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.

And there is a lot of communicative downside risk in publicly discussing trade-offs between extinction and other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world ("academic" in the bad sense of the word).

I have the impression we have not learned from the communicative mistakes of 2022 in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options.

Is this question really worth discussing and publicly highlighting, when getting more buy-in for existential risk prevention work, broadly construed, would be extremely desirable and would naturally, in the main, both reduce extinction risk and increase the quality of futures where we survive?

I disagree that we should avoid discussing topics so as to avoid putting people off this community.[1] 

  • I think some of EA's greatest contributions come from being willing to voice, discuss and seriously tackle questions that seemed weird or out of touch at the time (e.g. AI safety). If we couldn't do that, and instead remained within the Overton window, I think we would lose a lot of the value of taking EA principles seriously.
  • If someone finds the discussion of extinction or incredibly good/bad futures off-putting, this community likely isn't for them. That happens a lot!
  1. ^

    Perhaps for some distasteful-to-almost-everyone topics, but this topic doesn't seem like that at all.

This is not what I am saying; my point is about attentional highlighting.

I am all for discussing everything on the Forum, but I do think that when we set attentional priorities -- as these weeks do -- we could reflect on whether we are targeting things that are high value to discuss; how they land with, and how they affect, the broader world could be a consideration here.

I think messaging to the broader world that we focus our attention on a question that will only have effects for the small set of funders that are hardcore EA-aligned makes us look small.

By crude analogy, it's like having a whole Forum week on welfare weights at the opportunity cost of a week focused on how to improve animal funding generally.

We could have discussion weeks right now on key EA priorities in the news -- from the future of effective development aid, to great power war and nuclear risk, to how to manage AI risk under new political realities -- all of which would seem to affect a much larger set of resourcing and, crucially, also signal to the wider world that we are a community engaging with some of the most salient issues of the day.

I think setting a debate week on a topic that has essentially no chance of affecting non-EA funders is a lost opportunity, and I don't think it would come out on top in a prioritization of debate week topics in the spirit of "how can we do the most good?"

On a more personal level, but I think this is useful to report here because I don't think I am the only one with this reaction: I've been part of this community for a decade and have built my professional life around it -- and I do find it quite alienating that, at a time when we are close to a constitutional crisis in the US, when USAID is in shambles, and when the post-WW2 order is in question, we are not highlighting how to take better action in those circumstances but instead discussing a cause prioritization question that seems very unlikely to affect major funding. It feeds the critique of EA that I've previously seen as bad faith -- that we are too much armchair philosophers.

It seems like you're making a few slightly different points:

  1. There are much more pressing things to discuss than this question.
  2. This question will alienate people and harm the EA brand because it's too philosophical/weird.
  3. The fact that the EA Forum team chose this question given the circumstances will alienate people (kind of a mix between 1 and 2).

I'm sympathetic to 1, but disagree with 2 and 3 for the reasons I outlined in my first comment.

I think that's fine -- we just have different views on what a desirable potential size of the movement would be.

To clarify -- my point is not so much that this discussion is outside the Overton window, but that it is deeply inward-looking / insular. It was good to be early on AI risk and shrimp welfare and all of the other things we have been early on as a community, but I do think those issues have higher tractability for mobilizing larger movements / having an impact outside our community than this debate week does.

On a more personal level, but I think this is useful to report here because I don't think I am the only one with this reaction: I've been part of this community for a decade and have built my professional life around it -- and I do find it quite alienating that, at a time when we are close to a constitutional crisis in the US, when USAID is in shambles, and when the post-WW2 order is in question, we are not highlighting how to take better action in those circumstances but instead discussing a cause prioritization question that seems very unlikely to affect major funding. It feeds the critique of EA that I've previously seen as bad faith -- that we are too much armchair philosophers.

I do think it's a good chance to show that the EA brand is not about short-term interventions but about first-principles thinking, being open to weird topics, and inviting people to think outside of the media bubble. At the same time, I would like to see more stories out there (very generally speaking) about people who have used the EA principles to address current issues (at EA Germany, we have been doing this every month for 2 years and were happy to have you as one of the people in our portraits). It's great that Founders Pledge and TLYCS are acting on the crisis, and that Effektiv Spenden is raising funds for that. But I'm glad they are doing this with their brands, leaving EA to focus on the narrow target group of impartially altruistic and truth-seeking people who might, in the future, build the next generation of organizations addressing these or other problems.

I have the impression we have not learned from the communicative mistakes of 2022 in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options.

In my view, the mistakes of 2022 involved not being professional in running organizations and doing outreach strategically. Compared to the broad communication under the EA brand back then, I'm much more positive about how GWWC, 80k, or The School for Moral Ambition are spreading ideas that originated from EA. I hope we can get better at defining our niche target group for the EA brand and working to appeal to them instead of the broad public.

Thanks for laying out your view in such detail, Patrick!

I find it hard to grasp how the EA Forum can be so narrow -- given there are no Fora / equivalents for the other brands you mention.

E.g. I still expect the EA Forum is widely perceived as the main place where community discussion happens, beyond the narrow mandate you outline, so the attentional priorities set here will be seen as a broader reflection of the movement than I think you intend.

I think the main issue is that I was interpreting your point about the public forum's perception as a fear that people outside could see EA as weird (in a broad sense). I would be fine with this.

But at the same time, I hope that people already interested in EA don't get the impression from the forum that the topics are limited. On the contrary, I would love to have many discussions here, not restricted by fear of outside perception.

This comment really makes me appreciate the nuanced way of giving feedback with separate disagree and karma votes -- I think it is quite useful for incentivizing critical feedback that the two can be, and are, distinguished.

I think this is a fair point - but it's not the frame I've been using to consider debate week topics.

My aim has been to generate useful discussion within the effective altruism community. I'd like to choose topics which nudge people to examine assumptions they've been making, and might lead to them changing their minds, and perhaps their priorities, or the focus of their work. I haven't been thinking about debate weeks as a piece of communications work / as a way of reaching out to a broader audience. This question in particular was chosen because the Forum audience wouldn't necessarily have cached takes on it - an audience outside the Forum would need a lot of context to get what we are talking about.

Perhaps I'm missing something though - do you think this is more public-facing than I'm assuming? To be clear, I know that it is public, but it's not directed at an outside audience in the way a book or podcast or op-ed might be.

Edit: I'm also uncertain on the claim that "there are few interventions that are predictably differentiated along those lines" - I think Forethought would disagree, and though I'm not sure I agree with them, they've thought about it more than I have. 

Thanks for engaging and for giving me the chance to outline more clearly and with more nuance what my take is.

I covered some of this in my reply to Ollie, but basically (a) I do think that Forum weeks are significant attentional devices signaling what we see as priorities, (b) the Forum has appeared in detail in many EA-critical pieces and (c) there are many Forum weeks we could be running right now that would be much better both from a point of action guiding and perception in the wider world.

I take as given -- I am not the right person to evaluate this -- that there are some interventions that some EA funders might decide on based on those considerations.

But I am pretty confident it won't matter to the wider philanthropic world; almost no one is choosing philanthropic interventions by asking "does this make the world better in cases where we survive, or does this mostly affect the probability of extinction?"

If EA were ascendant and we were a significant share of philanthropy, maybe that'd be a good question to ask.

But in a world where our key longtermist priorities are not well funded, and where most of the things we can do to broadly reduce risks are not clearly alignable to either side of the crux here, making this a key attentional priority seems to have, at least, a significant opportunity cost.

EDIT: I am mostly trying to give a consistent and clearly articulated perspective here, I am surely overlooking things and you have information on this that I do not have. I hope this is useful to you, but I don't want to imply I am able to have an all-things-considered view.

Thanks for engaging on this as well! I do feel the responsibility involved in setting event topics, and it's great to get constructive criticism like this. 

To respond to the points a bit (and this is just my view - quite quickly written because I've got a busy day today and I'm happy to come back and clarify/change my mind in another reply): 

(a) - maybe, but I think the actual content of the events almost always contains some scepticism of the question itself, discussion of adjacent debates etc... The actual topic of the event doesn't seem like a useful place to look for evidence on the community's priorities. Also, I generally run events about topics I think people aren't prioritising. However, I think this is the point I disagree with the least - I can see that if you are looking at the forum in a pretty low-res way, or hearing about the event from a friend, you might get an impression that 'EA cares about X now'. 

(b) - The Forum does appear in EA-critical pieces, but I personally don't think those pieces distinguish much between what one post on the Forum says and what the Forum team puts in a banner (and I don't think readers who lack context would distinguish between those things either). So, I don't worry too much about what I'm saying in the eyes of a very adversarial journalist (there are enough words on the forum that they can probably find whatever they'd like to find anyway). 

To clarify - for readers and adversarial journalists - I still have the rule of "I don't post anything I wouldn't want to see my name attached to in public" (and think others should too), but that's a more general rule, not just for the Forum. 

(c)- I'm sure that it isn't the optimum Forum week. However (1) I do think this topic is important and potentially action-relevant - there is increasing focus on 'AI Safety', but AI Safety is a possibly vast field with a range of challenges that a career or funding could address, and the topic of this debate is potentially an important distinction to have a take on when you are making those decisions. And (2) I'm pretty bullish on forum events, and I'd like to run more, and get the community involved more, so any suggestions for future events are always welcome. 

Thanks for clarifying this!

I think ultimately we seem to have quite different intuitions on the trade-offs, but that seems unresolvable. Most of my intuitions there come from advising non-EA HNWs (and from spending time around advisors specialized in advising these), so this is quite different from mostly advising EAs.

Thank you for sharing your disagreements about this! :)

I would love for there to be more discussion on the Forum about how current events affect key EA priorities. I agree that those discussions can be quite valuable, and I strongly encourage people who have relevant knowledge to post about this.

I’ll re-up my ask from my Forum update post: we are a small team (Toby is our only content manager, and he doesn’t spend his full 1 FTE on the Forum) and we would love community support to make this space better:

  1. We don’t currently have the capacity to maintain expertise and situational awareness in all the relevant cause areas. We’re considering deputizing others to actively support the Forum community — if you’re interested in volunteering some time, please let us know (feel free to DM myself or Toby).
  2. In general, we are happy to provide support for people who may want to discuss or post something on the Forum but are unsure how to, or are unsure if that’s a good fit. For example, if you want to run an AMA, or something like a Symposium for a specific topic, you can ask us for help! :) Please have a low bar for reaching out to myself or Toby to ask for support.

Historically, the EA Forum has strongly leaned in the direction of community-run space (rather than CEA-run space). Recently we’ve done a bit more proactively organizing content (like Giving Season and debate weeks), but I really don’t want to discourage the rest of the community from making conversations happen on the Forum that you think are important. We have such little capacity and expertise on our team, relative to the entirety of the community, so we won’t always have the right answers!

To address your specific concerns: I’ll just say that I’m not confident about what the right decision would have been, though I currently lean towards “this was fine, and led to some interesting posts and valuable discussions”. I broadly agree with other commenters so I’ll try not to repeat their points. Here are some additional considerations:

  1. Debate weeks take a long time to plan out (around a month, though it depends on the topic), since it requires a bunch of coordination, which makes it particularly hard to do this around current events (for example, at some point I thought that the USAID cut was going to be reversed, and if that happened after we decided on the debate week topic we’d need to pivot our plans, and possibly this would make posts that people wrote in advance pretty useless).
  2. USAID in particular was discussed at various points on the Forum previously and those posts got a lot of karma/attention, so it’s not clear to me if a debate week on that topic would have been clearly more valuable.
  3. Traditional news sources, and even some relevant academic communities, are likely much better at reaching non-EA funders than the Forum could do right now, even on our best days. So if our goal were around influencing non-EA funders, I don’t think we would do any interventions that utilize the EA Forum.
  4. RE: “I do think that Forum weeks are significant attentional devices signaling what we see as priorities” — I would be surprised if anyone who doesn’t actively use the Forum thought this, partially because there’s not really a way to access Forum events after they are done, so they are quite hard to find. The biggest Forum event that we run is Giving Season (it spans ~2 months), which I think you’d agree is much more action-relevant and palatable to people who don’t associate with EA, and I would be somewhat surprised to learn that that event influenced any non-EA funders (at least I haven’t heard any stories about this happening), so I would be significantly more surprised if any non-EA funders were influenced by a debate week. (I think these rarely get any outside coverage, and I even know of people who work at EA orgs who don’t know about our debate week events.)