This is a special post for quick takes by Noah Starbuck. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Survey Studies on Perception of EA Ideas

Has anyone looked into the possibility of doing survey studies on the perception of EA ideas? I'm thinking of surveys that prompt the participant to choose between 2 statements. Each statement might contain the same EA idea, but phrased in a different way. The goal would be to determine which verbiage is more palatable. Another type of question might measure which statement is more likely to convince the participant of a given view, or to take a certain action. The audience would be people who are not already EAs. Ideally the result would be a set of word & phrase choices that are statistically shown to be more palatable & also better at convincing people to change their views or take action. This set of language could then be scaled as a best practice across a wide variety of community building & fundraising efforts.
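
For concreteness, here is a minimal sketch of how one such phrasing comparison could be analyzed. The counts and sample sizes are hypothetical, and a two-proportion z-test is just one reasonable choice; a real survey export would replace the hard-coded numbers.

```python
# Hypothetical sketch: compare two phrasings of the same EA idea.
# The counts below are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

# Respondents who rated each phrasing as palatable, out of those shown it.
palatable_counts = [132, 101]  # phrasing A, phrasing B (hypothetical)
sample_sizes = [200, 200]      # respondents randomized to each phrasing

z_stat, p_value = proportions_ztest(palatable_counts, sample_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The phrasings differ in palatability at the 5% level.")
else:
    print("No statistically significant difference detected.")
```

If many phrasings are tested at once, a correction for multiple comparisons would be needed before declaring any wording "statistically better."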

SOP for EAG Conferences

1 - clarify your goals

2 - clarify the types of people you'd like to have 1-1s with to meet these goals

3 - pick workshops you want to go to

4 - in the Swapcard app, delete the 1-1 time slots that are during workshops

5 - search the Swapcard attendee list for relevant keywords for 1-1s

6 - schedule 1-1s in a location where it will be easier to find people (i.e. not the main networking area); ask organizers in advance if unsure what this will be

Notes

-don’t worry about talks since they’re recorded

-actually use the 1-1 time slot feature on Swapcard (by removing times you're not available)

  -this avoids the rescheduling scramble via messages that otherwise occurs

-make all 1-1s in same place for your convenience

-if there’s a workshop you want to go to that’s full, try going anyway

Quantifying Impact of Allyship

Intro 

Uncertainties 

  • Should an ally be the one to write this, given the many potential blind spots?
  • Is it correct for an investigation of this type to be ally-centered (i.e. from the perspective of those in identity locations of societal privilege)?
  • Problem of disempowering the societally marginalized & of painting a one-directional picture
  • Reinventing the wheel / being dismissive of well-established experts, particularly those who are members of societally marginalized groups
  • Other blind spots related to social justice nuance

Goal

  • Create new allies that would not otherwise be motivated (without this style of analysis)
  • Motivate existing allies to take more action (by demonstrating incremental return on investment)
  • Everyone should just automatically do this (to the degree that they occupy privileged identity locations) as part of basic civics in US society & probably in global society as well
    • These practices may relate to other cause areas like longtermism (i.e. by avoiding lock-in of bad values), animal welfare & global health/development (i.e. by expanding circle of compassion starting with local people)

Why (Big/Solvable/Neglected)

  • Solvable
    • The positive impact of each new ally is complete marginal gain/100% counterfactual since each new ally will encounter a unique set of people & situations in their lives.
      • In other words, the counterargument that social justice advocacy is a crowded cause area does not necessarily hold here.
    • The need for domain-specific expertise is low since all of us are experts on our personal experience, which is in turn suffused with systemic privilege & injustice.
  • Big
    • Social injustice in the US is the cause of a lot of suffering [PLUG IN METRICS HERE]
      • Objective - economic; professional; healthcare; education
      • Subjective - well being scales/surveys; mental health metrics
    • Social injustice in the US tends to spread throughout the world [PLUG IN METRICS HERE]
      • Roe v. Wade as a recent case study (news articles on its potential impact on other countries' policies)
  • Neglected
    • Social justice gets a lot of air time as a whole cause area, but specific interventions may be much more significant
    • A larger body of EA-style analyses of social justice interventions that are not holistically dismissive could help us “strike gold”

The Model

Categories of Ally Action

  • Unplanned intervention
    • I.e. speaking up about an observed injustice
  • Planned intervention
    • I.e. scheduled discussion with another privileged individual
  • Protest
    • I.e. attending a protest
  • Institutional lobbying
    • I.e. advocating for Juneteenth as a work holiday
  • Automated action
    • I.e. leading with gender pronouns

Impact of 1 unit of action for each (weight #1)

  • Number of people impacted
  • Depth of impact
    • I.e. changes those people make

Degree of uncertainty of 1 unit of action for each (weight #2)

  • A variable that downweights impact according to the degree of uncertainty (a sketch of how these weights combine follows the spreadsheet link below)

Negative impact/threat

  • Number of people alienated
  • Degree of alienation

https://docs.google.com/spreadsheets/d/15QtQw1e0HNWlFzzFUidB6K0rzzoeN4YqUvYQ6tt-HvQ/edit?usp=sharing 
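
To make the structure of the model concrete, below is a minimal sketch in Python of how weight #1, weight #2, and the negative-impact term might combine into a per-category estimate. The `AllyAction` fields and all numeric inputs are placeholder assumptions for illustration, not values from the linked spreadsheet.

```python
# Minimal sketch of the allyship impact model described above.
# All numbers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AllyAction:
    name: str
    units_per_year: float        # how often this action is taken
    people_reached: float        # people impacted per unit of action (weight #1)
    depth_of_impact: float       # 0-1, depth of changes those people make (weight #1)
    certainty: float             # 0-1, downweights by degree of uncertainty (weight #2)
    people_alienated: float      # negative impact: people alienated per unit
    degree_of_alienation: float  # 0-1, severity of that alienation

    def net_impact_per_year(self) -> float:
        positive = (self.units_per_year * self.people_reached
                    * self.depth_of_impact * self.certainty)
        negative = (self.units_per_year * self.people_alienated
                    * self.degree_of_alienation)
        return positive - negative

# Hypothetical inputs for two of the action categories listed above.
actions = [
    AllyAction("unplanned intervention", 12, 3, 0.2, 0.3, 0.5, 0.4),
    AllyAction("institutional lobbying", 1, 50, 0.1, 0.2, 2, 0.3),
]

lifetime_years = 40  # assumed span of active allyship
for a in actions:
    total = a.net_impact_per_year() * lifetime_years
    print(f"{a.name}: ~{total:.0f} person-impact units over {lifetime_years} years")
```

Summing net impact across all five categories and multiplying by an active-allyship span is the kind of calculation that produces the lifetime totals discussed under Initial Reactions below.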

Initial Reactions

  • This seems disappointing, if it’s true
  • Potential impact could be in prioritization of actions for allies & training better allies
  • Suggests that an active ally could counterfactually impact fewer than roughly 1,000 people in their lifetime
    • Even without the weights, it's fewer than 4,000
  • Using gut estimates for the means implies the ranges are gut estimates as well
  • Allyship is a lot of work & requires a lot of time & energy to change 1 person to become an effective ally
  • Allyship also has a lot of risks related to alienation & polarization
    • This is part of the reason why my gut move was to down weight so much

Appendix

The factors that make a good ally

  • Seeking opportunities to speak up
    • Knowing when to
    • Judgment/accuracy - is there actual systemic injustice being expressed here?
  • Willingness to speak up
    • Motivation
    • Resilience to risk
      • Social risk
        • Temporary awkwardness
        • Lasting relationship change
      • Professional risk
  • Speaking up tactfully
    • Timing
    • Duration
    • Tone
      • Selective ferocity
      • “We” frame
    • Body language
    • Word choice
    • Managing stress
  • Nonviolence/de-escalation
  • Positive affirmation where appropriate
  • Destressing afterwards
    • Forgetting about the incident

How to Train Good Allies

  • White Awake - online class series on race-class organizing & other topics
  • Pachamama Alliance - online class series on spiritual-eco organizing & systemic injustice
  • Bystander Intervention training - i.e. Hollaback!
  • Proposal - An Ally academy
    • Follow up with graduates & ongoing mentorship

This reminded me of actor mapping. There are many different contexts for actor mapping; the one I originally learned about was in activism. It looks like you're trying to quantify it more tangibly, and I don't know how much work exists on that. Slightly different topic, but this also reminded me of mapping a mutual aid network.

Just looked this up - very interesting. I agree that's along the lines of what I was thinking, with the added attempt to vaguely begin to quantify. And yeah, mutual aid efforts could be another type of action to include in a map/model like this.

Re how much exists - I hope it's a lot. But I fear there may not be that much, based on personal experience. Also, sometimes in activist & social justice circles there can be a resistance to quantifying a bottom line.

I resist it myself haha... I was planning on getting a post out sometime about how some things just can't be quantified, with examples of math problems that are not possible to calculate. I think quantification through labels rather than numbers is really useful. I've often heard people say that to solve something it must be done strategically, but it doesn't end up happening because they have a hard time conceptualizing what an effective strategy would look like.
