This is a special post for quick takes by Noah Starbuck. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Survey Studies on Perception of EA Ideas

Has anyone looked into the possibility of doing survey studies on the perception of EA ideas? I'm thinking of surveys that include questions prompting the participant to choose between 2 statements. Each statement might contain the same EA idea, but phrased in a different way. The goal would be to determine which wording is more palatable. Another type of question might measure which statement is more likely to convince the participant of a given view, or to take a certain action. The audience would be people who are not already EAs. Ideally the result would be a set of word & phrase choices that are statistically shown to be more palatable & better at convincing people to change their views or take action. This set of language could then be scaled as a best practice across a wide variety of community building & fundraising efforts.
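
As a rough illustration (not a worked-out design), here is how one such comparison might be analyzed, assuming a simple two-arm setup where each participant sees only one of the two phrasings and reports whether it convinced them; the counts, arm labels, and the choice of a two-proportion z-test are placeholders, not part of the proposal above:

```python
# Toy analysis of a two-phrasing survey experiment.
# Assumes each participant saw exactly one phrasing of the same EA idea
# and answered whether they found it convincing; all counts are made up.
from statsmodels.stats.proportion import proportions_ztest

convinced = [132, 98]   # participants convinced by phrasing A, phrasing B
shown = [400, 400]      # participants shown phrasing A, phrasing B

z_stat, p_value = proportions_ztest(count=convinced, nobs=shown)
rate_a, rate_b = convinced[0] / shown[0], convinced[1] / shown[1]

print(f"Phrasing A: {rate_a:.1%} convinced; Phrasing B: {rate_b:.1%} convinced")
print(f"Two-proportion z-test p-value: {p_value:.3f}")
```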

SOP for EAG Conferences

1 - clarify your goals

2 - clarify the types of people you'd like to have 1-1s with to meet these goals

3 - pick the workshops you want to go to

4 - in the Swapcard app, delete the 1-1 time slots that fall during workshops

5 - search the Swapcard attendee list for keywords relevant to your 1-1s

6 - schedule 1-1s in a location where it will be easy to find people (i.e. not the main networking area); ask the organizers in advance if you're unsure where this will be

Notes

- don't worry about talks since they're recorded

- actually use the 1-1 time slot feature on Swapcard (by removing times you're not available)

  - this avoids the rescheduling scramble via messages that otherwise occurs

- make all your 1-1s in the same place for your convenience

- if there's a workshop you want to go to that's full, try going anyway

Quantifying Impact of Allyship

Intro 

Uncertainties 

  • Should an ally be the one to write this, given the many potential blind spots?
  • Is it correct for an investigation of this type to be ally-centered (from the perspective of those in identity locations of societal privilege)?
  • Problem of disempowering the societally marginalized & of painting a one-directional picture
  • Reinventing the wheel / being dismissive of well-established experts (particularly those who are members of societally marginalized groups)
  • Other blind spots related to social justice nuance

Goal

  • Create new allies who would not otherwise be motivated (without this style of analysis)
  • Motivate existing allies to take more action (by demonstrating incremental return on investment)
  • Everyone should just automatically do this (to the degree that they occupy privileged identity locations) as part of basic civics in US society & probably in global society as well
    • These practices may relate to other cause areas like longtermism (e.g. by avoiding lock-in of bad values), animal welfare & global health/development (e.g. by expanding the circle of compassion starting with local people)

Why (Big/Solvable/Neglected)

  • Solvable
    • The positive impact of each new ally is complete marginal gain/100% counterfactual since each new ally will encounter a unique set of people & situations in their lives.
      • In other words, the counterargument that social justice advocacy is a crowded cause area does not necessarily hold here.
    • The need for domain-specific expertise is low since all of us are experts on our personal experience, which is in turn suffused with systemic privilege & injustice.
  • Big
    • Social injustice in the US is the cause of a lot of suffering [PLUG IN METRICS HERE]
      • Objective - economic; professional; healthcare; education
      • Subjective - well being scales/surveys; mental health metrics
    • Social injustice in the US tends to spread throughout the world [PLUG IN METRICS HERE]
      • Roe v. Wade as a recent case study (news articles on potential impact on other countries' policies)
  • Neglected
    • Social justice gets a lot of air time as a whole cause area, but specific interventions may be much more significant
    • A larger body of EA-style analyses of social justice interventions that are not holistically dismissive could help us "strike gold"

The Model

Categories of Ally Action

  • Unplanned intervention
    • E.g. speaking up about an observed injustice
  • Planned intervention
    • E.g. a scheduled discussion with another privileged individual
  • Protest
    • E.g. attending a protest
  • Institutional lobbying
    • E.g. advocating for Juneteenth as a work holiday
  • Automated action
    • E.g. leading with gender pronouns

Impact of 1 unit of action for each (weight #1)

  • Number of people impacted
  • Depth of impact
    • I.e. changes those people make

Degree of uncertainty of 1 unit of action for each (weight #2)

  • Variable which downweights according to degree of uncertainty

Negative impact/threat

  • Number of people alienated
  • Degree of alienation

https://docs.google.com/spreadsheets/d/15QtQw1e0HNWlFzzFUidB6K0rzzoeN4YqUvYQ6tt-HvQ/edit?usp=sharing 
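
As a rough illustration of how the weights above might combine (this is not the spreadsheet linked here; the action counts, per-unit impact figures, and downweights are invented placeholders), a lifetime estimate could be sketched like this:

```python
# Toy version of the model: lifetime impact = sum over action categories of
# units of action * people impacted per unit * depth of impact * uncertainty
# downweight, minus an alienation penalty. All numbers are placeholders.

actions = {
    # name:                   (units, people/unit, depth, uncertainty weight)
    "unplanned intervention":  (200,  2,   0.5,  0.3),
    "planned intervention":    (50,   1,   0.8,  0.5),
    "protest":                 (20,   10,  0.1,  0.2),
    "institutional lobbying":  (5,    50,  0.3,  0.4),
    "automated action":        (1000, 1,   0.05, 0.6),
}

people_alienated = 100
degree_of_alienation = 0.3

positive = sum(units * people * depth * weight
               for units, people, depth, weight in actions.values())
negative = people_alienated * degree_of_alienation

print(f"Weighted positive impact (person-equivalents): {positive:.0f}")
print(f"Alienation penalty: {negative:.0f}")
print(f"Net lifetime estimate: {positive - negative:.0f}")
```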

Initial Reactions

  • This seems disappointing, if it's true
  • Potential impact could be in prioritizing actions for allies & training better allies
  • Suggests that an active ally could counterfactually impact fewer than 1,000 people in their lifetime
    • Even without the weights it's fewer than 4,000
  • Using gut estimates for the mean includes a gut estimate of the range as well
  • Allyship is a lot of work & it takes a lot of time & energy to turn 1 person into an effective ally
  • Allyship also carries a lot of risk of alienation & polarization
    • This is part of the reason my gut move was to downweight so much

Appendix

The factors that make a good ally

  • Seeking opportunities to speak up
    • Knowing when to
    • Judgment/accuracy - is there actual systemic injustice being expressed here?
  • Willingness to speak up
    • Motivation
    • Resilience to risk
      • Social risk
        • Temporary awkwardness
        • Lasting relationship change
      • Professional risk
  • Speaking up tactfully
    • Timing
    • Duration
    • Tone
      • Selective ferocity
      • “We” frame
    • Body language
    • Word choice
    • Managing stress
  • Nonviolence/de-escalation
  • Positive affirmation where appropriate
  • Destressing afterwards
    • Forgetting about incident

How to Train Good Allies

  • White Awake - online class series on race-class organizing & other topics
  • Pachamama Alliance - online class series on spiritual-eco organizing & systemic injustice
  • Bystander Intervention training - e.g. Hollaback!
  • Proposal - an Ally Academy
    • Follow up with graduates & ongoing mentorship

This reminded me of actor mapping. There are many different contexts for actor mapping; the one I originally learned about was in activism. It looks like you're trying to quantify it more tangibly, and I don't know how much exists on that. Slightly different topic, but this also reminded me of mapping a mutual aid network.

Just looked this up - very interesting. I agree that's along the lines of what I was thinking, with the added attempt to vaguely begin to quantify. And yeah, mutual aid efforts could be another type of action to include in a map/model like this.

Re how much exists - I hope it's a lot. But based on personal experience, I fear there may not be that much. Also, in activist & social justice circles there can sometimes be resistance to quantifying a bottom line.

I resist it myself haha... I was planning on getting a post out sometime about how some things just can't be quantified, with examples of math problems that are not possible to calculate. I think quantification through labels rather than numbers is really useful. I've often heard people say that to solve something it must be done strategically, but it doesn't end up happening because they have a hard time conceptualizing what an effective strategy would look like.
