
Zach Robinson writes: 'In my role at CEA, I embrace an approach to EA that I (and others) refer to as “principles-first”.'

Patrick Gruban responds: 'an approach focussed on principles...could be more powerful when there is broader stakeholder consensus on what they are.'

I've definitely noticed that EA manifests slightly differently in different places. I think it would be helpful to discuss:

  • What principles do you have that you view as core to your engagement with EA? Do you have any principles you hold as important but think are less relevant to EA?
  • What are principles you think people, groups, and organisations in EA tend to have, or should have, or wish they had? Is there a gap here in either direction?
  • What are your thoughts on the relative importance of various principles?
  • Do you think EA principles have changed, should change, or should stay the same over time?
  • What principles do you think are easier or harder to live up to?
  • What does a 'principles-first approach' mean to you? Do you think this is a helpful way to frame what we ought to do? Are there other frames you think would be more useful, or useful in different ways?

(Here is CEA's list of core principles that Zach references)

5 Answers

One that I think is super important (and I think used to be on CEA's list?) is transparency:
* The movement was founded on GiveWell/GWWC doing reviews of, and ultimately promoting, charities for which transparency is a prerequisite.
* GiveWell themselves have been a model of transparency in their reasoning, value assumptions, etc.
* It seems importantly hypocritical as a movement to demand transparency of the organisations we evaluate but not practice it at the meta level.
* Much of the sea of criticism (including my own) that followed FTXgate involved concerns about lack of transparency.
* If, as Zachary says, the community is 'CEA’s team, not its customers', it's hard for us to make useful decisions about how to participate without knowing the rationale or context for CEA's key decisions.
 

Out of the four "core" ideas, the one I take most issue with is the "scout mindset":

Scout mindset: We believe that we can better help others if we’re working together to think clearly and orient towards finding the truth, rather than trying to defend our own ideas. Humans naturally aren’t great at this (aside from wanting to defend our own ideas, we have a host of other biases), but since we want to really understand the world, we aim to seek the truth and try to become clearer thinkers.

I don't think this "scout vs soldier" distinction is the most important thing when it comes to establishing truth. For example, a criminal trial is as "soldier" as you can get, but I would argue that trials are still truth-seeking endeavors that often work quite well.

Also, merely having a scout mindset is not enough: you could intend to find the truth, but be using really shit methods to do so.

Instead, I would talk about a more general case of honesty, evidence-based reasoning, and testing/interrogation of ideas, akin to scientific work.

I think your "criminal trial" counter-example to the "scout mindset" narrative is really interesting.

I'm not convinced it quite holds up though, for a couple of reasons.

Firstly, I think there are two separate questions which you're conflating:

  1. How can someone, as an individual, best form accurate opinions about something?
  2. How can we design a process which will reliably lead to accurate decisions being made about contentious issues? And how can we design it so that those decisions will be broadly trusted by the public?

These questions are similar, but not the... (read more)

titotal:
I see "soldier mindset" being described as akin to "motivated thinking" (eg here), and I think it's a stretch to say that a prosecution lawyer is not doing motivated thinking (in that trying to prove one thing true is their literal job).  And yeah, for the reasons that you stated, if you can't trust people to be impartial (and people are not good at judging their own impartiality), setting up a system where multiple sides are represented by "soldier mindset" can legitimately be better at truth-seeking. Most episodes in scientific history have involved people who were really really motivated to prove that their particular theory was correct.  My real point, though, is that this "soldier vs scout" dichotomy is not the best way to describe what makes scientific style thinking work. You can have a combination of both work just fine: what matters is whether your overall process is good at picking out truth and rejecting BS. And I do not think merely trying to be impartial and truthseeking is sufficient for this. "scout mindset" is not a bad thing to try, but it's not enough. 

To me, the core EA principles that I refer to when talking about the community and its ideas (and the terms I use for them) are:

  1. Cosmopolitanism: The same thing that CEA means by "impartiality." Beings that I have no connection to are no less ethically important than my friends, family, or countrymen. 
  2. Evidence orientation: I think this is basically what CEA calls "Scout mindset."
  3. Attention to costs and cost-effectiveness:  The same thing that CEA calls "Recognition of tradeoffs"
  4. Commensurability of different outcomes: GiveWell, Open Philanthropy, and others make explicit judgments of how many income doublings for a family (for example) are equivalent to one under-5 life saved, or similar. This enables you to do "cause prioritization" - without it, you get into an "apples to oranges" problem in a lot of resource allocation questions. 

I like CEA's explicit highlighting of "Scope sensitivity" - I will embrace that in future conversations. But I'm writing this post to highlight outcome commensurability too. I think it is the one principle that most differentiates EA-aligned international development practitioners from other international development practitioners who have a firm grounding in economics. 
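
To make the commensurability point concrete, here is a minimal sketch in Python with made-up moral weights and made-up outcome figures (none of these numbers are GiveWell's or Open Philanthropy's): once every outcome is converted into a common unit of value, cause prioritization reduces to ranking programs by value per dollar; without a common unit, that ranking is undefined.

```python
# Toy sketch only: assumed weights and outcomes, purely for illustration.
MORAL_WEIGHTS = {                     # "units of value" per outcome (assumed)
    "income_doubling": 1.0,
    "under_5_life_saved": 100.0,      # assumed exchange rate, not a real figure
}

PROGRAMS = {                          # hypothetical outcomes per $100,000 donated
    "cash_transfers": {"income_doubling": 300},
    "malaria_nets": {"under_5_life_saved": 20, "income_doubling": 15},
}

def value_per_dollar(outcomes, budget=100_000):
    # Convert heterogeneous outcomes into one scale, then divide by cost.
    total = sum(MORAL_WEIGHTS[o] * n for o, n in outcomes.items())
    return total / budget

# Rank programs by cost-effectiveness in the common unit.
for name, outcomes in sorted(PROGRAMS.items(),
                             key=lambda kv: -value_per_dollar(kv[1])):
    print(f"{name}: {value_per_dollar(outcomes):.4f} value units per dollar")
```

Changing the assumed weights changes the ranking, which is exactly why the choice of exchange rates between outcomes is where much of the real disagreement lives.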

Principles are great!  I call them "stone-tips".  My latest one is:

Look out for wumps and woozles!

It's one of my favorites. ^^  It basically very-sorta translates to bikeshedding (idionym: "margin-fuzzing"), procrastination paradox (idionym: "marginal-choice trap" + attention selection history + LDT), and information cascades / short-circuits / double-counting of evidence…  but a lot gets lost in translation.  Especially the cuteness.

The stone-tip closest to my heart, however, is:

I wanna help others, but like a lot for real!

I think EA is basically sorta that… but a lot gets confusing in implementation.

The core of EA is to provide tools for optimizing the marginal impact of altruistic individual efforts given a large array of preferences and beliefs.

The most natural application is selecting optimal recipients of donations for any possible worldview.
