
Zach Robinson writes: 'In my role at CEA, I embrace an approach to EA that I (and others) refer to as “principles-first”.'

Patrick Gruban responds: 'an approach focussed on principles...could be more powerful when there is broader stakeholder consensus on what they are.'

I've definitely noticed that EA manifests slightly differently in different places. I think it would be helpful to discuss:

  • What principles do you have that you view as core to your engagement with EA? Do you have any principles you hold as important but think are less relevant to EA?
  • What are principles you think people, groups, and organisations in EA tend to have, or should have, or wish they had? Is there a gap here in either direction?
  • What are your thoughts on the relative importance of various principles?
  • Do you think EA principles have changed, should change, or should stay the same over time?
  • What principles do you think are easier or harder to live up to?
  • What does a 'principles-first approach' mean to you? Do you think this is a helpful way to frame what we ought to do? Are there other frames you think would be more useful, or useful in different ways?

(Here is CEA's list of core principles that Zach references)


5 Answers

One that I think is super important (and I think used to be on CEA's list?) is transparency:
* The movement was founded on GiveWell/GWWC reviewing and ultimately promoting charities, for which transparency is a prerequisite.
* GiveWell itself has been a model of transparency in its reasoning, value assumptions, etc.
* It seems importantly hypocritical for a movement to demand transparency of the organisations being evaluated but not practise it at a meta level.
* Much of the sea of criticism (including my own) that followed FTXgate involved concerns about lack of transparency.
* If, as Zachary says, the community is 'CEA’s team, not its customers', it's hard for us to make useful decisions about how to participate without knowing the rationale or context for CEA's key decisions.
 

Out of the four "core" ideas, the one I take most issue with is the "scout mindset":

Scout mindset: We believe that we can better help others if we’re working together to think clearly and orient towards finding the truth, rather than trying to defend our own ideas. Humans naturally aren’t great at this (aside from wanting to defend our own ideas, we have a host of other biases), but since we want to really understand the world, we aim to seek the truth and try to become clearer thinkers.

I don't think this "scout vs soldier" distinction is the most important thing when it comes to establishing truth. For example, a criminal trial is as "soldier" as you can get, but I would argue that trials are still truth-seeking endeavours that often work quite well.

Also, merely having a scout mindset is not enough: you could intend to find the truth, but be using really shit methods to do so. 

Instead, I would talk about a more general set of practices: honesty, evidence-based reasoning, and the testing and interrogation of ideas, akin to scientific work.

I think your "criminal trial" counter-example to the "scout mindset" narrative is really interesting.

I'm not convinced it quite holds up though, for a couple of reasons.

Firstly, I think there are two separate questions which you're conflating:

  1. How can someone, as an individual, best form accurate opinions about something?
  2. How can we design a process which will reliably lead to accurate decisions being made about contentious issues? And how can we design it so that those decisions will be broadly trusted by the public?

These questions are similar, but not the... (read more)

titotal
I see "soldier mindset" being described as akin to "motivated thinking" (eg here), and I think it's a stretch to say that a prosecution lawyer is not doing motivated thinking (trying to prove one thing true is their literal job). And yeah, for the reasons you stated, if you can't trust people to be impartial (and people are not good at judging their own impartiality), setting up a system where multiple sides are represented by "soldier mindset" can legitimately be better at truth-seeking. Many episodes in scientific history have involved people who were really, really motivated to prove that their particular theory was correct.

My real point, though, is that this "soldier vs scout" dichotomy is not the best way to describe what makes scientific-style thinking work. You can have a combination of both work just fine: what matters is whether your overall process is good at picking out truth and rejecting BS. And I do not think merely trying to be impartial and truth-seeking is sufficient. "Scout mindset" is not a bad thing to try, but it's not enough.

To me, the core EA principles that I refer to when talking about the community and its ideas (and the terms I use for them) are:

  1. Cosmopolitanism: The same thing that CEA means by "impartiality." Beings that I have no connection to are no less ethically important than my friends, family, or countrymen. 
  2. Evidence orientation: I think this is basically what CEA calls "Scout mindset."
  3. Attention to costs and cost-effectiveness: The same thing that CEA calls "Recognition of tradeoffs"
  4. Commensurability of different outcomes: GiveWell, Open Philanthropy, and others make explicit judgments of how many income doublings for a family (for example) are equivalent to one under-5 life saved, or similar. This enables you to do "cause prioritization" - without it, you get into an "apples to oranges" problem in a lot of resource allocation questions. 

I like CEA's explicit highlighting of "Scope sensitivity" - I will embrace that in future conversations. But I'm writing this post to highlight outcome commensurability too. I think it is the one principle that most differentiates EA-aligned international development practitioners from other international development practitioners who have a firm grounding in economics. 
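The commensurability idea above can be sketched in a few lines of code: pick an exchange rate between outcome types, convert everything into one common unit, and then cause prioritisation reduces to comparing numbers. This is only an illustrative sketch; the exchange rate and the programme figures below are made-up placeholders, not GiveWell's or Open Philanthropy's actual moral weights.

```python
# Illustrative sketch of outcome commensurability.
# The exchange rate (doublings of consumption judged equivalent to one
# under-5 life saved) is a hypothetical placeholder, NOT a real moral weight.
DOUBLINGS_PER_LIFE_SAVED = 100

def value_in_doublings(income_doublings: float, lives_saved: float) -> float:
    """Collapse two different outcome types into one common unit."""
    return income_doublings + lives_saved * DOUBLINGS_PER_LIFE_SAVED

# Two hypothetical programmes with equal budgets:
cash_transfers = value_in_doublings(income_doublings=400, lives_saved=0)
bednets = value_in_doublings(income_doublings=50, lives_saved=5)

# Once outcomes are commensurable, "which does more good per dollar?"
# becomes a straightforward numeric comparison.
best = max([("cash", cash_transfers), ("bednets", bednets)], key=lambda p: p[1])
```

Without the exchange rate, the two programmes are apples and oranges; with it, the comparison is trivial. The hard (and contestable) part is choosing the rate itself, which is exactly where the "commensurability" principle does its work.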

Principles are great!  I call them "stone-tips".  My latest one is:

Look out for wumps and woozles!

It's one of my favorites. ^^ It basically very-sorta translates to bikeshedding (idionym: "margin-fuzzing"), the procrastination paradox (idionym: "marginal-choice trap" + attention selection history + LDT), and information cascades / short-circuits / double-counting of evidence… but a lot gets lost in translation. Especially the cuteness.

The stone-tip closest to my heart, however, is:

I wanna help others, but like a lot for real!

I think EA is basically sorta that… but a lot gets confusing in implementation.

The core of EA is to provide tools for optimizing the marginal impact of altruistic individual efforts given a large array of preferences and beliefs.

The most natural application is selecting optimal recipients of donations for any possible worldview.
