
I'm posting this in preparation for Draft Amnesty Week (February 24 – March 2), but please also use this thread for posts you don't plan to write for Draft Amnesty. The last time I posted this question, there were some great responses.

If you have multiple ideas, I'd recommend putting them in different answers, so that people can respond to them separately.

It would be great to see:

  • Both spur-of-the-moment vague ideas and further-along, considered ideas. If you're in the latter camp, you could even share a Google Doc for feedback on an outline.
  • Commenters signalling with Reactions and upvotes the content that they'd like to see written.
  • Commenters responding with helpful resources or suggestions.
  • Commenters proposing Dialogues with authors who suggest similar ideas, or with whom they have an interesting disagreement (Draft Amnesty Week might be a great time for scrappy/unedited dialogues). 

Draft Amnesty Week

If the responses here encourage you to develop one of your ideas, Draft Amnesty Week (February 24 – March 2) might be a great time to post it. Posts tagged "Draft Amnesty Week" don't have to be thoroughly thought through or even fully drafted. Bullet points and missing sections are allowed. You can have a lower bar for posting. 



18 Answers

A history of ITRI, Taiwan's national electronics R&D institute. It was established in 1973, when Taiwan's per capita income was less than Pakistan's today. Yet it was single-handedly responsible for the rise of Taiwan's electronics industry, spinning out UMC, MediaTek, and most notably TSMC. To give you a sense of how insane this is, imagine that Bangladesh announced today that they were going to start doing frontier AI R&D, and in 2045 they were the leaders in AI. ITRI is arguably the most successful development initiative in history, but I've never seen it brought up in either the metascience/progress community or the global dev community.

Fascinating, I've never heard of this before, thanks! If anyone's curious, I had Deep Research [take a stab at writing this](https://chatgpt.com/share/67ac150e-ac90-800a-9f49-f02489dee8d0), which I found pretty interesting (but have totally not fact-checked for accuracy).

I'm considering writing about "RCTs in NGOs: When (and when not) to implement them"

The post would explore:

  • Why many new NGOs feel pressured to conduct RCTs primarily due to funder/EA community requirements.
  • The hidden costs and limitations of RCTs: high expenses; 80% statistical power, which still leaves a 20% chance of missing a real effect of the assumed size; wide confidence intervals
  • Why RCTs might not be the best tool for early-stage organizations focused on iterative learning
  • How academic incentives in RCT design/implementation don't always align with NGO needs
  • Alternative evidence-gathering approaches that might be more appropriate for different organizational stages
  • Suggestions for both funders and NGOs on how to think about evidence generation

This comes from my conversations with several NGO founders. I believe the EA community could benefit from a more nuanced discussion about evidence hierarchies and when different types of evaluation make sense.

I would love to see this. Not a take I've seen before (that I remember). 

This sounds like it could be interesting, though I'd also consider whether some of the points are fundamentally about RCTs. E.g., "80% statistical power meaning 20% chance of missing real effects": nothing inherently says an RCT should only be powered at 80%, or that the approach should even be one of null hypothesis significance testing.

Fernando Irarrázaval 🔸
Good point. It's worth clarifying that the 80% power standard comes from academic norms, not an inherent RCT requirement. NGOs should choose their statistical thresholds based on their specific needs, budget, and risk tolerance.
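For illustration, here's a minimal sketch (my own, not something from the planned post) of the trade-off being described, assuming a two-sample t-test, a small standardized effect size, and the statsmodels library:

```python
# Required sample size per arm at different power levels, to illustrate that the
# common 80% threshold is a convention with a cost/risk trade-off, not a fixed
# requirement of RCTs. The effect size and alpha below are hypothetical choices.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.2  # a small standardized effect (Cohen's d), chosen for illustration

for power in (0.80, 0.90, 0.95):
    n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=power)
    print(f"power={power:.0%}: ~{n_per_arm:.0f} participants per arm "
          f"({1 - power:.0%} chance of missing a real effect of this size)")
```

Roughly, moving from 80% to 95% power at the same effect size requires about two-thirds more participants per arm, which is part of why the 80% convention persists despite the 20% miss rate.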

I would welcome a blog post about RCTs, and if you decide to write one, I hope you consider the perspective below.

As far as I can tell, ~0% of nonprofits are interested in rigorously studying their programs in any way, RCTs or otherwise, and I can't help but suspect that this is largely because, when we do run RCTs, we mostly find that these cherished programs have ~no effect. It's not at all surprising to me that most charities that conduct RCTs feel pressured to do so by donors; but on the other hand, basically all charity activities ultimately flow from don... (read more)

Fernando Irarrázaval 🔸
This is a great point. There's an important distinction, though, between evaluating new programs led by early-stage NGOs (like those coming out of Charity Entrepreneurship) and established programs directing millions in funding. I think RCTs make sense for the latter group. There's also a difference between typical NGOs and EA-founded ones. In my experience, EA founders actively want to rigorously evaluate their programs; they don't want to work on ineffective interventions.

Would also love this. I think a useful contrast would be A/B testing in big tech firms. My amateur understanding is that big tech firms can and should run hundreds of “RCTs” because:

  • No need to acquire subjects.
  • Minimal disruption to business since you only need to siphon off a minuscule portion of your user base.
  • Tech experiments can finish in days while field experiments need at least a few weeks and sometimes years.
  • If we assume treatment effects are heavy-tailed, then a big tech firm running hundreds of A/B tests is more likely to learn of a weird trick that grows the business, compared to an NGO that may only get one shot. 
Fernando Irarrázaval 🔸
Yes, exactly. The marginal cost of an A/B test in tech is incredibly low, while for NGOs an RCT represents a significant portion of their budget and operational capacity. This difference in costs explains why tech can use A/B tests for iterative learning, trying hundreds of small variations, while NGOs need to be much more selective about what to test. And despite A/B testing being nearly free, most decisions at big tech firms aren't driven by experimental evidence.

How people who write on the EA Forum and LessWrong can have non-obvious, significant positive impact by influencing organizations (like mine), both through culture and through the merit of their reasoning.

Personally I'd be so keen on seeing that - it's part of the pitch that I make to authors. 

"EA for hippies" - I managed to explain effective altruism to a group of rich hippies that were in the process of starting up a foundation, getting them on-board with donating some of the revenue to global health charities. 

The post would detail how I explained EA to people who are far from the standard target audience.

I would very much like to see something like this. Being able to communicate EA ideas to people who are roughly aligned in terms of many altruistic values is useful.

I have a hastily written draft from a while back called "Cause neutrality doesn't mean all EA causes matter equally". It's a corrective to people sometimes using "cause neutrality" as a justification for not doing cause prioritisation, or for treating current EA cause areas as canonical and equally deserving of funding or effort. I didn't finish it because I ran out of steam and was concerned I might be making up a guy to get mad at. 
I'll consider posting it for Draft Amnesty, especially if anyone is interested in seeing this take written up.

Very much in favor of posts clarifying that cause neutrality doesn't require value neutrality or deference to others' values.

Some things you might want to do if you are making a weighted factor model

Weighted factor models are commonly used within EA (e.g. by Charity Entrepreneurship/AIM and 80,000 Hours). Even the formalised Scale, Solvability, Neglectedness framework can, itself, be considered a form of weighted factor model.

However, despite their wide use, weighted factor models often neglect important methodological techniques that could test and improve their robustness, which may threaten their validity and usefulness. RP's Surveys and Data Analysis team previously consulted for a project that was using a WFM, and used these techniques to help them understand some things that were confusing them about the behaviour of their model, but we've never had time to write up a detailed post about these methods. Such a post would discuss topics such as the following (a minimal illustrative sketch appears after the list):

  • Problems with ordinal measures
  • When (not) to rank scores
  • When and how (not) to normalise
  • How to make interpretable rating scales
  • Identifying the factors that drive your outcomes
  • Quantifying and interpreting disagreement / uncertainty
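As a concrete illustration of the kind of model being discussed, here is a minimal sketch of a weighted factor model, using made-up factors, weights, and scores rather than anything from AIM's or 80,000 Hours' actual models:

```python
# A toy weighted factor model: rate each option on each factor, normalise the
# ratings to a common scale, and combine them with importance weights.
# All factors, weights, and scores here are hypothetical.
import numpy as np

factors = ["scale", "solvability", "neglectedness"]
weights = np.array([0.5, 0.3, 0.2])  # hypothetical importance weights (sum to 1)
raw_scores = {                        # hypothetical 1-10 ratings per option
    "Intervention A": np.array([9, 4, 6]),
    "Intervention B": np.array([5, 8, 7]),
    "Intervention C": np.array([7, 6, 3]),
}

# Min-max normalise each factor across options so no single factor's scale dominates.
matrix = np.array(list(raw_scores.values()), dtype=float)
normalised = (matrix - matrix.min(axis=0)) / (matrix.max(axis=0) - matrix.min(axis=0))

totals = normalised @ weights
for name, total in sorted(zip(raw_scores, totals), key=lambda pair: -pair[1]):
    print(f"{name}: {total:.2f}")
```

Even this toy version runs into several of the issues above: the ranking can change depending on whether and how you normalise, and ordinal 1-10 ratings are being treated as if the gaps between them were equal.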

This would be great to read; I walked away from at least one application process because I couldn't produce a decent WFM. I hope you write it!

How to interpret the EA Survey and Open Phil EA/LT Survey.

I think these surveys are complementary and each have different strengths and weaknesses relevant for different purposes.[1] However, I think what the strengths and weaknesses are, and how to interpret the surveys in light of them, is not immediately obvious. And I know that in at least some cases, decision-makers have had straightforwardly mistaken factual beliefs about the surveys, which has misled them about how to interpret them. This is a problem if people mistakenly rely on the results of only one of the surveys, or assign the wrong weights to each survey, when answering different questions.

A post about this would outline the key strengths and weaknesses of the different surveys for different purposes, touching on questions such as:

  • How much our confidence should change when we have a small sample size from a small population (a small numerical illustration follows this list).
  • How concerned we should be about biases in the samples for each survey and what population we should be targeting.
  • How much the different questions in each survey allow us to check and verify the answers within each survey.
  • How much the results of each survey can be verified and cross-referenced with each other (e.g. by identifying specific highly engaged LTists within the EAS).
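On the first question in the list above, here is a minimal sketch (my own illustration, not anything from either survey's methodology) of the finite population correction, which shows why a modest sample drawn from a small population can still be fairly informative:

```python
# 95% margin of error for a proportion, with and without the finite population
# correction. The sample and population sizes below are hypothetical.
import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    """Worst-case (p=0.5) margin of error; applies the finite population
    correction when a population size N is supplied."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * se

print(f"n=100, effectively infinite population: ±{margin_of_error(100):.1%}")
print(f"n=100, population of 300:               ±{margin_of_error(100, N=300):.1%}")
```

The same sample of 100 gives roughly ±9.8% for a very large population but about ±8.0% for a population of 300, and the correction grows as the sampled fraction of the population rises.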

 

  1. ^

    Reassuringly, they also seem to generate very similar results when we directly compare them, adjusting for differences in composition (i.e. only looking at highly engaged longtermists within the EA Survey).

Nice. I'd find this super interesting!

I'm thinking of writing a longer/ more nuanced collaborative piece discussing global vs local EA community building that I touched on in a previous post.

At some point I'd love to post something on ‘How to think about impact as a journalist’. I've accumulated a few notes and sources on the subject, and it's a question I often come back to, since it concerns me directly. But it's a big one and I haven't yet decided how to tackle it :)

Might be a nice candidate for a bullet-point outline draft amnesty post (like this one)? There's no rule that you can't republish it as a full post later on, and perhaps you could get some feedback/ ideas from the comments on a draft amnesty post...

I'm going to post about a great paper I read about the National Woman's Party and 20th-century feminism, which I think has relevance to the EA community :)

Reputation Hardening

Prompted largely by the fall in EA credibility in recent years, and by being unsatisfied with GiveWell's lack of independent verification of the charities they recommend.

Here is a lightly edited, AI-generated slop version:

Reputation Hardening: Should GiveWell Verify Charity Data Independently?

"Reputation hardening" involves creating more resilient reputations.

Recent events have shown how reputation damage to one EA entity can affect the entire movement's credibility and therefore funding and influence. While GiveWell's evaluation process is thorough, it largely relies on charity-provided data. I propose they consider implementing independent verification methods.

Applying to GiveWell/GHD

These measures could help detect potential issues early and strengthen confidence in effectiveness estimates.

This is a preliminary idea to start discussion. What other verification methods or implementation challenges should we consider?

I’d like to write: 

A post about making difficult career decisions, with examples of how I made my own decisions, some tools I used to make them, and how they worked out. I have it roughly written but would definitely need feedback from you, Toby, before I post :))

A post about mental health: why I'm focusing on it this year, why I think more people in EA should focus on it, what exactly I'm doing, what's working, etc. I haven't written it yet, but a lot of people are asking about it, so I do think there is potential value. 

Sounds great, and always happy to give feedback :)

I would write about how there's a collective action problem regarding reading EA Forum posts. People want to read interesting, informative, and impactful posts, and karma is a signifier of this. So people will often not read posts, especially on topics they are not familiar with, unless they have already reached some karma threshold. Given how time-sensitive front-page visibility is without karma accumulation, and how unlikely relatively low-karma posts are to be read once off the front page, good posts could be entirely ignored. On the other hand, some early traction could result in OK posts getting very high karma because a higher volume of people have been motivated to check the post out. 

 

I think this could be partially addressed by having volunteers, or even paid readers, commit to reading posts within a certain time frame and upvoting (or not, or downvoting) as appropriate. It might be a better use of funds than myriad cosmetic changes. 

Below is a post I wrote that I think was good (or at least worthy of discussion), but which was probably ignored because people wanted to free-ride on others' early evaluation. It discusses how jobs in which the performance metrics actually used are orthogonal to many of the ways good can be done may be opportunities for significant impact. 

 

https://forum.effectivealtruism.org/posts/78pevHteaRxekaRGk/orthogonal-impact-finding-high-leverage-good-in-unlikely

My previous attempt at predicting what I was going to write got 1/4, which ain't great.

This is partly planning fallacy, partly real life being a lot busier than expected (with Forum writing being one of the first things to drop), and partly an increasing feeling of gloom and disillusionment with EA, so that I don't have the same motivation to write or contribute to the Forum as I did previously.

For the things that I am still thinking of writing, I'll add separate comments to this post, so that votes and comments can be attributed to each idea individually.

I do want to write something along the lines of "Alignment is a Political Philosophy Problem"

My takes on AI, and the problem of x-risk, have been in flux over the last 1.5 years, but they do seem to be more and more focused on the idea of power and politics, as opposed to finding a mythical 'correct' utility function for a hypothesised superintelligence. Making TAI/AGI/ASI go well therefore falls in the reference class of 'principal-agent problem'/'public choice theory'/'social contract theory' rather than 'timeless decision theory'/'coherent extrapolated volition'. The latter two are poor answers to an incorrect framing of the question.

Writing that influenced me on this journey:

I also think this view helps explain the ... (read more)

Sharmake
My own take is that while I don't want to defend the "find a correct utility function" approach to alignment as sufficient at this time, I do think it is necessary, and that the modern era is an anomaly in how much we can get away with misalignment being checked by institutions that go beyond an individual. The basic reason we can get away with not solving the alignment problem is that humans depend on other humans; in particular, you cannot replace humans with much cheaper workers whose preferences can be controlled arbitrarily. AI removes the need to depend on other humans, which is a critical part of how we get away with not needing the correct utility function. I like the Intelligence Curse series because it points out that when an elite doesn't need the commoners for anything and the commoners have no selfish value to the elite, the default outcome is that the elites let the commoners starve unless the elites are value-aligned. The Intelligence Curse series is here: https://intelligence-curse.ai/defining/ In this analogy, the AIs are the elites and the rest of humanity is the commoners.

I don't think anyone wants or needs another "Why I'm leaving EA" post, but I suppose if people really wanted to hear it, I could write it up. I'm not sure I have anything new or super insightful to share on the topic.

I have some initial data on the popularity and public/elite perception of EA that I wanted to write into a full post, something along the lines of What is EA's reputation, 2.5 years after FTX? I might combine my old idea of a Forum data analytics update into this one to save time.

My initial data/investigation into this question ended up being a lot more negative than other surveys of EA. The main takeaways are:

  • Declining use of the Forum, both in total and amongst influential EAs
  • EA has a very poor reputation in the public intellectual sphere, especially on Twitter... (read more)

A BOTEC of base rates of moderate-to-severe narcissistic traits (i.e., clinical but not necessarily diagnosed) in founders, and their estimated costs to the ecosystem. My initial research suggests unusually high concentrations in AI safety relative to other cause areas and the general population.

My ideas for Draft Amnesty Week are posted as replies to this comment so they can be voted on separately:

Cosmological Fine-Tuning Considered:

The title’s kind of self-explanatory – over time I’ve noticed the cosmological fine-tuning argument for the existence of God become something like the most favored argument, and learning more about it has made me consider it more formidable than I used to think.

I’m ultimately not convinced, but I do consider it an update, and it makes for a good excuse for me to talk more about my views on things like anthropic arguments, outcome pumps, the metaphysics of multiverses, and interesting philosophical consi... (read more)

Topic from last round:

Okay, so, this is kind of a catch-all. Of the possible post ideas I commented with last year, I never posted or wrote “Against National Special Obligation”, “The Case for Pluralist Evaluation”, or “Existentialist Currents in Pawn Hearts”. So, this is just the comment for “one of those”.

Observations on Alcoholism Appendix G:

This would be another addition to my Sequence on Alcoholism – I’ve been thinking in particular of writing a post listing out ideas about coping strategies/things to visualize to help with sobriety. I mention several in earlier appendices in the sequence – things like leaning into your laziness or naming and yelling at your addiction – but I don’t have a neat collection of advice like this, which seems like one of the more useful things I could put together on this subject.

Mid-Realist Ethics:

I occasionally bring up my meta-ethical views in blog posts, but I keep saying I’ll write a more dedicated post on the topic and never really do. A high-level summary includes stuff like: “ethics” as I mean it has a ton of features that “real” stuff has, but it lacks the crucial bit, which is actually being a real thing. The ways around this tend to fall into one of two major traps – either making a specific, unlikely empirical prediction about the view, or labeling a specific procedure “ethics” in a way that has no satisfying difference f... (read more)

Moral problems for environmental restoration:

A post idea I’ve been playing with recently is converting part of my practicum write-up into a blog post about the ethics of environmental restoration projects. My practicum was with the “Billion Oyster Project”, which seeks to use oyster repopulation for geoengineering/ecosystem restoration. I spent a big chunk of my write-up worrying about the environmental ethics of this, and I’ve been thinking that worrying could be turned into a decent blog post.

I’ll discuss welfare biology briefly, but lots of it will s... (read more)

I'll post my ideas as replies to this, so they can be voted on separately.

(See here for a draft I whipped up for this, and feel free to comment!) Hayden Wilkinson’s “In defence of fanaticism” argues that in decision theory you should always take a lower-probability chance of a higher-value reward over a higher-probability chance of a lower-value one, or face serious problems. I think accepting his argument introduces new problems that aren’t described in the paper:

  1. It is implied that each round of Dyson’s Wager (e.g. for each person in the population being presented with the wager) has no subsequent effect on the probability distribution for future rounds, which is
... (read more)

(See here for a draft I whipped up for this, and feel free to comment!) An Earth-originating artificial superintelligence (ASI) may reason that the galaxy is busy in expectation, and that it could therefore eventually encounter an alien-originating ASI. ASIs from different homeworlds may find it valuable on first contact to verify whether they can each reliably enter into and uphold agreements, by presenting credible evidence of their own pro-social behaviour with other intelligences. If at least one of these ASIs has never met another, the only such agreemen... (read more)

Comments

I'm considering writing a post on why it's hard for some people who intellectually agree with EA foundations to be emotionally passionate about EA (and really "doing good" in general). This is mostly based on my experience as a university group organiser, my tendency to be drawn to EA-lite people who end up leaving the community, and the fact that I am not very passionate about EA. The very fuzzy TL;DR is that caring about cause prioritisation requires tolerating high levels of uncertainty, but the average person needs to be able to see concrete steps to take, and how their contribution helps people, in order to feel a fervour that propels them into action. This is doubly true for people who are not surrounded by EAs.

To combat this, I argue for one actionable item and one broader, more abstract ideal. The action item is to have a visual, easily digestible EA roadmap that links broader cause areas with specific things people and orgs are doing. Ideally, the roadmap would almost be like a bunch of "business pitches" to attract new employees, explaining the pain points, the solutions suggested, and how people can get involved. The broader ideal I want to advocate for is for the EA philosophy to be principles-based, but for day-to-day EA to be missions-based (which I view as different from being cause-area-oriented). 

It's all just vibes in my head right now, but I'd be curious to know if people would want to see interviews/surveys/any sort of data to back up what I'm saying.
