Training for Good recently announced the Red Team Challenge, a programme that brings people together to "red team" important ideas within effective altruism. The programme provides training in red-teaming best practices and then forms small teams of 2-4 people to critique a particular claim and publish the results.
We are looking for the best ideas to red team, and we will pay $100 for our top answer and $50 each for our second and third picks.
Constraints
- It should be possible for relatively inexperienced researchers to reach a tangible result within ~50-60 hours of total research time, including the write-up (divided among 2-4 team members)
- The red teaming question needs to be:
- Precisely defined with a clear goal and scope
- One sentence long
- Feel free to add a short explanation of up to 100 words if the question is not fully self-explanatory or if you want to provide additional context.
- Related to effective altruism in some way
How to participate
- Leave your answer as a comment on this post, or send it to us via the Forum messaging system.
- If you have questions or want to clarify something, please ask in a comment on this post.
- We don't want users discussing other people's answers, so we will moderate those comments away. You may, however, upvote or downvote comments as per normal Forum usage.
- We will end the competition on March 28, 2022.
How we decide who wins
- We will pick the $100 prize winner based solely on our own judgment of which answer is most useful for our intended goals. The same goes for the second and third prizes.
- We might find that none of the answers are what we wanted (likely because we under-specified what we want). In that case, we will offer only $10 each to the 1st, 2nd, and 3rd best answers. My fragile guess is that there is a 30% chance of this happening.
- We will DM the winners when the competition closes. We might also announce the winners publicly on this post, but we'll check in with you first.
Examples
- “Make the best case for why this recommendation of charity X should not convince a potential donor to donate”
- “Scrutinize this career profile on X. Why might it turn out to be misleading/counterproductive/unhelpful for a young aspiring EA?”
- “Why might one not believe in the arguments for:
  - EA university groups promoting effective giving?
  - hits-based giving being the most impactful approach to philanthropy at the current margin?
  - insects being considered moral patients?”
Red-team - "Are longtermism and virtue ethics actually compatible?"
A convincing red-team wouldn't need a complex philosophical analysis, but rather a summary of divergences between the two theories and an exploration of five or six 'case studies' where consequentialist-type behaviour and thinking are clearly 'unvirtuous'.
Explanation - Given just how large and valuable the long-term future could be, it seems plausible that longtermists should depart from standard heuristics around virtue. For instance, a longtermist working in biosecurity who cares for a sick relative might have good consequentialist reasons to abandon their caring obligations if a sufficiently promising position came up at an influential overseas lobbying group. I don't think EAs have really accepted that there is a tension here; doing so seems important if we are to have open, honest conversations about what EA is, and what it should be.
I would be interested in this one.
To provide a relevant anecdote to the Benjamin Todd thread (n = 1, of course): I had known about EA for years and agreed with the ideas behind it. But the thing that got me to actually take concrete action was joining a group that, among other things, asked its members to do a good deed each day. Once I got into the habit of doing good deeds (and, even more importantly, actively looking for opportunities to do good deeds), however small or low-impact, I began thinking about EA more, and finally commit...