
Seeds of Science is a new journal (funded through Scott Alexander's ACX grants program) that publishes speculative or non-traditional articles on scientific topics. Peer review is conducted through community-based voting and commenting by a diverse network of reviewers (or "gardeners" as we call them). 

We just sent out an article for review that may be of particular interest to the EA community, so I wanted to see if anyone would be interested in joining us as a gardener to review the article. It is free to join and anyone is welcome (we currently have gardeners from all levels of academia and outside of it). Participation is entirely voluntary - we send you submitted articles and you can choose to vote/comment or abstain without notification (so it's no problem if you don't plan on reviewing often but just want to take a look now and then at what kinds of articles people are submitting).

To register, please email info@theseedsofscience.org with your name, title (can be anything/optional), institution (same as title), and link (personal website, Twitter, or LinkedIn is fine) for your listing on the gardeners page. From there, it's pretty self-explanatory - I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments.

Happy to answer any questions about the journal through email or in the comments below. Here is the title/abstract for the article. 

Moral Weights of Six Animals, Considering Viewpoint Uncertainty

Abstract

Many utilitarians would like a number to use to evaluate the moral impact of actions that affect animals. However, there is great disagreement among scholars in animal ethics, both about how much different animals can suffer and about how much that suffering morally matters. This paper produces estimates of moral weight that account for this uncertainty. We ran a Monte Carlo simulation that samples the ranges of the major viewpoints scholars hold in the field, showing the spread of uncertainty for how we should treat six representative animals: crickets, salmon, chickens, pigs, cows, and elephants. The results show that the uncertainty is very large, with a 90% confidence interval ranging from an animal having no value to its being valued as much as a human being. We therefore also present 20% and 40% confidence intervals, as well as the median and geometric mean.
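For readers who want a feel for the approach, here is a minimal sketch of a viewpoint-weighted Monte Carlo estimate for a single hypothetical animal. This is not the paper's actual model: the viewpoints, ranges, and prevalence weights below are placeholder assumptions chosen only to illustrate how sampling across viewpoints produces the wide intervals described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Placeholder "viewpoints" for one hypothetical animal, each expressed as an
# assumed range of moral weight relative to a human. None of these numbers
# come from the paper; they only illustrate the mechanics.
viewpoints = {
    "neuron-count proxy":  (1e-4, 0.05),
    "behavioural proxy":   (0.01, 0.5),
    "equal consideration": (0.9, 1.0),
}
prevalence = [0.4, 0.4, 0.2]   # assumed share of scholars holding each view

names = list(viewpoints)
picks = rng.choice(len(names), size=n_draws, p=prevalence)
weights = np.empty(n_draws)
for i, name in enumerate(names):
    low, high = viewpoints[name]
    mask = picks == i
    # sample log-uniformly within each viewpoint's assumed range
    weights[mask] = np.exp(rng.uniform(np.log(low), np.log(high), mask.sum()))

for label, lo, hi in [("90% interval", 5, 95), ("40% interval", 30, 70), ("20% interval", 40, 60)]:
    print(label, np.percentile(weights, [lo, hi]))
print("median:", np.median(weights))
print("geometric mean:", np.exp(np.log(weights).mean()))
```

Because the samples span several orders of magnitude, the inner intervals, median, and geometric mean are far more informative summaries than the 90% interval, which is one way to read the abstract's choice of statistics.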


 

Comments

What do you think of taking the log of the neuron count, dividing that by neural complexity, and adding the individual's total wellbeing impact to get a relative moral value? Intuitively, this can make sense:

1) The more neurons, the more the individual can feel (though the intensity of perception may increase more slowly than the number of neurons).

2) The higher the neural complexity, the less intensely the individual perceives. Complexity can correlate with the ability to cope with exteroceptive stimuli, because the individual has more experience - whether rational or emotional/intuitive, and whether inherited from ancestors or gained during its own life - to 'deal with them'.[1]

3) The individual's impact on net wellbeing[2] should be added. This is the weighting I am suggesting.

For humans, especially privileged ones, 3) could make the contribution from the individual's own wellbeing negligible in the total, because they can have much more influence on others. Conversely, for individuals with fewer choices, including confined non-human animals,[3] the contribution of 3) can be neglected because these animals do not influence others. A rough sketch of this weighting, with placeholder numbers, is given below.
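To make the proposal concrete: the neuron counts below are ballpark published figures, but the 'neural complexity' scores and wellbeing-impact terms are made-up placeholders, since I have not specified how to measure them.

```python
import math

def relative_moral_value(neuron_count, neural_complexity, wellbeing_impact):
    # log of neuron count, scaled down by neural complexity, plus the
    # individual's net impact on the wellbeing of others
    return math.log(neuron_count) / neural_complexity + wellbeing_impact

# Purely illustrative inputs: neuron counts are rough published estimates;
# the complexity scores and impact terms are placeholders.
print(relative_moral_value(neuron_count=2.2e8,  neural_complexity=3.0,
                           wellbeing_impact=0.0))   # confined chicken
print(relative_moral_value(neuron_count=8.6e10, neural_complexity=5.0,
                           wellbeing_impact=2.0))   # human with wide influence
```

Since the terms simply add, whether 3) dominates depends entirely on how the impact term is scaled relative to the log-neuron term.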

How does this compare with what you found (and which estimate would be more accurate)?

  1. ^

    This can correlate with 3), but since it enters as a separate term in a sum, there should be no double-counting.

  2. ^

    The devil can be in determining which counterfactual to use. For humans, this can be i) impact due to action, ii) impact due to inaction (to whatever extent that should be understood in a utilitarian way), iii) impact due to unfulfilled potential - e.g. someone did not study to be able to influence decision-makers even though they could have, or iv) impact due to unfulfilled capacity - e.g. someone who did study and can influence decision-makers chooses another job. For animals, this can be similar, except that animals' free will is intuitively understood to be lower. For example, if a chicken in a crammed barn chose to try killing others instead of upskilling them by teaching young chickens to prevent diseases, that can be attributed to the norms and environment set by the human caretakers rather than to the chicken's own choice.

  3. ^

    Assuming that they cannot influence the wellbeing of others, e.g. by presenting a positive attitude.

This sounds great to me, but I'm not the author - I just run the journal. We'd love to have you share your review of the article - "To register, please email info@theseedsofscience.org with your name, title (can be anything/optional), institution (same as title), and link (personal website, Twitter, or LinkedIn is fine) for your listing on the gardeners page. From there, it's pretty self-explanatory - I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments."

Thank you. I encourage you to

1) Encourage authors of EA-related articles to make their work publicly accessible

2) Post summaries of relevant articles on the EA Forum to facilitate discussion without the need to register and further ease the work of gardeners

Where is the article?

"To register, please email info@theseedsofscience.org with your  name, title (can be anything/optional), institution (same as title), and link (personal website, twitter, or linkedin is fine) for your listing on the gardeners page. From there, it's pretty self-explanatory - I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments."


 
