This is a special post for quick takes by OdinMB 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

EA-aligned news curation service - prototype online

Actually Relevant is a news curation service that evaluates stories based on how relevant they are for humanity and its long-term future. The website is a first prototype. I hope that Actually Relevant will promote EA causes and perspectives to more people, and that it will free EA-aligned readers from the headaches of irrelevant news.

I'm looking for

  • feedback: What do you like? How can we improve?
  • interview partners: 15 minutes to understand how you read the news and what you want to get out of your news consumption
  • partners in crime: Let me know if you want to get involved

This story on the frontpage was rated a conservative 4/5 for importance, which your style guide says means

Major: affecting >1bn people in a significant way, having major implications for social, political, economic, or legal norms and systems on an international level, or representing important scientific or technological progress in an important area

But the article text only says that it directly affects "Over 70,000" people. There are also some speculative comments that this could lead to a general "reevaluation of international legal norms and systems around land rights", but this seems quite unlikely to me. I would expect that you could write multiple stories a year about similar occurrences.

Thanks so much for taking a deeper look at one of the articles! I think you're right: a somewhat lower rating seems more appropriate in this case.

I believe that two things are true for the algorithm behind Actually Relevant: 1) Almost all posts are more important for humanity than 90% of news articles by other outlets. In that sense, it's already useful. 2) Many relevance analyses are still off by at least one grade on the rating scale, meaning that some posts get a "major" or "critical" tag that should not get it. The idea is to use community and expert feedback to finetune the prompts to get even better results in the future. I also want to involve a human editor who could double-check and adjust dubious cases.
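To make the workflow concrete, here is a minimal sketch of what a prompt-based relevance rating with a human-review flag could look like. This is not Actually Relevant's actual implementation; the `call_llm` function, the 1-5 scale, and the review threshold are assumptions I'm making based on the rubric quoted above.

```python
from dataclasses import dataclass

# Hypothetical rubric prompt, paraphrasing the style guide's "Major" (4/5) definition.
RUBRIC = """Rate the story's importance for humanity on a 1-5 scale.
4 (Major): affects >1bn people in a significant way, has major implications for
social, political, economic, or legal norms and systems on an international level,
or represents important scientific or technological progress in an important area.
Reply with the number only."""


@dataclass
class Assessment:
    rating: int
    needs_human_review: bool


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model call the real service makes."""
    raise NotImplementedError


def assess_story(article_text: str, review_threshold: int = 4) -> Assessment:
    reply = call_llm(f"{RUBRIC}\n\nStory:\n{article_text}")
    rating = int(reply.strip())
    # Stories rated at or above the threshold get the "major"/"critical" tag,
    # so those are the dubious cases a human editor would double-check.
    return Assessment(rating=rating, needs_human_review=rating >= review_threshold)
```

Community feedback like the comment above could then be used to adjust the rubric prompt, while the review flag routes borderline cases to an editor.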

In the post you referenced, the AI says: "The eviction has affected over 70,000 people and risks cultural extinction for the Maasai people. It also highlights the need for a reevaluation of international legal norms and systems around land rights. In certain scenarios, this situation could lead to a broader movement for indigenous land rights in Tanzania and beyond, making it an issue that is far more relevant for humanity than the number of directly affected people would suggest." I think it's a good sign that the algorithm realized that the extinction of an entire culture and developments around indigenous land rights should lead to a higher rating than the number of directly affected people would suggest. It might still be off in this case, but I'm optimistic that additional finetuning can get us there.

Looking for partners in crime to explore a "scope sensitive news provider"

I would like to find out if there is a market for a news provider that selects stories based on how much they matter to sentient life in the universe.[1] Specifically, I would like to run a few experiments following the Lean Startup approach, like pretending that the service already exists to see how many people would subscribe.

Please reach out

  • if you want to be involved in this exploration,
  • if you would pay for scope sensitive news, or
  • if you have ideas that you think we should consider in the exploration or in the prototypes.
  1. ^

    I took this idea more seriously when I read the post "What happens on the average day". rosehadshar mentions "scope sensitivity" as their first criterion for an ideal news provider and defines it as "a serious, good faith attempt to tell the stories that matter most to the most sentient life."
