This is a special post for quick takes by Sudhanshu Kasewa. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

 

I've now spoken to ~1,400 people as an advisor with 80,000 Hours, and if there's one quick thing I think is worth more people doing, it's a short reflection exercise about one's current situation.

Below are some (clusters of) questions I often ask in an advising call to facilitate this. I'm often surprised by how much purchase one can get simply from this -- noticing one's own motivations, weighing one's personal needs against a yearning for impact, identifying blind spots in current plans that could be triaged and easily addressed, etc.

 

A long list of semi-useful questions I often ask in an advising call

 

  1. Your context:
    1. What’s your current job like? (or, for the roles you’ve had in the last few years…)
      1. The role
      2. The tasks and activities
      3. Does it involve management?
      4. What skills do you use? Which ones are you learning?
      5. Is there something in your current job that you want to change, that you don’t like?
    2. Default plan and tactics
      1. What is your default plan?
      2. How soon are you planning to move? How urgently do you need to get a job?
      3. Have you been applying? Getting interviews, offers? Which roles? Why those roles?
      4. Have you been networking? How? What is your current network?
      5. Have you been doing any learning, upskilling? How have you been finding it?
      6. How much time can you find to do things to make a job change? Have you considered e.g. a sabbatical or going down to a 3/4-day week?
      7. What are you feeling blocked/bottlenecked by?
    3. What are your preferences and/or constraints?
      1. Money
      2. Location
      3. What kinds of tasks/skills would you want to use? (writing, speaking, project management, coding, math, your existing skills, etc.)
      4. What skills do you want to develop?
      5. Are you interested in leadership, management, or individual contribution?
      6. Do you want to shoot for impact? How important is it compared to your other preferences?
        1. How much certainty do you want to have about your impact?
      7. If you could picture your perfect job – the perfect combination of the above – which of these preferences would you relax first in order to consider a role?
  2. Reflecting more on your values:
    1. What is your moral circle?
    2. Do future people matter?
    3. How do you compare problems?
    4. Do you buy this x-risk stuff?
    5. How do you feel about expected impact vs certain impact?
  3. For any domain of research you're interested in:
    1. What’s your answer to the Hamming question? Why?

 

If possible, I'd recommend trying to answer these questions out loud with another person listening (just like in an advising call!); they might be able to notice confusions, tensions, and places worth exploring further. Some follow-up prompts that might be applicable to many of the questions above:

  1. How do you feel about that?
  2. Why is that? Why do you believe that?
  3. What would make you change your mind about that?
  4. What assumptions is that built on? What would change if you changed those assumptions?
  5. Have you tried to work on that? What have you tried? What went well, what went poorly, and what did you learn?
  6. Is there anyone you can ask about that? Is there someone you could cold-email about that?

 

Good luck!

With another EAG coming up, I thought now would be a good time to push out this draft-y note. I'm sure I'm missing a mountain of nuance, but I stand by the main messages:

 

"Keep Talking"

I think there are two things EAs could be doing more of, on the margin. They are cheap, easy, and have the potential to unlock value in unsuspecting ways.


Talk to more people

I say this 15 times a week. It's the most no-brainer thing I can think of, with a ridiculously low barrier to entry; it's usually net-positive for one party while often only drawing on unproductive hours of the other. Almost nobody would be where they are without the conversations they had. Some anecdotes:

- A conversation led to both parties discovering a good mentor-mentee fit, which led to one of them dropping out of a PhD, being mentored on a project, and becoming an alignment researcher.

- A first conversation led to more conversations which led to more conversations, one of which illuminated a new route to impact which this person was a tremendously good fit for. They're now working as a congressional staffer.

- A chat with a former employee gave an applicant insight about a company they were interviewing with and helped them land the job (many, many such cases).

- A group that runs a valuable fellowship programme germinated from a conversation between three previously unacquainted people (the founders) (again, many such cases).
 

Make more introductions to others (or at least suggest who they should reach out to)

By hoarding our social capital we might leave ungodly amounts of value on the table. Develop your instincts and learn to trust them! Put people you speak with in touch with other people who they should speak with -- especially if they're earlier in their discovery of using evidence and reason to do more good in the world. (By all means, be protective of those whose time is 2 OOMs more precious; but within +/- 1, let's get more people connected: exchanging ideas, improving our thinking, illuminating truth, building trust.) 

At EAG, at the very least, point people to others they should be talking to. The effort in doing so is so, so low, and the benefits could be massive.

 


One habit I often recommend to make that second piece of advice stick: introduce people to each other as soon as you think of it (i.e. pause the conversation and send them an email address or a list of names, or open a thread between the two people).

I often pause 1:1s to find links or send someone a message, because I'm prone to forgetting follow-up actions unless I do them immediately (or write them down).
