
or Maximizing Good Within Your Personal Constraints

Note: The specific numbers and examples below are approximations meant to illustrate the framework. Your actual calculations will vary based on your situation, values, and cause area. The goal isn't precision—it's to start thinking explicitly about impact per unit of sacrifice rather than assuming certain actions are inherently virtuous.

 

You're at an EA meetup. Two people are discussing their impact:

Alice: "I went vegan, buy only secondhand, bike everywhere, and donate 5% of my nonprofit salary to animal charities."

Bob: "I work in finance, eat whatever, and donate 40% of my income to animal charities."

Who gets more social approval? Alice. Who prevents more animal suffering? Bob—by orders of magnitude.

Alice's choices improve welfare for hundreds of animal-years annually through diet change and her $2,500 donation. Bob's $80,000 donation improves tens of thousands of animal-years through corporate campaigns. Yet Alice is seen as virtuous while Bob is viewed suspiciously.

This mismatch between virtue signals and actual impact is costing lives.

The Personal Constraint Framework

Every person operates under different constraints:

  • Financial resources (income, obligations)
  • Time and energy (work demands, health, caregiving)
  • Social/professional context (client expectations, family dynamics)
  • Psychological makeup (what depletes vs. energizes you)
  • Skills and leverage (what you're uniquely positioned to do)

These constraints define your personal impact function. Pretending everyone has the same function is like pretending everyone should wear the same size shoes.

The question isn't "are you doing good?" but "are you doing the most good possible given YOUR constraints?"

Optimize for the decade, not the week. Leave 10-20% of your capacity as slack—burnout serves no one, and opportunities arise unexpectedly.

Return on Sacrifice (RoS): The Core Metric

For any action you might take "for good," calculate:

RoS = (Impact × Sustainability) / Personal Cost

  • Impact: Actual good created (use consistent units within each cause)
  • Personal Cost: What it costs YOU specifically
  • Sustainability: Will you maintain this? (0.5 if you'll quit soon, 1.0 as a baseline, 1.5 if it gets easier over time)

Allocate your limited capacity to the highest RoS actions for you.
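
A minimal sketch of the arithmetic in Python; all numbers below are invented purely for illustration:

```python
def ros(impact: float, sustainability: float, personal_cost: float) -> float:
    """Return on Sacrifice = (Impact x Sustainability) / Personal Cost."""
    return (impact * sustainability) / personal_cost

# Hypothetical action: ~500 welfare-units/year of impact, sustainability 1.5
# (it gets easier over time), personal cost of 4 on a 1-10 scale.
print(ros(impact=500, sustainability=1.5, personal_cost=4))  # 187.5
```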

Case Studies: Where Good Intentions Go Wrong

Career: The Counterfactual Question

Sarah: Left her $200k consulting job for a $40k nonprofit role. The nonprofit would have hired someone roughly as capable. Her marginal contribution helps ~100 people annually beyond what her replacement would have done. Donates nothing now.

Marcus: Stayed in consulting and automated donating 30% of his income. Funds 2 full-time charity researchers whose positions wouldn't otherwise exist, helping thousands annually.

Rita: Left her $150k tech job for $60k at a biosecurity org. Her unique technical background means she's 10x more effective than the next-best candidate. Her work influences policy affecting millions.

Sarah gets social approval but creates minimal counterfactual good—someone else would have done 90% of what she does. Marcus and Rita both create massive good, through different paths that match their constraints and capabilities.

The key question: What's YOUR counterfactual impact? Against whom are you comparing yourself—the median plausible hire, or the actual runner-up?

Environmental Action: Personal vs. Systemic

Green Hannah: Never flies, grows own food, extensive recycling. Reduces personal emissions by 3-5 tons CO2/year. Spends 10+ hours weekly on this.

Offset Oliver: Flies for high-impact work, normal consumption, donates $5k/year to vetted climate initiatives. Funds removal/reduction of 50-200 tons CO2/year directly, or influences policy affecting thousands of tons.

One to two orders of magnitude difference for 1/10th the time.

Information and Influence

Purity Peter: Refuses any job connected to AI capabilities. Works in unrelated field, donates moderately.

Inside Ingrid: Works at major AI lab, influences safety practices from within. Helps shift millions in compute toward safety research. Without her, the position would go to someone who doesn't prioritize safety.

Who's actually reducing AI risk more?

Truth vs. Reach

Dr. Careful: Never simplifies, reaches 50 people/year with perfect epistemics. Converts 2 to effective giving.

Dr. Compelling: Uses persuasive clarity—plain language, directionally accurate summaries, publishes methods appendix. Reaches 5,000/year, converts 100 to effective giving.

High reach with epistemic hygiene beats low reach with perfect precision.

The Uncomfortable Truth About Offsets

If you would pay $X annually to avoid changing some behavior, but the harm from that behavior can be offset for $X/2, you should keep the behavior and donate $X.

Why? You offset the harm ($X/2) and create additional good ($X/2). The world is better off than if you'd grimly abstained.

The discipline: Each January, pre-price what you'd pay for likely exceptions (eating meat at family events, necessary flights, etc.). When you "buy" the exception, donate that amount immediately. Second purchase costs 2×, third costs 4×. No IOUs, no rationalizing later.
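
A sketch of that ledger in Python, assuming the doubling rule above; the exception names and base prices are placeholders, not recommendations:

```python
class ExceptionLedger:
    """Pre-priced exceptions: pay immediately on purchase; price doubles each time."""

    def __init__(self, base_prices: dict[str, float]):
        self.base_prices = base_prices       # set each January, in cold blood
        self.purchases: dict[str, int] = {}  # times each exception was bought
        self.donate_now = 0.0                # donate this immediately - no IOUs

    def buy(self, exception: str) -> float:
        n = self.purchases.get(exception, 0)
        price = self.base_prices[exception] * (2 ** n)  # 1x, 2x, 4x, ...
        self.purchases[exception] = n + 1
        self.donate_now += price
        return price

# Behaviors that would destroy your future capacity can be priced at
# infinity so you can never "buy" them.
ledger = ExceptionLedger({
    "meat at family event": 20.0,
    "necessary flight": 100.0,
    "systematic deception": float("inf"),
})
ledger.buy("meat at family event")  # $20 - donate today
ledger.buy("meat at family event")  # $40 - second purchase costs 2x
print(ledger.donate_now)            # 60.0
```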

This applies everywhere:

  • Diet → animal charity donations
  • Carbon → verified removal/reduction
  • Imperfect career → donation or influence from within
  • Consumption → effective giving

If something would predictably destroy your future capacity to do good (systematic deception, violence, dangerous information release), price it so high you'll never afford it. That's your guardrail without calling it one.

When Personal Practice Actually Matters

Sometimes personal practice has high RoS:

  1. When it affects your credibility with key audiences
  2. When it's genuinely easy for you (negative personal cost)
  3. When you have unusual counterfactual impact (you're 10x better than alternatives)
  4. When it builds capabilities you'll use for impact
  5. When it's a coordination mechanism that only works if everyone participates

These are empirical questions, not moral absolutes. Calculate, don't assume.

Your Personal Impact Portfolio

Different constraints → different optimal portfolios:

High-Earner, Time-Constrained

  • Maximize donations (30-50% if sustainable)
  • Buy convenience to protect time for highest-leverage work
  • Skip time-intensive personal practices with low RoS
  • One board seat or advisory role for leverage

Student, Time-Rich, Money-Poor

  • Build skills and credentials for future impact
  • Organize and movement-build where you have comparative advantage
  • Adopt low-cost personal practices if they're actually sustainable
  • Focus on increasing future earning/influence potential

Subject-Matter Expert

  • Everything into your specialty where you're irreplaceable
  • Influence beats personal practice
  • Guard your comparative advantage ruthlessly
  • Collaborate rather than duplicate others' work

Parent with Young Kids

  • Sustainable giving beats unsustainable higher amounts
  • Model values through choices, not rules
  • Preserve energy for what only you can do
  • Build systems that work with your life, not against it

The Reallocation Exercise

Take 30 minutes. List your current "good" activities:

  • Career choices and their counterfactual impact
  • Donation levels
  • Personal practices
  • Time allocations

For each, estimate:

  1. Impact (pick appropriate units)
  2. Personal cost to you (1-10 scale)
  3. Sustainability (0.5, 1.0, or 1.5)
  4. Calculate RoS = (Impact × Sustainability) / Cost

Find your obvious misallocations. What's high-cost, low-impact? What's low-cost, high-impact that you're not doing?
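
A sketch of the exercise in Python, with an entirely hypothetical portfolio (substitute your own activities and estimates):

```python
# Each entry: (impact in your chosen units, personal cost 1-10,
# sustainability multiplier 0.5 / 1.0 / 1.5). All numbers are made up.
portfolio = {
    "donate 10% of income":      (5000, 4, 1.5),
    "strict zero-waste routine": (30,   8, 0.5),
    "monthly volunteering":      (200,  2, 1.0),
    "vegan diet":                (400,  6, 1.0),
}

scored = {name: impact * sust / cost
          for name, (impact, cost, sust) in portfolio.items()}

ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: RoS = {score:.1f}")

print("Candidate to cut:", ranked[-1][0])    # high-cost, low-impact
print("Candidate to expand:", ranked[0][0])  # low-cost, high-impact
```

The printout isn't the point; writing down explicit numbers is what exposes the misallocations you'd otherwise justify by social approval.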

This month: Stop or reduce ONE low-RoS activity. Start or increase ONE high-RoS activity. Leave 20% of your capacity as slack.

Addressing the Predictable Objections

"This is just rationalization" Pre-pricing in cold blood and paying immediately prevents rationalization. Rationalization happens in the moment; this happens in advance.

"But virtue ethics!" Creating more good IS virtuous. Performance without impact is theater.

"What if everyone thought this way?" They already make these tradeoffs, just less honestly and effectively. Making it explicit increases total good.

"This will lead to value drift" Track your actual impact. Pick both leading indicators (effort) and lagging ones (outcomes). If outcomes don't improve, pivot.

"Some direct work really is highest-impact" Absolutely. If you're genuinely irreplaceable or 10x better than the alternative, direct work can dominate. But be honest about your counterfactual impact—most people overestimate it.

The Call to Action

Post your reallocation publicly. Share:

  • One low-RoS activity you're stopping/reducing
  • One high-RoS activity you're starting/increasing
  • Your honest assessment of your counterfactual impact
  • Your pre-priced exceptions for the year with escalation
  • Your "receipts now" commitment

Report back in 90 days with actual impact, not just intentions.

The Bottom Line

Your constraints are real. Your capacity is finite. You're not a perfect utilitarian machine—you're a flawed agent trying to do good in the world.

The question isn't whether you're virtuous by some abstract standard. It's whether you're creating the most good possible given who you actually are.

Some people's highest impact really is through direct work where they're irreplaceable. Others create more good by earning and donating. Still others by building movements, influencing policy, or conducting crucial research. The answer depends on your specific constraints and counterfactual impact.

What doesn't vary is this: optimizing for actual impact beats optimizing for the appearance of virtue. Every time.

That's not rationalization. That's optimization.

And in a world full of suffering that could be prevented, optimization isn't optional.


What's your highest RoS opportunity that you're not taking? What low-RoS activity are you doing from social pressure? What's your honest counterfactual impact in your current role? Share your reallocation in the comments.

Comments

I appreciate this article and find the core point compelling. However, I notice signs of heavy AI editing that somewhat diminish its impact for me.

Several supporting arguments come across as flimsy/obvious/grating/"fake" as a result. For example, the "Addressing the Predictable Objections" section reads more like someone who hasn't actually considered the objections and just gave the simplest answers to surface-level questions, rather than someone who deeply brainstormed or crowdsourced the objections to the framework. Additionally, the article's tendency towards binary framings makes it hard for me to think through the relevant tradeoffs.

The fundamental argument is strong. I also appreciate the emphasis towards truth and evident care to remove inaccuracies. I imagine there was significant editing effort to avoid hallucinations. Nonetheless the breezy style makes it hard for me to read, and I'd appreciate seeing it developed with more depth and authentic engagement with potential counterarguments.

Thanks for reading and engaging, Linch.

You're correct that I used AI as an editor - with limited time, it was that or no post at all. That resource allocation choice (ship something imperfect but provocative vs. nothing) exemplifies the framework itself. I think more people should use AI to help develop and present their ideas rather than letting perfectionism or time constraints prevent them from contributing to important discussions.

The post was meant to provoke people to examine their own sacrifice allocations, not to be a comprehensive treatise. The objections section covers the predictable first-order pushbacks that stop people from even considering the framework. Deeper counterarguments about replaceability, offset quality, and norm-setting are important but would require their own posts.

The binary framings you note are intentional - they make the core tension vivid. Most people's actual optimization should be marginal reallocation within their portfolio, but "consider shifting 20% of your sacrifice budget" doesn't create the same useful discomfort as "Bob does more good than Alice."

The core point is that we should recognize how individual particularities - income, skills, psychological makeup, social context - dramatically affect how each person can maximize their impact. What's high-RoS for one person may be terrible for another. When we evaluate both our own choices and others' contributions, we need to account for these differences rather than applying uniform standards of virtue. The framework makes these personal tradeoffs explicit rather than hidden.

Am I right that a bunch of the content of this response itself was written by an AI?

You're correct that I used AI as an editor - with limited time, it was that or no post at all.

What if you took whatever input you fed to the AI and posted that instead?

The "input" wasn't a clean document - it was scattered notes, examples, and iterative revisions across multiple sessions. The final post is more coherent and useful than raw process documentation would be. We don't ask other posters to share their drafts, notes, or feedback chains either. The substance and arguments are mine; AI helped with structure and editing.

That's fair. I was imagining you wrote an outline and then fed the outline into an LLM. I usually prefer reading outlines over long posts, and I think it's good practice to have a summary at the top of a post that's basically your outline.

Posts on this topic that I liked:


I fairly strongly disagree with "be honest about your counterfactual impact—most people overestimate it," and with the suggestion to only work at a nonprofit you consider effective if you think you're ~10x better than the counterfactual hire or "irreplaceable."

As an example, I'm confident that there are software developers who would have been significantly more impactful than me at my role at GWWC, but didn't apply, and the extra ~$/year that they are donating (if they are actually donating more in practice than they otherwise would) does not compensate for that.
I also think that there's a good chance that I would have done other vaguely impactful work, or donated more myself, if they had been hired instead of me, largely compensating for their missed donations.

Thanks for sharing these other posts. 

We have fairly different beliefs regarding replaceability of staff by orgs with funding (depending, of course, on the scarcity of the labor supply), but you can certainly apply the framework this post endorses with a wide range of discounting job impact due to replaceability.

I like the framing, and think this is important stuff, but I agree with Linch that the AI-flavor is grating and makes me wonder how much you stand by the words -- if I object to a specific point, will you say that it's just the AI being a bit fuzzy and I should instead just read your post directionally?

While it's definitely a real thing that higher standards mean fewer posts, and this is something I've struggled with a lot personally, I also think it's tremendously valuable to have posts where at least one person has carefully thought through all the words and how they fit together into a coherent argument.

Some norms I would like to see when folks use LLMs substantively (not copy editing or brainstorming):

  1. explaining the extent of LLM use

  2. explaining the extent of human revision or final oversight

  • ideally I'd love to know section by section. Personally I'd prefer it if authors only included sections they have reviewed and fully endorse, even if this means much shorter posts
  3. not penalizing people who do 1) and 2) for keeping AI-speak or style
  • I think this unfairly discriminates against people who are busy, weak in writing and/or English. I personally dislike it, but don't think it's fair to impose this on others

I'm not sure it's helpful to view one's capacity to make sacrifices as a fixed budget. I wouldn't be surprised if vegans and monks tend to become more ethical rather than less ethical. Not just because it's signalling which correlates with wider ethical behaviours - but because making an ethical sacrifice does something to your psychology (e.g. makes you start to self-identify as an ethical person) and this actually increases your capacity to make further sacrifices. 

Some sacrifices probably trade off against each other more directly than others. For example, if I donate 10 dollars to charity A, I can definitely imagine feeling less morally obliged to donate a further 10 dollars to charity B. However, if I save a child from a burning building, I don't think this would reduce my enthusiasm for donating 10 dollars to charity. In fact, I can imagine it doing the opposite. 

I think this is a very strong counterargument. Perhaps making personal sacrifices changes your character such that it becomes easier to make further sacrifices. But I suspect this has more to do with a misperception of how much a lifestyle change ends up being a long-term sacrifice. Monks and vegans may perceive at the outset that they are giving up significantly more than it actually feels like once they have changed and adjusted to their new lifestyles. This would probably warrant experimentation within life to determine which lifestyle changes and choices are more, less, or even negatively burdensome.

'Performance without impact is theatre.' I want to - performatively - write that in cute calligraphic lettering and stick it on my wall. It's a really good quote. The best. 

Wish I could take total credit for that turn of phrase, but that was courtesy of Claude.

Thanks for the great post! Strongly upvoted.

So, I work in retail sales, donate $10 on some months, and spend my free 50 hours a week not learning useful skills or doing useful things. My counterfactual impact is zero.

I should practice making good decisions about how to spend my time. I should think about what activities would be most useful at any given moment, do the activity, then evaluate how it went. It will be hard at first, but it will get easier. 

Your starting point isn't actually zero - those $10 donations have real counterfactual impact. But you're right that systematically building decision-making habits matters more than any specific action.

With 50 free hours per week, you have significant capacity. Worth doing a quick inventory: What skills from retail (customer relations, operations, sales) might transfer to high-impact work? What are you naturally good at or find energizing? Also worth exploring - try a few different volunteer roles, online courses, or side projects to discover what interests you and where you might have unexpected aptitude.

Even converting 2-3 hours weekly to something with compounding returns - based on what you discover about your abilities and interests - would generate significant impact over time. The key is picking something sustainable that gets easier rather than harder.

The fact that you're thinking systematically about this rather than just following social scripts already puts you ahead. Start small with exploration, track what works, adjust as needed.

Aww, thanks! My job is pretty simple, although coworkers praised my ability to memorize codes and numbers. I'm not sure how useful that is though. In school and university I could get good grades without studying much, it's like I knew the logical answers to test questions. I did pretty well in STEM school subjects and English. But I have some trouble studying on my own, from textbooks or videos. 

I make music and post it on YouTube. I also find fiction writing to be interesting. Maybe I could try to be a content creator and see how it goes? 

Hi, Elena! Something to consider is signing up for a free account at GoodWallet https://thegoodwallet.org and adding your personal QR code to your YouTube channel - if someone likes your music, they could donate a few bucks to your wallet (to your pre-selected charity, or into your wallet for you to decide on later). Basically, the doing-good version of Buy Me a Coffee. Check it out if you are interested. Excited for you to explore how to put these great skills you recognize about yourself to work! Have fun!

Have you considered shaping the supply/demand balance with career choices? Sometimes the dynamics are such that getting a marginal additional role filled in a field purely by market forces would require a large amount of money (i.e. raising the salaries of everyone in the field), whereas you can simply increase the supply and get the role filled at current market rates.

Could you elaborate on the 10-20% slack? What counts as slack, and what benefit do you expect from it?

I think you mistakenly equate actually doing good with not doing harm. Being vegan, for example, is not doing good; it's just not contributing to harm. And I agree, not contributing to harm is not good enough. And I understand that in the end the numbers are important.

But as fish_in_a_firetruck already pointed out, the psychological ripple effects get ignored, not only for forming habits of doing good, but also of not doing harm. Is a person who saved 10 lives free to kill 9 people and still deserve to be called equally good as the one who only managed to save one life in the first place but tries not to harm anyone? People normalize doing harm when you discount the difference between doing good vs. not doing harm.

I agree one shouldn't be a perfectionist before putting something out there. But I do believe one should not only measure the theoretical validity of an argument but also the impact it has on others beyond what you actually try to say - like the risk of these arguments being twisted to justify doing harm. I myself have hot takes on controversial topics where this is a big issue.

Your ""This will lead to value drift" Track your actual impact. Pick both leading indicators (effort) and lagging ones (outcomes). If outcomes don't improve, pivot." Is brushing this off too quickly and too easily, as if that fixes the risk. 

I would love to see a republishing that fleshes out the risks and objections, to mitigate those risks while keeping the very important points you are making.

Unfortunately I don't have time to write a thorough response but I just wanted to flag that I think this post fails to seriously engage with any of the strong counter-arguments to the central point (including the objections that were mentioned).

For starters, acting under moral uncertainty is a lot more complicated than briefly considering some abstract notion of virtue ethics and assuming that it endorses the same decisions. I think virtue ethics would more likely have a problem with stopping some of the "low return on sacrifice" actions because this would reflect poor moral character, e.g. poor integrity. Similarly, deontological views would likely caution against being willing to cause harm (e.g. human or animal rights violations) in order to do more good in other ways.

Even from a purely consequentialist perspective, I think the indirect impacts of "low RoS" actions are being unduly shrugged off here, such as signalling and value drift. E.g. the suggestion of "tracking impact" as a solution to the latter seems to involve a misunderstanding. Value drift is not about losing sight of whether your actions are effective, but losing the motivation to care about whether your actions are effective. If you get rid of all the most visible / obvious ways you're doing good in your everyday life, you might start seeing yourself as less of a morally-motivated person and eventually stop tracking your less-visible impact too.
