Strategy analyst at Innosight in Boston.

I graduated from Dartmouth College last year with a B.A. in philosophy and have been interested in EA for about four years. I currently work for a long-term strategy consultancy, and I have also served for five years on the Board of Directors of Positive Tracks, a national social-change nonprofit. Within EA, I'm particularly interested in ethical theory and animal welfare.

How others can help me

Reach out to me if your organization is looking for a hard-working and passionate young person with analytical skills and a philosophy background. 

How I can help others

I love to chat all things EA!


Interesting article - thanks for sharing. My main problem with it has to do with the moral psychology piece. You write that: 

> It's "disgusting and counterintuitive" for most people to imagine offsetting murder.

> "Most of us still live in extremely carnist cultures and are bombarded with burger ads and sights of people enjoying meat next to us all the time like it is perfectly harmless."

In my opinion, these two arguments together make meat offsets a bad idea. People are opposed to murder offsets (however theoretically effective they may be) because murder feels like a deeply immoral thing to do. Most people, however, feel that eating meat is not deeply immoral - most people do it every day. I'd imagine folks react to meat offsets the same way they do to carbon offsets. They think, "well, I know I probably shouldn't eat so much meat / consume so much carbon, but I'm not gonna stop, so this offset makes some sense." But this is the wrong way to think about eating meat (and perhaps about consuming carbon, too, but that's beside the point). We want people to feel that eating meat is immoral - that it is a form of killing a sentient being - and the availability of an offset trivializes the consumption.

I'm on board with your consequentialist reasoning here, but I'm worried that the availability of meat offsets may cause people's moral opinions on animal ethics to regress.

Thanks for the post! I agree that identifying those universal maxims or norms seems impossibly difficult given the breadth of humanity's views on morality. In fact, much of post-Kantian deontological thinking can be described as an attempt to answer the very question you ask in this post. I'm also not a trained philosopher (and I lean more towards consequentialism myself), but I'll share a few notes that might help:

  1. Most modern non-consequentialists have much more abstract moral/ethical theories than "follow the categorical imperative". For example, contractualists believe that these norms are only hypothetical and could only be agreed upon universally in an imagined scenario - for Rawls this was the original position. More modern contractualists like Tim Scanlon push this further. Scanlon summarizes his moral theory as: "An act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behaviour that no one could reasonably reject as a basis for informed, unforced, general agreement". The key word here is "reasonably". Scanlon pretty much wrote an entire book about what it means to reasonably reject a set of principles. The point is that these more modern deontological theories abstract away from our earthly moral disagreements. Many of them rely on the existence of an objective moral truth that we can strive to approximate through rational discourse. In this way, it actually doesn't matter much what people or animals care about, or what their current moral customs are.
  2. Other modern deontologists are more concerned with rights and obligations. In my opinion, these moral theories say too much about what we are barred from doing (don't lie, don't steal, don't violate anyone's right to privacy, etc.) and too little about what we should do. Regardless, granted that you believe in the objective existence of these rights, employing them gives a more clear-cut guide to moral action than the categorical imperative alone. These philosophers would use this reasoning to dispute your claim that deontologists are "more likely to ignore what people unlike themselves care about".
  3. You write that a "deontologist might not draw any repugnant conclusions". All moral theories have shortcomings, and I think all of them draw repugnant conclusions in certain hypothetical situations. In turn, philosophers who defend these theories either explain away the problem, "band-aid" their theory, or bite the bullet. There are many problems I could mention for deontology, but perhaps the most famous is the Inquiring Murderer problem (Kant's conclusion that one must not lie even to a murderer asking after the whereabouts of his intended victim).
  4. I think it's worth noting that consequentialism has its fair share of epistemic problems. Determining the well-being or preferences or desires of others, especially when they are halfway around the world, is no easy task. Plus, even if you know what would make others' lives go better, it's often difficult to know how to bring about these outcomes.

TLDR: I agree that deontology has serious epistemic problems, and in practice, deontologists might be more prone to ignoring people unlike themselves (because they are far away or because they have different views). However, much work has been done to demystify non-consequentialist theories and make them actionable - it's just highly complex. In general, I tend to agree with Derek Parfit when he argues that all moral theorists are "climbing the same mountain on different sides" in their search for moral truth.

Great post - I think this is a really important meta-topic within EA that doesn't get enough airtime. It might also be worth considering the "hidden zero problem" coined by Mark Budolfson and Dean Spears here. The thrust of their argument is that if a charity is funded by the ultra-rich or their foundations, small donations may have measurably zero impact.

As an example: suppose NGO X wants $10M in funding for 2022, and Foundation Y has been its largest donor for a few years running. If small donors give NGO X $8M in 2022, Foundation Y will top it up with $2M; if small donors instead give $9M, Foundation Y will give only $1M and still fully fund NGO X at $10M. Either way the charity ends up with $10M, so the extra $1M in small donations had zero impact other than saving Foundation Y some cash.
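The arithmetic here can be made explicit with a toy model (my own sketch, assuming a foundation that mechanically tops the charity up to a fixed budget goal - real foundations are of course less predictable than this):

```python
def foundation_gift(budget_goal, small_donations):
    """Hypothetical foundation behavior: top the charity up to its budget goal."""
    return max(budget_goal - small_donations, 0)

def marginal_impact(budget_goal, small_donations, extra):
    """Change in the charity's total funding when small donors give `extra` more."""
    before = small_donations + foundation_gift(budget_goal, small_donations)
    after = (small_donations + extra) + foundation_gift(budget_goal, small_donations + extra)
    return after - before

# With a $10M goal and $8M already from small donors, an extra $1M of small
# donations adds $0 to the charity's total - the foundation just gives $1M less.
print(marginal_impact(10, 8, 1))   # 0
# Only once small donors meet the goal themselves does a marginal dollar count fully.
print(marginal_impact(10, 10, 1))  # 1
```

Under this (admittedly crude) assumption, every small dollar below the funding gap is invisible to the charity and visible only on the foundation's balance sheet.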

Off the top of my head, there are a few obvious problems with the hidden-zero problem:

  1. Foundations having more money isn't necessarily a bad thing, especially if they give their assets away relatively quickly and effectively. 
  2. How are we supposed to know how much a certain real foundation like Open Philanthropy plans to give certain organizations?
  3. Many charities don't have such cut-and-dried budgets and fundraising goals. E.g., if GiveDirectly gets more money in 2022, it will simply give away more money by expanding the number of recipients and/or its geographical operations.

Regardless, Budolfson and Spears did a lot of fancy math to show the hidden zero problem is worth taking seriously in many cases, especially within EA.

All that being said, it's not clear to me how the hidden zero problem affects your claim here. On one hand, if we intentionally diversify funding sources, charities might raise their budgets and demand the same amount from big foundations. On the other hand, if these foundations see more money coming in from more donors, they might decide the charity/cause is no longer "neglected" and reduce the size of their grant.

Would love to hear thoughts on this from people more deeply entrenched in the grant-making world...

I love this piece - super well argued. Your argument applies to virtue ethics too if you replace “RIGHTS” with any virtue claimed to be intrinsically valuable by the virtue ethicist.

Hi David, thanks for the reply. I think I just totally disagree that humanity stopped pursuing ambitious goals. Just yesterday, we generated energy with nuclear fusion. We've reduced the price of solar cells by over 100x in a few decades. Hundreds of millions of people in China/India/Africa, etc. have been lifted out of extreme poverty. There are thousands of scientists pursuing cures for cancer and dementia. I could go on...

> Humanity has trillions of dollars to spend, and it goes big on video games, consumer electronics, and fast food.

But our government doesn't have trillions of dollars, and we have a ton of really important stuff to spend it on. I just think that improving education, closing the racial wealth gap, offering food stamps - heck, even building infrastructure here on Earth - are far more important. We can do multiple things at once, but we can't do everything. Every additional spend means something else has to be cut. Space exploration is near the bottom of my list of things I think our government should spend on.

Hey Brad! I love the idea. I'm late to this comment section, and many of my initial reactions have already been discussed at length. That being said, here are a few ideas/questions that haven't gotten much attention:

  1. You say that very few companies explicitly mention who benefits from their profits, but I think this is changing. More and more products state that their parent companies are woman-owned or Black-owned, etc. I wonder if there is research on whether this sort of marketing actually drives sales. Regardless, its ubiquity is further evidence that this matters to consumers. Moreover, the very fact that the CSR/ESG markets (for consulting, marketing, research, etc.) are each worth well over $1B is good evidence for your cause...
  2. Interestingly, I feel like a lot of these products (those that state the identity of their ownership/leadership) are in luxury-good markets. Makeup and fashion immediately come to mind. I agree that the "no-brainer" is a viable opportunity to change consumer habits, but you might also consider looking into more price-inelastic markets. These markets may be easier to enter (new fashion brands seem to crop up every day), but your presence might also be more ephemeral compared to CPGs. Taking this further, imagine a new car company like Rivian decided to donate all profits. I'd hypothesize that for a lot of folks, that would be the deciding feature in a large purchasing decision (like a car).
  3. In price-elastic markets, I would guess it doesn't matter much which causes you choose to benefit. As you state, people make these decisions quickly, and if they see "all profits to charity", that will likely be enough. This is probably different in the luxury market, where consumers deliberate longer and it would help if the product somehow correlated with the cause.

I just wanted to say it's really heartening for me to know that there's so much good work going into aligning international aid with the priorities of its recipients. As many have noted on this forum in the past few months, the potential for impact here is massive. Thank you!

If innovation really has stalled (which I'm skeptical of in the first place), it's not because the space race is (mostly) over. There are deeply important issues on Earth for us to solve, and millions of people are innovating towards solutions to them every day. Sure, designing a telehealth or mobile banking system for people living in extreme poverty isn't as sexy as landing on the moon, but it's surely innovation. These types of projects may not dominate the news cycle, but they represent the beginning of an alignment of research and development with the flourishing of all humans (and animals). Space exploration does not.

You say that we should aim higher than our current massive endeavors (eliminating diseases, expanding clean energy, protecting animal rights and natural habitats). But decades of work have shown that these endeavors are extremely difficult. Every marginal dollar and hour spent on these projects counts, and space exploration distracts from the urgent need for innovation in these areas.

Thanks for fleshing this out - that all makes sense to me.

Thanks for the detailed reply to the trauma case. Your delineation of the various definitions of personhood is helpful for interrogating my other questions as well.

If it is the case that a "new" welfare subject can be "created" by a traumatic brain injury, then it might well be the case that new welfare subjects are created as one's life progresses. This implies that, as we age, welfare subjects effectively die and new ones are reborn. However, we don't worry about this because 1. we can't prevent it and 2. it's not clear when this happens / if it ever happens fully (perhaps there is always a hint of one's old self in one's new self, so a "new" welfare subject is never truly created).

Since the same argument applies to non-human animals, we can reasonably assume that we can't prevent this loss and recreation of welfare subjects in them either. Moreover, we would probably come to the same conclusions about the badness of the deaths of these animals, even if throughout their lives they exist as multiple welfare subjects that we should care about. Where it becomes morally questionable is in considering non-human animals whose lives are not worth living. There should then be increased moral concern for factory-farmed animals if we accept that: 1. their lives are not worth living; 2. they instantiate different welfare subjects throughout their lives; and 3. there is something worse about 2 different subjects each suffering for 1 year than 1 subject suffering for 2 years. (Again, I don't think I accept premises 2 or 3; I just wanted to take the hypothetical to its fullest conclusion.)
