River

EA, back in the day, refused to draw a boundary with the rationality movement in the Bay area

That's a hell of a framing. EA is an outgrowth of the rationality movement, which is centered in the Bay Area. EA wouldn't be EA without rationality.

I take it "any bad can be offset by a sufficient good" is what you are thinking of as one of the yellow circle implications. My view is that it is actually in the red circle. It might actually be how I would define utilitarianism, rather than your UC.

What I am still really curious about is your motivation. Why do you even want to call yourself a utilitarian or an effective altruist or something? If you are so committed to the idea that some bads cannot be offset, then why don't you just want to call yourself a deontologist? I come to EA precisely to find a place where I can do moral reasoning and have moral conversations with other spreadsheet people, without running into this "some bads cannot be offset" stuff.

My main issue here is a linguistic one. I've considered myself a utilitarian for years. I've never seen anything like this UC, though I think I agree with it, and with a stronger version of premise 4 that does insist on something like a mapping to the real numbers. You are essentially constructing an ethical theory, one which very intentionally insists that there is no amount of good that can offset certain bads, and trying to shove it under the label "utilitarian". Why? What is your motivation? I don't get that. We already have a label for such ethical theories: deontology. The usefulness of having the label "utilitarian" is precisely to pick out those ethical theories that do, at least in principle, allow offsetting any bad with a sufficient good. That is a very central question on which people's ethical intuitions and judgments differ, and which this language of utilitarianism and deontology was created to describe. This is where one of reality's joints is.

For myself, I do not share your view that some bads cannot be offset. When you talk of 70 years of the worst suffering in exchange for extreme happiness until the heat death of the universe, I would jump on that deal in a heartbeat. There is no part of me that questions whether that is a worthwhile trade. I cannot connect with your stated rejection of it. And I want labels like "utilitarian" and "effective altruist" to allow me to find and cooperate with others who are like me in this regard. Your attempt to bring your view under these labels seems both destructive of my ability to do that, and likely unproductive for you as well. Why don't you want to use other, more natural labels like "deontology" to find and cooperate with others like you?

For instance, if someone is interested in AI safety, we want them to know that they could find a position or funding to work in that area.

But that isn't true, never has been, and never will be. Most people who are interested in AI safety will never find paid work in the field, and we should not lead them to expect otherwise. There was a brief moment when FTX funding made it seem like everyone could get funding for anything, but that moment is gone, and it's never coming back. The economics of this are pretty similar to a church: yes, there are a few paid positions, but not many, and most members will never hold one. When there is a member who seems particularly well suited to the paid work, yes, it makes sense to suggest it to them. But we need to be realistic with newcomers that they will probably never get a check from EA, and the ones who leave because of that weren't really EAs to begin with. The point of a local EA org, whether university based or not, isn't to funnel people into careers at EA orgs; it's to teach them ideas that they can apply in their lives outside of EA orgs. Let's not lose sight of that.

I discovered EA well after my university years, which maybe gives me a different perspective. It sounds to me like both you and your group member share a fundamental misconception of what EA is and what questions it centrally seeks to answer. You seem to be viewing it as a set of organizations from which to get funding and jobs. There is a more or less associated set of organizations that provide a small number of people with funding and jobs, but that is not central to EA, and if that is your motivation for being part of EA, then you've missed what EA is fundamentally about. Most EAs will never receive a check from an EA org, and if your interest in EA is based on the expectation that you will, then you are not the kind of person we should want in EA.

EA is, at its core, a set of ideas about how we should deploy whatever resources (our time and our money) we choose to devote to benefiting strangers. Some of those are object-level ideas (we can have the greatest impact on people far away in time and/or space), some are more meta-level (the ITN framework), but they are about how we give, not how we get. If you think that you can have more impact in the near term than the long term, we can debate that within EA, but as long as you are genuinely trying to answer that question and basing your giving decisions on it, you are doing EA. You can allocate your giving to near-term causes and that is fine. But if you expect EAs who disagree with you to spread their giving evenly, rather than allocating it to the causes they think are most effective, then you are expecting those EAs to do something other than EA. EA isn't about spreading giving in any particular way across cause areas; it is about identifying the most effective cause areas and interventions and allocating giving there. The only reason we have more than one cause area is that we don't all agree on which ones are most effective.

I'm not sure I see the problem here. By donating to effective charities, you are doing a lot of good. Whatever decision you make about eating meat or helping a random stranger who manages to approach you is trivial in comparison. Do those things or don't; it doesn't matter in the scheme of things. They aren't what makes you good or bad; your donations are.

Again you are not making the connection, or maybe not seeing my basic point. Even if someone dislikes leftist-coded things, and this causes them both to oppose wokism and to oppose foreign aid, this still does not make opposition to foreign aid about anti-wokism. The original post suggested there was a causal arrow running between foreign aid and wokism, not that both have a causal arrow coming from the same source.

EA is an offshoot of the rationalist movement! The whole point of EA's existence is to try to have better conversations, not to accept that most conversations suck and speak in vibes!

I also don't think it's true that conservatives don't draw the distinction between foreign aid and USAID. Spend five minutes listening to any conservative talk about the decision to shut down USAID. They're not talking about foreign aid being bad in general. They are talking about things USAID has done that do not look like what people expect foreign aid to look like. They seem to enjoy harping on the claim that USAID was buying condoms for Gaza. Now, whether or not that claim is true, and whether or not you think it is good to give Gazans condoms, you have to admit that condoms are not what anybody thinks of when they think of foreign aid.

You missed my point. I agree that foreign aid is charged along partisan lines. My point was that most things that are charged along partisan lines are not charged along woke/anti-woke lines. Foreign aid is not an exception to that rule; USAID is.

I appreciate that you have a pretty nuanced view here. Much of it I agree with, some of it I do not, but I don't want to get into these weeds. I'm not sure how any of it undermines the point that wokism and opposition to foreign aid are basically orthogonal.
