Yeah I see your point. I think I personally have a stronger aversion to illegal requests from employers as a matter of a principle, even if the employee does that sort of thing anyway. But I can see how other people might view that differently.
That said, in this particular case, it doesn't seem like Chloe would otherwise be illegally buying weed?
You make a fair point about the risk of admitting to such activities in a public setting. Although, if the statement is not true, there would be no risk in denying it, right? I'm hesitant to assume something is true in the absence of a denial, but I wanted to at least give Nonlinear an opportunity to deny it.
This will vary between readers, but I personally find this more cruxy than perhaps you do. In my opinion: asking an employee to commit illegal acts, even with minimal social pressure, especially in a foreign country, especially if it happened multiple times, is a very serious concern. I can imagine extreme instances where it could be justified, but it doesn't seem like that applies to this situation.
I am also hoping that the accuracy of the weed allegation is much less ambiguous than some of the harder-to-pin-down abuse claims (even if those might be worse in sum total if they were all true).
Thank you for taking the time to write up all of this evidence, and I can only imagine how time-consuming and challenging this must have been.
Apologies if I missed this, but I didn't see a response to Chloe's statement here that one of her tasks was to buy weed for Kat in countries where weed is illegal. This statement wasn't in Ben's original post, so I can see how you might have missed it in your response. But I would appreciate clarification on whether it is true that one of Chloe's tasks was to buy weed in countries where weed is illegal.
I really like this post. I totally agree that if x-risk mitigation gets credit for long-term effects, other areas should as well, and that global health & development likely has significantly positive long-term effects. In addition to the compounding total utility from compounding population growth, those people could also work on x-risk! Or they could work on GH&D, enabling even more people to work on x-risk or GH&D (or any other cause), and so on.
One light critique: I didn't find the theoretical infinity-related arguments convincing. There are a lot of mathematical tools for dealing with infinities and infinite sums that can sidestep these issues. For example, since $\sum_{t=1}^{\infty} u_t$ is typically shorthand for $\lim_{T \to \infty} \sum_{t=1}^{T} u_t$, we can often compare two infinite sums by looking at the limit of the sum of differences, e.g., $\lim_{T \to \infty} \sum_{t=1}^{T} (u^{(1)}_t - u^{(2)}_t)$. Suppose $u^{(1)}_t$ and $u^{(2)}_t$ denote the total utility at time $t$ given actions 1 and 2, respectively, and $u^{(1)}_t = u^{(2)}_t + 1$ for all $t$. Then even though $\sum_{t=1}^{\infty} u^{(1)}_t = \sum_{t=1}^{\infty} u^{(2)}_t = \infty$, we can still conclude that action 1 is better because $\lim_{T \to \infty} \sum_{t=1}^{T} (u^{(1)}_t - u^{(2)}_t) = \lim_{T \to \infty} T = \infty > 0$.
This is a simplified example, but my main point is that you can always look at an infinite sum as the limit of well-defined finite sums. So I'm personally not too worried about the theoretical implications of infinite sums that produce "infinite utility".
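To make the partial-sums idea concrete, here's a minimal numerical sketch in Python. It uses the toy example I assumed above (action 1 yields one extra unit of utility per period); the specific utility streams are placeholders and nothing hinges on them beyond the differences staying positive:

```python
# Minimal sketch: compare two divergent utility streams via partial sums of differences.
# Assumes the toy example above, where u1_t = u2_t + 1 at every time step t.

def u2(t):
    return 1.0  # placeholder utility stream for action 2

def u1(t):
    return u2(t) + 1.0  # action 1 yields one extra unit of utility each period

for T in [10, 100, 1000, 10000]:
    diff = sum(u1(t) - u2(t) for t in range(1, T + 1))
    print(f"T={T:>6}: partial sum of differences = {diff}")

# The partial sums grow without bound (here they equal T), so the limit is +inf > 0,
# which ranks action 1 above action 2 even though both total sums diverge to infinity.
```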
P.S. I realize this comment is 1.5 years late lol but I just found this post!
For context, I’m an AI safety researcher and I think the stance that AGI is by far the #1 issue is defensible, although not my personal view.
I would like to applaud 80k hours for several things here.
1. Taking decisive action based on their convictions, even if it might be unpopular.
2. Announcing that action publicly and transparently.
3. Responding to comments on this post and engaging with people’s concerns.
However, several aspects of this move leave me feeling disappointed.
1. This feels like a step away from Effective Altruism is a Question (not an ideology), which I think is something that makes EA special. If you’ll pardon the oversimplification, to me this decision has the vibe of “Good news everyone, we figured out how to do the most good and it’s working on AGI!” I’m not sure to what extent that is the actual belief of 80k hours staff, but that’s the vibe I get from this post.
2. For better or for worse, I think 80k hours wields tremendous influence in the EA community, and it seems likely to me that this decision will shift the overall tenor and composition of EA as a movement. Given that, it seems a bit weird to me that this decision was made based on the beliefs of a small subset of the community (80k hours staff), especially since my impression is that “AGI is by far the #1 issue” is not the median EA’s view (I could be wrong here though). 80k is a private organization, and I’m not saying there should have been a public vote or something, but I think the views of 80k hours staff are not the only relevant views for this type of decision.
Overall, there’s a crucial difference between (A) helping people do the most good according to *their* definition and views, and (B) helping people do the most good according to *your* definition and views. One could argue that (B) is always better, since, after all, those are your views. But I think that neglects important second-order effects such as the value of a community.
It may be true that (B) is better in this specific case if the benefits outweigh those costs. It’s also not clear to me if 80k hours fully subscribes to (B) or is just shifting in that direction. More broadly, I’m not claiming that 80k hours made the wrong decision: I think it's totally plausible that 80k hours is 100% correct and AGI is so pressing that even given the above drawbacks, the shift is completely worth it. But I wanted to make sure these drawbacks were raised.
Questions for 80k hours staff members (if you’re still reading the comments):
1. Going forward, do you view your primary goal more as (A) helping people do the most good according to their own definition and views, or (B) helping people do the most good according to your definition and views? (Of course, it can be some combination.)
2. If you agree that your object-level stance on AGI differs from the median EA's, do you have any hypotheses for why? Example reasons could be (A) you have access to information that other people don't, (B) you believe people are in semi-denial about the urgency of AGI, (C) you believe that your definition of positive impact differs significantly from the median EA's.