The criticism of the concept of "effective altruism" (and the second main criticism, to the extent that it's related) also feels odd to me. Altruism in the sense of only producing good is not realistically possible. It is overwhelmingly likely that, by writing your Wired article, say, you set off a chain of events that will cause a different sperm to fertilize a different egg, which will set off all sorts of other chains of events, meaning that in hundreds of years we will have different people, different weather patterns, different everything. So writing the article will cause untold deaths and untold suffering that wouldn't have occurred otherwise. So too for your friend Aaron in the Wired article helping the people on the island, and so too for anything else you or anyone might do.
So either altruism is something other than doing only good, or altruism is impossible and the most we can hope for is some kind of approximation. It wouldn't follow that maximizing EV is the best way to be (or to approximate being) altruistic, but the mere fact that EAs' actions, like all other actions, have some negative consequences is not in itself much of a criticism.
The first criticism feels pretty odd to me. Clearly what Singer, MacAskill, GiveWell, etc. are talking about is the counterfactual impact of your donation, since that is the thing that should be guiding your decision-making. And that seems totally fine and in accordance with ordinary English: it is fine to say that I saved the life of the choking person by performing the Heimlich maneuver, that Henry Heimlich saved X number of lives by inventing the Heimlich maneuver, that my instructor saved Y number of lives by teaching people the Heimlich maneuver, and that a government program to promote knowledge of the Heimlich maneuver saved Z lives, even if X + Y + Z + 1 is greater than the overall number of lives saved by the Heimlich maneuver. And if I were, say, arguing for increased funding of the government program by saying it would save a certain number of lives, it would be completely beside the point to start arguing that I should actually divide the marginal impact of increased funding to account for the contribution of Henry Heimlich, etc.
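To put toy numbers on that accounting point, here is a minimal sketch (all figures are invented for illustration) of how the counterfactual impacts of different contributors can legitimately sum to more than the total number of lives saved, because several contributions are each necessary for the outcome:

```python
# Toy model of joint contributions to lives saved by the Heimlich maneuver.
# All numbers are purely illustrative.

def lives_saved(invented: bool, taught: bool, program_funded: bool) -> int:
    """Total lives saved in a scenario with the given contributions present."""
    if not invented:
        return 0       # without the maneuver, nothing downstream happens
    total = 60         # people who pick the maneuver up informally
    if taught:
        total += 30    # additional rescuers trained by instructors
    if program_funded:
        total += 10    # additional reach from the government program
    return total

actual = lives_saved(True, True, True)                       # 100

# Each contributor's counterfactual impact: the actual world minus the
# world in which only their contribution is removed.
inventor_impact   = actual - lives_saved(False, True, True)  # 100
instructor_impact = actual - lives_saved(True, False, True)  # 30
program_impact    = actual - lives_saved(True, True, False)  # 10

print(actual)                                                # 100
print(inventor_impact + instructor_impact + program_impact)  # 140 > 100
```

Each figure is the right one for the corresponding decision (e.g., whether to increase funding for the program), even though the figures don't sum to the overall total.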
The Insect Institute is hiring for a vital, exciting, foundational role: a full-time Program Coordinator or Program Officer (depending on the qualifications of the successful candidate). This is a high-responsibility position where you will have the opportunity to drive real impact for our mission. As our second full-time employee, you will be tasked with helping to carry out the Insect Institute's interventions, including through engagement with policymakers, regulators, NGOs, and potentially media. Suitably qualified candidates may also be asked to contribute to research and report writing. As one of only a few people worldwide working in an extremely important cause area, you will have the potential for enormous counterfactual impact.
Salary: $73,630-$87,694 USD pre-tax
Location: Fully remote
Application Deadline: April 1st, end of day in the EST time zone
The full job description and application are available here. If you know someone else who might be a good fit, a referral form is available here. We offer a $500 bonus for referring the successful candidate. Questions about the role can be directed to info@insectinstitute.org.
More Information:
The full job description (linked above) sets out the key responsibilities, requirements, and preferred qualifications. If you do not meet all of the criteria, please still consider applying. Please also take an expansive interpretation of the criteria (e.g., if you are not sure whether your work experience is relevant, err on the side of assuming it might be).
They are separate views, but related: people with person-affecting views usually endorse the asymmetry, people without person-affecting views usually don't endorse the asymmetry, and person-affecting views are often taken to (somehow or other) provide a kind of justification for the asymmetry. The upshot here is that it wouldn't be enough for people at OP to endorse person-affecting views: they'd have to endorse a version of a person-affecting view that is rejected even by most people with person-affecting views, and that independently seems gonzo--one according to which, say, I have no reason at all not to push a button that creates a trillion people who are gratuitously tortured in hell forever.
Very roughly, how this works: person-affecting views say that a situation can't be better or worse than another unless it benefits or harms someone. (Note that the usual assumption here is that, to be harmed or benefited, the individual doesn't have to exist now, but they have to exist at some point.) This is completely compatible with thinking it's worse to create the trillion people who suffer forever: it might be that their existing is worse for them than not existing, or harms them in some non-comparative way. So it can be worse to create them, since it's worse for them. And that should also be enough to get the view that, e.g., you shouldn't create animals with awful lives on factory farms.
Of course, usually people with person-affecting views want it to be neutral to create happy people, and then there is a problem about how to maintain that while accepting the above view about not creating people in hell. So somehow or other they'll need to justify the asymmetry. One way to try this might be via the kind of asymmetrical complaint-based model I mentioned above: if you create the people in hell, there are actual individuals you harm (the people in hell), but if you don't create people in heaven, there is no actual individual you fail to benefit (since the potential beneficiaries never exist). In this way, you might try to fit the views together. Then you would have the view that it's neutral to ensure the awesome existence of future people who populate the cosmos, but still important to avoid creating animals with net-negative lives, or future people who get tortured by AM or whatever.
Now, it is true that people with person-affecting views could instead say that there is nothing good or bad about creating individuals either way--maybe because they think there's just no way to compare existence and non-existence, and they think this means there's no way to say that causing someone to exist benefits or harms them. But this is a fringe view, because, e.g., it leads to gonzo conclusions like thinking there's no reason not to push the hell button.
I think all this is basically in line with how these views are understood in the academic literature, cf., e.g., here.
Generally, people with person-affecting views still want it to be the case that we shouldn't create individuals with awful lives, and probably also that we should prefer creating someone whose life is net-negative by less over someone whose life is net-negative by more. (This relates to the supposed procreation asymmetry, where, allegedly, the fact that a kid would be really happy is not a reason to have them, but the fact that a kid would be in constant agony is a reason not to have them.) One way to justify this would be the thought that, if you don't create a happy person, no one has a complaint, but if you do create a miserable person, someone does have a complaint (i.e., that person).
So where factory-farmed animals have net-negative lives, I'm not sure person-affecting views would justify neglecting animal welfare. (Similarly, re: longtermism, they might justify neglecting long-term x-risks, but not s-risks.)
Different ways of calculating impact make sense in different contexts. What I want to say is that the way Singer, MacAskill, and GiveWell are doing it is (i) the one you should be using in deciding whether/where to donate (at the very least assuming you aren't in some special collective action problem, etc.), and (ii) totally fine by ordinary standards of speech--it isn't deceptive, misleading, excessively imprecise, etc. Maybe we agree.