Thanks for the very interesting post.
I don't work in commercial aviation any more, but can offer a few pointers.
So I think there's definitely something to be worked on here, but it's going to take industry experts more than grassroots campaigning. I think there are probably some really interesting algorithm development projects there for people with the right skillsets too...
(For anyone interested in space, an analogous situation is the aluminium oxide deposited in the mesosphere by deorbiting spacecraft. This used to be negligible; it isn't now that constellations of tens of thousands of satellites with short design lives in LEO are a thing. The climate impact is uncertain and not necessarily large, but probably negative; the impact on ozone depletion could be much more concerning. Changing mindsets on that one will be harder.)
which sounds seriously expensive to me...
Clearly, consciously sacrificing a life and unintentionally setting in motion a very indirect chain of events which leads to someone dying are not the same thing, especially in deontology, which cares much more about rules and principles than effects.
Frankly, butterfly effects are a bigger problem for forms of consequentialism/utilitarianism, where you do care solely about ends. Not only might the utility impact of all those "butterfly effects" you cause vastly exceed that of the ways you deliberately try to help people, but if you choose to factor them in, they also raise the prospect that whether you're a moral person or not is completely incalculable...
Hi FWI. I have actually worked on both Earth Observation projects and projects looking at other forms of remote sensing for assessing water quality in aquaculture (but don't have the technical skillset to participate in your challenge). A few (hopefully helpful) points:
I wouldn't be hugely optimistic about success in the short term, as I suspect the scope of what you're looking at is a lot more subtle than "spot the effects of leachate on the massive lake", and the data you have so far may not be enough.
The other problem with the "indirect enough" argument is that the donations are even more indirect.
Sure, the animals people eat are usually killed long before the meat is ordered, and eating a few dozen chickens per year doesn't individually shift an industry. But likewise, a $1000 donation doesn't meaningfully affect an advocacy charity's ability to win a court case.
Both only work in aggregate, and on a causal basis the link between meat demand and factory farming is much more robustly established than the link between advocacy charity income and the relative absence of factory farms.[1]
And standards for crediting impact need to be stricter here, because counting the same impact multiple times is a much more meaningful problem than when an altruistic donor is simply deciding where to donate.
This is a good point too. If you're using donations to prioritise in a counterfactual scenario, what part of the outcome is actually "your impact" is irrelevant. If you're using them to buy indulgences, that's less obviously the case.
on a money basis it's less certain, but I still don't think vegan diets are dramatically more expensive than meat-based ones, and the DALY impact of eating half a chicken doesn't seem to be very different from favourable estimates of the DALY impact of a dollar donated to Legal Impact for Chickens...
A large part of backlash against effective altruism comes from people worried about EA ideals being corrosive to the "paying for public goods" or "partial philanthropy" mechanisms.
I think this is a good point. I don't think it's a particularly strong argument against EA, not least because EA doesn't appear to be having any discernible impact on people's willingness to fund climbing organizations or conference halls, but it certainly comes up a lot in critical articles.
More common forms of ostensibly "impartial" giving, like supporting global health initiatives or animal welfare, are probably better understood as examples of partial philanthropy with extended notions of "we", like "we, living humans" or "we, mammals".
But I don't agree with this. Giving anonymously to unknown recipients in a faraway country via an unconnected small NGO doesn't have any of the typical benefits associated with supporting a "collective we" (anticipated reciprocity, kin selection, identity, chauvinism against perceived enemies etc.), making it about as impartial as it gets. And I don't think people care about chicken welfare out of collective identity, never mind a stronger sense of collective identity than they have with potential future humans. Indeed, it would be far easier to class many longtermist organizations under your definition of "partial philanthropy", as recipients are typically known members of a community with shared beliefs (and sometimes shared social circles), and the immediate benefit is often research the donor and the donor's community find particularly interesting.
I think the factors you've highlighted that apply to some types of charity (like access to public goods) overlap with other motivations for giving (like a sense of duty, feelings of satisfaction, and signalling) which apply to all types of charity.
Thanks for the thoughtful response.
On (1), I'm not really sure the uncertainty and the trust in the estimate are separable. A probability estimate of a nonrecurring event[1] fundamentally is a label someone[2] applies to how confident they are that something will happen. A corollary of this is that when deciding how to act, you should probably take into account how probability estimates could actually have been reached, your trust in that reasoning, and the likelihood of bias.[3]
On (2), I agree with your comments about the OP's point; if the probabilities are ±1 percentage point with the error symmetrically distributed, they're still on average 1.5%[4], though in some circumstances introducing error bars might affect how you handle risk. But as I've said, I don't think the distribution of errors looks like this when it comes to assessing whether long shots are worth pursuing or not (not even under the assumption of good faith). I'd be pretty worried if hits-based grant-makers didn't take that into account, frankly, and this question puts me in their shoes.
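(To spell out the arithmetic behind "still on average 1.5%", a quick sketch using the question's figure of 100,000 DALYs, taking the two extremes of the ±1 point error as equally likely for illustration:

EV if the true probability is 0.5%: 0.005 × 100,000 = 500 DALYs
EV if the true probability is 2.5%: 0.025 × 100,000 = 2,500 DALYs
average: (500 + 2,500) / 2 = 1,500 DALYs

which is exactly what the point estimate gives, 0.015 × 100,000 = 1,500. Any error distribution symmetric around 1.5% averages out the same way; it's asymmetric, optimism-skewed error that changes the answer.)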
Your point about analytic philosophy often expecting literal answers to slightly weird hypotheticals is a good one. But EA isn't just analytic philosophy and St Petersburg paradoxes; it's also people literally coming up with best guesses of the probabilities of things they think might work and multiplying them (and a whole subculture based on that, and on guesstimating just how impactful "crazy train" long-shot ideas they're curious about might be). So I think it's pretty reasonable to treat it not as a slightly daft hypothetical where a 1.5% probability is an empirical reality,[5] but as a real-world grant-award decision scenario where the "1.5% probability" is a suspiciously precise credence, and you've got to decide whether to trust it enough to fund it over something that definitely works. In that situation, I think I'm discounting the estimated chance of success of the long shot by more than 50%.
FWIW I don't take the question as evidence the survey designers are biased in any way.
"this will either avert 100,000 DALYs or have no effect" doesn't feel like a proposition based on well-evidenced statistical regularities...
not me. Or at least, a "1.5%" chance of working for thousands of people (and implicitly a 98.5% chance of having no effect on anyone) certainly doesn't feel like the sort of precision I'd estimate to...
Whilst it's presumably an unintended consequence of how the question was framed, this example feels particularly fishy. We're asked to contemplate trading off something that certainly will work against something potentially higher-yielding that is highly unlikely to work, and yet the thing that is highly unlikely to work turns out to have the higher EV because someone has speculated on its likelihood to a very high degree of precision, and those extra 5 thousandths made all the difference. What's the chance the latter estimate is completely bogus or finessed to favour the latter option? I'd say in real-world scenarios (and certainly not just EA scenarios) it's quite a bit more than 5 in 1,000...
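(To make the margin explicit, on the assumption that the certain option averts 1,000 DALYs, which is what the "extra 5 thousandths" framing implies:

EV(long shot) = 0.015 × 100,000 = 1,500 DALYs averted
EV(certain option) = 1.0 × 1,000 = 1,000 DALYs averted

The break-even probability is 1,000 / 100,000 = 1.0%, so shaving a little over 5 thousandths off that speculative 1.5% flips the ranking. The entire case for the long shot rests on the trailing digit of someone's credence.)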
that one's a math test too ;-)
maybe a universe where physics is a god with an RNG...
Feels like taking into account the likelihood that the "1.5% probability of 100,000 DALYs averted" estimate is a credence based on some marginally relevant base rate[1] that might have been chosen with a significant bias towards optimism is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)[2].
A very low percentage chance of averting a lot of DALYs feels a lot more like "1.5% of clinical trials of therapies for X succeeded; this untested idea might also have a 1.5% chance" optimism attached to a proposal offering little reason to believe it's above average, rather than an estimate based on somewhat robust statistics (e.g. inferring that 1.5% of people who receive this drug will be cured because 1.5% of people had that outcome in trials). So it seems quite reasonable to assume that the 1.5% chance of a positive binary outcome estimate might be biased upwards. Even more so in the context of "we acknowledge this is a long shot and high-certainty solutions to other pressing problems exist, but if the chance of this making an impact was as high as 0.0x%..." style fundraising appeals to EAs' determination to avoid scope insensitivity.
either that or someone's been remarkably precise in their subjective estimates or collected some unusual type of empirical data. I certainly can't imagine reaching the conclusion that an option has exactly a 1.5% chance of averting 100k DALYs myself.
if you want to show off that you understand EV and risk estimation, you'd answer (C) "here's how I'd construct my portfolio" anyway :-)
They do specifically say that they consider other types of university funding to have a better cost-benefit ratio (and I don't think it makes sense to exclude reputation concerns from cost-benefit analysis, particularly when a reputation boost is a large part of the benefit being paid for in the first place). Presumably not paying stipends would leave more to go around. I agree that more detail would be welcome.
A cynic reads this as "you could have a great night in which you deprive a few hundred people of malaria nets, but at least in the long run they, and also random unrelated and typically obnoxious corporations, might stand to benefit from the gambling addiction this has instilled in you...". Possibly the first part of the proposition is slightly less icky if the house is simply taking a rake from competitors in a game of skill, but still.
Maybe I just know too many people broken by gambling.