David T

You could have a great night in which you win hundreds or thousands of dollars, but even if you lose, they know that your losses are helping to dramatically better the world. 

A cynic reads this as "you could have a great night in which you deprive a few hundred people of malaria nets, but at least in the long run they and also random unrelated and typically obnoxious corporations might stand to benefit from the gambling addiction this has instilled in you....". Possibly the first part of the proposition is slightly less icky if the house is simply taking a rake from competitors in a game of skill, but still.

Maybe I just know too many people broken by gambling.

Thanks for the very interesting post.

I don't work in commercial aviation any more, but can offer a few pointers

  • Eurocontrol are exactly the people you want taking this seriously - they regulate European airspace. So whilst I think it probably is neglected relative to other climate proposals in terms of funding vs estimated impact, it may not be neglected by the right people.
  • For related reasons, I think it's way more tractable than most interventions: changing altitude under certain conditions is a lot easier than dissuading people from flying or consuming. And there is an established track record of regulators enforcing environmental rules and costs like noise restrictions and NOx emissions charges (along with sticks governments haven't beaten them with yet, like carbon taxes on jet fuel).
  • On the other hand, it does seem to be true that the current scientific consensus hasn't yet resolved the important question of when and where to divert (see the variability factors in your infographic), that diversion usually results in increased fuel burn (and some contrails are even cooling!), and that flight routing is a complex multidimensional problem.
  • Airspace controllers will need to be involved because airlines are unlikely to do anything voluntarily that impacts their profit margins (which are on average small anyway) regardless of how settled the science. In general, being "greener" through lower fuel consumption actually saves them money; this is an obvious exception.
    • An indirect "stick" approach like levying fines or additional charges on airlines causing contrails whilst passing through particular airspace sounds neat, but whilst theoretically contrails observed from the ground or orbit can be matched to ADS-B readings of aircraft that recently passed through that space, systematically validating that in a legally-valid way in congested airspace seems tricky...
  • I can't see it being practical to achieve via consumer pressure, and wider public awareness campaigns run the risk of getting mixed up with "chemtrails" conspiracy theories.
    • If you want a possible exception to airlines' lack of sympathy, the UK startup airline Zeroavia is owned by eco-activist billionaire Dale Vince. They claim their hydrogen-powered fleet will capture water emissions and release them at lower altitude[1] for the stated purpose of avoiding contrails. Zeroavia are a very atypical airline (they currently have zero flights), and I'm not sure how much aviation industry executives actually respect Dale, but if you wanted to reach out to an airline that might actually be sympathetic and see PR benefits in shouting about contrails, they'd be a starting point.

So I think there's definitely something to be worked on here, but it's going to take industry experts more than grassroots campaigning. I think there are probably some really interesting algorithm development projects there for people with the right skillsets too...

(For anyone interested in space, an analogous situation is the aluminium oxide deposited in the mesosphere by deorbiting spacecraft. This used to be negligible; it isn't now that constellations of tens of thousands of satellites with short design lives in LEO are a thing. The climate impact is uncertain and not necessarily large, but probably negative; the impact on ozone depletion could be much more concerning. Changing mindsets on that one will be harder.)

  1. ^

    which sounds seriously expensive to me....

Answer by David T

Clearly consciously sacrificing a life and unintentionally setting in motion a very indirect chain of events which leads to someone dying are not the same thing, especially in deontology which cares much more about rules and principles than effects.

Frankly, butterfly effects are a bigger problem for forms of consequentialism/utilitarianism, where you do care solely about ends. Not only might the utility impact of all those "butterfly effects" you cause vastly exceed the ways you try to help people, but if you choose to factor them in, they also raise the prospect that whether you're a moral person or not is completely incalculable...

Hi FWI. I have actually worked on both Earth Observation projects and projects looking at other forms of remote sensing for assessing water quality in aquaculture (but don't have the technical skillset to participate in your challenge). A few (hopefully helpful) points:

  • any sort of useful model is going to require calibration against background data on the variables you select (particularly as you appear to be working with small, shallow freshwater pools, whose differing appearance in visual-spectrum imagery probably reflects factors other than those you're quantifying, e.g. mineral content and depth)
    • since any past water quality data you share will obviously be highly correlated with the current water quality in a given pool, it would make sense to evaluate models on their ability to pick up changes from previous levels in future samples using future images, rather than simply on predicting which pools have the most ammonia....
    • the more observations you can share, the better the chance the model actually works
  • the variables you've highlighted are theoretically detectable using EO, but they're relatively weak indicators perturbed by other stronger indicators and things like weather (at least it's normally sunny there!). Depending on how much water you collect at how many points in the farm, presumably there's some natural variation in the samples you collect too.
  • free satellite imagery such as Copernicus typically has 1 pixel representing 10-20m on the ground, so some of your lakes might be only about 2-4 pixels across. Large pixel sizes don't necessarily stop major changes in water quality being picked up in multispectral imagery (and aren't necessarily an issue if you're measuring the sea surrounding a coastal fish farm), but they're going to significantly affect the fidelity of your results. Unfortunately, I suspect commercial imagery (spatial resolutions more of the order of 0.5m pixels) is outside your budget
  • if you're currently only occasionally collecting data, satellite revisit rate should be fine 
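To put rough numbers on the resolution point above, here's a quick sketch (the 10m/20m figures are typical of free Copernicus Sentinel-2 bands; the 30m x 40m pond dimensions are a made-up example, not anything from FWI's data):

```python
# Rough count of whole satellite pixels falling inside a small rectangular
# pond. The 10 m / 20 m ground sample distances are typical of free
# Copernicus (Sentinel-2) bands; the pond size is a hypothetical example.

def pixels_covering(pond_width_m, pond_length_m, gsd_m):
    """Approximate number of whole pixels contained in a rectangular pond."""
    return (pond_width_m // gsd_m) * (pond_length_m // gsd_m)

for gsd in (10, 20):
    n = pixels_covering(30, 40, gsd)
    print(f"{gsd} m pixels: roughly {n} whole pixels in a 30 m x 40 m pond")
```

Even a handful of mixed pixels can still register a gross change in reflectance, but per-pond water quality estimates will be noisy at that size.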

I wouldn't be hugely optimistic about success in the short term, as I suspect what you're looking for is a lot more subtle than "spot the effects of leachate on the massive lake", and the data you have so far may not be enough.

The other problem with the "indirect enough" argument is that the donations are even more indirect.

Sure, the meat people eat is usually killed long before it's ordered and eating a few dozen chickens per year doesn't individually shift an industry. But likewise, a $1000 donation doesn't meaningfully affect an advocacy charity's ability to win a court case. 

Both only work in aggregate, and on a causal basis the link between meat demand and factory farming is much more robustly established than the link between advocacy charity income and the relative absence of factory farms.[1]

And standards for crediting impact need to be stricter here because multiple counting is a much more meaningful problem than when an altruistic donor is deciding where to donate

This is a good point too.  If you're using donations to prioritise in a counterfactual scenario, what part of the outcome is actually "your impact" is irrelevant. If you're using them to buy indulgences, that's less obviously the case.

  1. ^

    on a money basis it's less certain, but I still don't think vegan diets are dramatically more expensive than meat ones, and the DALY impact of eating half a chicken doesn't seem to be very different from favourable estimates of DALY impact of a dollar donation to Legal Impact for Chickens... 

A large part of backlash against effective altruism comes from people worried about EA ideals being corrosive to the “paying for public goods" or “partial philanthropy" mechanisms.

I think this is a good point. I don't think it's a particularly strong argument against EA, not least because EA doesn't appear to be having any discernible impact on people's willingness to fund climbing organizations or conference halls, but it certainly comes up a lot in critical articles.

More common forms of ostensibly "impartial" giving, like supporting global health initiatives or animal welfare, are probably better understood as examples of partial philanthropy with extended notions of “we", like “we, living humans" or “we, mammals".

But I don't agree with this. Giving anonymously to unknown recipients in a faraway country via an unconnected small NGO doesn't have any of the typical benefits associated with supporting a "collective we" (anticipated reciprocity, kin selection, identity, chauvinism against perceived enemies etc.), making it about as impartial as giving gets. And I don't think people care about chicken welfare out of collective identity, never mind a stronger sense of collective identity than with potential future humans. Indeed, it would be far easier to class many longtermist organizations under your definition of "partial philanthropy", as recipients are typically known members of a community with shared beliefs (and sometimes shared social circles), and the immediate benefit is often research the donor and the donor's community find particularly interesting.

I think the factors you've highlighted that apply to some types of charity like access to public goods overlap with other motivations for giving like sense of duty, feelings of satisfaction and signalling which apply to all types of charity.

Thanks for the thoughtful response.

On (1) I'm not really sure the uncertainty and the trust in the estimate are separable. A probability estimate of a nonrecurring event[1] fundamentally is a label someone[2] applies to how confident they are something will happen. A corollary of this is that you should probably take into account how probability estimates could have actually been reached, your trust in that reasoning and the likelihood of bias when deciding how to act. [3]

On (2) I agree with your comments about the OP's point; if the probabilities are +/-1 percentage point with error symmetrically distributed, they're still on average 1.5%[4], though in some circumstances introducing error bars might affect how you handle risk. But as I've said, I don't think the distribution of errors looks like this when it comes to assessing whether long shots are worth pursuing or not (not even under the assumption of good faith). I'd be pretty worried if hits-based grant-makers didn't take that into account, frankly, and this question puts me in their shoes.

Your point about analytic philosophy often expecting literal answers to slightly weird hypotheticals is a good one. But EA isn't just analytic philosophy and St Petersburg Paradoxes; it's also people literally coming up with best guesses of probabilities of things they think might work and multiplying them (and a whole subculture based on that, and guesstimating just how impactful "crazy train" long shot ideas they're curious about might be). So I think it's pretty reasonable to treat it not as a slightly daft hypothetical where a 1.5% probability is an empirical reality,[5] but as a real-world grant award scenario where the "1.5% probability" is a suspiciously precise credence, and you've got to decide whether to trust it enough to fund it over something that definitely works. In that situation, I think I'm discounting the estimated chance of success of the long shot by more than 50%.
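The arithmetic behind that discount can be sketched in a few lines (the 1.5% and 100,000 DALYs are the question's hypothetical; the 1,000-DALY certain option is my inference from the "extra 5 thousandths" point, since a 1.0% chance would make the two options break even, and the 50% discount is purely illustrative):

```python
# Toy expected-value comparison for the survey hypothetical: a certain
# intervention vs. a long shot with a stated 1.5% chance of averting
# 100,000 DALYs. The 1,000-DALY certain option is an assumption; the
# 50% optimism discount is illustrative, not a calibrated figure.

def expected_dalys(p_success, dalys_if_success):
    """Expected DALYs averted by a binary succeed-or-nothing intervention."""
    return p_success * dalys_if_success

certain = 1_000                                    # averted with certainty
face_value = expected_dalys(0.015, 100_000)        # ~1,500 at face value
discounted = expected_dalys(0.015 * 0.5, 100_000)  # ~750 after discounting

print(face_value > certain)   # face-value EV favours the long shot
print(discounted > certain)   # a 50% discount flips the decision
```

The point being that the whole comparison hinges on those last five thousandths of stated probability, which is exactly the part of the estimate most vulnerable to optimism.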

FWIW I don't take the question as evidence the survey designers are biased in any way

  1. ^

    "this will either avert 100,000 DALYs or have no effect" doesn't feel like a proposition based on well-evidenced statistical regularities...

  2. ^

    not me. Or at least a "1.5%" chance of working for thousands of people and implicitly a 98.5% chance of having no effect on anyone certainly doesn't feel like the sort of degree of precision I'd estimate to...

  3. ^

    Whilst it's an unintended consequence of how the question was framed, this example feels particularly fishy. We're asked to contemplate trading off something that certainly will work against something potentially higher-yielding that is highly unlikely to work, and yet the thing that is highly unlikely to work turns out to have the higher EV because someone has speculated on its likelihood to a very high degree of precision, and those extra 5 thousandths made all the difference. What's the chance the latter estimate is completely bogus or finessed to favour the latter option? I'd say in real-world scenarios (and certainly not just EA scenarios) it's quite a bit more than 5 in 1000....

  4. ^

    that one's a math test too ;-)

  5. ^

    maybe a universe where physics is a god with an RNG...

Feels like taking into account the likelihood that the "1.5% probability of 100,000 DALYs averted" estimate is a credence based on some marginally-relevant base rate[1] that might have been chosen with a significant bias towards optimism is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)[2]

A very low percentage chance of averting a lot of DALYs feels a lot more like "1.5% of clinical trials of therapies for X succeeded; this untested idea might also have a 1.5% chance" optimism attached to a proposal offering little reason to believe it's above average rather than an estimate based on somewhat robust statistics (we inferred that 1.5% of people who receive this drug will be cured from the 1.5% of people who had that outcome in trials). So it seems quite reasonable to assume that the 1.5% chance of a positive binary outcome estimate might be biased upwards. Even more so in the context of "we acknowledge this is a long shot and high-certainty solutions to other pressing problems exist, but if the chance of this making an impact was as high as 0.0x%..." style fundraising appeals to EAs' determination to avoid scope insensitivity.

  1. ^

    either that or someone's been remarkably precise in their subjective estimates or collected some unusual type of empirical data. I certainly can't imagine reaching the conclusion an option has exactly 1.5% chance of averting 100k DALYs myself 

  2. ^

    if you want to show off you understand EV and risk estimation you'd answer (C) "here's how I'd construct my portfolio" anyway :-) 

They do specifically say that they consider other types of university funding to have greater cost-benefit (and I don't think it makes sense to exclude reputation concerns from cost-benefit analysis, particularly when reputation boost is a large part of the benefit being paid for in the first place). Presumably not paying stipends would leave more to go around. I agree that more detail would be welcome.
