This stands in notable contrast to most other religious and philosophical traditions, which tend to focus on timescales of centuries or millennia at most, or alternatively posit an imminent end-times scenario.
Feels like the time of perils hypothesis (and its associated imperatives to act and magnitude of reward) popular with longtermists maps rather more closely to the imminent end-times scenario common to many eras and cultures than to Buddhist views of short and long cycles and an eventual[1] Maitreya Buddha...
there have also been Buddhists acting on the belief that the Maitreya was imminent or the claim that they were the Maitreya...
It's a little different, but I'm not sure indexing to the consumption preferences of a certain class of US citizen in 2025 represents a better index, or one particularly close to Rawls's concept of primary goods. The "climate controlled space" in particular feels oddly specific (both because much of the world doesn't need full climate control, and because 35m^2 is not a particularly "elite" apportionment of space).
To the extent the VPP concept is useful, I'd say it's mostly in indicating that no matter how much it bumps GDP per capita, AI isn't going to automagically reduce the costs of land and buildings, and is currently driving up, very rapidly, the amount of compute+bandwidth a "US coastal elite" person directly or indirectly consumes...
I don't have a global audience, but if I did, I wouldn't have shared this view, which I expressed to individuals back when COVID was first reported:
probably this isn't going to become a global pandemic or affect us at all; but the WHO overreacting to unknown new diseases is what prevents pandemics from happening
That take illustrates two things: firstly, that there are actual lifesaving reasons for communicating messages slightly differently to your personal level of concern about an issue, and secondly, that hunches about what is going to happen next can be very wrong.
In fact, semi-informed contrarian hunches were shared frequently by public intellectuals throughout the pandemic, often with [sincere] high confidence. They predicted it would cease to be a thing as soon as the weather got warmer, were far too clever to wear masks because they knew that protective effects which might be statistically significant at population level had negligible impact on them personally, stopped worrying about infection because they were using Ivermectin as a prophylactic, and were keen to express their concerns about vaccines.[1] Piper's hunch is probably unusual in being directionally correct. Of all the possible cases for public intellectuals sharing everything they think about an issue, COVID is probably the worst example. Many did, and many readers and listeners died.
Being a domain expert relative to one's audience doesn't seem nearly enough to justify contradicting actual experts with speculation about health in other contexts either.[2]
Similarly, I'm unfamiliar with Ball, but if he is “probably way above replacement level for ‘Trump admin-approved intellectual’” he should probably try to stay in post. There are many principled reasons to cease to be a White House adviser, but pursuing a particular cause by placing less emphasis on arguments the administration might be receptive to and more on others isn't really one of them. It's not like theories that Open Source AI might be valuable as an alternative to an oligarchy dominated by US Americans who orbit the White House struggle to get aired in other environments. Political lobbying is the ur-case for emphasizing the bits the audience cares about, and I'm really struggling to imagine any benefit to Ball giving the same message to 80k Hours and the Trump administration, unless the intention is for both audiences to ignore him.
I haven't read either of MacAskill's full-length books so I'm less sure on this one, but my understanding is that one focuses on various approaches to addressing poverty and one focuses on the long term, in much the same way as Famine, Affluence and Morality has nothing to say on liberating animals and Animal Liberation has little to say on duties to save human lives.[3] I don't think there's anything deceptive in editorial focus, and I think if readers are concluding from reading one of those texts that Singer doesn't care about animals or that MacAskill doesn't care about the future, the problem with jumping to inaccurate conclusions is all theirs. MacAskill has written about other priorities since; I don't think he owes his audience apologies for not covering everything he cares about in the same book.
I do have an issue with "bait and switch" moves, like using events nominally about the best way to address global poverty to segue into "actually all these calculations to save children's lives we agonised over earlier are moot; turns out the most important thing is to support these AI research organizations we're affiliated with",[4] but I consider that fundamentally different to editorial focus.
These are just the good-faith beliefs held by intellectuals with more than a passing interest in the subject. Needless to say not all the people amplifying them had such good intentions or any relevant knowledge at all...
At least with COVID, public health authorities were also acting on low information. The same is not true in other cases, where on the one hand there is a mountain of evidence-based medicine and on the other a smart, more influential person idly speculating otherwise.
Even though Singer has had absolutely no issue with writing about some seriously unpopular positions he holds, he still doesn't emphasize everything important in everything he writes...
Apart from the general ickiness, I'm not even convinced it's a particularly good way to recruit the most promising AI researchers...
Seems like you and the other David T are talking past each other tbh.
Above, you reasonably argue that the [facetious] "time of carols" hypothesis is not remotely as credible as the time of perils hypothesis. But you also don't assign a specific credence to it, or provide an argument that the "time of carols" is impossible or even has probability <1%.[1]
I don't think it would be fair to conclude from this that you don't understand how probability works, and I also don't think it is reasonable to treat the probability of the 'time of carols' as sufficiently nontrivial to warrant action in the absence of any specific credence attached to it. Indeed, if someone responded to you indirectly with an example which assigned a prior of "just 1%" to the "time of carols", you might feel justified in assuming it was them misunderstanding probability...
The rest of Thorstad's post, which doesn't seem to be specifically targeted at you, explicitly argues that in practice, specific claims involving navigating a 'time of perils' also fall into the "trivial" category,[2] in the absence of a robust argument as to why, of all the possible futures, this one is less trivial than the others. He's not arguing for "many gods" which invert the stakes so much as "many gods/pantheons means the possibility of any specific god is trivial, in the absence of compelling evidence of said god's relative likelihood". He also doesn't bring any evidence to the table (other than arguing that the time of perils hypothesis involves claims about x-risk in different centuries which might be best understood as independent claims[3]), but his position is that this shouldn't be the sceptic's job...
(Personally I'm not sure what said evidence would even look like, but for related reasons I'm not writing papers on longtermism and am happy applying a very high discount rate to the far future)
I think everyone would agree that it is absurd (that's a problem with facetious examples)[4] but if the default is that logical possibilities are considered nontrivial until proven otherwise...
He doesn't state a personal threshold, but does imply many longtermist propositions dip below Monton's 5 * 10^-16 once you start multiplying the component claims together....
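To get a feel for how a conjunction ends up that small, here's a toy calculation (the numbers are purely hypothetical, not Thorstad's or Monton's) using the per-century framing mentioned above: if x-risk stays nontrivial rather than dropping to near zero, the survival term alone sinks below the threshold long before the million-year horizons that do the work in astronomical-value arguments.

```python
import math

# Toy illustration with made-up numbers: if extinction risk stays at a
# hypothetical 1% per century (i.e. the "perils" never end), the probability
# of surviving N centuries is (1 - r)**N, and it crosses Monton's proposed
# 5e-16 threshold well before a million years have passed.

r = 0.01            # hypothetical per-century extinction risk
threshold = 5e-16   # Monton's suggested level for ignoring tiny probabilities

n_centuries = math.log(threshold) / math.log(1 - r)
print(f"Survival probability falls below {threshold} after ~{n_centuries:,.0f} centuries")
# ~3,506 centuries, i.e. roughly 350,000 years -- far short of the
# million-year-plus futures the largest expected-value claims rely on.
```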
A more significant claim he fails to emphasize is that the relevant criterion for longtermist interventions isn't so much that the baseline hypothesis about peril distribution is [incidentally] true, but that the impact of a specific intervention at the margin has a sustained positive influence on it.
I tend to dislike facetious examples, but hey, this is a literature in which people talk about paperclip maximisers and try to understand AI moral reasoning capacity by asking LLMs variations on trolley problems...
I think this is generally true, but I'm not sure that GiveWell is the best example of competition with social-sector peers, since they seem mainly focused on hiring Western-based number crunchers who might otherwise be at a university or analysing financial data in the commercial sector, rather than poaching from a small pool of educated staff at local NGOs or from the best-networked, highest-performing grant fundraisers in the West.
I don't think longtermism necessarily needs new priorities to be valuable if it offers a better perspective on existing ones (although I don't think it does this well either).
Understanding what the far future might need is very difficult. If you'd asked someone 1000 years ago what they should focus on to benefit us, you'd get answers largely irrelevant to our needs today.[1] If you asked someone a little over 100 years ago, their ideas might seem more intelligible, and one guy was even perceptive enough to imagine nuclear weapons, although his optimism about what became known as mutually assured destruction setting the world free looks very wrong now. And the people 100 years ago who did boring things focused on the world of their time did more for us than the people dreaming of post-work utopias.
To that extent, the focus on x-risk seems quite reasonable: still existing is something we actually can reasonably believe will be valued by humans in a million years' time.[2] Of course, there are also over 8 billion reasons alive today to try to avoid human extinction (and most non-longtermists consider at least as far ahead as their children), but longtermism makes arguments for it being more important than we think. This logically leads to a willingness to allocate more money to x-risk causes, and to consider more unconventional and highly unlikely approaches to x-risk. That is a consideration, but in practice I'm not sure it leads to better outcomes: some of the approaches to x-risk seeking funding make directionally different assumptions about whether more or less AGI is crucial to survival, so they can't both be right, and the 'very long shot' proposals that only start to make sense if we introduce fantastically large numbers of humans to the benefit side of the equation look suspiciously like Pascal's muggings.[3]
Plus, people making longtermist arguments typically seem to attach fairly high probabilities, by their own estimation, to stuff like AGI that they're working on, which if true would make their work entirely justifiable even focusing only on humans living today.
(A moot point, but I'd have also thought that although the word 'longtermist' wasn't coined until much later, Bostrom and to a lesser extent Parfit fit the description of longtermist philosophy. Of course, they also weren't the first people to write about x-risk.)
I suspect the main answers would be to do with religious prophecies or strengthening their no-longer-extant empire/state
Notwithstanding fringe possibilities like humans in a million years being better off not existing, or (for impartial total utilitarians) humanity displacing something capable of experiencing much higher aggregate welfare.
Not just superficially, in that someone is asking you to suspend scepticism by invoking a huge reward, but also in that the huge rewards themselves make sense only if you believe very specific claims about x-risk over the long-term future being highly concentrated in the present (very large numbers of future humans in expectation, and x-risk being nontrivial for any extended period of time, might each seem superficially uncontroversial possibilities, but they're actually strongly in conflict with each other).
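A rough sketch of why the two pull apart, with entirely made-up numbers: if the per-century risk never declines, expected future population is capped by a geometric series.

```python
# Made-up numbers purely for illustration: persistent per-century extinction
# risk caps the expected number of future people via a geometric series,
# so it can't coexist with astronomically large futures in expectation.

r = 0.01      # hypothetical per-century extinction risk, assumed never to decline
pop = 10e9    # hypothetical number of people alive in any given century

# Expected future people = pop * sum_{t>=1} (1 - r)^t = pop * (1 - r) / r
expected_future_people = pop * (1 - r) / r
print(f"{expected_future_people:.2e}")  # ~9.90e+11 -- large, but nowhere near astronomical
```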
I think it's also more fundamental, in the sense that a number of EA orgs are inherently "comms-focused" because they're lobbying for some sort of cause to some sort of decision maker (convince politicians to endorse challenge trials or ban datacentres and lead paint, or maybe persuade fish farmers or maternal care workers in LEDCs to adopt a different approach). Or, if they're not directly lobbying, they might be trying to communicate research to a relatively small group of people like computer scientists or people who want to do inter-species utility loss comparisons.
Also, with some notable exceptions, I think a lot of EA is quite insular: orgs want to convey that they're doing important work to OpenPhil funders, to a pipeline of talent coming from EA groups, to "aligned" organizations to collaborate with, or to the sort of small donor that's already thinking about long-shot solutions to x-risks or making donations to improve the welfare of unfashionable creatures. That's a short list to A/B test, a hard group to target with paid media, and also an audience which has exacting expectations about how things are communicated, so the digital-marketing-to-a-wider-audience approach may not work so well. The downside is that competing for the same attention is usually going to be net less impactful than finding interest from the wider public...
Sure, your example showed that if one irrationally disregards earlier generations and focuses purely on the needs of cohort P, Option B is a clear winner. If one doesn't, we agree that it's actually pretty darn complicated to estimate the total welfare impact of donating now versus donating a larger nominal sum to equivalent problems (assuming they still exist) in future. That requires a lot of contestable counterfactual assumptions,[1] as well as choices of discount rates, PPP and money-nonlinearity assumptions, and decisions about whether any value is attached to the economic stimulus to non-recipients in developing countries and to keeping marginal NGOs alive. (Donations to things other than poverty relief have their own idiosyncrasies: hopefully the number of ITNs needed to prevent malaria deaths by ~2050 will be zero.)
The intergenerational elasticity point is an interesting one, but intergenerational income elasticities are higher in less developed countries (and the higher incomes are partially inherited by more people in later generations, assuming they continue to reproduce above replacement rate). And under normal assumptions we care about the earlier generations helped at least as much as the later ones, so you've already helped many more people than the direct recipients by the time the patient philanthropy fund is investigating how many more people the accrued compound interest will let them help. Plus, in the specific example of the roof we're talking about wealth, and you'd have to invest very well in stocks and shares to beat the imputed 20% annual returns on a tin roof, even over time spans that extend beyond its serviceability.
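To put rough (and entirely hypothetical) numbers on that roof comparison, assuming the household effectively reinvests what the roof saves them each year:

```python
# Toy comparison with hypothetical parameters: donate $500 towards a tin roof
# now (imputed ~20% annual return to the household, assumed to be reinvested
# locally over the roof's service life) versus investing the $500 at a 7% real
# market return and donating the proceeds later.

donation = 500
roof_return, roof_life = 0.20, 15     # hypothetical imputed return and service life (years)
market_return, wait = 0.07, 30        # hypothetical real equity return and waiting period (years)

give_now = donation * (1 + roof_return) ** roof_life   # ~$7,700 of benefit by year 15
give_later = donation * (1 + market_return) ** wait    # ~$3,800 to donate at year 30

print(f"give now:   ${give_now:,.0f} (accumulated by year {roof_life})")
print(f"give later: ${give_later:,.0f} (available at year {wait})")
```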
Catch-up growth definitely exists; the only question is whether more marginal economies will be excluded from it.[2] There are many reasons for economic stagnation in poorer regions (most obviously terrible governance), but it's certainly not independent of whether philanthropic funds for economic growth and poverty alleviation decide that in the near term they should shift towards promoting the economic development of the stock market in their own country instead.[3] Too much patience is probably worse for developing countries than the opposite extreme of too much philanthropic cash chasing too few viable opportunities.
You also have to make assumptions about the philanthropists of the future: I'm not as rosy on near-future technology-enabled post-scarcity societies as some people on here, but if we trend in that direction, maybe your nominally larger funds are a lot less relevant in the future than they are now.
Never mind the Asian Tiger economies: even some conflict-ridden, impoverished backwaters like Burkina Faso have seen average growth rates comparable to US stocks over extended periods of time, and even without wild technological optimism it'll probably be fairly hard to find people living under the new $3 per day (2025 PPP) poverty threshold in 2075.
It makes wayyy more sense for funds to keep most of their money invested in domestic stocks when they're endowments ring-fenced for specific things like selective scholarships or maintenance of a facility than when they're funds for promoting economic growth and poverty alleviation.
Feels like in the real world you describe, in which few/no cause areas are actually saturated with funding, neglectedness is of interest mainly in how it interacts with tractability.
If your small amount of effort kickstarts an area of research rather than merely adding some marginal quantity of additional research or funding, you might get some sort of multiplier on your efforts, assuming others find your case persuasive. And the fact that certain problems have been neglected due to the relative obscurity/rarity of who/what they affect might be an indication that more tractable interventions exist (if there were a simple cure for common cancers it would be remarkable we haven't found it yet; conversely, certain obscure diseases have been the subject of comparatively little research). On the other hand, the relationship doesn't always run that way: some causes, like world peace, are neglected precisely because, however important they might be, there doesn't appear to be an efficacious solution.