I don't see how Thorstad's claim that the Space Guard Survey is a "special case" in which a strong longtermist priority is reasonable (and that other longtermist proposals lack the same justification) is "rebutted" by the fact that Greaves and MacAskill use the Space Guard Survey as their example. The actual scope of longtermism is clearly not restricted to observing exogenous risks with predictable regularity and identifiable and sustainable solutions, and is therefore subject, at least to some extent, to the critiques Thorstad identified.
Even the case for the Space Guard Survey looks a lot weaker than Thorstad granted if one considers that near-term x-risk from AI is fairly significant, which most longtermists seem to agree with. Suddenly, instead of having favourable odds of enabling a vast future, it simply observes asteroids[1] for three decades before AI becomes so powerful that human ability to observe asteroids is irrelevant, and any positive value it supplies is plausibly swamped by alternatives like researching AI that doesn't need big telescopes to predict asteroid trajectories and can prevent unfriendly AI and other x-risks. The problem, of course, is that we don't know what that best-case solution looks like,[2] and most longtermists think many areas of spending on AI look harmful rather than near best case, but don't have high certainty (or any consensus) about which areas those are. Which is Thorstad's 'washing out' argument.
As far as I can see, Thorstad's core argument is that even if it's [trivially] true that the theoretical best possible course of action has most of its consequences in the future, we don't know what that course of action is, or even what near-best solutions are. Given that most longtermists don't think the canonical asteroid example is the best possible course of action, and there's widespread disagreement over whether actions like accelerating "safe" AI research increase or reduce risk, I don't see his concession that the Space Guard Survey might have merit under some assumptions as undermining that.
ex post, we know that so far it's observed asteroids that haven't hit us and won't in the foreseeable future.
in theory it could even involve saving from malaria a child who grows up to be an AI researcher. This is improbable, but when you're dealing with unpredictable phenomena with astronomical payoffs...
In particular, I'm not sure I understand the concern that people might switch to other platforms with completely different audiences and feature sets.
Substack's value is that it is a place to sell subscriptions to content, not that it has particularly innovative or well-designed features. It seems that if writers wished to make money from their content they would switch to Substack regardless of the quality of EA Forum software, whereas if their priority were engaging with EAs, there would be little incentive to switch to a service with a different audience and a monetisation-focused ethos even if its editing tools were top notch.
Also, in his original formulation "high status" environments are often simply nicer (especially to people who disproportionately care about material wealth and status). The people who do move to the developing world tend to be people who don't mind the inconveniences associated with [global] relative poverty, like having to drink bottled water or everything else around them looking a tad scruffy.
Above all, Filipinos benchmark their wealth relative to other Filipinos (even if their dream involves a Green Card). Americans don't start benchmarking themselves against Filipinos and thinking that cars are exotic wealth just because they move to Manila.
I think Godwinning the debate actually strengthens the case for "I don't do labels" as a position. True, most people won't hesitate to say that the label "Nazi" doesn't apply to them, whether they say they don't do labels or have social media profiles which read like a menu of ideologies.[1] On the other hand, many who wouldn't hesitate to say that they think Nazis and fascists are horrible, and agree that they should be voted against and maybe even fought against, would hesitate to label themselves "antifascist", with its connotations of ongoing participation in activism and/or membership of self-styled antifascist groups whose other positions they may not agree with.
and from this, we can perhaps infer that figures at Anthropic don't think EA is as bad as Nazism, if that was ever in doubt ;-)
Claims like "Trump's tariffs have slowed down AGI development" feel like they need some evidence to back them up. The larger companies working on AGI have already raised funds, assembled teams and bought hardware (which can be globally distributed if necessary), and believe they're going to get extraordinary returns on that effort. Unlike retail and low-margin businesses, it doesn't seem like a 10% levy on manufactured goods, or even being unable to import Chinese chips, is going to stop them from making progress.
I think the most likely explanation, particularly for people working at Anthropic, is that EA has a lot of "takes" on AI, many of which they (for good or bad reasons) very strongly disagree with. This might fall under "brand confusion", but I think some of it is simply a point of disagreement. It's probably accurate to characterise the AI safety wing of EA as generally regarding it as very important to debate whether AGI is safe to attempt to develop. Anthropic and their backers have obviously already picked a side on that.
I think that's probably a more important thing for them to disassociate themselves from than FTX or individuals being problematic in other ways.
If we say "because targeting you is the most effective thing we can do", we incentivise them not to budge, because they will know that willingness to compromise invites more aggression.
That presumably depends on whether "targeting you is the most effective thing we can do" translates into "because you're most vulnerable to enforcement action", "because you're a major supplier of this company that's listening very carefully to your arguments", "because you claim to be market-leading in ethics", or even just "because you're the current market leader". Under those framings, it still absolutely makes sense for companies to consider compromising.
I agree with the broader argument, though, that if you resolve never to bother with small entities or entities that tell you to get lost, that will deter even more receptive ears from listening to you.
I guess this also applies to junior positions within the system, whose freedom would be determined to a significant extent by people in senior positions.
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare. An alternative candidate for a junior role in an MEP's office or DG Mare is not, hence the difference at the margin is (if non-zero) likely much greater. And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist never will. It also seems non-obvious that even a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
"small" is relative. AMF manages significantly more donations compared with most local NGOs, but it does one thing and has <20 staff. That's very different from Save the Children or the Red Cross or indeed the Global Fund type organizations I was comparing it with, that have more campaigns and programmes to address local needs but also more difficulty in evaluating how effective they are overall. I understand that below the big headline "recommended" charities Give well does actually make smaller grants to some smaller NGOs too, but these will still be difficult to access for many
By "scope of longtermism" I took Thorstad's reference to "class of decision situations" in terms of permutations to be evaluated (maximising welfare, maximising human proliferation, minimising suffering etc) rather than categories of basic actions (spending, voting, selecting clothing).[1] I'm not actually sure it makes a difference to my interpretation of the thrust of his argument (diminution, washing out and unawareness means solutions whose far future impact swamps short term benefits are vanishingly rare and generally unknowable) either way.
Sure, Thorstad starts off by conceding that under certain assumptions about the long-term future,[2] a low-probability but robustly positive action like preparing to stop asteroids from hitting earth, which indirectly enables benefits to accrue over the very long term, absolutely can be a valid priority.[3] But it doesn't follow that one should prioritise the long-term future in every decision-making situation in which money is given away. The funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met,[4] and his core argument is that we're otherwise almost always clueless about what the [near] best solution for the long-term future is. It's not a particularly good heuristic to focus spending on the outcomes you are most likely to be clueless about, and a standard approach to the accumulation of uncertainty is to discount for it, which of course privileges the short term.
I mean, I agree that Thorstad makes no dent in arguments to the effect that if there is an action which leads to positive utility sustained over a very long period of time for a very large number of people, it will result in very high utility relative to actions which don't have that impact: I'm not sure that argument is even falsifiable within a total utilitarian framework.[5] But I don't think his intention is to argue with [near] tautologies, so much as to insist that the set of decisions which credibly result in robustly positive long-term impact is small enough to usually be irrelevant.
all of which can be reframed in terms of "making money available to spend on priorities" in classic "hardcore EA" style anyway...
Some of the implicit assumptions behind the salience of asteroid x-risk aren't robust: if AI doomers are right, then that massive positive future we're trying to protect looks a lot smaller. On the other hand, compared with almost any other x-risk scenario, asteroids are straightforward: we don't have to factor in the possibility of asteroids becoming sneaky in response to our monitoring them, or attach much weight to the idea that informing people about asteroids will motivate them to try harder to make one hit the earth.
you correctly point out that his choice of asteroid monitoring service is different from Greaves and MacAskill's. I assume he does so partly to steelman the original, as the counterfactual impact of a government agency incubating the first large-scale asteroid monitoring programme is more robust than that of a marginal donation to NGOs providing additional analysis. And although he doesn't make this point, I doubt the arguments that decided its funding actually depended on the very long term anyway...
this is possibly another reason for his choice of asteroid monitoring service...
Likewise, pretty much anyone familiar with total utilitarianism can conceive of a credible scenario in which the highest total utility outcome would be to murder a particular individual (baby Hitler, etc.), and I don't think it would be credible to insist such a situation could never occur or never be known. This would not, however, fatally weaken arguments against the principle of "murderism" that focused on doubting there are many decision situations where murder should be considered a priority.