David T

Feels like tractability is the key point here. It doesn't matter a huge amount whether 7 billion is or isn't the total number of animals that would counterfactually be saved if all pets were fed vegan diets.[1]

What matters is what change can feasibly be achieved by a marginal campaign or food innovation, given that vegan pet food is already a product I suspect most vegans are aware of, and most pet owners are not vegans. Also, many vegans are comfortable feeding their pets (or, in the case of one person I know, an entire zoo) omnivorous or carnivorous diets.

I suspect the returns to campaigning would look like the marginal returns to vegan advocacy and meat-alternatives research for humans, but this feels like where the evidence would be most interesting.

  1. ^

    The order of magnitude seems plausible when considering how many more animals free-ranging domestic cats alone are estimated to kill...

"However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI."

I don't think this is a real contribution. I don't think people are trying to make AGI because they are concerned that there will be an insufficient number of high-IQ humans alive in the next few decades. I think they're trying to make it because they think they can.

And also because they [rightly or wrongly] believe that AGI will be more cost-effective and more controllable, will need less sleep, and will have higher problem-solving potential than even the smartest possible humans. And that it will be here a lot sooner. (And, in some of the AGI fantasies, a route to making humans genetically smarter anyway!)

Even if one assumes near-term "AGI" has a fairly low ceiling,[1] "intelligence augmentation" seems unpromising as an EA intervention.[2] The necessary research is complex, expensive, long-term, and dependent not just on germline engineering but on academic research to understand intelligence in less shallow terms than we currently do. It's not clear that there are individually tractable interventions.

The quantifiable impact, if it actually worked, would presumably come from the tiny proportion of people sufficiently rich and focused on maximising their offspring's intelligence to pay to select a few genes somewhat correlated with intelligence for "designer babies", with the possibility that this translates well enough into real-world outcomes to turn a handful of children with already above-average prospects into particularly capable and influential individuals. It is not obvious these children would grow up to use their greater talent (real or perceived) to mitigate existential risk or for any other sort of greater good.[3] Humans with rich, driven parents who've been taught about their superiority to ordinary humans from birth don't sound immune to "alignment problems" either...

As far as germline engineering goes, the more obviously positive quantifiable impact would come from addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.


  1. ^

    I do actually assume this, but it's not fashionable here, or indeed at MIRI!

  2. ^

    At least, viewed through EA's analytical lens rather than its associated cultural tendency to overestimate the importance of individual intelligence...

  3. ^

    I mean, what percentage of the world's smartest people focuses on that now?

In this case Anthropic chose to supply the DoW via a partnership with a company deeply embedded in the administration's part of the political spectrum, and even pointedly denied having any objections to being used to support the administration's little expedition in Venezuela, and the administration decided that wasn't enough. There are many criticisms that can be made of Anthropic's stance on those issues; reluctance to engage with the current US administration isn't one of them.

If declining to actively support MAGA's demand that they develop AI with the explicit purpose of being an autonomous killing device is "virtue signalling", what's left of "AI alignment" to pursue?

A useful post and an interesting starting point for further discussion.

A few more that spring to mind:

  • Is there an intuitively plausible alternative or complementary causal factor which might explain the results? Has the study made some sort of attempt to control for this or estimate its effect size?
  • Does the paper include a large number of hypotheses and find some of them to be statistically significant at the 5% or 10% level? You would expect that to happen by random chance, and it smells of p-hacking, though including them all in the paper is at least more intellectually honest than the alternative approach of testing a lot of hypotheses and only acknowledging the ones that were [incidentally] statistically significant. This is why preregistration is valuable. Note that testing so many hypotheses that one of them is bound to be a "finding" (see the worked example after this list) is not the same thing as testing whether an association between x and y persists across a large number of alternative regressions including other variables that might plausibly have a relationship; the latter is good practice.
  • Is there a standard methodology for conducting research of this nature in the field? Has it been used, and if not, has a plausible rationale for taking a different approach been provided?
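
To make the multiple-testing point concrete (a minimal sketch; the twenty-hypothesis count and 5% threshold are illustrative assumptions, and it treats the tests as independent): the chance that at least one of $k$ true null hypotheses clears a significance threshold $\alpha$ purely by chance is

$$P(\text{at least one false positive}) = 1 - (1 - \alpha)^k = 1 - 0.95^{20} \approx 0.64$$

so a paper testing twenty hypotheses at the 5% level should be expected to report a "significant" result roughly two times in three even if every null is true.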

It would be interesting to hear some of the more specialized heuristics used by organizations like GiveWell and Rethink Priorities, which evaluate a lot of research papers in particular fields.

N.b. on the SMC vs ITN example, I'm fairly confident the answer is that the ITNs are a baseline directly comparable to "no treatment", as they shouldn't affect the progression from bite to symptomatic malaria infection targeted by SMC at all; they simply reduce the frequency of bites (but not to zero, if the sample size is sufficiently large). Prevalence of malarial bites varies between cohorts in "no treatment" studies already. Access to some level of treatment after the fact (HMM) isn't a problem of study construction either; it complicates comparing severe malaria or death statistics with papers where sufferers may have had no treatment at all, but if anything it would probably reduce the reported effect size for SMC. Medical ethics means the appropriate baseline/comparator for a lifesaving treatment usually isn't "do absolutely nothing"; it's "do the [next] best alternative".

I suspect that any resolution to this dispute is likely to be a lot less public than the OpenAI one.

It's fairly obvious, though, that Amodei is signalling that the company didn't object to the use of Claude to support the Venezuela operation, and that the company freely chose to be a defense contractor with a formal partnership with Palantir when it had plenty of other revenue/capital sources...

It seems like most of the additional coefficients you've added are impossible to estimate with any degree of confidence, particularly when it is plausible the impact may be negative. Whether it was the intention or not, that is the main message I get from your formulation.
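
To illustrate why that matters (a purely hypothetical sketch: the coefficient names, ranges, and structure below are my own illustrative assumptions, not your formulation), multiplying several order-of-magnitude-uncertain coefficients, one of which has an uncertain sign, gives an estimate whose sign and scale are both essentially unknown:

```python
import random

# Hypothetical impact estimate built as a product of uncertain coefficients.
# All names and ranges here are illustrative assumptions.
def sample_estimate():
    scale = 10 ** random.uniform(3, 5)           # beings affected: 1e3 to 1e5
    tractability = 10 ** random.uniform(-3, -1)  # chance it works: 0.1% to 10%
    net_effect = random.uniform(-1.0, 2.0)       # per-being impact; sign uncertain
    return scale * tractability * net_effect

samples = [sample_estimate() for _ in range(100_000)]
negative = sum(s < 0 for s in samples) / len(samples)
print(f"{negative:.0%} of sampled estimates are negative")
print(f"range: {min(samples):,.0f} to {max(samples):,.0f}")
```

Under these assumptions the point estimate conveys very little: about a third of the sampled estimates come out negative, and the rest span several orders of magnitude.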

As someone who is not a strong longtermist, I note that an advantage of using non-longtermist heuristics to evaluate impact is that identifying whether an action appears robustly positive for aggregate utility [for humans] on Earth up to time t is much easier than anticipating the effect on the Virgo supercluster after time t.

(A more sophisticated approach might use discounting for temporal and extreme spatial distance rather than time-bounding, but it amounts to the same thing: attaching essentially zero weight to the estimated impact of my actions on the Virgo supercluster a thousand years from now.)
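
To make that equivalence concrete (a minimal sketch; the 1% annual discount rate is an illustrative assumption): with exponential discounting, the weight attached to an impact $t$ years out is

$$w(t) = e^{-rt}, \qquad w(1000) = e^{-0.01 \times 1000} = e^{-10} \approx 4.5 \times 10^{-5}$$

so even a modest discount rate leaves impacts a millennium away with negligible weight, much as a hard time bound would.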

"EAs were also warning, for a long time, about the importance of health aid sent overseas. In contrast, non-EA leftists were more likely to call these institutions colonialist and call EAs racist for neglecting domestic political issues. But when Trump got elected, we were vindicated in the worst of ways: he destroyed much of USAID, and this was the single act in his presidency that led to the most deaths."

This feels like a strange argument to make, and one which seems to be trying way too hard to find evidence of vindication even in failures, which, ironically, is the opposite of what people with good epistemics should be doing. EAs were criticised [by critics whose arguments varied greatly in quality] for tending to treat international aid as primarily an optimization problem best addressed by small specialist charities and individuals maximising their donations, while largely ignoring the political dimension.

Then domestic political issues killed the government programmes funding traditional Big Aid multinational agencies[1] with the stroke of a pen[2] and did far more damage than EA philanthropy can repair.

Directionally, that's the opposite of a validation of EA orthodoxy on aid.

I don't think neglecting the politics of whether aid actually gets disbursed is a strong argument against EA either - not least because I don't think EAs would have been able to dissuade people from voting for Trump even if they'd made it their leading cause area, or to convince Trump/Musk that foreigners' lives mattered - but it's definitely not a case where the "don't neglect politics" and "actually, big programs that aren't quite as good as AMF are still really good" critics can be said to have lost the argument.

  1. ^

    (programs not run by EAs, but admired by some of them for their results) 

  2. ^

    For added irony, the person who gleefully signed those death warrants was at least superficially EA-adjacent enough to have enthusiastically endorsed MacAskill's writing and funded a couple of longtermist organizations in the past.

It would probably be worthwhile to encourage legally binding versions of the Giving Pledge in general.

Donations before death are optimal, but it's particularly easy to ensure the pledge is met at death via a will, which can be updated at the time of signing the pledge. (I presume most of the 64% did have a will, but chose to leave their fortunes to others. I guess it's possible some fortunes inherited by widow[er]s will be donated to pledged causes in the fullness of time.)

I don't think this should replace the Giving Pledge - some people's intentions and financial situations are too complex to write into a binding contract - but such pledges should be taken more seriously (even though in practice they are still likely to be reversible).

"Meta is paying billions of dollars to recruit people with proven experience at developing relevant AI models."

Does the set of "people with proven experience in building AI models" overlap with "people who defer to Eliezer on whether AI is safe" at all? I doubt it.

Indeed, given that Yudkowsky's arguments on AI are not universally admired, and that people who have chosen as their career to build the thing he says will make everybody die are particularly likely to be sceptical of his convictions on that issue, an endorsement from him might even be net negative.

The opportunity cost only exists for those with a high chance of securing comparable-level roles at AI companies, or very senior roles at non-AI companies, in the near future. Clearly this applies to some people working in AI capabilities research,[1] but if you wish to imply it applies to everyone working at MIRI and similar AI research organizations, I think the burden of proof rests on you. As for Eliezer, I don't think his motivation for dooming is profit, but it's beyond dispute that dooming is profitable for him. Could he earn orders of magnitude more money from building a benevolent superintelligence based on his decision theory, as he once hoped to? Well yes, but it'd have to actually work.[2]

Anyway, my point was less to question MIRI's motivations or Thomas's observation that Nate could earn at least as much working for a pro-AI organization, and more to point out that (i) no, really, those industry-norm salaries are very high compared with pretty much any quasi-academic research job not premised on treating superintelligence as imminent, and especially compared with roles typically considered "altruistic"; and (ii) if we're worried that money gives AI company founders the wrong incentives, we should worry about the whole EA-AI ecosystem and talent pipeline EA is backing. Especially since that pipeline incubated those founders.

  1. ^

    including Nate

  2. ^

    and work in a way that didn't kill everyone, I guess...
