David T
I thought I was reasonably clear in my post but I will try again. As far as I understand, your argument is that the items in the tiers are heuristics people might use to determine how to make decisions, and the "tiers" represent how useful/trustworthy they are at doing that (with stuff in lower tiers like "folk wisdom" being not that useful and stuff in higher tiers like RCTs being more useful).

But I don't really see "literacy" or "math", broadly construed, as methods of reaching any specific decision; they're simply things I might need in order to understand actual arguments (and for that matter I am convinced that people can use good heuristics whilst being functionally illiterate or innumerate). The only real reason I can think of for putting them at the top is that many people argue against trusting (F-tier) folk wisdom, there are some good arguments about not overindexing on (B-tier) RCTs, and there are few decent arguments on principle against (S-tier) reading or adding up, despite the fact that literacy helps genocidal grudges as well as scientific knowledge to spread. I agree with this, but I don't think it illustrates much that can help me make better decisions as an individual. Because what really matters, if I'm using my literacy to help me make a decision, is what I read and which of the things I read I trust, much more than whether I can trust that I've parsed it correctly. Likewise, I think which thought experiments I'm influenced by matters more than the idea that thought experiments are (possibly) less trustworthy at helping me make decisions than a full-blown philosophical framework, or more trustworthy than folk wisdom.

FWIW I think the infographic was fine and would suggest reinstating it (I don't think the argument is clearer without it, and it's certainly harder for people to suggest methods you might have missed if you don't show methods you included!)

Your linkpost also strips most of the key parts from the article, which I suspect some of the downvoters missed.

But Gebru and Torres don't object to "the entire ideology of progress and technology" so much as accuse a certain [loosely-defined] group of making nebulous fantasy arguments about progress and technology to support their own ends, suggest they're bypassing a load of lower level debates about how actual progress and technology is distributed and accuse them of being racist. It's a subset of the "TESCREALs" who want AI development stopped altogether, and I don't think they're subliminally influenced by ancient debates on divine purpose either.

It's something of an understatement to suggest that it's not just Catholics and Anglicans opposed to ideas they disagree with gaining too much power and influence,[1] and it would be even more tendentious to argue that secular TESCREALs' interest in shaping the future and consequentialism is aligned in any way with Calvinist predestination. 

If Calvin were to encounter any part of the EA movement he'd be far more scathing than Gebru and Torres or people writing essays about how utilitarianism is bunk.[2] Maybe TESCREALism is just anti-Calvinism ;) ...

  1. ^

    Calvin was opposed to them too, although he believed heretics should suffer the death penalty rather than merely being invited to read thousand word blogs and papers about how they were bad people.

  2. ^

    and be equally convinced that the e-accelerationists and Timnit and Emile were condemned to eternal damnation. 

I didn't downvote or disagreevote, but I'm not sure the logic of the rankings is well explained. I get the idea that concepts in the lowest tiers are supposed to be of more limited value, but I'm not sure why the very top tiers are literacy/mathematics - it seems like literacy/mathematics by themselves almost never point to any particular conclusions, but are merely prerequisites to using some other method to reach a decision. Is the argument that few people would dispute that literacy and mathematics should play some role in making decisions, whereas the value of 'divine revelation' is hotly disputed and the validity of natural experiments debatable? That makes sense, but it feels like it needs more explanation.

E.g., most members of the Democratic party in the US would endorse "social safety nets, universal health care, equal opportunity education, respect for minorities" but would not self-identify as socialist

Many mainstream European politicians would though, whilst happily coexisting with capitalism. Treatment of "socialism" as an extremist concept which even people whose life mission is to expand social safety nets shy away from is US-exceptionalism; in the rest of the world it's a label embraced by a broad enough spectrum to include both Tony Blair and Pol Pot. So it's certainly of value to narrow that definition down a bit. :) 

It certainly reads better as satire than intellectual history. A valid criticism of the idea of "TESCREALISM" is that bundling together a long list of niche ideas just because they involve overlapping people hanging out on overlapping niche corners of the web (and in California) to debate related ideas about the future and their own cleverness doesn't actually make it a coherent *thing*, given that lots of the individual representatives of those groups have strong disagreements with the others and the average EA probably doesn't know what cosmism is.

On the other hand, it's difficult to take seriously the idea that secular intellectuals who find the Singularity and some of its loudest advocates a bit silly and some of the related ideas pushed a bit sus are covertly defending a particular side of a centuries old debate in Christian theology...

Feels like the argument you've constructed is a better one than the one Thiel is actually making, which seems to be a very standard "evil actors often claim to be working for the greater good" argument with a libertarian gloss. Thiel doesn't think redistribution is an obviously good idea that might backfire if it's treated as too important, he actively loathes it. 

I think trying too hard to do good things and ending up doing harm is absolutely a failure mode worth considering, but it has far more value in the context of specific examples. It seems like quite a common theme in AGI discourse (it follows from standard assumptions like AGI being near and potentially either incredibly beneficial or destructive, research or public awareness either potentially solving the problem or starting a race etc) and the optimiser's curse is a huge concern for EA cause prioritization overindexing on particular data points. Maybe that deserves (even) more discussion.

But I don't think a guy who doubts we're on the verge of an AI singularity and couldn't care less whether EAs encourage people to make the wrong tradeoffs between malaria nets, education and shrimp welfare adds much to that debate, particularly not with a throwaway reference to EA in a list of philosophies popular with the other side of the political spectrum which he thinks are basically the sort of thing the Antichrist would say.

I mean, he is also committed to the somewhat less insane-sounding "growth is good even if it comes with risks" argument, but you can probably find more sympathetic and coherent and less interest-conflicted proponents of that view.

"Pro-natalists" do, although that tends to be more associated with specific ideas that the world needs more people like them (often linked to religious or nationalistic ideas) than EA. The average parent tends to think that bringing up a child is [one of] the most profound ways they can contribute to the world, but they're thinking more in terms of effort and association than effect size.

I also think it's pretty easy to make a case that having lots of children (who in turn have descendants) is the most impactful thing you could do, based on certain standard longtermist assumptions (large possible future, total utilitarian axiology, human lives generally net positive) and uncertainty about how to prevent human extinction. But I'm not aware of a strand of longtermism that actually preaches or practices this, and I don't think it's a particularly strong argument.

Yeah. Frankly, of all the criticisms of EA that might easily be turned into something more substantial, accurate and useful with a little bit of reframing, a liberalism-hating surveillance-tech investor dressing his fundamental loathing of its principles and opposition to the limits it might impose on tech he actively promotes in pretentious pseudo-Christian allusion seems least likely to add any value. [1]

Doesn't take much searching of the forum to find outsider criticisms of aspects of the AI safety movement which are a little less oblique than comparing it with the Antichrist, written by people without conflicts of interest who've probably never written anything as dumb as this, most of which seem to get less sympathetic treatment.

  1. ^

    and I say that as someone more in agreement with the selected Thiel pronouncements on how impactful and risky near-term AI is likely to be than the average EA

tbf to the AI 2027 article, whilst it makes a number of contentious arguments its actual titles and subtitles seem quite low key. 

But I do agree with the meta point that norms of socially punishing critics merely for the boldness of their claims are counterproductive, and that norms of careful hedging can result in actual sanewashing of nonsense: "RFK advances novel theory about causes of autism; some experts suggest other causes".

Good post Nick. I think the question mark about the timing of the experiment, considering cuts to many robustly good programmes, is a particularly good one.

I don't think the Centre for Effective Aid Policy is a particularly accurate comparison, as I think there's a significant difference between the likely effectiveness of a new org lobbying Western governments to give money to different causes (against sophisticated lobbyists for the status quo and government-defined "soft power" priorities) and orgs with established relationships providing technical recommendations to improve healthcare outcomes to LEDC governments that actually express interest in using them. I think the lack of positive findings in the wider literature you link to is more interesting, although I suspect the outcomes are highly variable depending on level of government engagement, competence of organizations, magnitude of problems they purport to solve and whether the shifts they are promoting are even in the right direction. It would be interesting in that respect to see how GiveWell evaluated the individual organizations. I do agree that budgeting dashboards don't necessarily seem like an area relatively highly paid outsiders are best placed to optimise.

I suspect the high cost reflects use of non-local staff, which of course has a mixture of advantages and disadvantages beyond the higher cost.

I'm sceptical of the value of RCTs between nations that have different healthcare policies and standards and bureaucracies to start with (particularly as I don't think there's a secular global trend in the sort of outcomes TSUs are supposed to achieve, and collecting data on some of them feels like it would involve nearly as much effort as actually providing the recommendations). A lot of policy and government optimization work - effective or otherwise - is hard to RCT, especially at national level. Which doesn't mean there can't be more transparency and non-RCT metrics.
