David T

1238 karma · 225 comments
Got to agree with the AI "analysis" being pretty limited, even though it flatters me by describing my analysis as "rigorous".[1] It's not a positive sign that this news update and jobs listing is flagged as having particularly high "epistemic quality".

That said, I enjoyed the 'egregore' section's bits about the "ritualistic displays of humility", "elevating developers to a priesthood" and "compulsive need to model, quantify, and systematize everything, even with acknowledged high uncertainty and speculative inputs => illusion of rigor".[2] Gemini seems to have absorbed the standard critiques of EA and rationalism better than many humans, including humans writing criticisms and defences of those belief systems. It's also not wrong.

Its poetry is still Vogon-level though.

  1. ^

    For a start I think most people reading our posts would conclude that Vasco and I disagree on far too much to be considered "intellectually aligned", even if we do it mostly politely by drilling down to the details of each other's arguments.

  2. ^

    OK, if my rigour is illusory maybe that compliment is more backhanded than I thought :)

Fair. I agree with this.

Plenty of entities who aren't EAs are doing that sort of lobbying already anyway.

There are some good arguments that in some cases, developing countries can benefit from protecting some of their own nascent industries.

There are basically no arguments that the developed world putting tariffs (or anti-dumping duties) on imports helps the developing world, which is the harmful scenario Karthik discusses in his article as an example of Nunn's argument that rich countries should stop doing things that harm poorer countries. Developed countries know full well these limit poorer countries' ability to export to them... but that's also why they impose them.

At face value that might seem the case. In practice, Reform is a party dominated by a single individual, who enjoys promoting hunting, deregulation and criticising the idea of vegan diets: he's not exactly the obvious target for animal welfare arguments, particularly not when it's equally likely a future coalition will include representatives of a Green Party.

The point in the original article about conservatives and country folk being potentially sympathetic to arguments for restrictions on importing meat from countries with lower animal welfare standards is a valid one, but it's the actual Conservative Party (who will be present in any coalition Reform needs to win power and have a yawning policy void of their own) that fits that bracket, not the upstart "anti-woke", pro-deregulation party whose core message is a howl of rage about immigration. Farage's objections to the EU were around the rules, not protectionism, and he's actually highly vocal on the need to reduce restrictions on the import of meat from the US, which has much lower standards in many areas. Funnily enough, Farage's political parties have had positions on regulating stunning animals for slaughter, but the targeting of slaughtering practices associated with certain religions might have been for... other reasons, and Farage rowed back on it.[1]

  1. ^

    halal meat served in the UK is often pre-stunned, whereas kosher meat isn't, so the culture war arguments for mandatory stunning hit the wrong target...

I thought I was reasonably clear in my post but I will try again. As far as I understand, your argument is that the items in the tiers are heuristics people might use to determine how to make decisions, and the "tiers" represent how useful/trustworthy they are at doing that (with stuff in lower tiers like "folk wisdom" being not that useful and stuff in higher tiers like RCTs being more useful).

But I don't really see "literacy" or "math", broadly construed, as methods for reaching any specific decision; they're simply things I might need to understand actual arguments (and for that matter I am convinced that people can use good heuristics whilst being functionally illiterate or innumerate). The only real reason I can think of for putting them at the top is that many people argue trusting (F-tier) folk wisdom is bad, there are some good arguments about not overindexing on (B-tier) RCTs, and there are few decent arguments on principle against (S-tier) reading or adding up, despite the fact that literacy helps genocidal grudges as well as scientific knowledge to spread. I agree with this, but I don't think it illustrates very much that can be used to help me make better decisions as an individual. Because what really matters, if I'm using my literacy to help me make a decision, is what I read and which things I read I trust; much more than whether I can trust that I've parsed it correctly. Likewise, I think which thought experiments I'm influenced by is more important than the idea that thought experiments are (possibly) less trustworthy at helping me make decisions than a full-blown philosophical framework, or more trustworthy than folk wisdom.

FWIW I think the infographic was fine and would suggest reinstating it (I don't think the argument is clearer without it, and it's certainly harder for people to suggest methods you might have missed if you don't show methods you included!)

Your linkpost also strips most of the key parts from the article, which I suspect some of the downvoters missed.

But Gebru and Torres don't object to "the entire ideology of progress and technology" so much as accuse a certain [loosely-defined] group of making nebulous fantasy arguments about progress and technology to support their own ends, suggest they're bypassing a load of lower level debates about how actual progress and technology is distributed and accuse them of being racist. It's a subset of the "TESCREALs" who want AI development stopped altogether, and I don't think they're subliminally influenced by ancient debates on divine purpose either.

It's something of an understatement to suggest that it's not just Catholics and Anglicans opposed to ideas they disagree with gaining too much power and influence,[1] and it would be even more tendentious to argue that secular TESCREALs' interest in shaping the future and consequentialism is aligned in any way with Calvinist predestination. 

If Calvin were to encounter any part of the EA movement he'd be far more scathing than Gebru and Torres or people writing essays about how utilitarianism is bunk.[2] Maybe TESCREALism is just anti-Calvinism ;) ...

  1. ^

    Calvin was opposed to them too, although he believed heretics should suffer the death penalty rather than merely being invited to read thousand-word blog posts and papers about how they were bad people.

  2. ^

    and be equally convinced that the e-accelerationists and Timnit and Emile were condemned to eternal damnation. 

I didn't downvote or disagreevote, but I'm not sure the logic of the rankings is well explained. I get the idea that concepts in the lowest tiers are supposed to be of more limited value, but I'm not sure why the very top tiers are literacy/mathematics - seems like literacy/mathematics by themselves almost never point to any particular conclusions, but are merely prerequisites to using some other method to reach a decision. Is the argument that few people would dispute that literacy and mathematics should play some role in making decisions, whereas the value of 'divine revelation' is hotly disputed and the validity of natural experiments debatable? That makes sense, but it feels like it needs more explanation.

E.g., most members of the Democratic party in the US would endorse "social safety nets, universal health care, equal opportunity education, respect for minorities" but would not self-identify as socialist

Many mainstream European politicians would though, whilst happily coexisting with capitalism. Treatment of "socialism" as an extremist concept which even people whose life mission is to expand social safety nets shy away from is US-exceptionalism; in the rest of the world it's a label embraced by a broad enough spectrum to include both Tony Blair and Pol Pot. So it's certainly of value to narrow that definition down a bit. :) 

It certainly reads better as satire than intellectual history. A valid criticism of the idea of "TESCREALISM" is that bundling together a long list of niche ideas just because they involve overlapping people hanging out on overlapping niche corners of the web (and in California) to debate related ideas about the future and their own cleverness doesn't actually make it a coherent *thing*, given that lots of the individual representatives of those groups have strong disagreements with the others and the average EA probably doesn't know what cosmism is.

On the other hand, it's difficult to take seriously the idea that secular intellectuals who find the Singularity and some of its loudest advocates a bit silly and some of the related ideas pushed a bit sus are covertly defending a particular side of a centuries old debate in Christian theology...

Feels like the argument you've constructed is a better one than the one Thiel is actually making, which seems to be a very standard "evil actors often claim to be working for the greater good" argument with a libertarian gloss. Thiel doesn't think redistribution is an obviously good idea that might backfire if it's treated as too important, he actively loathes it. 

I think trying too hard to do good things and ending up doing harm is absolutely a failure mode worth considering, but it has far more value in the context of specific examples. It seems like quite a common theme in AGI discourse (it follows from standard assumptions like AGI being near and potentially either incredibly beneficial or destructive, research or public awareness either potentially solving the problem or starting a race, etc) and the optimiser's curse is a huge concern for EA cause prioritization overindexing on particular data points. Maybe that deserves (even) more discussion.

But I don't think a guy who doubts we're on the verge of an AI singularity and couldn't care less whether EAs encourage people to make the wrong tradeoffs between malaria nets, education and shrimp welfare adds much to that debate, particularly not with a throwaway reference to EA in a list of philosophies popular with the other side of the political spectrum that he thinks are basically the sort of thing the Antichrist would say.

I mean, he is also committed to the somewhat less insane-sounding "growth is good even if it comes with risks" argument, but you can probably find more sympathetic, more coherent and less interest-conflicted proponents of that view.
