
David T

1256 karma

Comments (228)

Also, if you look at the current US administration and its priorities... they're certainly not Singaporean or particularly interested in x-risk mitigation.

Feels like the most straightforwardly rational argument for portfolio diversification is the assumption that your EV and probability estimates almost certainly aren't the accurate (or at least unbiased) estimators they need to be for the optimal strategy to be sticking everything on the highest-EV outcome. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser's curse, with a dose of wishful thinking thrown in).

Financiers don't trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality. Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event roughly means "I don't really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end"). You can assume that if such causes appear to be robustly positive and neglected they might deserve funding anyway, but that is a portfolio argument...
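To make the optimiser's curse point concrete, here's a minimal simulation sketch (all numbers invented for illustration: twenty hypothetical causes with identical true value, and noisy subjective EV estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

n_causes = 20     # hypothetical candidate causes, all equally good
true_value = 1.0  # true EV of every cause
noise_sd = 0.5    # sd of the error in our subjective EV estimates
n_trials = 10_000

overestimates = []
for _ in range(n_trials):
    # Noisy subjective estimates of each cause's EV
    estimates = true_value + rng.normal(0.0, noise_sd, n_causes)
    # "Stick everything on the highest-EV outcome" = pick the maximum
    overestimates.append(estimates.max() - true_value)

# The chosen cause looks ~0.9 better than it really is, purely because
# selecting on the estimate also selects on the estimate's error.
print(f"Mean overestimate of the chosen cause: {np.mean(overestimates):.2f}")
```

None of the causes is actually better than any other here; the apparent edge of the "winner" is entirely estimation error, which is exactly why high-scoring estimates deserve extra suspicion.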

Doesn't this depend on what you consider the "top tier areas for making AI go well" (which the post doesn't seem to define)? If that happens to be AI safety research institutes focused specifically on preventing "AI doom" via work you consider non-harmful, then naively I'd expect nearly all of the people in them to be aligned with the movement focused on that priority. Those are relatively small niches, the OP and their organisation and the wider EA movement are actively nudging people into them based on the EA assumption that they're the top tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won't be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on options.

If you define it as "areas which have the most influence on how AI is built" then those are more the people @titotal was talking about, and yeah, they don't seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.

And if you define "safety" more broadly, there are plenty of other AI research areas focusing on stuff like cultural bias or job market impact. But you and your organisation and 80,000 Hours probably don't consider them top tier for effectiveness, and (not coincidentally) I suspect these have very low proportions of EAs. The same goes for defence companies who've decided the "safest" approach to AI is to win the arms race. Similarly, it's no surprise that people who happen to be very concerned about morality and utilitarianism and doing the best they can with their 80k hours of working life, but who get their advice from Brutger, don't become AI researchers at all, despite the similarities of their moral views.

Got to agree with the AI "analysis" being pretty limited, even though it flatters me by describing my analysis as "rigorous".[1] It's not a positive sign that this news update and jobs listing is flagged as having particularly high "epistemic quality".

That said, I enjoyed the 'egregore' section's bits about the "ritualistic displays of humility", "elevating developers to a priesthood" and "compulsive need to model, quantify, and systematize everything, even with acknowledged high uncertainty and speculative inputs => illusion of rigor".[2] Gemini seems to have absorbed the standard critiques of EA and rationalism better than many humans, including humans writing criticisms and defences of those belief systems. It's also not wrong.

Its poetry is still Vogon-level though.

  1. ^

    For a start, I think most people reading our posts would conclude that Vasco and I disagree on far too much to be considered "intellectually aligned", even if we do it mostly politely by drilling down to the details of each other's arguments.

  2. ^

    OK, if my rigour is illusory, maybe that compliment is more backhanded than I thought :)

Fair. I agree with this.

Plenty of entities who aren't EAs are doing that sort of lobbying already anyway.

There are some good arguments that, in some cases, developing countries can benefit from protecting some of their own nascent industries.

There are basically no arguments that the developed world putting tariffs (or anti-dumping duties) on imports helps the developing world, which is the harmful scenario Karthik discusses in his article as an example of Nunn's argument that rich countries should stop doing things that harm poorer countries. Developed countries know full well these limit poorer countries' ability to export to them... but that's also why they impose them.

At face value that might seem to be the case. In practice, Reform is a party dominated by a single individual who enjoys promoting hunting and deregulation and criticising the idea of vegan diets: he's not exactly the obvious target for animal welfare arguments, particularly not when it's equally likely that a future coalition will include representatives of a Green Party.

The point in the original article about conservatives and country folk being potentially sympathetic to arguments for restrictions on importing meat from countries with lower animal welfare standards is a valid one, but it's the actual Conservative Party (who would be part of any coalition Reform needs to win power, and who have a yawning policy void of their own) that fits that bracket, not the upstart "anti-woke", pro-deregulation party whose core message is a howl of rage about immigration. Farage's objections to the EU were about its rules, not protectionism, and he's actually highly vocal on the need to reduce restrictions on importing meat from the US, which has much lower standards in many areas. Funnily enough, Farage's political parties have had positions on regulating the stunning of animals for slaughter, but the targeting of slaughtering practices associated with certain religions might have been for... other reasons, and Farage rowed back on it.[1]

  1. ^

    Halal meat served in the UK is often pre-stunned, whereas kosher meat isn't, so the culture war arguments for mandatory stunning hit the wrong target...

I thought I was reasonably clear in my post, but I will try again. As far as I understand, your argument is that the items in the tiers are heuristics people might use to determine how to make decisions, and the "tiers" represent how useful/trustworthy they are at doing that (with stuff in lower tiers like "folk wisdom" being not that useful, and stuff in higher tiers like RCTs being more useful).

But I don't really see "literacy" or "math", broadly construed, as methods for reaching any specific decision; they're simply things I might need in order to understand actual arguments (and for that matter I am convinced that people can use good heuristics whilst being functionally illiterate or innumerate). The only real reason I can think of for putting them at the top is that many people argue trusting (F-tier) folk wisdom is bad, there are some good arguments about not overindexing on (B-tier) RCTs, but there are few decent arguments on principle against (S-tier) reading or adding up, despite the fact that literacy helps genocidal grudges as well as scientific knowledge to spread. I agree with this, but I don't think it illustrates very much that can help me make better decisions as an individual: what really matters, if I'm using my literacy to help me make a decision, is what I read and which of the things I read I trust, much more than whether I can trust that I've parsed it correctly. Likewise, I think which thought experiments I'm influenced by is more important than the idea that thought experiments are (possibly) less trustworthy at helping me make decisions than a full-blown philosophical framework, or more trustworthy than folk wisdom.

FWIW I think the infographic was fine and would suggest reinstating it (I don't think the argument is clearer without it, and it's certainly harder for people to suggest methods you might have missed if you don't show methods you included!)

Your linkpost also strips most of the key parts from the article, which I suspect some of the downvoters missed.

But Gebru and Torres don't object to "the entire ideology of progress and technology" so much as accuse a certain [loosely defined] group of making nebulous fantasy arguments about progress and technology to support their own ends, suggest they're bypassing a load of lower-level debates about how actual progress and technology is distributed, and accuse them of being racist. It's a subset of the "TESCREALs" who want AI development stopped altogether, and I don't think they're subliminally influenced by ancient debates on divine purpose either.

It's something of an understatement to suggest that it's not just Catholics and Anglicans who oppose ideas they disagree with gaining too much power and influence,[1] and it would be even more tendentious to argue that secular TESCREALs' interest in shaping the future and consequentialism is aligned in any way with Calvinist predestination.

If Calvin were to encounter any part of the EA movement, he'd be far more scathing than Gebru and Torres or people writing essays about how utilitarianism is bunk.[2] Maybe TESCREALism is just anti-Calvinism ;) ...

  1. ^

    Calvin was opposed to them too, although he believed heretics should suffer the death penalty rather than merely being invited to read thousand-word blogs and papers about how they were bad people.

  2. ^

    and be equally convinced that the e-accelerationists and Timnit and Emile were condemned to eternal damnation. 

I didn't downvote or disagreevote, but I'm not sure the logic of the rankings is well explained. I get the idea that concepts in the lowest tiers are supposed to be of more limited value, but I'm not sure why the very top tiers are literacy/mathematics: by themselves they almost never point to any particular conclusion, but are merely prerequisites for using some other method to reach a decision. Is the argument that few people would dispute that literacy and mathematics should play some role in making decisions, whereas the value of 'divine revelation' is hotly disputed and the validity of natural experiments debatable? That makes sense, but it feels like it needs more explanation.
