
A common criticism of EA/rationalist discussion is that we reinvent the wheel - specifically, that concepts which become part of the community have close analogies that have been better studied in academic literature. Or in some cases, that we fixate on some particular academically sourced notion to the exclusion of many similar or competing theories.

I think we can simultaneously test and address this purported problem by crowdsourcing an open database mapping EA concepts to related academic concepts, and in particular citing papers that investigate the latter. In this thread I propose the following format:

  • 'Answers' name an EA or rat concept that you suspect might have, or know has, mappings to a broader set of academic literature.
  • Replies to answers cite at least one academic work (or a good Wikipedia article) describing a related phenomenon or concept. In some cases, an EA/rat concept might be an amalgam of multiple other concepts, so please give as many replies to answers as seem appropriate.
  • Feel free but not obliged to add context to replies (as long as they link a good source).
  • Feel free to reply to your own answer

I'll add any responses this thread gets to a commentable Google sheet (which I can keep updating), and share that sheet afterwards. Hopefully this will be a valuable resource both for fans of effective altruism to learn more about their areas of interest, and for critics who assert that EA/rat reinvents the wheel, letting them prove instances of their case (where an answer gets convincing replies) or refute them (where an answer gets no, or only loosely related, replies).

I'll seed the discussion with a handful of answers of my own, for most of which I have at best tentative mappings.

[ETA: I would ask people not to downvote answers to this thread. If the system I proposed is functioning accurately, then every answer is a service to the community, whether it ends up being mapped (and therefore validated as an instance of reinventing the wheel) or not mapped (and therefore refuted). If you think this is a bad system, then please downvote the top-level post, rather than disincentivising the people who are trying to make it work.]


13 Answers sorted by

1
Mo Putera
Not really, "coordination failure due to positional arms race" is better.
4
Arepo
I'm not sure I take a throwaway comment by someone closely socially tied to the author of the comment as evidence that it isn't equivalent.  Also it doesn't need to be literally equivalent to them. The criticism, if there is one, would be that Scott's concept doesn't add anything to the work done by academics - although that criticism would be false if it unified hitherto un-unified fields in a useful way.
1
Mo Putera
That's fair, no need to take it.  Stuart Armstrong (author of the OP in the link above) seems to think it was academically inspiring, cf. the passage starting with Not sure if that counts for you. (I'm not socially tied to Luke in any way. I had the same misconception as you a long time ago, remember reading that comment as clarifying, and thought you would appreciate the share.)

Moloch is just a fanciful term for coordination traps right?

2
Arepo
Do you have a citation for coordination traps specifically? Coordination games seem pretty closely related, but Googling for the former I find only casual/informal references to it being a game (possibly a coordination game specifically) with multiple equilibria, some worse than others, such that players might get trapped in a suboptimal equilibrium.
2
Karthik Tadepalli
I agree with Linch that the idea that "a game can have multiple equilibria that are Pareto-rankable" is trivial. Then the existence of multiple equilibria automatically means players can get trapped in a suboptimal equilibrium – after all, that's what an equilibrium is. What specific element of "coordination traps" goes beyond that core idea?
2
Linch
Not really; rationalist jargon is often more memetically fit than academic jargon so it's often hard for me to remember the original language even when I first learned something from non-rationalist sources. But there's a sense in which the core idea (Nash equilibria may not be Pareto efficient) is ~trivial, even if meditating on it gets you something deep/surprising eventually. I don't really think of presenting this as Moloch as "reinventing the wheel," more like seeing the same problem from a different angle, and hopefully a pedagogically better one. 
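The core idea Linch mentions (Nash equilibria need not be Pareto efficient) can be checked mechanically on a toy game. The sketch below uses a standard stag-hunt payoff matrix, not anything from this thread; the payoff numbers are illustrative assumptions:

```python
from itertools import product

# Stag hunt: each player chooses "stag" or "hare".
# Payoffs are (row player, column player); values are illustrative.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
actions = ["stag", "hare"]

def is_nash(profile):
    """A profile is a pure Nash equilibrium if neither player can
    gain by unilaterally deviating."""
    a, b = profile
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(d, b)][0] for d in actions)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, d)][1] for d in actions)
    return row_ok and col_ok

equilibria = [p for p in product(actions, actions) if is_nash(p)]
print(equilibria)  # both (stag, stag) and (hare, hare) are equilibria

# (stag, stag) Pareto-dominates (hare, hare): both players do strictly
# better, yet (hare, hare) is still self-enforcing -- the "trap".
```

This is the whole content of the "trivial" observation: multiple equilibria exist, they are Pareto-rankable, and the dominated one is still stable against unilateral deviation.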

This is a restatement of the law of iterated expectations. LIE says E[X] = E[E[X|Y]]. Replace X with an indicator variable for whether some hypothesis H is true, and interpret Y as an indicator for binary evidence about H. Then this immediately gives you conservation of expected evidence: if P(H|Y=1) > P(H), then P(H|Y=0) < P(H), since P(H) is an average of the two posteriors, so it must lie between them.

You could argue this is just an intuitive connection of the LIE to problems of decisionmaking, rather than a reinvention. But there's no acknowledgement of the LIE anywhere in the original post or comments. In fact, it's treated as a consequence of Bayesianism, when it follows from probability axioms. (Though one comment does point this out.)

To see it formulated in a context explicitly about beliefs, see Box 1 in these macroeconomics lecture notes.
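The identity is easy to verify numerically. In the sketch below the prior and likelihoods are made-up illustrative values, not anything from the linked post:

```python
# Conservation of expected evidence via the law of iterated expectations:
# P(H) = P(H|Y=1) P(Y=1) + P(H|Y=0) P(Y=0), so the prior is a weighted
# average of the two posteriors and must lie between them.

p_h = 0.3              # prior P(H) -- illustrative value
p_y_given_h = 0.9      # P(Y=1 | H) -- illustrative value
p_y_given_not_h = 0.2  # P(Y=1 | not H) -- illustrative value

p_y = p_y_given_h * p_h + p_y_given_not_h * (1 - p_h)

# Posteriors by Bayes' rule
post_if_y = p_y_given_h * p_h / p_y
post_if_not_y = (1 - p_y_given_h) * p_h / (1 - p_y)

# LIE: averaging the posteriors over P(Y) recovers the prior exactly
recovered_prior = post_if_y * p_y + post_if_not_y * (1 - p_y)
assert abs(recovered_prior - p_h) < 1e-12

# If seeing the evidence would raise your credence, not seeing it must lower it
assert post_if_y > p_h > post_if_not_y
```

Nothing Bayes-specific is doing the work here: the assertions hold for any choice of prior and likelihoods strictly between 0 and 1, because they follow from the probability axioms alone.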

2
Arepo
Thanks - agree or disagree with it, this is a really nice example of what I was hoping for.

Development Economics
One of the forum's highest-rated posts is about how we should simply improve economic growth in poor countries

I don't see how this is reinventing the wheel? The post makes many references to development economics (11 mentions to be precise). It was not an instance of independently developing something that ended up being close to development economics.

5
Henry Howard🔸
The post suggests that 4 person-years of “careful analysis” will find “promising funding opportunities in this space”. Development economics does that careful analysis already, why would we make breakthroughs reinventing it?
5
Erich_Grunewald 🔸
That still does not seem like reinventing the wheel to me. My read of that post is that it's not saying "EAs should do these analyses that have already been done, from scratch" but something closer to "EAs should pay more attention to strategies from development economics and identify specific, cost-effective funding opportunities there". Unless you think development economics is solved, there is presumably still work to be done, e.g., to evaluate and compare different opportunities. For example, GiveWell definitely engages with experts in global health, but still also needs to rigorously evaluate and compare different interventions and programs. And again, the article mentions development economics repeatedly and cites development economics texts -- why would someone mention a field, cite texts from a field, and then suggest reinventing it without giving any reason?
4
Linch
This job posting seems related.
1
Matrice Jacobine
My experience is that many global-poverty-focused EAs like to refer to their field as "global health and development", but the existing literature in institutional development economics has been mostly ignored in favor of constantly retreading the same old streetlight-illuminated ground of bednets and deworming. This may in part be because it might be problematic for EA Political Orthodoxy. @Ben Kuhn has made this point cogently here and here.
1
Ian Turner
I’m not sure I understood your point. What charities or programs do you think GiveWell should be funding, but aren’t, that are supported by “existing literature in institutional development economics”?
1
Matrice Jacobine
Institutional reforms to help LDCs escape the poverty trap.
1
Ian Turner
That doesn’t sound like a charity or charitable program to me?
1
Matrice Jacobine
That's what I mean by "constantly retreading the streetlight-illuminated ground". And lack of established charities hasn't stopped the longtermist wing (and to a certain extent the animalist wing) of EA before?
1
Ian Turner
Establishing a new charity is one thing, but I haven’t seen you propose a charitable program or intervention yet?
1
Matrice Jacobine
Why are the animalist and longtermist wings of EA the only wings that consider policy change an intervention?
1
Ian Turner
Is it possible we’re talking past each other? “Institutional reforms” isn’t something a donor can spend money on or donate to. But EA global health efforts are open to working on policy change; an example is the Lead Exposure Elimination Project. I still feel that you haven’t really answered the question: what do you think GiveWell should recommend, which they currently aren’t?
3
Matrice Jacobine
I don't know how to make it clearer. Longtermist nonprofits get to research world problems and their possible solutions without having to immediately show a randomized controlled trial following the ITN framework on policies that don't exist yet. Why is the same thing seemingly impossible for dealing with global poverty?
4
Jason
The academic fields most relevant to GH&D work are fairly mature. Because of that, it's reasonable for GH&D to focus less on producing stuff that is more like basic research / theory generation (academia is often strong in this and had a big head start) and devote its resources more toward setting up a tractable implementation of something (which is often not academia's comparative advantage for various reasons). GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable. You haven't identified any specific proposed area to study, but my suspicion is that most of them would require sustained political commitment over many years in the LDC and/or large cash infusions beyond the bankroll of EA GH&D to potentially work.
1
Matrice Jacobine
Again, that is exactly what I am calling "constantly retreading the streetlight-illuminated ground". I do not think most institutional development economists would endorse the idea that LDCs can escape the poverty trap through short-term health interventions alone.
4
Jason
I don't think most development economists would endorse the idea that a viable pathway exists for LDCs to escape the poverty trap based on ~$600-800MM/year in EA funding (even assuming you could concentrate all GH&D funding on a single project) and near-zero relevant political influence, either. And those are the resources that GH&D EA has on the table right now in my estimation. To fund something at even the early stages, one needs either the ability to execute any resulting project or the ability to persuade those who do. The type of projects you're implying are very likely to require boatloads of cash, widespread and painful-to-some changes in the LDCs, or both. Even conditioned on a consensus within development economics, I am skeptical that EA has that much ability to get Western foreign aid departments and LDC politicians to do what the development economists say they should be doing. 
1
Matrice Jacobine
Okay, so why is the faction of EA with ostensibly the most funds the one with "near-zero relevant political influence", while one of the animalist faction's top projects is creating an animalist movement in East Asia from scratch, and the longtermist faction has the president of RAND? Dividing influence that way seems like a choice in the first place.

Updateless decision theory and logical decision theory

As nicely discussed in this comment, the key ideas of UDT and LDT seem to have been predated by, respectively, "resolute choice" and Spohn's variant of CDT. (It's not entirely clear to me how UDT or LDT are formally specified, though, and in my experience people seem to equivocate between different senses of "UDT".)

I saw a lot of criticism of the EA approach to x-risks on the grounds that we're just reinventing the wheel, and that these already exist in government disaster preparedness and the insurance industry. I looked into the fields that we're supposedly reinventing, and they weren't the same at all, in that the scale of catastrophes previously investigated was far smaller, only up to regional things like natural disasters. No one in any position of authority had prepared a serious plan for what to do in any situation where human extinction was a possibility, even the ones the general public has heard of (nuclear winter, asteroids, climate change).

The extent to which you think they're the same is going to depend heavily on 

  1. your long term moral discounting rate (if it's high, then you're going to be equally concerned between highly destructive events that very likely won't kill everyone and comparably destructive events that might),
  2. your priors on specific events leading to human extinction (which, given the lack of data, will have a strong impact on your conclusion), and
  3. your change in credence of civilisation flourishing post-catastrophe.

Given the high uncertainty behind each of those consider...

2
Robi Rahman
I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone. But one of the key differences between EA/LT and these fields is that we're almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn't be very high. Under that assumption, the work done is indeed very different in what it accomplishes. I'm skeptical that the insurance industry isn't bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don't agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I'm not aware of, but I doubt it.)
2
Arepo
I happen to strongly agree that the moral discount rate should be 0, but a) it's still worth acknowledging that as an assumption, and b) I think it's easy for both sides to equivocate it with risk-based discounting. It seems like you're de facto doing so when you say 'Under that assumption, the work done is indeed very different in what it accomplishes' - this is only true if risk-based discounting is also very low. See e.g. Thorstad's Existential Risk Pessimism and the Time of Perils and Mistakes in the Moral Mathematics of Existential Risk for formalisms of why it might not be - I don't agree with his dismissal of a time of perils, but I do agree that the presumption that explicitly longtermist work is actually better for the long term than short-to-medium-term-focused work is based on little more than Pascalian handwaving.

I'm confused by your paragraph about insurance. To clarify:

  • I don't expect insurance companies to protect against either extinction catastrophes or collapse-of-civilisation catastrophes, since as you say such catastrophes are uninsurable.
  • I suspect they also don't protect against medium-damage-to-civilisation catastrophes for much the same reason - I don't think insurance has the capacity to handle more than very mild civilisational shocks.
  • I do think government organisations, NGOs and academics have done very important work in the context of reducing risks of civilisation-harming events.
  • I think that if you assign a high risk of a post-catastrophe civilisation struggling to flourish (as I do), these events look comparably bad from a long-term perspective as extinction once you also account for their greater likelihood. I suggested a framework for this analysis here and built some tools to implement it described here.

Of course you can disagree about the high risk to flourishing from non-existential catastrophes, but that's going to be a speculative argument about which people might reasonably differ. To my knowledge, no-one's made

Moral circle. There are so many frameworks from psychology on morality, empathy, etc. Maybe I am missing some nuance that makes the moral circle distinct from all of these, but to date I have not seen it.

The concept was coined by Singer, who is an EA, but he coined it in 1981 and it has been a term of mainstream moral philosophy for a while.

2
Benevolent_Rain
Ah that might explain it - it is coming from philosophy not psychology!

It would be helpful if you mentioned who the original inventor was.

I had the impression there was a field of '(global) resilience studies' I'd seen before, but on a first look at the moment can't find anything convincingly on point.

6
Gideon Futerman
Pretty sure EA basically invented that (yes people were working on stuff before then and outside of it, but still that seems different to 'reinventing the wheel')
5
Matrice Jacobine
Not only does longtermism predate progress studies, but the two have actively conflicting theoretical underpinnings and policy goals. See this article by @Garrison: longtermism can be traced to Bostrom and Hughes' founding of the Institute for Ethics and Emerging Technologies with the express purpose of steering the world transhumanist movement away from Silicon Valley libertarianism and in a social-democratic direction, by focusing on ethical and social concerns about emerging technologies instead of defending the development of emerging technologies as an unalloyed natural right.
4
Mo Putera
No? cf. this dialogue between Jason Crawford and Clara Collier, Max Daniel's post (and this thread with Jason), Jason's attempt to find the crux between PS and x-risk communities, etc  
3
Chris Leong
This is very different. I'd reference Wittgenstein's Family Resemblances instead.
Comments (7)

concepts which become part of the community have close analogies that have been better studied in academic literature

If they got into the community from the academic literature, this isn't reinventing the wheel, right? At worst it's rebranding the wheel, which feels like a different thing.

For example, is conservation of expected evidence an instance of reinventing the wheel, because this particular name for it is (as far as I know) a LessWrong innovation? I'm sure they (we?) didn't rediscover the theorem from basic principles.

I suppose you might still regard this as a point of criticism insofar as it creates jargon barriers, or insofar as you draw indirect (and IMO tenuous) inferences about a lack of collaboration with the mainstream (i.e. we can only get away with using different words because people who use the "normal" words don't talk to us). But I wouldn't want people drawing from this that LW is unfamiliar with mainstream probability theory.

I'm confused on how to interpret a disagreevote on a top-level "answer" which is supposed to merely list a concept. In contrast, one could disagreevote a reply to an answer because it doesn't map well to the top-level answer.

I interpreted disagreement to mean "EA is not reinventing the wheel with this concept".

I'd suggest going with a Wiki page rather than a Google Sheet. Google Sheets are more suited to the task, but almost inevitably become outdated at some point.

what wiki did you have in mind? wouldn't it also be at risk of becoming outdated?

EA or LW. Just less dependent on a single editor adding/approving changes.

thanks for making this post/question!
