
2022 update: This is now superseded by a new version of the same open thread.


(I have no association with the EA Forum team or CEA, and this idea comes with no official mandate. I'm open to suggestions of totally different ways of doing this.)

Update: Aaron here. This has our official mandate now, and I'm subscribed to the post so that I'll be notified of every comment. Please suggest tags!

2021 update: Michael here again. The EA Forum's tag system is now paired with the EA Wiki, and so proposals on this post are now for "entries", which can mean tags, EA Wiki articles, or (most often) pages that serve both roles.

The EA Forum now has tags, and users can make tags themselves. I think this is really cool, and I've made a bunch of tags myself.

But I find it hard to decide whether some tag ideas are worth including, vs being too fine-grained or too similar to existing tags. I also feel some hesitation about taking too much unilateral action. I imagine some other forum users might feel the same way about tag ideas they have, some of which might be really good! (See also this thread.)

So I propose that this post becomes a thread where people can comment with a tag idea they're somewhat unsure about, and then other people can upvote or downvote it based on whether they think it should indeed be its own tag. Details:

  • I am not saying you should always comment here before making a tag. I have neither the power nor the inclination to stop you just making tags you're fairly confident should exist!
  • I suggest having a low bar for commenting here, such as "this is just a thought that occurred to me" or "5% chance this tag should exist". It's often good to be open to raising all sorts of ideas when brainstorming, and apply most of the screening pressure after the ideas are raised.
    • The tag ideas I've commented about myself are all "just spitballing".
  • Feel free to also propose alternative tag labels, propose a rough tag description, note what other tags are related to this one, note what you see as the arguments for and against that tag, and/or list some posts that would be included in this tag. (But also feel free to simply suggest a tag label.)
  • Feel free to comment on other people's ideas to do any of the above things (propose alternative labels, etc.).
  • Make a separate comment for each tag idea.
  • Probably upvote or downvote just based on the tag idea itself; to address the extra ideas in the comment (e.g., the proposed description), leave a reply.
  • Maybe try not to hold back with the downvotes. People commenting here do so specifically because they want other people's honest input, and they never claimed their tag idea was definitely good, so a downvote isn't really disagreeing with them.

Also feel free to use this as a thread to discuss (and upvote or downvote suggestions regarding) existing tags that might not be worth having, or might be worth renaming or tweaking the scope of, or what-have-you. For example, I created the tag Political Polarisation, but I've also left a comment here about whether it should be changed or removed.

Comments (374)

Retreat or Retreats

I think there are a fair few EA Forum posts about why and how to run retreats (e.g., for community building, for remote orgs, or for increasing coordination among various orgs working in a given area). And I think there are a fair few people who'd find it useful to have these posts collected in one place.

8
Pablo
Makes sense; I'll create it. By the way, we should probably start a new thread for new Wiki entries. This one has so many comments that it takes a long time to load.
4
MichaelA
Thanks! And good idea - done

Quadratic voting or Uncommon voting methods or Approval voting or something like that or multiple of these

E.g., this post could get the first and/or second tag, and posts about CES could get the second and/or third tag

4
Pablo
Created. I may try to expand the description to also cover quadratic funding. (Both quadratic voting and quadratic funding are instances of quadratic payments, at least in Buterin's framing, so we could use the latter for the name of the entry. I used 'quadratic voting' because this is the name that people usually associate with the general idea.)  
6
Pablo
The content of the old EA Concepts page is now part of the cost-effectiveness entry. However, it may be worth creating a separate entry on distribution of cost-effectiveness and moving that content there. I'll do that tomorrow if no one objects by then.
6
Stefan_Schubert
Sorry, I hadn't seen that. I now added the "cost-effectiveness" tag to the first of these three articles, since that even has "cost-effectiveness" in the title. The other two articles are actually about differences in performance between people. Potentially that should have its own tag. But it's also possible that that is too small a topic to warrant that. I'd also be happy for an article on distribution of cost-effectiveness.

Thanks. I'll take a look at the articles later today. My sense is that discussion of variation in performance across people is mostly of interest insofar as it bears on the question of distribution of cost-effectiveness, so I'd be tempted to use the distribution of cost-effectiveness tag for those articles, rather than create a dedicated entry.

Alignment tax

Here I'm more interested in the Wiki entry than the tag, though the tag is probably also useful. Basically I primarily want a good go-to link that is solely focused on this and gives a clear definition and maybe some discussion.

This is probably an even better fit for LW or the Alignment Forum, but they don't seem to have it. We could make a version here anyway, and then we could copy it there or someone from those sites could.

Here are some posts that have relevant content, from a very quick search:

... (read more)
6
Pablo
Here's the entry. I was only able to read the transcript of Paul's talk and Rohin's summary of it, so feel free to add anything you think is missing.
4
Pablo
Thanks, Michael. This is a good idea; I will create the entry. (I just noticed you left other comments to which I didn't respond; I'll do so shortly.)

READI Research

https://www.readiresearch.org/ 

My guess is that this org/collective/group doesn't (yet) meet the EA Wiki's implicit notability or number-of-posts-that-would-be-tagged standards, but I'm not confident about that. 

Here are some posts that would be given this tag if the tag was worth making:

... (read more)

Tags for some local groups / university groups

I'd guess it would in theory be worth having tags for EA Cambridge and maybe some other uni/local groups like EA Oxford or Stanford EA. I have in mind groups that are especially "notable" in terms of level and impact of their activities and whether their activities are distinct/novel and potentially worth replicating. E.g., EA Cambridge's seminar programs seem to me like an innovation other groups should perhaps consider adopting a version of, and with more confidence they seem like a good example of a certain ... (read more)

Biosurveillance

A central pillar for biodefense against GCBRs and an increasingly feasible intervention with several EAs working on it and potentially cool projects emerging in the near future. Possibly too granular as a tag since there's not a high volume of biosecurity posts which would warrant the granular distinction. But perhaps valuable from a Wiki standpoint with a definition and a few references. I can create an entry, if the mods are okay with it.

Example posts:

... (read more)
4
Pablo
Hi Jasper, I agree that this would be a valuable Wiki article, and if you are willing to write it, that would be fantastic.

Megaprojects

Would want to have a decent definition. I feel like the term is currently being used in a slippery / under-defined / unnecessary-jargon way, but also that there's some value in it. 

Example posts: 

Related entries:

Constraints on effective altruism

Scalably using labour

ETA: Now created

Corporate governance

Example of a relevant post: https://forum.effectivealtruism.org/posts/5MZpxbJJ5pkEBpAAR/the-case-for-long-term-corporate-governance-of-ai

I've mostly thought about this in relation to AI governance, but I think it's also important for space governance and presumably various other EA issues. 

I haven't thought hard about whether this really warrants an entry, nor scanned for related entries - just throwing an idea out there.

Brain-computer interfaces

See also the LW wiki entry / tag, which should be linked to from the Forum entry if we make one: https://www.lesswrong.com/tag/brain-computer-interfaces

Relevant posts:

4
Pablo
Looks good. I've now created the entry and will add content/links later.

Time-money tradeoffs or Buying time or something like that

For posts like https://forum.effectivealtruism.org/posts/g86DhzTNQmzo3nhLE/what-are-your-favourite-ways-to-buy-time and maybe a bunch of other posts tagged Personal development

4
Pablo
Cool, I created the entry here. I may add some text soon.

Criticism of the EA community

For posts about what the EA community is like, as opposed to the core ideas of EA themselves. Currently, these posts get filed under Criticism of effective altruism even though it doesn't quite fit.

4
Eevee🔹
Update: I have created Criticism of the effective altruism community.
6
Aaron Gertler 🔸
Seems like a good idea! If we have three criticism tags covering "causes", "organizations", and "community", then having a general "criticism of EA" tag doesn't seem to make sense. The best alternative seems like "criticism of EA philosophy". If I don't hear objections from Pablo/Michael, I'll make that change in a week or so and re-tag relevant posts.
2
MichaelA
So the plan is to have 4 tags, covering community, causes, organizations, and philosophy? If so, that sounds good to me, I think. If the idea was to have just three (without philosophy), I'd have said it feels like there's something missing, e.g. for criticism of the ITN framework or ~impartial welfarism or the way EA uses expected value reasoning or whatever.

Arms race or Technology race or Arms/technology race or something like that

Related entries

AI governance | AI forecasting | armed conflict | existential risk | nuclear warfare | Russell-Einstein Manifesto

--

I think such an entry/tag would be at least somewhat attention hazardous, so I'm genuinely unsure whether it's worth creating it. Though I think it'd also have some benefits, the cat is somewhat out of the bag attention-hazard-wise (at least among EAs, who are presumably the main readers of this site), and LessWrong have apparently opted for such a tag (focu... (read more)

4
Pablo
Yes, I actually have a draft prepared, though it's focused on AI, just like the LW article. I'll try to finish it within the next couple of days and you can let me know when I publish it if you think we should expand it to cover other technological races (or have another article on that broader topic).

Survey or Surveys

For posts that: 

  1. discuss results from surveys,
  2. promote surveys, and/or
  3. discuss pros, cons, and best practices for using surveys in general and maybe for specific EA-relevant areas (e.g., how much can we learn about technology timelines from surveys on that topic? how best can we collect and interpret that info?).

I care more about the first and third of those things, but it seems like in practice the tag would be used for the second. I guess we could discourage that, but it doesn't seem important.

"Survey" seems more appropriate... (read more)

2
Pablo
Yeah, makes sense. There's some overlap with Data, but my sense is that having this other entry is still justified. I don't have a preference for plural vs. singular.
2
MichaelA
Ok, now created.

Diplomacy

Might overlap too much with things like international relations and international organizations?

Would partly be about diplomacy as a career path.

2
Pablo
Probably worth it, if there are enough relevant posts and/or if there's discussion here or elsewhere about diplomacy as a career path. 

Coaching or Coaching & therapy or something like that

Basically I think it'd be useful to have a way to collect all posts relevant to coaching and/or therapy as ways to increase people's lifetime impact - so as meta interventions/cause areas, rather than as candidates for the best way to directly improve global wellbeing (or whatever). So this would include things like Lynette Bye's work but exclude things like Canopie.

In my experience, it tends to make sense to think of coaching and therapy together in this context, as many people offer both services, ... (read more)

2
Pablo
Yes, makes a lot of sense. Not sure why we don't have such a tag already. Weak preference for coaching over coaching & therapy.
4
MichaelA
Ok, now created, with coaching as the name for now

Independent impressions or something like that

We already have Discussion norms and Epistemic deference, so I think there's probably no real need for this as a tag. But I think a wiki entry outlining the concept could be good. The content could be closely based on my post of the same name and/or the things linked to at the bottom of that post.

2
Stefan_Schubert
I agree that it would be good to describe this distinction in the Wiki. Possibly it could be part of the Epistemic deference entry, though I don't have a strong view on that.
2
Pablo
How about something like beliefs vs. impressions?
2
MichaelA
Yeah, that title/framing seems fine to me
3
Pablo
After reviewing the literature, I came to the view that Independent impressions, which you proposed, is probably a more appropriate name, so that's what I ended up using.

Management/mentoring, or just one of those terms, or People management, or something like that

This tag could be applied to many posts currently tagged Org strategy, Scalably using labour, Operations, research training programs, Constraints in effective altruism, WANBAM, and effective altruism hiring. But this topic seems sufficiently distinct from those topics and sufficiently important to warrant its own entry.

2
Pablo
Sounds good. I haven't reviewed the relevant posts, so I don't have a clear sense of whether "management" or "mentoring" is a better choice; the latter seems preferable other things equal, since "management" is quite a vague term, but this is only one consideration. In principle, I could see a case for having two separate entries, depending on how many relevant posts there are and how much they differ. I would suggest that you go ahead and do what makes most sense to you, since you seem to have already looked at this material and probably have better intuitions. Otherwise I can take a closer look myself in the coming days.
2
MichaelA
Ok, I've now made this, for now going with just one entry called Management & mentoring, but flagging on the Discussion page that that could be changed later. 

United Kingdom policy & politics (or something like that)

This would be akin to the entry/tag on United States politics. An example of a post it'd cover is https://forum.effectivealtruism.org/posts/yKoYqxYxo8ZnaFcwh/risks-from-the-uk-s-planned-increase-in-nuclear-warheads 

But I wrote on the United States politics entry's discussion page a few months ago:

I suggest changing the name and scope to "United States government and politics". E.g., I think there should be a place to put posts about what actions the US government plans to take or can take, h

... (read more)
6
Pablo
Yeah, makes sense. I just created the new article and renamed the existing one. There is no content for now, but I'll try to add something later.

We've now redirected almost all of EA Concepts to Wiki entries. A few of the remaining concepts (e.g. "beliefs") don't seem like good wiki entries here, so we won't touch them.

However, there are a couple of entries I think could be good tags, or good additions to existing tags:

  1. Charity recommendations
  2. Focus area recommendations

It seems good to have wiki entries that contain links to a bunch of lists of charity and/or focus area recommendations. Maybe these are worked into tags like "Donation Choice"/"Donation Writeup", or maybe they're separate.

(Wherever the... (read more)

4
Pablo
Charity evaluators, e.g. GiveWell and Animal Charity Evaluators, have Wiki entries with sections listing their current recommendations. One option is to make the charity recommendations entry a pointer to existing Wiki entries that include such sections. Alternatively, we could list the recommendations themselves in this new Wiki entry, perhaps organizing it as a table that shows, for each charity, which charity evaluators recommend it.
4
Pablo
Yeah, how about communities adjacent to effective altruism?
4
Stefan_Schubert
Sounds good! Thanks.
4
Pablo
I created a stub. As usual, feel free to revise or expand it.

Open society

The ideal of an open society - a society with high levels of democracy and openness - is related to many EA causes and policy goals. For example, open societies are associated with long-run economic growth, and an open society is conducive to the "long reflection." This tag could host discussion about the value of open societies, the meaning of openness, and how to protect and expand open societies.

4
Pablo
I agree that the concept of an open society as you characterize it has a clear connection to EA. My sense is that the term is commonly used to describe something more specific, closely linked to the ideas of Karl Popper and the foundations of George Soros (Popper's "disciple"), in which case the argument for adding a Wiki entry would weaken. Is my sense correct? I quickly checked the Wikipedia article, which broadly confirmed my impression, but I haven't done any other research.
2
Eevee🔹
Yeah, maybe something broader like "democracy" or "liberal democracy." Perhaps we could rename the "direct democracy" tag to "democracy"?
6
Aaron Gertler 🔸
The direct democracy tag is meant for investments in creating specific kinds of change through the democratic process. But people are using it for other things now anyway -- probably it's good to have a "ballot initiatives" tag and rename this tag to "democracy" or something else. Good catch!
6
Pablo
Here's what I did:
  • I renamed direct democracy to ballot initiative.
  • I added two new entries: democracy and safeguarding liberal democracy. The first covers any posts related to democracy, while the second covers specifically posts about safeguarding liberal democracy as a potentially high-impact intervention.
I still need to do some tagging and add content to the new entries.
2
Pablo
I agree. I'll deal with this tomorrow (Thursday), unless anyone wants to take care of it.
4
Stefan_Schubert
Yes, I think your sense is correct.
2
MichaelA
I do see this concept as relevant to various EA issues for the reasons you've described, and I think high-quality content covering "the value of open societies, the meaning of openness, and how to protect and expand open societies" would be valuable. But I can't immediately recall any Forum posts that do cover those topics explicitly. Do you know of posts that would warrant this tag? If there aren't yet posts that'd warrant this tag, then we have at least the following (not mutually exclusive) options:
  1. This tag could be made later, once there are such posts
  2. You could write a post on those topics yourself
  3. An entry on those topics could be made
    • It's ok to have entries that don't have tagged posts
    • But it might be a bit odd for someone other than Pablo to jump to making an entry on a topic as one of the first pieces of EA writing on that topic?
      • Since wikis are meant to do things more like distilling existing work.
      • But I'm not sure.
    • This is related to the question of to what extent we should avoid "original research" on the EA Wiki, in the way Wikipedia avoids it
      • See also
  4. Some other entry/tag could be made to cover similar ground

Career profiles (or maybe something like "job posts"?)

Basically, writeups of specific jobs people have, and how to get those jobs. Seems like a useful subset of the "Career Choice" tag to cover posts like "How I got an entry-level role in Congress", and all the posts that people will (hopefully) write in response to this.

2
EdoArad
What about posts that discuss personal career choice processes (like this)?
2
MichaelA
My personal, quick reaction is that that's a decently separate thing, that could have a separate tag if we feel that that's worthwhile. Some posts might get both tags, and some posts might get just one. But I haven't thought carefully about this. I also think I'd lean against having an entry for that purpose. It seems insufficiently distinct from the existing tags for career choice or community experiences, or from the intersection of the two.
2
MichaelA
Yeah, this seems worth having! And I appreciate you advocating for people to write these and for us to have a way to collect them, for similar reasons to those given in this earlier shortform of mine.

I think career profiles is a better term for this than job posts, partly because:
  • The latter sounds like it might be job ads or job postings
  • Some of these posts might not really be on "jobs" but rather things like being a semi-professional blogger, doing volunteering, having some formalised unpaid advisory role to some institution, etc.

OTOH, career profiles also sounds somewhat similar to 80k's career reviews. This could be good or bad, depending on whether it's important to distinguish what you have in mind from the career review format. (I don't have a stance on that, as I haven't read your post yet.)
2
MichaelA
Actually, having read your post, I now think it does sound more about jobs (or really "roles", but that sounds less clear) than about careers. So I now might suggest using the term job profiles. 
4
Aaron Gertler 🔸
Thanks, have created this. (The "Donation writeup" tag is singular, so I felt like this one should also be, but LMK if you think it should be plural.)
2
Pablo
Either looks good to me. I agree that this is worth having.

Update: I've now made this entry.

Requests for proposals or something like that

To cover posts like https://forum.effectivealtruism.org/posts/EEtTQkFKRwLniXkQm/open-philanthropy-is-seeking-proposals-for-outreach-projects 

This would be analogous to the Job listings tags, and sort of the inverse of the Funding requests tag.

This overlaps in some ways with Get involved and Requests (open), but seems like a sufficiently distinct thing that might be sufficiently useful to collect in one place that it's worth having a tag for this.

This could also be an entry t... (read more)

Update: I've now made this entry.

Semiconductors or Microchips or Integrated circuit or something like that

The main way this is relevant to EA is as a subset of AI governance / AI risk issues, which could push against having an entry just for this.

That said, my understanding is that a bunch of well-informed people see this as a fairly key variable for forecasting AI risks and intervening to reduce those risks, to the point where I'd say an entry seems warranted.

Update: I've now made this entry.

Consultancy (or maybe Consulting or Consultants or Consultancies)

Things this would cover:

... (read more)
6
Pablo
Yeah, I made a note to create an entry on this topic soon after Luke published his post. Feel free to create it, and I'll try to expand it next week (I'm a bit busy right now).

Update: I've now made this entry.

Alternative foods or resilient foods or something like that

A paragraph explaining what I mean (from Baum et al., 2016):

nuclear war, volcanic eruptions, and asteroid impact events can block sunlight, causing abrupt global cooling. In extreme but entirely possible cases, these events could make agriculture infeasible worldwide for several years, creating a food supply catastrophe of historic proportions. This paper describes alternative foods that use non-solar energy inputs as a solution for these catastrophes. For example,

... (read more)
4
Pablo
I'm in favor. Very weak preference for alternative foods until resilient foods becomes at least somewhat standard.

I now feel that a number of unresolved issues related to the Wiki ultimately derive from the fact that tags and encyclopedia articles should not both be created in accordance with the same criterion. Specifically, it seems to me that a topic that is suitable for a tag is sometimes too specific to be a suitable topic for an article.

I wonder if this problem could be solved, or at least reduced, by allowing article section headings to also serve as tags. I think this would probably be most helpful for articles that cover particular disciplines, such as psycho... (read more)

2
Aaron Gertler 🔸
These are reasonable concerns, but adding hundreds of additional tags and applying them across relevant posts seems like it will take a lot of time. As a way to save time and reduce the need for new tags, how many of your use cases do you think would be covered if multi-tag filtering was supported? That is, someone could search for posts with both the "psychology" and "career choice" tags and see posts about careers in psychology. This lets people create their own "fine-grained taxonomy" without so many tags needing to have a bunch of sub-tags.
2
MichaelA
I think something along these lines feels promising, but I feel a bit unsure precisely what you have in mind. In particular, how will users find all posts tagged with an article section heading tag? Would there still be a page for (say) social psychology like there is for psychology, and then it's just clear somehow that this page is a subsidiary tag of a larger tag?

Inspired by that question, I think maybe a more promising variant (or maybe it's what you already had in mind) is for some article section headings to be hyperlinked to a page whose title is the other page's section heading and whose contents is that section from the other page, below which are shown all the posts tagged with that section-heading tag. Then if a user edits the section or the "section's own page", the edit automatically occurs in the other place as well. And from "the section's own page" there's something at the top that makes it clear that this entry is a subsidiary entry of a larger entry, and people can click through to get back to the larger one. Maybe the "something at the top" would look vaguely like the headers of posts that are in sequences? Maybe then you could even, like with sequences, click an arrow to the right or left to go to the page corresponding to the previous or following section of the overarching entry?

Stepping back, this seems like just one example of a way we could move towards more explicitly having a nested hierarchy of entries where the different layers are in some ways linked together. I imagine there are other ways to do that too, though I haven't brainstormed any yet.

Meta: perhaps this entry should be renamed 'Propose and vote on potential entries' or 'Propose and vote on potential tags/Wiki articles'? We generally use the catch-all term 'entries' for what may be described as either a tag or a Wiki article.

2
MichaelA
Yeah, I considered that a few weeks ago but then (somewhat inexplicably) didn't bother doing it. Thanks for the prod - I have now done it :) 

I am considering turning a bunch of relevant lists into Wiki entries. Wikipedia allows for lists of this sort (see e.g. the list of utilitarians) and some (e.g. Julia Wise) have remarked that they find lists quite useful. The idea occurred to me after a friend suggested a few courses I may want to add to my list of effective altruism syllabi. It now seems to me that the Wiki might be a better place to collect this sort of information than some random blog. Thoughts?

2
MichaelA
Quick thoughts:
  • I think more lists/collections would be good
  • I think it's better if they're accessible via the Forum search function than if they're elsewhere
  • I think it's probably better if they're EA Wiki entries than EA Forum posts or shortforms, because that makes it easier for them to be collaboratively built up
    • And this seems more important for and appropriate to a list than an average post
    • Posts are often much more like a particular author's perspective, so editing beyond copyediting would typically be a bit odd (that said, a function for making suggestions could be cool - but that's tangential to the main topic here)
  • I don't think I see any other advantage of these lists being wiki entries rather than posts or shortforms
  • I think the only disadvantages of them being wiki entries are that we might then have too many random or messy lists that have an air of official-ness, or that the original list creator gets less credit for their contributions (their name isn't attached to the list)
    • But the former disadvantage can apply to entries in general, and so we already need sufficient policies, other editors, etc. to solve it, so it doesn't seem a big deal for lists specifically
    • And the latter disadvantage can also apply to entries in general, and so will hopefully be partially solved by things like edit counters, edit karma, "badges", or the like
  • So overall this seems worth doing

Less important:
  • Various "collections" on my own shortform might be worth making into such entries
    • Though I think actually most of them are better fits for the bibliography pages of existing entries
    • (And ~a month ago I added a link to those collections, or to all relevant items from the collections, to the associated entries that existed at the time)

Update: I've now made this entry

career advising or career advice or career coaching or something like that

We already have career choice. But that's very broad. It seems like it could be useful to have an entry with the more focused scope of things like:

  • How useful do various forms of career advising tend to be?
  • What are best practices for career advising?
  • What orgs work in that space?
    • E.g., 80k, Animal Advocacy Careers, Probably Good, presumably some others
  • How can one test fit for or build career capital in career advising?

This would be analogous to how we hav... (read more)

Charter cities or special economic zones or whatever the best catchall term for those things + seasteading is

From a quick search for "charter cities" on the Forum, I think there aren't many relevant posts, but there are:

... (read more)
4
Pablo
Yes, definitely. I already had some scattered notes on this. There's also the 80k podcast episode: Wiblin, Robert & Keiran Harris (2019) The team trying to end poverty by founding well-governed ‘charter’ cities, 80,000 Hours, March 31. An interview with Mark Lutter and Tamara Winter from the Charter Cities Institute.

Effective Altruism on Facebook and Effective Altruism on Twitter (and more - maybe Goodreads, Instagram, LinkedIn, etc). Alternatively Effective Altruism on Social Media, though I probably prefer tags/entries on particular platforms.

A few relevant articles:

https://forum.effectivealtruism.org/posts/8knJCrJwC7TbhkQbi/ea-twitter-job-bots-and-more

https://forum.effectivealtruism.org/posts/6aQtRkkq5CgYAYrsd/ea-twitterbot

https://forum.effectivealtruism.org/posts/mvLgZiPWo4JJrBAvW/longtermism-twitter

https://forum.effectivealtruism.org/posts/BtptBcXWmjZBfdo9n/ea-fa... (read more)

3
MichaelA
At first glance, I'd prefer to have Effective altruism on social media, or maybe actually just Social media, rather than the more fine-grained ones. (Also, I do think something in this vicinity is indeed worth having.) Reasoning:
  • I'm not sure if any of the specific platforms warrant an entry
  • If we have entries for the specific platforms, then what about posts relevant to effective altruism on some other platform?
    • We shouldn't just create an entry for every other platform there's at least one post relevant to, nor should we put them all under one of the other single-platform-focused tags.
    • But having an entry for Facebook, another for Twitter, and another for social media as a whole seems like too much?
  • Regarding dropping "Effective altruism on" and just saying "Social media":
    • Presumably there are also posts on things like the effects of social media, the future trajectory of it, or ways to use it for good or intervene in it that aren't just about writing about EA on it?
      • E.g., https://forum.effectivealtruism.org/posts/842uRXWoS76wxYG9C/incentivizing-forecasting-via-social-media
    • And it seems like it'd be good to capture those posts under the same entry?
    • Though maybe an entry for social media and an entry for effective altruism on social media are both warranted?

Though also note that there's already a tag for effective altruism in the media, which has substantial overlap with this. But I think that's probably ok - social media seems a sufficiently notable subset of "the media" to warrant its own entry.

(Btw, for the sake of interpreting the upvotes as evidence: I upvoted your comment, though as I noted I disagree a bit on the best name/scope.)
3
MichaelA
(Just wanted to send someone a link to a tag for Social media or something like that, then realised it doesn't exist yet, so I guess I'll bump this thread for a second opinion, and maybe create this in a few days if no one else does)
2
Pablo
I don't have accounts on social media and don't follow discussions happening there, so I defer to you and others with more familiarity.

Something like regulation

Intended to capture discussion of the Brussels effect, the California effect, and other ways regulation could be used for or affect things EAs care about.

Would overlap substantially with the entries on policy change and the European Union, as well as some other entries, but could perhaps be worth having anyway.

Update: I've now made this entry.

software engineering

Some relevant posts:

Related entries

artificial intelligence... (read more)

2
Pablo
Looks good to me.

Maybe we should have an entry for each discipline/field that's fairly relevant to EA and fairly well-represented on the Forum? Like how we already have history, economics, law, and psychology research. Some other disciplines/fields (or clusters of disciplines/fields) that could be added:

  • political science
  • humanities
    • I think humanities disciplines/fields tend to be somewhat less EA-relevant than e.g. economics, but it could be worth having one entry for this whole cluster of disciplines/fields
  • social science
    • But (unlike with humanities) it's probably better to h
... (read more)
2
Pablo
I'm overall in favor. I wonder if we should take a more systematic approach to entries about individual disciplines. It seems that, from an EA perspective, a discipline may be relevant in a number of distinct ways, e.g. because it is a discipline in which young EAs may want to pursue a career,  because conducting research in that discipline is of high value, because that discipline poses serious risks, or because findings in that discipline should inform EA thinking. I'm not sure how to translate this observation into something actionable for the Wiki, though, so I'm just registering it here in case others have thoughts along these lines.
2
MichaelA
Yeah, I do think it seems worth thinking a bit more about what the "inclusion criteria" for a discipline should be (from the perspective of making an EA Wiki entry about it), and that the different things you mention seem like starting points for that. Without clearer inclusion criteria, we could end up with a ridiculously large number of entries, or with entries that are unwarranted or too fine-grained, or with entries that are too coarse-grained, or with hesitation and failing to create worthwhile entries. I don't immediately have thoughts, but endorse the idea of someone generating thoughts :D
2
Stefan_Schubert
I agree that humanities disciplines tend to be less EA-relevant than the social sciences. But I think that the humanities are quite heterogeneous, so it feels more natural to me to have entries for particular humanities disciplines, than humanities as a whole. But I'm not sure any such entries are warranted; it depends on how much has been written.

Vetting constraints

Maybe this wouldn't add sufficient value to be worth having, given that we already have scalably using labour and talent vs. funding constraints.

4
Pablo
I think there should definitely be a place for discussing vetting constraints. My only uncertainty is whether this should be done in a separate article and, if so, whether talent vs. funding constraints should be split. Conditional on having an article on vetting constraints, it looks to me that we should also have articles on talent constraints and funding constraints. Alternatively, we could have a single article discussing all of these constraints.
2
MichaelA
I think I agree that we should either have three separate entries or one entry covering all three. I'm not sure which of those I lean towards, but maybe very weakly towards the latter?
4
MichaelA
Just discovered Vaidehi made a collection of discussions of constraints in EA, which could be helpful for populating whatever entries get created and maybe for deciding on scopes etc.

Mmh, upon looking at Vaidehi's list more closely, it now seems to me that we should have a single article: people have proposed various other constraints besides the three mentioned, and I don't think it would make sense to have separate articles for each of these, or to have an additional article for "other constraints". So I propose renaming talent vs. funding constraints to constraints in effective altruism. Thoughts?

6
MichaelA
I think that that probably makes sense.
4
Pablo
Done. (Though I used the name constraints on effective altruism, which seemed more accurate. I don't have strong views on whether the preposition should be 'in' or 'on', however, so feel free to change it.) The article should be substantially revised (it was imported from EA Concepts), I think, but at least its scope is now better defined.
4
Pablo
Great. Let's have three articles then. Feel free to split the existing one, otherwise I'll do that tomorrow. [I know you like this kind of framing. ;) ]
4
Stefan_Schubert
Vetting constraints dovetails nicely with talent vs. funding constraints. I'm not totally convinced by the scalably using labour entry, though. One possibility would be to just replace it by a vetting constraints entry. Alternatively, it could be retained but renamed/reconceptualised.
2
Pablo
Yeah, scalably using labor just doesn't strike me as a natural topic for a Wiki entry, though I'm not sure exactly why. Maybe it's because it looks like the topic was generated by considering an interesting question—"how should the EA community allocate its talent?"—and creating an entry around it, rather than by focusing on an existing field or concept. I'd be weakly in favor of merging it with vetting constraints.
2
MichaelA
I'm currently in favour of keeping scalably using labour, though I also made the entry so this shouldn't be much of an update (it's not like a "second vote", just a repeat of the first vote after hearing the new arguments).  One consideration I'd add is that maybe it's a more natural topic for a tag than a wiki entry? It seems to me like having a tag for posts relevant to a (sufficiently) interesting and recurring question makes sense?
4
Stefan_Schubert
Fwiw, I think that "scalably using labour" doesn't sound quite like a wiki entry. I find virtually no article titles including the term "using" on Wikipedia. If one wants to retain the concept, I think that "Large-scale use of labour" or something similar would be better. There are many Wikipedia article titles including the term "use of [noun]". (Potentially nouns are generally better than verbs in Wikipedia article titles? Not sure.)

Update: I've now made this entry

Charity evaluation or (probably less good) Charity evaluator

We already have entries on donation choice, intervention evaluation, and cause prioritisation. But charity evaluation is a major component of donation choice for which we lack an entry. This entry could also cover things about charity evaluation orgs like GiveWell, e.g. how useful a role they serve, what the best practices for them are, and whether there should be one for evaluating longtermist charities or AI charities or whatever.

Downside of this name: Really it m... (read more)

4
Pablo
I think this should clearly exist.

Update: I've now made this entry.

Effective altruism outreach in schools or High school outreach or something like that

Overlaps with https://forum.effectivealtruism.org/tag/effective-altruism-education , but that entry is broader, and it seems like now there's a decent amount of activity or discussion about high school outreach specifically. E.g.:

... (read more)
4
Pablo
I'm in favor.

Barriers to effective giving or Psychology of (in)effective giving or something like that

Bibliography

Why aren’t people donating more effectively? | Stefan Schubert | EA Global: San Francisco 2018

EA Efficacy and Community Norms with Stefan Schubert [see description for why this is relevant]

[Maybe some other Stefan Schubert stuff]

[Probably some stuff by Lucius Caviola, David Reinstein, and others]

Related entries

cognitive bias | cost-effectiveness | donation choice | diminishing returns | effective giving | market efficiency of philanthropy | rationality | sc... (read more)

Yeah, I think Psychology of effective giving is probably the best name. Stefan, Lucius and others have published a bunch of stuff on this, which would be good to cover in the article.

9
Pablo
This is one of many emerging areas of research at the intersection of psychology and effective altruism:
  • psychology of effective giving (Caviola et al. 2014; Caviola, Schubert & Nemirow 2020; Burum, Nowak & Hoffman 2020)
  • psychology of existential risk (Schubert, Caviola & Faber 2019)
  • psychology of speciesism (Caviola 2019; Caviola, Everett & Faber 2019; Caviola & Capraro 2020)
  • psychology of utilitarianism (Kahane et al. 2018; Everett & Kahane 2020)
I was thinking of covering all of this research in a general entry on the psychology of effective altruism, but we can also have separate articles for each.
4
MichaelA
I forgot that there was already an EA Psychology tag, so I've now just renamed that, added some content, and copied this comment of Pablo's on that Discussion page. (It could still make sense for someone to also create entries on those other topics and/or on moral psychology - I just haven't done so yet.)
2
Pablo
Great, thanks.
2
MichaelA
Apparently there's a new review article by Caviola, Schubert, and Greene called "The Psychology of (In)Effective Altruism", which pushes in favour of roughly that as the name. I also think that, as you suggest, that can indeed neatly cover "psychology of effective giving" (i.e., that seems a subset of "psychology of effective altruism"), and maybe "psychology of utilitarianism". But I'm less sure that that neatly covers the other things you list. I.e., the psychology of speciesism and existential risk are relevant to things other than how effective people will be in their altruism. But we can just decide later whether to also have separate entries for those, and if so I do think they should definitely be listed in the Related entries section of the "main entry" on this bundle of topics (and vice versa).

So I think I currently favour:
  • Having an entry called psychology of (in)effective altruism
    • With psychology of effective altruism as a second-to-top pick
  • Probably not currently having a separate entry for psychology of (in)effective giving
    • But if people think there's enough distinctive stuff to warrant an entry/tag for that, I'm definitely open to it
  • Maybe having separate entries for the other things you mention

Psychology of (in)effective altruism is adequate for a paper, where authors can use humor, puns, and other informal devices, but inappropriate for an encyclopedia, which should keep a formal tone.

(To elaborate:  by calling the field of study e.g. the 'psychology of effective giving' one is not confining attention only to the psychology of those who give particularly effectively: 'effective giving' is used to designate a dimension of variation, and the field studies the underlying psychology responsible for causing people to give with varying degrees of effectiveness, ranging from very effectively to very ineffectively. By analogy, the psychology of eating is meant to also study the psychology of people who do not eat, or who eat little. A paper about anorexia may be called "The psychology of (non-)eating", but that's just an informal way of drawing attention to its focus; it's not meant to describe a field of study called "The psychology of (non-)eating", and that's not an appropriate title for an encyclopedia article on such a topic.)

9
RyanCarey
Yeah, the ultra-pedantic+playful parenthetical is a very academic thing. "Psychology of effective altruism" seems to cover giving/x-risk/speciesism/career choice - i.e. it covers everything we want.
2
MichaelA
Given the fact you both say this and the upvotes on those comments, I think we should probably indeed go with "psychology of effective giving" rather than "psychology of (in)effective giving".[1]

I still don't think that actually totally covers psychology of speciesism, since speciesism is not just relevant in relation to altruism. Likewise, I wouldn't say the psychology of racism or of sexism are covered by the area "psychology of effective altruism". But I do think the entry on psychology of effective altruism should discuss speciesism and so on, and that if we later have an entry for psychology of speciesism they should link to each other.

[1] But FWIW:
  • I don't naturally interpret the "(in)" device as something like humour, a pun, or an informal device
  • I think "psychology of effective altruism" and "psychology of ineffective altruism" do call to mind two distinct focuses, even if I'd expect each thing to either cover (with less emphasis) or "talk to" work on the other thing
    • Somewhat analogously, areas of psychology that focus on what makes for an especially good life (e.g., humanistic psychology) are meaningfully distinct from those that focus on "dysfunction" (e.g., psychopathology), and I believe new terms were coined primarily to highlight that distinction
  • But I don't think this matters much, and I'm totally happy for "psychology of effective giving" to be used instead.
2
MichaelA
(Oh, just popping a thought here before I go to sleep: "moral psychology" is a relevant nearby thing. Possibly it'd be better to have that entry than "psychology of effective altruism"? Or to have both?)
4
Pablo
Thanks. Coincidentally this was published yesterday. But I haven't done any tagging yet.
4
MichaelA
Ah, nice. Maybe I searched for the entry shortly before it was published. I've now tagged those 3 posts I mentioned, but haven't checked and tagged other things that come up when you search "Our World in Data".
2
Pablo
There are lots of hits for 'EA updates'. The three results that I thought deserved to be tagged were precisely the ones you had already identified. I haven't looked at this exhaustively, though, so if you find other relevant articles, feel free to add the tag to those, too.

Intelligence assessment or Intelligence (military and strategy) or Intelligence agencies or Intelligence community or Intelligence or something

I don't really like any of those specific names. The first is what Wikipedia uses, but sounds 100% like it means IQ tests and similar. The second is my attempt to put a disambiguation in the name itself. The third and fourth are both too narrow, really - I'd want the entry to not just be about the agencies or community but also about the type of activity they undertake. The fifth is clearly even more ambiguous t... (read more)

(Edit: I've now made this entry.)

Independent research

Proposed text:

Independent research is research conducted by an individual who is not employed by any organisation or institution, or who is employed but is conducting this research separately from that employment. This person may or may not have funding for this research (e.g., via grants). Research that is done by two or more people collaborating, but still separate from an organisation or institution, could arguably be considered independent research.

There are various advantages and disadvantages of independent r

... (read more)
2
MichaelA
Some text from the latest LTFF report that could be drawn on when discussing advantages and disadvantages within this entry:
4
Pablo
Looks good, thanks!

Edit: I've now made this entry.

Longtermist Entrepreneurship Fellowship

I think this is only mentioned in three Forum posts so far[1], and I'm not sure how many (if any) would be added in future. 

It's also mentioned in this short Open Phil page: https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-jade-leung

I'm also not sure if the name is fully settled - different links seem to use different names, or to not even use a capitalised name.

[1] https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-gra... (read more)

4
Pablo
I'm in favor, though there's so little public information at this stage that inevitably the entry won't have any substantive content for the time being.
4
Pablo
Looks good.
4
MichaelA
Cool - given that, I've now made this (though without adding body text or tagging things, for time reasons). 

Some orgs it might be worth making entries about:

2
Pablo
Thanks, I'm in the process of compiling a master list of EA orgs and creating entries for the missing ones. Would you be interested in looking at the spreadsheet?
2
MichaelA
Yeah, I'll send you a DM

David Pearce (the tag will be removed if others think it’s not warranted)

Arguments against:

  • One may see David Pearce as much more related to transhumanism (even if to the most altruistic “school” of transhumanism) than to EA (see e.g. Pablo’s comment).
  • Some of Pearce’s ideas go against certain established notions in EA: e.g. he thinks that sentience of classical digital computers is impossible under the known laws of physics, that minimising suffering should take priority over increasing happiness of the already well-off, that environmental interventions alone,
... (read more)
4
Michael Huang
To add to arguments for inclusion, here’s an excerpt from an EA Forum post about key figures in the animal suffering focus area. David Pearce’s work on suffering and biotechnology would be more relevant now than in 2013 due to developments in genome editing and gene drives.
6
nil
For those who may want to see the deleted entry, I'm posting it below:
3
Aaron Gertler 🔸
As the head of the Forum, I'll second Pablo in thanking you for creating the entry. While I defer to Pablo on deciding what articles belong in the wiki, I thought Pearce was a reasonable candidate. I appreciate the time you took to write out your reasoning (and to acknowledge arguments against including him).
1
nil
Thank you for appreciating the contribution. Since Pablo is trusted w/ deciding on the issue, I will address my questions about the decision directly to him in this thread.
5
Pablo
Thanks again, nil, for taking the time to create this entry and outline your reasoning. After reviewing the discussion, and seeing that no new comments have been posted in the past five days, I've decided to delete the article, for the reasons I outlined previously. Please do not let this dissuade you from posting further content to the Wiki, and if you have any feedback, feel free to leave it below or to message me privately.
0
nil
I'm sorry to hear this, Pablo, as I haven't been convinced that Pearce isn't relevant enough for effective altruism.

Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce (that is not necessarily to say that their entries aren’t warranted!). May it be correct to infer that at least some of these entries received less scrutiny than Pearce’s nomination?
  • Dylan Matthews
  • David Chalmers
And perhaps:
  • Demis Hassabis
  • K. Eric Drexler

May I ask why five days since the last comment were deemed enough for proceeding to the deletion? Is this part of the wiki’s rules? (If so, it must be my fault that I didn't have time to reply in time.)

I also wanted to say that despite the disagreement, I appreciate that the wiki has a team committed to it.

>Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce

I tried to outline some criteria in an earlier comment. Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance".  Matthews doesn't fall under any of the categories listed, though he strikes me as someone worth including given his leading role at Future Perfect—the only explicitly EA project in mainstream journalism—and his long-standing involvement with the EA movement.

As the example of Matthews shows, the categories I identified aren't exhaustive. That was just my attempt to retroactively make sense of the tacit criterion I had followed in selecting these particular people. Despite still not having a super clear sense of the underlying categories, I felt reasonably confident that Pearce didn't qualify because (1) it seemed that there was no other potential category he could fall under besides that of "EA core figure" and (2) ... (read more)

5
nil
First, I want to make it clear that I’m not suggesting that any of the persons I listed in my previous comment should be removed from the wiki. I just disagree that not including Pearce is justified.

Again, I honestly don’t think that it is true that Chalmers and Drexler are “connected to EA to a significant degree” while Pearce isn’t. Especially Chalmers: from what I know, he isn’t engaged w/ effective altruism, besides once agreeing to be interviewed on the 80,000 Hours podcast.

As for the “attained eminence in their fields” condition, I do see that it may be harder to resolve for Pearce’s case, since he isn’t an academic but rather an independent philosopher, writer, and advocate. But if we take Pearce’s field to be suffering abolitionism, then the “attained eminence in their fields” condition does hold, in my view: he both is the founder of the “abolitionist project” and has written extensively on the why’s and how’s of the project. Also, as I mentioned in the original comment proposing the entry, Pearce’s work has inspired many EAs, including Brian Tomasik, the Qualia Research Institute’s Andrés Gómez Emilsson, and the Center for Reducing Suffering’s Magnus Vinding, as well as the nascent field of welfare/compassionate biology. The Invincible Wellbeing research group has been inspired by Pearce’s work as well.

I don’t have any new arguments to make, and I don’t expect anyone involved to change their minds anyway. I only hope it may be worth the time of others to contribute their perspectives on the dispute. And as Michael suggests above, it may be more productive at this point to consider how many entries on EA-relevant persons are desirable in the first place.

Best regards,
nil
4
MichaelA
[Just responding to one specific thing, which isn't central to what you're saying anyway. No need to respond to this.] For what it's worth, I think I agree with you re Chalmers (I think Pearce may be more connected to EA than Chalmers is), but not Drexler. E.g., Drexler has worked at FHI for a while, and the FHI office is also shared by GovAI (part of FHI, but worth listing separately), GPI, CEA, and I think Forethought. So that's pretty EA-y. Plus he originated some ideas that are quite important for a lot of EAs, e.g. related to nanotech, CAIS, and Paretotopia. (I'm writing quickly and thus leaning on acronyms and jargon, sorry.)
3
nil
I should have been more clear about Drexler: I don't dispute that he is “connected to EA to a significant degree”. But so is Pearce, in my view, for the reasons outlined in this thread.

(I think it's weird and probably bad that this comment of nil's has negative karma. nil is just clarifying what they were saying, and what they're saying is within the realm of reason, and this was said politely.)

8
Pablo
Hey nil,

Chalmers was involved with EA in various ways over the years, e.g. by publishing a paper on the intelligence explosion and then discussing it at one of the Singularity Summits, briefly participating in LessWrong discussions, writing about mind uploading, interacting (I believe) with Luke Muehlhauser and Buck Shlegeris about their illusionist account of consciousness, etc.

In any case, I agree with you (and Michael) that it may be more productive to consider the underlying reasons for restricting the number of entries on individual people. I generally favor an inclusionist stance, and the main reason for taking an exclusionist line with entries for individuals is that I fear things will get out of control if we adopt a more relaxed approach. I'm happy, for instance, with having entries for basically any proposed organization, as long as there is some reasonable link to EA, but it would look kind of weird if we allowed any EA to have their own entry.

An alternative is to take an intermediate position where we require a certain degree of notability, but the bar is set lower, so as to include people like Pearce, de Grey, and others. We could, for instance, automatically accept anyone who already has their own Wikipedia entry, as long as they have a meaningful connection to EA (of roughly the same strength as we currently demand for EA orgs). Pearce would definitely meet this bar.

How do others feel about this proposal?
0
nil
Perhaps voting on cases where there is a disagreement could achieve a wider inclusiveness or at least less controversy? Voters would be e.g. the moderators (w/ an option to abstain) and several persons who are familiar w/ the work of a proposed person. It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.
5
Pablo
Hi nil, I've edited the FAQ to make our inclusion criteria more explicit.
1
nil
Thanks, Pablo. The criteria will help to avoid some long disputes in future (and thus save time for more important things), although they wouldn't have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)

I think discussion will probably usually be sufficient. Using upvotes and downvotes as info seems useful, but probably not letting them be decisive. 

It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.

This might just be a case where written communication on the internet makes the tone seem off, but "hidden" sounds to me unfair and harsh. That seems to imply Pablo already knew what the inclusion criteria should be, and was set on them, but deliberately withheld them. This seems extremely unlikely. 

I think it's more like the wiki is only a few months old, and there's (I think) only one person paid to put substantial time into it, so we're still figuring out a lot of policies as we go - I think Pablo just had fuzzier ideas, and then was prompted by this conversation to make them more explicit, and then was still clearly open to feedback on those criteria themselves (rather than them already being set).

I do agree that it will help now that we have possible inclusion criteria written up, and it would be even better to have them shown more prominently somewhere (though with it still being clear that they're tentative and open to revision). Maybe this is all you meant?

7
nil
I didn't mean to sound harsh. Thanks for pointing this out: it now seems obvious to me that that part sounds uncharitable. I do apologise, belatedly :( What I meant is that currently these new, evolving inclusion criteria are difficult to find. And if they are used in dispute resolutions (from this case onwards), perhaps they should be referenced for contributors as part of the introduction text, for example.
4
Pablo
Thanks for the feedback. I have made a note to update the Wiki FAQ, or if necessary create a new document. Feel free to ping me if you don't see any updates within the next week or so. 

I personally feel that the proposal would allow for the inclusion of a number of people (not Pearce) who intuitively should not have their own Wiki entry, so I'm somewhat reluctant to adopt it. More generally, an advantage of taking a more exclusionist approach for individuals is that the class of borderline cases is narrower, and so, therefore, is the expected number of discussions about whether a particular person should or should not be included. Other things equal, I would prefer to have few of these discussions, since it can be tricky to explicitly address whether someone deserves an entry (and the unpleasantness associated with having to justify an exclusionist position specifically, which may be perceived as expressing a negative opinion of the person whose entry is being considered, may unduly bias the discussion in an inclusionist direction).

FWIW, I agree that Hassabis and Drexler meet your proposed criteria and warrant entries, and that Chalmers and Caplan probably do (along with Hanson and Beckstead). But Matthews does seem roughly on par with Pearce to me. (Though I don't know that much about either of their work.) 

I also agree that Pearce seems to be a similar case to de Grey, so we might apply a similar principle to both.

Maybe it'd be useful to try switching briefly from the discussion of specific entries and criteria to instead consider: What are the pros and cons of having more or many more entries (and especially entries on people)? And roughly how many entries on people do we ultimately want? This would be similar to the inclusionism debate on Wikipedia, I believe. If we have reason to want to avoid going beyond like 50 or 100 or 200 or whatever entries on people, or we have reason to be quite careful about adding less prominent or central people to the wiki, or if we don't, then that could inform how high a "bar" we set.

Michael is correct that the inclusion criteria for entries on individual people haven't been made explicit. In deciding whether a person was a fit subject for an article, I haven't followed any conscious procedure, but merely relied on my subjective sense of whether the person deserved a dedicated article. Looking at the list of people I ended up including, a few clusters emerge:

  1. people who have had an extraordinary positive impact, and that are often discussed in EA circles (Arkhipov, Zhdanov, etc.)
  2. people who have attained eminence in their fields and who are connected to EA to a significant degree (Pinker, Hassabis, Boeree, etc.)
  3. academics who have conducted research of clear EA relevance (Ng, Duflo, Parfit, Tetlock, etc.)
  4. historical figures that may be regarded as proto-EAs or that are seen as having inspired the EA movement (Bentham, Mill, Russell, etc.)
  5. "core figures" in the EA community (Shulman, Christiano, Tomasik, etc.)

Some people, such as Bostrom, MacAskill, and Ord, fit into more than one of these clusters. My sense is that David Pearce doesn't fit into any of the clusters. It seems relatively uncontroversial that he doesn't fit into clusters 1-4, so the relevant question—at least i... (read more)

2
MichaelA
FWIW, I think your comment is already a good step! I think I broadly agree that those people who fit into at least one of those clusters should typically have entries, and those who don't shouldn't. And this already makes me feel more of a sense of clarity about this.

I still think substantial fuzziness remains. This is mostly just because words like "eminence" could be applied more or less strictly. I think that that's hard to avoid and maybe not necessary to avoid - people will probably generally agree, and then we can politely squabble about the borderline cases and thereby get a clearer sense of what we collectively think the "line" is.

But I think "people who have had an extraordinary positive impact, and that are often discussed in EA circles (Arkhipov, Zhdanov, etc.)" may require further operationalisation, since what counts as extraordinary positive impact can differ a lot based on one's empirical, moral, epistemological, etc. views. E.g., I suspect that nil might think Pearce has been more impactful than most people who do have an entry, since Pearce's impacts are more targeted at suffering reduction. (nil can of course correct me if I'm wrong about their views.)

So maybe we should say something like "people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)"? (That leaves the fuzziness of "significant fraction", but it seems a step in the right direction by not just relying on a given individual's view of who has been extraordinarily impactful.)

Then, turning back to the original example, there's the question: Would a significant fraction of EAs see Pearce as having had an extraordinary positive impact? I think I'd lean towards "no", though I'm unsure, both because I don't have a survey and because of the vagueness of the term "significant fraction".

I think there's a relatively clear sense in which Arkhipov, Borlaug, and similar figures (e.g. winners of the Future of Life Award, names included in Scientists Greater than Einstein, and related characters profiled in Doing Good Better or the 80,000 Hours blog)  count as having had an extraordinary positive impact and Pearce does not, namely, the sense in which also Ord, MacAskill, Tomasik, etc. don't count. I think it's probably unnecessary to try to specify in great detail what the criterion is, but the core element seems to be that the former are all examples of do-gooding that is extraordinary from both an EA and a common-sense perspective, whereas if you wanted to claim that e.g. Shulman or Christiano are among humanity's greatest benefactors, you'd probably need to make some arguments that a typical person would not find very persuasive. (The arguments for that conclusion would also likely be very brittle and fail to persuade most EAs, but that doesn't seem to be so central.)

So I think it really boils down to the question of how core a figure Pearce is in the EA movement, and as noted, my impression is that he just isn't a core enough figure. I say this, incidentally, as someone who admires him greatly and who has been profoundly influenced by his writings (some of which I translated into Spanish a long time ago), although I have also developed serious reservations about various aspects of his work over the years.

2
MichaelA
  1. If you mean that the vast majority of EAs would agree that Arkhipov, Borlaug, Zhdanov, and similar figures count as having had an extraordinary positive impact, or that that's the only reasonable position one could hold, I disagree, for reasons I'll discuss below.
  2. But if you just mean that a significant fraction of EAs would agree that those figures count as having had an extraordinary impact, I agree. And, as noted in my previous comment, I think that using a phrasing like "people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)" would probably work.
    1. And that phrasing also seems fine if I'm wrong about (1), so maybe there's no real need to debate (1)?
    2. (Relatedly, I also do ultimately agree that Arkhipov etc. should have entries.)

Expanding on (1):

  • This is mostly due to crucial considerations that could change the sign or (relative) magnitude of the moral value of the near-term effects that these people are often seen as having had. For example:
    • It's not obvious that a US-Russia nuclear war during the Cold War would've caused a negative long-term future trajectory change.
      • I expect it would, and, for related reasons, am currently focused on nuclear risk research myself.
      • But I think one could reasonably argue that the case for this view is brittle and the case for e.g. the extraordinary positive impact of some people focused on AI is stronger (conditioning on strong longtermism).
    • Some EAs think extinction risk reduction is or plausibly is net negative.
    • Some EAs think population growth is or plausibly is net negative, e.g. for reasons related to the meat-eater problem or to differential progress.
    • It's plausible that expected moral impact is dominated by effects on the long-term future, farm animals, wild animals, invertebrates, or similar, in which case it may be both less clear that e.g. Borlaug and Zhdanov had a
4
MichaelA
I'm roughly neutral on this, since I don't have a very clear sense of what the criteria and "bars" are for deciding whether to make an entry about a given person. I think it would be good to have a discussion/policy regarding that.

I think some people like Nick Bostrom and Will MacAskill clearly warrant an entry, and some people like me clearly don't, and there's a big space in between - with Pearce included in it - where I could be convinced either way. (This has to do with relevance and notability in the context of the EA Forum Wiki, not like an overall judgement of these people or a popularity contest.)

Some other people who are perhaps in that ambiguous space:

  • Nick Beckstead (no entry atm)
  • Elie Hassenfeld (no entry atm, but an entry for GiveWell)
  • Max Tegmark (no entry atm, but an entry for FLI)
  • Brian Tomasik (has an entry)
  • Stuart Russell (has an entry)
  • Hilary Greaves (has an entry)

(I think I'd lean towards each of them having an entry except Hassenfeld and maybe Tegmark. I think the reason for The Hassenfeld Exception is that, as far as I'm aware, the vast majority of his work has been very connected with GiveWell. So it's very important and notable, but doesn't need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he's of course notable in the physics community for non-FLI-related reasons. But I'm very tentative with all those views.)
1
nil
This makes sense to me, although someone more familiar w/ their work may find their exclusion unwarranted. Thanks for clarifying!

In this light I still think an entry for Pearce is justified, to the degree that scientifically grounded proposals for abolishing suffering are an EA topic (and this is the main theme of Pearce's work). But I'm just one input of course.

Regarding Tomasik, we have different intuitions here: if an entry for Tomasik may not be justified, then I would say this sets a high bar which only the original EA founders could reach. (For Tomasik himself is a founder of an EA charity - the Foundational Research Institute / Center on Long-Term Risk - has written extensively on many topics highly relevant to EA, and is an advisor at the Center for Reducing Suffering, another EA org.) Anyway, this difference probably doesn't matter in practice, since you added that you lean towards Tomasik having an entry.
9
Pablo
I agree with you that a Tomasik entry is clearly warranted. I would say that his entry is as justified as one on Ord or MacAskill; he is one of half a dozen or so people who have made the most important contributions to EA, in my opinion. I will respond to your main comment later, or tomorrow.
5
MichaelA
As noted, I do lean towards Tomasik having an entry, but "co-founder of an EA org"  + "written extensively on many topics highly relevant to EA" + "is an advisor for another EA org", or 1 or 2 of those things plus 1 or 2 similar things, includes a fair few people, including probably like 5 people I know personally and who probably shouldn't have their own entries.  I do think Tomasik has been especially prolific and his writings especially well-regarded and influential, which is a big part of why I lean towards an entry for him, but the criteria and cut offs do seem fuzzy at this stage. 

Maybe we should have a tag for each individual EA Fund, in addition to the existing Effective Altruism Funds tag? The latter could then be for posts relevant to EA Funds as a whole.

There are now 60 posts with the Effective Altruism Funds tag, and many readers may only be interested in posts relevant to one or two of the funds.

4
Pablo
Yes, good idea. Feel free to create them, otherwise I'll do it myself later today or tomorrow.

It might be worth going through the Effective Altruism Hub's resource collections and the old attempts to build EA Wikis (e.g., the Cause Prioritization wiki), to:

  • See if that inspires useful new entries/tags
    • E.g., they might cover some topic that we then realise is worth having an entry for
  • Find resources that can be given a relevant tag, or listed in Bibliography / Further reading / External links sections

I assume some of this has been done already, but someone doing it thoroughly seems worthwhile.

5
Catherine Low🔸
Thanks Michael! I manage the EA Hub Resources, but much of the content has been slowly getting outdated. I think the best action will be to incorporate the content in the Learn and Take Action sections of the EA Hub Resources into the EA Forum wiki, and redirect Hub visitors to the wiki. I'm unlikely to have the time to do this soon, so I would be delighted if someone else was keen to take this on. If you are, get in touch through the forum's private messaging and I can assist + set up redirects when ready!

The rest of the resources are designed for EA group organisers, and my current plan is to keep these outside of the wiki (but I'm happy for folks to try to change my mind!). I plan to move this content onto a new website in the next few months, as the EA Hub team have decided to narrow their focus to the community directories.
2
Pablo
I did this systematically for all the relevant wikis I was aware of, back when I started working on this project in mid 2020. Of course, it's likely that I have missed some relevant entries or references.
2
MichaelA
Ah, nice. What about for the EA Hub stuff? E.g., they've got a bunch of stuff on how to talk about EA, running EA-related events, and movement-building. And also curated collections for cause areas. And I don't think I've seen those things linked to from tag pages?
4
Pablo
I actually wasn't aware of their resources section (EA Hub has changed a lot over the years and I haven't stayed abreast of the latest changes). They used to have a wiki, which I did review, though some pages were not indexed by the Internet Archive. I wonder if they have migrated their old wiki content to the new resources page. In any case, I've made a note to investigate this further.
5
Catherine Low🔸
Hey Pablo!  You are right that the wiki is long dead. The current resources section was written independently from the wiki. As I just commented up the thread, with the new EA Forum wiki (which is wonderful!), I think the content on the EA Hub intended for all EAs should be merged into the wiki, and then I can retire those pages and set up redirects. More than happy to chat more about this!  
2
Pablo
Thanks for your message! Can you email me at stafforini.com preceded by MyName@, or share an email address where I can reach you? (EDIT: We have now contacted each other.)
2
MichaelA
Great that you two have connected! In the other thread, Catherine says:

Yeah, I don't think the EA Forum Wiki needs to eat everything else - other options include:

  • Just include a link in Further reading or Bibliography to the external collection of resources
    • See e.g. the link to my own collection of resources from here
  • Look through the collection, give the appropriate tag to the Forum posts that are in that collection, and maybe include links to some other specific things in the Further reading or Bibliography section
2
MichaelA
Sounds good!

Academia or something like that

This could cover things like how (in)efficient academia is, what influences it has had and could have, the best ways to leverage or direct academia, whether people should go into academic or academia-related careers, etc.

E.g., Open Phil's post(s) on field-building and this post on How to PhD.

Related entries

field-building | meta-science | research methods | research training programs | scientific progress

---

It's possible that this is made redundant by other tags we already have? 

And my current suggested name and scope are... (read more)

4
Pablo
I think this would be a valuable article. Perhaps the title could be refined, but at the moment I can't think of any alternatives I like. So feel free to create it, and we can consider possible name variants in the future.
4
MichaelA
Ok, done!

Mind uploads, or Whole brain emulation, or maybe Digital minds

I think that:

  • These concepts overlap somewhat with artificial sentience
  • But these concepts (or at least mind uploads and WBE) are also meaningfully distinct from artificial sentience

But I could be wrong about either of those things.

Further reading

Age of Em

https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf

Related entries

artificial sentience | consciousness research | intelligence and neuroscience | long-term future | moral patienthood | non-humans and the long-term future | number of futur... (read more)

4
Pablo
Definitely. I already was planning to have an entry on whole brain emulation and have some notes on it... wait, I now see the tag already exists. Mmh, it seems we missed it because it was "wiki only". Anyway, I've removed the restriction now. Feel free to paste the 'further reading' and 'related entries' sections (otherwise I'll do it myself; I just didn't want to take credit for your work).
4
MichaelA
Cool, I've now added those related entries and the "roadmap" report (Age of Em was already cited). 

Non-longtermist arguments for GCR reduction, or Non-longtermist arguments for prioritising x-risks, or similar but with "reasons" instead of arguments, or some other name like that

The main arguments I have in mind are the four non-longtermist arguments (out of the five) that Toby Ord mentions in The Precipice: those focusing on the past, the present, civilizational virtues, and cosmic significance.

Ideally, the entry would cover both (a) such arguments and (b) reasons why those arguments might be much weaker than the longtermist arguments and thus might not by themselves justify ... (read more)

2
Pablo
I think this would be a very useful article to have. It seems challenging to find a name for it, though. How about short-termist existential risk prioritization? I am not entirely satisfied with it, but I cannot think of other alternatives I like more. Another option, inspired by the second of your proposals, is short-termist arguments for prioritising existential risk. I think I prefer 'risk prioritization' over 'arguments for prioritizing' because the former allows for discussion of all relevant arguments, not just arguments in favor of prioritizing.
2
MichaelA
Hmm, I don't really like "short-termist" (or "near-termist"), since that only seems to cover what Ord calls the "present"-focused "moral foundation" for focusing on x-risks, rather than also the past, civilizational virtue, or cosmic significance perspectives.

Relatedly, "short-termist" seems like it implies we're still assuming a broadly utilitarian-ish perspective but just not being longtermist, whereas I think it'd be good if these tags could cover more deontological and virtue-focused perspectives. (You could have deontological and virtue-focused perspectives that prioritise x-risk in a way that ultimately comes down to effects on the near-term, but not all such perspectives would be like that.)

Some more ideas:

  • Existential risk prioritization for non-longtermists
  • Alternative perspectives on existential risk prioritization
    • I don't really like tag names that say "alternative" in a way that just assumes everyone will know what they're alternative to, but I'm throwing the idea out there anyway, and we do have some other tags with names like that
2
Pablo
The reasons for caring about x-risk that Toby mentions are relevant from many moral perspectives, but I think we shouldn't cover them on the EA Wiki, which should be focused on reasons that are relevant from an EA perspective. Effective altruism is focused on finding the best ways to benefit others (understood as moral patients), and by "short-termist" I mean views that restrict the class of "others" to moral patients currently alive, or whose lives won't be in the distant future. So I think short-termist + long-termist arguments exhaust the arguments relevant from an EA perspective, and therefore think that all the arguments we should cover in an article about non-longtermist arguments  are short-termist arguments.
2
Pablo
It's not immediately obvious that the EA Wiki should focus solely on considerations relevant from an EA perspective. But after thinking about this for quite some time, I think that's the approach we should take, in part because providing a distillation of those considerations is one of the ways in which the EA Wiki could provide value relative to other reference works, especially on topics that already receive at least some attention in non-EA circles.
2
MichaelA
Hmm. I think I agree with the principle that "the EA Wiki should focus solely on considerations relevant from an EA perspective", but have a broader notion of what considerations are relevant from an EA perspective. (It also seems to me that the Wiki is already operating with a broader notion of that than you seem to be suggesting, given that e.g. we have an entry for deontology.)

I think the three core reasons I have this view are:

  1. effective altruism is actually a big fuzzy bundle of a bunch of overlapping things
  2. we should be morally uncertain
  3. in order to do good from "an EA perspective", it's in practice often very useful to understand different perspectives other people hold and communicate with those people in terms of those perspectives

On 1 and 2:

  • I think "Effective altruism is focused on finding the best ways to benefit others (understood as moral patients)" is an overly strong statement.
    • Effective altruism could be understood as a community of people or as a set of ideas, and either way there are many different ways one could reasonably draw the boundaries.
    • One definition that seems good to me is this one from MacAskill (2019):
      • "Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world. [...]
      • The definition is: [...] Tentatively impartial and welfarist. As a tentative hypothesis or a first approximation, doing good is about promoting wellbeing, with everyone’s wellbeing counting equally." (emphasis added, and formatting tweaked)
  • I think we should be quite morally uncertain.
    • And many seemingly smart and well-informed people have given non-welfarist or even non-consequentialist perspectives a lot of weight (see e.g. the PhilPapers survey).
    • And I myself see some force in arguments or intuition
4
Pablo
I'll respond quickly because I'm pressed for time.

  1. I don't think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
  2. I don't understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don't see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term 'welfarism', but I don't think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret 'others' in "helping others effectively" as "whichever beings count morally". Relatedly, uncertainty about what counts as a person's wellbeing is also relevant, at least if we interpret 'helping' in "helping others effectively" as "improving their wellbeing". So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
  3. I agree it often makes sense to frame our concerns in terms of reasons that make sense to our target audience, but I don't see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons Christians and libertarians have for holding the views that they do, rather than the reasons
6
MichaelA
I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.

---

Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. I think the "core" of EA, as I see it, also doesn't include anti-ageing work, and maybe doesn't include a concern for suffering subroutines, but the Wiki covers those things and I think that it's good that it does so. (I do think a notable difference between that and the other moral perspectives is that one could arrive at those focus areas while having a focus on "helping others". But my basic point here is that the core of EA isn't the whole of EA and isn't all that the EA Wiki should cover.)

Going back to "the EA Wiki should focus solely on considerations relevant from an EA perspective", I think that that's a good principle but that those considerations aren't limited to "the core of EA".

---

Was the word "not" meant to be in there? Or did you mean to say the opposite? If the "not" is intended, then this seems to clash with you saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance? If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes those things.

Maybe you're just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those 3 things from Ord's book? (Btw, I do now recognise that I made a mistake in my previous comment - I wrote as if "helping others" meant the focus must be welfarist and impartial, which is incorrect.)

---

I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, w
4
Pablo
(Typing from my phone; apologies for any typos.)

Thanks for the reply. There are a bunch of interesting questions I'd like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread, on reflection I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we may not agree on the definition of EA, the relevance of moral uncertainty, or other issues, luckily that doesn't turn out to be an obstacle to agreeing on this particular issue.

Perhaps the article should simply be called arguments for existential risk prioritization and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don't have strong views on this.

(As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send if you are interested. I probably won't be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
4
MichaelA
Ok, I've gone ahead and made the tag, currently with the name Moral perspectives on existential risk reduction. I'm still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
2
Pablo
Great, I like the name.
4
Pablo
Makes sense. I created it (no content yet).

Make entries for many of the concepts featured on Conceptually

I read the content on that site in 2019 and found it useful. I haven't looked through what concepts are on there to see which ones we already have and which ones might be worth adding, but I expect it'd be useful for someone to do so. So I'm noting it here in case someone else can do that (that'd be my preferred outcome!), or to remind myself to do it in a while if I have time. 

2
Pablo
I like Conceptually, and during my early research I went through their list of concepts one by one, to decide which should be covered by the EA Wiki, though I may have missed some relevant entries. Thoughts on which ones we should include that aren't already articles or listed in our list of projected entries?

Epistemic challenge, or The epistemic challenge, or Epistemic challenges, or any of those but with "to longtermism" added

Relevant posts include the following, and presumably many more:

Related entries

  • cluelessness
  • longtermism
  • expected value
  • forecasting
2
MichaelA
Another idea: Long-range forecasting (or some other name covering a similar topic). See e.g. https://forum.effectivealtruism.org/posts/s8CwDrFqyeZexRPBP/link-how-feasible-is-long-range-forecasting-open-phil

Related entries: cluelessness | estimation of existential risk | forecasting | longtermism

Given how much the scope of this entry/tag would overlap with the scope of an epistemic challenge to longtermism tag, and how much both would overlap with other entries/tags we already have, I think we should probably only have one or the other. (I could be wrong, though. Maybe we should have both but with one being wiki-only. Or maybe we should have both later on, once the Wiki has a larger set of entries and is perhaps getting more fine-grained.)
4
Pablo
I agree with having this tag and subsuming epistemic challenge to longtermism under it. We do already have forecasting and AI forecasting, so some further thinking may be needed to avoid overlap.
4
MichaelA
Ok, I've now made a long-range forecasting tag, and added a note there that it should probably subsume/cover the epistemic challenge to longtermism as well. And yeah, I'm open to people adjusting things later to reduce how many entries/tags we have on similar topics.
2
Pablo
Is the "epistemic challenge to longtermism" something like "the problem of cluelessness, as applied to longtermism", or is it something different?
2
MichaelA
People in EA sometimes use the term "cluelessness" in a way that's pretty much referring to the epistemic challenge or the idea that it's really really hard to predict long-term-future effects. But I'm pretty sure the philosophers writing on this topic mean something more specific and absolute/qualitative, and a natural interpretation of the word is also more absolute ("clueless" implies "has absolutely no clue"). I think cluelessness could be seen as one special case / subset of the broader topic of "it seems really really hard to predict long-term future effects".

I write about this more here and here. Here's an excerpt from the first of those links:

Meanwhile, the epistemic challenge is the more quantitative, less absolute, and in my view more useful idea that:

  • effects probably get harder to predict the further in future they are
  • this might mean we should focus on the near-term if that gradual decrease in our predictive power outweighs the increased scale of the long-term future compared to the nearer-term.

On that, here's part of the abstract of Tarsney's paper:

I think there should either be an entry for each of Accident risk, Misuse risk, and Structural risk, or a single entry that covers all three, or something like that.

Maybe these entries should just focus on AI, since that's where the terms were originally used (as far as I'm aware). On the other hand, I think the same concepts also make sense for other large-scale risks from technologies.

If the entries do focus on AI, maybe they should have AI in the name (e.g. AI accident risk or Accident risk from AI), or maybe not.

In this case, the reason I'm posting thi... (read more)

2
Pablo
There's an accidental harm article, which is meant to cover the risk of causing harm as an unintended effect of trying to do good, as discussed e.g. here. What you describe is somewhat different, since the risk results not so much from "attempts to do good" but from the development of a technology in response to consumer demand (or other factors driving innovation not directly related to altruism). Furthermore, misuse risk can involve deliberate attempts to cause harm, in addition to unintended harm. I guess all of these risks are instances of the broader category of "downside risk", so maybe we can have an article on that?
2
MichaelA
I think there are indeed overlaps between all these things. But I do think that the application of these terms to technological risk specifically or AI risk specifically is important enough to warrant its own entry or set of entries.

Maybe if you feel their distinctive scope is at risk of being unclear, that pushes in favour of sticking with the original AI-focused framing of the concepts, and maybe just mentioning in one place in the entry/entries that the same terms could also be applied to technological risk more broadly? Or maybe it pushes in favour of having a single entry focused on this set of concepts as a whole and the distinctions between them (maybe called Accident, misuse, and structural risks)?

I also wouldn't really want to say misuse risk is an instance of downside risk. One reason is that it may not be downside risk from the misuser's perspective, and another is that downside risk is often/usually used to mean a risk of a downside from something that is or is expected to be good overall. More on this from an older post of mine:

Also, I think I see "accidental harm" as sufficiently covering standard uses of the term "downside risk" that there's not a need for a separate entry. (Though maybe a redirect would be good?)

Update: I've now made this entry.

Fermi estimation or Fermi estimates

Overlaps with some other things in the Decision Theory and Rationality cluster of the Tags Portal.

4
Pablo
I agree that this should be added. I weakly prefer 'Fermi estimation'.

Demandingness objection

I'd guess there are at least a few Forum posts quite relevant to this, and having a place to collect them seems nice, but I could be wrong about either of those points.

[This comment is no longer endorsed by its author]
4
Pablo
I agree it's relevant. But we already have an article: demandingness of morality. (It's likely you haven't seen it because many of these articles were Wiki-only until very recently.)
2
MichaelA
Yeah, I just spotted that and the fact I had a new notification at the same time, and hoped it was anything other than a reply here so I could delete my shamefully redundant suggestion before anyone spotted it :D (I think what happened is that I used command+f on the tags portal before the page had properly loaded, or something.)

Update: I've now made this tag.

Charitable pledges or Altruistic pledges or Giving pledges (but that could be confused with the Giving Pledge specifically) or Donation pledges or similar

Maybe the first two names are good in that they could capture pledges about resources other than money (e.g., time)? But I can't off the top of my head think of any non-monetary altruistic pledges. 

This could serve as an entry on this important-seeming topic in general, and as a directory to a bunch of other entries or orgs on specific pledges (e.g., Giving Pledge, GWWC... (read more)

Antimicrobial resistance or Antibiotic resistance

Not sure enough EAs care about this and/or have written about this on the Forum for it to warrant an entry/tag?

(I don't personally have much interest in this topic, but I'm just one person.)

2
MichaelA
A couple relevant posts I stumbled upon:

  • https://forum.effectivealtruism.org/posts/8ERp3GbQ54Fw8ehuQ/antibiotic-resistance-and-meat-why-we-should-be-careful-in
  • https://forum.effectivealtruism.org/posts/2qXfME3Rrcd7mdnMr/

Update: I've now made this tag.

Something like Bayesianism

Arguments against having this entry/tag:

  • Maybe the topic is sufficiently covered by the entries on Epistemology and on Decision theory?
2
Pablo
Yeah, perhaps name it Bayesian reasoning or Bayesian epistemology?

Cognitive biases/Cognitive bias, and/or entries for various specific cognitive biases (e.g. Scope neglect)

I feel unsure whether we should aim to have just a handful of entries for large categories of biases, vs one entry for each of the most relevant biases (even if this means having 5+ or 10+ entries of this type)

4
Pablo
My sense is that it would be desirable to have both an overview article about cognitive bias, discussing the phenomenon in general (e.g. the degree to which humans can overcome cognitive biases, the debate over how desirable it is to overcome them, etc.), and articles about specific instances of it.
2
MichaelA
I think you mean it'd be desirable to have both a general article on cognitive bias and one article each for various specific instances of it? Rather than having just one general article that covers both the topic as a whole and specific instances of it? Given my assumed interpretation of what you meant, I've now made an entry for Cognitive biases and another for Scope neglect. People could later add more, or delete some, or whatever. (I've now copied the content of this thread to the Discussion page on the Cognitive biases entry. If you or others would like to reply, please do so there.)

Nonlinear Fund

Maybe it's too early to make a tag for that org?

Update: I've now made this entry.

Instrumental vs. epistemic rationality

Some brief discussion here.

These terms may basically only be used in the LessWrong community, and may not be prominent or useful enough to warrant an entry here. Not sure.

2
Pablo
I think this would be useful to have.

Metaethical uncertainty and/or Metanormative uncertainty

These concepts are explained here.

I think it's probably best to instead have an entry on "Normative uncertainty" in general that has sections for each of those concepts, as well as sections that briefly describe (regular) Moral uncertainty and Decision-theoretic uncertainty and link to the existing tags on those concepts. (Also, the entry on Moral uncertainty could discuss the question of how to behave when uncertain what approach to moral uncertainty is best, which is metanormative uncertainty.) This... (read more)

Subjective vs. objective normativity

See here and here