One way I think EA fails to maximise impact is by favouring legible, clear, and attributable impact over actions whose impact is extremely difficult to estimate.

Writing Wikipedia articles on and around important EA concepts (except perhaps on infohazardous bioterrorism incidents) has low downside risk and extremely high upside risk: it makes these ideas much easier to understand for policymakers and other people in positions of power who come across them and google them. However, the feedback loops are virtually non-existent and the impact is highly illegible.

For example, there are currently no dedicated Wikipedia pages for “Existential Risk” or “Global Catastrophic Biological Risk”.

Writing Wikipedia pages could be a particularly good use of time for people new to EA and people in university student groups who want to gain a better understanding of EA concepts or of EA-relevant policy areas.

Some other ideas for creating new Wikipedia articles or adding more detail to existing ones (a quick way to check which of these titles already exist is sketched after the list):

  • International Biosecurity and Biosafety Initiative for Science
  • Alternative Proteins
  • Governance of Alternative Proteins
  • Global Partnership Biological Security Working Group
  • Regulation of gain-of-function biological research by country
  • Public investment in alternative proteins by country
  • Space governance
  • Regulation of alternative proteins
  • UN Biorisk Working Group
  • Political Representation of Future Generations
  • Political Representation of Future Generations by Country
  • Political Representation of Animals
  • Joint Assessment Mechanism
  • Public investment in AI Safety research by country
  • International Experts Group of Biosafety and Biosecurity Regulators
  • Tobacco taxation by country
  • Global Partnership Signature Initiative to Mitigate Biological Threats in Africa
  • Regulations on lead in paint by country
  • Alcohol taxation by country
  • Regulation of dual-use biological research by country
  • Joint External Evaluations
  • Biological Weapons Convention funding by country
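
Before drafting any of these, it may be worth checking which titles already exist or redirect somewhere. Here is a minimal Python sketch against the public MediaWiki API; the `requests` dependency, the User-Agent string, and the sample titles are placeholder assumptions.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

# A few sample titles from the list above.
TITLES = [
    "Existential risk",
    "Space governance",
    "Alternative proteins",
    "Joint External Evaluations",
]

def check_titles(titles):
    """Report each title as missing, existing, or a redirect."""
    resp = requests.get(API, params={
        "action": "query",
        "titles": "|".join(titles),  # up to 50 titles per request
        "redirects": 1,              # resolve redirects and report them
        "format": "json",
        "formatversion": 2,
    }, headers={"User-Agent": "ea-wiki-gap-check/0.1 (example script)"})
    data = resp.json()["query"]
    status = {}
    for page in data["pages"]:
        status[page["title"]] = "missing" if page.get("missing") else "exists"
    for r in data.get("redirects", []):
        status[r["from"]] = f"redirect to {r['to']}"
    return status

if __name__ == "__main__":
    for title, state in sorted(check_titles(TITLES).items()):
        print(f"{title}: {state}")
```

A "missing" result marks a genuine gap; a redirect often means the topic is already covered under another name.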

Comments (14)

I broadly agree with this and have also previously made a case for Wikipedia editing on the Forum: https://forum.effectivealtruism.org/posts/FebKgHaAymjiETvXd/wikipedia-editing-is-important-tractable-and-neglected

As a caveat, Wikipedia editing has some nuances, and you should make sure you're following community standards; I've tried to lay these out in my post. In particular, before investing a lot of time writing a new article, check whether someone else has tried it before and/or whether the same content is already covered elsewhere. For example, there have been previous unsuccessful efforts to create an 'Existential risk' Wikipedia article. Those attempts failed in part because the relevant content is already covered in the 'Global catastrophic risks' article.
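
As a concrete illustration of checking for prior attempts, the deletion log for a title can be queried directly. A rough sketch, assuming the standard MediaWiki API and the `requests` library (the User-Agent string and the sample title are placeholders):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def deletion_history(title):
    """Return past deletion-log entries for a title, if any."""
    resp = requests.get(API, params={
        "action": "query",
        "list": "logevents",
        "letype": "delete",   # restrict to the deletion log
        "letitle": title,
        "lelimit": 20,
        "format": "json",
        "formatversion": 2,
    }, headers={"User-Agent": "ea-wiki-gap-check/0.1 (example script)"})
    return resp.json()["query"]["logevents"]

for event in deletion_history("Existential risk"):
    print(event["timestamp"], event.get("comment", ""))
```

A non-empty result usually points to an earlier deletion discussion worth reading before re-creating the page.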

Could this also be a good opportunity for pages written in languages other than English?

Yes, very good point!

This article is several years old, but as of 2019 their machine-translation tool was quite poor, and my experience is that articles can have vastly different levels of depth in different languages, so simply bringing French/Spanish/etc. articles up to the level of their English-language analogues might be an easy win.
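
One rough way to find such depth gaps is to compare an article's size across language editions via its interlanguage links. A minimal sketch under the same assumptions as the scripts above (the `requests` library; the sample title and languages are placeholders):

```python
import requests

HEADERS = {"User-Agent": "ea-wiki-depth-check/0.1 (example script)"}

def length_by_language(title, langs=("fr", "es", "de")):
    """Compare an English article's wikitext size with its counterparts elsewhere."""
    resp = requests.get("https://en.wikipedia.org/w/api.php", params={
        "action": "query",
        "titles": title,
        "prop": "langlinks|info",  # interlanguage links + page metadata (incl. length)
        "lllimit": 500,
        "format": "json",
        "formatversion": 2,
    }, headers=HEADERS)
    page = resp.json()["query"]["pages"][0]
    sizes = {"en": page.get("length", 0)}
    for link in page.get("langlinks", []):
        if link["lang"] not in langs:
            continue
        r = requests.get(f"https://{link['lang']}.wikipedia.org/w/api.php", params={
            "action": "query",
            "titles": link["title"],
            "prop": "info",
            "format": "json",
            "formatversion": 2,
        }, headers=HEADERS)
        sizes[link["lang"]] = r.json()["query"]["pages"][0].get("length", 0)
    return sizes

print(length_by_language("Global catastrophic risk"))
```

A large ratio between the English size and a translation's size is a quick flag that the translated article may be worth expanding.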

[anonymous]
Thank you for your comment.

I believe translators of EA articles should prioritise quality, not just translating x articles or y words in z time. Translators should work from the articles with the most depth, and those articles are mostly in English. Current pageviews can help set priorities, but we also need depth of content on a subject, not just a handful of articles predicted to get the most pageviews in the target language.

Translating articles about EA is low-hanging fruit, especially in Wikipedia language editions with several million or more speakers. We should not underestimate that an article we translate today, or a hundred of them, will most likely remain on Wikipedia for decades if not centuries, even if editors completely rewrite it along the way.

Effective altruism has a visibility gap on the internet in general and on Wikipedia specifically. Neither this gap nor Wikipedia's impact as a source of knowledge for the general public, policymakers, and decision-makers should be ignored.

What I vehemently recommend is that there be no promotion of or investment in paid editing. If individual EAs insist on this path, EA could become associated with paid editing on Wikipedia. Paid editing has a very bad reputation within the Wikipedian community and beyond; it would stain EA and repel people. Volunteer translators are perhaps harder to come by, but that should give EA communities an even stronger will to reach out to fellow members and make the case for volunteer work on this. EA communities should promote edit-a-thons, but with clear guidelines: Neutral Point of View (NPOV) editing and no remuneration.

Edited: corrected several typos on my part.

Note that it's much easier to improve existing pages than to add new ones.

More EA-relevant Wikipedia articles that don't yet exist:

  • Place premium
  • Population Ethics pages
    • Sadistic conclusion
    • Critical-threshold approaches
  • Cantril Ladder
  • Axelrod's Meta-Norm
  • Open-source game theory
  • Humane Technology
  • Chris Olah
  • Machine Learning Interpretability
    • Circuit
    • Induction head
  • Lottery Ticket Hypothesis
  • Grokking
  • Deep Double Descent
  • Nanosystems: Molecular Machinery, Manufacturing, and Computation
  • Global Priorities Institute
  • Scaling Laws for Large Language Models

Some of these articles are about AI capabilities, so perhaps not as great to write about.

Additionally, the following EA-relevant articles could be greatly improved:

That hasn’t been entirely my experience. In fact, when I made the page for the Foreign Dredge Act of 1906, I was pleasantly surprised at how quickly others jumped in to improve on my basic efforts - it was clearly a case of just needing the page to exist at all before it started getting the attention it deserved.

By contrast, I’ve found that trying to do things like good article nominations, where you’re trying to satisfy the demands of self-selected nonexpert referees, can be frustrating. The same is true for trying to improve pages already getting a lot of attention. Even minor improvements to the Monkeypox page during the epidemic were the subject of heated debate and accusations on the talk page. When a new page is created, it doesn’t have egos invested in it yet, so you don’t really have to argue with anybody very much.

I’d be interested in learning more about the experiences that lead you to say it’s harder to create pages than to improve them. I’m not a complete novice, but you seem to have a lot more experience than I do.

Epistemic status: ~150 Wikipedia edits, of which 0 are genuine article creations (apart from redirects). I've mostly done slight improvements on non-controversial articles. Dunno about being a novice, but looking at your contributions on WP you've done more than me :-)

I was thinking mostly of the fact that you need to be autoconfirmed (an account more than 4 days old with ≥10 edits) to create articles. I also have the intuition that creating an article is more likely to be wasted effort than improving an existing one, because of widespread deletionism. One example is the Harberger tax article, which was nearly removed, much to my dismay.
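
For what it's worth, those thresholds can be checked programmatically. A small sketch, assuming the standard MediaWiki API and `requests` (the username is a placeholder, and this only approximates the real autoconfirmed flag):

```python
from datetime import datetime, timezone

import requests

API = "https://en.wikipedia.org/w/api.php"

def meets_autoconfirmed_thresholds(username):
    """Rough check of the standard thresholds: account > 4 days old and >= 10 edits."""
    resp = requests.get(API, params={
        "action": "query",
        "list": "users",
        "ususers": username,
        "usprop": "editcount|registration",
        "format": "json",
        "formatversion": 2,
    }, headers={"User-Agent": "ea-wiki-gap-check/0.1 (example script)"})
    user = resp.json()["query"]["users"][0]
    registered = datetime.fromisoformat(user["registration"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - registered).days
    return age_days > 4 and user["editcount"] >= 10

print(meets_autoconfirmed_thresholds("Jimbo Wales"))  # placeholder username
```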

Perhaps this is more true for the kind of article I'm interested in, which is relatively obscure concepts from science (with less heated debate), and less about current events (where edits might be more difficult due to controversy & edit wars).

I have also encountered deletionism. When I was improving the aptamer article for a good article nomination, the reviewer recommended splitting a section on peptide aptamers into a separate article. After some thinking, I did so. Then some random editor who I’d never interacted with before deleted the whole peptide aptamer article and accused me of plagiarism/copying it from someplace else on the internet, and never responded to my messages trying to figure out what he was doing or why.

It’s odd to me because the Foreign Dredge Act is a political issue, while peptide aptamers are an extremely niche topic. And the peptide aptamer article contained nothing but info that had been on Wikipedia for years, while I wrote the Dredge Act article from scratch. Hard to see rhyme or reason, and very frustrating that there’s no apparent process for dealing with a vandal who thinks of themselves as an “editor.”

Here are a couple of social science papers on the evidence that (well-written) Wikipedia articles have an impact on real-world outcomes:

I think the main caveat (also mentioned in other comments) is that these papers are predicated on high quality edits or page creations that align with Wikipedia standards.

[anonymous]
I honestly never thought I would read a post about Wikipedia on this forum. To my delight, I found out today that there is talk of Wikipedia here! :)

Great work!!

[anonymous]
Thank you! :) Have a good day!
