The question is meant to be broad.

I invite y'all to share your ideas here, as they come to you.

Relatedly, if you see a project idea that has already been done, pointing it out as a reply would be useful!

For sharing existing project lists, I suggest doing it in the following post instead: Concrete project lists

-----

Motivation for asking: From now on, I intend to use my answer here to continuously document new ideas I come up with instead of having them logged privately in Google Docs. This is part of my goal of reducing the time between conceiving of an idea and sharing it (How much delay do you generally have between having a good new idea and sharing that idea publicly online?).



4 Answers

WHAT: A book like "Strangers Drowning", but focused on the "E" of EA rather than the "A" of EA.

WHY: Narrative can be a tremendous force in changing people's lives. It's often more powerful than argument (even for brainy people).

There's already a lot of world literature, and many newspaper stories, about people who have been tremendously altruistic. There is much less about people who have been tremendously altruistic and -- this is key -- have been motivated by their altruism to care about effectiveness and listen to the evidence.

I'd love to have a book of biographies or stories that traces -- in narrative rather than argument -- people whose love for others has pushed them to care about effectiveness, care about evidence, and generally adopt a results-oriented outlook focused on what 'really works at the end of the day'. (Note that the book should not be about people who care about effectiveness and evidence in general -- only about those who have deliberately chosen to do so out of altruism rather than, say, out of nerdiness.)

Possible biographies could include Florence Nightingale, Ignaz Semmelweis, Deng Xiaoping, figures from EA and utilitarianism, some theologians in the Second World War who pragmatically looked towards ending the killing (Bonhoeffer, Barth, etc.?), and so on. I'm not vouching for this list of examples at all -- it's more to give an idea.

By the way, creating such a book could be a project for EAs with a different skillset than the cliché EA.

Moving my answers into separate comments below this answer.

Particularly useful feedback includes, but isn't limited to:

  • links to a similar project that was already done
  • connection with people interested in this project
  • analysis of the usefulness of the project

Note: these are just ideas; they might not be a priority, or good at all.

The Bullshit Awards

Proposal: Give prizes to people who spot / blow the whistle on papers that bullshit their readers, and who explain why.

Details: There could be a Bullshit Alert Prize for the person who blows the whistle, and a Bullshit Award for the person who did the bullshitting. The latter would be similar to the Darwin Awards in that you don't want to be the source of such an award.

Example: An analysis that could have won this is Why We Sleep — a tale of institutional failure.

Note: I'm not sure whether that's a good way to go about fixing... (read more)

2
yhoiseth
Great idea! This sounds like a lot of fun. I'm also unsure about the net benefit. We might want to keep it as unaffiliated as possible from other EA organizations in order to avoid any spillover damage.

Belief Network

Last updated: 2020-03-30

Category: group rationality; signal boosting

Proposal: Track people's beliefs over time, and what information gave them the biggest update.

Details: It could be done at the same time as the EA Survey every year. And/or it could be a website that people continuously update.

Motivation: The goals are

1) to track which information is the most valuable so that more people consume it, and

2) to see how beliefs evolve (which might be evidence in itself about which beliefs are true; although I think most, including myself, ... (read more)
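For illustration, here is a minimal sketch of the kind of record such a tracker could store. The schema, field names, and the example update source are hypothetical -- just one way to make goals 1) and 2) above concrete:

```python
# A minimal, hypothetical schema for tracking beliefs and their biggest updates.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BeliefSnapshot:
    question: str                                 # a fixed, precisely worded question
    credence: float                               # probability in [0, 1]
    recorded_on: date
    biggest_update_source: Optional[str] = None   # what moved this belief the most

history = [
    BeliefSnapshot("AGI before 2050?", 0.30, date(2019, 3, 1)),
    BeliefSnapshot("AGI before 2050?", 0.45, date(2020, 3, 30),
                   biggest_update_source="AI and Compute (blog post)"),
]

# Goal 1 above: surface the sources that produced the largest updates.
deltas = [(after.biggest_update_source, round(after.credence - before.credence, 2))
          for before, after in zip(history, history[1:])]
print(deltas)  # [('AI and Compute (blog post)', 0.15)]
```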

Promise Prediction

Proposal: Have a prediction market on what politicians will accomplish in their next mandate.

Why: That way, it will be easier for people to know how likely each policy is to be implemented, and harder for politicians to bullshit everyone.

Related: This project would complement the Polimeter, which tracks the promises made by politicians, really well. They are now part of Vox Pop Labs.

Note: I think I've seen this idea somewhere else, but I don't remember where.

Shaking hands across the world

Category: Bringing powerful countries closer together

Idea: A handshake statue in Times Square and some equivalent place in China, where people can give each other a handshake across the world

Effectiveness: I don't know; it doesn't seem effective, but maybe such symbols are powerful and would bring the world closer together, hence increasing cooperation / reducing the risk of wars

Source: Space Force TV show, s1e7 8:30

Royalty free AI images

Created: early 2019 (or maybe before) | Originally shared on EA Work

Cause area: AI safety

Proposal: Make a collection of (royalty-)free images representing the idea of AI / AI safety / AI x-risk / AI risk that don't anthropomorphize or otherwise misportray AI (both by searching for existing images and by creating more). These could be used by the media, local AI (safety) groups, etc.

Details: I think this is less of a problem than it used to be, but still think this could be valuable. If you want funding for that, you coul... (read more)

Philanthropy tax / Giving your 2 percents

Meta-proposal: Research what the consequences of implementing the proposal would be.

Proposal: Give citizens the ability to decide where X% (say 2%) of their taxes go directly (to a charity or a government program)

Details: Of course, the government could rebalance the rest of its budget in such a way that there's no counterfactual change. But maybe it would still make a difference. If not, then maybe the X% has to go to a charity. Or maybe the donations could be directed to more specific government projects.

Reas... (read more)

Group for collective actions

Status: done, see: https://www.facebook.com/groups/LWCoordination/

Proposal: Have a group to experiment with coordinating on small projects that require coordination

Example: I just posted a proposal about improving the Cause Prioritization Wiki: if 100 major edits get committed, then everyone does the edits they committed to. This is useful because a wiki only becomes interesting when it has a lot of editors, so this lets the platform get bootstrapped and avoids the chicken-and-egg problem.

Comments: There's a meta-threa... (read more)

Quantified Doomsday Clock

Context:

Since the Doomsday Clock from the Bulletin of the Atomic Scientists doesn't have any clear methodology for why it advances or recedes, I am providing the Metaculus Doomsday Clock as an alternative. Currently, it advances by using the Metaculus median prediction of humanity going extinct by 2100 to determine how many minutes we are from midnight. It can be improved, so make suggestions in the comments.
https://sites.google.com/view/metaculus-doomsday-clock

(source: Matthew Barnett's Facebook wall)

P... (read more)
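For illustration, here is one possible mapping from the median extinction prediction to minutes before midnight -- a simple linear scaling, which is my assumption; the linked page may use a different formula:

```python
# One possible (assumed) mapping: scale the community's extinction probability
# linearly onto a 60-minute dial, so p = 0 is a full hour from midnight and
# p = 1 is midnight.

def minutes_to_midnight(p_extinction_by_2100, dial_minutes=60):
    """Minutes before midnight for an extinction probability in [0, 1]."""
    return dial_minutes * (1 - p_extinction_by_2100)

# Example: a 2% median prediction puts the clock 58.8 minutes from midnight.
print(minutes_to_midnight(0.02))  # 58.8
```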

4
Kirsten
Rather than 2100, can I suggest the next century? Otherwise we'd move away from midnight as we approach 2100 -- very counterintuitive
3
Mati_Roy
yeah good point, I agree; thanks!
1
Mati_Roy
https://aicountdown.com/ links to 

Impact of the 5% payout rule

Category: meta-EA; research

Proposal: Research what the consequences of removing the 5% payout rule would be.

Motivating intuition: maybe it would help longer-termist causes (?) and it might also increase the global ratio of investment to consumption (?)

Date posted: 2020-03-06

Additional information:

  • A foundation must pay out 5% of its assets each year, while a public charity need not.
  • Donors to a public charity receive greater tax benefits than donors to a foundation.
  • A public charity must collect at least 10% of its annual expenses fro
... (read more)

Increase the prize for the International Mathematical Olympiad

Rationale: It's a source of talent that EAs have drawn on, and the current prizes are pretty low (less than 100 USD each, AFAIK).

I'd be willing to pitch in for that prize. Please reach out to me if interested.

Rationalist Olympiads

Potential funding: EA Meta Fund

Ideas:

  • FDA Policy Think-tank (and/or advocacy group)
  • Science policy think tank (or advocacy group?)

Potential problem: it might accelerate all scientific progress, which isn't relevant in the framework of differential technological progress, or is possibly harmful (?) if, for example, AI parallelizes better than AI safety

Related: https://causeprioritization.org/Improving_science

Sober September

Created: early 2019 (or maybe before) | Originally shared on EA Work

Cause area: aging

Dry Feb is a Canadian initiative that invites people to go sober for February to raise money for the Canadian Cancer Society: https://www.dryfeb.ca/.

Imagine this idea, but worldwide and for general medical research.

I would suggest fundraising for the Methuselah Foundation because of its broad approach. They fund a lot of prizes, which create market pressures for medical progress, saving donors from having to figure out which research groups are the most effective.... (read more)

Decision Theory Interactive Guide

Created: early 2019 (or maybe before) | Originally shared on EA Work

Proposal: I think this could help people understand decision theories (especially functional decision theory). There could be scenarios where the user chooses an action or a decision procedure and sees how this affects other parts of the scenario that are logically connected to the agent the user controls. For example: playing the prisoner's dilemma with a copy of oneself, Newcomb's problem, etc. It could be done in a similar way to Nicky Case's games.
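As a rough sketch of the kind of interaction such a guide could make playable (everything below is a hypothetical illustration, not an existing tool): the user supplies a decision procedure, and the simulation runs that same procedure for every agent logically connected to the one they control.

```python
# A hypothetical sketch of two scenarios the guide could offer: the opponent
# (or predictor) runs the *same* decision procedure the user chose.

def twin_prisoners_dilemma(policy):
    """Prisoner's dilemma against an exact copy of yourself.

    `policy` is a function returning "C" (cooperate) or "D" (defect).
    Because the opponent runs the identical procedure, both players
    necessarily pick the same action -- the logical link FDT emphasizes.
    """
    my_action = policy()
    twin_action = policy()  # the copy runs the same procedure
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    return payoffs[(my_action, twin_action)]

def newcomb(policy):
    """Newcomb's problem with a perfect predictor that simulates `policy`."""
    predicted = policy()  # the predictor runs your own procedure
    opaque_box = 1_000_000 if predicted == "one-box" else 0
    return opaque_box if policy() == "one-box" else opaque_box + 1_000

print(twin_prisoners_dilemma(lambda: "C"))  # (3, 3)
print(twin_prisoners_dilemma(lambda: "D"))  # (1, 1)
print(newcomb(lambda: "one-box"))           # 1000000
print(newcomb(lambda: "two-box"))           # 1000
```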

EA StackExchange

Created: early 2019 (or maybe before) | Originally shared on EA Work

Create a quality StackExchange site so that the EA community can build up knowledge online.

Note: The previous attempt to do so failed (see: https://area51.stackexchange.com/proposals/97583/effective-altruism).

Maybe summarizing the book "Who Goes First? The Story of Self-experimentation in Medicine". Two possibly important theses:

  • self-experimentation is important
  • medical innovations are available way before they get adopted

Category: research

Externalities of war predictions

See: link

Moved from my short form; created on 2020-02-28

Group to discuss information hazard

Context: Sometimes I come up with ideas that are very likely information hazards, and I don't share them. Most of the time, I come up with ideas that are very likely not information hazards.

Problem: But sometimes I come up with ideas that are in between, or where I can't tell whether I should share them or not.

Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or about whether they should be ... (read more)

2
Kirsten
Why wouldn't you just ask four people who you trust to review each idea in confidence? Why formalize it or insist they reciprocate it?

Altruist credits

Epistemic status: not sure if the idea works

Category: meta

Proposal: Pay someone with a 'donation gift card' or 'donation credits'

Details and rationale:

Often, when I work on a project approved by EAs, I don't necessarily want to be paid as much as I want to be able to have people work on my EA projects in the future.

Imagine you have a Donor-Advised Fund called the Altruist Bank, which issues one Altruist Credit per USD you put into it. The Altruist Credits can be spent by saying to which charity you want the DAF to sen... (read more)
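Here is a toy sketch of the mechanism as described so far (all names are hypothetical): the fund issues one credit per USD deposited, credits can be transferred as payment for project work, and spending a credit directs that dollar to a charity.

```python
# A toy model of the proposed "Altruist Bank" (hypothetical names throughout).

class AltruistBank:
    def __init__(self):
        self.pool_usd = 0
        self.credits = {}  # holder -> credit balance

    def deposit(self, donor, usd):
        """Issue one Altruist Credit per USD placed in the fund."""
        self.pool_usd += usd
        self.credits[donor] = self.credits.get(donor, 0) + usd

    def transfer(self, sender, recipient, amount):
        """Pay someone in credits instead of (or alongside) salary."""
        assert self.credits.get(sender, 0) >= amount
        self.credits[sender] -= amount
        self.credits[recipient] = self.credits.get(recipient, 0) + amount

    def spend(self, holder, charity, amount):
        """Redeem credits by telling the fund where to send the money."""
        assert self.credits.get(holder, 0) >= amount
        self.credits[holder] -= amount
        self.pool_usd -= amount
        print(f"{amount} USD sent to {charity} on behalf of {holder}")

bank = AltruistBank()
bank.deposit("funder", 1000)
bank.transfer("funder", "project_worker", 400)  # payment for EA project work
bank.spend("project_worker", "GiveDirectly", 400)
```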

Coronavirus: Should I go to work?

UPDATE: An EA project I'm part of might do this

Summary: have an app that helps people decide whether or not they should go to work

Context: In the last 12 hours, I spent about 2 hours 'empowering' someone I know by giving them more information to help them decide whether they should take sick days

Problem: Knowing the probability that one is infected (with the coronavirus) helps inform whether one should avoid going to work. The probability beyond which you should stay home is not ... (read more)
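For illustration, one simple way such an app could frame that threshold, assuming both sides of the decision can be priced in common units (all numbers below are hypothetical):

```python
# A minimal expected-value sketch: stay home when the expected harm of going
# in exceeds the cost of a missed workday.

def stay_home_threshold(cost_missed_day, expected_harm_if_infected):
    """Infection probability above which staying home is the better bet.

    Stay home iff p * expected_harm_if_infected > cost_missed_day,
    i.e. iff p > cost_missed_day / expected_harm_if_infected.
    """
    return cost_missed_day / expected_harm_if_infected

# Example: a $150 missed workday vs. $10,000 of expected harm (medical costs,
# transmission to coworkers, ...) gives a 1.5% threshold.
p_star = stay_home_threshold(cost_missed_day=150, expected_harm_if_infected=10_000)
print(f"Stay home if P(infected) > {p_star:.1%}")  # 1.5%
```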

Forum Facebook page

Posted: 2020-03-07

Category: signal boosting

Proposal: Share the best posts (say >=100 karma) from the EA Forum on a Facebook page called "Best of the EA Forum"

Why? So that people who naturally go on Facebook but not the EA Forum can be exposed to that content

Note: If there's a way to get this list easily, it might facilitate the process.
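For illustration, a rough sketch of automating that list via the Forum's GraphQL API. The endpoint and field names below (posts, terms, baseScore, pageUrl) are assumptions on my part and should be checked against the live schema:

```python
# A rough sketch (unverified field names) of pulling high-karma EA Forum posts.
import requests

QUERY = """
{
  posts(input: {terms: {view: "top", limit: 50}}) {
    results { title pageUrl baseScore }
  }
}
"""

resp = requests.post("https://forum.effectivealtruism.org/graphql",
                     json={"query": QUERY})
resp.raise_for_status()
posts = resp.json()["data"]["posts"]["results"]

# Keep only posts at or above the karma threshold mentioned above.
for post in posts:
    if post["baseScore"] >= 100:
        print(f'{post["baseScore"]:>4}  {post["title"]}  {post["pageUrl"]}')
```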

Update: 2020-04-24

Experimental page using Zapier: https://www.facebook.com/EAForumKarma100/

x-post: https://www.facebook.com/groups/1392613437498240/permalink/2947443972015171/

I appreciate this idea! However, I'd prefer that people cross-post Forum content to groups that already have substantial/relevantly targeted audiences (actually, I'd really like people to do this more often), rather than creating a new group that could split off some of the Forum's readership. 

Having a Forum-focused Facebook group also seems like it would raise the chances of more discussion happening on Facebook rather than on the Forum posts themselves, which seems bad (comments harder to find later, not linked to anyone's profile, not open for karma voting, not eligible for the Comment Prize, etc.)

If the group really is just a collection of links that people can easily share in other groups, and if discussion comes back to the Forum, it could be a net positive. I'll be curious to see how it gets used.

thanks for your comment, I totally agree!

maybe we could ban comments? and delete the page if that doesn't end up working?

Rather than a ban, probably official discouragement + a polite reminder that people should add their comments to the Forum as well as the Facebook posts? If people really want to talk on Facebook, it seems bad to stop them, but gentle nudges go a long way!

I will document ideas from others I want to signal boost in replies to this comment
