

The Effective Altruism Forum is a platform run by the Centre for Effective Altruism to facilitate discussions relevant to effective altruism and coordinate related projects.

Here are some resources for Forum users:

  1. 🖋️  Write on the EA Forum
  2. 🦋  Guide to norms on the Forum
  3. 🛠️  Forum User Manual
  4. 📜  Terms of Use / License

You can also sign up for the EA Forum Digest, a weekly email that shares some of our favorite EA Forum posts from the past week. We usually include some question or request posts, and we’re starting to share a “classic Forum post” each week, too. You can find some recent issues here.

Comments (7)



Just trying to make my way through this new experience. I am the "director" of a self-made project for women and children in Guatemala, Creando Mi Futuro. We have been running for 8 years, but since I'm now 86 years old and would like the project to continue after my death, we are planning to register as an NGO in Guatemala (our umbrella NGO is in California). I have long wanted to share ideas with other small non-profits; I hope this might be a place to do so.

Amazing activism and commitment. At your age you still have an impact on your community. Thanks!

Is there a way to do blog chains on the forum? I'm thinking of something similar to the format on Ribbonfarm.com

Ah I figured it out! There are sequences! This is amazing! Thank you for this. I'm going to experiment to see if that gets the feature set I'm looking for: https://forum.effectivealtruism.org/library

Again I have to say: this forum is really well done! 
