Nov 10: Funding strategy week. A week for discussing funding diversification, when to donate, and other strategic questions.
Nov 17: Marginal funding week
Nov 24: Donation election
Dec 8: Why I donate week
Dec 15: Donation celebration
This seems the opposite of what the data says up to 2024. Comparing 2024 to 2022, GH decreased by 9%, LTXR decreased by 13%, AW decreased by 23%, Meta decreased by 21%, and "Other" increased by 23%. I think the data for 2025 is too noisy and mostly sensitive to reporting timing (whether an org publishes their grant reports early in the year or later in the year) to inform an opinion.
Thanks for the comment, Jim! 0.0374 % is my best guess, but I agree there is lots of uncertainty. For an exponent of the number of neurons of 0.188, which explains pretty well the welfare ranges in Bob's book about comparing animal welfare across species, the effects on soil animals would dominate even more easily. However, I would also not be surprised if 100 times as much consumption of the affected farmed shrimp, 3.74 % (= 100*3.74*10^-4), would have to be replaced by farmed fish for the effects on soil animals to dominate, in which case I could easily see electrically stunning shrimp increasing animal welfare. My main takeaway from the section where I discuss electrically stunning shrimp is that I do not really know whether it increases or decreases welfare. I would still believe this even if my preferred way of comparing welfare across species was certain to be right. There is plenty of uncertainty in whether electrically stunning farmed shrimp increases or decreases the welfare of soil animals, and in the replacement of the consumption of farmed shrimp by other foods. I agree decreasing the uncertainty about comparing welfare across different potential beings should also be a priority. I just feel it is very hard to make progress on this in comparison to gaining a better understanding of the conditions of soil animals in different biomes.
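For readers who want to see the arithmetic, here is a minimal sketch of the two calculations referenced in this comment: the 100*3.74*10^-4 = 3.74 % replacement fraction, and how an exponent of 0.188 on neuron counts compresses welfare ranges across species. The specific neuron counts and the normalisation to humans are illustrative assumptions, not figures from the comment.

```python
# A minimal sketch of the arithmetic above. The human, shrimp, and nematode
# neuron counts, and the normalisation to humans, are illustrative assumptions,
# not figures from the comment.

HUMAN_NEURONS = 8.6e10  # rough estimate of the human neuron count (assumption)

def welfare_range(neurons: float, exponent: float = 0.188) -> float:
    """Welfare range relative to humans, assuming it scales as (neuron ratio)^exponent."""
    return (neurons / HUMAN_NEURONS) ** exponent

# Replacement fraction: 3.74*10^-4 of the affected farmed-shrimp consumption.
base_fraction = 3.74e-4
print(f"Base replacement fraction: {100 * base_fraction:.4f} %")        # 0.0374 %
print(f"100x the base fraction:    {100 * 100 * base_fraction:.2f} %")  # 3.74 %

# How strongly the 0.188 exponent compresses differences in neuron counts.
for label, n in [("shrimp (assumed 1e5 neurons)", 1e5),
                 ("nematode (assumed 3e2 neurons)", 3e2)]:
    print(f"{label}: welfare range ~ {welfare_range(n):.3f} of a human's")
```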
These materials may be helpful for donors who are setting their philanthropy strategy.
1. This online philanthropy strategy tool asks you a few questions to help you select worldviews to include in your giving.
2. A template philanthropy strategy document, which you can populate -- ideally after using the tool.
It would be great to see people using these tools and providing feedback on them. You can provide feedback by messaging me (Sanjay) on this forum (or any other way, if you already have my contact details). A huge thanks to Spencer E (who left SoGive to work at GWWC) and the other SoGive staff members who've been instrumental in laying the groundwork for this. It may be a useful exercise for donors who are sympathetic to worldview diversification.
* This post summarises the questions asked by the survey, and the analysis done.
* It also summarises some reasons why this tool may not be helpful for some people on this forum.

The questions asked by the survey, and the analysis done
Questions asked
The survey focuses on the two areas that we at SoGive consider to be the most material: (1) cause/worldview selection, and (2) timing of donations (aka give now/give later), along with the associated investment considerations. For each question, the survey includes a summary of the considerations we consider most relevant, to help the user make better decisions.
(1) Worldview selection
The survey asks the user two trade-off questions:
1a) How many chicken lives improved are equivalent to one life saved?
1b) How many future lives enabled are equivalent to one child's life saved today?
(2) Timing of donations
2a) The survey asks users to choose between (a) a more patient approach, such as a capital-preserving endowment or a patient philanthropy approach which aims to grow the capital, and (b) spending down the funds within a finite time period.
2b) It also asks users whether they want to take a strategic approach to investment (ie selecting …
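For concreteness, here is a hypothetical sketch of how answers to the two trade-off questions could be converted into rough cause weights. The weighting scheme, function name, and example answers below are assumptions for illustration only; they are not SoGive's actual analysis, which is not described in detail in the post.

```python
# Hypothetical illustration of turning the two trade-off answers into rough
# cause weights. This scheme is an assumption for illustration only; it is
# NOT SoGive's actual analysis.

def cause_weights(chickens_per_life: float, future_per_present_life: float) -> dict:
    """Convert the two trade-off answers into normalised weights for three worldviews.

    chickens_per_life: chicken lives improved judged equivalent to one life saved.
    future_per_present_life: future lives enabled judged equivalent to one child's
        life saved today.
    """
    raw = {
        "global_health": 1.0,
        "animal_welfare": 1.0 / chickens_per_life,
        "longtermism": 1.0 / future_per_present_life,
    }
    total = sum(raw.values())
    return {cause: weight / total for cause, weight in raw.items()}

# Example: a donor who answers "100 chickens ~ 1 life saved" and
# "1,000 future lives ~ 1 child's life saved today".
print(cause_weights(chickens_per_life=100, future_per_present_life=1000))
# {'global_health': 0.989..., 'animal_welfare': 0.0098..., 'longtermism': 0.00098...}
```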
That does seem right, thanks. I intended to include dictator-ish human takeover there (which seems to me to be at least as likely as misaligned AI takeover) as well, but didn't say that clearly. Edited to "relatively amoral forces" which still isn't great but maybe a little clearer.
$4260 raised for the Donation Election Fund (includes our match on the first $5000)
Quick takes

I notice the 'guiding principles' in the introductory essay on effectivealtruism.org have been changed. It used to list: prioritisation, impartial altruism, open truthseeking, and a collaborative spirit. It now lists: scope sensitivity, impartiality, scout mindset, and recognition of trade-offs. As far as I'm aware, this change wasn't signalled. I understand lots of work has recently been done to improve the messaging on effectivealtruism.org -- which is great! -- but it feels a bit weird for 'guiding principles' to have been changed without any discussion or notice. As far as I understand, back in 2017 a set of principles was chosen through a somewhat deliberative process, and then organisations were invited to endorse them. This feels like a more appropriate process for such a change.
Linch (1d)
crossposted from https://inchpin.substack.com/p/legible-ai-safety-problems-that-dont
Epistemic status: Think there's something real here but drafted quickly and imprecisely.
I really appreciated reading Legible vs. Illegible AI Safety Problems by Wei Dai. I enjoyed it as an impressively sharp crystallization of an important idea:
1. Some AI safety problems are "legible" (obvious/understandable to leaders/policymakers) and some are "illegible" (obscure/hard to understand).
2. Legible problems are likely to block deployment, because leaders won't deploy until they're solved.
3. Leaders WILL still deploy models with illegible AI safety problems, since they won't understand the problems' full import and will deploy the models anyway.
4. Therefore, working on legible problems has low or even negative value. If unsolved legible problems block deployment, solving them will just speed up deployment and thus AI timelines.
   1. Wei Dai didn't give a direct example, but the iconic example that comes to mind for me is Reinforcement Learning from Human Feedback (RLHF): implementing RLHF for early ChatGPT, Claude, and GPT-4 was likely central to making chatbots viable and viral.
   2. The raw capabilities were interesting, but the human attunement was necessary for practical and economic use cases.
I mostly agree with this take. I think it's interesting and important. However (and I suspect Wei Dai will agree), it's also somewhat incomplete. In particular, the article presumes that "legible problems" and "problems that gate deployment" are the same set, or at least that the correlation is positive enough that the differences are barely worth mentioning. I don't think this is true.
For example, consider AI psychosis and AI suicides. Obviously this is a highly legible problem that is very easy to understand (though not necessarily to quantify or solve). Yet they keep happening, and AI companies (or at least the less responsible ones) seem happy to continue deploying models without …
Vince Gilligan (the Breaking Bad guy) has a new show Pluribus which is many things, but also illustrates an important principle, that being (not a spoiler I think since it happens in the first 10 minutes)... If you are SETI and you get an extraterrestrial signal which seems to code for a DNA sequence... DO NOT SYNTHESIZE THE DNA AND THEN INFECT A BUNCH OF RATS WITH IT JUST TO FIND OUT WHAT HAPPENS.  Just don't. Not a complicated decision. All you have to do is go from "I am going to synthesize the space sequence" to "nope" and look at that, x-risk averted. You're a hero. Incredible work.
People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble. Since the timeline of any bubble is extremely hard to predict and isn't the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031). If you know more about forecasting than I do, and especially if you can think of good ways to financially operationalize such a prediction (one rough sketch of a resolution criterion is below), I'd encourage you to make a post about this. For now, an informal poll:
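The sketch referenced above: one possible way to operationalize "the bubble pops" is to resolve YES if an AI-heavy index falls by some large fraction from its running peak before the deadline. The index choice, the 50% drawdown threshold, the 2031 deadline, and the function name are placeholder assumptions, not a proposal from the quick take.

```python
from datetime import date

# A rough sketch of one possible resolution criterion for "AI is in a bubble":
# resolve YES if an AI-heavy index falls by a large fraction from its running
# peak before a deadline. The index choice, 50% threshold, 2031 deadline, and
# function name are placeholder assumptions, not a proposal from the quick take.

def bubble_popped(prices_by_date: dict[date, float],
                  deadline: date = date(2031, 1, 1),
                  drawdown_threshold: float = 0.5) -> bool:
    """Return True if the price series falls by `drawdown_threshold` from its
    running peak at any point strictly before `deadline`."""
    peak = float("-inf")
    for day in sorted(prices_by_date):
        if day >= deadline:
            break
        price = prices_by_date[day]
        peak = max(peak, price)
        if price <= peak * (1 - drawdown_threshold):
            return True
    return False

# Example with made-up prices: a run-up followed by a >50% fall before 2031.
example = {date(2026, 1, 1): 100.0, date(2027, 6, 1): 180.0, date(2029, 3, 1): 80.0}
print(bubble_popped(example))  # True
```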