Software Developer at Giving What We Can, trying to make giving significantly and effectively a social norm.
It looks like this is driven entirely by the reduction in GiveWell/global health and development funding, and that the other fields have actually been stable or even expanding.
This seems to be the opposite of what the data says up to 2024.
Comparing 2024 to 2022, GH decreased by 9%, LTXR decreased by 13%, AW decreased by 23%, Meta decreased by 21%, and "Other" increased by 23%.
I think the data for 2025 is too noisy, and too sensitive to reporting timing (whether an org publishes its grant reports early or late in the year), to inform an opinion.
Hopefully this is an auspicious sign of things to come?
My understanding is that they already raise and donate millions of dollars per year to effective projects in global health (especially tuberculosis).
For what it's worth, their subreddit seems a bit ambivalent about explicit "effective altruism" connections (see here or here).
Btw, I would be surprised if the ITN framework was independently developed from first principles:
I used DoneThat for a while and also highly recommend it, especially given the low cost ($5/month).
As a piece of feedback, I think you should have included this video in the post: https://www.loom.com/share/53d45343051846ca8328ccd91fa4c3a8 and people should watch it before deciding whether to download it. It made me feel much more confident about the privacy aspects (especially when using one's own Gemini API key).
If you upload it to YouTube, you can also easily embed it in a bunch of places (including this forum).
I personally found it a very refreshing change of language/thinking/style from the usual EA Forum/LessWrong post, and the extra effort to (hopefully) understand it was well worth it and highly enjoyable.
My one-sentence summary/translation would be that advocating for longtermism would likely benefit on the margin from using more of a virtue ethics approach (e.g. using saints and heroes as examples) instead of a rationalist/utilitarian one, since most people feel even less of an obligation towards future beings than towards the global poor, and many of the most altruistic people act altruistically for emotional/spiritual reasons rather than rational ones.
I could definitely have misunderstood the post though, so please correct me if I misinterpreted it. There are also a lot more valuable points, e.g. that most people already agree on an abstract level that future people matter and that actively causing them harm is bad, so I think it claims that longtermists should focus less on strengthening that case and more on other things. Another interesting point is that to "mitigate hazards we create for ourselves" we could take advantage of the fact that, for most people, "causing harm is intuitively worse than not producing benefit".
I think SummaryBot below also did a good job at translating.
Reposting this comment from the CEO of Open Philanthropy 12 days ago, as I think some people missed it:
A quick update on this: Good Ventures is now open to supporting work that Open Phil recommends on digital minds/AI moral patienthood. We're still figuring out where that work should slot in (including whether we’d open a public call for applications) and will update people working in the field when we do. Additionally, Good Ventures are now open to considering a wider range of recommendations in right-of-center AI policy and a couple other smaller areas (e.g. in macrostrategy/futurism), though those will be evaluated on a case-by-case basis for now. We’ll hopefully develop clearer parameters for GV interest over time (and share more when we have those). In practice, given our increasing work with other donors, we don’t think any of this is a huge update; we’d like to continue to hear about and expect to be able to direct funding to the most promising opportunities whether or not they are a fit for Good Ventures.
(More info on the film's creation in the FLI interview: Suzy Shepherd on Imagining Superintelligence and "Writing Doom")
Correct link: https://www.youtube.com/watch?v=McnNjFgQzyc
Another FLI-funded YouTube channel is https://www.youtube.com/@Siliconversations, which has ~2M views on AI safety content.
Posts on this topic that I liked:
I fairly strongly disagree with "be honest about your counterfactual impact—most people overestimate it.", and with the advice to only work at a nonprofit you consider effective if you think you're ~10x better than the counterfactual hire or "irreplaceable."
As an example, I'm confident that there are software developers who would have been significantly more impactful than me in my role at GWWC but didn't apply, and the extra ~$/year that they are donating (assuming they are in fact donating more than they otherwise would have) does not compensate for that.
I also think that there's a good chance that I would have done other vaguely impactful work, or donated more myself, if they had been hired instead of me, largely compensating for their missed donations.
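To make the shape of that comparison concrete, here is a minimal sketch with purely made-up numbers (the figures and variable names are only illustrative, not estimates of anyone's actual impact):

```python
# Hypothetical illustration of the counterfactual comparison above.
# All numbers are made up; they only show the structure of the argument.

# Scenario A: I take the role, the stronger candidate earns and donates elsewhere.
my_direct_impact = 1.0        # arbitrary "impact units" per year from my work
their_extra_donations = 0.3   # impact of the *additional* amount they donate
scenario_a = my_direct_impact + their_extra_donations

# Scenario B: the stronger candidate takes the role, I do other impactful work or donate.
their_direct_impact = 2.0     # "significantly more impactful than me"
my_other_impact = 0.3         # other vaguely impactful work or extra donations of mine
scenario_b = their_direct_impact + my_other_impact

print(f"Scenario A (I take the role): {scenario_a}")
print(f"Scenario B (they take the role): {scenario_b}")
# With these made-up numbers Scenario B comes out ahead: their extra donations
# don't close the gap in direct impact, and my alternative impact partly
# offsets their missed donations.
```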
Quick flag that the FAQ right below hasn't been updated.
Not sure how useful this is, and you mentioned you can't speak to the choice of principles, but on a personal note: the collaborative spirit value was one of the things I appreciated most about EA when I first came across it.
I think that infighting is a major reason why EA and many similar movements achieve far less than they could. I really like it when EA is a place where people with very different beliefs, who prioritise very different projects, can collaborate productively, and I think it's a major reason for its success. It seems more distinctive/specific than acknowledging tradeoffs, more important to have written down explicitly as a core value to keep the community from drifting away from it, and a great value proposition.
Like James, I also found it weird that what had become a canonical definition of EA was changed without a heads-up to its community.
In any case, thank you so much for all your work, and I'm grateful that, thanks to you, it survives as a paragraph in the essay.