February 2–8
The Scaling Series

Read Toby Ord's series here, and discuss it here, all week. 

Quick takes

A bit sad to find out that Open Philanthropy’s (now Coefficient Giving) GCR Cause Prioritization team is no more. I heard it was removed/restructured mid-2025, and it seems like most of the people were distributed to other parts of the org. I don't think there were public announcements of this, though it is quite possible I missed something.

I imagine there must have been a bunch of other major changes around Coefficient that aren't yet well understood externally. This caught me a bit off guard.

There don't seem to be many active online artifacts about this team, but I found this hiring post from early 2024, and this previous AMA.
Is the EA Animal Welfare Fund almost as big as Coefficient Giving's farmed animal welfare (FAW) grantmaking now? This job ad says the fund raised >$10M in 2025 and is targeting $20M in 2026, while CG's public Farmed Animal Welfare grants for 2025 total ~$35M. Is this right? Cool to see the fund grow so much either way.
I've been experimenting recently with a longtermist wiki, written fully with LLMs. Some key decisions/properties:

1. Fully LLM-generated, heavily relying on Claude Code.
2. Somewhat opinionated. It tries to represent something of a median longtermist/EA worldview, with a focus on the implications of AI. All pages are rated for "importance".
3. Claude estimates a lot of percentages and letter grades. If you see a percentage or grade with no citation, it may well be a guess by Claude.
4. An emphasis on numeric estimates, models, and diagrams. I had it generate many models related to different topics; some are better than others. I might later take the best ones and convert them to Squiggle models or similar.
5. Still early & experimental. Right now this is somewhere in between an official wiki and a personal project. I expect things will become more stable over time. For now, expect pages to change locations, terminology to sometimes be inconsistent, etc.

I overall think this space is pretty exciting right now, but it definitely brings challenges and requires cleverness.

https://www.longtermwiki.com/
https://www.longtermwiki.com/knowledge-base/responses/epistemic-tools/tools/longterm-wiki/

Recently I've been working on some pages about Anthropic's and the OpenAI Foundation's potential for impact. For example, see:

https://www.longtermwiki.com/knowledge-base/organizations/funders/anthropic-investors/
https://www.longtermwiki.com/knowledge-base/organizations/funders/openai-foundation/

There's also a bunch of information on specific aspects of AI Safety, different EA organizations, and a lot more. It costs about $3-6 to add a basic page, and maybe $10-$30 for a nicer page; I could easily picture wanting even better later on.

Happy to accept requests to add pages for certain organizations/projects/topics/etc. that people here might be interested in! Also looking for other kinds of feedback. I should also flag that one way to use it is […]
It seems like a worthwhile project to ask/pressure Anthropic's founders to make their pledges legally binding.

Anthropic's founders have pledged to donate 80% of their wealth. Ozzie Gooen estimates that in a few years this could be worth >$40 billion. As Ozzie writes, adherence to the Giving Pledge (the Gates one) is pretty low: only 36% of deceased original pledgers met the 50% commitment. It's hard to follow through on such commitments, even for (originally) highly morally motivated people.
Lots of “entry-level” jobs require applicants to have significant prior experience. This seems like a catch-22: if entry-level positions require experience, how are you supposed to get the experience in the first place? Needless to say, this can be frustrating. But we don’t think it's (quite) as paradoxical as it sounds, for two main reasons.

1: Listed requirements usually aren't as rigid as they seem. Employers usually expect that candidates won’t meet all of the “essential” criteria; these are often more of a wish list than an exhaustive list of strict requirements. Because of this, you shouldn’t necessarily count yourself out just because you fall a little short on the listed experience requirements. Orgs within EA are much better at communicating this explicitly, but it's a useful rule of thumb outside of EA as well. You should still think strategically about which roles you apply for, but this is something to factor in.

2: You can develop experience outside of conventional jobs. For a hiring manager, length of experience is a useful heuristic: it suggests you’ve probably picked up the skills needed for the role. But if you can show that you have these skills through other means, the exact amount of experience you have becomes far less important. A few of the best ways to do this:

* Internships and fellowships. These are designed for people entering new fields and signal to employers that someone has already vetted you. They’re often competitive, but usually don’t require previous experience.
* Volunteering. Organizations usually have lower bars for volunteers than for paid positions, making this a more accessible option. Look for advertised volunteering opportunities at orgs you’re interested in, or reach out to them directly.
* Independent projects. Use your spare time to make something tangible you can show potential employers, like an app, a portfolio, a research paper, a blog, or an event you’ve run. Obviously the most useful projects will […]