Quick takes


The next PauseAI UK protest will be (AFAIK) the first coalition protest between different AI activist groups, the main other group being Pull the Plug, a new organisation focused primarily on current AI harms. It will almost certainly be the largest protest focused exclusively on AI to date.

In my experience, the vast majority of people in AI safety are in favor of big-tent coalition protests on AI in theory. But when faced with the reality of working with other groups who don't emphasize existential risk, they have misgivings. So I'm curious what people he... (read more)

Consider adopting the term o-risk.

William MacAskill has recently been writing a bunch about how, if you’re a longtermist, it’s not enough merely to avoid catastrophic outcomes. Even if we get a decent long-term future, it may still fall far short of the best future we could have achieved. This outcome — of a merely okay future, when we could have had a great future — would still be quite tragic.

Which got me thinking: EAs already have terms like x-risk (for existential risks, or things which could cause human extinction) and s-risk (for suffering risks,... (read more)

You might want to check out this (only indirectly related, but maybe useful):

https://forum.effectivealtruism.org/posts/zuQeTaqrjveSiSMYo/a-proposed-hierarchy-of-longtermist-concepts
 

Personally I don't mind o-risk and think it has some utility, but s-risk somewhat seems like it still works here. Isn't an o-risk just a smaller-scale s-risk?
 

Thanks to everyone who voted for our next debate week topic! Final votes were locked in at 9am this morning. 

We can’t announce a winner immediately, because the highest-karma topic (and perhaps some of the others) touches on issues related to our “politics on the EA Forum” policy. Once we’ve clarified which topics we would be able to run, we’ll be able to announce a winner. 

Once we have, I’ll work on honing the exact wording. I’ll write a post with a few options, so that you can have input into the exact version we end up discussing. 

PS: ... (read more)

Toby Tremlett🔹
Thanks for the comments @Clara Torres Latorre 🔸 @NickLaing @Aaron Gertler 🔸 @Ben Stevenson. This is all useful to hear. I should have an update later this month.

Nice one @Toby Tremlett🔹. If the forum dictators decide that the democratically selected topic of democratic backsliding is not allowed, I will genuinely be OK with that decision ;).

Clara Torres Latorre 🔸
I think allowing this debate to happen would be a fantastic opportunity to put our money where our mouth is regarding not ignoring systemic issues: https://80000hours.org/2020/08/misconceptions-effective-altruism/#misconception-3-effective-altruism-ignores-systemic-change

On the other hand, deciding that democratic backsliding is off limits, and not even trying to have a conversation about it, could (rightfully, in my view) be treated as evidence of EA being in an ivory tower and disconnected from the real world.

Consultancy Opportunities – Biological Threat Reduction 📢📢📢


The World Organisation for Animal Health (WOAH) is looking for two consultants to support the implementation of the Fortifying Institutional Resilience Against Biological Threats (FIRABioT) Project in Africa. Supported by Global Affairs Canada's Weapons Threat Reduction Program, this high-impact initiative aims to help WOAH Members strengthen their capacities to prevent, detect, prepare for, respond to, and recover from biological threats. The project also supports the implementation of th... (read more)

EA Animal Welfare Fund almost as big as Coefficient Giving FAW now?

This job ad says they raised >$10M in 2025 and are targeting $20M in 2026. CG's public Farmed Animal Welfare 2025 grants are ~$35M.  

Is this right?

Cool to see the fund grow so much either way.

Agree that it’s really great to see the fund grow so much!

That said, I don’t think it’s right to say it’s almost as large as Coefficient Giving. At least not yet... :) 

The 2025 total appears to exclude a number of grants (including one to Rethink Priorities) and only runs through August of that year. By comparison, Coefficient Giving’s farmed animal welfare funding in 2024 was around $70M, based on the figures published on their website.

Jeff Kaufman 🔸
Specifically within animal welfare (this wasn't immediately clear to me, and I was very confused about how CG's grants could be so low).
Ben_West🔸
Ah yeah good point, I updated the text.

A bit sad to find out that Open Philanthropy’s (now Coefficient Giving) GCR Cause Prioritization team is no more. 

I heard it was removed/restructured mid-2025. Seems like most of the people were distributed to other parts of the org. I don't think there were public announcements of this, though it is quite possible I missed something. 

I imagine there must have been a bunch of other major changes around Coefficient that aren't yet well understood externally. This caught me a bit off guard. 

There don't seem to be many active online artifa... (read more)

A delightful thing happened a couple of weeks ago, and it's a good example of why more people should comment on the forum. 

My forum profile is pretty sparse: fewer than a dozen comments, most of them along the lines of 'I appreciate the work done here!'. Nevertheless, because I have linked some social media profiles and set my city in the directory, a student from a nearby university reached out to ask about career advice after finding me on the forum. I gave her a personalised briefing on the local policy space and explained the details of how to... (read more)

Lots of “entry-level” jobs require applicants to have significant prior experience. This seems like a catch-22: if entry-level positions require experience, how are you supposed to get the experience in the first place? Needless to say, this can be frustrating. But we don’t think this is (quite) as paradoxical as it sounds, for two main reasons. 

1: Listed requirements usually aren't as rigid as they seem.

Employers usually expect that candidates won’t meet all of the “essential” criteria. These are often more of a wish list than an exhaustive list... (read more)

Anecdotally, it seems like many employers have become more selective about qualifications, particularly in tech, where the market got really competitive in 2024: junior engineers were suddenly competing with laid-off senior engineers and FAANG bros.

Also, per their FAQ, Capital One has a policy not to select candidates who don't meet the basic qualifications for a role. One Reddit thread says this is also true for government contractors. Obviously this may vary among employers - is there any empirical evidence on how often candidates get hired without meeti... (read more)

@Ryan Greenblatt and I are going to record another podcast together (see the previous one here). We'd love to hear topics that you'd like us to discuss. (The questions people proposed last time are here, for reference.) We're most likely to discuss issues related to AI, but a broad set of topics other than "preventing AI takeover" are on topic. E.g. last time we talked about the cost to the far future of humans making bad decisions about what to do with AI, and the risk of galactic scale wild animal suffering.


Much of the stuff that catches your interest among the 80,000 Hours website's problem profiles is something I'd like to watch you do a podcast on, or would be costly for me to get from people whose work I'm less familiar with. Also, neurology, cogpsych/evopsych/epistasis (e.g. this 80k podcast with Randy Nesse, this 80k podcast with Athena Aktipis), and especially more quantitative modelling approaches to culture change/trends (e.g. the 80k podcast with Cass Sunstein, the 80k podcast with Tom Moynihan, the 80k podcasts with David Duvenaud and Karnofsky). A lot... (read more)

Pablo
I’d be interested in seeing you guys elaborate on the comments you make here, in response to Rob’s question, that some control methods, such as AI boxing, may be “a bit of a dick move”.
Noah Birnbaum
I would love to hear any updated takes on this post from Ryan. 

Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence.

Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard).

Hopefully this is auspicious for things to come?

Arnold Beckham
Only if someone's inviting him perhaps? @akash 🔸 

I just emailed him; close to zero chance he'll see it, but if he does 🤞

Lorenzo Buonanno🔸
My understanding is that they already raise and donate millions of dollars per year to effective projects in global health (especially tuberculosis). For what it's worth, their subreddit seems a bit ambivalent about explicit "effective altruism" connections (see here or here).

Btw, I would be surprised if the ITN framework was independently developed from first principles:
* He says exactly the same 3 things in the same order
* They have known about effective altruism for at least 11 years (see the top comment here)
* There have been many effective altruism themed videos in their "Project for Awesome" campaigns over several years
* They have collaborated several times with 80,000 Hours and Giving What We Can
* There are many other reasonable things you could come up with (e.g. urgency)

I'm researching how safety frameworks of frontier labs (Anthropic RSP, OpenAI Preparedness Framework, DeepMind FSF) have changed between versions.

Before I finish the analysis, I'm collecting predictions to compare with the actual findings later (5 quick questions): Questions

Disclaimer: please take this with a grain of salt; the questions were drafted quickly with AI help, and I'm treating this as a casual experiment, not rigorous research.

Thanks if you have a moment

@Toby Tremlett🔹 and I will be repping the EA Forum Team at EAG SF in mid-Feb — stop by our office hours to ask questions, give us your hottest Forum takes, or just say hi and come get a surprise sweet! :)

Reminder: applications for EAG SF close soon (this Sunday!)

NickLaing
Unfortunately it's not a surprise sweet any more; you really messed that one up. 

The surprise is what kind of sweet! ^^

I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.

Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.... (read more)


Following up a bit on this, @parconley. The second post in Zvi's covid-19 series is from 6pm Eastern on March 13, 2020. Let's remember where this is in the timeline. From my quick take above:

On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a nationa

... (read more)
Yarrow Bouchard 🔸
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I'll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances. Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
Jason
I like this comment. This topic is always at risk of devolving into a generalized debate between rationalists and their opponents, creating a lot of heat but not light. So it's helpful to keep a fairly tight focus on potentially action-relevant questions (of which the comment identifies one).

Hey y'all,

My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. This isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and it's not a particularly kind framing of EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept, but it also frames several EA and EA-adjacent activities negatively, saying EA quote "has an enormously well fund... (read more)

Charlie G 🔹
I actually share a lot of your read here. I think it is actually a very strong explanation of Singer's argument (the shoes-for-suit swap is a nice touch), and the observation about the motivation for AI safety warrants engagement rather than dismissal.

My one quibble with the video's content is the "extreme utilitarians" framing; as I'm one of maybe five EA virtue ethicists, I bristle a bit at the implication that EA requires utilitarianism, and in this context it reads as dismissive. It's a pretty minor issue though.

I think that the video is still worth providing a counter-narrative to, though, and I think that's actually going to be my primary disagreement. For me, that counter-narrative isn't that EA is perfect, but that taking a principled EA mindset towards problems actually leads towards better solutions, and has led to a lot of good being done in the world already.

The issue with the video, which I should've been more explicit about in the original comment, is that when taken in the context of TikTok, it acts as a reinforcement to people who think that you can't try to make the world better. She presents a vision of EA where it initially tried to do good (while not mentioning any of the good it actually did, just the sacrifices that people made for it), was then corrupted by people with impure intentions, and now no longer does.

Regardless of what you or I think of the AI safety movement, I think that the people who believe in it believe in it seriously, and got there primarily through reasoning from EA principles. It isn't a corruption of EA ideas of doing good, just a different way of accomplishing them, though we can (and should) disagree on how the weighting of these factors plays out. And it primarily hasn't supplanted the other ways that people within the movement are doing good; it's supplemented them. When people's first exposure to EA ideas leads them towards the "things can't be better" meme, that's something that I think is worth
Charlie G 🔹
Thanks for the response; to be honest, it's something that I'd agree with too. I've edited my initial comment to better reflect what's actually true. I wouldn't call the EA Global that I've been to an "AI Safety Conference," but if the Bay Area one is truly different, it wouldn't surprise me. "Well-funded" is also subjective, and I think it's likely that I was letting my reflexive defensiveness get in the way of engaging directly. That said, I think the broader points about it exposing a weakness in EA comms and the comments reflecting broad low-trust attitudes towards ideas like EA still stand, and I hope people continue to engage with them.

Yep, 100% agree about the weakness in EA comms. I'm happy there's been a fair amount of chat about this on the forum recently.

Despite the slightly terrifying security implications of the breakdown in unity between America and the rest of the NATO alliance, I think it also offers a really promising opportunity re: shifting global AI development and governance towards a safer path in some scenarios. 

Right now the US and China have adopted a 'race' dynamic... needless to say, this is hugely dangerous and really raises the risk of irresponsible practices and critical errors from both AI superpowers as we enter the critical phase towards AGI. The rupture of UK/EU ... (read more)

Heads up for job-board users: you can now find more roles (1,200+) and set custom email alerts on our job board.

For added context: As promised earlier, we’re continuing to scale and improve our job board, to help talented people find impactful roles (including in causes, regions, and orgs that might have been underrepresented in EA so far). Mainly, you can now find more roles than before and set alerts for your chosen filters, in addition to smaller improvements, like being able to filter for highlighted roles. We’ll continue doing significant wo... (read more)

I was on it yesterday and thought "huh! 1200+ jobs is a lot!" Well done scaling this up!

Kestrel🔸
Amazing work! I really appreciate everything you're doing to get more people into jobs that meaningfully improve the world.

Rethink Priorities is hiring an AI Strategy Team Lead. The full job description and application form are available on our careers page.

If you know anyone who may be a strong fit for applied strategy, research, or programmatic work focused on reducing AI-related existential risks and securing positive outcomes, we’d appreciate you sharing this opportunity.

We warmly encourage anyone who thinks they might be a good fit to apply.

I'll be in D.C. on the 12th of February and the morning of the 13th, before heading to EAG. If people are around and want to meet, feel free to drop me a DM!

U.S. politics should be a main focus of US EAs right now. In the past year alone, every major EA cause area has been greatly hurt or bottlenecked by Trump. $40 billion in global health and international development funding was lost when USAID shut down, which some researchers project could lead to 14 million more deaths by 2030. Trump has signed an Executive Order that aims to block states from creating their own AI regulations, and has allowed our most powerful chips to be exported to China. Trump has withdrawn funding from, and U.S. support for, internatio... (read more)


What do people think of the idea of pushing for a constitutional convention/amendment? The coalition would be ending presidential immunity + reducing the pardon powers + banning stock trading for elected officials. Probably politically impossible but if there were ever a time it might be now.

Sean_o_h
Maybe, but this also seems like the kind of extremely broadly salient thing where it would be more difficult for EAs to make a big difference on the margins with their work and funding compared to 'regular' EA causes. (though people should also focus time and money on things important to them)
ethai
https://www.powerfordemocracies.org/research/our-recommendations/ !!

Oscar Wilde once wrote that "people nowadays know the price of everything and the value of nothing." I can see a particular type of uncharitable EA critic saying the same about our movement, grossed out by how we try to put a price tag on human (or animal) lives. This is wrong.

What they should be appalled by is the idea that a life is truly worth only $3,500, but that's not what we are claiming. The claim is that a life is invaluable. The world just happens to be such that we can buy this incredibly precious thing for the meager cost of a few thousand dollars. 

Nate Sor... (read more)
