This is a special post for quick takes by Matt_Sharp. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Lab-grown meat approved for pet food in the UK 

"The UK has become the first European country to approve putting lab-grown meat in pet food.

Regulators cleared the use of chicken cultivated from animal cells, which lab meat company Meatly is planning to sell to manufacturers.

The company says the first samples of its product will go on sale as early as this year, but it would only scale its production to reach industrial volumes in the next three years."

https://www.bbc.co.uk/news/articles/c19k0ky9v4yo

Also in the article "The Animal and Plant Health Agency - part of the Department for Environment, Food & Rural Affairs - gave the product the go-ahead."

I think there are a bunch of EAs working at Defra - I wonder if they helped facilitate this?

I think the purpose of the 'overall karma' button on comments should be changed. 

Currently, it asks 'how much do you like this overall?'. I think this should be amended to something like 'how much do you think this is useful or important?'. 

This is because I think there is still too strong a correlation between 'liking' a comment and 'agreeing' with it.

For example, in the recent post about Nonlinear, many people are downvoting comments by Kat and Emerson. Given that the post concerns their organisation, their responses should not be at risk of being hidden: they should be upvoted because it's useful and important to see their side, regardless of whether someone likes or agrees with the content.

I think a nice (maybe better) heuristic is "Do you want to see more/less of this type of post/comment on the Forum?"

I worry this heuristic works if and only if people have reasonable substantive views about what kind of thing they want to see more/less of on the Forum.

For example, if people vote in accordance with the view 'I want to see more/less [things I like/dislike or agree/disagree with]', then this heuristic functions just the same as a like/dislike or agree/disagree vote (which I think would be bad). If people vote in accordance with the view 'I want to see more/less [posts which make substantive contributions, which others may benefit from, even if I strongly disagree with them/don't think they are well made]', then the heuristic functions much more like Matt's.

I agree with your high level point but not necessarily the example you give - I agree with Habryka's reasoning.

I have seen a handful of instances where people wrote what I believe were useful contributions that might spark discussion, but were downvoted because they were controversial.

Note that I downvoted their responses (intentionally separating this from agree/disagree voting) because I saw them as attempts to enforce a bad norm, and some of them as a form of intimidation. I endorse downvoting them (and also think other people should do that).

Saturday night fun: ineffective fundraising

I've been rewatching an old 90s British satirical news programme, and came across this brutally brilliant sketch. It's almost proto-EA.

It was funny until he insulted her appearance. Then 🤢

Yeah, he's not supposed to be a pleasant character, and is typically satirising some of the nastiness of the British press (of the time, though still relevant even now). In another episode his interviewing technique caused Australia and Hong Kong to declare war on each other:
