This is a special post for quick takes by Ben Stewart. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

On the recent post on Manifest, there’s been another instance of a large voting group (30-40ish [edit to clarify: 30-40ish karma, not 30-40ish individuals]) arriving and downvoting any progressive-valenced comments (there were upvotes and downvotes prior to this, but in a more stochastic pattern). This is similar to what occurred with the eugenics-related posts last year. Wanted to flag it to give later readers a picture of the dynamics at play.

Manifold openly offered funding for voting rings in their Discord:

Just noting, for anyone else reading the parent comment but not the screenshot, that the discussion in question was about Hacker News, not the EA Forum.

Also it was clearly not about Manifest. (Though it is nonetheless very cringe).

I would be surprised if it's 30-40 people. My guess is it's more like 5-6 people with reasonably high vote-strengths. Also, I highly doubt that the overall bias of the conversation here leans towards progressive-valenced comments being suppressed. EA is overwhelmingly progressive and has a pretty obvious anti-right bias (which, like, I am a bit sympathetic to, but I feel like a warning in the opposite direction would be more appropriate).

My wording was imprecise - I meant 30-40ish in terms of karma. I agree the number of people is more likely to be 5-12. And my point is less about overall bias than about a particular voting dynamic: at first, upvotes and downvotes occurring in a fairly typical pattern, then a large and sudden influx of downvotes on everything from a particular camp.

There really should be a limit on the quantity of strong upvotes/downvotes one can deploy on comments to a particular post -- perhaps both "within a specific amount of time" and "in total." A voting group of ~half a dozen users should not be able to exert that much control over the karma distribution on a post. To be clear, I view (at least strong) targeted "downvoting [of] any progressive-valenced comments" as inconsistent with Forum voting norms.

At present, the only semi-practical fix would be for users on the other side of the debate to go back through the comments, guess which ones had been targeted by the voting group, and apply strong upvotes in the hope of roughly neutralizing the group's norm-breaking behavior. Both the universe in which karma counts are corrupted by small voting groups and the universe in which karma counts are significantly determined by a clash between voting groups and self-appointed defenders seem really undesirable.

We implemented this on LessWrong! (indeed based on some of my own bad experiences with threads like this on the EA Forum)

The EA Forum decided to forum-gate the relevant changes, but on LW people would indeed be prevented from voting in the way I think voting is happening here: https://github.com/ForumMagnum/ForumMagnum/commit/07e0754042f88e1bd002d68f5f2ab12f1f4d4908
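For illustration, here is a minimal sketch of how a per-post cap on strong votes could be enforced. The names and limits below are assumptions chosen for clarity, not the actual ForumMagnum code; the linked commit is the authoritative version of what LessWrong implemented.

```typescript
// Illustrative sketch only - hypothetical types and limits, not ForumMagnum's.

interface VoteRecord {
  userId: string;
  commentId: string;
  postId: string;
  strength: "normal" | "strong";
  castAt: Date;
}

// Assumed caps: a total budget of strong votes per user within one post's
// comment section, plus a rolling one-hour limit.
const MAX_STRONG_VOTES_PER_POST = 5;
const MAX_STRONG_VOTES_PER_HOUR = 3;

function canCastStrongVote(priorVotesOnPost: VoteRecord[], now: Date): boolean {
  const strongVotes = priorVotesOnPost.filter(v => v.strength === "strong");

  // Total cap: the user has spent their strong-vote budget on this post.
  if (strongVotes.length >= MAX_STRONG_VOTES_PER_POST) return false;

  // Rolling-window cap: too many strong votes cast within the last hour.
  const oneHourAgo = new Date(now.getTime() - 60 * 60 * 1000);
  const recentStrongVotes = strongVotes.filter(v => v.castAt > oneHourAgo);
  return recentStrongVotes.length < MAX_STRONG_VOTES_PER_HOUR;
}
```

On a failed check, the vote could either be rejected outright or silently downgraded to a normal-strength vote; which behavior is better is a product question, not shown here.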

Thanks for the suggestion, Jason! @JP Addison says that he forum-gated it at the time because he wanted to “see how it went over, whether they endorsed it on reflection. They previously wouldn’t have liked users treating votes as a scarce resource.” LW seems happy with how it’s gone, so we’ll go ahead and remove the forum-gating.

I really enjoyed this 2022 paper by Rose Cao ("Multiple realizability and the spirit of functionalism"). A common intuition is that the brain is basically a big network of neurons with input on one side and all-or-nothing output on the other, and the rest of it (glia, metabolism, blood) is mainly keeping that network running. 
The paper's helpful for articulating how that model is impoverished, and it argues that the right level of explanation for brain activity (and the resulting psychological states) may depend on the messy, complex biological details, such that non-biological substrates for consciousness are implausible. (Some of those details: spatial and temporal determinants of activity, chemical transducers and signals beyond excitation/inhibition, self-modification, plasticity, glia, functional meshing with the physical body, multiplexed functions, generative entrenchment.)
The argument doesn't necessarily oppose functionalism, but I think it's a healthy challenge to my previous confidence in multiple realisability within plausible limits of size, speed, and substrate. It's also useful for pointing out just how different artificial neural networks are from biological brains. This strengthens my feeling of the alien-ness of AI models, and updates me towards greater scepticism about digital sentience.
I think the paper's a wonderful example of marrying deeply engaged philosophy with empirical reality.
