PSR

13 karma · Joined

Comments: 5
The original article by Kristin Andrews makes a distinction between the question of the distribution of consciousness (which animals are conscious?) and the question of the dimensions of consciousness in a given animal (how is the animal conscious? What does its mental life look like?). It then argues that we should, as a working hypothesis, assume that all animals are conscious and study the dimensions-of-consciousness question instead, and only then develop a theory of consciousness capable of answering the distribution question.

In my view, you should 'go all the way' with this and conclude directly that the 'distribution question' is ill-formed (or redundant), and that only questions about how mental lives are structured are meaningful. Any process or activity can semantically be labeled a 'mental life', and the 'dimension question' then ultimately becomes a proxy for how similar you think these mental lives are to human mental lives (or to some idealized extrapolation of human mental life, in which case the question becomes what this extrapolation should look like). If you want, you can assign processes (e.g. behavioral and neural processes) that are structured very differently from human mental lives a low 'consciousness' score and processes very similar to human mental lives a very high score, without positing an additional on-off property of sentience. In that sense, answering the 'dimension of consciousness' question also answers the reformulated 'distribution' question: you simply label processes with a very low score as 'unconscious'.

In my view, 'sentience' in the sense of the 'hard problem' is a red herring (and a philosophically ill-formed concept). The ethically relevant question is whether a mental life is structured to include interests that our moral values demand we respect.

As an aside, the summary rightly notes that unreflectively assuming that something is unconscious can lead to ethical concerns. It is therefore a bit jarring that the text seemingly states as a fact that modern LLMs are unconscious without giving any reasoning for this (though the text might also be read as assuming this only for the sake of answering a hypothetical objection).

Thank you very much for this detailed answer. The points where superforecasters have demonstrably been wrong on AI-related questions are especially interesting and are certainly a relevant argument against updating too much in their direction. Some kind of track record of predictions made by superforecasters, experts, and public figures would be extremely interesting. Do you know whether something like this can be found somewhere?

To push back a bit against the idea that it is hard to find a good reference class and that superforecasters have to rely on vibes: yes, it might be hard, but aren't superforecasters precisely the people with a strong track record of finding a good methodology for making predictions, even when it's hard? AI extinction is probably not the only question where making a forecast is tricky.

Edit: Just a few days ago we got this, which is very relevant: https://forum.effectivealtruism.org/posts/fp5kEpBkhWsGgWu2D/assessing-near-term-accuracy-in-the-existential-risk

Sure, even a 0.15% probability seems scary by itself, though it might be low enough that you start wondering about trade-offs with delaying technological progress.

Apart from that, I would be interested in how people with a much higher P(doom) than that reconcile their belief with these numbers. Are there good reasons to believe that these numbers are not representative of the actual beliefs of superforecasters? Or that superforecasters are somehow systematically wrong or untrustworthy on this issue?

Hello everyone,

I have a question for those in the community who focus on AI safety: what do you make of superforecasters seemingly often having a very low P(doom)?

For example, in this survey (https://metr.org/blog/2025-08-20-forecasting-impacts-of-ai-acceleration/) superforecasters give a median P(doom) of 0.15% by 2100. You can find this number in the full write-up (https://docs.google.com/document/d/1QPvUlFG6-CrcZeXiv541pdt3oxNd2pTcBOOwEnSStRA/edit?usp=sharing), which is also linked in the blog post.

This is far below pretty much any value prominent AI safety figures cite, which are typically 10%+ or even as high as ~90%. Does this give you pause? Or how do you explain the discrepancy?


Hi everyone! It seems quite plausible to me that EA cannot indefinitely prevent itself from becoming a politically charged topic once it becomes more prominent in public awareness. What are the current ideas about how to handle this?