my name is jay. i’m a relapsing ex-philosopher specialising in philosophy of mind & cognitive/evolutionary psychology.
i'm always interested in opportunities (courses, internships, work) at the intersection of philosophy & AI. this includes work on AI ethics & governance.
Hey! I'm not sure I see the prima facie case for #1. What makes you think that building non-conscious AI would be more resource-intensive/expensive than building conscious AI? Current AIs are most likely non-conscious.
As for #2, I have heard such arguments before in other contexts (relating to the meat industry), but I found them preposterous on their face.
What?
Inspired by the PhilPapers survey, we are conducting a survey of experts’ beliefs about key topics pertaining to AI consciousness & moral status. These include:
🧠 Consciousness/sentience
⚖️ Moral status/moral agency
💥 Suffering risk (“S-risk”) related to AI consciousness (e.g. AI suffering)
⚠️ Existential risk (“X-risk”) related to AI consciousness (e.g. resource competition with conscious AI)
Why?
Such a survey promises to enrich our understanding of key safety risks related to conscious AI in several ways.
📊 Most importantly, the results of this survey will provide a general picture of experts’ views about the probability, promises, & perils of AI consciousness.
⚔️ Analysing the types of answers given by respondents might help identify fault lines between industry, academia, & policymakers.
📈 Repeating the survey annually can help monitor trends (e.g. belief updates in response to technological advances/breakthroughs, diverging attitudes between industry & academia, emergent policy levers).