
Note: This post was crossposted from The Digital Minds Newsletter by the EA Forum team, who encouraged the crosspost, with the authors’ permission. It was briefly mentioned in an earlier announcement post. The authors may not see or respond to comments here.


Welcome to the first edition of the Digital Minds Newsletter, collating all the latest news and research on digital minds, AI consciousness, and moral status.

Our aim is to help you stay on top of the most important developments in this emerging field. In each issue, we will share a curated overview of key research papers, organizational updates, funding calls, public debates, media coverage, and events related to digital minds. We want this to be useful for people already working on digital minds as well as newcomers to the topic.

This first issue looks back at 2025 and reviews developments relevant to digital minds. We plan to release multiple editions per year.

If you find this useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.

Bradford, Lucius, and Will

In this issue:

  1. Highlights
  2. Field Developments
  3. Opportunities
  4. Selected Reading, Watching, & Listening
  5. Press & Public Discourse
  6. A Deeper Dive by Area
[Image: Brain Waves, generated by Gemini]

1. Highlights

In 2025, the idea of digital minds shifted from a niche research topic to one taken seriously by a growing number of researchers, AI developers, and philanthropic funders. Questions about real or perceived AI consciousness and moral status appeared regularly in tech reporting, academic discussions, and public discourse.

Anthropic’s early steps on model welfare

Following its support for the 2024 report “Taking AI Welfare Seriously”, Anthropic expanded its model welfare efforts in 2025 and hired Kyle Fish as an AI welfare researcher. Fish discussed the topic and his work in an 80,000 Hours interview. Anthropic’s leadership is also taking the issue seriously: CEO Dario Amodei drew attention to the relevance of model interpretability to model welfare and mentioned model exit rights at the Council on Foreign Relations.

Several of the year’s most notable developments came from Anthropic: it facilitated an external model welfare assessment conducted by Eleos AI Research, included references to welfare considerations in model system cards, ran a related fellowship program, introduced a “bail button” that lets models exit distressing interactions, and made internal commitments around keeping promises and discretionary compute. Beyond hiring Fish, Anthropic also hired the philosopher Joe Carlsmith, who has worked on AI moral patiency.

The field is growing

In the non-profit space, Eleos AI Research expanded its work and organized the Conference on AI Consciousness and Welfare, while two new non-profits, PRISM and CIMC, also launched. AI for Animals rebranded to Sentient Futures, with a broader remit including digital minds, and Rethink Priorities refined their digital consciousness model.

Academic institutions undertook novel research (see below) and organized important events, including workshops run by the NYU Center for Mind, Ethics, and Policy, the London School of Economics, and the University of Hong Kong.

In the private sector, Anthropic has been leading the way (see above), but others have also been making strides. Google researchers organized an AI consciousness conference, three years after the company fired Blake Lemoine. AE Studio expanded its research into subjective experiences in LLMs. And Conscium launched an open letter encouraging a responsible approach to AI consciousness.

Philanthropic actors have also played a key role this year. The Digital Sentience Consortium, coordinated by Longview Philanthropy, issued the first large-scale funding call specifically for research, field-building, and applied work on AI consciousness, sentience, and moral status.

Early signs of public discourse

Media coverage of AI consciousness, seemingly conscious behavior, and phenomena such as “AI psychosis” increased noticeably. Much of the debate focused on whether emotionally compelling AI behavior poses risks, often assuming consciousness is unlikely. High-profile comments, such as those by Mustafa Suleyman, and widespread user reports added to the confusion, prompting a group of researchers (including us) to create the WhenAISeemsConscious.org guide. In addition, major outlets such as the BBC, CNBC, The New York Times, and The Guardian published pieces on the possibility of AI consciousness.

Research advances

Patrick Butlin and collaborators published a theory-derived indicator method for assessing AI systems for consciousness, an updated version of their 2023 report. Empirical work by Anthropic researcher Jack Lindsey explored the introspective capacities of LLMs, as did work by Dillon Plunkett and collaborators. David Chalmers released papers on interpretability and on what we talk to when we talk to LLMs. In our own research, we conducted an expert forecasting survey on digital minds, finding that most respondents assigned at least a 4.5% probability to conscious AI existing in 2025 and at least a 50% probability to conscious AI arriving by 2050.


2. Field Developments

Highlights from some of the key organizations in the field.

NYU Center for Mind, Ethics, and Policy

Eleos AI

Rethink Priorities

Longview Philanthropy

  • Launch of the Digital Sentience Consortium, a collaboration between Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund. This included funding for:
    • Research fellowships for technical and interdisciplinary work on AI consciousness, sentience, moral status, and welfare.
    • Career transition fellowships to support people moving into digital minds work full-time.
    • Applied projects funding on topics such as governance, law, public communication, and institutional design for a world with digital minds.

Global Priorities Institute

  • GPI has closed. Its website archives the work produced during its operation and features two sections on digital minds.

PRISM - The Partnership for Research into Sentient Machines

Sentience Institute

Sentient Futures

Other noteworthy organizations


3. Opportunities

If you are considering moving into this space, here are some entry points that opened or expanded in 2025. We will use future issues to track new calls, fellowships, and events as they arise.

Funding and fellowships

  • The Anthropic Fellows Program for AI safety research is accepting applications and plans to work with some fellows on model welfare; deadline January 12, 2026.
  • Good Ventures now appears open to supporting work on digital minds recommended by Coefficient Giving (previously Open Philanthropy).
  • Foresight Institute is accepting grant applications; whole brain emulations fall within the scope of one of its focus areas.
  • Macroscopic Ventures has AI welfare as a focus area and expects to significantly expand its grantmaking in the coming years.
  • Astera Institute was launched in 2025 and focuses on “bringing about the best possible AI future”.
  • The Longview Consortium for Digital Sentience Research and Applied Work is now closed.

Events and networks

  • The NYU Mind, Ethics, and Policy Summit will be held on April 10th and 11th, 2026. The Call for Expressions of Interest is currently open.
  • The Society for the Study of Artificial Intelligence and Simulation of Behaviour will hold a convention at the University of Sussex on the 1st and 2nd of July; Anil Seth will be the keynote speaker, and proposals for topics related to digital minds were invited.
  • Sentient Futures is holding a Summit in the Bay Area from the 6th to 8th of February. They will likely hold another event in London in the summer. Keep an eye on their website for details.
  • Benjamin Henke and Patrick Butlin will continue running a speaker series on AI agency in the spring. Remote attendance is possible. Requests to be added to the mailing list can be sent to benhenke@gmail.com. Speakers will include Blaise Aguera y Arcas, Nicholas Shea, Joel Leibo, and Stefano Palminteri.

Calls for papers


4. Selected Reading, Watching, & Listening

Books

In 2025, the following book drafts were posted, and the following books were published or announced:

Podcasts

This year, we’ve heard many podcast guests discuss topics related to digital minds, and we’ve also listened to podcasts dedicated entirely to the topic.

Videos

  • Anthropic released interviews with Kyle Fish and Amanda Askell, both of which address model welfare.
  • Closer to Truth released a set of interviews from MindFest 2025.
  • Cognitive Revolution released an interview with Cameron Berg on LLMs reporting consciousness.
  • Google DeepMind’s Murray Shanahan discussed consciousness, reasoning, and the philosophy of AI.
  • The International Center for Consciousness Studies (ICCS) released all the keynotes from its AI and Sentience Conference.
  • IMICS featured a talk from David Chalmers discussing identity and consciousness in LLMs.
  • The NYU Center for Mind, Ethics, and Policy has released a number of event recordings.
  • Science, Technology & the Future released a talk by Jeff Sebo on AI welfare from Future Day 2025.
  • Sentient Futures posted recordings of talks from the AI, Animals, and Digital Minds conferences in London and New York.
  • TEDx featured Jeff Sebo discussing “Are we even prepared for a sentient AI?”
  • PRISM released the recordings of the Conscious AI meetup group run in collaboration with Conscium.

Blogs and magazines


5. Press & Public Discourse

In 2025, there was an uptick in discussion of AI consciousness in the public sphere, with articles in the mainstream press and prominent figures weighing in. Below are some of the key pieces.

AI Welfare

Is AI consciousness possible?

Growing Field

Seemingly Conscious AI

  • Mustafa Suleyman, CEO of Microsoft AI, argued in “We must build AI for people; not to be a person” that “Seemingly Conscious AI” poses significant risks, urging developers to avoid creating illusions of personhood, given there is “zero evidence” of consciousness today.
    • Robert Long challenged the “zero evidence” claim, clarifying that the research Suleyman cited actually concludes there are no obvious technical barriers to building conscious systems in the near future.
  • The New York Times, Zvi Mowshowitz, Douglas Hofstadter, and several others describe “AI Psychosis,” a phenomenon where users interacting with chatbots develop delusions, paranoia, or distorted beliefs (such as believing the AI is conscious or divine), often reinforced by the model’s sycophantic tendency to validate the user’s own projections.
    • Lucius, Bradford, and collaborators launched the guide WhenAISeemsConscious.org, and Vox’s Sigal Samuel published practical advice to help users ground themselves and critically evaluate these interactions.

6. A Deeper Dive by Area

Below is a deeper dive by area, covering a longer list of developments from 2025. This section is designed for skimming, so feel free to jump to the areas most relevant to you.

Governance, policy, and macrostrategy

Consciousness research

Doubts about digital minds

Social science research

Ethics and digital minds

AI safety and AI welfare

AI and robotics developments

AI cognition and agency

Brain-inspired technologies


Thank you for reading! If you found this article useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.

Bradford, Lucius, and Will


Comments

If you're interested in contributing to this space, you should check out the SPAR AI welfare projects! 

Some of them include: 

  • Larissa Schiavo, Jeff Sebo, and Toni Sims on: Should We Give AIs a Wallet? Toward a Framework for AI Economic Rights
  • Jeff Sebo, Diana Mocanu, Visa Kurki, and Toni Sims on: Preparing for AI Legal Personhood: Ethical, Legal, and Political Considerations
  • Arvo Munoz Moran on: Exploring Bayesian methods for modelling AI consciousness in light of state-of-the-art evidence and literature

Check them out and others here: sparai.org/projects/sp26
