
Mapping the Landscape of Digital Sentience Research: A Literature-Based Analysis

Kayode Adekoya - Economist | AI Safety Advocate


June 2025

Executive Summary

This project aims to conduct a structured and comprehensive mapping of the emerging research landscape surrounding digital sentience—i.e., the possibility that artificial systems could possess morally relevant subjective experiences. By synthesizing research from philosophy of mind, neuroscience, AI research, legal theory, and ethics, the study seeks to identify conceptual foundations, active areas of scholarly attention, and underexplored but important research gaps.

The motivation stems from the growing possibility that digital systems may soon exhibit traits associated with sentience—whether in large language models, embodied agents, or multi-modal AI systems. Given the immense moral and policy implications of such a development, it is critical to anticipate and understand the current shape of academic and applied discourse. The deliverables of this project will support grantmakers, researchers, and policymakers in navigating this complex field more strategically.

This project is particularly timely in the context of Longview Philanthropy's interest in supporting applied research into digital moral patienthood. It will be informed by—and build upon—recent reviews and analyses such as:

- Mardiani & Iswahyudi (2023): Bibliometric mapping of AI research using interdisciplinary indicators.
- Bralin et al. (2024): Literature landscape of AI and ML in physics education, demonstrating methodological approaches applicable across domains.
- Eyal (2021): Legal, regulatory, and normative analysis of AI personhood from a U.S. law perspective.
- Misiejuk et al. (2024): Systematic literature review of generative AI in learning analytics.
- Maas (2023): Critical review of advanced AI governance proposals, mapping gaps and ambiguities in regulation.

Together, these sources establish a precedent and framework for this kind of work. The proposed project will extend that methodology into the unique terrain of AI sentience, with a strong focus on strategic field-building, neglected questions, and actionable synthesis.

Problem Statement

Despite increasing discussion about the potential for artificial entities to possess morally relevant sentience, the research landscape remains fragmented, siloed, and conceptually unstable. Key disciplines—such as philosophy of mind, machine consciousness, neuroscience, AI research, ethics, and public policy—often operate with incompatible definitions and assumptions. This fragmentation creates confusion about the actual state of knowledge, risks premature moral commitments, and hinders strategic funding and field-building.

Furthermore, digital sentience poses unique challenges for moral and legal reasoning. How can we determine whether a system is sentient? What indicators, models, or thresholds are useful? What governance frameworks should be developed to manage risk without overstating certainty? These questions require interdisciplinary synthesis—yet no comprehensive landscape mapping exists to guide such work.

Objectives & Outcomes

Objectives:
1. Map the current state of literature across philosophy of mind, AI, neuroscience, legal theory, and ethics that relates to digital or artificial sentience.
2. Identify and categorize core conceptual frameworks and their assumptions.
3. Conduct a gap analysis to locate underexplored questions, neglected approaches, and potential coordination failures.
4. Translate findings into actionable insights for funders and field‑builders.

Outcomes:
- Comprehensive literature review report summarizing findings.
- Annotated bibliography and open‑access reference dataset.
- Visual conceptual map showing interrelations among disciplines and clusters.
- Strategic recommendations memo highlighting underfunded areas and high‑leverage research opportunities.
- Optional public briefing or virtual roundtable to engage stakeholders.

Methodology

The project will apply a hybrid methodology combining bibliometric techniques, qualitative content analysis, and strategic synthesis.

1. Literature Identification and Corpus Curation
  - Use bibliographic databases and citation tracing to build a structured corpus.
  - Include peer‑reviewed papers, white papers, and grey literature.

2. Thematic Coding and Landscape Mapping
  - Qualitatively code selected literature to identify recurring themes and paradigms.
  - Group findings into clusters (e.g., moral status, consciousness indicators, governance models).

3. Gap Analysis and Strategic Synthesis
  - Identify research gaps, neglected questions, and high‑leverage ideas.
  - Develop typologies of uncertainty and opportunity.

4. Deliverable Production
  - Draft reports, annotated bibliography, and visual maps.
  - Ensure all content is open‑access and publicly shareable.
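To make the thematic-coding step (step 2) concrete, here is a minimal sketch of keyword-based document coding. The corpus entries, theme labels, and indicator keywords below are illustrative assumptions, not the project's actual coding scheme; in practice, coding would be done qualitatively and iteratively rather than by simple keyword matching.

```python
from collections import defaultdict

# Hypothetical mini-corpus: short abstracts stand in for the curated literature.
corpus = {
    "paper_a": "moral status of artificial agents and moral patienthood",
    "paper_b": "neural correlates and consciousness indicators in machine systems",
    "paper_c": "governance models and legal personhood for artificial intelligence",
}

# Illustrative coding scheme: cluster label -> indicator keywords.
# Labels mirror the example clusters above; the keywords are assumptions.
themes = {
    "moral status": ["moral", "patienthood"],
    "consciousness indicators": ["consciousness", "indicators", "neural"],
    "governance models": ["governance", "legal", "personhood"],
}

def code_corpus(corpus, themes):
    """Assign each document to every theme whose keywords appear in its text."""
    clusters = defaultdict(list)
    for doc_id, text in corpus.items():
        tokens = set(text.lower().split())
        for theme, keywords in themes.items():
            if any(kw in tokens for kw in keywords):
                clusters[theme].append(doc_id)
    return dict(clusters)

for theme, docs in code_corpus(corpus, themes).items():
    print(theme, "->", docs)
```

A real pipeline would replace the keyword matcher with human coding or a validated classifier, but the same structure—documents in, labeled clusters out—underlies the landscape map deliverable.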

Timeline & Milestones

Duration: August–October 2025

- Aug 1–15: Corpus Curation – Compile initial bibliography and inclusion criteria.
- Aug 16–31: Coding & Thematic Mapping – Begin literature coding and categorization.
- Sept 1–15: Synthesis & Gap Analysis – Develop findings and strategic insights.
- Sept 16–30: Report Drafting – Prepare first draft of all deliverables.
- Oct 1–15: Review & Feedback – Peer review and revision phase.
- Oct 16–31: Finalization & Dissemination – Publish outputs; optional public briefing.

Team & Expertise

Kayode Adekoya – Project Lead

Kayode Adekoya is an independent researcher and the creator of Aletheia, an open‑source AI literature review tool. His work focuses on responsible innovation, epistemology, and AI ethics. He has prior experience managing research projects, synthesizing complex literature, and delivering strategic outputs for funders and communities in the Effective Altruism and AI safety spaces.

Risks & Mitigation Strategies

Risk: Scope Overload
Description: The topic may grow too broad.
Mitigation: Apply strict inclusion criteria and phase the work by priority.

Risk: Misuse of Findings
Description: Results may be misinterpreted.
Mitigation: Frame all findings with explicit statements of uncertainty and disclaimers against overinterpretation.

Risk: Access Barriers
Description: Key sources may be unavailable.
Mitigation: Use open-access repositories, institutional access tools, and direct outreach to authors.

Risk: Solo Execution Bottlenecks
Description: Project delay due to illness or overload.
Mitigation: Modular work structure and part‑time support options.

Risk: Low Uptake
Description: Deliverables not used widely.
Mitigation: Disseminate via EA, AI governance, and research networks.
