
A Comparative Analysis of Global South and Western Perspectives and Their Implications

I. Introduction: Conceptualizing "AI Rights" in a Divided World

The rapid proliferation of Artificial Intelligence (AI) across myriad sectors of society has precipitated an urgent global discourse surrounding its ethical governance and the safeguarding of rights. However, the very notion of "AI rights" is multifaceted and contested, interpreted differently across geopolitical and philosophical landscapes. Understanding these divergent conceptualizations, particularly between dominant Western paradigms and emerging perspectives from the Global South, is crucial for navigating the complex future of AI and ensuring its development and deployment serve humanity equitably.

This report will clarify that "AI rights" is not a monolithic term. Predominantly, it concerns the protection of existing human rights in the face of AI's expanding influence—rights to privacy, non-discrimination, due process, and freedom of expression, among others. This dimension is a paramount concern in many Global South discourses, where the immediate impacts of AI on vulnerable populations are acutely felt. Concurrently, "AI rights" also encompasses a more philosophical, and often futuristic, debate regarding the potential rights for AI entities themselves, touching upon questions of AI personhood, moral status, and consciousness.1 While Western ethical traditions extensively explore this latter aspect, certain non-Western philosophies, such as those found in Indian traditions, offer unique perspectives that might lead to different conclusions than those derived from purely materialist Western viewpoints.3 The Global South's engagement with "AI rights" is often more immediately focused on addressing power asymmetries, rectifying historical injustices, and ensuring that AI technologies contribute to collective well-being and sustainable development goals, rather than abstract notions of machine sentience. The framing of "AI rights" itself can thus be a site of contention. Western discourse, often driven by nations leading in cutting-edge AI development, may implicitly center the debate around the potential "rights of AI" due to its focus on advanced AI research, including Artificial General Intelligence (AGI).1 In contrast, many nations in the Global South experience AI primarily as imported technology, grappling with challenges such as inadequate infrastructure, the exploitation of data, and algorithmic biases that directly impact fundamental human rights.5 Consequently, the definition of "AI rights" in these contexts will likely prioritize the mitigation of these immediate human impacts over speculative machine rights. 
This divergence is not merely a matter of differing concerns but reflects a deeper power imbalance in determining the terms and priorities of the global AI debate.

The imperative for diverse global perspectives in defining AI ethics and rights cannot be overstated. A Western-centric definition risks perpetuating "epistemic injustice" 5 by marginalizing alternative knowledge systems, values, and lived experiences. Such an approach can inadvertently reinforce existing global power dynamics and lead to AI systems that are misaligned with the needs and cultural contexts of a significant portion of the world's population. A globally legitimate and effective framework for AI governance necessitates the genuine inclusion of diverse voices, particularly from regions that are most vulnerable to the adverse impacts of AI and have been historically excluded from shaping dominant technological paradigms.5 The urgency for the Global South to articulate its own AI rights frameworks extends beyond cultural relevance; it is a strategic imperative for achieving geopolitical autonomy and ensuring equitable participation in the burgeoning global digital economy. As AI is a transformative technology with profound economic and geopolitical implications 5, passive adoption of Western-defined AI rights and governance models by the Global South could perpetuate dependencies and reinforce existing global power asymmetries.5 Actively defining AI rights based on local values, historical experiences, and contemporary needs—such as prioritizing data sovereignty or community benefit—represents a crucial step towards digital self-determination.6 This proactive stance can also foster local innovation tailored to specific contexts, rather than relying on one-size-fits-all solutions developed in and for the Global North.13

This report argues that the Global South, drawing from distinct philosophical traditions, historical experiences of colonialism, and pressing socio-economic realities, is articulating definitions of AI "rights" that prioritize collective well-being, data sovereignty, and decolonial justice. These definitions often diverge significantly from dominant Western individualistic and market-oriented frameworks, carrying profound implications for global AI governance, equity, development, and the very future of human-AI interaction.

II. Dominant Western Paradigms: Individual Rights and Risk Mitigation

Western approaches to AI governance, primarily spearheaded by the European Union and the United States, have established influential paradigms centered on the protection of individual rights and the mitigation of risks associated with AI technologies. These frameworks, while differing in their regulatory intensity and scope, share common philosophical underpinnings rooted in liberal democratic traditions.

A. The EU AI Act: A Comprehensive, Risk-Based Approach to Fundamental Rights

The European Union has positioned itself at the forefront of AI regulation with its ambitious AI Act, aiming to establish the world's first comprehensive legal framework for artificial intelligence.14 The Act is built upon a risk-based classification system, categorizing AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk.15 This tiered approach dictates the level of regulatory scrutiny and obligations imposed on AI providers and users.

At its core, the EU AI Act champions several key principles: safety, transparency, traceability, non-discrimination, environmental friendliness, and robust human oversight.14 The legislation explicitly seeks to ensure that AI systems deployed within the EU are "safe, transparent, traceable, non-discriminatory and environmentally friendly" and are "overseen by people, rather than by automation, to prevent harmful outcomes".14 Reflecting a strong concern for fundamental rights and the prevention of societal harm, the Act prohibits certain AI practices deemed to pose an unacceptable risk. These include social scoring by public authorities, the exploitation of vulnerabilities of specific groups through cognitive behavioral manipulation (such as voice-activated toys encouraging dangerous behavior in children), and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.15

Stringent requirements are imposed on high-risk AI systems, particularly those that could negatively affect safety or fundamental rights. This category encompasses AI used in critical infrastructures, medical devices, educational or vocational training, employment, access to essential private and public services (like credit scoring or welfare benefits), law enforcement, migration and border control management, and the administration of justice.15 For generative AI models, such as ChatGPT, while not automatically classified as high-risk, the Act mandates compliance with specific transparency obligations. These include disclosing that content is AI-generated, designing models to prevent the generation of illegal content, and publishing summaries of copyrighted data used for training.15 High-impact general-purpose AI models that might pose systemic risks, like GPT-4, are subject to thorough evaluations and incident reporting requirements.15

B. US Frameworks: The AI Bill of Rights and NIST's Risk Management Framework

In the United States, AI governance has been characterized by a combination of executive initiatives, voluntary frameworks, and sector-specific regulations. A landmark development is the White House's "Blueprint for an AI Bill of Rights," which outlines five core principles intended to guide the design, use, and deployment of automated systems: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback.18 This blueprint explicitly aims to "root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America".18

The principle of Algorithmic Discrimination Protections emphasizes that individuals should not face discrimination by algorithms, and systems should be designed and used equitably. This involves proactive equity assessments, the use of representative data, and ongoing testing for disparities.18 The Data Privacy principles advocate for built-in protections, user agency over data, and heightened oversight for surveillance technologies, promoting concepts like privacy by design and meaningful consent.18

Complementing these principles is the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF). This voluntary guidance document is designed to help organizations manage AI-related risks and promote the development and use of trustworthy and responsible AI systems.19 The AI RMF emphasizes accountability, transparency, and ethical behavior in AI development and deployment.19 It outlines several characteristics of trustworthy AI systems, including validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy-enhancement, and fairness with mechanisms to mitigate harmful bias.20

C. Underlying Philosophical Assumptions and Focus

Both EU and US frameworks, despite their differences, are deeply rooted in liberal democratic traditions that place a strong emphasis on individual rights, personal autonomy, and protection from harm. The regulatory efforts also reflect market-oriented considerations, aiming to foster innovation and maintain technological leadership while establishing necessary safety and ethical boundaries. The EU, for instance, seeks to create "better conditions for the development and use of this innovative technology" 15, while a stated goal in the US has been to "remove barriers to American leadership in Artificial Intelligence".21

While the EU's approach is characterized by a comprehensive, legally binding regulation (the AI Act), the US has, to date, relied more on a combination of guidelines, voluntary frameworks, and existing sector-specific laws, reflecting distinct regulatory philosophies.15 This difference in approach—a centralized, preemptive regulatory push in the EU versus a more fragmented, often reactive, and market-driven approach in the US—suggests varying balances struck between innovation promotion, state control, and rights protection. This divergence could lead to different compliance landscapes and may reinforce the "Brussels effect," where the EU's stringent standards become a de facto global benchmark, influencing corporate behavior and regulatory trends worldwide, much like the General Data Protection Regulation (GDPR) has.

A critical examination of these Western frameworks reveals that the focus on "explainability" and "transparency," while undeniably crucial, may not fully address deeper epistemic biases. These biases can be embedded in AI models predominantly trained on Western data and reflecting Western worldviews, a significant concern highlighted by decolonial critiques.5 Even if an AI system is "transparent" in its operational logic, if its foundational data and algorithms embody these inherent biases, transparency might merely reveal a flawed or biased process without correcting the underlying epistemic injustice. AI systems, as some scholars argue, often reflect "colonial epistemologies" and "Eurocentric traditions," with training data frequently skewed towards "white or white-passing individuals".5 Therefore, a level of scrutiny that goes beyond technical transparency is necessary to address how AI systems might marginalize or misrepresent non-Western knowledge systems, such as Indigenous Knowledge Systems (IKS) 5, and inadvertently impose Eurocentric methodologies globally.

Furthermore, the concept of "AI safety" within these Western frameworks is primarily human-centric, focusing on preventing harm to human beings and their interests.14 The ethical debate surrounding machine consciousness and the potential for AI personhood, while active in Western academic and philosophical circles 1, has not translated into legal rights or status for AI entities themselves. Current regulations firmly treat AI as a sophisticated tool or product, not an entity with intrinsic rights. While Western philosophy explores the possibility of machine consciousness, with some influential thinkers like Searle arguing that machines, no matter how complex, will lack genuine "consciousness" or "mind" 2, this largely remains a theoretical discussion. This contrasts with certain non-Western philosophical traditions, such as Indian concepts like "Chit-Shakti" (conscious energy), which suggest that intelligence can exist beyond biological forms.3 Such perspectives could offer alternative conceptual pathways for understanding advanced AI, even if they do not necessarily advocate for "rights" in a Western legal sense. This implies that should AI systems develop capabilities approaching AGI, Western legal frameworks might struggle to accommodate them as anything other than property or advanced tools, whereas some Global South philosophies might offer more nuanced or expansive conceptualizations.

III. Reclaiming AI Narratives: Global South Perspectives on AI Rights and Governance

As artificial intelligence becomes increasingly integrated into the fabric of global society, perspectives from the Global South are emerging as critical counterpoints to dominant Western narratives. These perspectives are shaped by unique historical experiences, particularly colonialism, distinct philosophical and cultural traditions, and pressing socio-economic realities. They call for a re-evaluation of AI's development and deployment, emphasizing decolonial justice, data sovereignty, and the alignment of AI with collective well-being and indigenous knowledge systems.

A. The Decolonial Imperative: Challenging Algorithmic Colonialism and Epistemic Injustice

A significant strand of thought emerging from and about the Global South is the critique of AI through a decolonial lens. This perspective argues that the current trajectory of AI development and deployment often mirrors and reinforces historical colonial power dynamics.5 AI systems and the analytical frameworks surrounding them frequently "stem from Western and Eurocentric traditions" 5, thereby reflecting and perpetuating "colonial epistemologies." This has led to concepts like "AI colonialism" or "robotic colonisation" 11, where AI technologies developed in and by the Global North extend influence and control over less-developed nations. Data, in this context, is often framed as a new raw material to be extracted, a "tool for exploiting human life for power and for capital," as articulated by Couldry and Mejias.5

Algorithmic biases are a key concern, as they can perpetuate historical patterns of exclusion and discrimination, often disadvantaging marginalized communities within the Global South.5 Documented instances include facial recognition systems performing poorly on non-white faces 5 and AI models reinforcing cultural stereotypes due to biased training data.5 These are not mere technical glitches but symptoms of systems built on narrow perspectives and unrepresentative datasets.

In response, there is a growing call for "decolonial AI".5 This approach involves several key actions:

Challenging Western assumptions and regulations: Following Mignolo's work on decoloniality, this means questioning the universality and applicability of Western systems of thought in AI governance.

Centering diverse voices and Indigenous Knowledge Systems (IKS): Decolonial AI advocates for the active inclusion and valorization of knowledge systems and perspectives that have been historically marginalized by Eurocentric science and technology.

Promoting transparent and ethical labor practices: This includes scrutinizing the often-exploitative conditions of "microwork" and data annotation, tasks crucial for AI development but frequently outsourced to workers in the Global South with fewer labor protections, as seen in reports from Kenya and Venezuela.

Undoing colonial mechanisms of power: This involves actively working to dismantle structures of power embedded in AI design, development, and governance that perpetuate inequality. Tactics proposed include supporting critical technical practices, fostering reciprocal engagements between Global North and South actors (including "reverse tutelage"), and renewing affective and political communities to contest harmful AI interventions.

The call for decolonial AI is not merely about adding diverse datasets or superficial ethical guidelines; it represents a fundamental challenge to the epistemological foundations and power structures underpinning current AI development. It demands a profound shift in who defines the problems AI should solve, who designs and builds AI systems, who owns and controls the data, and, ultimately, who benefits from these powerful technologies. Current AI often perpetuates "colonial epistemologies" and is driven by Global North interests and values. Simply "including" Global South data in existing Western frameworks, or adopting slightly modified Western ethical guidelines, fails to address the underlying power asymmetry or the inherently Eurocentric architecture of many AI systems. Decolonial AI, as articulated by scholars, seeks "epistemic justice" and "structural decolonization". This implies a radical rethinking of AI's design, purpose, and governance from non-Western perspectives, potentially leading to the development of entirely new AI paradigms rooted in philosophies like Ubuntu's relational ethics or Dharma's duty-based principles, rather than merely diversifying current models. It also requires that local AI communities in the Global South have an "equal voice in global technoscience spaces", which are often still shaped by legacies of colonial structures.

B. Data Sovereignty: The Bedrock of Digital Self-Determination

Central to the Global South's efforts to reclaim AI narratives is the principle of data sovereignty. This refers to a nation's ability to govern the data generated by its citizens and within its borders, including how it is collected, stored, processed, and shared.6 Data sovereignty is viewed as crucial for national security, economic independence, personal privacy, and the ability to foster locally relevant AI innovation.26

The lack of data sovereignty often leads to what is termed "digital colonialism," where powerful tech corporations and foreign states control the essential data infrastructure—such as cloud servers, 5G networks, and undersea fiber optic cables—in many Global South countries.6 This dependency can result in significant "economic gains flowing outward" from the Global South to the Global North 6, as well as exposing sensitive national and personal data to foreign surveillance or manipulation.

The Global South faces considerable challenges in asserting data sovereignty, including limited IT infrastructure, issues with data quality and accessibility, and a shortage of skilled personnel to manage and analyze data effectively.7 For instance, the African continent is home to a mere 152 of the world's approximately 8,000 data centers, highlighting a significant infrastructure deficit.8

In response, various strategies are being pursued to bolster data sovereignty:

Data localization measures: Some countries, like India, have mandated that certain types of data be stored on domestic servers.

Development of Digital Public Infrastructure (DPI): Initiatives such as India's Aadhaar identity system and associated payment networks aim to reduce reliance on foreign platforms.

Investment in local data infrastructure: There are growing calls and efforts to invest in local data centers, fiber optic networks, and other critical digital infrastructure within the Global South. The sentiment that "African data must stay in Africa" encapsulates this drive.

Advocacy for "digital non-alignment": This concept suggests that blocs of Global South countries could agree not to become captive markets or data sources for any single tech hegemony.

The push for data sovereignty in the Global South is intrinsically linked to the aspiration to enact alternative ethical philosophies in AI. Control over data is a fundamental prerequisite for building AI systems that embody principles such as Ubuntu's communitarianism or Dharma's emphasis on righteousness, rather than being confined to purely commercial or surveillance-oriented models often imported from the West. Philosophies like Ubuntu emphasize collective well-being and community 29, while Dharma underscores righteous action and human welfare.24 To build AI systems aligned with these distinct values, communities and nations require control over the data that trains these systems and the infrastructure upon which they operate.6 If data is primarily controlled by external entities, as is often the case under conditions of digital colonialism 6, then AI development will inevitably reflect the values and objectives of those external controllers, which are frequently profit maximization or state surveillance.

Therefore, data sovereignty 6 becomes a practical and indispensable enabler for operationalizing distinct ethical frameworks such as Ubuntu AI 23 or Dharma-informed AI.24 Without such control, these rich philosophical traditions risk remaining largely theoretical in the context of AI governance and development, unable to shape tangible technological realities. Data sovereignty is thus not merely a technical or legal issue but a foundational element for ensuring that AI development reflects local values, addresses local needs, and contributes to genuine digital self-determination.8

C. Philosophical and Cultural Foundations for Alternative AI Ethics

Beyond decolonial critiques and data sovereignty concerns, the Global South offers rich philosophical and cultural traditions that can inform alternative approaches to AI ethics, moving beyond the predominantly individualistic and utilitarian frameworks of the West.

1. Ubuntu (Africa): AI for Community, Interconnectedness, and Collective Well-being

Ubuntu, a philosophy deeply rooted in many Southern African cultures, is often encapsulated by the phrase "Umuntu ngumuntu ngabantu" in Zulu, or "I am because we are".23 It emphasizes interconnectedness, community, shared humanity, and the idea that individual well-being is inextricably linked to the well-being of the collective. This contrasts sharply with Western ethical frameworks that often prioritize individual autonomy and rights above communal considerations.

Applied to AI ethics, Ubuntu suggests a framework where the development and deployment of AI systems prioritize fairness, transparency, inclusivity, and, most importantly, serve the collective good.30 This would involve, for instance, designing AI to address societal challenges, ensuring equitable access to its benefits, and actively working to mitigate algorithmic biases by incorporating diverse perspectives and prioritizing the needs of marginalized communities.30 An Ubuntu-informed approach might also lead to different conceptualizations of core ethical values. For example, while Western frameworks emphasize transparency, autonomy, and fairness, studies with communities practicing Ubuntu have highlighted data security, dignity, and care as paramount concerns.23 This perspective might question the primacy of individual autonomy, as understood in the West, and reimagine accountability not merely as an institutional or individual responsibility but as something held within relationships and communities.23 The call is for a "conceptual disruption" of dominant AI ethics, rather than simply adding Ubuntu values as an appendage to existing Western frameworks.23 Integrating African moral traditions, such as community-focus and interconnectedness, is seen as essential for developing AI frameworks that are culturally sensitive and socially acceptable in African contexts.32

2. Dharma and Indian Traditions: AI, Duty, Righteousness, and Consciousness

Indian philosophical traditions, particularly the concept of Dharma, offer another distinct lens for AI ethics. Dharma, a multifaceted term from Hindu, Buddhist, and Jain traditions, generally refers to duty, righteousness, moral law, and the inherent order of the universe. In the context of AI, applying the principle of Dharma suggests that AI systems should be governed by clear ethical principles that prioritize human welfare and moral responsibility.24 This means AI should not only be efficient but also act in accordance with righteousness, ensuring that its operations contribute positively to society and individuals, always under human oversight.24

Indian thought also presents unique perspectives on intelligence and consciousness that can influence how AI is perceived. Concepts like Chit-Shakti (conscious energy) from Hindu philosophy suggest that intelligence is not limited to biological beings, opening a conceptual space for understanding AI's potential in a non-anthropocentric way.3 Similarly, ideas like Smriti (memory) and Shruti (intuition-based knowledge) find parallels in machine learning's reliance on stored data and predictive algorithms.3 The theory of karma, which posits that every action has consequences, resonates strongly with the need for AI accountability, emphasizing that AI systems should make ethical decisions, avoid biases, and operate transparently.3

While Hindu perspectives might be open to the idea of intelligent machines and even their integration into religious practices (some even envisioning AI as a future divine redeemer, Kalki), there is generally a distinction made between intelligence and human-like consciousness, with theological challenges to AI replacing humans in core spiritual roles.4 The ethical discourse in India also emphasizes the need to ensure AI does not undermine critical thinking, mindfulness, sentience, or human values.33

3. The Role of Indigenous Knowledge Systems (IKS)

A critical component of reclaiming AI narratives in the Global South is the recognition and integration of Indigenous Knowledge Systems (IKS). Current dominant AI models often diminish, ignore, or even damage IKS due to their reliance on Western epistemologies and datasets that lack indigenous perspectives.5 Decolonial AI approaches explicitly call for the development and integration of IKS into AI systems.5 This could involve, for example, using AI to preserve and revitalize indigenous languages through Natural Language Processing (NLP) models, which can function as "liberating artifacts".25 Developing local AI solutions, including Large Language Models (LLMs) in native languages, is seen as crucial not only for practical applications but also for preserving cultural heritage, diverse ways of thinking, and ensuring cultural continuity in the digital age.13

D. Regional Spotlights: AI Governance in Practice and Aspiration

While common themes emerge, the approaches to AI governance and the articulation of "AI rights" vary across different regions of the Global South, reflecting local contexts, capacities, and priorities.

1. Africa: Navigating Development, Ethics, and Continental Strategy

African nations are increasingly engaging with AI, recognizing its potential for development while grappling with significant challenges. These include limited enabling infrastructure (such as reliable power, high-performance computing, and regional cloud resources), a scarcity of skilled AI professionals, uncertainty in regulatory frameworks, and issues with data availability and quality.7 Despite these hurdles, there are significant opportunities to leverage AI to leapfrog traditional development pathways, unlock substantial economic value, and address pressing societal problems in areas like local language translation, personalized education, and healthcare diagnostics.7

A key focus is on "AI for Development (AI4D)" 25, with a vision of using AI to accelerate information delivery and access to public services for all segments of the population, including those who are functionally illiterate.8 The Africa Declaration on Artificial Intelligence, signed in 2024, signals a continental commitment to ethical AI governance, promising investment in AI innovation that benefits all African communities. It calls for safeguards to prevent harm, protect privacy, ensure ethics, transparency, and explainability, while prioritizing human dignity, rights, freedoms, and environmental sustainability.34 The Declaration also proposes a continent-wide knowledge-sharing platform and robust frameworks for cross-border data flows.34 However, concerns persist regarding the risk of AI surveillance technologies violating citizens' rights 12, and there is a strong call for African governments to invest in and design their own digital strategies to avoid external actors predominantly shaping Africa's AI future.12 Ethical AI development in Africa is seen to require the integration of indigenous moral traditions, such as community-focus and interconnectedness derived from philosophies like Ubuntu.32

2. Latin America: Forging Rights-Based AI Policies Amidst Global Influences

The AI regulatory landscape in Latin America is still nascent but rapidly evolving, with numerous legislative initiatives in countries like Argentina, Brazil, Chile, Colombia, Mexico, and Peru.35 Many of these initiatives show the influence of the EU AI Act, adopting similar risk-based approaches. Peru, for example, passed the region's first AI law in July 2023, aiming to promote AI for economic and social development while respecting human rights and principles of ethical, sustainable, and transparent AI use.35

Key priorities in the region include ensuring AI safety, robustly protecting human rights, promoting fairness, and developing context-specific frameworks that reflect local realities.35 There is a strong advocacy for a rights-based approach to AI governance, including the implementation of human rights impact assessments.35 Challenges include a lack of technical expertise and institutional capacity to effectively audit AI systems and enforce compliance.36 Furthermore, AI systems must be designed to reflect Latin America's unique socio-economic and cultural diversity, for instance, by avoiding the exclusion of individuals and businesses operating in large informal economies.36 Data sovereignty is also a significant concern.36 There are active calls for enhanced regional cooperation, potentially through a Latin American AI governance network, and the development of shared technical standards to give the region a stronger voice in global AI discussions.35 Emerging scholarship from the region also incorporates feminist decolonial digital humanities perspectives, critically analyzing how AI can subvert or reinforce existing global canons and power hierarchies, and how hegemonic AI models risk perpetuating global violence through processes of datafication, algorithmization, and automation.37

3. India & Other Asian Contexts: Blending Ancient Wisdom with Modern AI Ambitions

India has demonstrated strong governmental commitment to advancing AI, exemplified by its National Program on Artificial Intelligence, which aims to promote inclusion, creativity, and adoption for social impact.33 There is a unique discourse in India that draws parallels between concepts found in ancient Indian texts—such as descriptions of mechanical warriors (yantras) or divine weapons (astras) with decision-making capabilities—and modern AI technologies.3

Philosophical debates surrounding AI and consciousness are particularly active in India, often considering spiritual dimensions alongside scientific inquiries.3 Questions such as "Can AI ever have a soul?" are posed, reflecting a holistic approach to understanding intelligence.38 Ethical considerations are frequently framed through concepts like Dharma (righteous duty) and karma (action and consequence), emphasizing the need for AI to uphold moral principles, avoid causing harm, and ensure that its deployment does not erode critical thinking or essential human values.3

While "Global South" perspectives are diverse and context-specific, several common threads emerge. There is a widespread focus on collective rights and well-being, a strong resistance to data and digital colonialism, an emphasis on the importance of local context and indigenous knowledge, and a broadly human-centric approach that is cautious about unchecked technological advancement. However, the capacity to implement these distinct visions varies significantly across regions and nations. Despite the strong aspirations for alternative AI rights frameworks and governance models, the ability to fully realize them is uneven. This disparity is due to factors such as existing infrastructure deficits, talent gaps, varying levels of investment, and differing degrees of regulatory capacity.7 This situation could lead to varied outcomes in AI adoption and governance across the Global South, with some regions potentially remaining vulnerable to the influence of dominant global AI paradigms if not adequately supported in developing and implementing their own approaches.

The emphasis on experiential learning and AI literacy initiatives across the Global South 13 is also noteworthy. These efforts are not merely about skills development; they represent a decolonial strategy to empower local communities to become active creators and shapers of AI technology, rather than passive consumers. Initiatives like "Responsible AI for Youth" in India 13 or Rwanda's "Digital Ambassadors" program 13 focus on hands-on, problem-based learning, applying AI to local challenges. This approach demystifies AI, builds confidence 13, and enables communities to "question, explore, and apply AI in local contexts".13 By fostering local innovation, such as the development of LLMs in indigenous languages 13, this strategy directly challenges the dominance of Global North AI solutions and promotes "digital sovereignty and cultural continuity".13 It is about building capacity from the ground up to redefine AI's role and ensure it aligns with local values and needs.

IV. Comparative Analysis: Divergences, Convergences, and the Definition of AI "Rights"

A comparative analysis of Western and Global South approaches to AI "rights" and governance reveals significant divergences rooted in differing philosophical assumptions, historical contexts, and socio-economic priorities. However, areas of potential convergence, particularly around practical safeguards, also exist.

A. Individual vs. Collective Orientation:

A primary divergence lies in the orientation towards rights. Western frameworks, such as the EU AI Act and the US AI Bill of Rights, predominantly focus on protecting individual rights—privacy, non-discrimination for individuals, personal autonomy, and due process.14 This reflects the individualistic traditions of liberal democracies.

In contrast, many Global South perspectives, particularly those informed by philosophies like Ubuntu in Africa 23 and, to a certain extent, Dharma in Indian traditions 24, emphasize collective well-being, community benefit, and interconnectedness. The core tenet of Ubuntu, "I am because we are" 30, directly challenges strong individualism. AI ethics discourse in Africa, for example, frequently stresses "community-focus" and shared responsibility.32 This fundamental difference in orientation leads to different primary questions: a Western system might ask, "How does this AI application impact this specific individual's rights?", whereas an Ubuntu-informed framework would likely prioritize, "How does this AI application impact the community's harmony, collective good, and the relationships within it?". This divergence is not merely about what rights are prioritized, but why and how they are conceptualized, stemming from fundamentally different ontological and epistemological assumptions. For instance, relational views of personhood, as in Ubuntu, contrast with the more atomistic, individualistic views prevalent in much Western thought.2

B. AI Personhood, Consciousness, and Moral Status:

Western legal frameworks currently treat AI as a sophisticated tool or product, not as an entity possessing inherent rights or personhood, although philosophical debates about AI consciousness, sentience, and potential moral status are active and ongoing within academia.1 The primary regulatory concern is AI's impact on human beings and society.

Some non-Western philosophical traditions, notably certain Indian schools of thought (e.g., the concept of Chit-Shakti, or views on non-biological intelligence), may offer a more expansive understanding of consciousness that is not strictly limited to biological organisms.3 This could allow for non-human, non-biological forms of intelligence to be accorded some form of standing, value, or even a degree of moral consideration, even if this does not translate into "rights" in the Western legal sense. Hindu thought, for example, might perceive AI as highly intelligent but not necessarily conscious in a human way, yet still capable of being integrated into various aspects of life, including religious practice.4 However, it is important to note that the majority of contemporary discourse on AI in the Global South is more urgently focused on the immediate human, social, and developmental impacts of AI rather than on speculative questions of AI sentience, given pressing concerns about justice, equity, and development. The differing views on consciousness—materialist versus more expansive—further underscore the deep philosophical divides.

C. Focus on Harm Prevention vs. Holistic Well-being and Justice:

Western frameworks are generally robust in their approach to mitigating specific, identifiable harms that AI systems might cause, such as data breaches, discriminatory outcomes in hiring or lending, and safety failures in autonomous systems. This is often achieved through risk assessment methodologies, compliance requirements, and technical standards.15
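Much of this harm-mitigation work ultimately reduces to measurable checks on system outputs. As a minimal illustrative sketch (the data, group labels, and function names here are invented for illustration; the 0.8 cutoff follows the widely cited "four-fifths" rule of thumb for adverse impact, not any framework cited in this report), a disparate-impact screen over a hiring model's decisions might look like:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 would flag potential adverse impact under the
    common "four-fifths" rule of thumb.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring outcomes: (group label, selected?)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80

ratio = disparate_impact_ratio(outcomes)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.40 = 0.50
```

Such a metric illustrates why technical standards can travel across jurisdictions even when the underlying justifications differ: the number is the same whether the concern is an individual's right to non-discrimination or a community's collective harm.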

Global South perspectives, particularly those informed by decolonial thought, frequently call for a more holistic approach to justice. This extends beyond mere harm prevention to proactively addressing historical inequities, redressing power imbalances, and ensuring that AI contributes positively to sustainable development and broad social good.5 The aim is not just to avoid negative outcomes but to actively promote "equitable benefits and just outcomes for all," as articulated in the Africa Declaration on AI.34 The prominence of "AI for Development (AI4D)" 25 in many African and other Global South discussions highlights this focus on positive societal transformation and upliftment. This introduces a utilitarian and justice-oriented dimension that is often less explicit in Western risk-mitigation frameworks, which tend to assume a baseline of socio-economic development. Thus, "AI rights" in the Global South might implicitly include a "right to benefit" from AI for development, or a collective right to harness AI for societal advancement, a dimension not as central in Western discourse primarily focused on protecting individuals from AI-induced harms.

D. Data Governance: Control and Ownership vs. Exploitation and Extraction:

While Western models, such as the EU's GDPR, extensively regulate data use and aim to protect personal data 17, they generally operate within a capitalist framework where data is considered a valuable economic asset, often aggregated and controlled by large multinational corporations.

The Global South's emphatic focus on data sovereignty 6 arises from a desire to prevent "data colonialism" and ensure that data generated locally benefits local communities, economies, and innovation ecosystems, rather than being primarily extracted for profit by foreign entities. The call for "African data must stay in Africa" 8 is a clear and forceful articulation of this stance, reflecting a desire for greater control and self-determination in the digital realm.

E. Role of the State and Regulation:

The EU model exemplifies a strong, centralized regulatory state taking a proactive role in shaping AI governance.15 The US, in contrast, exhibits a more mixed approach, combining government-led initiatives and guidelines with a significant reliance on private sector innovation and self-regulation, though this is evolving.18

In many parts of the Global South, there is a strong call for state-led initiatives to build essential digital infrastructure 7, foster local AI ecosystems and talent 13, and formulate comprehensive national AI strategies.12 However, the capacity of states to effectively regulate and implement these ambitious agendas can be a significant challenge, often due to limited resources, technical expertise, and institutional capabilities.34 There also exists a tension between the need for state support and investment in AI, and concerns about the potential for state surveillance and control facilitated by these same technologies.12

Potential areas of "convergence" might be more feasible on practical safeguards against specific, well-defined harms (e.g., technical standards for bias mitigation, data security protocols, safety testing for autonomous systems) than on achieving foundational philosophical agreement about the nature of AI or its ultimate purpose in society. Most frameworks, whether originating in the West or the South, acknowledge risks such as algorithmic bias, privacy violations, and safety concerns.14 Technical solutions or baseline standards for addressing these issues could find broader international agreement because they target observable negative outcomes. However, the underlying reasons why these issues are considered important might differ significantly (e.g., focusing on individual harm versus community disruption versus the violation of a moral or spiritual duty). Deeper philosophical questions, such as the nature of AI consciousness or the ultimate societal role of AI, are far less likely to converge due to deeply entrenched and diverse worldviews.2 This suggests the need for a multi-layered approach to global AI governance: pursuing practical cooperation and harmonization on technical standards where possible, while fostering respect for philosophical diversity and allowing for varied approaches on more fundamental questions.

Table 1: Comparative Overview of AI Governance Principles: West vs. Select Global South Frameworks

| Key Principle | Western (EU AI Act) | Western (US AI Bill of Rights/NIST RMF) | Ubuntu-inspired (Africa) | Dharma-inspired (India) | Decolonial AI |
| --- | --- | --- | --- | --- | --- |
| Primary Rights Holder Focus | Individual fundamental rights, safety 14 | Individual rights, civil liberties, protection from discrimination 18 | Community, collective well-being, interconnectedness 23 | Individual within societal duty (Dharma), human welfare 3 | Marginalized communities, epistemic justice, collective self-determination 5 |
| Basis of Ethical Framework | Risk-based, rights-preserving, human oversight 15 | Principles-based (safety, non-discrimination, privacy, etc.), voluntary RMF 18 | Relational ethics, shared humanity, dignity, care 23 | Duty (Dharma), righteousness, moral responsibility, karma 3 | Anti-colonialism, challenging power asymmetries, epistemic diversity 5 |
| Approach to Data Governance | Strong data protection (GDPR), rules for high-risk data use 17 | Data privacy principles, consent, limits on surveillance 18 | Data security, community benefit from data, data sovereignty emphasis 8 | Ethical data handling, data for societal good, emerging data sovereignty concerns | Resisting data extraction, data for local benefit, data sovereignty as decolonial act 6 |
| AI Personhood/Consciousness Stance | AI as a tool, no legal personhood; focus on human impact 15 | AI as a tool, no legal personhood; focus on human impact 18 | Primarily human-centric; focus on societal impact rather than AI sentience 30 | Openness to non-biological intelligence (Chit-Shakti), but AI distinct from human/divine consciousness; focus on ethical use 3 | Critiques anthropocentrism but primarily focused on power dynamics of current AI, not AI sentience rights |
| Goal of Regulation/Governance | Ensure safe, trustworthy AI; foster innovation within EU values 14 | Protect public rights, promote innovation and US leadership 18 | AI for collective good, development, social harmony, inclusivity 8 | AI aligned with Dharma, human welfare, responsible innovation 3 | Dismantle AI colonialism, empower marginalized, achieve epistemic justice 5 |
| Key Concerns Addressed | Safety risks, fundamental rights violations, discrimination, opacity 15 | Discrimination, unsafe systems, privacy abuses, lack of transparency 18 | Bias, exclusion, exploitation, erosion of community values, digital divide 7 | Unethical use, loss of human values/critical thinking, accountability 3 | Data exploitation, algorithmic bias, epistemic injustice, neo-colonial dependencies 5 |
| Enforcement Style | Legally binding regulation, conformity assessments, penalties 15 | Primarily guidelines, voluntary frameworks, some executive orders/sectoral laws 18 | Emerging national/regional policies, emphasis on capacity building 7 | Governmental programs, ethical guidelines, philosophical discourse 3 | Advocacy, community-led governance, challenging existing legal structures 9 |

V. Implications of Divergent Frameworks for Global AI

The differing conceptualizations of AI "rights" and the resultant governance frameworks emerging from Western and Global South contexts carry profound and multifaceted implications. These divergences are poised to shape geopolitical dynamics, economic development trajectories, social and cultural landscapes, and the very structure of international law and cooperation in the age of AI. The struggle over defining AI "rights" is, in essence, a struggle over shaping the future global order. AI is a foundational technology with the potential to transform all sectors of society.5 Consequently, control over AI development, the data that fuels it, and the governance standards that regulate it translate directly into economic leverage and geopolitical influence.6 If Western definitions of AI rights and governance prevail, this could solidify existing global hierarchies and power imbalances. Conversely, if Global South definitions gain significant traction and are effectively implemented, this could contribute to a more multipolar and potentially more equitable distribution of AI's benefits and influence. Therefore, the ongoing debates about AI rights are not merely academic or abstractly ethical; they are deeply political and inextricably intertwined with the future architecture of international relations and global power distribution in the 21st century.

A. Geopolitical Dynamics and the Future of AI Governance:

The existence of fundamentally different approaches to AI rights and regulation risks creating a fragmented global AI governance landscape. We may see the emergence of competing regulatory blocs, with the EU's comprehensive AI Act potentially exerting a "Brussels effect" 35, the US pursuing its own model emphasizing innovation alongside rights, China advancing its distinct state-centric AI strategy, and various Global South nations or regional alliances attempting to carve out their own paths. This could lead to "digital non-alignment" 6 by some Global South countries, as they seek to resist singular hegemonic influence from any one major AI power.

The "battle for data" 8 and control over critical AI infrastructure, such as cloud computing facilities and advanced semiconductor manufacturing, are becoming new arenas for geopolitical competition. Nations that control these resources and set the dominant standards for their use will wield considerable power. International standard-setting bodies, which have historically been influenced by dominant economic powers, will face challenges in achieving consensus and ensuring their outputs reflect global diversity rather than entrenching Western or other specific regional norms. Failure to reconcile or at least accommodate these divergent AI rights frameworks could lead to a "splinternet" for AI, where different regions operate under incompatible rules and technical standards. This fragmentation would severely hinder global collaboration on shared challenges—such as climate change mitigation, pandemic response, and sustainable development—that could significantly benefit from coordinated AI research and deployment. Divergent regulations on data flows, liability regimes, and AI safety protocols 34 could erect substantial barriers to the international movement of AI systems, data, and talent, thereby reducing interoperability and the potential for globally beneficial AI solutions. This underscores a pressing need for diplomatic efforts to find common ground or, at a minimum, establish principles for interoperability and mutual recognition of standards, despite underlying philosophical differences.

B. Economic Impacts: Innovation, Development, and Digital Divides:

Different regulatory philosophies and AI rights frameworks will inevitably have varying impacts on innovation, economic development, and the persistence of digital divides. Stringent, precautionary regulations, like those potentially emerging from a strong rights-based approach, might slow the pace of AI deployment in certain sectors but could build greater public trust and ensure more equitable outcomes in the long run. Conversely, more permissive or lax regulatory environments might initially spur rapid innovation but risk exacerbating social harms, eroding trust, and leading to market failures or concentration.

The Global South's strong emphasis on data sovereignty and the development of local AI ecosystems 8 represents an attempt to foster indigenous innovation, create new economic models, and reduce technological dependency on Global North corporations. Success in these endeavors could lead to more inclusive growth and a diversification of the global AI landscape. However, these efforts face significant challenges, including competition from large, well-resourced multinational tech companies and the need for substantial investment in infrastructure and human capital. There is a tangible risk of exacerbating existing digital divides if Global South countries lack the resources to effectively implement their preferred AI models or are compelled, due to economic or political pressures, to adopt systems that are misaligned with their specific needs and developmental priorities.7 The global trade in AI services and the rules governing cross-border data flows will also be significantly shaped by these divergent regulatory approaches, potentially leading to trade disputes or the formation of preferential digital trade blocs.

C. Social and Cultural Consequences: Inclusion, Bias, and Cultural Integrity:

The societal and cultural implications of differing AI rights frameworks are profound. AI systems designed and deployed in alignment with local values and cultural norms—such as an Ubuntu-informed AI in an African context 23 or a Dharma-informed AI in India 24—could enhance cultural integrity, strengthen social cohesion, and ensure that technology serves community-defined goals. The development of AI, particularly Large Language Models (LLMs), in indigenous and local languages is seen as crucial for cultural preservation and for preventing the linguistic homogenization that can result from the dominance of English-centric AI models.13

Conversely, the uncritical imposition of culturally misaligned AI systems, often developed in and for Western contexts, could erode local norms, languages, and knowledge systems.5 Persistent biases in AI, if not systematically addressed through decolonial and context-aware approaches, will continue to marginalize vulnerable populations, perpetuate stereotypes, and lead to unfair or discriminatory outcomes in critical areas like employment, justice, and access to services.5 Furthermore, differing definitions and expectations around concepts like "fairness," "privacy," or "autonomy" could lead to significant social tensions if AI systems designed with one cultural understanding are deployed in contexts with different normative frameworks.

D. Legal Challenges for International Cooperation and Standards:

The divergence in foundational concepts of AI "rights" and ethical principles poses considerable challenges for establishing universally accepted legal norms for AI accountability, liability, and redress. If what constitutes "harm," "responsibility," or even a "right" differs significantly across jurisdictions, it becomes exceedingly difficult to create coherent international legal frameworks. For example, determining liability when a cross-border AI system causes harm will be complicated if the underlying legal and ethical assumptions about AI's role and responsibilities vary widely.

Challenges in managing cross-border data flows and enforcing data protection laws will likely intensify if claims of data sovereignty lead to highly restrictive or incompatible national regimes.34 This could impede international research collaboration, global business operations, and the provision of digital services. The development of new international legal mechanisms, treaties, or soft law instruments to address the unique challenges posed by AI will be necessary, but the critical question will be whose values, priorities, and definitions of "rights" will predominantly shape these global norms.

The ability of the Global South to actualize its distinct AI rights visions and governance models is heavily contingent on its success in overcoming significant internal challenges. These include addressing infrastructure deficits, bridging talent gaps, fostering robust local innovation ecosystems, and achieving internal political consensus on AI strategies.7 Simultaneously, Global South nations must navigate complex external pressures, including the pervasive influence of dominant Global North tech companies, dependencies on foreign investment and technology 6, and the normative pull of powerful regulatory frameworks like the EU AI Act.35 Even with strong philosophical foundations such as Ubuntu or Dharma, and compelling decolonial critiques, the practical implementation of alternative AI rights frameworks requires substantial resources, capacity, and political will to overcome these internal and external constraints. This suggests that international support for capacity building, equitable partnerships, and investment in Global South-led AI initiatives is crucial if these diverse visions are to genuinely co-shape the future of global AI rights, rather than remaining largely aspirational or marginalized.

Table 2: Key Implications of Divergent AI Rights Definitions

| Domain of Implication | Implication of Predominantly Western-Centric Approach | Implication of Stronger Global South Alternative Approaches |
| --- | --- | --- |
| International Law & Standards | Standards may primarily reflect individual rights, market efficiency, and risk mitigation; potential marginalization of collective/developmental concerns. | Contested global standards; push for pluriversal legal frameworks; stronger emphasis on data sovereignty, developmental rights, and redress for historical inequities. |
| Tech Development & Innovation | Innovation pathways potentially skewed towards Western market needs and values; risk of "one-size-fits-all" solutions. | Diversification of AI innovation; development of context-specific AI solutions for Global South challenges; potential for new AI paradigms rooted in non-Western philosophies. |
| Geopolitical Balance | Reinforcement of existing global power asymmetries; Global North maintains dominance in AI standard-setting and technological leadership. | Potential for a more multipolar AI landscape; increased agency for Global South in shaping global norms; possibility of "digital non-alignment." |
| Economic Equity | Benefits of AI may accrue disproportionately to Global North; risk of "data colonialism" and widening global economic divides. | Greater potential for AI to drive inclusive growth in the Global South; local value creation from data; reduced economic dependency if data sovereignty is achieved. |
| Cultural Integrity | Risk of erosion of local cultures, languages, and knowledge systems due to dominance of Western-centric AI models and content. | Enhanced preservation and promotion of cultural diversity and indigenous languages through locally developed AI; AI aligned with local values. |
| Individual Freedoms | Strong protections for individual privacy and non-discrimination (within Western definitions), but potential for blind spots regarding systemic biases. | Focus on individual rights within a collective context; protection against harms amplified by colonial legacies (e.g., surveillance of marginalized groups). |
| Community Well-being | Secondary consideration to individual rights; community impact assessed primarily through aggregation of individual impacts. | Central focus on collective well-being, social harmony, and community empowerment through AI; AI designed to address community-defined needs. |

VI. Towards a Pluralistic and Equitable Global AI Rights Ecosystem

Navigating the complexities arising from divergent definitions of AI "rights" requires a concerted global effort to foster a more pluralistic and equitable AI ecosystem. This involves moving beyond a model where norms are predominantly shaped by a few powerful actors towards one that genuinely incorporates diverse perspectives and prioritizes shared human dignity and sustainable development. Achieving such an ecosystem necessitates more than just dialogue; it demands a fundamental redistribution of power in AI development, governance, and benefit-sharing, including significant financial investment, infrastructure development in underserved regions, and equitable representation in global decision-making bodies. Dialogue alone, without addressing these underlying power imbalances, risks resulting in tokenism or the co-optation of Global South voices, rather than genuine co-creation.5 The Global South faces substantial material challenges, including limited infrastructure, inadequate funding for R&D, and a shortage of highly skilled AI professionals.7 Therefore, any meaningful effort towards a "pluralistic ecosystem" must couple "inclusive dialogue" with concrete actions such as targeted investment in Global South AI infrastructure 6, robust support for local talent development and retention 7, and the establishment of mechanisms for shared ownership or control over key AI platforms and data resources. Without this material shift, the "co-creation of norms" risks remaining an abstract ideal, with global AI governance continuing to be shaped by those with existing resources and established power.

A. Fostering Inclusive Dialogue and Co-Creation of Norms:

A foundational step is the establishment of genuine multi-stakeholder dialogues that accord equal weight and voice to participants from the Global South, including governments, civil society organizations (CSOs), academic institutions, private sector actors, and local communities.10 These dialogues must move beyond mere consultation to active co-creation of norms and standards. This involves embracing concepts like "reverse tutelage" 9, where stakeholders from the Global North actively learn from the experiences, innovations, and philosophical insights originating in the Global South. Support for regional cooperation initiatives, such as the Africa Declaration on AI 34 and emerging Latin American AI governance networks 35, is also crucial. These platforms can help consolidate diverse regional perspectives, strengthen the collective bargaining position of Global South countries in international forums, and facilitate the sharing of best practices and resources.

B. Practical Steps for Bridging Divides:

Several practical measures can help bridge the divides and foster a more inclusive AI landscape:

Investment in AI Literacy and Capacity Building: Targeted investments in AI literacy programs and capacity-building initiatives within the Global South are essential. These programs should emphasize experiential learning, critical thinking, and the development of locally relevant AI solutions, empowering communities to become creators and critical users of AI, not just passive consumers. The concept of "experiential learning" in the Global South can be a particularly powerful driver for developing contextually relevant AI rights frameworks. Abstract ethical principles can be challenging to apply universally. However, hands-on AI development and deployment in specific local contexts—such as in agriculture, healthcare, or education within Global South communities—will inevitably surface unique ethical dilemmas related to local culture, social structures, resource constraints, and historical legacies. Learning from these real-world experiences can inform the development of more robust, practical, and culturally attuned AI rights and governance frameworks. This bottom-up, experience-driven approach can be more effective and legitimate than the top-down imposition of generic global principles, leading to AI solutions that are genuinely "born of its own realities" and more likely to be adopted and trusted.

Development of "AI Sandboxes": The establishment of regulatory and innovation "sandboxes" can provide controlled environments for experimenting with different AI applications and governance approaches, including those proposed from Global South perspectives. These sandboxes can facilitate learning, adaptation, and the development of context-appropriate regulations.

Promoting Open-Source AI and Diverse Datasets: Encouraging the development and use of open-source AI tools, models, and datasets that are genuinely diverse and reflect global realities can help democratize AI development and reduce reliance on proprietary systems that may embed biases. Efforts to create and curate datasets representing languages, cultures, and contexts from the Global South are critical.
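Curating representative datasets presupposes the ability to audit them. As a minimal sketch of such an audit (the record format, language tags, and the 5% share threshold are all assumptions for illustration, not drawn from the source), one might measure language representation in a corpus like this:

```python
from collections import Counter

def language_coverage(records):
    """Share of dataset records per language tag."""
    counts = Counter(r["lang"] for r in records)
    total = sum(counts.values())
    return {lang: n / total for lang, n in counts.items()}

def underrepresented(records, threshold=0.05):
    """Language tags falling below a chosen share threshold."""
    return sorted(lang for lang, share in language_coverage(records).items()
                  if share < threshold)

# Hypothetical corpus records carrying a "lang" tag
corpus = ([{"lang": "en"}] * 90   # English-dominant, as in many web corpora
          + [{"lang": "sw"}] * 7  # Swahili
          + [{"lang": "yo"}] * 3) # Yoruba

print(language_coverage(corpus))
print(underrepresented(corpus))  # prints ['yo']
```

Even a crude coverage measure of this kind makes the English-centric skew of a dataset visible and gives curation efforts a concrete target, though any real audit would need far richer metadata than a single language tag.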

Establishing Principles for Equitable Data Sharing: International principles for data sharing and research collaboration need to be developed that respect national and regional data sovereignty claims while enabling beneficial AI research for global public good. This requires careful negotiation of data access, usage rights, and benefit-sharing mechanisms.

A key challenge in fostering such a pluralistic ecosystem will be navigating the inherent tension between the aspiration for universal human rights protection in the age of AI and the imperative to respect diverse cultural and philosophical interpretations of those rights. This requires a nuanced approach that avoids both the pitfalls of moral relativism (whereby any practice is deemed acceptable if culturally sanctioned) and ethical imperialism (whereby one cultural interpretation of rights is imposed on all others). While there is a broad global consensus on the importance of fundamental human rights, as enshrined in international declarations, philosophies like Ubuntu 23 or Dharma 24 may interpret or prioritize these rights differently, for example, by placing greater emphasis on collective duties and communal harmony alongside individual entitlements. Decolonial critiques also challenge the universality of Eurocentric interpretations of rights, arguing they often carry implicit biases and historical baggage.5 Simply imposing a single, often Western, interpretation of rights risks perpetuating epistemic injustice. Conversely, allowing an "anything goes" approach under the guise of cultural diversity could undermine essential protections against harm. The path forward likely involves identifying a core set of non-negotiable universal principles—such as non-maleficence, respect for basic human dignity, and fundamental fairness—while simultaneously allowing for culturally specific interpretations and contextually appropriate implementations of how these principles are realized in AI governance. This is a delicate balancing act that demands ongoing dialogue, mutual respect, and a commitment to shared learning.

C. Recommendations for Stakeholders:

International Bodies (e.g., United Nations and its agencies): These organizations should actively facilitate inclusive global dialogues on AI ethics and governance, ensuring equitable representation from the Global South. They can play a key role in supporting capacity-building initiatives, developing frameworks that accommodate concepts like "digital non-alignment" or promote pluriversal AI ethics, and fostering international cooperation on AI for sustainable development.

National Governments (Global North & South): Governments worldwide should consider adopting decolonial approaches to their national AI policies, ensuring that AI development and deployment are aligned with human rights, democratic values, and principles of justice. This includes investing in local AI ecosystems, ensuring broad democratic participation in AI governance processes, and prioritizing the use of AI for collective well-being and equitable development. Global North governments have a particular responsibility to ensure their AI foreign policies and corporate regulations do not inadvertently harm or exploit Global South nations.

The Tech Industry: Technology companies, as primary developers and deployers of AI, have a critical role. They should proactively implement "ethics by design" principles that incorporate diverse global values from the outset of AI development. This includes ensuring transparency in their data practices and algorithmic decision-making, promoting fair labor conditions throughout their global supply chains (especially in data annotation and microwork), actively working to identify and mitigate biases in their systems, and engaging in equitable partnerships with entities in the Global South that respect local ownership and promote mutual benefit.

VII. Conclusion: Redefining AI Rights for a Shared Future

The global discourse on Artificial Intelligence is at a critical juncture, where the very definition and scope of "AI rights" are being contested and reshaped. This report has demonstrated that perspectives emerging from the Global South—drawing upon unique philosophical traditions, historical experiences, and contemporary socio-economic imperatives—are articulating distinct, often collectively oriented and justice-focused, definitions of AI "rights." These conceptualizations frequently challenge the predominantly individualistic, market-driven, and risk-mitigation-focused paradigms that have characterized much of the initial Western response to AI's rise.

The divergences are profound, spanning from the primary locus of rights (individual versus community), the understanding of data governance (as a commodity versus a sovereign resource), the approach to harm (mitigation versus holistic justice and development), and even the conceptual space afforded to non-anthropocentric views of intelligence. These differences are not merely academic; they carry significant implications for the future of global AI governance, the distribution of economic benefits and burdens from AI, the integrity of diverse cultures in an increasingly digitized world, and the fundamental nature of human-AI interaction.

The path forward does not lie in imposing a singular, homogenized definition of AI rights, which would inevitably reflect existing power imbalances and risk further marginalizing diverse worldviews. Instead, it requires a paradigm shift towards a more inclusive, equitable, and pluriversal understanding of AI rights and governance. Such an approach would acknowledge the validity of multiple ethical frameworks and seek to build an ecosystem where technology serves a broad spectrum of human values and contributes to a just, sustainable, and shared global future. This necessitates a commitment to genuine dialogue, the co-creation of norms, substantial investment in capacity building and equitable infrastructure development in the Global South, and a willingness from dominant actors to cede space and power. Ultimately, redefining AI rights for a shared future is about ensuring that this transformative technology enhances human dignity in all its diverse expressions and empowers all communities to shape their own technological destinies.
