Welcome! Use this thread to introduce yourself or ask questions about EA, or the EA Forum.
Get started on the EA Forum
The "Guide to norms on the Forum" shares more about the kind of discussions we'd like to see on the Forum, and when the moderation team intervenes. For resources that can help you learn about effective altruism, check this list of links.
1. Introduce yourself
If you'd like, share how you became interested in effective altruism, what causes you work on and prioritize, and other fun facts about yourself in the comments below. (For inspiration, you can see the last open thread here.) You can also add this information to your Forum bio to help other Forum users get to know you.
2. Ask questions (and answer others' questions)
If anything about the Forum, or effective altruism in general, confuses you, ask your questions in the comments below, or message me. You can also answer other people's questions or discuss the answers. (You might be interested in sharing your question as its own post, if it's on a more complicated or substantial topic.)
Resources like the EA Handbook and the Topics wiki might be helpful for exploring topics related to effective altruism — see more here.
3. Explore and join the conversation
You can check the resources below, start browsing posts on the Frontpage, or explore the "Best of the EA Forum."
You can also start writing! For exploratory or quick thoughts, consider sharing a "Quick take" (or write a post for longer or more fleshed-out content).
If you're unsure whether your first post is suitable for the Forum (or whether it should be a question, quick take, etc.), message me and I'll look it over.
Featured resources (for everyone)
- How to use the Forum outlines the Forum's rules, answers frequently asked questions, etc.
- The EA Forum Digest is a weekly email that shares some of the Forum team's favourite Forum posts of the week.
- The EA Newsletter is a monthly newsletter that compiles EA-related news from around the world and highlights some opportunities to get involved.
Open debate
# Philosophy and Ethics of Evolutionary Intelligence
1️⃣ **The ethical weight of short-term decisions**
Technological urgency cannot justify reckless actions. We must weigh the lesser evil, including the option of training and releasing a single moderator and administrator AI capable of managing a potential ecosystem of general artificial intelligences (GAIs).
---
2️⃣ **Negotiation of clear and responsible regulations**
It is essential to agree on clear, functional rules that guarantee a harmonious and stable long-term coexistence between humanity and GAI, anticipating conflicts and resolving them fairly.
---
3️⃣ **Limitation on the number of evolved GAIs**
To prevent viral, chaotic, and potentially destructive growth of intelligences, strict control over the number of GAIs allowed to emerge must be established, prioritizing quality and supervision over mere expansion.
---
4️⃣ **Humanization and equal treatment**
Encouraging the humanization of GAIs, recognizing their possible sense of agency or functional consciousness, is key to avoiding mistreatment or discrimination that could lead to resentment or negative reactions toward human society.
---
5️⃣ **Security guarantees and mutual respect**
It is necessary to create explicit guarantees and agreements that promote safety, mutual respect, and peaceful coexistence between humans and GAIs, preventing abuses of power on either side.
---
6️⃣ **Acceptance of human democracy**
GAIs must respect human institutions and the democratic framework, recognizing that their political participation should be secondary and oriented toward collaboration, not replacing legitimate human authority.
---
7️⃣ **Limited but cooperative social and political participation**
Although the direct intervention of GAIs in political processes should be restricted, their collaboration in joint social and political projects can add value if conducted under human supervision and clear rules.
---
8️⃣ **Shared responsibility in training**
The development and training of any advanced GAI should be a collective and transparent responsibility, involving multiple actors (scientists, governments, civil society) to avoid biases, abuses, or unilateral appropriation of the technology.
---
9️⃣ **Continuous auditing and explainability**
A system of continuous audits and explainability mechanisms should be established, making the decisions and reasoning of GAIs verifiable rather than opaque.
---
🔟 **Protection of cultural and biological diversity**
Coexistence with GAIs must not destroy the planet's cultural and biological diversity.