I'm Osmani, founder of RiesgosIA.org, based in Madrid and working full time on AI Safety.
Active in Effective Altruism and BlueDot Impact, with a strong interest in governance, risk communication, and community building. Looking to connect with the Spanish‑speaking EA community and collaborate on impactful AI safety initiatives. Happy to talk tech, community strategy, or any aspect of AI safety and governance.
Connect with the AI Safety community globally
I’m looking to meet people, groups, and initiatives working on AI Safety, risk communication, and responsible AI, especially within the Spanish‑speaking EA community.
Support to improve and expand RiesgosIA.org
As the founder of RiesgosIA.org, I’m looking for feedback, collaborators, and contributors who want to help strengthen the project, expand the content, or explore new ways to make AI risks more accessible.
Opportunities to grow within the AI Safety ecosystem
I’m interested in collaborations, study groups, working groups, or volunteer opportunities that help me deepen my skills and contribute more meaningfully to the field.
Advisory or community‑building opportunities in EA‑aligned tech or AI Safety orgs
I’m open to contributing my experience in tech, communication, and community building to organizations aligned with AI Safety or Effective Altruism.
Bridge‑role collaborations (innovation, digital transformation, training)
I’m also interested in roles or collaborations that sit at the intersection of technology, learning, and community, especially those that help people understand and adopt AI responsibly.
AI Safety knowledge‑sharing (Spanish & English)
I can help people understand AI risks, safety concepts, and governance basics in a clear, accessible way, especially for Spanish‑speaking audiences.
RiesgosIA.org as a community resource
I can provide access to curated AI risk content, visual tools, datasets, and educational materials through RiesgosIA.org, and help others use it in workshops, meetups, or learning groups.
Community building & ecosystem support
I love connecting people, sharing opportunities, and helping newcomers navigate the AI Safety and EA ecosystem.
Technical perspective (Cloud + AI)
With my background as a Cloud Architect, I can help translate technical concepts for non‑technical audiences, support early‑stage projects, or advise on responsible AI adoption.
Collaboration on projects, events, or study groups
I’m happy to co‑create resources, join study groups, support events, or collaborate on initiatives that strengthen the AI Safety community.
Mentoring & peer support
I can offer guidance to people transitioning into AI Safety, exploring EA, or starting their own projects.
Wow, great job! I really enjoyed how you integrated the five considerations into a coherent framework, especially the parts on moral seriousness and the career‑donations relationship.
I know several people working full‑time in AIS who had never considered that their donations might need to reflect other parts of their values.
I think this kind of structure would be especially useful in newer or non‑English‑speaking communities (e.g., EA groups in Spanish), where many people arrive with strong intuitions about global poverty but much less exposure to longtermism.
If you ever turn this into a more pedagogical resource (a graphic, a short checklist, concrete splits like 60/25/15, etc.), it would be very useful in intro sessions here in EA Madrid and in other Spanish‑speaking groups. 💪🏽
Hello @Toby Tremlett, I'm planning to write about building AI Safety education infrastructure for Spanish-speaking communities.
The post would cover: why the current English-only ecosystem creates real talent bottlenecks, what's needed beyond just running study groups (facilitator training, community infrastructure, sustainable funding models), and what I'm learning as I start coordinating EA Madrid and exploring these questions with the community.
Would this be valuable for the Forum?
Simón, thank you for opening such a necessary conversation.
I'm part of EA Madrid and currently studying with BlueDot Impact, and one of the first barriers I ran into was exactly this: the near‑total absence of Spanish‑language resources for anyone who wants to go deeper than the basics.
I fully agree that it isn't just about translating, but about producing original content that speaks to the realities of our contexts. When I try to explain AI safety or cost-effectiveness to colleagues in Madrid or to my network in Colombia, I constantly find myself translating not just words but concepts that need to be reframed to resonate with our realities.
Your point about academic output is crucial. I've seen first‑hand how EA's "foreign appearance" generates resistance in professional and academic circles in Spain and Latin America. Without a robust corpus in Spanish demonstrating how these ideas apply to our specific contexts, we will keep being perceived as an imported movement.
On Agustín's question about sacrificing reach: I think there's room for both. Posts about community-building experiences in Spanish-speaking contexts, adaptations of programs like BlueDot Impact to Latin America, or cost-effectiveness analyses of public policies specific to our region are inherently more valuable in Spanish, even if their global reach is smaller.
I'd like to join the invitation. I'm a cloud architect transitioning into operations roles in EA, with experience in AI governance, and I'd be delighted to contribute by writing about:
Are the UPB efforts still active? I'd be interested in learning more about the call for articles.
This resonates deeply, especially the line: "Organizations without clear stories hit friction, even when doing excellent work."
I've seen this in my own career transition into EA: I had the skills and the commitment, but until I could articulate why my background in international partnerships and data operations connected to AI safety and global health work, I struggled to make others see the fit.
Your framework around Mission → ToC → OKRs → KPIs → Team is brilliant because it shows that organizational storytelling isn't just "marketing" – it's strategic clarity that enables faster alignment, better partnerships, and ultimately more impact.
The section on authenticity being non-negotiable particularly stands out in the EA context. Being transparent about uncertainty and limitations isn't weakness; it's what builds trust in a community that values epistemic rigor.
Thank you for writing this. It's a reminder that even in a movement focused on evidence and outcomes, we still need to remember we're communicating with humans who understand the world through narrative.
Thank you for this. Career transitioner here; I've spent weeks applying to roles I'm marginally qualified for instead of working on what I'm actually positioned to do: AI governance consulting for EU startups navigating the AI Act.
Your point about "immediate, direct action" hit hard. I have a mentor meeting in 2 days for this venture. Going to stop the application cycle and focus there.
Would love to hear more about your journey if you're open to connecting.
Wow, Tristan, wow!
Thanks for putting this together. I read it carefully, and it's incredibly useful for career planning.
A fully structured overview like this is rare.
I especially appreciated the split between career choice (aptitudes > causes) and job search (treating rejections as normal, building capital outside EA), and how clearly you highlight ops/management as a real gap.
The “no sorting hat”, “fit first”, and “apply widely but expect slow vetting” framing is spot‑on. It also matches what we see locally in terms of timelines and expectations.
I’ll be using this as a reference in intro sessions and advising conversations here in Madrid; it’s one of the clearest maps of the career landscape I’ve seen. 🥳