Hi everyone,
I’ve recently been diving into the world of open-source robotics, especially platforms like Reachy by Pollen Robotics, and it got me thinking: how can we best align these emerging technologies with long-term goals in effective altruism?
Most discussions around AI safety focus on large models and digital systems, but embodied AI (robots with a physical presence) could also have major societal impacts in areas like elder care, education access, and disaster response. Are there frameworks or research directions within EA that explore the altruistic potential, or the risks, of robotics in this way?
Would love to hear thoughts on:
- Whether physical robotics deserves more attention in longtermist cause areas
- How we might evaluate the cost-effectiveness or safety of robotics-based interventions
- Any promising efforts that align robotics with global health, poverty reduction, or existential risk mitigation
References:
https://www.theengineeringprojects.com/2024/04/what-is-robotics.html
https://80000hours.org/problem-profiles/artificial-intelligence-risk/
https://pollen-robotics.com/reachy/
I think the focus is generally placed on the cognitive capacities of AIs because those are expected to be a bigger deal overall.
There is at least one 80,000 Hours podcast episode on robotics. It tries to explain why robotics is hard to apply ML to, but I didn't fully understand it.
Also, I think Max Tegmark wrote some stuff on slaughterbots in Life 3.0. Yikes!
You could also try looking into other differential development work if you want. I recently liked "AI Tools for Existential Security"; I think it's a good conceptual framework for emerging-tech and applied-ethics questions. Of course, it still leaves you with a lot of questions :)