Ronen Bar

Co-founder @ The Moral Alignment Center
328 karma · Joined · Working (6–15 years)

Bio

Participation
2

I am a social entrepreneur focused on advancing a new community-building initiative to ensure AI development benefits all sentient beings, including animals, humans, and future digital minds. For over a decade, my work has been at the intersection of technological innovation and animal advocacy, particularly in the alternative protein and investigative sectors.

I am the co-founder and former CEO of Sentient, a meta animal rights non-profit. My background includes work as an investigative journalist on television and undercover employment in slaughterhouses.

Feel free to reach out to me on LinkedIn or email (ronenbar07@gmail.com). 

I am looking for a co-founder and collaborators for the new initiative to ensure AI development benefits all sentientkind. I am happy to share ideas and receive feedback.

I have been practicing Vipassana meditation for several years.

How others can help me

I'm looking for collaborators, volunteers and a co-founder for the AI for All Sentient Beings initiative I've started (The Moral Alignment Center). I'm eager to connect with sentientists who care about animals, humans, and future digital minds. I'm open to feedback, idea-sharing, and deepening mutual understanding.

How I can help others

I offer free help with topics related to entrepreneurship, meta-activism, tech and animals, AI Moral Alignment, knowledge management systems, storytelling, language bias, journalism, and undercover investigations.

Comments
25

Topic contributions
2

I don't think we need to solve ethics in order to work on improving the ethics of models. Ethics may be unsolvable, yet some AI models are and will be instilled with some values, or there will be some system to decide on the value selection problem. I think more people need to work on that.
Just now a great post relating to the value selection problem was published:
Beyond Short-Termism: How δ and w Can Realign AI with Our Values
 

An ASI may be much better at helping oneself not feel so much suffering. We as humans are good at engineering our environment, but not our inner lives. AI may excel at inner engineering as well... very speculative.

Thanks, very interesting post. 

I think this is a crucial question that demands much more research. The fear that digital minds will suffer is a great concern of mine, but from this perspective I also fear that an ASI which is not sentient may have no robust or intrinsic morality, and hence, in the long run, may not care about biological and digital suffering.

I tried to map the main questions arising from this topic in this post Will Sentience Make AI’s Morality Better? 

Thanks, interesting post, and I wonder how this can relate to AI and the ethical questions around AI values.

How do you think we can align humans with their own best values? Is it more a matter of societal work outside the AI space, or is it also tied to the AI space?

I think they are both connected and we should work on both, and we really need a new story as a species for us to be able to rise up to this challenge. 

And thinking more long term: when AGI builds a superintelligence, which will build the next agents, and humans are somewhere 5–6 steps down the intelligence scale, what chance do we have for moral consideration and care from those superior beings? Unless we realize we need to care for all beings, and build an AI that cares for all beings...

I think this "Human Alignment" you're talking about is very important and neglected. You don't hear many people calling for an ethical transformation as a necessary adaptive step to the AGI era...

Thanks Jason.

Yes, I agree that a "woke" backlash is a significant risk. I think the first step is research to define what a sentient-centric AI truly is: what its vision is, and how it would behave in a world full of conflicts of interest between humans and animals.

In my view, the answer must be an AI that uplifts humanity and strengthens safety for humans too. A kind AI that aims for gradual change, and that, to an extent, accepts the reality that our current civilization is built on harming animals. We need an AI that helps shift this, but patiently and strategically.

I know it sounds grand, but I think as humanity we need a new narrative, and the AI era will probably bring a new narrative anyway. So we should promote the stewardship narrative toward all sentient beings. If we are building something godlike, it makes sense that we grow into this kind of responsibility. This AI for sentientkind movement isn't about "going vegan," and because of that it may trigger less public resistance and could tap into the compassion many people already have.

The hope is that with the tremendous power AI will give humans to reshape both the world and themselves, the conflict of interest with animals will shrink dramatically, making it easier to accept and implement change.
