“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.
As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”
The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”
Read the rest on Forbes.
I'm currently working in technical AI safety, and I have two main thoughts on this:
1) We currently don't have the ability to robustly imbue AI with ANY values, let alone values that include all animals. We need to get much further in solving this technical problem (the alignment problem) before we can meaningfully take any actions that will improve the long-term future for animals.
2) The AI Safety community seems generally on board with animal welfare, but it's not a significant priority at all, and I don't think they take seriously the idea that there are S-risks downstream of human values (e.g. locking in wild-animal suffering). I'm personally pretty worried about this, not because I have a strong take on the probability of S-risks like this, but because the general vibe is so apathetic about this kind of thing that I don't trust the community to notice and take action if it were a serious problem.
Thanks for your comment. Are there any actions the EA community can take to help the AI Safety community prioritize animal welfare and take more seriously the idea that there are S-risks downstream of human values?