Epistemic status: Very quickly written, on a thought I've been holding for a year and that I haven't read elsewhere.
I believe that within this decade, there could be AGIs (Artificial General Intelligences) powerful enough that the values they pursue might have a value lock-in effect, at least partially. This means they could have a long-lasting impact on the future values and trajectory of our civilization (assuming we survive).
This brief post aims to share the idea that if your primary focus and concern is animal welfare (or digital sentience), you may want to consider engaging in targeted outreach on those topics towards those who will most likely shape the values of the first AGIs. This group likely includes executives and employees in top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, as well as policymakers from major countries.
Due to the risk of lock-in effects, I believe that the values of relatively small groups of individuals like the ones I mentioned (fewer than 3,000 people in top AGI labs) might have a disproportionately large impact on AGI, and consequently, on the future values and trajectory of our civilization. My impression is that, generally speaking, these people currently:
a) don't give significant priority to animal welfare;
b) don't show substantial concern for the sentience of digital minds.
Hence, if you believe those things are very important (as I do), and you think that AGI might come in the next few decades[1] (as a majority of people in the field believe), you might want to consider this intervention.
Feel free to reach out if you want to chat more about this, either here or via my contact information, which you can find here.
^ Even more so if you believe, as I do along with many software engineers in top AGI labs, that it could happen this decade.
This is indeed a good idea (although it isn't that clear to me how "targeted outreach to people there" would work; I haven't done targeted outreach before).
A future in which the current situation continues, but with AI making us more powerful, would in all likelihood be a very bad one if we include farmed animals (it gets more complicated if you include wild animals).
See the following relevant articles:
Optimistic longtermism would be terrible for animals
If we don't end factory farming soon, it might be there forever
To me, it seems likely that the "expected value" of the future depends mostly on what happens to farmed and wild animals. See the Moral Weight Project: "Given hedonism and conditional on sentience, we think (credence: 0.65) that the welfare ranges of humans and the vertebrate animals of interest are within an order of magnitude of one another".
Why the expected number of farmed animals in the far future might be huge