Karen Singleton

Researcher
136 karma · Joined · Seeking work · Working (15+ years) · New Zealand

Bio

Researcher, strategic thinker and compassionate advocate focusing on the intersection of animal welfare, economics and systemic change. Previously teacher, civil servant and emergency manager. Volunteer Policy President for the Animal Justice Party Aotearoa New Zealand.

Comments (22)

Great question! I'm thinking about how the economic disruptions from AI create opportunities to reshape the foundational rules before new systems crystallise.

For example, as AI automates more labour and potentially destabilises growth-oriented models, we might see experiments with post-growth economics, universal basic services or entirely new frameworks for measuring value. These transition moments are when assumptions about what "counts" economically become malleable, including whether animals are seen as production inputs or beings with inherent worth.

Right now, our economic systems have deeply embedded assumptions that treat animals as commodities, externalise ecological costs and prioritise efficiency over welfare. But during systemic transitions, these assumptions become visible and potentially changeable in ways they normally aren't.

I think this fits most naturally into your "predict and prepare" category, but with a focus on economic system design rather than just technological applications. Instead of just preparing for cheaper cultivated meat, we might also prepare for the governance frameworks that will determine how new economic models treat animals.

The policy levers might be things like: ensuring animal welfare is embedded in any new economic measurement systems, preventing harmful defaults from getting locked into emerging governance structures or influencing how post-growth economic experiments value different forms of life.

Does that distinction between technological applications and systemic economic design make sense? I suspect the latter might be more neglected right now.

I've been exploring some of these ideas in more depth [here].

Thanks for this very clear wake-up call. I've been wrestling with similar questions, and I think this post offers measured and sensible approaches.

I fully agree with your "This matters for animals and their advocates" section. The idea that AI issues are just for tech bros is such a dangerous blind spot. The potential scenarios you outline (intensified factory farming, ecosystem lock-in, breakthrough technologies) really drive home how transformative this could be for animals specifically.

I also strongly agree with the "build capacity" approach, especially your point that theories of change themselves need to shift. I think you're right that many current AI-for-animals efforts focus on improving efficiency within existing frameworks rather than grappling with how fundamentally different the strategic landscape might become.

On AI welfare, the research-first approach you highlight makes sense; any efforts here need to be careful, measured, and credible given the risks of getting this wrong.

I'm less convinced that "optimise for immediate results" is the right response to uncertainty about AI trajectories. If we're truly in a transition period, the most important work might actually be the hardest to measure in the short term, like influencing the foundational assumptions being built into emerging systems right now.

The "predict and prepare" approach feels most promising to me, but I think we might be too focused on specific AI applications (like cultivated meat acceleration) we're likely heading into entirely new economic and governance systems where the basic rules about value, ownership, and moral consideration are being rewritten. The assumptions embedded during these transition periods could determine whether animals are treated as production units or moral patients for generations. Therefore the economic frameworks emerging alongside AI development could be just as consequential as the AI systems themselves.

Questions I'm asking myself (and feel free to posit any answers!): Are animal advocates engaging enough with the economic/governance transitions happening alongside AI development, or are we too narrowly focused on the technology itself? Are we thinking big enough about the window of opportunity during these transitions?

Thanks again for this piece; I think it'll make a good shareable read to help others in our organisations understand the questions we should be asking and what we should be doing.

Hi Tristan,

Thank you so much for taking the time to engage with this piece so thoughtfully! Your questions help clarify some key assumptions I'm making and highlight areas where I could be more precise. I appreciate the constructive pushback.


On Q1 (scale of suffering): Just to clarify one point of framing: "As you acknowledge, the problem of capitalism is not that it causes animal exploitation, but that it's increased its scale." I don't feel this is my conclusion; I see industrial capitalism as having structurally embedded animal commodification (turning animals into 'production units'), with neoliberalism then scaling that up massively. So capitalism both created a qualitative shift in how animals are conceptualised and enabled a quantitative expansion. I'm therefore worried that new paradigms could repeat similar structural exclusions regardless of scale.

Having said that, you're absolutely right: scale is important to understand. Perhaps I haven't been clear enough that my planned inclusion/exclusion analysis would look at whether emerging systems will trend toward reducing animal numbers or toward finding new ways to exploit animals at scale.

For example, post-growth paradigms with genuine consumption reduction might decrease total animal use, but if the focus remains on "ethical" local products without challenging the underlying commodification, we could see welfare improvements alongside persistent numbers. For AI scenarios, the scale implications feel even more dramatic: post-scarcity could eliminate animal agriculture entirely, or it could make intensive systems so efficient that animal use expands in ways we haven't imagined. Developing better frameworks for estimating scale effects across paradigms could be a valuable contribution of this research.


On Q2 (multi-cause vs AI-focused): This is a really fair challenge to my framing, and you're right that I should provide more evidence for some of these claims. The institutional decay point particularly deserves more support; I was thinking of things like declining trust in democratic institutions and in international cooperation, but you're right that "decay" might be too strong or not sufficiently global. I do think the fact that the climate solutions being seriously considered largely sit within the current economic paradigm might reflect that paradigm's dominance rather than its resilience; alternative approaches may simply not get adequate consideration in mainstream policy spaces.

I appreciate the suggestion to frame this more directly around AI takeoff, but I'm genuinely curious about how multiple factors might interact, especially energy constraints. Some people I speak with believe AI will "starve itself" due to energy limitations, while others see AI as the primary disruptor; there are genuinely differing views out there. Rather than betting on one factor being the "main disruptor," I think the convergence itself creates the instability that opens windows for change. We might not know which combination of pressures will be decisive, but I do feel confident that the current trajectory is unsustainable and that economic systems are changing; the question is how, not whether. Right now I want to stay curious about all of these factors rather than narrowing prematurely to one driver.


On Q3 (robustness across paradigms): This really gets to the heart of the strategic question, and I think you've identified the key tension in my approach. You're absolutely right that there's a case for focusing on interventions that are robust across economic paradigms rather than trying to influence each potential alternative. The AI governance angle you mention is compelling precisely because AI development seems more certain to happen than, say, degrowth adoption.

That said, I do think we can be confident that economic systems are changing, not just might change. The convergence of multiple pressures (AI, climate, energy constraints, etc.) creates instability that opens windows for change, even if we can't predict which factors will be most decisive. When I've shared this work elsewhere, others have pointed out that even the 2033 farmed animal projections I cite assume the current (unsustainable) system can be sustained for another eight years, which they find implausible. We might not know which disrupting factor will be primary, or how they'll combine, but the status quo trajectory seems untenable.

However, your point about robust interventions is well taken. Maybe the most valuable approach is identifying leverage points that matter regardless of which paradigm emerges.


Thanks again for such a thoughtful engagement. These questions were really helpful!

Thank you so much for your comment! This is my first solo post on the Forum, so it's nice to have a supportive first comment, though I look forward to challenging comments too.

Thanks so much for this thoughtful and clear breakdown; it's one of the most useful framings I've seen for thinking about strategy in the face of paradigm shifts.

The distinction between the “normal(ish)” and “transformed” eras is especially helpful, and I appreciate the caution around assuming continuity in our current levers. The idea that most of today’s advocacy tools may simply not survive or translate post-shift feels both sobering and clarifying. The point about needing a compelling story for why any given intervention’s effects would persist beyond the shift is well taken.

I also found the discussion of moral “lock-ins” particularly resonant. The idea that future systems could entrench either better or worse treatment of animals, depending on early influence, feels like a crucial consideration, especially given how sticky some value assumptions can become once embedded in infrastructure or governance frameworks. There’s probably a lot more to map here about what kinds of decisions are most likely to persist and where contingent choices could still go either way.

I’m exploring some of these questions from a different angle, focusing on how animal welfare might (or might not) be integrated into emerging economic paradigms (I hope to post on this soonish) but this post helped clarify the strategic terrain we’re navigating. Thanks again for putting this together.

Thanks for this, it’s a hugely valuable exploration and an invitation to the community to think beyond the short-term horizon. This mindset feels vital for anyone working at the intersection of AI, economic change and animal welfare.

I feel EA is generally good at identifying neglected problems within existing systems, but there's a whole category of neglectedness that emerges during transitions - where familiar advocacy approaches might lose traction, where new decision-makers enter the picture, and where the very metrics of moral progress could shift. I find this space fascinating and full of opportunity (and risk), as it seems you do!

The deep dives you've shared on AI and animal advocacy illustrate this well. They show how even our most established interventions (corporate campaigns, research, network building) could be fundamentally transformed. But what's particularly interesting is how these AI-driven changes are happening within our current economic paradigm. When we layer on the possibility of broader economic transitions, the complexity multiplies.

We need to understand how values get embedded when paradigms shift. It's a different kind of tractability analysis: instead of asking "how do we solve this problem now?" we're asking "how do we ensure this problem remains solvable later?" or even better "how do we design out this problem during the shift?"

Thanks again for this thoughtful piece.

Thanks for this thoughtful and expansive post; it really succeeds as an invitation to think about animal advocacy under radically different economic conditions. I appreciated how it foregrounds questions over predictions, especially around how cultural narratives, institutional inertia and new leverage points like attention and moral imagination might shape the future of factory farming.

This connects closely to research I’m currently doing on how different economic paradigms may reduce (or risk entrenching) animal exploitation (I hope to share a Forum piece on it soonish). 

Your piece helped clarify some key gaps in current discourse, particularly around AI-aligned paradigms where animals are often entirely absent unless explicitly centred. 

I particularly appreciated your attention to asymmetrical strategies and the need for advocacy to remain relevant in a landscape where labour and funding may be less of a bottleneck, but attention and narrative become the scarce resources.

It’s exciting to see this kind of cross-paradigm, long-view thinking on the Forum. I hope we see more of it!

Thank you for this post; it articulates a tension I’ve felt for some time volunteering within a single-issue (animal-focused) political party. I found the section on limited legitimacy and the risks of shifting positioning without supporter consent particularly resonant: that trust is hard-won and easy to lose.

At present, we’ve tended to respond with “we only focus on X,” but as we mature as a party, I’m hopeful we’ll increasingly draw on our core values (which include compassion and nonviolence) to guide which issues we engage with and how. Our circle of compassion is already wide, even as an animal-focused party, from ecosystems to poverty to food systems, so there’s a natural (and values-aligned) path to grow into that broader voice. I hope, as we do, our supporters will recognise the same values that drew them in and walk with us.

Great post, clear and insightful! I really appreciate how you’ve presented this perspective on the cultivated meat debate. 

Your point about the substitution effect, that cultivated meat may not replace conventional meat but instead compete with other alternative proteins, is striking and not something I had considered before. I had assumed that meat-eaters would be more likely to embrace cultivated meat, but your argument that hybrid products might not attract them, given the reception to things like the Beyond or Impossible Burgers, is compelling.

I love that you're asking such critical questions and admire how you’ve kept the post concise while still offering constructive recommendations for the EA movement. 

My hope is that cultivated meat will enable us to replace animal products for companion animals, but looking at your information on scale and cost, I realise I'm being pretty optimistic about that. Do you feel that will be a viable option at some point in the future?

Also, I'm embarrassed to admit I didn't know the reference to Omelas so thank you for introducing me to that story.

Thanks again for such a thought-provoking and well-balanced post!

Thank you for writing this comprehensive proposal. I agree with your conclusion that it's not a case of if but when, and that we should be improving our pandemic planning now.

Industrial animal agriculture creates conditions where pathogens can evolve and spread rapidly between densely housed animals, potentially creating new zoonotic diseases that can jump to humans. This factor alone raises the likelihood of future pandemics and strengthens the case for robust early detection systems.

The comparison to fire protection spending provides a compelling perspective. It's striking that New Zealand spent nearly three times more on fire protection than on pandemic preparedness, despite COVID-19 costing the country roughly 50 times more than annual fire damage. This kind of data-driven comparison makes a strong case for increasing investment in pandemic surveillance.

I hope you're able to get this information to MoH!
