
Tristan Katz

Ethicist @ University of Fribourg, Switzerland
65 karma · Working (0-5 years)

Bio


I recently completed a PhD exploring the implications of wild animal suffering for environmental management. You can read my research here: https://scholar.google.ch/citations?user=9gSjtY4AAAAJ&hl=en&oi=ao

I am now considering options in AI ethics, governance, or the intersection of AI and animal welfare.

Posts
1


Comments
28

I like and agree with this post a lot. I just want to push back on this part:

You typically need 100-200+ applications to land a job.

These numbers are crazy. Many people may well make that many applications, but they certainly shouldn't: at that volume you can't put in the effort needed to give each one a real shot, and you're probably applying for many positions where you have next to no chance anyway. Not to mention that after so many rejections, I would be highly suspicious of whoever did end up hiring me (surely there's some bad reason why everyone else passed)!

So: better to apply where you have a real shot, make fewer high-quality applications, and end up in a position where you can take an offer from an employer you feel confident you want to work for. Ideally, you might even be able to negotiate. 

Also: I haven't looked into these numbers but I suspect they might also be inflated by job-searching requirements in many unemployment insurance schemes. 

Definitely agree. Also, in response to:

Then, try talking to people for awhile without drinking. You'll find once you get past the initial awkwardness, you start feeling good.

Well, having met Kat once, it was clear that she is very outgoing. I'm about average on that front, and it's clear to me that in some situations being naturally outgoing is a huge advantage in getting past that initial awkwardness.

Hi Karen, great post!

I totally share the perspective that the way things currently work, and the dominant institutions, are likely to be shaken up in the coming years/decades, and that this presents opportunities to try to steer things in the right direction. I agree that this is probably more impactful than trying to correct things after the fact.

I have a few questions about the framing or the emphasis, which I think could change the conclusions one reaches regarding what we should do:

  • Q1: how might these alternative paradigms impact the scale of suffering? As you acknowledge, the problem with capitalism is not that it causes animal exploitation, but that it has increased its scale. In evaluating the risk of each paradigm, I would then be interested to see your take on how the number of animals affected, or the severity of their suffering, might change under each one.
  • Q2: multi-cause disruption or just AI? I share the perspective that AI may disrupt economic systems, but I'm less sure about the other factors you mentioned. Global inequality has increased since 1970, but if you look further back, inequality levels were higher. Climate change is going to have big effects, but despite calls by some people to rethink economic systems, the solutions being seriously considered seem to largely sit within the current economic paradigm. And then, I'm actually not aware of institutional decay at a global level - is there evidence for that, as a distinct phenomenon? 
    These questions lead me to wonder whether this could be framed more directly as a response to anticipated AI takeoff scenarios.
  • Q3: alternative economic systems or other systems/institutions? Related to the above, you shift from "these factors might disrupt economic systems" to (in your own words) "economic systems are changing". But it seems quite easy to imagine AI takeoff scenarios that still work within a growth-oriented capitalist system (and also for futures with climate change). If we're unsure about whether economic systems will change or not, one option is to hedge and try to affect all proposed paradigms, but another strategy would be to try to help animals in ways that are robust across different economic paradigms - such as by trying to influence AI development to encode more animal-friendly values. Of course, doing both would be good - but there would be an argument for focusing efforts on paradigms or institutions that we are confident will shape whatever changes occur in the future.

Let me know if any of those are unclear!

I found this post inspiring while reading it, but after reaching the end I realized that I have very little idea of what it's actually telling me to do. I'm writing this comment far too late in the hope that someone might tell me what I'm missing.

So - how do I avoid being bycatch? Here's my reconstruction of the argument and my evaluation of each point:

1. Build up career capital slowly, over the long term - ok, but AI timelines are short, and the future is unpredictable. A lot of EAs originally did exactly this, going into careers like medicine only to find later that, given their updated beliefs, it doesn't help them. Arguably, being able to pivot is better. ❌

2. Start with small & concrete actions that provide evidence of your altruistic efforts. Ok, but isn't that exactly the kind of bycatch activity this post is warning against? ❌

3. Ask questions to figure out what needs to be done. Ok, this I think is generally good advice, and it could help ensure that one's skills end up being useful. ✅

4. Take your time to figure things out. This point counters (1). Taking time to figure things out likely means delaying any commitment to a long-term plan, or switching track several times. I've done this myself, but I think this is what leaves people as bycatch. ❌

5. Engage in the community. This again is good advice, but I'm not sure how it prevents someone from becoming bycatch. ❔

6. Not everything needs to be branded as EA. Ok, I suppose if I have some skills or career capital, this helps ensure that they are recognized as useful rather than as 'churn'. ✅

7. This one I understand as: be brave and try things. Again, this seems to be exactly what was warned against - the EA who writes blog posts, applies for grants etc. but nothing happens. Trying things has a high cost and can leave people very demotivated in a competitive ecosystem. ❌

8. Follow EA principles. This seems more like asking the bycatch not to give up hope, rather than real advice. ❌

9. The wording of this one was strange: not being bycatch is obviously a way to not be bycatch. But I suppose what is meant is: try to be valuable very generally, rather than just being a committed EA employee. This is useful, but the real question is how, and as I've indicated I feel fairly unconvinced by most of the answers in the rest of the post.

Sure! And like I said, I do think this is valuable: it just seems more obviously valuable as a way to ensure the best outcomes (aligned AI), rather than as a means to avoid the worst outcomes. 

Thanks, again, for the response.

I acknowledge the examples I gave were kind of bad. If the probability of sentience here really is 6.8%, then that is significant. It's prompted me to look into that evidence, and there is truly more of it than I thought. So that's an update.

I still think, even if they do deserve consideration, there's an argument to be made for delaying that consideration. The argument is of the form "the world isn't ready yet". I'm very aware that most vegetarians and vegans are also environmentalists. But that's precisely because they think that environmental protection protects these animals - most have never thought about suffering in nature. My own anecdotal experience is that when I actually talk to such people and make them aware of the ways that wild animals suffer, they do tend to favor interventions that would help those animals, at least where the interventions are not too environmentally disruptive.

So I feel quite confident that the pro-conservation attitude is an intermediary step. People need to care for animals -> then they need to become aware of wild animal suffering -> then they will favor intervention in nature. 

If you only focus on the short term, taking such attitudes as fixed, then you can never hope to help very many of these animals.

Regarding your last point: I see. I thought this was an argument for "alignment via moral reasoning as an addition to alignment via control", not "alignment via moral reasoning instead of alignment via control." So you would hope that alignment via moral reasoning would displace or replace alignment via control.

In that case, your argument is plausible but... quite hopeful? I'm sure many people will pursue control methods regardless. I suppose you might argue that, if enough people buy your argument, then research on AI that is merely controlled will advance more slowly, while research on AI that does its own moral reasoning, and is therefore harder to misuse, will advance faster or at least in parallel. Then I would accept that this might reduce the chance of malevolent misuse, but that's quite a hopeful scenario! In less hopeful scenarios, I am unsure whether people concerned with malevolent misuse ought to pursue this kind of work, or whether they wouldn't be better off simply advocating for a pause or slowdown.

Thanks for your response!

I think the cost from changes in public policies resulting from advocating for the consideration of effects on soil animals is realistically negligible.

I'm not sure why you say this - for animal advocacy organizations, you're potentially asking them to change their interventions, causing fewer vertebrate lives to be saved. Maybe I should have been clear that while Birch talks about public policies, I think we can apply the same reasoning to charitable policies here.

Could you give concrete examples? 

I suppose my thinking is that if we took seriously all risks that are 1) small in probability, and 2) such that we largely lack the information to assess them, we'd be suffering from complex cluelessness. As Greaves points out, we face this kind of cluelessness in many ordinary situations: choosing a career, deciding whether or not to give up coffee, whether to have a kid. In each decision of this type there are many possible outcomes of small probability, but which could end up being quite important (e.g. your kid could end up becoming a dictator; a career might cause you to marry a different person, etc.). In personal decision-making, most people largely ignore all these possibilities and focus on the more certain and/or likely outcomes. In public decision-making contexts, I think the sensible approach is something similar, but with more resources invested into researching the different outcomes in order to lower uncertainty first.
I suspect you might say that in this case we have more evidence than the examples I've given, and I have to admit I've never looked into the evidence of sentience in nematodes, mites or springtails.

RP's probability of sentience of nematodes is 6.8 %, which is not that small. The probability of dying in a car crash is around 2.70*10^-9 per km (= 10^-6/370), and many people still consider it reasonable to fasten seat belts for increased safety on short trips, even if they would prefer it to be optional.

Seatbelts are a good analogy, because originally most people didn't think it important to fasten them. It was only after policymakers became aware of the large number of casualties that they made laws and ran information campaigns to encourage seatbelt use. So this goes to show that when a risk is small and uncertain, people tend to discount it; but when it becomes certain, and the cost of avoiding it is small, people are willing to act. In the case you've presented, the risk is both small and uncertain. Hence my suggestion of focusing efforts on research first.

I personally think the proximate impacts are the driver of the overall effect.

By 'proximate' you mean short-term, and by 'the overall effect', you mean the long-term outcomes, right? Could you explain why you think that?

It seems very likely that worldviews will change slowly, requiring us to focus primarily on changing people now, in order to help most animals later. I expect that empathy for some animals (e.g. farmed animals) will gradually lead to empathy for others (e.g. wild animals). It is hard to expect people to care deeply about all animals when they're still eating some of them. So my theory of change starts with efforts that increase caring for those animals that people are closest to, and gradually encourages more radical empathy - the expanding moral circle. And if sentience is probabilistic, that's fine: it's just a circle with fuzzy edges. I assume that most 'traditional' animal rights activists also believe in this vision of progress.

And as I said in my last comment, changing interventions now, due to the effects on animals with a small probability of sentience, might mean switching to interventions which less effectively lead to a nonspeciesist future, e.g. by not encouraging people to become vegan (since eating meat has been shown to hinder empathy for animals), or by causing environmentalists to oppose the policies of animal organizations. So by focusing on these animals instead of focusing on changing minds or policies, we might help animals in the short term while harming progress in the long term.

I still don't think anything beats a good bar of Whittaker's :)
