Currently, I'm pursuing a bachelor's degree in Biological Sciences with the aim of becoming a researcher in biorisk, because I was confident that humanity would stop inflicting tremendous amounts of suffering on other animals and would come to have net positive value in the future.
However, there was a nagging thought in the back of my head about the possibility that it would not do so, and I found this article suggesting that there is a real possibility that such a horrible scenario might actually happen.
If there is indeed a very considerable chance that humanity will keep torturing animals at an ever-growing scale, and thus keep having net negative value for an extremely large portion of its history, doesn't that mean that we should strive to make humanity more likely to go extinct, not less?
No, there is no way to be confident.
I think humanity is intellectually on a trajectory towards greater concern for non-human animals. But this is not a reliable argument. Trajectories can reverse or stall, and most of the world is likely to remain, at best, indifferent to and complicit in the increasing suffering of farmed animals for decades to come. We could easily "lock in" our (fairly horrific) modern norms.
But I think we should probably still lean towards preventing human extinction.
The main reason for this is goal convergence: reducing extinction risk fits together with the other things we're trying to achieve, while increasing it does not.
It's just way harder to integrate pro-extinction actions into the other things that we care about and are trying to do as a movement.
We care about making people and animals healthier and happier, avoiding mass suffering events / pandemics / global conflict, improving global institutions, and pursuing moral progress. There are many actions that can improve these metrics - reducing pandemic risk, making AI safer, supporting global development, preventing great power conflict - which also tend to reduce extinction risk. But there are very few things we can do that improve these metrics while increasing x-risk.
Even if extinction itself would be positive expected value, trying to make humans go extinct is all-or-nothing, and you will probably never be presented with a choice where x-risk is the only variable at play. Most actions that increase human x-risk at the margin also increase the chance of other bad outcomes - catastrophes that involve enormous suffering without actually ending humanity. This means there are very few actions you could take with a view towards increasing x-risk that are positive expected value.
I know this is hardly a rousing argument to inspire you in your career in biorisk, but I think it should at least help you guard against taking a stronger pro-extinction view.