So, previously my conclusion was that it's mostly a trade-off between equality, plus the contribution to alignment (I believe my solution increases the probability of alignment), plus getting powerful AI sooner so that fewer people die, on one side, versus the risk of biological weapons on the other.
I think the impact of equality and the other factors is higher, for the following reasons:
To get an idea of how serious the biological-weapons problem is (I don't have much knowledge of biology), I asked 8 AI models roughly the following question: what is the expected number of people who will die from biological weapons whose development was assisted by AI? I removed the outliers (the highest and the lowest answer) and averaged the rest. The average was small relative to how many people die in one year from ordinary causes, which suggests that the other problems are larger.
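To make the aggregation step concrete, here is a toy sketch of what I did (drop the highest and the lowest answer, then average the rest). The estimate values below are placeholders, not the actual answers I got, and the yearly-deaths figure is only approximate.

```python
# Toy sketch of the aggregation: drop the highest and lowest estimates,
# then average the rest (a crude trimmed mean).
# The estimates are hypothetical placeholders, NOT the real model answers.

estimates = [5_000, 20_000, 50_000, 80_000, 100_000, 150_000, 300_000, 2_000_000]

trimmed = sorted(estimates)[1:-1]        # remove the lowest and the highest answer
average = sum(trimmed) / len(trimmed)    # average of the remaining estimates

normal_deaths_per_year = 60_000_000      # roughly how many people die per year worldwide
print(f"trimmed average: {average:,.0f}")
print(f"fraction of one year of normal deaths: {average / normal_deaths_per_year:.4f}")
```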
The idea behind delaying is that if powerful artificial intelligence is created later rather than sooner, there is more time to prepare for biological weapons. But the event of humanity being prepared at time X + Y is not independent of the event of humanity being prepared at time X: the probability of both depends on the same underlying factors, such as whether humans are smart and considerate enough to prepare at all. Therefore, if the problem is not solved by time X, it is also less likely to be solved by time X + Y, so the extra time buys less than it seems.
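A toy numerical illustration of that dependence (my own made-up numbers, just to show the shape of the argument): suppose there is a latent factor, "society takes the threat seriously", that mostly does not change between time X and time X + Y. Then observing that we were not prepared by X is evidence that the factor is absent, which drags down the probability of being prepared by X + Y.

```python
# Toy illustration with made-up numbers: preparedness by time X and by X + Y
# both depend on the same latent factor ("society takes biorisk seriously"),
# so failing to be prepared by X is evidence against being prepared by X + Y.

p_serious = 0.3                       # assumed P(society takes the threat seriously)
p_x  = {True: 0.60, False: 0.05}      # P(prepared by X | seriousness)
p_xy = {True: 0.80, False: 0.10}      # P(prepared by X + Y | seriousness), includes those prepared by X

def weight(s):
    return p_serious if s else 1 - p_serious

p_prepared_xy  = sum(weight(s) * p_xy[s] for s in (True, False))
p_not_x        = sum(weight(s) * (1 - p_x[s]) for s in (True, False))
p_xy_and_not_x = sum(weight(s) * (p_xy[s] - p_x[s]) for s in (True, False))

print(f"P(prepared by X+Y)                     = {p_prepared_xy:.3f}")             # ~0.31
print(f"P(prepared by X+Y | not prepared by X) = {p_xy_and_not_x / p_not_x:.3f}")  # ~0.12
```

The exact numbers don't matter; the point is only that the conditional probability ends up well below the unconditional one.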
I also considered the impact on animals and other non-human agents. One thing I paid attention to is the following. Humans have some empathy, so it is likely that they will share some resources and/or benefits of AI with animals and other non-human agents. If there is one person in a position of power, then there is high variance in how much benefit animals (and other agents) will get, because it depends on a single person: if that person turns out to be a psychopath, animals might suffer from that outcome. The more people there are in positions of power, the lower the variance, because the amount of benefit animals get depends on more people; for animals to get nothing, every person in power would have to be a psychopath. Therefore, all else equal, not concentrating power among humans is the safer option, and that matters because of the diminishing returns of resources (utility ≈ log(resources)): with a concave utility, lower variance around the same expected amount is worth more. I'm not saying that this is the only factor at play, but I think it's worth noting.
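Here is a rough sketch of the variance point, again with made-up numbers: assume each person in power independently turns out to be a "psychopath" (gives animals nothing) with some probability p, and otherwise allocates animals an equal slice of a fixed pool. Then animals get nothing only with probability p^n, and under a log utility the many-person arrangement is worth noticeably more in expectation.

```python
# Toy sketch (made-up numbers): more people in power -> lower variance in
# what animals receive -> higher expected utility under a concave log utility.
import math
import random

random.seed(0)

p_psychopath = 0.05      # assumed chance a given power-holder gives animals nothing
pool = 1.0               # total resources potentially shared with animals (arbitrary units)

def expected_log_utility(n, trials=100_000):
    """Animals' average log-utility when n people each control pool / n."""
    total = 0.0
    for _ in range(trials):
        received = sum(
            0.0 if random.random() < p_psychopath else pool / n
            for _ in range(n)
        )
        total += math.log(received + 1e-9)   # tiny floor so log(0) stays finite
    return total / trials

for n in (1, 5, 50):
    print(f"n={n:>2}  P(animals get nothing) = {p_psychopath ** n:.2e}  "
          f"E[log utility] ≈ {expected_log_utility(n):+.3f}")
```

The expected amount animals receive is the same in every case (95% of the pool); only the variance changes, and the log utility rewards the lower-variance outcomes.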
If anyone has something to add, feel free.