Answer by damc4

Previously, my conclusion was that this is mostly a trade-off between, on one side, equality, the contribution to alignment (I believe my solution increases the probability of alignment), and faster AI meaning fewer people die, and on the other side, the risk of AI-assisted biological weapons.

I now think the impact of equality and the other factors outweighs the biological-weapons risk, for two reasons:

  1. To get a sense of how serious the problem is (I don't have much knowledge of biology), I asked 8 AI models for the expected number of people who will die from biological weapons whose development was assisted by AI (more or less; that wasn't the exact question). I removed the outliers (the highest and the lowest answers) and averaged the rest. The average was small relative to the number of people who die in one year from ordinary causes, which suggests that the other problems are larger. A sketch of this aggregation appears after the list.
  2. The idea is that if powerful artificial intelligence is created later rather than sooner, there is more time to prepare for biological weapons. But the event of humanity being prepared at time X + Y is not independent of the event of humanity being prepared at time X, because both depend on the same underlying factors, such as whether humans are smart and considerate enough to prepare at all. Therefore, if the problem is not solved by time X, it is also unlikely to be solved by time X + Y. The toy simulation after the list illustrates this.
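
A minimal sketch of the aggregation from point 1, assuming the procedure was: drop the single highest and lowest estimates, then average the rest (a trimmed mean). The values in `estimates` are hypothetical placeholders, not the models' actual answers.

```python
def trimmed_mean(estimates: list[float]) -> float:
    """Average the estimates after dropping the single highest and lowest."""
    if len(estimates) < 3:
        raise ValueError("need at least 3 estimates to trim both extremes")
    trimmed = sorted(estimates)[1:-1]  # drop one outlier from each end
    return sum(trimmed) / len(trimmed)

# Hypothetical placeholder estimates (expected deaths), one per model;
# NOT the actual answers the 8 models gave.
estimates = [2e4, 5e4, 1e5, 2e5, 4e5, 8e5, 2e6, 5e7]
print(f"Trimmed mean: {trimmed_mean(estimates):,.0f}")
```

For scale, roughly 60 million people die each year from all causes, which is the kind of baseline the comparison in point 1 is made against.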
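
And a toy Monte Carlo sketch of the dependence argument in point 2: a single latent "competence" factor drives preparedness at both time X and time X + Y, so failing by X is evidence of low competence, which also makes late success unlikely. All probabilities here are made-up illustrative values, not estimates of anything real.

```python
import random

random.seed(0)
TRIALS = 100_000

prepared_any = 0   # prepared by X + Y, unconditionally
not_by_x = 0       # trials where humanity was not prepared by time X
late_success = 0   # of those, prepared by X + Y anyway

for _ in range(TRIALS):
    # Shared latent factor: how smart and considerate humanity is about preparing.
    competence = random.random()
    prepared_by_x = random.random() < competence
    # The extra time Y gives one more, smaller chance of success, and that
    # chance depends on the same competence factor.
    prepared_by_x_plus_y = prepared_by_x or (random.random() < 0.2 * competence)
    prepared_any += prepared_by_x_plus_y
    if not prepared_by_x:
        not_by_x += 1
        late_success += prepared_by_x_plus_y

print(f"P(prepared by X+Y)                     ≈ {prepared_any / TRIALS:.2f}")
print(f"P(prepared by X+Y | not prepared by X) ≈ {late_success / not_by_x:.2f}")
```

Under these made-up numbers the unconditional probability comes out around 0.53, but conditional on failure at time X it drops to roughly 0.07. That is the shape of the argument: the extra time Y helps far less than it would if the two events were independent.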

If anyone has something to add, feel free.