Thank you for pursuing this line of argument; I think the question of legal rights for AI is a really important one. One thought I've had reading your previous posts about this is whether legal rights will matter not only for securing the welfare of AI but also for safeguarding humanity.
I haven't really thought this fully through, but here's my line of thinking:
Since we are currently on track to create superintelligence, and since I don't think we can say with much confidence whether the AI we create will value the same things we do, it might be important to set up mechanisms that make peaceful collaboration with humanity the most attractive option for superintelligent AI(s) to get what they want.
If your best bet for getting what you want involves eliminating all humans, you are a lot more likely to eliminate all humans!
How big a deal is the congressional commission? What is the historical track record of Congress implementing the commission's top recommendation?
With hindsight, this comment from Jan Kulveit looks prescient.
I strongly upvoted this post because I'm extremely interested in seeing it get more attention and, hopefully, a potential rebuttal. I think this is extremely important to get to the bottom of!
At first glance your critiques seem pretty damning, but I would have to put a bunch of time into understanding ACE's evaluations before I could conclude whether I agree with your critiques (I can spend a weekend day doing this and writing up my own thoughts in a new post if there is interest).
My expectation is that if I were to do this I would come out feeling less confident than you seem to be. I'm a bit concerned that you haven't made an attempt at explaining why ACE might have constructed their analyses this way.
But I'm pretty confused too. It's hard to think of much justification for the choice of numbers in the 'Impact Potential Score', and deciding the impact of a book based on the average of all books doesn't seem like the best way to approach things.
More dakka means pouring more firepower onto a problem. Two examples:
Example: "Bright lights don't help my seasonal depression." More dakka: "Have you tried even brighter lights?"
Example: "We brainstormed ten ideas, and none of them seem good." More dakka: "Try listing 100 ideas."