Interesting!
I think my worry is about people who don't think they need advice about what the future should look like. When I imagine them making the bad decision despite having lots of time to consult superintelligent AIs, I imagine them just not being that interested in making the "right" decision? And therefore I imagine their advisors not being proactive in telling them things that are only relevant for making the "right" decision.
That is, assuming the AIs are intent-aligned, they'll only help you in the ways you want to be helped:
I do hope that people won't be so thoughtless as to impose their vision of the future without seeking advice, but I'm not confident.
I agree that the text an LLM outputs shouldn't be thought of as communicating with the LLM "behind the mask" itself.
But I don't agree that it's impossible in principle to say anything about the welfare of a sentient AI. Could we not develop some guesses about AI welfare by getting a much better understanding of animal welfare? (For example, we might learn much more about when brains are suffering, and this could be suggestive of what to look for in artificial neural nets.)
It's also not completely clear to me what the relationship is between the sentient being "behind the mask" and the "role-played character", especially if we imagine conscious, situationally-aware future models. Right now, it's for sure useful to see the text output by an LLM as simulating a character, which has nothing to do with the reality of the LLM itself, but could that be related to the LLM not being conscious of itself? I feel confused.
Also, even if it were impossible in principle to evaluate the welfare of a sentient AI, you might still want to act differently in some circumstances:
Why does "lock-in" seem so unlikely to you?
One story:
You could imagine AI welfare work now improving things by putting AI welfare on the radar of those people, so they're more likely to take AI welfare into account when making decisions.
I'd be interested in which step of this story seems implausible to you. Is it the claim that AI technology will make "lock-in" possible?
Good question! I share that intuition that preventing harm is a really good thing to do, and I find it difficult to strike the right balance between self-sacrifice and pursuing my own interests.
I think if you argue that that leads to anything close to a normal life, you are being disingenuous.
I think this is probably wrong for most people. I think most people will be much less productive if they make themselves unhappy by forcing themselves to make sacrifices they don't want to make. And I think most people actually need a fairly normal social life etc. to avoid that. I believe this because I've seen and heard stories of people burning out from trying to work too hard, and I've come close myself.
I think the best way to have a large impact probably looks like working as hard as you sustainably can (for most people, I think this is working hard in a normal 9-5 work week or less), and spending enough time thinking seriously about the best strategy for you to make the biggest difference. It might also involve donating money, but again I think it's a good use of money to spend some on what makes you happy, to prevent resentment and burnout.
I think misaligned AI values should be expected to be worse than human values, because it's not clear that misaligned AI systems would care about, e.g., their own welfare.
Inasmuch as we expect misaligned AI systems to be conscious (or to have whatever it is we need in order to care about them) and also to be good at looking after their own interests, I agree that it's not clear from a total utilitarian perspective that the outcome would be bad.
But the "values" of a misaligned AI system could be pretty arbitrary, so I don't think we should expect that.
Interesting exercise, thanks! The link to view the questions doesn't work though. It says: