It's made me a bit more longtermist. I think one of the more plausible scenarios for infinite value is that God exists and that actions in which we help each other out infinitely strengthen our eternal relationships, and acting on that judgment will generally result in doing conventionally good things. I also think you should have some uncertainty about ethics, so you should want the AI to engage in moral reflection.
Majorly disagree! I think that while you'd probably expect an animal to behave aversively in response to stimuli, it's surprising that:
etc.
Thank you!