I tried to figure out whether MIRI’s directions for AI alignment were good, by reading a lot of stuff that had been written online; I did a pretty bad job of thinking about all this.
I'm curious about why you think you did a bad job at this. Could you roughly explain what you did and what you should have done instead?
If you can manage it, head to the Seattle Secular Solstice on Dec 10, 2016. Many of us from Vancouver are going.
Notice that the narrowest possible offset is avoiding an action: doing so perfectly undoes the harm one would have done by taking it. Every time I stop myself from doing harm, I can think of myself as buying an offset of the harm I would have done, at the price it cost me to avoid it.
I think your arguments against offsetting apply to all actions. The conclusion would be to never avoid doing harm unless it's the cheapest way to help.
I don't understand this. Have you written about it, or do you have a link that explains it?