Bentham's Bulldog

Comments (159)

It’s a somewhat long post.  Want to come on the podcast to discuss?

I don't agree with that. Cluelessness seems to arise only if you have reason to think that, on average, your actions won't make things better. And yet even a very flawed decision procedure will, on average across worlds, do better than chance. That seems to handle epistemic cluelessness fine.
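
To make that concrete with a toy model (the numbers are illustrative, not from anywhere): suppose a procedure recommends an action whose effect is $+1$ with probability $p$ and $-1$ otherwise. Then

$$\mathbb{E}[\text{effect}] = p(+1) + (1-p)(-1) = 2p - 1,$$

which is positive whenever $p > 1/2$. Chance corresponds to $p = 1/2$ and an expectation of $0$, so even a procedure that's only slightly better than a coin flip beats chance in expectation.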

Why can't you take seriously every plausible argument with huge implications? 

Thanks, yes, I think I fired this post off too quickly, without taking the time to read the deeper analyses of it. I'll try to give your post a read when I get the chance.

Interesting point, though I disagree. I think there are strong arguments that you should just maximize expected utility: https://joecarlsmith.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/
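
To spell out what that means (sketched here in generic notation; it's roughly the criterion the linked essay defends): pick the action $a$ that maximizes

$$EU(a) = \sum_{s} P(s)\, U(a, s),$$

where $P(s)$ is your credence in state $s$ and $U(a, s)$ is the utility of taking $a$ when $s$ obtains.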

It's made me a bit more longtermist. I think one of the more plausible scenarios for infinite value is that God exists and that acts of helping each other out infinitely strengthen our eternal relationships, and acting on such a judgment will generally result in doing conventionally good things. I also think you should have some uncertainty about ethics, so you should want the AI to engage in reflection.

Majorly disagree! I think that while you'd probably expect an animal to behave aversively in response to stimuli, it's surprising that:

  1. This distracts them from other aversive stimuli (nociception doesn't typically work that way: it's not like elbow twitches distract you and make you less likely to have other twitches).
  2. They'd respond to anaesthetic (they could just have aversive behavior that anaesthetic doesn't affect).
  3. They'd rub their wounds.

etc.
