Guive

58 karma

Comments (18)
This is an article about moral philosophy, not the internal dynamics of the EA community, and it therefore does not belong on the "community" tab. 

Yeah, my understanding is that there is debate about whether the loss in expected value from holding an emergency fund in low-yield, low-risk assets is offset by the benefits of reduced risk. The answer will depend on personal risk tolerance, current net worth, expected career volatility, etc. The main point of my comment was just that a lot of people use default low-yield savings accounts even though there's no reason to do that at all.

That's a fair point, but a lot of the scenarios you describe would mean rapid economic growth and equities going up like crazy. The expectation of my net worth in 40 years, on my actual views, is way, way higher than it would be if I thought AI was totally fake and the world would look basically the same in 2065. That doesn't mean you shouldn't save up, though (higher yields are actually a reason to save, not a reason to refrain from saving).

Thanks for this, Trevor. 

For what it's worth: a lot of people think "emergency fund" means cash in a normal savings account, but this is not a good approach. Instead, buy bonds or money market funds with your emergency savings, or put them in a specialized high-yield savings account (which, to repeat, is likely NOT the savings account you get by default from your bank).
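To make the yield gap concrete, here is a minimal sketch of compound growth on a $10,000 emergency fund. The rates are hypothetical placeholders (0.5% for a default savings account, 4.5% for a high-yield account or money market fund), chosen only to illustrate the point, not taken from any real bank:

```python
# Illustrative only: hypothetical rates, not financial advice.
# Compares compound growth of a $10,000 emergency fund at a
# default savings-account rate vs. a high-yield alternative.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Future value with annual compounding."""
    return principal * (1 + annual_rate) ** years

DEFAULT_RATE = 0.005     # hypothetical default savings account: 0.5% APY
HIGH_YIELD_RATE = 0.045  # hypothetical HYSA / money market: 4.5% APY

for label, rate in [("default", DEFAULT_RATE), ("high-yield", HIGH_YIELD_RATE)]:
    print(f"{label}: ${future_value(10_000, rate, 5):,.2f} after 5 years")
```

Over five years the difference is on the order of a couple of thousand dollars on a $10,000 balance, which is the "no reason to do that at all" point in a nutshell.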

Or just put the money in equities in a liquid brokerage account.

Thanks for writing this. Do you have any thoughts on how to square giving AI rights with the nature of ML training and the need to perform experiments of various kinds on AIs?

For example, many people have recently compared fine-tuning AIs to have certain goals or engage in certain behaviors to brainwashing. If it were possible to grab human subjects off the street and rewrite their brains with RLHF, that would definitely be a violation of their rights. But what is the alternative: only deploying base models? And are we so sure that pre-training doesn't violate AI rights? A human version of the "model deletion" experiment would be something out of a horror movie. But I still think we should seriously consider doing that to AIs.

I agree that there seem to be pretty strong moral and prudential arguments for giving AIs rights, but I don't have a good answer to the above question.

Does PPE not work, or is the issue that people don't use it?
