Yeah, my understanding is that there's debate about whether the loss in EV from holding an emergency fund in low-yield, low-risk assets is offset by the benefits of reduced risk. The answer will depend on personal risk tolerance, current net worth, expected career volatility, etc. The main point of my comment was just that a lot of people use default low-yield savings accounts even though there's no good reason to do that at all.
That's a fair point, but a lot of the scenarios you describe would involve rapid economic growth, with equities going up like crazy. The expectation of my net worth in 40 years, on my actual views, is way, way higher than it would be if I thought AI was totally fake and the world would look basically the same in 2065. That doesn't mean you shouldn't save, though (higher yields are actually a reason to save, not a reason to refrain from saving).
Thanks for this, Trevor.
For what it's worth: a lot of people think "emergency fund" means cash in a normal savings account, but this is not a good approach. Instead, buy bonds or money market funds with your emergency savings, or put them in a specialized high-yield savings account (which, to repeat, is likely NOT the savings account you get by default from your bank).
Or just put the money in equities in a liquid brokerage account.
In the case at hand, Matthew would have had to, at some point, represent himself as supporting slowing down or stopping AI progress. For at least the past 2.5 years, he has been arguing against doing that, in extreme depth, on the public internet. So I don't really see how you can interpret him starting a company that aims to speed up AI as inconsistent with his publicly stated views, which seems like a necessary condition for him to be a "traitor". If Matthew had previously claimed to be a pause-AI guy, then I think it would be more reasonable for other adherents of that view to call him a "traitor." I don't think that's raising the definitional bar so high that no one will ever meet it; it seems like a very basic standard.
I have no idea how to interpret "sellout" in this context; I have mostly heard that term used for situations like rappers making washing machine commercials. Insofar as I am familiar with the word, it seems obviously inapplicable here.
From an antirealist perspective, at least on the 'idealizing subjectivism' form of antirealism, moral uncertainty can be understood as uncertainty about the result of an idealization process. On this view, there is some function that takes your current, naive values as input and produces your idealized values as output, and your moral uncertainty is uncertainty about what that function returns.
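To make that structure concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration of the picture above (the `idealize` function, the value representation, and the example credences are my own stand-ins, not anything from the idealizing-subjectivism literature):

```python
from typing import Callable, Dict

# Hypothetical stand-in: represent values as weights on considerations.
NaiveValues = Dict[str, float]      # e.g. {"honesty": 0.9, "hedonism": 0.2}
IdealizedValues = Dict[str, float]

# The (unknown) idealization process: what you would value after full
# information, flawless reasoning, vivid imagination, etc. We can name its
# type without knowing how to compute it.
idealize: Callable[[NaiveValues], IdealizedValues]

# Moral uncertainty, on this picture, is a credence distribution over the
# possible outputs of idealize(my_current_values), since the function's
# output is not known in advance.
credences_over_outputs: Dict[str, float] = {
    "idealized values are roughly utilitarian": 0.4,
    "idealized values are pluralist": 0.6,
}
```

The point of the sketch is just that nothing here requires mind-independent moral facts: the uncertainty lives entirely in not knowing the output of a process applied to your own values.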
Thanks for writing this. Do you have any thoughts on how to square giving AI rights with the nature of ML training and the need to perform experiments of various kinds on AIs?
For example, many people have recently compared fine-tuning AIs to have certain goals or engage in certain behaviors to brainwashing. If it were possible to grab human subjects off the street and rewrite their brains with RLHF, that would definitely be a violation of their rights. But what is the alternative? Only deploying base models? And are we so sure that pre-training doesn't violate AI rights? A human version of the "model deletion" experiment would be something out of a horror movie. But I still think we should seriously consider doing that to AIs.
I agree that it seems like there are pretty strong moral and prudential arguments for giving AIs rights, but I don't have a good answer to the above question.
This is an article about moral philosophy, not the internal dynamics of the EA community, and it therefore does not belong on the "community" tab.