I think that OpenAI is not worried about actors like DeepMind misusing AGI, but rather (a) is worried about actors that might not currently be on most people's radar misusing AGI, (b) thinks that scaling up capabilities enables better alignment research (though it sees other benefits to scaling up capabilities too), and (c) is earning revenue for reasons other than direct existential risk reduction, where it does not see a conflict in doing so.
Thank you for writing this.
Please could you add to the top of the Google doc:
This would make it easier for people to judge for themselves how much weight to put on your advice.
Thank you for this post. I agree with its central premise and I know that Michelle is already working on an impact evaluation that will contain a lot of this sort of information.
However, your post contains a couple of misleading points that I thought were worth correcting.
For future reference, it would have been courteous to contact someone at Giving What We Can before posting this. In case that sounds intimidating, I can assure you they are all very friendly :)
(Disclosure: I manage Giving What We Can's website as a volunteer.)
It's interesting to me that you refer to (CPU) clock speed. If my understanding is correct, when you change the clock speed of a CPU, you don't actually change the speed at which signals propagate through the CPU; you just change the length of the delay between consecutive propagations. (Technically, changes in temperature or voltage could have small side effects on propagation speed, but let's ignore those for the sake of argument.) It seems to me that the length of the delay is not morally relevant, for the same reason that the length of a period of time during which I am unconscious is not morally relevant, all else being equal. I am curious whether you agree, and if so, whether that changes any of your practical conclusions.
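To make the distinction concrete, here's a toy sketch in Python (the transition function is made up, and real CPUs are of course far more complicated): changing the clock period changes only the idle time between state transitions, not the transitions themselves, so the computation performed is identical either way.

```python
import time

def step(state):
    """One clock tick: the next-state logic settles to a new state.
    (A made-up transition function standing in for the CPU's logic.)"""
    return (state * 31 + 7) % 1000

def run(initial_state, n_steps, clock_period_s):
    """Run n_steps ticks, idling for clock_period_s between consecutive ticks.

    The per-tick transition (the signal propagation) is unchanged; only
    the delay between consecutive propagations varies with the clock period.
    """
    state = initial_state
    trajectory = [state]
    for _ in range(n_steps):
        time.sleep(clock_period_s)  # the clock period: pure idle time
        state = step(state)
        trajectory.append(state)
    return trajectory

fast = run(42, 10, clock_period_s=0.001)  # "overclocked"
slow = run(42, 10, clock_period_s=0.1)    # "underclocked"
assert fast == slow  # identical state trajectory; only wall-clock time differs
```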
For what it's worth, it seems to me that both digital and biological minds are discrete in an important sense, regardless of whether physics is continuous. Indeed, for a digital simulation of a biological mind to even be possible, a discrete approximation of the underlying dynamics has to be sufficient. I think I'd have trouble making that argument precise to your satisfaction, though, so for now the thought experiment, plus the toy sketch below, will have to do. Also, thank you for the post; I found it quite thought-provoking!
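Here is the toy sketch I mentioned (in Python, using a simple continuous decay process as a stand-in; I'm not suggesting brains are this simple): as the discrete time step shrinks, the digital simulation tracks the continuous dynamics arbitrarily closely, which is the sense in which a discrete approximation can be sufficient.

```python
import math

def euler(x0, rate, dt, t_end):
    """Discrete-step (Euler) simulation of the continuous decay dx/dt = -rate * x."""
    x = x0
    for _ in range(round(t_end / dt)):
        x += dt * (-rate * x)  # one discrete update step
    return x

exact = math.exp(-1.0)  # closed-form value of x(1) for x0 = 1, rate = 1
for dt in (0.1, 0.01, 0.001):
    approx = euler(1.0, 1.0, dt, 1.0)
    print(f"dt={dt}: approx={approx:.5f}, error={abs(approx - exact):.5f}")
```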