Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).
Something bouncing around my head recently ... I think I agree with the notion that "you can't solve a problem at the level it was created".
A key point here is the difference between "solving" a problem and "minimising its harm".
Why is this important? Because I think EA and AI Safety have historically focussed on (and have their respective strengths in) harm-minimisation.
This obviously applies at the micro level. Here are some bad examples:
But I also think it applies at the macro level:
Good point. In that case the hypothetical user isn't using it as a forum (i.e. for discourse).