It seems that if we can't make the basic versions of these tools well aligned with us, we won't have much luck with future, more advanced versions.
Therefore, all AI safety people should work on alignment and safety challenges with AI tools that currently have users (image generators, GPT, etc.).
Agree? Disagree?
Some researchers work on making real-world models more aligned, either at the cutting edge (as you suggest here) or on something smaller (if their research is easier to start with a smaller model).
Some researchers work on problems like Agent Foundations (roughly: what is the correct mathematical way to model agents, utility functions, and things like that), and I assume they don't experiment with actual models (yet).
Some researchers are trying to make tools that will help other researchers.
And there are other directions.
You can see many of the agendas here:
(My understanding of) What Everyone in Technical Alignment is Doing and Why