Aman Patel and I have been thinking a lot about the AI safety pipeline, and in the course of doing so we haven't come across any visualizations of it. Here's a second draft of a visualized pipeline showing which organizations/programs are relevant to moving through it (mostly in terms of theoretical approaches). It is incomplete: some orgs are missing because we don't know enough about them (e.g., ERIs), the existing orgs are probably not placed exactly where they should be, and there's tons of variation among participants within each program, so the distributions should really have long tails. The pipeline moves left to right, with organizations placed roughly where they seem to fit, and the height of each program represents its capacity or number of participants.
This is a super rough draft, but it might be useful for some people to see the visualization and recognize where their beliefs/knowledge differ from ours.
We'd support somebody else putting more than an hour into this project and making a nicer/more accurate visual.
Current version as of 4/11:

It's misleading to say you're mapping the AIS pipeline but then illustrate AIS orgs alone. Most AIS researchers do their junior-level research as non-AIS research with a professor at their school, and their professional-level research usually happens in a research master's or PhD (or sometimes industry). So you should draw these paths too.
Similar messaging must be widespread, because many of the undergraduates I meet at EAGs get the impression that they are highly likely to have to drop out of school to do AIS. Of course, that can be a solid option if you have an AIS job lined up, or in some other cases (the exact boundaries of which are subject to ongoing debate).
Bottom line: omitting academic pathways here and elsewhere will perpetuate a message that is descriptively inaccurate and push people in an unorthodox direction whose usefulness is debatable.
Thank you for your comment. Personally, I'm not too bullish on academia, but you make good points as to why it should be included. I've updated the graphic, and it now says: "*I don’t know very much about academic programs in this space. They seem to vary in their relevance, but it is definitely possible to gain the skills in academia to contribute to professional alignment research. This looks like a good place for further interest: https://futureoflife.org/team/ai-existential-safety-community/"
If you have other ideas you would like expressed in the graphic, I am happy to include them!