The institute is called the Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. It started in 2021 and received €9 million in funding from the German government for its first four years.
AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world.
They have engaged with EA thinking, and I assume they will have an interesting outside perspective on some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skimmed through it so far), which mentions MIRI, FHI, and What We Owe the Future.
I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about worldviews alternative to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that has become more popular over the last few months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the Centre's work.
Please comment if you have engaged with them before and know more than I do.
I'd be careful not to confuse polished presentation, eloquent speaking and fundraising ability with good epistemics.
I watched the linked video and honestly thought it was a car crash, epistemically speaking.
The main issue is that I don't think any of her arguments would pass the ideological Turing test. She says "Will MacAskill thinks X...", but if Will MacAskill were in the room he would obviously respond, "Sorry, no, that's not what I think at all..."
A real low point is when she points at a picture of Nick Bostrom, Stuart Russell, Elon Musk, Jaan Tallinn, etc., and suggests that their motivation for working on AI is to prove that men are superior to women.
Threat-model homogeneity is a major ecosystem risk in alignment in particular. There's this broad sense of "Eliezer and Holden are interested in extinction-level events" leading to "it's not cool to be interested in sub-extinction-level events", which pushes some people toward unclear reasoning and at worst becomes "guess the password to secure the funding". The whole "if Eliezer is right the stakes are so high, but I have nagging questions or can't wrap my head around exactly what's going on with the forecasts" thing leads to 1. an impossible-to-be-happy-with pr...