Thanks for your comment!
Agree with your pros and cons.
"Existential security" seems like a great one within EA.
Have not seen the "procedural visions" one! Thanks for recommending it, I'll take a look.
On your course question: we are currently working on something similar at Foresight, so I'll answer based on what we're planning to include there, which is ambitious worldbuilding in groups of 4-5 people with complementary skill sets. Will share more when our resources for that are online!
Thank you! :)
Thanks for the question!
I would say it's not that people aren't aware of the risks; my broad reflection is more about how one relates to them. In the EA/X-risk community it is clear that one should take these things extremely seriously and do everything one can to prevent them. My impression is that researchers in general are very aware of the potential risks of their technologies, but they get swept up in the daily business of just doing their work and don't reflect very actively on those risks.
I don't know exactly why that is. It could be that they don't consider it their personal responsibility, or perhaps they feel powerless and see pushing progress forward as either the best or the only option. That would be an interesting question to dig deeper into!
Agree it doesn't represent "the STEM community". As I said in my reply to Jessica's longer comment, there isn't really such a thing, and if I were to write the post now I would better reflect the fact that the survey was asked of the Foresight community, in which most participants work in one of our technical fields: neurotech, space tech, nanotech, biotech, or computation. The survey asked whether people identify as STEM professionals, and most (85% of respondents in this very small survey) answered yes.
Hi Jessica, thank you so much for your thorough read and response! I found it very useful.
I agree there isn’t really such a thing as “the STEM community”, and if I were to write the post now I would better reflect the fact that the survey was asked of the Foresight community, in which most participants work in one of our technical fields: neurotech, space tech, nanotech, biotech, or computation. The survey asked whether people identify as STEM professionals, and most (85% of respondents in this very small survey) answered yes. So, as you point out, most are not in the life sciences.
Regarding the demographics, our community is very male-dominated. Our team, however, is all female, and we are actively trying to improve on this. I would be interested to hear whether you at HI Engineers are doing anything on this front, and whether you have any learnings you can share. I did not collect any data on ethnicity or age in this survey.
As I stated in the post, and can only state again, this is very preliminary, so I agree one shouldn’t draw too many conclusions from it. But I’m happy I put it out there as-is, so that I could get this useful feedback from you!
Regarding what technology means in the text, I would say that it refers to both information technologies and "physical" technologies. I’d be very interested to hear more about your outreach work with HI Engineers. Overall, your work looks very interesting, so I hope you don’t mind if I reach out “off forum”! :)
We just launched a free, self-paced course: Worldbuilding Hopeful Futures with AI, created by Foresight Institute as part of our Existential Hope program.
The course probably isn't breaking new conceptual ground for folks here who are already “red-pilled” on AI risks, but it might still be of interest for a few reasons:
- It’s designed to broaden the base of people engaging with long-term AI trajectories, including governance, institutional design, and alignment concerns.
- It uses worldbuilding as an accessible gateway for newcomers, especially those who aren’t in technical fields but still want to understand and shape AI’s future.
- We’re inviting contributions from more experienced thinkers as well, to help seed more diverse, plausible, and strategically relevant futures that can guide better public conversations.
Guest lectures include:
- Helen Toner (CSET, former OpenAI board) on frontier lab dynamics
- Anton Korinek (Brookings) on the economic impact of AI
- Anthony Aguirre (FLI) on existential risk
- Hannah Ritchie (Our World in Data) on grounded progress
- Glen Weyl (RadicalxChange) on plural governance
- Ada Palmer (historian and sci-fi author) on long-range thinking
If you’re involved in outreach, education, or mentoring, this might be a good resource to share. And if you're curious about how we’re trying to translate these issues to a wider audience — or want to help build out more compelling positive-world scenarios — we’d love your input.
👉 https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/
Would love feedback or questions — and happy to incorporate critiques into the next iteration.