
elteerkers · 323 karma · 31 comments

FYI, we recorded a podcast episode with Anthony Aguirre focused on the Tool AI scenario. We asked Anthony to stress-test it: the trade-offs, incentives, liability questions, and the plausibility of actually making a Tool AI future happen. We also talk a bit about why we did this project overall and how it relates to the companion d/acc scenario: https://www.youtube.com/watch?v=JwJaFMi3Ydw&feature=youtu.be

Yeah, I think my sense was definitely that people saw Tool AI as a great solution, but mostly an interim one. If we had phrased it as being “locked in forever,” the reactions might have looked very different. I've interpreted it more as people seeing it as preserving option value: we can still develop AGI later, but ideally after we’ve managed to integrate Tool AI into society and set up systems to handle AGI better than if it arrived now, when we're quite poorly prepared.

Really appreciate your points on capital dividend funds and the distributional side as well. If you’re up for it, we'd love it if you shared these thoughts on the Metaculus tournament too, where we're running a comment prize exactly to surface perspectives like this: https://www.metaculus.com/tournament/foresight-ai-pathways/ :)

We just launched a free, self-paced course: Worldbuilding Hopeful Futures with AI, created by Foresight Institute as part of our Existential Hope program.

The course is probably not breaking new conceptual ground for folks here who are already “red-pilled” on AI risks — but it might still be of interest for a few reasons:

  • It’s designed to broaden the base of people engaging with long-term AI trajectories, including governance, institutional design, and alignment concerns.

  • It uses worldbuilding as an accessible gateway for newcomers — especially those who aren’t in technical fields but still want to understand and shape AI’s future.

We’re inviting contributions from more experienced thinkers as well — to help seed more diverse, plausible, and strategically relevant futures that can guide better public conversations.

Guest lectures include:

  • Helen Toner (CSET, former OpenAI board) on frontier lab dynamics

  • Anton Korinek (Brookings) on the economic impact of AI

  • Anthony Aguirre (FLI) on existential risk

  • Hannah Ritchie (Our World in Data) on grounded progress

  • Glen Weyl (RadicalxChange) on plural governance

  • Ada Palmer (historian & sci-fi author) on long-range thinking

If you’re involved in outreach, education, or mentoring, this might be a good resource to share. And if you're curious about how we’re trying to translate these issues for a wider audience — or want to help build out more compelling positive-world scenarios — we’d love your input.

👉 https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/

Would love feedback or questions — and happy to incorporate critiques into the next iteration.

Thanks for your comment!

Agree with your pros and cons.

"Existential security" seems like a great one within EA.

Haven't seen the "procedural visions" one! Thanks for recommending it; I'll take a look.

On your course question: since we are working on something similar at Foresight right now, I'll answer with what we are thinking of adding there, which is to do ambitious worldbuilding in groups of 4-5 people with complementary skill sets. Will share more when our resources for that are online!

Thank you! :)

Thanks for the question!

I would say that it's not that people aren't aware of the risks; my broad reflection is more about how one relates to them. In the EA/x-risk community it is clear that one should take these things extremely seriously and do everything one can to prevent them. I often feel that even though researchers in general are very aware of the potential risks of their technologies, they seem to get swept up in the daily business of just doing their work, without reflecting very actively on those risks.

I don't know exactly why that is. It could be that they don't consider it their personal responsibility, or perhaps they feel powerless and see pushing progress forward as either the best or the only option. But that's a question that would be interesting to dig deeper into!

That's a good point. I'm unsure what the best way of facilitating these meetings would be so that they don't downplay the seriousness of the questions. But assuming good intentions, allowing for disagreement, and acknowledging the differences is probably enough, and the best option.

Agree it doesn't represent “the STEM community”. As in my reply to Jessica's longer comment, I agree there isn’t really such a thing as “the STEM community”, and if I were to write the post now I would want to better reflect the fact that the question was put to the Foresight community, in which most participants work in one of our technical fields: neurotech, space tech, nanotech, biotech, or computation. In the survey I asked whether people identify as STEM professionals, a question to which most answered yes (85% of respondents in this very small survey).
