The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people interested in seeking our advice and/or collaborating with us as part of our fourth annual Advising and Collaboration Program. Inquiries may cover any aspect of global catastrophic risk. We welcome inquiries from people at any career point (including students), with any academic or professional background, and from any place in the world. People from underrepresented groups are especially encouraged to reach out.

We are especially interested in inquiries from people whose interests overlap with ours. For details of our interests, please see our publications, topics, and current funded AI policy projects. That said, we encourage anyone interested in any aspect of global catastrophic risk to reach out.

We welcome inquiries from both colleagues we already know and people we have not met before. This open call is a chance for us to catch up with existing colleagues and to begin new relationships with people we have not yet met. It is also a chance for anyone to talk with us about how to advance their career in global catastrophic risk, to explore potential synergies with our work, and to expand their network in the global catastrophic risk community. We encourage new participants to read GCRI Executive Director Seth Baum’s Common Points of Advice for Students and Early-Career Professionals for themes frequently discussed in previous programs.

Participation does not necessarily entail a significant time commitment: it can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get more involved by contributing to ongoing dialogue, collaborating on research and outreach activities, co-authoring publications, or becoming GCRI Fellows. For examples of the different types of participation, please read the testimonials from the 2021 Program. Some funding is available for people who collaborate with us on project work; details are available upon request.

Individuals interested in speaking or collaborating with us should email Ms. McKenna Fitzgerald, mckenna [at] gcrinstitute.org. Please include a short description of your background and interests, what you hope to get out of your interaction with GCRI, a resume/CV or a link to your professional website, where you are based, and how you heard about the program. It would also be helpful to include your name in the subject line of the email and, in the body, any ideas for how you could contribute to GCRI’s projects.

For more information on ways to participate in GCRI activities, please view our Get Involved page.

We look forward to hearing from you.

EDIT: Clarified that the acronym GCRI stands for the Global Catastrophic Risk Institute.

Comments



What's GCRI?

(Obviously I can find out by clicking some of the links, but it's somewhat confusing to read the post while not yet knowing.)

Thanks for this. I've edited the post to clarify that the acronym stands for the Global Catastrophic Risk Institute.
