As part of the IIDM working group, we are running a survey to help us refine the scope of improving institutional decision-making (IIDM) as a cause area. Please click here to share your views. 

Background: While IIDM has drawn strong interest from the EA community, people often come to it with different ideas about what it looks like or means in practice. Because of this wide range of potential definitions, the concept can seem fuzzy or overly broad to some. One of the core goals of our initiative is to define IIDM more precisely so that rigorous analysis can be applied to it, and so that we and others can make more confident funding and career recommendations going forward. (A first step toward a working definition of IIDM was included in our post announcing the working group late last year, which also gives a fuller description of our activities.)

This survey will gauge the diversity of perspectives in the EA community on what “counts” as IIDM. It will help us understand what the community thinks is important and where it sees the most potential for impact, which in turn will shape the rest of our work. Concretely, for example, it will help us decide which topics to cover in a directory of resources about IIDM.

We are interested in your view regardless of whether you think IIDM should be amongst the top priorities or whether you’re sceptical. You’ll be presented with a list of topics and asked to rate how “in scope” you consider them to be. Depending on how familiar you are with the topics, we expect the survey will take you 10-25 minutes to complete.

This survey complements our other efforts to define the scope of IIDM. For example, we are attempting to disentangle IIDM from a more theoretical point of view. If you’re interested in helping with this or have other thoughts on how to develop IIDM, please get in touch with Ian David Moss, Laura Green or Vicky Clayton at improvinginstitutions@gmail.com.

The survey will remain open until 28th April. Thanks in advance and we look forward to sharing the results with you in the next month or two! 

With particular thanks to Ishita Batra, who developed the survey, and thanks in advance to Dilhan Perera, who will be analysing the results.

Comments (4)



Thanks! May I use the doc on definitions to talk about IIDM with outsiders? For instance, in a study group on Political Philosophy?

Hi Ramiro, that would be fine, although I recommend you caveat it with the context that this is all in development and subject to change. Thanks!

I thought this survey was really well put together and I'm excited about the future of the working group!

Thanks! Glad that you're excited about it :)
