
Epistemic status: Highly speculative quick Facebook post. Thanks to Anna Riedl for nudging me to share it here anyway.

Something I've noticed recently is that some people who are in a bad place in their lives tend to have a certain sticky, sleazy, black-holey feel to them. Something around untrustworthiness, low integrity, optimizing for themselves regardless of the cost to the people around them. I've met people like that, and I think that when others around me felt like my energy was subtly and indescribably off, it was because I was being sticky in that way, too.

Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life. If you're in a place of scarcity, it is entirely reasonable to be strategic about where you put your limited resources. Then, it's just reasonable to only be loyal to others as long as you can get something out of it yourself, and to defect as soon as they don't offer obvious short-term gains. And similarly, it makes sense for the people around you to be a bit wary of you when you are in that place.
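To make that intuition a bit more concrete, here's a minimal sketch of the textbook iterated prisoner's dilemma logic (my illustration, with the standard toy payoff numbers, nothing empirical): cooperation only beats defection if you value future rounds enough, and being in a place of scarcity roughly corresponds to a low discount factor.

```python
# Minimal sketch of the standard iterated prisoner's dilemma logic, with the
# usual illustrative payoffs (nothing here is empirical):
# T (temptation to defect) > R (mutual cooperation) > P (mutual defection) > S (sucker).
T, R, P, S = 5, 3, 1, 0

def cooperation_pays(delta: float) -> bool:
    """Under a grim-trigger strategy (cooperate until the other side defects,
    then defect forever), sticking with cooperation beats a one-off defection
    exactly when the discount factor delta -- how much you value future
    rounds -- satisfies delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

# Long horizon, some slack in life: future relationships are worth a lot.
print(cooperation_pays(0.9))  # True  -> staying loyal is the better strategy
# Scarcity shortens the horizon: future rounds barely count.
print(cooperation_pays(0.3))  # False -> defecting for short-term gains wins
```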

And now, a bit of a hot take: I think most if not all of Effective Altruism's recent scandals have been due to low-integrity sticky behavior. And, I think some properties of EA systematically make people sticky.

We might want to invest some thought and effort into fixing them. So, here are some of EA's sticky-people-producing properties I can spontaneously think of, plus some first thoughts on fixes that aren't meant to be final solutions:

1. Utilitarianism

Yudkowsky wrote a thing that I think is true: 

"Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god." 

Meanwhile, SBF and probably a bunch of other people in EA (including me at times) have gone all four quarters of the way. If there's no point at which you've made the numbers go up enough, you'll be in a place of scarcity no matter what, and you'll be incentivized to defect indefinitely.

I think an explicit belief of "defecting is not a good utilitarian strategy" doesn't help here: Becoming sticky is not a decision, but a subtle shift in your cognition that happens when your animal instincts pick up that your prefrontal cortex thinks you are in a place of scarcity.

Basically, I think Buddhism is what utilitarianism would be if it made sense and was human-brain-shaped: Optimizing for global optima, but from a place of compassion and felt oneness with all sentient beings, not from the standpoint of a technocratic puppet master.

2. Ever-precarious salaries

EA funders like to base their allocation of funds on evidence, and they like to be able to adjust course quickly as soon as there are higher expected-value opportunities. From the perspective of naive utilitarianism, this completely makes sense.

From the perspective of grantees, however, it feels like permanently having to justify your existence. And that is a situation that makes you go funny in the head in a way that is not conducive to getting just about any job done, unless it's a job like fraud that inherently involves short-term thinking and defecting on society. Whether or not you treat people as trustworthy and competent, you'll tend to find that you are right.

I don't know how to fix this, especially given where we are now, where both the FTX collapse and funders' increased caution have made the precarity of EA funding even worse. Currently, I see two dimensions to at least partially solving this issue:

  1. Building healthier, more sustainable relationships between community members. That's why I'm building Authentic Relating Berlin in parallel to EA Berlin, and thinking about ways to safely(!) encourage memetic exchange between these communities. This doesn't help with the precarious funding itself, but with the "I feel like I have to justify my existence!" aspect of writing a grant application.
  2. We might want to fundamentally redesign our institutions so that people feel trusted and we elicit trustworthy behavior in them.[1] For example, we might somehow want to offer longer-term financial security to community members that doesn't just cut off when they want to switch projects within the EA ecosystem. To give people more leeway, and to trust them more to do the best they can with the money they receive. I've found some organizations that had awesome success with similar practices in Frederic Laloux's "Reinventing Organizations", including a French manufacturing company named FAVI and the Dutch healthcare organization Buurtzorg. Some examples of EA meta work that I think are good progress towards finding forms of organizing that produce trustworthy people are Charity Entrepreneurship, the things Nonlinear builds (e.g. the Nonlinear Network), AI Safety Support, alignment.wiki, the various unconferences I've seen happening over the last years, as well as the Future Matters Project, a Berlin-based, EA-adjacent climate movement building org.

3. A not quite well-managed personal/professional overlap

EA sort of wants to be a professional network. At the same time, the kinds of people who tend to grow interested in EA have a lot of things in common they find few allies for in the rest of the world. So, it's just obvious that they also want to be friends with each other. Thus grow informal friend circles with opaque entry barriers everywhere around the official professional infrastructure. Thus grow house parties you'll want to get invited to so you can actually feel part of the tribe, and so you can tap into the informal high-trust networks which actually carry the weight of the professional infrastructure.

Some of the attempts within EA to solve this seem to be to push even more towards just being a professional network. I think that's dangerously wrong, because it doesn't remove the informal networks and their power. It just makes access to them harder, and people more desperate to get in.

Plus, humans are social animals, and if you stop them from socializing, they'll stop showing up.

I think the solution lies in exactly the opposite direction: creating informal networks with low entry barriers and obvious ways in, so that feeling like you belong to the tribe is not something you have to earn, but something you get for free right at the start of your EA journey. That's what I've been working on with EA Berlin's communication infrastructure over the last months. Now, I'm trying to figure out how to interface it more gracefully with impact-focused outreach and meetups.

  1. ^

    This is the aspect of this post I'm most unsure about.


Comments



(Upvoted.)

Some of the attempts within EA to solve this seem to be to push even more towards just being a professional network. I think that's dangerously wrong, because it doesn't remove the informal networks and their power. It just makes access to them harder, and people more desperate to get in.

Somewhat relevant counterpoint:

For everyone to have the opportunity to be involved in a given group and to participate in its activities the structure must be explicit, not implicit. The rules of decision-making must be open and available to everyone, and this can happen only if they are formalized. This is not to say that formalization of a structure of a group will destroy the informal structure. It usually doesn't. But it does hinder the informal structure from having predominant control and make available some means of attacking it if the people involved are not at least responsible to the needs of the group at large. [...]

... an elite refers to a small group of people who have power over a larger group of which they are part, usually without direct responsibility to that larger group, and often without their knowledge or consent. [...] Elites are nothing more, and nothing less, than groups of friends who also happen to participate in the same political activities. They would probably maintain their friendship whether or not they were involved in political activities; they would probably be involved in political activities whether or not they maintained their friendships. It is the coincidence of these two phenomena which creates elites in any group and makes them so difficult to break.

These friendship groups function as networks of communication outside any regular channels for such communication that may have been set up by a group. If no channels are set up, they function as the only networks of communication. [...] 

Some groups, depending on their size, may have more than one such informal communications network. [...] In a Structured group, two or more such friendship networks usually compete with each other for formal power. This is often the healthiest situation, as the other members are in a position to arbitrate between the two competitors for power and thus to make demands on those to whom they give their temporary allegiance.

I partially agree.

I love that definition of elites, and can definitely see how it corresponds to how money, power, and intellectual leadership in EA revolve around the ancient core orgs like CEA, OpenPhil, and 80k.

However, the sections of Doing EA Better that called for more accountability structures in EA left me a bit frightened. The current ways don't seem ideal, but I think there are innumerable ways in which formalization of power can make institutions more rather than less molochian, and only a few that would actually significantly improve the way things are done. Specifically, I see two types of avenues for formalizing power in EA that would essentially make things worse:

  1. Professional(TM) EA might turn into the outer facade of what is actually still run by the traditional elite, now harder to reach and harder to get into. That's the concern I already pointed towards in the post above.
  2. The other way things could go wrong would be if we built something akin to modern-day democratic nation states: giant sluggish egregores of paperwork that reliably produce bad compromises nobody would ever have agreed to from first principles, via a process that is so time-consuming and ensnaring to our tribal instincts that nobody has energy left to have the important truth-seeking debates that could actually solve the problems at hand.

Personally, the types of solutions I'm most excited about are ones that enable thousands of people to coordinate in a decentralized way around the same shared goal without having to vote or debate everything out. I think there are some organizations out there that have solved information flows and resource allocation way more efficiently not only than hierarchical technocratic organizations like traditional corporations, socialist economies, or the central parts of present-day EA, but also than modern democracies.

For example, in regards to collective decisionmaking, I'm pretty excited about some things that happen in new social movements, the organizations that Frederic Laloux described (see above, or directly on https://reinventingorganizationswiki.com/en/cases/), or the Burning Man community. 

A decisionmaking process that seems to work in these types of decentralized organizations is the Advice Process. It is akin to how many things are already done in EA, and might deserve to be the explicit ideal we aspire to.

Here's a short description written by Burning Nest, a UK-based Burning Man-style event:

"The general principle is that anyone should be able to make any decision regarding Burning Nest.

Before a decision is made, you must ask advice from those who will be impacted by that decision, and those who are experts on that subject.

Assuming that you follow this process, and honestly try to listen to the advice of others, that advice is yours to evaluate and the decision yours to make."

Of course, this ideal gets a bit complicated if astronomical stakes, infohazards, the unilateralist's curse, and the fact that EA is spread out over a variety of continents and legal entities enter the gameboard.

I don't have a clear answer yet for how to make EA at large more Advice Process-ey, and maybe what we currently have actually is the best we can get. But, I'm currently bringing the way EA Berlin works closer and closer to this. And as I've already learned, this works way better when people trust each other, and when they trust me to trust them. The Advice Process is basically built on top of the types of high-trust networks that can only emerge if people with similar values are also allowed to interact in non-professional ways.

Therefore, if we optimize away from making the personal/professional overlap work, we might rob ourselves of the possibility of implementing mechanisms like the Advice Process, which might help us solve a bunch of our coordination problems but require large high-trust networks to work effectively. Other social movements have innovated on decisionmaking processes before EA. It would just be too sad if we didn't hold ourselves to a higher standard here than copying the established and outdated management practices of pre-startup-era 20th-century corporations.

Thanks for sharing your thoughts, I particularly appreciated you pointing out the plausible connection between experiencing scarcity and acting less prosocially / with less integrity. And I agree that experiencing scarcity in terms of social connections and money is unfortunately still sufficiently common in EA that I'm also pretty worried when people e.g. want to systematically tone down aspects that would make EA less of a community.

Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life. If you're in a place of scarcity, it is entirely reasonable to be strategic about where you put your limited resources. Then, it's just reasonable to only be loyal to others as long as you can get something out of it yourself and to defect as soon as they don't offer obvious short-term gains.

One reservation I had: I think it'd be useful not to mix together trustworthiness with the ability to contribute to common resources and projects and to be there for friends. Trustworthiness to me captures things like being honest, only committing to things you expect to be able to do, and being cooperative as a default. Even if I experience a lot of scarcity, I aspire to stay just as trustworthy. And that e.g. includes warning others up front that I have very little extra energy and I might have to stop contributing to a shared project at any time.

Yep, I agree with that point - being untrustworthy and underresourced are definitely not the same thing.


Good idea! Scarcity mindset is such an annoying tendency of human psychology. We should all become more like hippies.

How does your last point fit in there though? Scarcity mindset because you so desperately want to become part of the friend group? Or friends as medicine for more trustworthiness, thus more stable networks, more trusting funders, and more stable salaries?

On a side note: within the community, not only the precarious salaries but also the very high valuing of working at prestigious EA organisations versus the lack of opportunities may contribute to the problem. People want to belong and to be respected. Let's not forget to actually encourage people to seek all sorts of opportunities - according to their theory of change - and give a lot of respect and praise for trying and for failing.

Yep, all of you putting energy into changing the world for the better - you deserve all the recognition! Change is non-linear and opportunities are very random, so putting yourself out there and taking the risk of having an impact is the way to go! <3

"How does your last point fit in there though?"

On second thought, I covered everything that's immediately relevant to this topic in section 2.2, which I quickly expanded from the Facebook post this is based on. So yeah, 3. should probably be a different EA Forum post entirely. Sorry for my messy reasoning here.

I'll add more object-level discussion of 3. under Kaj Sotala's comment.

I feel like there's a large leap in the ToC between "throw more parties", "make funding less related to results" and producing more high integrity actors.

I agree with that statement, and I didn't intend to make either of those claims.
