(Status: not draft amnesty, but posting in that spirit, since it's not as good as I'd want it to be but otherwise I probably won't ever post it)
In my experience, EA so far has been a high-trust community. That is, people generally trust other people to behave well and in accordance with the values of the community.
Being high-trust is great! It means that you can spend more time getting on with stuff and less time carefully checking each other for bad behaviour. It's also just nicer: it feels good and motivating to be trusted, and it is reassuring to support people you trust to do work.
I feel like a lot of posts I've seen recently have been arguing for the community to move to a low-trust regime, particularly with respect to EA organizations. That includes calls for:
- More transparency ("we need to rigorously scrutinise even your small actions in case you're trying to sneak bad behaviour past us")
- More elaborate governance ("there is a risk of governance capture and we need to seriously guard against it", "we don't trust the people currently doing governance")
Sometimes you have to move to low-trust regimes. It's common for organizations to move from high-trust to low-trust as they grow, since the larger number of actors involved can't all be assumed to be trustworthy. But I do not think that the EA community actually has the problems that require low-trust, and I think the move would be very costly.
Specifically, I want to argue:
- Low-trust regimes are expensive, both in terms of resources and morale
- The people working in current EA orgs are in fact very trustworthy
- The EA community should remain high-trust (with checking)
Low-trust is costly
Low-trust regimes impose costs in at least three ways:
- Costlier cooperation
- Costlier delegation
- General efficiency taxes
The post Bad Omens in current EA Governance argues that due to the possibility of conflicts of interest we should break up the organisations which currently share ops support through EVF. This is a clear example of the first cost, costlier cooperation: if we can't trust people then we can't just share our resources, we have to keep everyone at arm's length. In the comments you can read various people explaining why this would be quite expensive.
Similarly, you can't just delegate power to people in a low-trust regime. What if they abuse it? Better to require explicit approval up the chain before they do anything serious like spend some money. But if you can't spend money you often can't do things, and activity ends up being blocked on approval, politics, and perception.
When you actually try to get anything done, low-trust regimes typically require lots of paper trails and approvals. Anyone who's worked in a larger organization can testify to how demoralizing and slow this can be. Since any decision can be questioned after the fact, there is no limit to how much "transparency" can be demanded, and how many pointless forms, proposals, reports, or forum posts can end up being produced. I think it is very easy to underestimate how destructive this can be to productivity.
Finally, it is plain demoralizing to be in a low-trust regime. High-trust says "Yes, we are on the same team, go and attack the problem with my blessing!". Low-trust says "I guess I have to work with you, but I'm expecting you to try to steal from me as soon as you have the opportunity, so I'm keeping an eye on you". Where would you rather work?
Current people in EA organisations are trustworthy
(Disclaimer: I know quite a lot of people who work in EA organisations, so I'm definitely personally biased towards them.)
The FTX debacle has led to a lot of finger-pointing in recent months. A particular pattern has been posts listing large numbers of questions about the behaviour of particular organizations or individuals over the last few years. These often feel accusatory all by themselves: look at this big list of suspicious behaviour, surely something shady is going on! But it seems to me that in every instance I've seen, either there has been a good explanation, or the failing has been at worst a) bad decisions made for good reasons, b) lapses in personal judgement, or c) genuine disagreements about which actions are worth doing.
Crucially, none of a), b) or c) are in my opinion things that justify a switch to low-trust. They suggest that we have normal, fallible people who are acting in good faith and doing their best. That's really the best that we can hope for! Low-trust measures won't help with any of this.
If anything, this argues that we could be higher-trust. Expending a lot of energy hunting for bad behaviour and not finding much is evidence that people are more trustworthy, not less!
Here are some examples. I've not attempted to be comprehensive; these are the ones that came to mind when I was writing this post. I'd be interested in examples that people think are neither explained nor at worst a), b) or c). I'm also including people saying things that turned out to be factually wrong, as these are examples of looking for bad behaviour and not finding it.
- Bad Omens in current EA Governance
- Some important questions for EA leadership
- Various things Will MacAskill did
  - Unclear, but IMO b) at worst
- Why did 80k portray SBF as frugal?
- Why did CEA buy Wytham Abbey?
- Why aren't EA orgs saying more about FTX?
  - We're worried about legal risk and confused
  - You might disagree, but seems like a) or c) at worst
- Who knew about SBF/Alameda and why didn't they Do Something?
  - There's a lot of this, and it's not clear what's going on, but my expectation is that this is a) or b).
Trust but verify
I want the EA community to remain high-trust. It's part of what makes us effective and I don't think we're justified in throwing it away now (if ever). Calls to make it low-trust make me feel sad and less like it's a community I want to be in. I think we should just decide to Not Do That.
There are some cheap things we can do that don't damage trust too much. For example, checking that people have behaved trustworthily from time to time is a good idea ("trust but verify" is a good motto).
Overall I have a few concrete suggestions for making things better:
- Make sure you're updating your beliefs about the trustworthiness of people based on the results of checking, not the fact that the checking is happening.
- If you agree with "trust but verify", make the background level of trust clear when you're proposing to do a bunch of aggressive verification, e.g. "I don't have any particular reason to expect bad behaviour here, but in the spirit of 'trust but verify' I would like to ask the following questions..."
- If you're proposing a change in behaviour, seriously consider the costs, which includes making it specific enough that the costs are clear.