Stephen Robcraft
A couple come to mind, but if you'll allow it, I'd first respond to your prompt(s) with:

  • I don't think there are loads of examples of organisations with better governance overall (boards are weird, after all) - I'd argue that EA norms and practices lead to better governance than traditional nonprofits in some respects and worse in others. Nonprofits in general could do governance better.
  • I'm not sure it makes sense to isolate 2b and 3b here - 1a can also play a role in mitigating failure (and some combination of all three might be optimal)

The two stories that come to mind both seem realistic to me (I'd take the bet that these have happened recently) but might not meet your bar for 'very bad'. However, I'd argue we can set the bar a bit higher (lower? depends how you look at it...) and aim for governance that mitigates more mundane risks, provided the trade-off makes sense. I think it does.

Story 1 - A new-ish EA project/org has received 12 months of funding to do [something]. At the end of the 12 months, [something] has not been achieved but the money has been spent.

In this story, the funder has accepted that they are making a bet, that there's some level of experimentation going on, that there are lots of uncertainties and assumptions etc. However, in this story, it was perfectly possible for [something] to be delivered, or for some equally impactful [something else] to be identified and delivered. Neither happened, but the team has spent most, if not all, of its funding and has just failed to deliver. They might have a compelling story about what they'll do next year and get more funding, they might not. 

(1a) With a well-run Board of Trustees (made up of impartial, experienced, connected and credentialed people) overseeing the work of the less experienced project team and holding them to account, I think it's reasonable to imagine the team becomes clearer, more quickly, about their objectives and how to deliver on them; monitors and responds more effectively to information about their progress during the year; is more likely to notice ways in which they might change course in pursuit of impact; and so on.

(3b) With more performance monitoring from the funder, both the funder and project team realise early on that things aren't going well. The funder can provide funder plus-type support to the team, make clear their expectations of the project team in the event that targets aren't met, or really take any other action that makes sense to try and maximise the impact of their funding.

(2b) I'd argue there's not much here that will affect whether or not the team is successful on this occasion. But it seems to me that the org/funder being transparent about what happened would be in keeping with EA principles, and would support others in the community in making a judgment about donating to the team in the future.

Story 2 - There's an organisation going along just fine, doing impactful community-building work. But they are leaking small amounts of money through lax management accounting. The amounts are small but not inconsequential when you consider the principle of cost-effectiveness and the counterfactual impact of the money being wasted.

In this story, the org has grown over the last few years and seen founders move on, key staff members move, junior team members step up and just a lot of change and turnover in general. Their financial accounting is absolutely fine (they outsource this to accountants), but rarely (if ever) have they reviewed management accounts to get a handle on where money is going. Why would they? No one has asked them to.

They have Zoom subscriptions that nobody uses because they have a Google Workspace account and just default to Google Meet. That Google Workspace account hasn't had a nonprofit discount applied. They have an Airtable team plan, with 27 collaborators who no longer work at the organisation, or who only looked at some data once, two years ago. They buy Huel for the office every week but aren't really clear on who's drinking it or how it contributes to their Theory of Change. 

All in all, thousands of dollars a year are being wasted. I'd accept that implementing (1a) and (3b) to stop this kind of thing is a bit over the top - after all, this is just an issue of performance/competence/attention that can be fixed by having the right people and systems in place. But then it's (1a) that puts the right people in place, and both (1a) and (3b) that can oversee and monitor work, incentivising and/or requiring good performance and ensuring attention goes to the right things.

I'd be very interested in hearing from those who responded to 10 - Checks and balances, as part of the work I do with the EA Good Governance Project. We've focused entirely on formal governance of EA organisations (through Boards of Trustees/Directors) but I have been thinking recently about how our work might consider a model of governance that includes:

  1. Formal governance
    1. Providing oversight through constituted bodies with decision-making authority (like Boards of Trustees/Directors)
    2. Requiring regulatory compliance
  2. Community governance
    1. Setting norms/expectations
    2. Holding individuals and organisations to account
  3. Funder/Market governance
    1. Allocating resources (through cause prioritisation and assessing individual funding bids)
    2. Performance monitoring (by requiring quarterly reviews, making funding conditional, etc.)

Forgive me, it's a bit rough as I planned to post something in the next week or two. This seemed like a good opportunity to start discussion though! My sense (through speaking to founders, exec staff and board members of EA orgs over the past few months; seeing the results of this survey) is something like: 

EA does 1b, 2a and 3a really well.

EA orgs often don't do 1a at all (it's not required when fiscally sponsored, or for certain types of entity), or don't do it that well (board members are recruited from within closed networks; also, no one really does boards well).

People are worried about 2b (more so than I expected, but about as much as I am!).

3b is done less than in traditional nonprofits - a high-trust culture, and a belief that 2a and 3a are enough, mean this kind of thing is relied upon less.

I worry that this is a recipe for bad outcomes. I don't worry so much about abuse of power (I also trust in 2a!) but do think a thoughtful, maturing community has some gaps to fill in how it and its orgs are governed.

With my 'Good Governance' hat on, I think there's something in this. 

I personally celebrate that there is a culture of considering often and deeply whether an organisation/project should continue. Outside of EA, I've worked with >50 mission-driven teams and have rarely encountered individuals, let alone whole organisations, that are willing to ask this question of themselves. In my consulting work, I've recommended that organisations shut down or close a programme maybe a dozen or so times and can only think of one that took this recommendation seriously.

While I think it's appropriate to celebrate that there are cancellations/closures, I worry that how this happens is less than ideal. I need to engage with this a bit more deeply to order my thoughts (and will do if anyone is interested in discussing this!) but my take is that there's room to improve with regard to:

1. Defining who closes organisations - I think this is quite clearly a board responsibility (maybe the fundamental reason good boards should exist), and there should be more separation of the founder/exec team from this kind of decision than I understand to be the case

2. Defining when and why organisations should close - This post asks whether early-stage projects are closing too early but I also wonder if there are other questions to consider here

3. Articulating alternatives to closing/cancelling - Could projects continue in a different form? Might other people want to and be able to take on the work? Could the project continue to do good outside the EA community (with different funders, principles, values)?

I am considering extinction scenarios for any and all species. I'm trying to understand why safeguarding against the extinction of humanity is prioritised, but safeguarding against the extinction of other species is not.

As for examples, the IUCN Red List is the most comprehensive database that I'm aware of - assessing and reporting extinction risk for thousands of species worldwide. I have seen some criticism of this list, however (notably that the approach used to assess risk lacks transparency), so I wonder if the EA community would have some value to add here?

https://www.iucnredlist.org/

What you (or the EA community more generally) could do about it will vary from species to species. Interventions might range from tackling deforestation, to reducing harm from human waste, to lobbying against hunting/fishing... the list of possible interventions is huge! Again, I wonder if the EA community might have value to add in identifying effective interventions?

I’m not yet arguing for the EA community to do these things - just trying to understand what has been thought about/discussed by others, so I might better understand why this work is not prioritised.