
Since the FTX failure has deeply affected the EA community, there is a high chance that EA will add more and more regulation over time and become quite bureaucratic. Implementing new rules and oversight is the usual go-to way of solving problems (as in finance, medicine, and aviation). But established regulation is expensive to manage, very hard to change, and it greatly slows down innovation. I am in favor of it, but since this is only the beginning, maybe it would be wise not to entangle ourselves in it too quickly?

Could more effective measures be found instead of ever more bureaucracy? For example, could normalizing the act of whistleblowing be an answer? In particular, as a thought experiment, I propose extreme whistleblowing of tiny wrongdoings. If done right, it could reveal existing issues or prevent new shady behavior from slowly emerging in the future.

How could a whistleblowing system work?

  • It could be a tool or a process.
  • It would be well advertised and frequently used by everyone to report tiny wrongdoings of community members.
  • Tiny reports would accumulate to create a clearer picture of individuals' consistent behavior.
  • Reports could be highly simplified to encourage people to use the system. For example, in many cases one could rate an interaction with basic categories and a feeling (such as rude, intolerant, hateful, risky...). A rough sketch of such a report record follows this list.
  • Reports would not be anonymous, so that they are verifiable and accurately countable.
  • Reports would only be accessible to the most trusted organizations, like CEA, which would also need to become more trusted. For example, it might have to strengthen its data protection considerably, which I would guess is needed anyway (as for all organizations).
  • Individuals should have the right to receive all the anonymized data gathered about them, for their own peace of mind.
  • Reports would have an automatic expiration date (scheduled removal after some years), giving individuals a chance to change their behavior.
  • As usual, it would have to be decided what counts as a non-issue, so that the system does not hamper the expression of ideas or create other side effects, which I discuss below.
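To make the shape of the bullets above a bit more concrete, here is a minimal sketch in Python of what a report record and its lifecycle could look like. The category names, the three-year retention period, and the function names are all hypothetical illustrations of the ideas above, not an actual design anyone has agreed on.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import Counter

# Hypothetical retention period: reports expire so people get a chance to change.
RETENTION = timedelta(days=365 * 3)

# Hypothetical simplified categories, to keep reporting low-effort.
CATEGORIES = {"rude", "intolerant", "hateful", "risky"}

@dataclass
class Report:
    reporter: str       # not anonymous, so reports are verifiable and countable
    subject: str        # community member the report is about
    category: str       # one of CATEGORIES
    note: str           # optional short free-text context
    created: datetime

    def expired(self, now: datetime) -> bool:
        return now - self.created > RETENTION

def active_reports(reports: list[Report], now: datetime) -> list[Report]:
    """Drop expired reports: scheduled removal after some years."""
    return [r for r in reports if not r.expired(now)]

def pattern_for(subject: str, reports: list[Report], now: datetime) -> Counter:
    """Aggregate tiny reports into a picture of one person's consistent behavior."""
    return Counter(r.category for r in active_reports(reports, now) if r.subject == subject)

def anonymized_export(subject: str, reports: list[Report], now: datetime) -> list[dict]:
    """What an individual could request about themselves: categories and dates, no reporter names."""
    return [
        {"category": r.category, "created": r.created.date().isoformat()}
        for r in active_reports(reports, now)
        if r.subject == subject
    ]
```

Whether a non-anonymous reporter field, this retention period, or this kind of anonymized export are the right choices is exactly the sort of question the counterarguments below are about.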

Benefits:

  • The system would deter people from poor actions. And if someone is unable to abstain from doing shady things, they may feel deterred from being part of the community at all.
  • People who cause problems would be spotted faster and their work stopped before it causes significant negative impact. Earlier reporting may prevent bigger things from escalating.
  • If it works, this might prevent the establishment of other, less effective (more resource-intensive) measures.
  • It would help when examining applications for roles, events, and grants.

Counterarguments:

  • People's behavior is very complex, so tiny mishaps may not be representative of a person's character. But if the evaluation instructions are well known, agreed upon by the community, and made sensitive to nuance, then we could expect higher-quality evaluations.
  • Reporting is frowned upon in many societies (the culture varies across countries), so it would be unpleasant to report other people, especially about tiny matters. But if the community comes to see it as beneficial, the culture might change?
  • If tiny wrongdoings are common in the community, then this idea would face a lot of resistance. On the other hand, the more resistance there is, the more such a system might be needed. At the end of the day, the idea is not to punish but to bring issues to light. If issues are known, they can be fixed; fixing is the end goal.
  • Tiny wrongdoing is impossible to prevent, and sometimes impossible even to agree on. So the goal is not to pursue tiny things, but to gather enough clues to assemble a larger picture, if there is anything larger to be assembled.
  • EA already has similar processes, but they could be improved as the number of actors in the community grows.
  • I am unsure whether it would create an environment of greater trust (desirable) or of fear (undesirable). Maybe it is a question of how far, and how well, this would be implemented.
  • What other reasons are there for this not to work?

For people who enjoy movies, the film The Whistleblower (2010) is a fitting example: it depicts very disturbing, massive-scale corruption in the United Nations mission in Bosnia, where almost everybody turns a blind eye because exposing it does not fit their own or their organization's interests, or because the corruption slowly grew to levels too hard to admit or manage (the movie is based on real events).

Comments



I don't see how, if this system had been popularised five years ago, this would have actually prevented the recent problems. At best, we might have gotten a few reports of slightly alarming behaviour. Maybe one or two people would have thought "Hmm, maybe we should think about that", and then everyone would have been blindsided just as hard as we actually were.

Also...have you ever actually been in a system that operated like this? Let's go over a story of how this might go.

You're a socially anxious 20-year-old who's gone to an EA meeting or two. You're nervous, you want people to like you, but things are mostly going well. Maybe you're a bit awkward, but who's not? You hear about this EA reporting thing, and being a decent and conscientious person, you ask to receive all anonymized data about you, so you can see if there are any problems.

Turns out, there is! It's only a vague report - after all, we wanted it to be simplified, so people can use the system. Someone reported you under the category "intolerant". Why? What did you say? Did you say something offensive? Did someone overhear half a conversation? You have no idea what you did, who reported you, or how you can improve. Nobody's told you that it's not a big deal to get one or two reports, and besides, you're an anxious person at the best of times, you'd never believe them anyway. Given this problem, what should you do? Well, you have no idea what behaviour of yours caused the report, so you don't know. Your only solution is to be guarded at all times and very carefully watch what you say. This does not make it easy to enjoy yourself and make friends, and you always feel somewhat out of place. Eventually, you make excuses to yourself and just stop showing up for meetings.

This is definitely a made up story, but almost exactly this happened to me in my first year at my first job - I had an anonymous, non-specific complaint given to me by my manager, the only one I've ever received. I asked what I was supposed to do about that, and my manager had no good answer. Somewhat annoyed, I said maybe the best solution would be to just not make friends at work, and my manager actually agreed with me. Needless to say, I had much more cordial relationships with most colleagues after that. I was also older than 20 and I didn't actually care about being liked at my job much. I wanted to do well because it was my first job, but they were never my people. Eventually I grew up, got over it, and realised shit happens, but that takes time. I can imagine that if I were younger and amongst people whose ideology I admired, it would have stung far worse.

And...let's remember the first paragraph here. Why would such a system have actually worked? SBF gets some complaints about being rude or demanding in his job, and what? EA stops taking his money and refuses to take grants from the FTX Future Fund? I don't think such a system would ever have led to the kind of actions that would have discovered this ahead of time or significantly mitigated its effects on us.

If we're going to propose a system that encourages people to worry about any minor interaction being recorded as a black mark on them for several years within the community, imposing high costs on the type of socially anxious people who are highly unlikely to be predatory in the first place...well, let's at least make sure such a system solves the problem.

Nice points as always.

Main issue: one of the main problems with FTX was taking super high risks, and that was unacceptable long before the collapse. If reporting had been the norm, it seems likely that someone who saw the decision-making process (and the decisions made) would have made private disclosures to EA management, reporting many decisions many times. Would this information have prevented EA management from still taking a lot of the money, or led them to take it seriously? I lean towards 'yes', because internal information is more valuable than public rumors. Action will surely be taken from this point onwards, after being burned by this already. Your point about them being reported as "rude" is not the best example for this situation :)

And the personal stories you shared are important; I will take time to think more about such situations.

Strong agreement from me that EA orgs would improve if we had some whistleblowing mechanism, preferably also for things smaller than fraud, like treating candidates badly during an interview process or advertising a job in a misleading way.

I have been thinking about something similar, but had come to a few conclusions different from yours. Now I'm wondering if we just need multiple complementary approaches:

  • I was thinking less about deliberate bad faith acts than people being bad at their jobs*
  • I would want something that isn't only visible to the 'most trusted organisations', since a) that assumes we've partially solved the problem we're addressing, b) there are ongoing questions about the level of their responsibility for the current hurricane, and c) the more people who see it, the more chances there are of spotting patterns
  • That means it would probably need to be open to everyone
  • That means it would have to be anonymous by default, though individuals could obviously identify themselves if they chose
  • That means it would need to apply some fairly strict epistemic standards, defined in advance, so it didn't just become a cesspool of slander
  • It would generally mean more of an org-level focus rather than targeting individuals.
  • My instinct is a policy of 'it's ok to name top managers of EA orgs (including retrospectively), but anyone further down the rung should be discussed anonymously'. It might make sense to specify the department of the org, so that the people running it take some responsibility

* Outside FTX I suspect this is more responsible for any culpability EA collectively has than any specific bad faith.

I think the forum is a good place for what you described.

The forum is a generally bad place for pooling information in an easily retrievable way that gives equal emphasis to all of it, which is what we need for such information to be useful.

Sorry for being brief in my last answer. You made good, reasonable points, which I don't have much to add to.

I stand by my last answer that the forum is a good place for this, because it is very hard, often close to impossible, to create a new service whose functionality greatly overlaps with an existing one. Think of Google+, which tried to compete with Facebook, and what happened to it. People use the established service and forget to use the similar one.

The forum is not perfect for this, yes, but for practical reasons I see it as the way to implement the epistemic standards and other things described in your comment. The forum is an established, central place for everything public like this.

Reports would be accessed by most trusted organizations like CEA

Are you suggesting reports should be non-public?

I am suggesting that tiny matters be non-public, to achieve the goals described in the article. Discussions / disclosures can be public too, as they always are on the forum.

Which route is better? Or which one solves all the problems? Neither solves every layer, so multiple good solutions are needed.
