I think that key EA orgs (perhaps acting collectively), such as the Centre for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities, should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX.
The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information.
A lawyer would typically be engaged to carry out such an investigation, particularly because of professional obligations relating to confidentiality (subject to the investigation's terms of reference) and natural justice. But other professionals also conduct independent investigations, and there is no in-principle reason why a lawyer must lead this work.
My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay increases the risk that any such person hides evidence or spreads an alternative account that vindicates them.
I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting prematurely might pressure those orgs into launching an investigation hastily, with bad terms of reference. On the other hand, I have been concerned for some time that there is no whistleblower protection in EA (a point I raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org knew; I have no information in this regard beyond vague speculation I have seen on Twitter.
If you have a better suggestion, I would appreciate you sharing it (even if anonymously).
Epistemic status: pretty uncertain, and slightly anxious this will make the situation worse, but on balance I think it's worth raising.
Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations.
Edit: I want to clarify that I don't think any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that each of, say, the 100 most influential EAs knew, then (naively treating these as independent events) the chance that none of them knew is 0.999^100, which is about 90.5%. If we instead care about the 1,000 most influential EAs, a per-person chance of just 0.01% gives the same roughly 90.5% chance that none of them knew.
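To spell out the arithmetic (a naive sketch, assuming each person's knowledge is an independent event with the same per-person probability $p$):

$$
\Pr(\text{none of the } n \text{ knew}) = (1 - p)^n, \qquad 0.999^{100} \approx 0.9999^{1000} \approx e^{-0.1} \approx 0.905.
$$

In other words, even tiny per-person probabilities leave a roughly 10% chance that at least one person in a large group knew.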
Edit: I think Ryan Carey's comment goes further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical or moral calls for EA). I've also had other people contact me to spitball ideas that seem sensible. I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know?" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping them into the comments.
If this comment is more about "how could this have been foreseen", then this comment thread may be relevant. I should note that hindsight bias makes it much easier to look back and assess problems as obvious and predictable ex post, especially given that powerful investment firms and individuals with skin in the game also missed this.
TL;DR:
1) There were entries that were relevant (this one also touches on it briefly)
2) They were specifically mentioned
3) There were comments relevant to this (notably, one of these was apparently deleted because it received a lot of downvotes when initially posted)
4) There have been at least two other posts on the forum, prior to the contest, that engaged with this specifically
My tentative take is that these issues were in fact identified by various members of the community, but that there isn't a good way of turning identified issues into constructive action. The status quo is that we simply have to trust that organisations have good systems in place for this, and that EA leaders are sufficiently careful and willing to consider changes seriously, such that all the community needs to do is "raise the issue". I think investigations and accountability questions going forward should focus on the systems within the relevant EA orgs and their leadership: all individuals are fallible, and we should be looking at how to build systems such that the community doesn't have to simply trust that the people who hold power and steer the EA movement will get it right, and such that there are ways for the community to hold them accountable to their ideals or stated goals if these appear not to be playing out in practice, or risk not doing so.
i.e. if there were good processes and systems in place, and documentation of those processes and decisions, then missing this is more forgivable (other organisations with presumably very good due diligence processes also missed it). But if there weren't good processes, or if these decisions weren't careful and intentional ones, that is comparatively more concerning, especially in the context of specific criticisms that have been raised[1] and previous precedent. For example, I'd be especially curious about the events surrounding Ben Delo,[2] and the processes that were implemented in response. I'd also be curious whether there are people in EA orgs involved in steering who keep track of potential risks and early warning signs to the EA movement, in the same way the EA community advocates doing for pandemics, AI, or even for finding opportunities for impact in general. For example, SBF, who is listed as an earning-to-give success story on 80,000 Hours, publicly stated that he was willing to go 5x over the Kelly bet, and described yield farming in a way that Matt Levine interpreted as a Ponzi scheme. Again, I'm personally less interested in the object-level decisions (e.g. whether we regard SBF's Kelly bet comments as serious, or Levine's interpretation as fair) than in what the process was and how these signals were considered at the time, with the information available. I'd also be curious about the documentation of any SBF-related concerns raised by the community, if any, and how those concerns were managed and considered (as opposed to critiquing the final outcome).
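For context on the Kelly reference (a sketch of my own, not from the original discussion): for a simple binary bet with win probability $p$ and net odds $b$, the Kelly criterion chooses the fraction $f$ of bankroll wagered to maximise expected log-wealth:

$$
g(f) = p\,\ln(1 + fb) + (1-p)\,\ln(1 - f), \qquad f^* = \arg\max_f g(f) = p - \frac{1-p}{b}.
$$

Since $g$ is concave with $g(0) = 0$, expected log-growth turns negative at roughly $2f^*$ (exactly $2f^*$ in the small-edge approximation), so repeatedly betting $5\times$ Kelly implies accepting near-certain long-run ruin in exchange for a small chance of enormous wealth. That is why those comments read as a potential warning sign.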
Outside of due diligence and ways to facilitate whistleblowing, decision-making processes around the steering of the EA movement are crucial as well. When decisions made by orgs bring clear benefits to one part of the EA community while imposing risks shared across wider parts of it,[3] it would be valuable to look at how those decisions were made and what tradeoffs were considered at the time. Going forward, it would be worth thinking about how to either diversify those risks or make decision-making more inclusive of a wider range of stakeholders,[4] keeping in mind the best interests of the EA movement as a whole.
(this is something I'm considering working on in a personal capacity along with the OP of this post, as well as some others - details to come, but feel free to DM me if you have any thoughts on this. It appears that CEA is also already considering this)
If this comment is about "are these red-teaming contests in fact valuable for the money and time put into them, if they miss problems like this"
I think my view here (speaking only for the red-teaming contest) is that even if this specific contest was framed in a way that missed this class of issues, the value of the very top submissions[5] may still have made the effort worthwhile. The potential value of a different framing was mentioned by another panelist. If red-teaming contests are systematically missing this class of issues regardless of framing, then I agree that would be pretty useful to know, but I don't have a good sense of how we would investigate this.
This tweet seems to have aged particularly well. Despite supportive comments from high-profile EAs on the original forum post, the author seemed disappointed that nothing came of it in that direction. Again, without getting into the object-level discussion of the claims in the original paper, it's still worth asking questions about the processes. If there were actions planned, what did they look like? If not, was that because of a disagreement over the suggested changes, or over the extent to which it was an issue at all? How were these decisions made, and what was considered?
Apparently a previous EA-aligned billionaire (and possible donor) who got rich by starting a crypto trading firm, and who pleaded guilty to violating the Bank Secrecy Act
Even before this, I had heard from a primary source in a major mainstream global health organisation that there were staff who wanted to distance themselves from EA because of misunderstandings around longtermism.
This doesn't have to be a lengthy deliberative consensus-building project, but it should at least include internal comms across different EA stakeholders to allow discussions of risks and potential mitigation strategies.
e.g. A critical review of GiveWell's 2022 cost-effectiveness model, Methods for improving uncertainty analysis in EA cost-effectiveness models, and Biological Anchors external review