One of the largest cryptocurrency exchanges, FTX, recently imploded after apparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine has good coverage, especially his recent post on their balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but its Future Fund was one of the largest funders in effective altruism (EA).
One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.
The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious institutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about $2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teachers' Pension Plan. I think this argument does have some truth in it: it's part of why I'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think it lets EA off too easily.
The issue is, we had a lot more on the line than their investors did. Their worst case was that their investments would go to zero and they would have mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means that they sometimes lose the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote afterwards, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."
This was not our situation:
The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors. The FTX Future Fund was probably the biggest source of EA funding after Open Philanthropy, and it was ramping up very quickly.
This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.
FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, Sam Bankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example of earning to give, back when he was a first-year trader at Jane Street, and he was later on the board of the Centre for Effective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments reputational investments can go negative.
These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would also have been possible to hire an outside group. I suspect the main reason this didn't happen is that EA isn't a unified whole, it's a collection of individuals and organizations with similar goals and ways of thinking about the world. There are likely many things that would be worth it for "EA" to do that don't happen because it's not clear who would do them or even whether someone is already quietly doing the work. I hope building a better process for identifying and coordinating on this sort of work is one of the things that can come out of this collapse.
While at this stage it's still not clear to me whether more vetting would have prevented this abuse of customer funds (perhaps by leading to better governance at FTX or more robust separation between FTX and Alameda) or led EAs to be more cautious with FTX funding, I don't think it's enough to say that since Sequoia etc. missed it we most likely would have as well.
[1] Disclosure: my work may have been funded in part by FTX. I've asked for my pay to be put on hold if it would be coming from an FTX grant.
This is true, but a far more significant factor of this sort is that impact on the world can go negative. We had more at stake because we think that defrauding customers is a huge harm to the world, and the purpose of investing in SBF was to create positive impact on the world. The market for FTX/FTT doesn't price in negative impact on humankind.
There's some discussion of whether implementing impact certificate markets (which might be more of an academic curiosity at this point) would have similar problems: translating a utility function that can go negative (impact on the world) into one with a lower bound of zero (financial reward) would incentivize negative projects. As far as I can tell, cash prizes for positive-impact projects have the same fundamental problem, though I'd love to be corrected here if I'm missing something. One way around this would be requiring a form of insurance (prior to entering impact markets, prize competitions, earning-to-give careers, AI-capabilities-research-in-the-interest-of-alignment, etc.), though I think there are a lot of both practical and incentive-flavored barriers to these emerging any time soon.
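To make the incentive problem concrete, here's a minimal numeric sketch. The numbers are purely illustrative, not a model of any real market or project; the point is only that flooring the reward at zero while the impact is not floored makes a negative-expected-impact gamble look attractive.

```python
# Illustrative numbers only: a hypothetical high-variance project that
# succeeds with probability p and otherwise does serious harm.
p_success = 0.10
impact_if_success = 100.0   # arbitrary "impact units"
impact_if_failure = -50.0

# True expected impact, counting the downside.
expected_impact = (p_success * impact_if_success
                   + (1 - p_success) * impact_if_failure)

# A retro funder, prize, or impact market that only pays for positive
# outcomes effectively applies max(impact, 0) before rewarding.
expected_reward = (p_success * max(impact_if_success, 0)
                   + (1 - p_success) * max(impact_if_failure, 0))

print(f"expected impact: {expected_impact:+.1f}")  # -35.0
print(f"expected reward: {expected_reward:+.1f}")  # +10.0
```

Under the zero floor, the project's expected reward is positive even though its expected impact is strongly negative, which is exactly the incentive gap described above.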
I'm curious whether there are other areas in EA where we systematically miss the necessity of oversight for protection against negative outcomes that we care about, where markets / regulatory and legal systems / social norms will be predictably insufficient watchdogs.