Just finished the new book about FTX and Sam Bankman-Fried, released today: "Going Infinite: The Rise and Fall of a New Tycoon" by Michael Lewis. The book itself is quite engaging and interesting, so I recommend it as a read.

The book talks about:

  • Early life and general personality
  • Working at Jane Street Capital
  • Early days at Alameda and the falling out
  • A refreshed Alameda
  • The early FTX days and Sam's actions
  • Post-FTX days, and where did the money go?

The book talks a decent bit about effective altruists, in both a good and a bad light.

Some particularly interesting anecdotes and information from the book (contains "spoilers"):

  • In the early Alameda days, they apparently lost track of (as in, didn't know where it went) millions of dollars' worth of XRP tokens, and Sam's attitude was essentially "ehh, who cares, there's like an 80% chance it will show up eventually, so we can just count it at 80% of its value". This, combined with the general disorganisation and risk-taking, really pissed off many of the first wave of EAs working there, and a bunch of people left. Eventually, they actually "found" the XRP: it was sitting in a crypto exchange they were using, where a software bug meant it was not labelled correctly, so they had to email the exchange about it.
  • Where did all the lost FTX money go? At FTX the lack of organisation was similar, but on a much larger scale. The last chapter has napkin calculations of in-goings vs out-goings for FTX (Edit: see these below). While they clearly spent and lost lots of money, some of the assets were simply lost track of: nobody bothered keeping track of them, because the other assets were so large that these seemed unimportant and non-urgent. So far "the debtors have recovered approximately 7 billion dollars in assets, and they anticipate further recoveries", which could mean an additional approximately $7.2 billion is still to be found (much of it non-cash, so it might be sold for less, but at least $2 billion?), not even counting potential clawbacks like the investment into Anthropic. A naive reading suggests there could have been enough to repay all the affected customers?
     

EDIT: here is the "napkin math" given in the book of combined FTX+Alameda in-goings and out-goings over the course of a few years (a small sketch checking the arithmetic follows the lists below). The question in the final chapters of the book is how to account for the roughly $6 billion discrepancy. The book clearly shows that customer funds were misused by Sam and Alameda, and the numbers are not to be taken at face value (for example, the profits at Alameda could be questioned), but they may be worth viewing as a rough reference point for those interested in them but not willing to read the whole book:

Money In:

  • Customer Deposits: $15 billion
  • Investment from Venture Capitalists: $2.3 billion
  • Alameda Trading Profits: $2.5 billion
  • FTX Exchange Revenues: $2 billion
  • Net Outstanding Loans from Crypto Lenders (mainly Genesis and BlockFi): $1.5 billion
  • Original Sale of FTT: $35 million
  • Total Money In: $23 billion

Money Out:

  • Return to Customers During the November Run: $5 billion
  • Amount Paid Out to CZ: $1.4 billion (excluding $500 million worth of FTT and $80 million worth of BNB tokens)
  • Sam's Private Investments: $4.4 billion (with at least $300 million paid for using shares of FTX)
  • Loans to Sam: $1 billion (used for political and EA donations to avoid stock dividends)
  • Loans to Nishad: $543 million (for similar purposes)
  • Endorsement Deals: $500 million (potentially more, including cases where FTX paid endorsers with FTX stock)
  • Buying and Burning Their Exchange Token FTT: $600 million
  • Other Expenses (Salaries, Lunch, Bahamas Real Estate): $1 billion
  • Total Money Out: $14.443 billion

After the Crash:

  • $3 billion on hand.
  • $450 million stolen in a hack.
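
Below is a minimal sketch (in Python) that simply re-adds the book's line items above and computes the implied gap. The figures are the book's napkin math, not audited accounting, and the dictionary labels are my own:

```python
# Napkin math from "Going Infinite" (book figures; not audited accounting).
# All values are in billions of USD.

money_in = {
    "customer_deposits": 15.0,
    "vc_investment": 2.3,
    "alameda_trading_profits": 2.5,
    "ftx_exchange_revenues": 2.0,
    "net_loans_from_crypto_lenders": 1.5,
    "original_ftt_sale": 0.035,
}

money_out = {
    "returned_to_customers_in_november_run": 5.0,
    "paid_out_to_cz": 1.4,
    "sams_private_investments": 4.4,
    "loans_to_sam": 1.0,
    "loans_to_nishad": 0.543,
    "endorsement_deals": 0.5,
    "ftt_buy_and_burn": 0.6,
    "other_expenses": 1.0,
}

total_in = sum(money_in.values())    # 23.335, which the book rounds to ~$23B
total_out = sum(money_out.values())  # 14.443
on_hand_after_crash = 3.0
stolen_in_hack = 0.45

# The gap the book's final chapters try to account for:
gap = total_in - total_out - on_hand_after_crash - stolen_in_hack
print(f"Money in:    ${total_in:.3f}B")
print(f"Money out:   ${total_out:.3f}B")
print(f"Unaccounted: ~${gap:.2f}B")  # ~5.4, i.e. roughly the ~$6B discrepancy
```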

 

Here are the largest Manifold markets on FTX repayment I could find, for another reference point (note: they are still rather small):

Further Edit: Here are some other Manifold markets:


Comments (24)



It is remarkable to me that the portrayal of EA in this book seemed to have more positive vibes than a lot of popular articles about the actual charity stuff.

[anonymous]

I do think the portrayal of EAs could be worse, but it's pretty bad? EAs are accused of being hypocritical (e.g., way more concerned with money than they would care to admit), culty, overly trusting, overconfident, and generally uncool.

I'd say there are two main aspects that reflect negatively on the portrayal of EA. One I've mentioned below: Lewis goes out of his way to establish that the inner circle were 'the EAs', and implicitly seems to be making the point that Sam's mentality is a perfect match for the EA mentality. But much more damning is how he depicts The Schism in early Alameda. Even though he practically sides with Sam in the dispute, from what he describes it beggars belief that the EA community (and more so its top figures) didn't react more strongly after hearing what the Alameda quitters were saying. The mess at early Alameda eerily prefigured both what would happen later and Sam's shadiness.

Creditors are expected by Manifold markets to receive only 40c on each dollar that was invested on the platform (I didn't notice this info in the post when I previously viewed it). And we do know why the money is missing: FTX stole it and invested it in their hedge fund, which gambled it away and lost it.

There's also a fairly robust market for (at least larger) real-money claims against FTX, with prices around 35-40 cents on the dollar. I'd expect recovery to be somewhat higher in nominal dollars, because it may take some time for distributions to occur, and that is presumably priced into the market price. (Anyone with a risk appetite for buying large FTX claims probably thinks the expected rate of return on their next-best investment choice is fairly high, implying a fairly high discount rate is being applied here.)
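
To illustrate that discounting point with a toy example (all numbers here are assumptions for illustration, not actual claim-market data):

```python
# Hypothetical illustration: a claim trading below face value partly reflects
# time-discounting, not just expected shortfall. All numbers are assumptions.
price_per_dollar = 0.38   # assumed market price of a claim, per $1 of face value
years_to_payout = 2.0     # assumed wait until distributions
required_return = 0.20    # assumed annual return demanded by claim buyers

# Implied expected *nominal* recovery per dollar of claim:
implied_nominal = price_per_dollar * (1 + required_return) ** years_to_payout
print(f"Implied expected nominal recovery: ~{implied_nominal:.2f} per dollar")
# ~0.55 per dollar under these assumptions, vs the 0.38 market price
```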

It is a bit disheartening to see that some readers will take the book at face value.

Yes, instead they should take a play-money, low-liquidity prediction market at face value.

How about looking at the evidence instead? A large amount of it will soon be evaluated in public court, including testimony from co-conspirators, and it is not looking good for SBF.

It's kinda dumb to speculate now when we are getting a ton of extra evidence in the next month, but I personally think there's bugger all chance that all the missing billions are recovered. 

I've added Manifold markets and more details from the book, not to be fully trusted at face value. Though they spent/lost a lot of money and misused funds via Alameda, they had huge amounts of money, so the book's figures suggest that might not account for all the customer funds being lost (if one writes off VC investment and the like).

aprilsun

Ryan, you continue to make IMO very overconfident criticisms of others relating to FTX, and I think someone should finally draw public attention to the fact that you[1] were actually[2] the person who introduced SBF to earning to give.

I'm not quite sure how this all fits together, but I would guess that these two things are not unrelated, and even that you were probably aware of this, given that in your last post you referenced SBF's/"Hutch's" comments on stealing on your Felicifia website (made before you pitched earning to give to him, incidentally)?

  1. ^

    My aim here is not to criticise you for this. Utilitarians were promoting earning to give at least as far back as 2006, Sam was exposed to utilitarianism a lot growing up and in any case, I don't share your apparent urge to place a lot of blame at the feet of any person or idea that played a causal role in this chain of events. My aim is to take some of the wind out of your sails -- I'm tired of you casting stones. If the motivation comes from psychological projection or deflection, I think it's important for others to know that.

  2. ^

    Michael Lewis tells us that William MacAskill reached out to SBF in SBF's junior year / "fall of 2012" i.e. September to December. You explained earning to give to SBF in July 2012.

I'm sorry, but it's not an "overconfident criticism" to accuse FTX of investing stolen money, when this is something that 2-3 of the leaders of FTX have already pled guilty to doing.

This interaction is interesting, but I wasn't aware of it (I've only reread a fraction of Hutch's messages since learning his identity), so to the extent that your hypothesis involves me having had some psychological reaction to it, it's not credible.

Moreover, these psychoanalyses don't ring true. I'm in a good headspace, giving FTX hardly any attention. Of course, I am not without regret, but I'm generally at peace with my past involvement in EA - not in need of exotic ways to process any feelings. If these analyses being correct would have taken some wind out of my sails, then their being so silly ought to put some more wind in.

"We know they stole it" does indeed seem overconfident at this point (and seem to be part of a pattern).

If you didn't know about the earning to give intro, fair enough.

On the meta-level, anonymously sharing negative psychoanalyses of people you're debating seems like very poor behaviour. 

Now, I'm a huge fan of anonymity. Sometimes one must criticise a vindictive organisation or a political orthodoxy, and anonymity is needed to avoid unjust social consequences.

In other cases, anonymity is inessential: one wants to debate in an aggressive style while avoiding the just social consequences of doing so. When anonymous users misbehave, we think worse of anonymous users in general. If people always write anonymously, then writing anonymously is no longer a political statement, and we no longer see anonymous writing as a sign that someone might be hiding from unjust retaliation.

Now, aprilsun wants EA to mostly continue as normal, which is a majority position in EA leadership, and not to look too deeply into who is to blame for FTX, which helps to defend EA leadership. I don't see any vindictive parties or social orthodoxies being challenged here. So why would anonymity be needed?

DC

Independent of aprilsun's spat with Ryan, I am incredibly grateful they have dug up and linked SBF's content on Felicifia. I had tried looking for it and failed, including asking people who I thought might know once or twice, and had at least partly questioned whether it was an act of imagination on my part or someone else's that he had posted on there. And the interaction where it seems like Ryan persuaded him of earning to give is fascinating as a historical matter (and, much less importantly, quite the unilateral first strike with which to start a sour internet argument).

brook
Moderator Comment

Speaking as a moderator, this comment seems to break a number of our norms. It isn't on-topic, and it's an unnecessary personal attack. I'd like to see better on the forum.

Adding an article from the Chronicle of Philanthropy for reference:

https://www.philanthropy.com/article/this-is-effective-altruism-new-book-offers-unflattering-glimpses-of-sam-bankman-frieds-philanthropy (free registration required)

I have not read Lewis' book, but the article appears to summarize several themes in the book related to EA. It is, as the title suggests, not a pro-EA piece.

Reading the book right now like everybody else, I guess. If Lewis is to be believed (complex in parts, as he is clearly seeing all this through Sam-tinted glasses), ALL the members of his inner circle (Caroline, but also Nishad and Wang) were committed EAs, which is something I find disturbing.

I didn’t think that was new information?

It was for me. Also, I had read about Tara and others leaving Alameda and having issues with Sam, but not the gory details.

That assessment seems accurate to me fwiw. 
