[PS: If you're hitting a paywall, you can use this link]

Sam Bankman-Fried was recently interviewed live by Andrew Ross Sorkin of the New York Times as part of the DealBook Summit, an interview that was scheduled before the FTX collapse and went ahead regardless.

The summary is that SBF tried to frame the collapse as a series of mistakes, stemming from his ignorance of various aspects of accounting, risk management, and what was happening at Alameda. As the NY Times reports, parts of his narrative seem to contradict evidence regarding the commingling of funds between FTX and Alameda.

For the most part, SBF also backed off many of the statements he made in Kelsey Piper's interview, especially those that some interpreted at the time as a confession that he had tried to deceive the public. Aside from the occasional reference to donations, effective altruism did not come up.

Among the things he said:

  • “Was there commingling of funds?” “I didn't knowingly commingle funds. (…) I was surprised as to how exposed Alameda's position was.”
  • He pointed out that he was misled by significant discrepancies between post-facto accounting and what their internal dashboards reported.
  • “I wasn't running Alameda, I didn't know exactly what was going on. (…) Obviously that is a pretty bad mistake on my part, it was a pretty big oversight.”
  • “When did you think you knew there was a problem?” “November 6. That was the date when the tweet about CZ [Changpeng Zhao] came out.”
  • He said he entered the interview against the advice of his lawyers. This at least seems consistent with the partly self-incriminating statements he made during the interview. “There’s a time and a place for me to think about myself and my own future. I don’t think this is it.”
  • Regarding the interview with Kelsey Piper, he attempted to recontextualize his comments on doing good as merely distinguishing between real impact (giving the example of bed nets for malaria) and other instrumental donations made “for the business”. His answer was not entirely clear. Regarding public deception, he later declared, “I was as truthful as I’m knowledgeable to be, (…) I don’t know of times when I lied”.
  • Regarding spending on houses and other things, he said it was part of a strategy to attract tech talent for FTX. Specifically regarding buying a property for his parents, he said they had only stayed there temporarily and that it was company property.
  • As for why investors like Sequoia Capital missed the risk management problems at FTX, he said they were focused on the upside, which is natural in their role as investors.
  • On drugs, he said he had taken prescribed stimulants for focus, but that there was nothing surprising in that. He said reports about parties were greatly exaggerated: “there were no wild parties. At our parties, we play board games.”
  • On governance, he said he even thought that, “if anything, we had too many boards”. He noted that even though he believes there were over 12 boards across the different entities with regulatory functions, the problem was that no one was explicitly in charge of customer risk management.
  • Finally, he explained that it was still possible to make customers whole, or at the very least that it was entirely possible a month ago (he regretted the bankruptcy filing). He pointed to the existence of several assets that were merely illiquid, and noted that FTX US and FTX Japan were probably still completely solvent (as opposed to the larger FTX International).
Comments

My only update is that I think this community (based on the EA Forum only) is under-rating the PR damage from all of this. For a lot of people, SBF =~ EA, and this interview does not appear to be playing well (source: Twitter, group chats, etc.). I'm not sure what to do about it, but I thought I'd share that observation from outside the EA/LW bubble. A few other thoughts:

  • Unfortunately, I think SBF's comments to Vox about ethics ("so the ethics stuff - mostly a front?") have been misread to mean that his entire earning-to-give / EA worldview was somehow a cynical sham. While I think FTX's downfall indeed involved some risky and unethical business dealings, I don't think Sam is saying anything like this (obviously). In fact, he may have even EV'd himself into taking some of these risks in service of his earnest philanthropic goals (epistemic status: who knows).
  • Some people who really don't like EA, and longtermism in particular, are using the FTX downfall as a sort of proof that EA exists to launder the reputations of the wealthy. While I think these arguments have little merit, they are getting a lot of play in left-leaning circles, and I think they have the potential to do damage, especially to people with limited exposure to EA who are "gettable" in the sense that they care about similar things to EAs and may now be less likely to work on or support EA cause areas.

(...) for a lot of people, SBF =~ EA

This seems weird. I think that, PR-wise, our biggest worry is what the first impressions of newcomers will be, and the vast majority of people haven't heard of EA yet. I worry more about what the first articles on Google are going to be than about how we are actively being perceived right now.

I'm still worried, but from my perspective general attitudes haven't changed that much yet, and at most, people with pre-existing negative beliefs about EA have seen them confirmed.

Plus, I don't think we're under-rating the damage; it's just that there doesn't seem to be much we can do.

(I should probably say my view is quite partial: I'm an organizer for a Spanish-speaking group and for the most part, the situation has seemed distant)

It seems like we're splitting hairs. What I'm saying is that millions and millions of people are hearing about EA for the first time via articles like this: https://www.nytimes.com/2022/12/01/technology/sam-bankman-fried-crypto-artificial-intelligence.html?smid=nytcore-ios-share&referringSource=articleShare 

Maybe this won't matter for the marginal Berkeley EA club joiner, but I'm worried this will do some harm with donors and non-EAs that EA orgs have to work with in order to be successful. EA orgs often/usually have to interact with the outside world, sometimes with fairly establishment/conservative organizations whose leaders read the NYT and WSJ.

Maybe I'm raising this alarm because I experience this directly, working daily in finance with a bunch of people who sit on the boards of foundations and nonprofits and who now have a very negative view of EA. Maybe this is a niche concern and it doesn't matter in the grand scheme of things.

Things that I think played particularly badly: 

  • Not being totally direct about his parents' real estate acquisitions. These are a bad look even if you buy the argument that the only way to find space in the Bahamas is to buy a few hundred million dollars of extreme-lux resort condos. I know a small handful of very wealthy people who would be able to immediately explain how they have financed/held their RE assets.
  • Not being totally direct about his relationship to Alameda: the guy founded the firm, owned 90% of it, and lived with its principals in the Bahamas, where the firm was also located (and maybe dated its CEO?). I have no more intel than what's available in the press, but as an outsider it just does not read as truthful.
  • Drug use: who cares, but it has become a meme, and I think he would have been better served to say: 'yeah, I have a special patch-based script for ADHD meds that I've used throughout my career - not sorry about that'.

According to https://finance.yahoo.com/news/ftx-japan-unit-drafts-plan-124544036.html, FTX Japan has about $150M in assets, which isn't much compared to what the whole FTX conglomerate owes.

My own impression is that SBF seemed honest. He probably took a significant legal risk by agreeing to the interview, and many of the mistakes he pointed out seem consistent with most accounts of the FTX collapse (in which SBF would have been negligent, doubling down on a series of decisions while operating with limited information).

The New York Times was right to point out that there is some contradictory evidence, especially given how the accounts were set up, but I don't think it is strong evidence either.

That being said, I don't think we should update significantly either way from this interview.

[This comment is no longer endorsed by its author]

wayne

Agree with your impression, but I would give this interview more weight than you do. In my experience - around 15 years of legal work (including as both a lawyer and as a defendant) - it is exceedingly rare for a defendant with bad intentions to speak openly about what they did. SBF is probably already a defendant in any number of cases, and some of them may eventually be criminal.

The fact that he is speaking openly and transparently comes at significant personal risk and is much more consistent with the notion that he acted in good faith. 

I was probably 75-25 on good faith vs. fraud prior to this interview. I'm probably 80-20 or 85-15 after it.

PS: Great summary. This is helpful, as I missed part of the interview.

He's not speaking openly and transparently. His answers are sometimes really evasive, and he doesn't admit to any mistakes in much detail. There are lots of reasons he might do interviews (thinking he's smarter than his lawyers; thinking he's in trouble either way and may as well enjoy the spotlight; thinking he's got a message to share that's more important than his personal fate; somehow thinking he still has a shot at fundraising money[?]; etc.). I'm in favor of extending people a lot of goodwill if they transparently explain themselves, but you have to actually check whether they're doing that, rather than just saying or pretending they are.

What do we think might have been evasive in his answers? 

I would have to re-listen to the interview to see exactly what mistakes he admitted. But I thought it was pretty clear what he admitted to, personally: failure to manage risk (e.g., understanding how likely it was that a drop in collateral value and a sudden withdrawal of deposits would happen simultaneously); failure to maintain corporate controls (e.g., creating red flags or account pauses if certain account holders like Alameda exceeded lending limits); and improper account segregation (he admitted he did not know about the commingling of funds). He did not admit to specific fraud, but that's consistent with the notion that he did not commit fraud. He did admit to a number of things that, to me, provide a completely plausible account of how FTX failed. And they are errors of risk management and even arrogance - but not malice or deceit.

He starts the entire conversation saying that as CEO he's in charge and responsible. But then he claims ignorance about so many things! He blames it all on Alameda, but it's left unclear how Alameda got FTX money to trade with in the first place. I definitely don't feel like he explained clearly what went wrong and where he messed up. The way it sounds, he had no clue about the things that went on. That's not communicating openly and transparently!

When there's a possibility of lying, you IMO can't just go by your gut feeling of "does this person's story make sense to me?" If you do, you'll believe any liar who's good at coming up with plausible excuses. I think people have a pro-social duty to ask themselves, "How would someone telling the full truth sound different from someone covering things up?" Maybe you did that and we disagree about how to make that comparison. But to me, it seems too suspicious how often Sam claims ignorance, and the narrative at the end doesn't make much sense. I feel like if I lost that much money, I'd know exactly what went wrong and could explain it so it makes sense.

He was so evasive about the "commingling of funds" questions. His answers were about how there's margin/collateral, and how Alameda had a position that got larger without him knowing, and how he was surprised. But that's not the question! He's being asked why the fuck Alameda was gambling with his exchange's money. He repeatedly dodged that question. (After repeated dodges, he then talks about opening bank accounts and stuff like that, but that's awfully late and still doesn't explain much. "And I'm still looking into that..." Really?!)

This is helpful. I might give the interview another listen with these particular issues in mind. 

Here's a more recent interview with hard-hitting questions and follow-ups. The evasiveness becomes more obvious when the interviewer is well prepared and doesn't immediately move on to the next question when an answer is evasive.

It looks like SBF admitted to fraud in this interview. He admitted to knowingly commingling funds in a way that directly violates the terms of service, and he admitted that was their general business practice.

I guess there are multiple aspects to this. While he might seem to be open at the cost of personal legal risk, it might be that he's also telling an inaccurate story of what happened. (EDIT: slightly edited the wording regarding openness/good faith in this one paragraph after reading Lukas's take)

(Heavy speculation below)

A crucial point given SBF's significant involvement in EA and interest in utilitarianism is whether he actually believed in all of it, and how strongly.

There are some signs he believed in it strongly: being associated with EA rather than a more popular and commonly accepted movement, being very knowledgeable about utilitarianism, early involvement, donations etc.

If he did believe in it strongly, it could be that this is just him "doing [what he believes is] the most good" by potentially being dishonest about some things (whether or not with bad intentions), in order to, perhaps (in his mind), deflect the harm he's caused EA and the future, at the cost of personal legal risk (which is minor in comparison, from the utilitarian perspective). (Then again, another (naive) utilitarian strategy might be to say "muhahaha, I was evil all along!" and get people to think that he used EA as a cover and that he isn't representative of it. If that also works (in expectation, to him), I'm not so sure why he picked one over the other.)

This is all speculative, and a bit unusual for the average defendant, but SBF is quite unusual (as is EA, to be fair) and we might have to consider these unusual possibilities.

Not sure how you can comment in the direction of believing these were mistakes. If you had money in my bank, and I showed you £10,000 but actually used that money for my bad trades while still showing you £10,000, would you believe it was an honest mistake when you tried to take your money back and it wasn't there, but had been burnt in trades? Is it a mistake to use other people's money for your own ends? If so, then yes, he made a mistake and did not steal. I would have thought that for money, WYSIWYG principles apply: what you see is what you get and have, not what you see is... maybe... what you have but don't get.

His intentions don't matter; his actions and outcomes do.

And really, from an EA perspective, all that matters is the public's general view and its impact on current and future members/organisations/grants.
