I just thought I'd make people aware of this, since I haven't seen anything else posted on the Forum about it (or maybe at this point we've stopped keeping track of FTX-related media productions? not sure). I assume there may be a wave of EA-related PR of unknown nature, depending on how the series chooses to portray the ideas and the movement. Maybe it would even be worth it for CEA or the like to make themselves available to the series' producers for inquiries?

In any case, I'm looking forward to watching it!

https://cryptoslate.com/ozark-star-julia-garner-to-play-caroline-ellison-in-obama-netflix-miniseries-on-ftx-collapse/


I firmly feel that a big mistake was people not standing by the EA label. I saw a lot of friends stop calling themselves EA after the SBF/FTX scandal, but I think explaining it is a much better approach:
(1) EA is about giving your money away to help as many people as possible
(2) SBF lied and committed fraud
(3) SBF didn't practice EA
(4) SBF wasn't EA, he was just saying he was

It's kind of like if someone says they're vegan whilst eating meat: you should point out that the person is being dishonest in their labelling and doesn't represent veganism.

While I agree that people shouldn't have renounced the EA label after the FTX scandal, I don't quite find your veganism analogy convincing. It leaves out two very important elements:

  1. SBF's public significance within EA: this is more like if one of the most famous vegan advocates on the planet, the one everybody knows about, was shown not only to eat meat, but to own a rather big meat-packing plant.
  2. Proximity framing: I think one can make a case for SBF being a pure, naive Utilitarian who just Petersburgged himself into bankruptcy and fraud. While EA is not ideologically 'naive' Utilitarian, one can argue that its intellectual foundations aren't far from Sam's (in fact, they significantly overlap), and that his downfall might non-trivially cast a shadow on them. It is common for EAs to make really counterintuitive EV calculations and take pride in supporting things normies would find highly objectionable, while paying what from the outside might look like mere lip service to 'oh yeah, you should abide by socially established rules and norms', and paradoxically holding that such abiding is merely strategic and revocable.
  1. You're right, I should have emphasised that better.
  2. I'm not sure what 'Petersburgged' means, but I think you mean he started out pure and then gradually gave himself more justification for increasingly bad actions as time went on. In that case I agree that early on he was definitely EA (I remember he went vegan the day after a friend showed him that eating meat didn't align with his (SBF's) values), so he was clearly committed to moral action at some point. But I would say the SBF that committed the fraud etc. was a distinct Sam from the one that was EA.

It was my lame attempt at making a verb out of the St. Petersburg Paradox, where an Expected Value calculation tells you to keep playing a coin-tossing game in which heads doubles the pot and tails loses everything. The EV is infinite, but in real life you'll end up ruined pretty quickly. SBF had a conversation about this with Tyler Cowen and clearly enjoyed biting the bullet:

COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing? 
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually. 
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing. 
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical. 
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence? 
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

I rather assume SBF was a radical, no-holds-barred, naive Utilitarian who just thought he was smart enough not to get caught with what were (from his POV) minor infringements of the arbitrary rules and norms of the masses, and that the risk was just worth it.
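To make the ruin dynamic concrete, here's a rough simulation sketch of the repeated 51/49 double-or-nothing game from Cowen's hypothetical (the 100 rounds and 100,000 trials are just illustrative numbers I picked):

```python
import random

def play_double_or_nothing(p_win=0.51, rounds=100, start_value=1.0):
    """Repeatedly play the 51/49 'double the world or lose it all' game."""
    value = start_value
    for _ in range(rounds):
        if random.random() < p_win:
            value *= 2        # 51%: everything doubles
        else:
            return 0.0        # 49%: it all disappears
    return value

# Each round multiplies the expected value by 1.02 (0.51 * 2), so after
# 100 rounds the EV is ~7x the starting value, yet the chance of anything
# surviving is 0.51**100, roughly 1e-29.
trials = 100_000
survivors = sum(1 for _ in range(trials) if play_double_or_nothing() > 0)
print(f"Worlds still existing after 100 rounds: {survivors} out of {trials}")
```

With these numbers you'd expect essentially zero surviving worlds, even though the EV strictly increases every round.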

I suppose you could say he petered out

This was shared as a quick take a month ago, and there was some discussion then. 

Oh! Fantastic, thanks for letting me know, Toby - should've looked in the "COMMENTS" section of the search results as well!

No worries! Not everyone is as terminally on the Forum as me lol

Terminally on the terminal
