All posts


Week of Sunday, 25 December 2022

Frontpage Posts


Quick takes

The Against Malaria Foundation is described as about 5-23x more cost-effective than cash transfers in GiveWell's calculations, while Founders Pledge thinks StrongMinds is about 6x more cost-effective.

But this seems kind of weird. What are people buying with their cash transfers? If a bednet would be 20x more valuable, then why don't they just buy that instead? How can in-kind donations (goods like bednets or vitamins) be so much better than just giving poor people cash?

I can think of four factors that might help explain this (a toy compounding calculation follows the list).

  • Lack of information. Perhaps beneficiaries aren't aware of the level of malaria risk they face, or aren't aware of the later-life income benefits GiveWell models. Or perhaps they are aware but irrationally discount these benefits for whatever reason.
  • Lack of markets. Cash transfer recipients usually live in poor, rural areas. Their communities don't have a lot of buying power. So perhaps it's not worth it for, e.g., bednet suppliers to reach their area. In other words, they would purchase the highly-valuable goods with their cash transfers, but they can't, so they have to buy less valuable goods instead.
  • Economies of scale. Perhaps top-rated charities get a big discount on the goods they buy because they buy so many. This discount could be sufficiently large that, e.g., bednets are the most cost-effective option when they're bought at scale, but not the most cost-effective option for individual consumers.
  • Externalities. Perhaps some of the benefits of in-kind goods flow to people who don't receive cash transfers, such as children. These could account for their increased cost-effectiveness if cash transfer recipients don't account for externalities.
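One way to see why the size of the gap matters is to treat the four factors as rough multipliers and check what they compound to. The toy calculation below is purely illustrative: the factor names mirror the list above, and every multiplier is a made-up assumption rather than a figure from GiveWell or Founders Pledge.

```python
# Toy compounding check: how large would each factor need to be for the four
# explanations above to multiply out to a ~20x gap? Every multiplier here is
# invented for illustration (not a GiveWell or Founders Pledge estimate).

factors = {
    "lack of information (beneficiaries undervalue the benefits)": 2.0,
    "lack of markets (cash can't buy the high-value goods locally)": 2.0,
    "economies of scale (bulk discounts for charities)": 1.5,
    "externalities (benefits to non-recipients, e.g. children)": 2.0,
}

combined = 1.0
for name, multiplier in factors.items():
    combined *= multiplier
    print(f"{name}: x{multiplier} -> running total x{combined:.1f}")

print(f"Combined multiplier under these guesses: x{combined:.1f}")
# Even granting each factor a 1.5-2x effect, the product here is only ~12x,
# which is why a full 20x gap takes some explaining.
```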

I think each of these probably plays a role. However, a 20x gap in cost-effectiveness is really big. I'm not that convinced that these factors are strong enough to fully explain the differential. And that makes me a little bit suspicious of the result.

I'd be curious to hear what others think. If others have written about this, I'd love to read it. I didn't see a relevant question in GiveWell's FAQs.


Looking back on leaving academia for EA-aligned research, here are two things I'm grateful for:

  1. Being allowed to say why I believe something.
  2. Being allowed to hold contradictory beliefs (i.e., think probabilistically).

In EA research, I can write: 'Mortimer Snodgrass, last author on the Godchilla paper (Gopher et al., 2021), told me "[x, y, z]".'

In academia, I had to find a previous paper to cite for any claim I made in my paper, even if I believed the claim because I heard it elsewhere. (Or, rather, I did the aforementioned for my supervisor's papers - I used to be the research assistant who cherry-picked citations off Google Scholar.)

In EA research, I can write, 'I estimate that the model was produced in May 2021 (90% confidence interval: March–July 2021)', or, 'I'm about 70% confident in this claim', and even, 'This paper is more likely than not to contain an important error.'

In academia, I had to argue for a position, without conceding any ground. I had to be all-in on whatever I was claiming; I couldn't give evidence and considerations for and against. (If I did raise a counterargument, it would be as setup for a counter-counterargument.)

That's it. No further point to be made. I'm just grateful for my epistemic freedom nowadays.


This forum has many comments that boil down to something like "thanks/you're welcome/+1/this/me too/same/agree/disagree/great/etc.", which dilute the signal-to-noise ratio.

Most of these commenters probably also voted/reacted on the parent content, and the comment itself doesn't add much beyond noise.


Random thoughts on grants (inspired by some recent posts):

  • Do (m)any grantmakers offer participation grants for certain unsuccessful grant applicants who applied in good faith, to (help) defray the time and expense of applying? I briefly floated this idea at the end of a discussion about grants in medium-income countries, and I think it ties into a recent discussion about compensating applicants for work trials to some extent. The grant-seeker and the grant-maker are semi-cooperatively producing a joint product of sorts -- a good grant portfolio. Both incur costs to make that product happen. It seems that common practice is that the grant-seeker and the grant-maker both bear their own costs for unsuccessful grants, but I can't think of any a priori reason that should be the case vs. a potentially more satisfactory allocation of the burden.
  • And in the recent discussion about work tests, the consensus was that job applicants should be (and generally were) compensated for trial tasks that took a few hours -- even though many of those tasks are standardized and thus do not have any substantive value for the would-be employer. Rather, they only have value to the semi-cooperative joint product of hiring the best employee. Even though both candidate and employer benefit from that process, we have (correctly, in my view) decided that the employer should bear more of the costs of the matching process by compensating the candidate for work trials.
  • Providing some minimal compensation for most unsuccessful applicants would presumably encourage more grant applications among the group to whom the offer was extended. The initial suggestion was to presumptively give such compensation to applicants from low/middle-income countries for various reasons, but one could imagine other objectives for selectively boosting applications (e.g., trying to encourage more applications among groups underrepresented in EA).
  • In my field (law), the court can kick out your case-initiating document in a few different ways. That can include "with prejudice" (i.e., there is a fatal flaw that you can't fix), "without prejudice" (i.e., you may be able to fix the weakness and can try again), or without prejudice to refiling after a certain event (e.g., you didn't follow the proper process at the administrative agency but can come back after you complete it). For grantmakers who don't want to offer more complete feedback, even knowing which of those categories a rejection generally falls into would be helpful, e.g.:
    • We just don't think this is a viable idea absent a significant change in circumstances. You should consider moving on to another one.
    • We don't think you are the right person for this idea. You should consider moving on to another one.
    • We think you need more experience. Consider reapplying once you have it.
    • There's nothing wrong with this grant proposal; it just didn't quite clear the funding bar this year (although it might have in prior years and might in future years).
Jason · 3y

Resources on Climate Change

IPCC Resources

  • The 6th Assessment Reports
    • The Summary for Policymakers (Scientific Basis Report, Impacts Report, Mitigation Report). NOTE: The Summaries for Policymakers are approved line-by-line by representatives from participating countries. This censors relevant information from climate scientists.
    • The Synthesis Report: pending in 2023
    • Climate Change: The Scientific Basis
    • Climate Change: Impacts
    • Climate Change: Mitigation
  • Key Climate Reports: the 6th (latest) Assessment Reports and additional reports covering many aspects of climate, nature, and finance related to climate change prevention, mitigation, and adaptation.
    • Emissions Gap Report: the gap refers to that between pledges and actual reductions, as well as between pledges and necessary targets.
    • Provisional State of the Climate 2022: the full 2022 report with 2022 data (reflecting the Chinese and European droughts and heat waves) is still pending.
    • United in Science 2022: a WMO and UN update on climate change, impacts, and responses (adaptation and mitigation).
    • and many more; see the IPCC website for the full list.
  • Archive of Publications and Data: all Assessment Reports prior to the latest round, plus older special reports, software, and data files useful for work relevant to climate change and policy.

TIP: The IPCC links lead to pages that link to many reports. Assessment reports from the three working groups contain predictions with uncertainty levels (high, medium, low), plenty of background information, supplementary material, and high-level summaries. EAs might want to start with the Technical Summaries from the latest assessment report and drill down into full reports as needed.

Useful Websites and Reports

  • IPBES Global Assessment Report On Biodiversity
  • The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES)
  • The World Wildlife Fund Living Planet Report

Noteworthy Papers

News and Opinions and Controversial Papers

Week of Sunday, 18 December 2022

Frontpage Posts

Quick takes

The whole/only real point of the effective altruism community is to do the most good.

If the continued existence of the community does the most good,
I desire to believe that the continued existence of the community does the most good;
If ending the community does the most good,
I desire to believe that ending the community does the most good;
Let me not become attached to beliefs I may not want.

Linch · 3y

Should the EA Forum team stop optimizing for engagement?
I heard that the EA forum team tries to optimize the forum for engagement (tests features to see if they improve engagement). There are positives to this, but on net it worries me. Taken to the extreme, this is a destructive practice, as it would

  • normalize and encourage clickbait;
  • cause thoughtful comments to be replaced by louder and more abundant voices (for a constant time spent thinking, you can post either 1 thoughtful comment or several hasty comments. Measuring session length fixes this but adds more problems);
  • cause people with important jobs to spend more time on EA Forum than is optimal;
  • prevent community members and "EA" itself from keeping their identities small, as politics is an endless source of engagement;
  • distract from other possible directions of improvement, like giving topics proportionate attention, adding epistemic technology like polls and prediction market integration, improving moderation, and generally increasing quality of discussion.

I'm not confident that EA Forum is getting worse, or that tracking engagement is currently net negative, but we should at least avoid failing this exercise in Goodhart's Law.


Canadian vs U.S. Law Schools (in short):

Similarities:

Law school is pretty stressful and features lots of reading and writing. But graduates from the most prestigious universities gain a highly valuable network, connections, and credentials. Plus, law school develops highly transferable skills like detail-oriented research, organization, and written and oral advocacy.

Pros of Canadian Law schools:

Canadian law schools cost a lot less than U.S. schools - even for international students, but especially for locals. For example, Quebec residents pay roughly 50x less to attend a law school in Quebec than students pay at Harvard.

Pros of U.S. Law schools:

U.S. law school graduates, especially when practicing in trade hubs like New York, have access to super-high-paying jobs. For example, top New York firms pay 10x as much as top Canadian firms for their first-year associates.

U.S. law school graduates also have access to highly influential decision-making positions. The U.S. has more lawyers in its House and Senate than Canada does in its equivalent chambers. U.S. policy has wider influence than Canadian policy. Last, the New York Bar is one of the best bars to hold for understanding and influencing international business practice.

Differences in Recommendations

80K's career page recommends law school for students who have a clear vision of what they want to do, a high stress tolerance, and a good personal fit for lawyering. This advice ensures that the time, money, and energy that go into law school are well spent.

Since Canadian law schools cost less, they can make sense for students who are still figuring out their career path, as they provide highly transferable skills and good career capital.

That said, Canadian students looking to earn to give have better prospects in other careers, unless they can move to the U.S.

Canadian law school can be for students looking to influence global change or policy-making, but they'll face a steeper hill than U.S. law students, since their credentials are less widely recognized than U.S. credentials (especially within the U.S.).


I wrote something about CICERO, Meta's new Diplomacy-playing AI. The summary:

  • CICERO is a new AI developed by Meta AI that achieves good performance at the board game Diplomacy. Diplomacy involves tactical and strategic reasoning as well as natural language communication: players must negotiate, cooperate and occasionally deceive in order to win.
    • CICERO comprises (1) a strategic model deciding which moves to make on the board and (2) a dialogue model communicating with the other players.
    • CICERO is honest in the sense that the dialogue model, when it communicates, always tries to communicate the strategy model's actual intent; however, it can omit information and change its mind in the middle of a conversation, meaning it can behave deceptively or treacherously.
  • Some who are concerned with risks from advanced AI think the CICERO research project is unusually bad or risky.
    • It has at least three potentially-concerning aspects:
      1. It may present an advancement in AIs' strategic and/or tactical capabilities.
      2. It may present an advancement in AIs' deception and/or persuasion capabilities.
      3. It may be illustrative of cultural issues in AI labs like Meta's.
    • My low-confidence take is that (1) and (2) are false because CICERO doesn't seem to contain any new insights that markedly advance either of these areas of study. Those capabilities are mostly the product of using reinforcement learning to master a game where tactics, strategy, deception and persuasion are useful, and I think there's nothing surprising or technologically novel about this.
    • I think, with low confidence, that (3) may be true, but perhaps no more true than of any other AI project of that scale.
  • Neural networks using reinforcement learning are always (?) trained in simulated worlds. Chess presents a very simple world; Diplomacy, with its negotiation phase, is a substantially more complex world. Scaling up AIs to transformative and/or general heights using the reinforcement learning paradigm may require more complex and/or detailed simulations.
    • Simulation could be a bottleneck in creating AGI because (1) an accurate enough simulation may already give you the answers you want, (2) an accurate and/or complex enough simulation may be AI-complete, and/or (3) it may be extremely costly.
    • Simulation could also not be a bottleneck because, following Ajeya Cotra's bio-anchors report, (1) we may get a lot of mileage out of simpler simulated worlds, (2) environments can contain or present problems that are easy to generate and simulate but hard to solve, (3) we may be able to automate simulation and/or (4) people will likely be willing to spend a lot of money on simulation in the future, if that leads to AGI.
    • CICERO does not seem like an example of a more complex or detailed simulation, since instances of CICERO didn't actually communicate with one another during self-play. (Generating messages was apparently too computationally expensive.)

The post is written in a personal capacity and doesn't necessarily reflect the views of my employer (Rethink Priorities).


Why doesn't EA have many career opportunities or recommendations for law students (especially outside the U.S.)?

Three reasons:

  1. Lawyering is highly specific. Lawyers can do three things no one else can: notarize documents, provide legal advice, and become judges. All the other work - policy-making, legal research, and advocacy - can be done by others with enough motivation, organization, and search-engine savvy. As such, most of the work lawyers do could be done without the cost, stress, and time of going to law school. So 80K rarely recommends it, let alone tailors career advice to that skillset.
  2. Lawyering is problem-responsive. Budding organizations past the start-up stage - like the EA community - have little need for lawyers if they're not running into legal issues. It makes more sense for most mid-sized organizations to hire lawyers as needed instead of hiring them in-house. Only the largest EA Orgs, like Open Phil or CEA, find a regular need for lawyers. 
  3. Lawyering is geographically constrained. Lawyers are licenced in specific jurisdictions, creating an institutional barrier to in-depth collaboration. Legal professionals and academics that do collaborate either work on broad, sweeping analyses that day-to-day organizations are still figuring out how to implement, or they work on re-orienting cultural motherships over years of concentrated effort. These problems are hardly tractable (drawing little 80K attention), but their nature is unlikely to change. 

Week of Sunday, 11 December 2022

Frontpage Posts


Quick takes


EA Forum discourse tracks actual stakes very poorly

Examples:

  1. There have been many posts about EA spending lots of money, but to my knowledge no posts about the failure to hedge crypto exposure against the crypto crash of the last year, or the failure to hedge Meta/Asana stock, or EA’s failure to produce more billion-dollar start-ups. EA spending norms seem responsible for $1m–$30m of 2022 expenses, but failures to preserve/increase EA assets seem responsible for $1b–$30b of 2022 financial losses, a ~1000x difference.
  2. People are demanding transparency about the purchase of Wytham Abbey (£15m), but they’re not discussing whether it was a good idea to invest $580m in Anthropic (HT to someone else for this example). The financial difference is ~30x, the potential impact difference seems much greater still.

Basically I think EA Forum discourse, Karma voting, and the inflation-adjusted overview of top posts completely fail to correctly track the importance of the ideas presented there. Karma seems to be useful to decide which comments to read, but otherwise its use seems fairly limited.

(Here's a related post.)

Jonas V · 3y

A Personal Apology

I think I’m significantly more involved than most people I know in tying the fate of effective altruism in general, and Rethink Priorities in particular, with that of FTX. This probably led to rather bad consequences ex post, and I’m very sorry for this.

I don’t think I’m meaningfully responsible for the biggest potential issue with FTX. I was not aware of the alleged highly unethical behavior (including severe mismanagement of consumer funds) at FTX. I also have not, to my knowledge, contributed meaningfully to the relevant reputational laundering or branding that led innocent external depositors to put money in FTX. The lack of influence there is not because I had any relevant special knowledge of FTX, but because I have primarily focused on building an audience within the effective altruism community, who are typically skeptical of cryptocurrency, and because I have actively avoided encouraging others to invest in cryptocurrency. I’m personally pretty skeptical about the alleged social value of pure financialization in general and cryptocurrency in particular, and also I’ve always thought of crypto as a substantially more risky asset than many retail investors are led to believe.[1]

However, I think I contributed both reputationally and operationally to tying FTX in with the effective altruism community. Most saliently, I’ve done the following:

  1. I decided on my team’s strategic focus on longtermist megaprojects in large part due to anticipated investments in world-saving activities from FTX. Clearly this did not pan out.
  2. In much of 2022, I strongly encouraged the rest of Rethink Priorities, including my manager, to seriously consider incorporating influencing the Future Fund into our organization’s Theory of Change.
    1. Fortunately RP’s leadership was more generally skeptical of tying in our fate with cryptocurrency than I was, and took steps to minimize exposure.
  3. I was a regranter for the Future Fund in a personal capacity (not RP-related). I was happy about the grants I gave, but in retrospect of course this probably led to more headaches and liabilities to my regrantees.
    1. I tried to be cautious and tempered in my messaging to them, but I did not internalize that transactions are not final, and falsely implied in my dealings that all of the financial uncertainty comes before donations are in people’s bank accounts.
  4. In late 2021 and early 2022, I implicitly and explicitly encouraged EAs to visit FTX in the Bahamas. I would not be surprised if this was counterfactual: I do think my words have nontrivial weight in EA, and I also tried to help people derisk their visit to FTX.
  5. I tried to encourage some programmers I know to work at FTX, because I thought it was an unusually good and high-impact career opportunity for programmers.
    1. To the best of my knowledge, my encouragement did not counterfactually lead anybody to join FTX (though, alas, not for want of trying on my end).

I did the above without much due diligence on FTX, or thinking too hard about where they got their money. I did think about failure modes, but did not manage to land on the current one, and was also probably systematically overly optimistic about their general prospects. Overall, I’m not entirely sure what mistakes in reasoning or judgment led me to mistakenly bet on FTX.[2] But clearly this seemed to be a very wrong call ex post, and likely led to a lot of harm both to EA’s brand and to individuals.

For all of that, I am very sorry. I will reflect on this, and hopefully do better going forward.

  1. ^

    Note that this decision process did not apply to my own investments, where I think it makes sense to be substantially more risk-neutral. I had a non-trivial internal probability that crypto in general, and FTX in particular, might be worth a lot more in the coming years, and in accordance with this belief I did lose a lot of money on FTX. 

  2. ^

    Frustratingly, I did have access to some credible-to-me private information that led me on net to be more rather than less trusting of them. This example is particularly salient for me on the dangers of trusting private information over less biased reasoning processes like base rates.

Linch · 3y

Consider radical changes without freaking out

As someone running an organization, I frequently entertain crazy alternatives, such as shutting down our summer fellowship to instead launch a school, moving the organization to a different continent, or shutting down the organization so the cofounders can go work in AI policy.

I think it's important for individuals and organizations to have the ability to entertain crazy alternatives because it makes it more likely that they escape local optima and find projects/ideas that are vastly more impactful.

Entertaining crazy alternatives can be mentally stressful: it can cause you or others in your organization to be concerned that their impact, social environment, job, or financial situation is insecure. This can be addressed by pointing out why these discussions are important, by drawing a clear mental distinction between brainstorming and decision-making, and by building a shared understanding that big changes will be made carefully.

 

Why considering radical changes seems important

The best projects are orders of magnitude more impactful than good ones. Moving from a local optimum to a global one often involves big changes, and the path isn't always very smooth. Killing your darlings can be painful. The most successful companies and projects typically have reinvented themselves multiple times until they settled on the activity that was most successful. Having a wide mental and organizational Overton window seems crucial for being able to make pivots that can increase your impact several-fold.

When I took on leadership at CLR, we still had several other projects, such as REG, which raised $15 million for EA charities at a cost of $500k. That might sound impressive, but in the greater scheme of things raising a few million wasn't very useful given that the best money-making opportunities could make a lot more per staff per year, and EA wasn't funding-constrained anymore. It took me way too long to realize this, and only my successor stopped putting resources into the project after I left. There's a world where I took on leadership at CLR, realized that killing REG might be a good idea, seriously considered the idea, got input from stakeholders, and then went through with it, within a few weeks of becoming Executive Director. All the relevant information to make this judgment was available at the time.

When I took on leadership at EA Funds, I did much better: I quickly identified the tension between "raising money from a broad range of donors" and "making speculative, hits-based grants", and suggested that perhaps these two aims should be decoupled. I still didn't go through with it nearly as quickly as I could have, this time not because of limitations of my own reasoning, but more because I felt constrained by the large number of stakeholders who had expectations about what we'd be doing.

Going forward, I intend to be much more relentless about entertaining radical changes, even when they seem politically infeasible, unrealistic, or personally stressful. I also intend to discuss those with my colleagues, and make them aware of the importance of such thinking.

 

How not to freak out

Considering these big changes can be extremely stressful, e.g.:

  • The organization moving to a different continent could mean breaking up with your life partner or losing your job.
  • A staff member was excited about a summer fellowship but not a school, such that discussing setting up a school made them think there might not be a role at the organization that matches their interests anymore.

Despite this, I personally don't find it stressful if I or others consider radical changes, partly because I use the following strategies:

  • Mentally flag that radical changes can be really valuable. Remind myself of my previous failings (listed above) and the importance of not repeating them. There's a lot of upside to this type of reasoning! Part of the reason for writing this shortform post is so I can reference it in the future to contextualize why I'm considering big changes.
  • Brainstorm first, decide later (or "babble first, prune later"): During the brainstorming phase, all crazy ideas are allowed and I (and my team) aim to explore novel ideas freely. We can always still decide against going through with big changes during the decision phase that will happen later. A different way to put this is that considering crazy ideas must not be strong evidence for them actually being implemented.  (For this to work, it's important that your organization actually has a sound decision procedure that actually happens later, and doesn't mix the two stages. It's also important for you to flag clearly that you're in brainstorming mode, not in decision-making mode.)
  • Implement big changes carefully, and create common knowledge of that intention. Big changes should not be the result of naïve EV maximization, but should carefully take into account the full set of options (avoiding false dichotomies), the value of coordination (maximizing joint impact of the entire team, not just the decision-maker), externalities on other people/projects/communities, existing commitments, etc. Change management is hard; big changes should involve getting buy-in from the people affected by the change.

 

Related: Staring into the abyss as a core life skill

Jonas V · 3y

It would be really great if EAs didn't take out their dismay over SBF's fraud on each other and didn't try to tear down EA as an institution. I am wrecked over what happened with FTX, too, and of course it was a major violation of community trust that we all have to process. But you're not going to purify yourself by tearing down everyone else's efforts or get out ahead of the next scandal by making other EA orgs sound shady. EA will survive this whether you calm down or not, but there's no reason to create strife and division over Sam et al.'s crimes when we could be coming together, healing, and growing stronger.


Looks like we have a cost-saving way to prevent 7 billion male chick cullings a year.

I snipe at accelerationist anti-welfarists in the thread, but it's an empirical question whether removing horrifying parts of the horrifying system ends up delaying abolition and being net-harmful. It seems extremely unlikely (and assumes that one-shot abolition is possible) but I haven't modelled it.


Week of Sunday, 4 December 2022

Frontpage Posts


Quick takes

Here's what I usually try when I want to get the full text of an academic paper (a small script sketch of the same steps follows the list):

  1. Search Sci-Hub. Give it the DOI (e.g. https://doi.org/...) and then, if that doesn't work, give it a link to the paper's page at an academic journal (e.g. https://www.sciencedirect.com/science...).
  2. Search Google Scholar. I can often just search the paper's name, and if I find it, there may be a link to the full paper (HTML or PDF) on the right of the search result. The linked paper is sometimes not the exact version of the paper I am after -- for example, it may be a manuscript version instead of the accepted journal version -- but in my experience this is usually fine.
  3. Search the web for "name of paper in quotes" filetype:pdf. If that fails, search for "name of paper in quotes" and look at a few of the results if they seem promising. (Again, I may find a different version of the paper than the one I was looking for, which is usually but not always fine.)
  4. Check the paper's authors' personal websites for the paper. Many researchers keep an up-to-date list of their papers with links to full versions.
  5. Email an author to politely ask for a copy. Researchers spend a lot of time on their research and are usually happy to learn that somebody is eager to read it.
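A minimal sketch of the same fallback order as a small script, in case that's handy; it only assembles the links and queries from the steps above, and the Sci-Hub mirror, example DOI, and paper title are placeholders rather than real references.

```python
# Sketch of the fallback order above. It assumes nothing about any site's API;
# it only builds the URLs and search queries to try by hand, in order.
from urllib.parse import quote_plus

def full_text_leads(title, doi=None):
    """Return candidate links/queries in the order to try them."""
    leads = []
    if doi:
        leads.append(f"https://sci-hub.se/{doi}")  # 1. Sci-Hub by DOI (the journal-page URL is the manual fallback)
    leads.append("https://scholar.google.com/scholar?q=" + quote_plus(title))  # 2. Google Scholar
    leads.append(f'web search: "{title}" filetype:pdf')  # 3. direct PDF search, then a plain quoted search
    leads.append("check each author's personal website for a full-text link")  # 4. authors' sites
    leads.append("email an author and politely ask for a copy")  # 5. last resort
    return leads

for lead in full_text_leads("Some Paper Title", doi="10.1000/example.doi"):
    print(lead)
```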

Ideology in EA

I think the "ideology" idea is about the normative specification of what EA considers itself to be, but there seem to be 3 waves of EA involved here:

  1. the good-works wave, about cost-effectively doing the most good through charitable works
  2. the existential-risk wave, building more slowly, about preventing existential risk
  3. the longtermism wave, some strange evolution of the existential risk wave, building up now

I haven't followed the community that closely, but that seems to be the rough timeline. Correct me if I'm wrong.

From my point of view, the narrative of ideology is about ideological influences defining the obvious biases made public in EA: free-market economics, apolitical charity, the perspective of the wealthy. EAs are visibly ideologues to the extent that they repeat or insinuate the narratives commonly heard from ideologues on the right side of the US political spectrum. They tend to:

  • discount climate change
  • distrust regulation and the political left
  • extol or expect the free market's products to save us (TUA, AGI, ...)
  • be blind to social justice concerns
  • see the influence of money as virtuous; they trust money, in betting and in life
  • admire those with good betting skills and compare most decisions to bets
  • see corruption in government or bureaucracy but not in for-profit business organizations
  • emphasize individual action and the virtues of enabling individual access to resources

I see those communications made public, and I suspect they come from the influences defining the 2nd and 3rd waves of the EA movement, rather than the first, except maybe the influence of probabilism and its Dutch bookie thought experiment? But the folks arriving from the software industry, where just about everyone sees themselves as an individual but is treated like a replaceable widget in a factory, know to walk a line, because they're still well-paid. There's not a strong push toward unions, worker safety, or Luddism. Social justice, distrust of wealth, corruption of business, failures of the free market (for example, regulation-requiring errors or climate change): these are taboo topics among the people I'm thinking of, because they can hurt their careers. But they will get stressed over the next 10-20 years as AI takes over. As will the rest of the research community in Effective Altruism.

Despite the supposed rigor exercised by EAs in their research, the web of trust they spin across their research network is so strong that they discount most outside sources of information, and they even rely on a seniority-skewed voting system (karma) on their public research hub to tell them what counts as good information. I can see it in the climate change discussions. They are skeptical of information from outside the community. Their skepticism should face inward, given their commitments to rationalism.

And the problem of rationalized selfishness is obvious - big-picture obvious, obvious in every way, in every lesson, in every major narrative about every major ethical dilemma inside and outside religion: the knowledge boils down to selfishness (including vices) versus altruism. Learnings about rationalism should promote a strong attempt to work against self-serving rationalization (as in the Scout Mindset, but with explicit dislike of evil), to see that rationalization as stemming from selfishness, and to provide an ethical bent that works through the tension between self-serving rationalization and genuine efforts toward altruism so that, if nothing else, integrity is preserved and evil is avoided. But that never happened among EAs.

However, they did manage to get upset about the existential guilt involved in self-care, for example when they could be giving their fun dinner-out money to charity. That showed a lack of introspection and an easy surrender to conveniently uncomfortable feelings. And they committed themselves to cost-effective charitable works, and to developing excellent models of uncertainty as understood through situations amenable to metaphors involving casinos, betting, cashing out, and bookies. Now, I can't see anyone missing that many signals of a selfish but naive interest in altruism going wrong. Apparently, those signals have been missed. Not only that, but a lot of people who aren't interested in the conceptual underpinnings of EA "the movement" have been attracted to the EA brand. That's OK, so long as all the talk about rationalism and integrity and Scout Mindset is just talk. If so, the usual business can continue. If not, if the talk is not just smoke and mirrors, the problems surface quickly, because EA confronts people with its lack of rationality, integrity, and Scout Mindset.

I took it as a predictive indicator that EAs discount critical thinking in favor of their own brand of rationalism, one that to me lacks common sense (for example, conscious "updating" is bizarrely inefficient as a cognitive effort). Further, their lack of interest in climate destruction was a good warning. Then came the strange decision to center ethical decisions on an implausible future and the moral status of possibly existent trillions of people in the future. The EA community's shock and surprise at the collapse of SBF and FTX has been a further indication of a lack of real-world insight and connection to working streams of information in the real world.

It's very obvious where the tensions are, that is, between the same things as usual: selfishness/vices and altruism. BTW, I suspect that no changes will be made in how funders are chosen. Furthermore, I suspect that the denial of climate change is more than ideology. It will reveal itself as true fear and a backing away from fundamental ethical values as time goes on. I understand that. If the situation seems hopeless, people give up their values. The situation is not hopeless, but it challenges selfish concerns. Valid ones. Maybe EAs have no stomach for true existential threats. The implication is that their work in that area is a sham or serves contrary purposes.

It's a problem because real efforts are diluted by the ideologies involved in the EA community. Community is important because people need to socialize. A research community emphasizes research, and norms for research communities are straightforward. A values-centered community is ... suspect: prone to corruption, to misunderstandings about what community entails, and to reprisals and criticism when normative values are not being served by the community day-to-day. Usually, communities attract the like-minded. You would expect or even want homogeneity in that regard, not complain about it.

If EA is just about professionalism in providing cost-effective charitable work, that's great! Then there's no community involved, the values are memes and marketing, and the metrics are just those involved in charity, not the well-being of community members or their diversity.

If it's about research products, that's great! Then development of research methods and critical thinking skills in the community needs improvement.

Otherwise, comfort, ease, relationships, and good times are the community requirements. Some people can find that in a diverse community that is values-minded. Others can't.

A community that's about values is going to generate a lot of churn about things you can't easily change. You can't change the financial influences, the ideological influences, (most of) the public claims, and certainly not the self-serving rationalizations, all other things equal. If EA had ever gone down the path of exploring the trade-offs between selfishness and altruism with more care, it might have had a hope of being a values-centered community. I don't see them pulling that off at this point, if only because of their lack of interest or understanding. It's not their fault, but it is their problem.

I favor dissolution of all community-building efforts and a return to research and charity-oriented efforts by the EA community. It's the only thing I can see that the community can do for the world at large. I don't offer that as some sort of vote, but instead as a statement of opinion.

Ideology in EA

I think the "ideology" idea is about the normative specification of what EA considers itself to be, but there seem to be 3 waves of EA involved here:

  1. the good-works wave, about cost-effectively doing the most good through charitable works
  2. the existential-risk wave, building more slowly, about preventing existential risk
  3. the longtermism wave, some strange evolution of the existential-risk wave, building up now

I haven't followed the community that closely, but that seems to be the rough timeline. Correct me if I'm wrong.

From my point of view, the narrative of ideology is about the ideological influences behind the obvious biases made public in EA: free-market economics, apolitical charity, the perspective of the wealthy. EAs are visibly ideologues to the extent that they repeat or insinuate the narratives commonly heard from ideologues on the right side of the US political spectrum. They tend to:

  • discount climate change
  • distrust regulation and the political left
  • extol or expect the free market's products to save us (TUA, AGI, ...)
  • be blind to social justice concerns
  • see the influence of money as virtuous; they trust money, in betting and in life
  • admire those with good betting skills and compare most decisions to bets
  • see corruption in government or bureaucracy but not in for-profit business organizations
  • emphasize individual action and the virtues of enabling individual access to resources

I see those communications made public, and I suspect they come from the influences defining the 2nd and 3rd waves of the EA movement rather than the first, except maybe the influence of probabilism and its Dutch bookie thought experiment. But the influx of folks working in the software industry, where just about everyone sees themselves as an individual but is treated like a replaceable widget in a factory, knows to walk a line, because they're still well-paid. There's not a strong push toward unions, worker safety, or luddism.

There's a lot of "criticize EA" energy in the air this month. It can be useful and energising. I'm seeing more criticisms than usual produced, and they're getting more engagement and changing more minds than usual.

It makes me a little nervous that criticism can get more traction with less evidence than usual right now. I'm trying to be consciously less critical than usual for the moment, and perhaps save any important criticisms for the new year.


My opinionated and annotated summary / distillation of SBF's account of the FTX crisis, based on recent articles and interviews (particularly this Bloomberg article).

Over the past year the macroeconomy changed and central banks raised their interest rates, which led to crypto losing value. Then, after a crypto crash in May, Alameda needed billions, fast, to repay its nervous lenders, or it would go bust.

According to sources, Alameda’s CEO Ellison said that she, SBF, Gary Wang, and Nishad Singh had a meeting about the shortfall and decided to lend FTX user funds to Alameda. If true, they knowingly committed fraud.

SBF’s account is different:

Generally, he didn’t know what was going on at Alameda anymore, despite owing 90% of it. He disengaged because he was busy running FTX and for 'conflict of interest reasons'.[1] 

He didn’t pay much attention during the meeting and it didn’t seem like a crisis, but just a matter of extending a bit more credit to Alameda (from $4B by $6B[2] to ~$10B[3]). Alameda already traded on margin and still had collateral worth way more than enough to cover the loan, and, despite having been the liquidity provider historically, seemed to be less important over time, as they made up an ever smaller fraction of all trades.

Yet Alameda still had larger limits than other users, who’d get auto-liquidated if their positions got too big and risky. He didn’t realize that Alameda’s position on FTX had become much more leveraged, and thought the risk was much smaller. Also, a lot of Alameda’s collateral was FTT, roughly FTX stock, which rapidly lost value.

If FTX had liquidated Alameda, then Alameda, and maybe even its lenders, would’ve gone bust. And even if FTX didn’t take direct losses, users would’ve lost confidence, causing a hard-to-predict cascade of events.

If FTX hadn’t margin-called there was ~70% chance everything would be OK, but even if not, downside and risk would have been much smaller, and the hole more manageable.

SBF thought FTX and Alameda’s combined accounts were:

  1. Debt: $8.9B
  2. Assets:
     1. Cash: $9B
     2. 'Less liquid': $15.4B
     3. 'Illiquid': $3.2B

Naively, despite some big liabilities, they should have been able to cover it.

But crucially, they actually had $8B less cash than that: since FTX didn’t have a bank account when it first started, users sent >$5B[4] to Alameda instead, and their bad accounting then double-counted those funds by crediting both. Many users’ funds never moved from Alameda, and FTX users' accounts were credited with a notional balance that did not represent underlying assets held by FTX: users traded with crypto that did not actually exist.

This is why Alameda invested so much, while FTX didn’t have enough money when users tried to withdraw.[5]

They spent $10.75B on:[6]

  1. $4B for VC investments
  2. $2.5B to buy out Binance's investment in FTX (another figure is $3B)
  3. $1.5B for expenses
  4. $1.5B for acquisitions
  5. $1B labeled 'fuckups’[7]
  6. $0.25B for real estate

Even after FTX/Alameda profits (at least $10B[8]) and the VC money they raised ($2B[9]; as an aside, after raising $400M in Jan they tried to raise money again in July[10] and then again in Sept.[11]), all this adds up to minus $6.5B (a rough back-of-the-envelope reconciliation is sketched below). The FT says FTX is short $8B[12] of ~1M users’[13] money. In sum, this happened because he didn’t realize that they spent way more than they made, paid very little attention to expenses, was really lazy about mental math, and because there was a diffusion of responsibility amongst leadership.
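Since that sentence compresses several figures, here is a minimal back-of-the-envelope sketch of how they might fit together. This is my own reading of the numbers quoted in this summary, not arithmetic from SBF or from the source articles, and it only roughly reproduces the quoted "minus $6.5B":

```python
# Rough reconciliation of the figures quoted above (my interpretation;
# the inputs are the ballpark numbers from this summary, not audited figures).
spending = {
    "VC investments": 4.0,   # $B
    "Binance buyout": 2.5,   # (another figure is 3.0)
    "expenses": 1.5,
    "acquisitions": 1.5,
    "'fuckups'": 1.0,
    "real estate": 0.25,
}
total_spent = sum(spending.values())
print(total_spent)           # 10.75 -- matches the $10.75B total above

profits = 10.0               # "at least $10B" of FTX/Alameda profits
vc_raised = 2.0              # VC money raised
missing_cash = 8.0           # cash they had less of than SBF believed

net = profits + vc_raised - total_spent - missing_cash
print(net)                   # -6.75 -- in the ballpark of the quoted "minus $6.5B"
```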

While FTX.US was more like a bank, highly regulated and holding as much in reserves as users put in, FTX int’l was an exchange. Legally, exchanges don’t lend out users' funds; rather, users themselves lend out their funds to other users (of which Alameda was just one). FTX just facilitated this. An analogy: file-sharing platforms like Napster never upload music themselves illegally; they just facilitate peer-to-peer sharing.

Much more than $1B (per SBF, ‘~$8B-$10B at its peak’[14]) of user funds was opted into peer-to-peer lending / order-book margin trading (others say this was less than $4B[15]; all user deposits were $16B[16]). Also, while parts of the terms of service say that FTX never lends out users' assets, those parts are overridden by other parts of the terms of service, and he isn’t aware that FTX violated the terms of use (see FTX Terms of Service).

For me, the key remaining questions are:

  1. Did many users legally agree to their crypto being lent out without meaning to, by accepting the terms of service, even if they didn’t opt into the lending program? If so, it might be hard to hold FTX legally accountable, especially since they’re in the Bahamas.  
  2. If they did effectively lend out customer funds, did they do it multiple times (perhaps repeatedly since the start of FTX), or just once?
  3. Did FTX make it look like users' money was very secure, as in a highly regulated bank, and that their money wasn’t at risk, e.g. by partnering with Visa for crypto debit cards[17] or by blurring the line between FTX.us (‘A safe and easy way to get into crypto’) and FTX.com?
  4. Did FTX automatically sweep users into opting into peer-to-peer lending?

Proposal: I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.

  • My definition of epistemic health infrastructure: a social, digital, or organizational structure that provides systematic safeguards against one or more epistemic health issues, by regulating some aspect of the intellectual processes within the community.
    • They can have different forms (social, digital, organizational, and more) or different focuses (individual epistemics, group epistemology, and more), but one thing that unites them is that they're reliable structures and systems, rather than ad hoc patches.

(note: in order to keep the shortform short I tried to be curt when writing the content below; as a result the tone may come out harsher than I intended)

We talk a lot about epistemic health, but have massively underinvested in infrastructure that safeguards epistemic health. While things like EA Forum and EAG and social circles at EA hubs are effective at spreading information and communicating ideas, to my knowledge there has been no systematic attempt at understanding (and subsequently improving) how they affect epistemic health.

Examples of things not-currently-existing that I consider epistemic health infrastructure: 

  • (Not saying that these are the most valuable ones, just that they fall into this category; they are only examples.)
  • mechanisms to poll/aggregate community opinions (e.g. a more systematized version of Nathan Young's Polis polls) on all kinds of important topics, with reliable mechanisms to execute actions according to poll results
  • something like the CEA community health team, but focusing on epistemic health, with better-defined duties and workflows, and with better transparency
  • EA Forum features and (sorting/recommending/etc.) algorithms aiming to minimize groupthink/information cascades (a toy sketch of one such sorting rule follows this list)
  • (Many proposals from other community members about improving community epistemic health also fall into this category. I won't repeat them here.)
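
As a purely illustrative example of the kind of digital infrastructure meant here, below is a minimal, hypothetical sketch of an anti-cascade comment-sorting rule. It is not an existing Forum feature and not a concrete proposal; the scoring function, the prior, and the random tie-breaking are all my own assumptions about one way such an algorithm could dampen "rich get richer" vote dynamics.

```python
# Hypothetical sketch: rank posts by an upvote ratio shrunk toward a neutral
# prior, so a handful of early votes can't lock in a top position, then add
# random jitter so equally-scored posts don't always appear in the same order.
import random
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

def smoothed_score(p: Post, prior_votes: int = 10, prior_ratio: float = 0.5) -> float:
    """Upvote ratio pulled toward 50% when a post has few votes."""
    total = p.upvotes + p.downvotes
    return (p.upvotes + prior_ratio * prior_votes) / (total + prior_votes)

def rank(posts: list[Post]) -> list[Post]:
    # Sort by smoothed score; random tie-breaking weakens information cascades
    # driven purely by display order.
    return sorted(posts, key=lambda p: (smoothed_score(p), random.random()), reverse=True)

if __name__ == "__main__":
    sample = [Post("A", 40, 2), Post("B", 3, 0), Post("C", 120, 60)]
    for p in rank(sample):
        print(p.title, round(smoothed_score(p), 2))
```

Whether anything like this would actually help is an empirical question; the point is only that "epistemic health infrastructure" can be as concrete as a sorting function.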

I plan to coordinate a discussion/brainstorming on this topic among people with relevant interests. Please do PM me if you're interested!

