This is a special post for quick takes by Austin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Some reflections on the Manifest 2024 discourse:

  1. I’m annoyed (with “the community”, but mostly with human nature & myself) that this kind of drama gets so much more attention than eg typical reviews of the Manifest experience, or our retrospectives of work on Manifund, which I wish got even 10% of this engagement. It's fun to be self-righteous on the internet, fun to converse with many people who I respect, fun especially when they come to your defense (thanks!), but I feel guilty at the amount of attention this has sucked up for everyone involved.

    This bit from Paul Graham makes a lot more sense to me now:

    > When someone contradicts you, they're in a sense attacking you. Sometimes pretty overtly. Your instinct when attacked is to defend yourself. But like a lot of instincts, this one wasn't designed for the world we now live in. Counterintuitive as it feels, it's better most of the time not to defend yourself. Otherwise these people are literally taking your life.

    Kudos to all y'all who are practicing the virtue of silence and avoiding engaging with this.
  2. While it could have been much, much better written, on net I’m glad the Guardian article exists. And not just in an “all PR is good PR” sense, or even a “weak opponents are superweapons” sense; I think there's a legitimate concern there that's worthy of reporting. I like the idea of inviting the journalists to come to Manifest in the future.
  3. That said, I am quite annoyed that many people who didn’t attend Manifest may now think of it as "Edgelordcon". I once again encourage people who weren't there to look at our actual schedule, or to skim some of the many, many post-Manifest reports, to get a more representative sense of what Manifest is like or about.
  4. If Edgelordcon is what you really wanted, consider going to something like Hereticon instead of Manifest, thanks.
  5. Not sure how many people already know this, but I formally left Manifold a couple of months ago. I'm the most comfortable writing publicly of the three founders, and while I'm still on the company board, I expect Manifold's views and my own to diverge more over time.
  6. Also, Rachel and Saul were much more instrumental than me in making Manifest 2024 happen. Their roles were roughly those of co-directors, while mine was more like that of a producer. So most of the credit for a well-run event goes to them; I wish more people engaged with their considerations, rather than mine. (Blame for the invites, as I mentioned, falls on me.)
  7. EA Forum is actually pretty good for having nuanced discussion: the threading and upvote vs agreevote and reactions all help compared to other online discourse. Kudos to the team! (Online text-based discourse does remain intrinsically more divisive than offline, though, which I don't love. I wish more people eg took up Saul on his offer to call with folks.)
  8. Overall, my impression of the state of the EA community has ticked upwards as a result of all this. I’m glad to be here!
  9. Some of my favorite notes amidst all this: Isa, huw, TracingWoodgrains, and Nathan Young on their experiences, Richard Ngo against deplatforming, Jacob and Oli on their thoughts, Bentham's Bulldog and Theo Jaffee on their defenses of the event, and Saul and Rachel on their perspectives as organizers.

cosigned, generally.

most strongly, i agree with:

  • (1), (3), (4)

i also somewhat agree with:

  • (2), (7), (8), (9)

[the rest of this comment is a bit emotional, a bit of a rant/ramble. i don't necessarily reflectively endorse the below, but i think it pretty accurately captures my state of mind while writing.]

but man, people can be mean. twitter is a pretty low bar, and although the discourse on twitter isn't exactly enjoyable, my impression of the EA forum has also gone down over the last few days. most of the comments that critique my/rachel's/austin's decisions (and many of the ones supporting our decisions!) have made me quite sad/anxious/ashamed in ways i don't endorse — and (most) have done ~nothing to reduce the likelihood that i invite speakers who the commenters consider racist to the next manifest.

i'm a little confused about the goals of a lot of the folks who're commenting. like, their (your?) marginal 20 minutes would be WAY more effective by... idk, hopping on a call with me or something?[1]  [june23-2024 — edit: jeff's comment has explained why: yes, 1:1 discussion with me is better for the goal of improving/changing manifest's decisions, but many of the comments are "trying to hash out what EA community ... norms should be in this sort of situation, and that seems ... reasonably well suited for public discussion."]

there have been a few comments that are really great, both some that are in support of our decisions & some that are against them — austin highlighted a few that i had in mind, like Isa's and huw's. and, a few folks have reached out independently to offer their emotional support, which is really kind of them. these are the things that make me agree with (8): i don't think that, in many communities, folks who might disagree with me on the object level would offer their emotional support for me on the meta-level.

i'm grateful to the folks who're disagreeing (& agreeing) with me constructively; to everyone else... idk, man, at least hold off on commenting until you've given me a call or let me buy you a coffee or something. [june23-2024 — see edit above]

  1. ^

    and i would explicitly encourage you, dear reader, to do so! please! i would like to talk to you much more than i would like to read your comment on the EA forum, and way more than i'd like to read your twitter post! i would very much like to adjust my decision-making process to be better, and insofar as you think that's good, please do so through a medium that's much higher bandwidth!

> i'm a little confused about the goals of a lot of the folks who're commenting. like, their (your?) marginal 20 minutes would be WAY more effective by... idk, hopping on a call with me or something?

To the extent that people are trying to influence future Manifest decisions or your views in particular, I agree that 1:1 private discussion would often be better. But I read a lot of the discussion as people trying to hash out what EA community (broadly construed) norms should be in this sort of situation, and that seems to me like it's reasonably well suited for public discussion?

thanks, this has cleared things up quite a bit for me. i edited my comment to reflect it!

I’d strongly recommend against inviting them. If they decide to come, then I’d probably let them, but intentionally bringing in people who want to stir up drama is a bad idea and would ruin the vibes.

Fwiw, I think the main thing getting missed in this discourse is that if even 3 out of your 50 speakers (especially if they're near the top of the bill) are mostly known for a cluster of edgy views that are not welcome in most similar spaces, people who really want to gather to discuss those edgy and typically unwelcome views will be a seriously disproportionate share of attendees, and this will have significant repercussions for the experience of the attendees who were primarily interested in the other 47 speakers.

Anthropic's donation program seems to have been recently pared down? I recalled it as 3:1; see eg this comment from Feb 2023. But right now on https://www.anthropic.com/careers:
> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant

Curious if anyone knows the rationale for this -- I'm thinking through how to structure Manifund's own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration.

I'm also wondering whether existing Anthropic employees still get the 3:1 terms, or whether the program has been changed for everyone going forward. Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of EA giving, so the precise mechanics of the giving program could change funding considerations by a lot.

One (conservative imo) ballpark:

  • If founders + employees broadly own 30% of outstanding equity
  • 50% of that has been assigned and vested
  • 20% of employees will donate
  • 20% of their equity within the next 4 years

then $60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y. And the difference between 1:1 and 3:1 match is the difference between $180m/y of giving and $360m/y.
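
For anyone who wants to tweak the assumptions, here's the same back-of-envelope as a tiny script. Every input is one of the rough guesses above, not an actual Anthropic figure:

```python
# Back-of-envelope: annual donations out of Anthropic equity.
# All inputs are the rough guesses from the bullets above, not Anthropic data.
valuation = 60e9          # rumored raise valuation, in dollars
employee_share = 0.30     # founders + employees own ~30% of outstanding equity
vested_fraction = 0.50    # ~50% of that assigned and vested
donor_fraction = 0.20     # ~20% of holders donate...
donated_fraction = 0.20   # ...about 20% of their equity
years = 4                 # ...within the next 4 years

base_per_year = (valuation * employee_share * vested_fraction
                 * donor_fraction * donated_fraction / years)   # ~$90m/y

for match in (1, 3):  # 1:1 vs the old 3:1 employer match
    print(f"{match}:1 match -> ${base_per_year * (1 + match) / 1e6:.0f}m/y")
# 1:1 match -> $180m/y
# 3:1 match -> $360m/y
```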

It's been confirmed that the donation matching still applies to early employees: https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-shortform?commentId=oeXHdxZixbc7wwqna 

I would be surprised if the 3:1 match applied to founders as well. Also, I think 20% of employees donating 20% of their equity within the next 4 years is very optimistic.

My guess is that donations from Anthropic/OpenAI will depend largely on what the founders decide to do with their money. Forbes estimates Altman and Daniela Amodei at ~$1B each, and Altman signed the Giving Pledge.


See also this article from Jan 8: 

> At Anthropic’s new valuation, each of its seven founders — [...] — are set to become billionaires. Forbes estimates that each cofounder will continue to hold more than 2% of Anthropic’s equity each, meaning their net worths are at least $1.2 billion.

I don't think Forbes' numbers are particularly reliable, and I think there's a significant chance that Anthropic and/or OpenAI equity goes to 0; but in general, I expect founders to both have much more money than employees and be more inclined to donate significant parts of it (partly because of diminishing marginal returns of wealth).

It's a good point about how it applies to founders specifically - under the old terms (3:1 match up to 50% of stock grant) it would imply a maximum extra cost from Anthropic of 1.5x whatever the founders currently hold. That's a lot! 

Those bottom-line figures don't seem crazy optimistic to me, though - like, my guess is a bunch of folks at Anthropic expect AGI inside of 4 years, and Anthropic is the go-to example of "founded by EAs". I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

> I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

If Anthropic doesn't lose >85% of its valuation (which can definitely happen) I would expect way more.

As mentioned above, each of its seven cofounders is likely to become worth >$500m, and I would expect many of them to donate significantly.


> Anthropic is the go-to example of "founded by EAs"

I find this kind of statement a bit weird. My sense is that it used to be true, but they don't necessarily identify with the EA movement anymore: it's never mentioned in interviews, and when asked by journalists they explicitly deny it.

> Missing-but-wanted children now substantially outnumber unwanted births. Missing kids are a global phenomenon, not just a rich-world problem. Multiplying out each country’s fertility gap by its population of reproductive age women reveals that, for women entering their reproductive years in 2010 in the countries in my sample, there are likely to be a net 270 million missing births—if fertility ideals and birth rates hold stable. Put another way, over the 30 to 40 years these women would potentially be having children, that’s about 6 to 10 million missing babies per year thanks to the global undershooting of fertility.

https://ifstudies.org/blog/the-global-fertility-gap

For reference - malaria kills 600k a year. Covid has killed 6m to date.

If you believe creating an extra life is worth about the same as preventing an extra death (very controversial, but I hold something like this) then increasing fertility is an excellent cause area.
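
To make the comparison concrete, here's a quick sketch dividing the article's 270 million figure over the 30-40 year window it assumes and comparing against annual malaria deaths (illustrative arithmetic only):

```python
# Rough comparison of the quoted figures (illustrative arithmetic only).
missing_births_total = 270e6   # IFS estimate of net missing births
malaria_deaths_per_year = 600e3

for years in (30, 40):         # reproductive window assumed in the quote
    per_year = missing_births_total / years
    print(f"over {years} years: ~{per_year / 1e6:.1f}m missing births/year, "
          f"~{per_year / malaria_deaths_per_year:.0f}x annual malaria deaths")
# over 30 years: ~9.0m missing births/year, ~15x annual malaria deaths
# over 40 years: ~6.8m missing births/year, ~11x annual malaria deaths
```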

What’s the QALY cost of the sanctions on Russia? How does it compare to the QALYs lost in the Ukraine conflict?

My sense of the media narrative has been "Russia/Putin bad, Ukraine good, sanctions good". But if you step back (a lot) and squint, both direct warfare and economic sanctions share the property of being negative-sum transactions. Has anyone done an order-of-magnitude calculation for the cost of this?

(extremely speculative)

Quick stab: valuing one QALY at $100k (a rough US figure), and taking Russia's ~$1.4T GDP with the ruble down 30% to imply a 10% contraction: $140B / $100k = 1.4M QALYs lost; at 80 QALYs per life, that's about 17,500 lives.
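
The same quick stab as a script, with each assumption spelled out (all of these are rough guesses, not measured figures):

```python
# Same quick stab, spelled out. Every input here is a rough guess.
qaly_value = 100e3        # dollars per QALY (rough US figure)
russian_gdp = 1.4e12      # pre-war Russian GDP, dollars
contraction = 0.10        # assumed GDP contraction (ruble down ~30%)
qalys_per_life = 80

gdp_loss = russian_gdp * contraction            # $140B
qalys_lost = gdp_loss / qaly_value              # 1.4M QALYs
lives = qalys_lost / qalys_per_life             # ~17,500 lives
print(f"{qalys_lost / 1e6:.1f}M QALYs ~ {lives:,.0f} lives")
# 1.4M QALYs ~ 17,500 lives
```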

Edit: re: downvotes for OP: to clarify, I support the downvotes and don't endorse the premise of the question - damage to the Russian economy and its indirect health effects are not the dominant consideration here. Because Ukraine will suffer much more, the question's premise is naive and insensitive. I tried to answer this because I wanted to show how much harm Putin indirectly inflicted on Russia by starting this war, which might outweigh the direct casualties on the Russian side.

Countries usually value a QALY at 1-3x their GDP per capita.

But also, GDP reduction and QALYs might not be commensurable in that way...

I have a more detailed note on diminishing returns here. In brief, under logarithmic utility, a simple rule of thumb is that a dollar is worth 1/X times as much if you are X times richer, so doubling someone's income is worth the same amount no matter where they start. If GDP per capita is $10k, a $1 reduction is 10x less bad than at the $1k mark. In other words, people would probably rather give up money than health at the current margin.
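
As a tiny illustration of that rule of thumb (assuming log utility, u(c) = ln(c), so marginal utility is 1/c):

```python
import math

# Log utility: u(c) = ln(c), so marginal utility u'(c) = 1/c.
# A dollar is worth 1/X as much if you are X times richer.
def marginal_value(income_per_capita):
    return 1.0 / income_per_capita

# A $1 loss at $1k/capita vs the same loss at $10k/capita:
print(marginal_value(1_000) / marginal_value(10_000))   # 10.0

# And doubling income adds the same utility at any starting point:
print(math.log(2_000) - math.log(1_000))     # 0.693...
print(math.log(20_000) - math.log(10_000))   # 0.693...
```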

But there are ways to calculate this and it's probably gonna be bad...

One Lancet study suggests that the 2008 economic crisis caused 0.5 million excess cancer-related deaths worldwide. That's just cancer, which is about 15% of global mortality, so a naive extrapolation might suggest total excess mortality in the millions. There are ~50m deaths per year globally, so maybe there was a ~10% increase.

Russia has about 2m deaths per year.

GDP loss is projected to be similar to 2008 or Covid.
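
Putting those pieces together, a naive back-of-envelope under the same assumptions (the Lancet cancer estimate, ~15% cancer share of mortality, ~50m global and ~2m Russian deaths per year) would look like:

```python
# Naive back-of-envelope combining the figures above (illustrative only).
cancer_excess = 0.5e6    # Lancet estimate: excess cancer deaths, 2008 crisis
cancer_share = 0.15      # cancer's rough share of global mortality
global_deaths = 50e6     # deaths per year worldwide
russia_deaths = 2e6      # deaths per year in Russia

all_cause_excess = cancer_excess / cancer_share    # ~3.3M worldwide
increase = all_cause_excess / global_deaths        # ~6.7%, call it ~10%
print(f"implied excess Russian deaths/yr: ~{russia_deaths * increase:,.0f}")
# ~133,333 at ~6.7%; closer to 200,000 if you round the increase up to 10%
```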

https://vizhub.healthdata.org/gbd-compare/ 

https://www.sciencedirect.com/science/article/pii/S0140673618314855 

https://ars.els-cdn.com/content/image/1-s2.0-S0140673618314855-mmc1.pdf

Thank you for taking the time to write this response!

I'm not exactly sure what premise downvoters are reading from my question. To be clear, I think the war is a horrible idea, and it's important to punish defection in a negative-sum way (aka impose sanctions on countries that violate international law).

The main point I wanted to entertain was: it's sad when we have to impose sanctions on countries; lots of people will suffer. In the same way, it's sad when a war is fought and lots of people suffer. We should be careful not to treat economic punishment as qualitatively different from or intrinsically superior to direct violence; it's a question of how much net utility different responses produce for the world.

Thanks for clarifying - fwiw I didn't think you were ill-intentioned... and at its core your question re: innocent Russians suffering due to sanctions is a valid one - as you say, all suffering counts equally, independent of who suffers (and Russians will definitely suffer much more than most people living relatively affluent lives in the West). But because Ukrainians are currently suffering disproportionately more than Russians, the question might have struck some people as tone-deaf or inappropriate. Even setting aside the terrible direct humanitarian impact of the war, consider that Russia's GDP per capita was about $10k while Ukraine's was about $3k before the war, and Ukraine's economy will take a much bigger hit.
