This is a special post for quick takes by Austin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Anthropic's donation program seems to have been recently pared down? I recalled it as 3:1; see eg this comment from Feb 2023. But right now on https://www.anthropic.com/careers:
> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant

Curious if anyone knows the rationale for this -- I'm thinking through how to structure Manifund's own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration.

I'm also wondering if existing Anthropic employees still get the 3:1 terms, or whether the program has been changed for everyone going forward. Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of EA giving, so the precise mechanics of the giving program could change funding considerations by a lot.

One (conservative imo) ballpark:

  • If founders + employees broadly own 30% of outstanding equity
  • 50% of that has been assigned and vested
  • 20% of employees will donate
  • 20% of their equity within the next 4 years

then $60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y. And the difference between 1:1 and 3:1 match is the difference between $180m/y of giving and $360m/y.
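Spelled out, with every input being one of the speculative assumptions from the bullets above, the arithmetic can be sketched as:

```python
# Ballpark of annual charity giving from Anthropic equity donations.
# Every input is a guessed assumption from the bullets above, not a known figure.
valuation = 60e9        # rumored raise valuation, $
insider_share = 0.30    # founders + employees own 30% of outstanding equity
vested = 0.50           # fraction of that assigned and vested
donor_frac = 0.20       # fraction of employees who donate
equity_donated = 0.20   # fraction of their equity they donate
years = 4

base = valuation * insider_share * vested * donor_frac * equity_donated / years
print(f"base donations:  ${base / 1e6:.0f}m/y")    # $90m/y
print(f"with 1:1 match: ${2 * base / 1e6:.0f}m/y") # $180m/y
print(f"with 3:1 match: ${4 * base / 1e6:.0f}m/y") # $360m/y
```

Each factor maps to one bullet, so it's easy to see how sensitive the bottom line is to any single guess: halving the donor fraction halves all three totals.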

It's been confirmed that the donation matching still applies to early employees: https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-shortform?commentId=oeXHdxZixbc7wwqna 

I would be surprised if the 3:1 match applied to founders as well. Also, I think 20% of employees donating 20% of their equity within the next 4 years is very optimistic.

My guess is that donations from Anthropic/OpenAI will depend largely on what the founders decide to do with their money. Forbes estimates Altman and Daniela Amodei at ~$1B each, and Altman signed the Giving Pledge.


See also this article from Jan 8: 

> At Anthropic’s new valuation, each of its seven founders — [...] — are set to become billionaires. Forbes estimates that each cofounder will continue to hold more than 2% of Anthropic’s equity each, meaning their net worths are at least $1.2 billion.

I don't think Forbes numbers are particularly reliable, and I think that there's a significant chance that Anthropic and/or OpenAI equity goes to 0; but in general, I expect founders to both have much more money than employees and be more inclined to donate significant parts of it (partly because of diminishing marginal returns of wealth).

It's a good point about how it applies to founders specifically - under the old terms (3:1 match up to 50% of stock grant) it would imply a maximum extra cost from Anthropic of 1.5x whatever the founders currently hold. That's a lot! 

Those bottom-line figures don't seem crazy optimistic to me, though - like, my guess is a bunch of folks at Anthropic expect AGI on the inside of 4 years, and Anthropic is the go-to example of "founded by EAs". I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

> I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years time.

If Anthropic doesn't lose >85% of its valuation (which can definitely happen) I would expect way more.

As mentioned above, each of its seven cofounders is likely to become worth >$500m, and I would expect many of them to donate significantly.

 

> Anthropic is the go to example of "founded by EAs"

I find this kind of statement a bit weird. My sense is that it used to be true, but they don't necessarily identify themselves with the EA movement anymore: it's never mentioned in interviews, and when asked by journalists they explicitly deny it.

Some reflections on the Manifest 2024 discourse:

  1. I’m annoyed (with “the community”, but mostly with human nature & myself) that this kind of drama gets so much more attention than eg typical reviews of the Manifest experience, or our retrospectives of work on Manifund, which I wish got even 10% of this engagement. It's fun to be self-righteous on the internet, fun to converse with many people who I respect, fun especially when they come to your defense (thanks!), but I feel guilty at the amount of attention this has sucked up for everyone involved.

    This bit from Paul Graham makes a lot more sense to me now:

    > When someone contradicts you, they're in a sense attacking you. Sometimes pretty overtly. Your instinct when attacked is to defend yourself. But like a lot of instincts, this one wasn't designed for the world we now live in. Counterintuitive as it feels, it's better most of the time not to defend yourself. Otherwise these people are literally taking your life.

    Kudos to all y'all who are practicing the virtue of silence and avoiding engaging with this.
  2. While it could have been much, much better written, on net I’m glad the Guardian article exists. And not just in an "all PR is good PR" sense, or even a “weak opponents are superweapons” sense; I think there's a legitimate concern there that's worthy of reporting. I like the idea of inviting the journalists to come to Manifest in the future.
  3. That said, I am quite annoyed that many people who didn’t attend Manifest may now think of it as "Edgelordcon". I once again encourage people who weren't there to look at our actual schedule, or to skim some of the many, many post-Manifest reports, to get a more representative sense of what Manifest is like or about.
  4. If Edgelordcon is what you really wanted, consider going to something like Hereticon instead of Manifest, thanks.
  5. Not sure how many people already know this, but I formally left Manifold a couple months ago. I'm the most comfortable writing publicly out of the 3 founders, but while I'm still on the company board, I expect Manifold's views and my own to diverge more over time.
  6. Also, Rachel and Saul were much more instrumental in making Manifest 2024 happen than me. Their roles were approximately co-directors, while I'm more like a producer of the event. So most of the credit for a well-run event goes to them; I wish more people engaged with their considerations, rather than mine. (Blame for the invites, as I mentioned, falls on me.)
  7. EA Forum is actually pretty good for having nuanced discussion: the threading and upvote vs agreevote and reactions all help compared to other online discourse. Kudos to the team! (Online text-based discourse does remain intrinsically more divisive than offline, though, which I don't love. I wish more people eg took up Saul on his offer to call with folks.)
  8. Overall my impression of the state of the EA community has ticked upwards as a result of all this. I’m glad to be here!
  9. Some of my favorite notes amidst all this: Isa, huw, TracingWoodgrains, and Nathan Young on their experiences, Richard Ngo against deplatforming, Jacob and Oli on their thoughts, Bentham's Bulldog and Theo Jaffee on their defenses of the event, and Saul and Rachel on their perspectives as organizers.

cosigned, generally.

most strongly, i agree with:

  • (1), (3), (4)

i also somewhat agree with:

  • (2), (7), (8), (9)

[the rest of this comment is a bit emotional, a bit of a rant/ramble. i don't necessarily reflectively endorse the below, but i think it pretty accurately captures my state of mind while writing.]

but man, people can be mean. twitter is a pretty low bar, and although the discourse on twitter isn't exactly enjoyable, my impression of the EA forum has also gone down over the last few days. most of the comments that critique my/rachel's/austin's decisions (and many of the ones supporting our decisions!) have made me quite sad/anxious/ashamed in ways i don't endorse — and (most) have done ~nothing to reduce the likelihood that i invite speakers who the commenters consider racist to the next manifest.

i'm a little confused about the goals of a lot of the folks who're commenting. like, their (your?) marginal 20 minutes would be WAY more effective by... idk, hopping on a call with me or something?[1]  [june23-2024 — edit: jeff's comment has explained why: yes, 1:1 discussion with me is better for the goal of improving/changing manifest's decisions, but many of the comments are "trying to hash out what EA community ... norms should be in this sort of situation, and that seems ... reasonably well suited for public discussion."]

there have been a few comments that are really great, both some that are in support of our decisions & some that are against them — austin highlighted a few that i had in mind, like Isa's and huw's. and, a few folks have reached out independently to offer their emotional support, which is really kind of them. these are the things that make me agree with (8): i don't think that, in many communities, folks who might disagree with me on the object level would offer their emotional support for me on the meta-level.

i'm grateful to the folks who're disagreeing (& agreeing) with me constructively; to everyone else... idk, man, at least hold off on commenting until you've given me a call or let me buy you a coffee or something. [june23-2024 — see edit above]

  1. ^

    and i would explicitly encourage you, dear reader, to do so! please! i would like to talk to you much more than i would like to read your comment on the EA forum, and way more than i'd like to read your twitter post! i would very much like to adjust my decision-making process to be better, and insofar as you think that's good, please do so through a medium that's much higher bandwidth!

> i'm a little confused about the goals of a lot of the folks who're commenting. like, their (your?) marginal 20 minutes would be WAY more effective by... idk, hopping on a call with me or something?

To the extent that people are trying to influence future Manifest decisions or your views in particular, I agree that 1:1 private discussion would often be better. But I read a lot of the discussion as people trying to hash out what EA community (broadly construed) norms should be in this sort of situation, and that seems to me like it's reasonably well suited for public discussion?

thanks, this has cleared things up quite a bit for me. i edited my comment to reflect it!

I’d strongly recommend against inviting them. If they decide to come, then I’d probably let them, but intentionally bringing in people who want to stir up drama is a bad idea and would ruin the vibes.

Fwiw, I think the main thing getting missed in this discourse is that if even 3 of your 50 speakers (especially when they're near the top of the bill) are mostly known for a cluster of edgy views that are not welcome in most similar spaces, then people who really want to gather to discuss those edgy and typically unwelcome views will make up a seriously disproportionate share of attendees, and this will have significant repercussions for the experience of the attendees who were primarily interested in the other 47 speakers.

> Missing-but-wanted children now substantially outnumber unwanted births. Missing kids are a global phenomenon, not just a rich-world problem. Multiplying out each country’s fertility gap by its population of reproductive age women reveals that, for women entering their reproductive years in 2010 in the countries in my sample, there are likely to be a net 270 million missing births—if fertility ideals and birth rates hold stable. Put another way, over the 30 to 40 years these women would potentially be having children, that’s about 6 to 10 million missing babies per year thanks to the global undershooting of fertility.

https://ifstudies.org/blog/the-global-fertility-gap

For reference - malaria kills 600k a year. Covid has killed 6m to date.

If you believe creating an extra life is worth about the same as preventing an extra death (very controversial, but I hold something like this) then increasing fertility is an excellent cause area.

What's the QALY cost of the sanctions on Russia? How does it compare to the QALY lost in the Ukraine conflict?

My sense of the media narrative has been "Russia/Putin bad, Ukraine good, sanctions good". But if you step back (a lot) and squint, both direct warfare and economic sanctions share the property of being negative-sum transactions. Has anyone done an order-of-magnitude calculation for the cost of this?

(extremely speculative)

Quick stab: Valuing one QALY at $100k (rough figure for US), Russian GDP was $1.4T; the ruble has lost 30% of its value. If we take that to imply a 10% GDP contraction, $140B/$100k = 1.4M QALY lost; if 80 QALY = 1 life, then 17.5k lives lost.
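Making the quick stab explicit (all inputs are the speculative guesses above, not measured figures):

```python
# Speculative back-of-envelope: QALY cost of sanctions via Russian GDP contraction.
# Every input is a rough guess from the comment above.
qaly_value = 100_000     # assumed $ value of one QALY (rough US figure)
gdp = 1.4e12             # pre-war Russian GDP, $
contraction = 0.10       # assumed GDP contraction (loosely inferred from ruble drop)
qaly_per_life = 80       # assumed QALYs in one full life

qalys_lost = gdp * contraction / qaly_value   # 1.4M QALYs
lives_equiv = qalys_lost / qaly_per_life      # 17,500 life-equivalents
print(f"{qalys_lost:,.0f} QALYs ~ {lives_equiv:,.0f} lives")
```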

Edit: re: downvotes for OP: to clarify, I support the downvotes and don't endorse the premise of the question - damage to the Russian economy and its indirect health effects are not the dominant consideration here. Because Ukraine will suffer much more, the question's premise is naive and insensitive. I tried to answer it anyway because I wanted to show how much indirect harm Putin inflicted on Russia by starting this war, which might even outweigh the direct casualties on the Russian side.

Countries usually value a QALY at 1-3x their GDP per capita.

But also, GDP reduction and QALYs might not be commensurable in that way...

I have a more detailed note on diminishing returns here. In brief, under logarithmic utility, a simple rule of thumb is that a dollar is worth 1/X times as much if you are X times richer. So doubling someone's income is worth the same amount no matter where they start. If GDP per capita is $10k, a $1 reduction is 10x less bad than at the $1k mark. In other words, people would probably rather give up money than health on that margin.
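The rule of thumb falls straight out of a log utility function, since the utility gain depends only on the ratio of incomes. A minimal sketch:

```python
import math

def utility_gain(income_before: float, income_after: float) -> float:
    """Change in log utility: depends only on the ratio of incomes."""
    return math.log(income_after / income_before)

# Doubling income is worth the same no matter where you start:
assert abs(utility_gain(1_000, 2_000) - utility_gain(10_000, 20_000)) < 1e-12

# The marginal $1 is ~10x less valuable at $10k than at $1k:
loss_at_10k = utility_gain(10_000, 9_999)
loss_at_1k = utility_gain(1_000, 999)
print(loss_at_1k / loss_at_10k)   # ~10
```

This is why a fixed-dollar GDP hit translates into very different welfare losses depending on the baseline income of the people bearing it.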

But there are ways to calculate this and it's probably gonna be bad...

One Lancet study suggests that the 2008 economic crisis caused 0.5 million excess cancer-related deaths worldwide. This is just cancer, which is about 15% of global mortality, so a naive extrapolation might suggest total excess mortality in the millions. There are 50m deaths per year globally, so maybe there was a ~10% increase.

Russia has about 2m deaths per year.

GDP loss is projected to be similar to 2008 or Covid.
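The naive extrapolation above can be made explicit. All numbers are the rough ones from the comment, so treat this as order-of-magnitude only:

```python
# Naive extrapolation from the Lancet cancer figure to total excess mortality.
# All inputs are the rough figures quoted above; order-of-magnitude only.
excess_cancer_deaths = 0.5e6   # 2008 crisis, worldwide (Lancet estimate)
cancer_share = 0.15            # cancer is ~15% of global mortality
global_deaths = 50e6           # deaths per year worldwide
russia_deaths = 2e6            # deaths per year in Russia

excess_total = excess_cancer_deaths / cancer_share  # ~3.3M excess deaths
increase = excess_total / global_deaths             # ~7%, same order as the ~10% above
russia_excess = russia_deaths * increase            # ~130k/y if Russia matched that rate
print(f"{excess_total:,.0f} total, {increase:.1%} increase, {russia_excess:,.0f} in Russia")
```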

https://vizhub.healthdata.org/gbd-compare/ 

https://www.sciencedirect.com/science/article/pii/S0140673618314855 

https://ars.els-cdn.com/content/image/1-s2.0-S0140673618314855-mmc1.pdf

Thank you for taking the time to write this response!

I'm not exactly sure what premise downvoters are reading from my question. To be clear, I think the war is a horrible idea, and it's important to punish defection in a negative-sum way (ie impose sanctions on countries in violation of international law).

The main point I wanted to entertain was: it's sad when we have to impose sanctions on countries; lots of people will suffer. In the same way, it's sad when a war is fought, and lots of people suffer. We should be careful not to treat economic punishment as qualitatively different from, or intrinsically superior to, direct violence; it's a question of how much net utility different responses produce for the world.

Thanks for clarifying - fwiw I didn't think you were ill-intentioned... and at its core your question re: innocent Russians suffering due to sanctions is a valid one - as you say, all suffering counts equally independent of who suffers (and Russians will definitely suffer much more than most people living relatively affluent lives in the west). But because Ukrainians are currently suffering disproportionately more than Russians, the question might have struck some people as tone-deaf or inappropriate. Even setting aside the terrible direct humanitarian impact of the war, consider that Russia's GDP per capita is ~$10k while Ukraine's was ~$3k before the war, and Ukraine's economy will take a much bigger hit.
