Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.
Oh, so if this is not IPO-contingent, what explains the timing? Why 2026 or 2027 and not 2025 or 2024?
I do know there are platforms like Forge Global and Hiive that allow for buying/selling shares in private startups on the secondary market. I just wonder why a lot of people would be selling their shares in 2026 or 2027, specifically, rather than holding onto them longer. I think many employees of these AI companies are true believers in the growth story and the valuation story for these companies, and might be reluctant to sell their equity at a time when they feel they're still in the most rapid growth phase of the company.
Any particular reason to think many people out of these dozens or hundreds of nouveau riche will want to donate to meta-EA? I understand the argument for people like Daniela Amodei and Holden Karnofsky to give to meta-EA (although, as noted in another comment, Daniela Amodei says she doesn't identify with effective altruism), but I don't understand the argument for a lot of smaller donors donating to meta-EA.
Interesting footnote about the Future of Life Institute. Would that apply to a software engineer working for OpenAI or Anthropic, or just a donation directly from one of those companies?
My general point about established charities like the Future of Life Institute (or any other example you care to name) is that most donors will probably prefer to give directly to charities rather than through an EA fund or a regranter. And most will probably want to donate to things other than meta-EA.
How much new funding is Austin Chen expecting? Is it conditional on an Anthropic IPO? Are your expectations conditional on an Anthropic IPO?
I suppose the whole crux of the matter is this: even if there is an additional ~$300-400 million per year, what percentage will go to meta-EA, EA funds, general open grantmaking, or the broader EA community, as opposed to GiveWell, GiveWell’s recommended charities, or existing charities like the Future of Life Institute? If it’s a low percentage, the conversation seems moot.
My intuition about patient philanthropy is this: if I have $1 million that I can spend philanthropically now or I can invest it for 100 years at a 7% CAGR and grow it to $868 million in 2126, I think spending the $1 million in 2026 will have a bigger, better impact than the $868 million in 2126.
Gross world product per capita (PPP) is around $24,000 now and is forecast to grow at 2% a year. Compounding at that rate for 100 years gives $174,000 in 2126. So, on average, the world will be much wealthier than the wealthiest nations today: U.S. GDP per capita (PPP) is $90,000, and Norway’s is $107,000 — I’m ignoring tax havens with distorted stats.
Why should the poor people of today give to the rich people of the future? How is that cost-effective?
GiveWell’s estimated cost to save a life is about $3,500; the estimated statistical cost of saving a life in the U.S. is roughly $9 million, a ~2,500x difference. $1 million now could save 285 lives. $868 million in 2126 could save 96 lives — if we think poorer countries will have catch-up growth that brings them up to $90,000+ in GDP per capita (PPP).
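To make the compounding arithmetic above easy to check, here’s a minimal Python sketch. The 7% and 2% growth rates and both cost-per-life figures are the illustrative assumptions from these comments, not settled estimates:

```python
# Quick check of the compounding and cost-per-life arithmetic above.
# All inputs are this comment's illustrative assumptions, not settled figures.

def compound(principal: float, rate: float, years: int) -> float:
    """Grow principal at a fixed annual rate for a number of years."""
    return principal * (1 + rate) ** years

endowment_2126 = compound(1_000_000, 0.07, 100)    # ~$868 million
gwp_per_capita_2126 = compound(24_000, 0.02, 100)  # ~$174,000

cost_per_life_now = 3_500       # GiveWell-style cost to save a life today
cost_per_life_2126 = 9_000_000  # U.S.-style statistical cost, assumed worldwide by 2126

print(f"$1M at 7% for 100 years: ${endowment_2126:,.0f}")
print(f"GWP per capita at 2% for 100 years: ${gwp_per_capita_2126:,.0f}")
print(f"Lives saved spending now: {1_000_000 // cost_per_life_now}")                 # 285
print(f"Lives saved spending in 2126: {int(endowment_2126) // cost_per_life_2126}")  # 96
```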
The poorest countries may not have catch-up growth, and may not even grow commensurately with the world average, but in that case it’s even more important to spend the $1 million on the poorest countries now to try to make sure that growth happens. Stimulating economic growth in sub-Saharan African countries where growth has been stagnant may be one of the most important global moral priorities. Thinking about 100 years in the future only makes it feel more urgent, if anything.
Plus, the risk that a foundation trying to invest money for 100 years doesn’t make it to 2126 seems high.
If you factor in the possibility of transformative technologies like much more advanced AI and robotics, biotech, and so on, and/or the possibility of much faster per capita economic growth over the next 100 years, the case for spending now rather than waiting a century gets even stronger.
Also, looking back, @trammell’s takes have aged very well:
- It is unlikely we are in the most important time in history
- If not, it is good to save money for that time
Had Phil been listened to, perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
Unless you explicitly warn your donors that you’re going to sit on their money and do nothing with it, you might anger them by employing this strategy, such that they won’t donate to you again. (I don’t know if SBF would have noticed or cared because he couldn’t even sit through a meeting or an interview without playing a video game, but what applies to SBF doesn’t apply to most large donors.)
Also, if there is a most important time in history, and if we can ever know we’re in the most important time in history while we’re in it, it might be 100 years or 1,000 years from now, and obviously holding onto money that long is a silly strategy. (Especially if you think we’re going to start having 10% economic growth within 50 years due to AI, but even if you don’t.)
As a donor, I want to donate to charities that can "beat the market" in terms of their impact, i.e., the impact they create by spending the money now is big enough that it is bigger than the effects of investing the money and spending it in 5 years. I would be furious if I found out the charities I donate to were employing the invest-and-wait strategy. I can invest my own money or give it to someone who will spend it.
My thought process is vaguely, hazily something like this:
There’s a ~50% chance Anthropic will IPO within the next 2 years.
Conditional on an Anthropic IPO, there’s a ~50% chance any Anthropic billionaires or centimillionaires will give tons of money to meta-EA or EA funds.
Conditional on Anthropic billionaires/centimillionaires backing up a truck full of money to meta-EA and EA funds, there’s a ~50% chance that worrying about the potential corrupting effects of the money well in advance is a good allocation of time/energy/attention.
So, the overall chance this conversation is important to have now is ~10%.
The ~50% probabilities and the resulting ~10% probability are totally arbitrary. I don’t mean them literally. This is for illustrative purposes only.
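Spelled out, the chain is just three multiplied conditionals; a trivial sketch with the same placeholder numbers:

```python
# Three placeholder probabilities from the chain above; 0.5 each is
# deliberately arbitrary, per the caveat in this comment.
p_ipo = 0.5         # Anthropic IPOs within ~2 years
p_big_giving = 0.5  # big money flows to meta-EA/EA funds, given an IPO
p_worth_it = 0.5    # pre-worrying about corruption pays off, given big giving

print(p_ipo * p_big_giving * p_worth_it)  # 0.125, i.e. the rough ~10% above
```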
But the overall point is that it’s like the Swiss cheese model of risk where three things have to go "wrong" for a problem to occur. But in this case, the thing that would go "wrong" is getting a lot of money, which has happened before with SBF’s chaotic giving, and has been happening continuously in a more careful way with Open Philanthropy (now Coefficient Giving) since the mid-2010s.
If SBF had made his billions from selling vegan ice cream and hadn’t done any scams or crimes, and if he had been more careful and organized in the way he gave the money (e.g., been a bit more like Dustin Moskovitz/Cari Tuna or Jaan Tallinn), I don’t think people would be as worried about the prospect of getting a lot of money again.
Even if the situation were like SBF 2.0, it doesn’t seem like the downsides of that would be that bad or that hard to deal with (compared to how things in EA already are right now), so the logic of carefully preparing for a big impact risk on the ~10% — or whatever it is — chance it happens doesn’t apply. It’s a small impact risk with a low probability.
And, overall, I just think conversations like this in EA are overly anxious, overcomplicate things, and intellectualize too much. I don’t think they make people less corruptible.
Different in what ways? Edit: You kind of answered this in your edit, but what I’m getting at is: SBF’s giving was indiscriminate and disorganized. Do you think the Anthropic nouveau riche will give money as freely to random people in EA?
I’m also thinking that Daniela Amodei said this about effective altruism earlier this year:
> I’m not the expert on effective altruism. I don’t identify with that terminology. My impression is that it’s a bit of an outdated term.
Maybe it was just a clumsy, off-the-cuff comment. But it makes me go: hmm.
She’s gonna give her money to meta-EA?