Yes, that's a fair point. Do you think the claim itself is false?
I was under the impression that many YIMBY/Abundance/Progress Studies-minded EA communities were operating with that theory of change; am I wrong?
Thanks for replying with data. I think what matters for EA fundraising strategy is the relative share of wealth in the top 0.1% and the top 1% (or maybe the top 10%). It's great that the share of wealth held by the bottom 50% is increasing, but I don't expect many people there to be significant donors (with important but rare exceptions).
It's also not clear to me how liquid the wealth in social insurance programs is; I don't expect it to be a viable source of donations/influence/impact (but of course it's great that more people are covered by insurance).
I also think I was mistaken to mention "the last decades", as "the last 5-10 years" seems like a more relevant time frame for changes in EA strategy.
In my opinion, the perception that inequality is increasing could also be driven by comparisons between the top 1%-10% and the top 0.1%-0.01%, as the former group becomes relatively less influential.
Another random anecdote: I was reading the Wikipedia page of an ultramarathon runner, and apparently her father is a famous mathematician.
I don't find it concerning at all.
People who don't come from privileged backgrounds (e.g. the 700 million people living on less than $2.15/day) don't have the resources to worry about helping others effectively, the vast majority don't speak English, and so on.
Random anecdote: I'm in a hospital for a minor visit right now, and I don't find it concerning at all that many doctors here are likely to be from privileged backgrounds.
I think looking at the top 1% is a bit misleading, as the top 0.1% and the top 1% had very different growth rates over the last few decades.
If the relative amount of wealth in the top 0.1% increases compared to the top 1%, it makes sense for EA to prioritize the former more (assuming constant relative tractability).
The School for Moral Ambition is hiring people to work on Tax Fairness!
There are also several orgs working on UHNWI advising/fundraising.
"In theory, if we could move from these people donating say 3% of their wealth, to say 20%, I suspect that could unlock enormous global wins. Dramatically more than anything EA has achieved so far. [...] in fairness, I think there was little dedicated and smart effort to improve it."
20% would be absurd, but even moving the average from 3% to 3.5% would be more than anything EA has achieved so far. But there are millions of people with strong incentives to increase giving by the wealthy (including every single charity relying on donations), so it would be surprising if EA could have such a huge marginal effect. I'm glad that many people are trying, and I hope they succeed.
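For a rough sense of scale, a back-of-envelope sketch (the inputs are my illustrative assumptions, not sourced figures: roughly $14 trillion in total billionaire wealth, and cumulative EA-directed giving on the order of $10 billion):

$$(0.035 - 0.030) \times \$14\,\text{trillion} \approx \$70\,\text{billion}$$

So even a half-percentage-point shift would plausibly exceed everything EA has directed to date several times over.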
Possibly a tangential point, but lots of people in many EA communities think that accelerating economic growth in the US is a top use of funds. Billionaires don't hoard wealth; they invest it in companies and lend it to governments.
Perhaps advocating for higher taxes on extreme luxury goods (e.g. yacht fleets and luxury private jets), if done in a tractable way, could get more universal traction.
"10 years ago, I assumed that as word got out about effective giving, many more rich people would start doing that."
In terms of giving more, I don't understand why you would assume that. I imagine billionaires could always see that there were huge numbers of people they could help relatively cheaply; I wouldn't expect changing the effectiveness by a couple of orders of magnitude to change things much. A typical GiveDirectly donor is closer to a typical GiveDirectly beneficiary than to a 10-billionaire, even on a log scale of income.

In terms of giving more effectively, GiveWell recently commissioned a report exploring why other funders don't fund the opportunities GiveWell does, but they haven't published it yet (I imagine because of everything going on with USAID right now).
Surprising compared to what reference class? It's true that Peter Singer came from an accomplished family (his maternal grandfather has a Wikipedia page), and apparently William MacAskill went to private school, but I don't know how rare this is among somewhat influential philosophy professors at prestigious universities.
If you replaced "EA researchers" in your quick take with "professors" or even "researchers", I think it would still be true (at least for some definition of "surprising").
"I believe the nonprofit world attracts people with financial security. While compensation is often modest, the work can offer significant prestige and personal fulfillment."
For what it's worth, I think EA is absolutely non-representative of the nonprofit world: salaries in EA are higher than average nonprofit salaries. I know several people who make more in their EA role than they made in their previous role, and there are many EA people working in AI making tons of money.
Do you expect that the median employee at The Salvation Army comes from a wealthy family?
"But the most obvious implication to me, for people in this community, is to realize that it's very difficult to assess how impressive specific individual EAs/nonprofit people are without understanding their full personal situations. Many prominent community members have reached their positions through a combination of merit, family/social networks, and fortunate life circumstances."
I'm curious: why is it important for people in your EA community to assess how impressive someone is? Do you mean for hiring decisions? I think anyone anywhere has reached their position through a combination of merit, family/social networks, and fortunate life circumstances.
"A lot of animal welfare work is technically 'long-termist' in the sense that it's not about helping already existing beings."
That doesn't match the standard definition of longtermism ("positively influencing the long-term future is a key moral priority of our time"); it seems to me that it's more about rejecting some narrow person-affecting views.
"I suspect many people instead work on effective animal advocacy because that's where their emotional affinity lies and it's become part of their identity, because they don't like acting on theoretical philosophical grounds, and they feel discomfort imagining the reaction of their social environment if they were to work on AI/s-risk."
I think it's very tempting to assume that people who work on things we don't consider the most important are doing so for emotional/irrational/social reasons.
I imagine that some animal welfare people (and sometimes I myself) see people working on extremely fun and interesting problems in AI, while making millions of dollars, with extremely vague theories for why this might be making things better rather than worse for people millions of years from now, and conclude that they're doing so for non-philosophically-robust reasons. I currently believe that the social and economic incentives to work in AI are much greater than the incentives to work in animal welfare. But I don't think this is a useful framing (it's too tempting and could explain anything); we should instead weigh the arguments that people give for prioritizing one cause over another.
I think doubts about the tractability of AI/s-risk work, and the fact that all previous attempts backfired (Singularity Institute, early MIRI, early DeepMind, early OpenAI, and we'll see with Anthropic), are the main reasons why some people are not prioritizing AI/s-risk work at the moment; it's not about extremely narrow person-affecting views (which I think are very rare).
"The distinction comes in at the empirical uncertainty/speculativeness of reducing s-risk. But I'm not sure if that uncertainty is treated the same as uncertainty about shrimp or insect welfare."
I think those are different kinds of uncertainties, and it seems to me that they are both treated very seriously by people working in those fields.
As an attendee, I don't understand why you can't just do a few 1:1s with the people you're confident it would be useful to talk to, and go to talks or work for most of the conference time.
I assume that's how the famous people you mentioned navigate these events.
I can imagine that it's demoralizing for speakers to spend a lot of time preparing a talk and then present it to an empty audience, but that's separate from the issues you mentioned.
I think a main argument related to that perspective is that you shouldn't tax wealth but should tax consumption (holding billions in stocks and bonds has positive externalities; buying a yacht fleet has negative externalities).
I obviously don't agree with it, so I'm likely not presenting the strongest version of the argument, but you can see an example of people holding this view in the Twitter screenshot above, and I think it's not uncommon.