Lorenzo Buonanno🔸

Software Developer @ Giving What We Can
4796 karma · Working (0-5 years) · 20025 Legnano, Metropolitan City of Milan, Italy

Bio


Hi!

I'm currently (Aug 2023) a Software Developer at Giving What We Can, helping make giving significantly and effectively a social norm.

I'm also a forum mod, which, shamelessly stealing from Edo, "mostly means that I care about this forum and about you! So let me know if there's anything I can do to help."

Please have a very low bar for reaching out!

I won the 2022 donor lottery; I'm happy to chat about that as well.

Posts: 11 · Comments: 603 · Topic contributions: 5

I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

If Anthropic doesn't lose >85% of its valuation (which can definitely happen), I would expect way more.

As mentioned above, each of its seven cofounders is likely to become worth >$500m, and I would expect many of them to donate significantly.

 

Anthropic is the go-to example of "founded by EAs"

I find these kinds of statements a bit weird. My sense is that this used to be true, but they don't necessarily identify with the EA movement anymore: it's never mentioned in interviews, and when asked by journalists they explicitly deny it.

I would be surprised if the 3:1 match applied to founders as well. Also, I think 20% of employees donating 20% of their equity within the next 4 years is very optimistic.

My guess is that donations from Anthropic/OpenAI will depend largely on what the founders decide to do with their money. Forbes estimates Altman and Daniela Amodei at ~$1B each, and Altman signed the Giving Pledge.


See also this article from Jan 8: 

At Anthropic’s new valuation, each of its seven founders — [...] — are set to become billionaires. Forbes estimates that each cofounder will continue to hold more than 2% of Anthropic’s equity each, meaning their net worths are at least $1.2 billion.

I don't think Forbes' numbers are particularly reliable, and I think there's a significant chance that Anthropic and/or OpenAI equity goes to 0; but in general, I expect founders both to have much more money than employees and to be more inclined to donate significant parts of it (partly because of the diminishing marginal returns of wealth).
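
As a quick sanity check on the quoted figures (my own arithmetic, not from the article): if each cofounder holds more than 2% and that stake is worth at least $1.2 billion, the implied company valuation is roughly

$$\$1.2\text{B} \div 0.02 = \$60\text{B}.$$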

OWID says that ~45% of the population in Uganda has access to electricity, and that this share has more than doubled in the past 10 years. Does this match your experience?

I used to think that the exact philosophical axiologies and the handling of corner cases were really important to guide altruistic action, but I now think that many good things are robustly good under most reasonable moral frameworks.

 

these practical and intuitive methods are ultimately grounded in Singer’s deeply counterintuitive moral premises.

I don't think this is necessarily true. Many (I would argue most) other moral premises can lead you to value preventing child deaths or stunting, limiting the suffering of animals in factory farms, or ensuring future generations live positive, meaningful lives.

@WillieG mentioned Christianity, and indeed, EA for Christians has many Christians who care deeply about helping others and come from a very different moral background. (I think they sometimes mention this parable.)

 

within the EA community, beyond working on their own projects, do people have the tendency to remind & suggest to others “what they could have done but didn’t?”

I don't have an answer to this question, but you might like these posts: Invisible impact loss (and why we can be too error-averse) and Uncertain Optimizing and Opportunity Costs.

I think people regularly do encourage themselves and others to consider opportunity costs and counterfactuals, but I don't think it's specific to the EA community.

 

The principle becomes more challenging to accept when Singer extends it to a particular edge case.

I think this is the nature of edge cases. I don't think you need to agree with Singer on edge cases to value helping others. This vaguely reminded me of this Q&A answer from Derek Parfit where he very briefly talks about borderline cases and normative truths.

 

I do think things get trickier for e.g. shrimp welfare and digital sentience, and in those cases philosophical considerations are really important. But in my opinion the majority of EA work is not particularly sensitive to one's stance on utilitarianism.

Note that the hold-out set doesn't exist yet. https://x.com/ElliotGlazer/status/1880812021966602665

What does this mean for OpenAI's 25% score on the benchmark?

Note that only some of FrontierMath's problems are actually frontier, while others are relatively easier (e.g. IMO level, and DeepMind was already one point from gold on IMO-level problems): https://x.com/ElliotGlazer/status/1870235655714025817

You might also be interested in this post: Measuring Good Better, as a very high-level summary of different organizations' views on measuring ‘good’ (apparently nobody uses DALYs!)

After reading the recent https://www.thenation.com/article/society/progressive-left-philanthropy-strategy/ and many similar articles, my understanding is that proponents of "system-level" changes are sceptical of a neoliberal/market-driven approach, and want a more centrally planned economy, where opportunities and outcomes are guaranteed to be more equal, or at least everyone is guaranteed a basic amount of wealth.

 

My understanding is that they care primarily about things like increased inequality, homelessness and unemployment in the United States, and they believe that the main causes of those issues are the greed of the top 0.01% and market regulations (or the lack thereof) which favour the richest at the expense of the poorest.

 

So I would imagine that, reading things like:

AGOA gives Sub-Saharan African countries duty-free access to the American market for a range of product categories — in particular, apparel, which has historically been a key stepping stone for countries pursuing export-led manufacturing growth. [...] With Chinese labor costs increasing and general protectionist pressures growing, there may be a window of opportunity for African manufacturing industries to grow before automation in high-income countries potentially leads to significant re-shoring — provided that AGOA does not expire beforehand. Advocating for a strong AGOA renewal bill could improve the odds for an African industrial transformation.

they would expect an AGOA renewal to increase inequality and unemployment in the US, by replacing American jobs with sweatshops in countries with lower minimum wages and weaker worker rights, enriching capitalists who would profit from exploiting less-protected workers.

 

But this is definitely a position I struggle to understand, so it's likely that I'm misrepresenting it and would welcome other guesses/corrections.

Do they mention effective giving or collaborate with Doneer Effectief/the Tien Procent Club?

I mean, the reasoning behind this seems very close to #2, no? The target audience they're looking at is probably more interested in neartermism than in AI/longtermism, and they don't think they can get much tractability working with the current EA ecosystem?

 

I think 2 and especially 3 are very likely, but I think it's also likely that Bregman was very impressed with AIM, and possibly found it more inspiring than 80k/CEA, more pragmatic, or a better fit for the kind of people he wanted to reach, regardless of their views on AI.

How many of them have made that choice recently though?


A lot![1]

80k seems to mostly care about x-risk, but (perhaps surprisingly) their messaging is not just "Holy Shit, X-Risk" or "CEOs are playing Russian roulette with you and your children".

Instead, they also cover a lot of cause-neutral EA arguments (e.g. scope sensitivity and the importance of effectiveness).

So I don't think it's surprising that Rutger doesn't recommend them if he doesn't share (or even actively disagrees with?) those priorities, even if his current focus on persuading mid-career professionals to look into alternative proteins and tobacco prevention sounds very EA-ish in other respects.

Yeah, I agree with this, but I still think that 80k is more than useless for altruists who don't value the long-term future, or who are skeptical of 80k's approach to trying to influence it.

I'm curious whether he mentioned ProbablyGood or if he's even aware of them?

My understanding is that the SMA team knows much more about the space than I do, so I'm sure they are aware of them if I'm aware of them.

  1. ^

    I don't have an exact number, but I would conservatively guess more than 100 people and more than $100k in total donations for 2024.
