Lorenzo Buonanno🔸

Software Developer @ Giving What We Can
4889 karma · Joined · Working (0-5 years) · 20025 Legnano, Metropolitan City of Milan, Italy

Bio

Participation
1

Hi!

I'm currently (Aug 2023) a Software Developer at Giving What We Can, helping make giving significantly and effectively a social norm.

I'm also a forum mod, which, shamelessly stealing from Edo, "mostly means that I care about this forum and about you! So let me know if there's anything I can do to help."

Please have a very low bar for reaching out!

I won the 2022 donor lottery; happy to chat about that as well

Posts
11


Comments
613

Topic contributions
5

I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
 

I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)


Going point by point:

Dario, Anthropic’s CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.

This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"

 

Amanda Askell was the 67th signatory of the GWWC pledge.

This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)

 

Many early and senior employees identify as effective altruists and/or previously worked for EA organisations

Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way

https://x.com/AmandaAskell/status/1905995851547148659

 

Anthropic has hired a "model welfare lead" and seems to be the company most concerned about AI sentience, an issue that's discussed little outside of EA circles.

That's false: https://en.wikipedia.org/wiki/Artificial_consciousness

 

On the Future of Life podcast, Daniela said, "I think since we [Dario and her] were very, very small, we've always had this special bond around really wanting to make the world better or wanting to help people" and "he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008."
The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here)

Their first company value states, "We strive to make decisions that maximize positive outcomes for humanity in the long run."

Wanting to make the world better, wanting to help people, and giving significantly to charity are not prerogatives of the EA community.

 

It's perfectly fine if Daniela and Dario choose not to personally identify with EA (despite having lots of associations) and I'm not suggesting that Anthropic needs to brand itself as an EA organisation

I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"

 

But I think it’s dishonest to suggest there aren’t strong ties between Anthropic and the EA community.

I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.

 

I think it’s a bad look to be so evasive about things that can be easily verified (as evidenced by the twitter response).

I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.

 

In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being held liable, for life, for anything the EA community might do in the future.

I don't see why it can't let people decide if they want to consider themselves part of it or not.

 

As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.

GiveWell posts a lot of interesting stuff on their blog and on their website, but in the past year they only reposted hiring announcements on the EA Forum.

E.g. I don't think that USAID Funding Cuts: Our Response and Ways to Help from 10 days ago was cross-posted here, but I think many readers would have found it interesting

Neil Buddy Shah also serves on Anthropic’s Long-Term Benefit Trust (see mentions of CHAI on this page)

Importantly, it seems that GiveWell only funds specific programs from CHAI, not CHAI as a whole. It could very well be the case that CHAI as a whole is inefficient and not particularly good at what they do, but GiveWell thinks those specific programs are cost-effective.

Disclaimer: this is only from looking at GiveWell's website and searching for "CHAI", I don't have any insider information

I think that is extremely unlikely; they have a lot to lose as soon as it's confirmed that the archived data has not been manipulated.

Also, from the page you cite:

we emphasize that these attacks can in most cases be launched only by the owners of particular domains.

So they would need to claim that you took control of a relevant domain as well.

But even if something like that happened, you could show that the archive has not been tampered with (e.g. by linking the exact resource containing the information, or mentioning the "about this capture" tool that was added by the web archive to mitigate this)

I strongly agree that the benefits of sharing the evaluation greatly outweigh the risks, but I'm not sure if sharing it relatively early is best:

  1. There is a risk of starting a draining back-and-forth, which could block or massively delay publication. See e.g. Research Deprioritizing External Communication, which was delayed by one year
  2. It would cost more time for the org to review a very early draft and point out mistakes that would be fixed anyway
  3. It could cause the org to take the evaluation less seriously, and be less likely to take action based on the feedback

 

I think the minimal version proposed by @Jason, of just sending an advance copy a week or two before publication, is an extremely low-cost policy that mitigates most of the risks and provides most of the benefits (though some limited back-and-forth would be ideal)

The original information is still archived. My understanding is that those attacks just inject other data that changes what is shown to the user, but, as they mention, it's easily detectable and the original information can still be recovered.

A bigger risk would be that the organization asks the archive to delete its data, but that would look very suspicious, and you could use multiple archives (e.g. https://archive.is/)
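
For reference, here's a minimal sketch of what "the original information can still be recovered" can look like in practice, assuming Python with the `requests` library and the Wayback Machine's public CDX API (the example URL is just a placeholder):

```python
# Minimal sketch: list archived captures of a page via the Wayback Machine CDX API.
# Each row includes a timestamp and a content digest, so you can point to the exact
# snapshot(s) containing the information in question.
import requests


def list_captures(url: str, limit: int = 10) -> list[tuple[str, str]]:
    """Return (timestamp, digest) pairs for archived captures of `url`."""
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": url, "output": "json", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()
    if not rows:
        return []
    header, *records = rows  # the first row is the list of field names
    ts, digest = header.index("timestamp"), header.index("digest")
    return [(r[ts], r[digest]) for r in records]


if __name__ == "__main__":
    for timestamp, digest in list_captures("example.com"):
        # Each capture is viewable at https://web.archive.org/web/<timestamp>/<url>
        print(timestamp, digest)
```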

As was mentioned by several commenters on your last article, I think it would be valuable to share your article with ACE or Sinergia Animal before publishing it here.

Sharing evaluations with the evaluated org before publishing would likely make your analyses both more useful and more accurate; I'm curious to know why you decided against this.

I agree that US policy is obviously very important, but

  1. The 2024 US election doesn't seem to have been close (unlike e.g. the 2000 one). I'm skeptical that EAs could have changed things that much, given that they couldn't bring about relatively easier policy changes (e.g. SB 1047)
  2. As you mentioned, Moskovitz & Tuna were already the largest Democratic donors
  3. They were a minority, but several EAs favoured Trump over Harris, so their efforts would partially cancel out
  4. Politicizing EAs, or EA organizations, would have made them less effective at collaborating with the current administration after Trump's win and the whole "vibe shift"

 

generic US policy, especially focusing on long-term issues (like US governance, or US decisions on questions like Nuclear/bio/AI) might be a good use of EA funds.

I think it always has been? My sense is that lots of EA funds are already spent on US policy things, e.g. https://www.nti.org/analysis/ and https://www.governance.ai/research 

Update: the lottery has been drawn and the results are in! An anonymous donor won the right to recommend how to allocate $200k 

Congratulations to the winner and thanks to the 21 donors who collectively donated $108,577.65 this year

From here it seems that indeed «he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future.»

It seems that lots of people with all sorts of roles at AI companies have the formal role "member of technical staff"
