Hi!
I'm currently (Aug 2023) a Software Developer at Giving What We Can, helping make giving significantly and effectively a social norm.
I'm also a forum mod, which, shamelessly stealing from Edo, "mostly means that I care about this forum and about you! So let me know if there's anything I can do to help."
Please have a very low bar for reaching out!
I won the 2022 donor lottery, happy to chat about that as well
GiveWell posts a lot of interesting stuff on their blog and on their website, but in the past year they have only cross-posted hiring announcements to the EA Forum.
E.g. I don't think "USAID Funding Cuts: Our Response and Ways to Help" from 10 days ago was cross-posted here, but I think many readers would have found it interesting
Neil Buddy Shah also serves on Anthropic's Long-Term Benefit Trust (see mentions of CHAI on this page)
Importantly, it seems that GiveWell only funds specific programs from CHAI, not CHAI as a whole. It could very well be the case that CHAI as a whole is inefficient and not particularly good at what they do, but GiveWell thinks those specific programs are cost-effective.
Disclaimer: this is only from looking at GiveWell's website and searching for "CHAI"; I don't have any insider information
I think that is extremely unlikely; they have a lot to lose as soon as it's confirmed that the archived data is not manipulated.
Also, from the page you cite: "we emphasize that these attacks can in most cases be launched only by the owners of particular domains."
So they would need to claim that you took control of a relevant domain as well.
But even if something like that happened, you could show that the archive has not been tampered with (e.g. by linking the exact resource containing the information, or by mentioning the "about this capture" tool that the Web Archive added to mitigate this)
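One concrete way to check that archived content hasn't changed between captures is to compare the content digests the Wayback Machine's CDX API reports for each snapshot. A minimal sketch (assuming the standard CDX `output=json` response shape; the sample rows below are illustrative, not real captures, and in practice you'd fetch the response with an HTTP client):

```python
import json

# The Wayback Machine's CDX API (web.archive.org/cdx/search/cdx?output=json)
# returns one row per capture, including a SHA-1 content digest. If the digest
# is the same across captures, the archived content itself hasn't changed.
# This sketch parses a sample response offline; the rows are made up.

SAMPLE_CDX_RESPONSE = json.dumps([
    # First row is the header; remaining rows are captures (illustrative values).
    ["urlkey", "timestamp", "original", "mimetype", "statuscode", "digest", "length"],
    ["org,example)/", "20230101000000", "https://example.org/", "text/html", "200", "AAAA1111", "1234"],
    ["org,example)/", "20230601000000", "https://example.org/", "text/html", "200", "AAAA1111", "1234"],
])

def capture_digests(cdx_json: str) -> dict:
    """Map each capture timestamp to its content digest."""
    rows = json.loads(cdx_json)
    header, captures = rows[0], rows[1:]
    ts = header.index("timestamp")
    digest = header.index("digest")
    return {row[ts]: row[digest] for row in captures}

digests = capture_digests(SAMPLE_CDX_RESPONSE)
# Identical digests across captures indicate the archived content is unchanged.
print(len(set(digests.values())) == 1)  # → True for the sample rows above
```

This only shows the archive is internally consistent over time; the "about this capture" tool mentioned above is what surfaces injected third-party resources within a single capture.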
I strongly agree that the benefits of sharing the evaluation greatly outweigh the risks, but I'm not sure if sharing it relatively early is best
I think the minimal version proposed by @Jason of just sending an advance copy a week or two before publication is an extremely low-cost policy that mitigates most of the risks and preserves most of the benefits (though some limited back-and-forth would be ideal)
The original information is still archived. My understanding is that those attacks just inject other data that changes what is shown to the user, but, as they mention, it's easily detectable and the original information can still be recovered.
A bigger risk would be that the organization asks the archive to delete their data, but that would look very suspicious, and you could use multiple archives (e.g. https://archive.is/ )
As was mentioned by several commenters on your last article, I think it would be valuable to share your article with ACE or Sinergia Animal before publishing it here.
Sharing evaluations with the evaluated org before publishing would likely make your analyses both more useful and more accurate, I'm curious to know why you decided against this.
I agree that US policy is obviously very important, but "generic US policy, especially focusing on long-term issues (like US governance, or US decisions on questions like Nuclear/bio/AI) might be a good use of EA funds."
I think it always has been? My sense is that lots of EA funds are already spent on US policy things, e.g. https://www.nti.org/analysis/ and https://www.governance.ai/research
Update: the lottery has been drawn and the results are in! An anonymous donor won the right to recommend how to allocate $200k
Congratulations to the winner and thanks to the 21 donors who collectively donated $108,577.65 this year
From here it seems that indeed «he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future.»
It seems that lots of people with all sorts of roles at AI companies have the formal role "member of technical staff"
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)
Going point by point:
This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"
This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)
Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way
https://x.com/AmandaAskell/status/1905995851547148659
That's false: https://en.wikipedia.org/wiki/Artificial_consciousness
Wanting to make the world better, wanting to help people, and giving significantly to charity are not exclusive to the EA community.
I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"
I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.
I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.
In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being held liable, for life, for anything the EA community might do in the future.
I don't see why the community can't let people decide whether they want to consider themselves part of it or not.
As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.