I think this dynamic is generally overstated, at least in the existential risk space that I work in. I've personally asked all of our medium and large funders for permission to list them publicly, and the vast majority have given it. Most of the funding comes from Open Philanthropy and SFF, both of which publicly announce all of their grants—when recipients decide not to list those funders, it's not because the funders don't want them to. There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).
Nonprofit organizations should make their sources of funding really obvious and clear: how much money they received from which grantmakers, and approximately when. Any time I go on some org's website and can't find information about their major funders, it's a big red flag. At a bare minimum you should have a list of funders, and I'm confused why more orgs don't do this.
I think people would say that the dog was stronger and faster than all previous dog breeds, not that it was "more capable". It's in fact significantly less capable at not attacking its owner, which is an important dog capability. I just think the language of "capability" is somewhat idiosyncratic to AI research and industry, and I'm arguing that it's not particularly useful or clarifying language.
More to my point (though probably orthogonal to your point), I don't think many people would buy this dog, because most people care more about not getting attacked than they do about speed and strength.
As a side note, I don't see why preferences and goals change any of this. I'm constantly hearing AI (safety) researchers talk about "capabilities research" on today's AI systems, but I don't think most of them think those systems have their own preferences and goals. At least not in the sense that a dog has preferences or goals. I just think it's a word that AI [safety?] researchers use, and I think it's unclear and unhelpful language.
#taboocapabilities
What is "capabilities"? What is "safety"? People often talk about the alignment tax: the magnitude of capabilities/time/cost a developer loses by implementing an aligned/safe system. But why should we consider an unaligned/unsafe system "capable" at all? If someone developed a commercial airplane that went faster than anything else on the market, but it exploded on 1% of flights, no one would call that a capable airplane.
This idea overlaps with safety culture and safety engineering and is not new. But alongside recent criticism of the terms "safety" and "alignment", I'm starting to think that the term "capabilities" is unhelpful, capturing different things for different people.
This is truly crushing news. I met Marisa at a CFAR workshop in 2020. She was open, kind, and grateful to everyone, and it was joyful to be around her. I worked with her a bit on revitalizing the EA Operations Slack Workspace in 2020, and had only had a few conversations with her since then, here and there at EA events. Marisa (like many young EAs) made me excited for a future that would benefit from her work, ambition, and positivity. Now she's gone. She was a good person, I'm glad she was alive, and I am so sad she's gone.
From everything I've seen, GWWC has totally transformed under your leadership. And I think this transformation has been one of the best things that's happened in EA during that time. I'm so thankful for everything you've done for this important organization.