This is a special post for quick takes by sawyer🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Nonprofit organizations should make their sources of funding really obvious and clear: How much money you got from which grantmakers, and approximately when. Any time I go on some org's website and can't find information about their major funders, it's a big red flag. At a bare minimum you should have a list of funders, and I'm confused why more orgs don't do this.

Hmm, reasonably fair point. I might add some language to the Lightcone/Lesswrong about pages.

This is ideal, but many funders, individual or otherwise, either prohibit this or would rather you didn't. Maybe even most.

I think this is a good idea, but less important than many other factors about organisations.

I think this dynamic is generally overstated, at least in the existential risk space that I work in. I've personally asked all of our medium and large funders for permission, and the vast majority of them have given it. Most of the funding comes from Open Philanthropy and SFF, both of which publicly announce all of their grants—when recipients decide not to list those funders, it's not because the funders don't want them to. There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).

That makes sense. I was talking only about the global health and development space.

Even when that's true, the org could specify all the other sources of funding, and separate out 'anonymous donations' into either one big slice or one-slice-per-donor.

Yep! Something like this is probably unavoidable, and it's what all of my examples below do (BERI, ACE, and MIRI).

Why do you think that? (I agree fwiw)

(Not deeply thought through) Funders have a strong (though usually indirect) influence on the priorities and goals of the organization. Transparency about funders adds transparency about the priorities and goals of the organization. Conversely, lack of funder transparency creates the appearance that you're trying to hide something important about your goals and priorities. This sort of argument comes up a lot in US political funding, under the banners of "Citizens United", "SuperPACs", etc. I'm making a pretty similar argument to that one.

Underlying my feelings here is that I believe charities have an obligation to the public. The government is allowing people to donate their income to a charity, and then (if they donate enough) to not pay taxes on that income. That saves the donor ~30% of their income in taxes. I consider that 30% to be public money, i.e. money that would have otherwise gone to the government as taxes. So as a rule of thumb I try to think that ~30% of a US charity's obligations are to the public. The main way charities satisfy this obligation is by sticking to their IRS-approved exempt purpose and following all the rules of 501(c)(3)s. But another way charities can satisfy that obligation is by being really transparent about what they're doing and where their money comes from.
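The rule of thumb above can be sketched as a quick calculation. This is a hypothetical illustration, not tax advice: it assumes a flat 30% marginal rate, whereas real deductions vary by donor, income, and whether they itemize.

```python
# Hypothetical sketch of the "public money" rule of thumb described above.
# Assumes a flat 30% marginal tax rate; actual US deductions vary.
MARGINAL_TAX_RATE = 0.30

def forgone_tax(donation: float, rate: float = MARGINAL_TAX_RATE) -> float:
    """Tax revenue the government gives up when a donor deducts `donation`."""
    return donation * rate

# A $100,000 deductible donation at a 30% marginal rate represents
# about $30,000 the donor would otherwise have paid in taxes, i.e.
# roughly 30% of that gift traces back to forgone public revenue.
print(forgone_tax(100_000))
```

On this framing, the ~30% figure is just the donor's marginal rate applied to the gift, which is why I treat roughly that share of a US charity's obligations as owed to the public.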

Literally never even considered it. Would you mind sharing an example of this being done well?

There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).

What is "capabilities"? What is "safety"? People often talk about the alignment tax: the magnitude of capabilities/time/cost a developer loses by implementing an aligned/safe system. But why should we consider an unaligned/unsafe system "capable" at all? If someone developed a commercial airplane that went faster than anything else on the market, but it exploded on 1% of flights, no one would call that a capable airplane.

This idea overlaps with safety culture and safety engineering and is not new. But alongside recent criticism of the terms "safety" and "alignment", I'm starting to think that the term "capabilities" is unhelpful, capturing different things for different people.

I don’t think the airplane analogy makes sense because airplanes are not intelligent enough to be characterized as having their own preferences or goals. If there were a new dog breed that was stronger/faster than all previous dog breeds, but also more likely to attack their owners, it would be perfectly straightforward to describe the dog as “more capable” (but also more dangerous).

I think people would say that the dog was stronger and faster than all previous dog breeds, not that it was "more capable". It's in fact significantly less capable at not attacking its owner, which is an important dog capability. I just think the language of "capability" is somewhat idiosyncratic to AI research and industry, and I'm arguing that it's not particularly useful or clarifying language.

More to my point (though probably orthogonal to your point), I don't think many people would buy this dog, because most people care more about not getting attacked than they do about speed and strength.

As a side note, I don't see why preferences and goals change any of this. I'm constantly hearing AI (safety) researchers talk about "capabilities research" on today's AI systems, but I don't think most of them think those systems have their own preferences and goals. At least not in the sense that a dog has preferences or goals. I just think it's a word that AI [safety?] researchers use, and I think it's unclear and unhelpful language.

#taboocapabilities

I think game-playing AI is pretty well characterized as having the goal of winning the game, and as being more or less capable of achieving that goal at different stages of training. Maybe I'm just too used to this language, but it seems very intuitive to me. Do you have any examples of people being confused by it?

Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?

Which of these is the correct analogy?

  1. "Biology is to science as AI safety is to x-risk," or 
  2. "Immunology is to biology as AI safety is to x-risk"

EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI).

The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) seems to think that analogy 2 is correct, and that interdisciplinary work is totally critical—immunologists alone cannot achieve a useful understanding of the entire system they're trying to study, and they need to exchange ideas with other subfields of medicine/biology in order to have an impact, i.e. AI x-risk workers are missing critical pieces of the puzzle when they neglect broader x-risk studies.

Today is Asteroid Day. From the website:

Asteroid Day, observed annually on 30 June, is the United Nations-sanctioned day of public awareness of the risks of asteroid impacts. Our mission is to educate the public about the risks and opportunities of asteroids year-round by hosting events, providing educational resources and regular communications to our global audience on multiple digital platforms.

I didn't know about this until today. Seems like a potential opportunity for more general communication on global catastrophic risks.
