(This is a 2016 post that someone recommended I crosspost)

People don't usually volunteer details about why they decided to do something, how they did it, or how it turned out, unless they have another goal in mind. You see people and organizations writing about cases where they've done better than expected, in the hope others will think better of them. You see writing that explains already-public cases of failure, casting it in a more positive light. You see people writing in the hope they'll be seen as an expert, to build up a reputation. Additionally, most real decisions are made in people's heads as the output of a complicated process no one really understands, yet if you look at decision writeups you'll typically see something easy to follow that only contains respectable considerations and generally reflects well on whoever is publishing it. If you're trying to learn from others, or evaluate them, this isn't much to go on.

Efforts to change this generally go under the banner of "transparency", and it's one of the values of the effective altruism (EA) movement, especially for EA organizations. GiveWell is particularly known for putting transparency into practice, but most EA organizations value it and prioritize it to some extent. Many individual EAs do this as well; for example, as someone earning to give I keep a donations page and have posted several spending updates.

This puts members of the EA movement in the position of consumers of transparency: people and organizations are releasing information because it benefits the broader community. This is information they could easily keep to themselves, since as a practical matter everything is private by default and requires effort to make available. Writing a report means taking an enormous amount of detail and deciding what to communicate, so it's very easy through selective inclusion to hide mistakes and present yourself or your organization in an artificially positive light. And not even intentionally! It's very easy to subconsciously shy away from writing things that might make you look bad, or might reflect badly on people you think highly of on the whole.

So imagine an organization makes something public, voluntarily reporting some kind of failure that people outside the organization wouldn't have known about otherwise. If people react critically and harshly to this failure, the organization becomes much less willing to be so transparent in the future. And not just this organization: others will also see that sharing negative information doesn't go well.

When people react negatively to an organization sharing an instance of failure, one thing they're doing is putting pressure on the norm that organizations should be sharing this sort of thing. If the norm is very strong, then the pressure won't keep people from sharing similar things in the future, and it also means that seeing a failure from this organization but not from others is informative. On the other hand, if the norm is weaker, we need to be careful to nourish it and not push it harder than it can stand.

Comments (2)



I think a problem here is when people don't know if someone is being fully honest/transparent/calibrated or using more conventional positive-slanted discourse norms. E.g. a situation where this comes up sometimes is taking and giving references for a job applicant. I think the norm with references is that they should be very positive, and you're supposed to do downward adjustments on the positivity to figure out what's going on (e.g. noticing whether someone was described as "reliable" versus "extremely reliable"). If an EA gives a reference for a job applicant using really transparent and calibrated language, and the reference-taker doesn't realize different discourse norms are in use and does their normal downward adjustment, they will end up with a falsely negative picture of the applicant.

Similarly, I think in a community where some people or orgs are fully transparent and honest, and others are using more conventional pitch-like language, there's a risk of disadvantaging the honest and generally sowing a lot of confusion. 

Also, the more everyone expects everyone else to be super honest and transparent, in some ways, the more benefit to the first defector (since people might be more trusting and not suspect they're being self-promotional). 

I've talked to a grantmaker and have the impression that transparency about their granting decisions is similarly tricky, since it's often particularly sensitive for those whose applications are rejected. It's hopefully common practice to add a disclaimer about what a rejection means: that there is unfortunately a big time bottleneck on the grantmaker's side, that any feedback given is fairly unpolished, and that sharing it transparently is a sign of trust that the recipient acknowledges this tradeoff.
