Sarah Levin

Comments

For context, GiveWell's relationship with CHAI dates to 2022, when GiveWell Managing Director Neil Buddy Shah departed to become CEO of CHAI. According to GiveWell's announcement, "this transition does not mark the end of Buddy’s relationship with GiveWell. It is important that GiveWell maintain strong connections with leading organizations in the global health sector." (Incidentally, Shah is also a member of Anthropic's long-term benefit trust.)

GiveWell announced Shah's departure in April 2022; Shah apparently started at CHAI in June; and in August GiveWell announced its first grant recommendation to CHAI, $10m for a new incubator program "to identify, scope, pilot, and ultimately scale cost-effective programs that GiveWell might fund". As planned, the incubator led to later GiveWell grant recommendations to CHAI, like CHAI's tuberculosis contact management program, and multiple grants to CHAI's oral rehydration and zinc distribution program.

Assuming you're correct that this grant is atypical for GiveWell, I would presume it's a result of their special relationship with Shah.

You pointed out the lack of staff continuity between the present CEA and the subset of then-CEA-now-EV which posted the doctored image, arguing that their behavior does not reflect on the present CEA and so gives us no particular reason to expect sketchy or adversarial comms from it.

Your argument about lack of staff continuity is valid as a local counterpoint which carries some weight (IMO not an extreme amount, given the social and institutional links between the different orgs siloed under then-CEA-now-EV, though others might reasonably disagree). Nevertheless I object to your conclusion about the present CEA, largely because of a separate incident involving present CEA staff. So, I brought up this other incident to explain why.

It's true that this is also an example of the kind of thing VettedCauses is worried about, but that's not what made me think of it here.

The current version of CEA employs Julia Wise, your wife. Previously, Alexey Guzey sent Wise a draft of a post critical of her superior Will MacAskill, along with a request for confidentiality. Wise accidentally (or "accidentally") leaked the draft to MacAskill, who then used it to prepare an adversarial public response to the upcoming post rather than to give Guzey feedback ahead of publication, as he'd requested. Neither Wise nor MacAskill disclosed the leak until it was caught, when MacAskill publicly responded to parts of the draft which had been removed before publication. Wise remains in her role as CEA's community liaison, where she is the point person for confidential information from people who worry that leaks would provoke adversarial action from powerful community insiders.

At one point CEA released a doctored EAG photo with a "Leverage Research" sign edited to be bizarrely blank. (Archive page with doctored photo, original photo.) I assume this was an effort to bury their Leverage association after the fact.

The history of big foundations shows clearly that, after the founder's death, they revert to the mean and give money mostly to whatever is popular and trendy among clerks and administrators, rather than anything unusual which the donor might've cared about. If you look at the money flowing out of e.g. the Ford Foundation, you'll be hard-pressed to find anything which is there because Henry or Edsel Ford thought it was important, rather than because it's popular among the NGO class who staffs the foundation. See Henry Ford II's resignation letter.

If you want to accomplish anything more specific than "fund generic charities"—as anyone who accepts the basic tenets of EA obviously should—then creating a perpetual foundation is unwise.

I have personally heard several CFAR employees and contractors use the word "debugging" to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.

In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.

This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn't answer OP's questions, which I'll repeat:

  • What ideas that were considered wrong/low status have been championed here?
  • What has the movement acknowledged it was wrong about previously?
  • What new, effective organisations have been started?

Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizations.

It seems important to me that EA's history over the last two years is instead mainly the story of changes in funding, in popular discourse, and in the social strategy of preexisting institutions. For example, the FLI pause letter was the start of a significant PR campaign, but all the *ideas* in it would've been perfectly familiar to an EA in 2014 (except for "Should we let machines flood our information channels with propaganda and untruth?", which is a consequence of then-unexpected developments in AI technology rather than of intellectual work by EAs).

IIRC, while most of Alameda's early staff came from EA, the early investment came largely from Jaan Tallinn, a big Rationalist donor. This was a for-profit investment, not a donation, but I would guess that the overlapping EA/Rationalist social networks made the deal possible.

That said, once Bankman-Fried got big and successful he didn't lean on Rationalist branding or affiliations at all, and he made a point of directing his "existential risk" funding to biological/pandemic stuff but not AI stuff.

This is a good account of what EA gets from Rationality, and why EAs would be wise to maintain the association with rationality, and possibly also with Rationality.

What does Rationality get from EA, these days? Would Rationalists be wise to maintain the association with EA?

> the costs of a bad hire are somewhat bounded, as they can eventually be let go.

This depends a lot on what "eventually" means, specifically. If a bad hire sticks around for years—or even decades, as happened in the organization of one of my close relatives—then the downside risk is huge.

OTOH my employer is able to fire underperforming people after two or three months, which means we can take chances on people who show potential even if there are some yellow flags. This has paid off enormously: e.g., one of our best people had a history of getting into disruptive arguments in nonprofessional contexts, but we had reason to think this wouldn't be an issue at our place... and we were right, as it turned out. But if we lacked the ability to fire relatively quickly, I wouldn't have rolled those dice.

The best advice I've heard for threading this needle is "Hire fast, fire fast". But firing people is the most unpleasant thing a leader will ever have to do, so a lot of people do it less than they should.
