Sarah Levin

438 karma

Comments (25)
IIRC, while most of Alameda's early staff came from EA, the early investment came largely from Jaan Tallinn, a big Rationalist donor. This was a for-profit investment, not a donation, but I would guess that the overlapping EA/Rationalist social networks made the deal possible.

That said, once Bankman-Fried got big and successful he didn't lean on Rationalist branding or affiliations at all, and he made a point of directing his "existential risk" funding to biological/pandemic stuff but not AI stuff.

This is a good account of what EA gets from Rationality, and why EAs would be wise to maintain the association with rationality, and possibly also with Rationality.

What does Rationality get from EA, these days? Would Rationalists be wise to maintain the association with EA?

the costs of a bad hire are somewhat bounded, as they can eventually be let go.

This depends a lot on what "eventually" means, specifically. If a bad hire sticks around for years, or even decades (as happened in the organization of one of my close relatives), then the downside risk is huge.

OTOH my employer is able to fire underperforming people after two or three months, which means we can take chances on people who show potential even if there are some yellow flags. This has paid off enormously: one of our best people had a history of getting into disruptive arguments in nonprofessional contexts, but we had reason to think this wouldn't be an issue at our place, and we were right, as it turned out. If we lacked the ability to fire relatively quickly, though, I wouldn't have rolled those dice.

The best advice I've heard for threading this needle is "Hire fast, fire fast". But firing people is the most unpleasant thing a leader will ever have to do, so a lot of people do it less than they should.

I can readily believe the core claims in this post, and I'm sure it's a frustrating situation for non-native English speakers. That said, it's worth keeping in mind that for most professional EA roles, and especially for "thought leadership", English-language communication ability is one of the most critical skills for doing the job well. It is not a problem that people who grew up practicing this skill will be "overrepresented" in these positions.

There is certainly a cosmic unfairness in this. It's also unfair that short people will be underrepresented among basketball players, but this does not mean there's a problem with basketball.

The actions to address this ought to be personal, not structural. It's worth some effort on the margin for native speakers to understand the experience and situation of non-native speakers; indeed, this is one part of "English-language communication ability". I'm grateful to my foreign friends for explaining many aspects of this to me; it's helped me in a fair number of professional situations. Something like your talk at an international conference to educate people about this stuff seems like a great idea. And of course most non-native speakers who seek positions in EA (or other international movements) correctly put a great deal of effort into improving their fluency in the lingua franca.

I mostly agree with your larger point here, especially about the relative importance of FTX, but early Leverage was far more rationalist than it was EA. As of 2013, Leverage staff was >50% Sequences-quoting rationalists, including multiple ex-SIAI and one ex-MetaMed, compared with exactly one person (Mark, who cofounded THINK) who was arguably more of an EA than a rationalist. Leverage taught at CFAR workshops before they held the first EA Summit. Circa 2013 Leverage donors had strong overlap with SIAI/MIRI donors but not with CEA donors. etc.

I think trying to figure out the common thread "explaining datapoints like FTX, Leverage Research, [and] the LaSota crew" won't yield much of worth because those three things aren't especially similar to each other, either in their internal workings or in their external effects. "World-scale financial crime," "cause a nervous breakdown in your employee," and "stab your landlord with a sword" aren't similar to each other and I don't get why you'd expect to find a common cause. "All happy families are alike; each unhappy family is unhappy in its own way."

There's a separate question of why EAs and rationalists tolerate weirdos, which is more fruitful. But an answer there is also gonna have to explain why they welcome controversial figures like Peter Singer or Eliezer Yudkowsky, and why extremely ideological group houses like early Toby Ord's [EDIT: Nope, false] or more recently the Karnofsky/Amodei household exercise such strong intellectual influence in ways that mainstream society wouldn't accept. And frankly if you took away the tolerance for weirdos there wouldn't be much left of either movement.

Your “90% confidence interval” of… what, exactly? This looks like a confidence interval over the value of your own subjective probability estimate? And “90% as the mean” of… a bunch of different guesses you’ve taken at your “true” subjective probability? I can't imagine why anyone would do that, but I can’t think what else this could coherently mean…?

If I can be blunt, I suspect you might be repeating probabilistic terms without really tracking their technical meaning, as though they were just nontechnical hedges. Maybe it's worth taking the time to reread the map/territory stuff and then run through some calibration practice problems while thinking closely about what you're doing. Or maybe just use nontechnical hedges more; they work perfectly well for expressing things like this.
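To make "calibration practice" concrete: one standard self-check is the Brier score, which compares stated probabilities against actual outcomes. A minimal sketch (the forecasts below are made up purely for illustration):

```python
# Hypothetical forecasts: (stated probability, whether the event happened).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.2, False),
]

# Brier score: mean squared gap between stated probability and outcome.
# 0.0 is perfect; always answering 0.5 scores 0.25.
brier = sum((p - (1.0 if hit else 0.0)) ** 2 for p, hit in forecasts) / len(forecasts)
print(round(brier, 3))
```

A well-calibrated forecaster's "90%" claims should come true about 90% of the time; the score above penalizes you when they don't, with no separate "confidence interval" layered on top of the probability itself.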

...What on earth does "90% probability, with medium confidence" mean? Do you think it's 90% likely or not?

Great, this is useful data.

Results demonstrated that FTX had decreased satisfaction by 0.5-1 points on a 10-point scale within the EA community, but overall community sentiment remained positive at ~7.5/10

That's a big drop! In practice I've only ever seen this type of satisfaction scale give results between about 7/10 and 9.5/10 (which makes sense: if my satisfaction with EA were 3/10, I probably wouldn't stick around the community and answer member surveys), so that decline eats a big chunk of the scale's de facto range.
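To put rough numbers on that (using the 7-to-9.5 effective range from my own anecdotal observation, not any actual survey data):

```python
# Rough arithmetic on the "de facto range" point.
scale_low, scale_high = 7.0, 9.5      # range this kind of scale seems to use in practice
effective_range = scale_high - scale_low  # 2.5 points

for drop in (0.5, 1.0):
    share = drop / effective_range
    print(f"a {drop}-point drop is {share:.0%} of the effective range")
```

So the reported 0.5-1 point decline would amount to something like 20-40% of the range these surveys actually span, even though it looks small against the nominal 10-point scale.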

I suppose it's not surprising that the impact on perception is much bigger inside EA, where there's (appropriately) been tons of discourse on this, than in the general public.
