Arepo

Muchos hugs for this one. I'm selfishly glad you were in London long enough for us to meet, fwiw :)

I feel like this is a specific case of a general attitude in EA that we want to lock our future selves into some path in case our values change. The more I think about this the worse it feels to me, since

a) your future values might in fact be better than your current ones, or, if you completely reject any betterness relation between values, then it doesn't matter either way

b) your future self is a separate person. If we imagine the argument targeting any other person, it's horrible - it says that you should lock them into some state that ensures they're forced to serve your (current) interests

I hope over time you reshift your networks to your real home <3

I'm not doing the course, but I'm pretty much always on the EA Gather, and usually up for coworking and accountability sessions compatible with my timezone (UTC+8). Feel free to hop on there and ping me - there's a good chance I'll be able to reply at least by text immediately, and if not, pretty much always within 12 hours.

To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.

That makes it sound like a continuous function when it isn't really. Sure, people who've never or barely thought about it and then proceed to do so are likely to become more concerned - since they have a base of ~0 concern. That doesn't mean the effect will have the same shape or even same direction for people who have a reasonable initial familiarity with the issue.

Inasmuch as there is such an effect, it's also hard to separate from reverse causation, where people who are more concerned about such outcomes tend to engage more with the arguments for them.

As for incentives - sure, that's an effect. I also think the AI safety movement has its own cognitive biases: orgs like MIRI have an operational budget in the 10s if not 100s of millions; people who believe in high p(doom) and short timelines have little reason to present arguments fairly, leading to silly claims like the orthogonality thesis showing far more than it does, or to gross epistemic behaviour by AI safety orgs.

In any case, if the claim is that knowing more makes people have a higher p(doom), then you have to evidence that claim, not argue that they would do so if it weren't for cognitive biases.

Finally, if you want to claim that the people working at those orgs don't actually know much about AI risks and AI safety, and so wouldn't be counterpoints to your revised claim, I think you need to evidence that. The arguments really aren't that complicated, they've been out there for decades, and more recently they've been shouted loudly by people trying to stop, pause or otherwise impede the work of the people working on AI capabilities - to the point where I find it hard to imagine there's anyone working on capabilities who doesn't have some level of familiarity with them (and, I would guess, substantially more so than people on the left-hand side of the discontinuous function).

I'm really glad you gave this talk, even as something of a sceptic of AI x-risk. As you say, this shouldn't be a partisan issue. I would contest one claim though:

> Generally, the more you know about AI, the higher your p(doom), or estimated probability that ASI would doom humanity to imminent extinction.

I don't see the evidence for this claim, and I keep seeing people in the doomer community uncritically repeat it. AI wouldn't be progressing if everyone who understood it became convinced it would kill us all. 

Ok, one might explain this progress via the unilateralist's curse, but when the players include major divisions at Google, Meta and multiple startups with (according to Google) >1000 employees, it would be quite a stretch to call them 'unilateralists'. In fact, I suspect that when factoring in people working elsewhere with a deep knowledge of frontier models for their jobs, AI-positive workers substantially outnumber AI doomers. And, since their work actively requires it, they probably have a higher average understanding of AI.

What I see is some high-profile AI researchers shifting to higher p(doom)s and getting highlighted by the doomer community, but this is explainable by the doomer community being much more active and coordinated than the companies competing to advance the field, and by it just making for better news headlines. The AI-positive community seems to have much less incentive to tout people who shift in the other direction - or who simply review the arguments and remain unconvinced. (Shoutout to titotal, who has written some excellent critiques of AI safety arguments.)

To be clear, I think there's a trivial sense in which the claim is true - people who know very little about AI capabilities and then learn more about it are likely to shift from having no opinion to being pessimistic. But the claim that people who are already knowledgeable about the field get more pessimistic as they gain even more knowledge - while plausibly true - needs some actual data before being proclaimed so boldly.

I resonate strongly with this, to the extent that I worry most of the main EA 'community-building' events don't actually do anything to build community. I can count on the fingers of 0 hands the number of meaningful lasting relationships I've subsequently developed with anyone I met at an EAG(x), after having been to maybe 10 over the course of 8ish years. 

That's not to say there's no value in the shorter-term relationships that emerge from them - but by comparison I still think fondly of everyone I met at a single low-cost EA weekend retreat over a decade ago.

Quick plug for the EA Gather Town here - most of what goes on there is coworking, but regularly coworking in an informal environment can lead to, and frequently has led to, meaningful friendships :)

Good luck with this - I've always wished for more global EA events!

To clarify, is this explicitly a rebranding of EAGx Virtual, or is it meant to be something qualitatively different? If the latter, could you say more about what the differences will be?

These were all (with the exception of 3) written in the six months following FTX, when it was still being actively discussed and at least three other EA-related controversies had come out in succession. So I probably misremembered FTX specifically (thinking again, IIRC there was an FTX tag that was the original 'autodeprioritised' tag, whose logic maybe got reverted when the community separation logic was introduced). But I think it's fair to say that these were a de facto controversy-deprioritisation strategy (as the fourth post suggests) rather than a measured strategy that could be expected to stay equally relevant in calmer times.

I feel like people are treating the counterfactual as 'no way to filter out community posts', whereas the forum software currently allows you to filter for any given tag, and could easily be tweaked to (or possibly already allows you to) filter out a particular tag.

So the primary counterfactual isn't 'no separation', it's 'greater transparency and/or community involvement in what gets tagged "community"'.

You may well be right that CEA is biased (it's hard not to be) and that the criteria could be made clearer.

My suggestion, if the current separation is kept, would be to reallow community tagging of posts, but require it to go above a certain threshold and/or have a delay, so that posts don't bounce back and forth between the two feeds.

I'm also not sure community posts get less attention (forum team can tell me).

FWIW I suspect both that being tagged community causes reduced attention, but also that community posts may not get less attention overall, since many low-karma posts slide off the feed without having time to get tagged. I.e. getting attention causes a post to be (more likely to be) tagged community.

I would love to see more events like this in the community. Honestly, the handful of low-budget, attendee-run retreats I've been to have been worth substantially more than EAG(x)s.
