
Holly Elmore ⏸️ 🔸


Honestly, I wasn't thinking of you! Planning your individual career is one of the better reasons to engage with timelines, imo. It's more the selection of interventions where I think the conversation is moot, not where and how individuals can connect to those interventions.

The hypothetical example of people abandoning projects that culminate in 2029 was actually inspired by PauseAI-- there is a contingent of people who think protesting and irl organizing take too long and that we should just be trying to go viral on social media. I think the irl protests and community are what make PauseAI a real force, and we have greater impact, including by drawing social media attention, all along that path-- not just once our protests are big.

That said, I do see a lot of people making the mistakes I mentioned about their career paths. I've had a number of people looking for career advice through PauseAI say things like, "well, obviously getting a PhD is ruled out", as if there is nothing they can do to have impact until they have the PhD. I think being a PhD student can be a great source of authority and a flexible job (with at least some income, often) where you have time to organize a willing population of students! (That's what I did with EA at Harvard.) The mistake here isn't even really a timelines issue; it's not modeling the impact distribution along a career path well. Seems like you've been covering this: 

>I also agree many people should be on paths that build their leverage into the 2030s, even if there's a chance it's 'too late'. It's possible to get ~10x more leverage by investing in career capital / org building / movement building, and that can easily offset. I'll try to get this message across in the new 80k AI guide
 

Yes, I agree. I think what we need to spend our effort on is convincing people that AI development is dangerous and needs to be handled very cautiously, if at all, not that superintelligence is imminent and there's NO TIME. I don't think the exact level of urgency or the exact level of risk matters much beyond something like p(doom) = 5%. The thing we need to convince people of is how to handle the risk.

A lot of AI Safety messages expect the audience to fill in most of the interpretive details-- "As you can see, this forecast is very well-researched. ASI is coming. You take it from here."-- when actually what they need to know is what those claims mean for them and what they can do.

I thought you were taking issue with the claim that they were overdiscussed and asking where.

The areas where timelines are overdiscussed are numerous. Policy and technical safety career advice are the biggest ime.

>The community at large – and certainly specific EAs trying to distance themselves now – couldn’t have done anything to prevent FTX. They think they could have, and they think others see them as responsible, but this is only because EA was the center of their universe.


Ding! ding! ding!

You’re saying it nicely. I think it was irrationality and cowardice. I felt traumatized by the hysterical reaction and the abandonment of our shared work and community. I also felt angry about how my friends helped enemies of EA destroy EA’s reputation for their own gain.

I’ve stopped identifying with EA as much because PauseAI is big-tent and doesn’t hold all the EA precepts (that, and much of the community is hostile to advocacy and way too corrupted by the AI industry…), but I always explain the connection and say that I endorse EA principles when I’m asked about it. It’s important to me to defend the values!

No, I do not expect the people who replace them (or them not being replaced) to have much of an effect. I do not think they are really helping, and I don’t think their absence would really hurt. The companies are following their own agenda, and they’ll do that with or without specific people in those roles.

(I don't particularly endorse any timeline, btw, partly because I don't think it's a decision-relevant question for me.)

>Much better epistemics and/or coordination -- out of reach now, but potentially obtainable with stronger tech.

Why are these in the same category, and why are you writing coordination off as impossible? It's not. We have literally negotiated global nonproliferation treaties before.

A bizarre notion got embedded early in EA: that technological feats are possible and solving coordination problems is impossible. It's actually the opposite-- alignment is not tractable and coordination is.

I think almost nothing would change at the labs, but that the EA AI Safety movement would become less impotent and clearer, and would stand more of a chance of doing good.
