Holly Elmore ⏸️ 🔸

7627 karma · Joined

Posts: 52

Sequences: 1 (The Rodenticide Reduction Sequence)

Comments: 434

I think you hit the nail on the head— this forum is not a safe space for me. Like you said, I’m an all-time top poster, and yet I get snobby discouragement on everything I write since I started working on Pause, with the general theme that advocacy is not smart enough for EAs (and a secondary theme of wanting to work for AI companies).

This is a serious problem given what the EA Forum was supposed to be. It's not a failure to follow your rules for polite posts, but it goes against something more important: the purpose of the Forum and of the EA community.

But, I’ve clearly reached the end of my rope, and since I’d like to keep my account and be able to post new stuff here, I’ll just stop commenting.

As Carl says, society may only get one shot at a pause. So if we got it now, and not when we have a 10x speed up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing when some threshold or trigger is hit, and not now. It's also possible that advocating for an early pause burns bridges with people who might have supported a pause later.


This is so out of touch with the realities of opinion change. It sounds smart and it lets EAs and rationalists keep doing what they’re doing, which is why people repeat it. This claim that we would only get one shot at a pause is asinine— pause would become more popular as an option the more people were familiar with it. It’s only the AI industry and EA that do not like the idea of pausing and pretend like they’re gonna withdraw support that we actually never had if we do something they don’t like. 

The main thing we can do as a movement is gain popular support by talking about the message. There is no reliable way to "time" asks. None of that makes any sense. Honestly, most people who give this argument are industry apologists who just want you to feel out of your league if you do anything against their interests. Hardware overhang was the same shit.

No, I'm angry that people feel affronted by me pointing out that normal warning shot discourse entailed hoping for a disaster without feeling much need to make sure that would be helpful. They should be glad that they have a chance to catch themselves, but instead they silently downvote.

Just feels like so much of the vibe of this forum is people expecting to be catered to, like their support is some prize, rather than people wanting to find out for themselves how to help the world. A lot of EAs have felt comfortable dismissing PauseAI bc it's not their vibe or they didn't feel like the case was made in the right way or they think their friends won't support it, and it drives me crazy bc aren't they curious??? Don't they want to think about how to address AI danger from every angle?

I was curious about guesses as to why this happens to me lately (a lot of upfront disagree votes and karma hovering around zero until the views are high enough) but getting that answer is still pretty hard for me to hear without being angry.

To get a pause at any time you have to start asking now. It’s totally academic to ask about when exactly to pause and it’s not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.


But honestly all I hear are excuses. You wouldn’t want to help me if Carl said it was the right thing to do or you’d have already realized what I said yourself. You wouldn’t be waiting for Carl’s permission or anyone else’s.  What you’re looking for is permission to stay on this corrupt be-the-problem strategy and it shows.

That’s not what Carl Shulman said, and the fact that people want to take it that way is telling. He messaged me recently to clarify that he meant unilateral pauses would be bad, something I still kind of disagree with but which isn’t something PauseAI advocates, and he said it way at the beginning of Pause talk. EAs just don’t want to arrest the tech momentum bc they see themselves as a technocratic elite, not as humble grassroots organizers. They are disappointed at the chance we have to rally the public and want to find some way they don’t have to contribute to it.

People who identify as EAs in other countries might not be supportive of the AI companies, but they aren’t the ones on the ground in the Bay Area and DC that are letting me down so much. They aren’t the ones working for Anthropic or sitting on their fake board, listening to Dario’s claptrap that justifies what they want to do and believe anyway. They aren’t the ones denying my grant applications bc protests aren’t to their tastes. They aren’t the ones terrified of not having the vision to go for the Singularity, of being seen as “Luddites” for opposing a dangerous and recklessly pursued technology. Frankly they aren’t the influential ones.

Totally. I gave up some kinds of influence and the highest point on the moral high ground by not being totally vegan but I’ve gained another kind of influence from people who were ready for reducetarianism. 

I started eating dairy again (after 15 years of veganism) as part of a moral trade. Then, when the trade ended, I chose to continue eating dairy because of how much flexibility it had given me back. I can eat at the airport. There's no longer a constant food-scarcity program running in the back of my mind, one that had been taking a much bigger toll on my mental health than I had realized since I was a teenager. As much as this usually sounds like an excuse, I honestly could not conclude that it was better for the world for me to go back to that, given the amount of suffering it prevents.
