This is a special post for quick takes by Seth Ariel Green 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I worry that the pro-AI/slow-AI/stop-AI divide has the salient characteristics of a tribal dividing line that could tear EA apart:

  • "I want to accelerate AI" vs "I want to decelerate AI" is a big, clear line in the sand that allows for a lot clearer signaling of one's tribal identity than something more universally agreeable like "malaria is bad"
  • Up to the point where AI either kills us or doesn't, there's basically no way, even in principle, to verify that one side or the other is "right", which means everyone can keep arguing about it forever
  • The discourse around it is more hostile and less trust-presuming than typical EA discussion, which tends to be collegial (to a fault, some might argue)

You might think it's worth having this civil war to clarify what EA is about. I don't. I would like for us to get on a different track.

This thought was prompted by discussion around one of Matthew_Barnett's quick takes.

For what it's worth, I really don't think many EAs are in the AI accelerationist camp, at least. Matthew Barnett seems fairly unusual to me here.

*Barnett

 

Sorry, fixed. Mistyped. 

Neat! Consider link-posting this as a top-level post to make it easier to engage with?

I think if I end up writing something that's particularly EA-aligned, e.g. a cost-benefit analysis of some intervention, I'd do that. As is, I'm happy to err on the side of not annoying people when promoting my stuff 😃

Excited to read your work, Seth. Thanks for sharing

Thanks for the piece! I was thinking about this potential effect the other day as well, also for literature. I would think repetition could matter too: a single exposure to one documentary may not be helpful, but multiple different ones may be. Additionally, it would probably be more effective if some part of the documentary makes the viewer feel personally connected. But these are conjectures and I am not sure.

I was just writing an email to a colleague about the difference between one-offs and repeated exposure. Just speculating here, but documentaries kind of are one-offs -- who in the world is going to watch Dominion a second time? -- whereas op-eds, EA Forum posts, etc. are more of a "repeated, spaced exposure" model of behavioral change. And that's going to mean a very different evaluation strategy.

As to personal connection to the material, you might enjoy:

Alblas (2023), "'Meat' Me in the Middle: The Potential of a Social Norm Feedback Intervention in the Context of Meat Consumption – A Conceptual Replication," doi: 10.1080/17524032.2022.2149587

That intervention basically tells people how much meat they're eating in comparison to a norm, and then gives them a 😃 or a :( depending on whether they're above or below average. So it's kind of an attempt to get people personally connected to the broader mission.

For more on this literature in general, see Meaningfully reducing meat consumption is an unsolved problem: meta-analysis 

Anyone else get a pig butchering scam attempt lately via DM on the forum?

I just got the following message:

> Happy day to you, I am [X] i saw your profile today and i like it very much,which makes me to write to you to let you know that i am interested in you,therefore i will like you to write me back so that i will tell you further about myself and send you also my picture for you to know me physically. 

[EMAIL]

I reported the user on their profile and opened a support request, but just FYI.


 

Thanks for sharing, Seth. Would you mind DMing me their name? I'll ban the account, and mods will look into this.

We've got 'em. Apologies to anyone else who got this message. 
