
Updates

Dec 22, 2023 (Link updated January 2025)
I created a Slack workspace!
Here's the invite link: EA Left/Progressive Wing Slack (name not final)

Jan 10, 2025
The Slack workspace hasn't gained much traction yet; members are slowly trickling in. I think a community-building effort is needed so we can run activities and discussions to keep the workspace a bit livelier.

 

Notes

It seems like a Facebook group could be created for specific topics in left-wing thought (like economics and forms of government).

I feel like some people would also prefer a Discord server or something that's just not a Slack workspace. I bet it would depend on the preferences of the people who would be part of this group; considering the number of inactive groups out there, I don't think there is a consensus on where to go.

6 Answers

Garrison Lovely's podcast comes to mind as a starting point on overlap and disagreements between the two communities: https://forum.effectivealtruism.org/posts/6NnnPvzCzxWpWzAb8/podcast-the-left-and-effective-altruism-with-habiba-islam

I don't know of any online communities explicitly focused on this intersection, but would be interested in participating in one! Facebook groups historically have been good for this sort of thing (especially because of the mod approval questions you could include), but I've basically stopped using FB entirely, as have lots of others I know. A Slack channel within the larger EA Slack may work (eagreconnect.slack.com), but I just experimented with this and there doesn't seem to be a native feature like the FB mod approval questions. You could have channel admins that add ...

Thank you for sharing! I wrote to Garrison to see if they know of any such community.

There is a Facebook group on EA + diversity and inclusion: https://www.facebook.com/groups/diversityEA

I've sometimes been interested in making a group on EA + 'economic left' thought (socialism, anarchism, anti-capitalism, and such) - I'll let you know if I ever do!

If you ever end up making such a group, I'd love to be notified. :)

Kindly notify me if you eventually make such a group.

Hi Amber! That intersection is one I'm interested in. I'm writing to a few people to see if they already know of a community I could join, and I will update the post and let anyone interested know so they can join.

There are also Facebook groups for people with specific marginalised identities, which might also have some of that sort of content: e.g. there is one for LGBTQ people, and one for women and non-binary people. There may also be groups related to other identities: there are a bunch of "EA+X" related groups on FB, so I'd say search there.

I don't have the answer, but I'm eager to join the discussion, especially on whether it's possible to implement the principle of real impartiality in national politics. Our left-wing parties (I'm speaking about Poland), no matter how progressive, never go so far as to include demands regarding people with no connection to Poland in their main programmes (unless those people at least fall into the category of EU migrants). Perhaps it is logically impossible for it to be otherwise. However, if it's any weaker kind of impossibility, it would be good to explore the area.

There used to be a Discord group with a lot of left-wing EAs, but it has since fizzled: https://discord.com/invite/vbXEkDwa

Let me know if you get a new group up and running.

Update for Dec 22, 2023
I created a Slack workspace!
Here's the invite link: EA Left/Progressive Wing Slack (name not final)

I guess there's a difference between the EAs who call themselves 'center-left', who apparently make up 80% of EA according to Rethink Priorities surveys and are probably broadly open to ideas such as passively giving rights to minorities and encouraging a market economy that does a bit of redistribution,

and those who actively call themselves 'leftists': people who are in favor of structural change, want to break down patriarchy, are feminists, loathe wealth-hoarding, and tend to be extremely skeptical of extreme rationalists who have no qualms discussing abortion without mentioning women's rights.

I reckon the second kind will be much harder to find, but they exist!

... EAs calling themselves 'center-left' and that apparently make 80% of EA according to Rethink Priorities surveys

 

Roughly 80% (76.6%) consider themselves left or center left, of which 36.8% consider themselves "Left", while 39.8% consider themselves "Center left" (so quite similar).

Vaipan
Thanks David, I was thinking about this survey. I guess my point still stands--a leftist EA in Scandinavia doesn't mean the same thing as a leftist in the US, and my guess is that the majority of what these EAs call 'left' would be seen as center-left or even moderate right-wing in other countries (such as France or Sweden). 
David_Moss
It's worth noting that:
* Results don't vary so dramatically across most countries in our data, with none of the countries with the largest number of EAs showing less than ~35% identifying as "Left".
* The majority of EAs, and the majority of EA left/center-leftists, are outside the US.
Larks
David can presumably answer this with the cross-tabs. My guess is that French and Scandinavian EAs also say they are left wing more frequently than right wing. Also, while you're right that there are geographical differences between countries along the left-right axis, I don't think you can summarize it as 'Americans are more right wing'. On many issues US leftists are much more extreme than Europeans.
David Mathers🔸
'On many issues US leftists are much more extreme than Europeans.' Do you have data for this? I recall, but can't find, a Financial Times article from a year or two ago which gave polling showing that Dem voters in the US appear to be slightly more left-wing on social issues (other than abortion) than Labour voters in the UK. That supports "left is left-er in the US on social issues." But this was outweighed by Conservative voters in the UK being FAR to the left of Republicans on social issues, so it also supports "US more right-wing overall." And the cliché is that the UK is a right-wing outlier by Western European standards (though I haven't seen hard data backing that up, and I suspect that insofar as it is true, we're talking economic left rather than social).

I think left-leaning Americans are often keener on a specific set of taboos around talking in a sufficiently "politically correct/woke"* way. But that is not really the same thing as being more left-wing on substantive issues, not even social issues. (I'm not very keen on that way of talking, but I do believe in trans inclusion, except maybe in some sports, probably support open borders and less restrictive drug laws, probably reject retributivism about punishment, am pro-choice, and am at least neutral to mildly favourable on deliberately trying to employ more women and people of colour in positions of influence, etc.)

*I hate these terms, but there is no non-pejorative equivalent and everyone knows roughly what I mean.
David_Moss
Confirmed. And not only that, but French EAs are more likely to say that they are Left, rather than Center left.
David Mathers🔸
David_Moss
I think this is responding to a comment by Larks, not me.
David Mathers🔸
You're right, sorry. Will move it!
David_Moss
I'm curious why this post got -3 worth of downvotes (at time of writing). It seems like a pretty straightforward statement of our results.
prisonpent
I didn't downvote you, but I would guess those who did were probably objecting to this. Self-identified leftists, myself included, generally see modern liberalism as a qualitatively different ideology. Imagine someone at Charity Navigator[1] offhandedly describing EA as "basically the same as us". Now imagine that the longtermism discourse had gotten so bad that basically every successful EA organization could expect to experience periodic coup attempts, and "they're basically Charity Navigator" was the canonical way to insult people on the other side. That's what "left = very liberal" looks like from here.

[1] Before they started doing impact ratings.
David_Moss
It sounds like you are reading my comment as saying that "center left" is very similar to "left". But I think it's pretty clear from the full quote that that's not what I'm saying. The OP says that EA is 80% "center-left". I correct them, and say that EA is 36.8% "Left" and 39.8% "Center left". The "(so quite similar)" here refers to the percentages 36.8% and 39.8% (indeed, these are likely not even statistically significant differences).

I can see how, completely in the abstract, one could read the claim as being that "Left" and "Center left" are similar ideologies. But, in context, it only makes sense for me to be making the observation that the percentages of "Left" and "Center left" are quite similar (challenging the OP's claim that EA is all "Center left"). If I were asserting that "Left" and "Center left" are "quite similar", then I'd be minimising my own claim (many EAs are "Left", not merely "Center left").

That said, I'm not sure that mistake is the reason for the downvote, since my other comment also got downvoted. And that one just:
* Shows the breakdown by countries
* Confirms Larks' guess that "French and Scandinavian EAs also say they are left wing more frequently than right wing."
* Adds that French EAs are more likely to say they are "Left" than "Center left".
prisonpent
Now that you point it out I agree that's the more plausible reading, but it genuinely wasn't the one that occurred to me first. 