Thanks for the thoughtful reply to the thoughtful reply. I appreciate the challenge of trying to bridge two different frameworks you care about, and the challenges of discussing it on the forum.
Two points I want to follow up on relate to the metacrisis.
So it's not a solution, it's a whole ecosystem of solutions that we have to work on, but they all have to be informed by understanding the problems and the interconnection of the whole well enough that you don't advantage one part while externalizing the cost to the other areas. And I appreciate you being available late at night your time. I think we've got to start with the part of your work that I found meaningful and valuable to make more central: the embedded growth obligation in finance being coupled to diminishing returns in energy that are not easily overcome through current renewable technologies or efficiencies in process, that being a major fucking thing we have to deal with, and that being connected to so many of the other things. I think we did a good job of starting that, and maybe getting to what some of the transition looks like, some of the cultural parts, some of the advanced policy parts, can be our next conversation.
I'd be curious to hear from you: what are some wins the metacrisis movement has had, or solutions it has proposed?
I appreciate the efforts to try and bridge two projects you think are valuable. A few thoughts/comments/disagreements:
1. One way to read this seems to boil down to: if you like EA, but also want more metacrisis/sensemaking/systems thinking than EA typically offers, then that's us. Come say hi.
2. I feel like there's some irony here: EA conversation norms tend towards very direct communication, while sensemaking folks tend to speak more indirectly. The pitch for integral altruism itself is framed in fairly indirect language at times. It's hard to name the exact dynamic, but I found myself working hard to understand parts of what this paragraph is trying to say (maybe that's just me):
Psychological, emotional and spiritual development can help us cultivate a genuine desire for the wellbeing of others, resulting in altruism grounded in truth rather than being driven by guilt or pride. Such growth can also improve our epistemics by shining light on What’s Going On For Us and inspire action by deeply connecting us to the value we’re fighting for.
3. Some of these points seem surprising to include as things added by integral altruism, since they seem to me to be a regular part of EA discourse already. I'm thinking of the sections that discuss valuing other things in life besides impact, and the idea that inner work can lead to more impact.
4. I think a big decision point here is whether the merits of integral altruism will be argued on the territory of EA assumptions, and this post seems to move between the two. For example, you claim there are real downsides to seeing x-risk in isolation rather than as interconnected with other problems. This seems big and important if true, and like something that could be argued comfortably within EA norms. I appreciate that puts the burden on you, but if you persuade folks here, I imagine that would be a big win for everyone. FWIW, whenever I've listened to folks talk about the metacrisis I've literally not been able to understand the arguments. It could be a huge service to make the case for the metacrisis in EA-friendly language.
Yeah I have, and my impression from those I've spoken with is that this has not been the case. You don't think most people whose job primarily involves sitting at a computer could have much of their job automated by a software engineer on call? For example:
How organisations with low AI usage can and should be using it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I've seen the following efforts to increase usage so far:
The above has made a real dent in AI usage, but much less than we should be aiming for given the gains left on the table. My sense is that these actions have only produced incremental improvements because:
I think the following would meaningfully improve how much individuals and organisations use AI:
What do people think? What have I missed?
Curious if you disagree, but these strike me as red flags (I skimmed these, so let me know if I got anything wrong).
I'm very skeptical of any theory of change that relies on large parts of society behaving differently, unless there is very compelling evidence that this would work. I see this a lot in non-EA vegan advocacy, where the claim is that if everybody just did x differently (e.g. debated differently), things would change. Everybody very rarely just does anything differently. One of the big values I see in EA is, for example, contributing to companies going cage-free at scale, while the rest of the vegan movement was failing to win individual hearts and minds or developing social movement theories about how we're on the precipice of a new way of thinking spreading.
I've been curious what the metacrisis folks could produce because I respect some of the people involved and I take the critique seriously that EA doesn't focus on systemic issues or interrelated problems enough.
But it strikes me that folks looking at systemic/interrelated solutions should grapple with the fact that these are much harder to execute, and that, to me at least, the solutions proposed seem very unlikely to come remotely close to tackling the problem.
Caveat: I do appreciate all of this could just be due to my lack of deep engagement.