"we block all factory farm expansions -> people realize that we don't want factory farmed products in the UK at all -> public opinion shifts quickly -> multiple policy changes are now simultaneously possible"
I think this ToC is much less clean than it sounds.
"It’s not just AIM, all of EA has been shooting its own foot since inception with its criteria for accepting people. Don’t follow their example. Find more experienced veterans. Don’t consider so highly academic backgrounds."
In general, I'm somewhat sympathetic to the claim that EA can be too focused on young graduates from a few universities, but I think it's pretty hard to make that charge stick on AIM.
Some people who go through the program did indeed go to Ivy League/Oxbridge universities, but many (including me) did not, and the cohorts include a diverse range of people with different life experiences.
It is my understanding that AIM does try to attract experienced people as well as young people, but, as I'm sure you can appreciate, when the 'job' includes almost no job security, low pay and potentially needing to relocate to the other side of the world, it's often more attractive to younger folk with fewer commitments.
If you are considering not applying because you don't think you have the right 'CV' (for any reason) I would strongly recommend you DO apply. I almost counted myself out for this reason and I am very glad I put in my application.
Thanks for this post. I thought it was really interesting and I think you are probably right.
My main objection, however, would be that technologies developed by welfarists would struggle to compete in the market as it stands, because we'd be trying to meet the often competing demands of improving welfare and improving economic outcomes for farmers (in order to have our tech adopted in the first place).
Companies whose technologies purely deliver efficiency gains for farms are more likely to succeed in a profit-focused market than these welfarist technologies, and so welfarist animal-tech companies might be competed out of the market.
It might be more efficient to focus our attention on creating the kinds of market conditions that incentivise profit-making companies to develop the tech we want to see.
What this means is focusing on what we're already doing - corporate campaigning, lobbying etc. - to create the conditions where farms/food companies have to care about animal welfare to some extent, and then letting the profit-seeking (counterfactually cheaper) money be spent on R&D to deliver the tech that meets those objectives.
Thanks for responding to my points! You didn't have to go through line by line, but it's appreciated.
Obviously a line by line response to your line by line response to my line by line response to your article would be somewhat over the top. So I'll refrain!
The general point I'd make, though, is that this almost feels like an argument for something before you've decided what you want to argue for. There feels like a conceptual hole in the middle of this piece (as you say, people are still trying to work out what the problem is). You also respond to most (though not all) of my points without actually giving a counter-argument, just claiming that I'm clearly mistaken. This makes it quite hard to actually engage with what you've written.
Maybe, as Alexander seems to think, I'm just a poor blinkered fool who can't understand other people's perspectives - but I am actually trying to engage with what you've written here, not sh*t-posting.
I was interested in this because I’m broadly sympathetic to the idea that we might not give enough attention to bigger systems. But for me, this post only really strengthened my EA tendencies.
So the core argument in favour of the metacrisis being ‘a thing’ (upon which the later arguments that we should take it seriously hang) seems to be:
a. Technology makes us more powerful and the world is more interconnected
b. As a result, our capacity for self destruction has massively increased
c. Our ‘culture, the implicit assumptions, symbols, sense-making tools and values of society’ are not ‘mature’ enough to ensure this capacity is managed in a low-risk manner
d. Therefore, some kind of existential risk is more likely
Propositions A and B seem basically correct to me. But I think proposition C is very weak. I have two main problems with it:
1) There are just so many different things inside that grouping. The article only makes an argument for why a set of implicit assumptions is a cause of the problem, then sneaks all this other stuff into that one central paragraph. It seems highly likely to me that some things (like society's values) matter more to how well the world goes than others (like symbols).
2) I think C remains to be demonstrated. While there are many problems with society and global coordination, it seems that, at the crunch, global coordination often pulls through (nuclear proliferation, chemical weapons and CFCs are examples). You can make an argument that we don't have the right tools, but I think you can make at least as strong an argument that we know exactly what the right tools are and should be putting our efforts into strengthening global institutions of coordination.
I think the Diego character makes a number of other mistakes which I'm not sure are necessarily core to the argument, but which certainly weaken its credibility for me:
I like the idea of doing more thinking through Socratic dialogues and there were a couple of jokes here which actually made me laugh out loud. But it has left me closer to thinking this integral/metacrisis thing is lacking in substance. Putting this author aside, it seems like many of the folk who talk about this stuff are merely engaging in self-absorbed obscurantism.
Even amongst academic conferences, this seems quite outside of my personal experience. I've been to a small number of academic animal law conferences and - while there is plenty to complain about from an EA perspective - most people were there because they cared about animals and the food was all vegan.
I'm going to read this full article more carefully and post a more considered comment later on, but I wanted to get this in early as my contribution to the conversation which I hope this article produces (because I think it's a great piece):
I think your portrayal of 'short term pragmatism' is a bit of a straw man. I don't really recognise this view amongst the animal nonprofits that I speak to.
Yes, many people spend a lot of time talking about and thinking about winning the specific campaign that they are involved in right now (naturally), but those campaigns are usually tied into a longer term theory of victory which involves the end of factory farming.
It might be that there are differences in terms of how far away from that ultimate victory we are (a few decades versus 50+ years, for example), and so these specific campaigns might feel too timid to some. But then we should be having a conversation about how we work out which timeline is more accurate and, therefore, what the appropriate level of ambition is.