This is awesome to see, congratulations on the campaign!
It's no secret that this is a pretty neglected space, especially compared to how big the potential impact could be. It's great you're sharing your results, and I hope to see it continue in years to come. The data you've gathered seems potentially useful in helping to guide and promote future campaigns too. Very impressed by the initiative here.
I've got a few questions, mostly about the numbers you've given.
I don't think it's clear how many participants there were. Is it the 70,000 engagements, or the 600 (or 287 - your numbers differ) daily check-ins?
The seven million people reached is also interesting. What counts as being reached? I imagine seeing an advertisement or something, is that right?
I'm not surprised the follow-through appears to be something like 1% - personal diet is hard to change anywhere in the world. Do you have a sense of where people fell out of the "funnel" the most, percentage-wise?
I'm also surprised by how many people identified as non-binary in the results. Perhaps it's naive of me, but I would have expected fewer people to select that option due to social and political pressure. Do you think there's a selection effect here? As in, the people who commit to plant based eating are more likely to also think critically about other topics?
Lastly, are you able to talk about your reasoning behind partnering with the ecology group and the interest in tree planting?
Thanks again for the work and for sharing the results too!
Thanks, I think you've done a decent job of identifying cruxes, and I appreciate the additional info too. Your comment about the XPT being from 2022 does update me somewhat.
One thing I'll highlight and will be thinking about: there's some tension between the two positions of
a) "recent AI developments are very surprising, so therefore we should update our p|doom to be significantly higher than superforecasters from 2022" and
b) "in 2022, superforecasters thought AI progress would continue very quickly beyond current day levels"
This is potentially partially resolved by the statement:
c) "superforecasters though AI progress would be fast, but it's actually very fast, so therefore we are right to update to be significantly higher".
This is a sensible take, and it's supported by things like the Metaculus survey you cite. However, I think that if they already expected progress to be fast, and yet still only gave a small chance of extinction in 2022, then recent developments would make them give a higher probability, but not a significantly higher one. The exact amount it has changed, and what counts as "significantly higher" versus marginally higher, have unfortunately been left as an exercise for the reader, and it's not the only risk, so I think I do understand your position.
Thanks Cody. I appreciate the thoughtfulness of the replies given by you and others. I'm not sure if you were expecting the community response to be as it is.
My expressed thoughts were a bit muddled. I have a few reasons why I think 80k's change is not good. I think it's unclear how AI will develop further, and multiple worlds seem plausible. Some of my reasons apply to some worlds and not others, and that inconsistent overlap is perhaps what's creating the lack of clarity. Here's a more general description of the failure mode I was trying to point to.
I think in cases where AGI does lead to explosive outcomes soon, it's suddenly very unclear what is best, or even good. It's something like a wicked problem, with lots of unexpected second-order effects and so on. I don't think we have a good track record of thinking about this problem in a way that leads to solutions even at the level of first-order effects, as Geoffrey Miller highlighted earlier in the thread. In most of these worlds, what I expect will happen is something like:
I think the impact of most actions here is basically chaotic. There are some things that are probably good, like trying to ensure it's not controlled by a single individual. I also think "make the world better in meaningful ways in our usual cause areas before AGI is here" probably helps in many worlds, for reasons like AI maybe trying to copy our values, or AI being controlled by the UN or whatever (in which case it's good to get as much moral progress in beforehand), or simply increasing the amount of morally aligned training data being used.
There are worlds where AGI doesn't take off soon. I think that more serious consideration of the Existential Risk Persuasion Tournament leads one to conclude that wildly transformational outcomes just aren't that likely in the short/medium term. I'm aware the XPT doesn't ask about that specifically, but it seems like one of the better data points we have. I worry that focusing on things like expected value leads to some kind of Pascal's mugging, which is a shame because the counterfactual - refusing to be mugged - is still good in this case.
I still think AI is an issue worth considering seriously, dedicating many resources to addressing, etc. I think significant de-emphasis on other cause areas is not good. Depending on how long 80k make the change for, it also plausibly leads to new people not entering other cause areas in significant numbers for quite some time, which is probably bad in movement-building ways that are greater than the sum of their parts (fewer people leads to feelings of defeat, stagnation etc, and few new people means better, newer ideas can't take over).
I hope 80k reverse this change after the first year or two. I hope that, if they don't, it's worth it.
I applaud the decision to take a big swing, but I think the reasoning is unsound and probably leads to worse worlds.
I think there are actions that look like “making AI go well” that are actually worse than doing nothing at all, because things like “keep humans in control of AI” can very easily lead to something like value lock-in, or at least leave it in the hands of immoral stewards. It’s plausible that if ASI is developed and still controlled by humans, hundreds of trillions of animals would suffer, because humans still want to eat meat from an animal. I think it’s far from clear that factors like faster alternative protein development outweigh/outpace this risk - it’s plausible humans will always want animal meat instead of identical cultured meat, for similar reasons to why some prefer human-created art over AI-created art.
If society had positive valence, I think redirecting more resources to AI and minimising x-risk would be worth it: the “neutral” outcome might plausibly be that things just scale up to galactic scales, which seems ok/good, and “doom” is worse than that. However, I think that when farmed animals are considered, civilisation's valence is probably significantly negative. If the “neutral” option of scale-up occurs, astronomical suffering seems plausible. That seems worse than “doom”.
Meanwhile, in worlds where ASI isn’t achieved soon, or is achieved and doesn’t lead to explosive economic growth or other transformative outcomes, redirecting people towards focusing on that instead of other cause areas probably isn’t very good.
Promoting a wider portfolio of career paths/cause areas seems more sensible, and more beneficial to the world.
Essentially the Brian Kateman view: civilisation's valence seems massively negative due to farmed animal suffering. This is only getting worse despite people being able to change right now. There's a very significant chance that people will continue to prefer animal meat, even if cultured meat is competitive on price etc. "Astronomical suffering" is a real concern.
Thanks for the response Rakefet!
Can I ask how many responses to the survey you got? That is still unclear to me, and it seems like one of the most important numbers for determining impact; I don't have a good sense of how seriously to take any of these numbers without it.
I'm not sure about the effectiveness of lumping in climate change. I agree addressing it is important, but many issues are important - why not also donate to the Against Malaria Foundation? Climate change is different in that people do associate it with diet more strongly, and I think anything that leads to less animal suffering is good regardless of why people make that choice, but I worry about things like people choosing to eat fish instead of cows. I'm also not convinced that the org selected is the most efficient at actually taking carbon out of the atmosphere, compared to lobbying efforts or something.
I'm not an expert, so I may well be wrong, but I think an org that's primarily concerned with impact would make different decisions to those made here. If you are primarily concerned with impact, I'd be interested to hear the rationale or theory of change. If you're not, then I don't think these numbers actually speak for themselves as claimed, there's still lots we don't know, and it strengthens my belief that people looking to fund opportunities in this area would find higher-impact orgs elsewhere.
You're under no obligation to respond to this rather pointed line of questioning/comments, but I thought it was important that I express these doubts. I do sincerely hope that this runs again in the future and the results are shared again.