Hi BiologyTranslated, thanks for sharing your thoughts, and welcome to the forum! A minor format note: your TL;DR could perhaps be more focused on the contents of your post, and less on the 80k post.
It sounds like you disagree with their change. I do as well, so I won't focus on that.
It sounds like you have other concerns or critiques too, and I'm not sure I share those, so in the interest of hashing out my thoughts, I'll respond to some of what you've written.
no public information or consultation was made beforehand, and I had no prewarning of this change
I understand it came as a shock, but I'm not sure what 80k giving advance notice of the change would have actually accomplished here. Hypothetically, presume they said in December that they would pivot in March. How would that benefit others? I think the biggest impact would be less "shock", but I'm not sure, and we would still need to grapple with the new reality. Perhaps some kind of extra consultation with the community would be useful, but that does seem quite resource-intensive and haphazard, especially if they think this is an especially critical time. I presume that they had many discussions with experts and other community members before making such a change anyway. This is a guess and I may be wrong - others in the comments seem to feel it was a rushed decision.
Surely at least some heads up would go a long way in reducing short-term confusion or gaps.
You've written about two gaps/confusions that I can identify:
1) Introductory programs may decline in quality, due to things like out-of-date info/linkrot.
2) New people may bounce off the movement because they feel they lack autonomy in career choice/cause area.
On the first, I don't know much about linkrot, but I expect it won't be a major issue for a few years at least, though it depends on the cause area. My model is that most things don't move that quickly, and things like "number of animals eaten per year", "best guess of risk of imminent nuclear war" and "novel biohazards" are probably roughly static, or at least static enough that the intro material in those sections is fine. 80k's existing introductory resources will probably hold up for a while. If there are serious updates to things like "number of countries with nuclear weapons", I do hope they reconsider and update things there.
On the second, they have discussed that they are ok with shrinking the funnel of new people coming into 80k/the movement more generally to some degree, and that it is still their best bet. I agree it's disappointing though.
What safeguards are in place in the community to prevent sudden upheaval?
I don't think there are any, and I think this is largely a strength of the movement, so I don't think it should change. They're an independent entity, and I think they should do what they think is best. It's not a democratic movement, and while a more cause-neutral org will be missed in future years, I do hope Probably Good or another competitor fills that gap. My guess is that 80k expected that people focused on other cause areas would disagree with this change anyway.
Does this signal a further divide between parts of the community?
I guess I'm less interested in what it signals than in what it actually does. I don't think it divides people further. People in EA already disagree on various matters of ethics/fact, and I don't think an org saying "we've considered the arguments and believe one side is correct" is a significant issue. On an interpersonal level, I'm friends with people working in different cause areas despite my disagreements, and on an organisational level I think it's good that we try to decouple impact from other things where possible.
I might be off here, but I think an unwritten concern of yours is that there was a tonal issue with the communication. I don't think I had an issue with how it was communicated, especially considering the org members chatting in the comments, but others did seem to feel put off by it and considered it almost callous. I can understand where they are coming from.
Should we actually all shift to considering direct AI alignment work over just reassessing what risks change in an AGI impacted future?
This is something that we should all think about, but I don't think so. I would be curious to hear 80k talk more about it though.
That's a long response, but you wrote about some interesting ideas and I liked thinking about them too. If you have time/interest, I'd be particularly interested in hearing about things you think they could do differently on a more specific level (presuming they were going to make the change) and what counterfactual impact you think it would have.
Thanks Cody. I appreciate the thoughtfulness of the replies given by you and others. I'm not sure if you were expecting the community response to be as it is.
My expressed thoughts were a bit muddled. I have a few reasons why I think 80k's change is not good. I think it's unclear how AI will develop further, and multiple worlds seem plausible. Some of my reasons apply to some worlds and not others, and that inconsistent overlap is perhaps what's causing the lack of clarity. Here's the more general failure mode I was trying to point to.
I think in cases where AGI does lead to explosive outcomes soon, it's suddenly very unclear what is best, or even good. It's something like a wicked problem, with lots of unexpected second-order effects and so on. I don't think we have a good track record of thinking about this problem in a way that leads to solutions even at the level of first-order effects, as Geoffrey Miller highlighted earlier in the thread. In most of these worlds, what I expect is something like the following:
I think the impact of most actions here is basically chaotic. There are some things that are probably good, like trying to ensure it's not controlled by a single individual. I also think "make the world better in meaningful ways in our usual cause areas before AGI is here" probably helps in many worlds, due to things like AI maybe trying to copy our values, or AI being controlled by the UN or whatever (in which case it's good to get as much moral progress in beforehand), or just changes in the amount of morally aligned training data being used.
There are worlds where AGI doesn't take off soon. I think that more serious consideration of the Existential Risk Persuasion Tournament leads one to conclude that wildly transformational outcomes just aren't that likely in the short/medium term. I'm aware the XPT doesn't ask about that specifically, but it seems like one of the better data points we have. I worry that focusing on things like expected value leads to some kind of Pascal's mugging, which is a shame because the counterfactual - refusing to be mugged - is still good in this case.
I still think AI is an issue worth considering seriously, dedicating many resources to addressing, etc. I think significant de-emphasis on other cause areas is not good. Depending on how long 80k make the change for, it also plausibly leads to new people not entering other cause areas in significant numbers for quite some time, which is probably bad in movement-building ways that are greater than the sum of their parts (fewer people leads to feelings of defeat and stagnation, and few new people means better, newer ideas can't take over).
I hope 80k reverse this change after the first year or two. I hope that, if they don't, it's worth it.
I applaud the decision to take a big swing, but I think the reasoning is unsound and probably leads to worse worlds.
I think there are actions that look like “making AI go well” that are actually worse than doing nothing at all, because things like “keep humans in control of AI” can very easily lead to something like value lock-in, or at least leave it in the hands of immoral stewards. It’s plausible that if ASI is developed and still controlled by humans, hundreds of trillions of animals would suffer, because humans will still want to eat meat from an animal. I think it’s far from clear that factors like faster alternative protein development outweigh/outpace this risk - it’s plausible humans will always want animal meat instead of identical cultured meat, for similar reasons to why some prefer human-created art over AI-created art.
If society had positive valence, I’d think redirecting more resources to AI and minimising x-risk would be worth it: the “neutral” outcome would plausibly be that things just scale up to galactic scales, which seems ok/good, and “doom” would be worse than that. However, I think that when farmed animals are considered, civilisation's valence is probably significantly negative. If the “neutral” option of scaling up occurs, astronomical suffering seems plausible. That seems worse than “doom”.
Meanwhile, in worlds where ASI isn’t achieved soon, or is achieved and doesn’t lead to explosive economic growth or other transformative outcomes, redirecting people towards focusing on that instead of other cause areas probably isn’t very good.
Promoting a wider portfolio of career paths/cause areas seems more sensible, and more beneficial to the world.
Essentially the Brian Kateman view: civilisation's valence seems massively negative due to farmed animal suffering. This is only getting worse despite people being able to change right now. There's a very significant chance that people will continue to prefer animal meat, even if cultured meat is competitive on price etc. "Astronomical suffering" is a real concern.
Thanks, I think you've done a decent job of identifying cruxes, and I appreciate the additional info too. Your comment about the XPT being from 2022 does update me somewhat.
One thing I'll highlight and will be thinking about: there's some tension between the two positions of
a) "recent AI developments are very surprising, so therefore we should update our p|doom to be significantly higher than superforecasters from 2022" and
b) "in 2022, superforecasters thought AI progress would continue very quickly beyond current day levels"
This is potentially partially resolved by the statement:
c) "superforecasters though AI progress would be fast, but it's actually very fast, so therefore we are right to update to be significantly higher".
This is a sensible take, and is supported by things like the Metaculus survey you cite. However, I think that if they thought it was already going to be fast, and yet still only assigned a small chance of extinction in 2022, then recent developments would make them give a higher probability, but not a significantly higher one. The exact amount it has changed, and what counts as "significantly higher" vs marginally higher, have unfortunately been left as an exercise for the reader, and it's not the only risk, so I think I do understand your position.