A place to explain your preferences, discuss them, and maybe change your mind.
Some comments in this thread are cross-posted from a text box that appears at the end of the voting process, but everyone is welcome to post here at any time.
You can read about all the candidates here.
Incidentally, I work on AI alignment and strongly agree with your points here, especially "Wild animal welfare is downstream (upstream, I think you mean?) from ~every other cause area."
I also think Wild Animal Initiative's R&D may eventually prove extremely impactful for AI alignment.
Since it's so neglected and potentially high-impact, I view it as a high-EV approach that could contribute enormously to AI alignment.
Additionally, and this is a bit more out there: the more we invest in this today, the better our position may be in acausal trade with future intelligences that we'd want to prioritize our wellbeing.