If you are wondering how this change was received by the media, well
I think putting yourself out there in a database is a good idea, as is finding recruiters that can introduce you to opportunities.
As for rejections... there's a common mindset that you just need to grind away at applications until you eventually make it through, but personally I think that if one is being rejected from >90% of applications, that's more likely a sign that something is wrong. I feel like these two causes are the most common (though there could be many others):
Hmm, I'm not confident that Bob is wrong here. It seems to me that there's a quite plausible argument that EA's involvement in AI has been net-negative, possibly so net-negative as to cancel out all of the rest of EA. You seem to assume that this was knowable in advance, but that's not necessarily so.
Your argument seems to assume that one should "shut up and multiply" and then run with that estimated EV number; but there have been many arguments on this forum and elsewhere about why we shouldn't trust naive EV estimates.
TBH my sense is that GiveWell is just being polite.
A perhaps more realistic motivation is that admitting animal suffering into GiveWell's models would implicitly force them to specify moral weights for animals (versus humans), and there is no way to do that without inviting huge controversy and leaving at least some groups very upset. Much easier to say "sorry, not our wheelhouse" and effectively set animal weights to zero.
FWIW I agree with this decision (of GiveWell's).
My understanding is that some electric and water utilities did a similar thing in the early days of the pandemic, for the same reasons.