To share some anecdotal data: I personally have had positive experiences doing regular coaching calls with Kat this year and feel that her input has been very helpful.
I would encourage us all to hold off on updating until we also hear the other side of the story - that generally seems like good practice whenever it is possible.
Thanks for the post!
A related question: Is LTFF more likely to fund a small AI safety research group than to fund individual independent AI Safety researchers?
So could we see a scenario where persons A, B, and C might not meet your funding bar if each applied individually for an independent research grant, but where similarly impressive people with a similarly good research agenda would be a more attractive funding opportunity if they applied together as a research group?
Thanks for publishing this! I added it to this list of impactful org/project ideas.