Currently doing local AI safety movement building in Australia and NZ.
Points 1, 2 and 5: These all seem like variants of "feedback is good". If timelines are short, you probably want to take a shot directly at the goal/what needs to be done[1], even if the feedback mechanism isn't that good; if you don't take the shot, there's no guarantee that anyone else will. If timelines are longer, your risk tolerance will likely be lower, and feedback mechanisms are one key way of reducing risk.
Point 3: I expect a large proportion of this to be a founder selection effect.
Point 4: This seems to fall more under "more capital", which I already acknowledged as pointing the other way.
I suppose this lines up with "greater freedom to focus narrowly on useful work", which you consider outside the scope of the original article, whilst I see it as directly tied to how much we care about feedback.
"I think it's incumbent on us to support each other more to help each other get back to a place where we can earn to give or otherwise have a high impact again." - Do you have any thoughts on what kind of support would be most useful?