Exactly the kind of high-level perspective that I repeatedly find missing in the frequent "talent bottleneck" discussions on the forum. Thanks for the analysis, Weronika!
The next logical step would be to run regular updates for a longitudinal view of the "EA" job market. (Just saying, definitely not trying to sneak more tasks into your current workload.)
Stripped of all AI-centred argumentation, the reply is left mostly empty. This suggests that judgmental forecasting, at least as exercised by FRI, should perhaps be thought of as a sub-domain of AI safety. In such a case, its impact would need to be evaluated in the portfolio context of all AI safety budgets, meaning a much higher hurdle rate would have to be cleared to justify its activities.
What applies more broadly to judgmental forecasting and online betting platforms -- and is also the basis for many arguments in this defence of forecasting -- is the circular reasoning about the field's importance, frequently repeated by its own practitioners and those adjacent to it. But, in contrast to these opinionated voices, the evidence is lacking. Merely stating that forecasting has informed some policy or influenced some career decisions is not sufficient. Similarly, whether its impact is positive or negative is taken at face value and never substantiated.
All this isn't to say that judgmental forecasting research or its funding should be dispensed with. In fact, hybrids that combine quantitative predictive models with expert judgment are among the foundational tools of large organisations' decision-making processes. However, I believe the field's association with online betting (high time we called things what they are), as well as its over-reliance on AI for its services, is actually hurting it.
My feedback is that yet another website/airtable/tool collecting offers or serving as a candidate pipeline provides zero value to the community. My sense is that there’s a glut of candidates caused by a lack of opportunities, i.e. budgeted permanent positions for mid-career professionals and above (not to be confused with needs — of course there’s demand for experience and seniority).
What I’d like to see instead, as someone trying to pivot to a meaningful career, is curated, honest, and periodically refreshed intel on the ask side, including:
I found this to be a very tasty morsel of epistemological digression, written in a style that seems deliberately nuanced to underscore its central argument: What does it mean to "know"? How does a rational mind construct and justify its beliefs?
The piece argues that beliefs should function as guides for anticipating experience. This can be distilled into the following framework: Each hypothesis (or inquiry) must include a testable prediction (an anticipation). When confronted with sufficient empirical evidence (sensory experience), this prediction should be either confirmed or disproven, leading either to the rejection (or eviction) of the initial hypothesis or to the formulation of a belief (i.e., knowledge).
Inferences, particularly the unseen (the atoms and the floor paragraph), must ultimately connect to observable outcomes or be otherwise falsifiable to hold any value in constructing our mental maps of reality. The author cautions against human tendencies:
What I think makes this text a valuable addition to the EA handbook is its emphasis on the virtues of reason and empiricism, essential qualities for maximising positive impact through evidence-based altruism.
Some counterarguments to the "AI safety field is maturing" claim -- suggesting instead that it is still very much in its infancy -- which might also (at least to some extent) explain the talent issue:
While I appreciate your attempt to systematize the discourse around hiring in AIS, before collecting effort-intensive panel data it sometimes helps to turn the tables and ask a fundamental question: is the field at all attractive to talent (esp. "top talent") in the first place? I'd say reasons to be sceptical abound. Starting with framing all things non-technical as "generalist" (while in fact each role is a specialization in its own right), through demanding full mission alignment, all the way to the roll-the-dice bet on a career in an emerging field funded by discretionary donations. Not to mention the reputational risk, by association, from some of the more volatile characters in the community.
In terms of next steps, I'd take a look at aggregate spend on roles within AIS. It might reveal much more than any self-reported "what pains me today" survey.