I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how to allocate resources within portfolios of different causes, and how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide survey methodology and data analysis.
Formerly, I also managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
Thanks Vasco!
This bullet plus the other I quoted above suggests typical junior and senior hires have lifetimes of 40.2 roles (= 2.04*10^6/(50.7*10^3)) and 16.1 roles (= 7.31*10^6/(455*10^3)) respectively, which are unreasonably long. For 3 working-years per junior hire and 10 working-years per senior hire, these would correspond to working at junior level for 121 years (= 40.2*3), and at senior level for 161 years (= 16.1*10).
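For clarity, here's a minimal sketch of that arithmetic (the dollar figures are the ones quoted above; the working-years-per-role figures and variable labels are just illustrative assumptions):

```python
# Rough check of the implied figures quoted above (labels are illustrative).
value_per_junior_hire = 2.04e6   # value assigned to a junior 'hire-level' person
value_per_junior_role = 50.7e3   # value assigned to a junior role being filled
value_per_senior_hire = 7.31e6
value_per_senior_role = 455e3

junior_roles = value_per_junior_hire / value_per_junior_role   # ~40.2 roles
senior_roles = value_per_senior_hire / value_per_senior_role   # ~16.1 roles

# Assuming ~3 working-years per junior role and ~10 per senior role:
print(junior_roles, junior_roles * 3)    # ~40.2 roles -> ~121 years at junior level
print(senior_roles, senior_roles * 10)   # ~16.1 roles -> ~161 years at senior level
```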
We took a different approach to this here, where we looked at the ratio between the value people assigned to a role being filled at all and the value of a person joining the community, rather than the value of the first vs second most preferred hire.
If we look at those numbers, we only get a ratio of ~5 (for both junior and senior hires), i.e. however valuable people think a role being filled is, they think the value of getting a 'hire-level' person into the community is approximately 5x this.
This seems more in line with the number of additional roles that we might imagine a typical hire goes on to after being hired for their first role. That said, people might also have been imagining (i) that the value people produce increases (perhaps dramatically) after their first role, or (ii) that people create value for the community outside the roles they're hired into.
Thanks for the comment Jessica! This makes sense. I have a few thoughts about this:
Hey Manuel,
I think the public posts should start coming out pretty soon (within the next couple of weeks).
That said, I would strongly encourage movement builders and other decision-makers to reach out to us directly and request particular results when they are relevant to your work. We can often produce and share custom analyses within a day (much faster than a polished public post).
Many people believe that AI will be transformative, but choose not to work on it due to factors such as a (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
There may be various other reasons why people choose to work on other areas, despite believing transformative AI is very likely, e.g. decision-theoretic or normative/meta-normative uncertainty.
I think the possibility that outreach to younger age groups[1] might be net negative is relatively neglected. That said, the two possible reasons suggested here didn't strike me as particularly conclusive.
The main reasons why I'm somewhat wary of outreach to younger ages (though there are certainly many considerations on both sides):
These questions seem very uncertain, but also empirically tractable, so it's a shame that more hasn't been done to try to address them. For example, it seems relatively straightforward to compare the success rates of outreach targeting different ages.
We previously did a little work to look at the relationship between the age when people first got involved in EA and their level of engagement. Prima facie, younger age of involvement seemed associated with higher engagement, though there's a relative dearth of people who joined EA at younger ages, making the estimates uncertain (when comparing <20s to early 20s, for example), and we'd need to spend more time on it to disentangle other possible confounds.
Or it might be that 'life stages' are the relevant factor rather than age per se, i.e. a younger person who's already an undergrad might have similar outcomes when exposed to EA as a typical-age undergrad, whereas reaching out to people while in high school (regardless of age) might be associated with negative outcomes.
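To illustrate the kind of comparison described above, here's a minimal sketch, assuming a hypothetical survey extract with 'age_first_involved' and 'engagement' columns (the data, column names, and cohort cut points are placeholders, not actual EA Survey variables):

```python
import pandas as pd

# Hypothetical survey extract: age when first involved and a 1-5 engagement score.
df = pd.DataFrame({
    "age_first_involved": [17, 19, 22, 25, 28, 34, 18, 21, 24, 31],
    "engagement":         [4, 5, 4, 3, 3, 2, 5, 4, 3, 2],
})

# Bin into rough cohorts (cut points are placeholders).
df["cohort"] = pd.cut(
    df["age_first_involved"],
    bins=[0, 19, 24, 29, 120],
    labels=["<20", "20-24", "25-29", "30+"],
)

# Compare mean engagement and cohort sizes; a small <20 cohort makes estimates noisy,
# and this ignores confounds (e.g. time since joining, life stage, recruitment channel).
print(df.groupby("cohort", observed=True)["engagement"].agg(["mean", "count"]))
```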
I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances.
This isn't expressing disagreement, but I think it's also important to consider the social effects of our speaking in line with different epistemic practices, i.e.,
I think these questions are relevant in a variety of ways:
One move which is sometimes made to suggest that these things aren't relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if we think that knowing about awareness of and attitudes towards EA among certain groups is highly important, it doesn't follow that broader public attitudes are not important.
As a practical matter, it's also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups (just not really niche ones like elite policymakers), particularly with larger sample sizes.
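As a rough illustration of why sample size matters for subgroup estimates, here's a minimal sketch assuming simple random sampling (the subgroup sizes are purely illustrative):

```python
import math

# Rough 95% margin of error for a proportion estimated within a subgroup,
# assuming simple random sampling and p = 0.5 (the worst case).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# E.g. a subgroup making up 2% of a 10,000-person sample vs a 1,000-person sample:
print(margin_of_error(200))   # ~0.07, i.e. roughly +/- 7 percentage points
print(margin_of_error(20))    # ~0.22, i.e. roughly +/- 22 percentage points
```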
We didn't directly examine why worry is increasing across these surveys. I agree that would be an interesting thing to examine in additional work.
That said, when we asked people why they agreed or disagreed with the CAIS statement, people who agreed mentioned a variety of factors, including "tech experts" expressing concerns, having seen films like Terminator, and directly observing characteristics of AI (e.g. that it seemed to be learning faster than we would be able to handle). In the CAIS statement writeup, we only examined the reasons why people disagreed (responses from those who agreed tended to be more homogeneous, because many people were just saying something like 'it's a serious threat'), but we could potentially do further analysis of why they agreed. We'd also be interested to explore this in future work.
It's also perhaps worth noting that we originally wanted to run Pulse monthly, which would allow us to track changes in response to specific events (e.g. the releases of new LLM versions). Now that we're running it quarterly (due to changes in the funding situation), that will be less feasible.
Yeah, I definitely agree that asking multiple questions per object of interest to assess reliability would be good. But I also agree that this would lengthen a survey that people already thought was too long (which would likely reduce response quality in itself). So I think this would only be possible if people wanted us to prioritise gathering more data about a smaller number of questions.
Fwiw, for the value-of-hires questions, we have at least seen these questions posed in multiple different ways over the years (e.g. here) and they consistently produce very high valuations. My guess is that, if those high valuations are misleading, this is driven more by factors like social desirability than by difficulty/conceptual confusion. There are some other questions which have been asked in different ways across years (we made a few changes to the wording this year to improve clarity, but aimed to keep it the same where possible), but I've not formally assessed how those results differ.