Currently doing local AI safety Movement Building in Australia and NZ.
Context: I've done local community building (running AI Safety ANZ), but also facilitated for BlueDot.
There are definitely a lot of advantages to being able to draw talent from anywhere in the world. I suspect that the competitiveness of local movement-building will vary massively by location. In terms of impact per dollar, groups at top global universities or in strategic locations (San Francisco, London, Washington, Brussels, etc.) are most likely to be competitive.
It's also important to think on the margin rather than on average. You'd have to talk to the core BlueDot team to find out what they would do with marginal funding and how promising they think the folks they rejected are.
These proposals seem pretty good. One area I'm a bit less certain about though is the focus on growth.
I hadn't really thought very much about the morale implications of growing EA before. These could be strong reasons to aim for growth.
At the same time, I do think it's worth noting that there's a certain tension between a principles-first approach and emphasising growth. Firstly, if we're aiming to find people who strongly align with EA principles, rather than just resonating with one of the cause areas, that significantly narrows the pool. Secondly, it's easier to build a movement where people have a deep understanding of the movement's principles when it isn't growing too fast. Thirdly, when a community has a strong commitment to its principles, it can often access strategies that are less dependent on the community's size than a community whose commitment is weaker, which reduces the value of growth.
I'm not saying that a growth strategy would be a mistake, just noting a deep tension here.
I'll also note one argument on the growth side: to the extent that EA talent is being pulled into focusing more narrowly on AI safety, EA needs to increase the rate at which it brings in new talent in order to keep the movement healthy and viable. I don't know how strong this consideration is, as I don't have a deep understanding of how EA is doing outside of Australia (within Australia, more growth would be beneficial because so much of our talent gets pulled overseas).
I guess the issue with arguing for AI tutoring interventions to increase earnings is that they would have to compete against AI tutoring interventions to assist folks working directly on high-priority issues, and that comparison is unlikely to come out favourably (though the former has the advantage of being more sellable to traditional funders).
a) The link to your post on defining alignment research is broken
b) "Governing with AI opens up this whole new expanse of different possibilities" - Agreed. This is part of the reason why my current research focus is on wise AI advisors.
c) Regarding malaria vaccines, I suspect it's because the people who focused on high-quality evidence preferred bed nets, and folks who were interested in more speculative interventions were drawn more towards long-termism.
In retrospect, it seems that LLMs were initially successful because they allowed engineers to produce certain capabilities in a way that almost maximally leaned on crystallized knowledge and minimally leaned on fluid intelligence.
It appears that LLMs have continued to be successful because we've gradually been able to get them to rely more on fluid intelligence.
The AI Safety Fundamentals course has done a good job of building up the AI safety community and you might want to consider running something similar for moral alignment.
One advantage of developing a broader moral alignment field is that you might be able to produce a course that would still appeal to folks who are skeptical of either the animal rights or AI sentience strands.
I can share a few comments on my thoughts here if this is something you'd consider pursuing.
(I also see possible intersections with my Wise AI advisor research direction).
Sorry to hear it didn't work out and thank you for your service.
For what it's worth, it's often valuable to take a step back rather than keep hitting your head against a wall. This can provide space to develop a better sense of perspective: why things went the way they did, whether you might have had a shot if you'd approached things differently, or whether something else might be a better fit for you.
One thing I'd be much more excited about seeing than "quantifying post-training variables and their effects" (but which I'm not planning to pursue) would be to take an old model, map the post-training enhancements discovered over time, and see how the maximum elicitable capabilities change.
I'm worried that quantifying post-training variables directly has significant capabilities externalities and that there's no obvious limit to how far post-training can be pushed.
Comments:
For the record, I see the new field of "economics of transformative AI" as overrated.
Economics has some useful frames, but it also tilts people towards being too "normy" on the impacts of AI and it doesn't have a very good track record on advanced AI so far.
I'd much rather see multidisciplinary programs/conferences/research projects, including economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI.
(I'd be more enthusiastic about building economics of transformative AI as a field if we were starting five years ago, but these things take time and it's pretty late in the game now, so I'm less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames).