I think it's good to critically interrogate this kind of analysis, and I don't want to discourage that. But as someone who publicly expressed skepticism about Flynn's chances, I think several differences make this race worth closer consideration: the polls are much closer, Biden is well known and experienced at winning campaigns, and the gap between the candidates seems much larger. Given that, it seems far more reasonable to think Biden could win, and that this will be a close race worth spending some effort on.
"Something relevant to EAs that I don't focus on in the paper is how to think about the effect of campaigning for a policy, given that I focus on the effect of passing one conditional on its being proposed. It turns out there's a method (Cellini et al. 2010) for backing this out if we assume that the effect of passing a referendum on whether the policy is in place later is the same on your first try as on your Nth try. Using this method yields an estimate of the effect of running a successful campaign on later policy of around 60% (Appendix Figure D20)."
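The back-out logic can be sketched roughly as follows. The notation and simplifications here are mine, not the paper's exact formulation (I'm ignoring, for instance, subsequent measures on the barely-pass side of the cutoff):

```latex
% ITT_tau : RD contrast of barely passing vs. barely failing a measure
%           tau years ago, on whether the policy is in place today.
% theta_tau : effect of a single passage tau years ago (the object of interest).
% F_j : probability that a barely-failed measure passes j years after the loss.
%
% Barely-failed measures often return and pass later, so the naive RD
% contrast nets out part of the policy's effect:
%   ITT_tau = theta_tau - sum_{j=1}^{tau} F_j * theta_{tau - j}.
%
% If theta is the same on the first try as on the Nth try, this solves
% recursively:
\[
\theta_\tau \;=\; \mathrm{ITT}_\tau \;+\; \sum_{j=1}^{\tau} F_j\,\theta_{\tau-j},
\qquad \theta_0 = \mathrm{ITT}_0 .
\]
```

Starting from the contemporaneous effect and the estimated re-passage probabilities, each later theta can then be recovered in turn.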
Very interesting.
1. Did you notice an effect of how large/ambitious the ballot initiative was? I remember previous research suggesting that consecutive piecemeal initiatives were more successful at creating large change than single large ballot initiatives.
2. Do you know how much the results vary by state?
3. How different do ballot initiatives need to be for the large first-campaign advocacy effect to kick in? Does this work as long as the policies are not identical, is it cause-specific, or something in between? Is the effect a smooth gradient, or discontinuous after some tipping point?
That's a good point, although: 1) if people leave a company to go to one that prioritizes AI safety, there are fewer workers at the other companies who feel as strongly, so a union is less likely to improve safety there; 2) workers commonly take action to improve safety conditions for themselves, and much less commonly on issues that don't directly affect their own work, such as air pollution or carbon emissions; and 3) if safety-inclined people become tagged as wanting to slow the company down generally, hiring teams will likely start filtering out many of the most safety-minded candidates.
I've thought about this before and talked to a couple of people at labs about it. I'm pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers want AI development to go as fast as leadership does, or faster, whether because they're excited to work on cutting-edge technology, want to change the world, or hold equity. I remember articles about people leaving Google for companies like OpenAI because they thought Google had become too slow and cautious and had lost its "move fast and break things" ethos.
Interesting post - I particularly appreciated the point that Szilard's silence didn't really affect Germany's technological development. Leopold Aschenbrenner's manifesto recently cited this as an analogy for why secrecy is important, but apparently it wasn't that simple. I wonder how many other analogies, there and elsewhere, don't quite hold; checking them could be a useful analysis for anyone with the background or interest.