Biosecurity at Open Phil
Others have made this point (e.g. Carl Shulman), but adding it here briefly: since humans are K-strategists, our risk/reward psychology is strongly risk-averse. The fitness cost of getting a limb ripped off heavily outweighs the fitness benefit of any single good meal or mating opportunity. But for an r-strategist, one good meal or one mating opportunity might easily be worth a high chance of losing a limb, since the fitness costs and benefits are far more skewed toward rare upside. If the fitness costs and benefits are skewed in this way, we should expect the reward/punishment signals to evolve to match, making the psychology of an r-strategist potentially very alien to us.
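The payoff asymmetry above can be made concrete with a toy expected-fitness calculation. All numbers here are hypothetical, chosen only to show how the same gamble flips sign for the two strategies:

```python
# Toy expected-fitness calculation contrasting a K-strategist and an
# r-strategist facing the same risky opportunity (e.g. a contested meal).
# All numbers are hypothetical illustrations, not empirical estimates.

def expected_fitness_gain(p_injury, injury_cost, reward):
    """Expected change in fitness from taking the risk."""
    return (1 - p_injury) * reward - p_injury * injury_cost

p_injury = 0.10  # 10% chance of losing a limb in the attempt

# K-strategist: few offspring, long life. A lost limb forfeits many
# future reproductive opportunities, so the injury cost dwarfs the reward.
k_gain = expected_fitness_gain(p_injury, injury_cost=50.0, reward=1.0)

# r-strategist: many offspring, short life. One good meal or mating
# opportunity is a large fraction of total lifetime fitness.
r_gain = expected_fitness_gain(p_injury, injury_cost=2.0, reward=1.0)

print(f"K-strategist expected gain: {k_gain:+.2f}")  # negative: avoid the risk
print(f"r-strategist expected gain: {r_gain:+.2f}")  # positive: take the risk
```

With these illustrative payoffs the K-strategist's expected gain is negative and the r-strategist's is positive, so selection would tune their reward/punishment signals in opposite directions for the identical gamble.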
Because you've been a public servant who took on the responsibility of shutting down the Soviet bioweapons program, securing loose nuclear material, and kickstarting a wildly successful early career program while at the DoD, I need to know: is it ever difficult being so awesome?
And, what would your advice be for younger folks aiming to follow in your footsteps?
Hi, thanks for raising these questions. I lead Open Philanthropy’s biosecurity and pandemic prevention work, and I was the investigator for this grant. For context, in September last year I got an introduction to Helena along with some information about work they were doing in the health policy space. Before recommending the grant, I did some background reference calls on their impact claims, considered concerns similar to those raised in this post, and ultimately felt there was enough of a case to place a hits-based bet (especially given the more permissive funding bar at the time).
Just so there’s no confusion: I think it’s easy to misread the nepotism claim as saying that I or Open Phil have a conflict of interest with Helena, and want to clarify that this is not the case. My total interactions with Helena have been three phone calls and some email, all related to health security work.
Excited to see this kind of analysis!
Worried that this is premature:
there is no reason for the great powers to ever deploy or develop planet-killing kinetic bombardment capabilities
This seems true to a first approximation, but if the risk we are preventing is tiny, then even a tiny chance of dual-use harm becomes a big deal. The behavior of states suggests we can't put the probability of something like this below 1 in 10,000. Some random examples:
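One way to see why the dual-use probability can dominate: compare the expected harm averted against the expected harm from misuse. The numbers below are hypothetical placeholders that only illustrate the structure of the argument, not real risk estimates:

```python
# Toy expected-value comparison for a capability that prevents a tiny
# baseline risk but carries a small chance of dual-use misuse.
# All probabilities are hypothetical, chosen only for illustration.

p_risk_prevented = 1e-6  # baseline risk the capability is meant to stop
p_dual_use = 1e-4        # chance of misuse (the >= 1-in-10,000 floor above)
harm = 1.0               # normalize the badness of either catastrophe to 1

benefit = p_risk_prevented * harm  # expected harm averted
cost = p_dual_use * harm           # expected harm from misuse

print(f"expected benefit: {benefit:.1e}")
print(f"expected cost:    {cost:.1e}")
print(f"cost/benefit ratio: {cost / benefit:.0f}x")
```

Under these assumed numbers the expected harm from misuse exceeds the expected harm averted by a factor of 100, which is why "no reason to ever deploy it" has to hold to a very high confidence before building the capability looks net positive.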
Fwiw, my (admittedly vibes-based) sense is that Palantir was a deliberate push to fill the 'surveillance company' niche in a way that had guardrails and protected civil liberties.