
This could be a long slog, but I think it could be valuable to identify the top ~100 open-source (OS) libraries and assess their level of resourcing, to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.

Relevant XKCD comic.

To further comment, this seems like it might be an intractable task, as the term "dependency hell" implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd measure their level of resourcing. Number of contributors? Frequency of commits?
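To make the measurement problem concrete, here's a minimal sketch of both proxies. It assumes a hypothetical directory `cloned_repos/` of already-cloned repositories and Python-only `requirements.txt` manifests; a real survey would need to span ecosystems (npm, PyPI, crates.io, system packages, etc.) and would probably lean on registry dependency data rather than scraping GitHub directly.

```python
import re
import subprocess
from collections import Counter
from pathlib import Path

# Assumption: the repositories to survey have already been cloned under
# this directory (hypothetical layout, for illustration only).
REPOS_DIR = Path("cloned_repos")

def dependency_counts(repos_dir: Path) -> Counter:
    """Tally how often each package is declared across every
    requirements.txt in the surveyed repositories (Python-only proxy)."""
    counts: Counter = Counter()
    for req_file in repos_dir.glob("*/requirements.txt"):
        for line in req_file.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith(("#", "-")):
                continue
            # Drop version specifiers and extras to keep just the name.
            name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip().lower()
            if name:
                counts[name] += 1
    return counts

def contributor_count(repo: Path) -> int:
    """Crude 'resourcing' proxy: number of distinct commit author emails."""
    log = subprocess.run(
        ["git", "-C", str(repo), "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(set(log.splitlines()))

if __name__ == "__main__":
    # Most-depended-upon packages first; cross-reference against
    # contributor_count() to flag widely used but thinly staffed projects.
    for name, uses in dependency_counts(REPOS_DIR).most_common(100):
        print(f"{uses:6d}  {name}")
```

Note the irony that xz itself is a C library that wouldn't show up in language-level manifests like this at all, which is part of why the measurement problem is hard.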

Also, with your example of the XZ attack, it's not even clear who carried out the attack. If you suspect it was, say, the NSA, would you want to thwart them if their purpose was to protect American interests? (I'm assuming you're pro-American.) Things like zero-days are frequently used by various state actors, and whether those uses are justified is a morally grey question.

As a computer scientist and programmer, I also doubt you'd ever be able to 100% prevent the risk of zero-days or something like the XZ attack happening in open-source code. Given how common zero-days seem to be, I suspect there are many in existing open-source work that still haven't been discovered, and that XZ was just a rare case where someone was caught.

Yes, hardening these systems might somewhat mitigate the risk, but I wouldn't know how to evaluate how effective such an intervention would be, or even how you'd harden them exactly. Even if you identify the at-risk projects, you'd still need to do something about them. Would you hire software engineers to shore up the weaker projects? Given the cost of competent SWEs these days, that seems potentially expensive, and it could compete for funding with actual AI safety work.

I'm not sure whether such a study would also be helpful to potential attackers, perhaps even more helpful to attackers than defenders, so you might need to be careful about whether and how you disseminate the information.

My sense is that 100 is an underestimate of the number of OS libraries as important as that one, but I'm not sure whether the correct figure is 1k, 10k, or 100k.

That said, this is a nice project; if you have a budget, it shouldn't be hard to find one or a few OS enthusiasts to delegate it to.

I'd be interested in exploring funding this, and the broader question of ensuring funding stability and security robustness for critical OS infrastructure. @Peter Wildeford, is this something you guys are considering looking at?

The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.

I wonder if anyone else will get a thinly veiled counterpart; given that the lead character of the show seems somewhat based on MacKenzie Scott, this seems to be a recurring device for the show.

If we take Transformative AI (TAI) to be a transformation at the scale of the industrial revolution ... has anyone thought about what "aligning" the actual 1760-1820 industrial revolution might have looked like, or what it would have meant for someone living in 1720 to work to ensure that the 1760-1820 industrial revolution was beneficial rather than harmful to humanity?

I guess the analogy might break down, though, given that the industrial revolution was still well within human control while TAI might easily not be, or given that TAI might involve more discrete/fast/discontinuous takeoffs whereas the industrial revolution was rather slow/continuous, or at least slow and continuous enough that we'd expect humans born in 1740 to adapt reasonably well to the change in progress without being too bewildered.

This is similar to, but I think still a bit distinct from, asking "what would a longtermist EA in the 1600s have done?" That's a question I still find interesting, though many EAs I know are not all that interested in it, probably because our time periods are just too disanalogous.

Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.
