Hot Take: Securing AI Labs could actually make things worse
There's a consensus view that stronger security at leading AI labs would be a good thing. It's not at all clear to me that this is the case.

Consider the extremes:

In a maximally insecure world, where anyone can easily steal any model that gets trained, there's no profit or strategic/military advantage to be gained from doing the training, so nobody's incentivised to invest much in doing it. We'd only get AGI if some sufficiently well-resourced group believed it would be good for everyone to have an AGI, and were willing to fund its development as philanthropy.

In a maximally secure world, where stealing trained models is impossible, whichever company/country got to AGI first could essentially dominate everyone else. In this world there's a huge incentive to invest and to race.

Of course, our world lies somewhere between these two. State actors almost certainly could steal models from any of the big 3, and potentially organised cybercriminals or rival companies could too, but most private individuals could not. Still, it seems that marginal steps towards a higher-security world make investment and racing more appealing, as the number of actors able to steal the products of your investment and compete with you for profits/power falls.
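
To make that marginal claim a bit more concrete, here's a toy model (entirely my own back-of-the-envelope framing, so take the specifics with a grain of salt). Suppose building AGI costs $C$, generates total rents $V$, and the trained model can be stolen by $k$ other actors who then compete those rents away roughly evenly. The developer's expected return is then something like

$$\mathbb{E}[\text{return}] \approx \frac{V}{k+1} - C$$

Each marginal improvement in security shrinks $k$, which raises the expected return and makes racing more attractive; at $k = 0$ (perfect security) the developer keeps everything, while at high $k$ the return tends towards $-C$ and investing looks pointless.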

But I notice I am confused. The above reasoning predicts that nobody should be willing to make significant investments in developing AGI with current levels of cybersecurity, since if they succeeded their AGI would immediately be stolen by multiple governments (and possibly rival companies/cybercriminals), which would probably nullify any return on the investment. What I observe is OpenAI raising $40 billion in their last funding round, with the explicit goal of building AGI.

So now I have a question: given current levels of cybersecurity, why are investors willing to pour so much cash into building AGI?

...maybe it's the same reason various actors are willing to invest in building open-source models, which is also mysterious to me.

Excited to hear people's thoughts!

P.S. In a not-entirely-successful attempt to keep this short, I've totally ignored misuse risks. If you're mainly worried about misuse, then the case for marginal security is much stronger. That said, in obviously insecure worlds, it may be more obvious to AI companies that dangerous capabilities will be misused, and therefore shouldn't be created.
