Zach Stein-Perlman

Looking for new projects
4432 karma · Joined November 2020 · Working (0-5 years) · Berkeley, CA, USA

Bio


AI strategy & governance. ailabwatch.org.

Comments

In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute.

I suspect Politico hallucinated this, or that it arose via a game-of-telephone phenomenon. I haven't seen a good source on this commitment. (But I also haven't heard people at labs say "there was no such commitment.")

The original goal involved getting attention. Weeks ago, I realized I was not on track to get attention. I launched without a sharp object-level goal, largely to get feedback that would help me figure out whether to continue working on this project and what its goals should be.

I share this impression. Unfortunately it's hard to capture the quality of labs' security with objective criteria based on public information. (I have disclaimers about this in 4-6 different places, including the homepage.) I'm extremely interested in suggestions for criteria that would capture the ways Google's security is good.

Not necessarily. But:

  1. There are opportunity costs and other tradeoffs involved in making the project better along public-attention dimensions.
  2. The current version is bad at getting public attention; even improving it enough to get 1000x the public attention would still leave it with little. It's likely better to wait for a different project that's better positioned and more focused on getting public attention. And as I said, I expect such a project to appear soon.

Yep. But in addition to being simpler, the version of this project optimized for getting attention has other differences:

  • Criteria are better justified, more widely agreeable, and less focused on x-risk
  • It's done—or at least endorsed and promoted—by a credible org
  • The scoring is done by legible experts and ideally according to a specific process

Even if I could do this, it would be effortful and costly and imperfect and there would be tradeoffs. I expect someone else will soon fill this niche pretty well.

  1. Yep, that's related to my "Give some third parties access to models to do model evals for dangerous capabilities" criterion. See here and here.
  2. As I discuss here, it seems DeepMind shared super limited access with UKAISI (only access to a system with safety training + safety filters), so don't give them too much credit.
  3. I suspect Politico is wrong and the labs never committed to give early access to UKAISI. (I know you didn't assert that they committed that.)

Utilitarians aware of the cosmic endowment, at least, can take comfort in the fact that the prospect of quadrillions of animals suffering isn't even a feather in the scales. They shut up and multiply.

(Many others should also hope humanity doesn't go extinct soon, for various moral and empirical reasons. But the above point is often missed among people I know.)

Hmm, I think having the mindset behind effective altruistic action basically requires you to feel the force of donating. It's often correct not to donate because of some combination of expecting {better information/deconfusion, better donation opportunities, excellent non-donation spending opportunities, high returns, etc.} in the future. But if you haven't really considered large donations or don't get that donating can be great, I fail to imagine how you could be taking effective altruistic action (at least as an extremely rich person). (Related indicator of non-EA-ness: not strongly considering causes outside the one you're most passionate about.)

(I don't have context on Bryan Johnson.)

See https://ea-internships.pory.app/board; you can filter for volunteer roles.

It would be helpful to mention if you have background or interest in particular cause areas.
