Zach Stein-Perlman
AI strategy & governance. ailabwatch.org.

I have a decent understanding of some of the space. I feel good about marginal 501(c)(4) money for AIPN and SAIP. (I believe AIPN now has funding for most of 2026, but I still feel good about marginal funding.)

There are opportunities to donate to politicians and PACs which seem 5x as impactful as the best c4s. These are (1) more complicated and (2) public. If you're interested in donating ≥$20K to these, DM me. This is only for US permanent residents.

I'm confident the timing was a coincidence. I agree that (novel, thoughtful, careful) posting can make things happen.

I mostly agree with the core claim. Here's how I'd put related points:

  • Impact is related to productivity, not doing-your-best.
  • Praiseworthiness is related to doing-your-best, not productivity.
  • But doing-your-best involves maximizing productivity.
  • Increasing hours-worked doesn't necessarily increase long-run productivity. (But it's somewhat suspiciously convenient to claim that it doesn't, and for many people it would.)

I haven't read all of the relevant stuff in a long time, but my impression is that Bio/Chem High is about uplifting novices and Critical is about uplifting experts. See the PF below. Also note that OpenAI said Deep Research was safe; it's ChatGPT Agent and GPT-5 that it said required safeguards.

I haven't really thought about it and I'm not going to. If I wanted to be more precise, I'd assume that a $20 subscription is equivalent (to a company) to finding a $20 bill on the ground, assume that an ε% increase in spending on safety cancels out an ε% increase in spending on capabilities (or think about it and pick a different ratio), and look at money currently spent on safety vs capabilities. I don't think P(doom) or company-evilness is a big crux.

fwiw I think you shouldn't worry about paying $20/month to an evil company to improve your productivity, and if you want to offset it I think a $10/year donation to LTFF would more than suffice.

The thresholds are pretty meaningless without at least a high-level standard, no?

One problem is that donors would rather support their favorite research than a mixture that includes non-favorite research.

I'm optimistic about the very best value-increasing research/interventions. But in terms of what would actually be done at the margin, I expect most work that people would do for "value-increasing" reasons would be confused/doomed (and this is less true for AI safety).

I think for many people, positive comments would be much less meaningful if they were rewarded/quantified, because you would doubt that they're genuine. (Especially if you excessively feel like an imposter and easily seize onto reasons to dismiss praise.)

I disagree with your recommendations despite agreeing that positive comments are undersupplied.
