Judd Rosenblatt

CEO @ AE Studio

Comments

Incidentally, I work on AI alignment and strongly agree with your points here, especially "Wild animal welfare is downstream (upstream, I think you mean?) from ~every other cause area"

I also think Wild Animal Initiative R&D may eventually wind up being extremely impactful for AI alignment. 

Since it's so unbelievably neglected and potentially high impact, I view it as a fairly high EV neglected approach that could contribute enormously to AI alignment.

Additionally, and a bit more out there: the more we invest in this today, the better it may position us in acausal trade with future intelligences that we'd want to prioritize our wellbeing in turn.

Interestingly, this past week in DC, I saw Republican members and staffers far more willing than many EAs in DC to accept that Xi is likely an AI doomer, and then to consider how we should best leverage that. A possible hypothesis: Democrats have imperfect models of Republicans' minds. When thinking about China, they role-play as Republicans but don't go deep enough to realize that Republicans can weigh evidence too.

By the way, worth highlighting from the WSJ article: Murati may have left partly out of frustration at being rushed to deploy GPT-4o without enough time for safety testing, under pressure to launch quickly and pull attention away from Google I/O. Sam Altman has a pattern of trying to outshine any news from a competitor, and he prioritizes that over safety. Here, it led to the post-launch finding that 4o "exceeded OpenAI's internal standards for persuasion." This doesn't bode well for responsible future launches of more dangerous technology...

Also worth noting: "Mira Murati, OpenAI's chief technology officer, brought questions about Mr. Altman's management to the board last year before he was briefly ousted from the company."

Strongly agreed about more outreach there. What specifically do you imagine might be best?

I'm extremely concerned about AI safety becoming negatively polarized. I've spent the past week in DC meeting Republican staffers and members who, when approached in the right frame (which most EAs cannot do), are surprisingly open to learning about AI x-risk and are, by default, extremely concerned about it.

I'm particularly concerned about a scenario in which Kamala wins and opposing AI safety becomes a Republican partisan position. This doesn't have to happen, but there's a decent chance it does. If Trump had won the last election, anti-vaxxers wouldn't have been as much of a thing: it would have been "Trump's vaccine."

I think if Trump wins, there's a good chance we see his administration exert leadership on AI (among other things, see Ivanka's two recent tweets and the site she seems to have created herself to educate people about AI safety), and then Republicans will fall in line.

If Kamala wins, I think there's a decent chance Republicans react negatively to AI safety because it gets grouped in with what's perceived as woke bs, which is simply unacceptable to the right. It's essential that AI safety be understood as a totally distinct thing. I don't think left-leaning AI safety people sufficiently appreciate just how unacceptable that association is. A good thought experiment: would Democrats be into AI safety if it also meant banning gay marriage?

I'm fairly confident that most EAs simply cannot model the mind of a Republican (though they often think they can). This leads to planning and strategies that are less effective than they could be. In contrast, to be a right-of-center EA, you also need to effectively model the mind of a left-of-center EA or person (and find a lot of common ground), or you simply couldn't exist in this community. So the few right-of-center EAs (or EAs with previous right-of-center backgrounds) I know are able to think far more effectively about the best strategies for achieving optimal bipartisan end results for AI safety.

Things do tend to become partisan inevitably. I see an ideal outcome potentially being that what becomes partisan is just how much AI safety is paired with "woke" stuff, with Democrats encouraging this and Republicans opposing it. The worst outcome might be that they're conflated and then Republicans, who would ideally exert great leadership on AI x-risk and drive forward a reasonable conservative agenda on it, wind up falling for the Ted Cruz narrative and blocking everything.

More development may at least indirectly hasten the point where lab-grown meat becomes ubiquitous and cheaper than conventional meat.

There's a lot of uncertainty here, since I have no idea how much (if at all) further development would cause this, but if it does, it would mean fewer total moral atrocities.