
LintzA

1463 karma · Joined · Washington, DC, USA

Comments (42)

Interesting! Hadn't read this newsletter yet. Excerpting the text here: "It remains a good idea for readers concerned about tail risks to consider getting a residency permit, or a passport, in countries such as Mexico, Panama, Paraguay, Uruguay, etc., in case the political climate in the US becomes more turbulent."

Thanks for pointing this out. I have a vague sense that Manifold markets can be non-ideal, but I don't really know why or what the biases you're talking about might be. I added a few more questions from other markets to compensate a bit + moved my disclaimer from the footnotes to the top of the section. These are still among the only public forecasts with actual numbers that I know of, so it seems better than nothing, right?

Also, fwiw, some well-informed people I know (whose estimates I can't share) have estimates quite similar to these prediction markets.

This would absolutely be a bad message to use; voters don't care about aid at all. You'd just use the best message-tested material available, which generically moves the needle in the direction you want it to go.

I helped work on this piece along with some other research attempting to assess tractability. I don't think it was obviously the best way to spend money, but it was probably cost-competitive with many top donation opportunities in expectation. There also may have been ways to influence things earlier on if we'd been putting in effort in, say, 2023 (e.g. trying to get Biden out faster).

Politics work is basically always going to be very low-probability, high-reward. In this election I'd argue the expected impacts counterbalanced the low probability of success.

It's true that some EAs work in government, but I think this piece lays out pretty well what that actually looks like, and it doesn't typically involve politics - it's more civil-service-type work. I'm pretty sure I know most EAs who work on direct political work (e.g. elections), and it's quite a small number.

 

That said, yeah, it's good that there are some people working in government and that does help broader EA understand the political situation a little better. 

Agree this is a very thorny problem, and I'm unsure how to deal with it. I suspect there's some degree to which you can balance it usefully, and that it's worth paying some cost of looking partisan, but ultimately it's not really viable to coordinate everyone's level of partisanship.


I think a big part of mitigating the costs is just trying to avoid the sense that you're speaking on behalf of EA or AI safety when talking about partisan stuff.

I made a prediction that foreign aid would be cut significantly in ~September of last year (see below), so it seems there's some degree to which at least some cuts were predictable. I think the intervention I advocated for at the time, stopping Trump from being elected, would have been the most straightforward action to take (and I think the EV of doing that looked alright even from a pure global health perspective).

I didn't predict specific DOGE cuts. That said, if one had, then trying to get the message to Musk that this matters could have been a reasonable action to take (and would have been usefully informed by better political analysis). Plausibly there's some messaging stuff one could do? 

Otherwise, the best thing to do might have been some contingency planning for large aid cuts? I'm not sure how much counterfactual value having those plans would have ended up having, but it seems possible that with some preparation beforehand one could maybe have kept larger parts of PEPFAR alive? Certainly it seemed like the sector was really overwhelmed upon learning of the USAID cuts, and that seems like some indication that more preparation would have been useful.

Overall, I don't feel too strongly that knowing the DOGE cuts were coming would have been super high leverage, but I think it exemplifies politics as something that can have extremely far-reaching impacts on the cause areas EAs care about, even far beyond anything happening internal to the field. The same seems true of AI as well as animal welfare and pandemic prevention. In the extreme, the end of democracy in the US would seem pretty likely to have a bigger negative impact (by far) on all EA cause areas than basically anything one could do internal to a cause area.

Exact prediction about aid cuts (which I made after maybe 1-2 hours of looking into this): 
"If [Harris] spends at the same level as Biden (and Trump reverts to his prior spending), getting her into office would lead to ~$16 billion going to international aid [over the full term] that otherwise wouldn’t have. " 
 

That's my bad: I did say 'automated' when it should have been 'automatable'. I've now corrected it to clarify.

Do you have anything you recommend reading on that?

I guess I see a lot of the value of people at labs happening around the time of AGI and in the period leading up to ASI (if we get there). At that point I expect things to be very locked down, such that external researchers don't really know what's happening and have a tough time interacting with lab insiders. I thought this recent post from you kind of supported the claim that working inside the labs would be good? I.e., surely 11 people on the inside is better than 10 (and 30 far, far better)?

I do agree OS models help with all this, and I guess it's true that we kinda know the architecture and maybe internal models won't diverge in any fundamental way from what's available OS. To the extent OS keeps going, warning shots do seem more likely. I guess it'll be pretty decisive whether the PRC lets Deepseek keep OSing their stuff (I kinda suspect not? But no idea, really).

I guess rather than concrete implications, I should indicate these are more 'updates given more internal deployment', some of which are pushed back against by surprisingly capable OS models (maybe I'll add some caveats).
