Given an aligned AGI, what is your point estimate for the TOTAL (across all human history) cost in USD of having aligned it?
To hopefully spare you a bit of googling without unduly anchoring your thinking: Wikipedia puts the Manhattan Project at $21–23 billion in 2018 USD, with only about 3.7% ($786M) of that going to research and development.
Q1: How closely does MIRI currently coordinate with the Long-Term Future Fund (LTFF)?
Q2: How effective do you currently consider [donations to] the LTFF relative to [donations to] MIRI? A decimal coefficient is preferred if you feel comfortable guessing one.
Q3: Do you expect the LTFF to become more or less effective relative to MIRI as AI capability/safety progresses?
Low neglectedness can be outweighed by high importance or tractability. The hard part is being confident about tractability and room for more funding. Even so, I think one can make space for importance-focused efforts despite this uncertainty, especially given that rival actors are incentivized to increase that uncertainty.
EA insights could be a valuable complement to existing ecosystems. Precisely because large political organizations have established roles to maintain, they may face operational or epistemic limitations. The analogy to large health charities that EAs have critiqued for low marginal impact is easy to draw.