Working on various aspects of Econ + AI.
Here is a counterargument: focusing on the places where there is altruistic alpha is 'defecting' against other value systems. See discussion here.
Agreed with this. I'm very optimistic about AI solving a lot of incentive problems in science. I don't know whether the end state you mention (full audits) will happen, but I am very confident we will move in a better direction than where we are now.
I'm working on some software now that will help a bit in this direction!
Since it seems like a major goal of the Future Fund is to experiment and gain information on types of philanthropy: how much data collection and causal inference are you doing, or planning to do, on the grant evaluations?
Here are some ideas I quickly came up with that might be interesting.
In all these cases, you'd need to assess grant applications ex post on their impact a few years later, including the ones you didn't fund. These strategies would then let you estimate the causal impact of your grants; a minimal sketch of one such design is below.
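For concreteness, here is a minimal sketch of one standard design in this family: a regression discontinuity around a scoring cutoff, comparing just-funded and just-unfunded applications on their ex-post impact. All the specifics here (the column names `score` and `impact`, the cutoff, the bandwidth) are hypothetical, not from any actual Future Fund data.

```python
# Hypothetical sketch: regression discontinuity around a funding cutoff.
# Assumes a dataframe of grant applications with an evaluator score and
# an ex-post impact measure collected a few years later.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def rd_estimate(df: pd.DataFrame, cutoff: float, bandwidth: float) -> float:
    """Estimate the effect of funding on ex-post impact for applications
    near the cutoff, via a local linear regression with separate slopes
    on each side of the threshold."""
    df = df.copy()
    df["centered_score"] = df["score"] - cutoff
    df["funded"] = (df["score"] >= cutoff).astype(int)
    # Restrict to applications close to the cutoff, where funded and
    # unfunded applicants are plausibly comparable.
    local = df[df["centered_score"].abs() <= bandwidth]
    model = smf.ols(
        "impact ~ funded + centered_score + funded:centered_score",
        data=local,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    return model.params["funded"]

# Simulated example with a true funding effect of 2.0.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 10, 5_000)
impact = 0.5 * scores + 2.0 * (scores >= 5.0) + rng.normal(0, 1, 5_000)
apps = pd.DataFrame({"score": scores, "impact": impact})
print(rd_estimate(apps, cutoff=5.0, bandwidth=1.5))  # should be close to 2.0
```

A real analysis would need bandwidth selection and balance checks; the point of the sketch is the shape of the data you'd have to collect: scores, funding decisions, and ex-post outcomes for all applications, funded or not.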
I'd say it's close, and it depends on which courses you'd be missing with an econ minor instead of a major. If those classes are 'economics of X' classes (such as media or public finance), your time is better spent on research. If they are still in the core (intermediate micro, macro, econometrics, maybe game theory), I'd probably take those before research.
Of course, you are right that admissions committees care a lot about research experience, but it seems the very best candidates have all those classes AND a lot of research experience.
One case where this doesn't seem to apply is an economics Ph.D. For that, it seems taking very difficult classes and doing very well in them is largely a prerequisite for admissions. I am very grateful I took the most difficult classes and spent a large fraction of my time on schoolwork.
The caveat here is that research experience is very helpful too (working as an RA).
People often appeal to an Intelligence Explosion/Recursive Self-Improvement as a win condition for current model developers; e.g., Dario argues Recursive Self-Improvement could entrench the US's lead over China.
This seems non-obvious to me. For example, suppose OpenAI trains GPT-6, which trains GPT-7, which trains GPT-8. A fast follower could then take GPT-8 (say, via the API or leaked weights) and use it to train GPT-9. In this case, the fast follower ends up with the lead while having spent far less on R&D, since they didn't have to develop GPT-6, GPT-7, or GPT-8 themselves.
I guess people are thinking that OpenAI will be able to ban GPT-8 from helping competitors? But has anyone argued why they would be able to do that, either legally or technically?