During EA Global San Francisco 2017, there was a panel discussion called "Celebrating Failed Projects." At one point, the moderator, Nathan Labenz, asked, "What are some projects that you guys are harboring in the backs of your respective minds that you'd love to see people undertake even if, and maybe especially where, the chance of ultimate success might be pretty low?" In response, Anna Salamon said, "There's a set of books that pretty often change people's lives, especially 18 year old type people's lives, hopefully in good directions. I think it would be lovely to make a list of five of those books and make a list of all the smart kids and mail the books to the smart kids. This has been on the list of obvious things to do for the last ten years but somehow nobody has ever done it. I didn't do it. I don't know. I really wish someone would do it. I think it would be really high impact."
If I had to choose five books related to effective altruism, I would probably choose:
1. Doing Good Better by William MacAskill
2. 80,000 Hours by Benjamin Todd and the 80,000 Hours Team
3. The Life You Can Save by Peter Singer
4. Animal Liberation by Peter Singer
5. Superintelligence by Nick Bostrom
However, I doubt that Salamon meant to limit the selection to books related to effective altruism. If you could choose five books on any topic, which five would you choose?
This is a bit tangential, but do you know if anyone has done an assessment of the impact of HPMoR? Cousin_it (Vladimir Slepnev) recently wrote:
Taking this one step further, it seems to me that HPMoR may have done harm by directing people's attentions (including Eliezer's own) away from doing the hard work of making philosophical and practical progress in AI alignment and rationality, towards discussion/speculation of the book and rational fic writing, thereby contributing to the decline of LW. Of course it also helped bring new people into the rationalist/EA communities. What would be a fair assessment of its net impact?
Back in ~2014, I remember doing a survey of the top-contributing MIRI donors over the previous three years, and a substantial fraction (a quarter?) had first encountered MIRI or EA or whatever through HPMoR. Malo might have the actual stats. It might even be in a MIRI blog post footnote somewhere.
But with respect to research impact, someone could make a list of the 25 most useful EA researchers, or the 15 most useful "AI safety" researchers, or whatever kind of research you most care about, and find out what fraction of them were introduced to x-risk/EA/rationality/whatever through HPMoR.
I don't have a good sense of what the net impact is.