CEA is pleased to announce the winners of the December 2019 EA Forum Prize!
In first place (for a prize of $750): “Thoughts on doing good through non-standard EA career pathways,” by Buck Shlegeris.
In second place (for a prize of $500): “Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels?,” by Luisa Rodriguez.
In third place (for a prize of $250): “Managing risk in the EA policy space,” by weeatquince.
The following users were each awarded a Comment Prize ($50):
- Linchuan Zhang, for adding many useful contextual comments to his own post on climate change
- Michelle Hutchinson on the trap of feeling obliged to do “all the [EA] things”
- Matt Lerner’s job-hunting advice
- Gwern and David Althaus on the genetics of personality
- Maciek Zajac on lethal autonomous weapons (while the top-level post was published in November, the comment was published in December)
See this post for the previous round of prizes.
What is the EA Forum Prize?
Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum's users.
About the winning posts and comments
Note: I write this section in the first person, based on my own thoughts, rather than attempting to summarize the views of the other judges.
Thoughts on doing good through non-standard EA career pathways
Many people who want to do a lot of good are pursuing or will pursue careers that aren’t among the top suggestions of 80,000 Hours. Thus, it seems highly valuable to consider ways in which people can aim to increase their impact across a wide range of career paths.
Aside from tackling a promising topic, Buck’s post also does some specific things I like:
- He begins with an extended quote from another article, and later spends a lot of time addressing the particular example cited in that quote (the potential impact of anthropology). When content directly expands upon previous content, that's often a sign that we're making progress in an area. I'd love to see more posts that explore the deeper implications of ideas stated briefly in previous EA material.
- He presents most of his concrete advice in list form, making it relatively easy for someone to revisit this post and skim through it to find suggestions that might be applicable to their current position.
- He carefully points out the ways in which his advice does and doesn’t break with “standard” advice. For example, while he discusses ways to make an impact in careers that aren’t standard 80,000 Hours recommendations, he also notes that doing so might be a lot harder, and that there are still strong reasons to consider recommended positions.
One thing I’d have been interested to see: More real-world examples of people in the community who have done a lot of good through unusual career paths. This could have provided evidentiary support (or the opposite) for some of the ideas Buck presented.
Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels?
Many Forum posts try to cover very broad topics and wind up struggling to do them justice — not because the authors haven’t done excellent work (they often have), but because the world is enormously detailed and complex.
This post, on the other hand, aims at a question that… is still detailed and complex, but also sufficiently narrow that the author can tackle what seem to be most of the key sub-questions.
Particular aspects I liked:
- The “questions that could be interesting to explore” section at the end. More posts should have these!
- The summary that opens the post, which describes both the author’s process and her conclusions. Again: more posts should have these!
- The formulation of explicit probabilities for various claims (you get the idea)
I do wish, however, that the post had been a bit clearer on how this question fits into Luisa's overall perspective on nuclear risk, and what her conclusions here might imply for EA-aligned work in this space. (Of course, those might come up in some future post!)
Managing risk in the EA policy space
As more EA energy goes into policy change, the community will benefit from having good heuristics about making change happen. I appreciate the author’s focus on this important goal, as well as:
- Their use of bold text to call attention to important points.
- Their realistic approach to risk management, and their acknowledgement that risk can't be entirely removed from political advocacy. (I sometimes see ideas pushed back on because they are “risky”, without much consideration of how those risks might be reduced, or how they actually affect the idea's expected value.)
- Their willingness to call out specific ideas as being risky, and to explain the risks (rather than inventing a sample idea that doesn't necessarily have actual supporters in the community, which is the approach I might have taken, and which I think wouldn't have worked as well).
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by five people:
- Aaron Gertler, a Forum moderator.
- Two of the highest-karma users at the time the new Forum was launched (Peter Hurford and Rob Wiblin).
- Two users who have a recent history of strong posts and comments (Larks and Khorton).
All posts published during the month of December qualified for voting, except those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which accrued zero or negative net karma after being posted
  - Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though these broadly align with the goals outlined above.
Additionally, one of our judges (Larks) chose to withdraw his post “2019 AI Alignment Literature Review and Charity Comparison” from consideration.
Judges each had ten votes to distribute among the month's posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 votes to cast this month. No judge could cast more than three votes for any single post.
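To make the allocation rule concrete, here's a minimal sketch in Python. The function name and structure are my own illustration, not anything CEA actually runs:

```python
def votes_available(votes_cast_last_month: int) -> int:
    """Total votes a judge may cast this month: a base of 10,
    plus extra votes equal to however many of last month's
    10 votes went unused."""
    base_votes = 10
    extra_votes = 10 - votes_cast_last_month
    return base_votes + extra_votes

# A judge who cast 7 votes last month has 13 votes this month.
assert votes_available(7) == 13

MAX_VOTES_PER_POST = 3  # no judge may give a single post more than 3 votes
```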
——
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to evaluate the winners beforehand and veto comments they didn’t think should win.
Feedback
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact Aaron Gertler.
——
Larks' post was one of the best of the year, so it's nice of him to effectively donate several hundred dollars to the EA Forum Prize!
You're welcome! It was a tough decision, as I did find it quite motivating last year, but figured it would create the appearance of a conflict of interest if it won this year.
Would it have been reasonable for you to have been secretly part of the process, or something along those lines?
Some options: I'd be curious how the signaling or public value of the explanation “Person X would have won 1st place, but removed themselves from the running” would compare to that of “Person X won 1st place, but gave up the cash prize.”