As always, my Forum-posting 'reach' exceeds my time-available 'grasp', so here are some general ideas I have floating around in various states of scribbles, notes, Google Doc drafts etc. Please don't view them as in any way finalised or a promise to write them up fully:
- AI Risk from a Moderate's Perspective: Over the last year my AI risk vibe has gone down, probably lower than that of many other EAs who work in this area. However, I'm also more concerned about it than many other people (especially people who think most of EA is good but AI risk is bonkers). I think my intuitions and beliefs make sense, but I'd like to write them down fully, answer potential criticisms, and identify cruxes at some point.
- Who holds EA's Mandate of Heaven: Trying to look at the post-FTX landscape of EA, especially amongst the leadership, through a 'Mandate of Heaven' lens. Essentially, various parts of EA leadership have lost the 'right to be deferred to', but while some of that previous leadership/community emphasis has taken a step back, nothing has stepped in to fill the legitimacy vacuum. This post would look at potential candidates, and at whether the movement needs something like this at all.
- A Pluralist Vision for 'Third Wave' EA: Ben's post has been on my mind for a long time. I don't at all claim to have the full answer to this, but I think some form of pluralism that counteracts latent totalism in EA may be a good thing. I think I'd personally tie this to proposals for EA democratisation, but I don't want to make that a load-bearing part of the piece.
- An Ideological Genealogy of e/acc: I've watched the rise of e/acc with a mixture of bewilderment, amusement, and alarm over the last year-and-a-half. It seems like a new ideology for a new age, but as Keynes said "Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." I have some academic scribblers in mind, so it would be interesting to see if anything coherent comes out of it.
- EA EDA, Full 2023 Edition: Thanks to cribbing the work of other Forum users, I have metadata for (almost) every EA Forum post and comment published in 2023, along with tag data. I've mostly got it cleaned up, but I need to structure it into a readable product that tells us something interesting about the state of EA in 2023, rather than just chucking lots of graphs at the reader.
- Kicking the Tires on 'Status': The LessWrong community and broader rationalist diaspora use the term 'status' a lot to explain the world (this activity is low/high status, this person is doing this activity to gain high status, etc.), and yet I've almost never seen anyone define what this actually means, or compare it to alternative explanations. I think one of the primary LW posts grounds it in a book about improv theatre? So I might do a deep dive on it, taking an eliminativist/deflationary stance on status and proposing a more idea-focused paradigm for understanding social behaviour.
Finally, updates to the Criticism of EA Criticism sequence will continue intermittently so long as bad criticisms continue or until my will finally breaks.
A post calling for more exploratory altruism, focusing on the discovery costs associated with different potential interventions and the plausible ranges of impact of those interventions.
A public list identifying different unexplored, or underexplored, interventions could be really helpful.
I actually thought about this after listening to Spencer Greenberg's podcast: his observation that we shouldn't think about personal interventions, like whether to try a new drug or adopt a habit, in terms of naive expected value, but rather in terms of variance in effect. Even if a drug's average effect on someone is negative, if some people get a large benefit from it, it is worth testing to see whether you are someone who benefits from it. If it really helps you, you can exploit it indefinitely, and if it hurts you, you can just stop and limit the bad effect.
Likewise, a lot of really good interventions may have low "naive EV"; that is to say, if you were to irrevocably commit to funding them, they would be poor choices. But the better question is: is this an intervention that could plausibly be high EV and have high room for exploitation? What are the costs associated with such discovery? With such an intervention, you could pay the discovery costs and exploit if it turns out to be high EV, and cut losses if it does not. It is worth considering that many promising interventions might look like bad bets at the outset, but still be worth the discovery costs given the ability to capitalize on lottery winners (a toy sketch of this logic is below).
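To make that explore-then-exploit intuition concrete, here's a minimal Python sketch with made-up numbers (the probabilities, impacts, and period count are my own illustrative assumptions, not anything claimed above): an intervention whose naive EV is negative still comes out well ahead once you can pay a one-period discovery cost and only keep funding it if it works.

```python
import random

# Illustrative assumptions only: the intervention's true per-period impact is
# +10 with 10% probability and -2 with 90% probability, over 10 funding periods.
P_GOOD, GOOD, BAD = 0.10, 10.0, -2.0
PERIODS = 10

def draw_true_impact() -> float:
    return GOOD if random.random() < P_GOOD else BAD

def commit() -> float:
    """Irrevocably fund for all periods, whatever the true impact turns out to be."""
    return draw_true_impact() * PERIODS

def explore_then_exploit() -> float:
    """Pay one period as a discovery cost, then continue only if the impact is positive."""
    impact = draw_true_impact()
    remaining = impact * (PERIODS - 1) if impact > 0 else 0.0
    return impact + remaining

def mean(strategy, n: int = 200_000) -> float:
    return sum(strategy() for _ in range(n)) / n

print(f"naive commit EV:         {mean(commit):+.2f}")                # ~ -8.0
print(f"explore-then-exploit EV: {mean(explore_then_exploit):+.2f}")  # ~ +8.2
```

The point of the toy example is just that the option to stop turns a bad unconditional bet into a good conditional one; the discovery cost (one period at the unknown true impact) is what you pay for that option.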
I understand that there are many organizations, like CEARCH and Rethink Priorities (and many others), that are involved in cause prioritization research. But I think if one of them, or another org, were to compile a list focused on search costs and plausible ranges of impact, making it publicly available and easy for the public to contribute thoughts and information to, it could be a very useful tool for spotting promising funding/research/experimentation opportunities.