As always, my Forum-posting 'reach' exceeds my time-available 'grasp', so here are some general ideas I have floating around in various states of scribbles, notes, and Google Doc drafts. Please don't view them as in any way finalised, or as a promise to write them up fully:
- AI Risk from a Moderate's Perspective: Over the last year my AI risk vibe has gone down, probably to a level lower than that of many other EAs who work in this area. However, I'm also more concerned about it than many other people (especially those who think most of EA is good but AI risk is bonkers). I think my intuitions and beliefs make sense, but I'd like to write them down fully, answer potential criticisms, and identify cruxes at some point.
- Who Holds EA's Mandate of Heaven: Trying to look at the post-FTX landscape of EA, especially amongst the leadership, through a 'Mandate of Heaven' lens. Essentially, various parts of EA leadership have lost the 'right to be deferred to', but while some of that previous leadership/community emphasis has taken a step back, nothing has stepped in to fill the legitimacy vacuum. This post would look at potential candidates, and at whether the movement needs something like this at all.
- A Pluralist Vision for 'Third Wave' EA: Ben's post has been on my mind for a long time. I don't at all claim to have the full answer to this, but I think some form of pluralism that counteracts latent totalism in EA may be a good thing. I think I'd personally tie this to proposals for EA democratisation, but I don't want to make that a load-bearing part of the piece.
- An Ideological Genealogy of e/acc: I've watched the rise of e/acc with a mixture of bewilderment, amusement, and alarm over the last year-and-a-half. It seems like a new ideology for a new age, but as Keynes said "Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." I have some academic scribblers in mind, so it would be interesting to see if anything coherent comes out of it.
- EA EDA, Full 2023 Edition: Thanks to cribbing the work of other Forum users, I have metadata for (almost) every EA Forum post and comment published in 2023, along with tag data. I've mostly got it cleaned up, but I need to structure it into a readable product that tells us something interesting about the state of EA in 2023, rather than just chuck lots of graphs at the reader (see the rough sketch after this list for the kind of summarising I mean).
- Kicking the Tires on 'Status': The LessWrong community and broader rationalist diaspora use the term 'status' a lot to explain the world (this activity is low/high status, this person is doing this activity to gain high status, etc.), and yet I've almost never seen anyone define what it actually means, or compare it to alternative explanations. I think one of the primary LW posts grounds it in a book about improv theatre? So I might do a deep dive on it, taking an eliminativist/deflationary stance on status and proposing a more idea-focused paradigm for understanding social behaviour.
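For the EDA idea above, here's a minimal sketch of the sort of summarising I have in mind rather than 'lots of graphs'. The file and column names (ea_forum_posts_2023.csv, post_id, posted_at, karma, tags) are placeholders for whatever the cleaned-up data actually looks like:

```python
# Rough sketch, not the actual pipeline: boil the 2023 post metadata down to a
# couple of headline tables instead of a pile of charts. All file and column
# names here are hypothetical.
import pandas as pd

posts = pd.read_csv("ea_forum_posts_2023.csv", parse_dates=["posted_at"])

# Monthly posting volume and median karma: one small table instead of twelve plots.
monthly = (
    posts.assign(month=posts["posted_at"].dt.to_period("M"))
         .groupby("month")
         .agg(n_posts=("post_id", "count"), median_karma=("karma", "median"))
)

# Share of 2023 posts under each of the ten most-used tags, assuming the 'tags'
# column has already been parsed into a list of tag names per post.
top_tag_share = (
    posts.explode("tags")
         .groupby("tags")["post_id"]
         .count()
         .sort_values(ascending=False)
         .head(10)
         .div(len(posts))
)

print(monthly)
print(top_tag_share.round(3))
```

The point is just that a few aggregated tables like these are easier to draw conclusions from than dozens of raw plots; the actual write-up would build the narrative around them.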
Finally, updates to the Criticism of EA Criticism sequence will continue intermittently so long as bad criticisms continue or until my will finally breaks.
I have a few ideas mulling in my head, though I've yet to decide whether posting them would be useful. I'm unsure partly because I don't know how popular the 'listicle' format would be compared to my normally very long and detailed posts/comments. These are:
Title: Top 5 Lessons from Working in Frontline AI Governance
Summary: Things I've picked up regarding AI, risk, harms, etc. from working in industry in an AI governance role. Might end up being 10 things. Or 3. Depending on how it goes.
Title: Top 5 Tips for Early-Career AI Governance Researchers
Summary: Similar to the above, things I wish I had known when I was an ECR.
Title: Why non-AGI/ASI systems pose the greatest long-term risks
Summary: Using a comparison to other technologies, some of my ideas as to why more 'normal' AI systems will always carry more risk and capacity for harm, and why EA's focus on 'super-AI' is potentially missing the wood for the trees.
Title: 5 Examples of Low-Hanging Policy Fruit to Reduce AI Risk
Summary: Again in a similar listicle style, good research areas or policy angles that would be impactful 'wins' relative to the resources invested.
I think listicles would be a great style of post for Draft Amnesty (if you're interested). I'd be keen to see any of the listicles, and your 4th idea would be great to see as a more fleshed-out argument (though it's another one that could be a quickly stitched-together take to post on Draft Amnesty, possibly with a request for feedback in the comments before you do a full draft).