"American UBI: for and against"
"A brief history of Rosicrucianism & the Invisible College"
"Were almost all the signers of the Declaration of Independence high-degree Freemasons?"
"Have malaria case rates gone down in areas where AMF did big bednet distributions?"
"What is the relationship between economic development and mental health? Is there a margin at which further development decreases mental health?"
"Literature review: Dunbar's number"
"Why is Rwanda outperforming other African nations?"
"The longtermist case for animal welfare"
"Philosopher-Kings: why wise governance is important for the longterm future"
"Case studies: when has democracy outperform technocracy? (and vice versa)"
"Examining the tradeoff between coordination and coercion"
"Spiritual practice as an EA cause area"
"Tools for thought as an EA cause area"
"Is strong, ubiquitous encryption a net positive?"
"How important are coral reefs to ocean health? How can they be protected?"
"What role does the Amazon rainforest play in regulating the North American biosphere?"
"What can the US do to protect the Amazon from Bolsonaro?"
"Can the Singaporean governance model scale?"
"Is EA complacent?"
"Flow-through effects of widespread addiction"
I think that chapter in The Precipice is really good, but it's not exactly the sort of thing I have in mind.
Although Toby's less optimistic than I am, he's still only arguing for a 10% probability of existentially bad outcomes from misalignment.* The argument in the chapter is also, by necessity, relatively cursory. It's aiming to introduce the field of artificial intelligence and the concept of AGI to readers who might be unfamiliar with them, explain what misalignment risk is, make the idea vivid, clarify misconceptions, describe the state of expert opinion, and add various other nuances, all within the span of about fifteen pages. I think it succeeds very well at what it's aiming to do, but I would say it's aiming for something fairly different.
*Technically, if I remember correctly, it's a 10% probability within the next century, so the implied overall probability is at least somewhat higher.