Hi everyone! I'm Tom Chivers, and I'll be doing an AMA here. I plan to start answering questions on Wednesday 17 March at 9am UK: I reckon I can comfortably spend three hours doing it, and if I can't get through all the questions, I'll try to find extra time.
Who I am: a science writer, and the science editor at UnHerd.com. I wrote a book, The Rationalist's Guide to the Galaxy – originally titled The AI Does Not Hate You – in 2019, which is about the rationalist movement (and, therefore, the EA movement), and about AI risk and X-risk.
My next book, How to Read Numbers, written with my cousin David, who's an economist, is about how stats get misrepresented in the news and what you can do to spot it when they are. It's out on March 18.
Before going freelance in January 2018, I worked at the UK Daily Telegraph and BuzzFeed UK. I've won two "statistical excellence in journalism" awards from the Royal Statistical Society, and in 2013 Terry Pratchett told me I was "far too nice to be a journalist".
Ask me anything you like, but I'm probably going to be best at answering questions about journalism.
If you haven't spent time on calibration training, I recommend it! Open Phil has a tool here: https://www.openphilanthropy.org/blog/new-web-app-calibration-training. Making good forecasts is a mix of 'understand the topic you're making a prediction about' and 'understand yourself well enough to interpret your own feelings of confidence'. I think most people can become pretty well-calibrated with an hour or two of practice, even if they don't have much expertise in the topic they're writing about.
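(To make 'well-calibrated' concrete, here's a minimal sketch, not from the tool above, of how you might check your own calibration: record each prediction as a stated probability plus whether it came true, group them by confidence bucket, and compare each bucket's hit rate with the confidence you stated. The data below is made up for illustration.)

```python
# Minimal calibration check: bucket predictions by stated confidence and
# compare each bucket's actual hit rate with the confidence stated.
from collections import defaultdict

# Hypothetical example data: (probability you assigned, whether it came true)
predictions = [
    (0.6, True), (0.6, False), (0.7, True), (0.7, True),
    (0.9, True), (0.9, True), (0.9, False), (0.5, False),
]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[round(confidence, 1)].append(outcome)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    # Well-calibrated means the hit rate tracks the stated confidence:
    # roughly 70% of your "70% confident" claims should come true.
    print(f"stated {confidence:.0%}: {hit_rate:.0%} came true "
          f"({len(outcomes)} predictions)")
```

With enough predictions logged, a persistent gap (say, "90%" claims coming true only 60% of the time) is the overconfidence that calibration practice helps correct.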
And that's a valuable service in its own right, I think. It would be a major gift to the public even if the only take-away readers got from predictions at the end of articles were 'wow, even though these articles sound confident, the claims almost always tend to be 50% or 60% probable according to the reporter; guess I should keep in mind these topics are complex and these articles are being banged out in a few hours rather than being the product of months of study, so of course things are going to end up being pretty uncertain'.
If you also know enough about a topic to make a calibrated 80% or 90% (or 99%!) prediction about it, that's great. But one of the nice things about probabilities is just that they clarify what you're saying -- they can function like an epistemic status disclaimer that notes how uncertain you really are, even if it was hard to make your prose flow without sounding kinda confident in the midst of the article. Making probabilistic predictions doesn't have to be framed as 'here's me using my amazing knowledge of the world to predict the future'; it can just be framed as an attempt to disambiguate what you were saying in the article.