
Introduction

I used a recent Ask-Me-Anything (AMA) of Rethink Priorities to ask a series of questions about research in general (not limited to Rethink Priorities).

I’m posting these here separately to make them more visible. I’m not personally looking for more answers at this point, but if you think that readers would benefit from another perspective, I’d be delighted if you could add it.

Question

If you want to research a particular topic, how do you balance reading the relevant literature against thinking for yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. (See this theory of mine.) Would you agree, or do you have a different approach?

Jason Schukraft

I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.

David Bernard

Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you reproduce something that someone else has already done and published, then great: you’ve gotten experience solving a problem, and you’ve shown that you can think through it at least as well as some expert in the field. If it turns out that you have produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.

That said, I think this is more true of theoretical work than applied work; e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds, and it seems pretty difficult to do that by just thinking to yourself!

Michael Aird

I don’t think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should.

I’m somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching of relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas.

I feel like EA might have a bit too much of a tendency towards “think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it.” It might be that, often, people could get to similar ideas faster, and in a way that connects better to existing work (making it easier for others to find, build on, etc.), by doing some extra reading first.

Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I’m tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc.

(On this general topic, I liked the post The Neglected Virtue of Scholarship.)

Less important personal ramble:

I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that.

But then I’ve repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it’s such an easily checkable thing!) And I’ve also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output.

So maybe that feeling that I’m spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I’d (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking “Is this how I’d treat a friend?” in response to negative self-talk [source with related ideas].)

Alex Lintz

I’ve been playing around with spending 15–60 minutes sketching out a quick model of what I think about something before starting in on the literature (by no means something I do consistently, though). I find it can be quite nice and can help me ask the right questions early on.

(If one of the answers is yours, you can post it below, and I’ll delete it here.)

Answers

Thanks for writing this! I think about this a lot, and this helped clarify the problem for me.

The problem can be summarized as a couple of competing forces. On one side, there's not wanting to reinvent the wheel: humanity makes progress by standing on the shoulders of giants.

On the other side, there's 1) avoiding anchoring (not getting stuck in how people in the field already think about things) and 2) the benefits of having your own model (it forces you to think actively and helps guide your reading).

The problem we're trying to solve is how to get the benefits of both.

One potential solution is to start off with a small amount of thinking on your own, as Alex Lintz described, then spend time consuming existing knowledge. After that you can alternate between creating and consuming: start with the bulk of your time spent consuming, with short periods of creating interspersed throughout, and let the time spent creating grow longer and longer as you progress.

Schools already work this way to a large extent. Most of your time as an undergraduate is spent simply reading the existing literature, with only occasional novel contributions. Then, as a PhD student, you focus mostly on making new contributions.

However, I do think that formal education does this suboptimally. Thinking creatively is a skill, and like all skills, the more you practice, the better you get. If you've spent the first 16 years of your education more or less regurgitating pre-packaged information, you're not going to be as good at coming up with new ideas once you're finally in a position to do so as you would have been if you'd practiced along the way. This definitely cross-applies to EA.


I lean toward: When in doubt, read first and read more. Ultimately it's a balance and the key is having the two in conversation. Read, then stop and think about what you read, organize it, write down questions, read more with those in mind.

But thinking a lot without reading is, I'd posit, a common trap that very smart people fall into. In my experience, smart people trained in science and engineering are especially susceptible when it comes to social problems: sometimes because they explicitly don't trust "softer" social science, and sometimes because they don't know where to look for things to read.

And that's key: where do you go to find things to read? If, like me, you suspect there's more risk of under-reading than under-thinking, then it becomes extra important to build better tools for finding the right things to read on a topic you're not yet familiar with. That's a challenge I'm working on, and one with plenty of easy room for improvement.

Yeah, I broadly share those views.

Regarding your final paragraph, here are three posts you might find interesting on that topic:

(Of course, a huge amount has also been written on that topic by people outside of the EA and rationality communities, and I don't mean to imply that anyone should necessarily read those posts rather than good things written by people outside of those communities.)

Comments

I just want to thank you for taking the time to make this sequence. I think the format is clear and beautiful, and I'm interested to learn more about EA researchers' approaches to doing research.

Thank you! Also for the answer on the first question! (And thanks for encouraging me to go for this format.)
