
As part of Effective Self-Help’s research into the most effective ways people can improve their wellbeing and productivity, we’ve compiled more than 100 practical productivity recommendations from 40 different articles written by the EA community.

The result is this database on Airtable. Please take a look and see what you find useful!

To the best of our knowledge, this table includes the recommendations made in every prior post offering productivity advice written on the EA Forum, LessWrong, or by someone connected to the effective altruism and rationality communities. You can view the list of articles included here. Let us know if there are articles we’ve missed.

 

A small taste of the discoveries awaiting you…

Link here: https://airtable.com/shrxxQ805blyYMyLH 

This database is intended as a living document and remains a work in progress. We would love to hear your thoughts on how we can make this most useful, whether in the comments or via a private message. 

In the next fortnight, we will publish a preliminary report into increasing productivity, synthesising and more thoroughly evaluating many of the recommendations included in this database.

The rest of this post provides an explanation of how the database works, why we made it, and a few of its potential limitations.

 

How does this table work?

As a whole, the table has 110 recommendations across 9 categories. If you’re unfamiliar with using Airtable, we recommend this short explainer. Key points to note are the following:

  • Many of the cells contain more information than is visible at the top level. Click on a given cell and then the arrow in the top right to see everything written there.
  • Airtable allows for easy filtering and sorting. Most usefully, you can:
  • organise the results by category (including filtering out any you're not interested in)
    • rank them by cost (either per-month or one-off)


We’ve split the recommendations into the following rough categories:

  • Mental/Physical health
  • Working efficiently (helps you work faster)
  • Working effectively (helps you prioritise better/ work on the most important thing)
  • Distraction blocker (helps minimise time spent off-task)
  • Extensions (software or browser add-ons)
  • Security (file and account backups/ safety)
  • Coaching/ Training
  • Finances
  • Misc.

 

Why did we build this table?

Productivity as a mechanism for increasing impact

If you’re reading this article on the EA Forum, it’s a fairly safe bet that you’re either currently working on one of the world’s most pressing problems or seriously aspire to do so in the future. By increasing your productivity, you can increase both the quantity and the value of your output for a given day/week/year of work. In doing so, you increase the eventual impact of your work.
 

Theory of Change: Recommendation implemented -> Increased work output (more work done per day and/or value of output increased) -> Increased impact.
 

While the gains we can expect from many of these changes are very small, many also seem very cheap and easy to implement. Stacked together, implementing several recommendations could produce notable increases in your productivity, and by consequence, your impact. For a rough estimate of how this may translate into impact, see this Guesstimate model. For a note of caution on translating increased productivity directly into increased impact, see this comment.
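To see how several small gains might stack, here is a rough back-of-the-envelope sketch in Python. The gain figures are invented for illustration only; they are not taken from the database or the Guesstimate model linked above:

```python
# Illustrative only: hypothetical fractional productivity gains from
# three small, cheap changes (e.g. a distraction blocker, better
# lighting, a weekly prioritisation habit).
gains = [0.02, 0.03, 0.015]

# Assuming the gains are independent, they compound multiplicatively.
combined = 1.0
for g in gains:
    combined *= 1 + g

extra_fraction = combined - 1

# Over a ~2,000-hour working year, this is roughly the number of
# extra effective hours gained.
extra_hours_per_year = extra_fraction * 2000

print(f"Combined gain: {extra_fraction:.1%}")
print(f"Extra effective hours/year: {extra_hours_per_year:.0f}")
```

Even with individually modest (2–3%) gains, the stacked total here works out to over 6%, or more than a hundred effective hours a year — which is the intuition behind trying several cheap recommendations rather than hunting for one silver bullet.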

 

Productivity is trainable

“In low-complexity jobs, the top 1% averages 52% more [productivity] than the average employee. For medium-complexity jobs this figure is 85%, and for high-complexity jobs it is 127%” (Hunter et al., 1990)  

We believe these significant differences in productivity are largely trainable. Given that very few people receive any formal education in working effectively and efficiently, it seems highly likely that there is low-hanging fruit for improving their productivity. This database is an attempt to identify that low-hanging fruit. 


 

A few potential issues worth noting

The database is off-puttingly long/ dense/ hard to navigate

Fair enough! The database is definitely a work in progress and could likely be better organised. Let us know in the comments if you have ideas for how we could improve it.

We’re also currently finalising a report highlighting what appear to be the most useful and/or cost-effective recommendations for increasing your productivity. You can sign up to our newsletter if you’d like to ensure you see this once it’s published. Otherwise, keep your eyes peeled for us publishing this on the Forum in about two weeks’ time!

 

How do I know which recommendations are most worthwhile?

We hope to add quick estimates of cost-effectiveness ($ per hour saved) for each recommendation in the near future. 

For now, we’d encourage you to apply a rough, intuitive version of the ITN framework:

  • Importance: how big a difference does it seem like this would make to my productivity?
  • Tractability: how cheap and/or easy would it be for me to do this?
  • Neglectedness: how weak/ strong am I already at optimising in this area?
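If you like making such intuitions explicit, you could jot down 1–5 scores for each factor and combine them. A minimal sketch of this, in Python — the recommendations and scores below are invented examples, not our assessments:

```python
# Hypothetical 1-5 scores: (importance, tractability, neglectedness).
# Higher neglectedness = you currently do little in this area.
recs = {
    "Distraction blocker": (3, 5, 4),
    "Ergonomic chair":     (2, 3, 5),
    "Time tracking":       (4, 2, 3),
}

# A simple multiplicative combination, mirroring how ITN factors
# are usually multiplied rather than summed.
scores = {name: i * t * n for name, (i, t, n) in recs.items()}

# Print recommendations from highest to lowest combined score.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

The exact numbers matter much less than the ranking they produce; the point is just to force a quick, explicit comparison before committing time or money.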

 

The database does not include every recommendation made in each article we reviewed. 

This is for two primary reasons:

  1. Avoiding duplication
  2. Focusing on productivity
  • Many of the recommendations made in these articles are for products that may bring small increases to your happiness/ satisfaction but that we feel are unlikely to increase your productivity.
  • Hopefully, this narrower focus helps make the database more useful. At the very least, it made it substantially easier to complete. We encourage you to take a look at the articles we reviewed for yourself. You can find them linked in the database or in this spreadsheet.

 

Product links have generally been taken directly from the article where we found the recommendation. This means they are predominantly a mixture of US and UK websites and/or currencies.

We’d love to provide links tailored to specific countries so that implementing the recommendations is as simple as possible. Sadly, this just isn’t currently feasible with the time we have available.

 

The costs (per month and/or one-off) are inaccurate

For similar reasons as above, many of the product costs are only rough estimates. These are generally based on the first product we found or on the specific product recommended in the original article. Given this, please take the figures with a good few grains of salt.
 

Why have you just included articles written by people involved in EA/ Rationality/ similar?

About 90% of the articles included are from people involved in or adjacent to the EA and Rationality communities. Needless to say, there are almost certainly very high-quality recommendations from people unrelated to these communities that we are missing. However, we made this choice for a couple of key reasons: 

  1. Scope. Limiting this to EA/ Rationality authors kept building this database to a manageable task size.
  2. Tailored value. If we accept the premise that people in EA and Rationality often think and act similarly to each other, it seems reasonable that you, dear reader, may benefit most from recommendations made from within these communities.

Perhaps with more time and resources, the database could be expanded to include recommendations from a much wider and more diverse range of sources.

 

Who are Effective Self-Help, anyway?

We’re a small research organisation set up in November 2021 to offer more effective productivity and wellbeing advice. Up to now, we’ve been funded by pilot grants from the EA Infrastructure Fund. Take a look at our website or this post introducing the project for more information about what we do, what we hope to do, and why.

 

A final request…

If you find a recommendation you like and want to quickly help with our work, please fill out this 1-minute form letting us know what you’re planning to/ have already done. We can then send you a follow-up email in a month with a separate super-short form to estimate how useful this practice has been to you. 

Understanding the real-world effectiveness of providing resources and recommendations like these is hard. Collecting data on how useful any of our recommendations are to you makes this easier.

 

Acknowledgements

Thank you to everyone from whose articles I have drawn the recommendations in this database, and to the many helpful commenters on those articles. This includes (in no particular order): 

Max Carpendale; Sam Bowman; Ryan Carey; Dwarkesh Patel; Rosie Campbell; Michael Aird; Melissa Neiman; Rachel Moskowitz; Mark Xu; Patrick Stadler; Neil Bowerman; Akash Wasil; Milan Cvitkovic; Will Bradshaw; Rob Wiblin; Arden Koehler; Daniel Frank; Katja Grace; Darmani; Michelle Hutchinson; Lynette Bye; Joey Savoie; Peter Wildeford; Daniel Kestenholz; Adam Zerner; Kat Woods; Elizabeth Van Nostrand; Aaron Bergman; Alexey Guzey; Jose Ricon; Philip Storry; Scott Alexander; Gavin Leech; David Megins-Nicholas; Yuri Akapov.
 

Thanks as well to Manon Gouiran and Simon Newstead for their help in reviewing this database, and to Michael Aird for first prompting my interest in building this.


 

Comments (11)

Thanks for making this!

I would recommend ublock origin instead of adblock plus: it blocks more things, is faster, and just works better.

It's also the recommended ad blocker in the LW post you link to, so I'm not sure where the recommendation for Adblock Plus comes from.

This is cool! Thanks for compiling. I really love Focusmate, glad to see it included. 

I wonder if it would be possible to allow people to vote for different recommendations so you could sort by # of endorsements? Just as a quick way to see which tools have been useful to the most people. 

Thanks!

And yes, I completely agree that a voting system would be a nice addition. To the best of my knowledge and research, I couldn’t find any way of doing this through Airtable.

Would happily add this though if someone can tell me how.

Don't know how to use Airtable, but a quick googling led me to this. The last reply (by kuovonne) in the linked thread seems useful.

Cheers for this! Think I’d skimmed the top of that thread and missed the last reply you highlighted.

Looks a little clunky but worth adding.

In your section on “which recommendations are most worthwhile?” you mention using the ITN framework. While this probably makes for efficient communication since many readers are familiar with it, I have some qualms with applying the ITN framework to actual decisions. Per the framing you described (which I get is meant to be simple and intuitive, but nonetheless), neglectedness seems obsolesced by importance, and tractability would probably also be obsolesced except that it tries taking into account implementation costs (while not addressing other potential disadvantages?).

Personally, if more people were familiar with it, I would probably recommend an approach more like the COILS framework that I’ve written about:

  1. How efficient at X would I be without using this framework, or how prone to making Y mistake would I be without this framework?
  2. What does my expected usage/implementation of this framework look like (given cognitive or familiarity constraints)?
  3. How efficient would I be at X or prone to making Y mistake while using this framework?
  4. How good is an improvement in X / a reduction in Y?

In reality, your description of the ITN framework seems a bit different from normal interpretations, which leads to it looking more like COILS. However, my loose sense is that it’s probably better to formally recognize the limitations of ITN for certain contexts (e.g., specific decisions) and explicitly identify/use alternate frameworks, rather than using “sort-of-ITN.”

This looks great. Thanks for making this!

I love the idea of this but I wish the recommendations were more evidence-based. For example I was a bit disappointed to see blue-light blocking glasses on the list given that

A recent study suggested that blue light-blocking glasses do not improve symptoms of digital eye strain. The American Academy of Ophthalmology does not recommend blue light-blocking glasses because of the lack of scientific evidence that blue light is damaging to the eyes.

Here is some info related to recommendations #1 and #9 about lighting. https://meaningness.com/sad-light-led-lux David Chapman has put quite a bit of research into maximizing lux per dollar. I intend to try a similar setup to him with some Jeep lights and a voltage transformer to get 40,000-60,000 lux in my room. I'm going to add a smart plug so I can use an Apple Shortcuts automation to have the lights turn on with my wake-up alarm.


I think you should have different entries for each one of the top nootropics, instead of a single one. I expect the effect on productivity to vary a lot between them.
