
Edit 11 Feb 2022: Jeremy made a post about starting a low-commitment LW Article Club, where he'll be linkposting articles from this list on a weekly basis for people to engage with!

Context / Motivation: 

  • I've recently been thinking a lot about how we could share ideas from the rationalist community with EA and related subcommunities (perhaps communicating the same ideas in different ways).
  • I've been diving into LessWrong recently and remembered why I hadn't been for a while - it's really overwhelming. Even with the sequences and curation, it's a lot of content, and it's not always obvious to me which posts I'd find most valuable.
  • I think it's better to read fewer posts in more depth to properly understand them.
  • I think it's likely that some posts or ideas will be much more relevant to EAs than others, but I'm not sure which ones.

My ask:

  • I'd be interested in recommendations for standalone posts (e.g. All debates are bravery debates), specific concepts (e.g. Schelling fence or double crux), or specific sequences (e.g.)
  • If you have time, I'd love to know why it's valuable to you.
Answers

I'm a big fan of some of the early LessWrong content, e.g.

More generally, I'd recommend much of the content by Scott Alexander ("Yvain"), Paul Christiano, Wei Dai, Gwern, Greg Lewis ("Thrasymachus"), Anna Salamon and Carl Shulman (I'm probably forgetting other names).

Privileging the Question changed my life in college. I don't know how useful it would be for the average person already involved in EA, but it played a huge role in my not getting distracted by random issues and controversies, and instead focusing on big-picture problems that weren't as inherently interesting. I'd at least recommend it to new members of university EA groups, if not "most community members".

This got me to leave my girlfriend and has remained a permanent way that I think:

https://www.lesswrong.com/posts/627DZcvme7nLDrbZu/update-yourself-incrementally

 

I read it as part of all the sequences, so I have no idea how helpful it will be to others or as a standalone post.

My take is that LessWrong is best understood as a mix of individual voices, each with their own style and concerns. The approach I'd recommend is to select one writer whose voice you find compelling, and spend some time digesting their ideas. A common refrain is "read the sequences," but that's not where I started. I like John Wentworth's writing.

Alternatively, you might find yourself interested in a particular topic. LessWrong's tags can help you both find an interesting topic and locate relevant posts, though the tag system isn't super fine-grained or comprehensive.

One of the key sources of value on LessWrong is that it provides a common language for some complex ideas, presented in a relatively fun and accessible format. The combination of all those ideas can elevate thinking, although it's no panacea. My intuition is that it's best to slowly follow your curiosity over a period of a few years, rather than trying to digest the whole thing all at once or picking out a couple of highlights.

Any particular Wentworth posts that stand out to you? I'd like to include some in the LCLWBC (full credit to you for the name!), but I'm not too familiar with his work.

John had several posts highly ranked in the 2020 LessWrong review, and one in the 2019 LessWrong review, so there's a community consensus that they're good. There was also a 2018 LessWrong review, though John didn't place there.

In general, the review is a great resource for navigating more recent LW content. Although old posts are a community touchstone, the review surfaces posts that reflect the live interests of community members and that have been extensively vetted, not only for being exciting but for maintaining their value a year later.

Jeremy
Thank you!

I really like Ends Don't Justify Means (Among Humans) and think it's a bit underrated. (In that I don't hear people reference it much.)

I think I find the lesson generally useful: that in some cases it can be bad for me to "follow consequentialism" (because in some cases I'm an idiot), without consequentialism itself being bad.

The noncentral fallacy nicely categorizes a very common source of ethical disagreement in my experience.

[Edit:] Somewhat more niche, but considering how important AI risk is to many EAs, I'd also recommend Against GDP as a metric for timelines and takeoff speeds, for rebutting what is in my estimation a bizarrely common error in forecasting AI takeoff.

Comments

Maybe the thing to do would be to start a low-commitment LW book club? There are so many old posts that it doesn't feel fresh to comment on them, but having a way to put some group attention on a couple of posts at a time might help.

I made a separate post to get the ball rolling and make sure this happens. 

Would love to do this!

Agreed - would love to participate in something like this, and would encourage other group members (esp. organizers) to as well!

I'd be interested in something like this. 

I'd also be interested in pursuing this idea! LW can definitely be overwhelming, and it'd be a fun (and useful) project to take a deep dive and perhaps produce a recommended reading list for others (broadly defined).

It took me a while to get rolling, but I have done a first LessWrong repost here and will continue weekly as long as there is enough interest.

Saved these all to Pocket; thanks for the recommendations!
