
A helpful practice for making progress on one’s action-relevant beliefs is learning by writing. But many EAs, especially university students such as ourselves, need a bit more structure and accountability to write consistently. Reading is comfortable, but writing is hard and scary: we both recognized the value of writing long before we made a habit of it. What’s worked for us, and what we hope may work for you, is starting an “EA writing group.”

The EA Forum is a great platform for sharing one’s writing, but writing for the Forum can feel like a high bar, and the topics one might gain the most from writing about (points of confusion, personal decisions, or unpolished ideas) might not feel like a good fit. A writing group can be an ideal environment for sharing uncertain, personal, or messy thoughts, and for building a habit of writing.

What might an EA writing group look like?

Get a group of ~2–10 people, and have everyone write 1–3 pages every week or every other week. Then meet virtually or in person to read and comment on each other’s writing (on Google Docs) for ~1 hour.

Possible modifications to the structure include:

  • Have everyone write about the same topic, question, or reading
  • Have a discussion afterwards (e.g. have 1-on-1s with people whose ideas you want to hear more about)
  • For university organizers: add a writing component to your in-depth fellowship

What should I write about?

  • Whatever seems most interesting or salient to you (e.g. pros and cons of a career decision, an idea you are uncertain about, an argument you would like to get on paper or condense, etc.)
  • Decision-relevant topics or “cruxes”
    • For example, “claims that are important if true, and might be true” (from Holden)
  • For inspiration, here are some example pieces from our EA writing group

I want to be in an EA writing group, but I’m not sure who to start one with!

If you’re interested in joining a virtual EA writing group, fill out this form by Friday April 22nd! We will pair you with a small group of people who have a similar level of EA experience.

Our experience doing an EA writing group

At Swarthmore College, we’re part of a group independent study (thanks to Koji Flynn-Do) on cause prioritization with 11 students. Each week we do about 3 hours of reading and write a 1–2 page “take” or response. Once per week we have a two-hour class: we spend the first hour reading and giving feedback on each other’s writing, and the second hour doing 1-on-1s with the people who wrote about what we most want to discuss.

Please let us know if you have any questions or thoughts about this, or if this post inspired you to start a group of your own :)

Comments



I have been trying to implement the learning-by-writing tips for the past four weeks, and have really felt the need for a community for feedback + accountability + expectation-setting so I’m not totally alone in the wilderness. I can imagine many others feeling the same way. Thank you for organizing this; I think this is a really interesting model I would like to experiment with in my own community building.

Ooh, I've been doing similar stuff independently and think a group could be helpful!

When would the virtual Writing Group start + for how long would it run?

We are thinking the virtual group would start around the end of April and run for ~5 weeks. We would be happy to move things around based on everyone’s availability, though :)

Wow, looks like an empowering experience for a novice writer! Going to check those group writings in my spare time.

This is really cool!
