What did we do?

As organizers of the EA group at UC Irvine (UCI), we ran a reading group on Julia Galef's book The Scout Mindset during the final 5 weeks of the academic year (2022-2023). The group had 8 participants in total, with an average of 5 attending each session. Compared to attendance rates earlier in the academic year, this is about average. The group was diverse in both gender and ethnicity. All participants committed to reading 1 section (i.e. 3 chapters) per week prior to our 1-hour in-person discussions. As organizers, we prepared questions to help guide the discussions. Straight after each session, we went out to dinner together to continue our conversations about The Scout Mindset, effective altruism, and any personal updates.
 

Why did we do it?

This academic year we experimented with different program structures. In the first 10 weeks, we ran the EA Introductory Program; in the second 10 weeks, we ran our own version of the EA In-Depth Program. In the last 10 weeks, we spent the first half running weekly workshops and the second half running the reading group. We decided to run a reading group because we thought it would provide structured weekly content and that it would be fairly easy to organize because we needed only to read the relevant chapters, make notes and devise questions (we had already booked a classroom on a weekly basis). We chose The Scout Mindset in particular because many of our members had expressed an interest in reading the book and we too had been intending to read it ourselves. Running this reading group kept us motivated to read the book from beginning to end by holding us accountable.
 

What went well?

We successfully completed the book in the intended time without losing participants (except for one who went to Boston). Our prepared questions were useful in guiding our discussions without constraining them. They also helped spark new discussions and allowed us to focus on the present discussion without frantically scouring our minds for the next question to ask. The questions we created encouraged participants to connect what they had read to their own experiences. We found that participants were more enthused by these types of questions than by straightforward terminological questions. They were also supportive and encouraging of others, which helped everyone feel comfortable sharing their personal stories. Notably, this included someone in our group with social anxiety. We were able to engage her in the discussion (in such a way that she felt comfortable) by sending her the questions in advance, enabling her to read out her answers, which she often connected to her own experiences in amazingly insightful ways.

 

What went badly?

Although the questions that encouraged the members to share their experiences were helpful, sometimes the discussion would go too far off-topic. We think we might have been too hesitant to interrupt and bring the discussion back on track. Separately, we could have promoted the reading group beyond our EA group. Since The Scout Mindset is not a book about altruism — never mind effective altruism — other students, with little or no interest in EA, might have been interested in joining our reading group. Had we promoted the reading group more widely, we might even have attracted additional students to our upcoming EA Introductory Program. On the other hand, we were very satisfied with the number of participants in our group. Had more attended, the discussion might have been more diluted.

 

Conclusion

Overall, we were glad to have run this reading group, and we plan to run another one at the same point next year, after running the intro and in-depth programs earlier in the academic year.

Comments (7)


Nice one, thanks for sharing your experience and the discussion questions. I am thinking of trying this out with my local group, will let you know how we get on :)

Great! I'd love to hear how it goes!

Hi again! We finally did it ;) Your discussion questions were really helpful, thanks again for sharing them. I'd also love to make a few suggestions to them, if possible?

Yay! I'm glad they were helpful for your group! Suggest away! I think I've given everyone with the link commenting permission so you can comment directly on the doc or contact me directly (details on my profile page).

Thanks Neil, I've left some suggested changes (mostly just additional questions I found worked well) in your doc :)

Thanks a lot! I've approved them and added you as a co-author :)

Wonderful! Many thanks :)
