The EA London newsletter includes a section summarising updates and research from EA and EA-adjacent organisations and individuals. Someone mentioned it might be useful as a forum post, so here it is. Let me know whether I should keep posting it here each month, post it somewhere else, or not at all.

If you're interested in seeing previous months, they are here.

 

• Ways people trying to do good accidentally make things worse, and how to avoid them

• Charlotte Stix has started a newsletter covering the AI policy and strategy ecosystem in Europe

• Survey of EA org leaders about what skills and experience they most need

• Julia Wise writing on how no one is a statistic

• Vox has a new department, Future Perfect, reporting the news with an effective altruism angle. They have started with a podcast asking Bill Gates what he thinks about global poverty, AI and clean meat

• A post on whether people would give more to foreign aid if they knew the scale of global inequality 

• A reading list for people interested in learning more about RCTs not being the 'gold standard' in global development

• Extended notes on the book Superintelligence

• Michael Plant has created a happiness manifesto, arguing effective altruism can and should use happiness surveys to determine cost-effectiveness, which would result in different charity recommendations

• Phil Hewinson has written a comprehensive summary of tech products that are helping people today

• Mind Ease is a new mental health intervention - also with in-depth responses in the comments

• Let's Fund is a new organisation looking to help people discover, learn about and fund breakthrough research, policy and advocacy projects

• The new EA Angel group is looking for funders, applicants and volunteers to help improve the early-stage funding landscape in the effective altruism community

• CSER have curated a special issue of 15 papers bringing together a wide range of research on existential and catastrophic risk. They also have five book recommendations related to these subjects

• A post on the potential bottlenecks and solutions in the existential risk ecosystem

• A deeper dive into providing pain relief in lower-income countries and potential funding opportunities

• A new 80,000 Hours career review on going into academic research

• BIT is running 18 RCTs to look into capacity building (including tax compliance, birth registration and education) in Indonesia, Bangladesh and Guatemala

• Podcast with economist Tyler Cowen suggesting that sustainable economic growth is the best way to safeguard the future of humanity

• Hilary Greaves on moral cluelessness, population ethics and the vision for GPI

• A look at potential negative externalities of cash transfers

• Paul Christiano on how humanity might progressively hand over decision-making to AI systems

• A new 80,000 Hours article with potential careers to go into based on whether you already have a particular strength or expertise

• There are new management teams for the EA Funds

• Microsummaries of 150+ papers in the newest development economics research 

• Martin Rees has released a book looking at the future prospects of humanity, along with an interview with Vox

• Peter Singer with an article looking at whether clean meat can save the planet

• Allan Dafoe from FHI with a document on the research agenda for AI governance

• Michelle Hutchinson on keeping absolutes in mind and not just looking at relative values

• A NYT feature on a project to give ex-felons voting rights, potentially re-enfranchising 1.5 million people, helped with funding from Open Philanthropy

• Open Phil with a summary of why they focus on scientific research, where they've granted $67 million, and an open call for grant proposals

• Lewis Bollard looking at whether animal advocates should engage in U.S. politics

• A post on the value of being world class in a non-traditional area

• A post looking at effective altruism and the law of diminishing marginal effect

• A summary of two possible interventions to reduce intimate partner violence

• A post on the rationale behind a GiveWell Incubation Grant to Evidence Action Beta

• A slide deck looking at psychedelics as a potential cause area

• GFI on the data behind why they use the term 'clean meat' and why it might be useful to sometimes refer to 'cultured meat'

• ODI have a toolkit that provides a step-by-step approach to help researchers plan for, monitor and improve the impact their research has on policy and practice

• An FLI podcast looking at the role of automation in the nuclear sphere

• Utility Farm have announced the Compassionate Cat Grant, an attempt to reduce the suffering of birds and small mammals

• A collection of resources for people looking into the generalist vs specialist question

• A curated list of podcasts in the areas of effective altruism, rationality, natural sciences, social sciences, politics and self-improvement

• Sentientism as an upgrade of humanism

• Saulius has been looking at whether the number of vegans and vegetarians has changed over time

• A document looking at what the most effective individual actions are for reducing carbon emissions

• A Faunalytics post discussing the marketing challenges of clean meat being seen as unnatural

Comments (2)



Awesome! Thanks for this David :) I would say that this seems really useful, and that posting here sounds like a good option. It also enables people / orgs to add things you potentially missed as comments.

Thanks a lot for this, very useful indeed. I think this list hasn't been mentioned: Awful AI - a curated list tracking current scary uses of AI, hoping to raise awareness of its misuses in society.
