The EA London newsletter has a section summarising updates and research from EA and EA-related organisations and individuals. Someone mentioned it might be useful as a forum post, so here it is. Let me know whether I should keep posting this here each month, post it somewhere else, or not at all.

If you're interested in seeing previous months, they are here.

 

• Ways people trying to do good accidentally make things worse, and how to avoid them

• Charlotte Stix has started a newsletter covering the AI policy and strategy ecosystem in Europe

• Survey of EA org leaders about what skills and experience they most need

• Julia Wise writing on how no one is a statistic

• Vox has a new department, Future Perfect, reporting the news from an effective altruism angle. They have started with a podcast asking Bill Gates what he thinks about global poverty, AI and clean meat

• A post on whether people would give more to foreign aid if they knew the scale of global inequality 

• A reading list for people interested in learning more about RCTs not being the 'gold standard' in global development

• Extended notes on the book Superintelligence

• Michael Plant has created a happiness manifesto, arguing effective altruism can and should use happiness surveys to determine cost-effectiveness which would result in different charity recommendations

• Phil Hewinson has written a comprehensive summary of tech products that are helping people today

• Mind Ease is a new mental health intervention - also with in depth responses in the comments

• Let's Fund is a new organisation looking to help people discover, learn about and fund breakthrough research, policy and advocacy projects

• The new EA Angel group is looking for funders, applicants and volunteers to help improve the early-stage funding landscape in the effective altruism community

• CSER have curated a special issue of 15 papers bringing together a wide range of research on existential and catastrophic risk. They also have five book recommendations related to these subjects

• A post on the potential bottlenecks and solutions in the existential risk ecosystem

• A deeper dive into providing pain relief to lower income countries and potential funding opportunities

• A new 80,000 Hours career review on going into academic research

• BIT is running 18 RCTs to look into capacity building (including tax compliance, birth registration and education) in Indonesia, Bangladesh and Guatemala

• Podcast with economist Tyler Cowen suggesting that sustainable economic growth is the best way to safeguard the future of humanity

• Hilary Greaves on moral cluelessness, population ethics and the vision for GPI

• A look at potential negative externalities of cash transfers

• Paul Christiano on how humanity might progressively hand over decision-making to AI systems

• A new 80,000 Hours article with potential careers to go into based on whether you already have a particular strength or expertise

• There are new management teams for EA Funds

• Microsummaries of 150+ papers in the newest development economics research 

• Martin Rees has released a book looking at the future prospects of humanity, and has given an interview with Vox

• Peter Singer with an article looking at whether clean meat can save the planet

• Allan Dafoe from FHI with a document on the research agenda for AI governance

• Michelle Hutchinson on keeping absolutes in mind and not just looking at relative values

• A NYT feature on a project to give ex-felons voting rights, potentially re-enfranchising 1.5 million people, helped with funding from Open Philanthropy

• Open Phil with a summary of why they focus on scientific research, where they've granted $67 million and an open call for grant proposals

• Lewis Bollard looking at whether animal advocates should engage in U.S. politics

• A post on the value of being world class in a non-traditional area

• A post looking at effective altruism and the law of diminishing marginal effect

• A summary of two possible interventions to reduce intimate partner violence

• A post on the rationale behind a GiveWell Incubation Grant to Evidence Action Beta

• A slide deck looking at psychedelics as a potential cause area

• GFI on the data behind why they use the term 'clean meat' and why it might be useful to sometimes refer to 'cultured meat'

• ODI have a toolkit that provides a step-by-step approach to help researchers plan for, monitor and improve the impact their research has on policy and practice

• An FLI podcast looking at the role of automation in the nuclear sphere

• Utility Farm have announced the Compassionate Cat Grant, an attempt to reduce the suffering of birds and small mammals

• A collection of resources for people looking into the generalist vs specialist question

• A curated list of podcasts in the areas of effective altruism, rationality, natural sciences, social sciences, politics and self-improvement

• Sentientism as an upgrade of humanism

• Saulius has been looking at whether the number of vegans and vegetarians has changed over time

• A document looking at what the most effective individual actions are for reducing carbon emissions

• A Faunalytics post discussing the marketing challenges of clean meat being seen as unnatural


Comments

Awesome! Thanks for this David :) I would say that this seems really useful, and that posting here sounds like a good option. It also enables people / orgs to add things you potentially missed as comments.

Thanks a lot for this, very useful indeed. I think this list hasn't been mentioned: Awful AI - a curated list tracking current scary uses of AI, hoping to raise awareness of its misuses in society.
