
These monthly posts originated as the "Updates" section of the EA Newsletter.

You can also see last month's updates or a repository of past newsletters.

Organization Updates

80,000 Hours

This month, Arden Koehler spoke to Owen Cotton-Barratt about the Research Scholars Programme, epistemic systems, and layers of defense against potential global catastrophes.

The team also re-released two episodes from early 2020 that were overshadowed by COVID-19 and received less attention than they normally would have: 

Finally, two team members released posts on the EA Forum:

Anima International

Otwarte Klatki (Poland) published a report on phasing out the sale of live carp by retail chains in Poland. The report is part of the organization's campaign to ban the sale of live carp in Poland.

An Anima International representative participated in a debate with members of the European Parliament called “Fur Farming in Europe: Addressing Animal Welfare & Public Health Concerns.” The event brought together politicians, policymakers, and experts on animal welfare and veterinary epidemiology to discuss fur farming issues.

Открытые клетки (Anima International’s Russia-based organization) published their first petition for a cage-free campaign.

Otwarte Klatki and SOKO Tierschutz published shocking footage of minks being gassed on a Polish farm. The video is one of many 2020 publications raising awareness of cruelty in the Polish fur industry.

Animal Charity Evaluators

Animal Charity Evaluators (ACE) recently developed and publicly shared their first standardized wage formula, describing the factors they included and the reasoning behind them. They also summarized the organizational improvements they made over the past year.

ACE is currently working on their next strategic plan and setting 2021 goals. They will publish their 2020 Year in Review in the next few months.

Animal Ethics

Animal Ethics published an account of their work in 2020. It details their educational work on wild animal advocacy, their progress in promoting the study of wild animal suffering in academia, and their educational work on the moral consideration of animals internationally.

They also published a study examining how existing data about wild animals admitted to rescue centers and animal sanctuaries in Greece provides insight into natural harms affecting wild animals. The study also increases our understanding of how some natural harms may interact with anthropogenic harms, and suggests future research regarding animal suffering in the wild.

Aquatic Life Institute

The Aquatic Life Institute (ALI) launched six stakeholder coalitions in 2020 that are working to elevate aquatic welfare. Aquatic Animal Alliance (AAA) grew their ranks to over 15 global nonprofits across six continents and published a first-of-its-kind, comprehensive guide to welfare for wild and farmed aquatic animals. AAA is currently providing feedback to product certifiers and government agencies. Organizations interested in joining AAA may apply here.

ALI is currently seeking a Managing Director. Prospective applicants can learn more here.

ALI's Effective Animal Advocacy (EAA) online events will continue this year. Those interested in staying in the loop can sign up for the EAA events mailing list and contact Rocky Schwartz at rocky@ali.fish if interested in presenting.

Berkeley Existential Risk Initiative

BERI’s first public fundraiser was a success, raising over $60,000 to support new collaborations in 2021. More info is available here. They also received a $247,000 grant at the recommendation of Jaan Tallinn, in support of their machine learning research engineer program at the Center for Human-Compatible Artificial Intelligence (CHAI).

The Cellular Agriculture Society

CAS published an update on how they will be directing their core programming towards video production for cellular agriculture.

Centre for Effective Altruism

CEA announced that two of their projects — Giving What We Can and EA Funds — have begun to operate as independent organizations.

They also released an update on their progress in the fourth quarter of 2020, a post summarizing their new organizational strategy, and a guide to areas of community building they’ve chosen not to work on.

Center for Human-Compatible AI

Congratulations to Caroline Jeanmaire, Director of Strategic Research and Partnerships at CHAI, who has been recognized as one of 100 Brilliant Women in AI Ethics in 2021. The list is published annually by Women in AI Ethics, which has the mission to make AI more diverse and accessible, in part by recognizing rising stars in the field.

CHAI would like to thank Mr. Ben Hoskin for his generous donation of $20,000. His support enables their mission to reorient AI research towards provably beneficial systems.

In December 2020, CHAI PhD candidate Daniel Filan launched the AI X-risk Research Podcast (AXRP). In each episode, Daniel interviews the author of a paper and discusses how its ideas might help us to reduce existential risk from powerful AI. The first three episodes feature CHAI researchers Andrew Critch, Rohin Shah, and Adam Gleave.

Thomas Krendl Gilbert co-authored “AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks,” published at the IEEE International Symposium on Technology and Society 2020.

Rohin Shah published his PhD dissertation, “Extracting and Using Preference Information from the State of the World.” Congratulations to Rohin, who is now working as a research scientist on the technical AGI safety team at DeepMind. 

On 17 December, CHAI hosted the third Positive AI Economic Futures Workshop in collaboration with the World Economic Forum and the XPRIZE Foundation. More than 100 participants met virtually to plan scenarios of a future economy and society transformed by AI. The workshop brought together AI experts, science fiction authors, economists, policymakers, and social scientists.

Center on Long-Term Risk

Researchers from the Center on Long-Term Risk published three posts last month:

Charity Entrepreneurship

The second round of applications to Charity Entrepreneurship’s 2021 Incubation Program will run from 15 March to 15 April.

The program will be held online from 28 June to 27 August 2021. CE is specifically seeking candidates who could start charities working on animal welfare (shrimp welfare and feed fortification), alcohol regulation, family planning, and EA meta. In their recent EA Forum post, CE explained this year’s focus on EA meta and their top three charity recommendations in this area.

In 2021, CE is focused on developing the ecosystem for charity entrepreneurs. If you can recommend organizations for potential partnerships (e.g. on startup grants or research), feel free to reach out.

Faunalytics 

Faunalytics published their 2020 Year In Review and outlined their 2021 Plans and Priorities. In 2021, they will set their next research agenda; advocates who would like to participate in the study selection process can learn more here. Advocates interested in conducting their own research won’t want to miss the Reducetarian Foundation’s latest podcast episode: How To Conduct Effective Research with Faunalytics’ Jo Anderson.

Additionally, Faunalytics is hiring! Faunalytics’ Research Scientists work closely with the Research Director on their original research program, which includes conducting research, presenting research to the public, and providing direct support to advocates. View the full job description and learn how to apply here.

Fish Welfare Initiative

Fish Welfare Initiative has begun communicating with farmer-supporting organizations in India, and believes they are close to securing their first commitment. In the meantime, they continue to survey local farming conditions.

FWI is also currently hiring for the following:

Lastly, FWI was honored to be included among Giving What We Can’s top recommended individual charities for 2021.

Future of Humanity Institute

In December, Greg Lewis and Cassidy Nelson from the Biosecurity Research Group, along with Allan Dafoe from GovAI, published “The biosecurity benefits of genetic engineering attribution” in Nature Communications.

Jan Brauner (DPhil Scholar) and Mrinank Sharma (DPhil Affiliate) published “Inferring the effectiveness of government interventions against COVID-19” in Science. The paper currently has one of the highest attention scores (measured by Altmetric) of all Science publications.

The UN’s latest Human Development Report features a seven-page piece by Toby Ord on existential risk and the protection of humanity’s long-term potential.

Allan Dafoe and Ben Garfinkel co-authored “Beyond Privacy Trade-offs with Structured Transparency.” Allan also co-authored “Open Problems in Cooperative AI.”

“Pitfalls of learning a reward function online,” co-authored by Stuart Armstrong, has been accepted to the International Joint Conference on Artificial Intelligence (IJCAI).

GiveWell

Thank you very much to GiveWell’s supporters in the effective altruism community for a great end-of-year giving season! 

GiveWell produced a short video explaining their work — please check it out and share if you like.

GiveWell is currently seeking researchers to identify, analyze, and compare the giving opportunities that can most cost-effectively save or improve the lives of the global poor:

  • Senior Research Associate: You have more than six years of relevant work or educational experience, often including a master's degree or PhD.
  • Senior Researcher: You have over a decade of relevant work experience, often involving both a PhD and a few years of work experience, or a master's degree and many years of work experience.

GiveWell is also seeking a Philanthropy Advisor to build long-term relationships with GiveWell supporters. 

Giving What We Can

Giving What We Can now operates independently of the Centre for Effective Altruism (with ongoing operational support). 2020 was the biggest year on record for new pledges; GWWC’s members have now pledged over $2 billion and donated over $200 million.

Toby Ord was featured in a Vox article about Giving What We Can. GWWC was featured in a number of podcasts including “Making Sense,” “Your World, Your Money,” “Hear This Idea,” and “The Odin Podcast.”

The Good Food Institute

  • GFI Israel and Aleph Farms organized an event for Israeli Prime Minister Benjamin Netanyahu, the first head of state to taste cultivated meat. Netanyahu declared that we need to “Give people the choice, the freedom to choose ... I think Israel needs to be a leader in this direction.” He committed to appointing an alternative proteins czar to coordinate the government’s commitment across all relevant agencies.
  • Singapore became the first government to grant regulatory approval for cultivated meat, which allowed Eat Just to make history as the first cultivated meat company to serve restaurant-goers. GFI has been working for years with Singapore on both the scientific and regulatory fronts, and was delighted to have been invited to offer a global perspective on both of these firsts for cultivated meat.

The Humane League

After a relentless campaign by the Open Wing Alliance, THL secured a global cage-free commitment from Restaurant Brands International (RBI), the parent company of Burger King, Popeyes, and Tim Hortons. This far-reaching commitment, the first from a major restaurant group, will cover 25,000 restaurant locations in more than 100 countries, including China and several countries in the Middle East and Africa.

Thanks to pressure from The Humane League and their coalition partners, Whole Foods became the first major US retailer to adopt chicken welfare standards in alignment with the Better Chicken Commitment. THL UK also secured Better Chicken Commitments from three companies. These include the Jan Zandbergen Group, the European Union’s largest meat company, whose commitment will impact 180 million chickens.

Jacob Peacock, Director of THL Labs, gave a virtual presentation, “Reducing Meat Consumption: Strategies, Evidence, Outlooks.” The recording is available for viewing. 

Open Philanthropy

Open Philanthropy announced grants including $2.2M to the International Centre of Insect Physiology and Ecology to support research on malaria prevention, $890K to the MIT Media Lab to support research on methods for securely screening DNA synthesis orders, $600K to 1Day Sooner for general support, and $500K to the Centre for Effective Altruism to support a longtermist incubator. 

They also announced their allocation of $100M to GiveWell top charities, published a set of suggestions for individual donors from Open Philanthropy staff, and summarized their AI governance grantmaking so far.

Ought

Ought is working on building Elicit, a tool to automate and scale open-ended reasoning about the future. They are adding GPT-3 based research assistant features to help forecasters with early steps in their workflows. If you’re interested in becoming a beta tester, please see more information here.

Rethink Priorities

Over the last month, Derek Foster published the second and third parts of his review of measurements of subjective wellbeing. Rethink Priorities also welcomed new staff: Distinguished Researcher Dr. David Reinstein, and Operations and Development Associate Dr. Dominika Krupocin.

Wild Animal Initiative

Wild Animal Initiative submitted a comment requesting that the EPA revise their environmental risk assessment of the avian pesticide Avitrol to account for the threat it poses to wild animal welfare.

WAI researchers designed a field experiment to test the welfare effects of a commercially available pigeon contraceptive, and they are currently seeking funding to launch it. 

Staff Researcher Jane Capozzelli wrote about using biotelemetry to study wild animal welfare. Deputy Director Cameron Meyer Shorb explained how WAI defines welfare, which is central to their mission to understand and improve the lives of wild animals.

Add your own update

If your organization isn't shown here, you can provide an update in a comment.

You can also email me if you'd like to be one of the organizations I ask for updates each month. (I may not accept all such requests. Whether I include an org depends on its size, age, focus, track record, etc.)
