
Recordings from various 2023 EA conferences are now live on our YouTube channel. These include talks from EAG Bay Area, EAG London, EAG Boston, EAGxLatAm, EAGxIndia, EAGxNordics, and EAGxBerlin (alongside many other talks from previous years).

In an effort to cut costs, this year some of our conferences had fewer recorded talks than normal, though we still managed to record over 100 talks across the year. This year also involved some of our first Spanish-language content, recorded at EAGxLatAm in Mexico City. Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn’t attend (or who were busy in 1:1 meetings) to watch them in their own time.

Some highlighted talks are displayed below:

EA Global: Bay Area

Discovering AI Risks with AIs | Ethan Perez

In this talk, Ethan presents how AI systems like ChatGPT can be used to help uncover potential risks in other AI systems, such as tendencies towards power-seeking, self-preservation, and sycophancy.

How to compare welfare across species | Bob Fischer

People farm a lot of pigs. They farm even more chickens. And if they don’t already, they will soon farm even more black soldier flies. How should EAs distribute their resources to address these problems? And how should EAs compare benefits to animals with benefits to humans?

This talk outlines a framework for answering these questions. Bob Fischer argues that we should use estimates of animals’ welfare ranges to compare how much good different interventions can accomplish. He also suggests some tentative welfare range estimates for several farmed species. 

EA Global: London

Taking happiness seriously: Can we? Should we? A debate | Michael Plant, Mark Fabian

Effective altruism is driven by the pursuit to maximize impact. But what counts as impact? One approach is to focus directly on improving people’s happiness — how they feel during and about their lives. 

In this session, Michael Plant and Mark Fabian discuss how and whether to do this, and what it might mean for doing good differently. Michael starts by presenting the positive case (why happiness matters and how it can be measured), then shares the Happier Lives Institute’s recent research on the implications and suggests directions for future work. Mark Fabian acts as a critical discussant, highlighting key weaknesses and challenges with ‘taking happiness seriously’. After their exchange, the discussion opens to the floor.

Panel on nuclear risk | Rear Admiral John Gower, Patricia Lewis, Paul Ingram

This panel brings together Rear Admiral John Gower, Patricia Lewis, and Paul Ingram for a conversation exploring the future of arms control, managing nuclear tensions with Russia, China's changing nuclear strategy, and more.

EA Global: Boston

Opening session: Thoughts from the community | Arden Koehler, Lizka Vaintrob, Kuhan Jeyapragasan

In this opening session, hear talks from three community members (Lizka Vaintrob, Kuhan Jeyapragasan, and Arden Koehler) as they give some thoughts on EA and the current state of the community.

Screening all DNA synthesis and reliably detecting stealth pandemics | Kevin Esvelt

Pandemic security aims to safeguard the future of civilisation from exponentially spreading biological threats. In this talk, Kevin outlines two distinct scenarios—"Wildfire" and "Stealth"—by which pandemic-causing pathogens could cause societal collapse. He then explains the ‘Delay, Detect, Defend’ plan to prevent such pandemics, including the key technological programmes his team oversees to mitigate pandemic risk: a DNA synthesis screening system that prevents malicious actors from synthesizing and releasing pandemic-causing pathogens; a pathogen-agnostic wastewater biosurveillance system for early detection of novel pathogens; AI/bio capability evaluations and technical risk mitigation strategies; and pandemic-proof PPE.

EAGxLatAm

Effective Altruism in Low and Middle Income Countries (LMICs) | Panel

In this panel, speakers share their experiences and takeaways from working on community building projects in LMICs, namely the Philippines, South Africa, Russia, Nigeria, Mexico, Brazil, and Colombia.

The panel consists of Jordan Pieters, Zakariyau Yusuf, Elemerei Cuevas, Leo Arrunda, Angela Aristizábal, Sandra Malagón, and Aleksandr Berezhnoi.

EAGxIndia

Cause area — Air Quality in South Asia | Santosh Harish 

This session introduces air pollution in South Asia as an EA cause area and provides a brief overview of the South Asian Air Quality program at Open Philanthropy. Santosh outlines the major sub-strategies the program will focus on and the types of grant opportunities that are likely to be cost-effective.

EAGxNordics

What can we say about the size of the future? | Anders Sandberg

In this thought-provoking talk, Anders touches upon various factors that could shape the trajectory of humanity, drawing from multiple disciplines to provide a broad perspective. He explores the implications of different potential outcomes and how understanding these possibilities can inform our actions in the present.

EAGxCambridge

Fireside Chat | Lord Martin Rees

Lord Martin Rees is the Astronomer Royal and Co-founder of the Centre for the Study of Existential Risk. He is a former President of the Royal Society, former Master of Trinity College, and Emeritus Professor of Cosmology and Astrophysics, and is the author of 10 books including ‘If Science is to Save Us’ and ‘Our Final Century’. The interview covers both his career and his views on key open questions in the field of existential risk studies.

EAGxBerlin

Intercausal Impacts and the Power of Food System Change | Chris Popa

This talk explores the concept of intercausal impacts and analyses food system change as a prime example, given that our current food system not only causes vast amounts of animal suffering but is also a key driver of many of the world’s other most pressing problems.
 

Comments (5)



These are useful, thanks. I would suggest we also enable/permit lower-quality recordings of the other talks to be posted or shared. It should be fairly costless to have a few people record and post these with camera phones, etc., and I believe it would add substantial value.

Thanks for the suggestion David — we've thought about this and might consider it for the future, but I worry it would be a fair amount of work for a low-quality product (that I expect wouldn't get many views). However, for our recent Boston event we did take audio recordings of most talks and are planning to have many of them written up as Forum posts soon.

Audio recordings would be good, thanks.

Not sure about the benefit/cost. Am I naive to think something like:

  • Tripod (or a small stabilizer on a desk)
  • Volunteer (or paid person) in each room, sits at front or operates tripod
  • Uses own camera phone
  • Uploads to YouTube directly from phone

Time cost: Maybe 1-2 hours of 'equivalent extra person work' per 1-hour session (say 90 minutes).

Benefit: If even 5-10 people watch the videos, I suspect the value outweighs the cost.

  • Enabling them to shift time; e.g., do 1-on-1s if attending ...

  • Encouraging some people to not come in person (saving tremendous expense obviously)

  • Presenter and their team can re-watch the video to improve their own presentation, as well as using it for onboarding etc.

My guess (very rough) is that the value per watcher who spends at least 20 minutes viewing the talk is, on average, about 20% of the value of the 90 minutes spent by the person filming and uploading.

(Obviously more so if it's a highly productive person doing the watching, or if the speaker themselves watches it to improve their presentation.)

So I guess if at least 5 people watch the average video for 20 minutes or more, this would be worth doing. Not sure how that compares to the statistics you've seen on usage.
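For concreteness, here is a minimal sketch of that break-even arithmetic (the 90-minute filming cost and the 20%-per-watcher figure are my rough assumptions from above, not measured data):

```python
# Back-of-envelope break-even, using the rough assumptions above (not measured data).
filming_cost_minutes = 90                        # assumed: one person films and uploads the talk
value_per_watcher = 0.20 * filming_cost_minutes  # assumed: each 20+ minute watcher recoups ~20% of that
break_even_watchers = filming_cost_minutes / value_per_watcher
print(break_even_watchers)                       # 5.0, i.e. the "at least 5 people" threshold
```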

Could it be enabled on a 'strictly voluntary basis', i.e., give permission for people to record certain sessions, announce this, and upload it to an (unofficial?) channel?

Are there plans to release the videos from EAGx Virtual?

Yes! We'll need to review footage and confirm with speakers, but they should be up soon :) 
