
Hi folks, 
 

The ask: does anyone know of a repository/list of EA speakers I could reach out to for a paid speaking opportunity? If not, please read further and suggest folks you think would be a good fit! 

Note: 

  • If the speaker is very high profile and is likely to draw an audience of 200-250 people from across research institutes and universities, we can pay an honorarium of 10,000 US dollars (I'm thinking probably someone at the Cass Sunstein or Peter Singer level of fame?)
  • If the speaker is lower profile and would likely generate an audience of ~50 people from within our organization, we can pay an honorarium of 1,500 US dollars.

I'm envisioning a talk that uses data/visualizations and maybe a dash of humor to support a claim along the lines of "it's worth trying to figure out how your work can be as impactful as possible." I don't think the organization would respond well to anyone suggesting that folks should leave our organization to do something different (the way 80,000 Hours often does in its broad career advice). But I think they could be receptive to the idea that the research they are doing could be made even more impactful if they shifted into an EA mindset. (Especially if the speaker offered some tools/starting places for prioritizing work/research opportunities based on expected impact.) Another framework for a talk could be something like: "I did some really impactful research. Here's how I know it was impactful. I can tell you a bit about the research, and then I'll also give you insight into what I considered beforehand that made it likely to be impactful."


A bit more context: I'm working on integrating a number of EA ideas into a large research organization. (You can learn more here if you're interested.) The company, being a research organization, loves data and science and logic, and their mission is to "improve the human condition by turning knowledge into practice."

One component of my project includes teaching others in my organization about Effective Altruism (i.e., "community building"). And given the culture of my organization, I think it's likely many people here have the data-driven minds and values that are common in the EA community. So I think these people are very likely to find Effective Altruism useful, interesting, worth contributing to, etc.

I recently learned that I could get funding to bring a speaker to my organization, but I don't have a top-of-mind list of speaker ideas. It would take me a while of going down EA YouTube spirals to narrow down potential candidates, and even then I couldn't be sure they'd be the type of people who would be interested in giving a talk in response to an email from a random person with no connections. So I figured crowdsourcing ideas from you all could be helpful.

Of course, please reach out if you have any questions or need any clarifications! I'm not sure what kind of information is helpful to folks when planning speaking events, and I don't know if it's okay to have such specific topic requests for the talk.


2 Answers

Thanks for posting this question! You can see an incomplete list of speakers from past EA Global conferences here: https://www.eaglobal.org/speakers/ 

And you can see lots of videos here: https://www.youtube.com/c/EffectiveAltruismVideos/featured 

(Although you might already be aware of both of these resources.)

Thank you so much, Lizka! I will take a look at these!

Meta comment: I’m a little surprised this question hasn’t gotten any interest. I think I’m even the first upvote!

Thoughts:

This seems tricky? There is a spreadsheet of speakers created by someone at CEA. The version I came across is out of date (2019), but it still has a lot of names and contact information.

However, the spreadsheet is intended for EA community builders, so I'm not sure anyone will share it publicly.

The bigger issue is that the star power and aesthetic you're going for might be different from what is available.

$10k might not do it for Peter Singer, who has a packed schedule and high speaking fees (his impact/time tradeoffs probably outweigh monetary considerations). And many senior EAs (and mid-level ones) are preoccupied with numerous projects and culture/scaling initiatives where their time involvement is the important input and funding is not the constraint.

I think focusing on the impact and narrative of what you are doing is important, as well as being personally likeable and trusted.

One idea: I might consider not going for the celebrity star power approach, and instead centering the event on a really talented/interesting community builder, using a framing device such as drafting on diversity or some other salient issue, e.g., women leaders or Ukraine. There are EA community builders who are literally Ukrainian refugees, and I am pretty sure some would take $10k for their country, for EA, or for themselves.

You can headline with that issue, but then punch through with a really powerful argument for EA during the actual talk.

I guess this requires some craftiness from both you and the speaker, but it would also demonstrate some skill.

Thank you so much for these ideas and thoughts! (And my apologies it has taken so long to respond.) I plan to start with the list Lizka posted, but I will absolutely think about whether I can pull off the strategy you mentioned if I come up empty-handed from the more straightforward approach.
