
Hi folks,

The ask: does anyone know of a repository/list of EA speakers I could reach out to for a paid speaking opportunity? If not, please read further and suggest folks you think would be a good fit! 

Note: 

  • If the speaker is very high profile and is likely to draw an audience of 200-250 people from across research institutes and universities, we can pay an honorarium of 10,000 US dollars (I'm thinking probably like...Cass Sunstein or Peter Singer level of fame?)
  • If the speaker is lower profile and would likely generate an audience of ~50 people from within our organization, we can pay an honorarium of 1,500 US dollars.

I'm envisioning a talk that uses data/visualizations and maybe a dash of humor to support a claim along the lines of "it's worth trying to figure out how your work can be as impactful as possible." I don't think the organization would respond well to anyone suggesting that folks should leave our organization to do something different (like how 80,000 Hours often offers broad career advice). But I think they could be receptive to the idea that the research they are doing could be made even more impactful if they shifted into an EA mindset. (Especially if the speaker offered some tools/starting places for how to go about prioritizing work/research opportunities based on expected impact.) Another framework for a talk could be like "I did this really impactful research. Here's how I know it was really impactful. I can tell you a bit about the research. And then I'll also give you insight into what I considered beforehand that made it likely for it to be impactful."


A bit more context: I'm working on integrating a number of EA ideas into a large research organization. (You can learn more here if you're interested.) The company, being a research organization, loves data and science and logic, and their mission is to "improve the human condition by turning knowledge into practice."

One component of my project is teaching others in my organization about Effective Altruism (i.e., "community building"). Given the culture of my organization, I think it's likely that many people here have the data-driven mindset and values common in the EA community. So, I think these people are very likely to find Effective Altruism useful, interesting, and worth contributing to.

I recently learned that I could get funding to bring a speaker to my organization, but I don't have a top-of-mind list of speaker ideas. It would take me a while of going down EA YouTube spirals to narrow down potential candidates, and even then I couldn't be sure they'd be the type of people who would respond to a talk request from a random person with no connections. So, I figured crowdsourcing ideas from you all could be helpful.

Of course, please reach out if you have any questions or need any clarification! I'm not sure what kind of information is helpful to folks when planning speaking events, and I don't know whether it's okay to have such specific topic requests for the talk.


2 Answers

Thanks for posting this question! You can see an incomplete list of speakers from past EA Global conferences here: https://www.eaglobal.org/speakers/ 

And you can see lots of videos here: https://www.youtube.com/c/EffectiveAltruismVideos/featured 

(Although you might already be aware of both of these resources.)

Thank you so much, Lizka! I will take a look at these!

Meta comment: I’m a little surprised this question hasn’t gotten any interest. I think I’m even the first upvote!

Thoughts:

This seems tricky? There is a spreadsheet of speakers, created by someone at CEA. The version I came across is out of date (2019), but it still has a lot of names and contact information.

However, the spreadsheet is intended for EA community builders, so I'm not sure anyone will share it publicly.

The bigger issue is that the star power and aesthetic you're going for might differ from what's actually available.

$10k might not do it for Peter Singer, who has a packed schedule and compensation that is pretty high (which implies his impact/time tradeoffs outweigh monetary considerations). And many senior EAs (and intermediate ones) are preoccupied with numerous projects and culture/scaling initiatives where their time is the binding constraint, not funding.

I think focusing on the impact and narrative of what you are doing is important, as well as being personally likeable and trusted.

One idea: rather than going for the celebrity star power approach, you could center your strategy on a really talented/interesting community builder and use a hook, such as drafting on diversity or another salient issue (e.g., women leaders, Ukraine). There are EA community builders who are Ukrainian refugees, and I'm fairly sure some would take $10k for their country, for EA, or for themselves.

You can headline with that issue, but then punch through with a really powerful argument for EA during the actual talk.

I guess this requires some craftiness from both you and the speaker, but it would also demonstrate some skill.

Thank you so much for these ideas and thoughts! (And my apologies it has taken so long to respond.) I plan to start with the list Lizka posted, but I will absolutely think about whether I can pull off the strategy you mentioned if I come up empty-handed from the more straightforward approach.
