For helpful comments I thank Emma Williamson and Chloe Shrager.
 

TLDR: It’s easy to give a TEDx talk at your university, yet very few university students do. I think this kind of talk is a unique opportunity to frame EA from a personal perspective, in a way that makes it more palatable to an average audience. I would advise against giving a “This is what effective altruism is” talk, for various reasons (PR risks most notably). Instead, I would encourage people specifically interested in communications to deliver a talk to their university on something like “I care about the world. Here are some tools I learned from EA on how I can do my best to help,” or “I want to do good. EA principles helped me do that.” After my talk, my university EA group gained much more traction, many students reached out wanting to learn more about EA, audience members donated to AMF, hundreds of people received a copy of Doing Good Better, and a handful of students told me they are considering changing their career paths.
 


Watch the TEDx talk here


In this post I will go over 

  1. Who should give a TEDx talk?
  2. Lessons learned from giving a TEDx talk
  3. Why I might be wrong

 

Part 1: Who should give a TEDx talk?
 

I don’t encourage everyone to give a TED or TEDx talk. If you are thinking about giving a talk about EA, definitely consult others before doing so. I want to specify that this post is directed at people interested in communications. Specifically, I think giving a university TEDx talk on EA requires the ability to digest EA ideas and frame them in a way that appeals to a “normal” college audience. If you are well versed in public speaking (or feel you have the potential to be), are more eloquent than I am, and feel your university may be a receptive audience for these ideas, I would encourage you to consider giving a TEDx talk.


Part 2: Lessons learned from giving a TEDx talk
 

Pros:
 

  1. “Weird EA ideas” can be communicated in a non-weird way, and we should probably do this more often. There are many university students out there who are already thinking about these ideas but don’t know EA exists. TED and TEDx talks are a great way of reaching them.
  2. Talking to an audience about how much you care about these principles may inspire others to care.
    1. After the talk, three parents independently came up to me and said, “This inspired me to donate to the Against Malaria Foundation.”
  3. Seeing a fellow student take these ideas seriously may encourage other students to take them seriously too.
    1. So far, over twenty students at Georgetown have reached out to me after seeing the talk (either from being in the audience or watching the YouTube clip) with various requests, such as “Can we meet to talk through plans to shift my area of study toward one that better helps the world?”, “I’ve had thoughts like these before and can very much relate to the talk, but I had no idea something like EA existed. Can you teach me more?”, or my favorite, “Because of this talk I am changing my career plans” (three students told me this).
    2. After the talk, students reached out about getting involved with the Georgetown EA group, and our website had substantially more views.
  4. TEDx talks are a great way to encourage people to read books. I (well, actually EA Books Direct) provided everyone in the audience with a copy of Doing Good Better. Days after the talk I walked around campus and saw a few people reading it together on the lawn. Seeing this made me smile.
  5. TED talks reach a large audience. They’re also a well-trusted brand, so EA ideas may be taken more seriously when communicated through a credible, known source.


Cons: 

  1. This isn’t the best information I could have provided for an intro-to-EA talk. As I prepared, I felt a strong tension between what would appeal to a college audience and what would more accurately represent the nuances of today’s EA ideas. When writing the script, I didn’t feel I had enough space or time in the speech to dive into longtermism or existential risk. It felt too rushed and half-formed to try to cram in everything from the effectiveness mindset to moral circle expansion to differences in impact to AI, biorisk, and other topics in x-risk and longtermism, while still including enough personal anecdotes to keep the talk from feeling like a lecture. I also felt I wasn’t the best person to deliver content on a philosophy I’m still relatively new to. My hope was that once people heard the talk and looked up effective altruism, they would find arguments for x-risk and longtermism presented by someone better than myself.
  2. I messed up a few times. I think I accidentally made up a number that just doesn’t exist. I was so unbelievably nervous that I drifted from my script a bit. For a while I didn’t want to post this or make the YouTube link public at all, because of how self-critical I was being about the talk. This was my first time publicly speaking and I think you can tell. This is why I include the section about who should give EA TEDx talks: the speaker’s demeanor and confidence can make all the difference in whether audience members leave excited to learn more about EA. Although I still got good traction, I think I could’ve done a much better job in terms of confidence and gotten even more people engaged (the effect of seeing a really good speaker and thinking, “Wow, I want to be like them; maybe I should get involved in what they’re doing”).
  3. I was really, and I mean REALLY, nervous before delivering this talk. I emailed the TEDx directors the night before saying I was too scared and wouldn’t be able to give the speech. (Luckily they had me come in at 10pm the night before to practice on the stage, so I felt a bit better.) I had never spoken publicly before, and the thought of giving a talk to a large audience made me feel like I was going to puke. And I did puke, about 20 minutes before walking on stage. I also broke out in hives. And I cried a little.

They say face your fears, though.

 

Part 3: Why I might be wrong
 

TED or TEDx talks could definitely be a HUGE PR risk to EA! So don’t do this without talking to PR people in the EA space, and don’t do it without first running your script by many EAs and non-EAs. Again, I would encourage a talk in the form of “I care deeply about the world; how do I help it?” rather than “This is what EA is, and I’m going to lecture you about it for 15 minutes.”

 

Feel free to reach out to me if you have any questions about the talk or are considering doing one yourself! You can email me at kfc20@georgetown.edu 
 

Comments

Edit: turned this comment into a separate post: https://forum.effectivealtruism.org/posts/TqNAgPpNwu6dCrycN/how-to-get-ea-ideas-onto-the-tedx-stage

Thanks for your post! I organized a TEDx event which took place in April of this year, so I'd like to add my insights on two ways more EA TEDx talks can be initiated: (A) joining existing TEDx events and (B) organizing TEDx events.

A. Get yourself (or someone suitable) into a TEDx event line-up 

(NB: you don't need to be a student! Nor someone related to the university at which the event is held!)

Step 1: spot event organizers

      • Do you already know someone involved in organizing a TEDx event?

      • Alternatively: take a look at this map of events (https://www.ted.com/tedx/events) and see which ones are roughly 2-12 months from now and at a distance you'd travel to (at your own cost, though perhaps there are some EA funds available for expenses like this). Go to the relevant event pages, check who the organizers are at the bottom of the page, and find their contact info (LinkedIn, social media DMs, whatever the internet can find you).

Step 2: contact the spotted event organizer(s)

      •Send the organizer(s) a message introducing yourself, asking if they are still looking for speakers for their line-up, and pitching your idea for a TEDx talk. Contact as many as you can for optimal odds :)

(One of the people who ended up speaking at our event got himself into the line-up by finding me on LinkedIn and messaging me, and maybe 2-3 people tried in total, so it's possible! It's also not something so many people do that you wouldn't stand out by proactively trying.)

 

B. Organize a TEDx event and invite an EA speaker

• You can organize an event for your university, or another type of event such as a 'studio event', a TEDx youth event (for schools), a business event (internal to a company), a library event, and more. Anyone can take the initiative and apply for a license with TED, which (when approved) allows you to use the 'TEDx' platform in exchange for adhering to their rules.

   Note: one of these TEDx rules is diversity of topics among the talks at an event, so an event with several speakers on EA ideas might be difficult to get approved (but it may be possible if you approach it strategically).

• Organizing a TEDx event is a large time commitment, but it could be worthwhile if you want to gain skills and career capital while offering a stage to EA ideas. I personally feel that I learned a lot (!) from organizing this (e.g. leading a team, finances, logistics, project management, etc.), and it has CV value as well, since the TEDx name looks good.

• To get a better picture of what organizing a TEDx event looks like, check out the organizer's guide (https://www.ted.com/participate/organize-a-local-tedx-event/tedx-organizer-guide) and feel free to contact me for questions/advice (alexandrabos@live.nl).

Great job on the talk! :)

I'd be curious to know in more detail how giving the books to the audience was done.

Thank you so much for giving this talk! I think it was very motivating and well-spoken.

This was my first time publicly speaking and I think you can tell.

For what it's worth, I couldn't tell. There was a slip at one point but it didn't impact the overall message at all.

Although I still got good traction, I think I could’ve done a much better job in terms of confidence and gotten even more people engaged.

I don't think the difference would necessarily have been significant. I think that people who are likely to become highly engaged and do a lot of good are probably very receptive to these ideas, and once exposed to them will easily find other resources.

By the way, I've seen people estimate the value of an extra highly engaged EA at more than ~$100,000. If your talk caused even ~3 more people to take meaningful action, or to take it sooner than they would have otherwise, it did quite a lot of good!

Such an amazing talk, well done!! :) 
