
Tl;dr: This weekend (15-17 November), we will broadcast 30+ hours of live EAGxVirtual 2024 speaker talks! Everyone, including those who aren't registered for the conference, can watch.

View the schedule and watch live here. Comment on this post to discuss the livestream.

This is part of our goal of making content and opportunities accessible to the global EA community. 

The livestream is only a portion of the programming available to registered attendees. Full access includes 20+ office hours, 25+ meetups, workshops, and access to Swapcard networking with 1000+ attendees. Sign up to get notified about the next virtual conference. 
 

———
 

In our announcement post, we explained why we are hosting a virtual event – to connect the global EA community. We'll do this by providing access to EA content, opportunities to meet others through Swapcard 1-on-1s and cause area/affinity group meetups, visibility for high-impact opportunities, and other unique programming.

In our update post, we shared a preview of the programming, highlighting a few speakers, the organisation fair, meetups, lightning talk session, and mentorship program. 

 

What we are excited about this weekend

Welcoming EA community members from 86 countries

  • 27 attendees are the only registrants from their country, so this may be one of the best opportunities they have to engage with EA ideas and the community. 
     

Seeing a record 100+ content sessions

  • While in-person EAG events emphasize 1:1s, we are less certain this is the best focus for EAGxVirtual. Many attendees are newcomers to EA, so we’ve designed programming to help them start engaging with ideas and connect with their regional communities. We don’t have all the answers yet and hope to learn more in the coming weeks.

 

Why we’re publicly livestreaming the talks

EAGxVirtual 2024 offered free registration, so it might seem like everyone interested would already be signed up.

We’ve actually received emails from attendees asking if they should attend. For every question we received, there are likely others who have the same doubts. Here are some hesitations that people have expressed: 

  • They cannot commit to the entire weekend or confirm their schedule in advance
  • Imposter syndrome/don’t feel qualified enough to attend
  • Don’t want to take the place of others
  • Uncertainty about the experience level/age of attendees
  • Past online events didn’t meet expectations

We understand that people have different work and life circumstances. So in the interest of accessibility and transparency, we decided it was worthwhile to trial publicly streaming the live talk sessions. We welcome your feedback on this decision.

This does not provide access to the other portions of the conference, such as the virtual meetups, 1:1 booking function in Swapcard, or office hours. 

These livestreamed talk recordings will be instantly available on Swapcard after a session ends. 

Watch the livestream

 

Huge thanks to the EA Community for supporting us

Although the conference hasn’t ended, we’re already so grateful to the local EA groups, partner organizations, volunteers, and community members making this weekend possible.

Here are a few screenshots from EA Manchester, AE Brasil, EA Christians, and EA Hong Kong 💙

 

How to participate

We hope that EAGxVirtual 2024 will help to seed conversations, collaborations, and new opportunities--even for people who don't attend. 

  • For registered attendees:
    • Besides watching the content sessions, we encourage you to attend meetups, book 1:1s, and engage with other attendees 
    • If you’ve attended this year’s conference, share your experience in this post's comments. It may help provide guidance or confidence to someone else in a similar situation

       

  • For people watching the livestream only:

 

Comments

A number of people invited me to 1:1s to ask me for career advice in my field, which is software engineering. Mostly of the "how do I get hired" kind rather than the "how do I pick a career path that's most in line with EA strategic priorities" kind that 80,000 Hours specializes in. Unfortunately I'm not very good at this kind of advice (I haven't looked for a new job in more than eight years) and haven't been able to find anywhere else I could send people to that would be more helpful. I think there used to be an affinity group or something for EA software engineers, but I don't think it's active anymore.

Anyone know of anything like this? If not, and if you're the kind of person who's well-positioned to start a group like this, consider this a request for one.

You are right, the EA Software Engineers group is no longer active. Their virtual events were quite useful, and you can still access the recordings and slides here.

EA Data Science group hosts events sometimes, and their channel on EA Anywhere Slack is pretty active.

In addition, I used to lead the EA Public Interest Tech Slack community, which was subsequently merged into the EA Software Engineers community (the Discord for which still exists btw). All of these communities eventually got merged into the #role-software-engineers channel of the EA Anywhere Slack.

I think there was too much fragmentation among slightly different EA affinity groups aimed at tech professionals - there was also EA Tech Network for folks working at tech companies, which I believe was merged into High Impact Professionals.

I'm not sure why the EA SWE community dissipated after all the consolidation that occurred. I think the lack of community leadership may have played a role. Also, it seems like EA SWEs are already well served by other communities, including AI safety (for which a lot of SWEs have the right skills) and effective giving communities like Giving What We Can (since many SWE roles are well-paid).

Lingering thoughts on the talk "How to Handle Worldview Uncertainty" by Hayley Clatterbuck (Rethink Priorities):

The talk proposed several ways that altruists with conflicting values can bargain in mutually beneficial ways, like loans, wagers, and trades, and suggested that the EA community should try to implement these more in practice and design institutions and mechanisms that incentivize them.

I think the EA Donation Election is an example of a community-wide mechanism for brokering trades between multiple anonymous donors. To illustrate this, consider a simple example of a trade, where Alice and Bob are donors with conflicting altruistic priorities. Alice's top charity is Direct Transfers Everywhere and her second favorite is Pandemics No More. Bob's top charity is Lawyers for Chickens, and his second favorite is Pandemics No More. Bob is concerned that Alice's donating to Direct Transfers Everywhere would cancel out the animal welfare benefits of his donating to Lawyers for Chickens, so he proposes that they both donate to their second choice, Pandemics No More.

The Donation Election does this in an automated, anonymous, community-wide way by using a mechanism like ranked-choice voting (RCV) to select winning charities. (The 2024 election uses RCV; the 2023 election used a points-based system similar to RCV.) Suppose that Alice and Bob are voting in the Donation Election—and for simplicity, we'll pretend that the election uses RCV. If their first-choice charities (Direct Transfers Everywhere and Lawyers for Chickens) are not that popular among the electorate, those candidates will be eliminated, and Alice and Bob's votes reallocated to Pandemics No More. This achieves the same outcome as the trade in the previous example automatically, even though Alice and Bob may not have ever personally met and agreed to that trade.

Update: The 2024 Donation Election is using straight-up ranked-choice voting; details here.
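As a rough illustration of the reallocation mechanism described above (not the Donation Election's actual implementation), here is a minimal sketch of single-winner ranked-choice elimination in Python. The ballots reuse the hypothetical charities and voters from the example; only the general elimination-and-reallocation idea is taken from the comment above.

```python
# Minimal sketch of ranked-choice elimination, assuming a simplified
# single-winner rule; the ballots below reuse the hypothetical charities
# from the example above and are not actual Donation Election data.
from collections import Counter

def rcv_winner(ballots):
    """Repeatedly eliminate the least-popular first choice and reallocate ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        if not tallies:
            return None  # every ballot is exhausted
        top, votes = tallies.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top  # a candidate holds a majority of first choices
        loser = min(tallies, key=tallies.get)
        # Strike the eliminated candidate from every ballot, so those votes
        # flow to each voter's next-ranked choice (as Alice's and Bob's do).
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = [
    ["Direct Transfers Everywhere", "Pandemics No More"],  # Alice
    ["Lawyers for Chickens", "Pandemics No More"],         # Bob
    ["Pandemics No More"],                                  # a third voter
]
print(rcv_winner(ballots))  # -> Pandemics No More
```

Running this, both Alice's and Bob's first choices are eliminated for lack of support and their votes flow to Pandemics No More, reproducing the "trade" outcome without the two donors ever coordinating directly.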

Making EAGxVirtual more accessible has been our aspiration since 2022, and I'm excited we reached a new milestone with the public livestream!

Despite the common advice to "focus on 1-1s, you can always watch talk recordings later", I found most people (including myself!) only watch talks if they attend them live. And the virtual conference allows us to invite speakers from anywhere in the world who wouldn't be able to present at in-person EAGx.

Here are some sessions I'm personally excited about:

You can access recordings of past talks and explore the agenda here.

There are several talks that aim to provide frameworks and considerations for approaching career choices. 

Career-related talks:

Cause area-specific career talks:

 

Several interactive workshops are available to EAGxVirtual 2024 attendees only:

  • Career Impact Workshop: Finding a Role That's Good For You and Good For the World
  • More Than the Obvious: Unexplored Paths to High-Impact Careers
  • Career transition strategies