EAG London ‘25, one of the largest EAGs ever, starts next week (June 6–June 8). 

I’ve created this thread so that we can start queuing up some impactful conversations before the conference begins (and so that Forum users who aren't attending can participate in the discussion). It’ll be pinned until the end of EAG (June 8). 

Reminder: you can find all the attendees and set up meetings with them in Swapcard.

If you’re coming to EAG, consider leaving a comment with:

  • Any takes you want to discuss/stress-test at EAG
  • Uncertainties you want to resolve at EAG
  • Goals you have for the conference

These are all suggestions. You can also share Forum posts you have written, as well as any other information that might help people decide to meet with you. Feel free to take part in the discussion even if you aren't coming to EAG. 

Also, if you are attending, don’t forget to include your Swapcard link.

And to add a bit of context for readers, you can (optionally) vote on this debate slider to label your comment with where you sit on the mentee–mentor axis (vote before you comment; you’ll be prompted to add your comment once you vote).

Do you expect to be more of a mentor or a mentee?
44 votes. Voting has now closed.

To define the terms, a mentor is someone who expects that the most impactful thing they can do at EAG is share their network and knowledge with mentees. A mentee is someone who can have a much greater impact if they connect with the right mentors. Reminder — including this information is optional. 


I'm going to be doing an internship at one of the leading NLP labs in a country in Eastern Europe; they publish a few papers at ICLR and NeurIPS every year. I have a chance to come up with an idea and convince them to work on it with me. They have no idea about AI safety and have never heard of EA or LW, but they are somewhat culturally aligned (if you squint a bit you could say that they operate by Crocker's Rules, but they would call it having a direct, no-bullshit approach).

My goal is to find a very concrete idea to work on that is relatable for people with an NLP background. I'm thinking of this as more of a field-building project than an AI safety project.

My main goal for this EAG is to find great material to include in my essay about broiler chickens and the Better Chicken Commitment campaign (to be published early 2026 in French). I'm looking for embodied characters, epic campaigning stories and powerful anecdotes.

As writing a book is half writing and half developing your public image as an author, I'd also love to connect with folks who have experience as: public figures, opinion leaders, social media influencers, or best-selling authors.

Here's my Swapcard :)

Hey! I have written this post to help people get the most out of conferences. Perhaps it can be a useful read before the conference :)

Tacit knowledge: how I *exactly* approach EAG(x) conferences

I'd love to talk to people broadly interested in ways to reduce the burden of extreme suffering in humans, which I think is weirdly neglected in EA. The vast majority of global health & wellbeing work is based on DALY/QALY calculations (and, to a lesser extent, WELLBY), which I believe fail to capture the most severe forms of suffering. There's so much low-hanging fruit in this space, starting with just cataloguing the largest sources of extreme human suffering globally.

I'm eager to talk to potential collaborators, donors, and really anyone interested in the topic. :)

Here's my Swapcard.

At this EAG I'm hoping to talk to some great Forum authors who I haven't met in person yet, but also to identify people who could get (and generate) a lot of value from writing on the Forum, but haven't tried it.

You can help me (and the Forum) by recommending:

  • People who are coming to EAG with expertise and writing ability, but no Forum presence.
  • People who've been in the EA bubble for a while, and want to do a better job of sharing their work. 

Here's my Swapcard. (NB: edited link. To get the link to your own profile, look yourself up in the attendees tab.)

I'm running a Lake District EA summer holiday this year!

https://sites.google.com/view/ea-lakes-2025/

Friday 22nd–Tuesday 26th August 2025 (over the August bank holiday)
We're gonna hit up a bunch of the cool northern Lake District sites (Keswick, Ennerdale, Loweswater) in the afternoons, and have attendee-submitted SIG slots in the mornings and evenings. It's £120 if you can afford it, £60 if you can't or if you give £100+ to an effective charity and send the receipt. Not for profit, all surplus donated to an effective charity chosen by attendees.

I want to talk to and connect with people about UK EA community building, especially what I can do to support and engage GWWC pledgers and anyone involved in undercompensated direct work in a cause-neutral, person-empowering way. I'm not at EAG (too expensive) but please come along to my thing or reach out to me on the Forum if you want a chat!

You can get a subsidized free ticket if you apply for it :-)

I did the maths and that cancels out half my donations for the whole year :(

Great reasoning! If you haven't already, I'd also include in the equation 1) how much you think you'd contribute to others' impact (inspiring donation percentages?) and 2) how much attending would improve your own (a new career, new projects, new donation opportunities discovered). These events are well-funded for generally pretty good reasons :)

It'd work out at a lower event costing, I reckon! But not at this one, assuming extrapolation from previous EAGs.

Don't worry I'll just go to other stuff. Heck, I will run other stuff. And I will turn up when it's cost-effective!

The event costings for (some) EA events I've seen bandied about everywhere seem... really expensive. I'm extremely sure we can get cheaper things going with the right kinds of community connections and skillsets to draw on. And I say that as someone with event-runner experience.

Stupid question: new meeting requests don't seem to automatically pop up on my Swapcard. They're probably the most important thing there, but they're hard to find.

Can anyone suggest a way to easily find them? I don't want to miss any!

You should be able to see all of them in the "my event" section.

Thanks that's helpful!

Would still love an easier place to find them :D


I'm very active in space governance and I'm excited to chat about how that crosses over with many other EA cause areas. 

Link to my swapcard


Ready to share my community-building insights and experiences, but there to hash out some professional cruxes.

So much big picture, so few details

Do you expect to be more of a mentor or a mentee?

From previous EAG(x) experience, adjusted for expected partial success at being more of a mentee (would be otherwise more mentor-like).
