EAG London ‘25, one of the largest EAGs ever, starts next week (June 6–June 8).
I’ve created this thread so that we can start queuing up some impactful conversations before the conference begins (and so that Forum users who aren't attending can participate in the discussion). It’ll be pinned until the end of EAG (June 8).
Reminder: you can find all the attendees and set up meetings with them in Swapcard.
If you’re coming to EAG, consider leaving a comment with:
- Any takes you want to discuss or stress-test at EAG
- Uncertainties you want to resolve at EAG
- Goals you have for the conference
These are all suggestions. You can also share Forum posts you have written, as well as any other information that might help people decide to meet with you. Feel free to take part in the discussion even if you aren't coming to EAG.
Also, if you are attending, don’t forget to include your Swapcard link.
And to add a bit of context for readers, you can (optionally) label your comment with where you sit on the mentee–mentor axis via this debate slider: vote first, and you’ll be prompted to add your comment once you vote.
To define the terms: a mentor is someone who expects that the most impactful thing they can do at EAG is share their network and knowledge with mentees; a mentee is someone who could have a much greater impact by connecting with the right mentors. Reminder: including this information is optional.
I'm going to be doing an internship at one of the leading NLP labs in a country in Eastern Europe; they publish a few papers at ICLR and NeurIPS every year. I have a chance to come up with an idea and convince them to work on it with me. They have no idea about AI safety and have never heard of EA or LW, but they are somewhat culturally aligned (if you squint a bit, you could say they operate by Crocker's Rules, though they would call it having a direct, no-bullshit approach).
My goal is to find a very concrete idea to work on that is relatable for people with an NLP background. I'm thinking of this as more of a field-building project than an AI safety project.