
TL;DR:

Apply now
  • Applications for EAG Bay Area close this Sunday, Feb 9th!
  • It’s already on track to be one of our biggest ever US EAGs, but we’d like to make it even bigger.
  • Due to our current catering and venue costs, it’s relatively cheap at the moment to add extra attendees. So please don’t avoid applying because you’re worried about cost or taking someone else’s space!
  • If you’ve been accepted already, please register as soon as possible to confirm your place and get access to Swapcard.

Updates and Reminders

We wanted to share a few quick updates and reminders about EAG: Bay Area, happening Feb. 21–23. We’d love for you to apply (deadline Feb. 9th), and encourage friends and colleagues (especially ones in/near the Bay Area) to apply, perhaps by sharing this post! 

  • Unlike last year, this year’s EA Global: Bay Area will not focus solely on global catastrophic risks; it will cover the same breadth of causes as our other EAGs. We're dropping the GCR focus because CEA is aiming to focus on principles-first community building, and because a large majority of last year's attendees said they would have attended a non-GCR-focused event anyway.
    • We welcome applications regardless of the cause you’re focused on, as long as it’s informed by EA principles.
  • We recently wrote a post discussing the admissions bar for EA Global, and readers told us that our acceptance rate (~84%) is higher than many of them had assumed. The post also explains why we have an admissions process, what we look for in applications, and why you should apply.
  • As of 2024, we’ve decided to weigh EA context less heavily for those with significant work experience, to encourage engagement from more mid- to late-career professionals.
  • Given our current contracts with our catering provider and venue, it is relatively cheap for us to accept additional attendees. Please don’t avoid applying out of concern you might be taking up someone else’s spot.

Why you should apply to EA Global

We suspect there are many people who clearly meet the admissions bar who are not applying

While the number of applications to EA Global in 2024 was more than double what it was five years earlier, applications have declined in each of the last two years. There are likely several contributing factors here, such as a reduction in travel support availability, general trends in community building, and limited marketing efforts over the past couple of years. (We're excited though that we're on track to reverse this trend and increase attendance significantly in 2025!)

We also suspect that the declining numbers could, in part, be influenced by a widespread belief that the admissions bar for EA Global is high. However, this past year we approved around 84% of applications. We suspect there are many people who would clearly meet the bar but are not applying.

In general, we are excited to receive more applications in 2025 and beyond; one of our core aims moving forward is to increase EAG attendance while maintaining attendee satisfaction and curation. 

Events are expensive; we don’t want this to deter applications

EA Globals cost a significant amount of money to run. Some anecdotal feedback suggests that people in our core target audience are not applying for fear that the value they expect to gain is not worth the cost to our team. 

While we sincerely appreciate support from attendees and thoughtfulness towards the cost of our events, we believe that subsidising EA Global attendance is a good use of EA resources, based on analyses of our feedback surveys and actions taken by attendees as a result of the event. Additionally, the marginal costs of extra attendees can vary due to a range of considerations, including various fixed costs that come with events. In the case of EAG: Bay Area, because of minimum spend requirements in our catering contract, we can absorb more attendees without increasing our food costs by much. Our team has the best context on relevant cost considerations—getting admitted to an EAG means that we are willing to cover your attendance. Feel free to defer to our judgement. 

If you’re considering applying to EA Global, we encourage you to apply. If you would like support from our team in deciding whether EA Global is right for you, please reach out to hello@eaglobal.org or comment below. 

How others have gotten value out of EA Global

We believe EA Global has a strong track record of providing value to attendees:

  • Many EA orgs come to EAGs actively looking to hire for roles — you’ll be able to chat directly with them at our Organization Fair, and send 1-1 meeting requests to relevant employees.
  • Many senior professionals in a variety of EA cause areas attend EAG with the main aim of providing advice and mentorship to newer community members. EAG is a great place to speak to people who work in career paths you're considering and get advice.
  • People working at the cutting edge of many cause areas will be attending and able to share their insights through talks, workshops and 1-1s.

Below, you can see some highlights from EAG Boston:

 

And we're excited to preview some of the great speakers we'll be hosting:

 

Let us know if you have any questions in the comments or by emailing hello@eaglobal.org.

Comments



Just submitted my application. This post was the encouragement that motivated me to apply, so thank you.

Just wanted to say, for anyone on the fence about attending these conferences, DO IT!

I just returned from it, and am leaving lit up and inspired about all the people and organizations working so hard to change the world. There's a lot of negativity out there, but also - so much great work being done.

If you're considering diving in deeper into EA, please do it (and tell everyone you know!). 

I feel like this is too short notice for EAG conferences. Three weeks is not a lot of time between receiving your decision and making arrangements to fly to the Bay Area. Maybe it is because I am a student.

I'm not from the EAG team - but this event was actually announced and advertised a long time ago. This (from what I understand) is a last push to get extra attendees :)

That's correct, thanks Toby :) Although, it's really important for us to know if our advertising has been reaching people. We definitely want to know if this post is the first time someone's hearing about EAG, especially if they would have attended had they heard about it earlier. 

I would like to attend, but I have too much schoolwork and would have trouble catching up. A summer conference would be more accessible for me as a college student.

Thanks Wyatt, we're aware these timings can be hard for students. We're looking into what we could organise in the summer to be more accessible.
