
I am usually very curious to get a taste of the overall atmosphere and insights gained from EAGs or EAGx events that I don't attend. These gatherings, which host hundreds or even thousands of Effective Altruists, are valuable opportunities to exchange knowledge and offer a snapshot of the most pressing EA themes and current projects. I attended EAGxNordics and, as a student, will share my observations in bullet-point format. Thank you, Adash H-Moller, for the great comments and suggestions. Other attendees are very welcome to add their experiences or challenge these perspectives in the comments:

Lessons learned:

  • The majority of participants seemed to come (unsurprisingly) from Sweden, Norway, Finland, Estonia, Denmark, and the Netherlands: small countries with relatively tight-knit EA communities.
  • I was particularly impressed with the line-up of speakers from non-EA-labeled think tanks and institutes. This is a strong benefit, especially for EAs who know the movement well but would otherwise not hear about these adjacent initiatives, and it reduces the extent to which we stay in our own bubble.
  • I talked to numerous participants of the Future Academy, all of whom learned about EA through that program. They brought great experience in policy, entrepreneurship, and education (from before they knew about EA), and I think they are a great addition to the community.
  • Attendees could be more ambitious, both in their conference experience and in their approach to EA. I spoke to too many students who had fewer than five 1-on-1s planned, even though these are regarded as one of the best ways to spend a conference. Also, in the career plans and EA projects I asked about, I would have loved to see bigger goals than the ones I heard.
  • I attended talks by employees of GFI, Charity Entrepreneurship, and the Simon Institute. What they had in common:
    • They work on problems that are highly neglected (one speaker quoted a podcast: “No one is coming, it is up to us”)
    • They do their homework thoroughly
    • A key factor for their impact is their cooperation with local NGOs, governments and intergovernmental organizations. 
  • (Suggested by Adash) The talk by an employee of Nähtamatud Loomad (‘Invisible Animals’) was great and provided useful insight into what corporate lobbying actually looks like on the ground. I think specific, object-level content is great for keeping us grounded.
  • There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently; I asked a few people exactly those questions.
  • A lot of people talked about AI Safety
    • I felt there was a large group of students who were excited about contributing to this field 
    • Participants with other backgrounds mentioned this as well, and multiple participants voiced a preference for more balanced content/narrative around topics like global development, animal welfare, etc.
    • (Suggested by Adash: N is small so take with a pinch of salt) I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.


Comments (4)



Thanks for this feedback and insight!

There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently

I think I disagree here. In my opinion, past EAGx events have had too much focus on the EA community and I think the same can be said of this forum. I expect this is because many people (esp. newer members) have opinions about the EA community, whereas far fewer have expertise in object-level challenges.

I'm glad this event corrected for that. It's possible it over-corrected, but I'm not convinced.

I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.

I think people new to EA not knowing a lot about the specific cause areas they're excited about isn't more true for AI x-risk than for other cause areas. For example, I suspect that if you asked animal welfare or global health enthusiasts who are as new as the AI safety folks you talked to about the key assumptions behind different animal welfare or global health interventions, you'd get similar results. It just seems to matter more for AI x-risk, since having an impact there relies more strongly on having better models.

This is absolutely the case for global health and development. Development is really complicated, and I think EAs tend to vastly overrate just how certain we are about what works best.

When I began working full time in the space, I spent about the first six months getting continuously smacked in the face by just how much there is to know, and how little of it I knew.

I think introductory EA courses can do better at getting people to dig deep. For example, I don't think it's unreasonable to have attendees actually go through a cost-effectiveness analysis (CEA) by GiveWell and discuss the many key assumptions that are made. For a workshop I did recently for a Danish high school talent programme, we created simplified versions which they had no trouble engaging with.
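For concreteness, here is a minimal sketch (in Python) of the kind of simplified cost-effectiveness exercise described above. Every number and parameter name is a hypothetical placeholder I made up for illustration, not a GiveWell figure; the point is only that each assumption is explicit so participants can question and vary it.

```python
# Toy, simplified cost-effectiveness sketch in the spirit of the workshop exercise.
# All values below are hypothetical placeholders, not GiveWell estimates; each one
# is a discussion prompt ("what would change if this assumption were different?").

cost_per_net = 5.00              # USD to buy and deliver one bed net (assumed)
people_protected_per_net = 1.8   # average people covered per net (assumed)
baseline_mortality = 0.004       # annual malaria deaths per person protected (assumed)
mortality_reduction = 0.20       # fraction of those deaths a net prevents (assumed)
years_of_protection = 2          # effective lifespan of a net (assumed)

def cost_per_death_averted(budget: float) -> float:
    """Return the implied cost per death averted under the toy assumptions above."""
    nets = budget / cost_per_net
    people = nets * people_protected_per_net
    deaths_averted = people * baseline_mortality * mortality_reduction * years_of_protection
    return budget / deaths_averted

if __name__ == "__main__":
    budget = 100_000  # hypothetical donation size in USD
    print(f"Toy estimate: ${cost_per_death_averted(budget):,.0f} per death averted")
```

Even a stripped-down model like this surfaces the key judgment calls (baseline mortality, effect size, durability) that a real CEA has to defend.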

Participants with other backgrounds mentioned this as well, and multiple participants voiced a preference for more balanced content/narrative around topics like global development, animal welfare, etc.
 

Thanks for this feedback! FWIW I agree the balance was off here!
