Summary
- Recently we kicked off the first national coordination forum between EA-adjacent groups focused on AI safety in the Netherlands
- The forum aims to foster collaboration, update on progress, and align on priorities and theory of change
- We focused on updating and prioritisation, so there are no tangible outcomes yet
- Goal is to continue the coordination experiment with at least two more monthly meetings
- We identified the following priorities: (1) find solutions for advanced engagement post AI safety fundamentals course, (2) develop a strategy for engaging with established academics, (3) co-organise one national advocacy event, and (4) increase operational capacity
- This post is an overview for interested individuals and groups, intended as an initial reference point; we anticipate positive developments and tangible outputs in the future
Intro
In a notable step forward for the AI safety landscape in the Netherlands, we recently started an experiment: hosting a national coordination forum for field-building and advocacy initiatives. The forum aims to unite diverse stakeholders with a shared interest in advancing AI safety (AIS). We decided to meet regularly for two quarters and then assess the forum's usefulness. Prior to the forum, interactions between the different groups were largely sporadic and informal; a growing desire to understand each other's initiatives and explore potential synergies spurred this coordination effort. Several university groups independently run intro and AI Safety Fundamentals (AISF) courses, alongside several advocacy organisations. Generally, people seem excited about growing the AIS field in the Netherlands (NL) and report relatively high engagement in their groups. In the rest of the post we present (1) an overview of the groups involved, (2) recent developments, (3) bottlenecks in the safety landscape, and (4) the forum’s objectives.
Overview of Groups Involved
- Safety Initiatives led by students and researchers from various universities including Amsterdam, Delft, Eindhoven, Groningen, and Utrecht
- European Network for AI Safety (ENAIS)
- Existential Risk Observatory
- PauseAI
Each group brought one or two representatives. Not represented were groups focused on AI policy and independent researchers based in NL. While the forum is focused on field building and advocacy, we appreciate input from researchers. If you are a researcher and would like to give input, please do get in touch!
Recent Developments
We reached out to the organisers of various AI safety initiatives in NL and present some statistics about the attendees of their events:
- Amsterdam: 20 completed AISF, 6 attended hackathons
- Delft: 48 completed AISF, 55 attended hackathons
- Groningen: 40 completed AGI SF, 7 attended hackathons
- Tilburg: 19 completed AISF, 6 attended hackathons
- Utrecht: 35 completed AISF, 0 attended hackathons
In total, 162 people completed local AISF cohorts and 74 attended hackathons. The AI Safety Netherlands WhatsApp community, created at the '22 retreat, now has 115 members.
We asked groups to share the important developments of the last months:
- OpenAI event:
- Virtual talk and live Q&A with OpenAI researchers
- Led by Jelle Donders
- Held in Amsterdam, Delft, Eindhoven, and Groningen with over 1,500 attendees
- Groups in Amsterdam, Delft, Eindhoven, and Utrecht ran or are running AISF cohorts
- Amsterdam: hosted a panel discussion with professors on AI risk; held weekly AI risk lunches at the university
- ENAIS: received funding for people’s part-time salaries; prioritised work; launched various field-building projects
- Groningen: prioritised and distributed tasks in the team
- Eindhoven: hosted alignment jams (from Apart Research)
- Delft: organised paper deep dives
- PauseAI: organised multiple protests to raise public awareness and to advocate for pausing the largest training runs
Bottlenecks in the NL Safety Landscape
We asked what challenges people encounter and what bottlenecks they see in the NL safety landscape (these are unranked):
- Lack of a shared theory of change and set of assumptions
- Few academics who offer or support AIS projects
- Limited coordination between local groups
- Limited organisational capacity at the group and national level
- Not sharing opportunities between groups
- Not enough activism and public awareness for AI risks
- Catastrophic risks are a taboo topic in the media
- Gap in career funnel after AISF
- Lack of jobs in AIS, nationally as well as internationally
- Lack of practical projects that people can work on
Epistemic Status
Our understanding is based on numerous informal discussions among members before the first national coordination forum. We also conducted a survey of known group representatives, which yielded valuable insights. We have now held the first two coordination forum meetings, with more scheduled over the coming months.
Conclusions of the First Forum
These are our findings:
- Engagement at the university groups is relatively high and many people are interested in the fundamentals courses
- The main challenges lie in:
- Limited organisational capacity as many groups are run by volunteers
- Gap in career funnel for advanced and practical engagement with AIS, including jobs
- Lack of senior mentors from academia, research, and industry to foster eager talent and develop the field
We voted on candidate objectives for the forum and aligned on the following:
- Post-Course Gap Solutions: Investigate and experiment with 3 activities for advanced engagement post-AISF (e.g. mentorship programme) by the end of Q2 of 2024.
- University Engagement Strategy: Develop a strategy for engaging with established academics in the Netherlands by the end of Q1 of 2024.
- Collaborative Efforts for Events: Co-organise one national event attracting a minimum of 200 attendees by the end of Q2 of 2024.
These are tentative specifications, which we will refine and commit to over the next weeks. Lastly, we are looking into additional operational capacity to coordinate and execute field-building activities.
Our initial plan was to develop objectives and form small working groups to deliver on them. However, after the second meeting, it became clear that members currently lack the capacity to contribute to these working groups in any substantial way. Therefore, we will probably stick to sharing information and discussing issues rather than working concretely on the objectives and main challenges that AIS groups face.
We invite you to comment on this post or reach out regarding the bottlenecks we highlighted and potential solutions. Or let us know what your group is doing well or finds challenging!