
Crosspost. Edit: added target group description; announcing that the first application round is closed.


###############
First application round closed:

We closed the first application round on Friday, 21st February. Confirmations will be sent on Sunday the 22nd.

Only very limited space remains for further participants. Feel free to apply until Friday, 28th February 2020.
###############


NEAD (Netzwerk für Effektiven Altruismus Deutschland), in collaboration with EA Oxford, is excited to announce a weekend workshop with senior biosecurity experts.

In a nutshell

  • Weekend workshop in Oxford, 2nd of May, Friday evening – Sunday noon
  • 20–25 participants, European audience
  • Speakers: Jaime Yassif (NTI), Cassidy Nelson (FHI), Piers Millett (FHI, WHO)
  • Depending on the funding situation: free / ticket fee (80–150€) / mini-stipends
  • Target group: EA aligned; Ambitious to contribute to GCBR reduction; Applicants from diverse backgrounds welcome

Goals:

Develop, communicate, and establish a responsible, cautious, and supportive culture for EAs contributing to the delicate area of GCBR reduction.

  • Map the current biosecurity space
  • Understand what defines ‘robustly good’ in the context of GCBR reduction
  • Develop concrete, focused and actionable ways of contributing to GCBR reduction
  • Provide follow-up actions for participants and the EA biosecurity community as a whole

Further Information and application

See our blogpost on NEAD's website.

Comments (8)

That's great!

I've been working on a biosecurity event (Catalyst) that's happening later this month in SF. It's going to be a larger and less purely EA audience (and thus I expect it to have less of a working-group atmosphere) but I'd be happy to connect afterwards and share any takeaways on biorisk event organization.

Absolutely :)

Side note: I appreciate how accessible you made this.

I appreciate the "solidarity ticket" system & the public (but not too prominent) announcement here in the forum. I have the impression retreats like this one are often shared within closed networks only and/or have high entrance barriers (fees etc.), and to me, this seems like a better way to actually reach the most relevant people, even if they're not in the right networks already. There might be more considerations that I'm not aware of, though.

Is there already an online platform for EAs in biosecurity (besides the Facebook group)?

Other subgroups (EA Community Builders, EAs working in policy, EAs in Operations, etc.) have active discussions in Slack workspaces, which seem to provide great value. If you don't have that yet, I'd consider starting one and thinking carefully about criteria before inviting people (see the "Start with Who" Medium blog post).

If this does exist, I, as an EA community builder, would appreciate hearing about it so I can direct relevant group members there. Please comment or email me (manuel.allgaier@ea-berlin.org). Thanks!

This seems potentially high value, thanks a lot for the initiative!

Do you have a specific target audience in mind? E.g. people with a biology or policy background, people already working in biosecurity, or just (EA-aligned) people generally interested in the field? I'll forward it to some EAs working in biosecurity now; happy to share it further if you wish.

All the best for your workshop!

In the application form we state:

Target group:
- EA aligned
- Ambitious to contribute to GCBR reduction
- Applicants from diverse backgrounds welcome

Nice! Am I remembering correctly that this workshop was seeded during a conversation at the last EAG in London?

Indeed :)
