
Crosspost; edit: added target group description; announcing closed first application round


###############
First application round closed:

We closed the first application round on Friday, 21st February. Confirmations will be sent on Sunday, 23rd February.

Only very limited space remains for further participants. Feel free to apply until Friday, 28th February 2020.
###############


NEAD (Netzwerk für Effektiven Altruismus Deutschland) in collaboration with EA Oxford is excited to announce a weekend workshop with senior biosecurity experts.

In a nutshell

  • Weekend workshop in Oxford, 2nd of May, Friday evening – Sunday noon
  • 20–25 participants, European audience
  • Speakers: Jaime Yassif (NTI), Cassidy Nelson (FHI), Piers Millett (FHI, WHO)
  • Depending on the funding situation: free attendance, a ticket fee (€80–150), or mini-stipends
  • Target group: EA aligned; Ambitious to contribute to GCBR reduction; Applicants from diverse backgrounds welcome

Goals:

Develop, communicate and establish a responsible, cautious and supportive culture for EAs contributing to the delicate area of GCBR reduction. Specifically:

  • Map the current biosecurity space
  • Understand what defines ‘robustly good’ in the context of GCBR reduction
  • Develop concrete, focused and actionable ways of contributing to GCBR reduction
  • Provide follow-up actions for participants and the EA biosecurity community as a whole

Further Information and application

See our blogpost on NEAD's website.

Comments



That's great!

I've been working on a biosecurity event (Catalyst) that's happening later this month in SF. It will have a larger and less purely EA audience (so I expect less of a working-group atmosphere), but I'd be happy to connect afterwards and share any takeaways on biorisk event organization.

Absolutely :)

Side note: I appreciate how accessible you made this.

I appreciate the "solidarity ticket" system and the public (but not too prominent) announcement here on the forum. I have the impression that retreats like this one are often shared only within closed networks and/or have high entrance barriers (fees etc.). To me, this seems like a better way to actually reach the most relevant people, even those not yet in the right networks. There might be more considerations that I'm not aware of, though.

Is there already an online platform for EAs in biosecurity (besides the Facebook group)?

Other subgroups (EA Community Builders, EAs working in policy, EAs in Operations etc.) have active discussions in Slack workspaces, which seem to provide great value. If you don't have that yet, I'd consider starting one and thinking carefully about criteria before inviting people (see the "Start with Who" Medium blog post).

If this does exist, I, as an EA community builder, would appreciate hearing about it, so I can direct relevant group members there. Please comment or email me (manuel.allgaier@ea-berlin.org). Thanks!

This seems potentially high value, thanks a lot for the initiative!

Do you have a specific target audience in mind? E.g. people with a biology or policy background, people already working in biosecurity, or just (EA-aligned) people generally interested in the field? I'll forward it to some EAs working in biosecurity now, and I'm happy to share it further if you wish.

All the best for your workshop!

In the application form, we state:

Target group:
- EA aligned
- Ambitious to contribute to GCBR reduction
- Applicants from diverse backgrounds welcome

Nice! Am I remembering correctly that this workshop was seeded during a conversation at the last EAG in London?

Indeed :)
