
I'm thinking of creating a group for EAs applying for jobs. If there already is one, please say where it is.

The vision would be a place where people can encourage one another when they get rejections, point to resources, read each other's applications, etc.

So imagine such a group existed. Where would you expect to find it? Then I'll set one up there. 



6 Answers

An independent Discord

This is an invite which won't expire: https://discord.gg/BEFBqX6Qap

I made a Discord. Anyone can join here: https://discord.gg/aYS7Hb2s

maximumpeaches: That invite link no longer works. Can you share steps on how to join? Thanks.

Nathan Young: Sorry. https://discord.gg/BEFBqX6Qap

I like that kind of platform, but Slack seems more widely used by EAs than Discord. If this gets the most votes, I'd consider checking whether people actually prefer Discord or would rather use Slack.

Nathan Young: I think Slack has higher barriers to entry. I'm going to try a Discord, and if it gets no uptake I can try something else.

Maybe the existing EA Career Discussion Facebook group would be a good place to start such peer support groups, and then they could create their own Facebook groups / Slacks / etc.?

Nathan Young: I think anyone is welcome to create their own Facebook equivalent, but I am going to make a Discord.

Slack (EA Anywhere, EA Global, or a new Slack)

I think you have a better chance of success if you use existing suitable platforms instead of setting up new ones.

EA Anywhere seems the best fit for this, so I'd recommend checking with Marisa (the EA Anywhere organiser) whether she thinks their Slack would be a good place for it.

Nathan Young: Sure, but you only have a better chance among the people who are already on the Slack. EA Anywhere has 176 members; that doesn't seem like much additional benefit.

As part of another EA Discord

Did you end up creating the Discord server? I tried to follow the invite link you posted here, but it didn't work. I would like to join if possible.

An open WhatsApp group pointed to by a forum post

WhatsApp does not have a threads feature, so I'd expect this to get chaotic rather quickly.

Comments

I think this is a good idea, especially if the person managing the group is really good at community building and encouragement. I think this works better as a Slack or Discord, so that multiple channels can be created for different groups, such as ones based on cause areas or career-path interests, or for forming cohorts or accountability partnerships. I think the ideal would be groups of ~3-6 people encouraging each other and being transparent about their EA job applications. Those groups could then transition to Facebook or WhatsApp groups if they prefer that.

They could probably have one social call to meet each other and share where they plan to apply, then post brief weekly updates, and have a catch-up call every month or two.

I think it's worthwhile, and lots of local groups do that already (like EA Berlin). 

Maybe local groups can do most of this, since they have the benefit of people being able to meet in person (after corona) and sharing a similar culture, job prospects, etc., and then everyone who doesn't have a local group nearby can use EA Anywhere?

I'd also recommend asking on the EA Groups Slack. Are you on there yet? The invite link should be somewhere on www.eahub.org, but let me know if you can't find it.
