

Note: The Forum has been postponed to later in 2025.  We’re working hard to bring you an incredible event – stay tuned for more details!

-----------------------------------------------------------------------------

We’re excited to announce the Vancouver Forum for Effective Altruism!

Whether you’re new to Effective Altruism or have already made high-impact work part of your life, you’re invited to join the Forum.

Join us April 4–5 in Vancouver, Canada, to make new connections, learn about the latest in EA, and enjoy a meal together.

 

Feel free to join for either or both events:

  • Friday Dinner and Social: April 4, 6–9 pm
  • Saturday Forum: April 5, 10 am–5 pm
    • Career workshops
    • Talks from local EAs
    • Networking opportunities
    • Vegan lunch provided

 

Register here (note: event postponed)

 

Tickets are Pay What You Can; here’s a great link to help guide your decision.

 

 

A big thanks to Rethink Charity for sponsoring this event!

Comments

What is the deadline to apply? :)

Great question, Gergo! Tickets are available up until the event on April 4. However, there is a risk that they sell out before then, as seats are limited, so we recommend locking in your spots ASAP.

Gotcha! One bit of feedback I have is that unless you expect to fill all spots easily, it might be easier to get a sense of the number of people interested if you put out a deadline. You can still extend it or switch to rolling applications afterward. My experience is that people otherwise just wait until the last moment (most applications come on the same day as the deadline, at least for the courses I ran).

Seems like this will be a great event. Thank you, Danielle and Rethink Charity!

Thank you for your support Eric!
