Geography has a lot to do with your economic status, including your access to knowledge and opportunities. I am from Zambia and was privileged to stumble upon the Effective Altruism community, which aligns with SOME of my values and the kind of life I want to live. There are a lot of conversations on this forum I feel I can't participate in yet because of my geography and my limited work/academic experience and network so far, though I probably will someday.

I am so motivated to do this because I never had any career guidance at any point in my life. After five years of working as a product design consultant, co-founding some businesses, and completing a master's degree, I still haven't resolved what I want to do with my career, so I am very happy to be building an EA community in my country while I engage with the 80,000 Hours book online and in person.

I am also being mentored by Felix Lee, co-founder of ADPList, which is great because I can learn from his experience moving from product design to his mission of fixing the shortcomings in the current education system that hold people back from pursuing potential career paths.

Some goals that I have and will track in this thread for accountability:

  • Grow the page's following on Instagram and TikTok to 1000.
  • Facilitate monthly online meetings where followers meet local professionals working in EA cause areas.
  • Connect with companies like Udemy, DataCamp, and Coursera to provide scholarships to the community, because most of us can't afford the few hundred dollars those courses cost.
  • THIS IS A BIG ONE - Organise one career fair for students and graduates.

On a more personal note, I look forward to becoming an effective communicator in the EA community. I also have my fingers crossed that I will become a human-centered design lecturer at a university in Zambia.

Picture Credit: https://www.instagram.com/chamwa_tells_stories/

Check out the pages:

https://www.instagram.com/star.careers/

https://web.facebook.com/star.careerszm/

https://www.tiktok.com/@star.careerszm 

Comments

At least when I used it a few years back, if you just wrote to Coursera saying you were a broke student and couldn't afford the course, they'd give it to you for free. Unsure if that's still the case, but it's likely worth giving it a go and seeing what happens!

This is still the case. I used it for my Google UX certification. Not a lot of people know that, and many turn away at just the thought of paying for a $9 course.

Update #1

  • Initially, I wanted to use only Zambian sounds on social media to make sure I was reaching people in my country, but that didn't get me as much reach and engagement as trending sounds. I started at around 200 views but have grown to close to 900 views. I will make sure to add my location going forward.
  • There is more engagement on TikTok, in terms of reach, likes, saves, and new followers, than on Facebook and Instagram.
  • I have shared posts from chapters 1-4, but I am also mixing them with ideas that are relevant but not discussed in the 80,000 Hours book.
  • I am experimenting with different ways of delivering the content, e.g., graphic designs, curated pictures from Pinterest, and b-roll videos of myself that don't show my face. I like using videos. Once I have grasped all the ideas in the book, I intend to start showing my face and speaking more on camera to repurpose the content.

Update #2 

A lot is happening even when you think nothing is happening.

  • This is huge and maybe unrelated, but I am happy I was invited to take the Charity Entrepreneurship test task 1. I don't know if I will succeed, but it's been a very enlightening experience, and I got to explore this project as a long-term option. I have come a long way: reading about EA, interacting with fewer than five EAs so far, and talking about EA at my Toastmasters meetings. This is an important note because if you are new to the EA community or believe you are "a normie," there is still so much room for growth, regardless of your circumstances.
  • I am ready to start hosting virtual meetings, with special thanks to the EA resources. I have reached the stage where I need to focus on cause areas relevant to my country, for example, how nuclear weapons relate to Zambia. This is where prioritization research skills come in 😅.
  • I am considering switching to LinkedIn to build my first set of converting audiences. The reason is that the professionals I am seeking are on LinkedIn and have their audiences there, too. However, the audience I want to reach is on TikTok.