About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles, and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize viewers with a mix of long-form and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we’re still deciding what form her content creation will take.]
We hope to bring our own personalities and perspectives to these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the coming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore the AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction or, in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the br
I have been thinking about this idea of "effective altruism" for a while, but I have a couple of more fundamental questions about it.
The first is purely practical: why is it that, for contributions to a cause to do a lot of good, the cause must specifically be one that not many people are working on? Ultimately, we need everyone doing good, because evil is an intolerable path for a human to live by, and one could argue that the absence of good is at least "half of evil". But if we take that seriously, then we will necessarily have lots of people working on lots of issues.
The second is more philosophical, and related to that "we need everyone doing good" and "evil is intolerable": is this "effective altruism" not merely a method for making moral decisions, but also a moral judgment to pass on other people? That is, if you don't help as many people as someone else because of what you lack (money, talent, circumstances, etc.), are you a more evil or less good person, even if you are still making the best choices with what you do have? If so, is that sort of "relative evil" tolerable? And if it is tolerable, why call it "evil" at all, when for the label to be morally meaningful (that is, relevant to how we should and should not act) it must imply a certain level of intolerance?
The reason I ask is that for a while now I have been dogged by the feeling that I am an evil person, and that I am not being recognized and judged accordingly, because my mind seems to naturally operate on a framework broadly along these lines, one that invites comparisons based on total utility generated, with attendant self-flagellation.
two big things:
one: replaceability often nukes the utility of doing something. Let's say I'm going to get a job at Redwood. There is some expected value from my outputs, but the real calculation is [expected value of my outputs] - [expected value of the outputs of whoever would have been hired instead of me]. Of course, I'm also freeing up that person's time by taking the job, so there is a sort of cascade, but in many cases the choice is between them getting hired and them doing not much. (See the rough worked example after this list.)
two: the vast majority of people aren't trying at all to do a lot of good, so naturally, if you are, you will do things that few others are trying to do.
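To make the replaceability math in item one concrete, here is a rough sketch. The notation and the numbers are made up purely for illustration; only the structure of the calculation comes from the comment above:

\[
\Delta\mathrm{EV} = \mathrm{EV}(\text{you in the role}) - \mathrm{EV}(\text{runner-up in the role}) + \mathrm{EV}(\text{runner-up's next-best activity})
\]

For instance, if taking the job yields an expected 100 units of value, the runner-up would have produced 80 in the same role, and they would produce 5 doing something else instead, then the counterfactual impact is \(100 - 80 + 5 = 25\), a quarter of the naive estimate of 100.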
I am also curious about another thing. Over my 31 years of life, spent mostly behind a computer, I have identified the three biggest challenges facing humankind so far: an unhealthy relationship with nature; the lack of a socio-cultural-political milieu that provides a solid guarantee of global peace (just look at Russia now!); and the lack of a similar milieu for the ethical development and deployment of technology.
What do you think?
Moreover, given that I am hopefully at a point where I can finally transition from mental health recovery and college to a "proper" career, and break free of the shackles of the computer screen: what should I be aiming at if I want to maximize utility on all these fronts? Why should I accept that answer, why should I accept the evidence for it, and where can I find counterarguments to those whys?