About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long-form and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we’re still deciding on what form her content creation will take]
We hope to bring our own personalities and perspectives on these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the upcoming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction or, in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the br
Is there anyone doing research from an EA perspective on the impact of Nigeria becoming a great power by the end of this century? Nigeria is projected to be the world's third most populous country by 2100, and its GDP per capita appears to be growing, potentially exponentially. I'm not claiming that Nigeria being a great power in 2100 is a likely outcome, but nor does it seem impossible. It isn't clear to me that Nigeria has dramatically worse institutions than India, but I expect India to be a great power by 2100. It seems like it'd be really valuable for someone to do some work on this, given how neglected it seems.
I don't know, but I think it would be great to look into.
There was a proposal to make a "Rising Powers" or "BRICS" tag, but the community was most interested in making one for China. I'd like to see more discussion of other rising powers, including the other BRICS countries.
I agree! I think there's some issue here (I don't know if there's a word for it) where a critical mass of effort on foreign powers is focused on China, leaving other countries with a big deficit. I'm not sure what the solution is, other than perhaps to make some kind of "the case for becoming a [country X] specialist" for a bunch of potentially influential countries.
Yeah, that sounds right. I don't even know how many people are working on strategy based around India becoming a superpower, which seems completely plausible.
Maybe this isn't something people on the Forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.
Two books I recommend on structural causes of and solutions to global poverty. The Bottom Billion by Paul Collier focuses on the question of how you can get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war. It also looks at some solutions and thinks about the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how you can get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with the neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.
I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I've come to two conclusions. The first way I think empirical claims can be discriminatory is if they express discriminatory claims with no evidence, and people refuse to change their beliefs based on evidence. The other way I think they can be discriminatory is when they concern the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.
If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
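One standard illustration of the continuity point, using the vector case (a textbook example, not something from the comment above, and assuming the vector is read lexicographically):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Lexicographic preferences on two-component outcomes: the first component
% always dominates, and the second only breaks ties.
\[
  (x_1, x_2) \succ (y_1, y_2)
  \;\iff\;
  x_1 > y_1 \ \text{ or } \ \bigl(x_1 = y_1 \text{ and } x_2 > y_2\bigr).
\]
These preferences are complete and transitive but not continuous, and no
$u : \mathbb{R}^2 \to \mathbb{R}$ can represent them: if such a $u$ existed,
each $x_1$ would give a nonempty open interval
$I(x_1) = \bigl(u(x_1, 0),\, u(x_1, 1)\bigr)$, and $x_1 < x_1'$ would force
every point of $I(x_1)$ to lie below every point of $I(x_1')$. That yields
uncountably many pairwise disjoint nonempty open intervals, each containing
a different rational number, which is impossible since $\mathbb{Q}$ is
countable.
\end{document}
```

So even when preferences are complete, dropping continuity alone is enough to rule out a real-valued utility representation.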
What do you mean by correct?
When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in", what does "this" refer to? Is it the statement: "there may be no utility function that accurately describes the true value of things" ?
Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?
A 6-line argument for AGI risk
(1) Sufficiently intelligent systems have capabilities that are ultimately limited only by physics and computability
(2) An AGI could be sufficiently intelligent that it's limited only by physics and computability, but humans can't be
(3) An AGI will come into existence
(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons and the AGI's goals will be met
(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than for an AGI's goals to be met
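Read as a chain, the premises combine roughly as sketched below (my reconstruction; the post lists premises without spelling out the conclusion, and premises (1) and (2) do the work of making the AGI capable enough that its goals, rather than ours, are what get satisfied):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Propositional sketch with my own labels (not the post's):
% $E$: an AGI comes into existence; $U$: its goals differ from human goals;
% $M$: human goals are met in the long run; $L$: a morally worse outcome.
\[
  \underbrace{E}_{(3)}, \qquad
  \underbrace{(E \land U) \to \lnot M}_{(4)+(5)}, \qquad
  \underbrace{\lnot M \to L}_{(6)}, \qquad
  \underbrace{U}_{\text{bridging assumption}}
  \quad\Longrightarrow\quad L.
\]
\end{document}
```

On this reading, premises (4), (5), and (6) carry most of the load; if any of them (or the bridging assumption that the AGI is unaligned) fails, the conclusion doesn't follow.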