Jeroen Willems🔸

Content creator / YouTuber @ A Happier World
1387 karma · Working (0-5 years) · Brussels, Belgium
youtube.com/ahappierworldyt

Bio


Organizer @ EA Belgium, YouTuber @ A Happier World.

My name is pronounced roughly as "yeroon" (IPA: jəˈʀun). You can leave anonymous feedback here: https://admonymous.co/jeroen_w

Currently residing in Brussels, Belgium. I obtained my master's degree in audiovisual arts, specializing in television, from RITCS in 2020. During my studies, I learned how to direct, write scripts, film, edit and develop concepts. I have since applied and refined all these skills by creating YouTube videos for my channel, A Happier World.

A Happier World is a YouTube channel that explores exciting ideas with the potential to radically improve the world. The videos tackle the world's most pressing problems and offer effective ways we can solve them. Topics covered include global health, poverty, animal welfare, artificial intelligence, pandemics, climate change, and even moral philosophy.

I have been actively involved with EA Brussels/Belgium since 2016. My experience in hosting and organizing events has provided valuable insights into effectively communicating EA ideas.

Pronouns: he/him

Sequences
1

A Happier World videos and transcripts

Comments
163

I checked parts of the study, and the 0.12% figure is for P(AI-caused existential catastrophe by 2100) according to the "AI skeptics". This is what is written about the definition of existential catastrophe just before it: 

Participants made an initial forecast on the core question they disagreed about (we’ll call this U, for “ultimate question”): by 2100, will AI cause an existential catastrophe? We defined “existential catastrophe” as an event in which at least one of the following occurs:

  1. Humanity goes extinct
  2. Humanity experiences “unrecoverable collapse,” which means either:
    1. <$1 trillion global GDP annually [in 2022 dollars] for at least a million years (continuously), beginning before 2100; or
    2. Human population remains below 1 million for at least a million years (continuously), beginning before 2100. 

That sounds similar to the classic existential risk definition? 

(Another thing that's important to note is that the study specifically sought forecasters skeptical of AI. So it doesn't tell us much, if anything, about what a group of random superforecasters would actually predict!)

I am very, very surprised your 'second bucket' contains the possibility of humans potentially having nice lives! I suspect that if you had asked me for the definition of p(doom) before I read your initial comment, I would actually have mentioned the definition of existential risk that includes the permanent destruction of future potential. But I simply never took that second part seriously, hence my initial confusion. I just assumed disempowerment or a loss of control would lead to literal extinction anyway, and that most people shared this assumption. In retrospect, that was probably naive of me. Now I'm genuinely curious how much of people's p(doom) estimates comes from literal extinction versus other scenarios...
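
To make that concrete, here is a purely illustrative decomposition (the numbers are invented for the example, not anyone's reported estimates):

$$
p(\mathrm{doom}) = p(\mathrm{extinction}) + p(\mathrm{unrecoverable\ collapse}) + p(\mathrm{other\ permanent\ loss\ of\ potential})
$$

so a headline $p(\mathrm{doom}) = 10\%$ might break down as $7\% + 2\% + 1\%$, and two people reporting the same headline number could mean quite different things by it.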

Interesting, I thought p(doom) was about literal extinction? If it also refers to unrecoverable collapse, then I'm really surprised that takes up 15-30% of your potential scenarios! I always saw that part of the existential risk definition as negligible.

You're right that this is an important distinction to make.

You make a fair point, but what other tool do we have than our voice? I've read Matthew's last post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. What often puzzles me about some AI folks, though, is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?

Perhaps instead of trying to change someone's moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to 'common sense morality' because I'm just not certain enough.

I don't have strong views on how best to tackle this, and I won't have good answers to any questions. I'm just voicing concern and hoping others with more expertise might consider engaging constructively.

Good point, I guess my lasting impression wasn't entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn't feel discouraged from actively participating in EA.

On top of mentioning a specific opportunity, I think this post makes a great case in general for considering work like this (great wage and benefits, little experience necessary, somewhat mundane, shift work). I do feel a bit uncomfortable about the part where you mention using personal sway to influence the hiring process, though, as this could undermine fair hiring practices. But I could be overreacting.

Thanks for sharing this. While I personally believe the shift in focus toward AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with many of the other concerns you shared and agree with many of them (especially the LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). I would completely understand if, in your situation, you don't want to interact with the community anymore, but I just want to share that I believe your voice is really important and I hope you continue to engage with EA! I wouldn't want anyone who shares the movement's principles (like "let's use our time and resources to help others the most") but disagrees with how they're being put into practice to be discouraged from actively participating.

I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.

Yesterday, Mechanize was announced: a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. Its founders, Matthew Barnett, Tamay Besiroglu, and Ege Erdil, are leaving (or have left) Epoch AI to start the company.

I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company that ultimately increases AI capabilities rather than safeguarding humanity's future. But this time, we have a real opportunity for impact before it's too late. I believe this project could potentially accelerate capabilities, increasing the odds of an existential catastrophe. 

I've already reached out to the founders on X, but perhaps there are people more qualified than me who could speak with them about these concerns. In my tweets to them, I expressed worry about how this project could speed up AI development timelines, asked for a detailed write-up explaining why they believe this approach is net positive and low risk, and suggested an open debate on the EA Forum. While their vision of abundance sounds appealing, rushing toward it might increase the chance we never reach it due to misaligned systems.

I personally don't have a lot of energy or capacity to work on this right now, nor do I think I have the required expertise, so I hope that others will pick up the slack. It's important we approach this constructively and avoid attacking the three founders personally. The goal should be productive dialogue, not confrontation.

Does anyone have thoughts on how to productively engage with the Mechanize team? Or am I overreacting to what might actually be a beneficial project?

No guest bedrooms. We encouraged tents and sleeping bags. Some people just went home for the night, while others came only for one day. This meant that, for both editions, only 5-8 people ended up staying overnight, with most of them sleeping indoors in the living room.
