Volunteer organizer @ EA Belgium, YouTuber @ A Happier World.
My name is pronounced roughly as "yeroon" (IPA: jəˈʀun). You can leave anonymous feedback here: https://admonymous.co/jeroen_w
For me, they don't need to be harder-working or smarter people. Anyone supportive I can cowork with will do. But my challenge is to actually create such an environment! Online doesn't work that well for me; it needs to be in person. It's so much more impactful than any other productivity hack.
It's OK to eat honey
I try to avoid it, but it's hard for me to believe it's as bad as or worse than most animal products, especially in the quantities in which it's usually consumed. Who eats a kilogram of honey per year? I do think the treatment of bees is very unclear. But I've also heard that some non-animal products, like avocados, involve a lot of insects, so I'm curious how honey compares.
I checked parts of the study, and the 0.12% figure is the P(AI-caused existential catastrophe by 2100) according to the "AI skeptics". Here is what the study says about the definition of existential catastrophe, just before that figure:
Participants made an initial forecast on the core question they disagreed about (we’ll call this U, for “ultimate question”): by 2100, will AI cause an existential catastrophe? We defined “existential catastrophe” as an event in which at least one of the following occurs:
- Humanity goes extinct
- Humanity experiences “unrecoverable collapse,” which means either:
  - <$1 trillion global GDP annually [in 2022 dollars] for at least a million years (continuously), beginning before 2100; or
  - Human population remains below 1 million for at least a million years (continuously), beginning before 2100.
That sounds similar to the classic existential risk definition?
(Another important thing to note is that the study specifically sought out forecasters skeptical of AI risk. So it doesn't tell us much, if anything, about what a random group of superforecasters would actually predict!)
I am very, very surprised your 'second bucket' contains the possibility of humans potentially having nice lives! I suspect that if you had asked me for the definition of p(doom) before I read your initial comment, I actually would have mentioned the definition of existential risk that includes the permanent destruction of future potential. But I simply never took that second part seriously? Hence my initial confusion. I just assumed disempowerment or a loss of control would lead to literal extinction anyway, and that most people shared this assumption. In retrospect, that was probably naive of me. Now I'm genuinely curious how much of people's p(doom) estimates comes from actual extinction versus other scenarios...
You make a fair point, but what other tool do we have besides our voice? I've read Matthew's latest post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. What often puzzles me about some AI folks, though, is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?
Perhaps instead of trying to change someone's moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to 'common sense morality' because I'm just not certain enough.
I don't have strong feelings on how best to tackle this, and I won't have good answers to any questions. I'm just voicing concern and hoping others with more expertise might consider engaging constructively.
On top of mentioning a specific opportunity, I think this post makes a great case in general for considering work like this (great wage & benefits, little experience necessary, somewhat mundane, shiftwork). I do feel a bit uncomfortable about the part where you mention using personal sway to influence the hiring process, though, as this could undermine fair hiring practices. But I could be overreacting.
Thanks for sharing this. While I personally believe the shift in focus toward AI is justified (I also believe working on animal welfare is more impactful than working on global poverty), I can definitely sympathize with many of the other concerns you shared and agree with many of them (especially LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand if, in your situation, you didn't want to interact with the community anymore, I just want to share that I believe your voice is really important, and I hope you continue to engage with EA! I wouldn't want the movement to discourage anyone who shares its principles (like "let's use our time and resources to help others the most") but disagrees with how they're being put into practice from actively participating.
Appreciate you publicly sharing this! I was only employed for ~3 months after taking the pledge ~1.5 years ago. I didn't donate 10% of the money I earned in those 3 months out of fear of running out of money while unemployed. I'm recently employed again and committed to donating 10%. Since it's not that big an amount, I'm slowly trying to make up for those 3 months by setting aside slightly more than 10% each month (I already did for my first paycheck!).
Even when I was on the trial pledge, I could relate to the issue of suddenly having to donate a large amount at the end of the year. While I do have some small monthly donations set up, my main trick is to add money each month to a digital 'wallet' in my banking app specifically for donations, which I then donate at the end of the year.