@Toby Tremlett🔹 is there a way to see the final debate week banner? I wanted to include a screenshot in the slides for my local group's next meetup, but can't find a way to access the banner now that debate week is over.
I appreciate the context, thank you. However, two points came to mind:
Either way, I don't think anyone can really judge whether the investment was a good decision based on the currently available information, which is why I'd appreciate a more detailed explanation from CEA.
Number taken from the wiki entry on CEA. I chose this comparison because I couldn't immediately find recent numbers for CEA's total spending, but I assume that 15 million is a significant portion of it.
What is the main idea this video is trying to convey? Based on the title and description, I assumed the goal would be to introduce key ideas of longtermism/x-risks and promote WWOTF. It did the latter, but I don't think the video presents longtermist ideas in a very clear way.
Earlier today, I watched the video with a couple of friends who had never heard of longtermism or x-risks before. It did not do a good job of sparking discussion. When we talked about the video afterwards, the main takeaways were something like:
Afterwards, I suggested reading Will's guest essay in the NYT. My impression is that the article got my friends a lot more excited about reading WWOTF and seemed to resolve their confusion about longtermism and EA. In the future, I will definitely send people the NYT article as an introduction to longtermism, or this WWOTF book review by Ali Abdaal for people who just really prefer watching videos.
What concerns do you think the Mechanize founders haven't considered? I haven't engaged with their work that much, but they seem to have been part of the AI safety debate for years now, with plenty of discussion on this Forum and elsewhere (e.g. I can't think of many AIS people who have been as active on this Forum as @Matthew_Barnett has been for the last few years). I feel they have communicated their models and disagreements a (more than) fair amount already, so I'm not sure what you would expect further discussion to change.