Alex Long

I appreciate that this post was short and concise enough that I could read the whole thing during my 10-minute lunch break ❤️🙏

Super cool! Yes, for sure! The Southwest is under-represented in EA :)

Even if AI kills every human on Earth, that doesn't necessarily make it an existential threat. Humans evolved naturally once, so who's to say that couldn't happen again if we all died? Who's to say it hasn't already happened on another planet? And to take it to an extreme: maybe we're in a simulation, so none of us exist anyway, and even if our AI kills us all, there would still be plenty of intelligent life forms outside the simulation who'd be fine.

A common argument I hear is that if there are potentially trillions or more humans who could exist in the future, then any marginal reduction of existential risk is worth pursuing. But that view seems to rest on some big assumptions about the universe we live in and how intelligent life emerges. I haven't heard anyone raise this in x-risk / longtermist discussions, but I'm sure I'm not the first to think of it, so I figured I'd post it here.

As an experiment, I converted the Google Sheet into a Notion database and plugged a map into it, if anyone's curious. (Keep in mind this is no longer in sync with the Google Sheet.) Map view of Notion database: