I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.
After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.
Every post, comment, or wiki edit I have authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
The extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely.
Extremely unlikely to happen... when? Surely all Earth-originating intelligent life will eventually go extinct, because the universe’s resources are finite.
Here's another summary. I used Gemini 2.0 Flash (via the API) with this prompt:
The following is a series of comments by Habryka, in which he makes a bunch of criticisms of the effective altruism (EA) movement. Please look at these comments and provide a summary of Habryka’s main criticisms.
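For anyone who wants to replicate this, here's a minimal sketch of the kind of API call involved. This assumes the google-generativeai Python SDK; the model identifier and the input file are illustrative, not a record of my exact setup:

```python
# Minimal sketch, assuming the google-generativeai Python SDK
# (pip install google-generativeai). The model name string and the
# comments file are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash")

prompt = (
    "The following is a series of comments by Habryka, in which he makes "
    "a bunch of criticisms of the effective altruism (EA) movement. "
    "Please look at these comments and provide a summary of Habryka's "
    "main criticisms.\n\n"
)

# Hypothetical file containing the collected comments.
with open("habryka_comments.txt") as f:
    comments = f.read()

response = model.generate_content(prompt + comments)
print(response.text)
```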
80k has made important contributions to our thinking about career choice, as seen e.g. in their work on replaceability, career capital, personal fit, and the ITN framework. This work does not assume a position on the neartermism vs. longtermism debate, so I think the author’s neartermist sympathies can’t fully explain or justify the omission.
Hello. As it happens, right now I'm editing an interview I conducted with @Jaime Sevilla two months ago. Things got delayed for a variety of reasons, but this episode should be out soon.
Methionine restriction has been shown to increase mean and maximum lifespan in various organisms, particularly rodents. Studies show it can increase lifespan by 30-40% in rats and mice, with effect sizes similar to those of calorie restriction. The lower methionine content of plant-based diets should be seen as a plus rather than a minus, I think.
Thanks for the useful exchange.
It may be useful to consider whether your comment would pass a reversal test: if the roles were reversed and it were an EA criticizing another movement, with the criticism otherwise comparable (e.g. in tone and content), would you also have expressed a broadly positive opinion about it? If yes, that would suggest we are disagreeing about the merits of the letter. If no, it seems the disagreement is primarily about the standards we should adopt when evaluating external criticism.
Thanks for the clarification. I didn’t mean to be pedantic: I think these discussions are often unclear about the relevant time horizon. Even Bostrom admits (somewhere) that his earlier writing about existential risk left the timeframe unspecified (vaguely talking about "premature" extinction).
On the substantive question, I’m interested in learning more about your reasoning. To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD (conditional on survival up to that century). This is because it seems plausible that humanity (or its descendants) will soon develop technology with enough destructive potential to actually kill all intelligence. Then the question becomes whether they will also successfully develop the technology to protect intelligence from being so destroyed. But I don’t think there are decisive arguments for expecting the offense-defense balance to favor either defense or offense (the strongest argument for pessimism, in my view, is stated in the first paragraph of this book review). Do you deny that this technology will be developed “over time horizons that are brief by cosmological standards”? Or are you confident that our capacity to destroy will be outpaced by our capacity to protect against destruction?