[Just noting that an online AI detector says the above comment is most likely written by a human and then "AI polished"; I strongly prefer that you just write the unpolished version even if you think it's "worse".]
Yeah, you’re right. I usually use AI mostly for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and later I half-regretted leaving the text like that, too.
But I do also think HIA and reprogenetics would be very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
I mostly agree with this. On whether they should be a top cause area, less so. As long as it stays framed as a marginal investment or a "secondary strategy" against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don't face to the same degree (both reputationally for EA, and in terms of actual risks from adopting the technology).
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
That's a good point, but even if HIA demonstrated that we don't really need AGI, it seems unlikely that society as a whole would give up pursuing it as long as someone could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.
not remotely on track to being solved in time, and pouring more resources into the problem basically doesn't help at the moment.
I'm not too optimistic about AI alignment. But does that mean you'd estimate, for example, that an extra dollar spent on HIA has a better chance of solving the problem than the same dollar spent directly on AI alignment? Or even that taking a dollar away from alignment right now and moving it to HIA would better reduce AI existential risk? (Setting aside the case for a merely marginal investment, perhaps?)
Here are some quick thoughts that come to mind after reading your post. I find HIA and reprogenetics to be fascinating topics, but I see several critical hurdles if we frame them primarily as tools for mitigating AI-related existential risk.
The biggest logical hurdle is time. AI development is moving at a breakneck pace, while biological HIA interventions (such as embryo selection) take decades to manifest in the real world. An enhanced human born today will not be an active researcher for at least 20–25 years. If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
I notice you address this objection by arguing that even a 10-year acceleration in a 40–50 year horizon still represents a meaningful reduction in existential risk. I find this partially compelling — but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force. Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
We could also consider a complementary path: the top priority remains creating a safe, aligned AI. Once achieved, we can use that superintelligence to help us develop HIA and advanced biotechnology far more rapidly and safely than we ever could on our own.
Furthermore, just as we fear unaligned AI, we should fear "unaligned" superintelligent humans. This risk may be even greater, as humans are not "programmed" for pure rationality; we are driven by complex emotions, tribalism, and deep-seated cognitive biases. Therefore, any HIA research should prioritize and fund moral enhancement (e.g., increasing empathy and compassion, reducing cognitive biases) alongside cognitive gains. This is crucial to avoid creating highly intelligent but destructive actors.
If we imagine a future philanthropic program to make these enhancements accessible for free, one could hypothesize a form of "bundling": making the cognitive upgrade conditional on a voluntary moral/character upgrade. While not a state mandate — and admittedly open to hard questions about who defines "moral improvement" and the risk of paternalism — it would act as a soft requirement for those choosing to use subsidized resources, thereby incentivizing positive social evolution.
A clear advantage of HIA over pure AI development is the guarantee of consciousness. If a non-conscious, unaligned AI were to replace us, it would result in a "dead universe" devoid of beings capable of experiencing value. Ensuring that conscious beings remain the primary agents of our future is a vital safeguard.
Beyond X-risk, human enhancement has massive potential for human well-being, such as eradicating genetic diseases. However, for this to be an ethical intervention rather than a dystopian one, the technology must be as open, accessible, and available by default as possible to everyone, regardless of social class or geography, to prevent the emergence of unbridgeable inequalities.
In light of these points, I see HIA as a "secondary strategy." It could make sense to allocate a portion of funds to this area for the sake of portfolio diversification, a sort of hedge investment against the uncertainty of our long-term future.
This is the first post I've read on this topic here. I find it quite surprising to hear these things about the EA community, although I had already heard some whispers from other sources. I should preface this by saying that I don't personally know anyone in the US EA community, so I am taking what is written here at face value.
I would argue that, besides being unethical, these behaviors are also strategically harmful (though I suspect this isn't a new argument). How can we expect the general public to trust a community that defines itself as altruistic when such dynamics are tolerated? This lack of trust inevitably extends to the advice and organizations that EA promotes.
Furthermore, such an environment certainly does not encourage women and girls to join or stay in the community, thereby alienating a significant source of talent and potential impact.
I also agree that we can accept individual weaknesses or imperfections if there is still the possibility of doing more good than harm overall. An example for comparison is continuing to eat meat: although it is unethical, it is often seen as acceptable if a person offsets or outweighs the harm through effective donations. In this way, we don't lose potential impact by alienating people unnecessarily.
However, there is a major strategic and relational difference between the two cases. While eating meat is harmful, it is unfortunately not yet considered a serious ethical problem by most of society. Sexist behavior and harassment, on the other hand, generally are. Consequently, tolerating or internally justifying sexism damages the community's reputation in a way that eating meat does not, in addition to causing direct and immediate harm to members of the community itself.
Therefore, for the sake of the EA community itself, it seems crucial that these cases be condemned. Obviously, I'm not saying we should abandon rationality in favor of disproportionate emotional reactions. But hiding or downplaying such episodes seems more likely to backfire than to genuinely protect the community.
Great post! It really made me dive into the perspective of that era and appreciate how inscrutable such a future was from their point of view at the time.
Reflecting on it, at least for known historical cases, it seems that moral catastrophes like these happen at the intersection of three concomitant elements: the needs/interests of the dominant group (cheap meat, safe drugs, scientific progress), the advent of new techniques/technologies to satisfy those needs (factory farming, animal testing), and a specific moral framework (the common view that animal suffering is not morally salient).
This strikes me as potentially interesting because, while the first two factors might be particularly hard to predict or act upon a priori, the third could perhaps offer a more stable and tractable lever. Improving our ethical framework could be seen as a default, 'evergreen' strategy to mitigate the risk of future catastrophes from a long-term perspective. By mainstreaming the idea that avoiding the suffering of any sentient being is of great moral importance, we might build a general defense against new forms of moral catastrophes.
Of course, this wouldn't exclude other approaches; rather, they could work in parallel. While moral progress might serve as the long-term foundation, we could then evaluate on a case-by-case basis whether technological fixes or other targeted interventions would be more effective for specific risks as they emerge. As you mentioned, staying alert to how new needs and technologies evolve remains crucial.
This is an interesting and potentially important topic. I happened to reflect on something similar a few years ago. My intuition regarding this is that non-affective consciousness does not exist, even if it might seem otherwise. I believe all conscious experiences have an intrinsic positive or negative value for us and can therefore be considered 'affective,' even though we are often unaware of it.
I would say that for many types of conscious experiences (perhaps even most of them) this value is so low that it falls below our threshold of intellectual awareness of feeling something pleasant or unpleasant. However, the perception exists as a whole, albeit in a 'background' sort of way. Indeed, our daily experience is peppered with conscious perceptions that seem neutral or affectively indifferent: the colors of objects, shapes, and many other sensations not directly traceable to anything pleasant or unpleasant. But taken together (and, I believe, even individually) these form a 'life output' for that specific moment, one we could judge worthy or unworthy of being lived.
Furthermore, I would argue that if a conscious experience leads us to act toward something, it likely holds affective value for us.
In this sense, p-Vulcans would not exist, because consciousness of something would imply a certain emotional tone, a like or dislike of what is being experienced/perceived. Thus, a 'complete' p-Vulcan would be nothing more than a philosophical zombie. Alternatively, we could imagine conscious individuals who are unable to realize they are experiencing affective states, even though, within certain limits, they actually do experience them. This could be either because those affective states are of very low intensity, or because they lack the intellectual capacity to recognize or express their positive/negative perceptions.
If p-Vulcans did exist, I would say that—as counterintuitive as it may seem—it might actually make sense to sacrifice the life of an entire planet of p-Vulcans for a shrimp. Precisely because, for them, their existence and any perception they might have within it would be completely indifferent, completely 'tasteless,' so to speak. Destroying that planet would be somewhat like shutting down a server populated by LLMs conversing with each other: there is complexity, there is information exchange, but there is 'no one who cares.' However, we find this difficult to imagine, and I believe our struggle stems precisely from the fact that we cannot decouple the concept of experience from something that also has affective value. It is impossible to separate the two. If we succeed in doing so, what remains is simply an LLM or a philosophical zombie.
Of course, in such an ethical dilemma, if placed in a real-world context, one would have to consider the overall effects of sacrificing the Vulcans rather than the shrimp. Since the Vulcans are agents with far more potential than the shrimp, they might have the capacity to change the world, for better or worse, to a degree the shrimp never could (instrumental value). This would be the most important factor to weigh when making the choice. But that, of course, goes beyond the main theme of the post.
I also find the theme of the complexity of conscious experiences raised by the post interesting. One could imagine a trolley problem where we must choose between two affective experiences of the same value, but one is highly complex and more extended (such as human happiness) while the other is very simple, more intense, but less extended (the equivalent happiness of a shrimp). In theory the two would be equivalent, but given the uncertainty one might choose the former, on the assumption that greater conscious complexity/extension is preferable.
This was pretty much the first thing I thought of when I heard about the 10% pledge, and I was actually surprised to see so little of this reasoning here. It’s fairly obvious that for someone on a low income, 10% is still a huge sacrifice, whereas for the wealthy, giving away 10% doesn't change much. Even though they give much more in absolute terms, they’re still left with far more money than they actually need.
This is especially important from the perspective of the rich: with 'only' a 10% commitment, a lot of potential generosity goes to waste - money that could make a massive difference. A percentage that scales with income would also help those on lower incomes, allowing them to join the pledge without it being financially draining. That said, I think the percentage should never drop to zero; psychologically, that would imply that low earners can't or shouldn't contribute. Not only is that untrue, but it would also feel very 'exclusionary' to many.
Finally, I’d say the calculation should be adjusted for the cost of living and local salaries. For instance, $39k might not be much in the US, but in many other countries - and not necessarily poor ones, like where I live - it’s actually a good salary.
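To make this concrete, here is a minimal sketch of what an income-scaled pledge could look like. All the numbers (the 1% floor, the 5-percentage-point slope per multiple of the reference salary, the 30% cap, and reusing $39k as the local reference) are placeholders I made up purely for illustration, not a concrete proposal:

```python
def pledge_rate(income: float, local_reference: float = 39_000,
                floor: float = 0.01, cap: float = 0.30) -> float:
    """Toy progressive pledge: a small floor for everyone (never zero),
    rising with income relative to a local reference salary."""
    ratio = income / local_reference
    rate = floor + 0.05 * max(0.0, ratio - 1.0)  # +5 pp per multiple above the reference
    return min(rate, cap)

for income in (20_000, 39_000, 100_000, 1_000_000):
    print(f"${income:>9,}: {pledge_rate(income):.1%}")
# $   20,000: 1.0%
# $   39,000: 1.0%
# $  100,000: 8.8%
# $1,000,000: 30.0%
```

Swapping in a country-specific `local_reference` would handle the cost-of-living adjustment, while the non-zero floor keeps low earners included without being financially draining.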
Thank you Vasco for the welcome and for your prompt reply!
Yes, I understand the underlying logic of your reasoning; my intention was only to highlight how paradoxical it might seem when interpreted from a "narrative" perspective. This is, after all, the way most of us instinctively tend to think/feel, even if it’s often not strictly correct.
Thank you for the update on agricultural land use. I hadn't seen your comment. I am absolutely in agreement with you on that point (regarding the need for more research due to uncertainty).
I was wondering: supposing it is true that agricultural land decreases the number of nematodes & Co., and given that we write here so that new ideas might be put into action (which depends on the consensus those ideas can gather), wouldn't it be more advantageous to emphasize the cases that are unproblematic for someone who cares about animals?
For example, a person deeply committed to the animal cause will find it harder to accept the solution of purchasing animal products. But the idea that it's better to finance cage-free or broiler chicken campaigns for this additional reason (reducing soil animals) could be accepted, and thus put into practice, much more easily. An idea like consuming more beef, on the other hand, might risk provoking repulsion and rejection, perhaps even in a place like this (though this is just a hypothesis, as I don't know this forum that well).
We could call this the "Probability that someone will act upon reading the post." Though I suppose it's difficult to quantify.
Conversely, strongly advocating for a controversial/counter-intuitive thesis, as you have partly done, could also contribute to attracting attention and thus generating the desired effects.
Thinking more about the whole issue, even if nematodes had net-positive lives, the course of action could still be controversial, as it would be practically a perfect example of the Repugnant Conclusion.
In general, you are absolutely right to draw attention to this issue. One could argue that it's probably not a relevant topic. But if it were relevant, it would be extremely relevant. And that fact makes it effectively relevant (at least given our current state of knowledge).
Your reasoning is certainly interesting to me, and you might even be right.
However, I believe the most problematic point boils down to this: we are being asked to commit an act involving certain, morally grave harm (increasing the suffering of farmed animals, which we know to be sentient) in exchange for a potential benefit (reducing nematode suffering) that rests upon a chain of rather uncertain hypotheses.
This idea closely resembles the "fat man" scenario in the trolley problem, only elevated to the nth power. It actively proposes inflicting harm upon someone (the farmed animals / the fat man) for a much greater benefit (the nematodes & Co / the people who would be hit by the train). And this is already an example that many people instinctively reject.
But the situation here is infinitely more paradoxical.
We could frame it this way: "You must push the fat man off the bridge. We don't know if he will stop the train. In fact, we don't even know if there are really people on the tracks to save."
"There is a very low probability that there are billions of people on the tracks (i.e., that nematodes & Co are sentient, that their lives are net-negative, and that their consciousness has a certain moral value)."
"But most, most likely, there is no one there (e.g., they are not sentient; the probability that they are is low, e.g., 6.8\% for nematodes, according to the RP data you cited yourself)."
"Yet, the man you push will surely die."
We are committing a certain murder for a totally speculative benefit. With a probability of 90% or more, this is a large-scale, self-contained harm that yields no benefit whatsoever.
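To put rough numbers on that "chain of uncertain hypotheses", here is a minimal back-of-the-envelope sketch. Only the 6.8% sentience figure comes from the RP estimate cited above; the other two conditional probabilities are placeholders I invented purely for illustration:

```python
# The speculative benefit only materializes if every link in the chain holds.
p_sentient = 0.068     # RP estimate that nematodes are sentient (cited above)
p_net_negative = 0.5   # hypothetical: their lives are net-negative
p_moral_weight = 0.5   # hypothetical: their experiences carry real moral weight

p_benefit = p_sentient * p_net_negative * p_moral_weight
print(f"Chance the gamble pays off: {p_benefit:.1%}")         # 1.7%
print(f"Chance of harm with no benefit: {1 - p_benefit:.1%}")  # 98.3%
```

Even before the last two discounts, the 6.8% alone implies the certain harm buys nothing roughly 93% of the time, consistent with the "90% or more" above.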
But in reality, an even more fitting analogy might be this, since we are talking about reducing suffering, not saving lives in an absolute sense:
"We know there are several serial killers in the world who will torture a high number of people. For this reason, we take one person, make them be born and raise them, with the sole purpose of constantly torturing them on live TV, so that many of the killers will be distracted by the show and will torture their victims less."
To be truly consistent with what we have discussed, however, it would sound more like this:
"Most probably, there are NO several serial killers in the world who torture many people. But since there is a minimal chance that they do exist, we decide to take a person, make them be born, raise them, and constantly torture them on live TV so that the hypothetical serial killers will be distracted by the show and hypothetically kill their victims earlier."
Now, I know this might be more of an illustrative story than a fully rational argument. Since we are forced to act in reality, and we are constantly acting, we must do so with the information available to us, even if it is partial; and that is what you tried to reason about. However, we are essentially exchanging a certain (and grave) harm for an improbable benefit (albeit one of astronomical proportions).
And this is without even considering the fact that, for all we know, the lives of these animals might be net-positive. In this case, the harm to the farmed animals would be compounded by the harm to the nematodes & Co. In this scenario, our previous story might sound something like this: "There are no serial killers in the world. We torture a guy on TV thinking there are, and some people who watch the show draw inspiration from it and start killing people themselves."
In general, the uncertainty seems so high as to not justify new actions before obtaining new information that would grant us greater clarity, as you rightly suggest.
Furthermore, given that this involves inflicting harm in exchange for a potentially massive payoff, why choose the path of harm when there could be many others? It’s a bit like saying, "Would you push the fat man off the bridge?" "Yes, if it were the only option." Most people would try to find other options, so as not to have to kill the fat man while still saving the unfortunates on the tracks. Couldn't we just derail the train with a large rock, for example? Reality is often rich with creative possibilities.
And in this case, I don't see why we should actively inflict harm on someone (the farmed animals) simply because this indirectly achieves something greatly good. Couldn't we just do that good thing ourselves?
Why should we reduce the number of nematodes & Co. through farming, which is harmful in its own right? If we decide these animals should live shorter lives, we might as well kill them directly! I bet we could be far more efficient: we could develop a "killer mixture" for these species, or who knows what else.
Admittedly, I'm venturing into "science fiction" here, and the idea wouldn't be very marketable. But this would be the sensible path.
So, is this a weak hypothesis too? Perhaps we could invent some workaround. It is true that a project openly intending to "kill" nematodes would not succeed, nor could a charity pursuing this mission present itself as such in the eyes of public opinion. But perhaps there are ingenious ways to frame it as something good and morally approvable? That way, people would not see us as mad nematode exterminators, but as someone doing something good (or perhaps irrelevant), whose quiet purpose is to reduce the number of nematodes on Earth. A win-win situation.
Of course, it is not guaranteed that such a thing is possible, and thus this is also pure speculation, but couldn't finding such a creative solution be a very promising way to do good?
Thanks for such a detailed answer! Sorry for the slow reply on my part.
Yeah, this makes sense to me.
The communication strategy you've outlined seems right. I'd say society currently doesn't take AI existential risks all that seriously, so a framing centered on "empowering parents to give their kids an exceptionally healthy, happy life" is likely to be much more compelling and effective.
I’ve had a chance to look a little closer at the other comments and the links you shared, which I found interesting (though I haven't gone through everything). A few additional observations, though: