Riccardo Zucco
39 karma · Joined · Working (0-5 years)

Comments (14)

Thanks for the detailed response and the links! 

The exponent-based approach is interesting, though I'm still a little uncertain about its validity. I'll check out the posts!

Hi Vasco, thanks for the useful data!

"Very simple organisms could still matter a lot despite having much less intense experiences."

I agree, assuming they are conscious.

I think the case of elephants and whales actually highlights why using total neuron count as a proxy for welfare range can be tricky. The African elephant's 257 billion neurons is a staggering number, but most of those neurons sit in the cerebellum, primarily dedicated to motor control of its large body. This suggests that neuron count alone is too crude a metric (though perhaps still useful when comparing organisms with very different brains). Parameters like encephalization quotient or cortical neuron density might do better, though I'm not sure any of them cleanly captures intensity of experience rather than cognitive complexity. That said, these would really only be meaningful for vertebrates, which perhaps just underlines how hard the welfare-range question actually is for organisms very different from us.
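
To make the point concrete, here is a minimal sketch of how the choice of proxy changes the comparison. The cortical-neuron figures are rough numbers from the literature and the 0.3 exponent is an arbitrary illustration of a sublinear ("exponent-based") scaling; none of these are estimates I'd defend.

```python
# Rough, illustrative neuron counts; only the elephant's 257e9 total comes from
# the comment above, the rest are approximate placeholders.
animals = {
    "human":            {"total_neurons": 86e9,  "cortical_neurons": 16e9},
    "african_elephant": {"total_neurons": 257e9, "cortical_neurons": 5.6e9},
}

def proxy_scores(stats, key="total_neurons", exponent=1.0):
    """Welfare-range proxy = (neuron count) ** exponent, normalised to the human value."""
    human = stats["human"][key] ** exponent
    return {name: round((s[key] ** exponent) / human, 2) for name, s in stats.items()}

print(proxy_scores(animals))                          # total neurons, linear
print(proxy_scores(animals, key="cortical_neurons"))  # cortical neurons, linear
print(proxy_scores(animals, exponent=0.3))            # total neurons, sublinear exponent
```

Total neuron count puts the elephant at roughly three times the human, cortical neurons flip the ranking, and a sublinear exponent compresses the gap, which is part of why I find any single proxy hard to trust on its own.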

I've always found the functionalist view more intuitively compelling. The idea that experiential intensity simply scales down with the number of neurons seems hard to accept: it implies that simpler organisms live something like a barely-there flicker of experience, and it places humans at the apex of perceived intensity in the universe. That looks to me like a kind of anthropocentrism, which makes me a little suspicious of it.

I also think there's a distinction worth drawing between the "dimensionality" of an experience (how many qualitative states a mind can occupy) and its intensity. A simple mind might have very few "keys," but still hit each of them hard. A shrimp might have a very narrow experiential range, with little going on beyond basic valenced states, but that needn't make those states less intense. If that's right, "simpler brain" doesn't automatically mean "smaller welfare range."

30% ➔ 50% disagree

I am extremely uncertain on this point. While there is a possibility that an aligned AI could be immensely beneficial for animals, I believe this is an outcome we absolutely cannot take for granted.

Broadly speaking, it is difficult to assess such a scenario without knowing the specific form an 'aligned' AI will take and what a world where humans coexist with an AGI or ASI will actually look like. As some have pointed out, if this AI were to simply 'lock in' current human values indefinitely, it would likely be really bad for animals.

It seems probable, however, that in the long run, an aligned AI scenario would eliminate our need for animals in most current capacities (factory farming, drug testing, etc.). Therefore, AI could potentially improve things for animals directly harmed by humans, though this would still depend on our willingness to move past these practices once they are no longer necessary.

But the real long-term stakes likely lie with wild animals, and here the risks appear far more significant. If we model an aligned AI on current human values, it might seek to preserve Nature for as long as possible, on our planet and potentially others (perhaps even 'seeding' other planets with wildlife). Given the immense scale of natural suffering, this could result in damage of colossal proportions.

"Personally, I'd guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more 'trivial' traits might backfire. I think a trajectory like that would be more effective in practice."

 

"If you meant in terms of the actual rollout,"

Yeah, I meant in terms of practical adoption. A democratic state will initially face strong pressure to restrict or ban technologies that the majority of the population strongly disagrees with. Even though this topic is already being debated, the debate probably still feels pretty 'alien' to ordinary people. I don't think a large portion of the public could easily accept it, especially in its broad 'total liberty' version.

Human reproduction is seen as something sacred. To intervene in a way that feels justifiable to ordinary people, you'd need a justification that is just as 'sacred' or important. Fighting disease definitely fits that for most reasonable people. Even increasing intelligence or creativity could be seen as obviously useful, even if not sacred. But claiming the right to choose the fine details of your child's personality would look like the classic 'playing God' scenario, which could turn a lot of people against the whole thing. Even worse, allowing total liberty over 'trivial' traits (though I agree they often aren't actually so trivial) would hand perfect ammunition to anyone wanting to attack this. It suggests the idea of children as 'consumer products' you pick at a supermarket based on trends, like choosing a dog breed because it's fashionable. These associations would be horrific for many people and might overshadow the concrete benefits of these technologies.

I think we tend to underestimate how much people resist change when it comes to deeply rooted traditions, and probably even more so for basic biological functions like natural reproduction. Just look at the rejection of GMOs: they have largely been shown to be safe, yet they are still banned or hated in many places.

My point is that by strongly advocating for everything at once, we may risk an 'all-or-nothing' rejection. Giving people time to get used to the technology and to see that nothing 'demonic' happens seems like a more plausible way to gain long-term acceptance. Not that discussing everything now is unreasonable, but we should be aware that it might be hard to pull off, and should therefore try to at least protect the less controversial interventions (such as preventing disease and improving intelligence).

That said, the fact that this could potentially be a big new business might be a strong incentive, especially in a country like the US. So maybe I'm being too pessimistic here.

I agree with the rest of your observations. I don't think the critical points I raised are, in themselves, sufficient reasons not to adopt the technology, but it's obviously important to keep them in view from the start and to try to mitigate them as much as possible.

Sorry (again) for this very late reply!

Thanks for such a detailed answer! Sorry for the slow reply on my part.

"some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment"

Yeah, this makes sense to me.

"to borrow from an old CFAR tagline, I'm saying something like 'reprogenetics for its own sake, for the sake of X-risk reduction', if that makes any sense."

The communication strategy you've outlined seems right. I'd say society currently doesn't take AI existential risks all that seriously, so a framing centered on "empowering parents to give their kids an exceptionally healthy happy life" is likely to be much more compelling and effective.

I've had a chance to look a little closer at the other comments and the links you shared, which I found interesting (though I haven't gone through everything). A few additional observations, though:

  • I’m not sure I’m in favor of a liberty as broad as what’s proposed in the links. Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more "trivial" traits might backfire. I think a trajectory like that would be more effective in practice.
  • A vision of genomic emancipation based on freedom of choice and plurality might work in the democratic West, but other states don't necessarily see those as values, so it seems unlikely they would adopt a similar vision.
  • Even if democratic states "led the way" by proposing this vision (or another ethical framework for reprogenetics), you would need strong international institutions to establish a common global regulation. That doesn't seem to be the case in today’s world, which feels like it’s moving toward a breakdown of international rules and a decrease in the global influence of Western democracies.
  • Dictatorial regimes would likely impose certain characteristics to make themselves more competitive (perhaps also unethical ones). At that point, democracies might be forced to adapt to certain "mandatory" enhancements for their citizens just to stay competitive.
  • All of this would make the relationship between parents and children even harder. Where before you could only blame chance for your traits, there would now be actual people responsible for many of your characteristics. This is even more true if parents choose not to modify you, leaving you at a disadvantage while everyone else "improved" their children.
  • Wouldn't it be worth focusing, in parallel, on technologies that allow for this when someone is already an adult and can choose for themselves? Especially regarding HIA. This would solve several ethical problems, particularly the fact that it wouldn't be a choice made by someone else. It would also be perceived as less "unnatural," I think. In a way, people already try to do this with the limited tools we have now. I realize this is mostly a technological problem since such tech is currently "sci-fi," but that probably won't be the case forever.

[Just noting that an online AI detector says the above comment is most likely written by a human and then "AI polished"; I strongly prefer that you just write the unpolished version even if you think it's "worse".]

 

Yeah, you’re right. I usually use AI mostly for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and later I half-regretted leaving the text like that, too.

 

"But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area."

I mostly agree with this. On whether they should be a top cause area, less so. As long as it stays framed as a marginal investment or a "secondary strategy" against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don't face to the same degree (both reputationally for EA, and in terms of actual risks from adopting the technology).

 

"This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI."

That's a good point, but even if HIA demonstrated that we don't really need AGI, it seems unlikely that society as a whole would give up pursuing it if it could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.

 

"not remotely on track to being solved in time, and pouring more resources into the problem basically doesn't help at the moment."

I'm not too optimistic about AI alignment. But does that mean you'd estimate, for example, that an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now and moving it to HIA would better reduce AI existential risk? (Setting aside the case for just a marginal investment, perhaps?)

Here are some quick thoughts that come to mind after reading your post. I find HIA and reprogenetics to be fascinating topics, but I see several critical hurdles if we frame them primarily as tools for mitigating AI-related existential risk.

The biggest logical hurdle is time. AI development is moving at a breakneck pace, while biological HIA interventions (such as embryo selection) take decades to manifest in the real world. An enhanced human born today will not be an active researcher for at least 20–25 years. If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.

I notice you address this objection by arguing that even a 10-year acceleration in a 40–50 year horizon still represents a meaningful reduction in existential risk. I find this partially compelling — but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force. Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
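
Just to make that sensitivity explicit, here is a toy calculation with made-up numbers (a minimal sketch, not a forecast): it samples an AGI arrival time and asks how often enhanced researchers, assumed to become productive about 25 years from now (or 15 with a hypothetical 10-year acceleration of the pipeline), would arrive in time.

```python
import random

def p_enhanced_humans_in_time(agi_median=30, agi_sd=15, years_to_researcher=25,
                              trials=100_000, seed=0):
    """Fraction of sampled AGI-arrival times that fall after enhanced humans
    become active researchers. All parameters are illustrative placeholders."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        agi_year = max(1.0, rng.gauss(agi_median, agi_sd))  # years from now
        if agi_year > years_to_researcher:
            hits += 1
    return hits / trials

print(p_enhanced_humans_in_time(years_to_researcher=25))                 # baseline pipeline
print(p_enhanced_humans_in_time(years_to_researcher=15))                 # 10-year acceleration
print(p_enhanced_humans_in_time(agi_median=10, years_to_researcher=15))  # short timelines
```

The answer swings a lot with the timeline assumption: on a roughly 30-year median the acceleration adds a sizeable share of scenarios where enhanced researchers arrive in time, while on a roughly 10-year median even the accelerated pipeline mostly misses the window. That sensitivity is the crux of my worry.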

We could also consider a complementary path: the top priority remains creating a safe, aligned AI. Once achieved, we can use that superintelligence to help us develop HIA and advanced biotechnology far more rapidly and safely than we ever could on our own.

Furthermore, just as we fear unaligned AI, we should fear "unaligned" superintelligent humans. This risk may be even greater, as humans are not "programmed" for pure rationality; we are driven by complex emotions, tribalism, and deep-seated cognitive biases. Therefore, any HIA research should prioritize and fund moral enhancement (e.g., increasing empathy and compassion, reducing cognitive biases) alongside cognitive gains. This is crucial to avoid creating highly intelligent but destructive actors.

If we imagine a future philanthropic program to make these enhancements accessible for free, one could hypothesize a form of "bundling": making the cognitive upgrade conditional on a voluntary moral/character upgrade. While not a state mandate — and admittedly open to hard questions about who defines "moral improvement" and the risk of paternalism — it would act as a soft requirement for those choosing to use subsidized resources, thereby incentivizing positive social evolution.

A clear advantage of HIA over pure AI development is the guarantee of consciousness. If a non-conscious, unaligned AI were to replace us, it would result in a "dead universe" devoid of beings capable of experiencing value. Ensuring that conscious beings remain the primary agents of our future is a vital safeguard.

Beyond X-risk, human enhancement has massive potential for human well-being, such as eradicating genetic diseases. However, for this to be an ethical intervention rather than a dystopian one, the technology must be as open, accessible, and available by default as possible to everyone, regardless of social class or geography, to prevent the emergence of unbridgeable inequalities.

In light of these points, I see HIA as a "secondary strategy." It could make sense to allocate a portion of funds to this area for the sake of portfolio diversification, a sort of hedge investment against the uncertainty of our long-term future.

This is the first post I've read on this topic here. I find it quite surprising to hear these things about the EA community, although I had already heard some whispers from other sources. I should preface this by saying that I don't personally know anyone in the US EA community, so I am taking what is written here at face value.

I would argue that, besides being unethical, these behaviors are also strategically harmful (though I suspect this isn't a new argument). How can we expect the general public to trust a community that defines itself as altruistic when such dynamics are tolerated? This lack of trust inevitably extends to the advice and organizations that EA promotes.

Furthermore, such an environment certainly does not encourage women and girls to join or stay in the community, thereby alienating a significant source of talent and potential impact.

I also agree that we can accept individual weaknesses or imperfections if there is still the possibility of doing more good than harm overall. An example for comparison is continuing to eat meat: although it is unethical, it is often seen as acceptable if a person offsets or outweighs the harm through effective donations. In this way, we don't lose potential impact by alienating people unnecessarily.

However, there is a major strategic and relational difference between the two cases. While eating meat is harmful, it is unfortunately not yet considered a serious ethical problem by most of society. Sexist behavior and harassment, on the other hand, generally are. Consequently, tolerating or internally justifying sexism damages the community's reputation in a way that eating meat does not, in addition to causing direct and immediate harm to members of the community itself.

Therefore, for the sake of the EA community itself, it seems crucial that these cases be condemned. Obviously, I'm not saying we should abandon rationality in favor of disproportionate emotional reactions. But hiding or downplaying such episodes seems more likely to backfire than to genuinely protect the community.

Great post! It really made me dive into the perspective of that era and empathize with how such a future was quite inscrutable from their point of view at the time.

Reflecting on it, at least for known historical cases, it seems that moral catastrophes like these happen at the intersection of three concomitant elements: the needs/interests of the dominant group (cheap meat, safe drugs, scientific progress), the advent of new techniques/technologies to satisfy those needs (factory farming, animal testing), and a specific moral framework (the common view that animal suffering is not morally salient).

This strikes me as potentially interesting because, while the first two factors might be particularly hard to predict or act upon a priori, the third could perhaps offer a more stable and tractable lever. Improving our ethical framework could be seen as a default, 'evergreen' strategy to mitigate the risk of future catastrophes from a long-term perspective. By mainstreaming the idea that avoiding the suffering of any sentient being is of great moral importance, we might build a general defense against new forms of moral catastrophes.

Of course, this wouldn't exclude other approaches; rather, they could work in parallel. While moral progress might serve as the long-term foundation, we could then evaluate on a case-by-case basis whether technological fixes or other targeted interventions would be more effective for specific risks as they emerge. As you mentioned, staying alert to how new needs and technologies evolve remains crucial.
