The cause of human intelligence amplification (aka HIA, human intelligence enhancement, human intelligence augmentation) seems to have very little discussion on this forum; those terms (in quotation marks) give only a handful of search results. I think this is a big mistake. In particular, I think that reprogenetics is a good cause area that should get more resources.

Reprogenetics is biotechnology used to empower parents to make genomic choices on behalf of their future children. Reprogenetics would most likely work for HIA, is morally good, and is likely technically feasible and acceleratable.

Let's have a discussion about this. We can talk here in comments. If you have substantial thoughts / a strong position against, we could have a discussion or debate on a call and post it to YouTube.

At a meta level, it may be that EA as a whole (or all the individual EAs, mysteriously) have simply dismissed HIA in general or reprogenetics in particular. In other words, some decision has already been made to not pursue those causes, and this decision is not open for discussion or debate. That would prima facie go against the ideals of EA. I think that would be ok, in some sense. A person or a group has some natural right to make a decision for itself, without having to explain it, and it makes sense for people to defer to leaders. However, I do think that if this is the case, then EA quite strongly owes a public statement to that effect. That way, interested parties can draw their own informed conclusions about how to pursue principles of actually effective actual altruism.

A few points:

  • "Isn't this actually immoral?"

    • I don't think so. I do think it requires a large ongoing conversation between society, various groups, scientists, and so on.
    • I also think there are genuine risks (see "Potential perils of germline genomic engineering"). These risks should be headed off with concrete actions and with theory. Accordingly, there are genuine open questions in how society can orient around reprogenetics beneficially.
    • But I think reprogenetics is fundamentally quite consonant with a very humanistic pluralistic liberal vision, that would be quite beneficial for nearly everyone, and that is deeply opposed to eugenics. See "The principle of genomic liberty" and "Genomic emancipation" and "Genomic emancipation contra eugenics".
  • "This can't be accelerated because all the good science is already funded and you don't know anything about this."

    • I'm sympathetic to this; biotechnology is very difficult and can't be solved with anything like a drive-by investigation. I think most HIA methods are not promising (see "Overview of strong human intelligence amplification methods").
    • That said, I do think there are very significant areas (mainly in reprogenetics) that could benefit from a lot more funding. This is a question of priorities, and I think the current priorities are incorrect, because the upsides are so big.
    • I am not a trained biologist and these are not peer-reviewed articles, but my assertion is that the main conclusions of the reasoning I lay out in "Visual roadmap to strong human germline engineering" and in more detail in "Methods for strong human germline engineering" would largely stand up to critique. My conclusions imply several relevant areas that could very much use more funding to go faster.
    • If you'd like, you could nominate an expert in genetics and/or stem cell biology who you would believe, if they told you that reprogenetics is feasible and/or acceleratable. Then, if that person is game, I would gladly pay them for their time to critically evaluate my arguments in some form (after a discussion). (I have done this some, in a piecemeal way; I'm not fully satisfied with these verifications, and I'm happy to meet knowledgeable critics who will entertain technical questions about speculative biotechnologies.)
  • "This is very taboo."

    • It's probably not nearly as taboo as you think it is. Public opinion is quite split (think something like 45/55 for preventing disease, 30/70 for increasing intelligence), and is probably open to discussion. You're probably being overly sensitive to low-context optics.
    • People want the potential downside risks to be taken very seriously; no one asked you to literally stop thinking about it.
  • "Isn't this pointless because AGI is coming so soon?"

    • I don't think so. I don't think it makes sense to be super confident in short timelines—say, >80% on <15 years. See "Views on when AGI comes and on strategy to reduce existential risk" and "Do confident short timelines make sense?".
    • Further, the main plausible hope even on short timelines would be a pause / delay / slowdown. That is, of course, the top priority. But in the long run, you still need an out!
    • Even on pretty aggressive timelines (median <15 years), getting an out in 40 years rather than in 50 years (because you accelerated strong reprogenetics and strong HIA) is still a quite substantial decrease in existential risk. Like a percentage point or something. That's pretty good! Hello?!? (See "The benefit of intervening sooner", though some background assumptions there are rather questionable; a rough sketch of the arithmetic appears just after this list.)
  • "Does this actually help with existential risk?"

  • "EA can't do super weird stuff purely on the basis of existential risk."

    • I'm sympathetic to this. Pursuing a controversial, risky technology for intense, non-concrete reasons is a pretty fraught stance to take. One has to ask, am I doing bad things in bad ways for supposedly good reasons?
    • However, I think that HIA in general, and even more so reprogenetics in particular (because of empowering parents to decrease disease risks etc.), can be done in a way that is quite likely to be quite beneficial for almost all individuals and for society (humanity). (I don't think this should be obvious to you a priori, and I'm not so confident of this; let's discuss!)
    • Furthermore, if it is the case that reprogenetics is good for individuals and for humanity, then there is a way to pursue it that is top-to-bottom ethical. In other words, we can pursue this in a way that is truly and simply good; we can know that it's good and can stand tall about it.
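
To gesture at the "percentage point or something" arithmetic from the timelines bullet above: here is a minimal sketch, under assumptions that are mine rather than anything from the linked article -- a constant annual hazard of unrecoverable catastrophe (the 0.1%/year figure is a pure placeholder) that drops to zero once an "out" arrives. It only illustrates why reaching an out 10 years sooner plausibly buys something on the order of a percentage point of absolute risk reduction.

```python
# Illustrative sketch only: constant-hazard toy model with a made-up rate.

def risk_before_out(h: float, T: int) -> float:
    """Probability that catastrophe occurs before the 'out' arrives at year T,
    assuming a constant annual hazard h until then and zero afterward."""
    return 1 - (1 - h) ** T

h = 0.001  # assumed 0.1%/year hazard of unrecoverable catastrophe (placeholder)

risk_50 = risk_before_out(h, 50)  # out arrives in 50 years: ~4.9%
risk_40 = risk_before_out(h, 40)  # accelerated out at year 40: ~3.9%

print(f"out at year 50:     {risk_50:.2%}")
print(f"out at year 40:     {risk_40:.2%}")
print(f"absolute reduction: {risk_50 - risk_40:.2%}")  # ~0.96 percentage points
```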

Comments (14)

I think that repeatedly re-opening discussions on any form of eugenics actively undermines the work many EAs are doing in the global south and severely risks our reputation and credibility as a movement in the global health space. Given the history of discussing this topic within EA, I do not believe that anyone in this community has the precision and tact to discuss proposals around eugenics without causing these harms, if it is even possible to do so at all (I do not believe it is).

I also believe that discussing eugenics on the forum undermines attempts to make EA more welcoming to a large number of racial groups, because of the association with forms of oppression and genocide against those groups. I believe that all of these harms persist even if you don’t specifically talk about where you might believe the existing differences in intelligence lie, because of that history. I believe that there are many people who would make fantastic EAs who are turned off of this movement because of this association.

I believe that members of the EA movement and its leaders should loudly and sharply condemn all forms of race science, human biodiversity, and more broadly, eugenics, because of these harms.

I am also, frankly, tired of having to write this comment every 6 months.

Thanks for engaging!

Given the history of discussing this topic within EA,

Thanks for your links in the other comment; I had been searching for human intelligence amplification, but not "genetic enhancement". (I generally avoid the term "enhancement" in this context because I believe it is subtly philosophically incorrect--it bakes in a degree of eugenical thinking, in that it kinda sounds like it presumes some notion of "better" and therefore presumes some notion of "good", which is a core outlook of eugenics.)

Glancing at those links, I can understand some more why you might have a reaction like this, haha. I would submit myself as different from that history. I'm serious about this area; I view moral and societal aspects as equally important to technical aspects; I'm not a trained expert but I have been studying for a few years; and I'm here to actually think these things through, ideally working more with some EAs.

I also believe that discussing eugenics on the forum undermines attempts to make EA more welcoming to a large number of racial groups, because of the association with forms of oppression and genocide against those groups. [...] I believe that there are many people who would make fantastic EAs who are turned off of this movement because of this association.

This makes total sense. I would be curious to hear from / talk with anyone who is turned off by reprogenetics in general or turned off from EA because of reprogenetics in particular. I'd like to understand the issue better and understand where people are coming from better. (I understand that might be difficult because maybe most people would just not want to talk about it, but nevertheless. Maybe someone reading this is like "I was almost turned off by this stuff, but I stuck around." and would be up for chatting.)

I think there's a couple dimensions:

  • There's a set of real problems around ideology / policy / social stances. In particular, how could society think about the use of reprogenetics, in a way that doesn't affirm or apply eugenic motivations? I've thought about this a lot; see for example here: https://berkeleygenomics.org/articles/Genomic_emancipation_contra_eugenics.html I think that reprogenetics can be pursued in a way that is clearly, firmly, truly good--including being not racist, not totalitarian, pluralist, liberal, and egalitarian. That's not supposed to be obvious, and I think there are substantive open questions here; part of pursuing this cause would be working this out more seriously, including engaging with lots of groups and perspectives (e.g. hearing more from disability groups, and more generally giving advocacy groups a proper, equal seat at the table, thinking about what sorts of state policies or professional norms can work, etc.).
  • Suppose arguendo that there is such a good stance for reprogenetics. Then there is still the problem of EA communicating this as a social movement. In other words, a newcomer to EA would not necessarily by default be able to parse out whether EA's inclusion of some work on reprogenetics is eugenical or not, even if it is in fact not eugenical. There would be significant work of communication (which is continuous with ongoing reevaluation and giving more people or groups their proper seat at the table). I haven't thought about this much. I'm curious if you think this is infeasible even assuming the previous point is true?

I believe that all of these harms persist even if you don’t specifically talk about where you might believe the existing differences in intelligence lie, because of that history.

Reprogenetics is orthogonal to ancestry groups; it would be a set of tools that could be offered to individual couples who want kids. I'm against eugenic policies such as paying certain types of people to have kids or not have kids, anything about immigration, etc. I think there is a positive ideology (I mean, a coherent ideology that gives explicit answers to the relevant questions) that is good and that is anti-racist and anti-eugenics. The only differences in intelligence, or in any other trait, that interest me are differences between individuals with or without a given allele.

I believe that you want to deploy this technology in a way that avoids coercion and racism. The problem is that you aren't in charge of society: once the tech is out there, you don't get a large say in how it gets used. Those decisions go to the public in the case of democracies, and to a handful of scumbags in the case of dictatorships and oligarchies.

A quick look through history will show that basically anytime one group of people sees another group as genetically or racially inferior, discrimination and atrocities result. I see no reason to think that this trend will not continue if we create new groups of people. If Bulgarians embrace genetic "amplification", to improve their "intelligence" and "morals", but Romanians ban it, human history indicates that Bulgarians will look at Romanians as their inferiors, and treat them accordingly. 

The problem is that you aren't in charge of society: once the tech is out there, you don't get a large say in how it gets used.

Right. That's why I'm not like "hm let me write down a list of good things to do with this technology and allow those, and write down a list of bad things to ban, and then that solves everything". Instead I'm like "ok, there's a big set of questions around how society can take stances around this technology; let's figure out whether and how such a stance can actually result in overwhelmingly good outcomes for humanity--i.e. figure out what that stance is, and figure out how to figure it out (e.g. who to bring in to give voice to), figure out how to get to society having that stance, etc.". See for example https://berkeleygenomics.org/articles/Genomic_emancipation_contra_eugenics.html

Regarding your second paragraph, I'd appreciate some metadata. For example, is this a worry that you're just now thinking of? Is it something you've investigated a bunch and have a lot of detail about? Is this something you feel confident about, or not? Is this something you're interested in thinking about? Are you putting this forward as a compelling reason to not investigate more about whether reprogenetics should be a top cause (as opposed, for example, to one major downside risk that would have to be considered and evaluated as part of such an investigation)?

Anyway, on the object level, I'm interested in thinking about it. I mentioned a class of such worries here https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html#internal-misalignment but haven't investigated that particular worry.

I don't feel very worried about it because these children would themselves be quite varied as a class, so there's no clear distinction between kids resulting from reprogenetics vs. not. See the diagram in this subsection: https://berkeleygenomics.org/articles/Genomic_emancipation.html#intelligence Further, by default these kids would have varied backgrounds, grow up in different places, etc. But maybe it's a more likely risk than I'm guessing at the moment.

That said, I do think it's very important, for this and many other reasons, to make reprogenetic technologies very accessible (inexpensive, widespread, legal, functional, safe, applicable to anyone), so that there isn't siloing into some small class. I also want this technology to be developed and deployed in a liberal, diverse democracy first, for this reason and for other reasons.

I think these are fair points, but the tone seems deconstructive and a bit condescending. I think it's possible to disagree and to caution loudly while still respecting that the post was made in good faith.

Hi, this has been discussed plenty of times before, often very controversially:

Here are two write-ups from Reflective Altruism, a criticism blog, on the EA Forum’s engagement with this topic area.

I think that more than enough ink has been spilled on this topic on this forum and I don’t see this post adding a lot to it. I think a better version of this post would engage with the existing discussion while treading very carefully around the impacts that discussing eugenics has on the goal of the EA Forum to be a welcoming and inclusive space for everyone. I will leave my object-level thoughts on your post in a different comment.

Thanks! (See other comment for my response.)

I think that more than enough ink has been spilled on this topic on this forum and I don’t see this post adding a lot to it.

That's fair. This post is pretty quickly-written, with the intent of reaching out to EA. The links in the post link to much more substantial thoughts, which I would guess have been absent from the previous discussions.

Here are some quick thoughts that come to mind after reading your post. I find HIA and reprogenetics to be fascinating topics, but I see several critical hurdles if we frame them primarily as tools for mitigating AI-related existential risk.

The biggest logical hurdle is time. AI development is moving at a breakneck pace, while biological HIA interventions (such as embryo selection) take decades to manifest in the real world. An enhanced human born today will not be an active researcher for at least 20–25 years. If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.

I notice you address this objection by arguing that even a 10-year acceleration in a 40–50 year horizon still represents a meaningful reduction in existential risk. I find this partially compelling — but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force. Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.

We could also consider a complementary path: the top priority remains creating a safe, aligned AI. Once achieved, we can use that superintelligence to help us develop HIA and advanced biotechnology far more rapidly and safely than we ever could on our own.

Furthermore, just as we fear unaligned AI, we should fear "unaligned" superintelligent humans. This risk may be even greater, as humans are not "programmed" for pure rationality; we are driven by complex emotions, tribalism, and deep-seated cognitive biases. Therefore, any HIA research should prioritize and fund moral enhancement (e.g., increasing empathy and compassion, reducing cognitive biases) alongside cognitive gains. This is crucial to avoid creating highly intelligent but destructive actors.

If we imagine a future philanthropic program to make these enhancements accessible for free, one could hypothesize a form of "bundling": making the cognitive upgrade conditional on a voluntary moral/character upgrade. While not a state mandate — and admittedly open to hard questions about who defines "moral improvement" and the risk of paternalism — it would act as a soft requirement for those choosing to use subsidized resources, thereby incentivizing positive social evolution.

A clear advantage of HIA over pure AI development is the guarantee of consciousness. If a non-conscious, unaligned AI were to replace us, it would result in a "dead universe" devoid of beings capable of experiencing value. Ensuring that conscious beings remain the primary agents of our future is a vital safeguard.

Beyond X-risk, human enhancement has massive potential for human well-being, such as eradicating genetic diseases. However, for this to be an ethical intervention rather than a dystopian one, the technology must be as open, accessible, and available by default as possible to everyone, regardless of social class or geography, to prevent the emergence of unbridgeable inequalities.

In light of these points, I see HIA as a "secondary strategy." It could make sense to allocate a portion of funds to this area for the sake of portfolio diversification, a sort of hedge investment against the uncertainty of our long-term future.

[Just noting that an online AI detector says the above comment is most likely written by a human and then "AI polished"; I strongly prefer that you just write the unpolished version even if you think it's "worse".]

if we frame them primarily as tools for mitigating AI-related existential risk.

I did frame it that way, because decreasing existential risk should be the top priority in terms of causes. But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.

An enhanced human born today will not be an active researcher for at least 20–25 years.

Well, we could say 15-20 years (I think John von Neumann started making significant contributions to math around age 20), but yeah.

If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.

This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.

but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force.

I'm not sure how you're using the phrase "shorter timelines" here. If you mean "when AGI actually comes", then see above. If you mean "someone's strategic probabilistic distribution over when AGI comes", then I disagree. See https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html. Even with quite aggressive timelines, HIA acceleration can still decrease X-risk by something in the ballpark of a percentage point (or more).

Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.

I spent about a decade researching AGI alignment, much of that time at MIRI; my conclusion, which I believe is shared by a significant portion of the AGI alignment research community, is that this problem is extremely difficult, not remotely on track to being solved in time, and that pouring more resources into the problem basically doesn't help at the moment. If you are making strategic decisions based on the fact that there is disagreement on this point, I would urge you to notice that the prominent optimists will not debate the pessimists.

[Just noting that an online AI detector says the above comment is most likely written by a human and then "AI polished"; I strongly prefer that you just write the unpolished version even if you think it's "worse".]

 

Yeah, you’re right. I usually use AI mostly for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and later I half-regretted leaving the text like that, too.

 

But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.

I mostly agree with this. On whether they should be a top cause area, less so. As long as it stays framed as a marginal investment or a "secondary strategy" against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don't face to the same degree (both reputationally for EA, and in terms of actual risks from adopting the technology).

 

This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.

That's a good point, but even if HIA demonstrated that we don't really need AGI, it seems unlikely that society as a whole would give up pursuing it if it could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.

 

not remotely on track to being solved in time, and pouring more resources into the problem basically doesn't help at the moment.

I'm not too optimistic about AI alignment. But does that mean you'd estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)

Yeah, you’re right.

Ok, thanks for noting! (It occurred to me after I wrote that that translation would be a major use case and obviously a good one.)

I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don't face to the same degree

You're right, it certainly wouldn't make sense for it to immediately jump to being a top priority cause, yeah, even if I'm arguing it should maybe be one eventually. If we're being granular about the computations I'm bidding for, it would be more like "some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment".

it seems more justifiable and defensible to the general public, institutions, or people who might join EA.

Interesting. Regarding people who might join EA, I don't think I quite see it, but the point is interesting and I'll maybe think about it a bit more.

That said, in terms of societal justification, I would want to distinguish between motivations about AGI X-risk, and concrete aims and intentions with reprogenetics. The latter is what I'd propose to collectively work on. That would still involve intelligence amplification, and transparently so, as is owed to society. But the actual plan, and the pitch to society, would be more broad. It would be about the whole of reprogenetics. So it would include empowering parents to give their kids an exceptionally healthy happy life, and so on, and it would include policy, professional, social, and moral safeguards against the major downside risks.

In other words, to borrow from an old CFAR tagline, I'm saying something like "reprogenetics for its own sake, for the sake of X-risk reduction", if that makes any sense.

In a bunch more detail, I want to distinguish:

  • (motivation) my background motivation for devoting a lot of effort to HIA and reprogenetics (HIA helping decrease AGI X-risk)
  • (explanation of motivation) how I describe/explain/justify my background motivation to people / the public / etc.
  • (concrete aims) the concrete aims/targets that I pursue with my actions within the space of reprogenetics
  • (explanation of aims) how I describe/explain/justify/commit-to concrete aims
  • (proposed societal motivation) What I'm putting forward as a vision / motivation for developing and deploying reprogenetics that would be good and would justify doing so

For honesty's sake, I personally strongly aim to think and communicate so that:

  • My public explanation of my motivation gives an honest (truthful, open, salient, clear) presentation of my actual motivation.
  • My public explanation of my concrete aims is likewise honest.
  • Both my motivations and my concrete aims are clearly presented.
  • My concrete aims have clear boundaries around them. For example, I might commit to certain actions on the basis of my publicly stated concrete aims.
  • My concrete aims are consonant with my proposed societal motivation.

This serves multiple purposes. For example:

  • I want to work out, and argue to the public, that reprogenetics is good "on its own terms"; in particular, that it's good even if you don't buy into anything about AGI X-risk. This is a stronger position; I want to argue for it and expose it to critique.
  • I want to work out and communicate to the public / stakeholders a vision of how society can orient around reprogenetics that is beneficial to ~everyone. This involves working out societal coordination. The flag of [figuring out what to coordinate on and how] would be more about the concrete aims and the proposed societal motivation, and not about my background motivations.

I would suggest that EA could do something similar. That might work differently / not work at all, in the context of a large social movement. I haven't thought about that; it's an interesting question.

it seems unlikely that society as a whole would give up pursuing it if it could get there first.

Yeah, I'm quite uncertain on this point. I'm interested in understanding better the details of why AGI is actually being pursued, and under what conditions various capabilities researchers might walk away from that research. But that's a whole other intellectual project that I don't have bandwidth for; I'd strongly encourage someone to pick that one up though!

I'm not too optimistic about AI alignment. But does that mean you'd estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)

I do think that the current marginal dollar is much better spent on either supporting a global ban on AGI research, and/or HIA, compared to marginal alignment research. That's definitely a controversial opinion, but I'll stand on that (and FWIW, not that I should remotely be taken to speak for them, but for example I would suspect that Yudkowsky and Soares would agree with this judgement). I'm actually unsure whether I personally think the benefit of HIA is more in "some of the kids might solve alignment" vs. "some of the kids might figure out some other way to make the world safe"; I've become quite pessimistic about solving AGI alignment, but that's kinda idiosyncratic.

However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.

I don't think this is a real contribution. I don't think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades. I think they're trying to make it because they think they can. 

And also because they [rightly or wrongly] believe that AGI will be more cost effective, more controllable, need less sleep and have higher problem solving potential than even the smartest possible humans. And be here a lot sooner. (And in some of the AGI fantasies, a route to making humans genetically smarter anyway!)

Even if one assumes near-term "AGI" has a fairly low ceiling,[1] it seems like "intelligence augmentation" is unpromising as an EA intervention.[2] The necessary research is complex, expensive, long-term, and dependent not just on germline engineering, but on academic research to understand what intelligence is in less shallow terms than we currently do. It's not clear that there are individual tractable interventions. The quantifiable impact--if it actually worked--would presumably be a tiny proportion of people sufficiently rich and focused on maximising their offspring's intelligence paying to select a few genes somewhat correlated with intelligence for "designer babies", with the possibility that this might translate enough into real-world outcomes to turn a handful of children with already above-average prospects into particularly capable and influential individuals. It is not obvious that these children will grow up to use their greater talent (real or perceived) for mitigating existential risk or any other sort of greater good.[3] Humans with rich, driven parents who've been taught about their superiority to ordinary humans from birth don't sound immune to "alignment problems" either...

As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.

 

  1. I do actually, but it's not fashionable here, or indeed at MIRI!
  2. At least, viewed through EA's analytical lens rather than the associated cultural tendency to overestimate the importance of individual intelligence...
  3. I mean, what percentage of the world's smartest people focuses on that now?

Thanks for engaging substantively!

I don't think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades.

I don't feel confident about this in any direction. However, my sense is that it's one of the top positive justifications that people use for making AGI (I mean, justifications that would apply in the absence of race dynamics). Not specifically "there won't be enough smart people"--but rather, "humanity doesn't currently have the brainpower to solve the really pressing problems", e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say "well it doesn't matter because someone else will do this research anyway, why not me". But what about a strong global ban? Then you get objections like "well hold on a minute, maybe this AI stuff is pretty good, it could cure cancer and so on". That's the justification that I'm trying to push against by saying "look, we can get all that good stuff on a pretty good timeline without crazy x-risk".

Regarding your next paragraph, there's a lot of claims there, which I largely think are incorrect, but it's kinda hard to respond to them in a way that is both satisfyingly detailed+convincing but also short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore

If you're interested in discussing this at more length, I'd love to have you on for a podcast episode. Interested?

the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.

Yeah this is another quite large potential benefit of reprogenetics that I'm excited about. It would require that the technology ends up "safe, accessible, and powerful".

I guess, just to state where some of the disagreements lie:

As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.

Regarding this, see also my comment here: https://forum.effectivealtruism.org/posts/QLugEBJJ3HYyAcvwy/new-cause-area-human-intelligence-amplification?commentId=5yxEpv9vFRABptHyd
