Yeah, you’re right.
Ok, thanks for noting that! (It occurred to me after I wrote it that translation would be a major use case, and obviously a good one.)
I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don't face to the same degree.
You're right: it certainly wouldn't make sense for it to immediately jump to being a top-priority cause, even if I'm arguing it should maybe be one eventually. If we're being granular about the computations I'm bidding for, it would be more like "some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment".
it seems more justifiable and defensible to the general public, institutions, or people who might join EA.
Interesting. Regarding people who might join EA, I don't think I quite see it, but it's a point I'll think about a bit more.
That said, in terms of societal justification, I would want to distinguish between motivations about AGI X-risk, and concrete aims and intentions with reprogenetics. The latter is what I'd propose to collectively work on. That would still involve intelligence amplification, and transparently so, as is owed to society. But the actual plan, and the pitch to society, would be broader. It would be about the whole of reprogenetics. So it would include empowering parents to give their kids an exceptionally healthy, happy life, and so on, and it would include policy, professional, social, and moral safeguards against the major downside risks.
In other words, to borrow from an old CFAR tagline, I'm saying something like "reprogenetics for its own sake, for the sake of X-risk reduction", if that makes any sense.
In a bunch more detail, I want to distinguish:
For honesty's sake, I personally strongly aim to think and communicate so that:
This serves multiple purposes. For example:
I would suggest that EA could do something similar. That might work differently, or not work at all, in the context of a large social movement. I haven't thought about that; it's an interesting question.
it seems unlikely that society as a whole would give up pursuing it if it could get there first.
Yeah, I'm quite uncertain on this point. I'm interested in understanding in more detail why AGI is actually being pursued, and under what conditions various capabilities researchers might walk away from that research. But that's a whole other intellectual project that I don't have bandwidth for; I'd strongly encourage someone to pick that one up, though!
I'm not too optimistic about AI alignment. But does that mean you'd estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
I do think that the current marginal dollar is much better spent on supporting a global ban on AGI research and/or on HIA, compared to marginal alignment research. That's definitely a controversial opinion, but I'll stand by it (and FWIW, not that I should remotely be taken to speak for them, but I would suspect that, for example, Yudkowsky and Soares would agree with this judgement). I'm actually unsure whether I personally think the benefit of HIA is more in "some of the kids might solve alignment" vs. "some of the kids might figure out some other way to make the world safe"; I've become quite pessimistic about solving AGI alignment, but that's kinda idiosyncratic.
Thanks for engaging substantively!
I don't think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades.
I don't feel confident about this in any direction. However, my sense is that it's one of the top positive justifications that people use for making AGI (I mean, justifications that would apply in the absence of race dynamics). Not specifically "there won't be enough smart people"--but rather, "humanity doesn't currently have the brainpower to solve the really pressing problems", e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say "well, it doesn't matter, because someone else will do this research anyway, so why not me". But what about a strong global ban? Then you get objections like "well, hold on a minute, maybe this AI stuff is pretty good, it could cure cancer and so on". That's the justification I'm trying to push against by saying "look, we can get all that good stuff on a pretty good timeline without crazy X-risk".
Regarding your next paragraph: there are a lot of claims there, which I largely think are incorrect, but it's kinda hard to respond to them in a way that is both satisfyingly detailed and convincing, and also short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore
If you're interested in discussing this at more length, I'd love to have you on for a podcast episode. Interested?
the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
Yeah, this is another quite large potential benefit of reprogenetics that I'm excited about. It would require that the technology ends up "safe, accessible, and powerful".
The problem is that you aren't in charge of society: once the tech is out there, you don't get a large say in how it gets used.
Right. That's why I'm not like "hm, let me write down a list of good things to do with this technology and allow those, and write down a list of bad things to ban, and then that solves everything". Instead I'm like "ok, there's a big set of questions around how society can take stances on this technology; let's figure out whether and how such a stance can actually result in overwhelmingly good outcomes for humanity--i.e. figure out what that stance is, figure out how to figure it out (e.g. whom to bring in and give voice to), figure out how to get society to hold that stance, etc.". See for example https://berkeleygenomics.org/articles/Genomic_emancipation_contra_eugenics.html
Regarding your second paragraph, I'd appreciate some metadata. For example, is this a worry that you're just now thinking of? Is it something you've investigated a bunch and have a lot of detail about? Is this something you feel confident about, or not? Is this something you're interested in thinking about? Are you putting this forward as a compelling reason to not investigate more about whether reprogenetics should be a top cause (as opposed, for example, to one major downside risk that would have to be considered and evaluated as part of such an investigation)?
Anyway, on the object level, I'm interested in thinking about it. I mentioned a class of such worries here https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html#internal-misalignment but haven't investigated that particular worry.
I don't feel very worried about it, because these children would themselves be quite varied as a class--there would be quite a lot of variation, so there's no clear distinction between kids resulting from reprogenetics vs. not. See the diagram in this subsection: https://berkeleygenomics.org/articles/Genomic_emancipation.html#intelligence. Further, by default these kids would have varied backgrounds, grow up in different places, etc. But maybe it's a more likely risk than I'm guessing at the moment.
That said, I do think it's very important, for this and many other reasons, to make reprogenetic technologies very accessible (inexpensive, widespread, legal, functional, safe, applicable to anyone), so that there isn't siloing into some small class. I also want this technology to be developed and deployed in a liberal, diverse democracy first, for this reason among others.
I think there are ways to get the info without threatening the coherence of your system. For example, you can try to understand, and then absorb into intuition, alternative root/basic intuitions. Cf. https://en.wikipedia.org/wiki/World_Hypotheses by Pepper, and Lakoff's ideas on metaphors. As a concrete example in the case of timelines, I would offer https://www.lesswrong.com/posts/wgqcExv9AgN5MuJuY/bioanchors-2-electric-bacilli as an intuition for longer / less confident timelines that you could try to understand intuitively (without necessarily believing). Having several of these hypotheses is one good way to make yourself more fluent in viewing alternative states of the world as intuitively plausible, worth investigating, and worth trying to falsify.
Thanks! (See other comment for my response.)
I think that more than enough ink has been spilled on this topic on this forum and I don’t see this post adding a lot to it.
That's fair. This post was pretty quickly written, with the intent of reaching out to EA. The links in the post go to much more substantial thoughts, which I would guess have been absent from the previous discussions.
Thanks for engaging!
Given the history of discussing this topic within EA,
Thanks for your links in the other comment; I had been searching for human intelligence amplification, but not "genetic enhancement". (I generally avoid the term "enhancement" in this context because I believe it is subtly philosophically incorrect--it bakes in a degree of eugenical thinking, in that it kinda sounds like it presumes some notion of "better" and therefore presumes some notion of "good", which is a core outlook of eugenics.)
Glancing at those links, I can understand a bit more why you might have a reaction like this, haha. I would submit myself as different from that history: I'm serious about this area; I view moral and societal aspects as equally important to technical aspects; I'm not a trained expert, but I have been studying for a few years; and I'm here to actually think these things through, ideally working more with some EAs.
I also believe that discussing eugenics on the forum undermines attempts to make EA more welcoming to a large number of racial groups, because of the association with forms of oppression and genocide against those groups. [...] I believe that there are many people who would make fantastic EAs who are turned off of this movement because of this association.
This makes total sense. I would be curious to hear from / talk with anyone who is turned off by reprogenetics in general, or turned off from EA because of reprogenetics in particular. I'd like to better understand the issue and where people are coming from. (I understand that might be difficult, because most people probably wouldn't want to talk about it; but maybe someone reading this is like "I was almost turned off by this stuff, but I stuck around", and would be up for chatting.)
I think there are a couple of dimensions:
I believe that all of these harms persist even if you don’t specifically talk about where you might believe the existing differences in intelligence lie, because of that history.
Reprogenetics is orthogonal to ancestry groups; it would be a set of tools offered to individual couples who want kids. I'm against eugenic policies, such as paying certain types of people to have or not have kids, anything involving immigration, etc. I think there is a positive ideology (I mean, a coherent ideology that gives explicit answers to the relevant questions) that is good, anti-racist, and anti-eugenics. The only interest I have in differences in intelligence, or in any other trait, is in differences between individuals with or without a given allele.
[Just noting that an online AI detector says the above comment is most likely written by a human and then "AI polished"; I strongly prefer that you just write the unpolished version even if you think it's "worse".]
if we frame them primarily as tools for mitigating AI-related existential risk.
I did frame it that way, because decreasing existential risk should be the top priority among cause areas. But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk; so for anyone who cares about interventions like that, they should be a top cause area.
An enhanced human born today will not be an active researcher for at least 20–25 years.
Well, we could say 15–20 years (I think John von Neumann started making significant contributions to math around age 20), but yeah.
If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force.
I'm not sure how you're using the phrase "shorter timelines" here. If you mean "when AGI actually comes", then see above. If you mean "someone's strategic probability distribution over when AGI comes", then I disagree. See https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html. Even with quite aggressive timelines, HIA acceleration can still decrease X-risk by something in the ballpark of a percentage point, or more.
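To make that ballpark concrete, here's a minimal back-of-envelope sketch of the kind of calculation I have in mind. It's a sketch under stated assumptions only: the timeline distribution, the years when amplified researchers come online, and the per-year effect size are all made-up placeholders, not numbers from that post.

```python
# Back-of-envelope: expected X-risk reduction from accelerating HIA,
# under an assumed AGI-timeline distribution. Every number below is an
# illustrative placeholder, chosen only to show the shape of the argument.

# Assumed P(AGI arrives around year t); sums to 1.0.
timeline = {2030: 0.15, 2040: 0.30, 2050: 0.25, 2060: 0.20, 2070: 0.10}

def expected_risk_reduction(hia_ready_year, per_year_effect=0.001):
    """Amplified researchers only help in worlds where AGI arrives after
    they come online, and help more the more years they have before it."""
    return sum(
        p * max(0, agi_year - hia_ready_year) * per_year_effect
        for agi_year, p in timeline.items()
    )

baseline = expected_risk_reduction(hia_ready_year=2045)
accelerated = expected_risk_reduction(hia_ready_year=2040)  # 5 years sooner
print(f"marginal benefit of acceleration: {accelerated - baseline:.4f}")
# ~0.003 with these placeholders, i.e. a few tenths of a percentage point
# of X-risk reduction, even without assuming long timelines.
```

The numbers are fake; the structure is the point: acceleration only pays off in worlds where AGI arrives after the amplified kids come online, but those worlds carry enough probability mass that the marginal effect doesn't vanish even on fairly aggressive timeline distributions.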
Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
I spent about a decade researching AGI alignment, much of that time at MIRI. My conclusion, which I believe is shared by a significant portion of the AGI alignment research community, is that this problem is extremely difficult, that it's not remotely on track to being solved in time, and that pouring more resources into it basically doesn't help at the moment. If you're making strategic decisions based on the fact that there is disagreement on this point, I would urge you to notice that the prominent optimists will not debate the pessimists.
A basic issue with a lot of deliberate philanthropy is the tension between:
The kneejerk solution I'd propose is "proof of novel work". If you want funding to do X, you should show that you've done something to address X that others haven't done. That could be a detailed, insightful write-up (which indicates serious thinking / fact-finding); it could be some work you did on the side, which isn't necessarily conceptually novel but is useful work on X that others were not doing; etc.
I assume that this is an obvious / not new idea, so I'm curious where it doesn't work. Also curious what else has been tried. (E.g. many organizations do "don't apply, we only give to {our friends, people we find through our own searches, people who are already getting funding, ...}".)
The Berkeley Genomics Project is fundraising for the next forty days and forty nights at Manifund: https://manifund.org/projects/human-intelligence-amplification--berkeley-genomics-project
I guess, just to state where some of the disagreements lie:
Regarding this, see also my comment here: https://forum.effectivealtruism.org/posts/QLugEBJJ3HYyAcvwy/new-cause-area-human-intelligence-amplification?commentId=5yxEpv9vFRABptHyd