Wei Dai



As I’ve said elsewhere, I have more complicated feelings about genetic enhancement. I think it is potentially beneficial, but it also tends to be correlated with bad politics, and it could be that the negative social effects of allowing it outweigh the benefits.

I appreciate your keeping an open mind on genetic enhancement (i.e., not grouping it with racism and fascism, or immediately calling for it to be banned). Nevertheless, it fills me with a sense of hopelessness to consider that one of the most thoughtful groups of people on Earth (i.e., EAs) might still realistically decide to ban the discussion of human genetic enhancement (I'm assuming that's the implied alternative to "allowing it"), on the grounds that it "tends to be correlated with bad politics".

When I first heard about the idea of greater-than-human intelligence (i.e., superintelligence), I imagined that humanity would approach it as one of the most important strategic decisions we'll ever face, and that there would be extensive worldwide debates about the relative merits of each possible route to achieving it, such as AI and human genetic enhancement. Your comment represents such a divergence from that vision, and occurring in a group like this...

If even we shy away from discussing a potentially world-altering technology simply because of its political baggage, what hope is there for broader society to engage in nuanced, good-faith conversations about these issues?

I think paying AIs to reveal their misalignment and potentially to work for us and prevent AI takeover seems like a potentially very promising intervention.

I'm pretty skeptical of this. (Found a longer explanation of the proposal here.)

An AI facing such a deal would be very concerned that we're merely trying to trick it into revealing its own misalignment (which we'd then try to patch out). It seems to me that it would probably be a lot easier for us to trick an AI into believing that we're honestly presenting it such a deal (including by directly manipulating its weights and activations) than to actually present such a deal honestly and, in doing so, cause the AI to believe it.

Further, I think there is a substantial chance that AI moral patienthood becomes a huge issue in coming years and thus it is good to ensure that field has better views and interventions.

I agree with this part.

A couple of further considerations, or "stops on the crazy train", that you may be interested in:

(These were written in an x-risk framing, but implications for s-risk are fairly straightforward.)

As far as actionable points, I've been advocating working on metaphilosophy or AI philosophical competence, as a way of speeding up philosophical progress in general (so that it doesn't fall behind other kinds of intellectual progress, such as scientific and technological progress, that seem likely to be greatly sped up by AI development by default), and improving the likelihood that human-descended civilization(s) eventually reach correct conclusions on important moral and philosophical questions, and will be motivated/guided by those conclusions.

In posts like this and this, I have lamented the extreme neglect of this field, even among people otherwise interested in philosophy and AI, such as yourself. It seems particularly puzzling that no professional philosopher has publicly expressed a concern about AI philosophical competence and related risks (at least AFAIK), even as developments such as ChatGPT have greatly increased societal attention on AI and AI safety in the last couple of years. I wonder if you have any insights into why that is the case.

Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind.

I agree that there is a lot of uncertainty, but don't understand how that is compatible with a <1% likelihood of AI sentience. Doesn't that represent near certainty that AIs will not be sentient?

The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.

Thanks for the clarification. Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)? I feel like maybe you're adopting the framing of "comparative advantage" too much in a situation where the idea doesn't work well (because the situation is too adversarial / not cooperative enough). It seems a bit like a country, after suffering a military defeat, saying "We're better scholars than we are soldiers. Let's pursue our comparative advantage and reallocate our defense budget into our universities."

Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.

This part seems reasonable.

I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.

I'm actually not sure about this logic. Can you expand on why EA having insufficient skill to "navigate power dynamics around AI" implies "our comparative advantage will need to be truth-seeking"?

One problem I see is that "comparative advantage" is not straightforwardly applicable here, because the relevant trade or cooperation (needed for the concept to make sense) may not exist. For example, imagine that EA's truth-seeking orientation causes it to discover and announce one or more politically inconvenient truths (e.g. there are highly upvoted posts about these topics on EAF), which in turn causes other less truth-seeking communities to shun EA and refuse to pay attention to its ideas and arguments. In this scenario, if EA also doesn't have much power to directly influence the development of AI (as you seem to suggest), then how does EA's truth-seeking benefit the world?

(There are worlds in which it takes even less for EA to be shunned, e.g., if EA merely doesn't shun others hard enough. For example, there are currently people pushing for EA to "decouple" from LW/rationality, even though there is very little politically incorrect discussion happening on LW.)

My own logic suggests that too much truth-seeking isn't good either. Would love to see how to avoid this conclusion, but currently can't. (I think the optimal amount is probably a bit higher than the current amount, so this is not meant to be an argument against more truth-seeking at the current margin.)

You probably didn't have someone like me in mind when you wrote this, but it seems a good opportunity to write down some of my thoughts about EA.

On 1, I think despite paying lip service to moral uncertainty, EA encourages too much certainty in the normative correctness of altruism (and more specific ideas like utilitarianism), perhaps attracting people like SBF with too much philosophical certainty in general (such as about how much risk aversion is normative), or even causing such general overconfidence (by implying that philosophical questions in general aren't that hard to answer, or by suggesting how much confidence is appropriate given a certain amount of argumentation/reflection).

I think EA also encourages too much certainty in descriptive assessment of people's altruism, e.g., viewing a philanthropic action or commitment as directly virtuous, instead of an instance of virtue signaling (that only gives probabilistic information about someone's true values/motivations, and that has to be interpreted through the lenses of game theory and human psychology).

On 25, I think the "safe option" is to give people information/arguments in a non-manipulative way and let them make up their own minds. If some critics are using things like social pressure or rhetoric to manipulate people into being anti-EA (as you seem to be implying - I haven't looked into it myself), then that seems bad on their part.

On 37, where has EA messaging emphasized downside risk more? A text search for "downside" and "risk" on https://www.effectivealtruism.org/articles/introduction-to-effective-altruism both came up empty, for example. In general it seems like there has been insufficient reflection on SBF and also AI safety (where EA made some clear mistakes, e.g. with OpenAI, and generally contributed to the current AGI race in a potentially net negative way, but seem to have produced no public reflections on these topics).

On 39, seeing statements like this (which seems overconfident to me) makes me more worried about EA, similar to how my concern about each AI company increases with how optimistic it is about AI safety.

The problem of motivated reasoning is in some ways much deeper than the trolley problem.

The motivation behind motivated reasoning is often to make ourselves look good (in order to gain status/power/prestige). Much of the problem seems to come from not consciously acknowledging this motivation, and therefore not being able to apply system 2 to check for errors in the subconscious optimization.

My approach has been to acknowledge that wanting to make myself look good may be a part of my real or normative values (something like what I would conclude my values are after solving all of philosophy). Since I can't rule that out for now (and also because it's instrumentally useful), I think I should treat it as part of my "interim values", and consciously optimize for it along with my other "interim values". Then if I'm tempted to do something to look good, at a cost to my other values or perhaps counterproductive on its own terms, I'm more likely to ask myself "Do I really want to do this?"

BTW I'm curious what courses you teach, and whether / how much you tell your students about motivated reasoning or subconscious status motivations when discussing ethics.

The CCP's current appetite for AGI seems remarkably small, and I expect them to be more worried that an AGI race would leave them in the dust (and/or put their regime at risk, and/or put their lives at risk), than excited about the opportunity such a race provides.

Yeah, I also tried to point this out to Leopold on LW and via Twitter DM, but no response so far. It confuses me that he seems to completely ignore the possibility of international coordination, as that's the obvious alternative to what he proposes, which others must have also brought up to him in private discussions.

But we’re so far away from having that alternative that pining after it is a distraction from the real world.

For one thing, we could try to make OpenAI/SamA toxic to invest in or do business with, and hope that other AI labs either already have better governance / safety cultures, or are greatly incentivized to improve on those fronts. If we (EA as well as the public in general) give him a pass (treat him as a typical/acceptable businessman), what lesson does that convey to others?

I should add that there may be a risk of over-correcting (focusing too much on OpenAI and Sam Altman), and we shouldn't forget about the other major AI labs and how to improve their transparency, governance, safety cultures, etc. This project (Zach Stein-Perlman's AI Lab Watch) seems like a good start, if anyone is interested in a project to support or contribute ideas to.
