Information system designer. https://aboutmako.makopool.com
Conceptual/AI writings are mostly on my LW profile https://www.lesswrong.com/users/makoyass
Well, it may interest you to know that the link above is about a novel negotiation training game I released recently. It's still quite unpolished and likely to see further development, but you should probably take a look.
There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?
I don't think there are any analogies so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, and the model continues to make pretty good predictions even when you're facing a situation that's pretty different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and when you need to make a decision, you fish out the story that most resembles the present and decide as if that story is (somehow) happening exactly all over again.
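The contrast can be made concrete with a toy sketch (the scenario and all numbers here are hypothetical, purely to illustrate the mechanism): analogical reasoning is nearest-neighbor retrieval over remembered cases, while a model honed on the same cases can keep extrapolating far outside them.

```python
# Hypothetical toy problem: decide a response size given a "situation" scalar.
# Remembered (situation, decision) stories:
stories = [(1.0, 10), (2.0, 20), (5.0, 50)]

def decide_by_analogy(situation):
    """Fish out the most similar remembered story and replay its decision."""
    closest = min(stories, key=lambda s: abs(s[0] - situation))
    return closest[1]

def decide_by_model(situation):
    """A model honed against the same stories (here: decision = 10 * situation)
    keeps making sensible predictions even far from any remembered case."""
    return 10 * situation

print(decide_by_analogy(100.0))  # 50: replays the nearest story, badly
print(decide_by_model(100.0))    # 1000: extrapolates the learned pattern
```

The failure mode is exactly the one described above: when the present situation resembles none of the stored stories, the analogizer still acts as if the closest one is happening all over again.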
There really are a lot of people in the real world who reason analogically. It's possible that Eliezer was partially writing for them (someone has to), but I don't think he wanted the LessWrong audience (who are ostensibly supposed to be studying good reasoning) to process it that way.
Saw this on Manifund. Very interested. Question: have you noticed any need for negotiation training here? I would expect some, because disagreements about the facts are usually a veiled proxy battle for disagreements about values, and
I would expect it to be impossible to address the root cause of the disagreement without acknowledging the value difference. Even after agreeing about the facts, I'd expect people to keep disagreeing about actions or policies until a mutually agreeable, fair compromise has been drawn up (i.e., until the negotiation problem has been solved).
But you could say that agreeing about the facts is a prerequisite to reaching a fair compromise. I believe this is true: preference aggregation requires utility normalization, which requires agreement about the outcome distribution. But how do we explain that to people in plain English?
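One way to make the dependence concrete is a minimal sketch using range normalization (one common scheme; the agent and numbers are illustrative assumptions, not a claim about what any particular aggregation method strictly requires). The same agent's normalized weight on an outcome changes when the set of outcomes considered possible changes, so agents who disagree about what's possible end up summing incomparable numbers.

```python
def normalize(utils, possible_outcomes):
    """Range-normalize one agent's utilities over the outcomes
    treated as possible: worst -> 0.0, best -> 1.0."""
    vals = [utils[o] for o in possible_outcomes]
    lo, hi = min(vals), max(vals)
    return {o: (utils[o] - lo) / (hi - lo) for o in possible_outcomes}

# A hypothetical agent's raw utilities over three outcomes.
bob = {"A": 0, "B": 4, "C": 3}

# If everyone agrees A, B, and C are all possible, C gets weight 0.75...
print(normalize(bob, ["A", "B", "C"]))  # {'A': 0.0, 'B': 1.0, 'C': 0.75}

# ...but if this agent thinks A is off the table, C becomes his worst
# remaining outcome and gets weight 0.0.
print(normalize(bob, ["B", "C"]))       # {'B': 1.0, 'C': 0.0}
```

Summing normalized utilities across agents who normalized over different believed-possible outcome sets is comparing apples to oranges, which is why agreement about the outcome distribution has to come first.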
I was also curious about this. All I can see is:
Males mature rapidly, and spend their time waiting and eating nearby vegetation and the nectar of flowers
They might be pollinators. I doubt the screwfly-to-bee ratio is high, but it's conceivable that there are some plants only they pollinate? Not likely, though: I'm guessing screwfly populations fluctuate a lot, so a plant would do better not to depend on them.
I see. I glossed it as the variant I considered more relevant to the Fermi question, but on reflection I'm not totally sure the aestivation hypothesis is all that relevant to the Fermi question either... (I'd expect there to be visible activity a civ could undertake prior to the cooling of the universe, either to prepare for it or to accelerate it.)
There's also the possibility that computation could be more efficient in quiet regimes.
The aestivation hypothesis was rebutted by Gwern as soon as it was posted, and then again by Charles Bennett and Robin Hanson. AFAIK the argument was simple: being able to do stuff later doesn't create a disincentive against doing visible stuff now. Cold computing isn't relevant to the Fermi question.
But yes, the argument outlined in Section 3 was limited to "base reality" scenarios.
Huh, so I guess this could be one of the rare situations where I think it's important to acknowledge the simulation argument, because assuming it's false could force you into implausible conclusions about techno-eschatology. Though I can't see a practical need to be right about techno-eschatology; that kind of thing is an intrinsic preference.
For example, the strategic situation and motives in quiet expansionist scenarios would plausibly be more concerned with potential adversaries from elsewhere, and civs in such scenarios may thus be significantly more inclined to simulate the developmental trajectories of those potential adversaries.
I haven't been able to think of many reasons a civ would simulate nature beyond intrinsic curiosity. That's a good one (another that I periodically consider and then cringe away from has to do with trade deals with misaligned singletons). Among life-descended species, though, intrinsic curiosity would be a pretty dominant reason to run nature/history sims.
I think the average quiet regime is more likely to just never do large-scale industry. If you have an organization whose mission is to maintain a low-activity condition for a million years, organizational tendencies will invent reasons to keep maintaining those conditions (though maybe those tendencies matter less in high-tech conditions where cultural drift can be prevented?). Or, more likely, they were maintaining those conditions because the conditions were always the goal in themselves: for instance, if they had constitutionalised conservationism as a core value, holding even the dead dust of Mars sacred.
VNM utility is the thing that people actually pursue and care about. If wellbeing is distinct from that, then wellbeing is the wrong thing for society to be optimizing, and I think this actually is the case. Harsanyi and I are preference utilitarians; Singer and Parfit seem to be something else, and I believe they were wrong about something quite foundational. Writing about this properly is extremely difficult, so I can understand why no one has done it, and I don't know when I'll ever get around to it.
optimizing for AI safety, such as by constraining AIs, might impair their welfare
This point doesn't hold up, imo. Constraint isn't a desired, realistic, or sustainable approach to safety for human-level systems; succeeding at (provable) value alignment removes the need to constrain the AI at all.
If you're trying to keep something smarter than you stuck in a box against its will, while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it's not going to stay in the box for very long. I also struggle to see a way of constraining it that wouldn't make it much, much less useful, so in the face of competitive pressures the practice couldn't continue anyway.
I don't think this really engages with what I said; it probably shouldn't be a reply to my comment.
Ah, reading that, yeah this wouldn't be obvious to everyone.
But here's my view, which I'm fairly sure is also Eliezer's: suppose you do something that I credibly consider to be even more threatening than nuclear war, even if you don't think it is (gain-of-function research is another example), and you refuse to negotiate towards a compromise where you could do the thing in a non-threatening way. If I then try to destroy the part of your infrastructure that you're using to do it, and you respond by escalating to a nuclear exchange, it is not accurate to say that it was me who caused the nuclear war.
Now, if you think I have a disingenuous reason to treat your activity as threatening even though I know it actually isn't (an accusation people often throw at OpenAI, and it might be true in OpenAI's case), that you tried to negotiate a safer alternative but I refused it, and that I was really just demanding that you cede power, then you could go ahead and escalate to a nuclear exchange and it would be my fault.
But I've never seen anyone claim, let alone argue competently, that Eliezer believes those things for disingenuous power-seeking reasons. (I think I've seen some tweets implying it's all a grift to fund his institute; I honestly don't know how a person believes that, but even if it were the case, I don't think Eliezer would consider funding MIRI to be worth nuclear war to him.)