
tylermjohn

1025 karma · Joined

Comments (128)

Yup. The constitution of the democratic community is inherently value-laden. Even prioritizing conscious beings or beings with preferences is a value judgment. I don't think there's any option here but to debate and argue over who gets a seat at the table in a realpolitik kind of way, and then use deep democracy to extend standing to other beings. If, for example, only adult humans vote, you and I can still use our votes to extend political standing to animals; and if there is no tyranny of the majority and democracy does justice to the cardinality of people's preferences, then people who care a lot about animals can give them a correspondingly large amount of status.

Makes sense! There are some older writers in the utilitarian tradition, like James Griffin, who define utilitarianism in the broader way, but I do think your articulation is probably more common.

Huh, I mean it just is formally equivalent to the sum of log utilities in the bargaining situation! But "utilitarianism" is fuzzy :)

Yes, the idea of finding a preference aggregation mechanism that does much better than modern electoral systems at capturing the cardinality of societal preferences is, I think, really core to what I'm doing here, so I probably should have brought this out a bit more than I did!

I'm only saying it's in tension with the diagnosis as "emphasis on individual action, behavior & achievement over collective."

I agree with all of your concrete discussion and think it's important. 

In the traditional Nash bargaining setup you evaluate people's utilities in options relative to the default scenario, and only consider options that make everyone at least as well off as that default. This makes it individually rational for everyone to participate, because they will be made better off by the bargain. That's different from, say, range voting.
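To make the contrast concrete, here's a minimal sketch of that setup, with toy utilities and hypothetical option names all made up for illustration: keep only the options that leave everyone at least as well off as the default, then pick the one maximizing the product of gains over it. Range voting, by contrast, would just sum the raw scores with no reference to a default.

```python
import math

# Toy illustration of the traditional Nash bargaining setup described above.
# Utilities and option names are invented; "default" is the disagreement
# point everyone falls back to if no bargain is struck.
utilities = {
    "default":  {"alice": 1.0, "bob": 2.0, "carol": 1.0},
    "option_a": {"alice": 3.0, "bob": 2.5, "carol": 1.5},
    "option_b": {"alice": 5.0, "bob": 1.5, "carol": 4.0},  # makes bob worse off -> excluded
    "option_c": {"alice": 2.0, "bob": 4.0, "carol": 2.0},
}

default = utilities["default"]

def nash_score(option):
    """Product of each person's gain over the default (disagreement) utilities."""
    gains = [utilities[option][p] - default[p] for p in default]
    return math.prod(gains)

# Only options that leave everyone at least as well off as the default are
# admissible -- this is what makes participation individually rational.
admissible = [
    o for o in utilities
    if o != "default" and all(utilities[o][p] >= default[p] for p in default)
]

best = max(admissible, key=nash_score)
print(admissible)  # ['option_a', 'option_c']
print(best)        # the admissible option maximizing the product of gains
```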

Thanks very much for the added last four paragraphs! We're in strong agreement re: trade being a great way to approximate granular, deep preference aggregation, particularly if you have a background of economic equality.

I'm excited to read the linked section of No Easy Eutopia. I agree that there's no fully neutral way to aggregate people's preferences and preserve cardinality. But I do think there are ways that are much more neutral, and that command much broader consent, and that they can be a big improvement over alternative mechanisms.

No problem on the chaotically written thoughts; to be fair to you, my post was (due to its length) very unspecific. And that meant we could hammer out more of the details in the comments, which seems appropriate.

And then I guess both of us are in some kind of agreement that this kind of stuff (deliberate structured initiatives to inject some democracy into the models) ends up majorly determining outcomes from AGI.


Yeah, I think this is plausible and a good point of agreement, plus a promising leverage point. But I do kind of expect that normal capitalist incentives will dominate anything like this, and that governments won't intervene except on issues of safety, as you also seem to expect.

I find this somewhat confusing.

Nash is formally equivalent to maximizing the sum of log utilities (preferences) relative to the disagreement point, so it's a prioritarian transformation of preference utilitarianism over a particular bargain.
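Written out in standard Nash bargaining notation (with S the feasible set and d the disagreement point, nothing specific to my proposal):

$$\arg\max_{u \in S,\; u \ge d} \; \prod_i (u_i - d_i) \;=\; \arg\max_{u \in S,\; u \ge d} \; \sum_i \log(u_i - d_i)$$

The maximizers coincide because log is strictly increasing, and the concavity of log is what supplies the prioritarian weighting: a unit of gain counts for more the closer someone is to their disagreement payoff.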

I agree that it can come drastically far apart from totalist utilitarianism. What I actually like about it is that it's a principled way to give everyone's values equal weight: it preserves the cardinality in people's value functions, it is arbitrarily sensitive to changes in individuals' values, and it doesn't require interpersonally comparable utilities, which makes it very workable. I also like that it maximizes a certain weighted sum of efficiency and equality. As an antirealist who thinks I have basically unique values, I like that it guarantees that my values have some sway over the future.

One thing I don't like about Nash is that it's a logarithmic form of prioritarianism, and over preferences rather than people. That means that my strongest preferences don't get that much more weight than my weakest preferences. Perhaps for that reason simple quadratic voting does better. It's in some ways less elegantly grounded, but it's also better understood by the broader world.
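As a rough sketch of what I mean by simple quadratic voting (budget, issues, and intensities all made up, and assuming each voter naively splits their budget in proportion to how much they care): buying n votes on an issue costs n^2 credits, so votes scale with the square root of spending, and strong preferences register more, but not linearly more.

```python
import math

# Toy quadratic voting sketch: each voter has a budget of voice credits and
# buys votes on issues; n votes on one issue cost n^2 credits, so the number
# of votes bought grows with the square root of the credits spent.
# Budget, issues, and "care" weights are invented for illustration, and each
# voter is assumed to simply split their budget in proportion to how much
# they care about each issue.
BUDGET = 100

voters_care = {  # relative intensity of concern, per voter
    "alice": {"animal_welfare": 9, "parks": 1},
    "bob":   {"animal_welfare": 1, "parks": 4},
}

def votes_bought(care, budget=BUDGET):
    """Split the budget across issues in proportion to care, then convert
    credits to votes: n votes cost n^2 credits, so n = sqrt(credits)."""
    total = sum(care.values())
    return {
        issue: math.sqrt(budget * weight / total)
        for issue, weight in care.items()
    }

tallies = {}
for voter, care in voters_care.items():
    for issue, votes in votes_bought(care).items():
        tallies[issue] = tallies.get(issue, 0.0) + votes

print(tallies)
# alice's ninefold-stronger concern for animal welfare shows up with about
# 3x the votes of her concern for parks (sqrt(90) vs sqrt(10)) -- intensity
# is tempered by the quadratic cost, but still counted.
```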

I'm seeing the position as a principled way to have a fair compromise across different people's moral viewpoints, one which also happens to do pretty well by the lights of my own values. It's not attempting to approximate classical utilitarianism directly, but instead to give me some control over the future in the areas that matter most to me, and thereby allow me to enact classical utilitarianism. There might be better such approaches, but this is the one that seems most promising to me at the moment.

Nice. I encountered a similar crux the other week in a career advice chat when someone said "successful people find the skills with which they really excel and exploit that repeatedly to get compounding returns" to which I responded with "well, people aren't the only things that can have compounding returns, organizations can also have compounding returns, so maybe I should keep helping organizations succeed to capture their compounding returns."

On the flip side, the fact that EA has focused so much on community building and talent seems like a certain kind of communitarianism, putting the success of the whole above any individual. 

Nice! I'll have to read this. 

I agree defaults are a problem, especially with large choice problems involving many people. I honestly haven't given this much thought, and I assume we'll just have to sacrifice someone or some desideratum to get tractability, and that will kind of suck, but such is life.

I'm more wedded to Nash's preference prioritarianism than to the specific set-up, but I do see that once you get rid of the requirement that options Pareto-dominate the disagreement point, it's not going to be individually rational for everyone to participate. Which is sad.

One further thing that might help you get in my brain a bit is that I really am thinking of this as more like "what values should we be aiming at to guide the future," while staying fairly agnostic on mechanism, rather than something like "let's put democracy in The AGI's model spec". And I really am envisioning the argument as something like: "Wow, it really seems like the future is not a utilitarian one. Maybe sensitivity to minority values like utilitarianism is the best thing we can ask for." — rather than something like "democracy good!" And that could mean a lot of different things. On Mondays, Wednesdays, and Fridays I think it means avoiding too much centralization and aiming for highly redistributive, free-market-based futures as an approximation of Deep Democracy.
