ClayShentrup

59 karma · Joined

Comments (31)

i prioritized the center for election science because reforming our social choice mechanisms offers the highest expected utility of any available intervention. specifically, moving to methods that maximize voter satisfaction efficiency (vse), such as approval or score voting, yields massive downstream benefits by improving government decision-making quality. as the analysis at rangevoting.org/LivesSaved demonstrates, the economic and humanitarian impact of even a slight improvement in the quality of elected officials dwarfs the impact of direct aid like malaria nets. optimizing the decision-making stack is the necessary precursor to solving other global challenges effectively.
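for readers who haven't seen vse, here's a minimal monte carlo sketch of the idea: simulate electorates with known voter utilities, elect a winner under each method, and score the winner's total utility against best-possible and random-winner baselines. the uniform utility model, electorate sizes, and the above-mean approval strategy below are my own illustrative assumptions, not the parameters of any published vse study.

```python
import random

# vse = (utility of actual winner - utility of random winner)
#       / (utility of best winner - utility of random winner),
# averaged over many simulated electorates. 1.0 means the method always
# elects the utility-maximizing candidate; 0.0 means no better than random.
def vse(method, voters=200, candidates=5, trials=1000):
    won = best = rand = 0.0
    for _ in range(trials):
        # each voter draws a private utility for each candidate
        utils = [[random.random() for _ in range(candidates)]
                 for _ in range(voters)]
        sums = [sum(u[c] for u in utils) for c in range(candidates)]
        won += sums[method(utils)]
        best += max(sums)
        rand += sum(sums) / candidates  # expected utility of a random winner
    return (won - rand) / (best - rand)

def plurality(utils):
    votes = [0] * len(utils[0])
    for u in utils:
        votes[u.index(max(u))] += 1  # one vote for your sincere favorite
    return votes.index(max(votes))

def approval(utils):
    votes = [0] * len(utils[0])
    for u in utils:
        mean = sum(u) / len(u)
        for c, x in enumerate(u):
            if x > mean:  # approve every candidate above your mean utility
                votes[c] += 1
    return votes.index(max(votes))

print("plurality vse:", vse(plurality))
print("approval vse:", vse(approval))
```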

> This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare.

that's incorrect. a rational entity's goal is to maximize the net utility of the smallest group that includes itself. genes are just trying to maximize their expected number of copies made. the appearance of "altruism" is an illusion caused by:

  1. kin selection.
  2. reciprocal altruism.

it's logically and empirically proven that you cannot actually aim to maximize the welfare of the "universe". if you try to maximize the sum of utility, that justifies making as many new people as possible, so as not to "pre-murder" them; it means people should decrease their personal utility as much as possible, as long as doing so increases net utility; and it even justifies killing one person if that helps cause two people to be born. whereas if you try to maximize average utility, then you want to kill people who are less happy than average. both of these are obviously untenable and don't remotely fit observed human behavior. this is arguably the most elementary fact in the whole of ethical theory. a toy numeric example after the link below makes both failure modes concrete.

https://plato.stanford.edu/entries/repugnant-conclusion/
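here's that toy example (my own illustration, not from the stanford entry):

```python
# total view vs. average view on a tiny population of utility values.
pop = [9, 7, 5, 3]

total = sum(pop)        # 24
avg = total / len(pop)  # 6.0

# total utilitarianism: adding anyone with utility > 0 "improves" the world,
# so a huge population of barely-positive lives beats the original one.
bigger = pop + [1] * 100
print(sum(bigger), sum(bigger) / len(bigger))  # 124 total, but avg ≈ 1.19

# average utilitarianism: deleting everyone below the mean "improves" the world.
culled = [u for u in pop if u >= avg]
print(culled, sum(culled) / len(culled))  # [9, 7], avg jumps to 8.0
```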

i discuss all of this in my "ethics 101" primer here.

> This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals - neither of which groups can vote.

the point is that if you want to altruistically help future generations, or animals for that matter, it makes sense to do so in the most efficient way possible. but the fundamental desire to be truly altruistic in the first place is irrational. "altruism" as we normally use the term is just the selfish behavior of genes trying to help copies of themselves that happen to be in other bodies. again, this is clearly explained in this veritasium video and is just trivial biology 101.

> No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.

it's absolutely given, right there in plain english. the bayesian regret (BR) figures are cited, and there are multiple plausible independent lines of reasoning from which to derive comparable figures. i don't know why you're ignoring that as if it's not written there plain as day.

"best for human welfare" just means the sum of all individual (self interested) utilities. so voter preferences cannot be opposite of what is best for human welfare, by definition.

caveat: there's a disparity between intrinsic and instrumental preferences; in other words, voters don't actually know what they want. but to solve that you need an entirely different paradigm, namely election by jury.

better voting methods give you the best you can get from the mediocre human brains you have to work with.

> These numbers just seem totally made up. Why should we believe that approval voting has anything like such a large impact?

the page directly addresses that question quite incisively, citing the bayesian regret figures. the upgrade from plurality voting to score voting is roughly double the effect of having democracy in the first place. and approval voting is just the binary (slightly less optimal but dead simple and politically practical) version of score voting.

i think the evidence is pretty straightforward. e.g. bayesian regret figures by princeton math phd warren smith show that approval voting roughly doubles the human welfare impact of democracy.

https://www.rangevoting.org/BayRegsFig
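for concreteness, here's a minimal sketch of the bayesian regret definition those figures are built on: the gap between the best candidate's summed utility and the elected candidate's, averaged over many simulated electorates. the gaussian utility model here is an illustrative placeholder, not warren smith's exact simulation setup.

```python
import random

def bayesian_regret(elect, voters=200, candidates=5, trials=1000):
    total = 0.0
    for _ in range(trials):
        utils = [[random.gauss(0, 1) for _ in range(candidates)]
                 for _ in range(voters)]
        sums = [sum(u[c] for u in utils) for c in range(candidates)]
        # regret is zero exactly when the utility-maximizing candidate wins
        total += max(sums) - sums[elect(utils)]
    return total / trials
```

plugging election rules (e.g. the plurality and approval functions from the vse sketch above) into `elect` lets you compare methods directly; lower regret is better.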

doing some ballpark math to see how many lives that would save:

https://www.rangevoting.org/LivesSaved

i've never seen any EA cause that could even remotely compete with such a massive improvement, especially given the cost is essentially zero once you've spent the relatively minor cost to run a ballot measure, as opposed to e.g. malaria nets, where you have to pay an ongoing cost to produce them. fixing the voting method is the policy that lubricates the gears for all other policies.

the idea that there's some viable alternative to expected utility maximization is just thoroughly refuted by everything we know about decision making.

http://www.rangevoting.org/UtilFoundns

http://www.rangevoting.org/Mill

http://www.rangevoting.org/OmoUtil.html

> 1. Maximizing expected value can have counterintuitive implications like prioritizing insects over humans or pursuing astronomical payoffs with tiny probabilities.

no. that's an argument about which entities you choose to consider. rational expected value calculus is to care about the smallest set of people that includes yourself, or, more specifically, for a gene to care only about itself.

> Alternatives like contractualism and various forms of risk aversion may better align with moral intuitions.

"risk aversion" is just decreasing marginal utility. e.g. if you take a guarantee of a million dollars over a 50% shot at 3 million. u = log2(wealth), so this is an expected utility calculation of:

100% chance of $1M: log2(1,000,000) ≈ 19.93
vs.
50% chance of $3M: log2(3,000,000)/2 ≈ 21.52/2 ≈ 10.76 (treating the losing branch as zero utility)

thus the guarantee of 1M obviously makes sense unless you're already quite wealthy.
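the same arithmetic in code, with the simplification made explicit: as in the numbers above, the losing branch of the gamble is treated as zero utility.

```python
import math

u = math.log2  # u(wealth) = log2(wealth), i.e. decreasing marginal utility

sure_thing = 1.0 * u(1_000_000)          # ≈ 19.93
gamble = 0.5 * u(3_000_000) + 0.5 * 0.0  # ≈ 10.76, losing branch taken as 0

print(sure_thing, gamble)  # the guarantee wins by a wide margin
```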

> Practical decision-making requires wrestling with moral and empirical uncertainties.

what is "moral" uncertainty? morality is just "genes maximizing their expected number of copies made".

what do you mean "default"? you just have a utility for each option and the best option is the one that maximizes net utility.

https://www.rangevoting.org/BayRegDum

> It also seems a bit circular because if you want to build a Deep Democracy AGI, then that means you value Deep Democracy, so you're still aligning AGI to your values.

no, you're aligning it to what everyone values.

> J. C. Harsanyi, in a 2-page article involving no mathematics whatever [J. Political Economy 61, 5 (1953) 434-435], came up with the following nice idea: "Optimizing social welfare" means "picking the state of the world all individuals would prefer if they were in a state of uncertainty about their identity." I.e. if you are equally likely to be anybody, then your expected utility is the summed utility in the world divided by the number of people in it, i.e. average utility. Then by the linear-lottery property (Lin) of von Neumann utility, it follows that social utility is averaging.

source
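harsanyi's point fits in a few lines: if you're equally likely to be anyone, your expected utility behind the veil of ignorance is just the population mean, so "optimize social welfare" reduces to "maximize average utility". the numbers here are arbitrary toy values.

```python
def veil_of_ignorance_eu(utilities):
    # equal chance of being each person => expected utility = the mean
    return sum(utilities) / len(utilities)

world_a = [6, 6, 6]   # equal, modest welfare
world_b = [10, 5, 2]  # higher peak, lower mean

# behind the veil you prefer whichever world has the higher average
print(veil_of_ignorance_eu(world_a))  # 6.0
print(veil_of_ignorance_eu(world_b))  # ≈ 5.67
```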

this was my premise here, and even more so with election by jury.

https://www.electionbyjury.org/manifesto

full disclosure: i co-founded the center for election science which advocates approval voting.

quadratic voting has been pretty deeply debunked.

OK, thanks for explaining. It's literally just election by a jury instead of the public.

Looks like the summary bot explained it below.

UPDATE: I significantly restructured the flow so that it now dives immediately into the basic criteria and description of the idea.
